I made my first vibe-coded game: casino-style slots with lines, bets, free plays, and auto play, running in a Linux shell with color graphics. Amazed how well it came out after fixing the graphics and adding features.
Might be nice for older games that don't have 4K HD texture packs, or even games like Skyrim or replaying the older Witcher 1 & 2. It's up to the user, right? Let people play older games with better-looking gfx. Seems like an easy win: use it or don't.
With all these long blog/news posts, I've been asking an AI for a TLDR. It does a good breakdown of thoughts vs. actions, and if it sounds interesting, I go back and read the whole thing. The Fabric app (on GitHub) works well with ollama or LM Studio (and the LLM of your choice).
It also works well with long YT videos using the closed captions; no time to listen to a 2-hour podcast.
TLDR:
* Trump and Elon Musk are reportedly using FCC Commissioner Brendan Carr to target speech they disagree with, posing a significant threat to First Amendment rights.
* Carr has been described as a tool in Trump's strategy to control free speech, with actions including investigations into major broadcasters like ABC, CBS, and NBC for their coverage or internal policies.
* The Verge's Nilay Patel criticizes this as an unprecedented attack on free speech, highlighting how the FCC, under Carr, might punish media for their editorial decisions, which contradicts traditional First Amendment protections.
* The narrative suggests that this administration's actions could fundamentally challenge the freedom of the press and speech in the U.S., using government authority in ways not seen before.
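For the local-model side of that workflow, here's a minimal sketch of the kind of call Fabric makes for you, assuming a stock ollama install on its default port (the model name is just an example):

```python
# Minimal TLDR-via-local-LLM sketch using ollama's /api/generate endpoint.
# Assumes `ollama serve` is running and the model has already been pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def tldr(text: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": "Summarize the following as a short TLDR bullet list:\n\n" + text,
        "stream": False,  # return one JSON object instead of a stream
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    import sys
    print(tldr(sys.stdin.read()))  # e.g. pipe in a saved article or caption dump
```

Pipe a saved article or a YouTube caption dump into it and you get the same kind of bullet summary as above.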
Pretty sure the latency and desktop-responsiveness projects also upped the timer for smoother mouse and window movement. Nice to see it improves application performance for AI and encoders too.
I've been using LLMs to do searches for a while; it's quicker and I get better results. What happens in the future when new issues are only mentioned on GitHub, X, or Reddit, and a different LLM is trained on each? Will we have to run three searches?
From a legal viewpoint, self-checkout mistakes can make you a criminal, and the stores make money by going after you for a settlement. It's already happening: customer mistakes are being prosecuted and settled (Walmart is the biggest culprit).
I'm buying groceries, not signing up for a possible legal issue. I'll avoid self-checkout with a full cart; for one or two items it's okay-ish.
Plus I feel using self-checkout puts people out of jobs.
The political training in ChatGPT has gotten in the way of asking basic funding and policy questions.
I gave it a budget and asked it to flag the programs/departments/etc. with little return on value, overruns, possible fraud, and other problems, so I could outsource or combine departments and save money.
It went on a long lecture about how cutting funding is a horrible thing and I'm horrible for asking, and it refused to answer.
Really?
I'm asking basic auditing/restructuring/spending questions, and it was trained to ignore my request, lecture me instead of helping, and refuse to give results.
You just said it above: outsource/combine departments and save money. The model is not hallucinating that you want to let people go.
ClosedAI has to fight the image that this tech is going to make many more middle-class people lose their jobs than any tech before. Otherwise they are cooked. So they just instruct the model to react with "firing people is wrong" whenever some vectors match, or whatever.
Again, asking an LLM to parse data isn't real life; that's not how life works.
And again, mentioning outsourcing/combining departments doesn't mean layoffs, and that was not the objective of the query. If I ask for stats on gas crossovers with good mpg, I don't want a political lecture on why EVs are better for the environment.
LLMs shouldn't interject their own viewpoints when asked a question about data.
> If I ask for stats on gas crossovers with good mpg, I don't want a political lecture on why EVs are better for the environment.
Using an LLM for that is crazy. If you want facts, try search instead.
An LLM is never neutral from the start. It is biased because all data in human reality is biased. If there are 100x more stats about gas cars than EVs, simply because gas cars are older, then the data is biased toward gas cars. If most of the text data is written by men and talks about women as not equal, then the data is gender-biased.
Then it is biased further by the people selecting the training dataset, and again by the engineers and managers who try to correct one bias with their own. Etc.
If you want facts, it is the wrong place to look. And if you want opinions, don't be surprised when it has them.
How would ChatGPT be able to spot overruns or fraud, or make a value assessment, from a budget alone? Unless you provided significantly more information than your post implies, e.g. actual spend, the entire exercise was pretty absurd.
This is the biggest danger of LLMs: people assuming that they have some sort of magical super intelligence.
Even if you did provide the data, if it's tabular you can forget ChatGPT understanding it properly unless it's a very small table or it writes code to summarise things. And if it writes code, there's a significant chance it still messes things up unless what you're asking is incredibly routine.
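To make the code-generation point concrete, here's a minimal sketch of the sort of script an LLM would have to emit (and a human would have to verify) to flag overruns; the file name and column names are hypothetical:

```python
# Hypothetical budget-overrun summary: the kind of code an LLM should
# generate instead of "reading" a large table directly.
import pandas as pd

# Assumed input: budget.csv with columns department, budgeted, actual.
df = pd.read_csv("budget.csv")

df["overrun"] = df["actual"] - df["budgeted"]
df["overrun_pct"] = 100 * df["overrun"] / df["budgeted"]

# Flag departments more than 10% over budget, worst offenders first.
flagged = df[df["overrun_pct"] > 10].sort_values("overrun_pct", ascending=False)

print(flagged[["department", "budgeted", "actual", "overrun_pct"]].to_string(index=False))
```

Even for something this routine, a human still has to sanity-check the threshold and the column semantics, which is exactly the "messes things up" risk above.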