n=1, been lifting weights for 25 years and lost 40 lbs on Zepbound and counting.
I can still do my routines easily with no issues. My muscles look slightly smaller I think, but maybe that's the fat around them that's been diminished.
We set time limits on tablet and rules like getting your homework done first, but my 9 year old specifically begs me to let him have Roblox, which I think he is too young to play.
I told him he can play it, but he has to beat Chrono Trigger (tablet version) first. He didn't even get through the Millennial Fair and gave up, saying it was too boring, the nerve!
What's frustrating is that a lot of these keys were generated a long time ago, when only a small set of GCP services could be connected to them (e.g., Firebase Remote Config, Firestore).
When Gemini came around, rather than the new service being disabled by default for those keys, Gemini was enabled, allowing exploiters to easily use these keys (e.g., a "public" key stored in an APK file).
Gemini API is not enabled by default, a project owner has to go explicitly enable it.
The problem described here is that developer X creates an API key intended for Maps or something, developer Y turns on Gemini, and now X's key can access Gemini without either X or Y realizing that this is the case.
The solution is to not reuse GCP projects for multiple purposes, especially in prod.
Please see my response to your pasted comment in another thread: for many APIs that you can enable on a GCP project, you are intended to use the same GCP project across the whole application for quota tracking. Google even makes you assert that you are only using one GCP project (or at least list out all GCP projects, which APIs are enabled on them and what their purpose is and why you have more than one) when seeking approval for public facing OAuth.
You are wrong that adding projects has no cost; many services have per-project costs (Cloud Armor rules cannot be used cross-project at the base tier), many services (mostly observability) degrade significantly cross-project, and the Google Cloud Console _sucks_ cross-project.
You are also wrong in saying there are no projects that could reasonably have a safe api key made unsafe by this exploit.
One example: a service that uses Firebase Auth must publish its API key client-side (Google's docs recommend this). Later, you add gen AI to that service, managing access with IAM/service accounts (the proper way). You've now elevated the Firebase Auth key into a Gemini key. Really, undeniably poor from Google.
The problem is that developer X did not properly scope the API key when he created it. Yes, separate projects would also stop this, but keys have supported API restrictions forever, and creating unrestricted keys is strongly discouraged. Pretty sure you can even set an org policy to prevent someone from doing so…
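For what it's worth, the per-key fix looks something like this (a sketch assuming the `gcloud services api-keys` command group; the project ID and key name are placeholders, check `gcloud services api-keys update --help` for your version):

```shell
# List existing API keys in the project (placeholder project ID).
gcloud services api-keys list --project=my-project

# Restrict a key so it works ONLY for the APIs it was created for. A key
# with an --api-target allowlist can't be used for Gemini even if someone
# later enables the Gemini API on the same project.
gcloud services api-keys update \
  projects/my-project/locations/global/keys/my-key \
  --api-target=service=firestore.googleapis.com \
  --api-target=service=firebaseremoteconfig.googleapis.com
```

An unrestricted key, by contrast, inherits every API anyone ever enables on the project, which is exactly the failure mode being discussed.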
Most likely. Microsoft has pushed out everyone with deep enough knowledge of Windows to ever fix what the new blood has broken.
Windows 7 was likely an accident that will never be repeated, their lightning in a bottle OS.
They're at the point where the only way it can be fixed would be for a visionary CEO to come in, scrap Windows 11, and pay to rewrite the OS from the ground up in tandem with hardware manufacturers, scalpeling out all of the Windows NT cruft and copilot/AI shit, and rebuilding a Windows 12 that might possibly hearken back to the glory days, and then charge us, the users, for the OS license again.
I am just as likely to win the Powerball tonight as that is to happen.
Windows is dead, Microsoft just doesn't believe it yet.
I'm curious as to what your thought process is for suggesting "the app" (not sure what app you are referring to) as an alternative for someone who essentially rage quits when they see a cookie banner, given that apps on average are even more invasive of privacy and riddled with dark patterns.
Steam itself is proprietary, and I imagine they'd expand the existing Steam chat and not do something separate like Proton. I don't think jumping into the arms of another company managing a centralized proprietary social platform is a good idea, even if Valve tend to be "good guys".
Steam Group Chats are sort of there; no video chat but text chats and drop-in voice chats like Discord. On the other hand they're basically ephemeral, with messages disappearing from history at some given point.
I also can't figure out a way to access them outside of the Steam client; in Dota I believe they're tied to the in-game guild system.
Outside of Dota, it's called "Group Chats" (below your friends list) and it looks very similar to a basic Discord interface. You may have to join a dota guild in order to see it, although everyone in my guild just uses discord.
Let's presume / speculate for a moment that companies will only need 1 developer to do the job of 10 developers because of AI. That would also mean 10 developers can do the job of 100 developers.
A company that cuts developers to save money whose moat is not big enough may quickly find themselves out-competed by a company that sees this as an opportunity to overtake their competitor. They will have to hire more developers to keep their product / service competitive.
So whether you believe the hype or not, I don't think engineering jobs are in jeopardy long-run, just cyclically as they always have been. They "might" be in jeopardy for those who don't use AI, but even as it stands, there are a lot of niche things out there that AI completely bombs on.
This sounds good in theory, but have you hired someone in 2026?
Developers are really lazy in general and don't want to work. The more people you hire, the more you run into the chance of gumming up productivity with unproductive developers.
Even if they are productive, once you cross the threshold of 30 people even productive developers become lazy because of entitlement, bad resource distribution, or complexities from larger teams.
We don't even have to talk about teams of 1000+. Ownership is just dead at that point.
In 2026, having just 5 engineers with AI means you can cut through all the waste and get stuff done. If they start being weird, you can see it pretty easily vs. when engineers are being weird in a team of 50-1000+.
It's not rocket science to see leadership decide to cut down on teams to better manage weirdness in devs. More people doesn't mean more results unfortunately because of work culture nowadays.
This sounds like a rant from a dysfunctional out of touch manager more than anything. From a 57 day old account here to pump AI because humans are terrible and not printing you lambos. Totally not a shill or anything. Humans = bad AI = good. Shill.
When you are asked specifics about how you use AI so effectively when others cannot, you do not reply. Shill.
I've hired close to 200 people, and 4 were bad apples that I had to fire. So no, real life does not reflect what you wrote. Most people want to do a good job.
Will the modal developer of 2030 be much like a dev today?
Writing software was a craft. You learned to take a problem and turn it into precise, reliable rules in a special syntax.
If AI takes off, we'll see a new field emerging of AI-oriented architecture and project management. The skills will be different.
How do you deploy a massive compute budget effectively to steer software design when agents are writing the code and you're the only one responsible for the entire project because the company fired all the other engineers (or never hired them) to spend the money on AI instead?
Are there ways of factoring a software project that mitigate the problems of AI? For example, since AI has a hard time in high-context, novel situations but can crank out massive volumes of code almost for free, can you afford to spend more time factoring the project into low-context, heavily documented components that the AI can stitch together easily?
How do you get sufficient reliability in the critical components?
How do you manage a software project when no human understands the code base?
How do you insure against and mitigate the risks of AI-designed products? Can insurance offset the risk, letting you charge lower prices even if AI-designed software is riskier? Can we quantify and put a dollar value on the risk of AI-designed software compared to human-designed?
What would be the most useful tools for making large AI-generated codebases inspectable?
When I think about these questions, a lot of them sound like things a manager or analyst might do. They don't sound like the "craft of code." Even if 1 developer in 2030 can do the work of 10 today, that doesn't mean the typical dev today is going to turn into that 10x engineer. It might just be a very different skillset.
Nitpick, blacksmiths typically did forging, which is hammering heated metal into shape with benefits for the strength of the hammered material. CNC is machining, cutting things into the shape you want at room temperature.
Forging is machine-assisted now with tons of tools, but it's still somewhat of a craft; you can't just send a CAD file to a machine.
I think we're still figuring out where on that spectrum LLM coding will settle.
Blacksmiths also spent a lot of their time repairing things, whereas modern replacements primarily produce more things. Kind of an interesting shift. Economies and jobs change in so many ways.
I don't think it necessarily scales that way. Larger organizations need more communication channels and coordination. If anything, assuming AI does give you 10x ability, there's probably a sweet spot where you have just enough developers that churn out code at a good pace but not too many that it gets too chaotic.
If you compare one developer to 10, for instance, one developer doesn't have to deal with communicating with 9 other people to make sure they're working on things that align with the work everyone else is doing. There is no consensus that has to be reached. No meetings, no messages that have to be relayed, no delays because someone wasn't around to get approval. That one developer just makes a decision and does it.
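The coordination overhead described here grows quadratically with headcount, which is the classic Mythical Man-Month observation. A toy sketch of the standard pairwise-channel formula:

```typescript
// Pairwise communication channels among n developers: n * (n - 1) / 2.
// One developer has 0 channels to maintain; 10 developers already have 45.
function channels(n: number): number {
  return (n * (n - 1)) / 2;
}

console.log(channels(1));  // → 0
console.log(channels(10)); // → 45
console.log(channels(50)); // → 1225
```

So a 10x productivity boost for one developer doesn't just replace ten salaries, it also deletes the 45 channels those ten people would have had to keep in sync.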
There are lots of big companies out there and in the past, small startups have been able to create successful products that never would have been created at the big company even though the big company hired way more developers.
Yeah I think this is a good way to think about it.
I mean, Google and MSFT for example have effectively unlimited developers, and their products still suck in some areas (Teams is my number one worst), so maybe AI will allow them to upgrade their features and compete.
At large companies, UI/UX is done by UI/UX designers and features are chosen and prioritized by product management and customer research teams. Developers don't get much input.
As Steve Jobs said long ago "The only problem with Microsoft is they just have no taste." but you can apply the same to Google and anyone else trying to compete with them. Having infinite AI developers doesn't help those who have UI designers and product managers that have no taste.
MSFT, GOOG et al. have an enormous army of engineers. And yet, they don't seem to be continually releasing one hit product after another. Why is that? Because writing lines of code is not the bottleneck of continually producing and bringing new products to market.
It's crazy to me how people are missing the point with all this.
From the outside, as a consumer: the real problem is that these products do not compete on price. An enterprise chat app, at the scale of customers they have, should probably be 1€ a month, not 10 or 20€.
That might not be a multi-billion-a-year business, but maybe a chat app shouldn't be one.
I think a big factor is generational. Bigcos are led by generations that are phone-first or email-first. Chat is an afterthought. For orgs like that, Teams is great if chat is your least important collaboration method.
I see this sentiment everywhere, but I eat more than I did now that I am on Ozempic (1.7). I've always been perpetually starving + lift weights every day since I was a teen though.
Haven't noticed muscle mass decrease either, at least from the perspective that I can do the same reps / sets for my weight routine.
If you want a bunch of Tailwind class slop, then yes. Otherwise, a lot of context engineering is needed if you want it to write modular Tailwind components properly for large projects where consistency is important.
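To make the "modular vs. slop" distinction concrete, here's a hedged sketch (all names hypothetical) of the kind of structure you have to steer an LLM toward: Tailwind class lists centralized in one place and composed, instead of one-off strings repeated per element:

```typescript
// Hypothetical design tokens: shared Tailwind class lists live in one module
// so every generated component pulls from the same source of truth.
const tokens = {
  card: "rounded-lg border border-gray-200 bg-white p-4 shadow-sm",
  heading: "text-lg font-semibold text-gray-900",
  buttonBase: "rounded-md px-3 py-2 text-sm font-medium",
} as const;

// Variants extend the shared base rather than restating it, so a change to
// buttonBase propagates everywhere.
function button(variant: "primary" | "ghost"): string {
  const byVariant = {
    primary: "bg-indigo-600 text-white hover:bg-indigo-500",
    ghost: "bg-transparent text-gray-700 hover:bg-gray-100",
  } as const;
  return `${tokens.buttonBase} ${byVariant[variant]}`;
}

console.log(button("primary"));
console.log(button("ghost"));
```

Without explicit instructions (or a module like this already in context), models tend to inline fresh class strings on every element, which is exactly where the inconsistency creeps in.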
> Otherwise, A lot of context engineering is needed if you want
I am not seeing that. I have a few AI-assisted projects using tailwind and scrolling through it now 99% of it looks... completely modern and professional. I had previously asked it to "completely refactor, a rewrite if needed, all the tailwind/css/app styles. ensure visual and code consistency across pages".
Modern coding tools add tons of their own content, but none of the above was "a lot of context engineering".
Isn't that an article about using a frontend aesthetics prompt in order to avoid the AI tells? A lot of the with-aesthetics pages look pretty good imo.
It's describing the problem and also giving a solution. The problem of vibe coded sites all looking the same is very real however, if you don't consciously and actively guide the LLM towards being different, as described in the article.
And design too. I shouldn’t be able to tell Claude designed your site/app, but it is too often the case. Good taste still remains an advantage thankfully.