elephanlemon's comments

The haptic sensor is almost as good as the physical button, and the trade off of not having to worry about it breaking (which was likely after a few years with the physical ones) is well worth it for me.

Strongly disagree with this. Bad junior devs might be useless, but I’ve seen good ones absolutely tear through features. Junior devs fresh out of school typically have tons of energy, haven’t been burned out, and are serious about wanting to get work done.

And how do they compare to what a senior dev can do with Claude Code/Codex?

I bet you a senior could do with one good prompt to Claude what a junior would take a day to do before AI - and take time away from the senior.


Pretty favorably, because the coding agents suck.

So do junior devs. I’ve gotten great results treating coding agents as junior devs where I keep my hands on the wheel

Some of you folks think way too highly of yourselves. Junior devs are awesome. You tell them what needs doing, if it's not well defined you have them write a document to figure it out, and then they churn away at it and will often surprise you with a brilliant solution.

Meanwhile, I've never once seen a coding agent give a brilliant solution or design to just about anything, and with anything that has the barest whiff of undefined-ness, it will simply zero in on your existing biases.

This whole thread reads like absolute insanity to me. I love getting new junior devs. They do great work.


Now ask a junior dev to design a concurrency implementation, or to know the complete AWS SDK (in my case) and write a script in 3 minutes.

https://docs.aws.amazon.com/boto3/latest/

Or do the same for IaC - same surface area - and use Terraform on one project, CloudFormation on another, and the CDK on a third, and have it generate code for you when you give it the correct architecture. It took me a day to do that before AI, depending on the architectural complexity, and I know AWS well (trust me on this). How long would it take me to delegate that to a junior dev? It took ChatGPT 2 minutes, before I started using Claude, just by my pasting a well labeled architecture diagram and explaining the goal.

It took me about 8 hours total to vibe code an internal web tool with a lot of features that if I had estimated before AI, I would have said a mid level developer would have taken two weeks at least. It wasn’t complex - just a CRUD app with Cognito authentication. How long would it have taken a junior developer?


The one reason I can't care about these kinds of arguments is that you're describing the solution, not the problem. Based on my career (maybe shorter than yours), you usually put juniors on projects of low complexity and low impact while you play the mentor role. It's not about them being a productive worker or a menial helper; it's for them to train on practical projects. Your problems don't look like suitable projects unless you want to train them in copy-pasta from the Internet.

First let’s define roles. I am not just pulling them out of thin air.

https://www.levels.fyi/blog/swe-level-framework.html

Junior - everything is spelled out in excruciating detail, the what and the how. They are going to be slow, not know best practices, constantly bug other developers, and you are going to have to correct them a lot.

Mid level developer - little ambiguity on the business case or their role in it. They are really good coders in their domain. They have the experience to turn well defined business requirements into code. You don’t have to explain the “how” to them, just the what. They should have the ability to break an assigned “epic” based on the business requirements into well defined stories and be the single responsible individual for that epic, maybe working with juniors or other mid level developers depending on the deliverable.

A senior developer works at a higher level of ambiguity and a larger scope; the business may know they want something, but neither the business nor the technical requirements are well defined. Think of a team lead.

Senior+ - more involved with strategy.

If I have to define everything in great detail anyway, why not just use AI? It can do it faster, cheaper, and more correctly, and the iteration is faster. I would go as far as saying, from my recent coding agent experience, that a coding agent is realistically 100x faster than a junior developer, since you have to give both of them well defined tasks.

My experience with Claude Code and Codex recently is that even the difference between a mid level developer and a coding agent comes down to taste in user facing development, knowing funky action at a distance, and knowing the business; with a mid level developer you can assume shared context and history, plus an ability to learn.

So again, why do I need to hire a junior developer in the age of AI?


From the article

  As an Entry Level Engineer, you’ll be expected to develop and maintain lower complexity components under the guidance and tutelage of more experienced team members.
That does not really contradict my point.

> If I have to define everything in great detail anyway, why not just use AI?

You don't have to define everything. And to do so is detrimental to their growth. If you're their mentor, you're supposed to give them problems, not recipes. And guidance may be as little as a hint or pointing them to some resource, not giving them the solution outright. The goal is not to get a problem solved (that's just a nice-to-have); the goal is to nurture a future colleague.


Okay. But that still doesn’t answer the question.

Why should I hire a junior who doesn’t know the what or the how, instead of a mid level developer who could be an excellent developer, who can turn business requirements into code, who is more than likely better at certain things than I am since they live and breathe it every day, and who can both do the work without supervision and offer valuable advice that might convince me I didn’t think things through clearly?

Remember that the difference between a mid level developer and a “senior”/“senior+” is scope and ambiguity, not necessarily technical depth in one area.

What does a junior developer bring to the table that I should use my open req on?


In my experience it just boils down to:

1- You need a ton of internal knowledge so it doesn't really matter what they know past the basics.

2- Testing gets expensive with seniors

3- You can't get mid-senior level employees you like. I very often see companies with requirements so high that the only candidates passing are friends of employees. Juniors pass more easily via the 'he's motivated to learn' path.

4- Juniors bring a motivation with them. Seniors tend to generally care less so a couple of energetic juniors can get them moving a bit quicker. Especially if you find a good one, since a senior really doesn't want to get outperformed by a fresh graduate. Also, since they usually suck at politics, it's easier to prod them about why things aren't working than the seniors who've played the blame game for 20 years and have perfected the art of dodging responsibility.


> Why should I hire a junior who doesn’t know the what or the how.

I'm not saying you should. It's the business model that will answer that question. But the traditional wisdom was that juniors are not costly and have few obligations tying them down. And juniors don't stay junior.

And some may know the what and the how, at least technically. What they may lack is just how to develop their skills further to be useful in a professional setting. It's easy to learn programming languages, tools, libraries and frameworks when you have a lot of free time. And they're not asking to be your protégés; you're just training them to be useful for your team.


Design a concurrency implementation? I sure hope they would spend more than 3 minutes on it! Concurrency lends itself to subtle bugs even when experts write it.

I'd gladly take a junior dev to do any of that work, because they can think for themselves and not cling to any bias you unknowingly build into the prompt like it's a religion.


I can absolutely guarantee you that no junior dev, or even senior dev, could do complicated IaC as fast as AI. It isn’t that knowing the architecture is the problem; it’s just very tedious. You have to look up all of the properties involved for each service and each property of each resource. I trust AI, trained on the total corpus of the internet, to know proper AWS architecture more than a junior dev.

> I bet you a senior could do with one good prompt to Claude what a junior would take a day to do before AI

It would still be a waste of a senior's time to write that prompt. They should have more important things to spend time on.


And it’s not a waste of their time to have to give detailed requirements and troubleshooting steps to a junior developer, constantly being interrupted, and then having to check their work thoroughly?

If you have to be that detailed anyway - you might as well use AI.


No, teaching the next generation of humans is not a waste of time

I'm very sorry for you that you think that way


So exactly how am I going to convince my management to open a req for a junior developer who is not going to help us meet our quarterly goals, and who will take time away from the other senior developers, who will then either have to work longer hours or do less work?

I’m not going to work as a charity and neither are any of my coworkers. We are all here to exchange labor for money.


We as a collective need to convince our management of this, but that needs to start with people getting their heads out of their asses and working together instead of this mercenary attitude you have

I don’t have to do anything except keep my head down, do my job and enjoy my well earned autonomy. I’m definitely not going to try to convince my skip, skip, skip manager to change their hiring policies. It’s not like my line level manager has any power over anything

Even when I was at a startup before 2020 and did have the ear of the CTO and the founders, I knew my ultimate mission was to do what was needed to get acquired. And before that, I knew exactly what my mission was when I was hired to lead the tech initiatives as we were acquiring companies: “find efficiencies” and go public.

Or do you think I could have convinced anyone of anything as an L5 at AWS, in between being an architect at a startup and my current company?


You're getting unnecessarily downvoted by devs who want to feel morally superior but don't have any concrete answer to the conundrum you've posed.

It's about money, and the actual solution would be to lower pay at the senior level and give it to juniors, with some lock-in agreed to by the junior in exchange for this grace.

I doubt the vast majority will agree to this.


They're getting downvoted because they are a miserable misanthrope, and it is our responsibility as people in a society to punish obviously antisocial behavior.

Agree. I’d like more fine grained control of context and compaction. If you spend time debugging in the middle of a session, once you’ve fixed the bugs you ought to be able to remove everything related to fixing them from the context and continue as you had before you encountered them. (Right now, depending on your IDE, this can be quite annoying to do manually. And I’m not aware of any that allow you to snip it out if you’ve worked with the agent on other tasks afterwards.)

I think agents should manage their own context too. For example, if you’re working with a tool that dumps a lot of logged information into context, those logs should get pruned out after one or two more prompts.

Context should be thought of as something that can be freely manipulated, rather than a stack that can only have things appended to or removed from the end.
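As a minimal sketch of what that kind of free manipulation could look like (the message shape follows the common role/content chat format; the stub wording is invented):

```python
def prune_span(messages, start, end, summary):
    """Replace messages[start:end] with a one-line stub, so the agent keeps
    a trace that something happened there without carrying the full transcript."""
    stub = {"role": "assistant",
            "content": f"[pruned {end - start} messages: {summary}]"}
    return messages[:start] + [stub] + messages[end:]
```

After a debugging detour is resolved, the harness (or the agent itself, via a tool) could call this over the detour's span and continue with a much tighter context.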


Oh that's quite a nice idea - agentic context management (riffing on agentic memory management).

There are some challenges around the LLM having enough output tokens to easily specify what it wants its next input tokens to be, but "snips" should be able to be expressed concisely (i.e. the next input should include everything sent previously except the chunk that starts XXX and ends YYY). The upside is tighter context; the downside is it'll bust the prompt cache (perhaps the optimal trade-off is to batch the snips).
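A marker-based snip like that could be applied by the harness with something like this sketch (function name and the no-op behavior on missing markers are assumptions):

```python
def apply_snip(context, start_marker, end_marker):
    """Drop the span from start_marker through end_marker (inclusive);
    return the context unchanged if either marker is missing."""
    i = context.find(start_marker)
    if i == -1:
        return context
    j = context.find(end_marker, i)
    if j == -1:
        return context
    return context[:i] + context[j + len(end_marker):]
```

The model only has to emit the two short markers, not the whole edited context, which addresses the output-token concern.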


So I built that in my chat harness. I just gave the agent a “prune” tool and it can remove shit it doesn’t need any more from its own context. But chat is last gen.

Good point on prompt cache invalidation. Context-mode sidesteps this by never letting the bloat in to begin with, rather than snipping it out after. Tool output runs in a sandbox, a short summary enters context, and the raw data sits in a local search index. No cache busting because the big payload never hits the conversation history in the first place.

Yeah, the fact that we have treated context as immutable baffles me; it’s not like humans’ working memory keeps a perfect history of everything they’ve done over the last hour. It shouldn’t be that complicated to train a secondary model that just runs online compaction, e.g.: the agent runs a tool call, the model determines what’s germane to the conversation and prunes the rest; or some task gets completed, so just leave a stub in the context that says “completed x”, with a tool available to see the details of x if it becomes relevant again.

That's pretty much the approach we took with context-mode. Tool outputs get processed in a sandbox, only a stub summary comes back into context, and the full details stay in a searchable FTS5 index the model can query on demand. Not trained into the model itself, but gets you most of the way there as a plugin today.
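The searchable-index half of that design can be sketched with Python's stdlib sqlite3, which in most builds ships with the FTS5 extension (the table schema and sample data here are illustrative, not context-mode's actual schema):

```python
import sqlite3

# In-memory stand-in for the on-disk index of raw tool outputs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE tool_output USING fts5(call_id, body)")
conn.execute("INSERT INTO tool_output VALUES (?, ?)",
             ("call_1", "npm test failed: TypeError in parser.js line 88"))

# Only a stub summary lives in context; when the model needs specifics,
# it issues a full-text query like this one.
rows = conn.execute(
    "SELECT call_id FROM tool_output WHERE tool_output MATCH 'parser'"
).fetchall()
```

The point of the design is that the big payload is queryable on demand rather than resident in every subsequent prompt.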

This is a partial realization of the idea, but for a long running agent the proportion of noise increases linearly with the session length; unless you take an appropriately large machete to the problem, you’re still going to wind up with suboptimal results.

Yeah, I'd definitely like to be able to edit my context a lot more. Once you consider that, you start seeing things in your head like "select this big chunk of context and ask the model to simplify that part", or fixing the model trying to ingest too many tokens because it dumped in a whole file that it didn't realize was going to be as large as it was. There's about a half-dozen things like that that are immediately, obviously useful.

Is it because of caching? If the context changes arbitrarily every turn then you would have to throw away the cache.

So use a block based cache and tune the block size to maximize the hit rate? This isn’t rocket science.

This seems misguided; you have to cache a prefix due to attention.
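A toy sketch of why block-size tuning only helps so much (the token representation and block size are arbitrary): because attention forces the cache to be a prefix, an edit invalidates everything from the first changed block onward, so only the untouched leading blocks are reusable.

```python
def reusable_prefix_tokens(old_ctx, new_ctx, block_size=4):
    """How many leading tokens survive a context edit in a block-based
    prefix cache: whole blocks before the first difference, nothing after."""
    n = min(len(old_ctx), len(new_ctx))
    same = 0
    while same < n and old_ctx[same] == new_ctx[same]:
        same += 1
    return (same // block_size) * block_size
```

An early snip zeroes the reusable prefix entirely, which is the cache-busting trade-off mentioned upthread.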

> For example, if you’re working with a tool that dumps a lot of logged information into context

I've set up a hook that blocks directly running certain common tools and instead tells Claude to pipe the output to a temporary file and search that for relevant info. There's still some noise where it tries to run the tool once, gets blocked, then runs it the right way. But it's better than before.
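The decision logic of such a hook might look like this sketch; the payload shape (`tool_input.command`), the blocking convention, and the list of noisy commands are all assumptions for illustration, not the commenter's actual hook:

```python
NOISY_COMMANDS = ("npm test", "pytest")  # hypothetical verbose tools

def decide(payload):
    """Return (allow, message) for a proposed shell command: block noisy
    tools unless their output is redirected to a file for later searching."""
    cmd = payload.get("tool_input", {}).get("command", "")
    if any(tool in cmd for tool in NOISY_COMMANDS) and ">" not in cmd:
        return False, "Redirect the output to a temp file and search that instead."
    return True, ""
```

In a real hook, a wrapper script would read the JSON payload from stdin and signal a block via its exit status, with the message fed back to the agent.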


I think telling it to run those in a subagent should accomplish the same thing and ensure only the answer makes it to the main context. Otherwise you will still have some bloat from reading the exact output, although in some cases that could be good if you’re debugging or something

Not really because it reliably greps or searches the file for relevant info. So far I haven't seen it ever load the whole file. It might be more efficient for the main thread to have a subagent do it but probably at a significant slowdown penalty when all I'm doing is linting or running tests. So this is probably a judgement call depending on the situation.

> I think agents should manage their own context too.

My intuition is that this should be almost trivial. If I copy/paste your long coding session into an LLM and ask it which parts can be removed from context without losing much, I'm confident that it will know to remove the debugging bits.


I generally do this when I arrive at the agent getting stuck at a test loop or whatever after injecting some later requirement in and tweaking. Once I hit a decent place I have the agent summarize, discard the branch (it’s part of the context too!) and start with the new prompt

I’ve been wondering about this and just found this paper[1]: Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models

Looks interesting.

[1] https://arxiv.org/html/2510.04618v1


That's exactly what context-mode does for tool outputs. Instead of dumping raw logs and snapshots into context, it runs them in a sandbox and only returns a summary. The full data stays in a local FTS5 index so you can search it later when you need specifics.

what i want is for the agent to initially get the full data and make the right decision based on it, then later it doesn't need to know as much about how it got there.

isn't that how thinking works? intermediate tokens that then get replaced with the result?


Trees in pi let you do this, after done debugging you move back up and continue, leaving all the debugging context in its own branch

i think something kinda easy for that could be to pretend that pruned output was actually done by a subagent. copy the detailed logs out, and replace it with a compacted summary.

Treat context like git shas. Yes, there is a specific order within a 'branch' but you should be able to do the equivalent of cherry-picking and rebasing it

Interesting. I’m having trouble finding anything on Gemini being profitable, though. Do you happen to have a source?

Here's one, basically AI is driving 15% of Google's profits at the end of 2025.

https://advergroup.com/gemini-hits-650-million-users/

I didn't really realize how big Gemini was until I saw that Qualia was using it; they apparently used 0.01% of Gemini's total tokens (100 billion) in about 3 months. They're in production with the title and escrow industry, so that's a great deal of data going through Gemini, and unlike some chat subscription this is all API driven, which I doubt Google is charging at a loss for.

https://www.qualia.com/qualia-clear/

Unlike OpenAI, Google has an actual business model, not just strange circular deals.

Edit: I miswrote "majority of" instead of "15% of" Google's profits.


> Here's one, basically AI is driving 15% of Google's profits at the end of 2025. https://advergroup.com/gemini-hits-650-million-users/

This does not at all tell us Gemini is profitable or driving 15% of Google's profits. The article does not mention profits even once. It then goes on to bizarrely compare Gemini's monthly active users to OpenAI's weekly active ones.


Indeed, that article doesn't support a single part of that claim.

It kinda feels like an LLM-generated article that another LLM picked as a "citation", and then no human bothered to check if it actually said what the LLM said it did.

And, really, advergroup.com? Who cites an advertising agency as if it's a reliable source?

https://advergroup.com/digital-marketing/

"AdverGroup Web Design and Creative Media Solutions is a full service advertising agency that delivers digital marketing services. We manage Google Ad Word campaigns and/or Meta Ad Campaigns for local clients in Chicago, Las Vegas and their surrounding suburbs."

So credible a resource on Gemini's performance/profitability... /sarc

But yeah, it doesn't even actually say anything about profits, let alone attribute any specific percentage of profits to Gemini. It's just vague marketing copy.


“You’re in luck if you’ve been hankering to have your wall connected to wifi.”


It’s so they can begin selling you a subscription to allow you to hang a picture.


Great news, there’s finally going to be sufficient motivation for people to both build out and use open source alternatives.


Interesting how pedantic he is!

> Then, too, Orwell had the technophobic fixation that every technological advance is a slide downhill. Thus, when his hero writes, he 'fitted a nib into the penholder and sucked it to get the grease off'. He does so 'because of a feeling that the beautiful creamy paper deserved to be written on with a real nib instead of being scratched with an ink-pencil'.

> Presumably, the 'ink-pencil' is the ball-point pen that was coming into use at the time that 1984 was being written. This means that Orwell describes something as being 'written' with a real nib but being 'scratched' with a ball-point. This is, however, precisely the reverse of the truth. If you are old enough to remember steel pens, you will remember that they scratched fearsomely, and you know ball-points don't.

> This is not science fiction, but a distorted nostalgia for a past that never was. I am surprised that Orwell stopped with the steel pen and that he didn't have Winston writing with a neat goose quill.


I don't think it's pedantic; he's trying to make a broad point about Orwell's mentality, using the detail as the defining example.


Intel Arc seems to be well liked; this seems to just be bad writing by Reuters. It's unclear what the news is here exactly, as Demmers was hired a month ago…


“Gemini 3 Pro was often overloaded, which produced long spans of downtime that 2.5 Pro experienced much less often”

I was unclear on whether this meant that the API was overloaded or that he was on a subscription plan and had hit his limit for the moment. Although I think the Gemini plans just use weekly limits, so I guess it must be the API.


Gemini CLI has a specific "model is overloaded" error message which is distinct from "you're out of quota", so I suspect whatever tools they're using for this probably have something similar, and they're referring to that.


Double hyphen converts to em dash in Microsoft Word and I think some other places. I was taught that it was incorrect to use a hyphen in place of a dash, so I’ve always used em dashes -- sometimes I’ll just use two hyphens if the software doesn’t convert, like a forum :).

