Hacker News | DominikPeters's comments

This approach started with the “Ask a question about your code” feature, which is more comparable to a single chat message with relatively predictable token usage. Now it’s an agent that might work for 30 minutes, read the whole codebase, and write 1000 lines.

30 minutes lol. I've gotten Opus to work for a full 8 hour day on one prompt.

They mention in the announcement that it will be possible to pool usage across an organization.

It's interesting that the API token cost of GPT-5.5 is double that of GPT-5.4, yet Copilot charges a 7.5x multiplier and gates the model behind premium plans. Clearly, they severely underpriced previous models like GPT-5.4, which they sell at $0.04 per request -- and these models can of course work for 30+ minutes in response to a single request and incur costs of several dollars.

As of this week, the "request" gravy train is over. Github copilot now meters tokens not requests.

> As of this week, the "request" gravy train is over. Github copilot now meters tokens not requests.

Where are you reading this?

I can't find anything that's been officially announced yet. It's just one article that's getting copied and pasted in various "news" outlets.


You got caught out by AI-generated slop news.

In my Codex dashboard, I can buy 1000 extra credits for $40. The credit cost for GPT-5.4 is 375 credits / 1M output tokens, which translates to $15 / 1M output tokens and exactly matches the API rate.
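The conversion above is easy to check with a back-of-envelope calculation (the numbers are taken from the comment; the variable names are just illustrative):

```python
# Codex credit pricing quoted above: 1000 credits for $40,
# GPT-5.4 at 375 credits per 1M output tokens.
dollars_per_credit = 40 / 1000            # $0.04 per credit
credits_per_m_tokens = 375
dollars_per_m_tokens = credits_per_m_tokens * dollars_per_credit
print(dollars_per_m_tokens)               # 15.0, i.e. $15 / 1M output tokens
```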


Large cars impose many heavy negative externalities on other people: they take up more space, make it difficult to get through a narrow street when parked there, cause higher mortality when they drive into pedestrians or cyclists, reduce visibility for others, and are aesthetically offensive. Policy is slow to shift those costs onto the people causing the externalities, but it is predictable that it will happen eventually.


Are you using Opus 4.5? Sounds more like Sonnet.


Yes I'm using Sonnet 4.5. Thanks for the tip, will try Opus 4.5, although costs might become an issue.


> although costs might become an issue.

If you have a ChatGPT subscription, try Codex with GPT-5.2-High or 5.2-codex High? In my experience, while being much slower, it produces far better results than Opus and seems even more aggressively subsidized (more generous rate limits).


This seems like a very basic overleaf alternative with few of its features, plus a shallow ChatGPT wrapper. Certainly can’t compete with using VS Code or TeXstudio locally, collaborating through GitHub, and getting AI assistance from Claude Code or Codex.


Loads of researchers have only used LaTeX via Overleaf and even more primarily edit LaTeX using Overleaf, for better or worse. It really simplifies collaborative editing and the version history is good enough (not git level, but most people weren't using full git functionality). I just find that there are not that many features I need when paper writing - the main bottlenecks are coming up with the content and collaborating, with Overleaf simplifying the latter. It also removes a class of bugs where different collaborators had slightly different TeX setups.

I think I would only switch from Overleaf if I was writing a textbook or something similarly involved.


Getting close to the "why Dropbox when you can rsync" mistake (https://news.ycombinator.com/item?id=9224)

@vicapow replied to keep the Dropbox parallel alive


Yeah I realized the parallel while I was writing my comment! I guess what I'm thinking is that a much better experience is available and there is no in-principle reason why overleaf and prism have to be so much worse, especially in the age of vibe-coding. Prism feels like the result of two days of Claude Code, when they should have invested at least five days.


I can see why it seems that way, because the UI is quite minimalist, but the AI capabilities are very extensive, imo, if you really play with it.

You're right that something like Cursor can work if you're familiar with all the requisite tooling (git, installing Cursor, installing LaTeX Workshop, knowing how it all fits together), but most researchers don't want to, and really shouldn't have to, figure all of that out for their specific workflows.


> Certainly can’t compete with using VS Code or TeXstudio locally, collaborating through GitHub, and getting AI assistance from Claude Code or Codex.

I have a phd in economics. Most researchers in that field have never even heard of any of those tools. Maybe LaTeX, but few actually use it. I was one of very few people in my department using Zotero to manage my bibliography, most did that manually.


Accessibility does matter


If you inspect the Chain of Thought summaries, the LLM often knows full well what it is doing.


That's not knowing. That's just parroting in smaller chunks.


As an arXiv author who likes using complicated TeX constructions, the introduction of HTML conversion has increased my workload a lot: I now have to write fallback macros that render okay after conversion. The conversion is super slow, and there is no way to faithfully simulate it locally. Still, I think it's a great thing to do.


I believe dginev's Docker image https://github.com/dginev/ar5ivist is very close to what runs on arXiv and can be run locally. It uses a recent LaTeXML snapshot from September.


Given that the warming impacts of contrails are short-lived (roughly a day), I think it is a good idea to do research now on the weather forecasting needed to avoid producing contrails. But I don't really see a reason to actually start avoiding them now, with the associated costs in terms of fuel, CO2 emissions, and time. We can start avoiding them in a few decades when it might have become urgent to have cooling.


Aren't the impacts perpetual if we're creating new contrails every single day?

Taken from another comment, this seems pretty clear:

> Contrail cirrus may be air traffic's largest radiative forcing component, larger than all CO2 accumulated from aviation, and could triple from a 2006 baseline to 160–180 mW/m2 by 2050 without intervention.

[1] https://en.wikipedia.org/wiki/Contrail#Impacts_on_climate

The original article describes the associated costs in time and fuel usage as being in the realm of a 1% increase.


Not sure how you haven't noticed, but climate change is already affecting precipitation and drought patterns, it exacerbates heatwaves, cold snaps, and flooding, it affects harvests, disrupts ecosystems etc. etc. Reducing warming is an urgent matter.


There was a really good section of the article that went into great detail on the math and how the benefit would easily outweigh the CO2 cost. It would only require diverting something like 2% of all flights, since that small fraction of flights produces the majority of contrails. The average diversion would add only about 2 minutes of flight time on shorter flights and about 6 minutes on longer ones, which the article states is not much of an increase in fuel consumption, nor enough of a time increase to dissatisfy customers. So if the article's math is accurate, the associated costs in fuel, CO2, and time are all not an issue.
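A rough sketch of the fleet-wide time cost implied by those figures (the 2% diversion rate and the 2 and 6 extra minutes come from the comment above; the baseline flight durations are my own illustrative assumptions, not from the article):

```python
# Per the figures quoted above, only ~2% of flights need diverting,
# at ~2 extra minutes on a short flight and ~6 on a long one.
# Baseline durations (90 min short, 480 min long) are illustrative guesses.
diverted_fraction = 0.02

def fleet_time_increase(extra_min, baseline_min):
    """Average extra flight time across the whole fleet, as a fraction."""
    return diverted_fraction * extra_min / baseline_min

short_haul = fleet_time_increase(extra_min=2, baseline_min=90)
long_haul = fleet_time_increase(extra_min=6, baseline_min=480)
print(f"{short_haul:.4%}, {long_haul:.4%}")  # both well under 0.1%
```

Under these assumptions, the fleet-wide average time increase is tiny; even the article's more conservative "realm of 1%" figure for the diverted flights themselves is small.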


Given the feedback loops associated with climate change, I'd expect early interventions to have a larger impact on the climate than later ones.


It is already urgent.


It was urgent 40 years ago.


No. Addressing CO2 production was urgent then but the actual impacts of heat were not. They are now.


But the warming started already back then, and contrails contributed. So fewer contrails would have meant less warming over the last 40 years.

