
> splitting the status page like they do, to the point where it is only a bit of humourous exaggeration to say that they track broken `git push` and `git pull` separately, is a sleight of hand / accounting / SLA-fudging that we should not excuse

This is a pretty ungenerous take. You could look at it the other way: if I don't use actions then it's useful for me to know that only actions are broken, and I can continue in my normal usage. If you bundle everything up then the status page is reporting an unhelpful false positive for me.


you can do both: report a number that shows how often your service as a whole is degraded, with a breakdown for individual components

example (not sponsored, i barely use codex and today's the first time i've ever had to look at this page; i don't know how much they're fudging the individual numbers or not reporting minor incidents):

https://status.openai.com/

most people who use chatgpt don't use all of the components under the "ChatGPT" heading. for codex, i don't use the vscode extension or codex web. etc


For all the negatives about GitHub, I agree. They offer a lot of free stuff, and LLMs seem likely to massively increase their costs with no guarantee they'll be making money off it. I can't think of many (any?) large businesses which could scale up to meet so much new demand without some significant growing pains along the way.

Unless everything else stays the same (underlying traffic etc.) then you can't really compare. Could be that you hit some fundamental scaling limit with the old design and it completely falls over after a certain scale.

As said, I'm pretty sure things are more complex. It's just funny in a way that all these technologies usually sold as "enablers for scale" don't seem to do their job very well.

I think this is overly harsh. After the guy has been working on the project for such a long period a handover would inevitably be a long process, not least to ensure whoever took over didn't abuse the existing user-base. Completely fair if the existing maintainer doesn't want to take on this work, and arguably a fork forces consumers to properly consider that someone else is in charge now.

I don't think the parent mentioned military secrets in particular? But the insider trading is already well documented e.g. https://www.bbc.co.uk/news/articles/cge0grppe3po

> the insider trading

The suspect hasn't been charged with insider trading. (OP said those "in DC seem to be able to do everything listed.")


> The suspect hasn't been charged with insider trading.

I think that was the point GP was making.


Pretty sure Counts 1 through 5 above cover insider trading by administration officials too.

I think 3 and 4 are frauds on others in the prediction market agreement. As in, it’s fraud against the terms of the market.

The problem is "insider trading" has a definition and acting based on knowledge of government secrets isn't what it is.

And what I am saying is that the same articles of prosecution as in the soldier's case are applicable for their case too. Not going after them is a choice.

IANAL but what you state seems to literally fall under the STOCK Act of 2012. It is one kind of insider trading.

The dark pattern is how it was presented. It wasn't "your total is X, split it in 12 monthly payments" it was a yearly contract disguised as a monthly contract.


I think this is a pretty interesting comment because it gets to the heart of differing views on what quality means.

For you, non-buggy software is important. You could also reasonably take a more business-centered approach, where having some number of paying customers is an indicator of quality (you've built something people are willing to pay for!). Personally I lean towards the second camp: the bugs are annoying, but there is a good sprinkling of magic in the product which overall makes it something I really enjoy using.

All that is to say, I don't think there is a straightforward definition of quality that everyone is going to agree on.


What do I care if Anthropic makes money? Do you think Oracle makes money because they have a quality product?


> But if you take the point of view of a customer, it might not matter as much 'which' part is broken. To use a bad analogy, if my car is in the shop 10% of the time, it's not much comfort if each individual component is only broken 0.1% of the time.

Not to go too far out of my way to defend GH's uptime, because it's obviously pretty patchy, but I think this is a bad analogy. Most customers won't have a hard reliability requirement on every user-facing GH feature. Or to put it another way, only a tiny fraction of users will have actually experienced something like the 90% uptime reported by the site. Most people in practice are probably experiencing something like 97-98%.


Sorry, by 'customer' I meant to say something like a large corporate customer - you're buying the whole package, and across your org, you're likely to be a little affected by even minor outages of niche services.

But yeah, totally agree that at the individual level, the observed reliability is between 90% and 99%, and probably toward the upper end of that range.


I think the parent is just pointing out that these things lie on a spectrum. I have a website that consists largely of static content and the (significant) scraping which occurs doesn't impact the site for general users so I don't mind (and means I get good, up to date answers from LLMs on the niche topic my site covers). If it did have an impact on real users, or cost me significant money, I would feel pretty differently.


Putting everything on a spectrum is what got us into this mess of zero regulation and moving goal posts. It's slippery slope thinking no matter which way we cut it, because every time someone calls for a stop sign to be put up after giving an inch, the very people who would have to stop will argue tirelessly for the extra mile.


What mess are you talking about? The existence of LLMs? I think it's pretty neat that I can now get answers to questions I have.

This is something I couldn't have done before, because people very often don't have the patience to answer questions. Even Googling ended up in loops of "just use Google", or "closed: this is a duplicate of X" where X doesn't actually answer the question, or references to dead links.

Are there downsides to this? Sure, but imo AI is useful.


It's just repackaged Google results masquerading as an 'answer.' PageRank pulled results and displayed the first 10 relevant links and the LLM pulls tokens and displays the first relevant tokens to the query.

Just prompt it.


1. LLMs can translate text far better than any previous machine translation system. They can even do so for relatively small languages that typically had poor translation support. We all remember how funny text would get when you did English -> Japanese -> English. With LLMs you can do that (and even use a different LLM for the second step) and the texts remain very close.

2. Audio-input capable LLMs can transcribe audio far better than any previous system I've used. They easily understand my speech without problems. YouTube's old closed captioning system wasn't anywhere close to as good, and Microsoft's was unusable for me. LLMs have no such problems (makes me wonder if my speech patterns are in the training data, since I've made a lot of YouTube videos, and that's why they work so well for me).

3. You can feed LLMs local files (and run the LLM locally). Even if it is "just" pagerank, it's local pagerank now.

4. I can ask an LLM questions and then clarify what I wanted in natural language. You can't really refine a Google search in such a way. Trying to explain a Google search with more details usually doesn't help.

5. Iye mkx kcu kx VVW dy nomszrob dohd. Qyyqvo nyocx'd ny drkd pyb iye. - Google won't tell you what this means without you knowing what it is.

LLMs aren't magic, but I think they can do a whole bunch of things we couldn't really do before. Or at least we couldn't have a machine do those things well.
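(The scrambled line in point 5, for the curious, is a simple Caesar shift. You don't strictly need an LLM for it; a brute-force decode is a few lines of Python. This is just an illustrative sketch, not anything from the original comment:)

```python
def caesar_shift(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions, preserving case and punctuation."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

def brute_force(ciphertext: str) -> list[tuple[int, str]]:
    """Return all 26 candidate decodings; a human (or an LLM) picks the readable one."""
    return [(s, caesar_shift(ciphertext, s)) for s in range(26)]

cipher = "Iye mkx kcu kx VVW dy nomszrob dohd."
# Shift 16 (undoing an encoding shift of 10) yields readable English.
print(caesar_shift(cipher, 16))
```

The point of the comment stands, though: an LLM spots and undoes the shift from the ciphertext alone, while a search engine just returns nothing.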


I’d argue putting everything in terms of black and white is the bigger issue than understanding nuance


Generalizing with exclusive markers like "everything" and "all" is exactly the kind of black/white divide you're arguing against. What happened to your nuanced reality within a single sentence? Not everything is black and white, but some situations are.


The person he's replying to argued against putting things on a spectrum. Does that not imply painting everything in black and white? Thus his response seems perfectly sensible to me.


He argued against putting things on a spectrum in the many instances where that would be wrong, including the case under discussion. What's your argument against that idea? LLM'ed too much lately?


He argued a position, and the response presented a counterargument. Both were based around social costs and used the same wording (i.e. "everything").

You made a specious dismissal. Now you're making personal attacks. Perhaps it's actually you who is having difficulty reasoning properly here?


In a funny way it reminds me of writing survey questions. You have to be so careful not to introduce bias just with the wording, as you can basically nudge the LLM to the answer you want with little hints in the question, e.g. "is it right that..."

