Hacker News | curtisf's comments

Sure. A reasonable model for incoming requests within a short window of time is a "Poisson process", which means the expected number of incoming requests within any interval is proportional to the length of that interval.

The parameter of that distribution is the expected (a.k.a. average) rate. If the intervals are time intervals, then the proper units of the parameter are events/second.
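As a concrete illustration (a sketch of mine, not part of the original comment): a Poisson process with rate λ events/second can be simulated by summing independent exponential inter-arrival gaps.

```python
import random

def poisson_arrivals(rate, duration, rng):
    """Arrival times (seconds) of a Poisson process with the given
    rate (events/second) over [0, duration): inter-arrival gaps are
    independent Exponential(rate) draws."""
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t >= duration:
            return arrivals
        arrivals.append(t)

# The expected count is rate * duration: here 50 events/s * 10 s = 500.
counts = [len(poisson_arrivals(50.0, 10.0, random.Random(seed)))
          for seed in range(20)]
print(sum(counts) / len(counts))  # close to 500
```

The count in any sub-interval is then Poisson-distributed with mean proportional to that sub-interval's length, which is the property described above.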


It's basically using the "-" embedded in the definition of the eml operator.

Table 4 shows the "size" of the operators when fully expanded to "eml" applications, which is quite large for +, -, ×, and /.

Here's one approach which agrees with the minimum sizes they present:

        eml(x, y             ) = exp(x) − ln(y) # 1 + x + y
        eml(x, 1             ) = exp(x)         # 2 + x
        eml(1, y             ) = e - ln(y)      # 2 + y
        eml(1, exp(e - ln(y))) = ln(y)          # 6 + y; construction from eq (5)
                         ln(1) = 0              # 7
After you have ln and exp, you can invert their applications in the eml function

              eml(ln x, exp y) = x - y          # 9 + x + y
Using a subtraction-of-subtraction to get addition leads to the cost of "27" in Table 4; I'm not sure what formula leads to 19 but I'm guessing it avoids the expensive construction of 0 by using something simpler that cancels:

                   x - (0 - y) = x + y          # 25 + x + y
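These constructions are easy to sanity-check numerically. A quick sketch (the helper names are mine; `sub` only works for x > 0, since it takes ln(x), so the x + y construction via x - (0 - y) can't be checked this way in floating point):

```python
import math

def eml(x, y):
    """The single primitive: eml(x, y) = exp(x) - ln(y)."""
    return math.exp(x) - math.log(y)

E = math.e

def exp_(x):                 # exp(x) = eml(x, 1); size 2 + x
    return eml(x, 1.0)

def ln_(y):                  # ln(y) = eml(1, exp(e - ln(y))); size 6 + y
    return eml(1.0, exp_(eml(1.0, y)))

def sub(x, y):               # x - y = eml(ln(x), exp(y)); size 9 + x + y
    return eml(ln_(x), exp_(y))   # needs x > 0, since ln_(x) is taken

print(exp_(1.0))      # ≈ e
print(ln_(E))         # ≈ 1.0
print(sub(5.0, 3.0))  # ≈ 2.0
```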


"I would rather read the prompt"

https://claytonwramsey.com/blog/prompt/

discussion: https://news.ycombinator.com/item?id=43888803

All of the output beyond the prompt contains, definitionally, essentially no useful information. Unless it's being used to translate from one human language to another, you're wasting your reader's time and energy in exchange for your own. If you have useful ideas, share them, and if you believe in the age of LLMs, be less afraid of them being unpolished and simply ask your readers to rely on their preferred tools to piece through it.


I have also found that LLMs do not help me communicate my ideas in any way because the bottleneck is getting the ideas out of my head and into the prompt in the first place, but I will disagree with the idea that the output beyond the prompt contains no useful information.

In the article you linked, the output he is complaining about probably had a prompt like this: "What are the downsides of using Euler angles for rotation representation in robotics? Please provide a bulleted list and suggest alternatives." The LLM expanded on it based on its knowledge of the domain or based on a search tool (or both). Charitably, the student looked it over, thought through the information, and decided it was good (or possibly tweaked around the edges) and then sent it over - though in practice they probably just assumed it was correct and didn't check it.

For writing an essay like "I would rather read the prompt" LLMs don't seem like they would speed up the process much, but for something that involves synthesizing or summarizing information LLMs definitely can generate you a useful essay (though at least at the moment the default system prompts generate something distinctively bland and awful).


Pretty balanced take. I think if a human gains information or saves time, it's still worthwhile. Of course, I don't publish that clickbait. That's AI slop.


Sounds reasonable until you consider that the "prompt" might include a million tokens of context, not to mention follow-ups and iterations.


"Consensus" in this post refers to the "consensus problem", which is a fundamental and well-known problem in distributed systems.

It's not about political consensus.

However, the paper that introduced it and proved it possible, Lamport's "The Part-Time Parliament", used an involved (and often cited as confusing) "Parliament" metaphor for computers in a distributed system.

"Consensus" in distributed systems need not be limited to majorities; it really just requires no "split brain" is possible. For example, "consensus" is achieved by making one server the leader, and giving other servers no say. A majority is just the 'quorum' which remains available with that largest number of unavailable peers possible.


As feedback to the author: I made the same mistake initially. It was only around halfway through that I realized the voters in question didn't necessarily care what they were voting for in the usual preferential or political sense, only that they reached any consensus at all.

Looking back at the page again from the top, I see the first paragraph references Paxos, which is a clue for those who know what that is. But using "There’s a committee of five members that tries to choose a color for a bike shed" as the example threw me back off the trail: bikeshedding is the canonical case of people arguing personal preferences and going to the wall for them at the expense of every other rational consideration. I'd suggest a sample problem that is just as trivial in reality, but less pre-loaded with the exact opposite connotation.


> it really just requires no "split brain" is possible. For example, "consensus" is achieved by making one server the leader, and giving other servers no say.

Which is funny, because that actually describes political consensus as well, functionally, even if it’s not what people typically think of as the definition.

If you can effect enough of the right censorship or silencing or cancelling, you can achieve consensus (aka no split brain, at least no split with agency)


It could also be useful in low doses to supplement, for example, a seasonal vaccine in a year where they are especially unsure about prevalent strains, or where their predictions were already proved wrong early in the flu season.


> For optional types, 0 is decoded as the default value of the underlying type (e.g. string? decodes 0 as "", not null).

In the "dense JSON" format, isn't representing removed/absent struct fields with `0` and not `null` backwards incompatible?

If you remove or are unaware of an `int32?` field, old consumers will suddenly think the value is present as a "default" value rather than absent.


That is correct, and that's a good catch. The idea, though, is that when you remove a field, you typically do so only after having made sure that no code still reads the removed field and that all binaries have been deployed.


How does this work if, for example, you persist the data in a database?


Let's imagine you have this:

    struct User {
        id: int64;
        email: string?;
        name: string;
    }

You store some users in a database: [10,"john@gmail.com","john"], [11,null,"jane"]

You remove the email field later:

    struct User {
        id: int64;
        name: string;
        removed;
    }

The assumption is that you remove a field only after you have migrated all code that uses the field and you have deployed all binaries.

In your DB, you still have [10,"john@gmail.com","john"], [11,null,"jane"], which you are able to deserialize fine (the removed field is ignored). New values that you serialize are stored as [12,0,"jack"]. If you happen to have old binaries which still use the old email field and which are still running (which you shouldn't, but let's imagine you accidentally didn't deploy all your binaries before you removed the field), those old binaries will indeed decode the email field for new values (Jack) as an empty string instead of null.
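To make the hazard concrete, here's a toy positional decoder in the spirit of the format described above (the field names and helper are illustrative, not the library's actual API):

```python
# An old binary's schema still includes the optional email field.
OLD_FIELDS = ["id", "email", "name"]

def decode_old(record):
    """Positional decode as an old binary would: per the format's rule,
    0 in an optional string slot becomes "" (the default), not null."""
    user = dict(zip(OLD_FIELDS, record))
    if user["email"] == 0:
        user["email"] = ""   # the absent-vs-default distinction is lost
    return user

# A new writer (whose schema dropped email) serialized Jack with 0 in
# the removed field's slot:
print(decode_old([12, 0, "jack"]))
# {'id': 12, 'email': '', 'name': 'jack'} -- email looks present-but-empty
```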


Isn't it?

You can have Dependabot enabled but turn off automatic PRs. You can then manually generate a PR for an auto-fixable issue if you want, or just do the fixes yourself and watch the issue count shrink.


Conscription is a horribly inapt metaphor for mandatory inoculation.

Banning the playing of third-party Russian roulette, where you hold a mostly unloaded gun to the head of your neighbors, coworkers, and service staff, actually more accurately represents the risks involved to both yourself and the public, and importantly to the personal tax and effort required.


What about when a veteran returns from war with PTSD that can be triggered at any point and potentially result in violence to those around them? That's about the same net effect as walking around holding a loaded gun to everyone's head; the only difference is the comparability in numbers. Also, the COVID death rates for young people are a fraction of the death rates of the elderly, who do deserve to be taken care of but ultimately are a net drain on society. So your comment is better stated as holding a gun to the head of the elderly... which is horrible, but not quite the same argument.


A lien on the property? Although almost all jurisdictions already have property taxes, so it hasn't been an insurmountable problem so far


This could be stated much more succinctly using Jobs to be Done (which is referenced in the first few paragraphs):

Your customers don't want to do stuff with AI.

They want to do stuff faster, better, cheaper, and more easily. (JtbD claims you need to be at least 15% better or 15% cheaper than the competition -- and if we're talking "AI", the competition is the classical ML or manual human alternative.)

If the LLM you're trying to package can't actually solve the problem, obviously no one will buy it, because _using AI_ isn't anyone's _job-to-be-done_.

