Hacker News | farhanhubble's comments

> Kiselev’s child reader is being treated as a participant in mathematics, not as a recipient of facts.

Not sure how the author can make this claim when the passage he himself cites is just another clever proof in the long list of clever things that maths books throw at you:

> It is easy to convince oneself that there exist infinitely many prime numbers. Indeed, suppose the contrary, that the number of primes is finite. Then there must exist a greatest prime; let it be a. To refute this assumption, imagine the new number N formed by the rule N = (2·3·5·7···a) + 1, that is, the product of all the primes up to a, plus one… The first term is divisible by every number in the list 2, 3, 5, …, a, while the second (the unit) is not divisible by any of them. Hence there is no greatest prime, and so the sequence of primes is infinite.
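The quoted argument can be checked numerically. The sketch below (my own illustration, not from the article) forms N = (2·3·5···a) + 1 for a few candidate "greatest primes" a and verifies that no prime up to a divides N:

```python
# Numerical check of Euclid's argument: for a few candidate "greatest
# primes" a, form N = (product of primes up to a) + 1 and verify that
# every prime in the list leaves remainder 1 when dividing N.
from math import prod

def primes_up_to(a):
    """Naive trial division; fine for small a."""
    return [n for n in range(2, a + 1)
            if all(n % d for d in range(2, n))]

for a in (7, 13, 31):
    ps = primes_up_to(a)
    N = prod(ps) + 1
    # Any prime factor of N must therefore be a new prime greater
    # than a, contradicting the assumption that a was the greatest.
    assert all(N % p == 1 for p in ps)
    print(a, N)
```

Note that N itself need not be prime (2·3·5·7·11·13 + 1 = 30031 = 59·509); the proof only needs that N's prime factors are absent from the list.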


The following paragraph:

> In a typical American or British arithmetic textbook of the same period — or, frankly, today — the topic "primes" would consist of the definition of a prime, a list of the first few, and perhaps a procedure for testing primality. The infinity of the primes would be asserted, if at all. The proof would not appear, and the argument that no list of primes can be complete would not be made.

I think they meant to imply that "other", perhaps Western, mathematical education emphasizes learning "facts" over a more demonstrative experience. Later in the article the author draws the same contrast over how textbooks frame the multiplication of positives and negatives.

>The point is not that “minus times minus is plus” because some external authority says so. It is that, if we want our rules to give consistent answers when applied to physical quantities that point in two opposite directions, this is what the rules must look like.
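That consistency claim can be spelled out as a one-line derivation (a standard ring-axiom argument, my addition rather than the article's): assuming distributivity, a·0 = 0, and 1·a = a, the sign rule is forced:

```latex
0 = (-1)\cdot 0
  = (-1)\cdot\bigl(1 + (-1)\bigr)
  = (-1)\cdot 1 + (-1)\cdot(-1)
  = -1 + (-1)\cdot(-1),
\qquad\text{hence}\qquad (-1)\cdot(-1) = 1.
```

Any other value for (−1)·(−1) would break one of those rules, which is exactly the "consistent answers" point being made.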


Letov should name it Motepad++, clearly mention that it's a fork and move on.

> the app will now be called “NextPad++,” an homage to NeXT Computer, and uses a frog icon rather than the Notepad++ lizard.

Scrotepad has a nice ring to it.

I wish someone had told me how common this was back at my first job, when I worked myself to death fixing every UI abnormality that no one except a few misincentivized testers ever reported. At the time I thought it was dishonest to mark something irreproducible, and beneath me to patch an issue knowing it would sprout ten others.

I'm proud of having fixed everything properly, but I won't do it again unless the company actually holds that high a bar across the board.


The cost has always been the sum of:

1. The time spent to think and iteratively understand what you want to build

2. The time spent to spell out how you want to build it

The cost of #2 is nearly zero now. The cost of #1 is also slashed substantially, because instead of thinking in abstract terms or writing tests, you can build a version of the thing, ground your reasoning in that implementation, and iterate until you attain the right functionality.

However, once that thing is complex enough you still need to burn time on identifying the boundaries of the various components and their interplay. There is no gain from building "a browser" and then iterating on the whole thing until it becomes "the browser": you'll be up against combinatorial complexity. You can perhaps deal with that complexity if you have a way to validate every tiny detail, which some teams are doing very well when porting software, for example.


There could be many plausible explanations.

1. The model's default world model and priors diverge from ours. It may assume that you have another car at the wash and that's why you ask the question to begin with.

2. Language models do not really understand how space, time, and other real-world concepts work.

3. LLMs' attention mechanisms are also prone to being tricked, just as humans' are.


I haven't used it in a while but RedHat used to feel quite a bit like Windows.


Similar questions trick humans all the time. The information is incomplete (where is the car?) and the question seems mundane, so we're tempted to answer it without a second thought. On the other hand, this could be the "no real world model" chasm that some suggest agents cannot cross.


If the car is at the car wash already, how can I drive to it?


By walking to the car wash, driving it anywhere else, then driving it to the car wash.


Thanks for restoring faith in parts of humanity!


I agree, I don't understand why this is a useful test. It's a borderline trick question, it's worded weirdly. What does it demonstrate?


I don't know if it demonstrates anything, but I do think it's somewhat natural for people to want to interact with tools that feel like they make sense.

If I'm going to trust a model to summarize things, go out and do research for me, etc, I'd be worried if it made what looks like comprehension or math mistakes.

I get that it feels like a big deal to some people if some models give wrong answers to questions like this one, "how many rs are in strawberry" (yes: I know models get this right, now, but it was a good example at the time), or "are we in the year 2026?"


In my experience the tools feel like they make sense when I use them properly, or at least I have a hard time relating the failure modes to this walk/drive thing with bizarre adversarial input. It just feels a little bit like garbage in, garbage out.


Okay, but when you're asking a model to do things like summarizing documents, analyzing data, or reading docs and producing code, etc, you don't necessarily have a lot of control over the quality of the input.


Yes, my brain is just like an LLM.


….sorry what?!


I use Obsidian to record decisions, plan every day and take detailed notes. Very handy for recalling the nitty gritty for future reference be it performance reviews, writing blogs or updating my resume.


Same for me. I also make extensive use of adding links to anything relevant. Spent a bunch of time discussing something in a Slack thread: link it. Read some documentation: link it. Had a chat with an LLM in a chat window: link it. Writing notes about how a bunch of code works: link to the functions. For this last one I've registered a custom vim:// URL scheme on my system, which lets me link to a symbol within a given file; when clicked, it focuses the relevant tmux window and navigates the relevant vim instance (using named pipes) to the symbol, or opens a fresh one if not already open.
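The core of such a handler is small. Below is a minimal sketch of one way it could work, with heavy assumptions: the URL looks like `vim:///path/to/file#Symbol`, and a running vim instance reads commands from a named pipe at a hypothetical path like `~/.vim/cmd.fifo`. This is an illustration, not the commenter's actual setup:

```python
# Hypothetical handler for a custom vim:// URL scheme.
# Assumed URL shape: vim:///path/to/file#SymbolName
import sys
from urllib.parse import urlparse, unquote

def parse_vim_url(url):
    """Split vim:///path/to/file#Symbol into (file_path, symbol)."""
    parsed = urlparse(url)
    return unquote(parsed.path), unquote(parsed.fragment)

def handle(url, fifo_path):
    path, symbol = parse_vim_url(url)
    # Tell the listening vim to open the file and jump to the symbol.
    # A '/' search is a crude but dependency-free way to land on it;
    # ':tag' would be more precise if a tags file covers the symbol.
    cmd = f":edit {path}\n/{symbol}\n"
    with open(fifo_path, "w") as fifo:
        fifo.write(cmd)

if __name__ == "__main__":
    handle(sys.argv[1], fifo_path=sys.argv[2])
```

Registering the scheme itself is OS-specific (on Linux, typically a `.desktop` entry with `x-scheme-handler/vim`); the tmux-focusing step would wrap this with a `tmux select-window` call.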


I generally try to avoid adding external links, as I've found those resources tend to get lost very fast. Of course, this is not always feasible, but whenever I can, I try to copy the contents over into my notes.


I used to interview mentors for a big EdTech company and met some of the smartest and most humble engineers who were all from Kenya.


If the two are indeed "Linked", I see a case for user-first browsers to show system metrics right alongside the page.

