Hacker News | aarestad's comments

Same, and I actually learned from this book in a class from this very professor!


A bit of Deno trivia: it runs the big "Chicago Brick" video wall in Google's main office in Fulton Market; they open-sourced the code here:

https://github.com/google/chicago-brick

(I contributed a little "Penrose-tile" module during my time there, though I never got it in production :P)


Cool. Do you know of any recorded video of the video wall in action? I wasn't able to find any.


I can't find one either! Here's a 10-year-old article with a picture of it, though it looks slightly different now: https://www.dnainfo.com/chicago/20151204/west-loop/long-awai...


Yes, but humans can be held accountable.


I probably should have added sarcasm tags to my post. My very firm opinion is that AI should only make suggestions to humans and not decisions for humans.


I'd argue that humans also more easily learn from huge mistakes. Typically, we need only one training sample to avoid a whole class of errors in the future (also because we are being held accountable).


As annoying as it is when the human support tech is wrong about something, I'm not hoping they'll lose their job as a result. I want them to have better training/docs so it doesn't happen again in the future, just like I'm sure they'll do with this AI bot.


That only works well if someone is in an appropriate job though. Keeping someone in a position they are unqualified for and majorly screwing up at isn't doing anyone any favors.


Fully agree. My analogy fits here too.


> I'm not hoping they'll lose their job as a result

I have empathy for humans. It's not yet a thought crime to suggest that the existence of an LLM should be ended. The analogy would make me afraid of the future if I think about it too much.


How is this not an example of humans being held accountable? What would be the difference here if a help center article contained incorrect information? Would you go after the technical writer instead of the founders or Cursor employees responding on Reddit?


Best guess from Duckin' is an old military proposal called "Multiple Device Queuing System" https://ftp.arl.army.mil/~mike/papers/82usenix-mdqs/usenix.h...


There is no strategic understanding of this tool (or the Sudoku one) by any party: neither the human "creator" nor the AI agent. Human knowledge is not advanced at all. The LLM can't tell you _why_ it makes its choices - maybe it could pretend to, but the descriptions themselves would probably be hallucinatory.

It's "just a game" but obviously this also applies to AI decision making in much more consequential settings. We should not strive just to come up with "the right answer" but ask _why_ it is the right answer.


I did a bit of googling to see if I could find a decent civilian introduction to "how to write a SAT solver", but everything I found was papers that assume quite a bit of prior knowledge. Is there a good source where one could learn a bit about how to write a basic SAT solver and how to improve on it?


The general-purpose algorithm, not necessarily the fastest but with a bounded worst-case runtime:

1. Pick a paper in the field ~at random.

1b. If it’s basic enough for you, you’re done

2. Read the first few paragraphs and find the most basic statement that is cited

3. Find the cited paper

4. Go back to step 1b

Occasionally you’ll hit a book as the citation, in which case you must backtrack to take a different branch of the tree and try again.

In theory this is worst-case exponential, but in practice most searches end quickly

(Excuse the pun)
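For fun, the procedure above can be sketched as a backtracking search over a toy, entirely made-up citation graph (every paper name below is hypothetical; books are modeled as dead ends with no readable text, which is what forces the backtrack):

```python
# A toy "literature": each paper lists what it cites. All names are
# invented purely for illustration.
LITERATURE = {
    "survey-2020":     {"cites": ["cdcl-1999", "handbook (book)"], "basic": False},
    "cdcl-1999":       {"cites": ["dpll-1962"], "basic": False},
    "dpll-1962":       {"cites": [], "basic": True},   # basic enough: stop here
    "handbook (book)": None,                           # a book: dead end
}

def chase(paper):
    entry = LITERATURE.get(paper)
    if entry is None:              # hit a book -> fail this branch, backtrack
        return None
    if entry["basic"]:             # step 1b: basic enough, we're done
        return paper
    for cited in entry["cites"]:   # steps 2-4: chase the citations
        found = chase(cited)
        if found is not None:
            return found
    return None

print(chase("survey-2020"))        # -> dpll-1962
```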


Why do you backtrack when you hit a book?

Btw, people often cite the earliest articles, but not necessarily the simplest.

Often there are papers like '[topic] simplified' etc, and they can be quite useful.

In our case, an alternative approach is to just do a web search for 'how to write a simple sat solver' and that yields eg https://www.gibiansky.com/blog/verification/writing-a-sat-so... and https://sahandsaba.com/understanding-sat-by-implementing-a-s...


> Why do you backtrack when you hit a book?

Because I usually don’t own those books and don’t have a university library

> Often there are papers like '[topic] simplified' etc, and they can be quite useful.

Yes! This is adding heuristics on top of the basic algorithm, of which there are many useful ones. (Figure out who the field luminaries are, what the best journals/conferences were, etc.)


You can find many textbooks online. With varying degrees of legality.



I guess “pun” because I proposed a worst-case-exponential but usually-fast-in-practice algorithm based on backtracking, which is exactly what SAT solving is.


I like Knuth's writing in general, so TAOCP Volume 4B is possibly worth considering. (NB: I have not yet obtained and worked through Volumes 4A and 4B; at my current rate of study I probably won't unless I live as long as he has.)

https://www.informit.com/store/art-of-computer-programming-v...


In fact, just the part on Satisfiability was previously published (2015, before Volume 4B published 2023) as "Volume 4, Fascicle 6: Satisfiability", and the even-earlier draft (pre-fascicle) is available online at https://cs.stanford.edu/~knuth/fasc6a.ps.gz

In typical Knuth style, it is a very clear and solid explanation of the whole area, assuming no additional background than the rest of the book (high-school/undergraduate math), written in a lively style with concrete examples and humour. It also has the typical Knuth problems: everything is passed through Knuth's own notation and ideas (which are often clearer than mainstream, but nevertheless are different); it's all very densely written (every word matters, even in what looks like casual/breezy sentences): not skimmable, you have to read everything.


I was looking for that and didn't find it on his page. Thanks.


Well, the simple approach is to use backtracking. From there you can dive into back jumping, constraint learning and DPLL.

Now, implementing that is surely full of tricks too, which you may also find interesting, but is probably not discussed too much.
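A minimal sketch of that backtracking baseline (plain DPLL with unit propagation, assuming the usual clauses-as-lists-of-signed-ints encoding; no backjumping or clause learning, so this is a starting point rather than a fast solver):

```python
def dpll(clauses, assignment=None):
    """clauses: list of lists of nonzero ints; -3 means 'not x3'."""
    assignment = dict(assignment or {})
    changed = True
    while changed:                         # unit propagation to a fixpoint
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                   # clause already satisfied
            free = [l for l in clause if abs(l) not in assignment]
            if not free:
                return None                # conflict: clause falsified
            if len(free) == 1:             # unit clause: forced assignment
                assignment[abs(free[0])] = free[0] > 0
                changed = True
    variables = {abs(l) for c in clauses for l in c}
    unassigned = variables - assignment.keys()
    if not unassigned:
        return assignment                  # every variable decided: SAT
    var = min(unassigned)
    for value in (True, False):            # branch, backtracking on failure
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

print(dpll([[1, 2], [-1, 2]]))   # satisfiable, e.g. {1: True, 2: True}
print(dpll([[1], [-1]]))         # -> None (unsatisfiable)
```

Real solvers replace the naive branching here with watched literals, conflict-driven clause learning, and restart heuristics, but the skeleton is the same.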


The simplest is probably WALKSAT-like heuristics, essentially local search: flip a random variable, evaluate, if you see an improvement accept the flip, otherwise try again. This can be surprisingly efficient on some SAT problems
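A toy version of that flip-and-check loop might look like the following (my own sketch, not any published WALKSAT variant; it also accepts sideways moves so it can drift across plateaus):

```python
import random

def local_search_sat(clauses, n_vars, max_flips=10_000, seed=0):
    rng = random.Random(seed)
    # 1-based variable indexing; slot 0 is unused
    assign = [rng.choice([True, False]) for _ in range(n_vars + 1)]

    def num_unsat(a):
        return sum(not any((l > 0) == a[abs(l)] for l in clause)
                   for clause in clauses)

    best = num_unsat(assign)
    for _ in range(max_flips):
        if best == 0:
            return assign                  # all clauses satisfied
        var = rng.randint(1, n_vars)       # flip a random variable
        assign[var] = not assign[var]
        score = num_unsat(assign)
        if score <= best:                  # keep improving (or sideways) flips
            best = score
        else:
            assign[var] = not assign[var]  # otherwise undo and retry
    return assign if best == 0 else None

# (x1 or x2) and (not x1 or x2) and (x1 or not x2): only x1=x2=True works
print(local_search_sat([[1, 2], [-1, 2], [1, -2]], 2))
```

Real WALKSAT adds the noise parameter mentioned elsewhere in this thread: with some probability it flips a random variable from an unsatisfied clause instead of a greedy one, trading exploitation for exploration.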


https://sat-smt.codes/ is the best introduction-to-expert guide and reference work I've ever read on SAT/SMT solvers


I'd recommend starting here: http://minisat.se/


I really loved this work on SAT:

http://www.cs.cornell.edu/selman/papers/pdf/97.aaai.invarian...

Local search heuristics typically involve a noise-parameter, which controls the tradeoff between exploration and exploitation. The paper found a statistical measure that was invariant in many heuristics they tried, allowing them quickly to ascertain the optimal noise level for new heuristics. This then allowed them to design better heuristics.

I always found this work quite interesting. Sadly, it's not indexed by ChatGPT; I wanted to interrogate it about recent progress in this specific direction.


I think you are supposed to start from here: https://ebooks.iospress.nl/volume/handbook-of-satisfiability...


Sure, but now do 11 at a time with a similar algorithm (which was part 2 of the problem).


I've just submitted in order to see the second part, which just says "now look for 14 unique".

I don't see the problem here; if you want a notation to express it, a macro for something like,

    0 != 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13
    1 != 0, 2, 3, 4, 5, 6, 7, ...
is straightforward, and could be parameterised on the window size.


That's fine, but it's O(n) in the window size whereas the xor trick is O(1).
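One common form of that trick (an assumption about which variant is meant; the sketch assumes lowercase-letter input, as in the puzzle): keep a bitmask of the window's characters, xoring a character's bit in as it enters and out as it leaves. A repeated character cancels its own bit, so the window is all-unique exactly when the popcount equals the window size. Using the well-known sample input from the puzzle:

```python
def first_marker_xor(s, w):
    """1-based position just past the first window of w unique characters."""
    mask = 0
    for i, ch in enumerate(s):
        mask ^= 1 << (ord(ch) - ord("a"))            # bit for entering char
        if i >= w:
            mask ^= 1 << (ord(s[i - w]) - ord("a"))  # bit for leaving char
        if i >= w - 1 and bin(mask).count("1") == w:
            return i + 1

print(first_marker_xor("mjqjpqmgbljsphdztnvjfqwrcgsmlb", 4))   # -> 7
print(first_marker_xor("mjqjpqmgbljsphdztnvjfqwrcgsmlb", 14))  # -> 19
```

Each step is two xors and a popcount over a fixed-size alphabet mask, so the per-step work no longer scans the window.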


Part 1 and part 2 are the same, only the window size varies. But within a part, the window size is constant. For most, it should have just meant that, if you hard-coded the window size during part 1, that you need to make it a parameter.

But for both parts, the naïve solution should be O(N * W), where N is the input length, and W is the window size. Like a sibling says, since within a part the window size is fixed, it is acceptable to call it O(N).

Edit: ah, I see what his solution is doing. It is correct. I need to adjust my definition of naïve, I guess. His is O(N * W²). You can still reasonably consider W² constant, though, I think. And he gets to what I would call the "naïve" solution.

(Previously.) ~The "naïve solution" in the OP does not appear to be correct. Or at least, if it does work,~ it appears to do far more computation than it needs to.

The "xor trick" is O(N), too. (There cannot be a more efficient solution, as the entire input must be considered in the worst case. TFA is correct that its final form omits W, though.)


The window size is always constant in the problem, so it's O(1)

There shouldn't be a loop over the window size in solutions to this (particular) problem


TFA's implementation is using a loop over the window size to compare each character to all the other characters, to see if they're unique. So, the article's O(N * W²) is correct, but I agree, the window size is fixed and small; it's fair to boil it down to just O(N).

(I never even considered doing a for loop like that to determine if the window is unique; I just histogram the window, and then check if all the values in the histogram are 1. If yes, unique. If I had been cleverer, I wouldn't rebuild the histogram as the window shifts, but there was no need for that. This is essentially the next solution TFA presents.)
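That incremental-histogram version might look like this (a sketch, using the well-known sample input from the puzzle):

```python
from collections import Counter

def find_marker(s, w):
    """1-based index just past the first all-unique window of width w."""
    counts = Counter(s[:w])            # histogram of the current window
    for i in range(w, len(s) + 1):
        # current window is s[i-w:i]; unique iff every count is 1
        if all(c == 1 for c in counts.values()):
            return i
        if i < len(s):                 # slide the window one step right
            counts[s[i]] += 1
            counts[s[i - w]] -= 1
            if counts[s[i - w]] == 0:
                del counts[s[i - w]]   # drop zeros to keep the check cheap

print(find_marker("mjqjpqmgbljsphdztnvjfqwrcgsmlb", 4))   # -> 7
print(find_marker("mjqjpqmgbljsphdztnvjfqwrcgsmlb", 14))  # -> 19
```

Updating the counts is O(1) per shift; the all-ones check is bounded by the window size, which (as discussed above) is a small constant here.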


I already reported this bug a couple of days ago, based on a similar query that a friend of mine told me about.


Guessing it’s the one that has the most people agreeing what the particular characters are.


So the answer is D: 351JRZ?


There is an answer sheet at the end.


I found SomaFM 22(!) years ago via Shoutcast, and have been a listener ever since, even making sure to come back after they briefly went off the air due to an early-00s fight over streaming fees. Love it love it. Thanks for keeping SomaFM ad-free all these years, Rusty - I will donate till I die. <3

