Hacker News | galaxyLogic's comments

Talk about shooting: how about shooting while wearing a mask with no identifiable badge, then driving away in an unmarked car, and perhaps shooting more people who try to video you shooting people? Comparatively speaking, there's nothing rotten in Denmark.

I think this means reviewing is the main activity when working with AI, and therefore the language to use should be one that is easy for humans to review.

You guys, when you use AI to solve some question or task and it succeeds, do you feel like typing "Thank you" to your LLM/agent? I often feel that way, but then I think it would be crazy: a waste of keystrokes and maybe also tokens, so why would I do it? Yet I feel tempted to frequently.

But then I also wonder if this attitude "Why should I waste time thanking it?" will also spread to human-human interactions?


"Sam Altman considers it a worthwhile expense":

https://www.tomshardware.com/tech-industry/artificial-intell...


I wonder, is there a version of GoL where every bit on a computer display or LCD TV is one cell? How would it look?

I did something like this: a 64x64 (4K) display of GoL (among other things) using addressable pixels. Alas, I only took one video when the display was working and it wasn't fully cleaned up: https://photos.app.goo.gl/WUmVgBVVi6rXDqSB7

Do you mean every pixel or every sub-pixel? Sub-pixel is interesting because the geometry of the grid isn't going to be the same from one screen to the other. It might also look compressed horizontally.

How many bits per pixel are you assuming, and are you imagining the red pixels are vertical neighbors with corresponding red pixels above and below, etc.?

Should be easy to do on my phone: https://itch.io/post/15723528
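For anyone who wants to try it, here's a minimal sketch (plain Python; a hypothetical 0/1 grid stands in for the display's pixels, with no real framebuffer access assumed) of one GoL generation:

```python
def gol_step(grid):
    """One Game of Life generation on a 2D 0/1 grid (toroidal wrap)."""
    h, w = len(grid), len(grid[0])
    nxt = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Count the 8 neighbors, wrapping at the edges.
            n = sum(grid[(y + dy) % h][(x + dx) % w]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
            # Standard B3/S23 rules: born with 3 neighbors, survive with 2 or 3.
            nxt[y][x] = 1 if n == 3 or (n == 2 and grid[y][x]) else 0
    return nxt

# A horizontal blinker flips to vertical and back every generation.
grid = [[0] * 5 for _ in range(5)]
grid[2][1] = grid[2][2] = grid[2][3] = 1
assert gol_step(gol_step(grid)) == grid
```

Mapping each grid cell to a pixel (or sub-pixel) is then just a question of how you drive the actual display.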


Good point to explore

At 8 bpp the effect is just colors moving vertically, up and down. After digging into it for a while I realized why: most colors are too dark to be visible; we only see the most significant bits in each channel, and when those bits spill left or right the changes aren't visible.

Maybe I should try 2bpp. Or some HSL where I can clamp L.

Edit: Oh yes, it's interesting at 2bpp!
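For anyone wanting to reproduce this, a rough pure-Python sketch (the per-bit-plane framing is my assumption about the setup described above, not anything standard) of slicing an 8-bit channel into eight independent cell grids and recombining them shows why only the high planes register visually:

```python
def split_planes(channel, bits=8):
    """2D grid of 0..255 values -> one 0/1 grid per bit, LSB first."""
    h, w = len(channel), len(channel[0])
    return [[[(channel[y][x] >> b) & 1 for x in range(w)] for y in range(h)]
            for b in range(bits)]

def merge_planes(planes):
    """Inverse of split_planes: recombine bit-planes into channel values."""
    h, w = len(planes[0]), len(planes[0][0])
    return [[sum(planes[b][y][x] << b for b in range(len(planes)))
             for x in range(w)] for y in range(h)]

channel = [[0, 255], [37, 200]]
assert merge_planes(split_planes(channel)) == channel
# Plane 7 alone carries half the brightness (128 of 255), which is
# why Life activity in the low planes is nearly invisible on screen.
```

Running an independent GoL step on each plane and merging gives the effect described above; clamping to 2 bpp keeps every live cell bright enough to see.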


Conversely, it'd be cool to play it on a large empty office building.

One window = one pixel.



Or, in Krakow, Poland in 2012 :) https://www.youtube.com/watch?v=5KitN5HFGWc

Right, but what if every window were an outward-facing LCD screen, each running its own node-per-pixel GoL! :_)

Shouldn't the etiquette be that if you send someone a response from an AI, you start your message by sharing the prompt that produced that response?

That would give the recipient the chance to modify the prompt and perhaps get a better answer from the LLM.


Even better, reply only with the prompt that you would have used, not the resulting text. Don't even run the prompt through an LLM.

That results in a shorter and more concise message, and the original sender can choose to use the prompt you provided on their favourite LLM from the start.


Right. You might also consider highlighting some things you learned from the AI's response, summarizing it and perhaps critiquing it.

Different AIs give different answers to the same question, so it may be useful to provide a good summary of the different responses you got.


Even better, what about piping the different AI responses through another LLM to provide the summary? That way you save yourself the time and effort of reading all the different AI responses.

You could even pipe the final summary directly to your email/IM client and save yourself the copy-paste.


I actually use multi-LLM consensus as a part of my daily work, it’s pretty effective.
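For the curious, the idea can be sketched as a simple majority vote. This is only an illustration: `ask` is a hypothetical stand-in for whatever client each model actually uses, and real consensus setups often use another LLM as the judge instead.

```python
from collections import Counter

def consensus(prompt, ask, models):
    """Query each model, normalize the answers, return the majority vote."""
    answers = [ask(m, prompt).strip().lower() for m in models]
    winner, votes = Counter(answers).most_common(1)[0]
    return winner, votes, answers

# Toy stand-in models that disagree:
def fake_ask(model, prompt):
    return {"a": "Paris", "b": "Paris", "c": "Lyon"}[model]

winner, votes, _ = consensus("Capital of France?", fake_ask, ["a", "b", "c"])
assert winner == "paris" and votes == 2
```

Exact string matching only works for short factual answers; anything longer needs semantic comparison, which is where a judge model comes in.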

AKA, just send your actual message directly and leave out the LLM.

Interesting. Does it only work with known frameworks like Next, React, etc., or could I use it with my plain Node.js app which produces browser output?

No, it doesn't work with server-side-only apps.

It's a server-side app whose GUI is in the browser, a bit like Electron or what have you.

I guess my question is: does Tidewave only work with a fixed set of known "frameworks" like React and Next, or is it a more general-purpose tool for analysing an app based on its source-code and the HTML it produces for the browser?


Fixed set.

Right, when you're coding with an LLM it's not you asking the LLM questions, it's the LLM asking you questions: what to build, how exactly it should work, whether it should do this or that under which conditions. Because the LLM does the coding, it's you who has to do more thinking. :-)

And when you make the decisions, it is you who is responsible for them. When you did the coding yourself, the decisions about the code were largely left to you and nobody much saw them, only how they affected the outcome. Now the LLM is in that role, responsible only for what the code does, not how it does it.


Hehe, speak for yourself: as a 1x coder on a good day, having a nonjudgmental partner who can explain stuff to me is one of the best parts of writing with an LLM :)

I like that aspect of it too. The LLM never seems to get offended, even when I tell it it's wrong. I'm just trying to understand why some people say it can feel exhausting. Instead of focusing on narrowly defined coding tasks, the work has changed: you are responsible for a much larger area, and expectations are correspondingly higher. You're supposed to produce 10x the code now.

Not sure if it's what you're talking about but I had a coworker trying to break into eSports and he talked a lot about the micro vs macro skills a game requires. Sounds like we all have an aimbot for programming so the competition has shifted hard towards the macro. That could definitely be tiring.

> Because the LLM does the coding, it's you who has to do more thinking. :-)

I keep seeing this sentiment, but it sure sounds wrong to me.

Coding requires thinking (in humans, at any rate). When you're coding, you're doing both the coding-thinking and the design-thinking.

Now you're only doing one half of it.


I don't think we should protect "source-code", we should protect people. Source-code doesn't care, people do.

Should we protect developers and their rights? Surely, and users' rights too, definitely. But protecting source-code as such seems a bit abstract to me.


I think when people give gifts they do expect something in return, at least the acknowledgment that it was THEY who gave the gift. More fame to them. What I don't like is if they start pointing out how people who don't follow their example are evil. The key word I've come to think in terms of is "self-serving".

OK, but what about median household size? Shouldn't we calculate "richness" based not on how much each household makes but on how much each member of a household gets from it? My guess is that households are smaller these days, but I don't know.

Well, if today's households are smaller, that makes them even richer (more money split over fewer people).
