Hacker News | holsta's comments

Oh, you have nothing to hide? Kindly paste all your payment and login credentials that your browser stores. Later we'll need to see all your DMs on Facebook, LinkedIn, Slack, Discord, etc.

Finally we'll want to know about disputes you've had with intimate partners, employers and other service providers, especially powerful ones like healthcare, insurance and financial organisations.


We should also have their full salary and benefits (etc.) details published right now, whatever their contract says about disclosing those, and 24x7 streamed video of their entire life with no censoring, including toilet breaks and sex and bars and parties.

And, along with all the credentials as you suggest, including private parts of PGP keys etc, accurate impressions/clones of any and all physical security/privacy devices they use such as keys to house and car and safe and gun safe and relatives' crypt, etc, etc...


> For example, requiring the “vibecoding” tag on all stories about AI even though very few of them are about vibecoding.

No? You either use AI or vibecoding, like the tag page says:

https://lobste.rs/tags


I should have said “all stories about AI usage” which is exactly what your link says. If you post anything related to using or exploring AI, it’s forced to use the vibecoding tag. It doesn’t matter if it’s about vibecoding or coding at all.

Forcing the “vibecoding” tag on to stories that aren’t vibecoding related has been a debate on the site for a while: https://lobste.rs/s/gkzmfy/let_s_rename_vibecoding_tag_llms

The top voted comments on that thread get to the meat of the issue. Vibecoding was embraced as a derogatory term and applied broadly to every LLM related topic, even when vibecoding wasn’t involved.


> It’s a short, specific puzzle. I’d love to know if you think the "Correct" path I designed matches your real-world experience, or if I’m off base.

As someone about to step into a C-suite role: I picked the "correct" path and as long as people are reasonable, it works well.


Glad the logic held up!

That caveat—'as long as people are reasonable'—is the biggest variable in the equation. The real challenge is sticking to that correct path when the other side acts irrationally.

Good luck with the new role!


I've used this but you might need to use your browser's translate feature:

https://kalendersiden.dk/


Many experiments have shown that when you take away people's concerns about money for housing and food, that frees up energy and attention to do other things.

Like the famous experiment in Finland where homeless people were given cash with no strings attached and most were able to rise out of their despair. The healthcare professionals could then focus their energy on the harder cases. It also saved a bunch of money in the process.


> for this price you can get a really nice N100/N150 laptop with a full HD IPS screen

Can you think of a name or model by any chance?


Yeah, like the Chuwi GemiBook at the moment.

I really like them. I have a really old Chuwi Atom-based laptop, more than 10 years old, that's still working fine; even the battery still lasts a few hours. It was even cheaper, and I mainly got it for the makerspace (not too much of a problem if a soldering iron rolls up against it), but it was surprisingly capable. It was reviewed by AnandTech at the time (which no longer even exists).

I don't normally buy laptops at all, I prefer mini-desktops. So this is still my only personal laptop.


> Hardly a "roll out across the UK".

What's your threshold for when it becomes a problem? Should we wait until it becomes a problem, or should we try to stop this level of facial recognition?

You should also assume this is a proof of concept. It'll get improved and scaled down to run on every police vehicle, and on every camera the police already control.


It has already been scaled down to Android phones (you'll find phones are an excellent platform for this), where you can find apps meant to let venue owners guard entrances against specific individuals. That's illegal, but obviously common enough that such apps get made.


> It used to be that if you got stuck on a concept, you're basically screwed.

We were able to learn before LLMs.

Libraries are not a new thing. FidoNet, USENET, IRC, forums, local study/user groups. You have access to all of Wikipedia. Offline, if you want.


I learned how to code using the library in the 90s.

I think it's accurate to say that if I had to do that again, I'm basically screwed.

Asking the LLM is a vastly superior experience.

I had to learn what my local library had, not what I wanted. And it was an incredible slog.

IRC groups are another example -- I've been there. One or two topics have great IRC channels. The rest have idle bots and hostile gatekeepers.

The LLM makes a happy path to most topics, not just a couple.


>Asking the LLM is a vastly superior experience.

Not to be overly argumentative, but I disagree. If you're looking for a deep and ongoing process, LLMs fall down, because they can't remember anything and can't build on themselves in that way. You end up having to repeat a lot of stuff. They also don't have good course correction (that is, if you're going down the wrong path, it doesn't alert you, as I've experienced).

It also can give you really bad content depending on what you're trying to learn.

I think for things that represent themselves as a form of highly structured data, like programming languages, there's good attunement there. But once you start trying to dig around in advanced finance, political topics, economics, or complex medical conditions, the quality falls off fast, if it's there at all.


I used llms to teach me a programming language recently.

It was way nicer than a book.

That's the experience I'm speaking from. It wasn't perfect, and it was wrong sometimes, sure. A known limitation.

But it was flexible, and it was able to do things like relate ideas with programming languages I already knew. Adapt to my level of understanding. Skip stuff I didn't need.

Incorrect moments or not, the result was I learned something quickly and easily. That isn't what happened in the 90s.


> and it was wrong sometimes, sure. A known limitation.

But that's the entire problem, and I don't understand why it's just put aside like that. LLMs are wrong sometimes, and they often just don't give you the details. In my opinion, knowing about certain details and traps of a language is very, very important if you plan on doing more with it than just having fun. Now someone will come around the corner and say 'but it gives you the details if you explicitly ask for them'. Yes, of course, but you just don't know where the important details are hidden if you are just learning. Studying is hard and it takes perseverance. Most textbooks will tell you the same things, but they all still differ, and every author usually highlights a few distinct details; those are the important bits that you just won't get from an LLM.


It's not my experience that there are missing pieces as compared to anything else.

Nobody can write an exhaustive tome and explore every feature, use, problem, and pitfall of Python, for example. Every text on the topic will omit something.

It's hardly a criticism. I don't want exhaustive.

The LLM taught me what I asked it to teach me. That's what I hope it will do, not try to caution me about everything I could do wrong with a language. That list might be infinite.


> It's not my experience that there are missing pieces as compared to anything else.

How can you know this when you are learning something? It seems like a confirmation bias to even have this opinion?


I'd gently point out we're four questions into "what if you went about it stupidly and actually learned nothing?"

It's entirely possible they learned nothing and they're missing huge parts.

But we're sort of at the point where in order to ignore their self-reported experience, we're asking philosophical questions that amount to "how can you know you know if you don't know what you don't know and definitely don't know everything?"

More existentialism than interlocution.

If we decide our interlocutor can't be relied upon, what is discussion?

Would we have the same question if they said they did it from a book?

If they did do it from a book, how would we know if the book they read was missing something that we thought was crucial?


I didn't think that was what was being discussed.

I was attempting to imply that with high-quality literature, it is often reviewed by humans who have some sort of knowledge about a particular topic or are willing to cross reference it with existing literature. The reader often does this as well.

For low-effort literature, this is often not the case, and can lead to things like https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect where a trained observer can point out that something is wrong, but an untrained observer cannot perceive what is incorrect.

IMO, this is adjacent to what human agents interacting with language models experience often. It isn't wrong about everything, but the nuance is enough to introduce some poor underlying thought patterns while learning.


That's easy: it's due to a psychological concept called transfer of learning [0].

Perhaps the most famous example of this is Warren Buffett. For years Buffett missed out on returns from the tech industry [1] because he avoided investing in tech company stocks, due to Berkshire's long-standing philosophy of never investing in companies whose business model he doesn't understand.

His light-bulb moment came when he used his understanding of a business he knew really well, i.e. their furniture business [3], to value Apple as a consumer company rather than as a tech company, leading to a $1bn position in Apple in 2016 [2].

[0] https://en.wikipedia.org/wiki/Transfer_of_learning

[1] https://news.ycombinator.com/item?id=33612228

[2] https://www.theguardian.com/technology/2016/may/16/warren-bu...

[3] https://www.cnbc.com/2017/05/08/billionaire-investor-warren-...


You are right and that's my point. To me it just feels like that too many people think LLMs are the holy grail for learning. No, you still have to study a lot. Yes, it can be easier than it was.


Your other responses kinda imply that you believe LLMs are not good for learning.

That's totally different from saying they are not flawless but make learning easier than other methods, like you did in this comment.


Most LLM user interfaces, such as ChatGPT, do have a memory. See Settings, Personalization, Manage Memories.


Sure, but there are limits here. That's what I'm talking about: limits. The memory isn't infinitely expansive. I've still found it doesn't backtrack well enough to "remember" (for lack of a better term) that it told me something already, if it's old enough, for example.

It also doesn't seem to do a good job of building on "memory" over time. There appears to be some unspoken limit there, or something to that effect.


Agreed, I'd add to the statement, "you're basically screwed, often, without investing a ton of time (e.g. weekends)"

Figuring out 'make' errors when I was bad at C on microcontrollers a decade ago? (Still am.) Careful pondering of possible meanings of words... trial-and-error tweaks of code and recompiling in hopes that I was just off by a tiny thing, then 2 hours and 30 attempts later realizing I'd done a bad job of tracking what I'd tried and hadn't? Well, it made me better at carefully triaging issues. But it wasn't something I was enthusiastic to pick back up the next weekend, or for the next idea I had.

Revisiting that combination of hardware/code a decade later and having it go much faster with ChatGPT... that was fun.


Are we really comparing this research to just typing a question and having a good answer in a couple of seconds?

Like, I agree with you, and I believe those things will persist and will always be important, but it doesn't really compare in this case.

Last week I was out in nature and saw a cute bird I didn't know. I asked an AI and got the correct answer in 10 seconds. Of course I could have found the answer at the library or on proper niche sites, but I would not have done it, because I simply didn't care that much. It's a stupid example, but I hope it makes the point.


There's a gigantic difference between outsourcing your brain to generative AI (LLMs, Stable Diffusion, ..) and pattern recognition that recognises songs, birds, plants or health issues.


It’s not an either/or situation.


> We were able to learn before LLMs.

We were able to learn before the invention of writing, too!


> Both extremes want the same thing.

Citation needed. The 'extreme' feminism & LGBT tends to revolve around identical pay, being able to walk down the street without getting assaulted or being able to work without being harassed or discriminated against.


Feminist groups also regularly try to get games banned from Steam, typically for sexism or violence against women. Eg.

> Women in Games CEO Dr Marie-Claire Isaaman has called on Valve to “act urgently” and remove the game from Steam, saying the game’s content “is not only vile and dangerous, but also actively promotes the dehumanisation of women and girls.”

https://www.videogameschronicle.com/news/women-in-games-call...


They also try to ban books that disagree with their beliefs: https://www.nbcnews.com/feature/nbc-out/amazon-will-not-remo...


> In “Irreversible Damage,” Abigail Shrier argues youth are being “fast-tracked” into medical transition — a claim experts say isn’t true and harms trans youth.

Anti-trans activism is the fascist agenda. Fascism should always be stopped. Saving democracy and the lives of trans people by stopping fascists is not controversial in any way.


When you start digging past their marketing material, you quickly discover that these organisations are just right-wing fronts: against trans people, against abortion.

Here's a 38 minute video that walks through some of the recent major incidents.

https://www.youtube.com/watch?v=SmHHnPLllUk

Feminists would not campaign to take down games (with zero sexual content) about queer lives made by queer people.


Who gets to decide who is or isn't a "feminist"?

Why should feminism be incompatible with right-wing politics?

How do I know what is or isn't a legitimate feminist position, given that the list apparently constantly changes over time?

I am constantly told that there are many different kinds of feminism and that my various complaints about feminism — based on actual interactions I've personally had with feminists — are not valid because they don't generalize across the whole thing (even if I point at well-known, established feminist literature and critiques thereof). Yet I also constantly see groups of self-identified feminists point at each other and try to claim that the others don't actually count as feminists because they disagree about some other issue.

I assume you accept the validity of more than two genders. Will you accept the validity of more than two kinds of political position?


>The 'extreme' feminism & LGBT tends to revolve around

There are countless statements from feminist authority figures that are impossible to reconcile with this claim. But HN is not the place to have this argument, or even to attempt to turn it into a discussion; and elsewhere on the Internet, I have repeatedly seen people persecuted as misogynists simply for collating such evidence.


It’s very easy to make everyone you agree with sound incredibly reasonable when you don’t have to give any examples of when they showed their true colors.


If this wasn't your point, perhaps you should consider how this applies to holsta's comment.


> A couple make flags that are useful [..]

But not portable. Please don't use them outside of your own non-distributable toy projects.


I will not restrict myself to an arcane subset of Make just because you refuse to type 'gmake' instead of 'make'. Parallel execution, pattern rules, order-only prerequisites, includes, not to mention the dozens of useful functions like (not)dir, (pat)subst, info... There's a reason why most POSIX Makefiles nowadays are generated. It's not GNU's fault that POSIX is stale.

EDIT: There's one exception, and that would be using Guile as an extension language, as it is often not available. However, thanks to conditionals (also not in POSIX, of course), it can be used optionally. I once sped up a Windows build by an order of magnitude by implementing certain things in Guile instead of calling the shell (which is notoriously slow on Windows).
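A minimal sketch of a few of the GNU-only features mentioned above (pattern rules, an order-only prerequisite, includes, and the wildcard/patsubst/notdir/info functions). The src/ and build/ layout is purely illustrative, not from the thread:

```make
# Hypothetical layout: C sources in src/, objects in build/.
SRCS := $(wildcard src/*.c)
OBJS := $(patsubst src/%.c,build/%.o,$(SRCS))

$(info Building: $(notdir $(OBJS)))

# Pattern rule with an order-only prerequisite (after the |):
# build/ must exist, but its timestamp never triggers a rebuild.
build/%.o: src/%.c | build
	$(CC) -MMD -c -o $@ $<

build:
	mkdir -p build

# Include generated dependency files; the leading '-' ignores missing ones.
-include $(OBJS:.o=.d)
```

Nearly every construct here (wildcard, patsubst, info, pattern rules, order-only prerequisites, -include) is a GNU extension; a strictly POSIX make would reject most of this file.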


Agreed. My company decided on using GNU Make on every platform we supported, which back then (last century) was a bunch of Unix variants, and Linux. That made it possible to write a simple and portable build system which could be used for everything we did, no hassle. And not difficult, because gmake was available basically everywhere, then just as now.


Completely agree. POSIX is irrelevant anyway. Every single unixlike has unique features that are vastly superior to whatever legacy happens to be standardized by POSIX. Avoiding their use leads to nothing but misery.


Portability is overrated. Better to make full use of one's tools. Restricting oneself to some "portable" subset of all features is pure masochism.

GNU Make is feature rich and is itself portable. It's also free software, as in freedom. Just use it.


And it's available everywhere. All Unix platforms had it back then, and the still existing ones (AIX is alive, at least) have it available. Which made it easy for our company to base our build system on GNU Make for everything, back in the day.


> Portability is overrated.
>
> GNU Make is [..] itself portable.

Sounds like it's not overrated, then. You just prefer that other people write portable C and package GNU Make for all systems instead of you writing POSIX Make.


Not at all. I think we should all be using the full potential of our preferred system instead of sucky abstractions that provide the lowest common denominator of features.

Portability is overrated. Portability between POSIX systems is especially overrated. Linux and the BSDs have powerful exclusive features and people should be using them as much as possible in their software, simply because it's better than the legacy POSIX nonsense. This also applies to the features of Windows, macOS, iOS, etc.

GNU Make is powerful, ubiquitous and portable. That makes it even more pointless to avoid it. I won't claim it's perfect, but it's absolutely a hell of a lot better than some "standard" POSIX variant of make that virtually nobody actually cares about. GNU Make will be present in pretty much every system capable of compiling software. Everyone is used to running make to build things. Avoiding things that make life easier, just because of POSIX, is pointless masochism.


Not only overrated, but also the source of extremely complex and gigantic pieces of software, which end up being a nightmare to keep updated.

Just like optimization, it has its place and time.


People are too quick to [ab]use GNU Make features. IME, learning how to make do with portable make constructs can help discipline oneself to avoid excessive complexity, especially when it comes to macro definitions where GNU Make's Lispy looping and eval constructs are heavily overused and quickly lead to obtuse, impenetrable code. POSIX pattern substitutions are quite powerful and often produce easier to read code than the GNU equivalent. I'm not sure if computed variable names/nested variable references are well-defined in POSIX (e.g. "$($(FOO))"), but they are widely supported nonetheless, and often more readable than $(eval ...). (They can also be used for portable conditional constructs; I wouldn't argue they're more readable, though I often find them so.)

Some GNU Make constructs, like pattern rules, are indispensable in all but the simplest projects, but can also be overused.

For some reason there's a strong urge to programmatically generate build rules. But as with SQL queries, going beyond the parameterization already built into the language can be counterproductive. A good Makefile, like a good SQL query, should be easy to read on its face. Yes, it often means greater verbosity and even repetition, but that can be a benefit to be embraced (at least embraced more than is instinctively common).

EDIT: Computed variable references are well-defined as of POSIX-2024, including (AFAICT) on the left-hand side of a definition. In the discussion it was shown the semantics were already supported by all extant implementations.
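As a small sketch of the portable constructs described above: a hypothetical MODE variable selects a flag set via a computed (nested) variable reference, and a POSIX suffix substitution maps sources to objects, with no GNU ifeq or patsubst needed:

```make
# Illustrative: MODE picks a flag set without GNU ifeq.
MODE = release

CFLAGS_debug = -g -O0
CFLAGS_release = -O2

# Computed variable reference: expands to $(CFLAGS_release) here.
CFLAGS = $(CFLAGS_$(MODE))

# POSIX suffix substitution: foo.c bar.c -> foo.o bar.o
SRCS = foo.c bar.c
OBJS = $(SRCS:.c=.o)
```

Invoking make MODE=debug switches the flag set, which is the portable conditional construct the comment describes.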


Absolutely. But the target has nothing to do with “portability”; it's the more fundamental and important principles of readability and maintainability.


It's a matter of praxis. Targeting portable constructs is (IMO) a useful methodology for achieving the abstract goal. It doesn't have to be strict, but it provides a quantifiable, objective metric (i.e. amount of non-portable constructs employed) to help achieve an otherwise subjective goal.

Otherwise you face an ocean of choices that can be overwhelming, especially if you're not very experienced in the problem space. It's like the common refrain with C++: most developers settle on a subset of C++ to minimize code complexity; but which subset? (They can vary widely, across projects and time.) In the case of Make, you can just pick the POSIX and/or de facto portable subset as your target, avoiding a lot of choice paralysis/anxiety (though you still face it when deciding when to break out of that box to leverage GNU extensions).


Unless you are actually targeting all of those platforms, of course. Which you're not.


Exactly! Instead of writing portable Makefiles, use portable make!


The guide is basically about GNU Make, and the flags are obviously just for end users to invoke make.


Not every project has to be a multi-platform, multi-OS, multi-language monster. It is perfectly fine to target a specific set of architectures, OSes, etc. And I find it insulting and silly to call that a “toy project”.


Agreed if you're looking at it through the lens of portable software that you plan to distribute. Automake generates portable Makefiles for a reason.

But there's another huge category: people who are automating something that's not open-source. Maybe it stays within the walls of their company, where it's totally fine to say "build machines will always be Ubuntu" or whatever other environment their company prefers.

GNU Make has a ton of powerful features, and it makes sense to take advantage of them if you know that GNU Make will always be the one you use.

