Oh, you have nothing to hide? Kindly paste all your payment and login credentials that your browser stores. Later we'll need to see all your DMs on Facebook, LinkedIn, Slack, Discord, etc.
Finally we'll want to know about disputes you've had with intimate partners, employers and other service providers, especially powerful ones like healthcare, insurance and financial organisations.
We should also have full published salary and benefits (etc) details right now, whatever their contract says about disclosing those, and 24x7 streamed video of their entire life with no censoring, including toilet breaks and sex and bars and parties.
And, along with all the credentials as you suggest, including private parts of PGP keys etc, accurate impressions/clones of any and all physical security/privacy devices they use such as keys to house and car and safe and gun safe and relatives' crypt, etc, etc...
I should have said “all stories about AI usage”, which is exactly what your link says. If you post anything related to using or exploring AI, it's forced under the vibecoding tag. It doesn't matter whether it's actually about vibecoding, or even about coding at all.
The top voted comments on that thread get to the meat of the issue. Vibecoding was embraced as a derogatory term and applied broadly to every LLM related topic, even when vibecoding wasn’t involved.
That caveat—'as long as people are reasonable'—is the biggest variable in the equation. The real challenge is sticking to that correct path when the other side acts irrationally.
Many experiments have shown that when you take away people's concerns about money for housing and food, that frees up energy and attention to do other things.
Like the famous experiment in Finland where homeless people were given cash with no strings attached and most were able to rise out of their despair. The healthcare professionals could then focus their energy on the harder cases. It also saved a bunch of money in the process.
I really like them. I have a really old Chuwi Atom-based laptop, more than 10 years old, that's still working fine; even the battery still lasts a few hours. It was cheap, and I mainly got it for the makerspace (not too much of a problem if a soldering iron rolls up against it), but it was surprisingly capable. It was reviewed by AnandTech at the time (which no longer even exists).
I don't normally buy laptops at all, I prefer mini-desktops. So this is still my only personal laptop.
What's your threshold for when it becomes a problem? Should we wait until it becomes a problem, or should we try to stop this level of facial recognition?
You should also assume this is a proof of concept. It'll get improved and scaled down to run on every police vehicle, and on every camera the police already control.
It has already been scaled down to android phones (you'll find phones are an excellent platform for this), where you can find apps that are meant to let venue-owners guard entrances against specific individuals. That's illegal, but obviously common enough to make such apps.
Not to be overly argumentative, but I disagree. If you're looking for a deep, ongoing process, LLMs fall down: they can't remember anything and can't build on earlier sessions, so you end up having to repeat a lot of stuff. They also don't course-correct well (that is, if you're going down the wrong path, they don't alert you, as I've experienced).
They can also give you really bad content depending on what you're trying to learn.
I think for things that present themselves as highly structured data, like programming languages, there's good attunement there. But once you start trying to dig into advanced finance, political topics, economics, or complex medical conditions, the quality falls off fast, if it's there at all.
I used llms to teach me a programming language recently.
It was way nicer than a book.
That's the experience I'm speaking from. It wasn't perfect, and it was wrong sometimes, sure. A known limitation.
But it was flexible, and it was able to do things like relate ideas with programming languages I already knew. Adapt to my level of understanding. Skip stuff I didn't need.
Incorrect moments or not, the result was I learned something quickly and easily. That isn't what happened in the 90s.
> and it was wrong sometimes, sure. A known limitation.
But that's the entire problem, and I don't understand why it's just put aside like that. LLMs are wrong sometimes, and they often just don't give you the details. In my opinion, knowing about certain details and traps of a language is very important if you plan on doing more with it than just having fun. Now someone will come around the corner and say 'but it gives you the details if you explicitly ask for them'. Yes, of course, but you just don't know where the important details are hidden when you're still learning. Studying is hard and it takes perseverance. Most textbooks will tell you the same things, but they all still differ, and every author usually highlights a few distinct details; those are the important bits that you just won't get from an LLM.
It's not my experience that there are missing pieces as compared to anything else.
Nobody can write an exhaustive tome and explore every feature, use, problem, and pitfall of Python, for example. Every text on the topic will omit something.
It's hardly a criticism. I don't want exhaustive.
The llm taught me what I asked it to teach me. That's what I hope it will do, not try to caution me about everything I could do wrong with a language. That list might be infinite.
I'd gently point out we're 4 questions into "what about if you went about it stupidly and actually learned nothing?"
It's entirely possible they learned nothing and they're missing huge parts.
But we're sort of at the point where in order to ignore their self-reported experience, we're asking philosophical questions that amount to "how can you know you know if you don't know what you don't know and definitely don't know everything?"
More existentialism than interlocution.
If we decide our interlocutor can't be relied upon, what is discussion?
Would we have the same question if they said they did it from a book?
If they did do it from a book, how would we know if the book they read was missing something that we thought was crucial?
I was attempting to imply that high-quality literature is often reviewed by humans who have some knowledge of the topic, or who are willing to cross-reference it with existing literature. The reader often does this as well.
For low-effort literature this is often not the case, which can lead to things like https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect where a trained observer can point out that something is wrong, but an untrained observer cannot perceive what is incorrect.
IMO, this is adjacent to what human agents interacting with language models experience often. It isn't wrong about everything, but the nuance is enough to introduce some poor underlying thought patterns while learning.
That's easy. It's due to a psychological concept called transfer of learning [0].
Perhaps the most famous example of this is Warren Buffett. For years Buffett missed out on returns from the tech industry [1] because he avoided investing in tech company stocks, due to Berkshire's long-standing philosophy of never investing in companies whose business model he doesn't understand.
His light-bulb moment came when he used his understanding of a business he knew really well, i.e. their furniture business [3], to value Apple as a consumer company rather than a tech company, leading to a $1bn position in Apple in 2016 [2].
You are right, and that's my point. To me it just feels like too many people think LLMs are the holy grail for learning. No, you still have to study a lot. Yes, it can be easier than it was.
Sure, but there are limits here. That's what I'm talking about: limits. The memory isn't infinitely expansive. I've still found it doesn't backtrack well enough to "remember" (for lack of a better term) that it told me something already, if it's old enough, for example.
It also doesn't seem to do a good job of building on "memory" over time. There appears to be some unspoken limit there, or something to that effect.
Agreed, I'd add to the statement, "you're basically screwed, often, without investing a ton of time (e.g. weekends)"
Figuring out 'make' errors when I was bad at C on microcontrollers a decade ago? (Still am.) Careful pondering of the possible meanings of words... trial-and-error tweaks of code and recompiling in hopes that I was just off by a tiny thing, then, two hours and 30 attempts later, realizing I'd done a bad job of tracking what I'd tried and hadn't? Well, it made me better at carefully triaging issues. But it wasn't something I was enthusiastic to pick back up the next weekend, or for the next idea I had.
Revisiting that combination of hardware/code a decade later and having it go much faster with ChatGPT... that was fun.
Are we really comparing that kind of research to just typing a question and having a good answer in a couple of seconds?
Like, I agree with you and I believe those things will resist and will always be important, but it doesn't really compare in this case.
Last week I was out in nature and saw a cute bird that I didn't know. I asked an AI and got the correct answer in 10 seconds.
Of course I could have found the answer at the library or on proper niche sites, but I wouldn't have done it, because I simply didn't care that much. It's a trivial example, but I hope it makes the point.
There's a gigantic difference between outsourcing your brain to generative AI (LLMs, Stable Diffusion, ..) and pattern recognition that recognises songs, birds, plants or health issues.
Citation needed. The 'extreme' feminism & LGBT tends to revolve around identical pay, being able to walk down the street without getting assaulted or being able to work without being harassed or discriminated against.
Feminist groups also regularly try to get games banned from Steam, typically for sexism or violence against women. Eg.
> Women in Games CEO Dr Marie-Claire Isaaman has called on Valve to “act urgently” and remove the game from Steam, saying the game’s content “is not only vile and dangerous, but also actively promotes the dehumanisation of women and girls.”
> In “Irreversible Damage,” Abigail Shrier argues youth are being “fast-tracked” into medical transition — a claim experts say isn’t true and harms trans youth.
Anti-trans activism is the fascist agenda. Fascism should always be stopped. Saving democracy and the lives of trans people by stopping fascists is not controversial in any way.
When you start digging past their marketing material, you quickly discover that these organisations are just right-wing fronts: anti-trans, anti-abortion.
Here's a 38 minute video that walks through some of the recent major incidents.
Why should feminism be incompatible with right-wing politics?
How do I know what is or isn't a legitimate feminist position, given that the list apparently constantly changes over time?
I am constantly told that there are many different kinds of feminism and that my various complaints about feminism — based on actual interactions I've personally had with feminists — are not valid because they don't generalize across the whole thing (even if I point at well-known, established feminist literature and critiques thereof). Yet I also constantly see groups of self-identified feminists point at each other and try to claim that the others don't actually count as feminists because they disagree about some other issue.
I assume you accept the validity of more than two genders. Will you accept the validity of more than two kinds of political position?
>The 'extreme' feminism & LGBT tends to revolve around
There are countless statements from feminist authority figures that are impossible to reconcile with this claim. But HN is not the place to have this argument, or even to attempt to turn it into a discussion; and elsewhere on the Internet, I have repeatedly seen people persecuted as misogynists simply for collating such evidence.
It’s very easy to make everyone you agree with sound incredibly reasonable when you don’t have to give any examples of when they showed their true colors.
I will not restrict myself to an arcane subset of Make just because you refuse to type 'gmake' instead of 'make'. Parallel execution, pattern rules, order-only prerequisites, includes, not to mention the dozens of useful functions like (not)dir, (pat)subst, info... There's a reason most POSIX Makefiles nowadays are generated. It's not GNU's fault that POSIX is stale.
EDIT: There's one exception, and that would be using Guile as an extension language, as that is often not available. However, thanks to conditionals (also not in POSIX, of course), it can be used optionally. I once sped up a Windows build by an order of magnitude by implementing certain things in Guile instead of calling shell (which is notoriously slow on Windows).
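To make two of those features concrete, here's a minimal sketch combining a pattern rule with an order-only prerequisite (everything after the `|`); the directory and file names are illustrative, not from any particular project:

```make
# Pattern rule: build any build/foo.o from the matching foo.c.
# The order-only prerequisite (after the |) ensures the output directory
# exists, without its changing timestamp forcing spurious recompiles.
build/%.o: %.c | build
	$(CC) $(CFLAGS) -c $< -o $@

build:
	mkdir -p build
```

Without the `|`, every new file dropped into build/ would bump the directory's mtime and mark all the objects out of date.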
Agreed. My company decided on using GNU Make on every platform we supported, which back then (last century) was a bunch of Unix variants, and Linux. That made it possible to write a simple and portable build system which could be used for everything we did, no hassle. And not difficult, because gmake was available basically everywhere, then just as now.
Completely agree. POSIX is irrelevant anyway. Every Unix-like has unique features that are vastly superior to whatever legacy happens to be standardized by POSIX. Avoiding their use leads to nothing but misery.
And it's available everywhere. All Unix platforms had it back then, and the still existing ones (AIX is alive, at least) have it available. Which made it easy for our company to base our build system on GNU Make for everything, back in the day.
> Portability is overrated.
> GNU Make is [..] itself portable.
Sounds like it's not overrated, then. You just prefer that other people write portable C and package GNU Make for all systems instead of you writing POSIX Make.
Not at all. I think we should all be using the full potential of our preferred system instead of sucky abstractions that provide the lowest common denominator of features.
Portability is overrated. Portability between POSIX systems is especially overrated. Linux and the BSDs have powerful exclusive features and people should be using them as much as possible in their software, simply because it's better than the legacy POSIX nonsense. This also applies to the features of Windows, macOS, iOS, etc.
GNU Make is powerful, ubiquitous and portable. That makes it even more pointless to avoid it. I won't claim it's perfect, but it's absolutely a hell of a lot better than some "standard" POSIX variant of make that virtually nobody actually cares about. GNU Make will be present in pretty much every system capable of compiling software. Everyone is used to running make to build things. Avoiding things that make life easier in the name of POSIX is pointless masochism.
People are too quick to [ab]use GNU Make features. IME, learning how to make do with portable make constructs can help discipline oneself to avoid excessive complexity, especially when it comes to macro definitions where GNU Make's Lispy looping and eval constructs are heavily overused and quickly lead to obtuse, impenetrable code. POSIX pattern substitutions are quite powerful and often produce easier to read code than the GNU equivalent. I'm not sure if computed variable names/nested variable references are well-defined in POSIX (e.g. "$($(FOO))"), but they are widely supported nonetheless, and often more readable than $(eval ...). (They can also be used for portable conditional constructs; I wouldn't argue they're more readable, though I often find them so.)
Some GNU Make constructs, like pattern rules, are indispensable in all but the simplest projects, but can also be overused.
For some reason there's a strong urge to programmatically generate build rules. But like with SQL queries, going beyond the parameterization already built into the language can be counter productive. A good Makefile, like a good SQL query, should be easy to read on its face. Yes, it often means greater verbosity and even repetition, but that can be a benefit to be embraced (at least embraced more than is instinctively common).
EDIT: Computed variable references are well-defined as of POSIX-2024, including (AFAICT) on the left-hand side of a definition. In the discussion it was shown the semantics were already supported by all extant implementations.
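For illustration, here's what a computed variable reference used as a portable conditional can look like; DEBUG and the CFLAGS_* names are made up for the example:

```make
# Select flags by expanding a variable whose name is itself computed
# from $(DEBUG). Works in GNU Make, and per POSIX-2024 in conforming
# makes as well.
DEBUG = 0
CFLAGS_0 = -O2
CFLAGS_1 = -O0 -g
CFLAGS = $(CFLAGS_$(DEBUG))

all:
	$(CC) $(CFLAGS) -o prog prog.c
```

The same selection would need an ifeq block (GNU-only) otherwise; the computed name keeps the Makefile declarative and portable.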
It's a matter of praxis. Targeting portable constructs is (IMO) a useful methodology for achieving the abstract goal. It doesn't have to be strict, but it provides a quantifiable, objective metric (i.e. amount of non-portable constructs employed) to help achieve an otherwise subjective goal.
Otherwise you face an ocean of choices that can be overwhelming, especially if you're not very experienced in the problem space. It's like the common refrain with C++: most developers settle on a subset of C++ to minimize code complexity; but which subset? (They can vary widely, across projects and time.) In the case of Make, you can just pick the POSIX and/or de facto portable subset as your target, avoiding a lot of choice paralysis/anxiety (though you still face it when deciding when to break out of that box to leverage GNU extensions).
Not every project has to be a multi-platform, multi-OS, multi-language monster. It is perfectly fine to target a specific architecture, OS, etc. And I find it insulting and silly to call that a “toy project”.
Agreed if you're looking at it through the lens of portable software that you plan to distribute. Automake generates portable Makefiles for a reason.
But there's another huge category: people who are automating something that's not open-source. Maybe it stays within the walls of their company, where it's totally fine to say "build machines will always be Ubuntu" or whatever other environment their company prefers.
GNU Make has a ton of powerful features, and it makes sense to take advantage of them if you know that GNU Make will always be the one you use.