maxaw's comments | Hacker News

The big difference is that in many cases the people who support this are the same ones that are addicted. You’re telling addicts to stop their moral panic over their own addiction

I don’t know a single person who after exposure to short form video has not had to exert special effort to regulate their consumption.

Is this a young people thing? I'm 40. I have never liked Shorts. What am I supposed to get out of 10 seconds of video? And all the sudden jump-cuts, and big obnoxious one-word-at-a-time subtitles... They're all literally unwatchable.

I watched my 78yo step mother become addicted to reels, so older people are definitely not immune. But she was able to go cold turkey, as she only communicated with her sister over Instagram, so it wasn't a problem to just continue with WhatsApp. Young people's real-life networks are too enmeshed with Instagram to have the same option.

Also, what you're describing sounds like you haven't spent enough time on Shorts for the content recommendation algorithm to learn your preferences. Which, I agree, is unwatchable. I saw it recently when my friend put on YouTube Shorts on a guest account (on an Airbnb smart TV). It was bad. But spend enough time and that will change. But best you don't!


I find casinos unpleasant but plenty of people obviously don't. I also find games with a narrow FoV unpleasant; I was never able to enjoy DotA 2 because of this and League was only just barely tolerable. Similarly I detest modern web design and gravitate towards sites with an HN or spreadsheet style information dense layout.

I think that's all related: it's at least partially a matter of what I'm accustomed to, but largely just an inherent part of how I am.


Please, I beg you, stop and think about these things.

"is it a young people thing": no, obviously not because nothing is.

You're just as prone to addictive behaviours at 20 as at 40 or at 80.

There might be some differences as to how you happen to be exposed, perhaps because of how your literal social network is behaving, but that's obviously not intrinsic.

I mean, yes, perhaps "young people" are slightly more likely to be exposed to it via advertising/peers/etc, but anyone with a similar exposure can be a victim.


Really? I watch a lot of long-form YouTube while doing the dishes, and occasionally poke at the Shorts. Some funny, mostly dumb and I move on.

Maybe a generational thing, but for most of the latter half of the 20th Century most folks had to “exert special effort to regulate their consumption” of network television. Should there have been lawsuits and regulation of couch potatoes?


If you mean 'should network TV be allowed to use behavioural psychology to manipulate people into being couch potatoes' then the answer is yes, that should be regulated against.

Anyway, the way you talk about shorts reminds me of drug addicts who talk about how they can control their consumption. Some can. Many cannot but delude themselves. The way I see people interact with shorts/TikTok/reels is very much not restrained. They're optimised for addictive scrolling in the same way a slot machine is - the fact that some people can use a slot machine without becoming addicted is beside the point.


Using behavioral psychology in commercial speech should be illegal?

Good luck with that one. Somebody probably used 18th Century behavioral psychology to try to sell George Washington a horse!


You dropped the second half of my sentence which pointed to a specific harm. You consequently argued against something which I didn't say. You are not arguing in good faith and this 'conversation' has clearly run its course as you are not capable of engaging the actual points someone is making.

Someone saying that someone shouldn't be able to promote specific harm x is not saying that the idea of 'promotion' of anything in general is necessarily bad, exactly in the same way that we restrict certain harmful things from being sold without being against the idea of selling things in general.


OK, sorry, so using behavioral psychology to encourage an audience to stay on the couch watching TV for prolonged periods should be illegal?

This is the Netflix business model, right now.


The difference is that the media is 30 seconds, not 2 hours, so the feedback loop is shorter, and the content pool is far far far deeper because it is user-submitted, so the content recommendation algorithms become so effective, and the experience so compelling, that it becomes addictive. And as a wise man once said, "a difference in scale is a difference in kind".

I’m actually strongly sympathetic to this argument, but I’d love to see some actual clinical research that suggests algorithmic short form video has mental and physiological effects that (say) video games do not.

Netflix makes the same profit whether you watch 30 minutes or 30 hours a month.

Tiktok gets paid for every extra second you spend there.


Netflix certainly doesn’t think about their subscriber audience that way.

Screens on their own aren’t “uniquely and magically addictive”, but infinitely scrollable short form video delivered through that screen is, because a few companies spent billions on the smartest minds in the world to make it so.

So you would support banning any form of entertainment that people spend more time on than TikTok since it would be above the threshold of addiction?

More or less, yeah. There might be some nuance about the threshold for maladaptive behaviour, but if it’s all or nothing I’ll take all.

How would you get around the First Amendment difficulties?

There are plenty of public interest limitations on free speech. Food labels, cigarette warnings, deceptive ad laws. Regulating addictive social media isn't really an outlier here.

Even commercial speech regulations need a stronger basis than, “People spend a lot of time listening to it.”

The parent comment set up a false choice and then had to adapt to the response calling their bluff.

The issue isn’t with reading or consuming content, as was set up in the challenge above.

The issue is with designing feeds and surfacing content in ways that take advantage of our brains.

As an analogy, loot boxes in video games, and slot machines come to mind. Both are designed to leverage behavioral psychology, and this design choice directly results in compulsive behavior amongst users.


I live in New Zealand, so I don't have to.

I didn't mention time? From the Cambridge dictionary: 'addiction: an inability to stop doing or using something, especially something harmful.' I am in support of regulating things which are harmful and which people have trouble not doing.

Like potato chips?

If a specially designed endless bag of them were aggressively marketed, with appetite-inducing chemicals added, then sure.

None of those attributes are necessary beyond those of an ordinary bag of Lays to meet the definition:

“things which are harmful and which people have trouble not doing”


It's a matter of degree.

I don't impulsively drive to the store to purchase another bag immediately after finishing the one I have whereas (for example) many people exhibit such behavior when it comes to tobacco.

In the case of social media the feed is intentionally designed to be difficult to walk away from and it is endless (or close enough as makes no practical difference). Even if it weren't endless, refreshing an ever changing page is trivial in comparison to driving to the store and spending money.


How would you contrast social media with Netflix in this regard?

An amusing question. Episodes are much longer and most shows only have one or a few seasons. I don't get the sense that streaming services optimize for difficulty to walk away and do something else any more or less than a good book does.

Maybe autoplay and immediately popping up a grid of recommendations should both be legally forbidden as tactics that blatantly prey on a well established psychological vulnerability. I'd likely support such legislation provided that it could be structured in such a way as to avoid scope creep and thus erosion of personal liberties.

In short I think Netflix is closer to a bag of Lays and modern social media closer to the cigarette industry of yore.


It's definitely to encourage Claude Code usage. Owning the interface through which your core product is delivered is a hedge against the commoditisation that everyone talks about. E.g., it's much harder to switch from Claude Code to Cursor or vice versa than it is to switch between models in Cursor (I sometimes don't even notice the model defaulting to Composer inside Cursor).

This is the clearest reason for us to accustom ourselves to using open-weight models on open-source harnesses. Whatever advantages the frontier closed models offer, this will turn to ash in the mouth when the enshittification cycle begins. And don't be mistaken, it will begin. There is no precedent that suggests otherwise.

I am sure the models themselves are being RLHF tuned to work very well with the proprietary agent harnesses. This is all turning into a huge trap right in front of our eyes and the target is not just programmers but also companies whose core product involves software production.


Fully agree with you

I can believe it - maybe they feel they have enough of a lead in usage with programmers with Opus that they want to lock down the tooling side as well.

edit: clarify


I quit my last job because of this. I'm pretty sure my manager was using free ChatGPT with no regard for context length, too, because not only was it verbose, it was also close to gibberish. Being asked to review urgently and estimate deadlines got old real fast.

Fully agree. Will take some time though as immediate incentive not clear for consumer facing companies to do extra work to help ppl bypass website layer. But I think consumers will begin to demand it, once they experience it through their agent. Eg pizza company A exposes an api alongside website and pizza company B doesn’t, and consumer notices their agent is 10x+ faster interacting with company A and begins to question why.

Most perplexing product description I’ve read in some time from a major company


A proxy server to give my agent access to my Gmail with permissions as granular as I like. Like can create filters to custom label but not send to trash. As my inbox is at 99% due to years of zero discipline giving my email out to every company on the web :)
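The core of such a proxy is just a deny-by-default allowlist sitting between the agent and the mail API. A minimal sketch of the idea (the action names and `proxy_call` helper here are hypothetical illustrations, not the actual Gmail API surface):

```python
# Hypothetical deny-by-default policy: only explicitly allowlisted
# Gmail-style actions may pass through the proxy to the real API.
ALLOWED_ACTIONS = {
    "labels.create",
    "filters.create",   # e.g. create filters that apply custom labels
    "messages.list",
    "messages.get",
    # note: "messages.trash" and "messages.send" deliberately absent
}

class ActionDenied(Exception):
    """Raised when the agent requests an action outside the allowlist."""

def authorize(action: str) -> None:
    # Deny by default: anything not listed is refused.
    if action not in ALLOWED_ACTIONS:
        raise ActionDenied(f"agent may not perform {action!r}")

def proxy_call(action: str, handler, *args, **kwargs):
    """Gate an upstream API call behind the allowlist."""
    authorize(action)
    return handler(*args, **kwargs)
```

So `proxy_call("filters.create", ...)` goes through, while `proxy_call("messages.trash", ...)` raises before the upstream handler is ever invoked.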


willing to share?


Definitely


Curious - are there not good tools for filtering through applications? There must be a lot of LLM-related offerings.


Throw half of them in the bin. You can't afford to hire unlucky people.


While following OpenClaw, I noticed an unexpected resentment in myself. After some introspection, I realized it's tied to seeing a project achieve huge success while ignoring security norms many of us struggled to learn the hard way. On one level, it's selfish discomfort at the feeling of being left behind ("I still can't bring myself to vibe code. I have to at least skim every diff. Meanwhile this guy is joining OpenAI"). On another level, it feels genuinely sad that the culture of enforcing security norms - work that has no direct personal reward and that end users will never consciously appreciate, but that only builders can uphold - seems to be on its way out.


But the security risk wasn't taken by OpenClaw. Releasing vulnerable software that users run on their own machines isn't going to compromise OpenClaw itself. It can still deliver value for its users while also requiring those same users to handle the insecurity of the software themselves (by either ignoring it or setting up sandboxes, etc. to reduce the risk - and then maybe that reduced risk is weighed against the novelty and value of the software, making it worth it to the user to set up).

On the other hand, if OpenClaw were structured as a SaaS, this entire project would have burned to the ground the first day it was launched.

So by releasing it as something you needed to run on your own hardware, the security requirement was reduced from essential to a feature that some users would be happy to live without. If you were developing a competitor, security could be one feature you compete on - it would increase the number of people willing to run your software and reduce the friction of setting up sandboxes/VMs to run it.


This argument has the same obvious flaws as the anti-mask/anti-vax movement (which unfortunately means there will always be a fringe that don't care). These things are allowed to interact with the outside world, it's not as simple as "users can blow their own system up, it's their responsibility".

I don't need to think hard to speculate on what might go wrong here - will it answer spam emails sincerely? Start cancelling flights for you by accident? Send nuisance emails to notable software developers for their contribution to society[1]? Start opening unsolicited PRs on matplotlib?

[1] https://news.ycombinator.com/item?id=46394867


We really needed to have made software engineering into a real, licensed engineering practice over a decade ago. You wanna write code that others will use? You need to be held to a binding set of ethical standards.


Even though it means I probably wouldn't have a job, I think about this a lot and agree that it should. Nowadays suggesting programmers should be highly knowledgeable at what they do will get you called a gatekeeper.


While it is literally gatekeeping, it's necessary. Doctors, architects, lawyers should be gatekept.

I used to work on industrial lifting crane simulation software. People used it to plan out how to perform big lift jobs to make sure they were safe. Literal "if we fuck this up, people could die" levels of responsibility. All the qualification I had was my BS in CS and two years of experience. It was lucky circumstance that I was actually quite good at math and physics, enough to discover that there were major errors in the physics model.

Not every programmer is going to encounter issues like that, but also, neither can we predict where things will end up. Not every lawyer is going to be a criminal defense lawyer. Not every doctor is going to be a brain surgeon. Not every architect is going to design skyscrapers. But they all do work that needs to be warranteed in some way.

We're already seeing people getting killed because of AI. Brian in middle management "getting to code again" is not a good enough reason.


> While it is literally gatekeeping, it's necessary. Doctors, architects, lawyers should be gatekept.

That was exactly my point. It's one of those things where people deliberately use a word that is technically correct in a context where it doesn't, or shouldn't, hold true. Does this mean I want to stop people from "vibe coding" Flappy Bird? No, of course not, but as per your original comment, yes, there should be stricter regulations when it comes to hiring.


Yeah, I know what you mean. It is a weapon people throw around on social media sites.


At least during the Covid response, your concerns over anti-mask and anti-vaccine issues seem unwarranted.

The claims being shared by officials at the time were that anyone vaccinated was immune and couldn't catch it. Claims were similarly made that we needed roughly a 60% vaccination rate to reach herd immunity. With that precedent being set, it shouldn't matter whether one person chose not to mask up or get the jab; most everyone else could do so to fully protect themselves, and those who can't would only be at risk if more than 40% of the population weren't onboard with the masking and vaccination protocols.


> that anyone vaccinated was immune and couldn't catch it.

Those claims disappeared rapidly when it became clear they offered some protection, and reduced severity, but not immunity.

People seem to be taking a lot more “lessons” from COVID than are realistic or beneficial. Nobody could get everything right. There couldn’t possibly be clear “right” answers, because nobody knew for sure how serious the disease could become as it propagated, evolved, and responded to mitigations. Converging on consistent shared viewpoints, coordinating responses, and working through various solutions to a new threat on that scale was just going to be a mess.


Those claims were made after the studies were done over a short duration and specifically only watching for subjects who reported symptoms.

I'm in no way taking a side here on whether anyone should have chosen to get vaccinated or wear masks, only that the information at the time being pushed out from experts doesn't align with an after the fact condemnation of anyone who chose not to.


I specifically wasn't referring to that instance (if anything I'm thinking more of the recent increase in measles outbreaks), I myself don't hold a strong view on COVID vaccinations. The trade-offs, and herd immunity thresholds, are different for different diseases.

Do we know that 0.1% prevalence of "unvaccinated" AI agents won't already be terrible?


Fair enough. I assumed you had Covid in mind with an anti-mask reference. At least in modern history in the US, we have only even considered masks during the Covid response.

I may be out of touch, but I haven't heard about masks for measles, though it does spread through aerosol droplets so that would be a reasonable recommendation.


I think you're right - outside of COVID, it's not fringe, it's an accepted norm.

Personally I at least wish sick people would mask up on planes! Much more efficient than everyone else masking up or risking exposure.


Oh I wish sick people would just not get on a plane. I've cancelled a trip before, the last thing I want to do when sick is deal with the TSA, stand around in an airport, and be stuck in a metal tube with a bunch of other people.


Love passing off the externalities of security to the user, and then the second order externalities of an LLM that then blackmails people in the wild. Love how we just don’t care anymore.


You should join the tobacco lobby! Genius!


More straightforwardly, people are generally very forgiving when people make mistakes, and very unforgiving when computers do. Look at how we view a person accidentally killing someone in a traffic accident versus when a robotaxi does it. Having people run it on their own hardware makes them take responsibility for it mentally, so gives a lot of leeway for errors.


I think that’s generally because humans can be held accountable, but automated systems can not. We hold automated systems to a higher standard because there are no consequences for the system if it fails, beyond being shut off. On the other hand, there’s a genuine multitude of ways that a human can be held accountable, from stern admonishment to capital punishment.

I’m a broken record on this topic but it always comes back to liability.


Thats one aspect.

Another aspect is that we have much higher expectations of machines than humans in regards to fault-tolerance.


Traffic accidents are the same symptom of fundamentally different underlying problems among human-driven and algorithmically-driven vehicles. Two very similar people differ more than the two most different robo taxis in any given uniform fleet— if one has some sort of bug or design shortcoming that kills people, they almost certainly all will. That’s why product (including automobile) recalls exist, but we don’t take away everyone’s license when one person gets into an accident. People have enough variance that acting on a whole population because of individual errors doesn’t make sense— even for pretty common errors. The cost/benefit is totally different for mass-produced goods.

Also, when individual drivers accidentally kill somebody in a traffic accident, they’re civilly liable under the same system as entities driving many cars through a collection of algorithms. The entities driving many cars can and should have a much greater exposure to risk, and be held to incomparably higher standards because the risk of getting it wrong is much, much greater.


Oh please, why equate IT BS with cancer? If the null pointer was a billion dollar mistake, then C was a trillion dollar invention.

At this scale of investment countries will have no problem cheapening the value of human life. It's part and parcel of living through another industrial revolution.


Exactly! I was digging into Openclaw codebase for the last 2 weeks and the core ideas are very inspiring.

The main work he has done to enable a personal agent is his army of CLIs, like 40 of them.

The harness he used, pi-mono, is also a great choice because of its extensibility. I was working on a similar project (1) for the last few months with Claude Code, and it's not really the best fit for a personal agent, and it's pretty heavy.

Since I was planning to release my project as a Cloud offering, I worked mainly on sandboxing it, which turned out to be the right choice given OpenClaw is opensource and I can plug its runtime to replace Claude Code.

I decided to release it as opensource because at this point software is free.

1: https://github.com/lobu-ai/lobu


I don't agree that making your users run the binaries means security isn't your concern. Perhaps it doesn't have to be quite as buttoned down as a commercial product, but you can't release something broken by design and wash your hands of the consequences. Within a few months, someone is going to deploy a large-scale exploit which absolutely ruins OpenClaw users, and the author's new OpenAI job will probably allow him to evade any real accountability for it.


> But the security risk wasnt taken by OpenClaw

This is the genius move at the core of the phenomenon.

While everyone else was busy trying to address safety problems, the OpenClaw project took the opposite approach: They advertised it as dangerous and said only experienced power users should use it. This warning seemingly only made it more enticing to a lot of users.

I've been fascinated by how well the project has just dodged and avoided any consequences for the problems it has introduced. When it was revealed that the #1 skill was malware masquerading as a Twitter integration, I thought for sure there would be some reporting on the problems. The recent story about an OpenClaw bot publishing hit pieces seemed like another tipping point for journalists covering the story.

Though maybe this inflection point made it the most obvious time to jump off of the hype train and join one of the labs. It takes a while for journalists to sync up and decide to flip to negative coverage of a phenomenon after they cover the rise, but now it appears that the story has changed again before any narratives could build about the problems with OpenClaw.


I am guessing there will be an OpenClaw "competitor" targeting Enterprise within the next 1-2 months. If OpenAI, Anthropic or Gemini are fast and smart about it they could grab some serious ground.

OpenClaw showed what an "AI Personal Assistant" should be capable of. Now it's time to get it in a form-factor businesses can safely use.


With the guard rails up, right? Right?


Every single new tech industry thing has to learn security from scratch. It's always been that way. A significant number of people in tech just don't believe that there's anything to learn from history.


And the industry actively pushes graybeards away who have already been there done that.


> being left behind (“I still can’t bring myself to vibe code. I have to at least skim every diff. Meanwhile this guy is joining OpenAI”).

I don't believe skimming diffs counts as being left behind. Survivor bias etc. Furthermore, people are going to get burned by this (already have been, but seemingly not enough) and a responsible mindset such as yours will be valued again.

Something that's still up for grabs is figuring out how to do fully agentic workflows in a responsible way. How do we bring the equivalent of skimming diffs to this?


For my entire career in tech (~20 years) I have been technically good but bad at identifying business trends. I left Shopify right before their stock 4xed during COVID because their technology was stagnating and the culture was toxic. The market didn't care about any of that, I could have hung around and been a millionaire. I've been at 3 early stage startups and the difference between winners and losers was nothing to do with quality or security.

The tech industry hasn't ever been about "building" in a pure sense, and I think we look back at previous generations with an excess of nostalgia. Many superior technologies have lost out because they were less profitable or marketed poorly.


  bad at identifying business trends

I think you're being unduly harsh on yourself. At least by the Shopify/COVID example. COVID was a black swan event, which may very well have completely changed the fortunes of companies like Shopify when online commerce surged and became vital to the economy. Shortcomings, mismanagement and bad culture can be completely papered over by growth and revenue.

Right place, right time. It’s too bad you missed out on some good fortune, but it’s a helpful reminder of how much of our paths are governed by luck. Thanks for sharing, and wishing you luck in the future.


> seems to be on it’s way out

Change is fraught with chaos. I don't think exuberant trends are indicators of whether we'll still care about secure and high quality software in the long term. My bet is that we will.


i think your self reflection here is commendable. i agree on both counts.

i think the silver lining is that AI seems to be genuinely good at finding security issues and maybe further down the line enough to rely on it somewhat. the middle period we're entering right now is super scary.

we want all the value, security be damned, and have no way to know about issues we're introducing at this breakneck speed.

still i'm hopeful we can figure it out somehow


Hey, as a security engineer in AI, I get where you're coming from.

But one thing to remember - our job is to figure out how to enable these amazing usecases while keeping the blast radius as low as possible.

Yes, OpenClaw ignores all security norms, but it's our job to figure out an architecture in which agents like these can have the autonomy they need to act, without harming the business too much.

So I would disagree our work is "on the way out", it's more valuable than ever. I feel blessed to be working in security in this era - there has never been a better time to be in security. Every business needs us to get these things working safely, lest they fall behind.

It's fulfilling work, because we are no longer a cost center. And these businesses are willing to pay - truly life changing money for security engineers in our niche.


Security is always a cost center. We've seen multiple iterations of changes already impact security in the same ways over the last 20+ years. Nothing is different here and the outcomes will be the same: just good enough but always a step behind. The one thing that is a new lever to pull here is time, people need far less of it to make disastrous mistakes. But, ultimately, the game hasn't changed and security budgets will continue to be funneled to off the shelf products that barely work and the remainder of that budget will continue to go to the overworked and underpaid. Nothing really changes.


This is a normal reaction to unfairness. You see someone who you believe is Doing It Wrong (and I’d agree), and they’re rewarded for it. Meanwhile you Do It Right and your reward isn’t nearly as much. It’s natural to find this upsetting.

Unfortunately, you just have to understand that this happens all over the place, and all you can really do is try to make your corner of the world a little better. We can’t make programmers use good security practices. We can’t make users demand secure software. We can at least try to do a better job with our own work, and educate people on why they should care.


I've been feeling this SO much lately, in many ways. In addition to security, just the feeling of spending decades learning to write clean code, valuing having a deep understanding of my codebase and tooling, thorough testing, maintainability, etc, etc. Now the industry is basically telling me "all that expertise is pointless, you should give it up; all we care about is a future of endless AI slop that nobody understands".


I've been feeling a similar kind of resentment often. My whole life I have prided myself on being the guy that actually bothers to read the docs and understand how shit works. Seems like the whole industry is basically saying none of that matters, no need to understand anything deeply anymore. Feels bad man.


AI slop will collapse under its own weight without oversight. I really think we will need new frameworks to support AI-generated code. Engineers with high standards will be needed to build and maintain the tools and technologies so that AI-written code can thrive. It's not game over just yet


Thanks, I've been feeling the same way. But it seems like we're some years away from the industry fully realizing it. Makes me want to quit my job and just code my own stuff.


Well, OpenClaw has ~3k open PRs (many touching security) on GitHub right now. Peter's move shows killer product UI/UX, ease of use and user growth trump everything. Now OpenAI will throw their full engineering firepower at squashing those flaws in no time.

Making users happy > perfect security day one


"Peter's move shows killer product UI/UX, ease of use and user growth trump everything. "

Erm, is this some groundbreaking revelation?

Its always been that way. Unless its in the context of superior technology with minimal UI a-la Google Search in its early years.


google search did have a killer UI though, you might be forgetting what search looked like before google


A list of results is not a killer UI.

The technology was the killer. Technology providing the right list of results and fast.

OH and believe it or not, this continues to be the core of Google today - they suck at product design and marketing.


It was killer compared to alternatives. All other "homepages" of the internet were the cluttered mess of ads.

I feel like we are arguing semantics though. But IMO any UI that does the job that consumers want well is good UI. Just because it was simple doesn't mean it wasn't good


building this openclaw thing that competes with openai using codex is against the openai terms of service, which say you can't use it to make stuff that competes with them. but they compete with everyone. by giving zero fucks (or just not reading the fine print), bro was rewarded by the dumb rule people for breaking the dumb rules. this happens over and over. there is a lesson here


underrated comment

and this is why they bought Peter. i’m betting he will come to regret it.


I don't know. It's more of a sharp tool like a web browser (also called a "user agent") - yes an inexperienced user can quickly get themselves into trouble without realizing it (in a browser or openclaw), yes the agent means it might even happen without you being there.

A security hole in a browser is an expected invariant not being upheld, like a vulnerability letting a remote attacker control your other programs, but it isn't a bug when a user falls for an online scam. What invariants are expected by anyone of "YOLO hey computer run my life for me thx"?


But in this case following security norms would be a mistake. The right thing to take away is that you shouldn't dogmatically follow norms. Sometimes it's better to just build things if there is very little risk

Nothing actually bad happened in this case and probably never will. Maybe some people will have their crypto or identity stolen, but probably not at a rate significantly higher than background (lots of people are using OpenClaw).


> Lots of people are using openclaw

https://www.shodan.io/search?query=http.favicon.hash%3A-8055...

Indeed they are, at least 20,432 people :)
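For the curious, Shodan's `http.favicon.hash` values like the one in that query are MurmurHash3 (32-bit) digests of the base64-encoded favicon. A minimal pure-Python sketch of reproducing such a hash (the signed-value and `base64.encodebytes` newline details are assumptions based on how these hashes are commonly reproduced):

```python
import base64

def murmur3_32(data: bytes, seed: int = 0) -> int:
    """Pure-Python MurmurHash3 x86 32-bit (same algorithm as the `mmh3` package)."""
    c1, c2 = 0xCC9E2D51, 0x1B873593
    h = seed & 0xFFFFFFFF
    length = len(data)
    # Process the input in 4-byte little-endian blocks.
    for i in range(0, length - (length % 4), 4):
        k = int.from_bytes(data[i:i + 4], "little")
        k = (k * c1) & 0xFFFFFFFF
        k = ((k << 15) | (k >> 17)) & 0xFFFFFFFF  # ROTL32(k, 15)
        k = (k * c2) & 0xFFFFFFFF
        h ^= k
        h = ((h << 13) | (h >> 19)) & 0xFFFFFFFF  # ROTL32(h, 13)
        h = (h * 5 + 0xE6546B64) & 0xFFFFFFFF
    # Tail: the remaining 1-3 bytes, if any.
    tail = data[length - (length % 4):]
    k = 0
    for i, b in enumerate(tail):
        k |= b << (8 * i)
    if tail:
        k = (k * c1) & 0xFFFFFFFF
        k = ((k << 15) | (k >> 17)) & 0xFFFFFFFF
        k = (k * c2) & 0xFFFFFFFF
        h ^= k
    # Finalization mix.
    h ^= length
    h ^= h >> 16
    h = (h * 0x85EBCA6B) & 0xFFFFFFFF
    h ^= h >> 13
    h = (h * 0xC2B2AE35) & 0xFFFFFFFF
    h ^= h >> 16
    return h

def shodan_favicon_hash(favicon: bytes) -> int:
    # Shodan hashes the base64 *text* of the favicon (including the newlines
    # that base64.encodebytes inserts), then reports the signed 32-bit value.
    encoded = base64.encodebytes(favicon)
    h = murmur3_32(encoded)
    return h - 2**32 if h >= 2**31 else h
```

Fetch a deployment's `/favicon.ico`, run the bytes through `shodan_favicon_hash`, and search `http.favicon.hash:<value>` to find other hosts serving the same icon — which is presumably how the count above was obtained.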


Your introspection missed the obvious point that you just wish you were him. Your resentment had nothing to do with security. It's a self-revelation that you don't actually care about it either and you resent wasting your time.


So my unsubstantiated conspiracy theory regarding Clawd/Molt/OpenClaw is that the hype was bought, probably by OpenAI. I find it too convenient that not long after the phrase "the AI bubble" starts coming into common speech, we see the emergence of a "viral" use case that all of the paid influencers on the Internet seem to converge on at the same time. At the end of the day, piping AI output with tool access into a while loop is not revolutionary. The people who had been experimenting with these types of setups back when LangChain was the hotness didn't go organically viral, because most people knew that giving a language model unrestricted access to your online presence or bank account is extremely reckless. The "I gave OpenClaw $100 and now I bought my second Lambo. Buy my ebook" stories don't seem credible.

So don’t feel bad. Everything on the internet is fake.


The modern influencer landscape was such a boon for corporations.

For less than the cost of 1 graphics card you can get enough people going that the rest of them will hop on board for free just to try and ride the wave.

Add a few LLM-generated comments that don't throw the product in your face but make sure it's always part of the conversation, so someone else can do it for you for free, and you're off to the races.


Security is always the most time-consuming part of a backend project.


At the end of the day, he built something people want. That’s what really matters. OpenAI and Anthropic could not build it because of the security issues you point out. But people are using it and there is a need for it. Good on him for recognizing this and giving people what they want. We’re all adults and the users will be responsible for whatever issues they run into because of the lack of security around this project.


Admittedly, I might not be the... target demographic here, but I can't say I understand what problem it solves, and even a cursory read immediately flags all the ways in which it can go wrong (including the recent "rent a human" HN post). I am fascinated, and I wonder if it is partially that fascination that drives the current wave of adoption.

I will say openly: I don't get it and I used to argue for crypto use cases.


I think you should give your gut instinct more credit. The tech world has gotten a false sense of security from the big SaaS platforms running everything, which make the nitty-gritty security details disappear in a seamless user experience, and that includes LLM chatbot providers. Even open source development libraries with exposure to the wild are so heavily scrutinized and well-honed that it's easy even for people like me who started in the '90s to lose sight of the real risk on the other side. No more popping up some raw script on an Apache server to do its best against whatever is out there. Vibe-coded projects trade a lot of that hard-won stability for the convenience of not having to consider some amount of the implementation details. People that are jumping all over this for anything except sandbox usage either don't know any better, or forgot what they've learned.


Totally agree. And the fact that the author says

> What I want is to change the world, not build a large company and teaming up with OpenAI is the fastest way to bring this to everyone.

does not make me feel all warm and fuzzy. Yeah, changing the world with Thiel's money. Try joining a union instead.


Change the world into what? Techno-feudalism?

Ever since I was four, I've dreamed of doing my part to bring that about.


Very happy to see techno-feudalism being mentioned here on HN.

Whatever the origins of the term, it now seems clear it’s kind of the direction things are going.


I recently met a guy that goes to these "San Francisco Freedom Club" parties. Check their website, it's basically just a lot of Capitalism Fans and megawealthies getting drunk somewhere fancy in SF. Anyway, he's an ultra-capitalist and we spent a day at a cafe (co-working event) chatting in a conversation that started with him proposing private roads and shot into orbit when he said "Should we be valuing all humans equally?"

Throughout the conversation he speculated on some truly bizarre possible futures, including an oligarchic takeover by billionaires with private armies following the collapse of the USA under Trump. What weirded me out was how oddly specific he got about all the possible futures he was speculating about that all ended with Thiel, Musk, and friends as feudal lords. Either he thinks about it a lot, or he overhears this kind of thing at the ultracapitalist soirées he's been going to.


It seems like quite a reasonable output given the current inputs. Elon having an army of robots is... well, it is what it is. Yet that is the direction we are going.


So basically a bunch of rich tech edgelords are just doing blow and trying to bring about the world as depicted in Snow Crash?!

Guess I’ll have to get a Samurai sword soon and pivot to high stakes pizza delivery.

There are a disturbing number of parallels between Elon and L. Bob Rife.

It’s really disturbing that we have oligarchs trying to eagerly create a cyberpunk dystopia.


I was really into the idea of kings, knights, castles, princesses etc when I was 4.

