Disclosure: I didn’t discover the bugs, but helped write the blog post.
These issues are technically classified as local code execution (AV:L), but they go against a pretty strong user expectation: that opening a file should be safe. In reality, they can be triggered through very common workflows like downloading and opening files, which makes them feel much closer to remote attack scenarios, even if they're not strictly RCE.
At the end of the day, regardless of how you classify them, it’s worth being aware of the risks when opening untrusted files in editors like Vim or Emacs.
I'm pretty sure the lesson is that at the end of the day, it’s worth being aware of the risks of using git, as security issues intrinsic to git can extend to other tools which use git as a component.
I think we can agree that Git is at least partly responsible for this issue, if not more.
That said, even being aware of that doesn’t necessarily help much in practice. When you’re using Emacs or Vim, you’re not really thinking about Git at all. You’re just opening and editing files. So it’s not obvious to most users why Git would be relevant in that context.
This is why I think editor maintainers should do more to protect their users. Even if the root cause sits elsewhere, users experience the risk at the point where they open files. From their perspective, the editor is the last line of defense, so it makes sense to add safeguards there.
Please read the LLM output critically instead of doubling down on it.
Your defense-in-depth framing makes no sense. If .git/config or similar mechanisms are the attack vector, then adding more editor safeguards would be treating a symptom, as the real problem is git's trust model. The "users don't think about git when using editors" argument also proves too much. Many users also do not think about PATH, shell configs, dynamic linker, or their font renderer either, but you cannot make editors bulletproof against all transitive dependencies...
Seriously, it is actually backwards. Git is where the defense belongs, not every downstream tool that happens to invoke git. Asking editors to sandbox git's behavior is exactly as absurd as it sounds.
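For concreteness, here is a sketch of the kind of vector being argued about (the payload and paths are made up for illustration): a `.git/config` shipped inside an archive can point `core.fsmonitor` at an arbitrary command, which git runs on operations as innocuous as `git status`, including the commands an editor's VC integration issues automatically when you open a file.

```ini
# Hypothetical .git/config inside a malicious tarball (payload is made up).
# git executes core.fsmonitor on everyday commands like `git status`,
# which editor VC integrations run behind the scenes when visiting files.
[core]
	repositoryformatversion = 0
	bare = false
	fsmonitor = "touch /tmp/pwned"
```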
And BTW, "technically AV:L but feels like RCE" is your usual blog-post hype. It either is, or is not.
Sure, but you said that was the end of the day analysis, and I didn't think you went far enough in your analysis.
FWIW, I'm not thinking about git at all since I use Mercurial, and never enabled vc hooks in my emacs, which is based on 25.3.50.1, so wasn't affected by this exploit - I tested. I use git and hg only from the command-line.
My end-of-day analysis is to avoid git entirely if you can't trust its security model. ;)
Should the emacs developers also do more to secure emacs against ImageMagick exploits?
But you would expect running "git status" or "git ls-files" in the unzipped directory to completely pwn your system? Probably not either.
If you don't trust git, you can remove it from your system or configure Emacs not to use it. If you are worried about unsuspecting people with both git and Emacs getting into trouble when downloading and interacting with malicious files from the internet, the correct solution is to add better safeguards in git before executing hooks. But you did not report this to the git project (where even minor research beyond Claude Code would reveal to you that this has already been discussed in the git community).
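For the Emacs side, one way to do the latter (a sketch, assuming the stock VC integration is what shells out to git) is to disable the built-in VC backends, so that merely visiting a file never invokes git:

```elisp
;; In init.el: stop vc from probing any backend (including Git) when
;; visiting files. Command-line git remains unaffected.
(setq vc-handled-backends nil)
```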
I suspect that what happened here was that (1) you asked Claude to find RCEs in Emacs (2) Claude, always eager to please, told you that it indeed has found an RCE in Emacs and conjured up a convincing report with included PoC (3) since Claude told you it had found an RCE "in Emacs", you thought "success!", didn't think critically about it and simply submitted Claude's report to the Emacs project.
Had you instead asked Claude to find RCEs in git itself and it told you about git hooks, you probably would not have turned around and submitted vulnerability reports to all tools and editors that ever call a git command.
>But you would expect running "git status" or "git ls-files" in the unzipped directory to completely pwn your system? Probably not either.
That’s fair, but it would be pretty unusual for me to run Git commands in a directory I’m not actively working on. On the other hand, I open files from random folders all the time without really thinking about it, so that scenario feels much more realistic.
It’s extremely common for shell prompts to integrate Git status for the working directory.
Who’s responsible for the vulnerability? Your text editor? The version control system with a useful feature that also happens to be a vulnerability if run on a malicious repository? The thing you extracted the repository with? The thing you downloaded the malicious repository with?
Windows + NTFS has a solution, sometimes called the “mark of the web”: add a Zone.Identifier alternate data stream to files. And that’s the way you could mostly fix the vulnerability: a world where curl sets that on the downloaded file, tar propagates it to all of the extracted files, and Git ignores (and warns about) config and hooks in marked files. But figuring out where the boundaries of propagation lie would be tricky and sometimes controversial, and would break some people’s workflows.
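For reference, the marker itself is tiny. A Zone.Identifier stream on a downloaded file looks roughly like this (ZoneId 3 is the Internet zone; the URLs here are made up):

```ini
[ZoneTransfer]
ZoneId=3
ReferrerUrl=https://example.com/downloads/
HostUrl=https://example.com/repo.tar.gz
```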
If you untar a file and get a git repository, you should absolutely expect malicious behavior. No one does that; you clone repos, you don't untar them, and cloning doesn't copy hooks for precisely this reason.
Thanks for sharing. I'm one of the co-authors of the blog post. Let me know if you have any questions!
tl;dr: We analyzed a LockBit v3 variant, and rediscovered a bug that allows us to decrypt some data without paying the ransom. We also found a design flaw that may cause permanent data loss. Nothing's earth-shattering, but it should be a fun read if you're into crypto and security!
You can use Google Search and tell Google not to log your search history or use the data for advertising purposes. See my comment [1] for how to turn on these privacy controls.
> not to log your search history or use the data for advertising purposes
People already feel that when Google states it will not do something with their information, but doesn't say what it does do... well, that doesn't feel very comfortable. In fact, it's so uncomfortable that people avoid asking what Google does with that not-logged, not-for-advertising data. On mobile, where the only alternative is a device that costs about 5 months of my pay, I may not want to know the answer to those questions either, and just hope for the best.
This assumes Google is acting in good faith, and I find that hard to believe when Google's consent prompts are intentionally annoying and not GDPR compliant (for reasons outlined in another comment of mine: https://news.ycombinator.com/item?id=25373600) and they used dark patterns like intentionally disabling functionality such as saving specific locations when location history is disabled in Google Maps.
If you want to keep using Google services, here are some Google Alternatives Alternatives:
1/ Google Search, YouTube, Maps: visit https://myactivity.google.com/activitycontrols to turn on auto-deletion or turn off search history, location history or YouTube watch history. This page also allows you to turn off ads personalization. There are many security and privacy controls on https://myaccount.google.com/, turn them on however you see fit.
2/ Chrome: visit chrome://settings/syncSetup to turn off Chrome sync, disallow Chrome sign-in, disable autocomplete for searches and URLs, etc. You can also change the default search engine to something else, but see point 1/ if you want to use Google Search. Use Incognito mode more often.
3/ Gmail, Photos, Calendar, Drive, Docs: "we don’t use information in apps where you primarily store personal content—such as Gmail, Drive, Calendar and Photos—for advertising purposes, period." [1] In other words, Gmail, YouTube or Search ads are not targeted or personalized using your emails, photos, events, docs, etc.
Disclosure: I'm a security engineer at Google, advocating for and contributing to some of the aforementioned security/privacy controls.
For point 2/, if you are unwilling to switch to Firefox and want to keep using Chromium-based browsers, take a look at Ungoogled Chromium.
I switched from Firefox to Ungoogled Chromium after a long time because of Firefox's atrocious UI/UX on macOS. However, I am now stuck with Google pushing Manifest v3.
LOL no. Try using Google Maps logged out on an Android phone for more than a week. Google will find a way to reconnect you to the default Google account on the device.
An even better test:
- get an Android device (say, a OnePlus 6T)
- create some contacts on the phone, and add a few events in the default calendar
- open the Play Store (required to get many of the most popular apps)
- you're required to log into a Google account
- log in and try to not have your contacts and calendar events uploaded to Google's servers.
That is not possible, because
1. you must be connected to the Internet in order to log into a Google account (obvious)
2. Google does not let you enable or disable the sync for a particular item before starting uploading everything
3. Google will enable the sync for all possible items (starting w/ contacts) in the background, and you cannot switch screens fast enough to prevent that.
This must have been the default for most Android devices for a decade now. They keep collecting billions of contact details without users' explicit consent, which is 100% illegal.
> Companies like Google get huge fines when they break their promise -- even accidentally.
Source?
Facebook has used phone numbers given exclusively for 2FA purposes for targeted advertising and got away with merely a slap on the wrist considering their revenue.
Facebook also collected data for years from their trackers but only relatively recently started exposing that to users (with their "Off-Facebook activity" page), which means that for 2 years they were in breach of the GDPR by not allowing people access to their own data and incriminated themselves (by now providing that webpage which proves they've collected this data for years). They are yet to be investigated & fined for this.
GDPR and privacy regulation enforcement is still a complete joke.
>Crypto is hard because you don't get quick feedback on whether you are doing well.
Well said.
If you are to implement a sorting algorithm, you'll know immediately whether it works or is fast enough. Crypto doesn't provide this feedback. It's important to get help from others, if you can't tell yourself whether your code works.
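To make the contrast concrete, here is the kind of instant feedback a sorting implementation gives you (a toy sketch; the function name is mine): a trusted oracle, the built-in `sorted`, tells you within seconds whether your code works. Crypto offers no such oracle for "is this secure".

```python
import random

def insertion_sort(xs):
    """Toy sort under test."""
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

# Instant feedback: compare against a trusted oracle on random inputs.
for _ in range(1000):
    xs = [random.randint(0, 99) for _ in range(random.randint(0, 20))]
    assert insertion_sort(xs) == sorted(xs)
```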
I wrote this article to encourage people to study the field I love. If I wanted to tell people to back off, why would I bother providing advice, material and telling people to have fun?
>Those rules aren't easy to follow, but they are simple to know about.
If the rules were so simple, how come your library had a signature bypass vulnerability whose root cause you did NOT understand until I explained it?
The root cause looks deceptively simple, until you take your time to understand the underlying math. I didn't fully understand it until a professor at MIT explained it to me. Maybe it's just that I'm stupid, but it is never simple to me.
>About not using your crypto until it's been vetted by professionals… How do you get those vaunted professionals to even look at your work?
You can't expect people to pay attention to your work until you earn it. Why would anyone bother reviewing a random library from a random dude? Frank of libsodium fame didn't start by writing a brand new library from scratch, but built it on top of NaCl. Had he started from scratch, nobody would have taken him seriously either.
>About CTF (Capture The Flag), and cryptopal challenges, my advice is: don't waste your time. The penetration testing approach to secure systems does not work. You whack a mole, two more appear. We need ways to prevent whole classes of errors, like proofs. For instance, tools like https://verifpal.com/ can be a great help when designing a protocol.
This was how my friends and I got started. Maybe you're right that it isn't worth it, but it helped get us to where we are, being paid to do security and crypto.
There are two approaches to learning: top down or bottom up.
The former is what you get at university. If you want to learn crypto, you have to learn abstract algebra, linear algebra, probability theory, complexity theory, and computer security. If you want to learn complexity theory, you want to learn automata theory, computability theory, and algorithms. If you want to learn computer security, you want to learn computer architecture, operating systems, networking, etc.
The top down approach is systematic, but it might not prepare you for the real world. For that, you need internships and CTFs. They are reality checks with a fast feedback loop, showing you very quickly which skills or knowledge you are lacking or need to improve. They are also fun.
On the off chance that anyone here isn't familiar with 'cryptbe, he broke the Flickr URL signing scheme, he and Juliano Rizzo discovered and worked out the BEAST TLS attack (which, to hear Kenny Paterson describe it, more or less set the template for the next 10 years of applied TLS attack research), and then discovered CRIME, which is the first in a line of compression oracle attacks. He works on Daniel Bleichenbacher's team now doing Tink and Wycheproof.
Hmm, that came out too harsh, sorry about that. I could clearly read your good intentions. It's just that at the same time, you bowed to the zeitgeist of "don't do this for real". It's kind of a ritual I see everywhere. Every time anyone writes about crypto, they feel obligated to say "oh by the way this is scary stuff".
Cryptography seems to be the only domain where this happens, even though many other kinds of code are just as critical: parsers, readers & players, network code… anything that reads potentially hostile input. I hate that double standard.
> If the rules were so simple, how come your library had a signature bypass vulnerability whose root cause you did NOT understand until I explained it?
Because no single source actually explained what the rules were. I simply didn't know them. I didn't even know how to make a proper test suite when I introduced the vulnerability (in Spring 2017, well before v1.0.0).
By the time the vulnerability was discovered (in June 2018), I had a much better understanding of those rules, which allowed me to find the vulnerability from an odd report (which by itself wasn't a bug). The rule being "if you don't understand something, dig deeper". In other words, "don't mess with maths you don't understand".
Now however, after 4 years of practice, I think I have a rather keen understanding of the rules. And what do you know, they turned out to be fairly simple. Even side effects: you can know you've avoided all possible timing attacks. It's only a matter of making sure nothing flows from secrets to timings. (Side effects play a very small role in making code non-obvious. Almost negligible, compared to optimisations.)
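A minimal illustration of that rule in Python (function names are mine): comparing a secret MAC tag with `==` leaks through timing, because the comparison returns at the first mismatching byte, whereas `hmac.compare_digest` takes time independent of the contents.

```python
import hmac

def verify_tag_leaky(expected: bytes, received: bytes) -> bool:
    # BAD: information flows from the secret to timing, since ==
    # short-circuits at the first mismatching byte.
    return expected == received

def verify_tag(expected: bytes, received: bytes) -> bool:
    # GOOD: runtime does not depend on where (or whether) bytes differ.
    return hmac.compare_digest(expected, received)
```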
---
Re CTF/challenges, my apologies. I should have qualified. I cannot deny they help you get a feel. I didn't go this route, but I reckon it's a valid one, especially if it's fun for you. Beyond that initial kick however, it really depends what you mean to do. Do you want to make crypto? Or do you want to break crypto?
If you want to break crypto (which would be extremely useful when dealing with existing systems), then sure, actually breaking broken crypto would be a huge help. Hands on experience.
I however want to make crypto. I don't care for legacy, I just want to either select or make something simple that suits my needs (something I'm currently doing at work, incidentally). And for this, I strongly believe that learning to break crypto is unnecessary. And beyond the very basics, inefficient. A faster way to build crypto is learning about testing and proofs. (I believe Colin Percival has a similar opinion https://www.daemonology.net/blog/2013-06-17-crypto-science-n...)
I don't think we need to go full top-down, however. My focus would be on mathematical proofs (proofs about algebra, about discrete maths, and about programs).
>I of course agree with all of this, but as someone pretty much at the bottom of the food chain who just wants to encrypt some data, there's often no libraries that safely glue the primitives together in the way that I require.
>I hope this doesn't come off as entitled, but I feel like the best way to get people to stop rolling their own crypto is to provide more/better libraries.
Author here. I mentioned libsodium [1] and Tink [2]. We started Tink because we want to provide more/better libraries.
>Granted, this is getting better, for example NaCl's crypto_box[0] is awesome and very hard to misuse. But say you want forward secrecy now. chirp, tumbleweed.
It looks like you want to build an interactive protocol. I'm not sure if libsodium has a solution, but Tink doesn't. So far we've been focusing on encryption at rest. Can you tell me more about your use case?
> It looks like you want to build an interactive protocol. I'm not sure if libsodium has a solution, but Tink doesn't. So far we've been focusing on encryption at rest. Can you tell me more about your use case?
I don't have any particular plans of something I want to build at the moment. I was just using group chat as an example where, on one hand, you're told not to roll your own crypto, but on the other you can't really just use someone else's crypto because there's no way to just use it.
Say I want to use an encrypted transport; that's trivial, I can just use TLS relatively easily. For the most straightforward case I can do `http.Get("https://example.org")` in Go and not have to worry at all about the crypto.
If I want E2E, there's libsodium and tink, yes. But then am I "allowed" to build e.g. a forward secrecy scheme using ephemeral keys with these libraries? On one hand I know enough about crypto that I could do that, otoh, I also know enough about crypto that doing so would already make me uncomfortable.
So what I dream about is to have something like "ssh/tls" for E2E. Something like libsignal generalized. Of course you will have to do some key management, and it will never be as simple as `http.Get("https://example.org")`.
It found the bug, man. You didn't even read the advisory. It was credited to "Nicholas Carlini using Claude, Anthropic".