I'm unpersuaded by the assertion that closing the source is an effective security bulwark.
From that page:
> Today, AI can be pointed at an open source codebase and systematically scan it for vulnerabilities.
Yeah, and AI can also be pointed at closed source as soon as that source leaks. The threat has increased for both open and closed source by roughly the same amount.
In fact, open source benefits from white hat scanning for vulnerabilities, while closed source does not. So when there's a vuln in open source, there will likely be a shorter window between when it is known by attackers and when authors are alerted.
The HN discussion on the announcement is ~90% posts along the lines of "if a student can brute-force your FOSS for $100, they can do your proprietary code for $200" and "if it's that cheap to find exploits, why don't you just do it yourself before pushing the code to prod?"
I believe the reason they chose to close the source is just security theater for investors and clients: "Look at all these FOSS projects getting pwned; that's why you can trust us, because we're not FOSS." There is, of course, probably a negative correlation between closing source and security. I'd argue that the most secure operating systems, used in fintech, health, government, etc., got to be so secure specifically by allowing tens or hundreds of thousands of people to poke at their code, and then allowing thousands or tens of thousands of people to fix said vulns pro bono.
I'd be interested to see an estimate of the financial value of the volunteer work on, say, the Linux or various BSD kernels. Imagine the cost of PAYING to produce the modern Linux kernel. Millions and possibly billions of dollars just assuming average SWE compensation rates, I'd wager.
Too bad cal.com is too short-sighted to appreciate volunteers.
I think it's more prosaic: OSS is great for building a userbase but not great at generating revenue. So just wave the OSS flag while you build a userbase, then pull out whichever flimsy excuse seems workable at the time when you want to start step two of your enshittification plan.
Not only are they good at reading and writing machine code now, they are actively being used to turn video game cartridge dumps back into open source code the community can then compile for modern platforms.
If you believe they really did it for security, I have a very nice bridge to sell you for an extremely low price ...
Look, tech companies lie all the time to make their bad decisions sound less bad. Simple example: almost every "AI made us more efficient" announcement is really just a company making (unpopular) layoffs, but trying to brand them as being part of an "efficiency effort".
I'd bet $100 this company just wants to go closed source for business reasons, and (just like with the layoffs masquerading as "AI efficiency") AI is being used as the scapegoat.
Claude is already shockingly good at reverse engineering. Try it – it's really a step change. It has infinite patience which was always the limited resource in decompiling/deobfuscating most software.
It's SaaS though. You don't have access to the binary to decompile. There's only so much you can reverse-engineer through public URLs and APIs, especially if the SaaS uses any form of automatic detection of bot traffic.
Thank you. This is what the parent post was trying to say; I don't know why it is downvoted. AI or not, if the API endpoints are well secured, for example by using UUIDv7 identifiers instead of sequential IDs, then there is little the AI can gain from probing those endpoints.
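To make the UUIDv7 point concrete, here's a minimal sketch (my own illustration, not from the thread) of generating RFC 9562 UUIDv7 identifiers by hand. The `uuid7` helper is hypothetical, not a stdlib function in older Pythons; the point is that, unlike `/users/1`, `/users/2`, ..., such IDs can't be enumerated by walking an integer sequence:

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """Minimal UUIDv7 sketch per RFC 9562 (hypothetical helper).

    Layout: 48-bit Unix-ms timestamp | 4-bit version | 12 random bits
            | 2-bit variant | 62 random bits.
    """
    ts = time.time_ns() // 1_000_000                             # 48-bit ms timestamp
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF       # 12 random bits
    rand_b = int.from_bytes(os.urandom(8), "big") & (2**62 - 1)  # 62 random bits
    value = (ts & (2**48 - 1)) << 80    # timestamp in the top bits: sortable prefix
    value |= 0x7 << 76                  # version 7
    value |= rand_a << 64
    value |= 0b10 << 62                 # RFC variant bits
    value |= rand_b
    return uuid.UUID(int=value)

# The timestamp prefix keeps IDs roughly time-ordered (index-friendly),
# while the 74 random bits make guessing someone else's valid ID infeasible.
a = uuid7()
time.sleep(0.005)
b = uuid7()
assert a.version == 7 and a < b  # version field set; time-ordered
```

Note the trade-off the parent's point rests on: a leading timestamp is partly predictable, so the unguessability comes entirely from the random bits, not the prefix.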
According to that site, Robert Kennedy's speech on the night Martin Luther King was killed[1] was almost entirely the product of GenAI, as were both of Obama's inaugural addresses[1][2].
By this logic, I'd venture a guess that "AI" was also responsible for some of Shakespeare's most famous lines.
I think the origin is in the phrase, "Will the dogs eat the dog food?" which was common VC-speak in the '90s and '00s, referencing dog food commercials that once ran on TV, and meaning something like "this has been made to sound great in an internal powerpoint presentation, but will customers actually like it?"
In 2015, Marc Andreessen memorably said of Mixpanel's success at product-led growth: "The dogs are fucking jumping through the screen door to eat the dog food. And he hasn’t done any marketing yet."
https://www.newyorker.com/magazine/2015/05/18/tomorrows-adva...
That then led to the idea of "eating your own dog food", because if even you won't eat it, what credibility do you have saying that the other dogs will?
I can't even remember when I first heard this expression in CS. It feels like it was already an idiom when I was in university in the early 90s. It also felt like it was tapping into a general cultural background I already had growing up in California. It did not require any explanation.
Without being able to cite a specific TV ad or other urban legend sort of baseline, it clearly communicated that you hold yourself and your products to a higher standard. As a dog-food producer, you don't just meet the minimum requirements for legal sales, but you make it well enough to be fit for human consumption too.
It's in the same category as someone demonstrating that they could safely drink or breathe byproducts of some other industrial process. And, ironically, there was also a widely understood corollary that we could expect PR types to do something like this while secretly fearing that it would actually harm them.
It's not "the largest representable number" because you're not representing numbers in any rigorous sense. If I give you 64 bits, you can't tell me what number those bits represent (first, because the rules of the game are ambiguous - what if I give you 8 bytes that are a valid program in two different languages; and second, because even if you made the rules precise, you don't know which bitstrings correspond to programs that halt). And if I give you a number, you can't tell me which 64 bits represent that number or even if the number is representable, and that's true even for small numbers and even if I give you unbounded time.
It seems far more natural to say that you're representing programs rather than numbers. And you're asking, what is the largest finite output you can get from a program in today's programming languages that is 8 bytes or less. Which is also fun and interesting!
> If I give you 64 bits, you can't tell me what number those bits represent
You have to tell me the (non-cheating) programming language that the 64-bit program is written in as well.
> And you're asking, what is the largest finite output you can get from a program in today's programming languages that is 8 bytes or less.
That's what the post ends up saying, after first discussing conventional representations, and then explicitly widening the representations to programs in (non-cheating) languages.
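As an illustration of the widened game (my own example, not from the post): in Python, the 6-character expression `9**9**9` already evaluates to a number far beyond anything a fixed-width 64-bit representation can hold, and you can estimate its size without ever materializing it:

```python
import math

# 9**9**9 parses as 9**(9**9) = 9**387420489. Computing it outright
# would produce a few hundred million digits, so estimate the digit
# count via logarithms instead: digits(n) = floor(log10(n)) + 1.
digits = int(9**9 * math.log10(9)) + 1
print(digits)  # roughly 3.7e8 digits, from a 6-byte program

# Sanity check on a smaller case: the log-based digit count of 9**81
# matches the direct computation.
assert len(str(9**81)) == int(81 * math.log10(9)) + 1
```

This is exactly why "largest representable number" is the wrong frame: the 8 bytes aren't encoding the number, they're encoding a program whose output dwarfs the encoding.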