Yes, it changes the nature of the work. Back when you started coding, there were people experiencing the same thing about the shift to higher-level languages. What some of them liked was the efficiency and aesthetic of using just the right assembly-language trick, and good compilers with high-quality instruction selection took that away. I'm sure there were programmers who missed the days of punching hex directly into memory.
We start to understand those old fogeys we blew past when we were young once we reach their age. It's the way of the world.
If the AI is good enough to truly implement the whole thing to a similar level of reliability without copying it, then who cares? At that point you should be able to decompile any program you want and find enough information inside that an AI can write a similar-quality program from vague information about the call graph. We've transcended copyright in computer code.
If it can't, and it costs a bunch of money to clean it up, then it's the same as always.
OTOH, if what is actually happening is that it's just rewording the existing code so it looks different, then it's still going to run afoul of copyright. You can't just rewrite Harry Potter with different words.
Note that even in Google v. Oracle it was important that they didn't need the actual code; the headers giving the function signatures were enough. Yes, it's true that a clean room isn't required, but when you have an AI and you can show that it can't do it a second time without looking at the source (not just the function declarations), that's pretty strong evidence.
That's assuming you won't need anyone to manage the purchased software platform and its integrations, and that you would need a full-time engineer to maintain your own version.
Could you clarify exactly what you think is an illegal tie-in? Because it seems like what you are upset about is literally the opposite: Anthropic unbundling their offerings so you aren't required to buy the ability to offer third-party access when you purchase the ability to use Claude Code and their other models. Unless I really misunderstand you, your complaint is literally that they unbundled.
The laws prohibiting tie-ins don't make it illegal to sell two products that work well together. That's literally what the laws are designed to make you do: separate products into separate pieces. The problem tie-in laws were designed to combat was situations like Microsoft making a popular OS, then making a mediocre spreadsheet program and pushing the cost of that spreadsheet program into the cost of buying the OS. That way consumers would go "well, it's expensive, but I get Excel with it, so it's OK," and even if someone else made a slightly better spreadsheet, they never got the chance to convince users, because users had to buy it all as one package.
Anthropic would be doing something much closer to that if they did what you wanted. They'd be saying: hey, we have this neat Claude Code thing you all want to use, but you can't buy it without also purchasing third-party access. Now some company offering a cheaper/better third-party usage product doesn't get the chance to convince you, because Anthropic forced you to buy that just to get Claude Code.
Ultimately, this change unbundled products, the opposite of a tie-in. What is upsetting about it is that it no longer feels like you are getting a good deal, because you now have to fork over a bunch more cash to keep getting what you want. But that's not illegal; that's just not offering good value for money.
I guess I don't even really understand the objection. That's how ALL mathematics works. You specify some axioms or a construction and then reason about the objects that satisfy those constraints. Some of them, like the complex numbers, turn out to be particularly useful.
But it's not fundamentally any different from what we do with the natural numbers. Those just feel more familiar to you.
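To make that concrete, the standard construction of the complex numbers is nothing mysterious: ordered pairs of reals with a stipulated multiplication rule, from which everything else is derived.

```latex
\[
\mathbb{C} := \mathbb{R}^2, \qquad
(a,b) + (c,d) := (a+c,\; b+d), \qquad
(a,b)\cdot(c,d) := (ac - bd,\; ad + bc).
\]
\[
\text{Writing } i := (0,1) \text{ gives }
i^2 = (0,1)\cdot(0,1) = (0\cdot 0 - 1\cdot 1,\; 0\cdot 1 + 1\cdot 0) = (-1, 0),
\]
```

i.e. \(i^2 = -1\) follows from the construction rather than being an article of faith, exactly the same move as building the naturals from axioms.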
Whoever wrote this just doesn't understand who Apple's main customers really are. Yes, devs may be a high-impact customer base, but most of Apple's customers are people like my mom, who struggles with the difference between Gmail the app, Gmail the web page, and Gmail in Apple Mail, and who is reasonably worried about scams and viruses because she knows she isn't tech-savvy enough to spot them. If she is going to run AI on her Apple products, it can't be "well, it probably won't delete your data." It needs to be something she can be sure is safe and is limited to the access she gives it.
That's a really tough problem. I'm not even sure Google can pull it off yet.
What is the use case for using SSH at all where you don't need to be resistant to timing analysis? Either it's not sensitive and you can use telnet (if necessary, after using SSH to authenticate), or the game (or other stuff on the connection) might be sensitive and you need traffic-analysis resistance.
If you get clever and write a client that ensures sensitive data like passwords or email is sent in a burst, you could just use an encryption library for that data instead.
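A toy sketch of the burst idea (the class and transport interface here are illustrative, not a real protocol): keystrokes are accumulated locally and only hit the wire in a single write, so an on-path observer sees one packet rather than one per keypress.

```python
# Sketch: buffer a sensitive field locally and flush it in one burst.
# A per-keystroke client would call transport.sendall() once per key,
# leaking inter-keystroke timing; this one leaks only the total length.

class BurstSender:
    def __init__(self, transport):
        # `transport` is anything with a sendall()-style method,
        # e.g. a socket wrapped by an encryption library.
        self.transport = transport
        self._buf = bytearray()

    def feed(self, ch: str) -> None:
        """Accumulate a keystroke locally; nothing hits the wire yet."""
        self._buf.extend(ch.encode("utf-8"))

    def flush(self) -> int:
        """Send everything in one write, then wipe the local buffer."""
        n = len(self._buf)
        self.transport.sendall(bytes(self._buf))
        self._buf = bytearray()  # drop the plaintext promptly
        return n
```

The design point is simply that the batching happens above the transport, so it works with whatever encryption layer you pick.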
Don't let this article put blinders on you. SSH does much more than obfuscate keypress timings. Not needing the chaff means you can turn it off and keep all the other benefits; it doesn't mean "revert to telnet."
Lots of the real-world vulnerabilities out there exist exactly because people chose to support a range of crypto algorithms.
Sure, if it's an internal tool you can recompile both ends and force a universal update. But for anything else you need to stay compatible with existing clients, and any time you allow negotiation of the cipher suite you open yourself up to quite a few subtle attacks. I'm not saying Go's choice is clearly a good one, but I don't think it's obviously wrong.
To clarify the point in the other reply: imagine it sent one packet per keystroke. Now anyone sitting on the network gets a rough measurement of the delay between your keystrokes. If you are entering a password for something (perhaps not the initial auth), it can guess how many characters it is, and it turns out there are systematic patterns in how the delays relate to the keys pressed (letters typed with the same finger, for example, have longer delays between them). Given the redundancy in most text, and especially in structured input, that's a serious security threat.
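A minimal sketch of the observer's side of this (the timestamps and the 120 ms threshold are made-up illustrative values, not measured data): packet capture hands the attacker one timestamp per keystroke, and the delays, plus a crude same-finger heuristic, fall out for free.

```python
# Toy model of the passive attack: with one packet per keystroke,
# an on-path observer recovers inter-keystroke delays directly from
# packet timestamps, plus the field length from the packet count.

def inter_keystroke_delays(timestamps):
    """Differences between consecutive packet timestamps (seconds)."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def guess_same_finger(delays, threshold=0.120):
    """Crude heuristic: flag digraphs slower than `threshold` seconds
    as likely typed with the same finger."""
    return [d > threshold for d in delays]

# Observer's view of a 6-character password: 6 packets, 5 delays.
ts = [0.000, 0.095, 0.240, 0.321, 0.402, 0.560]
delays = inter_keystroke_delays(ts)
print(len(delays) + 1)            # packet count leaks the length outright
print(guess_same_finger(delays))  # coarse per-digraph classification
```

Real attacks fit statistical models over many such traces, but even this toy version shows why batching or chaff packets matter.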
I can't think of anything worse than demanding Amazon decide what is counterfeit or violates regulations and police that rule. The law on both those points is far too complex for the result to be anything but Amazon blocking whatever the big brands tell them to, protecting those brands from competition.
Amazon is essentially a logistics company with a search engine. It doesn't make any more sense to have them enforce regulations or anti-counterfeiting rules than it would to make UPS or Google do it. It's not like they hide who the seller is on any item (it's listed under "sold by").
What you're complaining about is a fundamental consequence of anything that lowers the barriers to selling goods. You once needed to buy a storefront to sell retail goods; later you at least needed sufficient name recognition for people to visit your website. That investment gave regulators, and anyone whose goods were being counterfeited, assets to seize.
But just like making it easy for every citizen to publish their thoughts means we see lots of hate and dumb shit online, anything that lowers the barriers to selling retail goods (in general a good thing) will make it easy to sell counterfeit or defective crap.
In the long run, I suspect tech will make reputable third-party evaluations easier to access, but let's not blame Amazon for not becoming an arm of the state and judging what is and isn't legal.
It's far easier and more efficient to have the seller be responsible for what they sell than to have every buyer learn the relevant regulations and research whether any potential purchase complies with them.
And regulations are necessary, since many sellers have no ethics or morals and simply want to sell.
The cost to the individual can be huge (e.g. cancer, a home burnt down), and to society as well (environmental damage, etc.).
I get the line of thought that "a simple product search engine like Amazon" shouldn't be held responsible for every single small item sold, but I think they should be. The information and power balance is incredibly lopsided here.
Don't forget that Amazon is one of the largest companies on the planet, to a large extent because they take this shortcut of "money first, responsibility later." So I do blame Amazon (among others). It's the old story of privatized profit while society takes on the risk and cost...