Gormo's comments | Hacker News

Ubiquiti really should be the model for every company selling hardware today.

Their business model is a straightforward "sell a good product at a reasonable price" approach, and they seem to be quite successful at it without needing to resort to gimmickry, subscription fees, or other even less savory ways of monetizing other people's activities.


I'm still pretty sour that they removed the ability to self-host the NVR and you have to use their cloud solution now.

You’re talking about when you used to be able to run Unifi Video on your own distro? Yeah that was good, but you definitely don’t have to “use their cloud solution” for NVR now; you buy the box, the video is stored on the box.

Yeah, you used to be able to run Unifi Video on your own hardware. Now you have to use their box and access it through their cloud. I had notifications working in the self-hosted version with VPN.

> I don't think they baited and switched?

Technically true, because bait-and-switch refers merely to advertising an attractive product offer in order to lure people into a pitch for a different product.

In this case, they actually sold a product, then maliciously altered it after the sale to change its behavior. That makes this a much more serious offense, equivalent to trespass, vandalism, or possibly even burglary.

It's equivalent to selling someone a house that includes a secret entrance that you retain access to, so you can surreptitiously enter the house to steal the new homeowners' property after they've moved in.


> I've always been told it's called business.

The "business" ended when the sale transaction concluded. The fact that you were the seller in that past transaction doesn't entitle you to vandalize goods that now belong to someone else.

This is just crime trying to disguise itself as legitimate business, as scams often do.


> The "business" ended when the sale transaction concluded.

Actually, it didn't, though not in a way that makes the rest of your post incorrect.

Various laws and regulations state that the seller has responsibilities to the buyer after the initial transaction has completed, one of which Bambu might¹ be transgressing by removing features that people were led to believe were part of the product, and could reasonably expect to remain part of the product, at the time of the sale.

--------

[1] This has not been tested in court, and I'm no lawyer, so take my idea of what is the case with the requisite serving of condiment.


I've been treated this way numerous times. First example: FlightRadar. I bought the app, they sunsetted it and created a "new one" with monthly payments.

Neither analog nor emojis. An analog emoji would just be a picture printed on paper.


¬_¬


> no, but in first century bc and after that the roman world was connected enough that rich young romans were doing their version of the grand tour.

So "yes", then.


The original question is about two centuries earlier. Time matters.


Define tourism, though. Even quite a bit before 300 BC, Herodotus went to Egypt for not particularly practical reasons.


To write a travel guide papyrus. :)


> My strong hunch is they are moving to closed source because it is now trivial to copy a product with AI clean rooms. Which, tbf, is a totally valid reason to move closed source.

The "clean room" part of clean-room reverse engineering implies that there is no exposure to the original copyrighted code on the part of those doing the reimplementation, whether human developers or AI. Traditionally, if you're working off the source code itself, you have one party translate the source code back into a design document specifying behavior, and then you have another party implement that design spec with original code.

If you already have a running copy of the software to model the behavior off of, then you don't need the original source code in the first place. So going closed source will have zero effect on the capacity of AI tools to be used for clean room reverse engineering: all you need is the runtime.
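
To make that concrete, here's a minimal sketch (the program name and the observed behaviors are invented) of the "spec" side of a source-free clean-room effort: party A probes the runtime as a black box, and party B implements against the recorded observations without ever seeing the original code.

    # Hypothetical sketch: party A derives a behavioral spec purely from
    # the running program, never from its source code.
    import subprocess

    def observe(args, stdin=b""):
        # Run the original program as a black box and record its output.
        result = subprocess.run(["./original-tool"] + args,
                                input=stdin, capture_output=True)
        return result.stdout

    # Party A records observations into a spec / test suite, e.g.:
    #   observe(["--sum", "2", "3"]) == b"5\n"
    # Party B, who never sees the original code, writes an entirely
    # original implementation satisfying those recorded behaviors.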

> But I'd want to see more adoption of something like the Ship of Theseus license (https://github.com/tilework-tech/nori-skillsets/pull/465/cha...) before giving up on open source entirely

This license doesn't seem valid: a license can't redefine what qualifies as a derivative work. That's determined by copyright law itself, and if copyright law says that a clean-room reimplementation isn't a derivative work, then it isn't restricted by copyright, so doesn't need a license in the first place.


> Now this is no longer viable and in fact it is simply helping competitors. So why do it then?

> The only open source that will remain will be the real open source projects that are true to the ethos.

Well, the second point seems like the answer to the previous question. The original model of monetizing FOSS -- support contracts, risk indemnification, etc., for an otherwise functionally equivalent product -- will still remain viable.

But those trying to thread the needle of using open source to push a "freemium" model are now going to hit a wall: if you were withholding features from the community version in order to paywall them for the premium version, and AI has now made it easy for users to add those features back without paying you, then you're screwed. The people who were going to use AI to bypass your paywall were never going to be your customers anyway, but you no longer have the differentiator that put you ahead of the competitors who were closed-source to begin with, for the customers who are willing to pay.

I originally deployed Cal.com because I wanted an open-source solution. But now, why would I choose a closed-source Cal.com over Calendly? If I'm forced to go SaaS, I'll probably go with the more widely used Calendly. If I'm not forced to go SaaS, I'll forego them both, and go back to something like EasyAppointments, knowing that I won't be in conflict with the authors if I choose to add my own "premium" features to it, whether with AI or by hand. All Cal.com did here was remove any chance that I'd ever pay them anything.


EasyAppointments (https://github.com/alextselegidis/easyappointments) is still FOSS. We managed our team calendars on it at a previous company, but at my current company I rolled out Cal.com for its apparently better support and sophistication. Most of that sophistication turned out to be bloat: successive releases added features we didn't care about while increasingly putting previously open features, like team calendars, behind a paywall.

This move by Cal.com seems to be transparently an attempt to maintain that paywall against users who'd otherwise just use LLMs to remove it. I guess it's back to EasyAppointments, which still seems to work just fine.


But doesn't AI ultimately obviate "move fast and break things" by making it easier to move fast without breaking things?


Not at all, no; AI makes it harder to not break things, and it takes a lot of work to not break things.


It does take a lot of work not to break things. That's why "move fast" and "break things" are traditionally closely coupled: it's hard to avoid breaking things without slowing down.

But why would responsible AI users -- actual engineers using it to accelerate grunt work, not vibe coders -- not use the AI tooling to increase their capacity to do all of the work it takes to avoid breaking things while still moving fast, relatively speaking?

Testing a new incremental feature against the entire extant codebase, not just the bits of it that they had the bandwidth to tackle within the deadline, seems like exactly the sort of thing well-disciplined engineering teams would use AI to do.


There are a couple of points of misunderstanding here.

For one, you architect your codebase into separate layers and logical chunks that are self-contained and can be reasoned about independently. That's not always possible, but you draw as many firm boundaries as you can. You don't ever want to be in the position where you have to test an entire codebase against your new change. That's a horrible nightmare scenario.

So you don't "test as much of the codebase as you have time for"; you write tests for your code and for the interfaces between it and other systems. Maybe integration or FE tests, depending on what you have.

So testing against a whole codebase is rarely the problem, and if it is, you have bigger issues.
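
As a minimal sketch (the module and names are invented), testing at a boundary instead of against the whole codebase looks like this:

    from decimal import Decimal

    # Hypothetical boundary: the rest of the system only ever calls this
    # function, so tests target the interface, not everything behind it.
    def compute_invoice_total(line_items):
        return sum((Decimal(item["price"]) * item["qty"]
                    for item in line_items), Decimal("0"))

    def test_compute_invoice_total():
        items = [{"price": "9.99", "qty": 2}, {"price": "0.01", "qty": 1}]
        assert compute_invoice_total(items) == Decimal("19.99")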

Also, LLMs don't make mistakes like humans do. They fuck up in weird unpredictable ways that mean you kinda have to treat them like a hostile adversary trying to sneak in subtle backdoors. It slows things down.

Also, actually writing code is usually the fast and easy part. It's all the other bits -- getting the requirements, building mockups, planning, review, standing up new infra etc etc etc. LLMs can't help with most of that.


> For one, you architect your codebase into separate layers and logical chunks that are self-contained and can be reasoned about independently. That's not always possible, but you draw as many firm boundaries as you can. You don't ever want to be in the position where you have to test an entire codebase against your new change. That's a horrible nightmare scenario.

Right, all of this goes without saying.

> So testing against a whole codebase is rarely the problem, and if it is, you have bigger issues.

Read "whole codebase" as a qualitative descriptor, not a quantitative one, where the codebase as deployed is the reference point for the overall business logic flow, including where different processes interact with or block each other.

The point is to use the AI tools to ensure that new features and functionality are implemented in a way that is consistent with the technical and business constraints that emerge from the entire tech stack, precisely so that adding something new in context A doesn't break functionality in context B.

> Also, actually writing code is usually the fast and easy part. It's all the other bits -- getting the requirements, building mockups, planning, review, standing up new infra etc etc etc. LLMs can't help with most of that.

Yes, that's true, but those are the perennial challenges in doing solution design before you even get to implementation. The "break things" problem happens when solutions for context-bound problems are implemented in ways that lead out of their context -- narrowly-focused goals are often pursued in ways that create externalities elsewhere in the organization, precisely because of the limited focus available to the people working on those goals.

The point here is that there's now the possibility for the AI tools to make much broader contextual awareness available as part of the solution design and implementation phase of every narrowly-targeted goal. If the AI has a reasonably accurate model of how systems and solutions affect each other in the broader organization, it can offer predictions of how any proposed new feature might impact all of the other business functions, and do so in near real time, ultimately heading off 90% of the "break things" impacts that you'd otherwise need multiple rounds of meetings, testing sessions, buy-ins, and sign-offs to avoid. That's what would get you moving fast with much less collateral damage.


> > If Mythos continues to find exploits so long as you keep throwing money at it, security is reduced to a brutally simple equation: to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them.

But this has always been the reality of security: it's always been fundamentally an economic question about which party has stronger incentives and greater resources than the other. The increasing sophistication of AI is available to both parties equally, so I don't see how AI in itself fundamentally changes the equation.
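
A toy illustration of that break-even framing (all figures are made-up assumptions): if AI cuts the cost of exploit discovery, it cuts it symmetrically, so the relative position of the two parties doesn't move.

    # Toy sketch; every number here is an invented assumption.
    defender_tokens_per_bug = 50_000  # cost to find and patch one exploit
    attacker_tokens_per_bug = 50_000  # cost for an attacker to find it

    ai_speedup = 10  # suppose AI makes discovery 10x cheaper for everyone
    print(defender_tokens_per_bug / ai_speedup)  # 5000.0
    print(attacker_tokens_per_bug / ai_speedup)  # 5000.0
    # The ratio is unchanged: the outcome still turns on which side has
    # stronger incentives and more resources, not on AI per se.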

