Hacker News | LauraMedia's comments

So this effectively means that if you buy a new phone and want to set it up, you'll have to do it tomorrow, because of an arbitrary flow Google created to protect its Play Store cut...

Is this really the state of AI in 2026?

It takes over your entire browser to center a div... and then fails to do so?


I usually would post it in our dev Slack chat and rant for a message or two about how many hours were lost "reverse-engineering" bad documentation. But I probably wouldn't post about it on here/BlueSky.


You are assuming: A) that everyone who saw this would go as far as posting about it publicly (and not just chuckle / send it to their peers privately), and B) that any post about this would reach you/HN and not be lost in the sea of new content.


They are not assuming everyone would do that.


Which is basically what the US also wants.


except with a different brand of fascism.


> I'm glad this happened with this particular non-disabled-organization. Because if this by chance had happened with the other non-disabled-organization that also provides such tools... then I would be out of e-mail, photos, documents, and phone OS.

This... sounds highly concerning


A while ago they changed their TOS from something along the lines of "We will never sell your data" to "Your data is safe".


Feels like I've been hearing "Update X makes Y worse" or "Update X destroys hardware Y" a lot more this year with Windows. Seems like they are fully embracing AI at Microsoft.


Given the horrible stability of Windows this year, it seems like Microsoft went all in on that


Isn't it odd? All those tech CEOs tell us that we won't be able to live in a world without AI, how AI will eventually be in every single app, service, or codebase...

And then they constantly try to shove it into their products, with no way to disable it. I'm assuming the user data would show that quite a lot of people would turn it off, so in order not to ruin your own statistics for the next shareholder/investor meeting, you need to force it on them.


> I'm assuming the user data would show that quite a lot of people would turn it off

You would be wrong — outside of the Hacker News bubble very few people mess with their default settings, in any app.


How can you justify such a wildly conclusive statement without providing any supporting information? It's not just the "HN bubble": recent articles on The Register discuss how there is no way to guarantee that the information provided to the 3rd-party LLM servers in the prompting is free from further disclosure. This raises a host of concerns for professionals with a non-delegable duty to safeguard client/patient information from disclosure. To the point that if the LLM captures that information, then by definition the attorney or physician has disclosed client/patient information, putting their license to practice law or medicine at risk.

Exacerbating the matter, there is zero disclosure on Mozilla's part detailing exactly what information is sent to 3rd-party servers as part of its AI rollout in FF 145. Would you risk your license on some FOMO AI rollout in FF when, unless you, as the lawyer or doctor, have stepped through each of the tens of thousands of lines of FF code associated with this new AI, you cannot meet your ethical obligation or answer the bar or medical board's inquiry on whether client/patient information has been sent to a 3rd party?

Without the ability to completely disable AI and turn off all submissions, Mozilla's "Trust Us" position doesn't allow anyone with such a duty to meet it. And this is before you even get to confidential, proprietary, or trade-secret-type information applicable in any professional setting.

These are all vexing questions from a legal standpoint.


Even if that's the case, I doubt usage rates are very high.

