Apple makes a nice distinction between their "app layer" (iCloud Drive, Messages, etc.) and the OS login. That would work fine for Windows power users, and for the most part Windows has already had it (your "store" login). But to require the cloud to replace your login, the cloud has to be essential to the functioning of Windows, and you have to explain the security implications clearly - and it's not clear that either of those things happened.

For instance, almost none of the useful settings from win32 apps sync: migrating to a new PC is painful, your apps don't move, your settings are all missing. It takes weeks; you don't just log in and have it all back. So the idea that it makes all your settings sync is maybe 10% true.

The argument for this online account (vs. just a container for apps) is that a few Windows appearance settings must always be synced, or that you want things like your BitLocker keys saved in the cloud (which probably makes them visible to the FBI or whoever else). In the end it's a pretty weak argument - Grandma doesn't need BitLocker, and the people who do need it want a clear explanation. A lot of the rest could live in a "Microsoft apps" credential layer: Edge, OneDrive, Office, etc.

I want to feel like I can log in to a recovery console and fix a bad partition. I want to keep using the same username across Linux and Windows. I want to recover a router with the old laptop that has actual Ethernet - and who knows whether it has cached credentials? My Microsoft account is my least-used one, and who knows whether it's secure?

One last thing: logging in with biometrics is amazing, but why must I use a low-security PIN in place of my pre-existing password?

Please fix it all.


    why must I use a low-security PIN in place of my pre-existing password?

AFAIK, all characters that are allowed in a user password are also allowed in device PIN codes. Knowing Microsoft, I'm sure there are domain policies to alter/restrict this. And the idea behind it is sound: the PIN is tied to a single device, so even if someone watches you enter it (or captures it with a keylogger), they can't go to a different machine or an online portal and reuse the captured credential there.


When setting up the PIN you can pick for it to be alphanumeric (there is an option for it), and then it acts just like a password field with a silly name.

The reason it's tied to the device isn't to protect against over-the-shoulder watchers; it's that the key stored in the system is unique from machine to machine, so you can't lift the key from one machine and use it on another. Maybe not that useful for a PIN, but it does make it harder to use a stored key in place of a biometric one, so a single compromised key doesn't leave every system you've ever logged into vulnerable to a key-auth attack.
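For the curious, here's a crude model of that device binding. Real Windows Hello keeps the secret inside the TPM; this sketch just mixes a per-device secret into the derivation, and all the names are illustrative:

    import hashlib, hmac, os

    # Toy model of a device-bound PIN: the stored verifier is derived from
    # the PIN *and* a per-device secret (stand-in for the TPM), so material
    # lifted from one machine is useless on another even if the user reuses
    # the same PIN everywhere.
    device_secret_a = os.urandom(32)   # never leaves device A
    device_secret_b = os.urandom(32)   # never leaves device B

    def stored_verifier(pin: str, device_secret: bytes) -> bytes:
        return hmac.new(device_secret, pin.encode(), hashlib.sha256).digest()

    pin = "1234"  # same PIN on both devices
    print(stored_verifier(pin, device_secret_a) ==
          stored_verifier(pin, device_secret_b))   # False: verifiers differ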


Because nobody would use the same PIN for different devices. This is a farcical argument.

Those are strong words with nothing to back them up. Please, feel free to explain in your own words what threat model this device PIN is defending against and how it fails at it.

The argument presented was that a PIN is better because it is only used for that one device - which is false for 99% or more of users.

This "early velocity only" approach seems like a problem - how do you know with 5-minute training runs that you aren't affecting the overall asymptote? e.g., what if the AI picks a quantizer that happens to be faster in the first five minutes, but has a big noise floor where it can't make more progress?

Yes, it's greedy, so it may hit local optima. You can fit learning curves and extrapolate to avoid that problem - run a candidate long enough to be reasonably sure it's a dead end, and periodically revive past candidates to run longer. See earlier hyperparameter-optimization work like freeze-thaw: https://arxiv.org/abs/1406.3896
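A minimal sketch of the curve-fitting idea (freeze-thaw proper uses Gaussian-process posteriors over curves; this is just a least-squares power law on made-up data):

    import numpy as np
    from scipy.optimize import curve_fit

    # Fit a saturating curve to the first few minutes of a run and rank
    # candidates by the extrapolated floor instead of early velocity alone.
    def power_law(t, a, b, c):
        # loss(t) ~ c + a * t^(-b); c is the predicted asymptote
        return c + a * np.power(t, -b)

    def predicted_asymptote(times, losses):
        (a, b, c), _ = curve_fit(power_law, times, losses,
                                 p0=(1.0, 0.5, losses[-1]), maxfev=10000)
        return c

    t = np.linspace(1, 5, 20)            # minutes of training
    loss_a = 0.50 + 1.0 * t ** -1.5      # fast start, high floor
    loss_b = 0.20 + 1.2 * t ** -0.6      # slow start, low floor
    print(predicted_asymptote(t, loss_a))   # ~0.5: early leader, dead end
    print(predicted_asymptote(t, loss_b))   # ~0.2: slower but better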

It's easy to lie to an OS about your age because it's a single-user experience, and if your parents allow you to lie (or don't know), that's all it takes. Social networks are much better equipped to estimate age because they have a simple double-check: most kids follow other kids in their grade level (sketch after the next paragraph).

The patches on top of this are really bad. For instance, we are seeing "AI" biometric video detectors with a margin of error of 5-7 years (meaning the validation studies say that when the AI guesses 23-25, you can be considered 18+) - totally inadequate for the job this new legislation demands.
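Back-of-envelope version of that grade-level double-check (the data and threshold are hypothetical, just to show how cheap the signal is):

    from statistics import median

    # A claimed age far from the median age of followed accounts is suspect.
    claimed_age = 21
    followed_ages = [12, 13, 13, 12, 14, 13, 35, 13]   # mostly classmates

    estimate = median(followed_ages)        # 13
    if abs(claimed_age - estimate) > 3:     # arbitrary threshold
        print(f"claimed {claimed_age}, graph says ~{estimate}: flag it")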


Math classes teach a lot of Taylor/Maclaurin series (and trig evaluation is sometimes taught as "CORDIC", an old method too), but these are not used much in actual FPUs and libraries. Maybe we should update the curricula so people learn the better ways.
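For reference, textbook CORDIC is just shift-and-add against an arctangent table. A toy floating-point version (real implementations use fixed-point, where the multiplies by 2^-i become bit shifts):

    import math

    # Toy CORDIC (rotation mode) for sin/cos; converges for |theta| <~ 1.74 rad.
    N = 32
    ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
    K = math.prod(1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i)) for i in range(N))

    def cordic_sincos(theta):
        x, y, z = 1.0, 0.0, theta
        for i in range(N):
            d = 1.0 if z >= 0.0 else -1.0
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * ANGLES[i]
        return y * K, x * K   # (sin(theta), cos(theta))

    print(cordic_sincos(0.5))   # ~(0.4794, 0.8776)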


Taylor series make a lot more sense in a math class, right? They're straightforward, and (just for example) when you're thinking about whether a series converges in the limit, why would you care about the quality of the approximation after a set number of steps?


Not quite. The point of Taylor's theorem is that the n-th degree Taylor polynomial around a is the best n-th degree polynomial approximation near a. It doesn't say anything about how good the approximation is farther from a, though. In fact, when you use a Taylor approximation in math, you usually don't care about the infinite series, only the finite truncation.


Taylor series have quite different convergence behavior from a general polynomial approximation (or a polynomial fit, for that matter). Many papers have been written that confuse the two.

For example, 1/(x+2) has a pole at x = -2, so its Taylor series around 0 will not converge for |x| > 2. A polynomial approximation on a range 0 < x < L, by contrast, will converge for any L.
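A quick numerical check (degree and interval picked arbitrarily; the truncated Taylor sum blows up past x = 2 while a plain Chebyshev fit stays accurate on the whole range):

    import numpy as np
    from numpy.polynomial import Chebyshev

    f = lambda x: 1.0 / (x + 2.0)   # pole at x = -2
    deg, L = 15, 4.0
    x = np.linspace(0.0, L, 1000)

    # Taylor at 0: 1/(x+2) = sum_k (-1)^k x^k / 2^(k+1)
    taylor = sum((-1) ** k * x ** k / 2.0 ** (k + 1) for k in range(deg + 1))
    cheb = Chebyshev.fit(x, f(x), deg)

    print(np.max(np.abs(taylor - f(x))))   # ~1e4: diverges for x > 2
    print(np.max(np.abs(cheb(x) - f(x))))  # ~1e-9: fine out to x = 4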


The original paper (https://arxiv.org/pdf/2310.11453, fig. 1, bottom right) seems to say it needs about 4-5x the parameters of an fp16 model. You can build it and run some models, but the selection is limited because everything has to be trained from scratch. I imagine inference is faster compared with modern PTQ (4- and 8-bit quants), though.


I wish there were more ways to specify whether the Windows filesystem (/mnt/c) should be mounted in a WSL2 instance - it's generally all-or-nothing. In cases where I want WSL2 to function as a "container" isolated from my desktop, I use a different Windows user, just in case.
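For what it's worth, the all-or-nothing switch lives in /etc/wsl.conf inside the distro (takes effect after a wsl --shutdown):

    [automount]
    enabled = false   # don't mount Windows drives under /mnt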


It's sad to see an LLM take over a blog, because you can see the line: before 2026, it's an interesting person you would like to talk to; after 2026, it's generic LLM marketing-voice copy.


Thanks - which of my pre-AI blog posts is your favorite?


Sideloading without automatic updates is not very useful


That's one of the arguments for sideloading. I don't want to use your new liquid ass design.


    Liquid ass design
This made me laugh. I agree that's a plus! Auto-updates are mostly bad - look at the state of extensions on VS Code: broad permissions combined with silent updates are scary.


why not?


The days you move between categories can establish your birthdate, which is a lot of bits if this is done at the individual level (basically it's a great start on a supercookie).
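Rough arithmetic on "a lot of bits" (the 80-year range and population figure are assumptions):

    import math

    # Watching the exact day a user flips from "13-17" to "18+" pins
    # their birthdate to a single day.
    days = 80 * 365                 # possible birthdates, ~uniform
    print(math.log2(days))          # ~14.8 bits from birthdate alone

    # ~33 bits uniquely identifies one person worldwide, so a birthdate
    # is nearly half a full fingerprint before any other signal.
    print(math.log2(8e9))           # ~32.9 bits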


You know, LLMs could do automated code reviews for each update to avoid things like this. It would be much better than unexamined updates.

