Hacker News | spectraldrift's comments

This is a misleading article. It includes quotes that wildly mischaracterize the original sources, along with a lot of unrelated speculation on points that could easily have been researched.

From the article:

> However, Google's privacy policy — a document that users often blindly agree to after purchasing a device — makes it clear that videos can be captured when a device is offline. “That means you may not see a visual indicator when your camera is sending the video footage to our servers," the policy states.

I looked this up because no link was provided [1]. The full quote is "Some models of our cameras support recording of video footage while offline. For these cameras, video footage will be uploaded when the camera goes back online after the video footage has been recorded. That means you may not see a visual indicator when your camera is sending the video footage to our servers. However, in those instances, a visual indicator would have been visible when the camera was actually recording the video footage."

So in this case, the author conflates "offline" (no Internet connection) with "recording".

Another quote:

> Google didn’t immediately respond to questions from The Associated Press about how the footage of the masked person was captured while the camera was apparently disconnected

The camera was very clearly disconnected after the footage was recorded, because you can see the suspect begin to disconnect the camera.

There are many other examples of poor reporting in this article but I will leave it to the reader to find them.

[1] https://safety.google/intl/en_ca/products/nest/


Weird how they only share three hand-picked evals, ignoring the evals where they were left in the dust, like ARC-AGI-2. This post is so misleading that I don't even know whether to trust the numbers they did share. One is just a fraction of a percentage point away from Gemini 3 Pro, which is awfully convenient for marketing and easy to hide. Very open, OpenAI.


Not really that weird. This isn't intended to be a "general" model. This is a coding model, so they showed the coding evals. The assumption would be that, relative to GPT-5.1, non-coding evals would likely regress or be similar.

Like when advertising the new airliner, most people don't care about how fast it taxis.


Having never heard of mojo before, I found this article fascinating. It provides a great example of how a toy regex parser works and an excellent explanation of why vanilla regex tends to be slow. It also presents a novel solution: compiling the regex into regular code, which can then be optimized by the compiler.


this is literally how 'lex' works. the one written in 1987 by Vern Paxson.


The original is 'lex', written in 1975 by Mike Lesk and Eric Schmidt.

Yes, that Eric Schmidt, CEO of Google.

1987 was the clone, 'flex' :-)

It was "compiling the regex into regular code, which can then be optimized by the compiler" before the C programming language as we know it was created. I think 'lex' was compiling regexes to C before the C language even had 'struct' types, 'printf', or 'malloc'.
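Neither the article nor this thread shows code, but here is a rough sketch of what "compiling a regex into regular code" means in practice. The pattern and function name are invented for illustration: instead of interpreting the pattern `ab*c` through a generic regex engine at runtime, a tool like lex emits ordinary branching code that the host compiler can then optimize.

```python
def match_ab_star_c(s: str) -> bool:
    """Hand-compiled matcher for the regex ab*c (anchored full match)."""
    i = 0
    if i >= len(s) or s[i] != "a":      # literal 'a'
        return False
    i += 1
    while i < len(s) and s[i] == "b":   # 'b*': zero or more 'b's
        i += 1
    if i >= len(s) or s[i] != "c":      # literal 'c'
        return False
    i += 1
    return i == len(s)                  # must consume the whole input
```

There is no backtracking and no interpretation loop over pattern syntax; the pattern's structure has become the program's control flow, which is exactly the property that makes this approach fast.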


So I'm only 40 years behind! It's amazing how early innovations like this seamlessly fade into the background and can be taken for granted by folks like myself.


> Should You Use Postgres? Most of the time - yes

This made me wonder about a tangential statistic that would, in all likelihood, be impossible to derive:

If we looked at all database systems running at any given time, what proportion does each technology represent (e.g., Postgres vs. MySQL vs. [your favorite DB])? You could try to measure this in a few ways: bytes written/read, total rows, dollars of revenue served, etc.

It would be very challenging to land on a widely agreeable definition. We'd quickly get into the territory of what counts as a "database" and whether to include file systems, blockchains, or even paper. Still, it makes me wonder. I feel like such a question would be immensely interesting to answer.

Because then we might have a better definition of "most of the time."


SQLite likely dominates all other databases combined on the metrics you mentioned, I would guess by at least an order of magnitude.

Server side. Client side. iOS, iPad, Mac apps. Uses in every field. Uses in aerospace.

Just think for a moment: literally every photo and video taken on every iPhone (and, I would assume, Android as well) ends up stored, either directly or as sizable amounts of metadata, in a SQLite db.
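Part of why SQLite is everywhere: it's an embedded library, not a server. No daemon to run, no connection to configure; the whole database is a single file (or lives in memory). A minimal Python sketch, with the table name invented for illustration:

```python
import sqlite3

def count_photos() -> int:
    # An in-memory database; real apps point this at a file path instead.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE photos (id INTEGER PRIMARY KEY, taken_at TEXT)")
    conn.execute("INSERT INTO photos (taken_at) VALUES ('2024-01-01T12:00:00')")
    (n,) = conn.execute("SELECT COUNT(*) FROM photos").fetchone()
    conn.close()
    return n
```

That zero-administration property is what lets it ship invisibly inside phone apps, browsers, and aerospace firmware alike.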


Yes, it seems to be absent from this discussion, but maybe it should have been "it" the whole time as the default option. I wonder if it could attain similar throughput numbers; I bet the article would feel slightly sarcastic then, though.


This is the second article I've seen on taste here. It seems to me the author's definition of "tinkering" is primarily describing hyperfocused, repetitive behaviors found in neurodivergent individuals, potentially even a complex form of stimming. I think this is unrelated to taste.

I think good taste in engineering comes down to a mix of skill and knowledge. It isn't just about how you can reach a goal, but rather about having a solid internal map of the world and an understanding of which parts of the map you are unfamiliar with. To those lacking knowledge, the map can deceptively appear much smaller. Skill allows you to effectively find your way to the places you know you can go. With knowledge and skill, taste comes naturally. Those with bad taste, I've found, are those with limited knowledge of the vast universe of tools available and/or the lack of skill needed to utilize those tools effectively.


This sounds incredibly frustrating. But given that you're in the leadership seat, it's worth taking a hard look at yourself and asking what role you're playing in this, especially since it's unlikely people are this difficult for no reason. I am a bit skeptical of this account as written.

One hint: he was told he was meant to be the lead. That's a bit of a shitty promise to be given and taken away.

This guy is almost certainly operating from a place of "status injury." He sees you as the person who took his job.

So, some hard questions for self-reflection:

Knowing he felt slighted, did you ever try to build an alliance with him and acknowledge his expertise? Or did you just expect him to fall in line?

Are you showing him respect, or just demanding it because of your title?

Are his arguments over "trivial things" really trivial? Or is it his (unproductive) way of trying to assert the technical authority he feels you're ignoring?

Right now, your manager hears a personal problem ("This guy is a jerk to me"). That's why he's giving you the weak "don't let it bother you" response.

Stop making it personal. Use your leadership skills to actually lead this person. Try to fix the relationship. Give him ownership.

If that fails, you can go to your manager with a leadership problem ("I've tried A, B, and C to leverage his skills, but his behavior is still causing X business risk"). That is a problem a manager has to solve.


They already did it with Google's transformer architecture, so why not Google's open-source browser framework too? They're pretty much a fork of Google's good-faith open-source efforts at this point.


It seems odd to me that someone would write such a polished and comprehensive article and yet completely misunderstand the definition of the central topic.


That happens in system dynamics a lot, actually: there are many independently developed theories in many different disciplines that do not intertwine historically at all. I have met multiple people who work with systems mathematically on a professional level who had no idea about these other things.


I've seen this too. In particular there seems to be a huge dividing line between systems research stemming from the physical-mathematical heritage of formal dynamical systems, and the other line mostly stemming from everything Wiener did with cybernetics (and some others who were contemporaneous with Wiener). Both sides can be profitably informed by the other in various ways.


It fully depends on whether those rules or laws actually mean anything.


I'm not sure why this post gets boosted every few years, and unfortunately (as many have pointed out) the author demonstrates here that they do not understand distributed system design, nor how to use protocol buffers. I have found them to be one of the most useful tools in modern software development when used correctly. Not only are they much faster than JSON, they prevent the inevitable redefinition of nearly identical code across a large number of repos (which is what I've seen in 95% of corporate codebases that eschew tooling like this). Sure, there are alternatives to protocol buffers, but I have not seen them gain widespread adoption yet.
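For anyone unfamiliar, the "define once, share everywhere" point looks roughly like this: a single schema file is checked in once, and matching types are generated for every language and repo, instead of each codebase re-declaring the same struct by hand. The message and field names here are hypothetical, purely for illustration:

```protobuf
// Shared schema, defined once and compiled into each consuming repo.
syntax = "proto3";

package example.v1;

message User {
  int64 id = 1;               // field numbers, not names, go on the wire
  string email = 2;
  repeated string roles = 3;  // zero-or-more values
}
```

Because the wire format is keyed by field number rather than field name, fields can be added over time without breaking older readers, which is much of what makes this workable across many repos at once.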

