The numbers they show are barely distinguishable from noise, as far as I can interpret them.
For me, the impact is absolutely in hiring juniors. We basically just stopped considering it. There's almost no work a junior can do that now I would look at and think it isn't easier to hand off in some form (possibly different to what the junior would do) to an AI.
It's a bit illusory though. It was always the case that handing off work to a junior person was often more work than doing it yourself. It's an investment in the future to hire someone and get their productivity up to a point of net gain. As much as anything it's a pause while we reassess what the shape of expertise now looks like. I know what juniors did before is now less valuable than it used to be, but I don't know what the value proposition of the future looks like. So until we know, we pause and hold - and the efficiency gains from using AI currently are mostly being invested in that "hold" - they are keeping us viable from a workload perspective long enough to restructure work around AI. Once we do that, I think there will be a reset and hiring of juniors will kick back in.
If AI increases productivity, and juniors are cheaper to hire but just as able to hand off tasks to AI as a senior, then it makes more sense to hire more juniors and get them working with an AI as soon as possible. This produces output faster, from which more revenue could be derived.
So the only limiting factor is the possibility of not deriving more revenue - which is not an AI issue, but a broader, macroeconomic one.
Juniors are not as capable of delegating to AI as seniors are. Delegation to AI requires code review, catching the AI when it doesn't follow good engineering practices, and catching the AI in semantic mistakes due to the AI's lack of broader context. Those things are all hard for juniors.
You would hire someone with the expectation that they learn, but you also need to pay them. New hires always slow the team down. And currently you wouldn't even get much out of them, as you can delegate those tasks to AI.
Additionally, you can't even be sure that the junior will learn rather than just throw stuff at the AI. The amount of vibecoded code I have to review at the moment from seniors is stunning.
So yeah, the market needs seniors, but there is basically no incentive for a company to hire a junior at the moment. It's just easier and cheaper to pay a bit above market and hire seniors than to train a junior for years.
I think this is the crux of it. Someone who doesn't know the right thing to do just isn't in a position to hand off anything. Accelerating their work will just make them do the wrong thing faster.
The vector API is really interesting but so frustrating that it has taken so long to materialise. The inability of Java to properly utilise parallel compute - whether it's SIMD or GPU - has been a huge factor in dealing it out of being at the forefront of modern compute.
What a world we live in now where private companies are apologising for the "tone" of their speech while official representatives of the government daily express blatant lies and misrepresentations without the slightest fear of consequence.
It really is incredibly sad that what was one of the most respected countries in the world has descended to this - an utter mockery of a functioning democracy.
The apology was for an earlier leaked post. In that post his tone descends into a diatribe, deserving of apology.
He lashes out, accusing others of lies, spin, gaslighting and peddling. He refers to "Twitter morons", takes a swipe at Trump (who doesn't) and self-delights in the belief that Anthropic are seen as "heroes" while the competition is "sketchy".
What is truly amazing is the M1 Max is 400GB/s. 5 years later and we still only hit 1.5x on memory bandwidth. It's quite fascinating how high Apple spec'd it back then with apparently little foreknowledge of how important memory bandwidth would become, and then conversely how little they've managed to improve it now when it's so obvious how important it is.
The reason for that is that most memory bandwidth bumps come with new memory generations. For example an early DDR4 platform (e.g. Intel Skylake/Core iX-6000) and a late one (e.g. AMD Zen3/Ryzen 5000) only differ by 1.5x as well, typically.
The same trend is visible in GPUs: for example, my RTX 2070 (GDDR6) has the same memory bandwidth as a 3070 and only a little bit less than a 4070 (GDDR6X). However, a 5070 does get significantly more bandwidth due to the jump to GDDR7. Lower-end cards like the 4060 even stuck to GDDR6, which gave them a bandwidth deficit compared to a 3060 due to the narrower memory buses on the 40 series.
> Talos is a custom FPGA-based hardware accelerator built from the ground up to execute Convolutional Neural Networks with extreme efficiency
Makes it sound like it's new hardware. This is just (I'm inferring) software to program an off the shelf FPGA to do convolutions. Very minimal ones by the look of it (MNIST etc).
I think that's the nail in the coffin. Most others could say it was a giant whoopsie, but here it goes to the heart of their credibility. How could they continue to write authoritatively about AI, having done this?
I do think it's completely unacceptable if Meta makes the glasses unable to be used for routine functions without (a) other humans reviewing your private content and (b) AI training on your content. There needs to be total transparency to people when this is happening - these are absolutes.
But I'm a bit confused by the article because it describes things that seem really unlikely given how the glasses work. They shine a bright light whenever recording. Are people really going into bathrooms, having sex, sharing rooms with people undressed while this light is on? Or is this deliberate tampering, malfunctioning, or Meta capturing footage without activating the light (hard to believe even Meta would do this intentionally).
Agreed. I'm confused trying to map what the article is saying to what's happening at a technical level. For example, obviously it's not doing on-device inference, so it's unsurprising that it won't work without a network connection, but this is totally distinct from your recordings ending up getting labeled. It talks about being able to opt into that, which is one thing. But I guess I don't understand if you don't opt in, if the data still gets sent out for labeling.
I feel like this article is either a bombshell, or totally confused.
My reading was that as soon as you enable the "AI" functionality you are opted into having your recordings labeled.
"But for the AI assistant to function, voice, text, image and sometimes video must be processed and may be shared onwards. This data processing is done automatically and cannot be turned off."
Right, that's the section I was confused by because it was in the context of an experiment trying to use the AI stuff without an Internet connection, which obviously won't work. The article is using the "shared onwards" terminology to refer to at least inference. But the inference part is uninteresting to me, and the data labeling is. The article doesn't really separate those out.
I would figure if there is AI labeling that some things will confuse the system and will be sent to a human. And some things will randomly be sent to a human for error checking. Same thing with Alexa, I figure there's always a low probability chance that anything I say to her will end up reaching a human. She's not always listening as some people fear (the data use would have been detected long ago if she were), but humans occasionally trigger her accidentally--and such errant triggers will be more likely to be sent to a human because they are not going to make sense.
>> but this is totally distinct from your recordings ending up getting labeled
The distinction here occurs wherever the data is processed, and it sounds as if the difference between using your video for labeling versus privately processing it through an AI is deliberately confusing and obscured to the user by the way the terms of service are written. Once the video is uploaded, which is necessary for the basic function, it's unclear how or whether it can be separated from other streams that do go through labeling. This confusion also seems to be an intentional dark pattern.
I do believe people do all of that with the light on. And then there are also people who tamper with the device to deactivate the light. You can find guides for that online.
The funny thing about the light is that it doesn't even matter when surreptitious recording devices are trivial to make these days. You can never know when you're being recorded, even when no one is wearing glasses.
my understanding is that the light is resistant to simply taping over it, and recording can't happen in this case. you have to intentionally modify the glasses to be able to surreptitiously record.
> my understanding is that the light is resistant to simply taping over it, and recording can't happen in this case.
I remember when the glasses came out and this was tested: if you tape it over before starting the recording it refuses, but if you tape it over after starting, it will happily continue to record. I don't know if they've changed it, but that is how it used to be.
The glasses have, in the same hole, an LED and a small light sensor (similar to the ones monitors use for auto-brightness).
When recording starts, the glasses check whether the light sensor reads above a certain threshold; if it does, recording begins and the LED turns on.
So if you start recording and then cover the hole, it keeps recording, because the check only happens at start. Even if they wanted to fix this by making the light sensor do a constant check, it wouldn't work, as the privacy LED indicator triggers the same sensor - which is a terrible design choice.
And disabling the light is as easy as taking a small drill bit and breaking either the light sensor module or the LED. They can detect if it's been tampered with, and they put up a giant notice saying the privacy light is not working, but they still let you record anyway lol.
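The flaw described above can be sketched in a few lines. This is purely illustrative, based only on the behavior reported in this thread; all names, thresholds, and structure are hypothetical, not Meta's actual firmware:

```python
# Sketch of the "occlusion check only at start" flaw (hypothetical model).
AMBIENT_THRESHOLD = 10  # minimum light level required to start recording

class FakeSensor:
    """Stand-in for the light sensor sharing the hole with the LED."""
    def __init__(self, level):
        self.level = level
    def read(self):
        return self.level

class GlassesCamera:
    def __init__(self, light_sensor):
        self.light_sensor = light_sensor
        self.recording = False
        self.led_on = False

    def start_recording(self):
        # The occlusion check happens ONLY here, at start time.
        if self.light_sensor.read() < AMBIENT_THRESHOLD:
            return False  # hole appears covered: refuse to record
        self.recording = True
        self.led_on = True  # privacy LED lights up next to the sensor
        return True

    def tick(self):
        # No re-check while recording, so taping over the hole after a
        # successful start does not stop the capture. A naive fix
        # (polling the sensor here) would fail anyway, since the
        # adjacent privacy LED itself illuminates the sensor.
        return self.recording

sensor = FakeSensor(level=100)   # uncovered: plenty of ambient light
cam = GlassesCamera(sensor)
assert cam.start_recording()     # check passes, LED turns on
sensor.level = 0                 # now tape over the hole
assert cam.tick()                # recording continues regardless

covered = GlassesCamera(FakeSensor(level=0))
assert not covered.start_recording()  # covered before start: refused
```

The design trade-off the thread debates (briefly blanking the LED so the sensor can poll) would show up in this model as turning `led_on` off for a moment inside `tick()` before reading the sensor.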
> Even if they wanted to fix this by making the light sensor do a constant check it wouldn't work as the privacy led light indicator is triggering the same sensor,
The privacy led light could just turn off for a couple of milliseconds (or less) while the light sensor performs its check.
> The privacy led light could just turn off for a couple of milliseconds (or less) while the light sensor performs its check.
True, but then you'd have a blinking LED instead of a constantly lit one, which is a different product requirement from what it currently does.
I don't think the cheap light sensor would have a fast enough polling rate for that. And if you increase the polling rate, I'll just put a phosphorescent sticker over it that absorbs the LED's light and glows with a good enough afterglow that the photoresistor will still read some value and allow recording.
Also what is the implication here? If you cover the hole accidentally for one microsecond do you invalidate the whole recording? Does it need to be covered for more than one second, two seconds, ten?
All of that for what? So that in 2 years we can have Chinese off-brand clones for 50 dollars that offer no such security mechanisms anyway?
We all need to understand this is the new normal: being able to be recorded anywhere, anytime. Just like you can get punched in the street anywhere, anytime. We only act on things that can be proven in court to have harmed you.
We successfully shamed people out of wearing Google Glass. We also mostly have social norms about when recording with your smartphone is OK. We don't need to accept defeat about these glasses just yet.
I feel like it was pretty common for the red light to blink on and off every second when recording. During the off part of that cycle, it would make sense to perform the sensor check.
Sounds like it would be pretty easy to fake out with a custom circuit too, for those that are willing to go beyond ‘whoops how did that happen’ levels.
Taping cannot be done? But if there are guides on the web for this, is that really a true statement? It's a difficult claim for me, because taping can be done in many ways. I don't see how light could magically pass through it?
> (hard to believe even Meta would do this intentionally)
Are you referring to the same company that runs Facebook, WhatsApp and Instagram? Meta has, for well over a decade, been caught multiple times (as recently as two years ago, for the third time that I know of) burrowing into areas of phones that their apps weren't directly given access to. Android phones have been highly susceptible to this kind of snooping.
This is historically what we've had consumer protection regulations for. When they put lead, radium, asbestos, arsenic, or other poisons in consumer products the regulators step in and put a stop to it. It should be pretty clear at this point these consumer tech companies are no different--they're just producing poison. And it's not like there weren't signs, it's been like this for damn near a quarter century.
I'm going to guess that people are intentionally recording themselves having sex, assuming that they are creating a local recording that is not sent to Meta. Does the light mean "camera is recording" or "cloud services are involved"?
The article isn't clear on this point, I believe because Meta isn't clear on this themselves. Other bits of this piece highlight third parties reviewing the responses of the AI assistant; it's possible that people are recording and some sound they make triggers the AI assistant which, in turn, leads to the video being reviewed.
OTOH, Meta could just be desperate for training content and they're just slurping up all recordings by people who've opted into the AI function. It would be great for them to clarify how this works.
I am very much confused. People recorded sex long before the Meta spy-glasses.
I mean, not as if I were to visit such sites, right... but video recordings can be made in numerous ways, and on small devices. The smartphones are fairly small, after all.
If you're not paying a subscription for Meta to AI-process your audio and video, then they're going to get value out of it some other way. It's just like any other "free" digital service.
It is absolutely within possibility that all "camera is on" lights are software-controlled, just like the camera and independently of the camera. They are meant to tell the user that they themselves are using the camera. They are not meant to tell anyone that the owner of the device's back-end is using the camera.
It is also completely unacceptable to capture the public space without oversight and consent from third parties. If glasses users are fine with doing that to others, why wouldn't they accept it being done to themselves?
It's not "smearing" to use Zuckerberg's own words in a discussion of his character, and this is far from the only example of things he's done or said in the past 20 years that would lead a reasonable person to call into question his moral fiber.
It remains, however, a popular point of reference because:
1. It's fast and easy to read and digest.
2. The blunt language leaves little room for speculation about his feelings and intent at the time.
3. A lot of people understand that as Zuckerberg's wealth exploded, he surrounded himself with people (coaches, stylists, PR professionals, etc.) who are paid handsomely to rehabilitate and manage his image. Therefore, his pre-wealth behavior gives insight into who he really is.
People can change but based on Facebook's actions vis-a-vis privacy, mental health, etc. there's little evidence that Zuckerberg has gone from treating his users like "dumb f...." to treating them like human beings.
If we're going to talk about quotes, here's one: "money amplifies who you are".
WhatsApp is one of the only instances I can think of in corporate acquisitions where the acquired side lashed out at the acquiring side as much as this ("It's time. Delete Facebook").
You're talking about someone who changes privacy settings out from under people; who was told that gay people were being automatically added to groups, with posts on their walls that outed them, and dismissed it. Or "graph search". He doesn't think people deserve any respect when they're not him.
When a man changes, it is on him to prove that he has changed. Has Zuck atoned in any way? Has Meta?
I'm a big believer in second chances and letting people rehabilitate, but there's no evidence that Meta or Zuck have changed for the better. Meanwhile, *there is plenty of evidence that suggests he has only become more uncaring and deceptive, as Meta has only become more invasive over time*, the article itself being one such example.
So I do believe Zuck has changed, but not in the direction that we should applaud and/or forgive him. I've only seen him change in the way that should make us more concerned and further justify the hatred. A man may change, but he does not always change for the better.
No, you didn't suggest that. You suggested that the quote is not representative of who he is now.
We'd need a lot more context (and words) for us to understand that sentence as anything other than defending him. At best you're giving him the benefit of doubt.
I think his actions speak for themselves. Facebook, effectively completely controlled by Zuckerberg, has consistently taken actions that erode privacy and degrade mental health.
And no, not every young person has the attitude that Zuckerberg demonstrated in his "dumb f...s" comment. If my son or daughter was behaving like that in their late teens/early twenties I would be ashamed and feel like a failure as a parent.
There's a big difference between "someone said something stupid as a kid"... "but now has changed and is a totally different person" and "is doing the same things but now knows how not to say the quiet part out loud"
Well, they don't, but this is a particularly damning statement, and its age is more of a feature than a flaw because it shows a long history of anti-social disdain for humanity.
I hear this rebuttal a lot; here's why it doesn't work for me:
I'm the exact same age as Zuckerberg. When I first read this quote, it struck me as a really gross mindset and a point of view that I could neither relate to nor have sympathy for. I would not have said (or thought) those things when I was his age. Fundamentally, this is a demonstration of poor character.
Now, people do grow and change. We've all said or done things that we regret. Life can be really hard, at times, for most of us, and more often than not young arrogant guys eventually learn some humility and grace and empathy after they confront the real world and experience the inevitable ups and downs of life.
But Zuckerberg had no such experience. His life during and after the time when he said this was one of accelerating material success and validation. The scam he was so heartlessly bragging about in that statement actually worked, and he became one of the richest men in the world. So my expectation of the likelihood that he matured away from this mindset is much lower than it would be for someone like you or me.
(And, as others have said in this thread, there's ample evidence from his subsequent decisions to support this)
>it is perhaps also damning that any time someone wants to smear zuck they have to reach 20 years into the past.
It is perhaps not, and it's perhaps a bit disingenuous to claim so in good faith, as if it exceeds your abilities to search the list of Facebook scandals in the decades following and see that the behavior is often consistent with this quote. Even if you choose to ignore all that, it's also not very reasonable to expect troves of juicier quotes after all the C-suites, lawyers, and HR departments showed up and locked everything down with corporate speak. I'm sure that if Facebook were so kind as to leak all the messages and audio of Zuck's internal comms since that time, people would have many other juicy quotes to work with.
It is often referenced because it's the best quote that represents the trailblazing era of preying on users' undying thirst for convenience in order to package their private data as a product.
Thank you for saying this. I would not find a better way to word the response myself.
"It is perhaps not, and perhaps a bit disingenuous to claim so in good faith, as if it exceeds your abilities to search for the list of facebook scandals in the decades following and see that the behavior is often consistent with this quote.
It is often referenced because it's the best quote that represents the trailblazing era of preying on users' undying thirst for convenience in order to package their private data as a product.
These sentences are deliciously delightful to read in this era of writing whose blandness and sloppiness is only amplified by LLM-driven "assistance".
It is difficult to be pithy without being bitter, but your writing achieves it within the span of a single comment. If you have a blog, I hope you share it!
You would have a good point if what Meta is doing now wasn’t far worse than what Zuck himself is describing in those comments, all while Zuck has remained at the helm the entire time.
This is a very important window into how the industry, by and large, views users and the concept of privacy. It's not merely authoritarian and predatory, to them users are subhuman.
I've tried to learn and grow from the stupid comments of my youth. I haven't been involved in a long list of scandals directly related to the ideas those comments expressed, and if I was, it would be pretty clear that I didn't learn or grow at all.
You haven't been involved in a long list of public scandals because you've never done anything in your life that's publicly notable.
By tricking yourself into believing you sit on a higher moral pedestal you're simply easing the pain of comparison.
When high school girls spread gossip that the pretty, popular girl has loose morals, they aren't performing this service out of the goodness of their hearts. They're hoping to elevate themselves by tearing down the competition.
>Now if only we could look up everything you said in chatrooms as a 19 year old and post the most inflammatory stuff on HN.
Sure. When I was in college, I didn't have the idea of snooping on other students and exploiting them as "dumb fucks" who were stupid enough to trust me.
Most of my public online history starts at around that time too.
And one of my first comments on Slashdot was expressing concern about Facebook violating people's privacy by introducing the feed back in 2006.
Before you posted this I actually edited my comment to remove a sentence at the end where I said "Now please proceed to call me a bootlicker while not rebutting my point."
I thought it would be too flame-war-y. Guess it was actually needed after all! US politics getting hysterical has been like the Eternal September for HN. This place is so braindead and predictable and uninteresting now.
The worst part isn't even that quote, it's that nothing structural has changed one bit since then. The business model still requires users to be the product. Glasses that upload video to Meta's servers are the entire point.
This was one of the first hits on Kagi. 404 has a similar article (I think) but it's behind a paywall.
"The demand for this ‘Ray-Ban hack’ has been steadily increasing, with the hobbyist’s waiting list growing longer by the day. This demonstrates a clear desire among Ray-Ban owners to exercise more control over their privacy and mitigate concerns about unknowingly recording others."
If anyone were to record even when the light is not shining, it would be Meta. This would not surprise me at all, they have everything to win and nothing to lose, no country would fine them anything remotely relevant compared to the value of the data they'd be getting.
Presumably the 'drive-by' downvotes are coming from the ad-tech industry who would prefer the population to simply bend over and grab ankles with both hands the moment they request our personal data?
This kind of slop spewing into Github feels like the modern equivalent of toxic plumes coming from smoke stacks.
Utterly unmaintainable by any human, likely never to be completed or used, but now deposited into the atmosphere for future trained AI models and humans alike to stumble across and ingest, degrading the environment for everyone around it.
I'm guessing one form it will take is simply by omission.
User asks for recommendation. AI generates answer saying product is absolute garbage. Company pays to simply have that portion of the answer just not appear. It will be a post-filter sentiment analysis on the original answer. Nobody can ever prove what would have appeared or not.
This is the beauty of AI: while a search engine is at least semi-deterministic and you can reasonably question why it wouldn't bring up a site that is clearly relevant, AI has plausible deniability. Who can ever say why it generates this answer or that?