
>someone raised the question of “what would be the role of humans in an AI-first society”.

Norbert Wiener, considered the father of cybernetics, wrote a book back in the 1950s entitled "The Human Use of Human Beings" that raises these questions from the early days of digital electronics and control systems. In it, he brings up things like:

- 'Robots enslaving humans to do jobs better suited to robots; a lack of humans in the feedback loop leads to fascist machines.'

- 'An economy without human interaction could lead to entropic decay, as machines lack the biological drive for anti-entropic organization.'

- 'Automation will lead to immediate devaluation of routine human labor. Society needs to decouple a person's "worth" from their "utility as a tool".'

The human purpose is not to compete but to safeguard the teleology (purpose) of the system.


Yeah, happy to be more specific. No intention of making any technically true but misleading statements.

The following are true:

- In our API, we don't change model weights or model behavior over time (e.g., by time of day, or weeks/months after release)

- Tiny caveats include: there is a bit of non-determinism in batched non-associative math that can vary by batch / hardware, bugs or API downtime can obviously change behavior, heavy load can slow down speeds, and this of course doesn't apply to the 'unpinned' models that are clearly supposed to change over time (e.g., xxx-latest). But we don't do any quantization or routing gimmicks that would change model weights.

- In ChatGPT and Codex CLI, model behavior can change over time (e.g., we might change a tool, update a system prompt, tweak default thinking time, run an A/B test, or ship other updates); we try to be transparent with our changelogs (listed below) but to be honest not every small change gets logged here. But even here we're not doing any gimmicks to cut quality by time of day or intentionally dumb down models after launch. Model behavior can change though, as can the product / prompt / harness.

ChatGPT release notes: https://help.openai.com/en/articles/6825453-chatgpt-release-...

Codex changelog: https://developers.openai.com/codex/changelog/

Codex CLI commit history: https://github.com/openai/codex/commits/main/


This March 2025 post from Aral Balkan stuck with me:

https://mastodon.ar.al/@aral/114160190826192080

"Coding is like taking a lump of clay and slowly working it into the thing you want it to become. It is this process, and your intimacy with the medium and the materials you’re shaping, that teaches you about what you’re making – its qualities, tolerances, and limits – even as you make it. You know the least about what you’re making the moment before you actually start making it. That’s when you think you know what you want to make. The process, which is an iterative one, is what leads you towards understanding what you actually want to make, whether you were aware of it or not at the beginning. Design is not merely about solving problems; it’s about discovering what the right problem to solve is and then solving it. Too often we fail not because we didn’t solve a problem well but because we solved the wrong problem.

When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make. Being handed a baked and glazed artefact that approximates what you thought you wanted to make removes the very human element of discovery and learning that’s at the heart of any authentic practice of creation. Where you know everything about the thing you shaped into being from when it was just a lump of clay, you know nothing about the image of the thing you received for your penny from the vending machine."


> The fear is that these [AI] tools are allowing companies to create much of the software they need themselves.

AI-generated code still requires software engineers to build, test, debug, deploy, secure, monitor, be on-call, support, handle incidents, and so on. That's very expensive. It is much cheaper to pay a small monthly fee to a SaaS company.


I truly believe our industry needs its own anti-awards, as others have (the Razzies, Worst Game of the Year, etc.), to shame those responsible for building the regressive tech that corporations and governments push.

There's already the Big Brother Awards [0] and EFF's smattering of Worst Government and Worst Data Breach articles each year. [1]

But I think we need more.

Personally I would love to nominate:

- Mark Stefik and Brad Cox for their contributions to DRM

- Erick Lavoie for his work on Widevine DRM

- Vern Paxson for his contributions to DPI (Deep Packet Inspection)

- Latanya Sweeney and Yves-Alexandre de Montjoye for their contributions to re-identification of anonymized data

- Steven J. Murdoch and George Danezis for their work on de-anonymization attacks

[0] http://www.bigbrotherawards.org/

[1] https://www.eff.org/deeplinks/2025/12/breachies-2025-worst-w...


Thank you for sharing the breadcrumb~

How does Netflix detect "suspicious" activity? Does $NFLX allow 4k streaming over GrapheneOS? If so, could you pin a different certificate and do some HTTP proxy traffic manipulation to obfuscate the device (presumably an Android phone) identity or otherwise work around the DRM?

I want to understand more about this, but unfortunately the Reddit thread is bits and pieces scattered among clueless commentary, making it challenging to wade through.


My first implementation of gemma.cpp was kind of like this.

There's such a massive performance differential vs. SIMD though that I learned to appreciate SIMD (via highway) as one sweet spot of low-dependency portability that sits between C loops and the messy world of GPUs + their fat tree of dependencies.

If anyone wants to learn the basics: whip out your favorite LLM pair programmer and ask it to help you study the kernels in the ops/ directory of gemma.cpp:

https://github.com/google/gemma.cpp/tree/main/ops


That's easy.

Check out Library Genesis, Anna's Archive, and Sci-Hub for content.

Piracy isn't theft if buying isn't ownership.


> A deep dive on why these beastly cards fail so frequently compared to all other common current day hardware would be fascinating!

P=CV²f



Oh, that smell of molten keyboard plastic, those yellow spots burned into a display by its own heat exhaust, those laser-machined loudspeaker holes next to the keyboard, all filled with grime! How I miss that time on a MacBook, with all the chords you have to press whenever you need a Home or End key to edit the line! Not to mention the power button right next to backspace.

It's so rewarding when its charger dies in a month, and you feel superior to your colleague, whose vintage six-month-old charging cable, with none of that extraneous rubber next to the connector, catches fire along with your office. What a time to be alive!

The best part is the motherboard, built to fail from moisture within a couple of years: all the uncoated copper, the 0.1mm-pitch debugging ports that short-circuit from a single hair, and a whole Louis Rossmann YouTube channel's worth of other hardware features meant to remind you to buy a new Apple laptop every couple of years. How else would they get you to replace the whole laptop, if not for all the walls around repair manuals and parts? And you just have to love that even transplanting chips from other laptops won't help, thanks to all the overlapping hardware DRM.

I'll go plug the cable into the bottom of my wireless Apple mouse, and remind myself of all the best times I had with Apple's hardware. It really rocks.


For an interesting interpretation of the recent AMD-OpenAI deal, see Matt Levine's column from a few days ago:

> OpenAI: We would like six gigawatts worth of your chips to do inference.

> AMD: Terrific. That will be $78 billion. How would you like to pay?

> OpenAI: Well, we were thinking that we would announce the deal, and that would add $78 billion to the value of your company, which should cover it.

> AMD: …

> OpenAI: …

> AMD: No I’m pretty sure you have to pay for the chips.

> OpenAI: Why?

> AMD: I dunno, just seems wrong not to.

> OpenAI: Okay. Why don’t we pay you cash for the value of the chips, and you give us back stock, and when we announce the deal the stock will go up and we’ll get our $78 billion back.

> AMD: Yeah I guess that works though I feel like we should get some of the value?

> OpenAI: Okay you can have half. You give us stock worth like $35 billion and you keep the rest.

https://www.bloomberg.com/opinion/newsletters/2025-10-06/ope...

https://archive.is/tS5sy

