
I suspect this is really a surveillance bill, but we won't know until the text is revealed.

Agreed. The whole topic is a Trojan horse for surveillance companies to siphon off data. We need to start asking which politicians are pushing this and who’s pushing them to do it. They’re either doing it for money or being blackmailed into it by the existing surveillance apparatus.

I suggest this vocal performance: https://youtube.com/watch?v=GggK9SjJpuQ


I prefer the term "assistant". It can do some tasks, but today's AI often needs human guidance for good results.


I also made a list of tips on writing code with AI, with a special focus on security. Others may find the tips useful. Here they are: https://openssf.org/blog/2026/01/05/ai-software-development-...


This has many similarities to the Heartbleed vulnerability: both involve trusting a length supplied by an attacker, leading to unauthorized disclosure of data.
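The bug class can be shown in a minimal Python sketch (a hypothetical simplification, not OpenSSL's actual code): a heartbeat-style handler echoes back as many bytes as the attacker's length field claims, rather than as many as the payload actually contains, so adjacent memory leaks.

```python
# Heartbleed-style bug sketch: a handler that trusts an
# attacker-supplied length field. Hypothetical simplification.

# Simulated process memory: a 4-byte payload followed by secret data.
MEMORY = bytearray(b"ping" + b"SECRET_KEY_MATERIAL")

def heartbeat_vulnerable(claimed_len: int) -> bytes:
    # BUG: echoes claimed_len bytes without checking the real payload size.
    return bytes(MEMORY[:claimed_len])

def heartbeat_fixed(claimed_len: int, actual_len: int = 4) -> bytes:
    # FIX: reject requests whose claimed length exceeds the actual payload.
    if claimed_len > actual_len:
        raise ValueError("claimed length exceeds payload")
    return bytes(MEMORY[:claimed_len])

leak = heartbeat_vulnerable(23)  # attacker asks for far more than 4 bytes
assert b"SECRET" in leak         # adjacent secret memory is revealed
```

The fix is the same in spirit as the real Heartbleed patch: validate the claimed length against the actual record size before copying.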


Many people use Octave https://octave.org/ which is generally compatible with Matlab, supports this simple syntax, and is open source software. Indeed, I've taken at least one class where the instructor asked people to use Octave for these kinds of calculations.


Yep -- Octave was very helpful for me in school.

Octave is not particularly fast.

RunMat is very fast (orders of magnitude faster -- see its benchmarks).


That's only true if future improvements are as easy to create as past ones, customers care as much about those improvements, and there are no other differentiators.

For example, many companies do well by selling a less capable but more affordable and available product.


I love having built-in local natural language translation implemented by AI, which Firefox provides. Local models have different properties than remote models, and natural language translation is a useful thing. AI should be added when it solves a real need and the risks can be minimized (or at least controlled). The goal shouldn't be to use AI; the goal should be to solve problems for humans.


The Linux Foundation's Open Source Security Foundation (OpenSSF) has released a free online course "Secure AI/ML-Driven Software Development (LFEL1012)". It discusses protecting your software development environment, creating more secure software, and reviewing changes.


Yes, you need training if you want something good instead of slop. For example, when AI models are asked to write functions that could be implemented securely or insecurely, 45% of the time they'll do it the insecure way, and this rate has been stable for years. We in the OpenSSF are going to release a free course "Secure AI/ML-Driven Software Development (LFEL1012)". Expected release date is October 16. It will be here: https://training.linuxfoundation.org/express-learning/secure...

Fill in this form to receive an email notification when the course is available: https://docs.google.com/forms/d/e/1FAIpQLSfWW8M6PwOM62VHgc-Y...

