
AWS Bedrock has DeepSeek models running on their infrastructure. That should be enough to prevent training on user data (there's a markup compared to DeepSeek's pricing though).

And unfortunately AWS doesn't have prepaid billing, so you can't just give the internet access to your API key without getting FinDDoS'd.


The latest one available for serverless inference looks to be from 8 months ago (DeepSeek V3.1), which is an eternity in this space.

If anyone is looking for a solution in this space, fire me an email; I have a partner who's focused closely on that problem set!

Blacklisting a kernel module only prevents modprobe from loading it automatically. modprobe by name still works, even if the module is blacklisted, and so does insmod and the syscalls they use.
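The difference is visible in the modprobe.d syntax itself. A sketch (my own illustration; an `install` line makes `modprobe <name>` fail too, though it still does nothing against raw insmod or the finit_module syscall):

```
# /etc/modprobe.d/algif.conf (sketch)
blacklist algif_aead            # only stops alias-based autoloading
install algif_aead /bin/false   # makes "modprobe algif_aead" fail as well
```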

The author is in way over their head and thinks that because they can write Copilot prompts, they can write security-critical software.


> modprobe by name still works, even if the module is blacklisted, and so does insmod and the syscalls they use.

Agreed. There is a way, but I would never recommend it to anyone. I'm showing it just for completeness' sake in case anyone else suggests it. Do not do this, and certainly never put it in a config file, or "bad things will happen ©2009-2026".

    # rmmod the module of concern first, then, if that exits with status 0:
    sysctl -w kernel.modules_disabled=1
    sysctl -w kernel.kexec_load_disabled=1
Once activated, these settings remain immutable until reboot. They can break OS updates, among a myriad of other things. Calculating the risk requires a dungeon leader, 4d20 dice and 12 magic 8-balls to form a quorum. It is probably safer to just limit access based on role and then update the OS as soon as it is feasible to do so. Leave the role-based access controls in place; if anyone complains, add them to the on-call rotation.

Why does it check every five minutes? Do they expect the set of loaded modules on a running instance to change that often?

AI made this decision, and it seems the (human?) operator didn't review it.

OK, how about this then:

    sudo rm "$(modinfo -n algif_aead)"
Nice and simple. Or if we want to be more thorough:

    modinfo -n algif_aead && sudo mv "$(modinfo -n algif_aead)" "$(modinfo -n algif_aead)_"

Wouldn't manually loading a module require elevated privileges? Isn't the issue they are trying to solve that completely unprivileged users can exploit the module to elevate their privileges?

Let's consider a sysadmin who says "I blacklisted this module, so we shall never see it on this system."

And then, some random service or cronjob goes down a list and "modprobes" things. Such as a vulnerability scanner.

So the kernel module got loaded by name, until the next reboot.

Yeah, it takes another coincidence, and it further narrows the conditions under which this can be exploited. But it's fair to say that blacklisting modules is not a panacea or a 100% airtight solution.


I just tried it on Ubuntu 24.04. Blacklisting algif_aead does not prevent the module from getting loaded by `nobody` using the unprivileged AF_ALG API.

So this project literally does nothing except spew some vibe-coded slop across your cluster. Please just upgrade your kernel packages; it's way safer.


You must feel so smart.

Need this in the west as well

thanks chatgpt


This doesn't feel like it was LLM-written, if that's what you mean.


I don't know a good solution for this. 99% of websites asking for this hypothetical permission would not deserve it. Users (rightfully) don't expect that uploading a photo leaks their location.

Element (the matrix client) used to not strip geolocation metadata for the longest time. I don't know if they fixed that yet.
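Stripping it is not much code either. A minimal stdlib-only sketch (my own illustration, assuming a well-formed baseline JPEG; a real client should use a maintained library) that drops the APP1/Exif segments, which is where the GPS tags live:

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Drop APP1 (Exif, incl. the GPS IFD) segments from a baseline JPEG."""
    out = bytearray(jpeg[:2])               # keep the SOI marker
    i = 2
    while i + 4 <= len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xDA:                  # SOS: entropy-coded data follows
            out += jpeg[i:]
            break
        seglen = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker != 0xE1:                  # copy every segment except APP1
            out += jpeg[i:i + 2 + seglen]
        i += 2 + seglen
    return bytes(out)
```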


> Right before we hit send on our announcement emails for our new Build Awesome Kickstarter campaign...

This is spam.


Yes, so they collected emails from users of one product and are now spamming marketing emails about a fundraising campaign for a different product.

That's at least two steps removed from being merely questionable. I'm really struggling to understand how they imagined that this wouldn't end up being blocked.


Thanks chatgpt


"Nawaz Dhandala"


I nominate Nawaz Dhandala as "the king of AI slop"


He's just an idiot doing it in public; there are people who have been generating hundreds of posts a day for years now without committing it to GitHub under their real names.


That maintainer (also the one whose creds got stolen) also has an obvious chatgpt slop profile picture on github.

