Hacker News | guenthert's comments

strtok() happily punches those holes in. Now you could argue that the resulting strings, while null-terminated, aren't true substrings (as the original string is now corrupted), but in the context of parsing (particularly here, using whitespace as delimiter), that wouldn't be much of an issue.

While this is an interesting project, I found the following grating:

"Permissions without root

You don’t need root. Grant capabilities to SBCL:

sudo setcap cap_bpf,cap_perfmon+ep /usr/bin/sbcl

Now sbcl --load my-bpf-program.lisp works as your regular user. Tracepoint format files need chmod a+r to allow non-root compilation with deftracepoint."

That's obviously not ideal. Better might be to create a purpose-built image. Unlike perl, sbcl doesn't even pretend to care about security. Taint mode extension for sbcl, anybody?


> Unlike perl, sbcl doesn't even pretend to care about security.

Mind expanding? What particular stuff does Perl have in terms of security here?


A lot, to the point where there's an entire security page in perldoc: <https://perldoc.perl.org/perlsec>

I wonder if a taint mode for SBCL would mean ignoring SBCL_HOME... that'd be a bit annoying for running more up-to-date SBCL versions on distros shipping with older versions.



So much polemic and no numbers? If it is a performance issue, show me the numbers!

There are quite a few numbers in the article, although of course I'm happy to hear any more you'd like presented.

* A counterintuitive 25% reduction in disk writes at Instagram after enabling zswap

* Eventual ~5:1 compression ratio on Django workloads with zswap + zstd

* 20-30 minute OOM stalls at Cloudflare with the OOM killer never once firing under zram

The LRU inversion argument is plain from the code presented and a logical consequence of how swap priority and zram's block device architecture interact; I'm not sure numbers would add much there.


> The LRU inversion argument is plain from the code presented and a logical consequence of how swap priority and zram's block device architecture interact; I'm not sure numbers would add much there.

Yes, it is all very plausible, but run times of a given workload (on a given, documented system) known to cause memory pressure to the point of swapping, under vanilla Linux (default or some appropriate swappiness), zram, and zswap would be appreciated.

https://linuxblog.io/zswap-better-than-zram/ at least qualifies this: zswap performs better when a fast NVMe device serves as the swap device, while zram remains superior for machines with a slow swap device or none at all.


I appreciate the intro, motivation and comparison to the PIO of the RP2040/2350. How would this compare to the (considerably older, slower, but more flexible) Parallax P8X32A ("Propeller")?

IIRC the Propeller is an eight-thread barrel CPU with the same number of pipeline stages, so it "retires" just one instruction per cycle. All PIO state machines can run every cycle, so they should be considered very small CPU cores. You can think of them as channel I/O co-processors for a microcontroller instead of a mainframe.

The Propeller 2 would be an interesting comparison as well, with its own smart pins playing a similar role.

The MATE DE is fairly popular in the small (not so small in India) but growing (thanks to Windows 11, no doubt) Linux desktop market, isn't it? They strive for Wayland compatibility, but aren't quite there yet (so I gather from their release notes; I myself still run an ancient version of Ubuntu MATE here, right now).

I'm not all that informed regarding Wayland's benefits and shortcomings (just puzzled when "performance" or "overhead" is quoted as a reason to move away from X11, remembering that the latter didn't seem unbearably slow 30 years ago and that the performance of computers in general, and computer graphics in particular, has increased manifold since then). There are, however, some who should know and who don't seem all that excited: https://www.kicad.org/blog/2025/06/KiCad-and-Wayland-Support...


They didn't (not for AMOS at least, the UNIMOS capable machines had an external MMU).

"AMOS is also a strict real-memory operating system, which is to say there's no MMU, and programs were expected to be fully position-independent and run wherever the monitor ended up loading them. This makes it fast, but also makes it possible for jobs to stomp on other jobs, and it was not uncommon for busy systems to crash on a regular basis."


Even if it's the same data, the bit stream will be a variety of 0s and 1s. The period of that waveform will then be 1 frame length / data transfer rate (or rather 1/4 frame length / data transfer rate, as this is a QSGMII link). I wonder how the scope triggers on that. The trigger criterion would be a bit pattern, say the Ethernet frame preamble of 7 octets (* 10/8) spread across four streams ...

Otoh, at 5Gbps, a sample rate of "just" 10GS/s would be sufficient (barely).

I rather suspect the oscilloscope is capable of 1TS/s equivalent time sampling, but that mode wasn't used.


> But… why?

> Performance is a frequently cited rationale for “Rewrite it in Rust” projects.

Rewrite from what? Python/Perl? If the original code is in C there _might_ be a performance gain (particularly if it was poorly written to begin with), but I wouldn't expect wonders.


> Ok, where are the companies using FreeBSD?

Not quite twenty years ago, Yahoo! was using (among other systems, I suspect) FreeBSD.

> How do you get hired if you do happen to have proper FreeBSD skills?

By advertising your other skills?

