
I still have to deal with a handful of UNIX systems at $WORK, mostly AIX, and I don't really like them much compared to all of the Linux boxes we mostly use. On one hand they seem rock solid and all of that, but on the other it's like driving a Ferrari to work instead of a more sensible Toyota. Most of them are being replaced by cheaper Linux servers where memory is not so pricey, and they mostly feel the same, albeit with some memory allocation/caching differences.


I did some work on AIX once. The thing I remember is that I was granted some kind of zone/slice or whatever they call it for compartmentalization. It didn't even have SSH, so I had to use telnet.

The guy I was supposed to prepare the system for could only install Oracle from some crappy Java UI wizard, so I had to ask the sysadmin to install a lot of Linux libraries and programs to enable X11 over SSH.


From memory there were LPARs ("Logical Partitions"), which were effectively like a VM, and WPARs ("Workload Partitions"), which shared the OS and were more like a container.

I had some "interesting" experiences getting stuff to work on WPARs.


IIRC, WPARs could be just for one process, or a full OS (but sharing the resources of one AIX instance, whether that instance runs on an LPAR or directly on the hardware).

But yeah, bit more like a container.


Then it was probably an LPAR. Are those reliant on hardware magic or just something like cgroups?


LPARs use hardware virtualization. The PowerVM hypervisor (PHYP) is in firmware.


I first learned on an AIX box in college; Cygwin/X gave me X11 access and worked perfectly, although I couldn’t tell you whether that used telnet or ssh. Back then I used telnet a lot without any regard for security.


> crappy java UI wizard

Nicely put (oof!). I believe it also enforced a minimum color depth, which none of our machines could directly support on their own hardware, forcing the use of remote X11 displays.


Sounds painful. Why is there no CLI installer? Fortunately, I never had to deal with Oracle.


Yes, we first had a world of telnet and networks that allowed anyone who tapped them with a transceiver to be part of them (thicknet). It was a simpler/kinder/less malicious world than today's.

X Windows ran great on AIX before Linux was a thing. IBM was involved with its inception (Project Athena).

https://en.wikipedia.org/wiki/10BASE5


Is it true that 0x00000000 is a valid memory address on AIX? I'm sure I read it somewhere but struggled to confirm it.


Yes, I believe this was an optimization to allow IBM’s compiler to do speculative loads before a null check.


Alien Infested uniX indeed :)


That's true on many systems... nothing special about 0x0 other than NULL happens to be defined as 0 in most toolchains, and some functions use NULL to report an error.


Linux still has a few AIX tricks left to copy, like the way lazy linking works.


From a cursory web search, it sounds like that just loads dynamic libraries when their functions are first called? Is that really so useful compared to either loading at start or dlopen()ing if they're optional?


Yes, because the compiler and linker do the work for you, instead of you manually writing dlopen boilerplate.

This is a common feature on Windows, by the way. AIX is special in many ways; one of them is being COFF land, not ELF.

Another shared feature is symbols being private by default, with explicit exports.


AIX uses XCOFF, which has diverged from COFF so much that it is easier to treat it as an entirely separate lineage – not entirely different from Windows' PE object/executable format, which is also a COFF derivative.

XCOFF is pretty cool, actually; e.g. it does not require two sets of the same library to exist, i.e. one for static linking and another for dynamic loading – the same .a archive can be used for both.

The .loader section keeps import and export symbol tables (what the binary provides to others, and what it needs resolved), library search paths and dependencies, and relocation and fix-up details. If the .a is being used for static linking, the .loader section is simply ignored.


Naive question: by your analogy, would a 1990s Ferrari perform today as it did back then?


I guess yes, although given today's petrol prices and environmental restrictions, it wouldn't be able to drive anywhere (at least in the EU)


Yes and no. Performance-wise, the iconic Ferrari Testarossa from the 80s/90s does 0-62mph in 5.8sec. That's in the ballpark of today's family SUV EV, like the Tesla Model Y (standard version; 'Performance' does 3.3sec) or Hyundai Ioniq 5 (again standard version; performance 'N' does 3.4sec).

But I'm sure the "fun factor" in a Ferrari is much greater and of course there's a nostalgia factor as well... it was "THE" supercar when I was a kid. I would love to drive one today and it would be much cooler than a Tesla Y or Ioniq 5 :-)


It's even funnier when you realize all of the V6 Camrys built in the last 10 years also have a 0-60 of 5.8 seconds or less, and with the right tires and some suspension tweaks probably handle about as well as an 80s Ferrari.


The last Testarossa I saw in the wild was around 2010 parked in Hoxton London. None of the upholstery was holding up and it looked like it might not be driveable. But it got there somehow.


Also, 80s/90s Ferraris weren't very reliable... :P


I don’t think this has changed much.


Absolute vs relative performance is important to consider


And normalized performance? :)


Complete article: https://archive.ph/xr5dL


The big selling point of Plex vs Jellyfin is that their app is in all of the major stores; Samsung smart TVs, for example.


Some Intel and AMD CPUs come with an iGPU that you can use to transcode videos, even some Xeons; there's no need to add a GPU just for that.


A lot of them don’t though. My Xeon doesn’t, so I threw a cheap used Nvidia Tesla P40 in there to do the job. Also it can handle a lot more simultaneous streams than any iGPU I’m aware of.


It looks similar to Carlito, which is an OFL font similar to MS Calibri.


Are you talking about the processor's C-states? My old 6th-gen i3 spends most of its time idling around C8, averaging 5 W, which is really impressive. I suppose newer gens will be even more efficient.


Compared to Intel processors, modern AMD processors have much higher idle power usage due to their chiplet design, typically at least 20 W more.


C8 is a good state if you can get it. Intel is really good at this. They don't even bother energizing the L3 caches immediately when exiting deep package C-states. But there are lots of conditions that will inhibit C8, notably an Ethernet link on a NIC capable of PTP. This is why wireless is better.


No, unless you pay month to month. If you wait till Black Friday you can find some really good deals on sites like LowEndSpirit.


Would you kindly share how to do it?


Wait... this whole time reading this thread, I've been racking my brain for what Bitnami provided that building a Docker image couldn't (I used to use them before Docker came around; I never would have gotten Redmine up and going without them, the install seemed so foreign), because surely everyone knows how to build one from scratch, right?... right?

Is all the panic because everyone is trying to avoid learning how to actually install the pieces of software (once), and their magic (free) black boxes are going away?

I recommend VS Code remote connections and Docker builds via the Docker extension for rapid build-run-redo. Remember to make sure it works from scratch each time. You can automate them with Jenkins... (which came first, the Jenkins or the Jenkins Docker image?) I also recommend Platform One (you'll need a smart card), and reading the particular software's documentation ;)


That's super silly; it's so easy to make Docker images... especially if you have a fast connection, you can build a proper production-ready image in a few hours (e.g. 30-40 builds).


To add, it's really satisfying to build your own, push it and host it on your own internal repo that anyone in your group can use.

"Just go get the DEV image, Josh."


Not OP, but in general the process goes like this:

  - you pick a base image you want to use, like Alpine (small size, good security, sometimes compatibility issues) or Debian or Ubuntu LTS (medium size, okay security, good compatibility) or whatever you please
  - if you want a common base image for whatever you're building, you can add some tools on top of it, configuration, CAs or maybe use a specific shell; not a must but can be nice to have and leads to layer reuse
  - you build the image like you would any other, upload it wherever you please (be it Docker Hub, another registry, possibly something self-hosted like Sonatype Nexus): docker build -t "my-registry.com/base/ubuntu" -f "ubuntu.Dockerfile" . && docker push "my-registry.com/base/ubuntu"
  - then, when you're building something more specific, like a Python or JDK image or whatever, you base it on the common image, like: FROM my-registry.com/base/ubuntu
  - the same applies not just for language tooling and runtimes, but also for software like databases and key value stores and so on, albeit you'll need to figure out how to configure them better
  - as for any software you want to build, you also base it on your common images then
Example of cleanly installing some packages on Ubuntu LTS (in this case, also doing package upgrades in the base image) when building the base image, without the package caches left over:

  FROM ubuntu:noble
  
  ... (your custom configuration here, default time zones, shells etc.)
  
  RUN apt-get update \
      && apt-get upgrade -y \
      && apt-get install -y \
          curl \
          wget \
          net-tools \
          traceroute \
          iputils-ping \
          zip \
          unzip \
      && apt-get clean \
      && apt-get autoremove -y --purge \
      && rm -rf /var/lib/apt/lists/*
In general, you'll want any common base images to be as slim as possible, but on the other hand, unless you're a bank, having some tools for debugging is nice, in case you ever need to connect to the containers directly. In the end, it might look a bit like this:

  upstream image --> your own common base image --> your own PostgreSQL image
  upstream image --> your own common base image --> your own OpenJDK image --> your own Java application image
In general, building container images like this will lead to bigger file sizes than grabbing an upstream image (e.g. eclipse-temurin:21-jdk-noble), but layer reuse will make this a bit less of an issue (if you have the same server running multiple images), and it can also be very nice to know what's in your images and have them be built in fairly straightforward ways. Ofc you can make it way more advanced if you need to.


Thanks a ton for the lengthy explanation


If you use the non-LTS branch of Incus, it supports OCI containers. Have you tried that instead of running Docker inside of an LXC container?


That feature might be able to replace my docker usage on the host, so I don't need it and incus side by side. Which would be pretty neat.

Within the Incus dev environment container though I'm pretty sure I want to keep Docker, as I have a lot of tooling that expects it, for better or worse (docker compose especially). It also doesn't appear Incus integrates BuildKit etc., so even if I used it here, I'd still need something else to _build_ OCI images.


They have released a fix for that and now comments are properly displayed again

