The Object-Oriented Amiga Exec (1991) (archive.org)
72 points by mpweiher on Feb 20, 2021 | 18 comments


It strikes me as odd to describe Exec as object oriented; to me it looked like a fairly elegant, minimal OS "micro-kernel" that's written in C and is directly connected to application code through jump tables, instead of having a user- vs kernel-mode dividing line.

Jump tables could be described as "dynamic dispatch", and building nested C structs could be described as "inheritance" or "composition", but come on, by those standards nearly all C code bases would be "OOP". Most 8-bit home computer systems had jump tables into ROM too, but nobody would call those operating systems "object oriented".
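To make the analogy concrete, here is a minimal sketch (in Python, with made-up function names) of jump-table-style dispatch: callers go through fixed slot offsets, so swapping a table entry changes behavior at run time without any language-level OOP.

```python
# Hypothetical sketch: a "library" as a table of functions dispatched
# by slot offset, loosely analogous to an Amiga library jump table.

def open_file(name):      # slot 0
    return f"open:{name}"

def close_file(handle):   # slot 1
    return f"close:{handle}"

# The "jump table": callers only know slot offsets, not implementations,
# so patching an entry redirects every caller at run time.
jump_table = [open_file, close_file]

def call(table, offset, *args):
    return table[offset](*args)
```

A caller would then use `call(jump_table, 0, "readme.txt")` rather than naming the function directly, which is the whole trick: the binding happens through the table, not at link time.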

There was an OOP system on AmigaOS 2.x and later called BOOPSI (Basic Object Oriented Programming System for Intuition), but that was only for the user interface system, and to be honest, it was exactly what one would expect when some API designers get infected by the early 90's OOP hype.

PS: looking at the date of that article, I guess the author was also infected by the early 90's OOP hype, when everything had to be OOP, whether it made sense or not ;)


Wasn't it written in BCPL?


Only AmigaDOS (the I/O layer that was integrated at the last minute because there wasn't enough time before launch) was written in BCPL, and it was a bit of an alien to the rest of AmigaOS.

For Exec, I'm actually not sure how much was written in C, and how much was 68k assembly, most likely a mix of both.


Exec was written in assembly. http://wandel.ca/homepage/execdis/exec_disassembly.txt

Intuition was written in C. Gfx was a mix of assembly and C.


The headlined article credits the Amiga ROM Kernel Manual: Exec. If you read that book, you'll find that there is indeed a user-mode/supervisor-mode dividing line. What you mean is that TRAP instructions aren't used as the system call interface, which isn't the same thing as not actually having a dividing line. The headlined article does in fact make this very point, however.

No, Exec was not an example of the micro-kernel paradigm. There was no inter-task memory protection, one of the foundations of the micro-kernel model, underpinning the idea that a bug in one part of the system cannot overwrite other parts of the system, which is clearly not the case for Exec. Again, the headlined article actually points out this lack.

Exec was a fairly clean design, with good modular cohesion. But the 1990s thinking evident in the article is to try to contrast it with OS/2 and others, along with the silly "Whatever my operating system doesn't do is obviously something that no-one needs on a personal computer." stuff, instead of recognizing (with 5 years of hindsight even at the time) that Exec was yet another case study of the same design changes happening in multiple systems.

DOS+Windows 2.0 and OS/2 1.0, both in 1987, also exemplified the notion of user-mode-callable function libraries for the system API, directly callable by languages such as C and Pascal (and MS Fortran), instead of special machine instructions as in MS/PC/DR-DOS, which were left to the runtime library implementors, or even to the applications programmers, to wrap in library functions.

Those systems also exemplified the idea that some of this library code (such as the MSG library on OS/2, for example, which is implementable entirely on top of the Control Program API) was actually a user-mode layer on top of the kernel API and didn't need to be kernel code itself; something that the headlined article tries to make out to be a contrast with other systems.

Indeed, at the time that the headlined article was published in 1991, Windows NT had already been implemented with an Object Manager as its name-to-handle mechanism. And the fact that it might be useful to have and to expose the common library functions that the system uses for manipulating its data structures, as Exec did for its linked lists, is evident in the existence of all of the RtlXXX() functions in the Native API.

These ideas weren't all necessarily new in the 1980s, as the Big Iron world had been there before as had the Unix world in the 1970s. What was happening was that they were catching on. Ironically, Linux took a step in the reverse direction when it came along, as its kernel API is (with a small exception) once again officially the special machine instructions rather than a pre-supplied library of mid-/high-level language callable functions. There are cases over the years, especially for languages other than C and C++, where if there had been a linux-system.so or some such, not tightly coupled to the (at least two) C runtime libraries, life would have been a lot easier.


Amiga Exec is not object-oriented, at least not if you think Smalltalk is OO. There is no dynamic dispatching involved when you work with structures like Message or MsgPort.

That said, there are places where dynamic dispatching is used in Amiga Exec. For example, communication with device drivers happens with commands sent in messages.

Anyway, the AmigaOS was absolutely remarkable. An operating system for home computers with concepts such as preemptive scheduling, dynamically loadable libraries, loadable device drivers with a unified API based on message passing, and loadable filesystems (called "handlers"). Ironically, the least mature parts of the OS were those responsible for graphics and UI.


Is Dynamic Dispatching supposed to be the holy grail of OOP per Smalltalk’s ideal design of message passing?

An example of Dynamic Dispatch: say you create a Person class, and you give that class the method ‘jump’.

Then, you create say 30 persons, one for each student in your classroom. Then, you collect them all into a list. And you iterate through that list, and tell each person to jump.

So in Python, it might be something like:

  class Person:
      def jump(self):
          print("jump!")

  John = Person()
  Jane = Person()
  # then create 30 of them students
  student_list = [John, Jane] + [Person() for _ in range(28)]

  # Then make them all jump.
  for kid in student_list:
      kid.jump()

So, per my understanding, this is what Dynamic Dispatch is. It allows you to iterate over a collection of objects and send the same command to each one, with each object deciding at run time how to respond.

Was this the genius innovation to OOP, as opposed to procedural languages before it? And can something like this be simulated in other imperative or functional type of programming languages?


> And can something like this be simulated in other imperative or functional type of programming languages?

im pretty sure it can be, but without it being built in, it probably would be a chore
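for illustration, here's one way it can be done by hand (a hedged sketch in Python, all names invented): plain dicts stand in for "objects", each carrying its own method table, so the call site looks the method up at run time instead of relying on language-level classes.

```python
# Simulating dynamic dispatch procedurally: each "object" is a plain
# dict that carries its own method table, so different objects can
# respond differently to the same selector.

def person_jump(self):
    return f"{self['name']} jumps"

def athlete_jump(self):
    return f"{self['name']} jumps high"

def make_person(name):
    return {"name": name, "jump": person_jump}

def make_athlete(name):
    return {"name": name, "jump": athlete_jump}

def send(obj, selector, *args):
    # Look the method up in the receiver's own table at call time --
    # the essence of dynamic dispatch.
    return obj[selector](obj, *args)
```

Iterating over a mixed list and calling `send(kid, "jump")` then picks the right implementation per object, which is exactly the chore a built-in object system automates.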

> Was this the genius innovation to OOP, as opposed to procedural languages before it?

i think so (if you consider smalltalk to be oop, as opposed to c++ or simula)...

afaik, the other goal enabled by message-passing was something akin to the actor model... but smalltalk largely stalled after st80....


It's been a while and I was just a dabbler at the time - but from what I recall, Exec at least has some object-oriented features, or at least influences?

This is all hazy, but wasn't there some sort of base "node" struct which had various extensions depending on the entity? At least I recall lists of such nodes where some operations could work with "base nodes" while others required specific "sub-type" nodes.

At least I remember finding the model very confusing initially - a lot of the technical documentation seemed quite abstract to me as a C novice at the time.


Most Exec data structures were organized as what's called today "intrusive doubly linked lists" where the list node struct is placed directly in front of the actual "payload data" in the same C struct.

One could call that "polymorphism" and "inheritance", but IMHO it's a bit of a stretch.
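For the curious, the pattern can be sketched like this (a loose Python analogue of what Exec does with C structs; `FindName()` is the real Exec routine being imitated here, everything else is invented for illustration): payload types begin with the generic node fields, so generic list code works on any of them.

```python
# Exec-style intrusive list, sketched in Python: every payload type
# "begins with" the generic node fields, so generic list routines can
# operate on any of them.

class Node:
    def __init__(self, name):
        self.succ = None    # next node in the list
        self.pred = None    # previous node in the list
        self.name = name

class Task(Node):           # a "sub-type": node fields first, payload after
    def __init__(self, name, priority):
        super().__init__(name)
        self.priority = priority

class List:
    def __init__(self):
        self.head = None

    def add_head(self, node):
        node.succ = self.head
        if self.head:
            self.head.pred = node
        self.head = node

    def find_name(self, name):
        # Generic routine: works on any Node sub-type, much like
        # Exec's FindName() walks a list of bare node headers.
        n = self.head
        while n:
            if n.name == name:
                return n
            n = n.succ
        return None
```

The generic routines only ever touch the `Node` fields, while callers that retrieve a specific node can use the sub-type's payload, which is the "polymorphism by struct prefix" being described.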


> Exec's task scheduler is not as elaborate as OS/2's, which is rumored to have been lifted bodily from IBM's VM/370 mainframe operating system

I have never heard that rumour before. I wonder how true it is.


It isn't, as that sort of scheduler was well known and covered in computer textbooks of the time. What that article says about the scheduler is rubbish, by the way. Dynamically adjusting priorities according to recent CPU use was also a standard textbook notion. It isn't "magic".

For example: a contemporary textbook to that 1991 BYTE article is H. M. Deitel's Operating Systems, 2nd Edition, published in 1990. It deals with feedback scheduling and things like multilevel queues and dynamic priority adjustment in chapter 10. And OS/2 (1.x), including its scheduler, is a case study in chapter 23.


I doubt it. Since this article predates the Microsoft-IBM split, I figured it was safe to consult my trusty Inside OS/2, by Microsoft’s Gordon Letwin:

The scheduler's dispatch algorithm is very straightforward: It executes the highest-priority runnable thread for as long as the thread wants the CPU. When that thread gives up the CPU--perhaps by waiting for an I/O operation--that thread is no longer runnable, and the scheduler executes the thread with the highest priority that is runnable. If a blocked thread becomes runnable and it has a higher priority than the thread currently running, the CPU is immediately preempted and assigned to the higher-priority thread. In summary, the CPU is always running the highest-priority runnable thread.

The scheduler's dispatcher is simplicity itself: It's blindly priority based.

That doesn’t sound like something that required bodily lifting from a mainframe OS.


It isn't. This is simple two-level scheduling that could be found in textbooks of the time.

You've quoted a discussion of the low-level scheduler, a.k.a. dispatcher. If you read just 3 pages on from that quotation, you'll find a description of the mid-level scheduler that dynamically adjusts priorities (in two priority classes) according to whether threads consume their entire quanta; a rather simpler dynamic variation mechanism than those of some other contemporary operating systems, which did things like calculating decayed estimates of recent CPU usage over the past few seconds, and certainly not "magic".
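The two levels described above can be sketched roughly like so (a hedged illustration in Python; the field names and the exact adjustment rule are invented here, not OS/2's): the dispatcher blindly picks the highest-priority runnable thread, while a separate mid-level pass adjusts priorities based on whether a thread consumed its whole quantum.

```python
# Low-level scheduler (dispatcher): blindly priority based -- always
# pick the highest-priority runnable thread.
def dispatch(threads):
    runnable = [t for t in threads if t["runnable"]]
    return max(runnable, key=lambda t: t["priority"]) if runnable else None

# Mid-level scheduler: demote threads that burned their whole quantum,
# boost threads that blocked early.  (Illustrative rule, not OS/2's.)
def adjust_priorities(threads, lo=1, hi=31):
    for t in threads:
        if t["used_full_quantum"]:
            t["priority"] = max(lo, t["priority"] - 1)   # CPU hog: demote
        else:
            t["priority"] = min(hi, t["priority"] + 1)   # I/O bound: boost
```

Nothing here needs a decayed CPU-usage estimate, which is the point: a simple quantum-consumption rule already gives the dynamic variation the article calls "magic".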

(Some operating systems have another level on top of those low-level and mid-level schedulers: high-level schedulers that deal in things like scheduling jobs in batch, or performing whole-process swapping. OS/2 doesn't have that.)


TIL Carl Sassenrath is at Roku.


Is he still actively involved with REBOL?

Is REBOL - er, RED, still an active project? Dialecting sounded like an early attempt at a generalized DSL model, but I never had time to follow the details.


Rebol2 is de-facto abandoned, Rebol3 has split into divergent community forks [1], and Red has suffered setbacks from organizational issues while trying to deliver its vision. Carl is not involved in any of the projects.

In the Rebol family, "dialect" is an umbrella term for embedded DSLs and micro-formats [2]; nothing new compared to Lisp, Forth, and Logo, from which these ideas were borrowed.

[1]: https://stackoverflow.com/a/31517518/5889272

[2]: https://news.ycombinator.com/item?id=24083108


Pretty sure Red is...

https://www.red-lang.org/



