Hacker News | pinealservo's comments

If you mean wide use on a desktop, sure. QNX is definitely microkernel-based, though, and it's used widely in automotive head units and now Blackberry devices. Way nicer to write device drivers for than Linux!


I think you miss the point of a microkernel! The point is to keep all your "features" outside of it, and to use it only to implement the core set of functionality necessary to have secure shared access to the hardware. The general L4 concept has proven industry use, so there's no reason you can't take advantage of the already accomplished work on seL4 to bootstrap your own secure OS.

It's true that seL4 is not a magic safety or security bullet; in particular, their FAQ says that they haven't completed extending their proofs to DMA safety via VT-d or SystemMMU on ARM, so DMA-accessing services would have to be separately verified. And its particular feature set may not be appropriate for all situations. But if its API does work for you, it would be foolish not to at least consider using it.

It is really unreasonable to dismiss seL4 simply because it took them a lot of effort to create it. That effort is now done and can be re-used and magnified by further effort.


I didn't realize you were recommending using seL4; I assumed you meant they should have proven their own kernel using similar methods.


Scheme is absolutely a suitable replacement for Perl or Ruby, but the ease with which you can accomplish things will depend on the set of libraries available, which varies a lot between implementations.

Racket is not just Scheme anymore, but it does still support various versions of Scheme as well as a number of Scheme-based languages. It has a package management system and a fairly large set of packages for basic tasks. The core system, base libraries, and major libraries are well documented and it's got a great built-in IDE.

Guile also does more than just Scheme, but it's been the "official extension language of GNU" for a very long time, so it's got bindings to a lot of libraries and it's used in some interesting applications and tools.

Chicken Scheme also comes with a package system, 'eggs', that has a pretty nice collection of practical packages available for easy installation. You can run your programs via the interpreter, or you can compile them via the compiler, which goes through C so it's got great C interop.

There are a lot of other options with a variety of strengths and weaknesses, but those are good ones to look at for starting out and doing scripting duties.


For those interested in programming language history, I'm going to elaborate a bit on this topic:

The root of this particular family tree is essentially a fusion of the lambda calculus-inspired parts of Algol 60 and Lisp with some new ideas for syntax that were promoted by Peter Landin in his highly influential paper from 1966, "The Next 700 Programming Languages". He called this language ISWIM; it was studied extensively but never directly implemented. Landin was also associated with Dana Scott and Christopher Strachey in their foundational work on programming language semantics.

Robin Milner was interested in the relationship between the logic underlying the semantics of programming languages and the possibility of using computers to prove propositions in that logic (a logic developed by Dana Scott in support of his semantics work). At Stanford, he and a small team (including Whitfield Diffie, who later went into cryptography...) developed a system called Stanford LCF (for Logic of Computable Functions). This was an interactive system in which the user states a main goal in the logic, then splits it to subgoals and more subgoals until they can be solved directly. Proofs were represented directly by data structures and were built directly by the proof manipulation commands, which corresponded to the inference rules of the system.

Milner moved to Edinburgh in 1973, where he worked on a subsequent version of LCF, Edinburgh LCF. Stanford LCF was limited by the size of the proof data structures; for Edinburgh LCF Milner had the idea to forget the proofs, but store the results of them; i.e. the theorems. The proof steps would be performed, but not recorded. To ensure that theorems could only be constructed via valid proofs, a meta-language was developed with a static type system that would only allow data structures corresponding to valid theorems to be built. It also allowed more sophisticated proof development, since the meta-language was a full higher-order programming language modeled after Landin's ISWIM. Exceptions were included to deal with the possibility of failure of particular proof strategies. This version was implemented in Lisp.
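The key idea here, often called the "LCF approach", is still used in HOL and Isabelle today: make the theorem type abstract, so the only way to construct one is via functions that implement the inference rules. Here's a toy sketch of that idea in Python (my own illustration, not the actual LCF system; the rule names and the minimal implicational logic are mine):

```python
# LCF-style kernel sketch: any value of type Thm was necessarily
# built by a chain of inference-rule applications, so it represents
# a valid theorem. No proof object is stored, only the result.

class Thm:
    """A proven sequent (assumptions |- conclusion)."""
    __create_key = object()  # private token: only rules can construct a Thm

    def __init__(self, key, assumptions, conclusion):
        assert key is Thm.__create_key, "Thm may only be built by inference rules"
        self.assumptions = frozenset(assumptions)
        self.conclusion = conclusion

    def __repr__(self):
        return f"{set(self.assumptions) or '{}'} |- {self.conclusion}"

    # Inference rules of a minimal implicational logic:

    @staticmethod
    def assume(p):
        # Axiom:  p |- p
        return Thm(Thm.__create_key, {p}, p)

    @staticmethod
    def intro_impl(p, th):
        # From  A, p |- q  derive  A |- p -> q
        return Thm(Thm.__create_key, th.assumptions - {p},
                   ("->", p, th.conclusion))

    @staticmethod
    def elim_impl(th_imp, th_p):
        # Modus ponens: from  A |- p -> q  and  B |- p  derive  A u B |- q
        op, p, q = th_imp.conclusion
        assert op == "->" and p == th_p.conclusion, "rule does not apply"
        return Thm(Thm.__create_key,
                   th_imp.assumptions | th_p.assumptions, q)

# Proving the tautology  |- p -> p :
th = Thm.intro_impl("p", Thm.assume("p"))
```

Because the constructor is guarded, arbitrary user code in the meta-language can script proof search however it likes, yet can never forge a theorem.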

LCF spread to other universities; it was early on split into two parallel tracks, ML and Caml. Both have continued to be strongly associated with implementation of theorem proving systems. ML and LCF have become Standard ML and HOL; Caml has become OCaml and Coq. A lot of programming language research has also gone into ML and OCaml as languages in themselves due to their close association with the logic underlying semantics of programming languages.

Meanwhile, a language called PAL was developed in 1968 at MIT in response to ISWIM and Strachey's ideas on programming languages. David Turner, while starting his Ph.D. research at Oxford, got access to the PAL sources and used a simplified form of the language as the basis for his lectures on functional programming at St. Andrews; he called this language SASL. It was initially just a blackboard language, but a colleague surprised him by implementing it in Lisp.

In 1976, Turner changed the semantics of SASL from eager to lazy, based on a lazy version of Landin's SECD machine. Somewhat later in his career, he combined ideas from lazy SASL, a cut-down version of SASL called KRC, and the ML-originated Hindley-Milner type system to form a language called Miranda. And Miranda is one of the primary influences on Haskell.
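To make the eager-vs-lazy distinction concrete, here is a minimal sketch (my own illustration, nothing to do with Turner's actual implementation) of call-by-need in Python: a thunk delays a computation until its value is demanded, and memoizes it so it runs at most once. With lazy cons cells you can then build "infinite" lists, one of the signature idioms of SASL, Miranda, and Haskell:

```python
# A thunk: a suspended computation, evaluated on demand and cached.
class Thunk:
    def __init__(self, compute):
        self._compute = compute
        self._done = False
        self._value = None

    def force(self):
        if not self._done:
            self._value = self._compute()
            self._compute = None  # drop the closure once evaluated
            self._done = True
        return self._value

# An "infinite list" of naturals as a lazy cons cell: (head, lazy tail).
def nats(n=0):
    return (n, Thunk(lambda: nats(n + 1)))

# Demand only the first k elements; the rest is never computed.
def take(k, stream):
    out = []
    while k > 0:
        head, tail = stream
        out.append(head)
        stream = tail.force()
        k -= 1
    return out

take(5, nats())  # → [0, 1, 2, 3, 4]
```

Under eager semantics, `nats()` would recurse forever; laziness makes the definition usable because only the demanded prefix is ever evaluated.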

So, this leaves out a lot of other languages such as HOPE and lazy-ML that also were developed in this space. But what's interesting to me is how all these languages, from ML to Haskell, are so strongly related to Algol and thus the Pascal, C, Java, etc. languages that are more familiar to industry.

That got a lot bigger than I intended; I hope someone takes some interest from the digression.


I thought that you might be one of my college profs, but then realized that he would never apologize for a long post.


Indeed, thanks for taking the time to write that up.


Very interesting, thanks!


excellent, thank you for the insight


Here are some arguments from the guys who did something like this before for Oberon: ftp://ftp.cis.upenn.edu/pub/cis700/public_html/papers/Franz97b.pdf

To summarize:

1. Much smaller representation, which will mean faster download times.

2. More control-flow structure retained, which means less needs to be rebuilt by JIT compilers.

3. As a corollary of #2, we get easier and more helpful 'decompiling' of code blobs.

4. Another corollary of #2: easier analysis for safety properties.


There are definitely reasons for manual memory management to still be a thing, but if a GC disposes of memory "too soon", i.e. when there are still live references to the memory, then it is a bug in the GC or in some FFI code that didn't manage interaction with the GC correctly.

GC is generally a good thing precisely because it prevents use-after-free errors and enables programming at a higher level without worrying about memory management. There is often a performance cost to this, it's true, but for large classes of programs it's a cost that's worth paying.


The Peano Axioms are not about defining what 'addition' and 'multiplication' are; they're about presenting a model of the natural numbers along with the operations of addition and multiplication in first-order logic. This makes them great fodder for worksheets in proof-writing courses (and I did glance through your worksheet; it looks like a great resource!), but doesn't necessarily expose the standard mathematical notion of what 'addition' and 'multiplication' are! If you ask a random mathematician out of the blue what the axioms of arithmetic are, my guess is that you won't often get the Peano axioms as an answer, but rather the standard algebraic ring or field axioms.

Although it seems very common to define multiplication as repeated addition in dictionaries and materials for kids, it is in fact only a valid definition for a rather narrow conception of numbers, i.e. the natural numbers. It doesn't work without exceptions for the Integers, the Rationals, or the Reals. Considering that we want students to eventually be able to deal with the Real numbers, I think it would be better to avoid defining multiplication to be something that doesn't work outside of the Naturals! We would be in quite a pickle trying to explain the calculation of the area of a circle in terms of repeated addition...

By calling what they're teaching the 'repeated addition strategy' it seems like they've thought about this; it is indeed a strategy for calculating a product of two natural numbers. But that makes marking the answer wrong all the more perplexing, because both repeated addition schemes are equally valid strategies for computing the same product, by virtue of the commutative property of multiplication. That property is generally an axiom and not a derived theorem in the more general case, because multiplication is not generally defined in terms of repeated addition; in general, the axioms only say that multiplication distributes over addition: https://en.wikipedia.org/wiki/Field_(mathematics)
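For the natural numbers specifically, the Peano-style recursive definitions can be sketched in a few lines; the function names here are my own, not from any curriculum. Multiplication really is repeated addition in this setting, and the two "schemes" from the worksheet are just the two orders of arguments:

```python
# Peano-style arithmetic on the naturals only: addition is defined by
# recursion on the second argument, multiplication by repeated addition.

def add(m, n):
    # add(m, 0) = m ;  add(m, succ(n)) = succ(add(m, n))
    return m if n == 0 else add(m, n - 1) + 1

def mul(m, n):
    # mul(m, 0) = 0 ;  mul(m, succ(n)) = add(mul(m, n), m)
    return 0 if n == 0 else add(mul(m, n - 1), m)

# Commutativity is a *theorem* here (provable by induction), not part of
# the definition: mul(3, 4) and mul(4, 3) run different repeated
# additions (3+3+3+3 vs 4+4+4) yet agree on the result.
```

Note how the recursion bottoms out at 0 in both cases; nothing in these definitions extends to the rationals or reals, which is exactly the limitation discussed above.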

My kids are actually going through this phase of their curriculum right now, and I know that here, at least, they do teach the commutative property of multiplication fairly quickly after multiplication is introduced. So I'm not really sure what the pedagogical point of grading this assignment that way would be, but perhaps there is one. Fortunately my kids have not run afoul of this kind of thing.


That's a little bit ironic, considering the hijinks Steve Jobs and Steve Wozniak got up to before they started selling computers: http://www.theatlantic.com/technology/archive/2013/02/the-de...


I used QNX extensively for a few projects at work, as it has historically been used quite a bit in the "automotive infotainment" world. I was really impressed by the overall clean architecture and documentation of the system, and I much prefer implementing device drivers in the QNX microkernel environment over implementing them in Linux, where interfaces are under-documented and always changing, and it's quite easy to lock up the entire system during driver development.

An interesting data point, though: QNX had a very clean and modularized 'microkernel-like' network stack called 'io-net'. But due to throughput issues in some situations, they switched to a new architecture a few years ago called 'io-pkt'. This is essentially the kernel networking code from BSD transplanted to a process in QNX; one advantage of this is that there's a large stock of drivers available to port and a lot of people are familiar with the BSD networking model, but some of the lesser-used corners of it weren't fully debugged when I was doing network protocol hacking, and in general it made me sad.

Anyway, you can definitely run full desktop environments on it, and you can especially run a full tablet environment if you buy a recent Blackberry tablet, as RIM now owns QNX and uses it as their latest Blackberry OS. This was, unfortunately, a step backwards in their openness and embrace of Open Source.


I have been studying AsciiDoc lately. It turns out to be a really hacked-together macro language for creating "ad-hoc lite markup" to "SGML/XML-based semantic markup" conversions. The core idea and the default ad-hoc lite markup are both pretty nice, but the implementation leaves a lot to be desired. Asciidoctor is an alternative implementation which is used on GitHub and in other places, but I believe it interprets AsciiDoc as more of a "fixed format" like Markdown than the original reconfigurable macro-based approach.

