Re: books on programming

From: Eugene.Leitl@lrz.uni-muenchen.de
Date: Wed Jan 03 2001 - 16:37:23 PST


"Robert S. Thau" wrote:

> Well, perhaps I should have said, "It's very hard to write a
> microkernel *Unix*...", but I thought that was clear from context.

My ignorance must be showing. I'm thinking about minimal cores, for
which *nix wrappers are available. QNX and L4/Fiasco both qualify.
 
> At any rate, the '80s style microkernel approach typified by Minix is
> a bad way to implement the Unix API because it is more work than the
> alternatives, and generally performs worse. Remember, that approach
> was not about shrinking the size of the total OS per se, but rather
> about moving the components of the OS (filesystems, device drivers,
> etc.) into separate address spaces that communicated by message
> passing, so that you could run, say, a suspect device driver while
> limiting the amount of damage it could do to the rest of the system.

Dunno, sounds good to me. And write the core kernel in hand-optimized
assembly, please. ~10 kBytes is very doable, though Forth OSes do it
in 2-4 kBytes. (And you can understand them! Easily!)
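The isolation argument quoted above is easy to see in miniature. Below is a hypothetical sketch (Python standing in for real address spaces, with processes and a pipe playing the roles of the driver and the kernel's IPC channel): a buggy "device driver" runs as a separate process, and when it crashes, the "kernel" side just observes a dead process instead of corrupting itself.

```python
# Sketch of the '80s microkernel idea: run a device driver in its own
# address space and talk to it by message passing, so a driver bug
# kills only the driver, not the rest of the system.
import multiprocessing as mp

def flaky_driver(conn):
    # "Device driver" server loop: answers requests over a message channel.
    while True:
        req = conn.recv()
        if req == "crash":
            raise RuntimeError("driver bug")  # dies in its own address space
        conn.send(("ok", req.upper()))

def kernel_demo():
    parent, child = mp.Pipe()
    p = mp.Process(target=flaky_driver, args=(child,))
    p.start()
    parent.send("read block 7")
    status, data = parent.recv()    # message passing instead of a procedure call
    parent.send("crash")
    p.join()                        # the driver dies...
    return status, p.exitcode != 0  # ...but the "kernel" keeps running

if __name__ == "__main__":
    print(kernel_demo())
```

The price, as the quoted text says, is that every one of those `send`/`recv` pairs replaces what a monolithic kernel would do with a plain function call.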
 
> The point wasn't to gain performance. In fact, proponents of this
> approach (at least the honest ones) were up front about the inevitable
> performance cost due to extra context switches (which would have

This particular lunatic here thinks context switches need not
be Nemesis, if the amount of context pushed can be minimized
(hardware stack machines!), and if you use several copies of the
hardware which retain state, pushing around only the one bit
that flags an individual engine as currently active. Only
doable as MISC, of course, where a CPU core costs ~10 kTransistors.

> been simple procedure calls in a monolithic kernel) --- but believed
> that the cost could be minimized, and the extra flexibility would be
> worth it.

I totally buy it. But it seems such a design requires geniuses.
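The cost being conceded above is easy to make concrete. Here is a rough, illustrative sketch (not a rigorous benchmark; function names are mine): the same trivial "service" invoked as an in-process procedure call, and then behind an IPC round trip to another process, which forces the context switches a microkernel pays for.

```python
# Rough comparison of the overhead argued about above: a monolithic
# kernel's procedure call vs. a microkernel's IPC round trip (two
# context switches per request). Timings are illustrative only.
import multiprocessing as mp
import time

def service(x):             # "monolithic kernel": a plain procedure call
    return x + 1

def server(conn):
    while True:
        x = conn.recv()
        if x is None:
            break
        conn.send(x + 1)    # the same service, behind message passing

def compare(n=20000):
    t0 = time.perf_counter()
    for i in range(n):
        service(i)
    call_time = time.perf_counter() - t0

    parent, child = mp.Pipe()
    p = mp.Process(target=server, args=(child,))
    p.start()
    t0 = time.perf_counter()
    for i in range(n):
        parent.send(i)      # each request is an IPC round trip
        parent.recv()
    ipc_time = time.perf_counter() - t0
    parent.send(None)
    p.join()
    return call_time, ipc_time

if __name__ == "__main__":
    call_t, ipc_t = compare()
    print(f"procedure call: {call_t:.4f}s   IPC round trip: {ipc_t:.4f}s")
```

The gap is typically orders of magnitude, which is exactly the cost the honest microkernel proponents admitted up front and hoped to minimize.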
 
> QNX has managed to make this approach work, in its niche, in part by
> multithreading the filesystem (unlike Minix) --- which is what makes
> it interesting. But a lot of very smart people have tried and failed.

The QNX filesystem runs outside kernel space, I thought.

> As to L4, I thought their Posix support was through L4Linux, which is
> Linux itself, modified to use L4 as a hardware-abstraction layer; a
> single body of code which runs in one kernel address space, with no
> internal message passing --- so it's not a microkernel OS in the same
> sense that Minix is. That's what the DROPS folks seem to be using, at
> any rate. But perhaps google's doing something different?

Um, I referred to Google as the magic which will pull up the actual
benchmarks. Of course they use custom kernels, but I very much doubt
RT-modified ones.
 
> > The advantages of tiny kernels and small contexts are obvious,
>
> Which has nothing to do with the aspects of Minix that I was
> criticizing --- the microkernel design doesn't make Minix as a whole
> smaller, just slower.

The red Tanenbaum book is rather cool, no objections. Being small enough to
understand is alone worth it.

> Is your desktop CPU-bound these days? Mine isn't... it spends most of
> its time waiting for something, typically the network.

I dunno, but Linux QoS behaviour is atrocious. Certain things rip large
holes into timing, with sync and console switching being the largest, but
there are others. Plus, there's a reason RTLinux is out there; it wouldn't
be if there weren't a niche. (And I wouldn't really call anything RT that
takes longer than a few hundred ns to react, but that's just me.)
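Those timing holes are easy to observe from user space. A minimal sketch (my own helper name, and the thresholds depend entirely on the machine and kernel): ask the scheduler for a short sleep and measure how late it actually wakes you. On a stock, non-RT kernel the worst-case overshoot is typically tens of microseconds or far worse, nowhere near the few-hundred-nanosecond reaction times meant above.

```python
# Measure scheduler wakeup latency: request a 1 ms sleep and record the
# worst overshoot past the deadline. On a non-RT kernel this jitter is
# what RTLinux-style approaches exist to bound.
import time

def worst_wakeup_latency(iterations=200, sleep_s=0.001):
    worst = 0.0
    for _ in range(iterations):
        t0 = time.perf_counter()
        time.sleep(sleep_s)
        late = time.perf_counter() - t0 - sleep_s  # overshoot past deadline
        worst = max(worst, late)
    return worst

if __name__ == "__main__":
    print(f"worst wakeup overshoot: {worst_wakeup_latency() * 1e6:.1f} us")
```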



This archive was generated by hypermail 2b29 : Fri Apr 27 2001 - 23:17:56 PDT