@jens There's obviously a need to communicate between userspace and the kernel somehow, but now imagine the network subsystem paying that same communication cost every time it works with the memory management or filesystem subsystems.
Maybe the problem isn't microkernels, but the in-built assumptions kernel devs make about what kind of environment their code is running in.
There's no question a microkernel will always take longer to satisfy a service request than a monolithic kernel. So, instead of building APIs composed of lots and lots of tiny functions, you create requests that aggregate common functionality.
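To make that concrete, compare these two shapes of API (a sketch; the names are invented for illustration, not taken from any real microkernel):

```c
#include <stdint.h>

/* Chatty interface: four separate trips across the protection
 * boundary just to read a range of bytes from a file. */
int fs_open(const char *path);
int fs_seek(int fd, uint64_t offset);
int fs_read(int fd, void *buf, uint32_t count);
int fs_close(int fd);

/* Aggregate interface: one request carries the whole intent,
 * so the boundary is crossed once. */
int fs_read_at(const char *path, uint64_t offset, void *buf, uint32_t count);
```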
Prior art: X11 works this way (Xlib queues "lots of little functions" into work units big enough that a network request's overhead is well amortized), 9P works this way (Plan 9 folds the seek and read/write interfaces into a single 9P transaction to minimize overhead), the L4 programming interface frequently works this way (just about every message exchange is a send-then-receive or a receive-then-send, which is one reason it outperforms Mach), etc.
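9P makes the aggregation easy to see. Here's roughly what a 9P2000 Tread message carries (a C rendering for readability; the actual wire format is packed little-endian fields, not a struct):

```c
#include <stdint.h>

/* The offset rides along with every read, so there is no
 * separate seek round trip at all. */
struct Tread {
    uint32_t size;    /* total message length in bytes */
    uint8_t  type;    /* message type (Tread) */
    uint16_t tag;     /* pairs this request with its Rread reply */
    uint32_t fid;     /* handle to the open file */
    uint64_t offset;  /* where to read: seek folded into the request */
    uint32_t count;   /* maximum number of bytes wanted */
};
```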
And, speaking of 9P: although Plan 9 is not technically a microkernel, it is not exactly a monolithic kernel either. It sits somewhere between the two extremes, and a typical Plan 9 environment makes extensive use of microkernel-like functionality, since the vast majority of its system services are provided by user-space daemons and applications. ACME, the GUI itself, Plumber, etc. are all user-space, not kernel.
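One small illustration of how far that goes: inside a rio window, even /dev/mouse is served by rio itself, an ordinary user-space program, yet client code is just open and read (a sketch in Plan 9 C):

```c
#include <u.h>
#include <libc.h>

void
main(void)
{
    char buf[49];   /* one event: 'm' + x, y, buttons, msec fields */
    int fd;

    fd = open("/dev/mouse", OREAD);
    if(fd < 0)
        sysfatal("open /dev/mouse: %r");
    read(fd, buf, sizeof buf);   /* blocks until the next event */
    close(fd);
    exits(nil);
}
```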
A lot of R&D has gone into making microkernel environments responsive. If people can build hard real-time systems on them, including medical devices that keep patients alive, then the latency of invoking other system services is a problem that is now tolerable, if not solved.
@jens @espen Contrary to what my post might suggest, I'm not actually a microkernel apologist. My preferred architecture is exactly what Plan 9 uses: a hybrid design where you get the best of both worlds, leaving it up to the system integrator to decide what resides in the kernel and what doesn't. Putting something in the kernel should be a performance optimization, not a hard requirement for basic operation.
Linux modules are a great compromise here, but I think their potential was never truly realized. I agree with Rob Pike that the basic philosophy behind the Unix I/O primitives is under-utilized. Yeah, we now have /proc and /sys, but ... that's it?
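And it's such a pleasant interface when it does exist. On Linux, the load average is one fopen away, with no dedicated system call needed by the caller:

```c
#include <stdio.h>

/* The file-as-interface idea in practice: read /proc/loadavg
 * with ordinary stdio instead of a special-purpose syscall. */
int main(void) {
    char line[128];
    FILE *f = fopen("/proc/loadavg", "r");
    if (!f) return 1;
    if (fgets(line, sizeof line, f))
        fputs(line, stdout);
    fclose(f);
    return 0;
}
```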
I think it's obscene that Linux has over a thousand system calls when I've used, maybe, a total of 40 system calls in all my code since 1995, when I switched to using Linux as my desktop for the first time.
Old man shouting at clouds again, I suppose.
And though I'm actually doing only userspace things, for a bunch of reasons, a kind of planet-scale 9P-ish thing is sort of a goal.
My focus is currently on network and security topics. @alcinnz is, in a roundabout way, closer to the file-like protocol. It'll have to be a combination of both to become useful.
TL;DR: a better file-like API would probably make a lot of application code simpler and safer.
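For anyone who hasn't seen it, this is the shape of Plan 9's file-like network API; a minimal sketch, with error handling mostly omitted:

```c
#include <u.h>
#include <libc.h>

/* Connecting is just opening and writing files: no socket(),
 * bind(), or connect() system calls in sight. */
void
main(void)
{
    char dir[40], path[64];
    int ctl, n;

    ctl = open("/net/tcp/clone", ORDWR);    /* allocate a connection */
    if(ctl < 0)
        sysfatal("clone: %r");
    n = read(ctl, dir, sizeof dir - 1);     /* returns its number, e.g. "4" */
    if(n <= 0)
        sysfatal("read: %r");
    dir[n] = 0;
    fprint(ctl, "connect 192.0.2.1!80");    /* drive it with plain text */
    snprint(path, sizeof path, "/net/tcp/%s/data", dir);
    /* ...open(path, ORDWR) and read/write it like any other file. */
    exits(nil);
}
```

And because the whole interface is files, it composes: import another machine's /net into your namespace and your connections originate from there.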
Still, that's only a factor of two or three, not an entire order of magnitude. I'd argue that my point still stands.