The matter of systems programming seems to be rather difficult to discuss. Much more accessible is the discussion of the web and web browsers, which are the megaliths of the modern web.
If you want to introduce something new, you can either develop a new web (e.g. Gopher) or implement the existing web standards. The latter is almost impossible: only large companies with hundreds of developers can even keep up with the ever-increasing complexity of web standards, and they are caught in what I’d call a vicious circle of standards explosion, leading to an unavoidable “heat-death” of the modern web. All that remains is to select a subset and implement it well.
The same thought applies to systems programming and to deciding which subset you want to support. Unix is very popular because it is very versatile, and any attempt at an alternative fights a steep uphill battle. Given these thoughts, I see the OpenBSD approach as the best, given its radical nature in erasing legacy interfaces and code. But even OpenBSD cannot gain much traction, because it does not keep up with the functionality the Linux kernel offers. Faced with that lack of mindshare, people say “to hell with it” and just stick with Linux, especially companies. The only places where alternatives to Linux succeed are niches they fill well.
Although I think I agree with you wrt web browsers, depending on what you want to do, I don’t know whether you would necessarily be stuck playing catch-up with the mainstream browsers. I’m constantly impressed by what the NetSurf folks have achieved, for example: although it doesn’t have feature parity with (say) Firefox by a long shot, it can render a lot of pages just fine, and very fast.
My biggest gripe with Web standards is the brittleness of JavaScript.
HTML and CSS are designed to fail gracefully, so browsers can implement a subset and ignore the rest. In contrast, JS executes sequentially and aborts when it encounters an unhandled error (such as a call to an API that isn’t implemented). The abort is thankfully limited to the current event handler, but it can still cripple unrelated functionality on a lot of sites.
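A minimal sketch of this failure mode (all names below are made up; this simulates, in plain JS, a browser that only implements a subset of the standard):

```javascript
// Hypothetical partial browser: the "fancyFeature" API was never implemented.
const partialBrowserApi = {};

function initCarousel() {
  partialBrowserApi.fancyFeature(); // throws TypeError: not a function
  console.log("carousel ready");    // never runs: JS aborts at the throw
}

function initSearchBox() {
  console.log("search box ready");  // unrelated feature, unaffected
}

// The event loop confines each abort to its own handler, roughly like this:
const results = [];
for (const init of [initCarousel, initSearchBox]) {
  try {
    init();
    results.push("ok");
  } catch (e) {
    results.push("aborted: " + e.constructor.name);
  }
}
// results is now ["aborted: TypeError", "ok"]
```

Everything after the throw inside `initCarousel` is skipped, which is the contrast with HTML/CSS: an unknown tag or property is simply ignored, while an unknown API call kills the rest of that handler.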
An alternative execution model, like a term rewriting system or logic programming, would have been much more robust; but I fear that ship has sailed :(
This is a great idea. I’m exploring term rewriting in my PhD (nothing to do with the web unfortunately) and I can definitely agree that this would have been very interesting to try.
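To make the idea concrete, here is a toy sketch (in JavaScript, with made-up rule names) of how a rewriting-style model could degrade gracefully: a term whose head has no rule is left partially reduced instead of aborting the whole computation, much like browsers ignoring unknown HTML tags.

```javascript
// Toy term-rewriting evaluator. Terms are either strings (atoms) or
// arrays of the form [head, ...args].
const rules = {
  bold: (args) => "<b>" + args.join("") + "</b>",
  // Deliberately no rule for "blink": an "unimplemented feature".
};

function rewrite(term) {
  if (typeof term === "string") return term; // atoms rewrite to themselves
  const [head, ...args] = term;
  const reducedArgs = args.map(rewrite);     // rewrite subterms first
  const rule = rules[head];
  // Unknown head: keep the reduced contents rather than failing,
  // analogous to how browsers render the children of unknown tags.
  return rule ? rule(reducedArgs) : reducedArgs.join("");
}

rewrite(["bold", "hi ", ["blink", "there"]]); // "<b>hi there</b>"
```

The point is not this particular encoding, but the evaluation order: robustness falls out of "no applicable rule" being an ordinary, local outcome rather than an exception that unwinds the stack.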
Meh.
I have used and written systems that do not have the megalith.
Want a file? Write your own file system first.
Guess what. It’s going to be a really sucky file system.
Want to debug or trace anything? Write your own tracing infrastructure.
Want to talk to the internet? Wire in something like lwIP, but be prepared to hand-carve a lot of the network interfaces in raw C. And none of the tools that come with the megalith are there.
The list is huge and endless.
The megalith isn’t a stone around our necks; it’s a vast, ever-growing giant from whose shoulders we leap upwards.
You may argue the weight of it will drown us.
But with things like OpenEmbedded, you can pull in just what you need.
“…horse-and-carriage arrangement is so stable…”
We build on abstractions. We throw away stuff that’s clearly unconnected. This is true of engineering, it’s true of hardware design (we use standard logic blocks because making silicon do addition is hard to work out otherwise), and it’s true of software.
The box is not a problem. It’s the past we need to make the future.