A rose by any other name would be known as a runtime platform or environment. See also Erlang/OTP.
A hosted operating system punts on all the distracting/interesting/useful parts of OS stuff in order to focus on the aesthetic of the programming environment.
This might be an appropriate description of a virtual machine.
Inferno is a distributed operating system.
It’s rather different from a mainframe OS, such as Windows or Linux: all services of a network are served to the user with a uniform interface and programs can be distributed over the network transparently, moving data or computations as required.
The network as a whole is your computing device.
Now, once you understand this, you might wonder if it’s really so important who writes the drivers for a single node.
Turns out it doesn’t matter that much: as long as the interface is uniform the system works as a whole.
Now, I agree that an OS running on top of an OS is weird. Even more so an OS running on top of a browser!
But sadly, it’s what your cloud VPSes do. And what any OS that targets WASM wants to do.
Guess what? Inferno did both things several years ago!
And better, with a uniform interface.
Try this on your mainframe of choice! ;-)
I’m not sure what you mean by mainframe OS, though it seems like it’s an attempt at derision?
Virtual machines like those you might rent from a cloud provider are much more a partitioning technology these days. Though there is another operating system running on the same hardware (in the “hypervisor” role), the guest operating system is also interacting directly with quite a lot of CPU management, and increasingly other hardware devices via passthrough mechanisms. Critically, that same software can also run outside the emulation/hypervisor environment: it can take control of an entire computer, providing services to other software, and is thus an operating system.
In the case of software that isn’t able to (or perhaps even intended to) run on a computer directly without other scaffolding, it’s really not an operating system. If it’s really a network service or programming environment built to run on top of other things (“hosted”, if you will!) it would be less confusing to call it that. There’s obviously no shame in building an amazing new platform for constructing distributed applications – it’d just be best to avoid hijacking existing terminology while doing so.
I’m not sure what you mean by mainframe OS, though it seems like it’s an attempt at derision?
Absolutely not!
I was trying to distinguish the OSes that are designed for a single computer (thus in the ancient and noble tradition of mainframes) from the OSes that are designed for a network of heterogeneous computers.
When we talk about distributed operating systems, the focus is not on controlling the hardware of a single PC, but on controlling a whole network.
In the case of Inferno, you can run it on bare metal, on Windows, on Linux, on Plan9, on some game platforms and on IE8 (if I remember correctly).
This covers a variety of architectures that few mainstream OSes can match.
Without a hardware emulator.
it’d just be best to avoid hijacking existing terminology while doing so
I’m afraid Inferno was defined as a distributed operating system before “existing terminology” was conceived.
So one might argue that existing terminology was designed by people either ignoring previous art or meaning something different.
In both cases, I will keep calling Inferno an OS.
I’m afraid Inferno was defined as a distributed operating system before “existing terminology” was conceived.
I don’t think that’s true at all. Even the Wikipedia page for Inferno suggests it was released in 1996, and links to at least one paper from the authors from around that time. I think we’d kind of settled on an operating system being a body of software chiefly responsible for controlling an actual machine and providing services to software and users by then.
By way of contrast, the Amoeba distributed operating system is another attempt (and seemingly prior to Inferno!) that is both distributed (providing network transparency as a core system primitive) and an operating system (Amoeba includes a microkernel base which runs on all of the machines throughout a deployment). Sprite is another similar project, also late 1980s to early 1990s, in which some level of network transparency was achieved in addition to the base job of an operating system: controlling the machine.
I’m not sure if this counts as an objection. :-)
Fine, Amoeba and Sprite are distributed operating systems.
Plan 9 is a distributed operating system too. So is Inferno, which can run on bare metal AND hosted by another OS.
You mean different from a centralized mainframe. The CTOS system looks pretty close to a distributed OS. Customers were loving it, too.
As far as an OS on an OS goes, IBM invented that (I think…) with VM/370 in the 1970s. VM could even run itself, mainly for debugging. Mainframes also supported multiple users and metered CPU/memory. The cloud reinvents mainframes on cheaper hardware with more flexible software. The core concepts were a mainframe advantage, though.
Right, it definitely doesn’t feel like an especially appropriate use of the term until it’s also in control of the actual machine. If it’s a runtime environment and library, it seems clearer to just call it that.
The point at which services are provided to programs by “the operating system” versus other programs present but not considered part of the “operating system” is blurry, and getting blurrier all the time in a distributed world. “Control of the actual machine” sounds like the definition of a kernel, which can certainly be part of an operating system, but isn’t the whole thing.
tl;dr: what you’re referring to as Linux is actually GNU/Linux…
Not all of the control of the machine is in the hands of the kernel in every operating system. For instance, in illumos we perform some amount of interrupt configuration from a privileged process running in usermode (intrd(1M)) – but it’s still part of the operating system.
Words have a meaning, and I think eroding the term operating system does us a terrible disservice. There are already loads of other terms that better describe a program that runs on top of an operating system and provides services to other programs, whether over local IPC mechanisms or through some network protocol.
It’s true that a distribution of Linux may include quite a lot of software that isn’t really “operating system” software per se; e.g., chat clients or drawing software. But if your software doesn’t have the ability to take a computer from cold start up to running some other workload, it’s really not, itself, an operating system.
I think eroding the term operating system does us a terrible disservice.
I’m totally for a precise and clear technical language!
But why do we write hardware emulators like qemu, Xen, VirtualBox and so on… if we cannot run operating systems on them?
And if what runs on qemu is an operating system when it provides the user all the tools she needs, why is software that does the same but runs without the hardware emulator so different?
Because they mention Inferno on the description they probably mean that in addition to running on ‘bare metal’ it can also run in an emulator inside another OS. Same as Inferno.
While I agree that unix is the best tag you have here, it’s rather inappropriate for something related or derived from Plan 9 from Bell Labs.
Maybe we should have a plan9 tag, but then people might argue about what is really Plan 9 and what isn’t.
Also I saw enough posts related to research operating systems lately to think there should be a dedicated tag.
What is your opinion @pushcx?
They usually add them when something is submitted a lot on Lobsters. That way it can be filtered by some or highlighted by others. Really just need an OS tag. I tried. There was a lot of interest but it wasn’t added.
I just tag them CompSci, Programming, or Release.
I don’t see that we get many Plan9 stories, but the procedure is to make a meta thread linking to untagged stories and inviting discussion. If there’s a strong consensus, I’ll add it.
Done. Let’s see if it gains traction.
People will argue about what’s really plan9 and what’s not, but I’m sure they could at least agree that this is more plan9 than not.
For sure!
I just wanted to note that the issue we face here is more general.
Hackers love challenging mainstream wisdom and operating systems are definitely not a solved problem.
Node9 is heavily inspired by Plan 9, by way of Inferno.
But, for example, the more I work on Jehanne the more heretical it becomes: in the long run, when I complete the replacement of 9P2000, many will argue that it’s not Plan 9 anymore, even though I actually forked a well-known Plan 9 kernel!
I’m a time/date nerd so didn’t expect to read anything new, but was pleasantly surprised.
It’s clearly adapted from a talk though.
Although I have one nit for this pun-laden infodump - no mention of week numbering? I’m sorry, but that’s just … week-sauce.
I find an intake of info like biannually quite satisfactory, but sometimes a biennial schedule is sufficient.
It’s clearly adapted from a talk though.
It even says so:
I’ve given this talk three times: at RubyConfIndia, RubyConf Australia, and Balkan Ruby. (Don’t worry, non-Rubyists; there’s no Ruby in this post. The conference topics were just happenstance. Also I love the word “happenstance”.)
MISRA (the automotive applications standard) specifically requires single-exit point functions. While refactoring some code to satisfy this requirement, I found a couple of bugs related to releasing resources before returning in some rarely taken code paths. With a single return point, we moved the resource release to just before the return. https://spin.atomicobject.com/2011/07/26/in-defence-of-misra/ provides another counterpoint though it wasn’t convincing when I read it the first time.
This is probably more relevant for non-GC languages. Otherwise, using labels and goto would work even better!
Maybe even for assembly, where before returning you must manually ensure the stack pointer is in the right place and registers are restored. In this case, there are more chances to introduce bugs if there are multiple returns (and it might be harder to follow the disassembly when debugging embedded code).
In some sense this is really just playing games with semantics. You still have multiple points of return in your function… just not multiple literal RET instructions. Semantically the upshot is that you have multiple points of return but also a convention for a user-defined function postamble. Which makes sense, of course.
Sure, but we do still see labels and gotos working quite well under certain circumstances. :)
For me, I like single-exit-point functions because they’re a bit easier to instrument for debugging, and because I’ve had many times where missing a return caused some other code to execute that wasn’t expected; with this style, you’re already in a tracing mindset.
Maybe the biggest complaint I have is that if you properly factor these then you tend towards a bunch of nested functions checking conditions.
Remember the big picture when focusing on a small, specific issue. The use of labels and goto might help for this problem. It also might throw off automated analysis tools looking for other problems. These mismatches between what humans and machines understand are why I wanted real, analyzable macros for systems languages. I had one for error handling a long time ago that looked clean in code but generated the tedious, boring form that machines handle well.
I’m sure there’s more to be gleaned using that method. Even the formal methodists are trying it now with “natural” theorem provers that hide the mechanical stuff a bit.
Yes, definitely – I think in general if we were able to create abstractions from within the language directly to denote these specific patterns (in that case, early exits), we gain on all levels: clarity, efficiency and the ability to update the tools to support it. Macros and meta-programming are definitely much better options – or maybe something like an ability to easily script compiler passes and include the scripts as part of the build process, which would push the idea of meta-programming one step further.
I have mixed feelings about this. I think in an embedded environment it makes sense because cleaning up resources is so important. But the example presented in that article is awful. The “simpler” example isn’t actually simpler (and it’s actually different).
Overall, I’ve only ever found that forcing a single return in a function often makes the code harder to read. You end up setting and checking state all of the time. Those who say (and I don’t think you’re doing this here) that you should use a single return because MISRA C does it seem to ignore the fact that there are specific restrictions in the world MISRA is targetting.
C++, Rust, etc. have destructors, which do the work for you automatically (the destructor/drop gets called when a value goes out of scope).
Destructors tie you to using objects, instead of just calling a function. It also makes cleanup implicit vs. defer which is more explicit.
The golang authors could have implemented constructors and destructors but generally the philosophy is make the zero value useful, and don’t add to the runtime where you could just call a function.
defer can be accidentally forgotten, while working around RAII / scoped resource usage in Rust or C++ is harder.
Firstly, he doesn’t address early returns from error conditions at all.
And secondly his example of single return…
singleRet() {
    int rt = 0;
    if (a) {
        if (b && c) {
            rt = 2;
        } else {
            rt = 1;
        }
    }
    return rt;
}
Should be simplified to…
a ? (b && c ? 2 : 1) : 0
Are you sure that wasn’t a result of having closely examined the control flow while refactoring, rather than a positive of the specific form you normalised the control flow into? Plausibly you might have spotted the same bugs if you’d been changing it all into any other specific control-flow format that involved not-quite-local changes?
I actually like the OpenBSD version: http://cvsweb.openbsd.org/cgi-bin/cvsweb/~checkout~/src/usr.bin/true/true.c?rev=1.1&content-type=text/plain&only_with_tag=MAIN
vs the GNU version: https://github.com/coreutils/coreutils/blob/master/src/true.c
The GNU version also contains --help and --version handling.
And despite all of this, most shells have true and false as built-in commands anyway…
$ type true
true is a shell builtin
which, imo, gives another twist to the whole debate, since useless changes are made to a “useless” (since to my knowledge the binary itself isn’t used) program. The most it can offer us is a self-reflective lesson in the Unix ideals and their betrayals.
Shell builtins aren’t used for exec calls and their ilk, no? So if you want to use false as the login shell for an account, you’d need the “real” /bin/false?
Fair enough, that’s true (pardon the pun). I was only thinking about a shell scripting environment.
On the other hand, who knows how long it will take until systemd or some other do-it-all system totally takes over account management, making false and true superfluous in that department too.
They’re built-ins on most fully featured interactive shells (bash, zsh, etc.), but on many systems the default shell-script interpreter /bin/sh is a more minimalist shell where they aren’t built-ins. That’s the case on at least Debian (dash) and FreeBSD (ash). So the default for shell scripts on such systems is that true actually does run /bin/true.
The dash on my system (void, and another one running ubuntu, and a server running debian(!)) claims that true is a built-in command. Before writing my comment, I checked bash, where I already knew it was true, ksh on an OpenBSD system, and dash on my laptop. Even the shell on my android phone (I believe the default one in /system/bin/sh) has the two programs as built-in commands.
I haven’t tried ash, but it seems to me that it’s becoming ever rarer, even if a more minimalist shell might theoretically use the binaries directly.
Oops, my mistake, sorry. I had checked before posting that comment, but checked incorrectly. I did this (starting in a zsh shell):
mjn@mjn:~% which true
true: shell built-in command
mjn@mjn:~% ls -l /bin/sh
lrwxrwxrwx 1 root root 4 Jun 28 2017 /bin/sh -> dash
mjn@mjn:~% sh
$ which true
/bin/true
But I didn’t realize that the difference here is due to which, rather than true, going from builtin in zsh to not builtin in dash. Seems that making which a builtin is zsh-specific behavior, and the POSIX way to get info on how a command will be executed is command -V.
This is significant news in an important sector of our industry. Your reflexive negativity is destructive to this website.
This is significant news in an important sector of our industry.
Sure, but unfortunately we have somewhat limited space and attention bandwidth here, and if we were to support posting every piece of significant news in important sectors of our industry, we’d find ourselves flooded. There is a great site with news for hackers–this sort of stuff is a great fit for that other site!
Your reflexive negativity is destructive to this website.
I’m sorry if that’s how this is perceived. I’ve gone to some lengths to do better in terms of negativity. Unfortunately, it’s hard to be positive when pointing out pathological community behaviors that have actively ruined and destroyed other sites.
I think you’re somewhat right: I would have posted a more technical take like this one, but didn’t see any posts about it at the time. After the other one was posted, I would have deleted this one if I’d been able to.
Aaand I missed the other post when submitting this: https://lobste.rs/s/mimoad/red_hat_acquire_coreos_expanding_its
Any post that calls Electron ultimately negative but doesn’t offer a sane replacement (where sane precludes having to use C/C++) can be easily ignored.
There’s nothing wrong with calling out a problem even if you lack a solution. The problem still exists, and bringing it to people’s attention may cause other people to find a solution.
There is something wrong with the same type of article being submitted every few weeks with zero new information.
Complaining about Electron is just whinging and nothing more. It would be much more interesting to talk about how Electron could be improved since it’s clearly here to stay.
it’s clearly here to stay
I don’t think that’s been anywhere near established. There is a long history of failed technologies purporting to solve the cross-platform GUI problem, from Tcl/tk to Java applets to Flash, many of which in their heydays had achieved much more traction than Electron has, and none of which turned out in the end to be here to stay.
Thing is that Electron isn’t reinventing the wheel here, and it’s based on top of web tech that’s already the most used GUI technology today. That’s what makes it so attractive in the first place. Unless you think that the HTML/Js stack is going away, then there’s no reason to think that Electron should either.
It’s also worth noting that the resource consumption in Electron apps isn’t always representative of any inherent problems in Electron itself. Some apps are just not written with efficiency in mind.
Did writing C++ become insane in the past few years? All those GUI programs written before HTMLNative5.js still seem to work pretty well, and fast, too.
In answer to your question, Python and most of the other big scripting languages have bindings for gtk/qt/etc, Java has its own Swing and others, and it’s not uncommon for less mainstream languages (ex. Smalltalk, Racket, Factor) to have their own UI tools.
Did writing C++ become insane in the past few years? All those GUI programs written before HTMLNative5.js still seem to work pretty well, and fast, too.
It’s always been insane, you can tell by the fact that those programs “crashing” is regarded as normal.
In answer to your question, Python and most of the other big scripting languages have bindings for gtk/qt/etc, Java has its own Swing and others, and it’s not uncommon for less mainstream languages (ex. Smalltalk, Racket, Factor) to have their own UI tools.
Shipping a cross-platform native app written in Python with PyQt or similar is a royal pain. Possibly no real technical work would be required to make it as easy as electron, just someone putting in the legwork to connect up all the pieces and make it a one-liner that you put in your build definition. Nevertheless, that legwork hasn’t been done. I would lay money that the situation with Smalltalk/Racket/Factor is the same.
Java Swing has just always looked awful and performed terribly. In principle it ought to be possible to write good native-like apps in Java, but I’ve never seen it happen. Every GUI app I’ve seen in Java came with a splash screen to cover its loading time, even when it was doing something very simple (e.g. Azureus/Vuze).
Writing C++ has been insane for decades, but not for the reasons you mention. Template metaprogramming is a weird lispy thing that warps your mind in a bad way, and you can never be sane again once you’ve done it. I write C++ professionally in fintech and wouldn’t use anything else for achieving low latency; and I can’t remember the last time I had a crash in production. A portable GUI in C++ is so much work though that it’s not worth the time spent.
C++ the language becomes better and better every few years– but the developer tooling around it is still painful.
Maybe that’s just my personal bias against cmake / automake.
Part of the struggle for us in adopting something like rust internally is that the syntax is too complex.
Even the “hello world” example:
fn main() {
println!("hello world")
}
Involves understanding what a macro is vs. a normal function call. The cognitive overhead of the language is a huge barrier and something we’ve eschewed for golang.
I hear you; we strive to not make Rust more complex than it has to be; unfortunately, the job it’s attempting to do is inherently complicated. A big focus of this year was on lowering the learning curve; it hasn’t all landed yet though.
That said, I’d hope that this particular example isn’t the biggest barrier, it boils down to “macros have a ! at the end.” In my ~five years with Rust, I’ve written exactly two macros, and they were less than 5 lines, and I mostly copy/paste/tweak’d them. They’re so minor we are even putting them in an appendix of the book, rather than giving them their own chapter.
That said, use the tools that work for you! Rust isn’t for everyone, and that’s 100% okay.
As I said above I think really good libraries will help a lot with this, but in a sense @bigdubs is saying what I was trying to say but more eloquently. I look forward to seeing the results of all the awesome work the Rust community is doing to ease the onboarding experience and smooth out the learning curve.
Sorry to be kinda pedantic, but it’s wrong to say that for the hello world example, rust is more complicated than other languages such as golang, which you mentioned. Compare the two:
package main // What's a package?
import "fmt" // Why do I need to import something to print to the screen?
func main() {
fmt.Println("hello world") // Why do I need to prefix this with fmt? Why is Println capitalized?
}
To be fair I’ve never coded in rust and absolutely love coding in go for its simplicity and functionality, but your example doesn’t make a good comparison
you can use println in golang w/o the prefix / package name.
in action here: https://play.golang.org/p/y5XX4RDTW5
further, what you’re nitpicking is “what is a package,” which is a feature of the language you will have to explore countless times, vs. macros, which as steve said are a niche feature of the rust language.
there is still plenty of magic with golang though. the thing that tripped me up personally at first was that lowercase letters on structs == package private, which is a weird isolation level to begin with, and even then the only thing that governs the protection is the first letter of a name.
As an aside, according to the golang spec, one should not rely on println. “These functions are documented for completeness but are not guaranteed to stay in the language”
Hypothesis: no one with a positive expected contribution cares if something is named “bro”.
Other hypothesis: some people with a positive expected contribution are put off by pointless time-sink PC publicity efforts.
I don’t really care, but I’ve read 1984 and didn’t get the reference… so it’s sort of a bad/ineffective name?
“BigBro” would clarify for me.
Oh. Yeah, same. I was wondering how the hell is “Bro” “Orwellian”. I know the phrase “Big Brother” and yet “Bro” just flew over my head completely.
Are those hypotheses based on anything other than you personally not caring?
Personally I’d be far more likely to 1) pay attention to something not called “BroWhatever” and 2) want to be involved with a project that understands that language is important & cares about inclusion.
I have a strong anti-bro personal leaning. I would feel sort of grossed out working with a group that I thought may be a bunch of folks that fall under that blanket category. Symbolic pollution is a real thing. Would you want to tell your friends and family that you worked for this awesome company named ButtHole? (I would actually sort of count this example as a perk if the gig was right, and my family would understand and be similarly amused, but you get the point, some symbols are not appealing to everyone)
Symbols matter. Our brains work by associations. It’s nice to have positive associations with things you invest in. I just renamed a database from rsdb to sled because I don’t want to compete with the racial slur database on google. Sled feels fast and fun to ride on top of (to me).
“PC” is criticized as a movement to remove all those who don’t think alike– but isn’t your second hypothesis exactly that?
I think there’s a big difference between the stated goals of PC (which nominally contain inclusivity) and the actual effects of PC.
I’m pretty strongly anti PC publicity efforts but I have it admit the name bro rubs me the wrong way. It sounds dumb.
They didn’t want to use a certain type of database because they decided it was immature. That’s a good idea: data is often the lifeblood of a business, so you don’t play games with it.
Then they decided to go with Rust in 2015. I am looking forward to using Rust for professional projects in a few years, but the ecosystem still has a lot of bleeding to do in 2017. Ecosystem matters.
So I think their reasoning has a whiff of rationalization to it.
But someone has to blaze that trail for the rest of us, so hey… have at it TiDB dudes.
What aspects of the ecosystem need to be more mature to implement a storage engine? It seems so domain-specific that one would be writing a ton of their own code anyway.
The one I always mess up is that github just gives you the clone repo location but bitbucket gives you the whole clone command, so I write git clone git clone git@bitbucket.org:.... I wonder whether I can alias the clone command to correct for that.
From the git-config manpage:
To avoid confusion and troubles with script usage, aliases that hide existing Git commands are ignored.
Months later the programmer was reading git --help config for a different reason and found enlightenment.
However, a binary named git-command overrides the built in command. So you could stick a git-clone binary in your path that does what you want and it’ll be invoked rather than the built in command.
I use this to override git clear to run /usr/bin/clear instead of removing all my local changes.
This is only somewhat relevant, but I just discovered that newer versions of git now support your XDG CONFIG dir, so you can move your global gitconfig to $XDG_CONFIG_HOME/git/config (usually ~/.config/git/config)
Sounds awful, why would I make it harder to get to it? I hate dotfiles, but I’d rather make the file visible where it is, or in some easy-to-reach directory like ~/lib, instead of hiding it even more.
Oh yeah, that’s what I need, even more state and configuration. No thanks.
Note that I said ~/lib/file, not ~/lib/foo/config. At least without this XDG nonsense (.local? .config? .cache? .run? Fuck you, XDG!), even if my files have stupid dotnames, they are all in one single directory, $HOME…
I’ll stick to this scheme, thanks.
Chill. I don’t like excess configuration also.
XDG is indeed overkill. However, I at least appreciate .cache a bit, because it’s one directory that I can safely remove to reclaim some bytes.
If you’re feeling cheeky, you can use Chrome’s dev tools to fake being offline in the Network tab (unsure of a Firefox equivalent).
But I recommend truly going offline for the full effect.
Firefox has a Work Offline menu item on the File menu that works for this page. (works on FF 57.0a1, anyway.)
Thanks for the correction! I wonder why I didn’t catch that when writing the post; I was using Python 3.5. Probably screwed up reloading the code or something. Will update the post after dinner.
I’m trying to finish up my implementation of Plan 9’s “file” protocol, 9P, in Rust. I’m still trying to wrap my head around implementing certain things in Serde.
Ideally, I’d like to have a minimal working sample in time for rustconf.
After all that’s come out about the product and company’s practices, why isn’t the advice just “return it”?