One-line summary for the impatient: A BSD proponent defends systemd (at a BSD conference) and suggests that BSD can take ideas from it
Interesting talk.
It does, but users don’t use any of the VMSy goodness in Windows: To them it’s just another shitty UNIX clone, with everything being a file or a program (which is also a file). I think that’s the point.
Programmers rarely even use the VMSy goodness, especially if they also want their stuff to work on Mac. They treat Windows as a kind of dimwitted UNIX cousin (which is a shame, because the API is better; IOCP et al.)
Sysadmins often struggle with Windows because of all the things underneath that aren’t files.
Message/object operating systems are interesting, but for the most part (OS/2, BeOS, QNX) they degraded into this “everything is a file” nonsense…
Until they got rid of the shared filesystem: iOS finally required messaging for applications to communicate on their own, and while it’s been rocky, it’s starting to paint a picture for the next generation, who will finally make an operating system without files.
If we talk user experiences, it’s more a CP/M clone than anything. Generations later, Windows still smells of COMMAND.COM.
Bowels is a good metaphor. There’s good stuff in Windows, but you’ve got to put on a shoulder-length glove and grab a vat of Crisco before you can find any of it.
I think you’re being a little bit harsh. End-users definitely don’t grok the VMSy goodness; I agree. And maybe the majority of developers don’t, either (though I doubt the majority of Linux devs grok journald v. syslogs, really understand how to use /proc, grok Linux namespaces, etc.). But I’ve worked with enough Windows shops to promise you that a reasonable number of Windows developers do get the difference.
That said, I have a half-finished book from a couple years ago, tentatively called Windows Is Not Linux, which dove into a lot of the, “okay, I know you want to do $x because that’s how you did it on Linux, and doing $x on Windows stinks, so you think Windows stinks, but let me walk you through $y and explain to you why it’s at least as good as the Linux way even though it’s different,” specifically because I got fed up with devs saying Windows was awful when they didn’t get how to use it. Things in that bucket included not remoting in to do syswork (use WMI/WinRM), not doing raw text munging unless you actually have to (COM from VBScript/PowerShell are your friends), adapting to the UAC model v. the sudo model, etc. The Windows way can actually be very nice, but untraining habits is indeed hard.
I don’t disagree with any of that (except maybe that I’m being harsh), but if you parse what I’m saying as “Windows is awful” then it’s because my indelicate tone has been read into instead of my words.
The point of the article is that those differences are superficial, and mean so very little to the mental model of use and implementation as to make no difference: IOCP is just threads and epoll, and epoll is just IOCP and fifos. Yes, IOCP is better, but I desperately want to see something new in how I use an operating system.
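The readiness-based half of that IOCP-versus-epoll comparison can be sketched with Python’s selectors module (which wraps epoll where available). This is my own illustration, not from the article; the socketpair merely stands in for a network connection:

```python
# Readiness-based I/O (the epoll model): the kernel tells you a read
# would not block, and you then perform the read yourself.
import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()              # a pair of connected sockets
sel.register(b, selectors.EVENT_READ)

a.sendall(b"ping")                      # makes b readable
events = sel.select(timeout=1)          # readiness notification, like epoll_wait
for key, mask in events:
    data = key.fileobj.recv(4)          # we do the read ourselves
    print(data)                         # b'ping'

sel.unregister(b)
a.close(); b.close()
```

IOCP, by contrast, is completion-based: you hand the kernel a buffer up front, and it notifies you when the read has already happened rather than when a read would not block.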
I’ve been doing things roughly the same way for nearly four decades, despite the fact that I’ve done Microsoft/IBM for a decade, Linux since Slackware 1.1 (Unix since tapes of SCO), Common Lisp (of all things) for a decade, and OSX for nearly that long. They’re all the same, and that point is painfully clear to anyone who has actually used these things at a high level: I edit files, I copy files, I run programs. Huzzah.
But: It’s also obvious to me who has gone into the bowels of these systems as well: I wrote winback, which was for a long time the only tool for doing online Windows backups of standalone Exchange servers and domain controllers; I’m the author of (perhaps) the fastest Linux webserver; I wrote ml, a Linux emulator for OSX; I worked on ECL, principally adding CL exceptions to streams and the Slime implementation. And so on.
So: I understand what you mean when you say Windows is not Linux, but I also understand what the author means when he says they’re the same.
That actually makes a ton of sense. Can I ask what would qualify as meaningfully different for you? Oberon, maybe? Or a version of Windows where WinRT was front-and-center from the kernel level upwards?
I didn’t use the term “meaningfully different”, so I might be interpreting your question too broadly.
When I used VMS, I never “made a backup” before I changed a file. That’s really quite powerful.
The Canon Cat had “pages” you would scroll through. Like other Forth environments, if you named any of your blocks/documents it was so you could search [leap] for them, not because you had a hierarchy.
I also think containers are very interesting. The encapsulation of the application seems to massively change the way we use them. Like the iOS example, they don’t seem to need “files” since the files live inside the container/app. This poses some risk for data portability. There are other problems.
I never used Oberon or WinRT enough to feel as comfortable commenting about them as I do about some of these other examples.
If it’s any motivation I would love to read this book.
Do you know of any books or posts I could read in the meantime? I’m very open to the idea that Windows is nice if you know which tools and mental models to use, but kind of by definition I’m not sure what to Google to find them :)
I’ve just been hesitant because I worked in management for two years after I started the book (meaning my information atrophied), and now I don’t work with Windows very much. So, unfortunately, I don’t immediately have a great suggestion for you. Yeah, you could read Windows Internals 6, which is what I did when I was working on the book, but that’s 2000+ pages, and most of it honestly isn’t relevant for a normal developer.
That said, if you’ve got specific questions, I’d love to hear them. Maybe there’s a tl;dr blog post hiding in them, where I could salvage some of my work without completing the entire book.
I, for one, would pay for your “Windows is not Linux” book. I’ve been developing for Windows for about 15 years, but I’m sure there are still things I could learn from such a book.
but users don’t use any of the VMSy goodness in Windows: To them it’s just another shitty UNIX clone, with everything being a file or a program (which is also a file). I think that’s the point.
Most users don’t know anything about UNIX and can’t use it. On the UI side, pre-NT Windows was a Mac knockoff mixed with MS-DOS, which was itself based on a DOS Microsoft got from a third party. Microsoft even developed software for Apple in that era. Microsoft’s own users had previously learned the MS-DOS menus and some commands. Then they had a nifty UI like Apple’s running on MS-DOS. Then Microsoft worked with IBM to make a new OS/2 with its own philosophy. Then Microsoft hired away the OpenVMS kernel team, made a new kernel, and built a new GUI with wizard-based configuration of services, versus the command line, text, and pipes of UNIX.
So historically, internally, in what laypeople see, and in administration, Windows is a totally different thing from UNIX. Hence the difficulty moving Windows users to UNIX, whether it’s a terminal OS with the X Window System or something Windows-style like GNOME or KDE.
You’re also overstating “everything is a file” by conflating OSes that merely store programs and data in files with those, like UNIX or Plan 9, that use the file metaphor for just about everything. It’s a false equivalence: from what I remember, you don’t get your running processes in Windows by reading the filesystem, since Windows doesn’t use that metaphor or API. It’s object-based, with API calls specific to different categories. Different philosophy.
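To make that contrast concrete, here’s a sketch (mine, not from the comment) of enumerating processes on Linux purely through the filesystem; on Windows the equivalent goes through dedicated APIs such as EnumProcesses rather than any file namespace:

```python
# On Linux, the process table is literally a directory listing:
# every numeric entry in /proc is a PID, and per-process facts are
# plain files underneath it. No dedicated API needed.
import os

pids = [int(name) for name in os.listdir("/proc") if name.isdigit()]

# Read this process's own name the same way: just open a file.
with open(f"/proc/{os.getpid()}/comm") as f:
    my_name = f.read().strip()

print(len(pids), my_name)
```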
Bitsavers has some internal emails from DEC at the time of David Cutler’s departure.
I have linked to a few of them.
David Cutler’s team at DECwest was working on Mica (an operating system) for PRISM (a RISC CPU architecture). PRISM was canceled in June of 1988. Cutler resigned in August of 1988, and 8 other DECwest alumni followed him to Microsoft.
I have my paper copy of The Unix Hater’s Handbook always close at hand (although I’m missing the barf bag, sad to say).
It was edited by Simson Garfinkel, who co-wrote Building Cocoa Applications: A Step-by-Step Guide, which was sort of a “port” of NeXTSTEP Programming Step One: Object-Oriented Applications.
Or, in other words, “yes” :)
Add me to the list of those curious about what they ended up using. The hoaxers behind UNIX admitted they’d been coding in Pascal on Macs. Maybe that’s what the rest were using, if not Common LISP on Macs.
Beat me to it. The author is flat wrong when saying Windows is built on UNIX. Microsoft stealing, cloning, and improving OpenVMS into Windows NT is described here. This makes the Linux zealots’ parodies about a VMS desktop funnier, given that one destroyed Linux in the desktop market. So we have the VMS and UNIX family trees running in parallel, with the UNIX tree having more branches.
“we are forced to choose from: Windows, Apple, Other (which I shall refer to as “Linux” despite it technically being more specific). All of these are built around the same foundational concepts, those of Unix.”
Says it’s built on the foundational concepts of UNIX. It’s built on a combo of DOS, OS/2, OpenVMS, and Microsoft concepts they called the NT kernel. The only thing UNIX-like was the networking stack they got from Spider Systems. They’ve since rewritten their networking stack from what I heard.
Says it’s built on the foundational concepts of UNIX.
I don’t see any reason to disagree with that.
The only thing UNIX-like …
I don’t think that’s a helpful definition of “unix-like”.
It’s got files. Everything is a file. Windows might even be a better UNIX than Linux (since UNC)
Cutler might not have liked UNIX very much, but Windows NT ended up UNIX anyway because none of that VMS-goodness (Versions, types, streams, clusters) ended up in the hands of Users.
It’s got files. Everything is a file.
Windows is object-based. It does have files, which are just another object type. The files come from MULTICS, which UNIX also copied in some ways; even the name was a play on it: UNICS. I think Titan invented the access permissions. The internal model, with its subsystems, was more like a microkernel design running OS emulators as processes. They did their own thing for most of the rest with the Win32 API and registry. Again, not quite how a UNIX programming guide teaches you to do things. They got clustering later, too, with Microsoft and Oracle using the distributed-lock approach from OpenVMS.
Windows and UNIX are very different in their approach to architecture. They’re different in how a developer is expected to build individual apps and compose them. Windows NT wasn’t even developed on UNIX: they used OS/2 workstations for that. There’s no reason to say Windows is grounded in the UNIX philosophy. It’s a lie.
“Windows NT ended up UNIX anyway because none of that VMS-goodness (Versions, types, streams, clusters) ended up in the hands of Users.”
I don’t know what you’re saying here. Neither the VMS nor the Windows team intended to do anything for UNIX users. They took their own path, except for networking, for obvious reasons. UNIX users actively resisted Microsoft tech, too, especially BSD and Linux users, who often hated them. They’d reflexively do the opposite of Microsoft, except when making knockoffs of key products like Office to win desktop users.
Windows is object-based.
Consider what methods of that “object” a program like Microsoft Word must be calling besides “ReadFile” and “WriteFile”.
That the kernel supports more methods is completely pointless. Users don’t interact with it. Programmers avoid it. Sysadmins don’t understand it and get it wrong.
I don’t know what you’re saying here.
That is clear, and yet you’re insisting I’m wrong.
Except, that’s completely wrong.
I just started Word and dumped a summary of its open handles by object type:
C:\WINDOWS\system32>handle -s -p WinWord.exe
Nthandle v4.11 - Handle viewer
Copyright (C) 1997-2017 Mark Russinovich
Sysinternals - www.sysinternals.com
Handle type summary:
ALPC Port : 33
Desktop : 1
Directory : 3
DxgkSharedResource: 2
DxgkSharedSyncObject: 1
EtwRegistration : 324
Event : 431
File : 75
IoCompletion : 66
IoCompletionReserve: 1
IRTimer : 8
Key : 171
KeyedEvent : 24
Mutant : 32
Process : 2
Section : 67
Semaphore : 108
Thread : 138
Timer : 7
Token : 3
TpWorkerFactory : 4
WaitCompletionPacket: 36
WindowStation : 2
Total handles: 1539
Each of these types is a distinct kernel object with its own characteristics and semantics. And yes, you do create and interact with them from user-space. Some of those will be abstracted by lower-level APIs, but many are directly created and managed by the application. You’ll note the number of open “files” is a very small minority of the total number of open handles.
Simple examples of non-file object types commonly manipulated from user-land include Mutants (CreateMutex) and Semaphores (CreateSemaphore). Perhaps the most prominent example is manipulating the Windows Registry; this entails opening “Key” objects, which per above are entirely distinct from regular files. See the MSDN Registry Functions reference.
None of these objects can exist on a disk; they cannot persist beyond shutdown, and have no representation beyond their instantaneous in-memory instance. When someone wants an “EtwRegistration”, it’s created anew every time.
Did you even read the article? Or are you trolling?
What exactly are you after?
Just go read the article.
It’s about whether basing our entire interactions with a computer on a specific reduction of verbs (read and write) is really exploring what the operating system can do for us.
That is a very interesting subject to me.
Some idiot took exception to the idea that Windows is basically “built on Unix”, then back-pedalled it to be about whether it was based on the same “foundational” concepts, then chose to narrowly and uniquely interpret “foundational” in a very different way than the article.
Yes, Windows has domains and registries and lots of directory services, but they all have the exact same “file” semantics.
But now you’re responding to this strange interpretation of “foundational” because you didn’t read the article either. Or you’re a troll. I’m not sure which yet.
Read the article. It’s not well written but it’s a very interesting idea.
Each of these types is a distinct kernel object with its own characteristics and semantics
Why do you bring this up in response to whether Windows is basically the same as Unix? Unix has lots of different kernel “types” all backed by “handles”. Some operations and semantics are shared by handles of different types, but some are distinct.
I don’t understand why you think this is important at all.
It sounds like you’re not interested in a constructive and respectful dialogue. If you are, you should work on your approach.
Do you often jump into the middle of a conversation with “Except, that’s completely wrong?”
Or are you only an asshole on the Internet?
Or are you only an asshole on the Internet?
I’m not in the habit of calling people “asshole” anywhere, Internet or otherwise. You’d honestly be more persuasive if you just made your points without the nasty attacks. I’ll leave it at that.
networking for obvious reasons
Them being what? Is the BSD socket API really the ultimate networking abstraction?
The TCP/IP protocols were part of a UNIX. AT&T gave UNIX away for free. They spread together with early applications being built on UNIX. Anyone reusing the protocols or code will inherit some of what UNIX folks were doing. They were also the most mature networking stacks for that reason. It’s why re-using BSD stacks was popular among proprietary vendors. On top of the licensing.
Edit: Tried to Google you a source talking about this. I found one that mentions it.
As you can see, ed is not an especially talkative program.
This was a major attraction of ed for at least one blind programmer, Karl Dahlke. I can’t find a source for this now, but I recall reading that for years, he had to use a Votrax speech synthesizer, which had no ability to interrupt its output. That would certainly explain the attraction of something that minimized output, even if it took great skill to master. Karl describes his philosophy in his essay Command Line Programs for the Blind.
To be clear, blind programmers that use ed (or Karl’s edbrowse) are a tiny minority of a minority. The blind programmers I know use a mainstream IDE or editor with a screen reader, mostly under Windows.
Why re-create code editors, simulators, spreadsheets, and more in the browser when we already have native programs much better suited to these tasks?
Because the Web is the non-proprietary application platform that actually has traction.
But it’s too dangerous.
No matter how optimized Blazor gets, it will always have some overhead compared to JavaScript, because it will need to ship a .NET/Mono runtime. So we’ll always be able to do more with our 130KB compressed first-load budget if we stick with JavaScript, or something that compiles to JS with little or no overhead. It’s better still, of course, if we can do what we need in much less than 130KB compressed. This is why I think something like Sapper should be the future of web application frameworks.
As developers, we need to prioritize the user experience, not our convenience. So in my opinion, something like Blazor should be a non-starter. But if someone can compile a meaningful subset of .NET to JavaScript with a hello-world no bigger than Sapper’s, then that will be worth looking at.
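The compressed first-load budget is easy to check mechanically. A sketch, with a stand-in bundle of my own invention but keeping the comment’s 130KB figure:

```python
# Rough check of a compressed first-load budget: gzip the payload
# the way a web server would, and compare against the limit.
import gzip

BUDGET = 130 * 1024          # the comment's 130KB compressed budget

def compressed_size(payload: bytes) -> int:
    return len(gzip.compress(payload, compresslevel=9))

bundle = b"console.log('hello');\n" * 500   # hypothetical stand-in for bundle.js
size = compressed_size(bundle)
print(size, size <= BUDGET)
```

In a real build pipeline you would run the equivalent over the actual emitted JS chunks and fail CI when the budget is exceeded.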
Given how good a predictor fluid reasoning is of complex job performance, I wonder if a battery of novel logic problems with a programming veneer would be a good substitute for traditional initial employee screenings. The remaining candidates could then be evaluated on a paid take-home task that replicates the actual work as closely as possible.
It would be great to just go straight to work-like tasks to evaluate prospective employees, but it’s costly, time-consuming, and will filter out candidates that won’t make that much of a commitment on first contact.
I, personally, won’t do any take-home work without the prospective employer also having invested something in the process. For all I know, my 2-hour project has been given to 100 other candidates, and there’s a good chance they’ll decide they don’t actually need to hire that position and not look at a single one.
I buy into your premise that fluid intelligence correlates with complex job performance, but how many of us work in truly “complex jobs”?
For churning out stylish CRUD apps and ticking off tasks from a backlog, very little fluid intelligence is required. Ability to focus and deal with the occasional boredom would be a much better predictor, I conjecture. Concretely, you can probe for this by asking candidates about projects they’ve worked on and making sure there’s at least a handful that they’ve taken to completion.
I think machines are coming for the sort of tedious jobs that only require work ethic, i.e. the ability to focus and get through boredom. If that’s so, we’ll only be left with the complex jobs that require real intelligence.
I’m in my favorite job of my career so far, doing my best ever work. I had a very reasonable interview loop without a single trivia question. It’s the first time I got referred in by a friend and ex-coworker, through a network of his friends.
My new rule of thumb for myself is, when possible, work with my friends or their friends. Referrals referrals. Find people you like to work with and stick together. The only way to know if somebody can solve real problems for a real company (with warts and all) is by actually being in the trenches with them for months and years.
Working only with friends and friends of friends seems wrong to me. The words that come to mind are exclusive and cliquish. To get diverse perspectives and live up to the ideal of equal-opportunity employment, we need to be comfortable working with strangers.
Traveling between companies as a group of friends doesn’t mean that you’re only working with said friends, since there will almost always be other co-workers involved, but that you’re working with more known quantities. As an employee-side strategy, I don’t think it’s hugely problematic, especially given the amount of information asymmetry that’s often in play in hiring. I’ve also had luck with referrals (I’ve found out about 3 out of 4 of my development jobs via some sort of reference or connection, including my current one).
On the employer side, I could see only working with referrals being somewhat problematic, but I doubt most employers do that.
I mean, my friend’s friend who I am now working for is roughly 3000 miles away from where I met my friend. I was definitely exposed to diverse perspectives by moving out here, and I necessarily have to interact and learn from all of my new coworkers (who are all diverse strangers to me). What I gained was a pre-selection stamp; somebody vouched that I’m not an idiot.
It’s also bi-directional preselection. I know that my friend wouldn’t send me off to work for a real dumpster fire of a company - I know that I’ll be working with good people on good projects, and it would benefit my own personal growth.
Friend networks can span multiple cities, countries, companies, and cultures. It doesn’t have to imply inbreeding.
The words that come to mind are exclusive and cliquish.
This sounds fun in practice until you hire a terrible stranger. Then you’re back to square one of “how do you find out if an interview candidate is good?” And it’s a hard question.
What are the actual benefits of compiling to wasm rather than JS? One drawback is that, at least for the near future, a wasm module has its own heap rather than participating in the garbage-collected heap of the JS engine. It’s also likely that wasm means a larger runtime footprint.
How is that a benefit? I would agree if compilation to WebAssembly avoids GC, but Grain seems to require GC anyway.
I just found that uBlock Origin even blocks ublock.org by default ;):
https://github.com/gorhill/uBlock/wiki/Badware-risks#ublockorg
(Not that I disagree, ublock.org is a borderline scam.)
I noticed that too when clicking on the link. It’s kinda funny. It’s hard to keep everything straight these days, but uBlock, AdBlock, and AdBlock Plus are all a little shady in how they monetize their development, either by white-listing “non-intrusive” ads or by collecting usage data.
As far as I know, the uBlock Origin project is currently uncompromising in this regard. Still, they all use public, community-maintained blacklists/greylists.
Here’s an interesting blog post on Debian versus current development practices, from a Debian perspective.
Great read. It’s a well reasoned opinion.
Anyone who’s been programming less than 12332 days is a young whipper-snapper and shouldn’t be taken seriously.
This will be my next favourite quote for the next 7 years.
I’m young again! :-)
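For anyone doing the arithmetic, 12332 days works out to just under 34 years. A quick sketch (the reference date is arbitrary, purely for illustration):

```python
# Convert the oddly specific cutoff into years, and find the latest
# start date that would still qualify as of an arbitrary reference day.
from datetime import date, timedelta

days = 12332
years = days / 365.2425               # mean Gregorian year length
latest_start = date(2018, 12, 1) - timedelta(days=days)
print(round(years, 2), latest_start)  # ~33.76 years
```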
A question for anyone who might have context – from this piece it seems like they have a cluster per restaurant, which doesn’t make much sense in terms of complexity versus payoff to my mind. The thing that would make more sense and be very interesting is if they’re having these nodes join a global or regional k8s cluster. Am I misreading this?
They seem to be using NUCs as their Kubernetes nodes, so the hardware cost isn’t going to be too great.
I imagine it’s down to a desire not to be dependent on an internet connection to run their POS and restaurant-management applications. I’m sure the costs of a connection with an actual SLA are obscene compared to the average “business cable” connection you can use if it doesn’t need to be super reliable.
Still, restaurants have been using computers for decades. It looks as if they have a tech team that’s trying very hard to apply trendy tools and concepts (Kubernetes, “edge computing”) to a solved problem. I’d love to be proven wrong, though.
I’ve never been to one of these restaurants but I can’t imagine anything that needs a literal cluster to run its ordering and payments system.
Sounds like an over-engineered Rube Goldberg machine built for resume/CV padding.
While restaurants have certainly been using computers for decades, the kind of per-location ordering integrations needed for today’s market are pretty diverse:
If you run a franchise like Chick-fil-A, you don’t want downtime in the central infrastructure to prevent internet orders at each location, as that would make your franchisees upset that their business was impacted. You also want your franchisees to have easy access to all the ordering methods available in their market. This setup addresses both: it allows them to run general compute using the franchisee’s internet connection, and to easily deploy new integrations, updates, etc. without an IT person at the location.
I have a strong suspicion that this is why I see so many Chick-fil-As on almost every food delivery service.
Beyond that, it’s also easier and cheaper to deploy applications onto a functional k8s/nomad/mesos stack than onto VMS or other solutions, because of available developer interest and commodity hardware cost. Most instability I’ve seen in these setups is a function of how many new jobs or tasks are added. Typically, if you have pretty stable apps, you’ll have fewer worries than with other deployment solutions. Not saying there aren’t risks, but this definitely simplifies things.
As an aside, I’d say that while restaurants have been using computers for decades, they haven’t necessarily been using them well, and lots of the systems were proprietary all-in-one (hw/sw/support) “solutions.” That’s changed a bit, but you’ll still see lots of integrated POS systems that are just a tablet + app + accessories in a nice swivel stand. I’ve walked into places where they were tethering their POS system to someone’s cell phone because the internet was down and the POS app needed internet to check out (even with cash).
Most retail stores like this use a $400/mo T1, which is 1.5 Mbit/s (~185 KB/s) symmetrical – plenty for transaction processing but not much else. Their POS system is probably too chatty to run on such a low-bandwidth link.
It could just be a basic HA or load-balancing cluster on several cheap machines. I recommended these a long time ago as alternatives to AS/400s or VMS clusters, which are highly reliable but pricey. They can also handle extra apps, provide extra copies of data to combat bitrot, support rolling upgrades, and so on. Lots of possibilities.
People can certainly screw them up. You want the person doing the setup to know what they’re doing. I’m just saying there are benefits.
I hope the designer will pay attention to little details that can make the assembled machine less user-friendly. For example, with some SBC enclosures (or maybe the problem is the boards themselves) and one DIY laptop that I know of, if you insert the microSD card the wrong way, it falls somewhere inside the case. I struggle to insert microSD cards correctly, so I find such a misfeature very frustrating.
The creator is pretty tech-savvy and has made other hardware projects in the past; see their website.
I actually find these sorts of talks to be a time sink. There’s the time spent typing, the distraction as the presenter fixes typos, and so on. What I prefer is a set of use cases, followed by highlights of the software features relevant to those use cases, followed by some code snippets to reinforce memory. When I need detail, I just need to know the available concepts and some keywords. The slides aren’t the only docs, right? Right?
I’ll go one further: I think technical talks in general are a time sink. It would be better to just write up the content in an article, with code snippets where appropriate. Then, everyone can absorb the information in their own way, on their own time, at their own speed. An article is also better for accessibility; for example, blind people can’t access your projected content (at least not in real time), and making your spoken content accessible to deaf people is an extra cost.
Of course, if you’ve already decided that you’re going to present at a conference, then this doesn’t apply. But it’s something to consider if you just want to share some information and haven’t committed to a particular way of doing it.
For the most part I agree with you, but there are edge cases where I have absolutely loved live-coding talks. (David Beazley’s series of talks on generators comes to mind.) Now, perhaps I would have liked those talks even more if they were not live-coded… but I don’t know. With an interpreted language, watching a program come together makes it feel almost as if you are putting it together yourself. When a speaker masterfully puts something together, it’s almost like pairing with someone of far more skill.
Interesting. A couple more thoughts on why developers feel compelled to go the SPA route:
Developers, or their employers, sometimes assume that they have to develop a no-compromise native mobile app. That’s like a SPA, so one might as well be consistent and use the same back-end for both.
There’s also the offline first camp:
We live in a disconnected & battery powered world, but our technology and best practices are a leftover from the always connected & steadily powered past.
What’s ironic about that is that many SPAs are themselves power-hungry, having been developed under the assumptions of the desktop-centric past.
For those that choose to go the SPA route, I think Sapper is an interesting option. It’s isomorphic, has server-side rendering, and focuses on small JavaScript bundles (with code splitting). My one reservation about it is that it doesn’t seem to support TypeScript.
But I agree with the author that for a great many applications, an SPA isn’t justified.
Not surprisingly, this shift from face-to-face to electronic interaction made employees less effective.
Why is this taken as a given? Remote-only, distributed teams are a thing.
Face-to-face communication is likely more effective. However, in an open office, you have to put up with noise, lack of privacy, lack of space, increased sick days etc. That drags down productivity, so I’m sure remote work can easily be more effective than open offices - but probably not as effective as a good office.
Face to face is the very highest bandwidth form of communication. Organizations can succeed without it, but it requires a lot of compensatory work.
Any suggestions on what we in the west should do to help put an end to this? Would it do any good to reduce all new electronics purchases to a minimum and go to some trouble to get more life out of old or malfunctioning devices? Or would that merely be a hollow, symbolic gesture?
The best we can do is to spread knowledge of the blood footprint of consumerism.
It’s naive to think that not buying from a multinational company that exploits the poor is enough to annoy them.
But if you actively spread knowledge of the pain they cause other humans, if you publicly blame them, you become a problem for them.
And if you manage to create a culture that puts shame on people mindlessly buying gadgets as status symbols, if you attack the assumptions of their propaganda/marketing, you will force them to address the issue.
If they cannot kill you for cheap, they will fix their supply chains.
Given that most popular email clients these days are awful and can’t handle basic tasks like “sending email” properly
I agree with the sentiment in general. But once you’re in the position where everybody else does it wrong and you’re the last person on the planet who does it right, maybe it’s time to acknowledge that the times have changed, that the old way has been replaced by the new way, and that maybe it is you who is wrong and not everybody else.
And I’m saying this as a huge fan of plain-text only email, message threading and inline quotes using nested > to define the quote level.
It’s just that I acknowledge that I have become a fossil as the times have changed.
once you’re in the position where everybody else does it wrong and you’re the last person on the planet that does it right
Thankfully we haven’t reached this position for email usage on technical projects. Operating systems, browsers, and databases still use developer mailing lists, and system programmers know how to format emails properly for the benefit of precise line-oriented tools.
I acknowledge that I have become a fossil as the times have changed
If the technology and processes you prefer have intrinsic merit, then why regretfully and silently abandon them? I’m not saying we should refuse to cooperate on interesting new projects simply because they use slightly worse development processes. But we should let people know about the existence of other tools and ways to collaborate, and explain the pros and cons.
If the technology and processes you prefer have intrinsic merit, then why regretfully and silently abandon them?
Because when I didn’t, people complained about my quoting style, couldn’t tell which part of the message was mine and which wasn’t, and complained that my stripping off all the useless bottom quote caused them to lose context.
This was a fight it didn’t feel worth fighting.
I can still use my old usenet quoting habits when talking to other old people on mailing lists (which is another technology on the way out it seems), but I wouldn’t say that the other platforms and quoting styles the majority of internet users use these days are wrong.
After all, if the majority uses them, it might well be the thing that finally helped the “other” people get online and do their work, so it might very well be time for our “antiquated” ways to die off.
I’d like to try to convince you that it’s *good* that plain text email is no longer the norm.
First, let’s dispense with a false dichotomy: I’m not a fan of HTML emails that are heavy on layout tables and (especially) images with no text equivalents. Given my passion for accessibility (see my profile), that should come as no surprise.
But HTML emails are good for one thing: providing hyperlinks without exposing URLs to people. As much as good web developers aim for elegant URLs, the fact remains that URLs are for machines, not people. A hyperlink with descriptive text, where the URL is available if and only if the reader really wants it, is more humane.
For longer emails, HTML is also good for conveying the structure of the text, e.g. headings and lists.
Granted, Markdown could accomplish the same things. But HTML email actually took off. Of course, you could hack together a system that would let you compose an email in Markdown and send it in both plain text and HTML. For folks like us that don’t prefer WYSIWYG editors, that might be the best of all worlds.
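That hack is straightforward with Python’s standard library: build a multipart/alternative message carrying both a text/plain and a text/html part. A minimal sketch follows; the Markdown-to-HTML conversion itself (e.g. via the third-party markdown package) is assumed and left out, and all the names here are hypothetical:

```python
from email.message import EmailMessage

def compose_dual(subject: str, sender: str, to: str,
                 plain: str, html: str) -> EmailMessage:
    """Build a multipart/alternative email with plain and HTML bodies.

    `plain` would be your hand-written Markdown source; `html` the
    rendered output (the conversion step is assumed, not shown).
    """
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = to
    msg.set_content(plain)                     # text/plain part first
    msg.add_alternative(html, subtype="html")  # text/html alternative
    return msg

msg = compose_dual("Hello", "me@example.com", "you@example.com",
                   "Hi *there*", "<p>Hi <em>there</em></p>")
```

Clients that prefer plain text render the first part; everyone else gets the HTML, so both camps are served.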
But HTML emails are good for one thing: providing hyperlinks without exposing URLs to people.
That doesn’t come without a huge cost. People don’t realize that they need to know the underlying URL and don’t care to pay attention to it. That leads to people going places they didn’t expect or getting phished and the like.
Those same people probably wouldn’t notice the difference between login.youremail.com and login.yourema.il.com either, though. So I’m not saying the URL is the solution, but at least putting it in front of you gives you a chance.
As much as good web developers aim for elegant URLs, the fact remains that URLs are for machines, not people.
I’m not sure about this… at least the whole point of DNS is to allow humans to understand URLs. Unreadable URLs seem to be a relatively recent development in the war against users.
Not only do I completely agree with you but you are also absolutely right about that.
Excerpt from section 4.5 of the RFC3986 - Uniform Resource Identifier (URI): Generic Syntax:
Such references are primarily intended for human interpretation
rather than for machines, with the assumption that context-based
heuristics are sufficient to complete the URI [...]
BTW, the above URL is a perfect example of how one should look like.
Personally, I hate HTML in email - I don’t think it belongs there. Mainly for the very reasons you just mentioned.
Let’s take phishing, for example - and spear phishing in particular. At an institution where I work, people - especially those at the top - are being targeted. And it’s no longer “click here”-type emails - institutional HTML layouts are being used to great effect to collect people’s personal data (passwords, mainly). With the whole abstraction, people cannot distinguish whether an email, or even a particular link, is genuine.
When it comes to the structure itself, all of that can be achieved with plain text email - the conventions used predate Markdown, BTW, and are just as readable as they were several decades ago.
are these conventions well-defined? is there some document which describes conventions for stuff like delimiting sections of plain text emails?
It’s just that I acknowledge that I have become a fossil as the times have changed.
Well, there are just too many of us fossils to acknowledge this just yet.
Yup… And bug trackers the world over are still littered with things like… https://bugs.ruby-lang.org/issues/8770
Can you clarify what you mean here? I love this piece and refer to it frequently myself. I think it brilliantly illustrates why many things in tech are the way they are. I feel like you’re suggesting a point I’m missing.
Yes, it does illustrate brilliantly why things in tech are the way they are.
In fact, why many things in our economy are the way they are.
Their solution to the PC losering problem is to transfer the complexity from the OS to the user.
i.e. the author of any program that invokes almost any system call must, on every invocation, remember to correctly handle the possibility of it returning EINTR.
Worse, setting up a good test that proves your code handles EINTR correctly, every time, is hard - and you receive no help from the OS in doing so.
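The per-call burden is the classic retry loop, sketched below in Python for illustration (in C the equivalent check is `errno == EINTR`; note that CPython 3.5+ actually retries interrupted syscalls for you per PEP 475, which is exactly the kind of runtime help being asked for here):

```python
import os

def retrying_read(fd: int, n: int) -> bytes:
    # The classic EINTR dance: any blocking syscall may be interrupted
    # by a signal, so every call site must remember to loop and retry.
    while True:
        try:
            return os.read(fd, n)
        except InterruptedError:  # errno EINTR
            continue

# Demo on a pipe (no signal arrives here, so the first call succeeds).
r, w = os.pipe()
os.write(w, b"hello")
data = retrying_read(r, 5)
os.close(r)
os.close(w)
```

Multiply that loop by every blocking call in every program, and the scale of the externalised cost becomes clear.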
Now if the effort that has been deployed across literally tens of thousands of packages on fixing obscure, sporadic EINTR-related bugs… had instead been expended on solving the PC losering problem correctly…
We would all be much much better off.
i.e. the cost of solving the PC losering problem has been externalised to all users of syscalls. This enabled the “Worse is Better” solution to win in the marketplace by winning the “time to market” race.
Alas, as is the case with very very many parts of our economic system, we reward via perverse incentives the “cheats” that externalize their costs…. but in the long run our entire civilization pays and pays and pays.
Alas, as is the case with very very many parts of our economic system, we reward via perverse incentives the “cheats” that externalize their costs…. but in the long run our entire civilization pays and pays and pays.
Is there any known solution to this problem that would not also sacrifice a lot of good things in the process?
I think you will find any and every proposed solution will be condemned out of hand, and indeed fought tooth and nail, by those benefiting most from externalizing their costs.
Thus caution is advised when listening to anyone saying “It won’t work”.
The world seems to be (deliberately) stuck in this foolish black-xor-white thinking about economic systems, instead of more thoughtful and nuanced debate.
i.e. I think the world needs its systems refactored, not rewritten.
i.e. We should be focusing on sinks of productivity and value, and tweaking the rules to reduce them.
As of September 2016, the author (Robert J. Sawyer) is still using WordStar, now on a 64-bit Windows 10 machine inside a DOS emulator.
vDosPlus is specifically focused on word processing applications and people seem to prefer it because it has some powerful print processing features as well as keyboard and mouse mapping that would be difficult to get under pure virtualization.
I’ve not had much experience with it, not being a Windows user, but it is the environment of choice and de facto standard of hardcore WordPerfect, XyWrite and ChiWriter users as well.
The divide between the “continuous deployment” world and the embedded Linux “LTS” world is interesting. Would be amazing if some company decides to merge these worlds and make, say, a phone that updates to a new nightly kernel build every day. Purism could do this :)
What would be the point? Most people want their devices to be stable, right?
Most people probably, but pretty much all LineageOS users run nightlies. Which aren’t really “unstable” from my experience. The point is getting improvements fast.
I don’t believe in “stability by using old stuff” (like centos and debian stable). They’re not “stable”, just “outdated”.
CentOS and Debian are stable in the sense of not changing. It’s very useful to be able to install an OS on your computer(s), then know that it will stay the same for X years, with the exception of truly important (e.g. security) updates. It may not be “improving”, but then again, especially when it comes to UI, a lot of people just want it to stay the same so they can get on with their lives.