The website seems to have been taken down since I get a 403, maybe the author didn’t like being linked to Lobste.rs or they’re shy.
This is cool, but is there a “getting started” guide of sorts? It feels a bit dull poking at it without any direction of what to expect or where to go, and a bit hard to explore when you know none of the “rules” of the system. (Others also might not understand “why Multics?”)
I am going to post a follow-up with an FAQ this weekend, but I’ve made every attempt to ensure the system is configured in a way that is secure - in particular, a Guest user shouldn’t be capable of breaking anything.
Otherwise - common sense rules apply - don’t be malicious, and if you do discover some clever exploit or privilege escalation I’d appreciate it being reported rather than abused. Users who are obviously intentionally disruptive or abusive in their use of resources may be bumped and eventually banned.
http://www.bitsavers.org/pdf/honeywell/multics/ has full manuals including the command manual.
There is a very complete help system available with “help” - to search the help pages use the “lh” command.
A nicer shell with Emacs-like editing functionality and optional history is available. You can activate this with the following command:
“stty -ttp vt102;wdc invoke”
Instead of vt102 (which assumes 80x24) you can use VT102_132C_50L and VT102_132C_78L for a 132x50 or 132x78 display, respectively.
Common commands are ls (list), pr (print), cwd (change_wdir), pwd (print working directory), and cd (create_dir). < is like UNIX .. and > is used instead of / (>udd>u>name might be a path, for example). la is used to show ACLs and sa to set them. Other commands to try include “who -lg” and “user all” to see all your attributes. Wildcard parsing is closer to VMS than UNIX: “ls >path>here>**.blah -a”, etc. Use “ls -a” to show all types of entries; otherwise only regular segments are shown. The eor command (or dprint for guests) will submit print jobs, which can be picked up as formatted PDF files.
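To make that concrete, a first session using only the commands above might look like the following (the path here is hypothetical; substitute your own user directory):

```
cwd >udd>u>JRDobbs
pwd
ls -a
la
who -lg
```

That is: change to a working directory, print it, list all entry types (not just segments), show the ACLs, and see who is logged in.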
The qedx editor is quite nice for a line editor and built around regular expression parsing.
Messaging tools are sm, smx, mail, forum, etc. The “easier to use” tools are snarkily called “xmail” and “xforum”, short for “Executive Mail” and “Executive Forum”. :-)
Other tools to check out are compose and runoff, the precursors to Unix troff/nroff. Most everything else is well documented, but if you have questions let me know - it helps for building the FAQ!
The FAQ is still a work in progress but is available on the system.
You can view it with help primer when logged in.
This is a little dated - but it should also help:
Also, when using the line editors and interfaces based on qedx (which is a lot of the system), it helps to know that \f (that is a literal backslash and f) means end of input or file - essentially the equivalent of UNIX EOF or DOS ^Z. (If you see a “level N” after your ready prompt in case of a crash or error or interruption, you can use “rl” to release it, or “rl -a” to release all the levels.)
Knowing that \f tidbit not only helps you to use the system but it makes the Ford Multics shutdown cake picture on the Multicians site quite touching.
Here is a new link to the Ford cake.
Since some have asked, here is another example, this time for setting up a plan file as used by the finger daemon, for an account with a User ID of JRDobbs.
cwd [dwd]
pwd
qedx
a
This is my plan. There are many like it, but this one is fine.
\f
w JRDobbs.plan
q
sa JRDobbs.plan r Service.Daemon.*
This changes to your default working directory (analogous to your home directory on Unix) and displays it. You then invoke the qedx editor, append your text, write it to the file named “JRDobbs.plan”, and quit. sa is short for set_acl; here you grant the Service.Daemon user (which runs the finger service daemon) read access to your JRDobbs.plan file.
Nobody has recovered the sources for any of the original Multics finger daemons, nor was I able to locate any logs of what the original output looked like, so I simply implemented it as seemed appropriate. When a user is fingered they are notified by the daemon if they are online - there is no attempt to do any identd mapping yet.
There is no Multics finger client - yet. There is also no option that allows a user to be excluded from the daemon’s output or to appear as [redacted] in the listing, but these are features that will come soon.
The Internet gateway services are now running as Service.Arpa. You’ll need to replace Service.Daemon.* with Service.Arpa.* in the above example to make your .plan file public.
SAO started tracking satellites with an 8K (nonvirtual) 36-bit IBM 704 in 1957 when Sputnik went into orbit. The Julian day was 2435839 on January 1, 1957. This is 11225377 octal, which was too big to fit into an 18-bit field. With only 8K of memory, the 14 bits left over by keeping the Julian date in its own 36-bit word would have been wasted. They also needed the fraction of the current day (for which 18 bits gave enough accuracy), so it was decided to keep the number of days in the left 18 bits and the fraction of a day in the right 18 bits of one word.
Eighteen bits allows the truncated Julian day (the SAO day) to grow as large as 262143, which from November 17, 1858, allowed for 7 centuries. Possibly, the date could only grow as large as 131071 (using 17 bits), but this still covers 3 centuries and leaves the possibility of representing negative time. The 1858 date preceded the oldest star catalogue in use at SAO, which also avoided having to use negative time in any of the satellite tracking calculations.
This sort of history is absolutely fascinating to me, and also makes perfect sense in retrospect, especially considering the popularity of VMS in the astronomy and physics disciplines.
I wonder which came to VMS first — the SAO Julian date epoch, or the astrophysicists?
I’m currently on a Matias Quiet Pro, because I wanted something fairly quiet with NKRO and a decent Alps-feeling switch. They’re not quite as nice as white Alps, but they are meaningfully tactile and the “click” is nice and high.
My previous keyboard was a ~30 year old NTC KB-6153EA with white clicky Alps switches, which are just lovely. They make Cherry MX Blues feel and sound like complete junk.
I also have a slightly worse-for-wear IBM M122 which I plan on building a converter for just for the sake of it. Who can resist the prospect of 24 F-keys?!
Unicomp PC 122 is a modern USB-supporting version of the M122 5250 keyboard and is readily available.
While I remain a big fan of the original Model M, I stick with my trusty Unicomp UNIX keyboards (specifically, the Unicomp Inc R6_x Bright_Linux models), having tried all the more expensive, high-end, trendy ones.
(Apologies for the dirty keyboard picture.)
I am very interested in trying an Esrille NISSE, however.
It’s very much alive and kicking - there’s even NeoMutt, a fork with added features. As someone who’s used Mutt/NeoMutt almost every day for 20+ years, it’s still very much useable today. Yes, HTML email does make things a bit painful, but there are workarounds.
I’m using a stripped down version of elinks to do HTML -> plaintext conversions, both for mail and some other projects. w3m is also popular for this task.
Do you have other solutions you’d like to share?
I’m using pretty much the same, albeit with w3m. I use a modified version of view_attachment.sh to handle attachments (grabbed from The Homely Mutt - there are plenty of other great tips in that article).
Thanks. I’m working on a bidirectional mail gateway which does Unicode/MIME/RFC-5322/RFC-6854 <—> ASCII-ANSI-X3.4-1986/RFC-822 conversions.
Converting MIME/Base64 encoded parts into UUENCODE and back is straightforward (and lossless).
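As a sketch of why this direction is lossless: both encodings wrap the same raw bytes, so the conversion is decode-then-re-encode. This is not the gateway’s actual code, just a minimal stdlib illustration (the filename argument is hypothetical):

```python
import base64
import binascii

def b64_to_uu(b64_text: str, name: str = "attachment.bin") -> str:
    """Re-encode a Base64 MIME body as a classic uuencoded block."""
    data = base64.b64decode(b64_text)
    lines = [f"begin 644 {name}"]
    # uuencode works in chunks of at most 45 input bytes per output line
    for i in range(0, len(data), 45):
        lines.append(binascii.b2a_uu(data[i:i + 45]).decode("ascii").rstrip("\n"))
    lines.append("`")   # zero-length line terminates the data
    lines.append("end")
    return "\n".join(lines)

def uu_to_b64(uu_text: str) -> str:
    """Reverse direction: uuencoded block back to Base64."""
    data = b""
    for line in uu_text.splitlines():
        if line.startswith("begin") or line in ("end", "`", ""):
            continue
        data += binascii.a2b_uu(line)
    return base64.b64encode(data).decode("ascii")
```

A round trip through both functions returns the original Base64 payload byte for byte, which is the “lossless” property mentioned above.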
The lossy transliteration of Unicode characters into plaintext equivalents is less straightforward and there is a wealth of prior art.
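The simplest point in that design space, for reference, is Unicode normalization: decompose characters, then drop what has no ASCII equivalent. Real transliteration tables do far better; this is only a hedged baseline sketch:

```python
import unicodedata

def to_ascii(text: str) -> str:
    """Lossy baseline transliteration: NFKD-decompose (splitting accents
    from base letters), then discard any remaining non-ASCII code points."""
    decomposed = unicodedata.normalize("NFKD", text)
    return decomposed.encode("ascii", "ignore").decode("ascii")
```

This maps “café” to “cafe” and the “ﬁ” ligature to “fi”, but silently deletes characters with no decomposition (e.g. CJK), which is why dedicated transliteration libraries exist.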
The task of creating a usable presentation of modern HTML mail as plain text, however, is more of an art than a science.
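To see why it is an art: even the mechanical part (extracting text nodes while skipping scripts and inserting line breaks at block elements) only gets you a crude result, as this stdlib-only sketch shows. Anything beyond this - tables, links, layout - is where tools like elinks and w3m earn their keep:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Crude HTML -> plain text: keep text nodes, skip script/style,
    and emit a newline at common block-level tags."""
    BLOCK = {"p", "br", "div", "li", "tr", "h1", "h2", "h3", "h4"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self.skip = 0  # depth inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1
        elif tag in self.BLOCK:
            self.parts.append("\n")

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip:
            self.skip -= 1

    def handle_data(self, data):
        if not self.skip:
            self.parts.append(data)

def html_to_text(html: str) -> str:
    p = TextExtractor()
    p.feed(html)
    p.close()
    return "".join(p.parts).strip()
```

For mail use, something like `elinks -dump` or w3m (as mentioned above) produces far better output; the sketch just illustrates the baseline problem.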
FWIW, urlscan is another useful tool: https://github.com/firecat53/urlscan
In mutt I bind this to C-b so I can quickly open a link in my browser.
I’m actually still an elm user, myself.
I also still use it. Works great, no nonsense. Sure, when I want to see an image I have to scp it to my local system, but hey :)
I’m glad we still have such a stripped-down email implementation being kept up. I’d probably only use it in cases where I needed the extensibility though, as automatic email filtering is far too big a boon to give up.
You can always use Sieve on the IMAP side, or fdm, maildrop, or the venerable procmail for local filtering.
To clarify: I’m specifically thinking of Inbox.
My issue with most implementations of 2FA is that they rely on phones and MMS/SMS, which is beyond terrible and is often less secure than no 2FA at all - as well as placing you at the mercy of a third-party provider of which you are a mere customer. Fail to pay your bill because of hard times or, worse yet, have an adversary inside the provider (or a government with influence over the provider) and all bets are off - your password is going to get reset or your account ‘recovered’ and there isn’t much you can do.
For these reasons, the best 2FA, IMO, is a combination of “something you have” - a crypto key - and “something you know” - the password to that key. Then you can backup your own encrypted key, without being at the mercy of third parties.
Of course, if you lose the key or forget the password then all bets are off - but that’s much more acceptable to me than the alternative.
(FYI - I don’t use Github and I’m not familiar with their 2FA scheme, but commenting generally that most 2FA is done poorly and sometimes it’s better not to use it at all, depending on how it’s implemented.)
GitHub has a very extensive 2FA implementation and prefers Google Authenticator or similar apps as a second factor.
https://help.github.com/articles/securing-your-account-with-two-factor-authentication-2fa/
I don’t use Google’s search engine or any of their products nor do I have a Google account, and I don’t use social media - I have no Facebook or Twitter or MySpace or similar (that includes GitHub because I consider it social networking). Lobste.rs is about as far into ‘social networking’ as I go. Sadly, it appears that the GitHub 2FA requires using Google or a Google product - quite unfortunate.
You can use any app implementing the appropriate TOTP mechanisms. Authenticator is just an example.
https://help.github.com/articles/configuring-two-factor-authentication-via-a-totp-mobile-app/
Google Authenticator does not require a Google account, nor does it connect with one in any way so far as I am aware.
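Right - the TOTP scheme is an open standard (RFC 6238 over RFC 4226 HOTP), which is exactly why any compatible app works and no Google account is involved. A minimal stdlib sketch of what such an app computes:

```python
import hmac
import struct
import time
from hashlib import sha1

def totp(secret: bytes, for_time=None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP (RFC 4226) over the current 30-second counter."""
    t = int(time.time() if for_time is None else for_time)
    counter = t // step
    digest = hmac.new(secret, struct.pack(">Q", counter), sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The shared secret is the only state; both the server and the app derive the same code from it and the clock, so the phone never needs network access at all.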
Github also offers U2F (Security Key) support, which provides the highest level of protection, including against phishing.
This is very good to know - thank you for educating me. I only wish every service gave these sort of options.
You can also use a U2F/FIDO dongle as a second factor (with Chrome or Firefox, or the Safari extension if you use macOS). Yubikey is an example, but GitHub has also released and open-sourced a software U2F app.
My issue with most implementations of 2FA is that they rely on phones and MMS/SMS which is beyond terrible and is often less secure than no-2FA at all
A second factor is never less secure than one factor. Please stop spreading lies and FUD. The insecurity of MMS/SMS is only a concern if you are being targeted by someone with the resources required to physically locate you and bring equipment to spy on you and intercept your messages or socially engineer your cellular provider to transfer your service to their phone/SIM card.
2FA with SMS is plenty secure to stop script kiddies or anyone with compromised passwords from accessing your account.
I happen to disagree completely. This is not lies nor FUD. This is simple reality.
When the second factor is something that is easily recreated by a third party, it does not enhance security. Since many common “two-factor” methods allow resetting a password with only SMS/MMS, the issue should be quite apparent.
If you either do not believe or simply choose to ignore this risk, you do so at your own peril - but to accuse me of lying or spreading FUD only shows your shortsightedness here, especially with all of the recent exploits which have occurred in the wild.
Give me an example of such a vulnerable service with SMS 2FA. I will create an account and enable 2FA. I will give my username and password and one year to compromise my account. If you succeed I will pay you $100USD.
We both know $100 doesn’t even come close to covering the necessary expenses or risks of such an attack - $10,000 or $100,000 is a much different story - and it’s happened over and over and over.
For example, see:
Even weak hackers can pull off a password reset MitM attack via account registration
Hackers Have Stolen Millions Of Dollars In Bitcoin – Using Only Phone Numbers
Coinbase vulnerability is a good reminder that SMS-based 2FA can wreak havoc
After years of warnings, mobile network hackers exploit SS7 flaws to drain bank accounts
Just because I’m not immediately able to exploit your account does not mean that it’s wise to throw best-practices to the wind.
This is like deprecating MD5 or moving away from 512-bit keys - while you might not be able to immediately crack such a key or find a collision, there were warnings in place for years which were ignored - until the attacks become trivial, and then it’s a scramble to replace vulnerable practices and replace exploitable systems.
I’m not sure what there is to gain in trying to downplay the risk and advising against best practices. Be part of the solution, not the problem.
Edit: Your challenge is similar to: “I use remote access to my home computer extensively - I’ll switch to using Telnet for a month and pay you $100 when you’ve compromised my account.”
Even if you can’t, that doesn’t justify promoting insecure authentication and communication methods. Instead of arguing about the adequacy of SMS 2FA long after it’s been exposed as weak, we should instead be pushing for secure solutions (as GitHub already has, as mentioned in the threads above).
I also wanted to apologize for the condescending attitude in my previous response to you.
So you’re admitting that SMS 2FA is perfectly fine for the average person unless they’ve been specifically targeted by someone who has a lot of money and resources.
Got it.
DES, MD5, and unencrypted Telnet connections are perfectly fine for the average person too - until they are targeted by someone with modest resources or motivation.
So, yes, I admit that. It still is no excuse to refuse best practices and use insecure tech because it’s “usually fine”.
Please study up on Threat Models. Grandma has a different Threat Model than Edward Snowden. Sure, Grandma should be using a very secure password with a hardware token for 2FA, but that is not a user friendly or accessible technology for Grandma. Her bank account is significantly more secure with SMS 2FA than nothing.
That actually depends on how much money is in Grandma’s bank account. And if SMS can be used for a password reset, I’d highly recommend grandma avoid it - it simply is not safer than using a strong unique password. With the prevalence of password managers, this is now trivial.
While I don’t have any grandmas left, I still have a mother in her 80’s, and, bless her heart, she uses 2FA with her bank - integrated into the banking application itself that runs on the tablet I bought her - and it does not rely on SMS. At the onset of her forgetful old age she started using the open-source “pwsafe” program to generate and manage her passwords. She also understands phishing and similar risks better than most of the kids these days, simply because she’s been using technology for many years. She grew up with it and knows more of the basics, because schools seem to no longer teach the basics outside of a computer science curriculum.
These days, being born in the 1930s or 1940s means that you would have entered college right at the first big tech boom and the introduction of wide-scale computing - I find that many “grandma/grandpa” types actually have a better understanding of technology and its risks than millennials.
I do understand Threat Models, but this argument falls apart when it’s actually easier to use strong unique passwords than weak ones - and the archetype of the technology-oblivious senior, clinging to their fountain pens and their wall-mounted rotary phones, is, as of about ten years ago, a thing of the past.
More on SMS 2FA posts:
https://pages.nist.gov/800-63-3/sp800-63b.html#pstnOOB
https://www.schneier.com/blog/archives/2016/08/nist_is_no_long.html
NIST is no longer recommending two-factor authentication systems that use SMS, because of their many insecurities. In the latest draft of its Digital Authentication Guideline, there’s the line: [Out of band verification] using SMS is deprecated, and will no longer be allowed in future releases of this guidance.
Since NIST came out strongly against SMS 2FA years ago, it should be fairly straightforward to cease any recommendations for its use at this point.
This weekend my only plan is to watch the World Cup fixtures - and finish up my Multics RFC-822 gateway. But mostly football.
I very much like this - and I’m an avid elinks user.
I’m going to investigate this for some use cases where I’m currently abusing elinks -dump to translate HTML into usable plaintext for archiving, summarization, e-mail, etc. It’s heavyweight, yes - but sometimes the quality of the textual rendering is important - elinks usually does a decent job here, but it’s nice to have alternatives.
See previous discussion here: https://lobste.rs/s/1ydlnj/10_years_internal_atari_emails_1982_1992
Suggest merging, but this link is excellent for the context it provides.
(Also, as a lifetime VMS partisan, the newbie VAX/VMS questions in the some of the mails are amusing.)
If you want to play with CP/M (68000) without effort, I have a public CP/M instance up - ssh or mosh to cpm68k@m.trnsz.com.
The emulator is an experiment in progress extending the work of Roger Ivie - the disk image includes all his work. The simulator is using the CPU core from MAME. The entire disk is writable, but not persistent - you get a fresh session each time.
I haven’t touched this in a while, but I had plans to add a way for users to log in and log out that would keep their changes to the disk intact. I might revisit it at some point.
[Comment removed by author]
I’ve only had a few interactions with him, and nothing offensive. I’m willing to give everyone the benefit of the doubt, especially because I’m often guilty of immediately reacting and attacking and being ‘difficult’.
rain1 actually knows this about me first hand, from having some discussions with me and providing a listening ear for my rantings. :)
As an anecdote - back when Theo de Raadt used to be a regular on EFnet, he patiently and humbly spent many hours helping me get an early OpenBSD release working on a non-standard 486SX embedded board - and once it finally worked, he absolutely refused to commit the changes back to support my “broken-ass crap hardware”, or similar. And he /quit. :)
He probably doesn’t remember the interaction, but I sure do, and it left a lasting impression on me of a strongly opinionated but good and decent guy. Theo is historically known as a “difficult” personality. The moral here is to always give people a second chance.
If people can’t handle feedback it’s not going to be something you can rely on.
Like all free software, s6 comes with no warranty. What security issue did you discover, and how was the author rude?
Here is an excerpt from my #s6 Freenode IRC log from June 19th 2018. It contains rain1’s report about a security issue, but doesn’t contain the follow-up conversation. Given the implication of rain1’s now deleted comment, I think it is worth clearing the air over precisely what was reported:
[10:51:42] <rain1> if i try ssh '`evilcommand`@my-s6-system'
[10:52:09] <rain1> @400000005a5803150fca7ec6 Failed keyboard-interactive/pam
for invalid user `evilcommand` from 127.0.0.1 port 46200 ssh2
[10:52:11] <rain1> is in the logs
[10:52:36] <rain1> and if i try to run that as a script it executes the evil command,
then errors saying @400000005a5803150fca7ec6: command not found
[10:54:04] <rain1> so based on this I wonder if we should think about a different way
that s6 log can express status without using the +x bit as a signal?
https://skarnet.org/software/s6/s6-log.html
The report is observing that s6-log uses the execute bit to track state during log rotation, and that if one executes a log file you might run a command embedded in your log file by an attacker.
I don’t consider this a security issue in s6-log, though I do consider it reasonable to discuss using file permissions for non-permission like state. djb did this with .qmail files, using the sticky bit to indicate whether delivery to a mailbox is enabled.
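For readers unfamiliar with the pattern, “state in permission bits” looks something like this hedged Python sketch (function names are mine, not s6-log’s): the owner-execute bit is toggled as a flag, which costs no extra metadata but means the file *looks* executable to a shell:

```python
import os
import stat

def mark_rotated(path: str) -> None:
    """Set the owner-execute bit as a 'this file was rotated' flag."""
    st = os.stat(path)
    os.chmod(path, st.st_mode | stat.S_IXUSR)

def is_rotated(path: str) -> bool:
    """Read the flag back from the file's mode bits."""
    return bool(os.stat(path).st_mode & stat.S_IXUSR)
```

The hazard discussed above follows directly: the flag makes a plain data file executable, so anyone who runs it hands its contents (possibly attacker-supplied log lines) to the shell.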
Very cool! As an old Amiga hand it tickles me to hear about work still going into a fork of AmigaDOS.
I have to wonder though - who is actually USING this? Folks with old actual Amiga hardware and PowerPC processor extender cards?
MorphOS is a bit different. See its overview. It aims to be a full-featured desktop that can run on newer hardware like PPC Macs. It has a microkernel that can emulate old stuff or run new stuff. The Amiga community also had some computers purpose-built for them. AmigaOne X1000 was on high end.
The Cyrus+/X5000 is the latest from A-Eon - the cost is high but the products are much loved.
Still a cheaper non-x86 option than a Raptor Talos II. The chip isn’t as fast, but it’s quite usable with lean software. The Amiga people seem to be all over that concept.
A couple of notes…
Some demoscene guys, commercial Amiga programmers fans started building MorphOS. There was no people connected with Commodore directly or OS3.x developers which is kinda weird
Actually, not weird at all.
Firstly, if you let someone who has seen the original source code of a piece of software work on what is supposed to be legally clean reimplementation of it, you are essentially asking to get sued.
Secondly, most of the Commodore software engineers working on AmigaOS were hired hands. They did it for a pay cheque, not passion. Even the original development team, who made some personal sacrifices to keep the project alive, were a lot less attached to Amiga and AmigaOS than the end users who opted to stick around until today. If you read their work bios, they usually moved on very quickly and worked on all kinds of different platforms.
That said, it is worth mentioning that the original MorphOS team included developers who had previously provided essential system software such as graphics card driver stacks (CybergraphX), Magic User Interface (a GUI toolkit), the Voyager web browser, and others. These would normally be included with an operating system, but had to be developed by third parties since Commodore was unable to do so.
There’s a myth that original non-public and some first public released were based off from OS3.x sources stolen from somewhere
Feel free to look up how much cleaning up the developers with actual access to the original source code had to do.
It is a silly idea that was primarily spread by an individual who set up a contract promising to port AmigaOS from 68k to PowerPC in a matter of months for a mere 25.000 EUR, failed to do so, then turned around and sued his client for several years, and was eventually granted the AmigaOS rights because Amiga’s only investor unexpectedly died and there was nobody around who had the money to keep paying expensive lawyers…
So, everyone, please consider the source.
The “blues” (MorphOS Team) got partnered with bPlan (later Genesi) which found MorphOS a nice target OS for their PowerPC G3/G4 boards: EFIKA, Pegasos I, Pegasos II and some R&D boards not publicly known. They pumped some serious money into project, hired some developers full-time for few years and generally accelerated the development
Actually, Thendic France spent a lot less on MorphOS directly than you might think. Lots and lots of unpaid bills. Also, the very few hired full-time developers that were there lost their jobs or outright quit after mere months, not years.
License is kinda expensive
As always with prices, this is subjective as well as relative. You can never please everybody.
However, I think it is worthwhile to add that updates have always been free. People who registered MorphOS 2.0 all the way back in 2008 have received a total of 20 free updates over the course of 10 years.
Even when Microsoft still allowed free upgrades to Windows 10, that offer did not include 10-year-old Windows versions. And this is a billion-dollar company, not enthusiasts who have to buy their own development hardware to develop and maintain drivers, etc.
Even if the kernel can, OS can’t allocate more than 1 gigabyte of RAM (or maybe 2?) even when Amiga could theoreticaly allocate 4GB of RAM in 32-bit address space.
Commodore’s AmigaOS used a 31-bit address space (2GB). Sadly, backwards compatibility depends on it.
I do not mean to sugarcoat this whatsoever but, based on user feedback, 2GB is actually still decent for the time being.
The system was built with GCC 2.95.3 to this day
That is incorrect.
Same for computers sold/transferred between users, you must rename your license.
Nobody needs to rename anything. There are some users who are apparently bothered by the fact that the registration information lists a different name than their own. Those have the option to get a new keyfile with their own name in it.
From all of my knowledge there’s only a single German company which bought about 30 licenses late 2000s for their work machines and nobody actually know what they really do
If that happened, it was probably a tax write off issue or so. Companies buy crazy things near the end of a fiscal year ;)
No supported office software exists.
If you take a look at the new Iris IMAP email client, its email editor is half-way there to a word processor…
No Vim or Emacs. This is ridiculous (…) I asked on MorphOS IRC channel for help, everyone turned off as “it’s a linux shit, get away with it”
Actually, there is a port of Vim 8.0.1… I do not recall anything but appreciative comments after its release.
That said, there is also Flow Studio, which uses the Scintilla engine and offers lots of handy features that make MorphOS development more convenient.
But it has some irrational design issues and the community is horrible when you can’t get their spirit of old grumpy Amiga user who’s angry at everything around and frustrated from 20 years waiting for “next Amiga” which never happened.
Before anybody takes this for gospel, please visit MorphZone and see for yourself whether the community is “horrible” or not.
I thought about linking to it but figured it might be too much for casual readers. I saved that one for when people ask about the deeper history. ;)
Thanks for that! Given that you say in the article that Hyperion shuttered days before you wrote it, who’s putting out new MorphOS versions? :)
From what I remember, Hyperion did AmigaOS variant. The MorphOS variant is this team. They were competing groups. I still want to know which company is referenced in this quote:
“From all of my knowledge there’s only a single German company which bought about 30 licenses late 2000s for their work machines and nobody actually know what they really do”
It could be pretty boring, with some terrible acquisition practices. It might also be a very interesting outfit. Maybe something in between.
The death of Hyperion seems to have been premature. Also, MorphOS isn’t Hyperion.
There are several forms of AmigaOS like operating systems: AROS, AmigaOS 3, AmigaOS 4, and MorphOS.
There are also “hybrid” projects such as AfA (AROS for AmigaOS), which replaces parts of AmigaOS with AROS components when those components are stable, compatible, and offer additional functionality.
Side note I wanted to add: I know the developer, and there are still some features needing to be added, such as the ability to sync articles with other NNTP servers, which is obviously quite common (e.g., Usenet).
Such as? I am interested to find out what is available as setting something up like that is on my todo list.
https://github.com/dustin/nntpsucka is a useful start. If I remember some of the other solutions I’ll update you - most are quite old and my memory fails me at the moment.
Awesome. Thanks. I will try this out probably tonight then.
I will set up two WendzelNNTPd instances and then nntpsucka in both, I assume, for bidirectional sync (I haven’t read how it works yet, gonna click the link now).
It seems the most classic solution (suck and rpost) has also been recently updated - https://github.com/lazarus-pkgs/suck
The Wikipedia articles on Magic Cap and Telescript are decently informative and well referenced.
However, the Telescript manual is a really great and highly recommended read.
Edit: The overall design goals and many of the concepts of the General Magic model and the Telescript language seem similar to the distributed vision of TRON, which was actually suppressed by the U.S. government.