1. 5

    Relevant - Unisys (the company that owns the IP of the Burroughs Large Systems and UNIVAC hardware lines and of the MCP and OS2200 operating systems, respectively, has been working on converging them, and still actively markets and maintains these platforms) has made available FREE (well, licensed, but free as in beer) releases of the current software, as well as the virtual machine software necessary to run them on commodity hardware.

    If you’d like to be able to experiment hands-on with these platforms, check out https://lobste.rs/s/r5wino/clearpath_mcp_express which I previously posted.

    For many of us, this was a long-awaited development from Unisys, coming as quite a surprise after many years of seemingly futile requests for some sort of hobbyist program.

    This is a great way for mainframe hobbyists, enthusiasts, students, and others who are just curious to gain experience with these platforms.

    1. 2

      I find it especially interesting that the C and BLISS compilers can “compile on OpenVMS” and “link and execute on Linux or MacOS”. It’s also interesting that they use the standard LLVM libc++ and extend it for OpenVMS compatibility.

      Finally, I hope it’s a license they end up giving away in their “guess-when-it-boots” contest. (I guess we just need to trust them on that contest, though!)

      1. 1

        Same here. It would be great if they’d resurrect the hobbyist license.

      1. 5

        Another APL post that hinges on the “special keyboard”!?

        Does it ever occur to him/her that all the non-English speakers use “special keyboards”?

        APL’s downfall is not because of its symbols; it’s because some draconian company still charges thousands of dollars for a small interpreter. https://www-112.ibm.com/software/howtobuy/buyingtools/paexpress/Express?P0=E1&part_number=D50Z7LL&catalogLocale=en_US&Locale=en_US&country=USA&PT=jsp&CC=USA&VP=&TACTICS=&S_TACT=&S_CMP=&brand=SB07

        1. 4

          Tangential, but I believe the most vibrant APL community these days is around Dyalog APL rather than the old IBM one, which is still commercial but not quite that exorbitantly priced (and as of fairly recently, is free for personal/noncommercial use).

          1. 10

            There’s also the smaller J community, which is open source.

            1. 5

              There’s a project, Co-dfns, that’s implementing a version of the Dyalog language with parallelism built-in, and it’s released under AGPL v3.

              1. 2

                The problem is both the reason companies can charge a fortune for an interpreter and something that charging reinforces.

                None of the implementations are fully compatible. They all decide to include some extensions that supposedly make their own implementation better, which introduces fragmentation. If somebody’s code works on IBM’s interpreter, it will likely cause problems with other implementations, which lets IBM charge whatever amount of money they want.

                On the other hand, most of the APLers moved on to FORTRAN in the ’80s, for the comparatively free compilers and the performance of the compiled machine code, and the HPC numerical computing communities never looked back at APL.

                1. 2

                  If you want a historical or “real” APL, there is MVT4APL - a distribution of OS/360-MVT 21.8F, customized for use with APL\360 Version 1 Modification Level 1. “Real” IBM mainframe APL on Windows or Linux.

              2. 2

                A+, openAPL and NARS2000 have been available since the 1980s and many other free implementations exist as well.

                1. 2

                  Well, there is GNU APL, and it has existed for some time.

                1. 1

                  “Will OpenVMS ever be a hypervisor? Extremely unlikely.”

                  Oh, come on!! This was one of the very uses I was hoping to get out of it. You need something rock-solid if you’re not going micro/separation-kernel on the situation, and OpenVMS is rock-solid. I’d love to throw it in Dom0 for availability-focused situations. Guess the VMS Cloud is not to be. It would still make a great database, recovery, or user-config server backing up a bunch of BSD/Linux boxes, though.

                  1. 2

                    I was disappointed as well, especially since the work had already been done once before. :(

                  1. 6

                    This is great. It’s been so long since I’ve seen OS/2.

                    Also, it’s amusing that this has been kept up-to-date with the old school website feel, including the “IE is EVIL!!!” section. Party like it’s 1999.

                    1. 1

                      Remember, there is https://www.arcanoae.com/ which produces ArcaOS, a maintained and updated OS/2 distribution.

                    1. 2

                      Rereading Wolfram’s “A New Kind of Science”, which, coincidentally, was recently made freely available!

                      1. 3

                        Related: I’ve always had the idea that one could create a reverse-robots.txt search engine (i.e., it indexes only the stuff robots.txt says shouldn’t be indexed). I’m not aware of anyone ever doing it, but it would probably be interesting.

                        I’d expect that plenty of people mistakenly think robots.txt is a security mechanism.
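
                        For illustration, a hypothetical robots.txt (paths invented) that such an engine would feast on:

                        User-agent: *
                        Disallow: /admin/
                        Disallow: /backups/
                        Disallow: /staging/

                        A well-behaved crawler skips those paths; the reverse engine would index only them, which is exactly why treating robots.txt as a security mechanism is a mistake.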

                        1. 1

                          Unfortunately (DMCA, etc.) I would assume this would likely be considered a criminal act, and Americans or US entities anywhere in the world might be found to be committing crimes by even linking to the results of such an engine.

                          It sure would be neat to see, however; there appear to be very few places where it would be both ‘safe’ and stable enough to run such a site.

                        1. 4

                          To use actual C as a scripting language, check out TCC!

                          TCC can also be used to make C scripts, i.e. pieces of C source that you run as you would a Perl or Python script. Compilation is so fast that your script will run as fast as if it were an executable. You just need to add #!/usr/local/bin/tcc -run at the start of your C source:

                          #!/usr/local/bin/tcc -run
                          #include <stdio.h>
                          
                          int main() 
                          {
                              printf("Hello World\n");
                              return 0;
                          }
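
                          Then just mark the file executable and run it like any shell script: chmod +x hello.c && ./hello.c (assuming tcc actually lives at /usr/local/bin/tcc, as the shebang expects).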
                          
                          1. 2

                            The output of compiling Little is Tcl code. See http://www.little-lang.org/why.html

                            1. 1

                              I’m late, but GCC works fine too, and it works nicely for CGI; here are two one-liners:

                              » cat demo.sh

                              #!/opt/misc/chax
                              void main(void){printf("Content-Type: text/plain\n\n");printf("Hello World!\n");}

                              » cat /opt/misc/chax

                              #!/bin/sh
                              # Strip the shebang line, compile the rest as C with gcc into a
                              # temporary executable, run it, then clean up.
                              (TMPO=`mktemp`;sed -n '2,$p' "$@"|gcc 2>/dev/null -std=gnu99 -pipe -O2 -x c -o $TMPO - &&$TMPO 2>/dev/null;rm -f $TMPO)

                            1. 1

                              Aww, I was hoping for a better history of the pre-Solaris 2.x, BSD-based SunOS releases from 1987 through 1992, which aren’t covered. This was still very interesting, however.

                              1. 3

                                Sorry. I am not qualified to say much about Solaris/SunOS development from that era. From 1985 to 1993 I was just a customer. I was with Sun from 1995 to 2008. I did work on two ports of Solaris other than Solaris/x86:

                                1. the port to IA-64
                                2. the second port to PowerPC.

                                That message was written for a small audience, the Unix Users Association of Southern California (UUASC), where I was to give a presentation in 2009, and it covered only topics relevant to that presentation and that audience.

                                – Guy Shaw

                              1. 7

                                Alan Coopersmith of Oracle just pointed me to the Berkeley CSUA MOTD archive, saving for posterity a record of how the (world-writeable) MOTD file there was used as a communication medium by the students (and administration) from ’93 on. Very cool!

                                Edit: It was also pointed out to me that “news” lived on in Solaris 2 before being dropped for the 11 release. See http://illumos.org/man/1/news for the latest version which isn’t too far from the original.

                                Edit 2: Added release tag so people notice the ancient code, revived.

                                Edit 3: Here is the last release of the “news” utility.

                                1. 1

                                  I should do some additional digital archeology here, and look for prior art, etc. - so consider this just my memory and opinion.

                                  1. 3

                                    Let’s go with a fresh new idea of a database file system. https://en.wikipedia.org/wiki/Record_Management_Services

                                    Somewhere in one of the early UNIX papers there is a discussion of how hard it is to write “cat” with a record-oriented database file system.
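
                                    For anyone who hasn’t read that discussion: under the Unix byte-stream model, “cat” really is just a copy loop, as in the minimal sketch below (mine, not from the paper). Under a record-oriented file system, the same program would have to know each file’s record format, fetch records one at a time, and decide how to re-serialize record boundaries on output.

                                    #include <unistd.h>

                                    /* cat: copy stdin to stdout, oblivious to any structure in the
                                       data. This only works because Unix treats files as flat byte
                                       sequences. */
                                    int main(void)
                                    {
                                        char buf[4096];
                                        ssize_t n;

                                        while ((n = read(0, buf, sizeof buf)) > 0)
                                            if (write(1, buf, (size_t)n) != n)
                                                return 1;
                                        return n < 0;
                                    }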

                                    1. 1

                                      Yep. That was my first thought after BeOS. The other was the AS/400, which includes an integrated database. Both OpenVMS and the AS/400 put the concept to good use. Hoff once said on a mailing list, though, that RMS was slower than regular filesystems for reading/writing data, worse than SQLite etc. as a database, and that maybe OpenVMS should just have a regular filesystem + a database. BeOS similarly had some performance problems during compiles that were due to the microkernel style, the filesystem, or a combination of both; not sure.

                                      So, I just advocate we support multiple types of filesystems or DBs that admins can select based on their usage. I do prefer something like RMS or ZFS by default for the system partition with the critical stuff: versioning, checksums, a clustering tool, and some easy backup option (which may or may not use the clustering tool).

                                      1. 6

                                        Treating files as sequences of bytes and making databases run as applications has been an enormous success for modular design. Same with putting the command line interpreter in a “shell” and not in the OS. The tradeoff is not guaranteed to remain a win forever, but databases and file systems are each really complex programs and combining them needs some serious design thought about what is gained and what is lost.

                                        1. 1

                                          I totally agree.

                                        2. 2

                                          OpenVMS used to make a fantastic workstation - DECwindows (the VMS port of X11, Motif, and CDE). The [VMS filesystem](https://en.m.wikipedia.org/wiki/Files-11) is still best-in-class for its market and would be a great starting point for implementing database-like features. The design of the VMS kernel was used for Windows NT. DCL still beats the Unix shell in every category. I’m hoping that once the x64 port is completed it might be possible to use VMS as a workstation again.

                                          (Once the market for VMS workstations dried up - as the VMS server market expanded - these components essentially were just maintained and patched to stay working but never improved; developer time went elsewhere, and today the system is representative of a 20-year-old GUI environment. Last I checked there was only support for a couple of video boards, but I believe it should still be possible to use DECwindows on an HP Integrity workstation.)

                                          1. 1

                                            I’ve considered it but never met someone who actually used it as a workstation. Do you have a link to a write-up of the pros and cons by anyone who did?

                                            As far as the desktop goes, one could always port a window manager from the UNIX or embedded scene. MorphOS also has a beautiful one with interesting features; you should look up screenshots of it.

                                            As far as the filesystem goes, I’d rather people start with modern designs built for current CPUs and storage architectures. It was great for its time, but some new stuff is a lot better.

                                            1. 2

                                              Not recently, sadly. Using VMS as a workstation would essentially give you a desktop circa 2001-ish based on CDE. Not horrible but nothing I’d celebrate.

                                              As to a filesystem, HAMMER might be something to look at - it essentially does everything: snapshots, volume spanning, transparent versioning, integrity checking, network clustering/mirroring, and all in a very lightweight and efficient way. It’s hungry for disk space but light on memory, unlike ZFS.

                                              1. 2

                                                Oh boy, those Motif desktops. Not nostalgic about them. On the other hand, DEC blowing off the desktop PC market to milk high VAX margins was one of the two or three really indefensible decisions DEC management made.

                                                1. 2

                                                  “Nobody would ever want a smaller, cheaper computer that could handle most of their needs.”

                                                  Indefensible indeed.

                                                  1. 2

                                                    One of the people I knew at DEC used to say about their last CEO, who got a $20M (maybe more? it seems so modest these days) parachute: “I would have run the company into the ground for half that.”

                                                    1. 1

                                                      That’s what I’m telling my current bosses, lol.

                                      1. 2

                                        cf https://www.theatlantic.com/amp/article/537090/ for what I’m choosing to believe is a cunning satire of climatology denial.

                                        1. 1

                                          I’m sure some will downvote, and that’s fine, but the links and views presented in the story really are not only interesting; they make me wonder how the culture of science can work to be not just “trustable”, as the other great article posted of late put it, but convincing to the layman. I found it on-topic and fascinating and hope others do too.

                                          I’m often unsure how even to talk to this fringe, who are so wrong they aren’t even wrong.

                                          I do believe some of these people MUST be parodies or trolls and can’t be serious.

                                        1. 2

                                          Since B was a knock-off of BCPL, I recommend people just go to BCPL to see the real thing that invented “the programmer is in control”, and build on top of this if they find it interesting. It’s just that Thompson had to remove stuff to squeeze it onto the PDP-7.

                                          1. 2

                                            There is a full modern BCPL available from Martin Richards’ BCPL site with a 64-bit port available as well as Cintpos, a port of Tripos using Cintcode BCPL. Tripos was the ancestor of AmigaDOS.

                                            Richards has a related language, MCPL available, which he describes as “a simple typeless language which is based on BCPL. It makes extensive use of pattern matching somewhat related to that used in ML and Prolog, and some other features come from C.”

                                            Edit: Robert Nordier has made available Classic BCPL for modern systems.

                                          1. 10

                                            As someone who has always had proper backups and worked at places that do proper backups, I’ve actually not really cared at all about “ransomware” - it’s a regular occurrence at work and makes little to no impact beyond having to restore a backup and that employee losing a few hours of work and being schooled (again!) not to do insert-whatever-they-did-here. Seems to me a dud, and I’m really shocked and in utter disbelief that any of these attacks actually caused any damage worth noting. Who doesn’t do backups? Seriously?

                                            I used to back up my C64 files to the other sides of old disks and onto cassette tapes. In the pre-PC and early days we had these boxes that used VCR tapes to do backups; I can’t recall the names. Later, “Bernoulli Boxes”, probably 1982 or so? Then a little later QIC tape, etc. Never in my entire time of owning computers have I not done regular backups. Ransomware is barely an annoyance.

                                            What I find horrifying, and hope never hits the mainstream, would be “Leakware”: “Pay us $1mil or all your company data is suddenly available at xxxxzzzxxxxzzzzz.onion”, or maybe worse, “Pay us $10,000 or your entire phone, including all your pictures, is available to everyone on the internet. All your nudie pics and all your SMS conversations.”

                                            Or maybe the worst I can think of: “Pay us $nnnnn or else we just SEND copies, unsolicited, to everyone in your phone book.” That would be catastrophic and would absolutely end careers and many marriages, and with the right victim it could result in criminal cases being overturned or thrown out, for example. Medical records being published. Maybe confidential data and maybe even nuclear secrets.

                                            If you care about data you have backups, so nobody should care about data being destroyed.

                                            Not a lot of people think about the consequences of the data they care about being exfiltrated and published automatically.

                                            1. 6

                                              As someone who has always had proper backups and worked at places that do proper backups, I’ve actually not really cared at all about “ransomware” - it’s a regular occurrence at work and makes little to no impact beyond having to restore a backup and that employee losing a few hours of work and being schooled (again!) not to do insert-whatever-they-did-here. Seems to me a dud, and I’m really shocked and in utter disbelief that any of these attacks actually caused any damage worth noting. Who doesn’t do backups? Seriously?

                                              Reverting from backup on many workstations at a large company means lost work for thousands of people, thousands of hours of lost business, high “opportunity costs”, and follow-up costs for cleanup and making sure the damage is contained. Possibly also the cost of re-certifying your infrastructure. Even if you do everything properly, a ransomware outbreak can have high costs.

                                              This is about businesses, not about your personal data.

                                              I once had a client that got their version control system server turned into a spambot through an automated hack. While this is a bit embarrassing, there was no problem getting it up and running again in an hour. Forensics to verify that everything was still the same was the costly part, and basically halted development for 5 days.

                                              1. 1

                                                Reverting from backup on many workstations at a large company means lost work for thousands of people, thousands of hours of lost business,

                                                Most attacks hit a small subset of workstations, usually someone extra foolish or unlucky, so there’s not much of a loss. Then, for large numbers, admins can roll restores out after office hours, or during lunch for critical machines. Finally, there used to be hard disks built to solve this problem: write-protected by the admin, who had to give permission for permanent changes to specific areas, with updates done through their own tooling at specific times when write-protect was turned off. These didn’t take off much outside of defense or high-security commercial settings, since management wanted to cut IT cost per person no matter what happened. And then we’re right back to things being compromised, with long recovery times. ;)

                                                If the key data is centralized as a backup, then there’s even more options that keep the security higher or possible fixes faster. Increased chance of leaks along with the integrity or availability benefits but most stuff is like that at many companies.

                                                1. 2

                                                  Yes, I concur. At the company I work at, I’ve once seen ransomware affect 2 or 3 workstations (out of about 150 computers here) in a single day and take about 3 hours to fix, and that was considered sort of a disaster. Having to halt work for 5 days would mean the IT people would be fired immediately. It usually takes less than an hour to restore a system from a ransomware attack. I can’t imagine a ransomware attack being more than trivial, because most business users don’t keep important business data on their workstations - it’s checked in and archived when done, and workstations don’t have the write permissions to corrupt the entire company’s data. I still can’t really comprehend how ransomware is an issue for any legitimate business. For the cat lady who doesn’t do backups, “give me 2 bitcoins or you never see your cat pics again!” might be a life-changing attack, however.

                                                  Most of the time the issue is an employee opening an attachment from a user they expect to get an attachment from, which turns out to be bogus, caused by an outbreak on the sender’s side.

                                                  The stories of hospitals or “real” companies getting affected by ransomware, to me, just indicate a complete dereliction of duty on the part of the IT staff and major incompetence.

                                                  1. 1

                                                    If you ‘check in’ files by putting them on a shared drive, ransomware can overwrite the entire shared drive. Lots of companies operate this way with a nightly backup.

                                                    Even if the backup restores cleanly, the disruption of not having the shared drive available for a day is considerable.

                                                    1. 1

                                                      Sorry to be so argumentative - but I feel like the lone guy living in the future here, where these problems are solved?

                                                      Why would a shared resource be unavailable for an entire day? That is insane. If you check your files into a repository, it’s usually just a single command to revert to the previous commit. Most important data these days is stored in versioned repositories, on versioned filesystems or databases, or on append-only media, is it not? This has essentially been my experience going back to the late 1970s and mid-1980s. Microsoft has had VSS/shadow copies available for about 15 years, and a setup with hourly snapshots combined with daily backups seems the norm.

                                                      I’ve just never heard of any legitimate company operating the way you describe and there is simply no excuse for it, if so.

                                                      Edit: Maybe this is just culture shock, not being a Linux guy but coming from the world of big iron and enterprise administration and bringing my mindset with me. The tools in the Unix “small systems” world these days are there to provide a lot of the same functionality. Why aren’t they used?

                                                      Are most admins and operators really decades behind in best practices?

                                                      1. 1

                                                        Software companies manage code in a VCS, but smaller non-software companies frequently manage their systems by putting an Excel document on a network share.

                                                        This is getting less common, but there’s plenty of it about. As malware gets smarter, it’ll be able to wipe git repos, Google Apps documents, etc. as well.

                                              2. 3

                                                What I find horrifying, and hope never hits the mainstream, would be “Leakware”: “Pay us $1mil or all your company data is suddenly available at xxxxzzzxxxxzzzzz.onion”

                                                HBO hack?

                                                1. 2

                                                  Sony Hack but for a drug or technology company with non-patented stuff. Alternatively, the kind of places that invest in high-availability or fault-tolerant solutions since downtime costs them so much. Maybe even a data broker.

                                                  1. 2

                                                    For actual tech, they probably don’t care. Using illegally acquired trade secrets from a competitor is bad news bears. Remember, when somebody tried to sell Pepsi the secret formula for Coke, Pepsi called the FBI. I don’t think drug company A wants anything to do with drug company B’s secret documents.

                                                    1. 1

                                                      What? A direct competitor in the same area buying them is a significant liability, especially against the brand the buyer has developed. But plenty of others in the same country, or especially foreign ones, will buy. Russia and China are especially known for making this a big part of their economies. Many others, too.

                                                      If it’s routers or mobile, then Huawei stands out recently.

                                                1. 1

                                                  Also, does anyone have David Harland’s “Rekursiv: Object-Oriented Computer Architecture”, published by Ellis Horwood (ISBN: 0-7458-0396-2), August 1988?

                                                1. 2

                                                  The article certainly rings true to me, though I can’t confirm it. Anyone inside security at a large organisation who can speak to whether the author’s claims hold up in reality?

                                                  I do find it interesting that while “ransomware” is relatively new, data-destructive malware certainly is not (though there was a long period, perhaps more than a decade, between the technovandalism of the late ‘90s and early 2000s and modern ransomware, during which malware mostly ran bots and spyware and so had an incentive to operate subtly), and destructive worms never motivated a huge investment in IT security. Is it just that there’s now a revenue stream attached to “destroying” data, so the attacks are more widespread? Digital data wasn’t worth enough in the ’90s to protect? Some recent victims of ransomware have been very high-profile? Something else?

                                                  1. 6

                                                    Data point: Maersk reported a loss in the hundreds of millions of dollars from Petya. That’s pretty motivational if you ask me (even if still only a fraction of their total profit).

                                                    Blaster and Slammer and other worms of yore weren’t as destructive. Maybe they’d bring the network down, but they mostly just spread without nuking data. There was also probably less reliance on computer systems then. Reliance (and exposure) only increases over time.

                                                    1. 3

                                                      I don’t work in a “large organization” but ransomware has been generally a non-concern. We already have backups.

                                                      The bigger concern - one that actually did scare a lot of people and, in retrospect, turned out to be nothing - was the rumors of hardware-destructive viruses. As I recall, there were quite a few of those; AntiCMOS and Chernobyl come to mind.

                                                      1. 2

                                                        I wonder if it’s just the very direct price tag attached. Loss of data can sort of be waved away as something that happens, a normal cost of business; paying a ransom is a separate line item that’s going to stand out. Even if you don’t pay it, it provides an anchor, making you ask “is this data loss costing the business more or less than $x?”

                                                      1. 4

                                                        Academics should come to expect published papers to be in the form of Jupyter Notebooks, with the data embedded (or otherwise available for re-calculation).

                                                        The notebook tech doesn’t have to be the Jupyter brand; Org-mode with Babel is another option.

                                                        The point is that during peer review, the peer should be able to change a datapoint and re-render the rest of the document to see how that change cascades through the work.

                                                        Academics seem to use the term ‘reproducible research’ when talking about this.

                                                        1. 7

                                                          That does seem to be the term catching on, but I’m wary of using “reproducible” for this kind of thing where you’re literally just re-running the original author’s code, in their original setup (sometimes even down to a whole VM with gigabytes of gunk in it), rather than independently reproducing the results. With the classic idea of reproducibility in the natural sciences, independently constructing your own experimental setup, using your own completely separate materials, is critical, because part of the point of reproduction is to try to tease out if there was any hidden dependence on specifics of the original equipment, or confounding factors like minor impurities or miscalibrated equipment. So you don’t reproduce research by going into the original lab, pushing the same buttons on the same equipment, and observing the same results— you do it by reading a published description of the experimental protocol, and then independently working only from that, see if you can reproduce the results on your own completely separate equipment, with different staff, different sample suppliers, in a different location, etc.

                                                          For computational work, I do see benefit to sharing the data specifically, especially in the case of data that’s expensive to collect. But for good reproducibility the person trying to reproduce a result really shouldn’t be working directly from the original author’s Jupyter notebooks, VMs, scripts, etc.; ideally they’d write their own scripts to avoid inadvertently copying some quirk that seemed irrelevant but turns out to matter.

                                                          1. 2

                                                            Hell yeah!

                                                            So, one problem to solve is the Greece debt paper issue: that paper itself wasn’t ‘reproducible’ using the authors’ own materials. At least the authors released their Excel sheet, without which the problems wouldn’t have been identified. Documents-as-code, or ‘reproducible papers’, or ‘literate programming’, addresses this problem.

                                                            The problem you describe is outside of my area of expertise. It’s important, thank you!

                                                            1. 1

                                                              Would the characteristic be better named “public method” or something?

                                                              In science, your code frequently is your method; if you don’t publish your method it should be hard to take your results seriously.

                                                            2. 1

                                                              Speaking to angersock’s ‘clusterfuckery’ comment re: terrible code and practices… What you do is author your paper using the notebook. So if you want your fancy cluster to crunch some numbers, you put the shell commands to make that happen into the notebook. Right down to the first command:

                                                              for i in `seq 1 1000`; do ssh node-$i.cluster wget http://the-local-shared-drive/20gb-list-of-numbers.csv; done
                                                              

                                                              For that one, you’d want to tell your readers where to get that data. The point is, the first metric peers will use during review is “can I reproduce this?”.

                                                              1. 1

                                                                Hasn’t this been somewhat of a solved issue for a while now, with technologies like Wolfram’s Computable Document Format in wide use for 5+ years, and, in the FOSS world, alternatives (albeit more geared to number theory and computation) like SageMath around for more than a decade now?

                                                                Maybe the question becomes how to get people to adopt new practices? As the original article states, this is not a technical issue - that part of the puzzle is already solved.

                                                                This is a problem of making people change their ways and use new tools and adopt new practices.

                                                              1. 2

                                                                I remember finding some security issue with the FreeBSD implementation about ~5 years ago, in late 2012, but when I looked at the -CURRENT sources I saw it had been removed in early 2013; I can’t recall the exact dates. The portalfs could likely be revived via FUSE or the like.

                                                                1. 1

                                                                  I am totally shocked that nobody has submitted this here yet. Given the recent renewed interest in shells, I thought folks might find it interesting.

                                                                  1. 1

                                                                    Way late to the game, but I had my own rant at a friend privately about csh - usually tcsh “programming” these days. Signal kept crashing when I tried to copy the text, so here is an image, with a little salty language but nothing as offensive as csh itself:

                                                                    https://ban.ai/_matrix/media/v1/download/m.trnsz.com/ebvCeQyMmUfysrPtvztVCAjY