As I age, reading this kind of retro thing becomes stranger and stranger as it starts to overlap with my real-world experiences. It’s odd to see people finding fun in older systems and software that I mostly remember as tedious, frustrating chores. This piece is a good example, because the goal is a perfect fit for one of my dullest war stories.
Right back at the start of my “career”, when I was working as an apprentice in a software house, I remember one day where my whole work day consisted of preparing two lab rooms full of PCs for a planned training course. That meant wiping a heterogeneous group of maybe 12 to 16 PCs of the 486 era and installing fresh Windows 95 and some developer tools (probably PowerBuilder, not Delphi, can’t remember though, might have been something else entirely) from a stack of about 15 floppy disks. There was a bit of a knack to doing it: you could kind of load-balance between 3 machines at once, with a certain amount of waiting and moving around between desks to swap disks. I remember it as a tedious day with bouts of odd troubleshooting and a couple of false starts/rebuilds; I ended up working through lunch and leaving late in order to meet the next-day deadline. Hated it.
I still understand the appeal though; I enjoy reading about and exploring systems from before my own time myself. There’s definitely a great benefit in how much more comprehensible the simpler, older systems can be, set against the complexity explosion modern software developers have to work within. And huge value in learning where you’ve come from. I’m still left shaking my head at early 32-bit PC computing showing up more and more often as an entertainment / hobby pursuit; it’s hard for me to shake my own bias that these machines were terrible to work with.
I only slightly overlapped with this time period, but I think the big difference then versus now is just availability of documentation. When I was having a build error in Visual C++, I had to hope that I could sort out the exact magic words to look for in help, or I had to hope that I recognized the class of the problem from one of the books I happened to have read. There was no StackOverflow, there was no Google, newsgroups were an unreliable source of information and at any rate slow to respond, etc. Like you, making no progress for a couple of days because I had to just spend time reading and trying was a pretty normal situation to be in—and that’s a hell of a difference from today, where I can usually get the answers I need in a matter of minutes to single-digit hours.
I haven’t done legacy Windows programming specifically in some time, but having dived a little bit back into the Mac Toolbox and Sega Genesis, the sheer availability of documentation makes one hell of a difference, and changes this kind of legacy work for me from a slog exactly like what you’re describing to being genuinely fun, at least from a nostalgic point of view.
I think part of the appeal is exactly that. On a 486 you had to troubleshoot your own stuff. But - it was small enough that you could do that. The modern “complexity explosion” means you can’t totally hack a modern PC; there are too many things to know.
I don’t think I understand what this means. They seem to have a path that’s depending on some binary somewhere but then builds a chain of C compilers that builds them a working system? I’m not really sure what this buys them. I can bootstrap a FreeBSD system (kernel, userland, and packages) if I have a moderately recent C++ toolchain that is capable of building a modern Clang (which then builds the rest of the system), bmake, and a couple of other tools. The extra steps in bootstrapping the compiler look like more places for malicious code to hide, but maybe each step is verifiable in some way?
The kernel on the host system that’s doing the builds could just as easily be malicious, so if you’re worried about not being able to trust the compiler, I hope you got the compiler and kernel from different providers.
That 510-byte binary is an operating system kernel written into the MBR of a hard drive/floppy disk and started directly by the BIOS on power-up, hardwired to build a 4KB POSIX kernel.
So there is no kernel to trust. But perhaps you mean the BIOS bootstrap trust problem, which we have not yet solved.
AIUI what they want is to have the initial binary[0] be as simple as possible, i.e. its disassembly is understandable by someone with a passing familiarity with ASM. The rest is supposed to be shell and Guile scripts, and starting with those ingredients, tinyCC is built, with a path towards full GCC.
It is rather moot if it’s all running on a pre-existing POSIX environment since the kernel could compromise anything. I’m still confused about this project’s aims.
Full bootstrapping needs to begin with a computer with all non-volatile storage devices, including firmware flash chips, fully erased, and everything built from source, including firmware. You would need to begin with manually programming a hex monitor similar to that proposed by this project into something. Probably by building a PCB out of fixed-function logic devices that allows you to manually generate SPI transactions (to program an SPI flash) by flipping a switch on and off to manually input binary. The harder part is probably building Linux without a POSIX environment.
Thanks, that makes sense. I’m not sure what this buys you that simply compiling your bootstrap tools with two different toolchains doesn’t though. For example, I can build the FreeBSD bootstrap tools with Clang or GCC (on a FreeBSD or Linux host) and then compare that the binaries that they build are the same. With something like Guix, I’d expect them to be much more able to lean into a package-transparency model, where many people can do the same bootstrap starting with different host platforms and add the hashes that they get and see if they’re getting a different output.
This Bootstrappable Builds project, together with the Reproducible Builds project, has completed one of the only practical examples of “Diverse Double-Compiling” as described by David A. Wheeler.
I would expect them to produce functionally equivalent binaries. This is how GCC’s trusting trust defence works. You first build gcc with the system compiler, then with the newly built gcc, then with that compiler. The second and third binaries are both produced by the same version of GCC, compiled with different compilers, and so should be identical.
All of the tools needed to build FreeBSD are in the source tree and are compiled once with the host compiler during the bootstrap phase. They are then used to build everything that ends up being installed. The result is that the final binary should not depend on the compiler used to build the bootstrap tools.
OK, I think you’re saying compile your desired build compiler with two different compilers and then compare the output of the resultant candidates, presumably against unpredictable input, which should be identical. I didn’t quite get that from the initial comment.
Practically, I can see how this is a useful verification method, even if it doesn’t seem to be completely equivalent.
It depends a bit on the threat model. I assume that the kernel is in scope if you’re worried about supply-chain vulnerabilities because it would be trivial to have a kernel patch that spots something that looks like a compiler, watches for specific patterns in its output, and replaces them with something different. If you are using a *NIX distro as your build environment, precisely the same people have access to introduce trojans into the kernel as do in the compiler, so removing the compiler from the TCB doesn’t buy you much. You can kind-of work around this if you build in an attested environment (i.e. have a valid secure boot chain to a known-good kernel), but that depends on having a trusted environment and if you actually trust the environment then you don’t need any of these mitigations. If you assume that an attacker can compromise your supply chain then diversity is a better defense. If I build the bootstrap tools on FreeBSD with clang and Ubuntu with gcc then it’s very hard for someone to inject a trojan into both. If I then compare the outputs and I get the same thing then I have a lot of confidence that any malware in my final system image was present in my source tree.
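For what it’s worth, the final comparison step is easy to automate. Here’s a minimal sketch in Go (the artifact paths and names are made up for illustration, not from any real build) that hashes the bootstrap output produced on two unrelated host/toolchain combinations and reports whether they match; a mismatch means something in one of the two supply chains influenced the output.

package main

import (
    "crypto/sha256"
    "fmt"
    "os"
)

// digest returns the SHA-256 of a build artifact.
func digest(path string) ([sha256.Size]byte, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return [sha256.Size]byte{}, err
    }
    return sha256.Sum256(data), nil
}

func main() {
    // Hypothetical outputs of the same bootstrap done on two unrelated hosts.
    a, err := digest("freebsd-clang/bootstrap-tools.tar")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    b, err := digest("ubuntu-gcc/bootstrap-tools.tar")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    if a == b {
        fmt.Println("outputs identical: a trojan would have to be in both supply chains")
    } else {
        fmt.Println("outputs differ: don't trust either build until you know why")
    }
}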
Its package ecosystem is in excellent condition and packages such as org-mode and eglot / lsp-mode make even the most demanding programming languages a joy to work with in Emacs.
I work on a large C/C++ codebase as part of my day job and use lsp-mode/eglot (currently eglot) to navigate the code, with very few extensions. I also use the latest mainline Emacs with native compilation. I have been using Emacs for over 25 years and my customization is best categorized as “very light”. In short, my Emacs set up is not much beyond what ships with it.
And it’s still just… slow. GCC has some pretty big files and opening them can take up to 10 seconds thanks to font-lock mode. (Yes, I know I can configure it to be less decorative, but I find that decoration useful.) It’s much worse when you open a file that is the output from preprocessor expansion (easily 20000+ lines in many cases).
Log files that are hundreds of megabytes are pretty much a guaranteed way to bring Emacs to a crawl. Incremental search in such a buffer is just painful, even if you M-x find-file-literally.
I had to turn off nearly everything in lsp-mode/eglot because it does nothing but delay my input. I can start typing and it will be 3-4 characters behind as it tries to find all the completions I’m not asking for. Company, flymake, eldoc are all intolerably slow when working with my codebase, and I have turned them all off or not installed them in the first place.
M-x term is really useful, but do not attempt to run something that will produce a lot of output to the terminal. It is near intolerable. Literally orders of magnitude slower to display than an xterm or any other terminal emulator. (M-x eterm is no better.)
The problem, of course, is that Elisp is simply not performant. At all. It’s wonderfully malleable and horribly slow. It’s been this way since I started using it. I had hopes for native compilation, but I’ve been running it for a few months now and it’s still bad. I love Emacs for text editing and will continue to use it. I tried to make it a “lifestyle choice” for a while and realized it’s not a good one if I don’t want to be frustrated all the time. Emacs never seems to feel fast, despite the fast hardware I run it on.
The performance was the reason for me to leave Emacs. I was an evil mode user anyways so the complete switch to (neo)Vim was simple for me. I just could not accept the slowness of Emacs when in Vim everything is instant.
E.g. Magit is always named as one of the prime benefits of Emacs. While its functionality is truly amazing its performance is not. Working on a large code base and repository I was sometimes waiting minutes! for a view to open.
I actually use Emacs because I found it really fast compared to other options. For example, the notmuch email client is really quick on massive mailboxes.
Some packages might be slow, though. I think the trick is to have a minimal configuration with very well chosen packages. I am particularly interested in performance because my machine is really humble (an old NUC with a slow SATA disk).
To be fair it was some time ago and I don’t remember all the details, but using LSPs for code completion/inspection was pretty slow, for example.
Compared to IDEs it might not even have been slow, just similar. However, I’m comparing to Vim, where I have equal capabilities but pretty much everything is instant.
Thanks for the notice! I may try it again in the future but currently I am very happy with my Neovim setup, which took me a long time to setup/tweak :)
Out of curiosity, were you using Magit on Windows?
I use Magit every day and my main machine is very slow. (1.5GHz 4 core cortex A53) Magit never struck me as particularly slow, but I’ve heard that on Windows where launching subprocesses takes longer it’s a different story.
but I’ve heard that on Windows where launching subprocesses takes longer
Ohh, you have no idea how slow it is in a corporate environment. Going through MSYS2, Windows Defender, with Windows being Windows and a corporate security system on top, it takes… ages. git add a single file? 20 seconds. Create a commit? Over a minute. It’s bonkers if you hit the worst case just right. (On a private Windows install, with MSYS2 and exceptions set in Windows Defender, it’s fine though, not much slower than my FreeBSD laptop.)
I asked around and there is a company-wide, hardcoded path on every laptop that has exceptions in all the security systems, just to make life less miserable for programmers. It doesn’t solve it completely, but it helps.
Either wait an eternity or make a mockery of the security concept. Suffice it to say I stopped using Windows and cross-compile from now on.
With Windows I think it’s particularly git that is slow, and Magit spawns git repeatedly. It also used to be very slow on Mac OS because of problems with fork performance. On Linux, it used to be slow with tramp. There are some tuning suggestions for all of these in the Magit manual, I think.
Nope on Linux. As mentioned our code base is big and has many branches etc. Not sure where exactly Magit’s bottleneck was. It was quite some time ago. I just remember that I found similar reports online and no real solution to them.
I now use Lazygit when I need something more than git cli and it’s a fantastic tool for my purpose. I also can use it from within Vim.
Working on a large code base and repository I was sometimes waiting minutes! for a view to open.
This happens for me as well with large changes. I really like Magit but when there are a lot of files it’s nearly unusable. You literally wait for minutes for it to show you an update.
I actually switched to M-x shell because I found the line/char mode entry in term-mode to be annoying (and it seems vterm is the same in this respect). shell-mode has all the same slowness of term-mode, of course. I’ve found doing terminal emulation in Emacs to be a lost cause and have given up on it after all these years. I think shell-mode is probably the most usable since it’s more like M-x shell-command than a terminal (and that’s really its best use case).
If you need ANSI/curses there’s no good answer; while I like term, it was too slow in the end and I left. I do think that for “just” using a shell, eshell is fine though.
Do you use the JIT branch of Emacs? I found that once I switched to that and it had JIT-compiled things, my Emacs isn’t “fast”, but it’s pretty boring now, in that what used to be slow is now at least performant enough for me not to care.
I use the emacs-plus package. It compiles the version you specify. Currently I’m using emacs-plus@29 with --with-native-comp for native compilation, and probably some other flags.
Awesome! Also, check out pixel-scroll-precision-mode for the sexiest pixel-by-pixel scrolling. It seems to be a little buggy in info-mode; I can’t replicate it with emacs -Q though, so YMMV.
I’ve gotta convert more of my config over, but that was enough to build it and get my existing ~/.emacs.d working with it, and it’s speedy to the point that I don’t care about Emacs slowness even on macOS anymore.
Here you go. It changes a little bit here and there with some experiments. The packages I currently have installed and use are: which-key, fic-mode, counsel, smartparens, magit, and solarized-theme. There may be a few others that I was trying out or that are only installed for some language support (markdown, yaml, and so forth).
Quick addendum on the config: that’s my personal config, which morphs into my work setup. My work one actually turns off flymake and eldoc when using eglot.
Is there anything that has prevented a Neovim-style rewrite of Emacs? A Neomacs?
I keep hearing about the byzantine C-layer of Emacs and the slowness of Elisp. And Emacs definitely has the community size to develop such an alternative.
Why do you think no one has attempted such an effort? Or maybe I should add “recently” to the question, as I know there are other Emacs implementations.
As crusty as the Emacs source can be, it’s nowhere near as bad as the Vim source was, which was a rat’s nest of #ifdef. That’s why Neovim basically had to rewrite their way to a fork. The Emacs implementation is surprisingly clean, as long as you can tolerate some of the aged decisions (and GNU bracing).
There is Climacs, which isn’t exactly the same, but is close.
The problem for any new Emacs clone will be that it has to run all the Elisp out there. Unless there is a substantial speed improvement to Elisp or a very good automatic translation tool, any such project will be doomed from the start.
This sshd got started inside the “doubly niced” environment
As for why “the processes didn’t notice and then undo the nice/ionice values”, think about it. Everyone assumes they’re going to get started at the usual baseline/default values. Nobody ever expects that they might get started down in the gutter and have to ratchet themselves back out of it. Why would they even think about that?
These days, this should stand out as a red flag – all these little scripts should be idempotent.
You shouldn’t write scripts where, if you Ctrl-C them and then re-run them, you get these “doubling” effects.
Otherwise, if the machine goes down in the middle, or you Ctrl-C, you are left with something that’s very expensive to clean up correctly. Writing idempotent scripts avoids that – and that’s something that’s possible with shell but not necessarily easy.
As far as I can tell, idempotence captures all the benefits of being “declarative”. The script should specify the final state, not just a bunch of steps that start from some presumed state – which may or may not be the one you’re in!
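To make that concrete, here’s a small Go sketch of the difference (the file path and line are placeholders, not from the original post): the idempotent version describes the final state (“this line is present exactly once”), so Ctrl-C-ing it and re-running never doubles anything, whereas a script that blindly appends would.

package main

import (
    "fmt"
    "os"
    "strings"
)

// ensureLine makes sure line appears in the file, appending it only if it is
// missing. Running this once or ten times leaves the same final state; the
// naive alternative (always append) would add another copy on every re-run.
func ensureLine(path, line string) error {
    data, err := os.ReadFile(path)
    if err != nil && !os.IsNotExist(err) {
        return err
    }
    for _, existing := range strings.Split(string(data), "\n") {
        if existing == line {
            return nil // already in the desired state, nothing to do
        }
    }
    f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
    if err != nil {
        return err
    }
    defer f.Close()
    _, err = fmt.Fprintln(f, line)
    return err
}

func main() {
    if err := ensureLine("/tmp/example.conf", "max_connections=100"); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}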
I believe the “doubly niced” refers to “both ionice and nice”. There wasn’t any single thing being done twice by accident. The issue is with processes inheriting the settings due to Unix semantics.
That is an interesting quirk of nice()/renice, but in this case I believe they explicitly stated they originally set the nice value to 19, which is the maximum.
But still the second quote talks about something related to idempotence. It talks about assuming you’re in a certain state, and then running a script, but you weren’t actually in that state. Idempotence addresses that problem. It basically means you will be in the same finishing state no matter what the starting state is. The state will be “fixed” rather than “mutated”.
Hmm, I still don’t think this is the case. The state being dealt with is entirely implicit, and the script in question doesn’t do anything with nice values at all, and yet still should be concerned about them.
The first computer that I owned was an Amstrad PC 1640 HD20. This was second hand (my father’s company had some in a store room and wanted to clear the space). Mine had the EGA display and the 20MB disk replaced with a 40MB one (which, due to early FAT16 limitations, needed to be partitioned as an 8MB C: and a 32MB D:). I think mine had a NEC V30 CPU as well.
These machines came with GEM, which ran a lot better than Windows 3.0 on the same machine. I think GEM was the first GUI that I ever used, even before I had that machine. It had a vector-drawing program that kept me entertained as a very small child.
The first Windows installs the company had were the result of buying a diagramming program called Meta Design. The authors had come to the conclusion that bundling a copy of Windows 3.0 (possibly 3.1?) with their program was cheaper than licensing a set of GUI / print libraries for DOS. After a couple of years, Windows was sufficiently common that they could just stop shipping it.
Yes, in the UK the Amstrad PC was a relatively large-selling range, and it opened up the market for PC clones completely. I had a PC1512 when I was a teen (or rather my father’s company did too!). Because of their attention to cost they shipped with DR DOS and, yes, GEM for at least the graphical displays. If I remember rightly, the PC1512, even though it had only a CGA-compatible adapter on board, also had an entirely non-standard hi-res mono mode for presenting GEM. So these machines were relatively common in the UK at the end of the 80s, and so was GEM as an interface; I remember enough third-party software existing to get reviews in the local rags.
Most interestingly, I think, these machines also shipped with Locomotive BASIC 2, an iteration on Locomotive Software’s really rather excellent BASIC implementation that was fully integrated with GEM and supported building evented GUI apps in BASIC, with extensions for mouse input and messages, APIs for standard components and 2D drawing, all presented with an IDE. It was pretty rudimentary, and I can’t remember much (we’re not talking Smalltalk quality), but it was fairly capable and I built several GUI programs, mostly paint and draw things. This was years ahead of any other simply available RAD environment, and it shipped free with the machine.
Here’s a video of someone using the supplied demo app. This was really pretty advanced stuff for commodity consumer hardware in 1986.
Mine didn’t have DR-DOS. According to the Wikipedia entry it came with both DR-DOS and MS-DOS 3.3 on floppy disks. Mine had MS-DOS 4.0 installed (so GW BASIC was the version I used) and I didn’t have the original floppy disks so I was running Windows 3.0 instead of GEM. I’m quite jealous of Locomotive BASIC. I didn’t do any GUI programming until I got a 386 a few years later and Visual Basic 2.0 (on Windows 3.11).
The display connectors, as I recall, were completely non-standard and used different connectors for all of the different models. When the EGA monitor on mine died, there wasn’t a good way of replacing it with anything else.
As a child (I would have been 10, I think, when I got the machine), my favourite feature of the machine was the fact that the keyboard had an Amstrad joystick port on the back of the keyboard. This used the same joystick connector as most 8-bit computers and mapped the two buttons and the 8 directions to key codes, so any game that worked with the keyboard and let you configure key bindings could be made to work with the joystick. It also had a volume control on the PC speaker so I could turn down the beeps if I wanted to play games before my parents woke up.
I think it’s a mistake to group object orientation completely under Self (although it discusses Smalltalk a lot in the commentary for this section). Those two are message-passing / evented object-oriented systems, polymorphic through shared method interfaces, and, as noted, built on a persistent system state; they clearly represent a distinct family.
The bulk of what people consider to be ‘object oriented’ programming after that inflection point, though, is the C++ / Java style, where objects are composite static types with associated methods, polymorphic through inheritance hierarchies. I think this comes from Simula, and I think this approach to types and subtypes could be important enough to add to the list as an 8th base case.
I wouldn’t group C++ and Java like that. Java is a Smalltalk-family language, C++’s OO subset is a Simula-family language (though modern C++ is far more a generic programming language than an object-oriented programming language).
You can implement Smalltalk on the original JVM by treating every selector as a separate interface (you can use invoke_dynamic on newer ones) and Redline Smalltalk does exactly this. You can’t do the same on the C++ object model without implementing an entirely new dispatch mechanism.
Some newer OO languages that use strong structural and algebraic typing blur the traditional lines between the static and dynamic a lot. There are really two axes that often get conflated:
Static versus dynamic dispatch.
Structural versus nominal typing.
Smalltalk / Self / JavaScript have purely dynamic dispatch and structural typing. C++ has nominal typing and both static and dynamic dispatch and it also (via templates) has structural typing but with only static dispatch, though you can just about fudge it with wrapper templates to almost do dynamic over structural types. Java has only dynamic dispatch and nominal typing.
Newer languages, such as Go / Pony / Verona, have static and dynamic dispatch and structural typing. This category captures, to me, the best set of tradeoffs: you can do inlining and efficient dispatch when you know the concrete type, but you can also write completely generic code, and the decision whether to do static or dynamic dispatch depends on the type information available at the call site. Your code feels more like Smalltalk to write, but can perform more like C++ (assuming your compiler does a moderately good job of reification and inlining, which Go doesn’t but Pony does).
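As a concrete illustration of that category, here’s a tiny Go sketch (the types are invented for the example): the interface is satisfied structurally, with no declared relationship between the concrete type and the interface, a call on the concrete value can be dispatched statically (and inlined), and a call through the interface is dispatched dynamically.

package main

import "fmt"

// Speaker is a structural interface: any type with a Speak() string method
// satisfies it, with no "implements" declaration anywhere.
type Speaker interface {
    Speak() string
}

type Dog struct{}

func (Dog) Speak() string { return "woof" }

// describe only knows the structural type, so the call inside it is
// dispatched dynamically through the interface's method table.
func describe(s Speaker) string {
    return "it says " + s.Speak()
}

func main() {
    d := Dog{}
    fmt.Println(d.Speak())   // concrete type known here: static dispatch, inlinable
    fmt.Println(describe(d)) // viewed as a Speaker: dynamic dispatch
}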
From the implementation side yes, the JVM definitely feels more like Smalltalk. But is Java really used in the same dynamic fashion to such an extent that you could say it too is Smalltalk? Just because it’s possible, doesn’t mean it’s idiomatic. I’d argue that most code in Java, including the standard library/classpath, is written in a more Simula-like fashion, the same as C++, and would place it in that same category.
Interfaces, which permit dynamic dispatch orthogonal to the implementation hierarchy, are a first-class part of Java and the core libraries. Idiomatic Java makes extensive use of them. The equivalent in C++ would be abstract classes with pure virtual methods, and these are very rarely part of an idiomatic C++ codebase.
Java was created as a version of Smalltalk for the average programmer, dropping just enough of the dynamic bits of Smalltalk to allow efficient implementation in both an interpreter and a compiler. C++ was designed to bring concepts from Simula to C.
Interesting replies, thanks. The point about Java dispatch is interesting and suggests it is not as good an example as I thought it was (I’ve not really used it extensively for a very long time). The point I was trying to make was for the inclusion of Simula, based on its introduction of classes and inheritance, itself an influence on Smalltalk. I accept that Simula is built on Algol, and maybe that means it’s not distinct enough for a branch within this taxonomy. I would note that both Stroustrup and Gosling nominate Simula as a direct influence.
(NB: I always thought of java as an attempt to write an objective-C with a more C++ syntax myself, but that’s just based on what seemed to be influential at the time. Sun were quite invested in OpenStep shortly before they pivoted everything into Java)
(NB: I always thought of java as an attempt to write an objective-C with a more C++ syntax myself, but that’s just based on what seemed to be influential at the time. Sun were quite invested in OpenStep shortly before they pivoted everything into Java)
And Objective-C was an attempt to embed Smalltalk in C. A lot of the folks that worked on OpenStep went on to work on Java and you can see OpenStep footprints in a lot of the Java standard library. As I understand it, explicit interfaces were added to Java largely based on experience with performance difficulties implementing Objective-C with efficient duck typing. In Smalltalk and Objective-C, every object logically implements every method (though it may implement it by calling #doesNotUnderstand: or -forwardInvocation:), so you need an NxM matrix to implement (class, selector) -> method lookups. GNU family runtimes implement this as a tree for each object that contains every method, with copy-on-write to reduce memory overhead for inheritance and with a leaf not-implemented node that’s referenced for large runs of missing selectors. The NeXT family runtimes implement it with a per-object hash table that grows as methods are referenced. Neither is great for performance.
The problem is worse in Objective-C than in some other languages for two reasons:
Categories and reflection APIs mean that methods can be added to a class after it’s created. Replacing a method is easy (you already have a key->value pair for it in whatever your lookup structure is, but adding a new valid selector means that you can’t optimise the layout easily).
The fallback dispatch mechanisms (-forwardInvocation: and friends) mean that you really do have the complete matrix, though you can optimise for long runs of not-currently-implemented selectors.
Requiring nominal interfaces rather than simple structural equality for dynamic dispatch meant that Java could use vtables for dispatch (like C++). Each class just has an array of methods it implements, indexed by a stable ordering of the method names. Each interface has a similar vtable and nominal interfaces mean that you can generate the interfaces up-front. It’s more expensive to do an interface-to-interface cast, but that’s possible to optimise quite a lot.
Languages that do dynamic dispatch but don’t allow the reflection or fallback dispatch mechanism, but still do structural typing, can use selector colouring. This lets you have a vtable-like dispatch table, where every selector is a fixed index into an array, but where many selectors will share the same vtable index because you know that no two classes implement both selectors. The key change that makes this possible is that the class-to-interface cast will fail at compile time if the class doesn’t implement the interface and an interface-to-interface cast will fail at run time. This means that once you have an interface, you never need any kind of fallback dispatch mechanism: it is guaranteed to implement the methods it claims. Interfaces in such a language can be completely erased during the compilation process: the class has a dispatch table that lays out selectors in such a way that selector foo in any class that is ever converted to interface X is at index N, so given an object x of interface type X you can dispatch foo by just doing x.dtable[N](args...). If foo appears in multiple interfaces that are all implemented by an overlapping set of classes, then foo will map to the same N. If one class implements bar and another implements baz, but these two methods don’t ever show up in the same interfaces then they can be mapped to the same index.
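A toy model of that table layout, sketched in Go purely for concreteness (this is emphatically not how Go itself dispatches, and the names are invented): foo gets the same index in every class that is ever viewed through an interface containing foo, while bar and baz can share an index because no class implements both and no interface mentions both.

package main

import "fmt"

// Each "class" carries a flat dispatch table indexed by coloured selector.
type object struct {
    dtable []func() string
}

// Selector colours assigned by a hypothetical compiler:
// foo -> 0 everywhere; bar and baz both -> 1, because they never co-exist
// in the same class or interface.
const (
    selFoo = 0
    selBar = 1
    selBaz = 1
)

var a = object{dtable: []func() string{
    selFoo: func() string { return "A.foo" },
    selBar: func() string { return "A.bar" },
}}

var b = object{dtable: []func() string{
    selFoo: func() string { return "B.foo" },
    selBaz: func() string { return "B.baz" }, // reuses the slot bar occupies in A
}}

// callFoo is what a call through an interface containing foo erases to:
// a plain indexed load and call, no hashing and no fallback path.
func callFoo(x object) string { return x.dtable[selFoo]() }

func main() {
    fmt.Println(callFoo(a), callFoo(b))
}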
Smalltalk has been one of the big influences on Verona too. I would say that we’re trying to do for the 21st century what Objective-C tried to do for the ‘80s: provide a language that captures the programmer flexibility of Smalltalk but is amenable to efficient implementation on modern hardware and modern programming problems. Doing it today means that we care as much about scalability to manycore heterogeneous systems as Objective-C cared about linear execution speed (which we also care about). We want the same level of fine-grained interoperability with C[++] that Objective-C[++] has but with the extra constraint that we don’t trust C anymore and so we want to be able to sandbox all of our C libraries. We also care about things like live updates more than we care about things like shared libraries because we’re targeting systems that typically do static linking (or fake static linking with containers) but have 5+ 9s of uptime requirements.
Fascinating reading again, thanks. I had not previously heard of Verona, it sounds very interesting. Objective-C was always one of my favourite developer experiences, the balance of C interoperability with such a dynamic runtime was a sweet spot, but the early systems were noticeably slow, as you say.
It’s because Java and C++ are both ALGOL family with something called “objects” in it. Neither have enough unique features to warrant a family or being part of anything but the ALGOL group.
Something a bit more like what people usually call components these days, more than object-oriented languages. It’s all about packaging collections of behaviour behind reusable modular abstractions. He’s right about a lot of it, although the vocabulary is dated, and we have coalesced more of it into and around the idea of APIs.
Remember, the NeXT idea of OOP is dynamic, late-bound, loose types and message passing, with Smalltalk as the primary influence, not objects as they eventually went mainstream in the more statically bound sense of Java or C++.
Some of what they were shooting for was objects as closed components that could be distributed and sold like pieces of a construction kit and you’d be able to quickly assemble desktop apps by dragging them together in a visual editor and just serialising that out to dump a working application image. (Which is kind of how NeXT Interface Builder worked)
Squint and you can see it in today’s apps that tie together APIs from disparate service providers, and we don’t really talk about this in the vocabulary of objects so much any more, but the early roots of SOA do have a lot of it present in CORBA, XML RPC, SOAP etc. And there is that ‘O’ in JSON still ;-)
I am confused about why the REST crowd is all over gRPC and the like. I thought the reason REST became a thing was that they didn’t really think RPC protocols were appropriate. Then Google decides to release a binary (no less) RPC protocol and all of a sudden everyone thinks RPC is what everyone should do. SOAP wasn’t even that long ago. It’s still used out there.
Could it be just cargo cult? I’ve yet to see a deployment where the protocol is the bottleneck.
Because a lot of what is called REST ends up as something fairly close to an informal RPC over HTTP in JSON, maybe with an ad-hoc URI call scheme, and with those semantics, actual binary RPC is mostly an improvement.
(Also, everyone flocks to Go for services and discovers that performant JSON is a surprisingly poor fit for that language.)
I’d imagine that the hypermedia architectural constraints weren’t actually buying them much. For example, not many folks even do things like cacheability well, never mind building generic hypermedia client applications.
But a lot of the time the bottleneck is around delivering new functionality. RPC-style interfaces are cheaper to build, as they’re conceptually closer to “just making a function call” (albeit one that can fail halfway through), whereas more hypermedia-style interfaces require a bit more planning. Or at least thinking in a way that I’ve not seen often.
There has never been much, if anything at all, hypermedia-specific about HTTP. It’s just a simple text-based stateless protocol on top of TCP. In this day and age, that alone buys anyone more than any binary protocol. I cannot reason as to why anyone would want to use a binary protocol over a human readable (and writeable) text one, except for very rare situations of extreme performance or extreme bandwidth optimisations. Which I don’t think are common to encounter even among tech giants.
Virtually every computing device has a TCP/IP stack these days; $2 microcontrollers have it. Text protocols were a luxury in the days when each kilobyte came with high costs. We are 20-30 years past that time. Today, even in the IoT world, HTTP and MQTT are the go-to choices for virtually everyone; no one bothers to buy into the hassle of an opaque protocol.
I agree with you, but I think the herd is taking the wrong direction again. My suspicion is that the whole REST hysteria was a success because it was JSON over HTTP, which are great, easy-to-grasp and reliable technologies, not because of the alleged architectural advantages, as you well pointed out.
SOAP does provide “just making a function call”; I think the reason it lost to RESTful APIs was that requests were not easy to assemble without resorting to advanced tooling, and implementations in new programming languages were demanding. I do think gRPC suffers from these problems too. It’s all fun and games while developers are hyped “because Google is doing it”; once the hype dies out, I’m picturing this old embarrassing beast no one wants to touch, along the lines of GWT, appengine, etc.
I cannot reason as to why anyone would want to use a binary protocol over a human readable (and writeable) text one, except for very rare situations of extreme performance or extreme bandwidth optimisations.
Those are not rare situations, believe me. Binary protocols can be much more efficient, in bandwidth and code complexity. In version 2 of the product I work on we switched from a REST-based protocol to a binary one and greatly increased performance.
As for bandwidth, I still remember a major customer doing their own WireShark analysis of our protocol and asking us to shave off some data from the connection setup phase, because they really, really needed the lowest possible bandwidth.
Sure, but the framing mostly comes from Roy Fielding’s thesis, which compares network architectural styles, and describes one for the web.
But even then, you have the constraints around uniform access, cacheability and a stateless client, all of which are present in HTTP.
just a simple text based stateless protocol
The protocol might have comparatively few elements, but that has just meant that other folks have had to specify their own semantics on top. For example, header values are (mostly) just byte strings, so in some sense it’s valid to send Content-Length: 50, 53 in a response to a client. Interpreting that and maintaining synchronisation within the protocol is hardly simple.
herd is taking the wrong direction again
I really don’t think that’s a helpful framing. Folks aren’t paid to ship something that’s elegant, they’re paid to ship things that work, so they’ll not want to fuck about too much. And while it might be crude and inelegant, chunking JSON over HTTP achieved precisely that.
By and large gRPC succeeded because it lets developers ignore a whole swathe of issues around protocol design. And so far, it’s managed to avoid a lot of the ambiguity and interoperability issues that plagued XML based mechanisms.
Cargo Cult/Flavour of the Week/Stockholm Syndrome.
A good portion of JS-focussed developers seem to act like cats: they’re easily distracted by a new shiny thing. Look at the tooling. Don’t blink, it’ll change before you’ve finished reading about what’s ‘current’. But they also act like lemmings: once the new shiny thing is there, they all want to follow the new shiny thing.
And then there’s the ‘tech’ worker generic “well if it works for google…” approach that has introduced so many unnecessary bullshit complications into mainstream use, and let slide so many egregious actions by said company. It’s basically Stockholm syndrome. Google’s influence is actively bad for the open web and makes development practices more complicated, but (a lot of) developers lap it up like the aforementioned Lemming Cats chasing a saucer of milk that’s thrown off a cliff.
Partly for sure. It’s true for everything coming out of Google. Of course this also leads to a large userbase and ecosystem.
However, I personally dislike REST. I do not think it’s a good interface, and I prefer functions and actions over forcing them (even if sometimes very well) into modifying a model or resource. But it also really depends on the use case. There certainly is standard CRUD stuff where it’s the perfect design, and that’s the most frequent use case!
However, I was really unhappy when SOAP essentially killed RPC-style interfaces, because it brought problems that are not inherent in RPC interfaces.
I really liked JSON-RPC as a minimal approach. Sadly this didn’t really pick up (only way later, inside Bitcoin, etc.). This led to lots of ecosystems and designs being built around REST.
Something that has also been very noticeable, with REST being the de-facto standard way of doing APIs, is that oftentimes it’s not really followed. Many, I would say most, REST APIs do have very RPC-style parts. There’s also a lot of mixing up of HTTP+JSON with REST, and of RPC with protobufs (or at least some binary format). Sometimes those “mixed”-pattern HTTP interfaces also have very good reasons to be like they are. Sometimes “late” feature additions simply don’t fit into the well-designed REST API and one would have to break a lot of rules anyway, leading to the question of whether the last bits that remain would be worth preserving for their cost. But that’s a very specific situation, one that typically only arises years into the project, often triggered by the business side of things.
I was happy about gRPC because it made people give RPC another shot. At the same time I am pretty unhappy about it being unusable for applications where web interfaces need to interact. Yes, there are “gateways” and “proxies”, and while probably well designed in one way or another, they come at a huge price, essentially turning them into a big hack, which is also a reason why there are so many gRPC-alikes now. None, as far as I know, has a big ecosystem. Maybe Thrift. And there are many approaches not mentioned in the article, like webrpc.
Anyways, while I don’t think RPC (and certainly not gRPC) is the answer to everything, I also don’t think RESTful services are, nor GraphQL.
I really would have liked to see what JSON-RPC would have turned into if it had gotten more traction, because I can imagine it doing well for many applications that now use REST. But this is more a curiosity about an alternative reality.
So I think that, like all Google projects (Go, TensorFlow, Kubernetes, early Angular, Flutter, …), there is a huge cargo-cult mentality around gRPC. I do however think that there are quite a lot of people who would have loved to do it themselves, if that could guarantee it would not be a single person or company using it.
I also think the cargo cult is partly the reason for contenders not picking up. In cases where I use RPC over REST I certainly default to gRPC, simply because there’s an ecosystem. I think a competitor would have a chance though if it managed a way simpler implementation, which most do.
I can’t agree more with that comment! I think the RPC approach is fine most of the time. Unfortunately, SOAP, gRPC and GraphQL are too complex. I’d really like to see something like JSON-RPC, with a schema to define schemas (like the Protobuf or GraphQL IDL), used in more places.
Working in a place that uses gRPC quite heavily, the primary advantage of passing protobufs instead of just json is that you can encode type information in the request/response. Granted you’re working with an extremely limited type system derived from golang’s also extremely limited type system, but it’s WONDERFUL to be able to express to your callers that such-and-such field is a User comprised of a string, a uint32, and so forth rather than having to write application code to validate every field in every endpoint. I would never trade that in for regular JSON again.
Strong typing is definitely nice, but I don’t see how that’s unique to gRPC. Swagger/OpenAPI, JSON Schema, and so forth can express “this field is a User with a string and a uint32” kinds of structures in regular JSON documents, and can drive validators to enforce those rules.
Quite surprised by this piece of common lore, which seems to have passed me by entirely at the time. I used cheap NE2000 clones preferentially and almost exclusively for building my small Linux networks through the mid-nineties and I can’t really think of any problems. Most of my cursed networking from that era was struggling with Linux NFS implementations.
Ditto to the former (but I didn’t build out Linux networks). When switching to PC from Amiga and building out my first box, I followed sage advice and went with an NE2000 because “everything supports it” and the alternatives realistically available in my price budget didn’t have Linux support, or had worse support than the NE2000. I never noticed any problems with it; the two other students I shared a house with that year were also compsci students and we had a household network for our machines.
Linux NFS was so bad that discovering it actually worked under FreeBSD was a delight. (I mean, later at ISP postmaster scale, I got too familiar with quirks of FreeBSD/SunOS/NetApp and all the wonderful NFS bugs which could still come up, but nobody was seriously proposing we try to add Linux into the mix: we later added Linux to the mail setup for malware scanning with a commercial product, but since the scanner was closed source we kept it away from the filer network anyway).
I use Apple Notes on iOS and then Notes in icloud.com on Linux. I haven’t found anything that works better. Ok, maybe beorg/mobileorg, but it’s not as effortless on mobile. Notes is damn near perfect.
It is also fully IMAP accessible, like most providers.
iCloud contacts and Calendar are also accessible via carddav and caldav respectively. Though you need to generate an API access token on iCloud.com.
I actually have really good experiences running my own Carddav and Caldav servers on iOS and MacOS. (God, I sound like such a shill these last few days! - macs are good but I really do prefer the commandline and tiling window manager ecosystem of Linux, personally)
I actually have really good experiences running my own Carddav and Caldav servers on iOS and MacOS.
Same here. It also works fine on Android. It’s a pain with Thunderbird though and I haven’t really found anything on Windows that works well with them.
One of the huge things that Apple did for usability (and that Android somewhat copied) was to separate system services from the UI. The address book, calendar, spell checker, and so on are all system services that any application can use. On Windows, Office implements its own versions of these, and I get the impression that there was a lot of pressure from the Office team 20 years ago to prevent Windows from implementing them and enabling competition. Cocoa’s NSTextView is a joy to use and includes a complete (and completely configurable) typesetting engine that lets you flow text however you want, replace the hyphenation engine, and so on. Windows’ rich text control is awful in comparison. As a result, everyone who needs anything nontrivial implements their own version and the UI is massively fragmented.
The first thing I did when I got my hands on the first Jolla smartphone, back in the day, fresh out of the box, was download the then-newest Emacs tarball and build it. I was on the train at the time, commuting to work :-) https://flic.kr/p/je333q
I used BeOS as my primary OS for a year or so, eventually dual-booting with Linux and then dropping it altogether.
Many things about BeOS were sort of incredible. Booted in a couple seconds on the machines of the era, easily 5-10x more quickly than Linux. One of the “demos” was playing multiple MP3 files backwards simultaneously, a feat that nothing else could really do at the time, or multiple OpenGL applications in windows next to each other. The kernel really did multiprocessing in a highly responsive, very smooth way that made you feel like your machine was greased lightning, much faster than it felt under other OSes. This led to BeOS being used for radio stations, because nothing you were doing in the foreground stood a chance of screwing up the media playback.
BeOS had a little productivity suite, Gobe Productive. It had an interesting component embedding scheme, I guess similar to what COM was trying to be, so you just made a “document” and then fortified it with word processing sections or spreadsheet sections.
There were a lot of “funny” things about BeOS that were almost great. Applications could be “replicants,” and you could drag the app out of the window frame and directly onto your desktop. Realistically, there were only a couple for which this would be useful, like the clock, but it was sort of like what “widgets” would become in a few years with Windows and Mac OS X.
The filesystem was famous for being very fast and for having the ability to add arbitrary metadata to it. The mail client was really just a mail message viewer; the list of messages was just a Tracker window (like Finder) showing attributes for To, From, Subject, etc. Similarly, the media player was just able to play one file, if you wanted a playlist, you just used Tracker; the filetype, Title, Artist, Album, etc. were just attributes on the file. I’m not entirely sure how it parsed them out, probably through a plugin or something. You could do what we now call “smart searches” on Mac OS X by saving a search. These worked just like folders for all the apps.
The POSIX compatibility was only OK. I remember it being a little troublesome to get ports of Unix/Linux software of the era going. At the time, using a shittier browser than everyone else wasn’t really a major impediment to getting anything done, so usually I used NetPositive. There was a port of Mozilla, but it was a lot slower, and anyway, NetPositive gave you haiku if something went wrong.
There were not a huge number of applications for BeOS. I think partly it was a very obscure thing to program for. There were not a lot of great compatibility libraries you could use to easily make a cross-platform app with BeOS as a target. I wasn’t very skilled at C++ (still am not) but found trying to do a graphical app with BeOS and its libraries a pretty huge amount of work. Probably it was half or less the work of doing it in Windows, but you had to have separate threads for the app and the display and send messages between them, and it was a whole thing. Did not increase my love for C++.
All in all, it was a great OS for the time. So much of my life now is spent in Emacs, Firefox and IntelliJ that if I had those three on there I could use it today, but it was such an idiosyncratic platform I imagine it would have been quite difficult to get graphical Emacs on there, let alone the others. But perhaps it’s happening with Haiku.
the filetype, Title, Artist, Album, etc. were just attributes on the file. I’m not entirely sure how it parsed them out, probably through a plugin or something.
Querying was built into the filesystem. There was a command-line query, too. So many applications became so much simpler with that level of support for queries, it was great.
you had to have separate threads for the app and the display and send messages between them, and it was a whole thing
Yeah, that was a downside, but it was very forward-thinking at the time.
So much of my life now is spent in Emacs, Firefox and IntelliJ that if I had those three on there I could use it today
Well, you’re almost in luck. Emacs is available – a recent version, too!
IntelliJ is there, too, but 1- only the community edition, and 2- it’s a bit further behind in versions.
Unfortunately, Firefox doesn’t have a Haiku port at this time. Rust has been ported, but there are still a boatload of dependencies that haven’t been. The included browser, WebPositive, is based on a (iirc, recent) version of webkit, fwiw, so it’s not antiquated.
The problem with relying on additional file metadata for functionality in a networked world is that you have to find a way to preserve the metadata across network transfers. I also used BeOS for several years for daily everything. Networking in BeOS was practically an afterthought.
Sure, and you need to be able to populate metadata for untagged files from the network.
Fortunately, most modern file types have metadata in them, so discarding the fields outgoing doesn’t hurt, and populating them incoming isn’t too hard. IIRC, that sort of thing was generally part of the application. So, e.g., the IMAP sync app would populate your email files with metadata from the email header fields, the music player app would populate metadata from the mp3 or ogg info headers, etc.
But then this becomes a schema problem. Next-gen ideas like tagging files pervasively with identical metadata regardless of type, for relating and ordering, die as soon as you tar them up and pass them through a system that doesn’t know about your attributes – unless you have arbitrary in-band metadata support, and then it becomes a discoverability and a taxonomy problem, and if you have it in multiple places you have to keep it synchronised and stable with regard to shallow copies like links. You can still have support for it as a second layer of metadata, of course, and the ability to index and query otherwise extant metadata out of band is useful as an optimisation, but once you extend the idea of the file namespace to include foreign data, you lose out on ‘smart metadata’ as a first-class foundation. A similar thing happened with multi-fork files for MacOS.
A similar thing happened with multi-fork files for MacOS.
Sure, but it’s still so useful that when Apple rewrote their filesystem a couple years ago, they included support for resource forks. NTFS supports them, too, as does (iirc) the SMB protocol.
Apple standard practice has moved to bundle directories for fork-requiring executables, sure, and that reduces those interop problems a little bit.
I guess what I’m saying is: file forks are still widely supported, regardless of difficulty integrating with un*x filesystems. Since they’re still incredibly useful ways of interacting with file systems, I don’t see why we should avoid them.
BeOS was my primary operating system for a couple of years (I even bought the Professional Edition…I might still have the box somewhere). Did my research and built a box that only had supported hardware - dual Celeron processors, 17” monitor at 1024x768, and some relatively large disk for the time.
I remember downloading it and playing around with it (maybe it was small enough to boot from a floppy?) but I couldn’t do anything useful with it. Was a bit too young as well, I guess today I could make do better with unfamiliar stuff.
It was my daily driver. 99% of my work at the time involved being connected to a remote device (routers and firewalls mostly), and BeOS could do that just fine.
It was a great system. There hasn’t been a better one since.
I had a triple-boot machine - Windows/Linux/BeOS - at that time. I used BeOS mainly to learn C++ programming. Their GUI toolkit was quite nice at the time - much nicer than MFC :)
Ah, my bad. I don’t remember the motherboard; this was 20 years ago. Sadly, I haven’t built my own since…probably 2002? I’m so out of the loop it’s not even funny.
(Unless you count putting a Raspberry Pi in a little plastic case as “building your own machine”. If so, then…it’s still been a few years.)
Oh, that’s quite OK. The BP-6 was quite famous in that era for allowing SMP with Celerons that were built to disallow it. It was quite a popular choice for x86 BeOS at the time.
panic() is the equivalent of the exception mechanism many languages use to great effect. Idiomatically it’s a last resort, but it’s a superior mechanism in many ways (e.g. tracebacks for debugging, instead of Go’s idiomatic ‘here’s an error message, good luck finding where it came from’ default.)
Go’s idiomatic ‘here’s an error message, good luck finding where it came from’
I think the biggest problem here is that too often if err != nil { return err } is used mindlessly. You then run into things like open foo: no such file or directory, which is indeed pretty worthless. Even just return fmt.Errorf("pkg.funcName: %s", err) is a vast improvement (although there are better ways, such as github.com/pkg/errors or the new Go 1.13 error system).
I actually included return err in a draft of this article, but decided to remove it as it’s not really a “feature” and how to effectively deal with errors in Go is probably worth an article on its own (if one doesn’t exist yet).
It’s pretty straightforward to decorate an error so you know where it’s coming from. The most idiomatic way to pass on an error in Go code is to decorate it, not pass it unmodified. You are supposed to handle errors you receive, after all.
if err != nil {
    return fmt.Errorf("%s: when doing whatever", err)
}
not the common misassumption
if err != nil {
    return err
}
In fact, the 1.13 release of Go formally adds error chains using a new Errorf directive, %w, that formalises wrapping error values in a manner similar to a few previous library approaches, so you can interrogate the chain if you want to use it in logic (rather than string matching).
It’s unfortunate IMO that interrogating errors using logic in Go amounts to performing a type assertion, which, while idiomatic and cheap, is something I think a lot of programmers coming from other languages will have to overcome their discomfort with. Errors as values is a great idea, but I personally find it to be a frustratingly incomplete mechanism without sum types and pattern matching, the absence of which I think is partly to blame for careless anti-patterns like return err.
You can now use errors.Is to test for a particular error (and errors.As for an error type), and they added error wrapping to fmt.Errorf. Same mechanics underneath but easier to use. (You could just do a switch with a default case.)
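For illustration, a minimal sketch of how those pieces fit together (the function and file names here are made up): the error gets decorated with context and wrapped with %w, and the caller can still interrogate the chain with errors.Is rather than matching strings.

package main

import (
	"errors"
	"fmt"
	"os"
)

// loadConfig is a hypothetical example: it adds context to the error and
// wraps it with %w so the underlying cause stays inspectable.
func loadConfig(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return fmt.Errorf("loadConfig %q: %w", path, err)
	}
	f.Close()
	return nil
}

func main() {
	err := loadConfig("foo")
	fmt.Println(err) // loadConfig "foo": open foo: no such file or directory
	if errors.Is(err, os.ErrNotExist) {
		fmt.Println("config file is missing") // logic, not string matching
	}
}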
I greatly prefer the pithy, domain-oriented error decoration that you get with this scheme to the verbose, obtuse set of files and line numbers that you get with stack traces.
This article is getting a few things about git wrong. They claim git only supports ‘One check-out per repository’. Heard of git worktree?
They also claim git is only portable to POSIX, yet it runs fine on Windows with full line-ending support. (They achieve this by including the tools like ls, ssh and cat, thereby not requiring the host OS to be posix)
They claim SQLite is a superior storage method, yet it is widely known for getting corrupted (probably the reason they run integrity checks all the time), lacks the ability for multiple entities to access it at the same time, and almost all its column types are silently converted to string columns with no type checks.
They also claim git is only portable to POSIX, yet it runs fine on Windows with full line-ending support. (They achieve this by including the tools like ls, ssh and cat, thereby not requiring the host OS to be posix)
From the article:
This is largely why the so-called “Git for Windows” distributions (both first-party and third-party) are actually an MSYS POSIX portability environment bundled with all of the Git stuff, because it would be too painful to port Git natively to Windows. Git is a foreign citizen on Windows, speaking to it only through a translator.
This was also somewhat true of Mac OS, if you were working with its traditional approach to case insensitivity. As the article notes, this is almost by design. Git was built to facilitate Linux development, and it’s not surprising that Linux hosting is a prerequisite for that.
It’s not 50 USD for a server, but for a project. So if you had two “projects” (I’m guessing web sites/apps) it’d be 100 USD instead. I imagine the overhead is for if you don’t want to deal with AWS/Google yourself.
It’s not exactly like that. There’s a little more provided to a ‘project’ than just ‘a server’. The project model gets you an app (I think just one on the standard plan, but it can be more) connected to provisioned services (databases, search index, queues, whatever) and a git server and some storage. Within your project plan you get a certain number of environments, which are branches (e.g. staging, feature branch, etc.). When you branch you can clone the whole setup, services, data, etc., and everything can be driven via git. So there is additional value and a different workflow compared to just provisioning some cloud servers.
Their site isn’t very clear (your description confirms things that I’ve guessed at from their site) but it sounds like you get a lot for your 50 USD. They’re taking care of CloudFront, ELB/ALB, CodeCommit/CodePipeline, DynamoDB/RDS, ElasticSearch, SQS etc. for you. If you set it all up yourself you’d undoubtedly pay less to AWS per month, but then you’d have to operate it all yourself.
For devs it sounds great if you don’t want to manage all that yourself (or don’t have a team that does it for you at work). It really does remind me of Webflow, which does a similar thing for content sites (i.e. they do everything for you including visual design tool, CMS, form creation & submission handling etc.).
That depends a lot on what you want to use it for and what your personal tastes are like. As people have said in other threads, CL is kitchen sink, and standardised a long time ago which means it is very stable: code written decades ago is going to work unmodified in CL today. There are several high-quality implementations around. On the flip side, it has many warts.
Racket is a single implementation-defined language. On the other hand, if you learn Scheme, most of it just carries over into Racket, and you can also choose from a bevy of implementations depending on your requirements. It’s a clean and elegant language, but that also means many things are missing. For those, you’ll have to rely on SRFIs, portable libraries or implementation-specific extensions.
while the stability argument is probably true from a high level perspective, I’ve run into a few problems with libraries that don’t want to build on older CL installations, e.g. if using the old sbcl that comes with debian, quicklisp systems don’t always build. So in practice, you still have to migrate things forward.
while the stability argument is probably true from a high level perspective, I’ve run into a few problems with libraries that don’t want to build on older CL installations
It’s possible to write unportable, nonstandard Common Lisp, but relatively little care is required to write it properly.
if using the old sbcl that comes with debian, quicklisp systems don’t always build.
That’s entirely because Quicklisp isn’t written properly. If you ever take a look at the code, you’ll notice it’s leagues more complicated than it has any need to be, as merely one issue with it. Of course, Quicklisp hosting doesn’t even bother testing with anything that isn’t SBCL as of the last time I checked.
So in practice, you still have to migrate things forward.
This is wrong. All of my libraries work properly and will continue to work properly. Don’t believe that merely because some libraries aren’t written well, that none or a significant amount of them are. I’m inclined to believe most of the libraries are written by competent programmers and according to the standard.
That’s entirely because Quicklisp isn’t written properly
It’s completely fair to say that things that do not build portably could be better written to do so. I would like to add that it is not quicklisp per se where I had seen problems, but rather in building systems within it. Off the top of my head, ironclad and perhaps cffi both exhibited problems on older sbcl. I haven’t checked, but I think this would also be the case if they were just built with asdf, so I do not wish to imply quicklisp introduced these problems. I think both of these libs are very tightly coupled to the host system libraries, and could be considered atypically lisp in that sense.
Probably I should have better said: in practice you may have to migrate things forwards.
This is wrong. All of my libraries work properly and will continue to work properly. Don’t believe that merely because some libraries aren’t written well, that none or a significant amount of them are. I’m inclined to believe most of the libraries are written by competent programmers and according to the standard.
The last library you shared (the alternative to uiop:quit) is most definitely not written in portable Common Lisp, so as /u/cms points out the implementations may change their APIs and the code would need to be updated.
Firstly, it should be understood that a library with the sole purpose of working over differences of implementations in this manner is different from my other libraries, which don’t. Secondly, if you look at the documentation, I note that the library will merely LOAD properly, but may not actually exit the implementation, which is something one may want to test against, as it’s a feature caveat. Thirdly, if any implementation thinks about changing the underlying function, such as SBCL has already done once, I’d rather complain about the stupid decision than change my program.
In any case, sure I could’ve explicitly mentioned that one library, but it disturbed the flow of the sentence and I figured these points were obvious enough, but I suppose not so.
The problem is more likely due to the fact that you are using the version packaged by Debian instead of your SBCL being old. You should avoid all lisp software packaged by Linux distributions, they tend to give you nothing but trouble.
However it is true that not all Lisp code is portable, especially with the implementation-compatibility shims that are becoming more common. And while one is likely to encounter code that uses implementation specific extensions, there tends to be a fallback for when the feature is not available. As a data point, I’ve loaded software from before ASDF (that used MK:DEFSYSTEM) with few modifications.
Yes, that could well be so. It doesn’t really change the point that it’s not as straightforward as just assuming that if you have a working lisp, everything you need will just be stable. I think we’re in agreement there. Also, I’m building standalone executables for 32 bit ARM, so I’m not super-surprised that there are system-specific bugs in things like math / crypto primitives. FWIW I would favour CL for building anything myself, but not because I think stable dependencies are just a moot point.
(I did actually manage to work quite fine on debian’s ancient sbcl for quite a while so it’s not useless)
That’s always been a bit dubious (Smalltalk has had changesets since at least the late 80s), but it’s been truly false for a long time. Squeak had Monticello, VisualWorks had ENVY and StORE, and Pharo just uses Git straight-up these days. I’m not arguing images don’t have other issues with them, but collaboration isn’t one of them.
Completely fair point. I didn’t only mean source code control; I’m also thinking that the developer process of incrementally manipulating a running image isn’t very easily mapped onto distributed working, and maybe it never was?
e.g. are there workflows/tools where multiple developers push changes to a central image? Because that’s kind of the mapping there - if I’m writing C, I am diffing text files, and compiling the changed ones into new objects, linking everything, running tests - this extends quite naturally to continuous integration, and automation for collaborators.
When I’m working on an image style system, I’m updating a running thing typically, usually interactively testing as I go. The ideal collaboration flow for this kind of thing would be to pull small upstream changes directly into my image, switch branches without resetting the world, this kind of thing.
I don’t know very much about the detail of your counter-examples, but I did not mean to suggest it was impossible, so much as ungainly, which was my understanding.
Sorry for responding so late; I know others won’t see this, but thought you deserved a response.
I’m also thinking that the developer process, incrementally manipulating a running image isn’t very easily mapped onto distributed working, maybe never was?
You do kind of have to decide if you’re gonna work in the classic Smalltalk mold, or if you’re going to work in a modern mold; that’s fair. It’s just that the modern mold is really common, to the point that far fewer people sculpt an app out of a Smalltalk image (which is closer to the original intent) than write Smalltalk code that really is the program.
are there workflows/tools where multiple developers push changes to a central image ?
This is in fact exactly how at least GNU Smalltalk and Pharo (which is to Smalltalk what Racket is to Scheme) work. E.g., this is Pharo’s Jenkins server, which works by just building off master constantly, just as any other project would do. The only difference is that, rather than diffing or thinking in terms of files, you think in terms of changes to classes and methods. Behind the scenes, this is converted into files using a tool called Iceberg.
The only place this system falls down is if you’re building constants directly in the image, rather than in code. E.g., if I were truly building a Smalltalk program in a traditional Smalltalk way, I might just read an image into a variable and then keep the variable around. That’s obviously not going to have a meaningful source representation; there might be a class variable called SendImage, but the contents it happens to have in my image won’t be serialized out. Instead, I’d have to have the discipline to store the source image alongside the code in the repository, and then have a class method called something like initializeImages that set SendImage to the contents of that image file. In practice, this isn’t that difficult to do, and tools like CI can easily catch when you mess up.
Whether this is working against or with the image system is debatable. I’ve used several image systems (Common Lisp and Factor being two big ones) that don’t suffer “the image problem”, but tools in the ilk of Smalltalk or Self are obviously different beasts.
I don’t think I understand what this means. They seem to have a path that’s depending on some binary somewhere but then builds a chain of C compilers that builds them a working system? I’m not really sure what this buys them. I can bootstrap a FreeBSD system (kernel, userland, and packages) if I have a moderately recent C++ toolchain that is capable of building a modern Clang (which then builds the rest of the system), bmake, and a couple of other tools. The extra steps in bootstrapping the compiler look like more places for malicious code to hide, but maybe each step is verifiable in some way?
The kernel on the host system that’s doing the builds could be malicious too, so if you’re worried about not being able to trust the compiler, I hope you got the compiler and kernel from different providers.
It means it is possible to have a binary root of trust only 510 bytes in size.
No need to trust any operating system kernel or anything else.
It can all be built from source code alone.
We proved this out with live-bootstrap and builder-hex0
How does that 510-byte binary write things to files without trusting the kernel that it’s running on?
That 510-byte binary is an operating system kernel written in the MBR of a hard drive/floppy disk and started by the BIOS; it is hardwired to build a 4KB POSIX kernel on power-up.
So there is no kernel to trust. But perhaps you mean the BIOS bootstrap trust problem, which we have not yet solved.
Aha, that’s the bit of the story I was missing. I thought it was a userspace binary that ran on a host kernel.
AIUI what they want is to have the initial binary[0] as simple as possible, i.e. its disassembly is understandable by someone with passing familiarity with ASM. The rest is supposed to be shell and Guile scripts, and starting with those ingredients, TinyCC is built, with a path towards full GCC.
[0] https://github.com/oriansj/bootstrap-seeds/blob/master/POSIX/x86/hex0-seed
It is rather moot if it’s all running on a pre-existing POSIX environment since the kernel could compromise anything. I’m still confused about this project’s aims.
Full bootstrapping needs to begin with a computer with all non-volatile storage devices, including firmware flash chips, fully erased, and everything built from source, including firmware. You would need to begin with manually programming a hex monitor similar to that proposed by this project into something. Probably by building a PCB out of fixed-function logic devices that allows you to manually generate SPI transactions (to program an SPI flash) by flipping a switch on and off to manually input binary. The harder part is probably building Linux without a POSIX environment.
Thanks, that makes sense. I’m not sure what this buys you that simply compiling your bootstrap tools with two different toolchains doesn’t though. For example, I can build the FreeBSD bootstrap tools with Clang or GCC (on a FreeBSD or Linux host) and then compare that the binaries that they build are the same. With something like Guix, I’d expect them to be much more able to lean into a package-transparency model, where many people can do the same bootstrap starting with different host platforms and add the hashes that they get and see if they’re getting a different output.
This is at least partially a response to the trusting trust attack.
This Bootstrappable Builds project, together with the Reproducible Builds project, has completed one of the only practical examples of “Diverse Double-Compiling” as described by David A. Wheeler.
https://reproducible-builds.org/news/2019/12/21/reproducible-bootstrap-of-mes-c-compiler/
It is very much a response to the Trusting Trust attack.
Why would you expect two different compiler pipelines to produce identical binaries? That sounds unlikely to be the case.
I would expect them to produce functionally equivalent binaries. This is how GCC’s trusting trust defence works. You first build gcc with the system compiler, then with the newly built gcc, then with that compiler. The second and third binaries are both produced by the same version of GCC, compiled with different compilers, and so should be identical. Clang also tries very hard to produce identical output independent of how the compiler was built (and most of the test suite depends on this property).
All of the tools needed to build FreeBSD are in the source tree and are compiled once with the host compiler during the bootstrap phase. They are then used to build everything that ends up being installed. The result is that the final binary should not depend on the compiler used to build the bootstrap tools.
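A hedged sketch of the comparison step (the file paths are hypothetical): hash the same compiler binary as produced by two different parent compilers, e.g. stages 2 and 3 of a GCC-style bootstrap, or the clang- and gcc-built FreeBSD bootstrap tools, and check that the output no longer depends on the seed compiler.

package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"log"
	"os"
)

// fileDigest returns the SHA-256 of a file's contents.
func fileDigest(path string) ([sha256.Size]byte, error) {
	var sum [sha256.Size]byte
	f, err := os.Open(path)
	if err != nil {
		return sum, err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return sum, err
	}
	copy(sum[:], h.Sum(nil))
	return sum, nil
}

func main() {
	// Hypothetical paths: the same compiler version built by two different parents.
	a, err := fileDigest("stage2/cc1")
	if err != nil {
		log.Fatal(err)
	}
	b, err := fileDigest("stage3/cc1")
	if err != nil {
		log.Fatal(err)
	}
	if a == b {
		fmt.Println("identical: output no longer depends on the seed compiler")
	} else {
		fmt.Println("mismatch: the seed compiler influenced the output")
	}
}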
OK, I think you’re saying compile your desired build compiler with two different compilers and then compare the output of the resultant candidates, presumably against unpredictable input, which should be identical. I didn’t quite get that from the initial comment.
Practically, I can see how this is a useful verification method, even if it doesn’t seem to be completely equivalent.
It depends a bit on the threat model. I assume that the kernel is in scope if you’re worried about supply-chain vulnerabilities because it would be trivial to have a kernel patch that spots something that looks like a compiler, watches for specific patterns in its output, and replaces them with something different. If you are using a *NIX distro as your build environment, precisely the same people have access to introduce trojans into the kernel as do in the compiler, so removing the compiler from the TCB doesn’t buy you much.
You can kind-of work around this if you build in an attested environment (i.e. have a valid secure boot chain to a known-good kernel), but that depends on having a trusted environment and if you actually trust the environment then you don’t need any of these mitigations.
If you assume that an attacker can compromise your supply chain then diversity is a better defense. If I build the bootstrap tools on FreeBSD with clang and Ubuntu with gcc then it’s very hard for someone to inject a trojan into both. If I then compare the outputs and I get the same thing then I have a lot of confidence that any malware in my final system image was present in my source tree.
Slashdot would send an X-Fry header with a quote from Futurama.
Gmail’s IMAP server responds to the verb xyzzy with “nothing happens”
I love things like this.
Also X-Bender; it alternated, or perhaps randomly chose. Maybe others, but I only saw those two myself.
I work on a large C/C++ codebase as part of my day job and use lsp-mode/eglot (currently eglot) to navigate the code, with very few extensions. I also use the latest mainline Emacs with native compilation. I have been using Emacs for over 25 years and my customization is best categorized as “very light”. In short, my Emacs set up is not much beyond what ships with it.
And it’s still just… slow. GCC has some pretty big files and opening them can take up to 10 seconds thanks to font-lock mode. (Yes, I know I can configure it to be less decorative, but I find that decoration useful.) It’s much worse when you open a file that is the output from preprocessor expansion (easily 20000+ lines in many cases).
Log files that are hundreds of megabytes are pretty much a guaranteed way to bring Emacs to a crawl. Incremental search in such a buffer is just painful, even if you M-x find-file-literally.
I had to turn off nearly everything in lsp-mode/eglot because it does nothing but delay my input. I can start typing and it will be 3-4 characters behind as it tries to find all the completions I’m not asking for. Company, flymake, eldoc are all intolerably slow when working with my codebase, and I have turned them all off or not installed them in the first place.
M-x term is really useful, but do not attempt to run something that will produce a lot of output to the terminal. It is near intolerable. Literally orders of magnitude slower to display than an xterm or any other terminal emulator. (M-x eterm is no better.)
The problem, of course, is that Elisp is simply not performant. At all. It’s wonderfully malleable and horribly slow. It’s been this way since I started using it. I had hopes for native compilation, but I’ve been running it for a few months now and it’s still bad. I love Emacs for text editing and will continue to use it. I tried to make it a “lifestyle choice” for a while and realized it’s not a good one if I don’t want to be frustrated all the time. Emacs never seems to feel fast, despite the fast hardware I run it on.
The performance was the reason for me to leave Emacs. I was an evil mode user anyways so the complete switch to (neo)Vim was simple for me. I just could not accept the slowness of Emacs when in Vim everything is instant.
E.g. Magit is always named as one of the prime benefits of Emacs. While its functionality is truly amazing its performance is not. Working on a large code base and repository I was sometimes waiting minutes! for a view to open.
What did you find slow on Emacs aside from Magit?
I actually use Emacs because I found it really fast compared to other options. For example, the notmuch email client is really quick on massive mailboxes.
Some packages might be slow, though. I think the trick is to have a minimal configuration with very well chosen packages. I am particularly interested in performance because my machine is really humble (an old NUC with a slow SATA disk).
To be fair it was some time ago and I don’t remember all the details, but using LSPs for code completion/inspection was pretty slow, for example.
Compared to IDEs it might not even have been slow but similar. I however have to compare to Vim where I have equal capabilities but pretty much everything is instant.
My machine was BTW pretty good hardware.
lsp-mode became much more efficient during the last year or so. Eglot is even more lightweight, I think. Perhaps it is worth giving it another go.
I think there was some initial resistance to LSP in the Emacs community and therefore they were not given the attention they deserve.
Thanks for the notice! I may try it again in the future but currently I am very happy with my Neovim setup, which took me a long time to setup/tweak :)
Out of curiosity, were you using Magit on Windows?
I use Magit every day and my main machine is very slow. (1.5GHz 4 core cortex A53) Magit never struck me as particularly slow, but I’ve heard that on Windows where launching subprocesses takes longer it’s a different story.
Ohh, you have no idea how slow it gets in a corporate environment. Going through MSYS2, Windows Defender, with Windows being Windows and a corporate security system on top, it takes… ages. git add a single file? 20 seconds. Create a commit? Over a minute. It’s bonkers if you hit the worst case just right. (On a private Windows install, MSYS2 + exceptions set in Windows Defender, it’s fine though, not much slower than my FreeBSD laptop.) I asked around and there is a company-wide, hardcoded path on every laptop that has exceptions in all the security systems, just to make life less miserable for programmers. Doesn’t solve it completely, but helps.
Either wait an eternity or make a mockery of the security concept. Suffice to say I stopped using Windows and cross-compile from now on.
Can confirm. I use Magit on both Linux and Windows, and it takes quite a bit of patience on Windows.
With Windows I think it’s particularly git that is slow, and magit spawns git repeatedly. It also used to be very slow on Mac OS because of problems with fork performance. On Linux, it used to be slow with tramp. There are some tuning suggestions for all of these in the magit manual, I think.
Nope, on Linux. As mentioned, our code base is big and has many branches etc. Not sure where exactly Magit’s bottleneck was. It was quite some time ago. I just remember that I found similar reports online and no real solution to them.
I now use Lazygit when I need something more than git cli and it’s a fantastic tool for my purpose. I also can use it from within Vim.
This happens for me as well with large changes. I really like Magit but when there are a lot of files it’s nearly unusable. You literally wait for minutes for it to show you an update.
I know you’re not looking to customise much but wrt. terminals, vterm is a lot better in that regard.
I actually switched to M-x shell because I found the line/char mode entry in term-mode to be annoying (and it seems vterm is the same in this respect). shell-mode has all the same slowness of term-mode, of course. I’ve found doing terminal emulation in Emacs to be a lost cause and have given up on it after all these years. I think shell-mode is probably the most usable since it’s more like M-x shell-command than a terminal (and that’s really its best use case).
If you need ansi/curses there’s no good answer, and while I like term it was too slow in the end and I left. I do think that for “just” using a shell eshell is fine though.
Do you use the jit branch of emacs? I found once I switched to that and it had jit-compiled things, my emacs isn’t “fast” but it’s pretty boring now, in that what used to be slow is now at least performant enough for me not to care.
Is there a brew recipe or instructions on compiling on Mac? Or does checking out the source and running make do the business?
I use the emacs-plus package. It compiles the version you specify. Currently using emacs-plus@29 with --with-native-comp for native compilation, and probably some other flags.
Thanks again, this is appreciably faster and I’m very pleased 😃
Awesome! Also, check out pixel-scroll-precision-mode for the sexiest pixel-by-pixel scrolling. Seems to be a little buggy in info-mode, can’t replicate with emacs -Q though, so YMMV.
Thank you, that sounds perfect
I’m a Mac user and I found it very hard to compile Emacs.
This might be a good starting point however:
https://github.com/railwaycat/homebrew-emacsmacport
I honestly don’t know; I use nix+home-manager to manage my setup on macOS. This is all I did to make it work across nixos/darwin:
Added it as a flake input: https://github.com/mitchty/nix/blob/7e75d7373e79163f665d7951829d59485e1efbe2/flake.nix#L42-L45
Then added the overlay nixpkgs setup: https://github.com/mitchty/nix/blob/7e75d7373e79163f665d7951829d59485e1efbe2/flake.nix#L84-L87
Then just used it like so: https://github.com/mitchty/nix/blob/6fd1eaa12bbee80b6e80f78320e930d859234cd4/home/default.nix#L87-L90
I gotta convert more of my config over but that was enough to build it and get my existing ~/.emacs.d working with it and speedy to the point I don’t care about emacs slowness even on macos anymore.
Yes. I’ve been using the libgccjit/native compilation version for some time now.
That’s half of it. The other half is that, IIRC, Emacs has rather poor support for asynchrony: most Elisp that runs actually blocks the UI.
Can you share your config? I’m curious to know how minimal you made it.
Here you go. It changes a little bit here and there with some experiments. The packages I currently have installed and use are: which-key, fic-mode, counsel, smartparens, magit, and solarized-theme. There may be a few others that I was trying out or are only installed for some language support (markdown, yaml, and so forth).
Thank you very much.
Quick addendum on the config: that’s my personal config, which morphs into my work setup. My work one actually turns off flymake and eldoc when using eglot.
Is there anything that has prevented a Neovim-style rewrite of Emacs? A Neomacs?
I keep hearing about the byzantine C-layer of Emacs and the slowness of Elisp. And Emacs definitely has the community size to develop such an alternative. Why do you think no one has attempted such an effort? Or maybe I should add “recently” to the question. As I know there are other Emacs implementations.
As crusty as Emacs source can be, it’s nowhere near as bad as Vim source was, which was a rat’s nest of #ifdef. That’s why Neovim had to basically rewrite their way to a fork. The Emacs implementation is surprisingly clean, as long as you can tolerate some of the aged decisions (and GNU bracing).
There is Climacs, which isn’t exactly the same, but is close.
The problem for any new Emacs clone will be that it has to run all the Elisp out there. Unless there is a substantial speed improvement to Elisp or a very good automatic translation tool, any such project will be doomed from the start.
This sshd got started inside the “doubly niced” environment
As for why “the processes didn’t notice and then undo the nice/ionice values”, think about it. Everyone assumes they’re going to get started at the usual baseline/default values. Nobody ever expects that they might get started down in the gutter and have to ratchet themselves back out of it. Why would they even think about that?
These days, this should stand out as a red flag – all these little scripts should be idempotent.
You shouldn’t write scripts where, if you Ctrl-C them and then re-run them, you’ll get these “doubling” effects.
Otherwise, if the machine goes down in the middle, or you Ctrl-C, you are left with something that’s very expensive to clean up correctly. Writing idempotent scripts avoids that – and that’s something that’s possible with shell but not necessarily easy.
As far as I can tell, idempotence captures all the benefits of being “declarative”. The script should specify the final state, not just a bunch of steps that start from some presumed state – which may or may not be the one you’re in!
I guess there is not a lot of good documentation about this, but here is one resource I found: https://arslan.io/2019/07/03/how-to-write-idempotent-bash-scripts/
Here’s another one: https://github.com/metaist/idempotent-bash
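Since the thread is about nice values, a minimal sketch of the distinction in Go (Unix-only, via the syscall package; a shell version would make the same point): the idempotent form declares the final nice value outright, so running it once, re-running it after a Ctrl-C, or running it from a parent that was already niced all converge on the same state.

package main

import (
	"log"
	"syscall"
)

func main() {
	pid := syscall.Getpid()

	// Idempotent: state the final value. Run once or run ten times, the
	// process ends up at nice 19 either way.
	if err := syscall.Setpriority(syscall.PRIO_PROCESS, pid, 19); err != nil {
		log.Fatal(err)
	}

	// The non-idempotent shape is the nice(2)-style increment: "add N to
	// whatever the current value happens to be". Re-running that, or
	// inheriting an already-niced environment, stacks the effect instead
	// of converging on a known state.
}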
I believe the “doubly niced” refers to “both ionice and nice”. There wasn’t any single thing being done twice by accident. The issue is with processes inheriting the settings due to Unix semantics.
The problem is the API - it increments the nice value rather than setting it. From the man page:
So the nice value did end up bigger than desired.
That is an interesting quirk of nice()/renice, but in this case I believe they explicitly stated they originally set the nice value to 19, which is the maximum.
Thanks, you’re right! Took me a second reading…
Ah yeah you could be right …
But still the second quote talks about something related to idempotence. It talks about assuming you’re in a certain state, and then running a script, but you weren’t actually in that state. Idempotence addresses that problem. It basically means you will be in the same finishing state no matter what the starting state is. The state will be “fixed” rather than “mutated”.
Hmm, I still don’t think this is the case. The state being dealt with is entirely implicit, and the script in question doesn’t do anything with nice values at all, and yet still should be concerned about them.
The first computer that I owned was an Amstrad PC 1640 HD20. This was second hand (my father’s company had some in a store room and wanted to clear the space). Mine had the EGA display and the 20MB disk replaced with a 40MB one (which, due to early FAT16 limitations, needed to be partitioned as an 8MB C: and a 32MB D:). I think mine had a NEC V30 CPU as well.
These machines came with GEM, which ran a lot better than Windows 3.0 on the same machine. I think GEM was the first GUI that I ever used, even before I had that machine. It had a vector-drawing program that kept me entertained as a very small child.
The first Windows installs the company had were as a result of buying a diagramming program called Meta Design. The authors had come to the conclusion that bundling a copy of Windows 3.0 (possibly 3.1?) with their program was cheaper than licensing a set of GUI / print libraries for DOS. After a couple of years, Windows was sufficiently common that they could just stop shipping Windows.
Yes, in the UK the Amstrad PC was a relatively large-selling range, and opened the PC clone market up completely. I had a PC1512 when I was a teen (or rather my father’s company did too!). Because of their attention to cost they shipped with DR DOS and yes, GEM for at least the graphical displays. If I remember rightly, the PC1512, even though it had only a CGA-compatible adapter on board, also had an entirely non-standard hi-res mono mode for presenting GEM. So these machines were relatively common in the UK at the end of the 80s, and so was GEM as an interface; I remember enough third party software existing to get reviews in the local rags.
Most interestingly I think, these machines also shipped with Locomotive BASIC 2, an iteration on Locomotive Software’s really rather excellent BASIC implementation that was fully integrated with GEM, and supported building evented GUI apps in BASIC with extensions for mouse input and messages, APIs for standard components and 2D drawing, presented with an IDE. It was pretty rudimentary, and I can’t remember much - we’re not talking Smalltalk quality - but it was fairly capable and I built several GUI programs, mostly paint and draw things. This was years ahead of any other simply available RAD environment, and it shipped free with the machine.
Here’s a video of someone using the supplied demo app. This was really pretty advanced stuff for commodity consumer hardware in 1986.
https://www.youtube.com/watch?v=EH0BGEVYNYk
Mine didn’t have DR-DOS. According to the Wikipedia entry it came with both DR-DOS and MS-DOS 3.3 on floppy disks. Mine had MS-DOS 4.0 installed (so GW BASIC was the version I used) and I didn’t have the original floppy disks so I was running Windows 3.0 instead of GEM. I’m quite jealous of Locomotive BASIC. I didn’t do any GUI programming until I got a 386 a few years later and Visual Basic 2.0 (on Windows 3.11).
The display connectors, as I recall, were completely non-standard and used different connectors for all of the different models. When the EGA monitor on mine died, there wasn’t a good way of replacing it with anything else.
As a child (I would have been 10, I think, when I got the machine), my favourite feature of the machine was the fact that the keyboard had an Amstrad joystick port on the back of the keyboard. This used the same joystick connector as most 8-bit computers and mapped the two buttons and the 8 directions to key codes, so any game that worked with the keyboard and let you configure key bindings could be made to work with the joystick. It also had a volume control on the PC speaker so I could turn down the beeps if I wanted to play games before my parents woke up.
I think it’s a mistake to group object orientation completely under Self (although it discusses Smalltalk a lot in the commentary for this section). Those two are message passing / evented object oriented systems, polymorphic by shared method interfaces, and as noted build on a persistent system state, and clearly represent a distinct family.
The bulk of what people consider to be ‘object oriented’ programming after that inflection point though is the C++ / Java style, where objects are composite static types with associated methods, polymorphic through inheritance hierarchies - I think this comes from Simula and I think this approach to types and subtypes could be important enough to add to the list as an 8th base case.
I wouldn’t group C++ and Java like that. Java is a Smalltalk-family language, C++’s OO subset is a Simula-family language (though modern C++ is far more a generic programming language than an object-oriented programming language).
You can implement Smalltalk on the original JVM by treating every selector as a separate interface (you can use invokedynamic on newer ones) and Redline Smalltalk does exactly this. You can’t do the same on the C++ object model without implementing an entirely new dispatch mechanism.
Some newer OO languages that use strong structural and algebraic typing blur the traditional lines between the static and dynamic a lot. There are really two axes that often get conflated:
Smalltalk / Self / JavaScript have purely dynamic dispatch and structural typing. C++ has nominal typing and both static and dynamic dispatch and it also (via templates) has structural typing but with only static dispatch, though you can just about fudge it with wrapper templates to almost do dynamic over structural types. Java has only dynamic dispatch and nominal typing.
Newer languages, such as Go / Pony / Verona, have static and dynamic dispatch and structural typing. This category captures, to me, the best set of tradeoffs: you can do inlining and efficient dispatch when you know the concrete type, but you can also write completely generic code, and the decision whether to do static or dynamic dispatch depends on the type information available at the call site. Your code feels more like Smalltalk to write, but can perform more like C++ (assuming your compiler does a moderately good job of reification and inlining, which Go doesn’t but Pony does).
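To make the Go end of that concrete, a small sketch (the type names are made up): the interface is satisfied structurally, with no "implements" declaration, and the same method can be reached by static dispatch when the concrete type is known or by dynamic dispatch through the interface.

package main

import "fmt"

// Greeter is a structural interface: anything with a Greet() string method
// satisfies it, without declaring so.
type Greeter interface {
	Greet() string
}

type English struct{}

func (English) Greet() string { return "hello" }

func main() {
	e := English{}
	fmt.Println(e.Greet()) // concrete type known: static dispatch, inlinable

	var g Greeter = e      // structural conversion, checked at compile time
	fmt.Println(g.Greet()) // dynamic dispatch through the interface table
}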
From the implementation side yes, the JVM definitely feels more like Smalltalk. But is Java really used in the same dynamic fashion to such an extent that you could say it too is Smalltalk? Just because it’s possible, doesn’t mean it’s idiomatic. I’d argue that most code in Java, including the standard library/classpath, is written in a more Simula-like fashion, the same as C++, and would place it in that same category.
Interfaces, which permit dynamic dispatch orthogonal to the implementation hierarchy, are a first-class part of Java and the core libraries. Idiomatic Java makes extensive use of them. The equivalent in C++ would be abstract classes with pure virtual methods, and these are very rarely part of an idiomatic C++ codebase.
Java was created as a version of Smalltalk for the average programmer, dropping just enough of the dynamic bits of Smalltalk to allow efficient implementation in both an interpreter and a compiler. C++ was designed to bring concepts from Simula to C.
Interesting replies, thanks. The point about Java dispatch is interesting and suggests it is not as good an example as I thought it was. (I’ve not really used it extensively for a very long time.) The point I was trying to make was for the inclusion of Simula, based on the introduction of classes and inheritance, itself an influence on Smalltalk. I accept that Simula is built on Algol, and maybe that means it’s not distinct enough for a branch within this taxonomy. I would note that both Stroustrup and Gosling nominate Simula as a direct influence (example citation).
(NB: I always thought of Java as an attempt to write an Objective-C with a more C++ syntax myself, but that’s just based on what seemed to be influential at the time. Sun were quite invested in OpenStep shortly before they pivoted everything into Java.)
And Objective-C was an attempt to embed Smalltalk in C. A lot of the folks that worked on OpenStep went on to work on Java and you can see OpenStep footprints in a lot of the Java standard library. As I understand it, explicit interfaces were added to Java largely based on experience with the performance difficulties of implementing Objective-C with efficient duck typing. In Smalltalk and Objective-C, every object logically implements every method (though it may implement it by calling #doesNotUnderstand: or -forwardInvocation:), so you need an NxM matrix to implement (class, selector) -> method lookups. GNU-family runtimes implement this as a tree for each object that contains every method, with copy-on-write to reduce memory overhead for inheritance and with a leaf not-implemented node that’s referenced for large runs of missing selectors. The NeXT-family runtimes implement it with a per-object hash table that grows as methods are referenced. Neither is great for performance.
The problem is worse in Objective-C than in some other languages because the fallback dispatch mechanisms (-forwardInvocation: and friends) mean that you really do have the complete matrix, though you can optimise for long runs of not-currently-implemented selectors.
Requiring nominal interfaces rather than simple structural equality for dynamic dispatch meant that Java could use vtables for dispatch (like C++). Each class just has an array of methods it implements, indexed by a stable ordering of the method names. Each interface has a similar vtable, and nominal interfaces mean that you can generate the interfaces up-front. It’s more expensive to do an interface-to-interface cast, but that’s possible to optimise quite a lot.
Languages that do dynamic dispatch but don’t allow the reflection or fallback dispatch mechanism, but still do structural typing, can use selector colouring. This lets you have a vtable-like dispatch table, where every selector is a fixed index into an array, but where many selectors will share the same vtable index because you know that no two classes implement both selectors. The key change that makes this possible is that the class-to-interface cast will fail at compile time if the class doesn’t implement the interface and an interface-to-interface cast will fail at run time. This means that once you have an interface, you never need any kind of fallback dispatch mechanism: it is guaranteed to implement the methods it claims. Interfaces in such a language can be completely erased during the compilation process: the class has a dispatch table that lays out selectors in such a way that selector foo in any class that is ever converted to interface X is at index N, so given an object x of interface type X you can dispatch foo by just doing x.dtable[N](args...). If foo appears in multiple interfaces that are all implemented by an overlapping set of classes, then foo will map to the same N. If one class implements bar and another implements baz, but these two methods don’t ever show up in the same interfaces, then they can be mapped to the same index.
Smalltalk has been one of the big influences on Verona too. I would say that we’re trying to do for the 21st century what Objective-C tried to do for the ’80s: provide a language that captures the programmer flexibility of Smalltalk but is amenable to efficient implementation on modern hardware and modern programming problems. Doing it today means that we care as much about scalability to manycore heterogeneous systems as Objective-C cared about linear execution speed (which we also care about). We want the same level of fine-grained interoperability with C[++] that Objective-C[++] has, but with the extra constraint that we don’t trust C anymore and so we want to be able to sandbox all of our C libraries. We also care about things like live updates more than we care about things like shared libraries, because we’re targeting systems that typically do static linking (or fake static linking with containers) but have 5+ 9s of uptime requirements.
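A toy sketch of that x.dtable[N](args...) dispatch in Go (the selector indices are hard-coded here; a real implementation would assign them with a whole-program colouring pass):

package main

import "fmt"

// Selector indices assigned at compile time. If two selectors never appear
// together in any interface implemented by the same class, the colouring
// pass could give them the same slot; here they get distinct ones.
const (
	selFoo = 0
	selBar = 1
)

// Every class carries one flat dispatch table laid out by selector index,
// so a call is just an array index plus an indirect call: no hashing, no
// fallback lookup.
type object struct {
	name   string
	dtable []func(*object)
}

func main() {
	x := &object{name: "A"}
	x.dtable = []func(*object){
		selFoo: func(o *object) { fmt.Println(o.name, "handles foo") },
		selBar: func(o *object) { fmt.Println(o.name, "handles bar") },
	}
	x.dtable[selFoo](x) // dispatch foo on x
}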
Fascinating reading again, thanks. I had not previously heard of Verona, it sounds very interesting. Objective-C was always one of my favourite developer experiences, the balance of C interoperability with such a dynamic runtime was a sweet spot, but the early systems were noticeably slow, as you say.
It’s because Java and C++ are both ALGOL family with something called “objects” in it. Neither have enough unique features to warrant a family or being part of anything but the ALGOL group.
I forgot how strongly he attributed NextStep’s productivity to being “Object Oriented.”
I wonder what he understood the term to mean. It was probably quite a bit different than what I mean if I use the term.
Something a bit more like what people more usually call components these days, more than object oriented languages. It’s all about packaging collections of behaviour behind reusable modular abstractions. He’s right about a lot of it, although the vocabulary is dated, and we have coalesced more of it into and around the idea of APIs.
Remember the NeXT idea of OOP is dynamic, late bound, loose types and message passing, with Smalltalk as the primary influence, not objects in the mainstream as eventually happened in the more static bound sense of Java or C++.
Some of what they were shooting for was objects as closed components that could be distributed and sold like pieces of a construction kit and you’d be able to quickly assemble desktop apps by dragging them together in a visual editor and just serialising that out to dump a working application image. (Which is kind of how NeXT Interface Builder worked)
Squint and you can see it in today’s apps that tie together APIs from disparate service providers, and we don’t really talk about this in the vocabulary of objects so much any more, but the early roots of SOA do have a lot of it present in CORBA, XML RPC, SOAP etc. And there is that ‘O’ in JSON still ;-)
I believe I remember the term “software ICs” being used back then.
I am confused about why the REST crowd is all over gRPC and the like. I thought the reason REST became a thing was that they didn’t really think RPC protocols were appropriate. Then Google decides to release a binary (no less) RPC protocol and all of a sudden, everyone thinks RPC is what everyone should do. SOAP wasn’t even that long ago. It’s still used out there.
Could it be just cargo cult? I’ve yet to see a deployment where the protocol is the bottleneck.
Because a lot of what is called REST ends up as something fairly close to an informal RPC over HTTP in JSON, maybe with an ad-hoc URI call scheme, and with these semantics, actual binary RPC is mostly an improvement.
(Also everyone flocks to Go for services and discovers that performant JSON is a surprisingly poor fit for that language)
I’d imagine that the hypermedia architectural constraints weren’t actually buying them much. For example, not many folks even do things like cacheability well, never mind building generic hypermedia client applications.
But a lot of the time the bottleneck is usually around delivering new functionality. RPC style interfaces are cheaper to build, as they’re conceptually closer to “just making a function call” (albeit one that can fail half way through), whereas more hypermedia style interfaces require a bit more planning. Or at least thinking in a way that I’ve not seen often.
There has never been much, if anything at all, hypermedia specific about HTTP. It’s just a simple text based stateless protocol on top of TCP. In this day and age, that alone buys anyone more than any binary protocol. I cannot reason as to why anyone would want to use a binary protocol over a human readable (and writeable) text one, except for very rare situations of extreme performance or extreme bandwidth optimisations. Which I don’t think are common to encounter even among tech giants.
Virtually every computing device has a TCP/IP stack these days. $2 microcontrollers have it. Text protocols were a luxury in the days where each kilobyte came with high costs. We are 20-30 years past that time. Today even in the IoT world HTTP and MQTT are the go-to choices for virtually everyone; no one bothers to buy into the hassle of an opaque protocol.
I agree with you, but I think the herd is taking the wrong direction again. My suspicion is that the whole REST hysteria was a success because of being JSON over HTTP, which are great, easy to grasp and reliable technologies. Not because of the alleged architectural advantages, as you well pointed out.
SOAP does provide “just making a function call”; I think the reason it lost to RESTful APIs was that requests were not easy to assemble without resorting to advanced tooling. And implementations in new programming languages were demanding. I do think gRPC suffers from these problems too. It’s all fun and games while developers are hyped “because Google is doing it”; once the hype dies out, I’m picturing this old embarrassing beast no one wants to touch, along the lines of GWT, App Engine, etc.
Those are not rare situations, believe me. Binary protocols can be much more efficient, in bandwidth and code complexity. In version 2 of the product I work on we switched from a REST-based protocol to a binary one and greatly increased performance.
As for bandwidth, I still remember a major customer doing their own WireShark analysis of our protocol and asking us to shave off some data from the connection setup phase, because they really, really needed the lowest possible bandwidth.
Sure, but the framing mostly comes from Roy Fielding’s thesis, which compares network architectural styles, and describes one for the web.
But even then, you have the constraints around uniform access, cacheability and a stateless client, all of which are present in HTTP.
The protocol might have comparatively few elements, but it’s just meant that other folks have had to specify their own semantics on top. For example, header values are (mostly) just byte strings. So, in some sense, it’s valid to send Content-Length: 50, 53 in a response to a client. Interpreting that and maintaining synchronisation within the protocol is hardly simple.
I really don’t think that’s a helpful framing. Folks aren’t paid to ship something that’s elegant, they’re paid to ship things that work, so they’ll not want to fuck about too much. And while it might be crude and inelegant, chunking JSON over HTTP achieved precisely that.
By and large gRPC succeeded because it lets developers ignore a whole swathe of issues around protocol design. And so far, it’s managed to avoid a lot of the ambiguity and interoperability issues that plagued XML based mechanisms.
Cargo Cult/Flavour of the Week/Stockholm Syndrome.
A good portion of JS-focussed developers seem to act like cats: they’re easily distracted by a new shiny thing. Look at the tooling. Don’t blink, it’ll change before you’ve finished reading about what’s ‘current’. But they also act like lemmings: once the new shiny thing is there, they all want to follow the new shiny thing.
And then there’s the ‘tech’ worker generic “well if it works for google…” approach that has introduced so many unnecessary bullshit complications into mainstream use, and let slide so many egregious actions by said company. It’s basically Stockholm syndrome. Google’s influence is actively bad for the open web and makes development practices more complicated, but (a lot of) developers lap it up like the aforementioned Lemming Cats chasing a saucer of milk that’s thrown off a cliff.
Partly for sure. It’s true for everything coming out of Google. Of course this also leads to a large userbase and ecosystem.
However I personally dislike REST. I do not think it’s a good interface, and I prefer functions and actions over forcing that (even if sometimes very well) into modifying a model or resource. But it also really depends on the use case. There certainly is standard CRUD stuff where it’s the perfect design, and it’s the most frequent use case!
However I was really unhappy when SOAP essentially killed RPC style Interfaces because it brought problems that are not inherent in RPC interfaces.
I really liked JSON-RPC as a minimal approach. Sadly this didn’t really pick up (only way later inside Bitcoin, etc.). This led to lots of ecosystems and designs being built around REST.
Something that has also been very noticeable with REST being the de-facto standard way of doing APIs is that oftentimes it’s not really followed. Many, I would say most, REST APIs do have very RPC-style parts. There’s also a lot of mixing up HTTP+JSON with REST and RPC with protobufs (or at least some binary format). Sometimes those “mixed” pattern HTTP interfaces also have very good reasons to be like they are. Sometimes “late” feature additions simply don’t fit in the well-designed REST API and one would have to break a lot of rules anyway, leading to the question of whether the remaining bits would be worth preserving for their cost. But that’s a very specific situation, one that typically would only arise years into the project, often triggered by the business side of things.
I was happy about gRPC because it made people give it another shot. At the same time I am pretty unhappy about it being unusable for applications where web interfaces need to interact. Yes, there are “gateways” and “proxies”, and while probably well designed in one way or another they come at a huge price, essentially turning them into a big hack, which is also a reason why there are so many gRPC-alikes now. None as far as I know has a big ecosystem. Maybe Thrift. And there are many approaches not mentioned in the article, like webrpc.
Anyways, while I don’t think RPC (and certainly gRPC) is the answer to everything I also don’t think restful services are, nor graphql.
I really would have liked to see what JSON-RPC would have turned into if it got more traction, because I can imagine it serving many applications that now use REST. But this is more a curiosity about an alternative reality.
So I think, like all Google projects (Go, Tensorflow, Kubernetes, early Angular, Flutter, …), there is a huge cargo cult mentality around gRPC. I do however think that there’s quite a lot of people that would have loved to do it themselves, if that could guarantee that it would not be a single person or company using it.
I also think the cargo cult is partly the reason for contenders not picking up. In cases where I use RPC over REST I certainly default to gRPC simply because there’s an ecosystem. I think a competitor would have a chance though if it would manage a way simpler implementation which most do.
I can’t agree more with that comment! I think the RPC approach is fine most of the time. Unfortunately, SOAP, gRPC and GraphQL are too complex. I’d really like to see something like JSON-RPC, with a schema to define schemas (like the Protobuf or GraphQL IDL), used in more places.
Working in a place that uses gRPC quite heavily, the primary advantage of passing protobufs instead of just json is that you can encode type information in the request/response. Granted you’re working with an extremely limited type system derived from golang’s also extremely limited type system, but it’s WONDERFUL to be able to express to your callers that such-and-such field is a User comprised of a string, a uint32, and so forth rather than having to write application code to validate every field in every endpoint. I would never trade that in for regular JSON again.
Strong typing is definitely nice, but I don’t see how that’s unique to gRPC. Swagger/OpenAPI, JSON Schema, and so forth can express “this field is a User with a string and a uint32” kinds of structures in regular JSON documents, and can drive validators to enforce those rules.
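For instance, here's a rough sketch (my example, not from either comment) of a "name is a string, id is a uint32" User contract enforced on plain JSON; gojsonschema is just one of several Go validator libraries that can do this:

    package main

    // Sketch: a "User is a name (string) plus an id (uint32)" contract
    // enforced on plain JSON via JSON Schema. The schema and document
    // are inline; gojsonschema is one of several Go validator libraries.

    import (
        "fmt"

        "github.com/xeipuuv/gojsonschema"
    )

    const userSchema = `{
      "type": "object",
      "required": ["name", "id"],
      "properties": {
        "name": {"type": "string"},
        "id":   {"type": "integer", "minimum": 0, "maximum": 4294967295}
      },
      "additionalProperties": false
    }`

    func main() {
        schema := gojsonschema.NewStringLoader(userSchema)
        doc := gojsonschema.NewStringLoader(`{"name": "alice", "id": -3}`)

        result, err := gojsonschema.Validate(schema, doc)
        if err != nil {
            panic(err)
        }
        if result.Valid() {
            fmt.Println("document is a valid User")
            return
        }
        for _, e := range result.Errors() {
            fmt.Println("-", e) // e.g. the id failing the minimum check
        }
    }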
Quite surprised by this piece of common lore which seems to have passed me by entirely at the time. I used cheap ne2000 clones preferentially and almost exclusively for building my small linux networks through the mid nineties and I can’t really think of any problems. Most of my cursed networking from that era was struggling with linux NFS implementations.
Ditto to the former (but I didn’t build out Linux networks). When switching to PC from Amiga and building out my first box, I followed sage advice and went with an NE2000 because “everything supports it” and the alternatives realistically available in my price budget didn’t have Linux support, or had worse support than the NE2000. I never noticed any problems with it; the two other students I shared a house with that year were also compsci students and we had a household network for our machines.
Linux NFS was so bad that discovering it actually worked under FreeBSD was a delight. (I mean, later at ISP postmaster scale, I got too familiar with quirks of FreeBSD/SunOS/NetApp and all the wonderful NFS bugs which could still come up, but nobody was seriously proposing we try to add Linux into the mix: we later added Linux to the mail setup for malware scanning with a commercial product, but since the scanner was closed source we kept it away from the filer network anyway).
Ha ha, I had exactly the same FreeBSD epiphany. Wait, NFS works on this one? Mind…blown.
You can use Apple Mail on Linux in the browser, in the form of iCloud.com; surprised they didn't mention this route.
The author uses Gmail, not iCloud, as their mail provider. Mail.app is a generic IMAP/POP3/ActiveSync client.
I use Apple Notes on iOS and then Notes in icloud.com on Linux. I haven’t found anything that works better. Ok, maybe beorg/mobileorg, but it’s not as effortless on mobile. Notes is damn near perfect.
Does anyone know if there are any alternatives?
It is also fully IMAP accessible, like most providers.
iCloud contacts and Calendar are also accessible via carddav and caldav respectively. Though you need to generate an API access token on iCloud.com.
I actually have really good experiences running my own Carddav and Caldav servers on iOS and MacOS. (God, I sound like such a shill these last few days! - macs are good but I really do prefer the commandline and tiling window manager ecosystem of Linux, personally)
Same here. It also works fine on Android. It’s a pain with Thunderbird though and I haven’t really found anything on Windows that works well with them.
One of the huge things that Apple did for usability (and that Android somewhat copied) was to separate system services from the UI. The address book, calendar, spell checker, and so on are all system services that any application can use. On Windows, Office implements its own versions of these, and I get the impression that there was a lot of pressure from the Office team 20 years ago to prevent Windows from implementing them and enabling competitors. Cocoa's NSTextView is a joy to use and includes a complete (and completely configurable) typesetting engine that lets you flow text however you want, replace the hyphenation engine, and so on. Windows' rich text control is awful in comparison. As a result, everyone who needs anything nontrivial implements their own version and the UI is massively fragmented.
the first thing i did when i got my hands on the first Jolla smartphone, back in the day, fresh out of the box was download the then newest emacs tarball, and build it. I was on the train at the time, commuting to work :-) https://flic.kr/p/je333q
You haven’t lived until you’ve run BeOS on an actual BeBox. Love those blinkenlights.
I have been reviving mine over the holidays, so I was quite surprised to see this story surface at around the same time. https://www.reddit.com/r/vintagecomputing/comments/eku19u/dusted_off_one_of_my_old_beboxes/
The dude has three, one being a Hobbit?! My jelly runneth over. Took me ages to find the one 133MHz I own.
that dude is me, the username is the clue :-)
I used BeOS as my primary OS for a year or so, eventually dual-booting with Linux and then dropping it altogether.
Many things about BeOS were sort of incredible. Booted in a couple seconds on the machines of the era, easily 5-10x more quickly than Linux. One of the “demos” was playing multiple MP3 files backwards simultaneously, a feat that nothing else could really do at the time, or multiple OpenGL applications in windows next to each other. The kernel really did multiprocessing in a highly responsive, very smooth way that made you feel like your machine was greased lightning, much faster than it felt under other OSes. This led to BeOS being used for radio stations, because nothing you were doing in the foreground stood a chance of screwing up the media playback.
BeOS had a little productivity suite, Gobe Productive. It had an interesting component embedding scheme, I guess similar to what COM was trying to be, so you just made a “document” and then fortified it with word processing sections or spreadsheet sections.
There were a lot of “funny” things about BeOS that were almost great. Applications could be “replicants,” and you could drag the app out of the window frame and directly onto your desktop. Realistically, there were only a couple for which this would be useful, like the clock, but it was sort of like what “widgets” would become in a few years with Windows and Mac OS X.
The filesystem was famous for being very fast and for having the ability to add arbitrary metadata to files. The mail client was really just a mail message viewer; the list of messages was just a Tracker window (like Finder) showing attributes for To, From, Subject, etc. Similarly, the media player could only play one file at a time; if you wanted a playlist, you just used Tracker, since the filetype, Title, Artist, Album, etc. were just attributes on the file. I'm not entirely sure how it parsed them out, probably through a plugin or something. You could do what we now call "smart searches" on Mac OS X by saving a search. These worked just like folders for all the apps.
The POSIX compatibility was only OK. I remember it being a little troublesome to get ports of Unix/Linux software of the era going. At the time, using a shittier browser than everyone else wasn’t really a major impediment to getting anything done, so usually I used NetPositive. There was a port of Mozilla, but it was a lot slower, and anyway, NetPositive gave you haiku if something went wrong.
There were not a huge number of applications for BeOS. I think partly it was a very obscure thing to program for. There were not a lot of great compatibility libraries you could use to easily make a cross-platform app with BeOS as a target. I wasn’t very skilled at C++ (still am not) but found trying to do a graphical app with BeOS and its libraries a pretty huge amount of work. Probably it was half or less the work of doing it in Windows, but you had to have separate threads for the app and the display and send messages between them, and it was a whole thing. Did not increase my love for C++.
All in all, it was a great OS for the time. So much of my life now is spent in Emacs, Firefox and IntelliJ that if I had those three on there I could use it today, but it was such an idiosyncratic platform I imagine it would have been quite difficult to get graphical Emacs on there, let alone the others. But perhaps it’s happening with Haiku.
Querying was built into the filesystem. There was a command-line query, too. So many applications became so much simpler with that level of support for queries, it was great.
Yeah, that was a downside, but it was very forward-thinking at the time.
Well, you’re almost in luck. Emacs is available – a recent version, too!
IntelliJ is there, too, but 1- only the community edition, and 2- it’s a bit further behind in versions.
Unfortunately, Firefox doesn’t have a Haiku port at this time. Rust has been ported, but there are still a boatload of dependencies that haven’t been. The included browser, WebPositive, is based on a (iirc, recent) version of webkit, fwiw, so it’s not antiquated.
the problem with relying on additional file metadata for functionality in a networked world is that you have to find a way to preserve the metadata across network transfers. I also used BeOS for several years for daily everything. Networking in BeOS was practically an afterthought.
Sure, and you need to be able to populate metadata for untagged files from the network.
Fortunately, most modern file types have metadata in them, so discarding the fields outgoing doesn’t hurt, and populating them incoming isn’t too hard. IIRC, that sort of thing was generally part of the application. So, e.g., the IMAP sync app would populate your email files with metadata from the email header fields, the music player app would populate metadata from the mp3 or ogg info headers, etc.
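As a rough modern analog of that pattern (not BeOS code; just a sketch using Linux extended attributes via golang.org/x/sys/unix, with an invented attribute name), the idea is simply: parse the in-band headers once, then mirror the fields into filesystem metadata so generic tools can query them:

    package main

    // Sketch of the BeOS-style idea on Linux: after parsing a file's own
    // headers (ID3 tags, email headers, ...), mirror the interesting
    // fields into extended attributes so other tools can query them.
    // The attribute name "user.media.artist" is invented; this only
    // works on filesystems with xattr support.

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    func setArtist(path, artist string) error {
        return unix.Setxattr(path, "user.media.artist", []byte(artist), 0)
    }

    func getArtist(path string) (string, error) {
        buf := make([]byte, 256)
        n, err := unix.Getxattr(path, "user.media.artist", buf)
        if err != nil {
            return "", err
        }
        return string(buf[:n]), nil
    }

    func main() {
        const path = "song.mp3" // assumed to already exist
        if err := setArtist(path, "Some Band"); err != nil {
            panic(err)
        }
        artist, err := getArtist(path)
        if err != nil {
            panic(err)
        }
        fmt.Println("artist:", artist)
    }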
but then this becomes a schema problem. Next-gen ideas like tagging files pervasively with identical metadata regardless of type, for relating and ordering, die as soon as you tar them up and pass them through a system that doesn't know about your attributes - unless you have arbitrary in-band metadata support, and then it becomes a discoverability and taxonomy problem; and if you have it in multiple places you have to keep it synchronised and stable with regard to shallow copies like links. You can still have the support for it as a second layer of metadata, of course, and the ability to index and query otherwise extant metadata out of band is useful as an optimisation, but once you extend the idea of the file namespace to include foreign data, you lose out on 'smart metadata' as a first-class foundation. A similar thing happened with multi-fork files for MacOS.
Sure, but it’s still so useful that when Apple rewrote their filesystem a couple years ago, they included support for resource forks. NTFS supports them, too, as does (iirc) the SMB protocol.
Apple standard practice has moved to bundle directories for fork-requiring executables, sure, and that reduces those interop problems a little bit.
I guess what I’m saying is: file forks are still widely supported, regardless of difficulty integrating with un*x filesystems. Since they’re still incredibly useful ways of interacting with file systems, I don’t see why we should avoid them.
BeOS was my primary operating system for a couple of years (I even bought the Professional Edition…I might still have the box somewhere). Did my research and built a box that only had supported hardware - dual Celeron processors, 17” monitor at 1024x768, and some relatively large disk for the time.
It was great.
It was - Very fast, very fun.
Out of interest, what did you use it for?
I remember downloading it and playing around with it (maybe it was small enough to boot from a floppy?) but I couldn't do anything useful with it. I was a bit too young as well; I guess today I could cope better with unfamiliar stuff.
It was my daily driver. 99% of my work at the time involved being connected to a remote device (routers and firewalls mostly), and BeOS could do that just fine.
It was a great system. There hasn’t been a better one since.
I had triple boot machine - Windows/Linux/BeOS that time. I used BeOS mainly to learn C++ programming. Their GUI toolkit was at that time quite nice - much nicer than MFC :)
was it the Abit BP-6? I had two of those as well, for BeOS. Loved them almost as much as I loved a real bebox. Way faster too :-)
Nah, all self-built, back when building your own machine could actually be significantly cheaper than buying a prebuilt one.
the bp-6 is a motherboard. I hope that counts as self-built :-)
Ah, my bad. I don’t remember the motherboard; this was 20 years ago. Sadly, I haven’t built my own since…probably 2002? I’m so out of the loop it’s not even funny.
(Unless you count putting a Raspberry Pi in a little plastic case as “building your own machine”. If so, then…it’s still been a few years.)
oh that’s quite OK. The BP-6 was quite famous in that era for allowing SMP with celerons that were built to disallow it. It was quite a popular choice for x86 BeOS at the time.
panic() is the equivalent of the exception mechanism many languages use to great effect. Idiomatically it's a last resort, but it's a superior mechanism in many ways (e.g. tracebacks for debugging, instead of Go's idiomatic "here's an error message, good luck finding where it came from" default). I think the biggest problem here is that too often if err != nil { return err } is used mindlessly. You then run into things like "open foo: no such file or directory", which is indeed pretty worthless. Even just return fmt.Errorf("pkg.funcName: %s", err) is a vast improvement (although there are better ways, such as github.com/pkg/errors or the new Go 1.13 error system).
I actually included return err in a draft of this article, but decided to remove it, as it's not really a "feature", and how to effectively deal with errors in Go is probably worth an article on its own (if one doesn't exist yet).
It's pretty straightforward to decorate an error so you know where it's coming from. The most idiomatic way to pass on an error in Go code is to decorate it, not pass it along unmodified. You are supposed to handle errors you receive, after all.
Not the common misassumption - in fact, the 1.13 release of Go formally adds error chains using a new Errorf directive, %w, that formalises wrapping error values in a manner similar to a few previous library approaches, so you can interrogate the chain if you want to use it in logic (rather than string matching).
It's unfortunate IMO that interrogating errors using logic in Go amounts to performing a type assertion, which, while idiomatic and cheap, is something I think a lot of programmers coming from other languages will have to overcome their discomfort with. Errors as values is a great idea, but I personally find it to be a frustratingly incomplete mechanism without sum types and pattern matching, the absence of which I think is partly to blame for careless anti-patterns like return err.
You can now use errors.Is to test the error type, and they added error wrapping to fmt.Errorf. Same mechanics underneath, but easier to use.
(you could just do a switch with a default case)
I guess you mean … but yes, point taken :)
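A small sketch of those mechanics - %w to build the chain, errors.Is/errors.As to interrogate it - with the sentinel error and helper functions invented for illustration:

    package main

    // Each layer decorates with fmt.Errorf and %w, so the final message
    // reads as a chain ("load config: open ...: no such file or directory")
    // and callers can still branch on the cause with errors.Is/errors.As
    // instead of string matching. ErrNotFound, lookupUser and loadConfig
    // are made up for this example.

    import (
        "errors"
        "fmt"
        "os"
    )

    var ErrNotFound = errors.New("user not found")

    func lookupUser(name string) error {
        if name != "alice" {
            return fmt.Errorf("lookup %q: %w", name, ErrNotFound)
        }
        return nil
    }

    func loadConfig(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return fmt.Errorf("load config: %w", err)
        }
        defer f.Close()
        return nil
    }

    func main() {
        if err := lookupUser("bob"); errors.Is(err, ErrNotFound) {
            fmt.Println("sentinel matched through the wrapping:", err)
        }

        err := loadConfig("/does/not/exist.conf")
        var pathErr *os.PathError
        if errors.As(err, &pathErr) { // pulls the typed error out of the chain
            fmt.Println("failing path:", pathErr.Path)
        }
        fmt.Println(err) // load config: open /does/not/exist.conf: no such file or directory
    }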
Sure, but in other languages you don’t have to do all this extra work, you just get good tracebacks for free.
I greatly prefer the pithy, domain-oriented error decoration that you get with this scheme to the verbose, obtuse set of files and line numbers that you get with stack traces.
I built a basic Common-Lisp-style condition system atop Go's panic/defer/recover. It is simple and lacks a lot of the syntactic advantages of Lisp, and it is definitely not ready for prime time, at all, but I think maybe there's a useful core in there. But seriously, it's a hack.
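Not that commenter's library, but a minimal sketch of the underlying trick: using panic/defer/recover to give callers a structured "signal and handle" mechanism (no restarts or handler stack, so nothing close to a real condition system):

    package main

    // A toy "signal and handle" mechanism built on panic/defer/recover.
    // It only shows the core trick of a non-local exit to a handler
    // further up the call stack.

    import "fmt"

    // condition is the value we panic with, so recover can tell our
    // signals apart from genuine runtime panics.
    type condition struct{ msg string }

    func signal(msg string) {
        panic(condition{msg: msg})
    }

    // withHandler runs body and routes any signalled condition to handle,
    // re-panicking on anything that isn't one of ours.
    func withHandler(handle func(condition), body func()) {
        defer func() {
            if r := recover(); r != nil {
                if c, ok := r.(condition); ok {
                    handle(c)
                    return
                }
                panic(r)
            }
        }()
        body()
    }

    func main() {
        withHandler(
            func(c condition) { fmt.Println("handled:", c.msg) },
            func() {
                fmt.Println("before signal")
                signal("something odd happened")
                fmt.Println("never reached")
            },
        )
        fmt.Println("back in main")
    }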
This article gets a few things about git wrong. They claim git only supports 'one check-out per repository' - heard of git worktree? They also claim git is only portable to POSIX, yet it runs fine on Windows with full line-ending support. (Git for Windows achieves this by bundling tools like ls, ssh and cat, thereby not requiring the host OS to be POSIX.) They claim SQLite is a superior storage method, yet it is widely known for getting corrupted (probably the reason they run integrity checks all the time), handles multiple concurrent writers poorly, and its column types are little more than affinities, so values are stored with essentially no type checking.
I don’t think they got this one wrong:
From the article:
this was also somewhat true of Mac OS, if you were working with its traditional approach to case insensitivity. As the article notes, this is almost by design. Git was built to facilitate Linux development, and it's not surprising that Linux hosting is a prerequisite for that.
What's so special about platform.sh that it charges $50 for a tiny server?
Well, based on the domain name, I assume it’s written entirely in Bash, so that probably takes some extra cycles.
It’s not 50 USD for a server, but for a project. So if you had two “projects” (I’m guessing web sites/apps) it’d be 100 USD instead. I imagine the overhead is for if you don’t want to deal with AWS/Google yourself.
Their pricing model reminds me of Webflow.
It's not exactly like that. There's a little more to a 'project' than just 'a server'. The project model gets you an app (I think just one on the standard plan, but it can be more) connected to provisioned services (databases, search index, queues, whatever), plus a git server and some storage. Within your project plan you get a certain number of environments, which are branches (e.g. staging, a feature branch, etc.). When you branch you can clone the whole setup - services, data, etc. - and everything can be driven via git. So there is additional value and a different workflow compared to just provisioning some cloud servers.
I think we are both saying the same thing :-)
Their site isn’t very clear (your description confirms things that I’ve guessed at from their site) but it sounds like you get a lot for your 50 USD. They’re taking care of CloudFront, ELB/ALB, CodeCommit/CodePipeline, DynamoDB/RDS, ElasticSearch, SQS etc. for you. If you set it all up yourself you’d undoubtedly pay less to AWS per month, but then you’d have to operate it all yourself.
For devs it sounds great if you don’t want to manage all that yourself (or don’t have a team that does it for you at work). It really does remind me of Webflow, which does a similar thing for content sites (i.e. they do everything for you including visual design tool, CMS, form creation & submission handling etc.).
That depends a lot on what you want to use it for and what your personal tastes are like. As people have said in other threads, CL is a kitchen sink, and it was standardised a long time ago, which means it is very stable: code written decades ago is going to work unmodified in CL today. There are several high-quality implementations around. On the flip side, it has many warts.
Racket is a single implementation-defined language. On the other hand, if you learn Scheme, most of it just carries over into Racket, and you can also choose from a bevy of implementations depending on your requirements. It’s a clean and elegant language, but that also means many things are missing. For those, you’ll have to rely on SRFIs, portable libraries or implementation-specific extensions.
while the stability argument is probably true from a high level perspective, I’ve run into a few problems with libraries that don’t want to build on older CL installations, e.g. if using the old sbcl that comes with debian, quicklisp systems don’t always build. So in practice, you still have to migrate things forward.
It’s possible to write unportable, nonstandard Common Lisp, but relatively little care is required to write it properly.
That’s entirely because Quicklisp isn’t written properly. If you ever take a look at the code, you’ll notice it’s leagues more complicated than it has any need to be, as merely one issue with it. Of course, Quicklisp hosting doesn’t even bother testing with anything that isn’t SBCL as of the last time I checked.
This is wrong. All of my libraries work properly and will continue to work properly. Don't believe that, merely because some libraries aren't written well, none of them are written properly, or that a significant number of them are badly written. I'm inclined to believe most of the libraries are written by competent programmers and according to the standard.
It's completely fair to say that things that don't build portably could be better written to do so. I would like to add that it was not Quicklisp per se where I saw problems, but rather building systems within it. Off the top of my head, ironclad and perhaps cffi both exhibited problems on older SBCL. I haven't checked, but I think this would also be the case if they were just built with ASDF, so I don't wish to imply Quicklisp introduced these problems. I think both of these libs are very tightly coupled to the host system libraries, and could be considered atypical Lisp libraries in that sense.
Probably I should have said: in practice you may have to migrate things forward.
The last library you shared (the alternative to uiop:quit) is most definitely not written in portable Common Lisp, so as /u/cms points out, the implementations may change their APIs and the code would need to be updated.
Firstly, it should be understood that a library whose sole purpose is working around differences between implementations in this manner is different from my other libraries, which don't. Secondly, if you look at the documentation, I note that the library will merely LOAD properly, but may not actually exit the implementation, which is something one may want to test against, as it's a feature caveat. Thirdly, if any implementation thinks about changing the underlying function, as SBCL has already done once, I'd rather complain about the stupid decision than change my program.
In any case, sure I could’ve explicitly mentioned that one library, but it disturbed the flow of the sentence and I figured these points were obvious enough, but I suppose not so.
The problem is more likely due to the fact that you are using the version packaged by Debian instead of your SBCL being old. You should avoid all lisp software packaged by Linux distributions, they tend to give you nothing but trouble.
However it is true that not all Lisp code is portable, especially with the implementation-compatibility shims that are becoming more common. And while one is likely to encounter code that uses implementation-specific extensions, there tends to be a fallback for when the feature is not available. As a data point, I've loaded software from before ASDF (that used MK:DEFSYSTEM) with few modifications.
Yes, that could well be so. It doesn’t really change the point that it’s not as straightforward as just assuming you have a working lisp, everything you need will just be stable. I think we’re in agreement there. Also, I’m building standalone executables for 32 bit ARM, I’m not super-surprised that there’s system-specific bugs in things like math / crypto primitives. FWIW I would favour CL for building anything myself, but not because I think stable dependencies are just a moot point.
(I did actually manage to work quite fine on debian’s ancient sbcl for quite a while so it’s not useless)
I think some of it is because image-based development does not have good collaborative tools
That's always been a bit dubious (Smalltalk has had changesets since at least the late 80s), but it's been truly false for a long time. Squeak had Monticello, VisualWorks had ENVY and StORE, and Pharo just uses Git straight-up these days. I'm not arguing images don't have other issues with them, but collaboration isn't one of them.
Completely fair point. I didn't only mean source code control; I'm also thinking that the developer process of incrementally manipulating a running image isn't very easily mapped onto distributed working, and maybe never was?
e.g. are there workflows/tools where multiple developers push changes to a central image? Because that's kind of the mapping there - if I'm writing C, I am diffing text files, compiling the changed ones into new objects, linking everything, running tests - and this extends quite naturally to continuous integration and automation for collaborators.
When I'm working on an image-style system, I'm typically updating a running thing, usually interactively testing as I go. The ideal collaboration flow for this kind of thing would be to pull small upstream changes directly into my image, switch branches without resetting the world, this kind of thing.
I don’t know very much about the detail of your counter-examples, but I did not mean to suggest it was impossible, so much as ungainly, which was my understanding.
Sorry for responding so late; I know others won’t see this, but thought you deserved a response.
You do kind of have to decide if you're gonna work in the classic Smalltalk mold, or if you're going to work in a modern mold; that's fair. It's just that the modern mold is really common, to the point that relatively few people sculpt an app out of a Smalltalk image (which is closer to the original intent); most write Smalltalk code that really is the program.
This is in fact exactly how at least GNU Smalltalk and Pharo (which is to Smalltalk what Racket is to Scheme) do it. E.g., Pharo's Jenkins server works by just building off master constantly, just as any other project would do. The only difference is that, rather than diffing or thinking in terms of files, you think in terms of changes to classes and methods. Behind the scenes, this is converted into files using a tool called Iceberg.
The only place this system falls down is if you're building constants directly in the image, rather than in code. E.g., if I were truly building a Smalltalk program in a traditional Smalltalk way, I might just read an image into a variable and then keep the variable around. That's obviously not going to have a meaningful source representation; there might be a class variable called SendImage, but the contents it happens to have in my image won't be serialized out. Instead, I'd have to have the discipline to store the source image alongside the code in the repository, and then have a class method called something like initializeImages that sets SendImage to the contents of that image file. In practice, this isn't that difficult to do, and tools like CI can easily catch when you mess up.
Whether this is working against or with the image system is debatable. I've used several image systems (Common Lisp and Factor being two big ones) that don't suffer "the image problem", but tools in the ilk of Smalltalk or Self are obviously different beasts.
Thanks for the reply! I wish I had more smalltalk experience. Maybe some day.
Smalltalk does have Monticello and Metacello. I’ve heard good things about them.