1. 2

    I think it’s a mistake to group object orientation entirely under Self (although the commentary for this section discusses Smalltalk a lot). Smalltalk and Self are message-passing, evented object-oriented systems, polymorphic through shared method interfaces, and, as noted, built on a persistent system state; they clearly represent a distinct family.

    The bulk of what people consider to be ‘object oriented’ programming after that inflection point, though, is the C++ / Java style, where objects are composite static types with associated methods, polymorphic through inheritance hierarchies. I think this comes from Simula, and this approach to types and subtypes could be important enough to add to the list as an 8th base case.

    1. 4

      I wouldn’t group C++ and Java like that. Java is a Smalltalk-family language, C++’s OO subset is a Simula-family language (though modern C++ is far more a generic programming language than an object-oriented programming language).

      You can implement Smalltalk on the original JVM by treating every selector as a separate interface (you can use invoke_dynamic on newer ones) and Redline Smalltalk does exactly this. You can’t do the same on the C++ object model without implementing an entirely new dispatch mechanism.

      Some newer OO languages that use strong structural and algebraic typing blur the traditional lines between static and dynamic a lot. There are really two axes that often get conflated:

      • Static versus dynamic dispatch.
      • Structural versus nominal typing.

      Smalltalk / Self / JavaScript have purely dynamic dispatch and structural typing. C++ has nominal typing and both static and dynamic dispatch; it also (via templates) has structural typing, but with only static dispatch, though you can just about fudge it with wrapper templates to get dynamic dispatch over structural types. Java has only dynamic dispatch and nominal typing.

      Newer languages, such as Go / Pony / Verona, have static and dynamic dispatch and structural typing. This category captures, to me, the best set of tradeoffs: you can do inlining and efficient dispatch when you know the concrete type, but you can also write completely generic code, and the decision whether to do static or dynamic dispatch depends on the type information available at the call site. Your code feels more like Smalltalk to write, but can perform more like C++ (assuming your compiler does a moderately good job of reification and inlining, which Go doesn’t but Pony does).
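
      Go is a convenient place to see both axes at once, since it has structural interfaces with both static and dynamic dispatch. A minimal sketch (all names here are invented for illustration):

      ```go
      package main

      import "fmt"

      // Speaker is satisfied structurally: any type with a
      // Speak() string method implements it, with no declaration.
      type Speaker interface {
      	Speak() string
      }

      type Dog struct{}

      func (Dog) Speak() string { return "woof" }

      // greetConcrete sees the concrete type, so the call can be
      // statically dispatched (and inlined) by the compiler.
      func greetConcrete(d Dog) string { return d.Speak() }

      // greetDynamic only sees the interface, so the call goes
      // through the interface's method table at run time.
      func greetDynamic(s Speaker) string { return s.Speak() }

      func main() {
      	d := Dog{}
      	fmt.Println(greetConcrete(d))
      	fmt.Println(greetDynamic(d)) // implicit structural conversion to Speaker
      }
      ```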

      1. 4

        From the implementation side yes, the JVM definitely feels more like Smalltalk. But is Java really used in the same dynamic fashion to such an extent that you could say it too is Smalltalk? Just because it’s possible, doesn’t mean it’s idiomatic. I’d argue that most code in Java, including the standard library/classpath, is written in a more Simula-like fashion, the same as C++, and would place it in that same category.

        1. 6

          Interfaces, which permit dynamic dispatch orthogonal to the implementation hierarchy, are a first-class part of Java and the core libraries. Idiomatic Java makes extensive use of them. The equivalent in C++ would be abstract classes with pure virtual methods, and these are very rarely part of an idiomatic C++ codebase.

          Java was created as a version of Smalltalk for the average programmer, dropping just enough of the dynamic bits of Smalltalk to allow efficient implementation in both an interpreter and a compiler. C++ was designed to bring concepts from Simula to C.

          1. 3

            Interesting replies, thanks. The point about Java dispatch is interesting and suggests it is not as good an example as I thought it was (I’ve not really used it extensively for a very long time). The point I was trying to make was for the inclusion of Simula, based on its introduction of classes and inheritance, itself an influence on Smalltalk. I accept that Simula is built on Algol, and maybe that means it’s not distinct enough for a branch within this taxonomy. I would note that both Stroustrup and Gosling nominate Simula as a direct influence example citation

            (NB: I always thought of java as an attempt to write an objective-C with a more C++ syntax myself, but that’s just based on what seemed to be influential at the time. Sun were quite invested in OpenStep shortly before they pivoted everything into Java)

            1. 4

              (NB: I always thought of java as an attempt to write an objective-C with a more C++ syntax myself, but that’s just based on what seemed to be influential at the time. Sun were quite invested in OpenStep shortly before they pivoted everything into Java)

              And Objective-C was an attempt to embed Smalltalk in C. A lot of the folks that worked on OpenStep went on to work on Java and you can see OpenStep footprints in a lot of the Java standard library. As I understand it, explicit interfaces were added to Java largely based on experience with performance difficulties implementing Objective-C with efficient duck typing. In Smalltalk and Objective-C, every object logically implements every method (though it may implement it by calling #doesNotUnderstand: or -forwardInvocation:), so you need an NxM matrix to implement (class, selector) -> method lookups. GNU family runtimes implement this as a tree for each object that contains every method, with copy-on-write to reduce memory overhead for inheritance and with a leaf not-implemented node that’s referenced for large runs of missing selectors. The NeXT family runtimes implement it with a per-object hash table that grows as methods are referenced. Neither is great for performance.
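
              The (class, selector) lookup described above can be sketched as a per-class table walk with a doesNotUnderstand-style fallback. This is a toy model (all names invented, nothing like a real runtime’s optimised structures), but it shows why every object logically responds to every selector:

              ```go
              package main

              import "fmt"

              // Method is a dynamically dispatched implementation.
              type Method func(args ...interface{}) interface{}

              // Class holds a per-class selector -> method table, a crude
              // stand-in for the per-class structures described above.
              type Class struct {
              	name    string
              	methods map[string]Method
              	super   *Class
              }

              // Send walks the class chain; a miss falls back to a
              // doesNotUnderstand-style handler, so every selector is
              // logically implemented by every class.
              func (c *Class) Send(selector string, args ...interface{}) interface{} {
              	for k := c; k != nil; k = k.super {
              		if m, ok := k.methods[selector]; ok {
              			return m(args...)
              		}
              	}
              	return fmt.Sprintf("%s doesNotUnderstand: %s", c.name, selector)
              }

              func main() {
              	object := &Class{name: "Object", methods: map[string]Method{}}
              	point := &Class{
              		name:  "Point",
              		super: object,
              		methods: map[string]Method{
              			"describe": func(...interface{}) interface{} { return "a Point" },
              		},
              	}
              	fmt.Println(point.Send("describe")) // found in the table
              	fmt.Println(point.Send("fly"))      // falls back
              }
              ```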

              The problem is worse in Objective-C than in some other languages for two reasons:

              • Categories and reflection APIs mean that methods can be added to a class after it’s created. Replacing a method is easy (you already have a key->value pair for it in whatever your lookup structure is), but adding a new valid selector means that you can’t optimise the layout easily.
              • The fallback dispatch mechanisms (-forwardInvocation: and friends) mean that you really do have the complete matrix, though you can optimise for long runs of not-currently-implemented selectors.

              Requiring nominal interfaces rather than simple structural equality for dynamic dispatch meant that Java could use vtables for dispatch (like C++). Each class just has an array of methods it implements, indexed by a stable ordering of the method names. Each interface has a similar vtable and nominal interfaces mean that you can generate the interfaces up-front. It’s more expensive to do an interface-to-interface cast, but that’s possible to optimise quite a lot.
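
              The vtable arrangement can be sketched like this, a toy model in Go with hand-built tables standing in for what a compiler and class loader would generate (the slot names and shapes are invented for illustration):

              ```go
              package main

              import "fmt"

              // Fixed, precomputed slots: because interfaces are nominal,
              // the table layout can be generated up front, and a call is
              // one array index plus one indirect call.
              const (
              	slotName = iota // index of "name" in the table
              	slotArea        // index of "area"
              )

              // object pairs instance data with its class's method table.
              type object struct {
              	vtable []func(self *object) string
              	radius float64
              }

              // callArea dispatches through the table, as a compiled
              // interface call would.
              func callArea(o *object) string { return o.vtable[slotArea](o) }

              func newCircle(r float64) *object {
              	return &object{
              		radius: r,
              		vtable: []func(*object) string{
              			slotName: func(o *object) string { return "circle" },
              			slotArea: func(o *object) string { return fmt.Sprintf("%.2f", 3.14159*o.radius*o.radius) },
              		},
              	}
              }

              func main() {
              	c := newCircle(2)
              	fmt.Println(c.vtable[slotName](c), callArea(c))
              }
              ```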

              Languages that do dynamic dispatch but don’t allow the reflection or fallback dispatch mechanism, but still do structural typing, can use selector colouring. This lets you have a vtable-like dispatch table, where every selector is a fixed index into an array, but where many selectors will share the same vtable index because you know that no two classes implement both selectors. The key change that makes this possible is that the class-to-interface cast will fail at compile time if the class doesn’t implement the interface and an interface-to-interface cast will fail at run time. This means that once you have an interface, you never need any kind of fallback dispatch mechanism: it is guaranteed to implement the methods it claims.

              Interfaces in such a language can be completely erased during the compilation process: the class has a dispatch table that lays out selectors in such a way that selector foo in any class that is ever converted to interface X is at index N, so given an object x of interface type X you can dispatch foo by just doing x.dtable[N](args...). If foo appears in multiple interfaces that are all implemented by an overlapping set of classes, then foo will map to the same N. If one class implements bar and another implements baz, but these two methods don’t ever show up in the same interfaces then they can be mapped to the same index.
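
              A tiny sketch of the colouring result (invented selectors; a real compiler computes this assignment globally): foo needs its own slot because both classes implement it, while bar and baz can share a slot because no class implements both:

              ```go
              package main

              import "fmt"

              // Colouring result: "foo" appears in both classes, so it
              // needs its own slot; "bar" (class A only) and "baz"
              // (class B only) never collide, so they share slot 1 and
              // the tables stay dense.
              const (
              	slotFoo      = 0
              	slotBarOrBaz = 1
              )

              type dtable []func() string

              var classA = dtable{
              	slotFoo:      func() string { return "A.foo" },
              	slotBarOrBaz: func() string { return "A.bar" },
              }

              var classB = dtable{
              	slotFoo:      func() string { return "B.foo" },
              	slotBarOrBaz: func() string { return "B.baz" },
              }

              // dispatch is the x.dtable[N](args...) form from the text.
              func dispatch(d dtable, slot int) string { return d[slot]() }

              func main() {
              	fmt.Println(dispatch(classA, slotBarOrBaz)) // resolves to bar
              	fmt.Println(dispatch(classB, slotBarOrBaz)) // same index, resolves to baz
              }
              ```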

              Smalltalk has been one of the big influences on Verona too. I would say that we’re trying to do for the 21st century what Objective-C tried to do for the ‘80s: provide a language that captures the programmer flexibility of Smalltalk but is amenable to efficient implementation on modern hardware and modern programming problems. Doing it today means that we care as much about scalability to manycore heterogeneous systems as Objective-C cared about linear execution speed (which we also care about). We want the same level of fine-grained interoperability with C[++] that Objective-C[++] has but with the extra constraint that we don’t trust C anymore and so we want to be able to sandbox all of our C libraries. We also care about things like live updates more than we care about things like shared libraries because we’re targeting systems that typically do static linking (or fake static linking with containers) but have 5+ 9s of uptime requirements.

              1. 1

                Fascinating reading again, thanks. I had not previously heard of Verona, it sounds very interesting. Objective-C was always one of my favourite developer experiences, the balance of C interoperability with such a dynamic runtime was a sweet spot, but the early systems were noticeably slow, as you say.

      2. 1

        It’s because Java and C++ are both ALGOL-family languages with something called “objects” in them. Neither has enough unique features to warrant a family, or being part of anything but the ALGOL group.

      1. 2

        I forgot how strongly he attributed NextStep’s productivity to being “Object Oriented.”

        I wonder what he understood the term to mean. It was probably quite a bit different than what I mean if I use the term.

        1. 5

          Something a bit more like what people usually call components these days, more than object-oriented languages. It’s all about packaging collections of behaviour behind reusable modular abstractions. He’s right about a lot of it, although the vocabulary is dated, and we have coalesced more of it into and around the idea of APIs.

          Remember, the NeXT idea of OOP is dynamic, late-bound, loosely typed message passing, with Smalltalk as the primary influence - not objects in the more statically bound sense of Java or C++, as eventually happened in the mainstream.

          Some of what they were shooting for was objects as closed components that could be distributed and sold like pieces of a construction kit and you’d be able to quickly assemble desktop apps by dragging them together in a visual editor and just serialising that out to dump a working application image. (Which is kind of how NeXT Interface Builder worked)

          Squint and you can see it in today’s apps that tie together APIs from disparate service providers, and we don’t really talk about this in the vocabulary of objects so much any more, but the early roots of SOA do have a lot of it present in CORBA, XML RPC, SOAP etc. And there is that ‘O’ in JSON still ;-)

          1. 3

            I believe I remember the term “software ICs” being used back then.

        1. 24

          I am confused about why the REST crowd is all over gRPC and the like. I thought the reason REST became a thing was that they didn’t really think RPC protocols were appropriate. Then Google decides to release a binary (no less) RPC protocol, and all of a sudden everyone thinks RPC is what everyone should do. SOAP wasn’t even that long ago. It’s still used out there.

          Could it be just cargo cult? I’ve yet to see a deployment where the protocol is the bottleneck.

          1. 14

            Because a lot of what is called REST ends up as something fairly close to an informal RPC over HTTP in JSON, maybe with an ad-hoc URI call scheme, and with these semantics, actual binary RPC is mostly an improvement.

            (Also, everyone flocks to Go for services and discovers that performant JSON is a surprisingly poor fit for that language.)

            1. 14

              I imagine that the hypermedia architectural constraints weren’t actually buying them much. For example, not many folks even do things like cacheability well, never mind building generic hypermedia client applications.

              But a lot of the time the bottleneck is around delivering new functionality. RPC-style interfaces are cheaper to build, as they’re conceptually closer to “just making a function call” (albeit one that can fail halfway through), whereas more hypermedia-style interfaces require a bit more planning. Or at least thinking in a way that I’ve not seen often.

              1. 10

                There has never been much, if anything at all, that is hypermedia-specific about HTTP. It’s just a simple text-based stateless protocol on top of TCP. In this day and age, that alone buys anyone more than any binary protocol. I cannot see why anyone would want to use a binary protocol over a human-readable (and writeable) text one, except in very rare situations of extreme performance or extreme bandwidth optimisation, which I don’t think are common to encounter even among tech giants.

                Virtually every computing device has a TCP/IP stack these days; $2 microcontrollers have it. Text protocols were a luxury in the days when each kilobyte came at a high cost. We are 20-30 years past that time. Today, even in the IoT world, HTTP and MQTT are the go-to choices for virtually everyone; no one bothers to buy into the hassle of an opaque protocol.

                I agree with you, but I think the herd is taking the wrong direction again. My suspicion is that the whole REST hysteria was a success because it was JSON over HTTP, which are great, easy-to-grasp, reliable technologies - not because of the alleged architectural advantages, as you well pointed out.

                SOAP does provide “just making a function call”; I think the reason it lost to RESTful APIs was that requests were not easy to assemble without resorting to advanced tooling, and implementations in new programming languages were demanding. I do think gRPC suffers from these problems too. It’s all fun and games while developers are hyped “because Google is doing it”; once the hype dies out, I’m picturing this old embarrassing beast no one wants to touch, along the lines of GWT, App Engine, etc.

                1. 9

                  I cannot reason as to why anyone would want to use a binary protocol over a human readable (and writeable) text one, except for very rare situations of extreme performance or extreme bandwidth optimisations.

                  Those are not rare situations, believe me. Binary protocols can be much more efficient, in bandwidth and code complexity. In version 2 of the product I work on we switched from a REST-based protocol to a binary one and greatly increased performance.

                  As for bandwidth, I still remember a major customer doing their own WireShark analysis of our protocol and asking us to shave off some data from the connection setup phase, because they really, really needed the lowest possible bandwidth.
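
                  The bandwidth gap is easy to demonstrate: the same record costs several times more as JSON text than as a packed binary layout. A small sketch (the Reading type, its fields, and the sensor values are all made up for this comparison):

                  ```go
                  package main

                  import (
                  	"bytes"
                  	"encoding/binary"
                  	"encoding/json"
                  	"fmt"
                  )

                  // Reading is a hypothetical telemetry sample.
                  type Reading struct {
                  	Sensor uint16  `json:"sensor"`
                  	Value  float64 `json:"value"`
                  }

                  // jsonSize encodes the value as JSON text, field names included.
                  func jsonSize(r Reading) int {
                  	b, _ := json.Marshal(r)
                  	return len(b)
                  }

                  // binarySize packs the same value as fixed-width
                  // little-endian fields: 2 bytes + 8 bytes.
                  func binarySize(r Reading) int {
                  	var buf bytes.Buffer
                  	binary.Write(&buf, binary.LittleEndian, r)
                  	return buf.Len()
                  }

                  func main() {
                  	r := Reading{Sensor: 512, Value: 21.5}
                  	fmt.Println(jsonSize(r), binarySize(r)) // 27 vs 10 bytes
                  }
                  ```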

                  1. 2

                    hypermedia specific about HTTP

                    Sure, but the framing mostly comes from Roy Fielding’s thesis, which compares network architectural styles, and describes one for the web.

                    But even then, you have the constraints around uniform access, cacheability and a stateless client, all of which are present in HTTP.

                    just a simple text based stateless protocol

                    The protocol might have comparatively few elements, but that has just meant that other folks have had to specify their own semantics on top. For example, header values are (mostly) just byte strings, so, in some sense, it’s valid to send Content-Length: 50, 53 in a response to a client. Interpreting that and maintaining synchronisation within the protocol is hardly simple.

                    herd is taking the wrong direction again

                    I really don’t think that’s a helpful framing. Folks aren’t paid to ship something that’s elegant, they’re paid to ship things that work, so they’ll not want to fuck about too much. And while it might be crude and inelegant, chunking JSON over HTTP achieved precisely that.

                    By and large gRPC succeeded because it lets developers ignore a whole swathe of issues around protocol design. And so far, it’s managed to avoid a lot of the ambiguity and interoperability issues that plagued XML based mechanisms.

                2. 3

                  Cargo Cult/Flavour of the Week/Stockholm Syndrome.

                  A good portion of JS-focussed developers seem to act like cats: they’re easily distracted by a new shiny thing. Look at the tooling. Don’t blink, it’ll change before you’ve finished reading about what’s ‘current’. But they also act like lemmings: once the new shiny thing is there, they all want to follow the new shiny thing.

                  And then there’s the ‘tech’ worker generic “well if it works for google…” approach that has introduced so many unnecessary bullshit complications into mainstream use, and let slide so many egregious actions by said company. It’s basically Stockholm syndrome. Google’s influence is actively bad for the open web and makes development practices more complicated, but (a lot of) developers lap it up like the aforementioned Lemming Cats chasing a saucer of milk that’s thrown off a cliff.

                  1. 2

                    Partly for sure. It’s true for everything coming out of Google. Of course this also leads to a large userbase and ecosystem.

                    However, I personally dislike REST. I do not think it’s a good interface; I prefer functions and actions over forcing them (even if sometimes done very well) into modifying a model or resource. But it also really depends on the use case. There certainly is standard CRUD stuff where it’s the perfect design, and that’s the most frequent use case!

                    However I was really unhappy when SOAP essentially killed RPC style Interfaces because it brought problems that are not inherent in RPC interfaces.

                    I really liked JSON-RPC as a minimal approach. Sadly this didn’t really pick up (only way later, inside Bitcoin, etc.). This led to lots of ecosystems and designs being built around REST.

                    Something that has also been very noticeable with REST being the de-facto standard way of doing APIs is that oftentimes it’s not really followed. Many, I would say most, REST APIs have very RPC-style parts. There’s also a lot of mixing up of HTTP+JSON with REST, and of RPC with protobufs (or at least some binary format). Sometimes those “mixed”-pattern HTTP interfaces have very good reasons to be like they are. Sometimes “late” feature additions simply don’t fit into the well-designed REST API, and one would have to break a lot of rules anyway, raising the question of whether the bits worth preserving justify their cost. But that’s a very specific situation, one that typically only arises years into a project, often triggered by the business side of things.

                    I was happy about gRPC because it made people give RPC another shot. At the same time, I am pretty unhappy about it being unusable for applications where web interfaces need to interact. Yes, there are “gateways” and “proxies”, and while they are probably well designed in one way or another, they come at a huge price, essentially turning them into a big hack, which is also a reason why there are so many gRPC-alikes now. None, as far as I know, has a big ecosystem. Maybe Thrift. And there are many approaches not mentioned in the article, like webrpc.

                    Anyways, while I don’t think RPC (and certainly gRPC) is the answer to everything I also don’t think restful services are, nor graphql.

                    I really would have liked to see what JSON-RPC would have turned into if it had gotten more traction, because I can imagine it working for many applications that now use REST. But this is more a curiosity about an alternative reality.

                    So I think, as with all Google projects (Go, TensorFlow, Kubernetes, early Angular, Flutter, …), there is a huge cargo-cult mentality around gRPC. I do, however, think that there are quite a lot of people who would have loved to build it themselves, if that could guarantee it would not end up being a single person or company using it.

                    I also think the cargo cult is partly the reason for contenders not picking up. In cases where I use RPC over REST I certainly default to gRPC, simply because there’s an ecosystem. I think a competitor would have a chance, though, if it managed a much simpler implementation, which most do.

                    1. 1

                      I can’t agree more with that comment! I think the RPC approach is fine most of the time. Unfortunately, SOAP, gRPC and GraphQL are too complex. I’d really like to see something like JSON-RPC, with a schema to define schemas (like the Protobuf or GraphQL IDL), used in more places.

                      1. 2

                        Working in a place that uses gRPC quite heavily, the primary advantage of passing protobufs instead of just json is that you can encode type information in the request/response. Granted you’re working with an extremely limited type system derived from golang’s also extremely limited type system, but it’s WONDERFUL to be able to express to your callers that such-and-such field is a User comprised of a string, a uint32, and so forth rather than having to write application code to validate every field in every endpoint. I would never trade that in for regular JSON again.

                        1. 1

                          Strong typing is definitely nice, but I don’t see how that’s unique to gRPC. Swagger/OpenAPI, JSON Schema, and so forth can express “this field is a User with a string and a uint32” kinds of structures in regular JSON documents, and can drive validators to enforce those rules.
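
                          Statically typed decoding gives a similar effect without protobufs; in Go, for example, declaring the shape once lets the JSON decoder itself reject ill-typed fields. A sketch, reusing the thread’s hypothetical “string plus uint32” User:

                          ```go
                          package main

                          import (
                          	"encoding/json"
                          	"fmt"
                          )

                          // User mirrors the hypothetical example from the thread.
                          type User struct {
                          	Name string `json:"name"`
                          	Age  uint32 `json:"age"`
                          }

                          // decode reports whether a payload matches the declared shape.
                          func decode(payload string) (User, error) {
                          	var u User
                          	err := json.Unmarshal([]byte(payload), &u)
                          	return u, err
                          }

                          func main() {
                          	u, err := decode(`{"name":"ada","age":36}`)
                          	fmt.Println(u.Name, u.Age, err) // well-typed: decodes cleanly

                          	// Type mismatch: rejected by the decoder itself, with no
                          	// hand-written per-field validation in the endpoint.
                          	_, err = decode(`{"name":"ada","age":"old"}`)
                          	fmt.Println(err != nil)
                          }
                          ```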

                  1. 1

                    Quite surprised by this piece of common lore which seems to have passed me by entirely at the time. I used cheap ne2000 clones preferentially and almost exclusively for building my small linux networks through the mid nineties and I can’t really think of any problems. Most of my cursed networking from that era was struggling with linux NFS implementations.

                    1. 2

                      Ditto to the former (but I didn’t build out Linux networks). When switching to PC from Amiga and building out my first box, I followed sage advice and went with an NE2000 because “everything supports it” and the alternatives realistically available in my price budget didn’t have Linux support, or had worse support than the NE2000. I never noticed any problems with it; the two other students I shared a house with that year were also compsci students and we had a household network for our machines.

                      Linux NFS was so bad that discovering it actually worked under FreeBSD was a delight. (I mean, later at ISP postmaster scale, I got too familiar with quirks of FreeBSD/SunOS/NetApp and all the wonderful NFS bugs which could still come up, but nobody was seriously proposing we try to add Linux into the mix: we later added Linux to the mail setup for malware scanning with a commercial product, but since the scanner was closed source we kept it away from the filer network anyway).

                      1. 1

                        Ha ha, I had exactly the same FreeBSD epiphany. Wait, NFS works on this one? Mind…blown.

                    1. 1

                      You can use Apple Mail on Linux in the browser, in the form of iCloud.com; surprised they didn’t mention this route.

                      1. 8

                        The author uses Gmail, not iCloud, as their mail provider. Mail.app is a generic IMAP/POP3/ActiveSync client.

                        1. 2

                          I use Apple Notes on iOS and then Notes in icloud.com on Linux. I haven’t found anything that works better. Ok, maybe beorg/mobileorg, but it’s not as effortless on mobile. Notes is damn near perfect.

                          Does anyone know if there are any alternatives?

                          1. 1

                            It is also fully IMAP accessible, like most providers.

                            iCloud contacts and Calendar are also accessible via CardDAV and CalDAV respectively, though you need to generate an API access token on iCloud.com.

                            I actually have really good experiences running my own CardDAV and CalDAV servers on iOS and MacOS. (God, I sound like such a shill these last few days! - Macs are good, but I really do prefer the command line and tiling window manager ecosystem of Linux, personally.)

                            1. 2

                              I actually have really good experiences running my own Carddav and Caldav servers on iOS and MacOS.

                              Same here. It also works fine on Android. It’s a pain with Thunderbird though and I haven’t really found anything on Windows that works well with them.

                              One of the huge things that Apple did for usability (and that Android somewhat copied) was to separate system services from the UI. The address book, calendar, spell checker, and so on are all system services that any application can use. On Windows, Office implements its own versions of these, and I get the impression that there was a lot of pressure from the Office team 20 years ago to prevent Windows from implementing them and enabling competitors. Cocoa’s NSTextView is a joy to use and includes a complete (and completely configurable) typesetting engine that lets you flow text however you want, replace the hyphenation engine, and so on. Windows’ rich text control is awful in comparison. As a result, everyone who needs anything nontrivial implements their own version and the UI is massively fragmented.

                          1. 4

                            The first thing I did when I got my hands on the first Jolla smartphone, back in the day, fresh out of the box, was download the then-newest Emacs tarball and build it. I was on the train at the time, commuting to work :-) https://flic.kr/p/je333q

                            1. 1

                              You haven’t lived until you’ve run BeOS on an actual BeBox. Love those blinkenlights.

                              1. 1

                                I have been reviving mine over the holidays, so I was quite surprised to see this story surface at around the same time. https://www.reddit.com/r/vintagecomputing/comments/eku19u/dusted_off_one_of_my_old_beboxes/

                                1. 1

                                  The dude has three, one being a Hobbit?! My jelly runneth over. Took me ages to find the one 133MHz I own.

                                  1. 1

                                    that dude is me, the username is the clue :-)

                              1. 13

                                I used BeOS as my primary OS for a year or so, eventually dual-booting with Linux and then dropping it altogether.

                                Many things about BeOS were sort of incredible. Booted in a couple seconds on the machines of the era, easily 5-10x more quickly than Linux. One of the “demos” was playing multiple MP3 files backwards simultaneously, a feat that nothing else could really do at the time, or multiple OpenGL applications in windows next to each other. The kernel really did multiprocessing in a highly responsive, very smooth way that made you feel like your machine was greased lightning, much faster than it felt under other OSes. This led to BeOS being used for radio stations, because nothing you were doing in the foreground stood a chance of screwing up the media playback.

                                BeOS had a little productivity suite, Gobe Productive. It had an interesting component embedding scheme, I guess similar to what COM was trying to be, so you just made a “document” and then fortified it with word processing sections or spreadsheet sections.

                                There were a lot of “funny” things about BeOS that were almost great. Applications could be “replicants,” and you could drag the app out of the window frame and directly onto your desktop. Realistically, there were only a couple for which this would be useful, like the clock, but it was sort of like what “widgets” would become in a few years with Windows and Mac OS X.

                                The filesystem was famous for being very fast and for having the ability to add arbitrary metadata to it. The mail client was really just a mail message viewer; the list of messages was just a Tracker window (like Finder) showing attributes for To, From, Subject, etc. Similarly, the media player was just able to play one file, if you wanted a playlist, you just used Tracker; the filetype, Title, Artist, Album, etc. were just attributes on the file. I’m not entirely sure how it parsed them out, probably through a plugin or something. You could do what we now call “smart searches” on Mac OS X by saving a search. These worked just like folders for all the apps.

                                The POSIX compatibility was only OK. I remember it being a little troublesome to get ports of Unix/Linux software of the era going. At the time, using a shittier browser than everyone else wasn’t really a major impediment to getting anything done, so usually I used NetPositive. There was a port of Mozilla, but it was a lot slower, and anyway, NetPositive gave you haiku if something went wrong.

                                There were not a huge number of applications for BeOS. I think partly it was a very obscure thing to program for. There were not a lot of great compatibility libraries you could use to easily make a cross-platform app with BeOS as a target. I wasn’t very skilled at C++ (still am not) but found trying to do a graphical app with BeOS and its libraries a pretty huge amount of work. Probably it was half or less the work of doing it in Windows, but you had to have separate threads for the app and the display and send messages between them, and it was a whole thing. Did not increase my love for C++.

                                All in all, it was a great OS for the time. So much of my life now is spent in Emacs, Firefox and IntelliJ that if I had those three on there I could use it today, but it was such an idiosyncratic platform I imagine it would have been quite difficult to get graphical Emacs on there, let alone the others. But perhaps it’s happening with Haiku.

                                1. 3

                                  the filetype, Title, Artist, Album, etc. were just attributes on the file. I’m not entirely sure how it parsed them out, probably through a plugin or something.

                                  Querying was built into the filesystem. There was a command-line query, too. So many applications became so much simpler with that level of support for queries, it was great.

                                  you had to have separate threads for the app and the display and send messages between them, and it was a whole thing

                                  Yeah, that was a downside, but it was very forward-thinking at the time.

                                  So much of my life now is spent in Emacs, Firefox and IntelliJ that if I had those three on there I could use it today

                                  Well, you’re almost in luck. Emacs is available – a recent version, too!

                                  IntelliJ is there, too, but 1- only the community edition, and 2- it’s a bit further behind in versions.

                                  Unfortunately, Firefox doesn’t have a Haiku port at this time. Rust has been ported, but there are still a boatload of dependencies that haven’t been. The included browser, WebPositive, is based on a (iirc, recent) version of webkit, fwiw, so it’s not antiquated.

                                  1. 2

                                    the problem with relying on additional file metadata for functionality in a networked world is that you have to find a way to preserve the metadata across network transfers. I also used BeOS for several years for daily everything. Networking in BeOS was practically an afterthought.

                                    1. 2

                                      Sure, and you need to be able to populate metadata for untagged files from the network.

                                      Fortunately, most modern file types have metadata in them, so discarding the fields outgoing doesn’t hurt, and populating them incoming isn’t too hard. IIRC, that sort of thing was generally part of the application. So, e.g., the IMAP sync app would populate your email files with metadata from the email header fields, the music player app would populate metadata from the mp3 or ogg info headers, etc.

                                      1. 2

                                        but then this becomes a schema problem. Next-gen ideas like tagging files pervasively with identical metadata regardless of type, for relating and ordering, die as soon as you tar it up and pass it through a system that doesn’t know about your attributes - unless you have arbitrary in-band metadata support, and then it becomes a discoverability and a taxonomy problem; and if you have it in multiple places you have to keep it synchronised and stable with regard to shallow copies like links. You can still have the support for it as a second layer of metadata, of course, and the ability to index and query otherwise extant metadata out of band is useful as an optimisation, but once you extend the idea of the file namespace to include foreign data, you lose out on ‘smart metadata’ as a first-class foundation. A similar thing happened with multi-fork files for MacOS.

                                        1. 1

                                          A similar thing happened with multi-fork files for MacOS.

                                          Sure, but it’s still so useful that when Apple rewrote their filesystem a couple years ago, they included support for resource forks. NTFS supports them, too, as does (iirc) the SMB protocol.

                                          Apple standard practice has moved to bundle directories for fork-requiring executables, sure, and that reduces those interop problems a little bit.

                                          I guess what I’m saying is: file forks are still widely supported, regardless of difficulty integrating with un*x filesystems. Since they’re still incredibly useful ways of interacting with file systems, I don’t see why we should avoid them.

                                1. 13

                                  BeOS was my primary operating system for a couple of years (I even bought the Professional Edition…I might still have the box somewhere). Did my research and built a box that only had supported hardware - dual Celeron processors, 17” monitor at 1024x768, and some relatively large disk for the time.

                                  It was great.

                                  1. 3

                                    It was great.

                                    It was - Very fast, very fun.

                                    1. 2

                                      Out of interest, what did you use it for?

                                      I remember downloading it and playing around with it (maybe it was small enough to boot from a floppy?) but I couldn’t do anything useful with it. Was a bit too young as well, I guess today I could make do better with unfamiliar stuff.

                                      1. 5

                                        It was my daily driver. 99% of my work at the time involved being connected to a remote device (routers and firewalls mostly), and BeOS could do that just fine.

                                        It was a great system. There hasn’t been a better one since.

                                        1. 3

                                          I had a triple-boot machine - Windows/Linux/BeOS - at that time. I used BeOS mainly to learn C++ programming. Their GUI toolkit was quite nice at the time - much nicer than MFC :)

                                        2. 1

                                          was it the Abit BP-6? I had two of those as well, for BeOS. Loved them almost as much as I loved a real bebox. Way faster too :-)

                                          1. 1

                                            Nah, all self-built, back when building your own machine could actually be significantly cheaper than buying a prebuilt one.

                                            1. 1

                                              the bp-6 is a motherboard. I hope that counts as self-built :-)

                                              1. 1

                                                Ah, my bad. I don’t remember the motherboard; this was 20 years ago. Sadly, I haven’t built my own since…probably 2002? I’m so out of the loop it’s not even funny.

                                                (Unless you count putting a Raspberry Pi in a little plastic case as “building your own machine”. If so, then…it’s still been a few years.)

                                                1. 1

                                                  oh that’s quite OK. The BP-6 was quite famous in that era for allowing SMP with celerons that were built to disallow it. It was quite a popular choice for x86 BeOS at the time.

                                        1. 5

                                          panic() is the equivalent of the exception mechanism many languages use to great effect. Idiomatically it’s a last resort, but it’s a superior mechanism in many ways (e.g. tracebacks for debugging, instead of Go’s idiomatic ‘here’s an error message, good luck finding where it came from’ default.)
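
                                          As an illustration of the traceback point: a recovered panic still carries the full call stack, which you can capture with runtime/debug.Stack. This is a hypothetical sketch, not code from the thread:

```go
package main

import (
	"fmt"
	"runtime/debug"
)

// fail panics deep in a call chain; the panic value travels up the
// goroutine's stack.
func fail() {
	panic("something went wrong")
}

// callFail recovers and captures both the panic value and the stack
// trace as it existed at the point of the panic.
func callFail() (msg string, stack string) {
	defer func() {
		if r := recover(); r != nil {
			msg = fmt.Sprint(r)
			stack = string(debug.Stack())
		}
	}()
	fail()
	return "", ""
}

func main() {
	msg, stack := callFail()
	fmt.Println("recovered:", msg)
	// The trace names main.fail and every frame above it - exactly
	// the "where did this come from" information a bare error lacks.
	fmt.Printf("captured %d bytes of stack trace\n", len(stack))
}
```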

                                          1. 5

                                            Go’s idiomatic ‘here’s an error message, good luck finding where it came from’

                                            I think the biggest problem here is that too often if err != nil { return err } is used mindlessly. You then run into things like open foo: no such file or directory, which is indeed pretty worthless. Even just return fmt.Errorf("pkg.funcName: %s", err) is a vast improvement (although there are better ways, such as github.com/pkg/errors or the new Go 1.13 error system).

                                            I actually included return err in a draft of this article, but decided to remove it as it’s not really a “feature” and how to effectively deal with errors in Go is probably worth an article on its own (if one doesn’t exist yet).

                                            1. 6

                                              it’s pretty straightforward to decorate an error to know where it’s coming from. The most idiomatic way to pass on an error in Go code is to decorate it, not pass it unmodified. You are supposed to handle errors you receive, after all.

                                              if err != nil {
                                                  return fmt.Errorf("%s: when doing whatever", err)
                                              }
                                              

                                              not the common misassumption

                                              if err != nil {
                                                  return err
                                              }
                                              

                                              in fact, the 1.13 release of Go formally adds error chains via a new Errorf verb, %w, which formalises wrapping error values in a manner similar to a few earlier library approaches, so you can interrogate the chain if you want to use it in logic (rather than string matching).

                                              1. 5

                                                It’s unfortunate IMO that interrogating errors using logic in Go amounts to performing a type assertion, which, while idiomatic and cheap, is something I think a lot of programmers coming from other languages will have to overcome their discomfort with. Errors as values is a great idea, but I personally find it to be a frustratingly incomplete mechanism without sum types and pattern matching, the absence of which I think is partly to blame for careless anti-patterns like return err.

                                                1. 3

                                                  You can now use errors.Is to test for a specific error value (and errors.As for a specific type), and they added error wrapping to fmt.Errorf. Same mechanics underneath, but easier to use. (You could also just do a type switch with a default case.)

                                                2. 4

                                                  I guess you mean

                                                  if err != nil {
                                                      return fmt.Errorf("error doing whatever: %w", err)
                                                  }
                                                  

                                                  but yes point taken :)

                                                  1. 3

                                                    Sure, but in other languages you don’t have to do all this extra work, you just get good tracebacks for free.

                                                    1. 1

                                                      I greatly prefer the pithy, domain-oriented error decoration that you get with this scheme to the verbose, obtuse set of files and line numbers that you get with stack traces.

                                                  2. 1

                                                    I built a basic Common-Lisp-style condition system atop Go’s panic/defer/recover. It is simple and lacking a lot of the syntactic advantages of Lisp, and it is definitely not ready for prime time, at all, but I think maybe there’s a useful core in there.

                                                    But seriously, it’s a hack.
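
                                                    The library itself isn’t shown here, but a minimal, hypothetical sketch of the core idea - a signal/handler pair built on panic/defer/recover - might look like this (all names invented; real CL conditions also offer restarts, which this lacks, since Go unwinds the stack before the handler runs):

```go
package main

import "fmt"

// condition is a hypothetical signal type; panicking with it plays
// the role of cl:signal, and a deferred recover plays the handler.
type condition struct{ name string }

// signal raises a condition by panicking with it.
func signal(name string) {
	panic(condition{name})
}

// withHandler runs body; if a condition is signalled, it is passed
// to handler instead of letting the panic escape.
func withHandler(handler func(condition), body func()) {
	defer func() {
		if r := recover(); r != nil {
			if c, ok := r.(condition); ok {
				handler(c)
				return
			}
			panic(r) // not one of ours: re-panic
		}
	}()
	body()
}

func main() {
	withHandler(
		func(c condition) { fmt.Println("handled:", c.name) },
		func() { signal("low-disk-space") },
	)
}
```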

                                                  1. 3

                                                    This article is getting a few things about git wrong. They claim git only supports ‘One check-out per repository’. Heard of git worktree?

                                                    They also claim git is only portable to POSIX, yet it runs fine on Windows with full line-ending support. (They achieve this by bundling tools like ls, ssh and cat, thereby not requiring the host OS to be POSIX.)

                                                    They claim SQLite is a superior storage method, yet it has a reputation for getting corrupted (probably the reason they run integrity checks all the time), handles concurrent access from multiple processes poorly, and almost all its column types are silently converted to string columns with no type checks.

                                                    1. 5

                                                      I don’t think they got this one wrong:

                                                      They also claim git is only portable to POSIX, yet it runs fine on Windows with full line-ending support. (They achieve this by including the tools like ls, ssh and cat, thereby not requiring the host OS to be posix)

                                                      From the article:

                                                      This is largely why the so-called “Git for Windows” distributions (both first-party and third-party) are actually an MSYS POSIX portability environment bundled with all of the Git stuff, because it would be too painful to port Git natively to Windows. Git is a foreign citizen on Windows, speaking to it only through a translator.

                                                      1. 1

                                                        this was also somewhat true of Mac OS, if you were working with its traditional approach to case insensitivity. As the article notes, this is almost by design. Git was built to facilitate Linux development, and it’s not surprising that Linux hosting is a prerequisite for that.

                                                    1. 3

                                                      What’s so special about platform.sh that it charges $50 for a tiny server?

                                                      1. 5

                                                        Well, based on the domain name, I assume it’s written entirely in Bash, so that probably takes some extra cycles.

                                                        1. 2

                                                          It’s not 50 USD for a server, but for a project. So if you had two “projects” (I’m guessing web sites/apps) it’d be 100 USD instead. I imagine the overhead is for if you don’t want to deal with AWS/Google yourself.

                                                          Their pricing model reminds me of Webflow.

                                                          1. 1

                                                            It’s not exactly like that. There’s a little more to a ‘project’ than just ‘a server’. The project model gets you an app (I think just one on the standard plan, but it can be more) connected to provisioned services (databases, search index, queues, whatever), plus a git server and some storage. Within your project plan you get a certain number of environments, which are branches (e.g. staging, feature branches, etc.). When you branch you can clone the whole setup - services, data, everything - and it can all be driven via git. So there is additional value, and a different workflow, compared to just provisioning some cloud servers.

                                                            1. 2

                                                              I think we are both saying the same thing :-)

                                                              Their site isn’t very clear (your description confirms things that I’ve guessed at from their site) but it sounds like you get a lot for your 50 USD. They’re taking care of CloudFront, ELB/ALB, CodeCommit/CodePipeline, DynamoDB/RDS, ElasticSearch, SQS etc. for you. If you set it all up yourself you’d undoubtedly pay less to AWS per month, but then you’d have to operate it all yourself.

                                                              For devs it sounds great if you don’t want to manage all that yourself (or don’t have a team that does it for you at work). It really does remind me of Webflow, which does a similar thing for content sites (i.e. they do everything for you including visual design tool, CMS, form creation & submission handling etc.).

                                                        1. 7

                                                          That depends a lot on what you want to use it for and what your personal tastes are like. As people have said in other threads, CL is a kitchen-sink language, and it was standardised a long time ago, which means it is very stable: code written decades ago is going to work unmodified in CL today. There are several high-quality implementations around. On the flip side, it has many warts.

                                                          Racket is a single implementation-defined language. On the other hand, if you learn Scheme, most of it just carries over into Racket, and you can also choose from a bevy of implementations depending on your requirements. It’s a clean and elegant language, but that also means many things are missing. For those, you’ll have to rely on SRFIs, portable libraries or implementation-specific extensions.

                                                          1. 3

                                                            while the stability argument is probably true from a high level perspective, I’ve run into a few problems with libraries that don’t want to build on older CL installations, e.g. if using the old sbcl that comes with debian, quicklisp systems don’t always build. So in practice, you still have to migrate things forward.

                                                            1. 1

                                                              while the stability argument is probably true from a high level perspective, I’ve run into a few problems with libraries that don’t want to build on older CL installations

                                                              It’s possible to write unportable, nonstandard Common Lisp, but relatively little care is required to write it properly.

                                                              if using the old sbcl that comes with debian, quicklisp systems don’t always build.

                                                              That’s entirely because Quicklisp isn’t written properly. If you ever take a look at the code, you’ll notice it’s leagues more complicated than it has any need to be, as merely one issue with it. Of course, Quicklisp hosting doesn’t even bother testing with anything that isn’t SBCL as of the last time I checked.

                                                              So in practice, you still have to migrate things forward.

                                                              This is wrong. All of my libraries work properly and will continue to work properly. Don’t believe that merely because some libraries aren’t written well, that none or a significant amount of them are. I’m inclined to believe most of the libraries are written by competent programmers and according to the standard.

                                                              1. 1

                                                                This is wrong. All of my libraries work properly and will continue to work properly. Don’t believe that merely because some libraries aren’t written well, that none or a significant amount of them are. I’m inclined to believe most of the libraries are written by competent programmers and according to the standard.

                                                                The last library you shared (the alternative to uiop:quit) is most definitely not written in portable Common Lisp, so, as /u/cms points out, the implementations may change their APIs and the code would need to be updated.

                                                                1. 1

                                                                  Firstly, it should be understood that a library with the sole purpose of working over differences of implementations in this manner is different from my other libraries, which don’t. Secondly, if you look at the documentation, I note that the library will merely LOAD properly, but may not actually exit the implementation, which is something one may want to test against, as it’s a feature caveat. Thirdly, if any implementation thinks about changing the underlying function, such as SBCL has already done once, I’d rather complain about the stupid decision than change my program.

                                                                  In any case, sure I could’ve explicitly mentioned that one library, but it disturbed the flow of the sentence and I figured these points were obvious enough, but I suppose not so.

                                                                2. 1

                                                                  That’s entirely because Quicklisp isn’t written properly

                                                                  It’s completely fair to say that things that do not build portably could be better written to do so. I would like to add that it is not Quicklisp per se where I had seen problems, but rather in building systems within it. Off the top of my head, ironclad and perhaps cffi both exhibited problems on older sbcl. I haven’t checked, but I think this would also be the case if they were just built with asdf, so I do not wish to imply Quicklisp introduced these problems. I think both of those libs are very tightly coupled to the host system libraries, and could be considered atypical Lisp in that sense.

                                                                  Probably I should have said: in practice you may have to migrate things forwards.

                                                                3. 1

                                                                  if using the old sbcl that comes with debian

                                                                  The problem is more likely due to the fact that you are using the version packaged by Debian instead of your SBCL being old. You should avoid all lisp software packaged by Linux distributions, they tend to give you nothing but trouble.

                                                                  However it is true that not all Lisp code is portable, especially with the implementation-compatibility shims that are becoming more common. And while one is likely to encounter code that uses implementation-specific extensions, there tends to be a fallback for when the feature is not available. As a data point, I’ve loaded software from before ASDF (that used MK:DEFSYSTEM) with little modification.

                                                                  1. 1

                                                                    Yes, that could well be so. It doesn’t really change the point that it’s not as straightforward as just assuming that if you have a working Lisp, everything you need will just be stable. I think we’re in agreement there. Also, I’m building standalone executables for 32-bit ARM; I’m not super-surprised that there are system-specific bugs in things like math / crypto primitives. FWIW I would favour CL for building anything myself, but not because I think stable dependencies are a moot point.

                                                                    (I did actually manage to work quite fine on debian’s ancient sbcl for quite a while so it’s not useless)

                                                              1. 3

                                                                I think some of it is because image-based development does not have good collaborative tools.

                                                                1. 5

                                                                  That’s always been a bit dubious (Smalltalk has had changesets since at least the late 80s), but it’s been truly false for a long time. Squeak had Monticello, VisualWorks had ENVY and StORE, and Pharo just uses Git straight-up these days. I’m not arguing images don’t have other issues with them, but collaboration isn’t one of them.

                                                                  1. 2

                                                                    completely fair point. I didn’t only mean source code control; I’m also thinking that the developer process - incrementally manipulating a running image - isn’t very easily mapped onto distributed working. Maybe it never was?

                                                                    e.g. are there workflows/tools where multiple developers push changes to a central image? Because that’s kind of the mapping there - if I’m writing C, I am diffing text files, compiling the changed ones into new objects, linking everything, running tests - and this extends quite naturally to continuous integration and automation for collaborators.

                                                                    When I’m working on an image-style system, I’m typically updating a running thing, usually interactively testing as I go. The ideal collaboration flow for this kind of thing would be to pull small upstream changes directly into my image, switch branches without resetting the world, that kind of thing.

                                                                    I don’t know very much about the detail of your counter-examples, but I did not mean to suggest it was impossible, so much as ungainly, which was my understanding.

                                                                    1. 3

                                                                      Sorry for responding so late; I know others won’t see this, but thought you deserved a response.

                                                                      I’m also thinking that the developer process, incrementally manipulating a running image isn’t very easily mapped onto distributed working, maybe never was?

                                                                      You do kind of have to decide if you’re gonna work in the classic Smalltalk mold, or if you’re going to work in a modern mold; that’s fair. It’s just that the modern mold is really common, to the point that relatively few people sculpt an app out of a Smalltalk image (which is closer to the original intent); most instead write Smalltalk code that really is the program.

                                                                      are there workflows/tools where multiple developers push changes to a central image ?

                                                                      This is in fact exactly how at least GNU Smalltalk and Pharo (which is to Smalltalk what Racket is to Scheme) work. E.g., Pharo’s Jenkins server works by just building off master constantly, just as any other project would. The only difference is that, rather than diffing or thinking in terms of files, you think in terms of changes to classes and methods. Behind the scenes, this is converted into files using a tool called Iceberg.

                                                                      The only place this system falls down is if you’re building constants directly in the image, rather than in code. E.g., if I were truly building a Smalltalk program in a traditional Smalltalk way, I might just read an image into a variable and then keep the variable around. That’s obviously not going to have a meaningful source representation; there might be a class variable called SendImage, but the contents it happens to have in my image won’t be serialized out. Instead, I’d have to have the discipline to store the source image alongside the code in the repository, and then have a class method called something like initializeImages that sets SendImage to the contents of that image file. In practice, this isn’t that difficult to do, and tools like CI can easily catch when you mess up.

                                                                      Whether this is working against or with the image system is debatable. I’ve used several image systems (Common Lisp and Factor being two big ones) that don’t suffer “the image problem”, but tools in the ilk of Smalltalk or Self are obviously different beasts.

                                                                      1. 1

                                                                        Thanks for the reply! I wish I had more smalltalk experience. Maybe some day.

                                                                  2. 2

                                                                    Smalltalk does have Monticello and Metacello. I’ve heard good things about them.

                                                                  1. 6

                                                                    I own a number of thinkpads, and also have a bench power supply that can source 5 A @ 20 V, so I can do some analysis next week by removing the battery and looking at the current consumption under various configurations and workloads.

                                                                    The power management system on modern laptops should be able to track power usage with fairly fine granularity.

                                                                    1. 3

                                                                      yes, actual measurement would be much more helpful than my anecdata. Although, even if I’ve just enabled more accurate estimate reporting, I’ll take it.

                                                                      1. 2

                                                                        I don’t have a thinkpad any more, but won’t most modern laptops’ battery management system give you a measurement of battery current draw when they’re disconnected from a charger? On my current Dell it’s at /sys/class/power_supply/BAT0/current_now. PowerTop shows this metric as well, I think.

                                                                        I would expect this could be as accurate as measuring the average draw with a bench supply, if not more accurate (the BMS needs a reasonably accurate measurement of battery discharge current for “gas gauge” state of charge estimation.)
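
                                                                        For reference, those sysfs files report micro-units, so turning them into watts is just a multiplication. A small hedged sketch (the exact files and units vary by driver - some batteries expose power_now in microwatts directly, so treat this as an illustration, not a universal recipe):

```go
package main

import "fmt"

// wattsFromSysfs converts the micro-amp and micro-volt readings found
// in /sys/class/power_supply/BAT0/{current_now,voltage_now} into watts.
// (The BAT0 path is an example; names and available fields differ
// between machines and kernel drivers.)
func wattsFromSysfs(currentMicroA, voltageMicroV int64) float64 {
	return float64(currentMicroA) * float64(voltageMicroV) / 1e12
}

func main() {
	// e.g. a reading of 800 mA at 11.1 V is about 8.9 W of draw
	fmt.Printf("%.2f W\n", wattsFromSysfs(800_000, 11_100_000))
}
```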

                                                                        1. 2

                                                                          I’ll take a look the next time I boot up my x220. I had planned on removing the battery and only operating it off the bench supply, so load wouldn’t be conflated with any power going directly to charging.

                                                                          1. 2

                                                                            Yeah, I got that - it’s a good idea, I just thought this option might also be useful to know about (has the similar-but-opposite problem that the battery’s internal current meter is not useful if external power is connected to the battery).

                                                                      1. 3

                                                                        My newer Linux systems use ip, and ifconfig docker0 down doesn’t change anything. What’s the equivalent command with ip?

                                                                        1. 2

                                                                          ip link set docker0 down, I think

                                                                        1. 5

                                                                          I think there’s something else going on.

                                                                          I’ve had this same problem, used the same solution, and several extra watts moved to some other device or task. After doing this repeatedly, I’ve seen the screen or sometimes the Ethernet claim to use four watts. The power usage is still there, but moves.

                                                                          I’ve seen this on four Thinkpad models I own. Either powertop is confused or something is masquerading as another device or process.

                                                                          Maybe the Intel management engine is mining Bitcoin?

                                                                          1. 3

                                                                            heh. The accounting is definitely obscure, but the reported drain drop for me is consistent. Early days though.

                                                                          1. 2

                                                                            I don’t feel I have any understanding of setv variable scoping.

                                                                            Although the author doesn’t say so explicitly, from an outsider’s perspective this seems to be Hy’s biggest negative: it doesn’t support let. It was removed from the language after several attempts to implement it correctly.

                                                                            1. 1

                                                                              Yes, that’s a succinct way of putting it. Indeed, when I’m using a Lisp I’m usually aware of dynamic / lexical scope to some degree, but as I say, I’m not really sure where Hy puts them. I’m sure it’s all documented and demonstrable, of course.

                                                                            1. 6

                                                                              I decided to spend a year away from Twitter. I hope this will nudge me into more side projects and experiments with solo web publishing.

                                                                              1. 4

                                                                                I found quitting Twitter was pretty easy once I uninstalled the app from my phone. On the desktop I still click the occasional link, but I no longer consume timelines, and I haven’t posted in over a year.

                                                                                Good luck!

                                                                              1. 2

                                                                                This is perfect timing. I was just wondering how to add Webmention support to my mostly static site.

                                                                                1. 2

                                                                                  Glad to be of service <3