1. 1

    It’d be nice if this had multi-sort so you could e.g. find out the cheapest machine with a given number of cores, but I guess you could always implement it yourself.

    1. 1

      ah great idea. I can implement it. The service is a very simple Golang service btw https://github.com/yeo/ec2.shop

    1. 1

      I haven’t dug into it much yet, but wouldn’t something akin to ccache help with dependency compilation times?

      1. 5

        It does to a degree; for example, Mozilla’s sccache is a ccache like tool with optional support for sharing build artefacts on cloud storage services which ships with support for the Rust toolchain (it works as a wrapper around rustc, so it should work even if you don’t use Cargo to build Rust code). Linking times can still be slow though, and obviously the cache isn’t useful if you’re running beta or nightly and frequently updating your compiler.
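
        For anyone who wants to try it, the basic setup is just an environment variable plus a stats command - a rough sketch, assuming sccache is already installed and on your PATH:

            export RUSTC_WRAPPER=sccache
            cargo build
            sccache --show-stats    # check cache hits and misses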

        1. 3

          Or using a build system like bazel or Nix.

        1. 27

          There’s a huge funnel problem for computer science at the moment. Go and Rust have some pretty serious evangelical marketing teams, but they are a drop in the ocean compared to the emergent ultramarketing behemoth that feeds JavaScript to the new developer.

          Part of this is that JS is constantly “new and modern” – with the implication that it’s a bandwagon that you’ll be safe on, unlike some of the old cobwebbed bandwagons. Constant change and “improvement” is itself a safety generator.

          Another part is that it’s so easy to get to hello, webpage. The sweet spot on the racket is enormous. Every computer including your phone comes with at least 1 and usually several JS interpreters. Frictionlessness drives adoption.

          The problem is that JS is, violently, garbage for most purposes. It’s a local maximum that has essentially everyone trapped, including the next generation. It’s not clear how we escape from this one.

          1. 15

            I feel about JS similarly to the way I felt about the x86 ISA taking over the world. “A local maximum that has everyone trapped”, that caused billions of dollars of R&D to be diverted into clever hardware, code generation, psychological treatment for programmers, etc. (I think the last thing is a joke, but I’m not sure.) One could even draw a parallel between the modern “micro-op” architectural approach to bypassing literal x86 in hardware and the WASM approach to bypassing literal JS in the browser.

            1. 10

              The longer it goes, the more correct this talk gets.

              1. 1

                I’m not sure any other ISA would have been better than x86 at the time x86 began to take off. Lots of companies were trying lots of things in the RISC world, and plenty of money was being spent on RISC hardware and compilers, and the x86 still began to take off. Intel had a lot of money? IBM had a lot of money, and IBM was working on RISC. HP had a lot of money, and HP was working on RISC. And so on.

                1. 2

                  Of the obvious choices available at the time x86 began to take off (1984ish), I would say the 680x0 was a better choice, demonstrated by watching the parallel evolution of the two. At least the extensions to 32-bit and virtual memory seemed a lot more straightforward on the 68k. They both would have run out of steam and gotten weird by now, but I feel like it would have been less weird.

              2. 2

                It’s not clear how we escape from this one.

                Simply wait. There are some better tools out there (for some value of the objective function “better.”)

                We’ve moved on from C, C++ and Java, all of which have had a similar level of death grip. JS is not invincible. The more users it attains, the more it suffers problems of perception due to the wide variance of quality. This gives rise to new opportunities.

                Really, I get a bit disappointed that everyone is content to rewrite everything every seven years, but, hey, it’s their life.

                1. 0

                  I take strong offense at this proclamation of JavaScript as garbage. What kind of an opinion is that? If you don’t know JavaScript, don’t call it garbage. If you do know it well, you’ll see that it isn’t any more garbage than any other modern language for most purposes for which it is used - at least on the backend.

                  The entire ecosystem has some issues, but every ecosystem does. And considering that node was created only a decade ago and has vastly more users than some “serious” things that have existed for ages, it’s obvious that it’s doing something right.

                  1. 27

                    I take strong offense at this proclamation of JavaScript as garbage.

                    You are offended on behalf of Javascript? People can dislike things that you like. It doesn’t do anything to you.

                    1. 4

                      Okay, imprecise choice of words on my part. I wanted to say that I strongly disagree with his statement; English isn’t my first language, so I didn’t think every word completely through. I think you can still get the message. Be a bit flexible and you’ll understand it.

                      1. 6

                        I’m not sure what other languages you know, but compared to most popular languages:

                        • Javascript has a weird type-system that doesn’t really pay off. Sure, they added inheritance recently (welcome to the 80s), but there are all kinds of inconvenient relics like having to do Object.keys(x) when in Python you can just do x.keys() (as in most popular languages)

                        • Javascript makes it much harder to catch errors. Its implicit typecasting (undefined + "1" == "undefined1", really??) and misuse of exceptions in the API mean that when something goes wrong, you’ll often find out a few functions later, and then struggle to zero in on the actual problem.

                        • The ecosystem is too fractured. The lack of a curated “standard library” sends novices into all sorts of hacky solutions, and makes other people’s code less familiar.

                        I could really go on for a long while, and there are tons of examples that I could give. I can say positive things too - it has a good JIT, lots of interesting libraries, and some nifty syntax features (while lacking others) - but overall I think it’s a really bad language that just happened to be in the right place at the right time.
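
                        A couple of these are easy to reproduce from a shell, if you have node installed:

                            node -e 'console.log(undefined + "1")'        # undefined1
                            node -e 'console.log(Object.keys({a: 1}))'    # [ 'a' ] - no x.keys() like Python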

                        1. 1

                          For the record, JavaScript has always had inheritance, it just didn’t have the class syntactic sugar that made it trivial to use until ES6.

                          1. 1

                            I wouldn’t call it bad. It just has bad parts. The == operator you mentioned is a trap for many beginners. On the other hand, there’s no inheritance problem, because the inheritance was always there - you just have to know how prototypes work. The type system without extra tools and some practice is a pain. Yet the closures, and say the prototypal inheritance and composability you get, are great.

                            JavaScript got a really lousy reputation because, unlike say C++ or Java, everybody wrote it - not just people who studied software engineering for five years and know how to use tools, keep structure, and not break the rules.

                            And it’s keeping the lousy reputation because it is still being done - randos adding jquery plugins for perfectly css-able animations even today.

                            Plus it’s got 25 years of backwards compatibility to maintain, so those bad parts never leave. Yes, it has been at the right place at the right time - but many others have tried and are still trying.

                            But despite all the bad parts and lack of standards, it’s still immensely flexible and productive, and for a lot of use cases it actually provides the best concepts for the job. Could you put something like Elixir in a browser engine? Or Java? Or Rust? Probably. Would it be better? Maybe, but I suspect not by much.

                            I don’t know, I’m probably not seeing it. I’m not claiming that it’s a great language or that it’s even good for everything and everyone. But people only see the ads and single-page apps where they don’t belong or where they simply don’t like them, ignore a lot of the directness that the language provides, look down on its flexibility, and don’t give the little language that could its deserved praise (where it does deserve it).

                            Once again, as I’ve said in another comment - I don’t claim it’s the best language ever. I’m just saying it is not garbage. These days the tooling and the entire ecosystem are mature, and there are no more language issues in writing a typical NodeJS application than in most other similar languages.

                            1. 1

                              Javascript has a weird type-system that doesn’t really pay off. Sure, they added inheritance recently (welcome to the 80s)

                              It added inheritance as a sop to people who didn’t know composition and refused to learn. Are you not satisfied with that? Does it have to become a pure Smalltalk clone for you? Composition is more powerful but it’s less familiar. Now JS has easy ways to do both.

                        2. 23

                          I have a deep understanding of JavaScript; I promise that my opinion is rooted in experience.

                          1. 6

                            Even though you personally find Node impressive, there might be people who don’t get impressed by languages (or any software really).

                            Being impressed or having respect (for a piece of software) makes no sense. It is just a tool that should be used or discarded at will. Not a cultural icon.

                            If you do know it well, you’ll see that it isn’t any more garbage than any other modern language for most purposes for which it is used - at least on the backend.

                            You have stated no arguments that support your conclusion. I believe that the jury is still out on the effect of language on productivity. All the studies I know of were totally botched.

                            Anecdotally, I have seen more poor Node backends than Python ones. I have also seen fewer high-quality Haskell backends, which confuses me greatly. Still, I wouldn’t go as far as to conclude that everything is the same crap.

                            1. 5

                              Denying the role of aesthetic judgement in programming is madness.

                              1. 1

                                I didn’t state I find Node impressive. I do love it for the productivity it brings me. And that is just the point of my reply. The guy claimed JavaScript is garbage, also without arguments. I’ve just pointed that out. No more garbage than any other language, in my opinion. If anybody brings arguments into this whole discussion, including the original post of the thread that claims people are pushing node, we can talk with arguments. Otherwise we’re all just giving opinions and estimations.

                                Anecdotally, I’m seeing much more crappy Java Enterprise and Spring backend code lately than Node. Likely because I work at a Java shop now. But I don’t claim that people are pushing Java exclusively for backend (even though they do at my company; we literally have 100% Java in my 800-person department), nor do I claim Java is garbage.

                                I hope that clarifies my objection to the claim of garbageness.

                                1. 4

                                  I think you are right that JavaScript brings great initial productivity, especially in the single-developer case. For many simple tasks, it is easy to use node, easy to use npm, and easy to deploy code. Where I have seen problems that I know are avoidable in other languages is in long-term maintenance and operation, which are vitally important parts of software engineering.

                                  Other languages have, for example, highly vetted and deep standard libraries of functions which are included in the system and move very slowly, which eliminates large classes of dependency management issues. Or, they have good type systems, which helps prevent common problems, especially in dynamic languages. Or they have exceptional tooling that enforces good practices. Or they are especially operable in production, with good intrinsic performance or observability characteristics.

                                  But the most important thing, to me, is that most other languages have a culture of remembering, but this is distinctly lacking in JavaScript. I attribute this to many JavaScript programmers starting from scratch in their software career inside JavaScript, and not having broader exposure to the vast surface of other kinds of software. And I attribute it also to the “stack overflow” method of programming, in which rather than engineer your software, you assemble it from the guesses of others, losing fidelity and getting more blurry with each iteration.

                                  It could sound like I’m being a pretentious jerk. I’ll confess to that. But having been a professional software engineer for now 32 years, and having seen probably a hundred languages and environments, the current JavaScript one is my least favorite. I appreciate that it’s one that you have significant personal investment in, but I would also encourage you to step out and up, and look around.

                                  1. 1

                                    Thanks, this also raises some quite relevant concerns. And I agree with these things. And now I have to fall back to the fact that despite these problems, it’s not garbage. I did step out and I try other languages and their frameworks, but I’m simply personally the most productive with it. And when I worked in teams where Node was used on the backend, the teams were professionals and mostly didn’t have problems with the language itself - maybe more or fewer than if it were done in, say, Spring or Rails - but I would say the typical problems are always stupid things like commas in the database where they break serialization somehow.

                                    And that is the point of my original comment, which was that I object to the claim that JavaScript is garbage. Maybe a lot of garbage is written in JavaScript today, but it in itself is not that, and it can provide a perfectly productive environment for doing our jobs.

                              2. 5

                                I wouldn’t call JavaScript “garbage” myself, but it sure has some problems that few other languages have. [1] + [2] resulting in "12" being a simple example. JavaScript definitely took the 90s/00s vogue of “all things should be dynamic!” more than a few steps too far.

                                considering that node was created only a decade ago and has vastly more users than some “serious” things that have existed for ages, it’s obvious that it’s doing something right.

                                Well, just because it’s popular doesn’t mean it’s good ;-)

                                I notice myself that I often use tools that I know, even when they’re not necessarily the best fit, simply because it’s so much easier. Yesterday I wrote some Ruby code for a Jekyll plugin and had to look up a lot of basic stuff and made a few simple mistakes along the way; there was quite a lot of overhead here. And I programmed Ruby for a living for 2 years (but that was 5 years ago, and it looks like I have forgotten much).

                                JavaScript is rather unique in the sense that almost everyone has to deal with it because it’s the only language supported by browsers[1], so there are a lot of people familiar with JS who would rather like to use it in other scenarios as well: not necessarily because it’s the “best”, but because it’s just easier as they won’t have to re-learn a whole bunch of stuff.

                                That’s not necessarily a bad thing, by the way; I’m all for giving people the tools to Get Shit Done. My point is merely that much of NodeJS’s popularity probably stems from factors other than intrinsic qualities of the language, runtime, and/or ecosystem.

                                [1]: Ignoring wasm, which isn’t quite ready for production-use, and “X-to-JS” compilers, which typically still require knowledge of JS.

                                1. 1

                                  Oh, I wasn’t claiming that the popularity of NodeJS comes from its quality. I’m just saying JavaScript is not garbage, which the commenter claimed without arguing for it.

                                  I mean, I know that [1] + [2] is “12” in JavaScript. But if you ask me how many times I’ve had a problem because of that in the last year, I would possibly be mistaken, but I would say 0 times.

                                  Again, it’s not the best language and it has its problems. But it’s not garbage.

                                  1. 2

                                    I mean, I know that [1] + [2] is “12” in JavaScript. But if you ask me how many times I’ve had a problem because of that in the last year, I would possibly be mistaken, but I would say 0 times.

                                    I find this interesting, because a few comments upwards, you state this:

                                    The typical problems are always stupid things like commas in the database where they break serialization somehow.

                                    It might just be me, but in no other language have I ever heard of specifically this problem. If you dig a little deeper, you’ll notice that there are a few fundamental issues in the language. But definitely the most prominent root issue is that types are treated extremely loosely, which causes all sorts of offshoot problems, like the two above.

                                    I always try to think a bit more holistically about this, and when you do, you start to see this pop up everywhere. Just yesterday I was debugging an issue with a colleague where he was accidentally throwing a JSON-encoded string at a program when in fact he should have been taking the JSON document, extracting a specific key’s string value, and sending that to the program. Basically the same thing: it’s too easy to mix up types.

                                    I occasionally see this in Python as well: when you accidentally have a string instead of a list of strings; because of the “everything is a sequence” abstraction it’s too easy to mix things up, and you end up chasing down the rabbit hole before you figure out where it’s going wrong. It’s better if things error out early when you try to do something that makes no sense.
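
                                    A one-liner version of that trap, for illustration (assumes python3):

                                        python3 -c 'names = "alice"; print([n for n in names])'    # ['a', 'l', 'i', 'c', 'e'] - meant ["alice"]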

                                    Having type-specific operators makes it easier to get your bearings when reading unfamiliar code, too, in my experience.

                                2. 2

                                  And considering that node was created only a decade ago and has vastly more users than some “serious” things that have existed for ages, it’s obvious that it’s doing something right.

                                  No, it means JS has a powerful monopoly (the web) and thus commands a huge mindshare. Some subset of those people want to take their painstakingly earned experience to the server.

                                  1. 0

                                    So, ‘heads I win, tails you lose’, eh? Good argument. /s

                                1. 8

                                  It’s good to know that 22 years later the libraries/training/popularity (aka - popularity) is still the impediment to… popularity. I’m seriously tempted to summarize the publication as a tautology, but yes, the implication goes the other way - a lack of popularity is in part due to libraries and training (and existing, self-propagating, popularity).

                                  Both F# and Scala try to evade the library issue by borrowing all the work from their respective platforms. Haskell, SML, and OCaml have taken the harder path of building their own community and growing their own libraries. I used to think this was why they failed to achieve as significant a user base but since then GoLang, Rust, and Dart^H^H^H have all gained popularity without borrowing from another community’s success. Both of those received major corporate backing (more than F# even, I believe). Is there an example of a language succeeding in the modern age that did not receive such support or build on an existing community like the JVM or .Net?

                                  1. 4

                                    Maybe Lua? I’m not a fan, but it seems to be popular. I think it fits into the “build on an existing community” niche. It’s used to customize Awesome WM, Wireshark, Nmap, Neovim, games, an internal thing at the company I’m working for… It’s everywhere.

                                    1. 2

                                      How “functional” is Lua on a scale from Haskell to JavaScript?

                                      1. 5

                                        Very much on the JavaScript end; Lua and JavaScript are almost like dialects of the same language in some ways.

                                        1. 1

                                          Lua is a bit different in that its equality semantics are almost exactly egal from Henry Baker’s paper, and its use of coroutines can encourage a more lexical approach.

                                          Not sure it counts as “modern age” as it predates Ruby and Python.

                                    2. 3

                                      I used to think this was why they failed to achieve as significant a user base but since then GoLang, Rust, and Dart^H^H^H have all gained popularity without borrowing from another community’s success.

                                      Who were all originally backed by entities with the means to prop things up with money and/or people. That I know of, none of those began outside of Google or Mozilla. They are the products of these entities wanting to solve problems that they think they have. As far as adoption, “Nobody Gets Fired For Buying Google” might as well be the new “Nobody Gets Fired For Buying IBM.”

                                      1. 1

                                        Younger me found this “popularity begets popularity” tendency to inspire cynicism. These days I’m a bit more at peace about it.

                                        Part of that is Rust, part of that is seeing that there are some impediments to FP languages, part of that is accepting that first impressions color a ton, and people fought tooth and nail against OOP for years, citing its difficulty. It took many years of just being around for people to think it was normal. Change is hard, but I’ve benefited enormously from keeping an open mind and looking for that challenge.

                                        1. 1

                                          I still find it mind boggling how much money Sun spent marketing Java:

                                          https://www.theregister.co.uk/2003/06/09/sun_preps_500m_java_brand/

                                        1. 3

                                          Reading between the lines here, they specifically mentioned DLLs in relation to Red Alert and C&C, so does this mean (especially given the reference to OpenRA) that the engine code itself is going to remain closed source? But that wouldn’t be compatible with the GPLv3?

                                          1. 2

                                            It’s not compatible for someone to recompile the game DLL, link it to the original engine binary, and then distribute that as a binary (no path to distribution under GPLv3).

                                            But it should be OK for OpenRA to link their existing GPLv3 engine to a game library derived from the released source code. (Similar comment here). I’m not super familiar with OpenRA but this seems like a good outcome (modern & compatible game engine, original game mechanics.)

                                            (I am a bit curious why EA wouldn’t release the whole thing at this point, though. Maybe they don’t own the entire engine copyright clearly, or maybe they don’t actually have the source any more?)

                                            1. 1

                                              Yeah, it’s clearly not an issue for OpenRA. What is interesting is the other implications - distributing the game in this fashion is somewhat dubious (although it can at least be worked around by claiming the DLLs shipped aren’t licensed under the GPLv3), and the suggested mod usecase becomes impermissible for redistribution as the GPLv3 is not compatible with linking against non-GPLv3 binaries.

                                          1. 4

                                              Wow. My first thought was: “who does this?” Then I realized how easy it is to do by accident: mock up some code with a hard-coded secret, read it from elsewhere later, and accidentally commit the older bytecode.

                                            This is really a study in sane defaults. Putting .pyc files alongside source is not a sane default. Why not stuff them into a profile dir controlled by Python?

                                            1. 2

                                              This is really a study in sane defaults. Putting .pyc files alongside source is not a sane default. Why not stuff them into a profile dir controlled by Python?

                                              Maybe because that’s traditionally where compilers put binaries built from source?

                                              I’d argue that it’s on the developer to understand the tools they are using (source control) and properly use features in that tool (e.g. .gitignore) to prevent leaking secrets.

                                              1. 3

                                                I think it’s hard to argue that it’s both convention and not a problem when C and C++ projects have largely moved on from doing this (and most encourage completely out of tree builds), and that Python 3.x moved .pyc files to a separate __pycache__ folder.

                                                1. 2

                                                  Maybe because that’s traditionally where compilers put binaries built from source?

                                                  In the case of a compiler, the user is explicitly requesting to generate an executable to run. In the case of Python, the interpreter just spits them out alongside source, and we have to tell users to ignore them. Intent makes all the difference here.

                                                  I’d argue that it’s on the developer to understand the tools they are using (source control) and properly use features in that tool (e.g. .gitignore) to prevent leaking secrets.

                                                  Agree, though I see Python’s .pyc files as incidental complexity which shouldn’t be foisted on users.

                                                  1. 2

                                                    I think you could just as easily blame git for not having sensible defaults for excluding compiled files by default.

                                                    1. 1

                                                      The problem with this is that git doesn’t (and arguably shouldn’t) know what a compiled file looks like for every single language in existence. It’s up to somebody to tell git that using .gitignore.

                                                      I don’t know whether there is a common way to set up a new python project (like rust’s cargo init or Haskell’s cabal init) which could be modified to (offer to) create a sensible python .gitignore as part of the process.

                                                      Btw, you can have a global .gitignore if you know you never want to accidentally commit a certain file pattern:

                                                      git config --global core.excludesfile '~/.gitignore'
                                                      

                                                      This might allow you to have a “sensible default” on your machine at least.
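
                                                       For Python specifically, two patterns in that file cover the bytecode case - a rough example:

                                                           printf '__pycache__/\n*.py[co]\n' >> ~/.gitignore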

                                                      1. 2

                                                        I don’t know whether there is a common way to set up a new python project

                                                        There’s nothing built in to the language’s packaging toolchain to do this. There are popular third-party options like cookiecutter. GitHub itself also provides stock .gitignore files for many languages, including Python, and supports creating a repository structure from a template.

                                                        1. 1

                                                          The global .gitignore is what I’m talking about. I’m sure there’s a reason, but why isn’t that set up with “common” (as defined by whoever makes a PR) compiled filename wildcards? Seems like the problem here is git is overly eager to commit everything (which makes sense!), but should it also not be git’s responsibility to make a reasonable effort to protect users from themselves?

                                                          Isn’t there something about how with great power comes great responsibility?

                                                          1. 1

                                                            Git is a tool, and it assumes that the user knows what they’re doing. To me, what you’re suggesting sounds like a request for a hammer which refuses to hit screws. Git offers many ways to stage and commit changes, most of which aren’t overly eager to commit everything. If users are learning to use things like git add -A by default, that’s more of an education problem. Yes, git could try to make it harder to do the wrong thing, but at a cost of removing features that can be useful. A better option could be to encourage new users to use one of the many git GUIs or TUIs, which make it more obvious what’s being committed and how to pick and choose which changes to commit.

                                                            To expand on this a bit, not excluding any files (except .git) by default is the “least surprising” option. If you ask git to add all files, it adds all files. If you asked it to add all files and it missed out one or two because it thought they looked like build artefacts from some obscure language you’ve never heard of, that would be confusing and frustrating.

                                                            1. 1

                                                              I agree with all of that, but I also find it strange that you put the onus of setting up a proper .gitignore file on Python (or any other language’s) tooling.

                                                              I’d argue that since git is not designed to hold build artifacts it is surprising that it does not, by default, attempt to prevent them from being committed. Sure, you can then argue about common vs obscure but that’s a bit of a silly argument since you could just say it’ll only cover languages that are popular enough by some arbitrary metric.

                                                1. 1

                                                  I’m very out of the loop on server hardware, so I didn’t realize NUMA systems are apparently so common these days. Interesting.

                                                  1. 5

                                                    NUMA is becoming increasingly common. Especially as the hyperscalers are primarily limited by real-estate footprint, it becomes more effective at scale to grow capacity by packing as many CPUs as thermals will allow. With AMD’s Zen parts there are even 1P NUMA systems now, as they attempt to make the most of economies of scale by manufacturing high-core-count parts as multi-chip modules.

                                                    1. 4

                                                      One thing I’ve seen in the past (not sure if it has since changed) with low-end rackmount boxes is that when I wanted to pack a lot of RAM into a single server, there were ranges where it was cheaper to put a second CPU in the server (even if you expected it to have zero utilisation) just to be able to use more DIMM slots ’cuz the less heavily stacked DIMMs were much cheaper.

                                                      AIUI if you have more than one CPU socket filled, you have NUMA.
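
                                                      If you want to check what a given box actually has, numactl (if installed) will show the topology on Linux:

                                                          numactl --hardware    # lists NUMA nodes, their CPUs, and per-node memory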

                                                    1. 6

                                                      I, unfamiliar with C++ and Rust, found the discussion on overflow checks difficult to follow, but I found this to be a helpful resource:

                                                      http://huonw.github.io/blog/2016/04/myths-and-legends-about-integer-overflow-in-rust/

                                                      In particular, I liked the list of PRs at the end where Rust’s debug-build checks helped catch real-world errors!

                                                      1. 1

                                                        In particular, I liked the list of PRs at the end where Rust’s debug-build checks helped catch real-world errors!

                                                        What do you mean by “PRs” here? Because I’m seeing this acronym used more and more frequently for what appears to be “bug report” (and not “pull request” or “peer review”, as I would presume), but I can’t make the connection.

                                                        1. 3

                                                          I’ve heard of PR being used for “Problem Report” in the FreeBSD community (I think this even predates the common use of GitHub). Not sure if this has led to some confusion.

                                                          1. 1

                                                            Pull requests are not GitHub’s invention. (If that’s where your comment about GitHub comes from.)

                                                            1. 1

                                                              They aren’t, but I’d argue that it’s what popularised the term amongst the wider programming community.

                                                          2. 2

                                                            PR stands for “problem report”. This usage predates GitHub, for example see FreeBSD documentation.

                                                            1. 1

                                                              Pull requests are not GitHub’s invention. (If that’s where your comment about GitHub comes from.)

                                                        1. 7

                                                          I’m working on reverse engineering the entire engine of an older commercial PC game. It has an interesting esoteric scripting VM and is a good specimen as it has binary artefacts for x86 Win32, OSX PPC32(!) and Linux i386(!!!) available, which was especially uncommon for the time.

                                                          1. 2

                                                            Which one ran on all three?

                                                            1. 4

                                                              It’s a fairly niche game, Creatures 3, a sort of virtual life sandbox game which was originally released for Windows PCs in 1999. It later got a downloadable free standalone expansion which added online features around 2001, at which time a concurrent in-house Linux port was released (which was very uncommon; Unreal Tournament is the only title of the same vintage I can think of that got a port). The Mac port was done much later (2004-2005) with a rerelease and was handled by an external company.

                                                              1. 2

                                                                Artificial Life was a pretty neat field when I looked at it. Vaguely remember Creatures. Cool that you’re working on something like that.

                                                            2. 2

                                                              I am also doing this but concentrating on a DOS game from 1989/1990. The game uses a virtual 8/16 bit processor which I think allowed it to run on DOS, Amiga, Apple IIGS, and C64. I am reversing the DOS version.

                                                            1. 13

                                                              However, using a system without notifications has been very pleasant. Nothing to distract you while you’re in the zone

                                                              However, I used this to my advantage and decided not to consume media on my laptop

                                                              Taking away audio and notifications feels more stupid than simple to me.

                                                              1. 7

                                                                well, in a saner world there wouldn’t be dbus required for a bluetooth stack and notifications. most linux distributions sure have the “looming npm left pad disaster” feeling for me.

                                                                1. 14

                                                                  I think dbus isn’t so bad; you need some way to do IPC, and dbus isn’t really worse than anything else I’ve seen. For example for notifications, how else should dunst get the information from another application to display anything? There are probably some things that could be done better in hindsight (there always are), but I don’t see any major concerns.
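
                                                                  As a concrete illustration, notify-send from libnotify is the sending side of exactly this: it makes a Notify method call on the org.freedesktop.Notifications service over the session bus, and a daemon like dunst picks it up and draws it.

                                                                      notify-send 'Hello' 'delivered over dbus'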

                                                                  If you really don’t like the reference implementation, then there is a published protocol and you can implement it yourself.

                                                                  I’m not sure where the author got the notion that dbus is “Poetterware”; with just 67 commits over an 11-year span he’s hardly a top contributor, and I don’t see his name on the specification doc either. Not that this is a very good argument in the first place.

                                                                  1. 9

                                                                    I think dbus isn’t so bad; you need some way to do IPC, and dbus isn’t really worse than anything else I’ve seen.

                                                                    To expand on that, dbus is basically the outcome of a unification of previous IPC systems, namely DCOP and Bonobo (essentially CORBA), and as such it was very welcome that not every DE decided to do its own thing and not be interoperable with any other software on any other DE. And dbus turned out a lot better than another (classic, Unixy, thus by definition good) protocol, ICCCM.

                                                                    1. 2

                                                                      I think dbus isn’t so bad; you need some way to do IPC, and dbus isn’t really worse than anything else I’ve seen. For example for notifications, how else should dunst get the information from another application to display anything? There are probably some things that could be done better in hindsight (there always are), but I don’t see any major concerns.

                                                                      just brainstorming here: file structure like /proc and inotify for listeners to avoid polling.
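
                                                                      roughly like this, as an untested sketch (needs inotify-tools):

                                                                          mkdir -p /tmp/notifications
                                                                          inotifywait -m -e create,modify /tmp/notifications &   # listener wakes on new message files, no polling
                                                                          echo 'hello' > /tmp/notifications/msg-1                # publisher drops a message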

                                                                      If you really don’t like the reference implementation, then there is a published protocol and you can implement it yourself.

                                                                      haha :)

                                                                      Not that this is a very good argument in the first place.

                                                                      it’s politics after all. poettering isn’t exactly known for being non self righteous.

                                                                      1. 2

                                                                        Yeah, there are other ways of doing it for sure, such as sockets and whatnot. Every approach has their upsides and downsides. Doing a FS implementation would make it less cross-platform for example (dbus as a userspace daemon runs just as well on OpenBSD), and sockets are unstructured.

                                                                        Reasonable people can disagree on what the “best” way is, and different usages have different requirements, but I don’t see anything that’s so bad about dbus that makes it so terrible that “in a saner world there wouldn’t be dbus required for a bluetooth stack and notifications”? I never looked much at the dbus internals, but as a user I’ve never had to think about it, which is a good thing.

                                                                        1. 3

                                                                          Doing a FS implementation would make it less cross-platform for example (dbus as a userspace daemon runs just as well on OpenBSD), and sockets are unstructured.

                                                                          it doesn’t even have to be a kernel implementation, again brainstorming: with 9p/fuse for example it could be a directory where publishers and subscribers create files, with subscribers reading blockingly; the publisher would then write to these files. 9p/fuse should be mountable nearly everywhere.

                                                                          i’m not sure that i’d say freedesktop.org targets their standards being cross platform. that dbus is cross platform is because it’s so old. maybe it’s just a lucky coincidence.

                                                                          1. 4

                                                                            The important question to ask is “Why is this protocol better?”. Why is an IPC daemon that you interact with through fuse meaningfully better than an IPC daemon you interact with over sockets?

                                                                            1. 2

                                                                              because files are the default mechanism, everything can use files without library magic as long as the contents of the files are easily parseable (like lines of tab-delimited fields). easily parseable as a requirement has the side effect that things don’t get features thrown in.

                                                                              but it’s kind of moot to discuss these things here lately. even technical people are satisfied with the status quo, the web is the new os, so it’s really irrelevant in the end.

                                                                              1. 2

                                                                                It’s not really library magic, it’s a defined protocol over a socket.

                                                                                Text files are certainly simpler on first glance, but unstructured text has its own problems of complexity and inefficiency.

                                                                                It is not at all clear to me that a dbus-fs would be better, or that the 9p way is actually better in general than the way BSD and others prefer APIs and libraries.

                                                                                1. 1

                                                                                  I don’t know about dbus specifically, but how many different implementations are there which aren’t wrappers and implement every feature?

                                                                                  Text files are certainly simpler on first glance, but unstructured text has its own problems of complexity and inefficiency.

                                                                                  so, \n delimited rows and \t separated columns are unstructured? also, don’t miss that part of the structure can be represented as the file hierarchy, not file contents.

                                                                                  It is not at all clear to me that a dbus-fs would be better, or that the 9p way is actually better in general than the way BSD and others prefer APIs and libraries.

                                                                                  i was just thinking out loud about how things could have been different. But most of the time here the answer (at least between the lines) is that I’d better be grateful that these things are barely working now (systemd, dbus, the web, etc.) because now we finally are “on par” with windows. It sometimes feels like inadequacy paired with stockholm syndrome. it’s just not really fun to discuss this way. sorry. “better” is too hard to define in absolute terms.

                                                                                  nb: i’m a bit tired and grumpy ;)

                                                                                  1. 2

                                                                                    dbus allows transmitting all kinds of binary data. Trees and images and objects aren’t so easily expressed as tab delimited tables. That means that there has to be a richer format. If you don’t specify one then you just end up with something adhoc and that’s even more work.

                                                                                    I think Dan Luu makes a good argument about this: https://danluu.com/cli-complexity/

                                                                                    I think it’s good to want to make things better, but there are genuine problems with the way that UNIX and 9p worked and there are advantages to the way things like dbus work.

                                                                                    Similarly with the web, loads of people grumping, often with very little appreciation for what the web actually is or how to do better. I don’t like a lot of the modern web either, but there are important use cases that are not fulfilled by just turning the clock back. The web gives us an application platform that works almost everywhere and is fairly well sandboxed. That’s a huge development.

                                                                                    1. 1

                                                                                      I think it’s good to want to make things better, but there are genuine problems with the way that UNIX and 9p worked and there are advantages to the way things like dbus work.

                                                                                      yes, they aren’t perfect of course. i just think that most of the current technologies are trying to solve too much, leading to exploding complexity, where things should better be compartmentalized. this isn’t universally so, from what i’ve seen about wayland, it seems like an improvement over x11 for the most part. dbus is kind-of middle-ground, i think it is too complex, but there could be much worse things in terms of complexity.

                                                                                      Similarly with the web, loads of people grumping, often with very little appreciation for what the web actually is or how to do better. I don’t like a lot of the modern web either, but there are important use cases that are not fulfilled by just turning the clock back. The web gives us an application platform that works almost everywhere and is fairly well sandboxed. That’s a huge development.

                                                                                      well, the web “won” because it solved cross-platform ui and getting around shitty firewalls. it’s a hack to keep decades of hacks usable :)

                                                                            2. 2

                                                                              inotify is itself a Linux specific API, though, so now you need an abstraction layer for that.

                                                                              1. 1

                                                                                see that “brainstorming”? please also read my other comment for a maybe portable solution

                                                                                sorry, read that on mobile, you are referring to my first suggestion. i don’t know enough about file-change-notification implementations across systems, but this seems like it could be reasonably abstracted.

                                                                        2. 2

                                                                          I’m not sure where the author got the notion that dbus is “Poetterware”,

                                                                          Because of the tight systemd integration. It’s a silly association fallacy and not a technical argument by any means.

                                                                          1. 2

                                                                            i guess the association is due to freedesktop.org choices of “standard” technologies

                                                                    1. 38

                                                                      Are people really still whining about this?!?

                                                                      Python 2 is open source free software and you’re a software developer. Grab the code, build it yourself, and keep running Python 2 as long as you want. Nobody is stopping you.

                                                                      This is even more silly because Python2 was ALREADY DEPRECATED in 2011 when the author started his project.

                                                                      1. 5

                                                                        /Are people really still using this argument?!?/ Just because software, packages and distributions don’t cost money doesn’t mean that people don’t use them and have expectations from them. In fact, that is exactly why they were provided in the first place. This “you should have known better” attitude is totally counterproductive because it implies that if you want any kind of stability or support with some QoS you should not use free/open-source software. I don’t think any of us want to suggest that. It would certainly not do most open source software justice.

                                                                        1. 9

                                                                          This “you should have known better” attitude is totally counterproductive because it implies that if you want any kind of stability or support with some QoS you should not use free/open-source software.

                                                                          My comment doesn’t imply that, though. In fact, as I pointed out, the author can still download Python2 and use it if he wants to. Free to use does not imply free support, and I think it’s a good thing for people to keep in mind.

                                                                          Furthermore, I don’t think a “you should have known better” attitude is out of line towards somebody who ignored 10 years of deprecation warnings. What did he think was going to happen? He had 10 years of warning - he really should have known better…

                                                                          1. 1

                                                                            If you argue with the 10 years of warning, you’re missing the point.

                                                                            The point is not that there was no time to change it. The point is that it shouldn’t need change at all.

                                                                          2. 13

                                                                            Just because software, packages and distributions don’t cost money doesn’t mean that people don’t use them and have expectations from them

                                                                            Haven’t there been a few articles recently about people being burnt out from maintaining open source projects? This seems like the exact kind of entitled attitude that I think many of the authors were complaining about. I’m sure there would be plenty of people to maintain it for you if you paid them, but these people are donating their time. Expecting some developer to maintain software deprecated in 2011 for you is absurd.

                                                                            1. 1

                                                                              Yeah, I’ve read a few of those articles, too. And don’t get me wrong I’m not trying to say that things should be this way. A lot of open source work deserves to be paid work!

                                                                              But I also don’t think there is anything entitled about this point of view. It’s simply pragmatic: people make open source software, want others to use it, and that is why they support and maintain it. Then the users become dependent. Trouble ensues when visions diverge or no more time can be allocated for maintenance.

                                                                              1. 9

                                                                                At the same time, it’s not like a proprietary software vendor that you staked your entire business on. The source code to Python 2 isn’t going anywhere. Just because the PSF and your Linux distribution decided to stop maintaining and packaging an ancient version doesn’t mean you can’t continue to rely on some company (or yourself!) to maintain it for you. For instance, Red Hat will keep updating Python 2 for RHEL until June 2024.

                                                                                And as crazy as it might seem to have to support software yourself, consider that the FreeBSD people kept a 2007 version of GCC in their build process until literally this week. That’s 13 years where they kept it working themselves. It’s not like it’s hard to build and package obsolete userspace software; nothing is going to change in the way Linux works that would prevent you from running Python 2 on it in five years (unlike most system software which might make more assumptions about the system it’s running on).

                                                                                Some amount of gratuitous change is worth getting worked up about. For example, it’s a well-known issue in fast-moving ecosystems like JavaScript that you might not be able to get your old project to build with new dependency versions if you step away for a year. That’s a problem.

                                                                                I, for one, am extremely glad that it’s now okay for library authors to stop maintaining Python 2 compatibility. The alternative would have been maintaining backwards compatibility using something like a strict mode (JavaScript, Perl) or heavily encouraging only using a modern subset of the language (C++). The clean break that Python made may have alienated some people with legacy software to keep running, but it moved the entire ecosystem forwards.

                                                                                1. 1

                                                                                  The source code to Python 2 isn’t going anywhere. Just because the PSF and your Linux distribution decided to stop maintaining and packaging an ancient version doesn’t mean you can’t continue to rely on some company (or yourself!) to maintain it for you.

1. Some distros are eager to make python launch python3. This is vanity-driven and hostile to having Python 2 and 3 side by side (with 2 coming from a non-distro source).
2. By not keeping Python 2 open to maintenance by willing parties in the obvious place (at the PSF), and by being naming-hostile to people doing it elsewhere in a way that not only maintains but adds features, the PSF is making it harder than it has to be to pool effort for the continued maintenance of Python 2.
                                                                                  1. 2

It’s arguably more irresponsible to keep implicitly pushing Python 2.x as the “default” Python by continuing to refer to it by the python name, out of deference to “not breaking things”, when it is explicitly unmaintained.

                                                                            2. 7

                                                                              it implies that if you want any kind of stability or support with some QoS you should not use free/open-source software

                                                                              If you want support with guarantees attached you shouldn’t expect to get that for free. If you are fine with community/developer-provided support with no guarantees attached, then free software is fine.

                                                                              I think being deprecated for a decade before support being ended is pretty amazing for free community-provided support, to be honest.

                                                                          1. 4

                                                                            Lots of good things were originally unintended or semi-intended results of technical limitations. The /usr split is still a good idea today even if those technical limitations no longer exist. It’s not a matter of people not understanding history, or of people not realising the origins of things, but that things outgrow their history.

Rob’s email is, in my opinion, quite condescending. It reads as though everyone else is just ignorantly cargo-culting their filesystem hierarchy. Or perhaps not? Perhaps people kept the split because it was useful? That seems a bit more likely to me.

                                                                            1. 19

                                                                              I’m not sure it is still useful.
In fact, some Linux distributions have moved to a “unified /usr/bin” structure, where /bin, /sbin, and /usr/sbin are all simply symlinks (for compatibility) to /usr/bin. Background on the archlinux change.

                                                                              1. 2

                                                                                I’m not sure it is still useful.

                                                                                I think there’s a meaningful distinction there, but it’s a reasonable decision to say ‘there are tradeoffs for doing this but we’re happy with them’. What I’m not happy with is the condescending ‘there was never any good reason for doing this and anyone that supports it is just a cargo culting idiot’ which is the message I felt I was getting while reading that email.

In fact, some Linux distributions have moved to a “unified /usr/bin” structure, where /bin, /sbin, and /usr/sbin are all simply symlinks (for compatibility) to /usr/bin. Background on the archlinux change.

                                                                                I’m not quite sure why they chose to settle on /usr/bin as the one unified location instead of /bin.

                                                                                1. 14

That wasn’t the argument though. There was a good reason for the split (they filled up their hard drive), but it became a non-issue as hardware quickly advanced. Unless you were privy to those details of the OS’s development history, of course you would copy this filesystem hierarchy in your Unix clone. Cargo-culting doesn’t make you an idiot, especially when you lack design rationale documentation and source code.

                                                                                  1. 2

                                                                                    … it’s a reasonable decision to say ‘there are tradeoffs for doing this but we’re happy with them’. What I’m not happy with is the condescending ‘there was never any good reason for doing this and anyone that supports it is just a cargo culting idiot’ which is the message I felt I was getting while reading that email.

                                                                                    Ah. Gotcha. That seems like a much more nuanced position, and I would tend to agree with that.

                                                                                    I’m not quite sure why they chose to settle on /usr/bin as the one unified location instead of /bin

I’m not sure either. My guess is that since “other stuff” was sticking around in /usr, they might as well put everything in there. /usr being able to be a single distinct mount point that could ostensibly be mounted read-only may have had some bearing too, but I’m not sure.
                                                                                    Personally, I think I would have used it as an opportunity to redo hier entirely into something that makes more sense, but I assume that would have devolved into endless bikeshedding, so maybe that is why they chose a simpler path.

                                                                                    1. 3

My guess is that since “other stuff” was sticking around in /usr, they might as well put everything in there. /usr being able to be a single distinct mount point that could ostensibly be mounted read-only may have had some bearing too, but I’m not sure.

                                                                                      That was a point further into the discussion. I can’t find the archived devwiki entry for usrmerge, but I pulled up the important parts from Allan.

                                                                                      Personally, I think I would have used it as an opportunity to redo hier entirely into something that makes more sense, but I assume that would have devolved into endless bikeshedding, so maybe that is why they chose a simpler path.

                                                                                      Seems like we did contemplate /kernel and /linker at one point in the discussion.

                                                                                      What convinced me of putting all this in /usr rather than on / is that I can have a separate /usr partition that is mounted read only (unless I want to do an update). If everything from /usr gets moved to the root (a.k.a hurd style) this would require many partitions. (There is apparently also benefits in allowing /usr to be shared across multiple systems, but I do not care about such a setup and I am really not sure this would work at all with Arch.)

                                                                                      https://lists.archlinux.org/pipermail/arch-dev-public/2012-March/022629.html

Evidently, we also had a request to symlink /bin/awk to /usr/bin/awk for distro compatibility.

                                                                                      This actually will result in more cross-distro compatibility as there will not longer be differences about where files are located. To pick an example, /bin/awk will exist and /usr/bin/awk will exist, so either hardcoded path will work. Note this currently happens for our gawk package with symlinks, but only after a bug report asking for us to put both paths sat in our bug tracker for years…

                                                                                      https://lists.archlinux.org/pipermail/arch-dev-public/2012-March/022632.html

And the bug: https://bugs.archlinux.org/task/17312

                                                                                2. 18

                                                                                  Sorry, I can’t tell from your post - why is it still useful today? This is a serious question, I don’t recall it ever being useful to me, and I can’t think of a reason it’d be useful.

                                                                                  1. 2

                                                                                    My understanding is that on macOS, an OS upgrade can result in the contents of /bin being overwritten, while the /usr/local directory is left untouched. For that reason, the most popular package manager for macOS (Homebrew) installs packages to /usr/local.

                                                                                    1. 1

                                                                                      I think there are cases where people want / and /usr split, but I don’t know why. There are probably also arguments that the initramfs/initrd is enough of a separate system/layer for unusual setups. Don’t know.

                                                                                      1. 2

                                                                                        It’s nice having /usr mounted nodev, whereas I can’t have / mounted nodev for obvious reasons. However, if an OS implements their /dev via something like devfs in FreeBSD, this becomes a non-issue.

                                                                                        1. 2

Isn’t /dev its own mountpoint anyway?

                                                                                          1. 1

                                                                                            It is on FreeBSD, which is why I mentioned devfs, but idk what the situation is on Linux, Solaris and AIX these days off the top of my head. On OpenBSD it isn’t.

                                                                                            1. 2

Linux mounts a devtmpfs there by kernel default.

                                                                                    2. 14

                                                                                      The complexity this introduced has far outweighed any perceived benefit.

                                                                                      1. 13

                                                                                        I dunno, hasn’t been useful to me in the last 20 years or so. Any problem that it solves has a better solution in 2020, and probably had a better solution in 1990.

                                                                                        1. 6

                                                                                          Perhaps people kept the split because it was useful? That seems a bit more likely to me.

                                                                                          Do you have a counter-example where the split is still useful?

                                                                                          1. 3

                                                                                            The BSDs do have the related /usr/local split which allows you to distinguish between the base system and ports/packages, which is useful since you may want to install different versions of things included in the base system (clang and OpenSSL for example). This is not really applicable to Linux of course, since there is no ‘base system’ to make distinct from installed software.

                                                                                            1. 3

                                                                                              Doesn’t Linux have the same /usr/local split? It’s mentioned in the article.

                                                                                              1. 5

I tend to reach for /opt/my-own-prefix-here (or per-package), myself, mainly to make it clear what it is, and to avoid any risk of clobbering something else in /usr/local (like if it’s a BSD). It’s also in the FHS, so pedants can’t tell you you’re doing it wrong.

                                                                                                1. 4

                                                                                                  It does - this is generally used for installing software outside the remit of the package manager (global npm packages, for example), and it’s designated so by the FHS which most distributions follow (as other users have noted in this thread), but it’s less prominent since most users on Linux install very little software not managed by the package manager. It’s definitely a lot more integral in BSD-land.

                                                                                                  1. 3

                                                                                                    […] since most users on Linux install very little software not managed by the package manager

The Linux users around me still do heaps of ./configure && make install, but I see your point when contrasted against the rise of PPAs, Docker, and nodenv/rbenv/pyenv/…

                                                                                                    1. 3

Yeah, I do tons of configure-and-make-install stuff, sometimes for things that are also in the distro - and the split of /usr/local is sometimes useful because it means that if I attempt a system update, my custom stuff isn’t necessarily blasted.

                                                                                                      But the split between /bin and /usr/bin is meh.

                                                                                                2. 1

                                                                                                  That sounds sensible. Seems like there could be a command that tells you the difference. Then, a versioning scheme that handles the rest. For example, OpenVMS had file versioning.

                                                                                            1. 11

                                                                                              Nice to see Crystal out in the wild (the compiler is written in Crystal).

                                                                                              1. 2

                                                                                                As a Rubyist, it’d be nice to see a write-your-own-compiler guide written in Crystal (or Ruby, for that matter).

                                                                                                1. 3

                                                                                                  Like this?

                                                                                                  1. 2

                                                                                                    Yes, thanks!

                                                                                                2. 1

I wonder if this will hobble adoption somewhat on Windows until proper Crystal win32 target support arrives, though. You can run Crystal apps in WSL, but more steps = fewer users.

                                                                                                  1. 1

                                                                                                    Oh yeah, I forgot that they’re still working on Windows support.

                                                                                                1. 0

                                                                                                  The app is called MacPlayer and works thanks to the magic of Spotify Connect. The speaker itself streams and plays the music, and the Mac simply tells the speaker which song to play (as well as volume, current playlist, shuffle mode and other settings). Communication is over Wifi.

So this is not an actual player which decodes the stream and plays it through the Mac’s audio hardware, only the UI controller. This is very annoying in these kinds of projects lately - same for “Slack client on C64!!!!!”, which is actually a VT100 telnet client communicating with an actual Slack client on a Raspberry Pi. And so on, in modern programming too (like people calling their borderless website an “application”)…

In other words, people want to be “cool” and be seen as ultimate heroes, but they are scared of the actual hard code and the special challenges of the platform they’re targeting, which is sad and will most likely turn out badly in the future.

                                                                                                  1. 13

In other words, people want to be “cool” and be seen as ultimate heroes, but they are scared of the actual hard code and the special challenges of the platform they’re targeting, which is sad and will most likely turn out badly in the future.

                                                                                                    I’m disappointed by this. By the same logic, since all the playlists and music is stored on remote computers in the first place, the official Spotify players themselves aren’t really apps, just attempts to be “cool.”

                                                                                                    Older computers had many task-specific peripherals to get the job done. That’s why Apple IIs, early PCs, and even cheaper computers like the Commodore 64 had so many expansion ports. This to me feels very much in the spirit of how people programmed those computers. The C64 doesn’t have enough power to do speech synthesis? Fine, here’s a cartridge. Your PC can’t make real sound with just the PC speaker? Fine, here’s a Yamaha DSP. Or in this case, the Mac doesn’t have enough CPU to decode MP3s and handle the DRM and reencrypt to HTTPS? Fine, we’ll use a couple separate specialty components for those things.

                                                                                                    You’re both discounting what was accomplished here and being unrealistic about how it would’ve been done with an actual Mac SE back in the day. I’m sad to see such negativity here. This forum is usually much more appreciative of this kind of fun hack.

                                                                                                    1. 1

                                                                                                      I’m disappointed by this. By the same logic, since all the playlists and music is stored on remote computers in the first place, the official Spotify players themselves aren’t really apps, just attempts to be “cool.”

It’s stored remotely, but it’s played locally, just like you would use a regular player with, for example, an SMB share. The same thing on this particular Mac would be completely okay if you lacked a hard disk.

                                                                                                      Older computers had many task-specific peripherals to get the job done. That’s why Apple IIs, early PCs, and even cheaper computers like the Commodore 64 had so many expansion ports. This to me feels very much in the spirit of how people programmed those computers. The C64 doesn’t have enough power to do speech synthesis? Fine, here’s a cartridge. Your PC can’t make real sound with just the PC speaker? Fine, here’s a Yamaha DSP. Or in this case, the Mac doesn’t have enough CPU to decode MP3s and handle the DRM and reencrypt to HTTPS? Fine, we’ll use a couple separate specialty components for those things.

This isn’t correct at all. I mean, it is, but only in a fast-food mentality where you don’t even try to work a bit harder on a particular problem and instead pay someone else to do it for you.

                                                                                                      The C64 doesn’t have enough power to do speech synthesis?

Of course it does, and it performs really well. The software is a patched SAM & Reciter made to work with Polish. The VIC-II is disabled during speech to free up DMA bandwidth for cartridge access, but nothing prevents you from precalculating a speech sample and putting it in RAM, as some games (remember Impossible Mission?) did at build time.

                                                                                                      Your PC can’t make real sound with just the PC speaker?

Oh dude. There are numerous more and less creative ways to play regular PCM-based tunes on the PC speaker, and I didn’t even look deeper than the first YouTube link. The methods are so diverse you can even play sound using only an Atari XL/XE video output chip (GTIA, not to be confused with the actual graphics chip, ANTIC) and fancy interrupt handling.

                                                                                                      Or in this case, the Mac doesn’t have enough CPU to decode MP3s and handle the DRM and reencrypt to HTTPS?

This particular Mac SE/30 has a Motorola 68030@16MHz and 1 to 128 megs of RAM (I assume a modest quality-of-life upgrade, 4MB at least). That’s absolutely enough to decode MP3s (with some assumptions about buffering, of course), and on the Amiga it was even more than needed, leaving you some cycles for your office work.

You might be right about HTTPS though - due to its ever-increasing complexity (because why the fuck not), everything below Pentium II-grade x86 is out of its league. On the other hand, OpenSSH with SSH2 and recent ciphers works acceptably on Amigas with a 68060 without blocking other tasks too much.

So some sort of SSL-stripping proxy in the middle would be acceptable. But that wouldn’t be tied to this particular software and could be used with anything else. The software running on the Mac wouldn’t be tied to outside bits of code, just requiring an HTTPS endpoint which you can provide however you want.

                                                                                                      You’re both discounting what was accomplished here

Yes, because it’s just a mockup and requires an actual modern machine to do all the work. In the 90s, everyone would have called this software “lame”. Huge expectations, little code, pretending to be something it’s not.

                                                                                                      and being unrealistic about how it would’ve been done with an actual Mac SE back in the day.

Absolutely not. 80% of the regular Spotify client’s feature set could be done on that Mac (and even more on later 68k MacOS workstations). But it requires an actual understanding of the target platform and - what’s seriously missing here - some true love.

You see, all that retrocomputing stuff isn’t about showing off and acting cool. It’s about the platforms themselves. People are doing excellent things to prove their beloved machines can still be relevant and do modern work without offloading it onto a Raspberry Pi hidden under the table to gather internet points on Reddit and the orange site. People are dedicated to a particular line or brand or model they used in the past; they can utilize and show off its unique features instead of stripping everything down to the least common denominator.

I think this particular project, while “looking cool”, is actually disrespectful to the Macs. This guy could do the same thing on a PC, Amiga, C64, Atari or even a ZX Spectrum - it requires only monochrome bitmap graphics, a keyboard, and a serial port to communicate with the ESP8266 (which even does HTTPS on its own).

                                                                                                    2. 3

Pretty close to my reaction too. All the audio decoding and network streaming is handled by some very fancy speakers; HTTP-based API auth is delegated to a phone app. So it’s mostly a visual gimmick… but still cool. I don’t blame them for not wanting to put the (much more difficult) technical effort into achieving (much lower quality) “authentic” sound.

                                                                                                      I used 68k Macs long ago, but it’s not really my scene. I didn’t even know there was a wifi card for the SE/30.

                                                                                                      1. 3

There wasn’t; that’s also trickery. There’s a dongle with an ESP8266 which plugs into the serial port and provides an extended AT command set so it can act more or less like a dialup modem.

                                                                                                        1. 12

                                                                                                          Trickery feels like the wrong word there. That sounds like a damn good hack to me.

                                                                                                          1. 1

“Fake” is more appropriate, because it strips the running computer down to a dumb terminal and discards all its unique features, so it doesn’t matter which machine you have at the end of the day.

You can surely set up a TCP stack on a C64 or Atari. Why, you can even get a working 802.11 stack on an Amiga, and it doesn’t require such lame solutions.

                                                                                                          2. 2

I’m not sure about that. In the picture, it sure looks like there’s a wifi antenna coming out the back, and nothing plugged in to the serial ports. The author linked to https://github.com/antscode/MacWifi which mentions some devices. I recall having a 68040 PowerBook with onboard ethernet (via a ridiculous dongle) which could use wifi PCMCIA cards, so it doesn’t seem all that far-fetched. I’m not going to dig any further, though. I waste enough time on even older machines!

                                                                                                            1. 1

                                                                                                              I need more information about this! Do you have a link?

                                                                                                              1. 1

Just duck “esp8266 serial WiFi”.

                                                                                                            2. 1

                                                                                                              I wonder if any of the 68k Macs even have enough power to decode AAC…

                                                                                                              1. 2

Spotify uses AAC only on macOS/iOS, from what I remember; other platforms get Vorbis or MP3, depending on the chosen quality.

                                                                                                                1. 2

I imagine that even 128kbps MP3 is a big struggle for a 68030 on its own, but I wasn’t aware they still served some music as MP3.

                                                                                                          1. 32

                                                                                                            To me the big deal is that Rust can credibly replace C, and offers enough benefits to make it worthwhile.

                                                                                                            There are many natively-compiled languages with garbage collection. They’re safer than C and easier to use than Rust, but by adding GC they’ve exited the C niche. 99% of programs may work just fine with a GC, but for the rest the only practical options were C and C++ until Rust showed up.

                                                                                                            There were a few esoteric systems languages or C extensions that fixed some warts of C, but leaving the C ecosystem has real costs, and I could never justify use of a “weird” language just for a small improvement. Rust offered major safety, usability and productivity improvements, and managed to break out of obscurity.

                                                                                                            1. 38

Ada provided everything except ADTs and linear types, including seamless interoperability with C, 20 years before Rust. Cyclone was Rust before Rust, and it was abandoned in a state similar to the one Rust was in when it took off. Cyclone is dead, but Ada got a built-in formal verification toolkit in its latest revision - for some, that alone can be a reason to pick it over anything else for a new project.

I have nothing against Rust, but the reason it’s popular is that it came at the right time, in the right place, from a sufficiently big-name organization. It’s one of many languages based on those ideas that, fortunately, happened to succeed. And no, when it first got popular it wasn’t really practical. None of these points makes Rust bad. One should just always see the bigger picture, especially when it comes to heavily hyped things. You need to know the other options to decide for yourself.

                                                                                                              Other statically-typed languages allow whole-program type inference. While convenient during initial development, this reduces the ability of the compiler to provide useful error information when types no longer match.

Only in languages that cannot unambiguously infer the principal type. Whether to trade that off for ad hoc polymorphism support is subjective.

                                                                                                              1. 15

I saw Cyclone when it came out, but at the time I dismissed it as “it’s C, but weird”. It had the same basic syntax as C, but added lots of pointer sigils. It still had the same C preprocessor and the same stdlib.

Now I see it had a feature set much closer to Rust’s (tagged unions, patterns, generics), but Rust “sold” them better. Rust used these features for Result, which is a simple yet powerful construct. Cyclone could have done the same, but didn’t: it kept nullable pointers and added Null_Exception.
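To make that contrast concrete, here’s a minimal sketch (mine, not Cyclone’s or the parent comment’s; parse_port is a hypothetical name) of the Result pattern - the failure case is part of the return type, so the caller can’t just forget it the way a nullable pointer lets you:

```rust
use std::num::ParseIntError;

// The error is in the signature; there is no null to dereference.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    // The compiler forces both cases to be handled before the value is usable.
    match parse_port("8080") {
        Ok(port) => println!("port = {}", port),
        Err(e) => eprintln!("bad port: {}", e),
    }
}
```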

                                                                                                                1. 12

                                                                                                                  Ada provided everything except ADTs and linear types

Unfortunately for this argument, ADTs, substructural types and lifetimes are more exciting than that “everything except”. Finally, the stuff that is supposed to be easy in theory is actually easy in practice, like not using resources you have already cleaned up.

                                                                                                                  Ada got a built-in formal verification toolkit in its latest revision

                                                                                                                  How much of a usability improvement is using these tools compared to verifying things manually? What makes types attractive to many programmers is not that they are logically very powerful (they are usually not!), but rather that they give a super gigantic bang for the buck in terms of reduction of verification effort.

                                                                                                                  1. 17

                                                                                                                    I would personally not compare Ada and Rust directly as they don’t even remotely fulfill the same use-cases.

                                                                                                                    Sure, there have been languages that have done X, Y, Z before Rust (the project itself does not lay false claim to inventing those parts of the language which may have been found elsewhere in the past), but the actual distinguishing factor for Rust that places it into an entirely different category from Ada is how accessible and enjoyable it is to interact with while providing those features.

If you’re in health or aeronautics, you should probably be reaching for the serious, deep toolkit provided by Ada, and I’d probably side with you in saying those people should have been doing that for the last decade. But Ada is really not for the average engineer. It’s an amazing albeit complex language that not only represents a long history of incredible engineering but also a very real barrier to entry, one simply incomparable to Rust’s.

                                                                                                                    If, for example, I wanted today to start writing from scratch a consumer operating system, a web browser, or a video game as a business venture, I would guarantee you Ada would not even be mentioned as an option to solve any of those problems, unless I wanted to sink my own ship by limiting myself to pick from ex-government contractors as engineers, whose salaries I’d likely be incapable of matching. Rust on the other hand actually provides a real contender to C/C++/D for people in these problem spaces, who don’t always need (or in some cases, even want) formal verification, but just a nice practical language with a systematic safety net from the memory footguns of C/C++/D. On top of that, it opens up these features, projects, and their problem spaces to many new engineers with a clear, enjoyable language free of confusing historical baggage.

                                                                                                                    1. 6

                                                                                                                      Have you ever used Ada? Which implementation?

                                                                                                                      1. 15

                                                                                                                        I’ve never published production Ada of any sort and am definitely not an Ada regular (let alone pro) but I studied and had a fondness for Spark around the time I was reading “Type-Driven Development with Idris” and started getting interested in software proofs.

In my honest opinion, the way the base Ada language is written (simple, and plain-operator heavy) ends up lending itself really well to extension languages, but it can also make it difficult for beginners to distinguish the class of concept in use at times, whereas Rust’s syntax draws a clear and immediate distinction between blocks (the land of namespaces), types (the land of names), and values (the land of data). In terms of cognitive load, then, it feels as though these two languages communicate at different levels: Rust communicates in the mode of raw values and their manipulation through borrows, while the lineage of Ada languages communicates at a level that, in my amateur Ada-er view, centers on expressing properties of your program (and I don’t just mean the Spark stuff, obviously). I wasn’t even born when Ada was created, so I can’t say for sure without becoming an Ada historian (not a bad idea…), but this seems like a product of Ada’s heritage (just as Rust is so obviously written to look like C++).

To try to clarify this ramble of mine: in my schooling experience, many similarly young programmers are almost exclusively taught to program at an elementary level of abstract instructions with the details removed, and then, after being taught a couple of type-level incantations, get a series of algorithms and their explanations thrown at their face. Learning to think of their programs in terms of expressing properties of those programs’ operations becomes a huge step out of that starting box (one some don’t leave long after graduation). I think something Rust’s syntax does well (if possibly by mistake) is fool the amateur user into expressing properties of their programs by accident, while that expression becomes part of what seems like just a routine to get to the meat of a program’s procedures. It feels to me that expressing those properties is intrinsic to speaking Ada, and thus presents a barrier intrinsic to the programmer’s understanding of their work - one which, given a different popular curriculum, could probably be rendered as weak as paper to break through.

                                                                                                                        Excuse me if these thoughts are messy (and edited many times to improve that), but beyond the more popular issue of familiarity, they’re sort of how I view my own honest experience of feeling more quickly “at home” in moving from writing Rust to understanding Rust, compared to moving from just writing some form of Ada, and understanding the program I get.

                                                                                                                    2. 5

                                                                                                                      Other statically-typed languages allow whole-program type inference. While convenient during initial development, this reduces the ability of the compiler to provide useful error information when types no longer match.

Only in languages that cannot unambiguously infer the principal type. Whether to trade that off for ad hoc polymorphism support is subjective.

                                                                                                                      OCaml can unambiguously infer the principal type, and I still find myself writing the type of top level functions explicitly quite often. More than once have I been guided by a type error that only happened because I wrote the type of the function I was writing in advance.

                                                                                                                      At the very least, I check that the type of my functions match my expectations, by running the type inference in the REPL. More than once have I been surprised. More than once that surprise was caused by a bug in my code. Had I not checked the type of my function, I would catch the bug only later, when using the function, and the error message would have made less sense to me.

                                                                                                                      1. 2

                                                                                                                        At the very least, I check that the type of my functions match my expectations, by running the type inference in the REPL

                                                                                                                        Why not use Merlin instead? Saves quite a bit of time.

                                                                                                                        That’s a tooling issue too of course. Tracking down typing surprises in OCaml is easy because the compiler outputs type annotations in a machine-readable format and there’s a tool and editor integrations that allow me to see the type of every expression in a keystroke.

                                                                                                                        1. 2

                                                                                                                          Why not use Merlin instead? Saves quite a bit of time.

I’m a dinosaur who hadn’t even taken the time to learn of Merlin’s existence. I’m kinda stuck in Emacs’ Tuareg mode. It works for me for small projects (all my OCaml projects are small).

                                                                                                                          That said, my recent experience with C++ and QtCreator showed me that having warnings at edit time is even more powerful than a REPL (at least as long as I don’t have to check actual values). That makes Merlin look very attractive all of a sudden. I’ll take a look, thanks.

                                                                                                                    3. 5

                                                                                                                      Rust can definitely credibly replace C++. I don’t really see how it can credibly replace C. It’s just such a fundamentally different way of approaching programming that it doesn’t appeal to C programmers. Why would a C programmer switch to Rust if they hadn’t already switched to C++?

                                                                                                                      1. 43

                                                                                                                        I’ve been a C programmer for over a decade. I’ve tried switching to C++ a couple of times, and couldn’t stand it. I’ve switched to Rust and love it.

                                                                                                                        My reasons are:

                                                                                                                        • Robust, automatic memory management. I have the same amount of control over memory, but I don’t need goto cleanup.
                                                                                                                        • Fearless multi-core support: if it compiles, it’s thread-safe! rayon is much nicer than OpenMP.
• Slices are awesome: no array-to-pointer decay, and they work great with substrings (see the sketch after this list).
                                                                                                                        • Safety is not just about CVEs. I don’t need to investigate memory murder mysteries in GDB or Valgrind.
                                                                                                                        • Dependencies aren’t painful.
                                                                                                                        • Everything builds without fuss, even when supporting Windows and cross-compiling to iOS.
• I can add two signed numbers without UB, and checking whether they overflow isn’t a party trick (also covered in the sketch below).
                                                                                                                        • I get some good parts of C++ such as type-optimized sort and hash maps, but without the baggage C++ is infamous for.
                                                                                                                        • Rust is much easier than C++. Iterators are so much cleaner (just a next() method). I/O is a Read/Write trait, not a hierarchy of iostream classes.
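For the slice and overflow bullets above, here’s a minimal sketch of what they look like in practice (my example, not the commenter’s code; everything used is in the standard library):

```rust
fn main() {
    // A substring is a borrowed slice: the length travels with the
    // pointer, so nothing ever decays to a bare pointer.
    let s = "hello, world";
    let sub: &str = &s[7..]; // "world"
    println!("{} ({} bytes)", sub, sub.len());

    // Overflow checking is an ordinary method call on integers:
    let a = i32::MAX;
    match a.checked_add(1) {
        Some(v) => println!("sum = {}", v),
        None => println!("overflow detected"),
    }

    // And wraparound is spelled out explicitly when it's intended:
    println!("{}", a.wrapping_add(1)); // prints i32::MIN
}
```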
                                                                                                                        1. 6

                                                                                                                          I also like Rust and I agree with most of your points, but this one bit seems not entirely accurate:

                                                                                                                          Fearless multi-core support: if it compiles, it’s thread-safe! rayon is much nicer than OpenMP.

                                                                                                                          AFAIK Rust:

                                                                                                                          • doesn’t guarantee thread-safety — it guarantees the lack of data races, but doesn’t guarantee the lack of e.g. deadlocks;
                                                                                                                          • guarantees the lack of data races, but only if you didn’t write any unsafe code.
                                                                                                                          1. 20

                                                                                                                            That is correct, but this is still an incredible improvement. If I get a deadlock I’ll definitely notice it, and can dissect it in a debugger. That’s easy-peasy compared to data races.

                                                                                                                            Even unsafe code is subject to thread-safety checks, because “breaking” of Send/Sync guarantees needs separate opt-in. In practice I can reuse well-tested concurrency primitives (e.g. WebKit’s parking_lot) so I don’t need to write that unsafe code myself.

Here’s an anecdote: I wrote some single-threaded batch-processing spaghetti code. Since each item was processed separately, I decided to parallelize it. I changed iter() to par_iter() and the compiler immediately warned me that in one of my functions I’d used a third-party library, which used an HTTP client library, which used an event loop library, which stored some event loop data in a struct without synchronization. It pointed out exactly where and why the code was unsafe, and after fixing it I had an assurance the fix worked.
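For anyone who hasn’t seen rayon, here’s a minimal sketch of the one-word change being described (my example, assuming the rayon crate as a dependency - not the anecdote’s actual code):

```rust
use rayon::prelude::*;

fn main() {
    let items: Vec<u64> = (0..1_000_000).collect();

    // Sequential version:
    let total: u64 = items.iter().map(|n| n * n).sum();

    // Parallel version: iter() becomes par_iter(). This compiles only
    // because the closure and everything it touches are Send + Sync;
    // otherwise rustc points at the offending type, as described above.
    let par_total: u64 = items.par_iter().map(|n| n * n).sum();

    assert_eq!(total, par_total);
}
```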

                                                                                                                            1. 6

                                                                                                                              I share your enthusiasm. Just wanted to prevent a common misconception from spreading.

Here’s an anecdote: I wrote some single-threaded batch-processing spaghetti code. Since each item was processed separately, I decided to parallelize it. I changed iter() to par_iter() and the compiler immediately warned me that in one of my functions I’d used a third-party library, which used an HTTP client library, which used an event loop library, which stored some event loop data in a struct without synchronization. It pointed out exactly where and why the code was unsafe, and after fixing it I had an assurance the fix worked.

                                                                                                                              I did not know it could do that. That’s fantastic.

                                                                                                                            2. 9

                                                                                                                              Data races in multi-threaded code are about 100x harder to debug than deadlocks in my experience, so I am happy to have an imperfect guarantee.

                                                                                                                              guarantees the lack of data races, but only if you didn’t write any unsafe code.

                                                                                                                              Rust application code generally avoids unsafe.

                                                                                                                              1. 4

                                                                                                                                Data races in multi-threaded code are about 100x harder to debug than deadlocks in my experience, so I am happy to have an imperfect guarantee.

                                                                                                                                My comment was not a criticism of Rust. Just wanted to prevent a common misconception from spreading.

                                                                                                                                Rust application code generally avoids unsafe.

That depends on who wrote the code. And unsafe blocks can cause problems that show up in places far from the unsafe code itself. Meanwhile, “written in Rust” is treated as a badge of quality.

                                                                                                                                Mind that I am a Rust enthusiast as well. I just think we shouldn’t oversell it.

                                                                                                                              2. 7

                                                                                                                                guarantees the lack of data races, but only if you didn’t write any unsafe code.

                                                                                                                                As long as your unsafe code is sound it still provides the guarantee. That’s the whole point, to limit the amount of code that needs to be carefully audited for correctness.
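A minimal sketch of that audit boundary (a hand-rolled version of the standard library’s split_at_mut, shown purely as an illustration): the unsafe block is confined to a few lines, and the rest of the program only ever sees the safe signature.

```rust
// Split a mutable slice into two non-overlapping mutable halves.
// The borrow checker can't prove the halves don't alias, so a little
// unsafe is needed - but only these few lines need auditing.
fn split_at_mut_demo(v: &mut [u8], mid: usize) -> (&mut [u8], &mut [u8]) {
    assert!(mid <= v.len()); // the invariant the raw-pointer code relies on
    let len = v.len();
    let ptr = v.as_mut_ptr();
    unsafe {
        (
            std::slice::from_raw_parts_mut(ptr, mid),
            std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}

fn main() {
    let mut buf = [1u8, 2, 3, 4];
    let (a, b) = split_at_mut_demo(&mut buf, 2);
    a[0] = 9;
    b[0] = 9;
    assert_eq!(buf, [9, 2, 9, 4]);
}
```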

                                                                                                                                1. 2

                                                                                                                                  I know what the point is. But proving things about code is generally not something that programmers are used to or good at. I’m not saying that the language is bad, only that we should understand its limitations.

                                                                                                                                2. 1

                                                                                                                                  I find it funny that any critique of Rust needs to be prefixed with a disclaimer like “I also like Rust”, to fend off the Rust mob.

                                                                                                                              3. 11

                                                                                                                                This doesn’t really match our experience: a lot of organisations are investigating replacements for C, and Rust is on the table.

                                                                                                                                One advantage that Rust has is that it actually lands between C and C++. It’s pretty easy to move towards a more C-like programming style without having to ignore half of the language (this comes from the lack of classes, etc.).

                                                                                                                                Rust is much more “C with Generics” than C++ is.
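
                                                                                                                                A sketch of what that style looks like: plain data and free functions, with a generic where C would reach for macros or void pointers.

                                                                                                                                  // Plain struct, no classes, no inheritance.
                                                                                                                                  struct Packet {
                                                                                                                                      len: u16,
                                                                                                                                      payload: [u8; 64],
                                                                                                                                  }

                                                                                                                                  // Free function over plain data, as in C...
                                                                                                                                  fn payload_len(p: &Packet) -> usize {
                                                                                                                                      p.len as usize
                                                                                                                                  }

                                                                                                                                  // ...but generic where C would use void* or a macro.
                                                                                                                                  fn max_of<T: PartialOrd + Copy>(xs: &[T]) -> Option<T> {
                                                                                                                                      xs.iter().copied().fold(None, |best, x| match best {
                                                                                                                                          Some(b) if b >= x => Some(b),
                                                                                                                                          _ => Some(x),
                                                                                                                                      })
                                                                                                                                  }

                                                                                                                                  fn main() {
                                                                                                                                      let p = Packet { len: 3, payload: [0; 64] };
                                                                                                                                      assert_eq!(payload_len(&p), 3);
                                                                                                                                      assert_eq!(max_of(&[3, 1, 4]), Some(4));
                                                                                                                                  }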

                                                                                                                                We currently see a high interest in the embedded world, even in places that skipped adopting C++.

                                                                                                                                I don’t think the fundamental difference in approach is as large as you make it (sorry for the weak rebuttal, but that’s hard to quantify). But also: approaches are changing, so that’s less of a problem for us, as long as we are effective at arguing for our approach.

                                                                                                                                1. 2

                                                                                                                                  It’s just such a fundamentally different way of approaching programming that it doesn’t appeal to C programmers. Why would a C programmer switch to Rust if they hadn’t already switched to C++?

                                                                                                                                  Human minds are sometimes less flexible than rocks.

                                                                                                                                  That’s why we still have that stupid Qwerty layout: popular once for mechanical (and historical) reasons, used forever since. As soon as the mechanical problems were fixed, Sholes himself devised a better layout, which went unused. Much later, Dvorak devised another better layout, and it is barely used today. People thinking in Qwerty simply can’t bring themselves to take the time to learn the superior layout. (I know: I’m in a similar situation, though my current layout is not Qwerty.)

                                                                                                                                  I mean, you make a good point here. And that’s precisely what makes me sad. I just hope this lack of flexibility won’t prevent C programmers from learning superior tools.

                                                                                                                                  (By the way, I would choose C over C++ in many cases; I think C++ is crazy. But I also know ML (OCaml), a bit of Haskell, a bit of Lua… and that gives me perspective. Rust as I see it is a blend of C and ML, and though I have yet to write Rust code, the code I have read so far was very easy to understand. I believe I can pick up the language pretty much instantly. In my opinion, C programmers who only know C, awk, and Bash are unreasonably specialised.)

                                                                                                                                  1. 1

                                                                                                                                    I tried to switch to DVORAK twice. Both times I started to get pretty quick after a couple of days but I cheated: if I needed to type something I’d switch back to QWERTY, so it never stuck.

                                                                                                                                    The same is true of Rust, incidentally. Tried it out a few times, was fun, but then if I want to get anything useful done quickly it’s just been too much of a hassle for me personally. YMMV of course. I fully intend to try to build something that’s kind of ‘C with lifetimes’, a much simpler Rust (which I think of as ‘C++ with lifetimes’ analogously), in the future. Just have to, y’know, design it. :D

                                                                                                                                    1. 3

                                                                                                                                      I too was tempted at some point to design a “better C”. I need:

                                                                                                                                      • Generics
                                                                                                                                      • Algebraic data types
                                                                                                                                      • Type classes
                                                                                                                                      • Coroutines (for I/O and network code, I need a way out of raw poll(2))
                                                                                                                                      • Memory safety

                                                                                                                                      With the possible exception of lifetimes, I’d end up designing Rust, mostly.
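
                                                                                                                                      For comparison, a compact sketch of how Rust covers each item on that list except coroutines: an enum for the algebraic data type, a trait for the type class, a bounded generic, and no unsafe anywhere, so the memory-safety guarantee holds.

                                                                                                                                        // Algebraic data type.
                                                                                                                                        enum Shape {
                                                                                                                                            Circle(f64),
                                                                                                                                            Rect(f64, f64),
                                                                                                                                        }

                                                                                                                                        // "Type class" as a trait.
                                                                                                                                        trait Area {
                                                                                                                                            fn area(&self) -> f64;
                                                                                                                                        }

                                                                                                                                        impl Area for Shape {
                                                                                                                                            fn area(&self) -> f64 {
                                                                                                                                                match self {
                                                                                                                                                    Shape::Circle(r) => std::f64::consts::PI * r * r,
                                                                                                                                                    Shape::Rect(w, h) => w * h,
                                                                                                                                                }
                                                                                                                                            }
                                                                                                                                        }

                                                                                                                                        // Generics bounded by the trait; no unsafe anywhere.
                                                                                                                                        fn total_area<T: Area>(shapes: &[T]) -> f64 {
                                                                                                                                            shapes.iter().map(Area::area).sum()
                                                                                                                                        }

                                                                                                                                        fn main() {
                                                                                                                                            let shapes = [Shape::Circle(1.0), Shape::Rect(2.0, 3.0)];
                                                                                                                                            println!("{}", total_area(&shapes));
                                                                                                                                        }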

                                                                                                                                      1. 2

                                                                                                                                        I agree that you need some way of handling async code, but I don’t think coroutines are it, at least not in the async/await form. I still feel like the ‘what colour is your function?’ stuff hasn’t been solved properly. Any function with a callback (sort with a key/cmp function, filter, map, etc.) needs an async_ version that takes a callback and calls it with await. Writing twice as much code that’s trivially different by adding await in some places sucks, but I do not have any clue what the solution is. Maybe it’s syntactic. Maybe everything should be async implicitly and you let the compiler figure out when it can optimise things down to ‘raw’ calls.

                                                                                                                                        shrug

                                                                                                                                        Worth thinking about at least.
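
                                                                                                                                        The duplication, made concrete as a sketch: two combinators whose bodies differ only by an .await.

                                                                                                                                          use std::future::Future;

                                                                                                                                          // Synchronous colour.
                                                                                                                                          fn map_all<T, U>(items: Vec<T>, f: impl Fn(T) -> U) -> Vec<U> {
                                                                                                                                              items.into_iter().map(f).collect()
                                                                                                                                          }

                                                                                                                                          // Asynchronous colour: same logic, trivially different.
                                                                                                                                          #[allow(dead_code)] // not called here: running it needs an executor
                                                                                                                                          async fn map_all_async<T, U, Fut>(items: Vec<T>, f: impl Fn(T) -> Fut) -> Vec<U>
                                                                                                                                          where
                                                                                                                                              Fut: Future<Output = U>,
                                                                                                                                          {
                                                                                                                                              let mut out = Vec::with_capacity(items.len());
                                                                                                                                              for item in items {
                                                                                                                                                  out.push(f(item).await); // the only real difference
                                                                                                                                              }
                                                                                                                                              out
                                                                                                                                          }

                                                                                                                                          fn main() {
                                                                                                                                              assert_eq!(map_all(vec![1, 2, 3], |x| x * 2), vec![2, 4, 6]);
                                                                                                                                          }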

                                                                                                                                        1. 4

                                                                                                                                          Function colors are effects. There are two ways to solve this problem:

                                                                                                                                          1. To use polymorphism over effects. This is what Haskell does, but IMO it is too complex.
                                                                                                                                          2. To split large async functions into smaller non-async ones, and dispatch them using an event loop.

                                                                                                                                          The second approach got a bad reputation due to its association with “callback hell”, but IMO this reputation is undeserved. You do not need to represent the continuation as a callback. Instead, you can

                                                                                                                                          1. Define a gigantic sum type of all possible intermediate states of asynchronous processes.
                                                                                                                                          2. Implement each non-async step as an ordinary small function that maps intermediate states (not necessarily just one) to intermediate states (not necessarily just one).
                                                                                                                                          3. Implement the event loop as a function that, iteratively,
                                                                                                                                            • Takes states from an event queue.
                                                                                                                                            • Dispatches an appropriate non-async step.
                                                                                                                                            • Pushes the results, which are again states, back into the event queue.

                                                                                                                                          Forking can be implemented by returning multiple states from a single non-async step. Joining can be implemented by taking multiple states as inputs in a single non-async step. You are not restricted to joining processes that were forked from a common parent.

                                                                                                                                          In this approach, you must write the event loop yourself, rather than delegate it to a framework. For starters, no framework can anticipate your data type of intermediate states, let alone the data type of the whole event queue. But, most importantly, the logic for dispatching the next non-async step is very specific to your application.
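
                                                                                                                                          Roughly, as a sketch with a hypothetical two-step fetch-then-parse process (the states and step names are made up for illustration):

                                                                                                                                            use std::collections::VecDeque;

                                                                                                                                            // The one big sum type of every intermediate state.
                                                                                                                                            enum State {
                                                                                                                                                NeedFetch(String),        // url to fetch
                                                                                                                                                Fetched(String, Vec<u8>), // url, raw body
                                                                                                                                                Done(String, usize),      // url, parsed size
                                                                                                                                            }

                                                                                                                                            // Each non-async step is an ordinary function from states to states.
                                                                                                                                            fn step(state: State, queue: &mut VecDeque<State>) {
                                                                                                                                                match state {
                                                                                                                                                    State::NeedFetch(url) => {
                                                                                                                                                        let body = vec![0u8; 16]; // stand-in for the real I/O
                                                                                                                                                        queue.push_back(State::Fetched(url, body));
                                                                                                                                                    }
                                                                                                                                                    State::Fetched(url, body) => {
                                                                                                                                                        queue.push_back(State::Done(url, body.len()));
                                                                                                                                                    }
                                                                                                                                                    // A terminal state pushes nothing; forking would push several.
                                                                                                                                                    State::Done(url, n) => println!("{url}: {n} bytes"),
                                                                                                                                                }
                                                                                                                                            }

                                                                                                                                            fn main() {
                                                                                                                                                // The hand-written event loop: pop a state, dispatch, repeat.
                                                                                                                                                let mut queue = VecDeque::from([State::NeedFetch("https://example.com".into())]);
                                                                                                                                                while let Some(state) = queue.pop_front() {
                                                                                                                                                    step(state, &mut queue);
                                                                                                                                                }
                                                                                                                                            }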

                                                                                                                                          Benefits:

                                                                                                                                          1. Because the data type of intermediate states is fixed, and the event loop is implemented in a single centralized place, it is easier to verify that your code works “in all cases”, either manually or using tools that explicitly model concurrent processes using state machines (e.g., TLA+).

                                                                                                                                          2. Because intermediate states are first-order values, rather than first-class functions, the program is much easier to debug. Just stop the event loop at an early time and pretty-print the event queue. (ML can automatically pretty-print first-order values in full detail. Haskell requires you to define a Show instance first, but this definition can be generated automatically.)

                                                                                                                                          Drawbacks:

                                                                                                                                          1. If your implementation language does not provide sum types and/or pattern matching, you will have a hard time checking that every case has been covered, simply because there are so many cases.

                                                                                                                                          2. The resulting code is very much non-extensible. To add new asynchronous processes, you need to add constructors to the sum type of intermediate states. This will make the event loop fail to type check until you modify it accordingly. (IMO, this is not completely a drawback, because it forces you to think about how the new asynchronous processes interact with the old ones. This is something that you eventually have to do anyway, but some people might prefer to postpone it.)

                                                                                                                                          1. 3

                                                                                                                                            I agree that you need some way of handling async code, but I don’t think coroutines are it

                                                                                                                                            Possibly. I actually don’t know. I’d take whatever lets me write code that looks like I’m dispatching an unlimited number of threads, but dispatches the computation over a reasonable number of threads, possibly just one. Hell, my ideal world is green threads, actually. Perhaps I should have led with that…

                                                                                                                                            Then again, I don’t know the details of the tradeoffs involved. Whatever lets me solve the 1M-connections problem cleanly and efficiently works for me.

                                                                                                                                  2. 5

                                                                                                                                    I agree with @milesrout. I don’t think Rust is a good replacement for C. This article goes into some of the details of why - https://drewdevault.com/2019/03/25/Rust-is-not-a-good-C-replacement.html

                                                                                                                                    1. 17

                                                                                                                                      Drew has some very good points. It’s a shame he ruins them with all the other ones.

                                                                                                                                      1. 25

                                                                                                                                        Drew has a rusty axe to grind: “Concurrency is generally a bad thing” (come on!), “Yes, Rust is more safe. I don’t really care.”

                                                                                                                                        Here’s a rebuttal of that awful article: https://telegra.ph/Replacing-of-C-with-Rust-has-been-a-great-success-03-27 (edit: it’s a tongue-in-cheek response. Please don’t take it too seriously: the original exaggerated negatives, so the response exaggerates positives).

                                                                                                                                        1. 11

                                                                                                                                          So many bad points from this post.

                                                                                                                                          • We can safely ignore the “features per year”, since the documentation they are based on doesn’t follow the same conventions. I’ll also note that, while a Rust program written last year may look outdated (I personally don’t know Rust enough to make such an assessment), it will still work (I’ve been told breaking changes are extremely rare).

                                                                                                                                          • C is not really the most portable language. Yes, C and C++ compilers, thanks to having decades of work behind them, target more devices than everything else put together. But no, those platforms do not share the same flavour of C and C++. There are simply too many implementation-defined behaviours, starting with integer sizes. Did you know that some platforms have 32-bit chars? I worked with someone who worked on one.

                                                                                                                                            I wrote a C crypto library, and went out of my way to ensure the code was very portable. And it is: embedded developers love it. There was no way, however, to ensure my code was fully portable: I right-shift negative integers (implementation-defined behaviour), and I use fixed-width integers like uint8_t (not supported on the DSP with 32-bit chars I mentioned above).

                                                                                                                                          • C does have a spec, but it’s an incomplete one. In addition to implementation defined behaviour, C and C++ also have a staggering amount of undefined and unspecified behaviour. Rust has no spec, but it still tries to minimise undefined behaviour. I expect this point will go away when Rust stabilises and we get an actual spec. I’m sure formal verification folks will want to have a verified compiler for Rust, like we currently have for C.

                                                                                                                                          • C has many implementations… and that’s actually a good point.

                                                                                                                                          • C has a consistent & stable ABI… and so does Rust, somewhat? OK, it’s opt-in, and it’s contrived. My point is, Rust does have an FFI which allows it to talk to the outside world (there’s a sketch of the opt-in pieces at the end of this comment). It doesn’t have to be at the top level of a program. On the other hand, I’m not sure what would be the point of a stable ABI between Rust modules. C++ at least seems to be doing fine without one.

                                                                                                                                          • Rust compiler flags aren’t stable… and that’s a good point. They should probably stabilise at some point. On the other hand, having one true way to manage builds and dependencies is a godsend. Whatever we’d use stable compiler flags for, we probably don’t want to depart from that.

                                                                                                                                          • Parallelism and concurrency are unavoidable. They’re not a bad thing; they’re the only thing that can help us cheat the speed of light, and with it the ceiling on single-threaded performance. The ideal modern computer is more likely a high number of in-order cores, each with a small amount of memory, and an explicit (exposed to the programmer) cache hierarchy, assuming performance and energy consumption trump compatibility with existing C (and C++) programs. Never forget that current computers are optimised to run C and C++ programs.

                                                                                                                                          • Not caring about safety is stupid. Or selfish. Security vulnerabilities are often mere externalities, which you can ignore if they don’t damage your reputation to the point of affecting your bottom line. Yay Capitalism. More seriously, safety is a subset of correctness, and correctness is the main point of Rust’s strong type system and borrow checker. C doesn’t just make it difficult to write safe programs, it makes it difficult to write correct programs. You wouldn’t believe how hard that is. My crypto library had to resort to Valgrind, sanitisers, and the freaking TIS interpreter to flush out undefined behaviour. And I’m talking about “constant time” code, which has fixed memory access patterns. It’s pathologically easy to test, yet writing tests took as long as writing the code, possibly longer. Part of the difficulty comes from C, not just the problem domain.

                                                                                                                                          Also, Drew DeVault mentions Go as a possible replacement for C? For some domains, sure. But the thing has a garbage collector, making it instantly unsuitable for some constrained environments (either because the machine is small, or because you need crazy performance). Such constrained environments are basically the remaining niche for C (and C++). For the rest, the only thing that keeps people hooked on C (and C++) is existing code and existing skills.
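
                                                                                                                                          On the opt-in ABI point above, the shape of it as a sketch: layout, calling convention, and symbol name are each opted into individually.

                                                                                                                                            // Opt-in, per item: #[repr(C)] fixes the layout, extern "C" the calling
                                                                                                                                            // convention, #[no_mangle] the symbol name a C caller will look for.
                                                                                                                                            #[repr(C)]
                                                                                                                                            pub struct Point {
                                                                                                                                                pub x: f64,
                                                                                                                                                pub y: f64,
                                                                                                                                            }

                                                                                                                                            #[no_mangle]
                                                                                                                                            pub extern "C" fn point_norm(p: Point) -> f64 {
                                                                                                                                                (p.x * p.x + p.y * p.y).sqrt()
                                                                                                                                            }

                                                                                                                                            fn main() {
                                                                                                                                                println!("{}", point_norm(Point { x: 3.0, y: 4.0 })); // 5
                                                                                                                                            }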

                                                                                                                                          1. 4

                                                                                                                                            Rust compiler flags aren’t stable… and that’s a good point. They should probably stabilise at some point. On the other hand, having one true way to manage builds and dependencies is a godsend. Whatever we’d use stable compiler flags for, we probably don’t want to depart from that.

                                                                                                                                            This is wrong, though. rustc compiler flags are stable, except flags behind the -Z flag, which intentionally separates the interface between stable and unstable flags.

                                                                                                                                            1. 2

                                                                                                                                              Okay, I stand corrected, thanks.

                                                                                                                                            2. 0

                                                                                                                                              But the thing has a garbage collector, making it instantly unsuitable for some constrained environments (either because the machine is small, or because you need crazy performance).

                                                                                                                                              The Go garbage collector can be turned off with debug.SetGCPercent(-1) and triggered manually with runtime.GC(). It is also possible to allocate memory at the start of the program and use that.

                                                                                                                                              Go has several compilers available. gc is the official Go compiler, GCC has built-in support for Go and there is also TinyGo, which targets microcontrollers and WASM: https://tinygo.org/

                                                                                                                                              1. 5

                                                                                                                                                Can you realistically control allocations? If we have ways to make sure all allocations are either explicit or on the stack, that could work. I wonder how contrived that would be, though. The GC is on by default, that’s got to affect idiomatic code in a major way. To the point where disabling it probably means you don’t have the same language any more.

                                                                                                                                                Personally, to replace C, I’d rather have a language that disables GC by default. If I am allowed to have a GC, I strongly suspect there are better alternatives than Go. (My biggest objection being “lol no generics”. And if the designers made that error, it casts doubt on their ability to properly design the rest of the language, and I lose all interest instantly. Though if I were writing network code, I would also say “lol no coroutines” at anything designed after 2015 or so.)

                                                                                                                                                1. 1

                                                                                                                                                  I feel like GC by default vs no GC is one of the biggest decision points when designing a language. It affects so much of how the rest of a language has to be designed. GC makes writing code soooo much easier, but you can’t easily put non-GC’d things into a GC’d language. Or maybe you can? Rust was originally going to have syntax for GC’d pointers. People are building GC’d pointers into Rust now, as libraries - GC manages a particular region of memory. People are designing the same stuff for C++. So maybe we will finally be able to mix them in a few years.
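
                                                                                                                                                  Rust’s std already shows that “managed pointer as a library” shape with reference counting; the third-party GC crates mentioned follow the same pattern, mixing managed and plain values in one program. A trivial sketch:

                                                                                                                                                    use std::rc::Rc;

                                                                                                                                                    fn main() {
                                                                                                                                                        // Library-managed shared ownership...
                                                                                                                                                        let shared = Rc::new(vec![1, 2, 3]);
                                                                                                                                                        let alias = Rc::clone(&shared);
                                                                                                                                                        // ...next to a plain, unmanaged allocation in the same function.
                                                                                                                                                        let owned = vec![4, 5, 6];
                                                                                                                                                        println!("{} {} {}", shared.len(), alias.len(), owned.len());
                                                                                                                                                    }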

                                                                                                                                                  1. 1

                                                                                                                                                    Go is unrealistic not only because of GC, but also segmented stacks, thick runtime that wants to talk to the kernel directly, implicit allocations, and dynamism of interface{}. They’re all fine if you’re replacing Java, but not C.

                                                                                                                                                    D lang’s -betterC is much closer, but D’s experience shows that once you have a GC, it influences the standard library, programming patterns, 3rd party dependencies, and it’s really hard to avoid it later.

                                                                                                                                                    1. 1

                                                                                                                                                      Can you realistically control allocations? If we have ways to make sure all allocations are either explicit or on the stack, that could work.

                                                                                                                                                      IIRC you can programmatically identify all heap allocations in a given go compilation, so you can wrap the build in a shim that checks for them and fails.

                                                                                                                                                      The GC is on by default, that’s got to affect idiomatic code in a major way.

                                                                                                                                                      Somewhat, yes, but the stdlib is written by people who have always cared about wasted allocations and many of the idioms were copied from that, so not quite as much as you might imagine.

                                                                                                                                                      That said, if I needed to care about allocations that much, I don’t think it’d be the best choice. The language was designed and optimized to let large groups (including many clever-but-inexperienced programmers) write reliable network services.

                                                                                                                                              2. 1

                                                                                                                                                I don’t think replacing C is a good use case for Rust, though. C is relatively easy to learn, read, and write to the level where you can write something simple. In Rust this is decidedly not the case. Rust is much more like a safe C++ in this respect.

                                                                                                                                                I’d really like to see a safe C some day.

                                                                                                                                                1. 6

                                                                                                                                                  Have a look at Cyclone mentioned earlier. It is very much a “safe C”. It has ownership and regions which look very much like Rust’s lifetimes. It has fat pointers like Rust slices. It has generics, because you can’t realistically build safe collections without them. It looks like this complexity is inherent to the problem of memory safety without a GC.
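
                                                                                                                                                  On the fat-pointer point, a sketch: a Rust slice carries its length with the pointer, so sub-ranges stay bounds-checked with no extra bookkeeping.

                                                                                                                                                    fn sum(xs: &[u32]) -> u32 {
                                                                                                                                                        // xs is a fat pointer: data pointer and length travel together.
                                                                                                                                                        xs.iter().sum()
                                                                                                                                                    }

                                                                                                                                                    fn main() {
                                                                                                                                                        let buf = [1u32, 2, 3, 4];
                                                                                                                                                        println!("{}", sum(&buf[1..3])); // a sub-slice carries its own length
                                                                                                                                                        // Indexing past the end panics at the boundary instead of reading garbage.
                                                                                                                                                    }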

                                                                                                                                                  As for learning C, it’s easy to get a compiler accept a program, but I don’t think it’s easier to learn to write good C programs. The language may seem small, but the actual language you need to master includes lots of practices for safe memory management and playing 3D chess with the optimizer exploiting undefined behavior.

                                                                                                                                              1. 6

                                                                                                                                                There’s also the IETF HTTP Signatures draft which addresses some of these issues and lets you additionally sign key headers of your choice. It does present some issues with buffering requests though (and being a draft, support is somewhat limited and subject to change).

                                                                                                                                                1. 6

                                                                                                                                                  I haven’t used Guile in a long time, but it’s always been what I want to be my favorite Scheme. In particular, Guile always feels like S-expression Dylan, or a cleaned-up Common Lisp, in the best possible way: a huge chunk of batteries and nods to practicality are included, even if they might not be the most academically pure option, but the overall environment still feels far more consistent and coherent than a lot of real-world languages. It basically has always looked like a have-my-cake-and-eat-it-too environment. The two main things that have held me back in the past have been its speed (it’s generally been slower than Chicken Scheme by enough I care) and concern over the project health. But both have picked up a lot over the past couple years, from what I’ve seen. I’ll definitely look forward to finding an excuse to do more here in the future.

                                                                                                                                                  1. 4

                                                                                                                                                    I feel like Guix has probably revived some interest in Scheme.

                                                                                                                                                    1. 3

                                                                                                                                                      That, and Guile in Emacs increasingly looking like a thing that might actually happen, have been two big parts, IMVHO.

                                                                                                                                                      1. 3

                                                                                                                                                        Has the Guile Emacs project had activity lately? My impression (mostly from https://www.emacswiki.org/emacs/GuileEmacs and the linked repos) was that it had effectively died off a few years ago. I’d be excited to learn that’s not the case.

                                                                                                                                                      2. 1

                                                                                                                                                        (To clarify, I meant Guile specifically rather than Scheme)

                                                                                                                                                      3. 2

                                                                                                                                                        The two main things that have held me back in the past have been its speed (it’s generally been slower than Chicken Scheme by enough I care)

                                                                                                                                                        If what is being said is true, then these pre-releases of 3.0, and of course the 3.0 release itself, should deliver considerable performance improvements. I know that the publishing of Chez Scheme’s source pushed Racket to adopt some of their performance tricks; the same might have happened with Guile too.

                                                                                                                                                        1. 4

                                                                                                                                                          If I’m not mistaken the publishing of Chez Scheme’s source encouraged racket to host their language on top of it. So now Racket code compiles down to Chez.

                                                                                                                                                      1. 0

                                                                                                                                                        I think the big takeaway is that more people need to be aware that even where cryptography primitives are composable, their security guarantees are not. It’s time we got as serious about not reimplementing cryptographic applications as we are about not reimplementing algorithms (à la NaCl).

                                                                                                                                                        1. 4

                                                                                                                                                          What am I supposed to be seeing here that I am not?
                                                                                                                                                          The page is just a perspective-skewed screenshot and the message

                                                                                                                                                          Replay is an early experiment. We’ll let you know on @FirefoxDevTools when it’s ready for input.

                                                                                                                                                          I can make some guesses from the screenshot (the words “Paused in Recording”), but there’s no explanation what the “early experiment” even is. There is a link to @FirefoxDevTools on Twitter but it doesn’t mention Replay in any recent posts.

                                                                                                                                                          1. 4

                                                                                                                                                            Yes, there was more info before, but it’s a debugging replay tool:

                                                                                                                                                            https://web.archive.org/web/20191128111509/https://firefox-replay.com/

                                                                                                                                                            1. 3

                                                                                                                                                              I think someone found a page in testing and linked it here.

                                                                                                                                                              To my knowledge, Firefox Replay is rr, but for the web.

                                                                                                                                                              1. 1

                                                                                                                                                                It’s probably a debugging tool that records what happens in the background to replay later on and dive deeper into code execution?

                                                                                                                                                                1. 2

                                                                                                                                                                  That’s my best-guess assumption. I was wondering why this has so many upvotes for a zero-information page that isn’t even a release announcement. Since I can’t downvote (only flag, which seems wrong), figured I would ask.

                                                                                                                                                                  At this point I figure perhaps they changed the page since it was posted (as of writing this, 7 hours ago)?

                                                                                                                                                                  1. 4

                                                                                                                                                                    When it was posted the site had more content, yes.

                                                                                                                                                                    1. 1

                                                                                                                                                                      Yeah, there’s an archive.org link elsewhere in the thread which represents what was actually on the page at the time it was posted.

                                                                                                                                                                1. 9

                                                                                                                                                                  I wonder how many projects requiring these trendier build systems like meson or ninja are using them as intended or to the capacity they allegedly facilitate. Meanwhile, make is unsexy but it’s everywhere.

                                                                                                                                                                  I sort of get cmake but even that boils down to a Makefile. Do front ends like that really reduce enough boilerplate to justify making them a build requirement? They haven’t for my own projects but I’m generally not building in the large.

                                                                                                                                                                  1. 6

                                                                                                                                                                    I sort of get cmake but even that boils down to a Makefile. Do front ends like that really reduce enough boilerplate to justify making them a build requirement? They haven’t for my own projects but I’m generally not building in the large.

                                                                                                                                                                    My experience with cmake is that it’s a worse wrapper than autotools/configure is, which is really saying something. I tried to get an i386 program to build on x86_64 and I had immense trouble just communicating to gcc what flags it should have; cmake actually lacked any architecture-specific options that would have enabled me to do that.

                                                                                                                                                                    1. 9

                                                                                                                                                                      I’m not sure what you hit specifically, but cmake does provide those kind of options, namely CMAKE_SYSTEM_NAME, CMAKE_CROSSCOMPILING, and CMAKE_SYSTEM_PROCESSOR. I found this document really helpful in my own endeavours.

                                                                                                                                                                      My personal experience with CMake is that it has a bear of a learning curve, but once you’ve seen some example setups and played around with different options a bit, it starts to click. Once you are more comfortable with it, it is actually quite nice. It has its rough edges, but overall I’ve found it to work pretty smoothly in practice.

                                                                                                                                                                      1. 1

                                                                                                                                                                        Ah, thanks! I’ll keep that in mind for next time I end up doing that

                                                                                                                                                                    2. 4

                                                                                                                                                                      CMake isn’t really an abstraction over Makefiles; in fact, there’s plenty you can probably do in Makefiles that would be cumbersome or perhaps impossible to do purely in CMake. It’s a cross-platform build system that just uses Makefiles as one of its targets for doing the actual building step.

                                                                                                                                                                      Where CMake tends to get its use (and what it seems to be intended for) is:

                                                                                                                                                                      • Providing a build system for large, complex C++ (primarily) projects where lots of external dependencies exist and the project is likely to be distributed by a distribution or generally not controlled by the project itself
                                                                                                                                                                      • Cross platform projects that are largely maintained for platforms where Makefiles are not well supported in a way that is compatible with GNU or BSD make, or where supporting ‘traditional’ IDEs is considered a priority (i.e. Win32/MSVC).
                                                                                                                                                                      1. 2

                                                                                                                                                                        Unfortunately, ninja is too simple for many tasks (e.g. no pattern rules) and building a wrapper is a more complex solution than Make.

                                                                                                                                                                        Meson is too complex for simple tasks like LaTeX, data analysis, or plot generation. Make is great for these use cases, but a few improvements are still possible:

                                                                                                                                                                        • Hide output unless a command fails.
                                                                                                                                                                        • Parallel by default.
                                                                                                                                                                        • Do not use mtime alone, but in addition size, inode number, file mode, owner uid/gid. Apenwarr explained more.
                                                                                                                                                                        • Automatic “clean” and “help” commands.
                                                                                                                                                                        • Changes in Makefiles implicitly trigger rebuilds where necessary.
                                                                                                                                                                        • Continuous mode where it watches for file system changes.
                                                                                                                                                                        • Proper solution for multi-file output.
                                                                                                                                                                        1. 2

                                                                                                                                                                          CMake can generate Makefiles, but I would hardly say its value simply boils down to what Make can provide. All of the value provided by CMake is in what goes into generating those files. It also benefits from being able to generate other build systems’ files, e.g. for Ninja or Visual Studio. A lot of complexity comes in when you need to adapt to locating dependencies in different places, detecting/auto-configuring compiler flags, generating code or other files based on configuration/target, conditionally linking things in, etc., and doing those things in plain Make is a huge pain in the ass, or not even possible in practice.

                                                                                                                                                                          I wouldn’t say CMake is perfect, not by a long shot, but it works pretty well in my experience.