1. 7

    Dear God, they were talking about GNU Hurd even back then…

    1.  

      for 30 years they’ve been talking about it as if it’s almost ready.

      1.  

        I always used to joke by substituting “when hell freezes over” with “when Duke Nukem Forever is ported to GNU Hurd”. I was a little bit sad when Duke Nukem Forever was finally released as it killed my joke :-(

        1.  

          Imagine if we got net-positive fusion power before GNU Hurd?

          1.  

            I don’t think this is quite fair; the GNU project has mostly been describing it as rather complete and usable since 2015. They describe it as an interesting development project, suitable for further development and technical curiosity, rather than necessarily a “production” OS, but the idea that Hurd is perpetually nearly ready is fictional.

            1. 5

              Hurd has been making progress somewhat more slowly than the baseline requirements for a useful OS have been advancing. It passed the point of an early ‘90s *NIX kernel quite a long time ago (basic filesystem, able to run an X server, small number of drivers) and had quite a lot of nice features. The design of Hurd means that things like containers are supported automatically (anything that’s a global namespace in a traditional *NIX is just a handle that you get from the parent, so creating an isolated namespace is trivial). I still find it an interesting example of worse-is-better that the overwhelming majority of container deployments are on the one contemporary system that doesn’t have native support for containers.

              1.  

                For me the biggest problem with contemporary Hurd is also the one I’m unqualified to fix: the drivers are all Linux 2.2-2.6 era. Given a more modern filesystem and newer drivers it’d be quite liveable.

          2.  

            I thought there was a nod to it in the announcement for Linux, but no. In very early email threads, though, Linus wrote things like “this might be a fun toy to play with, until Hurd is usable, in a year or two.”

          1. 2

            The computer I stream from has a Logitech X50, and the one it captures for the stream has a Ducky Shine 6. My main work computer is a Mac, with a Matias Tactile Pro 4.

            1. 2

              You want to claim that version 3.2 is compatible with version 3.1 somehow, but how do you know that? You know the software basically “works” because of your unit tests, but surely you changed the tests between 3.1 and 3.2 if there were any intentional changes in behavior. How can you be sure that you didn’t remove or change any functions that someone might be calling?

              Semantic versioning states that a minor release such as 3.2 should only add backwards compatible changes.

              So all your existing unit tests from 3.1 should still be in place, untouched. You should have new unit tests, for the functionality added in 3.2.

              I stopped reading after this, because the argument seems to boil down to either not understanding Semantic versioning, or not having full unit test coverage.

              1. 20

                I stopped reading after this

                If you stopped reading at 10% of the article, you should probably also have stopped yourself from commenting.

                not understanding Semantic versioning

                The fallacy you’re committing here is very well documented.

                1. 1

                  If you are questioning whether the function you removed/changed is used by anyone when deciding the next version increment, you are not using semantic versioning correctly (unless you always increase the major, regardless of how many people used the feature you modified). As the parent said, if you need to edit 3.1 tests, you broke something, and the semver website is quite clear about what to do on breaking changes.

                  1. 7

                    If you don’t only test the public API, it’s entirely possible to introduce required changes in tests in bugfix versions.

                    More importantly, my point about “no true Scotsman” was that saying “SemVer is great if and only if you follow some brittle manual process to the dot” proves the blog post’s narrative. SemVer is wishful thinking. You can have ambitions to adhere to it, you can claim your projects follow it, but you shouldn’t ever blindly rely on others doing it right.

                    1. 5

                      The question then becomes: why does nobody do it? Do you truly believe that in a world where it’s super rare for a major version to exceed “5”, nobody ever had to change their tests because some low-level implementation detail changed?

                      We’re talking about real packages that have more than one layer. Not a bunch of pure functions. You build abstractions over implementation details and in non-trivial software, you can’t always test the full functionality without relying on the knowledge of said implementation details.

                      Maybe the answer is: “that’s why everybody stays on ZeroVer”, which is another way of saying that SemVer is impractical.

                  2. 6

                    In the original fight about the PyCA cryptography package, it was repeatedly suggested that SemVer had been broken, and that if the team behind the package had adopted SemVer, there would have been far less drama.

                    Everyone who suggested this overlooked the fact that the change in question (from an extension module being built in C, to being built in Rust) did not change public API of the deliverable artifact in a backwards-incompatible way, and thus SemVer would not have been broken by doing that (i.e., if you ran pip install cryptography before and after, the module that ended up installed on your system exposed a public API after that was compatible with what you got before).

                    Unless you want to argue that SemVer requires a version bump for any change that any third-party observer might notice. In which case A) you’ve deviated from what people generally say SemVer is about (see the original thread here, for example, where many people waffled between “only about documented API” and “but cryptography should’ve bumped major for this”) and B) you’ve basically decreed that every commit increments major, because every commit potentially produces an observable change.

                    But if you’d like to commit to a single definition of SemVer and make an argument that adoption of it by the cryptography package would’ve prevented the recent dramatic arguments, feel free to state that definition and I’ll see what kind of counterargument fits against it.

                    1. 1

                      Everyone who suggested this overlooked the fact that the change in question (from an extension module being built in C, to being built in Rust) did not change public API of the deliverable artifact in a backwards-incompatible way

                      I think you’re overlooking this little tidbit:

                      Since the Gentoo Portage package manager indirectly depends on cryptography, “we will probably have to entirely drop support for architectures that are not supported by Rust”. He listed five architectures that are not supported by upstream Rust (alpha, hppa, ia64, m68k, and s390) and an additional five that are supported but do not have Gentoo Rust packages (mips, 32-bit ppc, sparc, s390x, and riscv).

                      I’m not sure many people would consider “suddenly unavailable on 10 CPU architectures” to be “backwards compatible”.

                      But if you’d like to commit to a single definition of SemVer and make an argument that adoption of it by the cryptography package would’ve prevented the recent dramatic arguments, feel free to state that definition and I’ll see what kind of counterargument fits against it.

                      If you can tell me how making a change in a minor release, that causes the package to suddenly be unavailable on 10 CPU architectures that it previously was available on, is not considered a breaking change, I will give you $20.

                      1. 8

                        Let’s take a simplified example.

                        Suppose I write a package called add_positive_under_ten. It exposes exactly one public function, with this signature:

                        def add_positive_under_ten(x: int, y: int) -> int
                        

                        The documented contract of this function is that x and y must be of type int and must each be greater than 0 and less than 10, and that the return value is an int which is the sum of x and y. If the requirements regarding the types of x and y are not met, TypeError will be raised. If the requirements regarding their values are not met, ValueError will be raised. The package also includes an automated test suite which exhaustively checks behavior and correctness for all valid inputs, and verifies that the aforementioned exceptions are raised on sample invalid inputs.

                        In the first release of this package, it is pure Python. In a later, second release, I rewrite it in C as a compiled extension. In yet a later, third release, I rewrite the compiled C extension as a compiled Rust extension. From the perspective of a consumer of the package, the public API of the package has not changed. The documented behavior of the functions (in this case, single function) exposed publicly has not changed, as verified by the test suite.
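
                        A minimal sketch (implementation details invented for illustration; only the signature and documented contract come from the description above) of the pure-Python first release, plus a test that exercises only the public API:

                        def add_positive_under_ten(x: int, y: int) -> int:
                            # Declared contract: ints only, each strictly between 0 and 10.
                            if not isinstance(x, int) or not isinstance(y, int):
                                raise TypeError("x and y must be ints")
                            if not (0 < x < 10 and 0 < y < 10):
                                raise ValueError("x and y must be greater than 0 and less than 10")
                            return x + y

                        def test_public_contract() -> None:
                            # Exhaustive over all valid inputs, plus spot checks of the error cases.
                            # Because it touches only the public API, rewriting the internals in C
                            # or Rust should leave this test passing, unchanged.
                            for x in range(1, 10):
                                for y in range(1, 10):
                                    assert add_positive_under_ten(x, y) == x + y
                            for bad_args in ((0, 5), (5, 10), (1.5, 2)):
                                try:
                                    add_positive_under_ten(*bad_args)
                                except (TypeError, ValueError):
                                    pass
                                else:
                                    raise AssertionError("expected TypeError or ValueError")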

                        Since Semantic Versioning as defined by semver.org applies to declared public API and nothing else whatsoever, Semantic Versioning would not require that I increment the major version with each of those releases.

                        Similarly, Semantic Versioning would not require that the pyca/cryptography package increment major for switching a compiled extension from C to Rust unless that switch also changed declared public API of the package in a backwards-incompatible way. The package does not adhere to Semantic Versioning, but even if it did there would be no obligation to increment major for this, under Semantic Versioning’s rules.

                        If you would instead like to argue that Semantic Versioning ought to apply to things beyond the declared public API, such as “any change a downstream consumer might notice requires incrementing major”, then I will point out that this is indistinguishable in practice from “every commit must increment major”.

                        1. 1

                          We don’t need a simplified, synthetic example.

                          We have the real world example. Do you believe that making a change which effectively drops support for ten CPU architectures is a breaking change, or not? If not, why not? How is “does not work at all”, not a breaking change?

                          1. 9

                            The specific claim at issue is whether Semantic Versioning would have caused this to go differently.

                            Although it doesn’t actually use SemVer, the pyca/cryptography package did not do anything that Semantic Versioning forbids. Because, again, the only thing Semantic Versioning forbids is incompatibility in the package’s declared public API. If the set of public classes/methods/functions/constants/etc. exposed by the package stays compatible as the underlying implementation is rewritten, Semantic Versioning is satisfied. Just as it would be if, for example, a function were rewritten to be more time- or memory-efficient than before while preserving the behavior.

                            And although Gentoo (to take an example) seemed to be upset about losing support for architectures Gentoo chooses to support, they are not architectures that Python (the language) supported upstream, nor as far as I can tell did the pyca/cryptography team ever make any public declaration that they were committed to supporting those architectures. If someone gets their software, or my software, or your software, running on a platform that the software never committed to supporting, that creates zero obligation on their (or my, or your) part to maintain compatibility for that platform. But at any rate, Semantic Versioning has nothing whatsoever to say about this, because what happened here would not be a violation of Semantic Versioning.

                        2. 7

                          If you can tell me how making a change in a minor release, that causes the package to suddenly be unavailable on 10 CPU architectures that it previously was available on, is not considered a breaking change, I will give you $20.

                          None of those architectures were maintained or promised by the maintainers; they were added by third parties. No matter what your opinion on SemVer is, the activities of third parties, whose existence you possibly didn’t even know about, are not part of it.

                          Keep your $20 but try to be a little more charitable and open-minded instead. We all have yet much to learn.

                          1. 0

                            Keep your $20 but try to be a little more charitable and open-minded instead. We all have yet much to learn.

                            If you think your argument somehow shows that breaking support for 10 CPU architectures isn’t a breaking change, then yes, we all have much to learn.

                            1. 8

                              You still haven’t explained why you think Semantic Versioning requires this. Or why you think the maintainers had any obligation to users they had never made any promises to in the first place.

                              But I believe I’ve demonstrated clearly that Semantic Versioning does not consider this to be a change that requires incrementing major, so if you’re still offering that $20…

                              1. 0

                                Part of what they ship is code that’s compiled, and literally the first two sentences of the project readme are:

                                cryptography is a package which provides cryptographic recipes and primitives to Python developers. Our goal is for it to be your “cryptographic standard library”.

                                If your self-stated goal is to be the “standard library” for something and you’re shipping code that is compiled (as opposed to interpreted code, e.g. Python), I would expect you to not break things relating to the compiled part of the library in a minor release.

                                Regardless of whether they directly support those other platforms or not, they ship code that is compiled, and their change to that compiled code broke compatibility on those platforms.

                                1. 8

                                  Regardless of whether they directly support those other platforms or not, they ship code that is compiled, and their change to that compiled code broke compatibility on those platforms.

                                  There are many types of agreements – some formal, some less so – between developers of software and users of software regarding support and compatibility. Developers declare openly which parts of the software they consider to be supported with a compatibility promise, and consumers of the software declare openly that they will not expect support or compatibility promises for parts of the software which are not covered by that declaration.

                                  Semantic Versioning is a mildly-formal way of doing this. But it is focused on only one specific part: the public API of the software. It is not concerned with anything else, at all, ever, for any reason, under any circumstances. No matter how many times you pound the table and loudly demand that something else – like the build toolchain – be covered by a compatibility guarantee, Semantic Versioning will not budge on it.

                                  The cryptography change did not violate Semantic Versioning. The public API of the module after the rewrite was backwards-compatible with the public API before the rewrite. This is literally the one, only, exclusive thing that Semantic Versioning cares about, and it was not broken.

                                  Meanwhile, you appear to believe that by releasing a piece of software, the author takes on an unbreakable obligation to maintain compatibility for every possible way the software might ever be used, by anyone, on any platform, in any logically-possible universe, forever. Even if the author never promised anything resembling that. I honestly do not know what the basis of such an obligation would be, nor what chain of reasoning would support its existence.

                                  What I do know is that the topic of this thread was Semantic Versioning. Although the cryptography library does not use Semantic Versioning, the rewrite of the extension module in Rust did not violate Semantic Versioning. And I know that nothing gives you the right to make an enforceable demand of the developers that they maintain support and compatibility for building and running on architectures that they never committed to supporting in the first place, and nothing creates any obligation on their part to maintain such support and compatibility. The code is under an open-source license. If you depended on it in a way that was not supported by the developers’ commitments, your remedy is to maintain your own fork of it, as with any other upstream decision you dislike.

                      2. 4

                        “Should” is the key word here, because I haven’t ever contributed to an open-source project that has that as part of its policy, nor have I observed its wide application, given the state of third-party packages.

                        The article specifically speaks about the divergence between aspiration and reality and what conclusions can be drawn from that.

                        1. 3

                          Unfortunately the aspiration is broken too.

                          1. 2

                            Baby steps 😇

                        2. 3

                          It sounds like you’re proposing to use unit tests to prove that a minor release doesn’t introduce backwards-incompatible changes. However, tests cannot substitute for proofs; there are plenty of infinite behaviors which we want to write down in code but cannot exhaustively test.

                          All of these same problems happen in e.g. Haskell’s ecosystem. It turns out that simply stating that minor releases should only add backwards-compatible changes is just an opinion and not actually a theorem about code.

                          1. 1

                            No, I think they have a valid point. “Surely” implies that it’s normal to “change” unit tests between minor versions, but the term “change” here mixes “adding new” and “modifying existing” in a misleading way. Existing unit tests should not change between minor versions, as they validate the contract. Of course, they may change anyway, for instance if they were not functional at all or tested the wrong thing, but it should certainly not be common.

                            edit: I am mixing up unit tests and system tests, my apologies. Unit tests can of course change freely, but they also have no relation to SemVer; the debate only applies to tests of the user-facing API.

                            1. 2

                              I know people use different terminology for the same things, but if the thing being tested is a software library, I would definitely consider any of the tests that aren’t reliant on something external (e.g. if you’re testing a string manipulation method) to be unit tests.

                              1. 1

                                Take any function from the natural numbers to the natural numbers. How do you unit-test it in a way that ensures that its behavior cannot change between semantic versions? Even property tests can only generate a finite number of test cases.
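
                                A property-based test (sketched here with the hypothesis library; the function and test names are made up) illustrates the limit: each run still draws only a finite sample from the infinite domain.

                                from hypothesis import given, settings, strategies as st

                                def successor(n: int) -> int:
                                    # Some function from naturals to naturals; a later minor release
                                    # could change its behaviour on inputs the test never happens to draw.
                                    return n + 1

                                @settings(max_examples=100)  # a finite sample of an infinite domain
                                @given(st.integers(min_value=0))
                                def test_successor_is_strictly_greater(n: int) -> None:
                                    assert successor(n) > n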

                                1. 2

                                  I think the adage “code is written for humans to read, and only incidentally for computers to execute” applies to tests especially. Of course you can’t test every case, but intention does count.

                              2. 1

                                Aside:

                                I just recently added a test that exercises the full API of a Rust library of mine, doing so in such a way that any backwards-incompatible changes would cause an error if added. (The particular case was that I’d add a member to a config struct, and so anyone constructing that struct without including a ..StructName::default() at the end would suddenly have a compile error because they were missing a field.) This seemed to do the trick nicely and would remind me to bump the appropriate part of semver when making a release.

                                I work on the library (and in the Rust ecosystem) infrequently so it’s not at the front of my mind. More recently I accepted a PR, and made a new release including it after. Then I got the warning, again, that I’d broken semver. Of course, the failing test was seen by the contributor and fixed up before they submitted the PR, so I never saw the alarm bells ringing.

                            1. 8

                              I think there are valid arguments on both sides here, but this post doesn’t seem to be grounded in experience.

                              Practically speaking, users of weird architectures do contribute patches back. Those people eventually become the maintainers. When those people go away, the project drops support for certain architectures. That happened with CPython, e.g. it doesn’t support Mac OS 9 anymore as far as I remember.

                              It’s sort of a self-fulfilling prophecy – if the code is in C, you will get people who try to compile it for unusual platforms. If it’s in Rust, they won’t be able to try.

                              I’m not saying which one is better, just that this post misses the point. If you want to use Rust and close off certain options, that’s fine. Those options might not be important to the project. Someone else can start a different project with the goal of portability to more architectures.

                              Changing languages in the middle of the project is a slightly different case. But that’s why the right to fork exists.

                              1. 25

                                Author here: this post is grounded in a couple of years of experience as a packager, and a couple more years doing compiler engineering (mostly C and C++).

                                Practically speaking, users of weird architectures do contribute patches back. Those people eventually become the maintainers. When those people go away, the project drops support for certain architectures. That happened with CPython, e.g. it doesn’t support Mac OS 9 anymore as far as I remember.

                                This is the “hobbyist” group mentioned in the post. They do a fantastic job getting complex projects working for their purposes, and their work is critically undervalued. But the assumptions that stem from that work are also dangerous and unfounded: that C has any sort of “compiled is correct” contract, and that you can move larger, critical work to novel architectures just by patching bugs as they pop up.

                                1. 6

                                  OK I think I see your point now. TBH the post was a little hard to read.

                                  Yes, the people contributing back patches often have a “it works on my machine” attitude. And if it starts “working for others”, the expectation of support can arise.

                                  And those low quality patches could have security problems and tarnish the reputation of the project.

                                  So I would say that there are some projects where having the “weird architectures” off to the side is a good thing, and some where it could be a bad thing. That is valid but I didn’t really get it from the post.


                                  I also take issue with the “no such thing as cross platform C”. I would say it’s very hard to write cross platform C, but it definitely exists. sqlite and Lua are pretty good examples from what I can see.

                                  After hacking on CPython, I was surprised at how much it diverged from that. There are a lot of #ifdefs in CPython making it largely unportable C.

                                  In the ideal world you would have portable C in most files and unportable C in other files. Patches for random architectures should be limited to the latter.

                                  In other words, separate computation from I/O. The computation is very portable; I/O tends to be very unportable. Again, sqlite and Lua are good examples – they are parameterized by I/O (and even memory allocators). They don’t hard-code dependencies, so they’re more portable. They use dependency inversion.
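
                                  As a rough sketch of that shape (in Python, with invented names; sqlite and Lua do roughly the equivalent in C via function pointers), the computation never touches the OS, and the caller injects whatever I/O the platform provides:

                                  from typing import Callable, Iterable

                                  def word_count(lines: Iterable[str]) -> int:
                                      # Pure computation: portable by construction and easy to test.
                                      return sum(len(line.split()) for line in lines)

                                  def run(read_lines: Callable[[], Iterable[str]],
                                          write: Callable[[str], None]) -> None:
                                      # Thin shell around the computation: the only platform-specific part.
                                      write(f"{word_count(read_lines())}\n")

                                  if __name__ == "__main__":
                                      import sys
                                      run(lambda: sys.stdin, sys.stdout.write)  # one possible wiring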

                                  1. 10

                                    TBH the post was a little hard to read.

                                    That’s very fair; I’m not particularly happy with how it came out :-)

                                    I also take issue with the “no such thing as cross platform C”. I would say it’s very hard to write cross platform C, but it definitely exists. sqlite and Lua are pretty good examples from what I can see.

                                    I’ve heard this argument before, and I think it’s true in one important sense: C has a whole bunch of mechanisms for making it easy to get your code compiling on different platforms. OTOH, to your observation about computation being generally portable: I think this is less true than C programmers generally take for granted. A lot of C is implicitly dependent on memory models that happen to be shared by the overwhelming majority of today’s commercial CPUs; a lot of primitive operations in C are under-specified in the interest of embedded domains.

                                    Maybe it’s possible to write truly cross-platform C, but it’s my current suspicion that there’s no way to verify that for any given program (even shining examples of portability like sqlite). But I admit that that’s moving the goalposts a bit :-)

                                    1. 11

                                      Maybe it’s possible to write truly cross-platform C, but it’s my current suspicion that there’s no way to verify that for any given program (even shining examples of portability like sqlite).

                                      I think the argument holds up just fine despite the existence of counterexamples like Sqlite and Lua; basically it means that every attempt to write portable and safe code in C can be interpreted as an assertion that the author (and every future contributor) is as capable and experienced as Dr. Hipp!

                                      1. 6

                                        A lot of C is implicitly dependent on memory models that happen to be shared by the overwhelming majority of today’s commercial CPUs

                                        That’s largely a result of the CPU vendors optimising for C, due to its popularity. Which leads to its popularity. Which…

                                        1. 2

                                          A lot of C is implicitly dependent on memory models that happen to be shared by the overwhelming majority of today’s commercial CPUs; a lot of primitive operations in C are under-specified in the interest of embedded domains.

                                          As the author of a C library, I can confirm that fully portable C is possible (I target the intersection of C99 and C++). It wasn’t always easy, but I managed to root out all undefined and unspecified behaviour. All that is left is one instance of implementation defined behaviour: right shift of negative integers. Which I have decided is not a problem, because I don’t know a single platform in current use that doesn’t propagate the sign bit in this case.

                                          The flip side is that I don’t do any I/O, which prevents me from directly accessing the system’s RNG.

                                          Incidentally, I’m a firm believer in the separation of computation and I/O. In practice, I/O makes a relatively small portion of programs. Clearly separating it from the rest turns the majority of the program into “pure computation”, which (i) can be portable, and (ii) is much easier to test than I/O.

                                        2. 5

                                          I also take issue with the “no such thing as cross platform C”. I would say it’s very hard to write cross platform C, but it definitely exists. sqlite and Lua are pretty good examples from what I can see.

                                          I see this as parallel to “no such thing as memory-safe C”. Sure, cross-platform C exists in theory, but it’s vanishingly rare in practice, and I’d wager even the examples you cite are likely to have niche platform incompatibilities that haven’t been discovered yet.

                                          1. 1

                                            I’d wager even the examples you cite are likely to have niche platform incompatibilities that haven’t been discovered yet.

                                            Portability in C is hard, but it is simple: no undefined behaviour, no unspecified behaviour, no implementation defined behaviour. If you do that, and there still are platform incompatibilities, then the platform’s compiler is at fault: it has a bug, fails to implement part of the standard, or simply conforms to the wrong standard (say, C89 where the code was C99).

                                            If we’re confident a given project is free of undefined, unspecified, and implementation defined behaviour, then we can be confident we’ll never discover further niche platform incompatibilities. (Still, achieving such confidence is much harder than it has any right to be.)

                                            1. 3

                                              Portability in C is hard, but it is simple: no undefined behaviour, no unspecified behaviour, no implementation defined behaviour.

                                              That is a very tall order, though. Probably impossibly tall for many (most?) people. I asked how to do this and the answers I would say were mixed at best. Simple isn’t good enough if it’s so hard nobody can actually do it.

                                      2. 3

                                        If it’s in Rust, they won’t be able to try.

                                        I think this is the most trenchant point here. If someone wants to maintain a project for their own “weird” architecture, then they need to maintain the toolchain and the project. I’ve been in that position and it sucks. In fact, it’s worse, because they need to maintain the toolchain before they even get to the project.

                                        I’m particularly sensitive to this because I’m typing this on ppc64le. We’re lucky that IBM did a lot of the work for us, but corporate interests shift. There’s no Rust compiler for half the systems in this room.

                                        1. 2

                                          I’m not familiar with these systems. What are they used for? What kind of devices use them? What industries/sectors/etc.?

                                          1. 3

                                            PPC is very common in the aerospace and automotive industries. Of course there are also Power servers running Linux and AIX, but those are a niche compared to the embedded market.

                                            1. 6

                                              Got it. That definitely doesn’t sound like hobbyists working on side projects using mass-market hardware. I think the article was referring to this: these corporate users should be willing to pay up to get their platforms supported.

                                              1. 3

                                                So does that mean we should only care about architectures that have corporate backing? Like I say, this isn’t a situation where it’s only a project port that needs maintainers. The OP puts it well that without a toolchain, they can’t even start on it. If Rust is going to replace C, then it should fill the same niche, not the same niche for systems “we like.”

                                                For the record, my projects are all officially side projects; my day job has nothing officially to do with computing.

                                                1. 8

                                                  So does that mean we should only care about architectures that have corporate backing?

                                                  Yes, it does. Money talks. Open source is not sustainable without money. I can work on a pet project on the side on evenings and weekends only for a relatively short period of time. After that it’s going to go unmaintained until the next person comes along to pick it up. This is going to happen until someone gets a day job working on the project.

                                                  If Rust is going to replace C, then it should fill the same niche, not the same niche for systems “we like.”

                                                  C has a four-decade head start on Rust. If no one is allowed to use Rust until it has caught up on those four decades of porting and standardization effort, for the sake of people’s side projects, then that argument is a non-starter.

                                                  1. 3

                                                    Yes, it does. Money talks.

                                                    In such a world there would be no room for hobbyists, unless they work with what other people are using. Breakage of their interests would be meaningless and unimportant. That’s a non-starter too.

                                                    But, as you say, you’re unfamiliar with these systems, so as far as you’re concerned they shouldn’t matter, right?

                                                    1. 9

                                                      In that (this) world, there is room for hobbyists only insofar as they support their own hobbies and don’t demand that open-source maintainers keep providing free support for them.

                                                2. 2

                                                  OpenWrt runs on TP-Link TL-WDR4900 WiFi Router. This is a PowerPC system. OpenWrt is nearly a definition of hobbyists working on side projects using mass-market hardware.

                                                  1. 2

                                                    It says on that page that this device was discontinued in 2015. Incidentally, same year Rust reached 1.0.

                                                    1. 2

                                                      I am not sure what you are trying to argue. The same page shows it to be in OpenWrt 19.07, which is the very latest release of OpenWrt.

                                        1. 2

                                          A strictly message-passing ObjC implementation of fizzbuzz.

                                          1. 1

                                            This is awesome

                                          1. 5

                                            Could be a huge marketing move for Microsoft to look cool, and it wouldn’t hurt their profits much, as it’s all about Azure now.

                                              1. 3

                                                Updated: Apr 11, 2019

                                                1. 3

                                                  2020 doesn’t help the case. MS carried on selling Xbox, Windows, and Dynamics, and renting LinkedIn to recruiters. Intelligent cloud went up but Microsoft is far from “all about Azure now”.

                                            1. 13

                                              I suspect these “nothing’s happened since 19XX” articles are just strawman arguments. Humans build on old technology. Technology isn’t made in a vacuum. To some degree, “nothing new has happened here” can be said about anything. For example, humans have basically discovered every place on the surface of Earth; does that mean that archaeologists and explorers are useless?

                                              1. 28

                                                Thermionic tubes are made in a vacuum.

                                                1. 2

                                                  Thank you for brightening my day :)

                                              1. 2

                                                Computer History Museum has an interview with him:

                                                https://www.youtube.com/watch?time_continue=8722&v=1xrL2d5omuA&feature=emb_title

                                                He had interesting thoughts on Open Source:

                                                – I wanted to create a future where developers could earn by creating components. But now they are all controlled by advertising and malware.

                                                – But in open source you can fix what is broken.

                                                – For sure. You can build a house with mud and stones, but who would like to live there? It is better to use general components like we all do with houses.

                                                1. 1

                                                  It is better to use general components like we all do with houses.

                                                  I mean, this is where we got to; that’s why package repositories are so huge these days (npm, crates, etc.).

                                                  1. 2

                                                    Indeed, I think that although a lot of thought went into the design of a general interface for reusable software modules (pipes, functions, objects, software-ICs…), it was ultimately the open-source license that provided the most impetus for reuse.

                                                1. 2

                                                  Now add Apple to this model; they work in none of those ways accused of being the “Silicon Valley” ways.

                                                  1. 2

                                                    They’re not really a software company, though; software is a means to an end for them, just like with “traditional” companies.

                                                    Their main pillar is hardware, and they are trying to shift to services. The software they ship on their hardware isn’t great (basically living off the NeXT inheritance from 20 years ago), and from what can be seen of their services, it isn’t great there either.

                                                    1. 3

                                                      The problem is that there are no true Scotsmen: no company is a software company. Facebook is an ad broker. Netflix is a media channel. Microsoft is a licensing company. Red Hat is a training company. It just happens that they each use software quite a bit in delivering their “true” business, just like Apple.

                                                      1. 4

                                                        Yeah, shipping the 2nd most popular desktop OS, mobile OS, and web browser is pretty trivial. Any “real” software company could do it. All the tech has been there for 20 years, after all.

                                                        1. 2

                                                          iCloud had a rough start (and even more so its predecessors .Mac, MobileMe, etc.) but it seems mostly rock-solid today and has an astronomical amount of traffic. A billion and a half active devices, I believe, with a large proportion using multiple iCloud services all day every day. I’m not saying Apple doesn’t have room for improvements in services, but “Apple is bad at services” is just a decade old meme at this point, IMO.

                                                      1. 5

                                                        @hwayne: Is there a way I could pay you some amount of money for these essays? Or would you be willing to put together some swag or the like?

                                                        1. 3

                                                          Thanks for the kind offer! The easiest way to throw money my way would be to buy a copy of my newsletter archives. As a bonus, you get 300 pages of software essays!

                                                          1. 1

                                                            I would buy this if I could put it on my physical shelf without figuring out how to run my printer. I don’t know if paper publishing is easy to do, once you have a virtual book.

                                                            1. 1

                                                              Yes; “print on demand” is the phrase. Not endorsing either, but both Lulu and Amazon offer it.

                                                        1. 1

                                                          I admittedly don’t give a shit about R, but this is a very interesting part to me:

                                                          However, the Apple silicon platform uses a different application binary interface (ABI) which GFortran does not support, yet.

                                                          Does this mean that the ABI for core Apple libs is different? That seems expected if you’re switching to a whole new arch. Or do they mean that something like the calling convention is different? I’m super interested in the differences here.

                                                          1. 1

                                                            I have no expertise on the platform, but I did find in some Apple docs a reference to the C++ ABI now matching that of iOS: https://developer.apple.com/documentation/xcode/writing_arm64_code_for_apple_platforms#//apple_ref/doc/uid/TP40009020-SW1 (which itself makes reference to developer.arm.com, so changing ABI is likely not a decision made by Apple alone).

                                                            1. 9

                                                              Most of those look pretty much like the 64-bit Arm PCS. I presume that Apple is using the same ABI for AArch64 macOS as iOS. The main way that I’m aware that this differs from the official one is in handling of variadic arguments. Apple’s variadic ABI is based on an older version of the Arm one, where all variadic arguments were passed on the stack. This is exactly the right thing to do for two reasons:

                                                              • Most variadic functions are thin wrappers around a version that takes a va_list, so anything other than passing them on the stack requires the caller to put them into registers and then the callee to spill them to the stack. This is much easier if the caller just sticks them on the stack in the first place.
                                                              • If all variadic arguments are contiguous on the stack, the generated code for va_arg is simpler. So much simpler that, in more complex implementations, va_start is often compiled to something that writes all of the arguments that are in registers into the stack.

                                                              As an added bonus, if you have CHERI, MTE, or Asan, you can trivially catch callees going past the last argument. This is exactly how variadics worked on the PDP-11 and i386, because all arguments were passed on the stack. In K&R C, you didn’t actually have variadics as a language feature, you just took the address of the last formal argument and kept walking up the stack.

                                                              The down side is that now your variadic and non-variadic calling conventions are different if your non-variadic convention passes any arguments in registers. That shouldn’t matter, because it’s undefined behaviour in C to call a function with the wrong calling convention. It did matter in practice because (when AArch64 was released, at least, possibly fixed now) some high-impact bits of software (the Perl and Python interpreters, at least) used a table of something like int(*)(int, ...) function pointers and didn’t bother casting them to the correct type before invoking them. They worked because, on most mainstream architectures, the variadic and non-variadic conventions happened to be the same for functions that take up to four integer-or-pointer arguments.

                                                              I am still sad that Arm made the (commercially correct) decision not to force people to fix their horrible code for AArch64.

                                                              I believe that the new Apple chips also support Arm’s pointer signing extension and so there are a bunch of features in the ABI related to that, which probably aren’t in GCC yet.

                                                              1. 1

                                                                It did matter in practice because (when AArch64 was released, at least, possibly fixed now) some high-impact bits of software (the Perl and Python interpreters, at least) used a table of something like int(*)(int, …) function pointers and didn’t bother casting them to the correct type before invoking them.

                                                                I think you just explained for me why Apple’s ObjC recently started demanding explicit casts of IMP (something like id(*)(id, SEL, …), which I’m aware you already know but readers may not).

                                                                1. 1

                                                                  I don’t think that should be a new thing. Back in the PowerPC days, there were a bunch of corner cases (particularly around things that involved floating-point arguments) where that cast was important. On 32-bit x86, if you called a function using the IMP type signature but it returned a float or double then it would leave the x87 floating point stack in an unbalanced state and lead to a difficult-to-debug crash later on.

                                                                  On Apple AArch64, however, you’re right that it’s a much bigger impact: all arguments other than self and _cmd will be corrupted if you call a method using the IMP signature.

                                                                  One of the breaking changes I’d like to make to Objective-C is adding a custom calling convention to IMP so that C functions that you want to use as IMPs have to be declared with __attribute__((objc_method)) or similar. It would take a few years of that being a compiler warning before code is migrated but once it’s done you have the freedom to make the Objective-C calling convention diverge from the C ones.

                                                          1. 11

                                                            I like Apple hardware a lot, and I know all of the standard this-is-why-it-is-that-way reasoning. But it’s wild that the new MacBook Pros only have two USB-C ports and can’t be upgraded past 16GB of RAM.

                                                            1. 18

                                                              Worse yet, they have “secure boot”, where secure means they’ll only boot an OS signed by Apple.

                                                              These aren’t computers. They are Appleances.

                                                              Prepare for DRM-enforced planned obsolescence.

                                                              1. 9

                                                                I would be very surprised if that turned out to be the case. In recent years Apple has been advertising the MacBook Pro to developers, and I find it unlikely they would choose not to support things like Boot Camp or running Linux based OSs. Like most security features, secure boot is likely to annoy a small segment of users who could probably just disable it. A relevant precedent is the addition of System Integrity Protection, which can be disabled with minor difficulty. Most UEFI PCs (to my knowledge) have secure boot enabled by default already.

                                                                Personally, I’ve needed to disable SIP once or twice but I can never bring myself to leave it disabled, even though I lived without it for years. I hope my experience with Secure Boot will be similar if I ever get one of these new computers.

                                                                1. 12

                                                                  Boot Camp

                                                                  Probably a tangent, but I’m not sure how Boot Camp would fit into the picture here. ARM-based Windows is not freely available to buy, to my knowledge.

                                                                  1. 7

                                                                    Disclaimer: I work for Microsoft, but this is not based on any insider knowledge and is entirely speculation on my part.

                                                                    Back in the distant past, before Microsoft bought Connectix, there was a product called VirtualPC for Mac, an x86 emulator for PowerPC Macs (some of the code for this ended up in the x86 on Arm emulator on Windows and, I believe, on the Xbox 360 compatibility mode for Xbox One). Connectix bought OEM versions of Windows and sold a bundle of VirtualPC and a Windows version. I can see a few possible paths to something similar:

                                                                    • Apple releases a Boot Camp thing that can load *NIX, Microsoft releases a Windows for Macs version that is supported only on specific Boot Camp platforms. This seems fairly plausible if the number of Windows installs on Macs is high enough to justify the investment.
                                                                    • Apple becomes a Windows OEM and ships a Boot Camp + Windows bundle that is officially supported. I think Apple did this with the original Boot Camp because it was a way of de-risking Mac purchases for people: if they didn’t like OS X, they had a clean migration path away. This seems much less likely now.
                                                                    • Apple’s new Macs conform to one of the new Arm platform specifications that, like PREP and CHRP for PowerPC, standardise enough of the base platform that it’s possible to release a single OS image that can run on any machine. Microsoft could then release a version of Windows that runs on any such Arm machine.

                                                                    The likelihood of any of these depends a bit on the economics. In the past, Apple has made a lot of money on Macs and doesn’t actually care if you run *NIX or Windows on them because anyone running Windows on a Mac is still a large profit-making sale. This is far less true with iOS devices, where a big chunk of their revenue comes from other services (And their 30% cut on all App Store sales). If the new Macs are tied more closely to other Apple services, they may wish to discourage people from running another OS. Supporting other operating systems is not free: it increases their testing burden and means that they’ll have to handle support calls from people who managed to screw up their system with some other OS.

                                                                    1. 2

                                                                      Apple’s new Macs conform to one of the new Arm platform specifications

                                                                      We already definitely know they use their own device trees, no ACPI sadly.

                                                                      Supporting other operating systems is not free

                                                                      Yeah, this is why they really won’t help with running other OS on bare metal, their answer to “I want other OS” is virtualization.

                                                                      They showed a demo (on the previous presentation) of virtualizing amd64 Windows. I suppose a native aarch64 Windows VM would run too.

                                                                    2. 2

                                                                      ARM-based Windows is available for free as .vhdx VM images if you sign up for the Windows Insider Program, at least

                                                                    3. 9

                                                                      In the previous Apple Silicon presentation, they showed virtualization (with of-course-not-native Windows and who-knows-what-arch Debian, but I suspect both native aarch64 and emulated amd64 VMs would be available). That is their offer to developers. Of course nothing about running alternative OS on bare metal was shown.

                                                                      Even if secure boot can be disabled (likely – “reduced security” mode is already mentioned in the docs), the support in Linux would require lots of effort. Seems like the iPhone 7 port actually managed to get storage, display, touch, Wi-Fi and Bluetooth working. But of course no GPU because there’s still no open PowerVR driver. And there’s not going to be an Apple GPU driver for a loooong time for sure.

                                                                      1. 2

                                                                        I think dual-booting has always been a less-than-desirable “misfeature” from Apple’s POV. Their whole raison d’être is to offer an integrated experience where the OS, hardware, and (locked-down) app ecosystem all work together closely. Rip out any one of those and the whole edifice starts to tumble.

                                                                        So now they have a brand-new hardware platform with an expanded trusted base, so why not use it to protect their customers from “bad ideas” like disabling secure boot or side-loading apps? Again, from their perspective they’re not doing anything wrong, or hostile to users; they’re just deciding what is and isn’t a “safe” use of the product.

                                                                        I for one would be completely unsurprised to discover that the new Apple Silicon boxes were effectively just as locked down as their iOS cousins. You know, for safety.

                                                                        1. 3

                                                                          They’re definitely not blocking downloading apps. Federighi even mentioned universal binaries “downloaded from the web”. Of course you can compile and run any programs. In fact we know you can load unsigned kexts.

                                                                          Reboot your Mac with Apple silicon into Recovery mode. Set the security level to Reduced security.

                                                                          Remains to be seen whether that setting allows it to boot any unsigned kernel, but I wouldn’t just assume it doesn’t.

                                                                          1. 4

                                                                            They also went into some detail at WWDC about this, saying that the new Macs will be able to run code in the same contexts existing ones can. The message they want to give is “don’t be afraid of your existing workflow breaking when we change CPU”, so tightening the gatekeeper screws alongside the architecture shift is off the cards.

                                                                          2. 2

I think dual-booting has always been a less-than-desirable “misfeature” from Apple’s POV. Their whole raison d’être is to offer an integrated experience where the OS, hardware, and (locked-down) app ecosystem all work together closely. Rip out any one of those and the whole edifice starts to tumble.

                                                                            For most consumers, buying their first Mac is a high-risk endeavour. It’s a very expensive machine and it doesn’t run any of their existing binaries (especially since they broke Wine with Catalina). Supporting dual boot is Apple’s way of reducing that risk. If you aren’t 100% sure that you’ll like macOS, there’s a migration path away from it that doesn’t involve throwing away the machine: just install Windows and use it like your old machine. Apple doesn’t want you to do that, but by giving you the option of doing it they overcome some of the initial resistance of people switching.

                                                                            1. 7

                                                                              The context has switched, though.

                                                                              Before, many prospective buyers of Macs used Windows, or needed Windows apps for their jobs.

                                                                              Now, many more prospective buyers of Macs use iPhones and other iOS devices.

                                                                              The value proposition of “this Mac runs iOS apps” is now much larger than the value proposition of “you can run Windows on this Mac”.

                                                                              1. 2

                                                                                There’s certainly some truth to that but I would imagine that most iOS users who buy Macs are doing so because iOS doesn’t do everything that they need. For example, the iPad version of PowerPoint is fine for presenting slides but is pretty useless for serious editing. There are probably a lot of other apps where the iOS version is quite cut down and is fine for a small device but is not sufficient for all purposes.

In terms of functionality, there isn’t much difference between macOS and Windows these days, but the UIs are pretty different and both are very different from iOS. There’s still some risk for someone who is happy with iOS on the phone and Windows on the laptop buying a Mac, even if it can run all of their iOS apps. There’s a much bigger psychological barrier for someone who is not particularly computer literate moving to something new, even if it’s quite similar to something they’re more-or-less used to. There are still vastly more Windows users than iOS users, though it’s not clear how many of those are thinking about buying Macs.

                                                                                1. 2

                                                                                  There are still vastly more Windows users than iOS users, though it’s not clear how many of those are thinking about buying Macs.

                                                                                  Not really arguing here, I’m sure you’re right, but how many of those Windows users choose to use Windows, as opposed to having to use it for work?

                                                                                  1. 1

I don’t think it matters very much. I remember trying to convince people to switch from MS Office ‘97 to OpenOffice around 2002; the two were incredibly similar back then, but people were very nervous about the switch. Novell did some experiments just replacing the Office shortcuts with OpenOffice and found most people didn’t notice at all, but the same people were very resistant to switching if you offered them the choice.

                                                                          3. 1

That “developer” might mean Apple developers.

                                                                          4. 3

                                                                            Here is the source of truth from WWDC 2020 about the new boot architecture.

                                                                            1. 2

People claimed the same thing about T2-equipped Intel Macs.

On the T2 Intels at least, the OS verification can be disabled. The main reason you can’t just install e.g. Linux on a T2 Mac is the lack of support for the SSD (which is managed by the T2 itself). Even stuff like ESXi can be used on T2 Macs - you just can’t use the built-in SSD.

That’s not to say it’s impossible that they’ve added stricter boot requirements, but I’d wager that, like the other security enhancements in Macs that cause some to clutch their pearls, this too can probably be disabled.

                                                                            2. 10

                                                                              … This is the Intel model it replaces: https://support.apple.com/kb/SP818?viewlocale=en_US&locale=en_US

                                                                              Two TB3/USB-C ports; Max 16GB RAM;

                                                                              It’s essentially the same laptop, but with a non-intel CPU/iGPU, and with USB4 as a bonus.

                                                                              1. 1

                                                                                Fair point! Toggling between “M1” and “Intel” on the product page flips between 2 ports/4 ports and 16GB RAM/max 32GB RAM, and it’s not clear this is a base model/higher tier toggle. I still think this is pretty stingy, but you’re right – it’s not a new change.

                                                                              2. 5

                                                                                These seem like replacements for the base model 13” MBP, which had similar limitations. Of course, it becomes awkward that the base model now has a much, much better CPU/IGP than the higher-end models.

                                                                                1. 2

                                                                                  I assume this is just a “phase 1” type thing. They will probably roll out additional options when their A15 (or whatever their next cpu model is named) ships down the road. Apple has a tendency to be a bit miserly (or conservative, depending on your take) at first, and then the next version looks that much better when it rolls around.

                                                                                  1. 2

                                                                                    Yeah, they said the transition would take ~2 years, so I assume they’ll slowly go up the stack. I expect the iMacs and 13-16” MacBook Pros to be refreshed next.

                                                                                    1. 3

Indeed. Could be they wanted to make the new models a bit “developer puny” to keep from cannibalizing the more expensive units (higher-end Mac Pros, iMacs) until they have the next rev of CPU ready or something. Who knows the amount of marketing/portfolio wrangling that goes on behind the scenes to suss out timings for stuff like this (billion-dollar industries), in order to try to hit projected quarterly earnings for a few quarters down the road.

                                                                                      1. 5

I think this is exactly right. Developers have never been a core demographic for Apple to sell to - it’s almost accidental that OS X being a great Unix desktop, coupled with software developers’ higher incomes, made Macs so popular with developers (iOS being an income gold mine helped too, of course).

But if you’re launching a new product, you look at what you’re selling the most of (iPads and MacBook Airs) and you iterate on that.

                                                                                        Plus, what developer in their right mind would trust their livelihood to a 1.0 release?!

                                                                                        1. 9

                                                                                          I think part of the strategy is that they’d rather launch a series of increasingly powerful chips, instead of starting with the most powerful and working their way down - makes for far better presentations. “50% faster!” looks better than “$100 cheaper! (oh, and 30% slower)”.

                                                                                          1. 2

                                                                                            It also means that they can buy more time for some sort of form-factor update while having competent, if not ideal, machines for developers in-market. I was somewhat surprised at the immediate availability given that these are transition machines. This is likely due to the huge opportunity for lower-priced machines during the pandemic. It is prudent for Apple to get something out for this market right now since an end might be on the horizon.

I’ve seen comments about the Mini being released for this reason, but it’s much more likely that the Air is the product that this demographic will adopt. Desktop computers, even if we are more confined to our homes, have many downsides. Geeks are not always able to understand these, but they drive the online conversations. Fans in the Mini and MBP increase the thermal envelope, so they’ll likely be somewhat more favourable for devs and enthusiasts. It’s going to be really interesting to see what exists a year from now. It will be disappointing if at least some broader changes to the form factor and design aren’t introduced.

                                                                                          2. 1

                                                                                            Developers have never been a core demographic for Apple to sell to

                                                                                            While this may have been true once, it certainly isn’t anymore. The entire iPhone and iPad ecosystem is underpinned by developers who pretty much need a Mac and Xcode to get anything done. Apple knows that.

                                                                                            1. 2

                                                                                              Not only that, developers were key to switching throughout the 00s. That Unix shell convinced a lot of us, and we convinced a lot of friends.

                                                                                              1. 1

                                                                                                In the 00s, Apple was still an underdog. Now they rule the mobile space, their laptops are probably the only ones that make any money in the market, and “Wintel” is basically toast. Apple can afford to piss off most developers (the ones who like the Mac because it’s a nice Unix machine) if it believes doing so will make a better consumer product.

                                                                                                1. 2

                                                                                                  I’ll give you this; developers are not top priority for them. Casual users are still number one by a large margin.

                                                                                              2. 1

                                                                                                Some points

                                                                                                • Developers for iOS need Apple way more than Apple needs them
                                                                                                • You don’t need an ARM Mac to develop for ARM i-Devices
                                                                                                • For that tiny minority of developers who develop native macOS apps, Apple provided a transition hardware platform - not free, by the way.

                                                                                                As seen by this submission, Apple does the bare minimum to accommodate developers. They are certainly not prioritized.

                                                                                                1. 1

                                                                                                  I don’t really think it’s so one-sided towards developers - sure, developers do need to cater for iOS if they want good product outreach, but remember that Apple are also taking a 30% cut on everything in the iOS ecosystem and the margins on their cut will be excellent.

                                                                                            2. 2

                                                                                              higher end mac pros

Honestly, trepidatiously excited to see what kind of replacement Apple silicon has for the 28-core Xeon Mac Pro. It will either be a horrific nerfing or an incredible boon for high-performance computing.

                                                                                      2. 4

                                                                                        and can’t be upgraded past 16GB of RAM.

                                                                                        Note that RAM is part of the SoC. You can’t upgrade this afterwards. You must choose the correct amount at checkout.

                                                                                        1. 2

                                                                                          This is not new to the ARM models. Memory in Mac laptops, and often desktops, has not been expandable for some time.

                                                                                        2. 2

I really believe that most people (including me) don’t need more than two Thunderbolt 3 ports nowadays. You can get a WiFi or Bluetooth version of pretty much anything, and USB hubs solve the issue when you are at home with many peripherals.

                                                                                          Also, some Thunderbolt 3 displays can charge your laptop and act like a USB hub. They are usually quite expensive but really convenient (that’s what I used at work before COVID-19).

                                                                                          1. 4

It’s still pretty convenient to have the option of plugging in on the left or right based on where you are sitting, so it’s disappointing for that reason.

                                                                                            1. 4

                                                                                              I’m not convinced. A power adapter and a monitor will use up both ports, and AFAIK monitors that will also charge the device over Thunderbolt are pretty uncommon. Add an external hard drive for Time Machine backups, and now you’re juggling connections regularly rather than just leaving everything plugged in.

                                                                                              On my 4-port MacBook Pro, the power adapter, monitor, and hard drive account for 3 ports. My 4th is taken up with a wireless dongle for my keyboard. Whenever I want to connect my microphone for audio calls or a card reader for photos I have to disconnect something, and my experiences with USB-C hubs have shown them to be unreliable. I’m sure I could spend a hundred dollars and get a better hub – but if I’m spending $1500 on a laptop, I don’t think I should need to.

                                                                                              1. 2

                                                                                                and AFAIK monitors that will also charge the device over Thunderbolt are pretty uncommon

                                                                                                Also, many adapters that pass through power and have USB + a video connector of some sort only allow 4k@30Hz (such as Apple’s own USB-C adapters). Often the only way to get 4k@60Hz with a non-Thunderbolt screen is by using a dedicated USB-C DisplayPort Alt Mode adapter, which leaves only one USB-C port for everything else (power, any extra USB devices).

                                                                                            2. 1

                                                                                              I’ve been trying to get a Mac laptop with 32GB for years. It still doesn’t exist. But that’s not an ARM problem.

                                                                                              Update: Correction, 32GB is supported in Intel MBPs as of this past May. Another update: see the reply! I must have been ignoring the larger sizes.

                                                                                              1. 3

                                                                                                I think that link says that’s the first 13 inch MacBook Pro with 32GB RAM. I have a 15 inch MBP from mid-2018 with 32GB, so they’ve been around for a couple of years at least.

                                                                                                1. 1

                                                                                                  You can get 64GB on the 2020 MBP 16” and I think on the 2019, too.

                                                                                              1. 8

                                                                                                AMIGA replacement :)

                                                                                                1. 4

                                                                                                  A strong No. This is no Amiga replacement.

                                                                                                  If the idea was to rekindle the playfulness and exploration of that generation of computers, then the Raspberry has failed.

Neither the hardware front (due to complexity) nor the software front (if the operating system is Linux) is comparable in that regard.

                                                                                                  1. 7

                                                                                                    I’ve done an awful lot more exploration and programming with Linux than my Amiga. And I love my Amiga. But much of what we can do with them now, 30+ years later, is due to reverse engineering. They weren’t open systems. There isn’t a particularly viable programming environment on them OOTB.

                                                                                                    1. 4

                                                                                                      But much of what we can do with them now, 30+ years later, is due to reverse engineering. They weren’t open systems.

I beg to differ. See: http://amigadev.elowar.com/

                                                                                                      All this documentation has always been available. They also published the PCB schematics (they’re in the user manuals!).

What’s missing is the later AGA chipset documentation, which had to be reverse engineered, and of course AmigaOS’s source code and the internal designs of the custom chips.

                                                                                                      Unlike with current hardware, the Amiga custom chips had lowish transistor count, so it was not extremely hard to figure out how they worked in detail. Thus cycle-exact emulators (uae then winuae) and open hardware reimplementations (minimig and aoecs).

                                                                                                      1. 4

                                                                                                        Unlike with current hardware, the Amiga custom chips had lowish transistor count, so it was not extremely hard to figure out how they worked in detail.

                                                                                                        And they were correspondingly less powerful. The solution is not for modern computers to be handicapped by being forced to a low transistor count. The ultimate solution is open architectures. Meanwhile, the Pi as a platform is far from perfectly open, but there’s enough open about it (especially on the software side) that there’s plenty for enthusiasts to do.

                                                                                                        1. 2

                                                                                                          And they were correspondingly less powerful.

                                                                                                          In a meaningful way. You’d get to see the difference between fast code and slow code.

                                                                                                          The solution is not for modern computers to be handicapped by being forced to a low transistor count.

                                                                                                          The solution to what problem exactly? If the purpose is to understand and learn about computers, then the priorities are not the same.

                                                                                                          Meanwhile, the Pi as a platform is far from perfectly open,

                                                                                                          And thus fails at its original goal.

                                                                                                          (especially on the software side)

                                                                                                          Especially on the hardware side. The SoC peripherals, outside the CPU. Especially the GPU.

                                                                                                    2. 3

Nothing stops you from installing another OS on this board. I think 9front should just work on it, and that’s a plenty playful OS.

On the hardware front, the GPIO pins are still available, so while you might not be able to fiddle with the internals, you can access the outer world easily.

                                                                                                    3. 0

                                                                                                      Shame it’s running Unix, though.

                                                                                                      1. 3

                                                                                                        And the Ctrl key should be where the Caps Lock is.

                                                                                                      2. 1

                                                                                                        Now if we could figure out a way to get the form factor of the Amiga UI into a modern Linux system, I’d be so happy :).

                                                                                                        1. 1

                                                                                                          There is amiwm if your modern Linux system still uses X.org (mine does). There’s also amibian if you’ve got the ROMs, which you can legitimately obtain from Cloanto.

                                                                                                          1. 3

Careful: amiwm is not OSI/FSF-approved Open Source/Free Software.

                                                                                                            As for Cloanto, here’s my take: Please don’t feed companies that somehow own historical Amiga IP and are keeping it to themselves and exploiting it for profit.

This is especially annoying because of Cloanto’s empty promises of intent to open source, with no action on that front no matter how many years pass.

                                                                                                            Everything Amiga should be part of the public domain, in a sane world.

                                                                                                            My take for an Amiga today? There’s a few options for the hardware, including but not limited to:

                                                                                                            • FPGA-based open implementations (e.g. minimig or aoecs on miSTer hardware or ULX3S).
                                                                                                            • Old Amiga from second-hand market. A500 are particularly common and easy to obtain cheaply, while they can run most software and have plenty of expansions available, including modern open hardware accelerators.
• WinUAE, fs-uae or some other emulator on a powerful PC (not a Raspberry Pi: it cannot emulate at full speed with cycle accuracy). The software emulation option comes with a latency penalty even on the fastest computers.
                                                                                                            1. 1

                                                                                                              Everything Amiga should be part of the public domain, in a sane world.

                                                                                                              It will be, 70 years after the death of the authors, in my locality.

                                                                                                              There’s a few options for the hardware

Indeed, my own preference is the Apollo Vampire V4. I stream Amiga software development and we’re currently using an emulator for that; I’d prefer to switch to the Apollo, but there are some problems getting amivnc to work that I’m not qualified to fix. I’m in favour of AROS becoming a good, free way of running Amiga software. In practice a lot of “running Amiga software” means games, and a lot of games need the original, proprietary kickstart libraries.

                                                                                                              1. 1

                                                                                                                It will be, 70 years after the death of the authors, in my locality.

                                                                                                                Way too late, and that’s only if the source code isn’t just lost.

                                                                                                                Apollo Vampire V4

Completely closed, both software and hardware. It has proprietary extensions to the instruction set and the chipset, which could lead to vendor lock-in, as there are no alternative implementations of these. Unfortunate.

And it’s full of annoyances. To date, it is not even possible to use your kickstart of choice on the accelerators without running the one they embed first. They really want you to see their logos and such. There are other small things that make it not feel right. Full disclosure: I own a V500v2.

                                                                                                                My take is that we should focus on the oshw and open source software fronts.

                                                                                                                We should use/enhance the available open hardware such as TerribleFire’s accelerator boards, the minimig core, aoecs, tg68k, fx68k and such.

                                                                                                                We should rewrite, one piece at a time, all of AmigaOS’s rom modules, and the on-disk parts.

                                                                                                                Until we manage to get our shit together and do that, we’ll always be the laughing stock of the Atari community, which has emuTOS, FreeMiNT and a vibrant ecosystem of open source software and hardware.

                                                                                                                1. 1

                                                                                                                  Full disclosure: I own a V500v2.

The V4 is a different experience, evidently. The kickstart they embed is the open source AROS kickstart, and while CoffinOS has to remap the Hyperion kickstart after the AROS one has booted, it can do that without showing a Vampire logo should you wish (and you could do the same to boot to the Commodore kickstart/AmigaOS). And the software is a fork of AROS. I think it already mostly does implement the full ROM and on-disk OS, actually at a better level than the out-of-date upstream status page suggests.

                                                                                                                  1. 1

                                                                                                                    AROS is unfortunately not open source, by FSF/OSI definition or even Debian guidelines. Vampire’s ROM extensions and patches aren’t, either.

The only reason they bundle AROS with the standalone V4 is to sidestep the legal nightmare (ownership is disputed) of licensing the actual AmigaOS. End users can simply load AmigaOS themselves, and they generally do, as AROS isn’t a real alternative on the 68k/Amiga platform.

                                                                                                                    MiSTer or ULX3S development boards are, by the way, much cheaper than Vampire V4, and when loaded with the Open Hardware minimig/aoecs cores they will run existing software much faster than the old hardware, with great compatibility.

                                                                                                                    Personally I ordered a ULX3S (for Amiga unrelated reasons), and I will be getting an OSSC Pro once available (miSTer compatible).

                                                                                                                    1. 1

                                                                                                                      AROS is unfortunately not open source, by FSF/OSI definition or even Debian guidelines.

                                                                                                                      The AROS public license specifically hasn’t been approved by OSI, but that doesn’t mean that the license isn’t open source or free software. It’s the MPL with the word “Netscape” removed.

                                                                                                            2. 2

                                                                                                              Unfortunately, amiwm just takes care of the window decorations – everything inside them is still huge widgets with slow animations.

                                                                                                            3. 1

                                                                                                              Which Amiga UI are we talking about, Workbench 1.3 or all of the meh? :)

                                                                                                              https://thumbs.gfycat.com/TatteredEmptyAegeancat-mobile.mp4

                                                                                                              1. 2

                                                                                                                One man’s meh is another man’s treasure!

(Edit: the meh one’s where it’s at for me, but Workbench 1.3’s flat yet contrasting and space-efficient layout is IMHO better than pretty much any modern flat theme. Properly anti-aliased and coupled with a similarly low-latency widget set, it would beat any Arc-powered desktop.)

                                                                                                                1. 1

                                                                                                                  https://github.com/letoram/awb

The code does need some patches to handle X11/Wayland clients, and it is what it is quality-wise, but it did run on a first-gen rPi ;-)

                                                                                                                  1. 2

                                                                                                                    Oh I know it, I’ve fiddled with it quite a bit!

                                                                                                                    1. 1

                                                                                                                      Forgive me for doubting you :D

I did get a bit curious as to how much it would take to get an OK-ish live image onto the Pi 400, though, or to make a voxel-like VR version…

With the newer helper scripts, I think it’s just the really stupid stuff that might have a problem running due to the open source drivers, like the CRT glow-trails effect. That one is quite complicated because I wanted to get Vectrex-like screen effects: it takes a ring buffer of n client-supplied frames and rotates, samples, blurs and weight-mixes them. The lil’ Pi isn’t exactly a master of memory bandwidth.

                                                                                                                  2. 1

I recently changed my X pointer to the Kickstart 1.3 one, albeit scaled to 128x128, which helps me see the damn thing on a 4K display.

                                                                                                                  3. 1

                                                                                                                    Workbench 1.3

                                                                                                                    Nitpick: The video you’ve linked is a (poor) recreation of 1.0, not 1.3.

                                                                                                                    1. 3

                                                                                                                      You’re the recreation of 1.0!! :-p

                                                                                                                      Seriously though, what’s your trigger? 1.3 was the first to add the 3D drawers!

                                                                                                                      http://theamigamuseum.com/amiga-kickstart-workbench-os/workbench/workbench-1-3/

                                                                                                                      1. 1

                                                                                                                        The window titlebar appearance (waves, buttons).

                                                                                                                        On a second look, it does look like neither 1.0 nor 1.3. But certainly 1.x inspired, not 2+.

                                                                                                                2. 1

                                                                                                                  More like Acorn replacement, if used with RISC OS.

                                                                                                                1. 1

                                                                                                                  Native apps allow this too. For example on macOS AppKit, override -[NSText copy:] with an implementation that creates a pasteboard item with your custom content.
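
For illustration, here’s a minimal sketch of that approach (the class name and the custom pasteboard type are made up for the example, not taken from the comment above): an NSTextView subclass that intercepts the standard copy: action and writes both the plain text and a custom item to the general pasteboard.

    #import <AppKit/AppKit.h>

    // Hypothetical NSTextView subclass intercepting the standard copy: action.
    @interface CustomCopyTextView : NSTextView
    @end

    @implementation CustomCopyTextView

    - (void)copy:(id)sender
    {
        NSString *selected = [self.string substringWithRange:self.selectedRange];
        NSPasteboard *pb = [NSPasteboard generalPasteboard];
        [pb clearContents];
        // The usual plain-text copy...
        [pb setString:selected forType:NSPasteboardTypeString];
        // ...plus a custom representation under a made-up type identifier.
        [pb setString:@"my-custom-payload" forType:@"com.example.custom-copy"];
    }

    @end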

                                                                                                                  1. 2

All of these related words (typecast, stereotype, etc.) come from hot-metal printing, so that’s what I think of.

                                                                                                                    1. 2

As an OpenBSD observer but not-yet convert, the thing I find most off-putting about the setup on a laptop is editing byzantine config files to connect to wifi, like I’m on early-2000s Linux. Is there a “pull-down menu, discover visible networks, choose, enter key” GUI to make that more convenient?

                                                                                                                      1. 7
                                                                                                                        join WiFiHome wpakey secretSupersecret
                                                                                                                        join WiFiWork wpakey lesssecret
                                                                                                                        dhcp
                                                                                                                        

                                                                                                                        Seems pretty simple to me :P

                                                                                                                        It’s also all done via ifconfig. One single command to manage network interfaces.
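
For example, the one-off equivalent from a shell looks roughly like this (iwm0 is just a guess at the interface name; substitute whatever ifconfig shows on your machine, and note the config above uses join, the persistent auto-join list, instead):

    ifconfig iwm0 scan                                    # list visible networks
    ifconfig iwm0 nwid WiFiHome wpakey secretSupersecret  # associate with one of them
    dhclient iwm0                                         # pull a lease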

On Linux there is (was?): ip, iw, iwconfig, ifconfig, iwctl, iwd… probably others I can’t remember.

                                                                                                                        That complexity didn’t vanish, it’s just been hidden by NetworkManager.

                                                                                                                        1. 3

                                                                                                                          Having done this on macOS, Linux, and OpenBSD, I like OpenBSD’s setup the best for anything network related. It is well documented, and consistently works the way it should.

                                                                                                                          I would greatly prefer to use OpenBSD’s wifi setup to the mess that is NetworkManager/netplan/etc. Since I switched to Ubuntu 20.04, I’ve had no end of trouble with getting networking to work reliably, where it all just worked on OpenBSD on the same hardware. Sadly I need Ubuntu to run certain proprietary packages, so I’m stuck with it for the time being.

                                                                                                                          I think this is a really enjoyable aspect of OpenBSD – there is no “secret sauce”. Usually the config files you are editing fully define the behavior of whatever they configure, there isn’t some magical daemon snarfing things up and changing the system state behind the scenes (looking at you, NetworkManager, netplan, systemd-resolved, etc.).

That said, because OpenBSD’s tools tend to be well documented, simple, and consistent, they tend to be easy to wrap. I did this for mixerctl.

                                                                                                                        1. 12

I’m the original author, and would suggest removing the satire tag on the basis that this isn’t satire. Supposedly agile software teams aren’t self-organising; this article is a checklist of self-organisation indicators for a hypothetical team.

                                                                                                                          1. 4

                                                                                                                            Once again I think people are confusing chaos for anarchy. As a social anarchist I always find it irksome that whenever discussing politics with anyone, the first hour of the conversation is spent doing an undergrad political science course.

                                                                                                                            For this post, I think the philosophy tag would be much more apt.

                                                                                                                            1. 2

As every nerdy middle-schooler used to know, the Lawful/Chaotic and Good/Neutral/Evil axes are orthogonal.

                                                                                                                            2. 1

                                                                                                                              Management pay performance-related benefits like bonuses to the gang for the gang’s collective output, not to individuals.

                                                                                                                              Does the team then divide them evenly or based on contribution as measured somehow by the team itself? If the former, what do you do if there is a large and obvious difference in value/output among the members?

                                                                                                                              1. 3

Let the team choose?

Another way I could see it: if you are someone who looks to contribute a lot and expects more in return, then you are free to do so, and perhaps you should associate with people with similar expectations. If you contribute more than another member and you are fine with that, then what’s the matter?

                                                                                                                                One of the common point of view from anarchism / self-organization is:

                                                                                                                                From each according to their ability, to each according to their need.

                                                                                                                                1. 3

                                                                                                                                  Would it be a self-organising team if we told them what rules to follow?

                                                                                                                                  1. 1

                                                                                                                                    Your whole article is a list of rules to follow.

                                                                                                                                    1. 1

                                                                                                                                      No it isn’t. Feel welcome not to follow them and not to treat them as rules: I didn’t when writing them (which is why “rules” and “follow” and their synonyms aren’t in the original post).

                                                                                                                              1. 10

                                                                                                                                Oof, the software engineering stack exchange always makes me so sad. Anyway!

                                                                                                                                Exceptions are, in essence, sophisticated GOTO statements

What everybody missed and you picked up on is that both exceptions and GOTO are language-dependent constructs. The original “Go to considered harmful” paper was dealing with GOTOs that could jump from anywhere in a program to anywhere else in the program, like from the middle of one function to the middle of a different function. The GOTOs we have now are much more limited and so much more useful. Similarly, what you can do with exceptions depends on the language you are in. Eiffel exceptions, for example, force you to clean up the state and retry if you want to continue the program flow vs., say, halting with an error.

                                                                                                                                1. 3

                                                                                                                                  Oof, the software engineering stack exchange always makes me so sad.

                                                                                                                                  You can’t leave us hanging like that.

                                                                                                                                  1. 3

                                                                                                                                    Their treatment of formal methods was so bad I wrote a 6,000 word post on why they are wrong, and one of the “hottest” questions this month was “what do you do if Agile fails in your specific circumstance”, and the top answer is “don’t be in that circumstance.”

                                                                                                                                    1. 1

                                                                                                                                      From SO:

                                                                                                                                      TDD & BDD are approaches built on the principles of Hoare logic, a foundation of formal approaches. They improve efficiency not detract from it.

                                                                                                                                      I. Cannot.

                                                                                                                                    2. 2

                                                                                                                                      I can’t speak for @hwayne but the problems I have include that you aren’t allowed to discuss software engineering on it. Many questions where people have asked how to design computer-based solutions to problems (you know, engineering software) are ignored or downvoted. Questions I’ve asked on things like software licensing (which is definitely about the social applications of technical solutions, or “engineering”) or professional ethics (a hallmark of professional disciplines, including engineering) have been closed as off-topic. And the stack overflow ontology in which questions can be unambiguously and correctly answered by providing the appropriate fact (which can be correctly identified by popularity contest) doesn’t match the problem domain, so in many situations two answers which are mutually incompatible will be given and receive upvotes, with no-one taking the obvious next step of synthesising something useful from them.

                                                                                                                                  1. 3

                                                                                                                                    Any time you read of a new methodology that includes a new buzzword in place of “team”, you know somebody’s trying to sell you some snake oil.

                                                                                                                                    1. 4

                                                                                                                                      This “new buzzword” came from existing literature on self-organising workforces. The methodology is not “new”, it’s written in the manifesto for agile software development. And no “selling” is implied, though I’ll definitely take your money if you want.

                                                                                                                                    1. 3

hell yah, anarcho-syndicalist software teams! Two points I want to touch on:

                                                                                                                                      Specialization

                                                                                                                                      #5 seems like a really good idea, but effectively limits the type of people who can work in this environment. Now, to work on this team, you need to be able to be a good union rep, a good people manager, and a crack software dev. People like to say that anyone can be a manager, but it is a set of skills that one must hone, just like being a dev. Asking people to hone both sets will lead to them being less sharp in each.

                                                                                                                                      (I’m ignoring union rep because I have no experience with that, but there’s a similar issue I’d imagine.)

                                                                                                                                      Contracts

If I could make an addition to these rules, I would say that the gang has to set up and sign a contract when they are hired by management. Note, this is not their contract with management, but a contract with each other. The contract would need to settle a bunch of the “fill-in-the-gaps” questions that get asked about this stuff. For a couple of examples:

                                                                                                                                      • What do we define as wide enough consensus on an issue?
                                                                                                                                      • How do we enforce #4 while maintaining autonomy?
                                                                                                                                      • What is the recourse for violations of these rules?
                                                                                                                                      • etc…

                                                                                                                                      These contracts would have to be re-drafted and signed between each project, though some reuse could obviously occur.

                                                                                                                                      1. 1

                                                                                                                                        These contracts would have to be re-drafted and signed between each project

                                                                                                                                         I would be very glad to form a new workers’ co-op for each project I start. Much of the sunk-cost fallacy comes from still being in the company that sank the cost.

                                                                                                                                        1. 1

                                                                                                                                          I would be very glad to form a new workers’ co-op for each project I start.

                                                                                                                                          But going back to my first point, are we requiring our programmers to now have contract negotiation skills just to make sure their needs are met?

                                                                                                                                          1. 2

                                                                                                                                             If you lack the ability to negotiate over consensus, community norm enforcement, etc., then you can only really have your needs met by lucking into a community that happens to match what you want.

                                                                                                                                            1. 2

                                                                                                                                               Wait, don’t employees already either negotiate their contracts or end up with unknown and arbitrary terms?

                                                                                                                                            2. 0

                                                                                                                                              Add lawyers too.

                                                                                                                                              There should be electroshock collars on the new hires too