1. 2

    That’s probably the main downside of AOT-compiled languages: you have to decide to inline or not during compilation, not during run time, and you can’t inline after dynamic linking. And function calls are relatively expensive.

    I’m curious: are there examples of JIT compilers that can also monomorphize at run time? Probably JITs for dynamic languages do almost that, but are there JITs for static languages that do monomorphization?

    1. 3

      JVM probably does it.

      Dart and TypeScript pretend to have types at “compile” time, and then run in a dynamically-typed VM with hidden classes optimization.

      1. 3

        Note that inlining itself is, in some sense, an AOT-specific concept. In a JIT, you don’t need to care about function boundaries at all; you can do a tracing JIT instead.

        The TL;DR is that you observe a program at runtime and identify a runtime loop: a sequence of instructions that is repeatedly executed. You then compile this loop as a whole. The loop can span multiple source functions/runtime function calls. In each function, the loop includes only the hot-path parts.

        So, a powerful JIT can transparently tear through arbitrarily many layers of dynamic dispatch; the code is fully monomorphized in terms of instruction sequence.

        What a JIT can’t do transparently, without help from source-level semantics, is optimize the data layout. If a Point is heap allocated, and a bunch of Points is stored in a HashMap, the JIT can’t magically specialize the map to store the points inline. The layout of data in memory has to be fixed, as it must be compatible with non-optimized code. The exception here is that, when an object doesn’t escape, the JIT might first stack-allocate it, and then apply scalar replacement of aggregates.
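        To make the layout point concrete, here’s a C++ sketch (C++ chosen purely for illustration) of the difference between the “boxed” layout a JIT is stuck with and the inline layout an AOT compiler can choose:

        ```cpp
        #include <cassert>
        #include <memory>
        #include <vector>

        struct Point { double x, y; };

        int main() {
            // Inline layout (what monomorphized AOT code can pick): Points are
            // stored contiguously, one allocation, no pointer chasing.
            std::vector<Point> inline_points(3, Point{1.0, 2.0});
            assert(&inline_points[1] == &inline_points[0] + 1); // adjacent in memory

            // Boxed layout (what the JIT must keep): the container holds pointers
            // and every Point is a separate heap object, because unoptimized code
            // elsewhere may already hold references to those boxes.
            std::vector<std::unique_ptr<Point>> boxed_points;
            for (int i = 0; i < 3; ++i)
                boxed_points.push_back(std::make_unique<Point>(Point{1.0, 2.0}));
            assert(boxed_points[1]->x == 1.0); // one extra indirection per access
        }
        ```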

        1. 1

          The exception here is that, when an object doesn’t escape, JIT might first stack-allocate it, and then apply scalar replacement of aggregates.

          JITs can be a bit more clever with escape analysis: they don’t have to prove that an object never escapes in order to deconstruct its parts, they just have to make sure that any deconstruction of an object is never visible to the outside world. In other words, one can deconstruct an object temporarily provided it’s reconstructed at exit points from the JIT compiled code.
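          A hand-written C++ analogue of that trick (illustrative only; a real JIT does this on its IR, not on source code): the object’s fields live in locals/registers for the duration of the hot code, and the object is only reconstructed at the exit point, where outside code could observe it.

          ```cpp
          #include <cassert>

          struct Point { double x, y; };

          // Sketch of scalar replacement: the "JIT" keeps p's fields in locals
          // (i.e. registers) for the whole hot loop...
          double hot_loop(Point p, int n, Point* out) {
              double px = p.x, py = p.y;   // deconstruct the object
              double sum = 0.0;
              for (int i = 0; i < n; ++i) {
                  px += 1.0;               // field accesses become register ops
                  sum += px * py;
              }
              // ...and reconstructs the object only at the exit point, where
              // unoptimized code might observe it.
              *out = Point{px, py};
              return sum;
          }

          int main() {
              Point out{};
              double sum = hot_loop(Point{0.0, 2.0}, 3, &out);
              assert(sum == 12.0);                  // (1 + 2 + 3) * 2
              assert(out.x == 3.0 && out.y == 2.0); // object reconstructed at exit
          }
          ```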

          1. 1

            For dynamic language JIT compilers—e.g. LuaJIT—the compiler has to insert type check guards into the compiled code, right? And other invariant checks. How much do these guards typically cost?

            I can imagine how a large runtime loop (100s of instructions) could place guards only at the entry point, leaving the bulk of the compiled section guard-free. I can also imagine eliding guards if the compiler can somehow prove a variable can only ever be one type. But for dynamic languages like Lua it could be too hard to perform meaningful global analysis in that way. If you have any insight I’d appreciate it, I’m just speculating.

            1. 2

              I am not really an expert, but here’s my understanding.

              Language dynamism is orthogonal to AOT/JIT and inlining. JITs for dynamic languages do need more deoptimization guards. The guards themselves are pretty cheap: they are trivial predicated branches.

              As usual, what kills you is not the code, it’s data layout in memory. In a static language, an object is a bunch of fields packed tightly in memory, in a dynamic language, a general object is some kind of hash-map. Optimizing those HashMap lookups to direct accesses via offset is where major performance gains/losses are.
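              Here’s a rough C++ model of what a compiled trace does (all names invented for illustration): one cheap shape guard, then direct offset loads instead of hash lookups, with a fall back to the generic path when the guard fails.

              ```cpp
              #include <cassert>
              #include <string>
              #include <unordered_map>

              // Unoptimized model of a dynamic-language object: fields in a hash map.
              using DynObject = std::unordered_map<std::string, double>;

              // The fixed layout ("hidden class") the trace was compiled against.
              struct PointShape { double x, y; };

              double slow_path(const DynObject& o) {
                  return o.at("x") + o.at("y");          // two hash lookups
              }

              // The compiled trace: one cheap guard (a predictable branch), then
              // direct loads at fixed offsets instead of hash lookups.
              double compiled_trace(int shape_id, const void* obj, const DynObject& fallback) {
                  constexpr int kPointShapeId = 1;
                  if (shape_id != kPointShapeId)         // the guard
                      return slow_path(fallback);        // deoptimize to the generic path
                  const auto* p = static_cast<const PointShape*>(obj);
                  return p->x + p->y;                    // direct offset loads
              }

              int main() {
                  PointShape p{1.5, 2.5};
                  DynObject dyn{{"x", 1.5}, {"y", 2.5}};
                  assert(compiled_trace(1, &p, dyn) == 4.0);      // guard passes
                  assert(compiled_trace(2, nullptr, dyn) == 4.0); // guard fails, slow path
              }
              ```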

          2. 2

            I believe C#’s CLR does this: it acts like Java at compile time, but then monomorphizes generics at run time.

            1. 1

              .NET generics use monomorphization for value types and a shared instantiation for reference types. .NET Generics Under the Hood shows some of the implementation details.

            2. 2

              To alleviate this, I think Clang has a feature to profile a program and to use that information to guide the optimisation when you compile it again. This is used by Apple when building Clang itself.

            1. 1

              I know Rust hasn’t gotten around to ABI stability yet, but when it does, inline functions exposed from a shared library will be problematic. Since the function gets compiled into the dependent code, changing it in the library and swapping in the newer library (without rebuilding the dependent code) still leaves obsolete instances of the inline in the dependent code, which can easily cause awful and hard-to-diagnose bugs. (Obsolete struct member offsets, struct sizes, vtable indices…)

              For comparison, Swift, which did recently gain ABI stability in 5.1, has some special annotations and rules covering module-public inline functions.
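              The “obsolete struct member offsets” failure mode can be sketched in C++ (struct names invented): an inline accessor compiled into the dependent binary bakes in the old offset, which silently points at the wrong field once the library’s struct changes.

              ```cpp
              #include <cassert>
              #include <cstddef>

              // v1 of a shared library's public struct.
              struct WidgetV1 { int flags; int size; };

              // v2 inserts a field, silently moving `size`.
              struct WidgetV2 { int flags; int version; int size; };

              int main() {
                  // An inline getter compiled into *dependent* code bakes v1's field
                  // offset into the caller. Swap in the v2 library without rebuilding
                  // the caller, and that baked-in offset now points at `version`.
                  assert(offsetof(WidgetV1, size) != offsetof(WidgetV2, size));
              }
              ```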

              1. 4

                The main problem with library boundaries is not inlined methods but the heavy use of polymorphism (without dyn) in most Rust code, because polymorphism is easily accessible and static-dispatch by default. C++ has this issue too (there are even “header-only libraries”), despite virtual methods being dynamic-dispatch only. Swift probably inherited Objective-C’s tradition of heavy use of dynamic dispatch.

                Some libraries intentionally limit use of static-dispatch polymorphism, for example Bevy game framework stated it as one of its distinguishing features (however the main concern there is compilation speed, not library updates).

                1. 8

                  Swift probably inherited Objective C’s tradition of heavy use of dynamic dispatch.

                  Not really: Swift uses compiler heroics to blur the boundary between static and dynamic approaches to polymorphism. Things are passed around without boxing but still allow for separate compilation and ABI stability. Highly recommend

                  1. 3

                    Across ABI boundary it’s still dynamic dispatch. It’s “sized” only because their equivalent of trait objects has methods for querying size and copying.

                    1. 2

                      Hm, I think it’s more of a “pick your guarantees” situation:

                      • for public ABI, there are attributes to control the tradeoff between dynamism and resilience to ABI changes
                      • for internal ABIs (when you compile things separately, but in the same compilation session) the compiler is allowed, but not required, to transparently specialize calls across compilation unit boundaries.
                2. 2

                  An interesting case study here is Zig’s self-hosted compiler. By merging the compiler and linker, it already allows partial recompilation inside one compilation unit, including inlined calls.

                1. 1

                  in C++ the keyword const does not completely refer to immutability. For instance, you can use the keyword const in a function prototype to indicate you won’t modify it, but you can pass a mutable object as this parameter.

                  I don’t know C++ well enough, but doesn’t const make the object itself immutable, not only the variable holding it? Unlike most languages, e.g. JavaScript, where const only makes the variable constant, not its value. I.e. you can’t call non-const methods on the object and you can’t modify its fields. At least if it’s not a pointer to an object; for pointers it seems more complicated. I thought this works almost the same way as in Rust, where you can’t mutate through non-mut references.

                  1. 7

                    I don’t know C++ well enough, but doesn’t const make the object itself immutable, not only the variable holding it?

                    It’s C++, so the answer to any question is ‘it’s more complicated than that’. The short answer is that a const reference in C++ cannot be used to modify the object, except when it can.

                    The fact that the this parameter in C++ is implicit makes this a bit difficult to understand. Consider this in C++:

                    struct Foo
                    {
                       void doAThing();
                    };
                    

                    This is really a way of writing something like:

                    void doAThing(Foo *this);
                    

                    Note that this is not const-qualified and so you cannot implicitly cast from a const Foo* to a Foo*. Because this is implicit, C++ doesn’t let you put qualifiers on it, so you need to write them on the method instead:

                    struct Foo
                    {
                       void doAThing() const;
                    };
                    

                    This is equivalent to:

                    void doAThing(const Foo *this);
                    

                    Now this works with the same overload resolution rules as the rest of C++: You can call this method with a const Foo* or a Foo*, because const on a parameter just means that the method promises not to mutate the object via this reference. There are three important corner cases here. First, consider a method like this:

                    struct Foo
                    {
                       void doAThing(Foo *other) const;
                    };
                    

                    You can call this like this:

                    Foo f;
                    const Foo *g = &f;
                    g->doAThing(&f);
                    

                    Now the method has two references to f. It can mutate the object through one but not the other. The second problem comes from the fact that const is advisory and you can cast it away. This means that it’s possible to write things in C++ like this:

                    struct Foo
                    {
                       void doAThing();
                       void doAThing() const
                       {
                         const_cast<Foo*>(this)->doAThing();
                       }
                    };
                    

                    The const method forwards to the non-const one, which can mutate the class (well, not this one because it has no state, but the same is valid in a real thing). The second variant of this is the keyword mutable. This is intended to allow C++ programmers to write logically immutable objects that have internal mutability. Here’s a trivial example:

                    struct Foo
                    {
                       mutable int x = 0;
                       void doAThing() const
                       {
                         x++;
                       }
                    };
                    

                    Now you can call doAThing with a const pointer but it will mutate the object. This is intended for things like internal caches. For example, clang’s AST needs to convert from C++ types to LLVM types. This is expensive to compute, so it’s done lazily. You pass around const references to the thing that does the transformation. Internally, it has a mutable field that caches prior conversions.

                    Finally, const does not do viewpoint adaptation, so just because you have a const pointer to an object does not make const transitive. This is therefore completely valid:

                    struct Bar
                    {
                        int x;
                    };
                    struct Foo
                    {
                      Bar *b;
                      void doAThing() const
                      {
                        b->x++;
                      }
                    };
                    

                    You can call this const method and it doesn’t modify any fields of the object, but it does modify an object that a field points to, which means it is logically modifying the state of the object.

                    All of this adds up to the fact that compilers can do basically nothing in terms of optimisation with const. The case referenced from the talk was of a global. Globals are more interesting because const for a global really does mean immutability: it will end up in the read-only data section of the binary and every copy of the program / library running will share the same physical memory pages, mapped read-only[1]. This is not necessarily deep immutability: a const global can contain pointers to non-const globals and those can be mutated.

                    In the specific example, that global was passed by reference and so determining that nothing mutated it required some inter-procedural alias analysis, which apparently was slightly deeper than the compiler could manage. If Jason had passed the sprite arrays as template parameters, rather than as pointers, he probably wouldn’t have needed const to get to the same output. For example, consider this toy example:

                    namespace 
                    {
                      int fib[] = {1, 1, 2, 3, 5};
                    }
                    
                    int f(int x)
                    {
                        return fib[x];
                    }
                    

                    The anonymous namespace means that nothing outside of this compilation unit can write to fib. The compiler can inspect every reference to it and trivially determine that nothing writes to it. It will then make fib immutable. Compiled with clang, I get this:

                            .type   _ZN12_GLOBAL__N_13fibE,@object  # @(anonymous namespace)::fib
                            .section        .rodata,"a",@progbits
                            .p2align        4
                    _ZN12_GLOBAL__N_13fibE:     # (anonymous namespace)::fib
                            .long   1                               # 0x1
                            .long   1                               # 0x1
                            .long   2                               # 0x2
                            .long   3                               # 0x3
                            .long   5                               # 0x5
                            .size   _ZN12_GLOBAL__N_13fibE, 20
                    

                    Note the .section .rodata bit: this says that the global is in the read-only data section, so it is immutable. That doesn’t make much difference, but the fact that the compiler could do this transform means that all other optimisations can depend on fib not being modified.

                    Explicitly marking the global as const means that the compiler doesn’t need to do that analysis: it can always assume that the global is immutable, because it’s UB to mutate a const object (and a compiler is free to assume UB doesn’t happen). You could pass a pointer to the global to another compilation unit that cast away the const and tried to mutate it, and on a typical OS that would then cause a trap. Remember this example the next time someone says compilers shouldn’t use UB for optimisations: if C/C++ compilers didn’t depend on UB for optimisation then they couldn’t do constant propagation from global constants without whole-program alias analysis.

                    For anything else, the guarantees that const provides are so weak that they’re useless. Generally, the compiler can either see all accesses to an object (in which case it can infer whether it’s mutated and get more accurate information than const) or it can’t see all accesses to an object (and so must assume that one of them may cast away const and mutate the object).

                    [1] On systems with MMUs. Sometimes the data needs to contain relocations, so it may actually be mutable unless you’ve linked with relro support.

                    1. 1

                      No, you might have D in mind where const is transitive.

                    1. 2

                      The 6502 is notoriously hostile to being a target for C compilers. The 65C816 tweaks it enough that it could theoretically have one, but so far the best one is ORCA/C.

                      1. 2

                        Yeah, Forth seems a better fit. I used FIG-Forth a lot on my Apple ][ back in the day. I think it was under 8KB of code for a fairly complete environment.

                        1. 1

                          Is there a good explanation somewhere of why it’s notoriously hostile to C compilers, for someone like myself who is not super familiar with the 6502?

                          1. 2
                            • C heavily uses the stack for local variables and function parameters, and the 6502 has no stack-relative addressing. Strictly speaking, use of a stack is not required by the C specification (it’s an implementation detail), but you still have to store local variables somewhere at dynamic addresses, and there’s no base + offset addressing on the 6502. Some languages designed for such 8-bit CPUs allocate fixed memory addresses for local variables, but this requires dropping support for recursion (and other things, like function pointers).
                            • The default type is int, which by specification has a minimum size of 16 bits, and the 6502 has no 16-bit operations. Signed 16-bit arithmetic is even trickier, and int is signed. (You can, however, use unsigned char in your code where possible.)

                            See also: Why do C to Z80 compilers produce poor code?. The Z80 is slightly more powerful than the 6502: it has the IX and IY registers for relative addressing, but the offsets are fixed at assembly time and these operations are slow (compilers tend to use these registers to access stack-based variables); it has 16-bit arithmetic, but it’s limited. So, mostly the same problems as on the 6502.
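                            The recursion point from the first bullet, as a toy C++ example (purely illustrative): each active call needs its own copy of the locals, which compilers normally address as frame pointer + offset, exactly the addressing mode the 6502 lacks. With locals at fixed static addresses, the inner call would clobber the outer call’s values.

                            ```cpp
                            #include <cassert>

                            // Each recursive activation needs its own `n` and `rest`; with locals
                            // at fixed static addresses, the inner call would clobber the outer one's.
                            int sum_to(int n) {
                                if (n == 0) return 0;
                                int rest = sum_to(n - 1); // `n` must survive this call
                                return n + rest;
                            }

                            int main() {
                                assert(sum_to(5) == 15);
                            }
                            ```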

                              1. 1

                                There was some discussion of how to support the 6502 on the LLVM list a few years ago. The suggestion was to not target the 6502 directly, but instead target SWEET16, which is a virtual ISA and emulator that Woz wrote as the target for Apple BASIC. SWEET16 has 16 16-bit registers and a full suite of 16-bit operations, implemented in about 300 bytes of 6502 assembly. It’s a lot slower to use than raw 6502 machine code, but it’s vaguely plausible as a target for something with a C-like abstract machine such as LLVM IR.

                          1. 3

                            I am not an electrical engineer, but shouldn’t there be some current-limiting resistors in front of the RPi’s GPIO pins? Also, those pins operate at 3.3V and are not 5V tolerant. Hardware debounce might be relevant as well.

                            1. 5

                              Input pins are in a high-impedance state, so normally they don’t need current-limiting resistors. However, current-limiting resistors on inputs might be used as a (non-recommended) hack to make non-5V-tolerant pins work with a 5V signal: usually the problem with such pins is the ESD protection that sinks 5V into the 3.3V power supply line, and a series resistor at the input pin limits that current. Probably it’s simpler here to just power the pedal from 3.3V.

                              Also, the pedal itself probably contains the necessary pull-up or pull-down resistors, as it has both power and ground wires. But the Button abstraction from that Python library turns on the built-in pull-up on its pin by default. It’s unclear how the whole system works. BTW, that library has software debounce, but it isn’t enabled by default.

                              1. 2

                                Thanks for details.

                            1. 2

                              Styling Tetris so that it’s now a game about stacking bodies in a mass grave isn’t decoration but design. Horrifying, but still a design.

                              Probably this is a reference to the 1996 game Monty Python & the Quest for the Holy Grail. It featured a Tetris-with-dead-bodies mini-game called “Drop Dead”.

                              1. 3

                                I’m not a fan of such laws. They make it impossible for e.g. a small car repair shop to make a website by themselves, for example in Microsoft Word. Even if they buy the website from professionals: one misplaced ARIA tag and you get a huge fine. Having a website becomes dangerous.

                                What will these car repair shops, cafes, and bodegas do? They will make an Instagram account instead of a website. This is much worse for people with disabilities than a static website made in Word: lots of the text will be in JPEGs.

                                1. 1

                                  I am torn about this sort of thing myself, and I’m a blind person. I mean, all laws come with an implicit threat of punishment for noncompliance. Right action done in order to comply with the law is coerced, not voluntary, and I have a problem with that. But Lobsters isn’t a place for that kind of philosophical discussion.

                                  I do believe that laws should apply differently based on the size of an organization. We’re talking about a large regional chain here, not a local small business.

                                1. 23

                                  This feels like a classic “don’t piss on my leg and tell me it’s raining” situation.

                                  What I don’t understand is why Google thinks that any other browser vendors would agree to implement this. The only reason they want to do it is that they’re an advertising company that happens to also produce a browser in order to sell more ads. No other browser vendor has this perverse incentive structure. (except kinda Brave, I guess)

                                  Are they just so used to their position of dominance that they assume everyone will do their bidding regardless of whether it’s a good idea or not?

                                  1. 11

                                    They don’t need to. 85-90% of people use Chrome. They can implement it, force it through, and then blame non-standards-conforming browsers when people’s sites have problems.

                                    1. 9

                                      when people’s sites have problems

                                      What are the problems though? The ads aren’t … targeted enough? Isn’t that a good thing?

                                      1. 3

                                        Sites will just block you if your browser doesn’t send the right reply, like they do now if you don’t press the button to allow cookies or ads or whatever.

                                        1. 2

                                          Right now most sites work if I block ads, tracking scripts, or whole third-party JS services (e.g. the ubiquitous “chat with sales”). Blocking these is more invasive than not supporting some tracking mechanism. Only the most perverse websites actively try to ban me for adblocking/script blocking, and such sites are quite rare (mostly in the “gadget news” and “gaming news” categories).

                                        2. 2

                                          I can imagine FLoC being used as an additional signal for services like reCAPTCHA. People whose browsers don’t provide these fingerprints could get locked out or antagonized by spam prevention systems, similar to people who use anti-tracking addons today.

                                    1. 5

                                      The problem is that most sites rely on ad revenue. If that does not change, the situation will not change. Web ads are only worth as much as they are because they are personalized.

                                      I think there needs to be a service/interface which transmits a small amount of money to the website owner per visit. The amount should be roughly what the ad company pays the website owner nowadays. No sign-ups per website. Currently, if I want to watch one video on YouTube ad-free or read an article behind a paywall, I need a full subscription. This would make the web user-oriented instead of ad-oriented. The problem here is that so many users (including me) are used to the fact that most of the content on the internet is free.

                                      1. 12

                                        This would make the web user oriented instead of ad oriented.

                                        Well, it would make the web wealthy-user oriented, anyway. Ad-supported models have the (in my opinion) highly desirable characteristic of not restricting access to information based on the viewer’s income level because ad revenue is aggregated across the entire user base.

                                        Discussions about moving toward an ad-free micropayments model, from what I’ve seen, generally assume that everyone has enough disposable income to replace their share of the ad money, but the Internet is global. A resource that is priced at a level such that a German user pays for it without a second thought may be prohibitively expensive to someone in rural Kenya trying to use the web on their cheap Android phone to educate themselves out of subsistence farming.

                                        1. 2

                                          Thanks for your response. I might want to add two thoughts:

                                          1. The same pricing mechanism also applies to ads (especially targeted ads). Companies pay according to the possible revenue of a future customer. Also like other online services there could be different prices, depending on your country (e.g. Netflix). Sadly this is not compatible with an anonymous service.
                                          2. This proposal would not be mandatory, but an alternative way to browse websites.
                                          1. 1

                                            /save

                                          2. 7

                                            It somehow worked in the 2000s, when ads were linked to page content, not to a dossier on the user.

                                            I doubt that current targeting technology works at all; I always see ads completely unrelated to my interests and needs.

                                            1. 1

                                              Content-targeted ads aren’t possible when the user visits e.g. facebook.com or some similar aggregator; it’s not possible to know what content is shown at a particular URL. So the site owner is the only one who can match ads to content, but then they need to know more about the user by tracking them on other sites.

                                              Content/URL targeted ads require that URL contents don’t change, basically, and that the content is accessible to the ad companies’ classifiers and scanners. This doesn’t work well with timeline based sites.

                                              1. 1

                                                The 2000s was a very wealth-based web. Many poorer households even in developed countries didn’t have computers at home, and having broadband was hit-or-miss. Developing countries only really embraced the internet en masse after mobile phones became cheap and ubiquitous.

                                              2. 4

                                                The received wisdom is that sites make all their money from advertising but I’d be interested to see numbers on what that looks like today.

                                                Sites (YouTube included, as you mentioned) are increasingly pushing subscription models and I suspect it is because ad revenue is actually not brilliant - unless you’re Google or Facebook, because you are providing the ads and you’ve virtually cornered the market between you.

                                                Certainly when I worked for a major newspaper, management were in the process of realising that online ads would not bring in the profits they wanted and could not replace subscription revenue.

                                                All that said, even if I’m right, not everyone will be easily convinced. There are sunk costs, advertisers who rely on the market, and no clear alternative business model for the web at the moment.

                                                Something like what you suggest might be the most palatable option.

                                                1. 4

                                                  I wish it were so but check the annual reports that Alphabet, Facebook, etc. file with the SEC. Ads account for an overwhelming proportion of their revenue.

                                                  There are good reasons for them to offer subscriptions as well. Maybe it’s a hedge against the ad bubble bursting, maybe it serves a particularly desirable group of consumers, maybe investors like it, or maybe it keeps the regulators away. I don’t know.

                                                  1. 3

                                                    Alphabet and Facebook are precisely the companies that @owent predicted would be making money from ads. They’re talking about everyone else: news sites, blogs, forums, etc. Are they getting much money from ads? Could they do better with subscriptions or another revenue model?

                                                    Personally, most of the sites I use are either paid for by their owners out of charity, vanity or self-interest or they are supported by subscriptions, or product sales. Exceptions include reddit (which is mostly a time-vampire anyway), youtube, stackoverflow (whose ads don’t seem that obnoxious), and search engines.

                                              1. 25

                                                I’m pretty sure people printed out and circulated “C, Algol, and the futility of replacing assemblers” posts in the 70’s. ;)

                                                1. 7

                                                  I remember lots of such posts even in the 2000s. Not exactly “the futility of replacing assemblers”, but “C and C++ won, but we should go back to assembler, because unnecessary abstractions cause bloat”. Even “Windows sucks because it’s written in C”. With the same arguments as today: “high-level languages are for those with poor skills; skilled programmers can write everything in assembler”.

                                                1. 2

                                                  It’s best to use the standalone desktop app, as the browser extensions tend to lose all subscriptions (happened to me too).

                                                  1. 1

                                                    Oh, thanks for letting us know. A shame that the desktop one doesn’t sync

                                                  1. 2

                                                    Rubocop has bad defaults (methods are limited to 10 lines max, really?) and no syntax for disabling checks for a block of code (so no exceptions are allowed).

                                                    1. 4

                                                      You may disable cops inline via magic comments.

                                                      # rubocop:disable Layout/LineLength, Style/StringLiterals
                                                      [...]
                                                      # rubocop:enable Layout/LineLength, Style/StringLiterals
                                                      

                                                      https://docs.rubocop.org/rubocop/configuration.html#disabling-cops-within-source-code

                                                      1. 2

                                                        Now, if Layout/LineLength is disabled in the config, it will end up enabled in the part of the file (or even multiple files?) after that closing comment. It’s not a proper “disable within a block” instruction.

                                                        1. 2

                                                          If you use a postfix comment like:

                                                          def some_method # rubocop:disable Layout/LineLength
                                                            […]
                                                          end
                                                          

                                                          or:

                                                          class SomeClass # rubocop:disable Layout/LineLength
                                                            […]
                                                          end
                                                          

                                                          it is block-specific. (Layout/LineLength is an ironic one given how postfixing these can make the lines quite long, but I’ve never faced that problem.)

                                                      2. 3

                                                        It does have magic comments to disable checks. I find that sort of annotation distasteful, but it’s there.

                                                        It has bad defaults - on that I agree. That might be something to look past, except it really gets under my skin that those defaults are dubbed “the community’s guidelines”, which I reject. It’s @bbatsov and the other maintainers’ curation, based on participation in their issue trackers. That’s not the community. That’s A community, but every place I see rubocop, there are exceptions that people configure. It’s not laid to rest, self-evident, universally agreed - any of that. What I consider a REASONABLE default would be to have tons of rules available, but only a few core ones enabled to start. There’s an “omakase” argument to make, sure, but at least don’t pretend to speak for the entire Ruby community.

                                                        I liked this line from the article along these lines:

                                                        The problem starts when it is viewed not as a tool, but as a set of divine commandments

                                                        In fairness there WAS a survey they did about the defaults in May 2020 that got about 800 results. Several of the results that were shared showed how “unsettled” these formatting decisions are across the community. It was also shared here

                                                        we’re definitely going to tackle the cop presets idea at some point and provide a smaller set of “essential” cops, alongside the current somewhat heavy-handed default set of cops.

                                                        Edit: also want to be clear that I like code formatters - golang’s is great. I mostly think Rubocop goes too far. It’s a great thing to exist in the Ruby ecosystem, though, and I very much appreciate the hard engineering work put into it.

                                                        1. 2

                                                          There are things that I don’t like about the defaults, but I don’t know that @bbatsov has necessarily made the wrong choice. For example, there are practical reasons why I prefer trailing commas, and all the counterarguments I’ve seen are some variation of “it doesn’t look pleasing to me”, but there is no denying that it’s a minority opinion (roughly 1 in 4 according to the survey). So is it wrong to make no trailing commas the default? I don’t know that it is. Where is the cut-off point for a majority opinion? 75%? 50%? 95%? It’s hard to say.

                                                          Personally, I would prefer that the defaults only include things that have >90% agreement. As commenters have not been shy to point out, if people want the rules to be more strict they can edit the config and stop complaining about it. It cuts both ways.
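For what it’s worth, the “only a few core ones enabled” setup is already expressible in config; a minimal sketch (the keys below are real RuboCop options, the opted-in cop is just an example):

```yaml
# .rubocop.yml - start from nothing, then opt in to the cops you actually want
AllCops:
  DisabledByDefault: true

# example opt-in: enforce trailing commas in multi-line array literals
Style/TrailingCommaInArrayLiteral:
  Enabled: true
  EnforcedStyle: consistent_comma
```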

                                                      1. 8

                                                        AMIGA replacement :)

                                                        1. 4

                                                          A strong No. This is no Amiga replacement.

                                                          If the idea was to rekindle the playfulness and exploration of that generation of computers, then the Raspberry has failed.

                                                            Neither the hardware front (due to complexity) nor the software front (if the operating system is Linux) is comparable in that regard.

                                                          1. 7

                                                            I’ve done an awful lot more exploration and programming with Linux than my Amiga. And I love my Amiga. But much of what we can do with them now, 30+ years later, is due to reverse engineering. They weren’t open systems. There isn’t a particularly viable programming environment on them OOTB.

                                                            1. 4

                                                              But much of what we can do with them now, 30+ years later, is due to reverse engineering. They weren’t open systems.

                                                              I beg to differ. See: http://amigadev.elowar.com/

                                                              All this documentation has always been available. They also published the PCB schematics (they’re in the user manuals!).

                                                              What’s missing is the latter AGA chipset documentation, which had to be reverse engineered, and of course AmigaOS’s source code and the internal designs of the custom chips.

                                                              Unlike with current hardware, the Amiga custom chips had lowish transistor count, so it was not extremely hard to figure out how they worked in detail. Thus cycle-exact emulators (uae then winuae) and open hardware reimplementations (minimig and aoecs).

                                                              1. 4

                                                                Unlike with current hardware, the Amiga custom chips had lowish transistor count, so it was not extremely hard to figure out how they worked in detail.

                                                                And they were correspondingly less powerful. The solution is not for modern computers to be handicapped by being forced to a low transistor count. The ultimate solution is open architectures. Meanwhile, the Pi as a platform is far from perfectly open, but there’s enough open about it (especially on the software side) that there’s plenty for enthusiasts to do.

                                                                1. 2

                                                                  And they were correspondingly less powerful.

                                                                  In a meaningful way. You’d get to see the difference between fast code and slow code.

                                                                  The solution is not for modern computers to be handicapped by being forced to a low transistor count.

                                                                  The solution to what problem exactly? If the purpose is to understand and learn about computers, then the priorities are not the same.

                                                                  Meanwhile, the Pi as a platform is far from perfectly open,

                                                                  And thus fails at its original goal.

                                                                  (especially on the software side)

                                                                  Especially on the hardware side. The SoC peripherals, outside the CPU. Especially the GPU.

                                                            2. 3

                                                              nothing stops you from installing another OS on this board. I think 9front should just work on it, and that’s a plenty playful OS.

                                                              on the hardware front, the GPIO pins are available still, so while you might not be able to fiddle with the internals, you can access the outer world easily.

                                                            3. 0

                                                              Shame it’s running Unix, though.

                                                              1. 3

                                                                And the Ctrl key should be where the Caps Lock is.

                                                              2. 1

                                                                Now if we could figure out a way to get the form factor of the Amiga UI into a modern Linux system, I’d be so happy :).

                                                                1. 1

                                                                  There is amiwm if your modern Linux system still uses X.org (mine does). There’s also amibian if you’ve got the ROMs, which you can legitimately obtain from Cloanto.

                                                                  1. 3

                                                                    Careful: amiwm is not OSI/FSF-approved Open Source/Free Software.

                                                                    As for Cloanto, here’s my take: Please don’t feed companies that somehow own historical Amiga IP and are keeping it to themselves and exploiting it for profit.

                                                                    This is especially annoying because of Cloanto’s empty promises of intent to open source, with no action on that front no matter how many years pass.

                                                                    Everything Amiga should be part of the public domain, in a sane world.

                                                                    My take for an Amiga today? There’s a few options for the hardware, including but not limited to:

                                                                    • FPGA-based open implementations (e.g. minimig or aoecs on miSTer hardware or ULX3S).
                                                                    • Old Amiga from second-hand market. A500 are particularly common and easy to obtain cheaply, while they can run most software and have plenty of expansions available, including modern open hardware accelerators.
                                                                    • WinUAE, FS-UAE or some other emulator on a powerful PC (not a Raspberry Pi; it cannot emulate at full speed with cycle accuracy). The software emulation option comes with a latency penalty even on the fastest computers.
                                                                    1. 1

                                                                      Everything Amiga should be part of the public domain, in a sane world.

                                                                      It will be, 70 years after the death of the authors, in my locality.

                                                                      There’s a few options for the hardware

                                                                      Indeed, my own preference is the Apollo Vampire V4. I stream Amiga software development and we’re currently using an emulator with that, I’d prefer to switch to the Apollo but there are some problems getting amivnc to work that I’m not qualified to fix. I’m in favour of AROS becoming a good, free way of running Amiga software. In practice a lot of “running Amiga software” means games, and a lot of games need the original, proprietary kickstart libraries.

                                                                      1. 1

                                                                        It will be, 70 years after the death of the authors, in my locality.

                                                                        Way too late, and that’s only if the source code isn’t just lost.

                                                                        Apollo Vampire V4

                                                                        Completely closed, both software and hardware. It has proprietary extensions to the instruction set and the chipset, which could lead to vendor lock-in, as there are no alternative implementations of these. Unfortunate.

                                                                        And it’s full of annoyances. To date, it is not even possible to use your kickstart of choice on the accelerators without running the one they embed first. They really want you to see their logos and such. There are other small things that make it not feel right. Full disclosure: I own a V500v2.

                                                                        My take is that we should focus on the oshw and open source software fronts.

                                                                        We should use/enhance the available open hardware such as TerribleFire’s accelerator boards, the minimig core, aoecs, tg68k, fx68k and such.

                                                                        We should rewrite, one piece at a time, all of AmigaOS’s rom modules, and the on-disk parts.

                                                                        Until we manage to get our shit together and do that, we’ll always be the laughing stock of the Atari community, which has emuTOS, FreeMiNT and a vibrant ecosystem of open source software and hardware.

                                                                        1. 1

                                                                          Full disclosure: I own a V500v2.

                                                                          The V4 is a different experience, evidently. The kickstart they embed is the open source AROS kickstart, and while CoffinOS has to remap the Hyperion kickstart after the AROS one has booted, it can do that without showing a Vampire logo should you wish (and you could do the same to boot to Commodore kickstart/AmigaOS). And the software is a fork of AROS. I think it already mostly does implement the full ROM and on-disk OS, actually at a better level than the out of date status page upstream.

                                                                          1. 1

                                                                            AROS is unfortunately not open source, by FSF/OSI definition or even Debian guidelines. Vampire’s ROM extensions and patches aren’t, either.

                                                                            The only reason they bundle AROS with the standalone V4 is to sidestep the legal nightmare (ownership is disputed) of licensing actual AmigaOS. End users can simply load AmigaOS themselves, and they generally do, as AROS isn’t a real alternative on the 68k/Amiga platform.

                                                                            MiSTer or ULX3S development boards are, by the way, much cheaper than Vampire V4, and when loaded with the Open Hardware minimig/aoecs cores they will run existing software much faster than the old hardware, with great compatibility.

                                                                            Personally I ordered a ULX3S (for Amiga unrelated reasons), and I will be getting an OSSC Pro once available (miSTer compatible).

                                                                            1. 1

                                                                              AROS is unfortunately not open source, by FSF/OSI definition or even Debian guidelines.

                                                                              The AROS public license specifically hasn’t been approved by OSI, but that doesn’t mean that the license isn’t open source or free software. It’s the MPL with the word “Netscape” removed.

                                                                    2. 2

                                                                      Unfortunately, amiwm just takes care of the window decorations – everything inside them is still huge widgets with slow animations.

                                                                    3. 1

                                                                      Which Amiga UI are we talking about, Workbench 1.3 or all of the meh? :)

                                                                      https://thumbs.gfycat.com/TatteredEmptyAegeancat-mobile.mp4

                                                                      1. 2

                                                                        One man’s meh is another man’s treasure!

                                                                        (Edit: the meh one’s where it’s at for me but Workbench 1.3’s flat, but contrasting and space-efficient layout is IMHO better than pretty much any modern flat theme. Properly anti-aliased and coupled with a similarly low-latency widget set it would beat any Arc-powered desktop.)

                                                                        1. 1

                                                                          https://github.com/letoram/awb

                                                                          The code does need some patches to handle X11/Wayland clients, and it is what it is quality-wise - but it did run on a first-gen rPI ;-)

                                                                          1. 2

                                                                            Oh I know it, I’ve fiddled with it quite a bit!

                                                                            1. 1

                                                                              Forgive me for doubting you :D

                                                                              I did get a bit curious as to how much it would take to get an OKish live image onto the pi400 though - or to make a voxel-like VR version..

                                                                              With the newer helper scripts, I think it’s just the really stupid stuff that might have a problem running due to the open source drivers – like the CRT glow-trails effect. It’s quite complicated, as I wanted to get Vectrex-like screen effects: it takes a ring buffer of n client-supplied frames, rotating, sampling, blurring and weight-mixing them. The lil’ pi isn’t exactly a master of memory bandwidth.

                                                                          2. 1

                                                                            I recently changed my x pointer to the kickstart 1.3 one, albeit scaled to 128x128, which helps me see the damn thing on a 4k display.

                                                                          3. 1

                                                                            Workbench 1.3

                                                                            Nitpick: The video you’ve linked is a (poor) recreation of 1.0, not 1.3.

                                                                            1. 3

                                                                              You’re the recreation of 1.0!! :-p

                                                                              Seriously though, what’s your trigger? 1.3 was the first to add the 3D drawers!

                                                                              http://theamigamuseum.com/amiga-kickstart-workbench-os/workbench/workbench-1-3/

                                                                              1. 1

                                                                                The window titlebar appearance (waves, buttons).

                                                                                On a second look, it does look like neither 1.0 nor 1.3. But certainly 1.x inspired, not 2+.

                                                                        2. 1

                                                                          More like Acorn replacement, if used with RISC OS.

                                                                        1. 1

                                                                          This scripting seems to replicate features of a build system. I wondered about this before: Why does nobody treat test reports as build artifacts? Let Make (or whatever) figure out the dependencies and incremental creation.

                                                                          1. 2

                                                                            Sometimes you have multiple build systems. For example, let’s say I have a repo with two independent dirs - one containing Javascript (npm builds) and one containing Android (gradle builds). Both build incrementally fine on my machine but on a CI, if I am only modifying the Android code then it is a waste to build and test the Javascript dir. Incremental creation does not work since the past artifacts are missing. And they are intentionally missing to ensure that the builds are clean and reusable.

                                                                            I have actually seen a case where keeping past artifacts created a subtle bug, which went away after we removed persistent caching from Circle CI.

                                                                            1. 2

                                                                              Some build systems, e.g. Bazel, do it (it’s called “caching”, the same as saving build artifacts). This build system is especially designed for monorepos. Buck, a build system with similar origins, probably does this too.

                                                                              However, writing tests for this behavior can be tricky, as it requires “hermeticity”: tests can only read data from their direct dependencies. Otherwise, a “green” build may become cached and stay green in subsequent runs, when it would become red if the cache were cleared.

                                                                              Sadly, it’s quite hard to use Bazel for JS/Ruby/Python and the like: it has no built-in rules for those ecosystems, and for a general shell-based rule you have to know what files your shell command will output before it runs (directories can’t be outputs of rules).
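To illustrate that constraint, a hedged sketch of a general shell-based rule in Bazel (`genrule` and its attributes are real; the file names are made up): every output has to be declared before the command runs, since caching is keyed on the declared inputs, command and outputs.

```python
# BUILD file: a genrule must list every output file up front in `outs`;
# Bazel caches the action based on inputs, command and declared outputs.
genrule(
    name = "line_count_report",
    srcs = ["input.txt"],
    outs = ["report.txt"],       # must be known before the command runs
    cmd = "wc -l $(SRCS) > $@",  # $@ expands to the single declared output
)
```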

                                                                              1. 2

                                                                                My inspiration in some form came from both Bazel (which I used inside Google) and Buck (which I used at Facebook). Both are great tools. Setting them up and training the whole team to use them, however, is a time-consuming effort.

                                                                                1. 2

                                                                                  it requires “hermeticity”: tests can only read data from their direct dependencies.

                                                                                  Nix is able to guarantee this since it heavily sandboxes builds, only allowing access to declared dependencies and the build directory. I haven’t seen anyone exploiting this for CI yet but it might be worth playing with.

                                                                                  1. 1

                                                                                    How long does it take to setup Nix?

                                                                                    1. 1

                                                                                      I’m not really sure how to answer that. Learning nix definitely takes a while and the documentation isn’t great. Writing a hello world build script takes seconds. Setting up some really complicated system probably takes longer ¯\_(ツ)_/¯

                                                                                      I guess I can at least point at some examples:

                                                                                      1. 1

                                                                                        Thanks. After reading through the links, I am happy with my setup which returns 99% of the benefit without making every developer learn to write a new build system.

                                                                              1. 1

                                                                                What is the current sentiment on using a Rust-based web framework? I see it hasn’t made it into anyone’s answer yet.

                                                                                1. 2

                                                                                  I don’t think it’s practical to use a non-GC AOT “systems” language for most web projects: additional overhead to manage memory, problems with stack traces, slow compilation times.

                                                                                  (There are lots of valid uses for http servers in Rust, however; for example, web interfaces embedded in applications or http-based protocols for networked services. Just not standard “database-backed massive user-facing thing with lots of html templates”).

                                                                                  1. 1

                                                                                    I can’t think of any reason I’d want to write a web application in Rust, and I’m pro-rust.

                                                                                  1. 2

                                                                                    Name somewhat clashes with smoltcp, an embedded tcp/ip stack for rust a-la lwip.

                                                                                    1. 1

                                                                                      “Bad UX is good” principle is everywhere in modern mainstream consumer internet services, as they’re not going to optimize for usability and better access to information; not even for direct ad views and clicks right now, they optimize for maximum triggering of mental issues instead. “Smartphone is a slot machine” and so on.

                                                                                      What’s even more awful is that companies that don’t show ads or suck out personal information, like small shops, take advice and inspiration from “industry leaders” like Facebook and end up with the same torturous UX. Even the term “UX” has become associated with that UX.

                                                                                      1. 6

                                                                                        It seems like trying to find PC hardware old enough to have a parallel port but new enough to boot off USB is going to be more work than just spinning up a cheap modern microcontroller, and not really any easier. I guess this is a nice bit of history for those who didn’t experience the days when it wasn’t considered unusual to hook up your own hardware to a PC, but it wouldn’t be the route I’d suggest someone wanting to get started go. Something like a CircuitPython board would give you a nicer development experience (the boards plug into your USB port and mount as a mass storage device – you just open the code in your editor and save directly back to the board to update), and access to a bunch of modern hardware that would be difficult to interface w/ parallel port bit banging.

                                                                                        1. 4

                                                                                          It really depends if you have such hardware laying around already (and a monitor that still supports it). I don’t think that it makes much sense for any other reason.

                                                                                          1. 2

                                                                            Yes - if you want to drive an 8-bit number on LEDs over a serial port, you can just use a shift register and a USB-to-serial cable.
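A minimal sketch of the bit-banging side in Python (the `byte_to_bits` helper is hypothetical; pyserial’s `rts`/`dtr` properties are real, but wiring RTS as data and DTR as clock is just an assumed example):

```python
def byte_to_bits(value, msb_first=True):
    """Split a byte into the bit sequence to clock into a shift register."""
    bits = [(value >> i) & 1 for i in range(8)]  # LSB-first
    return bits[::-1] if msb_first else bits

def shift_out(port, value):
    """Bit-bang one byte into a 74HC595-style shift register.

    `port` is assumed to be a pyserial Serial object; we (ab)use the
    RTS line as serial data and DTR as the shift clock.
    """
    for bit in byte_to_bits(value):
        port.rts = bool(bit)  # present the data bit
        port.dtr = True       # clock high: register samples the bit
        port.dtr = False      # clock low
```

Note the UART framing isn’t used at all here; only the control lines are toggled, which is why any cheap USB-to-serial cable would do.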

                                                                                            1. 2

                                                                              CH341A-based USB-to-serial adapters even have I2C, SPI and a limited parallel port (those are rare, however).

                                                                                            2. 1

                                                                                              Why is the “boot from usb” bit important?

                                                                                              Booting from the network (ipxe), booting from IDE (ide2cf) and booting from your own custom rom (as an extrom on e.g. a romcard) is more than good enough. So is a floppy most of the time.

                                                                                              1. 2

                                                                                                There’s always the Plop boot manager.

                                                                                                1. 1

                                                                                  When it works. USB boot with boards that do not support it is hit or miss, and I haven’t had much luck with it lately. That feature aside, I prefer grub.

                                                                                  If only grub had the starfield background.

                                                                                            1. 8

                                                                                              Ignoring the dialog on privacy or surveillance capitalism:

                                                                                              I’m selfishly happy. The best application experiences are native and this forces more shops to make native applications.

                                                                                              Now, if we could just kill electron…

                                                                                              1. 3

                                                                                Then most apps will be Windows-only.

                                                                                                1. 0

                                                                                                  You’re wrong. The best application experience is with apps coded in <something> for <something>. Everyone knows that and no matter how good any other app appears to be, it isn’t.

                                                                                                  I mean, anything that isn’t coded in Rust, compiled for a 64 bit ARM core, utilizing an embedded MongoDB and has been strictly linted will ever have any good graphical design or UX. Just plain logical.

                                                                                                  1. 3

                                                                                                    Nice straw man.

                                                                                                    1. 1

                                                                                                      I was with you until the sarcasm. I don’t see why we should mock and caricature others’ preferences: we’re all here because we have weird & specific preferences ;)

                                                                                                  1. 1

                                                                                    Compilation is very popular in the JS world nowadays (even React does it, with JSX), and macros are quite popular in some other languages (the downsides of macros are well-known, but every Lisper praises them). So I doubt that such techniques are overused in Svelte.

                                                                                    BTW, this reminded me of sweet.js, a very old macro implementation for JavaScript.