1. 34

    I had to stop coding right before going to bed because of this. Instead of falling asleep, my mind would start spinning incoherently, thinking in terms of programming constructs (loops, arrays, structs, etc.) about random or even undefined stuff, resulting in complete nonsense that was nonetheless mentally exhausting.

    1. 12

      I dreamt about 68k assembly once. Figured that probably wasn’t healthy.

      1. 4

        Only once? I might have gone off the deep end.

        1. 3

          Just be thankful it wasn’t x86 assembly!

          1. 3

            I said dream, not nightmare.

            1.  

              Don’t you mean unreal mode?

              being chased by segment descriptors…

              only got flat 24-bit addresses, got to calculate the right segment bases and offsets faster than the pursuer

        2. 6

          One of my most vivid dreams ever was once when I had a bad fever and dreamed about implementing Puyo Puyo as a derived mode of M-x tetris in Emacs Lisp.

          1. 19

            When I was especially sleep-deprived (and also on call) in the few months after my first daughter was born, I distinctly remember waking up to crying, absolutely convinced that I could solve the problem by scaling up another few instances behind the load balancer.

            1. 4

              Oh my god.

              1. 2

                Wow, that’s exactly what Tetris syndrome is about. Thanks for sharing!

            2. 5

              Even if I turn off all electronics two hours before bed, this still happens to me. My brain just won’t shut up.

              “What if I do it this way? What if I do it that way? What was the name of that one song? Oh, I could do it this other way! Bagels!”

              1. 4

                even undefined stuff

                The last thing you want when trying to go to sleep is for your whole brain to say “Undefined is not a function” and shut down completely.

                1. 4

                  Tony Hoare has a lot to answer for.

                2. 2

                  Different but related: I’ve found out (the hard way) that I need to stop coding one hour before sleeping. If I go to bed less than one hour after coding, I spend the remainder of that hour unable to sleep.

                  1. 1

                    I know this all too well. I’d never heard of Tetris syndrome before. I need to investigate this now, right before going to bed.

                  1. 5

                    Dragonfly has come a long way since then; they’re now trading blows with Linux on the performance front, despite the tiny team, especially when contrasted with Linux’s huge developer base and massive corporate funding.

                    This is no coincidence; it comes from leveraging SMP through concurrent lock-free/lockless servers instead of filling the kernel with locks.
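
                    To make the contrast concrete, here’s a minimal C++ sketch of the two approaches. It’s a toy illustration of “one big lock” versus lockless per-CPU state, not DragonFly’s actual kernel code, and the type names are invented:

                    ```
                    // Toy illustration only (not DragonFly code): one global lock that every
                    // CPU contends on, versus per-CPU lockless state combined only on read.
                    #include <atomic>
                    #include <mutex>

                    struct LockedCounter {                    // "fill the kernel with locks"
                        std::mutex m;
                        long value = 0;
                        void add(long n) {
                            std::lock_guard<std::mutex> g(m); // serialises every CPU here
                            value += n;
                        }
                    };

                    struct PerCpuCounter {                    // lockless: each CPU owns its slot
                        static constexpr int kMaxCpus = 64;   // assumed limit for the sketch
                        std::atomic<long> slots[kMaxCpus] = {};
                        void add(int cpu, long n) {
                            slots[cpu].fetch_add(n, std::memory_order_relaxed);  // no contention
                        }
                        long read() const {                   // combine only when someone asks
                            long total = 0;
                            for (const std::atomic<long>& s : slots)
                                total += s.load(std::memory_order_relaxed);
                            return total;
                        }
                    };
                    ```

                    Under load, the first version serialises every core behind one mutex, while the second lets cores proceed independently; real kernels refine this much further (per-CPU caches, tokens, message passing), but that’s the direction the argument above points at.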

                    1.  

                      This comparison, which seems pretty reasonable, makes it look like it’s still lagging behind.

                      1.  

                        What I don’t like about Phoronix benchmark results in general is that they lack depth. It’s all very well to report an MP3 encoding test running for 32 seconds on FreeBSD/DragonflyBSD and only 7 seconds on Ubuntu, but that raises a heck of a question: why is there such a huge difference in a CPU-bound test?

                        Seems quite possible that the Ubuntu build is using specialised assembly, or something like that, which the *BSD builds don’t activate for some reason (possibly even because there’s an overly restrictive #ifdef in the source code). Without looking into the reason for these results, it’s not really a fair comparison, in my view.
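
                        As a purely hypothetical illustration of how that can happen (the macro and function names below are invented, not taken from any real encoder), an optimisation gated on the OS rather than on the CPU feature quietly leaves the BSD builds on the slow path:

                        ```
                        // Hypothetical example: the guard tests the operating system instead of
                        // the CPU feature the fast path actually needs, so *BSD builds silently
                        // fall back to generic code even on identical hardware.
                        #if defined(__linux__) && defined(HAVE_SIMD_ASM)          // overly restrictive #ifdef
                        extern "C" void mdct_block(float* out, const float* in);  // fast hand-written asm
                        #else
                        // Generic scalar fallback: correct, but several times slower.
                        void mdct_block(float* out, const float* in) {
                            for (int i = 0; i < 576; ++i) {
                                float acc = 0.0f;
                                for (int j = 0; j < 576; ++j)
                                    acc += in[j] * (0.001f * float(i + j));       // stand-in for the real transform
                                out[i] = acc;
                            }
                        }
                        #endif
                        ```

                        Until someone checks for something like that, a 32-seconds-versus-7-seconds result says as much about the build configuration as it does about the operating systems being compared.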

                    1. 2

                      It’s a flag used to signal to the mods that this can be removed as a duplicate.

                      Merging is for different URIs discussing a common hot topic. AFAIK you can’t merge 2 submissions with the exact same URL (which ‘already posted’ targets).

                      1. 1

                        But you can’t even post another submission with the same URL (it gives you an error message).

                        1. 1

                          AIUI, only if the older submission with the same URL was posted recently.

                      1. 3

                        “I found a 40 year old video of an 80 year old BBS transcript”

                        1. 1

                          An 80 year old BBS transcript… that’d be seriously cool to have.

                          1. 5

                            But seriously, it’s just text. Why make a video instead of just posting it on a blog or something?

                            1. 3

                              Because this person preferred to make a YouTube video, likely didn’t want to go to the effort of scanning everything when that doesn’t bring them any money, and already makes YouTube videos and has all of that set up?

                              I don’t see how your parent comment figures into this one at all.

                              1. 2

                                I’m not a fan of this trend either.

                                But I heard that:

                                • blogging is dead
                                • engagement is higher and longer with videos
                                • zoomers prefer videos
                                • YouTube views yield higher revenue (Google pays creators; ads in video form are more effective)

                                And other such stories. There’s also the revenue from views, thanks to ads.

                          1. 1

                            Cute aesthetically (I would compare it to Haiku but lugubriously 2000s/2010s instead), but what makes it interesting at the back of the cabinet? What differentiates it from everything else?

                            1. 3

                              Skimming the source code, it looks as if the main thing is the complete lack of security. System calls with unknown arguments panic the kernel; others blindly trust the caller. For example, if you do an open system call, nothing validates the path length, the kernel then allocates enough space to hold a copy, doesn’t check for allocation failure, and then copies the path into the allocated buffer. (I don’t know how allocation failure is handled, but if the allocator returns null then you’ve either got a kernel-mode null-pointer dereference or a kernel-mode memory-overwrite primitive, depending on how the MMU is configured - does it even have kernel / userspace separation?)
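
                              For illustration, here’s a hedged C++ sketch of the pattern described above; all of the names (user_strlen, kmalloc, copy_from_user, do_open) are generic stand-ins, not the project’s actual API:

                              ```
                              // Hypothetical reconstruction of the open() path described above.
                              #include <cstddef>

                              size_t user_strlen(const char* user_ptr);                    // stub declarations
                              void*  kmalloc(size_t size);
                              int    copy_from_user(void* dst, const void* src, size_t n);
                              int    do_open(const char* path, int flags);

                              int sys_open(const char* user_path, int flags) {
                                  size_t len = user_strlen(user_path);        // 1) no upper bound on the length,
                                                                              //    so a huge path means a huge allocation
                                  char* kpath = static_cast<char*>(kmalloc(len + 1));
                                                                              // 2) allocation failure never checked;
                                                                              //    if kmalloc returns null...
                                  copy_from_user(kpath, user_path, len + 1);  //    ...this writes through address 0
                                                                              //    (or wherever page 0 is mapped)
                                  return do_open(kpath, flags);
                              }
                              ```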

                              This is the kind of thing that modern C++ would make very easy to avoid with some higher-level constructions. Even starting in 2017, C++14 support was pretty mature, and C++17 just provides some small cleanups rather than anything fundamentally better (auto parameters on lambdas were introduced in C++14, and they were the last thing that made a particularly big improvement for things like this).
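
                              As a sketch of what those higher-level constructions can buy (again with invented stand-ins, and assuming an explicit kMaxPathLen limit; nothing newer than C++14 is needed), a small helper makes the checked version the path of least resistance:

                              ```
                              // Sketch only: user_strnlen / copy_from_user are the same invented
                              // stand-ins as above, and kMaxPathLen is an assumed limit.
                              #include <cerrno>
                              #include <cstddef>
                              #include <memory>
                              #include <new>
                              #include <utility>

                              size_t user_strnlen(const char* user_ptr, size_t max);       // stub declarations
                              int    copy_from_user(void* dst, const void* src, size_t n);

                              constexpr size_t kMaxPathLen = 1024;

                              // Returns {0, buffer} on success or {-errno, nullptr} on failure, so every
                              // system call gets length checking, allocation checking and fault handling
                              // from one place instead of re-implementing (or forgetting) them.
                              std::pair<int, std::unique_ptr<char[]>> copy_path_from_user(const char* user_path) {
                                  size_t len = user_strnlen(user_path, kMaxPathLen);       // bounded scan
                                  if (len >= kMaxPathLen)
                                      return {-ENAMETOOLONG, nullptr};
                                  std::unique_ptr<char[]> buf(new (std::nothrow) char[len + 1]);
                                  if (!buf)
                                      return {-ENOMEM, nullptr};                           // allocation failure handled
                                  if (copy_from_user(buf.get(), user_path, len) != 0)
                                      return {-EFAULT, nullptr};                           // bad user pointer handled
                                  buf[len] = '\0';
                                  return {0, std::move(buf)};
                              }
                              ```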

                              1. 3

                                Skimming the source code, it looks as if the main thing is the complete lack of security

                                Ahhh, the good ol’ days! Unfortunately, I suspect a lot of people talking about “bloat” would consider all of this unnecessary, even if it’s what reality needs.

                                As an alternative: as much as Serenity has heart, if not originality, they are using modern C++ effectively.

                                1. -2

                                  For example, if you do an open system call,

                                  Therefore not a microkernel multiserver system.

                                  Basically a toy like Linux. Thanks for the heads up.

                                  1. 2

                                    Nobody said it was a microkernel. In fact, the word “micro” never appears on the gitlab page for the project.

                                    1. 1

                                      A toy by virtue of not being a microkernel, if I wasn’t clear enough.

                              1. 1

                                Amiga floppies have 2 bootblocks (512 × 2 = 1024 bytes), and since AmigaOS is running, all the libraries in the Kickstart are available.

                                Makes for a lot of bootblock game/demo/tool potential.

                                1. 18

                                  The whole damn thing.

                                  Instead of having this Frankenstein’s monster of different OSs and different programming languages and browsers that are OSs and OSs that are browsers, just have one thing.

                                  There is one language. There is one modular OS written in this language. You can hot-fix the code. Bits and pieces are stripped out for lower powered machines. Someone who knows security has designed this thing to be secure.

                                  The same code can run on your local machine, or on someone else’s machine. A website is just a document on someone else’s machine. It can run scripts on their machine or yours. Except on your machine they can’t run unless you let them and they can’t do I/O unless you let them.

                                  There is one email protocol. Email addresses can’t be spoofed. If someone doesn’t like getting an email from you, they can charge you a dollar for it.

                                  There is one IM protocol. It’s used by computers including cellphones.

                                  There is one teleconferencing protocol.

                                  There is one document format. Plain text with simple markup for formatting, alignment, links and images. It looks a lot like Markdown, probably.

                                  Every GUI program is a CLI program underneath and can be scripted.

                                  (Some of this was inspired by legends of what LISP can do.)

                                  1. 23

                                    Goodness, no - are you INSANE? Technological monocultures are one of the greatest non-ecological threats to the human race!

                                    1. 1

                                      I need some elaboration here. Why would it be a threat to have everyone use the same OS and the same programming language and the same communications protocols?

                                      1. 6

                                        One vulnerability to rule them all.

                                        1. 2

                                          Pithy as that sounds, it is not convincing to me.

                                          Having many different systems and languages, in order to get security through obscurity by way of many different sets of vulnerabilities, does not sound like a good idea.

                                          I would hope a proper inclusion of security principles while designing an OS/language would be a better way to go.

                                          1. 4

                                            It is not security through obscurity, it is security through diversity, which is a very different thing. Security through obscurity says that you may have vulnerabilities but you’ve tried to hide them so an attacker can’t exploit them because they don’t know about them. This works as well as your secrecy mechanism. It is generally considered bad because information disclosure vulnerabilities are the hardest to fix and they are the root of your security in a system that depends on obscurity.

                                            Security through diversity, in contrast, says that you may have vulnerabilities but they won’t affect your entire fleet. You can build reliable systems on top of this. For example, the Verisign-run DNS roots use a mixture of FreeBSD and Linux and a mixture of bind, unbound, and their own in-house DNS server. If you find a Linux vulnerability, you can take out half of the machines, but the other half will still work (just slower). Similarly, a FreeBSD vulnerability can take out half of them. A bind or unbound vulnerability will take out a third of them. A bind vulnerability that depends on something OS-specific will take out about a sixth.

                                            This is really important when it comes to self-propagating malware. Back in the XP days, there were several worms that would compromise every Windows machine on the local network. I recall doing a fresh install of Windows XP and connecting it to the university network to install Windows update: it was compromised before it was able to download the fix for the vulnerability that the worm was exploiting. If we’d only had XP machines on the network, getting out of that would have been very difficult. Because we had a load of Linux machines and Macs, we were able to download the latest roll-up fix for Windows, burn it to a CD, redo the install, and then do an offline update.

                                            Looking at the growing Linux / Docker monoculture today, I wonder how much damage a motivated individual with a Linux remote arbitrary-code execution vulnerability could do.

                                            1. 1

                                              Sure, but is this an intentional strategy? Did we set out to have Windows and Mac and Linux in order to prevent viruses from spreading? It’s an accidental observation, and not a really compelling one.

                                              I’ve pointed out my thinking in this part of the thread https://lobste.rs/s/sdum3p/if_you_could_rewrite_anything_from#c_ennbfs

                                              In short, there must be more principled ways of securing our computers than hoping multiple green field implementations of the same application have different sets of bugs.

                                            2. 3

                                              A few examples come to mind though - Heartbleed (which affected anyone using OpenSSL) and Spectre (anyone using the x86 platform). Also, Microsoft Windows for years had plenty of critical exploits because it had well over 90% of the desktop market.

                                              You might also want to look up the impending doom of bananas, because over 90% of bananas sold today are genetic clones (it’s basically one plant) and there’s a fungus threatening to kill the banana market. A monoculture is a bad idea.

                                              1. 1

                                                Yes, for humans (and other living things) the idea of immunity through obscurity (to coin a phrase) is evolutionarily advantageous. Our varied responses to COVID are one immediate example. It does have the drawback that it makes it harder to develop therapies, since we see population specificity in responses.

                                                I don’t buy that we need to employ the same idea in an engineered system. It’s a convenient, back-ported bullet-point advantage of having a chaotic mess of OSes and programming languages, but it certainly wasn’t intentional.

                                                I’d rather have an engineered, intentional robustness to the systems we build.

                                                1. 4

                                                  To go in a slightly different direction - building codes. The farther north you go, the steeper roofs tend to get. In Sweden, one needs a steep roof to shed snow buildup, but where I live (South Florida, just north of Cuba) building such a roof would be a waste of resources because we don’t have snow - we just need a shallow angle to shed rain water. Conversely, we don’t need codes to deal with earthquakes, nor does California need to deal with hurricanes. Yet it would be so much simpler to have a single building code in the US. I’m sure there are plenty of people who would love to force such a thing everywhere, if only to make their lives easier (or for rent-seeking purposes).

                                                  1. 2

                                                    We have different houses for different environments, and we have different programs for different use cases. This does not mean we need different programming languages.

                                              2. 2

                                                I would hope a proper inclusion of security principles while designing an OS/language would be a better way to go.

                                                In principle, yeah. But even the best security engineers are human and prone to fail.

                                                If every deployment was the same version of the same software, then attackers could find an exploitable bug and exploit it across every single system.

                                                Would you like to drive a car where every single engine blows up, killing everyone inside? If all cars are the same, they’ll all explode. We’d eventually move back to horse and buggy. ;-) Having a variety of cars helps mitigate issues other cars have, while still having problems of its own.

                                                1. 1

                                                  In this heterogeneous system we have more bugs (assuming the same rate of bugs everywhere), fewer reports (since there are fewer users per system), and a more drawn-out deployment of fixes. I don’t think this is better.

                                                  1. 1

                                                    Sure, you’d have more bugs. But the bugs would (hopefully) be in different, distinct places. One car might blow up, another might just blow a tire.

                                                    From an attacker’s perspective, if everyone drives the same car, and the attacker knows that the flaws of one car are reproducible with a 100% success rate, then the attacker doesn’t need to spend time/resources on other cars. The attacker can just rinse, reuse, recycle. All are vulnerable to the same bug. All can be exploited in the same manner, reliably, time after time.

                                                    1. 3

                                                      To go by the car analogy, the bugs that would be uncovered by drivers rather than during the testing process would be rare ones, like, if I hit the gas pedal and brake at the same time it exposes a bug in the ECU that leads to total loss of power at any speed.

                                                      I’d rather drive a car a million other drivers have been driving than drive a car that’s driven by 100 people. Because over a million drivers it’s much more likely someone hits the gas and brake at the same time and uncovers the bug which can then be fixed in one go.

                                        2. 3
                                          1. 1

                                            Yes, that’s probably the LISP thing I was thinking of, thanks!

                                          2. 2

                                            I agree completely!

                                            We would need to put some safety measures in place, and there would have to be processes defined for how you go about suggesting/approving/adding/changing designs (that anyone can be a part of), but otherwise, it would be a boon for the human race. In two generations, we would all be experts in our computers and systems would interoperate with everything!

                                            There would be no need to learn new tools every X months. The UI would be familiar to everyone, and any improvements would have to go through human testing/trials before being accepted, since they would be used by everyone! There would be continual advancements in every area of life. Time would be spent on improving the existing experience/tools, instead of recreating or fixing things.

                                            1. 2

                                              I would also like to rewrite most stuff from the ground up. But monocultures aren’t good. Orthogonality in basic building blocks is very important. And picking the right abstractions to avoid footguns. Some ideas, not necessarily the best ones:

                                              • proven correct microkernel written in rust (or similar borrow-checked language), something like L4
                                              • capability based OS
                                              • no TCP/HTTP monoculture in networks (SCTP? pubsub networks?)
                                              • are our current processor architectures anywhere near sane? could safe concurrency be encouraged at a hardware level?
                                              • fewer walled gardens and less centralisation
                                              1. 2

                                                proven correct microkernel written in rust (or similar borrow-checked language), something like L4

                                                A solved problem. seL4, including support for capabilities.

                                                1. 5

                                                  seL4 is proven correct by treating a lot of things as axioms and by presenting a programmer model that punts all of the bits that are difficult to get correct to application developers, making it almost impossible to write correct code on top of. It’s a fantastic demonstration of the state of modern proof tools; it’s a terrible example of a microkernel.

                                                  1. 2

                                                    FUD unless proven otherwise.

                                                    Counter-examples exist; seL4 can definitely be used, as demonstrated by many successful uses.

                                                    The seL4 foundation is getting a lot of high profile members.

                                                    Furthermore, Genode, which is relatively easy to use, supports seL4 as a kernel.

                                              2. 2

                                                Someone wrote a detailed vision of rebuilding everything from scratch, if you’re interested. 1

                                                  1. 10

                                                    I never understood this thing.

                                                    1. 6

                                                      I think that is deliberate.

                                                  2. 1

                                                    And one leader to rule them all. No, thanks.

                                                    1. 3

                                                      Well, I was thinking of something even worse - design by committee, like for electrical stuff, but your idea sounds better.

                                                    2. 1

                                                      We already have this, dozens of them. All you need to do is point guns at everybody and make them use your favourite. What a terrible idea.

                                                    1. 1

                                                      This is just a laptop with a proprietary parts system.

                                                      Not sure if this advertisement belongs here.

                                                      1. 8

                                                        proprietary parts system

                                                        That’s true for some of the parts I’m sure (due to necessity since the market doesn’t have a concept of “standardized laptop enclosures”), but the expansion cards are just internal USB C dongles. They’ve also released the CAD files for the expansion card housing, so people can make their own.

                                                        1. 5

                                                          Maybe, apart from the screen, expansion cards with ports on them, speakers, memory, storage, camera, microphone, plastic bit around the screen, wifi module.

                                                          Nothing is stopping people from buying the same components or compatible components or even making new compatible components. If I am wrong and naive then please tell me why.

                                                          1. 1

                                                            Proprietary as in “only used by the one company”, or proprietary as in “fees required for production of compatible devices”?

                                                            If the former, that’s how most good hardware standards start off - someone makes their version and shows it can work (and gains nontrivial marketshare), then others produce components that can match.

                                                            If the latter, well, that’s news to me.

                                                            1. 1

                                                              If the latter, well, that’s news to me.

                                                              AIUI only the USB3-based slots are open and royalty-free. Anything else is proprietary.

                                                            1. 8

                                                              …and they’re almost ready to ship Qomu, which has an ARM and an FPGA. https://www.crowdsupply.com/quicklogic/qomu

                                                            1. 1

                                                              With the whole industry (save the obvious) backing RISC-V, it isn’t just here to stay; it is going to replace everything else.

                                                              1. 1

                                                                I hope this doesn’t mean death for Helios64 altogether.

                                                                There’s demand for this sort of hardware, a NAS/SAN appliance that is not a blackbox.

                                                                1. 3

                                                                      Yet another kind reminder that there’s no securing a kernel that’s literally megabytes in size, all running in supervisor mode.

                                                                  Linux is not a sustainable approach.

                                                                  1. 5

                                                                    Not from programming but a younger me communicated so much using a computer that instead of a voice in my head, I’d imagine myself typing my thoughts out on a keyboard.

                                                                    1. 2

                                                                      I still do this.

                                                                    1. 2

                                                                      I don’t get why they didn’t report it to Microsoft.

                                                                      The issue is that Windows 10 downloads and runs Razer crap.

                                                                      1. 6

                                                                        Anyone know why this took so long?

                                                                        1. 6

                                                                          Some malicious features implemented by Microsoft?

                                                                          One of the claims was related to having modified Windows 3.1 so that it would not run on DR DOS 6.0 although there were no technical reasons for it not to work.

                                                                          1. 1

                                                                                  Don’t know. Looking at the patch, so many things are stubbed out or null, but the hooks just weren’t there.

                                                                          1. 2

                                                                                    This has been a long-standing nag, so this is good news.

                                                                                    Further steps would be to get this cleaned up and merged, and to implement whatever APIs are needed so that there’s no need to use himem.sys etc. from MS-DOS.

                                                                                    Also, I understand 3.11 for Workgroups still doesn’t work.

                                                                            1. 15

                                                                              This is exactly what I was thinking when more and more stuff was pushed into the USB-C stack.

                                                                                      Previously it was kinda easy to explain that “no, you can’t put the USB cable in the HDMI slot, that won’t work”. Now you have cables that look identical, but one can charge at 90W and the other can’t, even though both fit. It’s going to be confusing for everyone, having to be careful about which cable can be plugged in where.

                                                                              1. 9

                                                                                Everything about the official naming/branding used in USB 3 onward seems purposely designed to be confusing.

                                                                                1. 5

                                                                                  It seems like for some reason the overriding priority was making the physical connector the same, but it’s fine to run all kinds of incompatible power and signals through it. I preferred the old way of giving different signals different connectors so you knew what was going on!

                                                                                  1. 2

                                                                                            The downside to that, I guess, is that a small form factor device such as a phone or a slim laptop would need to come with every type of connector you might want, one for each different type of device you’d want to connect, or alternatively you would need dongles left and right.

                                                                                            I can now charge my phone with my laptop charger; that has not been the case in previous generations.

                                                                                            I believe we are moving into PoE-enabled network cable territory on some conceptual level: data + power (either optional) is the level of abstraction that the connector is made common on.

                                                                                    1. 4

                                                                                      I’m surprised the business laptop manufacturers haven’t tried getting into PoE based chargers, considering most offices have not just Ethernet, but PoE, and it’d solve two cables at once.

                                                                                      1. 1

                                                                                                I think (a hunch more than anything data-backed) that the last few meters of networking are increasingly moving towards wireless, at least if the end-user equipment is a laptop. Monitors, on the other hand, are still cable-connected, and that is one of the singled-out use cases for USB-C now that we have it.

                                                                                                Looking back at the time before USB-C, I’d agree that power and networking would be a neat thing to combine, but it would have to have a different connector than regular Ethernet; those plastic tab spring locks would not last long.

                                                                                        1. 1

                                                                                          I mean, I would like an Ethernet AAM for Type C…

                                                                                        2. 1

                                                                                                  I’d love to know what you’re basing this thesis on, because as far as I know I’ve never worked in a German office with PoE in the last 20 years. (Actually, it was a big deal at my last company because we had PoE-powered devices, so there was kind of a “where and in which room do we put PoE switches for the specialty hardware” discussion.)

                                                                                          1. 1

                                                                                            Most offices nowadays have PoE if only for deskphones.

                                                                                      2. 1

                                                                                                It’s fine if you have a manufacturer that you can trust to make devices that work with everything (USB, DP, TB, PD, etc.) the cable can throw at them. (Like, my laptop will do anything a Type C cable can do, so there’s no confusion.) The problem is that once you get less scrupulous manufacturers of the JKDSUYU variety on Amazon et al., the plan blows up spectacularly.

                                                                                      3. 1

                                                                                        When the industry uses a bunch of mutually-incompatible connectors for different types of cables, tech sites complain “Ugh, why do I need all these different types of cables! It’s purposely designed to be overcomplex and confusing!”

                                                                                        When the industry settles on one connector for all cable types, tech sites complain “Ugh, how am I supposed to tell which cables do which things! It’s purposely designed to be overcomplex and confusing!”

                                                                                        1. 5

                                                                                                  Having the same connector but incompatible cables is much worse than the alternative.

                                                                                          1. 3

                                                                                            The alternative is that every use case develops its own incompatible connector to distinguish its particular power/data rates and feature set. At which point you need either a dozen ports on every device, or a dozen dongles to connect them all to each other.

                                                                                            This is why there have already been well-intentioned-but-bad-idea-in-practice laws trying to force standardization onto particular industries (like mobile phones). And the cost of standardization is that not every cable which has the connector will have every feature of every other cable that currently or might in the future exist.

                                                                                          2. 1

                                                                                                    They could’ve avoided this by either making it obvious when one cable or connector doesn’t support the full set or simply disallowing any non full-featured cables and connectors. Have fun buying laptops and PCs while figuring out how many of their only 3 USB-C connections are actually able to handle what you need, which of them you can use in parallel for the stuff you want to use in parallel, and which of them is the only port that can actually do everything but is also reserved for charging your laptop. It’s a god damn nightmare and makes many laptops unusable outside some hipster coffee shop.

                                                                                            Meanwhile I’m going to buy something that has a visible HDMI, DP, LAN and USB-A connector, so I’m not stranded with either charging, mouse connection, external display or connecting my USB3 drive. It’s enraging.

                                                                                            1. 1

                                                                                              or simply disallowing any non full-featured cables and connectors

                                                                                              OK, now we’re back on the treadmill, because the instant someone works out a way to do a cable that can push more power or data through, we need a new connector to distinguish from already-manufactured cables which will no longer be “full-featured”. And now we’re back to everything having different and incompatible connectors so that you either need a dozen cables or a dozen dongles to do things.

                                                                                              Or we have to declare an absolute end to any improvements in cable features, so that a cable manufactured today will still be “full-featured” ten years from now.

                                                                                              There is no third option here that magically lets us have the convenience of a universal connector and always knowing the cable’s full capabilities just from a glance at the connector shape, and ongoing improvements in power and data transmission. In fact for some combinations it likely isn’t possible to have even two of those simultaneously.

                                                                                              It’s a god damn nightmare and makes many laptops unusable outside some hipster coffee machine.

                                                                                              Ah yes, it is an absolute verifiable objective fact that laptops with USB-C ports are completely unsuitable and unusable by any person, for any use case, under any circumstance, in any logically-possible universe, ever, absolutely and without exception.

                                                                                              Which was news to me as I write this on such a laptop. Good to know I’m just some sort of “hipster coffee” person you can gratuitously insult when you find you’re lacking in arguments worthy of the name.

                                                                                              1. 1

                                                                                                Ah yes, it is an absolute verifiable objective fact that laptops with USB-C ports are completely unsuitable and unusable by any person, for any use case, under any circumstance, in any logically-possible universe, ever, absolutely and without exception.

                                                                                                        You really do want to make this about yourself, don’t you? I never said you’re not allowed to have fun with them; I’m just saying that for many purposes those machines are pretty bad. And there are far too many systems produced now with those specs, so it’s becoming a problem for people with a different use case than yours: more connections, fewer dongles or hubs, and needing to know the specific capabilities before buying. Have fun explaining to your family why model X doesn’t actually do what they thought, because its USB-C is just USB 2.0; why their USB-C cable doesn’t work even though it looks the same; why there are multiple versions of the same connector with different specs; why one USB-C port doesn’t mean it can do everything the port right beside it can do; why there is no way to figure out whether a USB-C cable can actually handle a 4k60 display before trying it out. Even on 1000+€ models that you might want to use with an external display, mouse, keyboard, headset, charging and some yubikey, you get 3 USB-C connections these days. USB-C could’ve been something great, but now it’s an RNG for what you actually get. And some colored cables and requirements for labeling the capabilities would have already helped a lot.

                                                                                                Yes I’m sorry for calling it hipster in my rage against the reality of USB-C, let’s call it “people who do not need many connections (2+ in real models) / like dongles or hubs / do everything wireless”. Which is someone commuting by train, going to lectures or whatnot. But not me when I’m at home or at work.

                                                                                                This is where I’m gonna mute this thread, you do not seem to want a relevant conversation.

                                                                                        2. 1

                                                                                          Yeah, I was thinking this too. Though even then we were already starting to get into it with HDMI versions.

                                                                                        1. 1

                                                                                                    IIRC, MIPS is still used within Cisco routers? There used to be a book called “See MIPS Run” that I have used for hacking around in MIPS asm. Quite good too (FWIW).

                                                                                          1. 5

                                                                                                      If you read ‘See MIPS Run’, make sure it’s the 32-bit version. The 64-bit version has a huge number of errors in it.

                                                                                            That said, even at my most cranky, MIPS assembly is not something I would ever inflict on someone, no matter how much they’d annoyed me. Between the lack of useful addressing modes, the inconsistent register naming (what is $t0? Depends on the assembler you’re using!), the huge number of pseudos that most MIPS assemblers make look like normal instructions but that will clobber $at, the magic of $25 in PIC modes, branch delay slots, and the exciting logic in the assembler for either letting you fill delay slots, padding them with nops, or trying to fill them from one of your instructions depending on the mode, it’s an awful experience.

                                                                                            I’m not really a fan of RISC-V, but RISC-V manages to copy MIPS while avoiding the most awful parts of MIPS. If you want to learn a simple RISC assembly language, RISC-V is a better choice than MIPS. If you want to learn assembly language for a well-designed ISA, learn AArch64. If you want to learn assembly language that’s a joy to write, learn AArch32 (things like stm and ldm, predication, and the fact that $pc is a general-purpose register are great to use for assembly programmers, difficult to use for compilers, and awful to implement).

                                                                                            1. 1

                                                                                              There’s an implicit “RISC-V is not a well-designed ISA” there.

                                                                                                        Could you elaborate on what issues you see with RISC-V?

                                                                                          1. 6

                                                                                                        Interesting tutorial. But now that development of the MIPS architecture has ended, it would be cool to have an equivalent for RISC-V. Not that there is anything wrong with deprecated architectures (I personally like the 6502).

                                                                                            1. 1

                                                                                              Do you know any particularly good resource for 6502 assembly? I’ve looked at the instruction set listings and a few programs, and it seems simple and easy enough to familiarise oneself with; but I’d love a text that goes into more detail on common techniques, patterns and optimisations.

                                                                                              1. 2

                                                                                                I recommend “Assembly Lines: The Complete Book” by Roger Wagner and edited by Chris Torrence but it is more specific to the Apple ][. If you’re interested, you can get it directly from Chris’ website (including a spiffy spiral bound edition) and Roger receives a bigger cut than on a certain well-known online store.

                                                                                              2. 1

                                                                                                            Chibiakuma’s learnasm.net and YouTube channel have an introduction to RISC-V assembler.

                                                                                                It was sufficient for me to get up and running.

                                                                                              1. 4

                                                                                                Hopefully the vaxorcist is still around.