Threads for glhaynes

  1. 6

    An interesting anecdote about the Lisa operating system: Apple’s warranty for the Lisa covered software bugs in addition to hardware. This apparently caused a large number of returns over relatively simple bugs.

    1. 2

      Do you have any links or citations for this? I’ve never heard it but it sounds fascinating — why did they add this to the warranty (assuming Apple II, e.g., didn’t have this included), how do you define what a “bug” is, how does that differ from today’s warranties (I assume that modern warranties would cover software bugs that prevent advertised operation, but am not certain), which bugs did Apple take returns over, etc etc

      1. 1

        Just Wikipedia and this article, which may just be referencing the Wikipedia article.

        1. 2

          Given that it has a stray unfixed “[citation needed]” in the middle with no particular sign that this is a deliberate affectation, I assume large chunks of it were copied from Wikipedia.

    1. 6

      I really wish systemd didn’t insist on being PID 1. Then it would be the perfect answer to “how do I run multiple processes in a single container?”

      1. 7

        Yes, you don’t know what you are missing until you’ve used an init system that supports arbitrary recursion, like the Genode init.

        1. 3

          If you’d like, could you say a little more about what value you’ve found in that sort of thing?

          1. 12

            In Genode the init component basically enforces security policies, so you can create a tree of subinits that are successively more locked down, and there isn’t any escape hatch to escalate privilege. File-system and networking are in userspace, so managed by init, and you can arbitrarily namespace networking and file-systems by isolating instances in different inits.

            1. 4

              This means the same process manager can be used on a per-project level. You could write your systemd units for development, which would be pretty close to those for the system.

          2. 3

            What semantics of systemd do you think are better suited inside containers than other (perhaps less opinionated) supervisor inits?

            1. 4

              Familiarity, and the ability to write daemons for both in-container and out-of-container use.

              1. 2

                And being able to use existing software which expects to be launched by systemd!

            2. 3

              It’s also worth noting that podman just has a ‘--systemd=true|false|always’ flag that allows this behaviour.

              1. 1

                From the RHEL containers manual (emphasis mine):

                The UBI init images, named ubi-init, contain the systemd initialization system, making them useful for building images in which you want to run systemd services, such as a web server or file server. […]

                Because the ubi8-init image builds on top of the ubi8 image, their contents are mostly the same. However, there are a few critical differences:

                ubi8-init:

                • CMD is set to /sbin/init to start the systemd Init service by default
                • includes ps and process related commands (procps-ng package)
                • sets SIGRTMIN+3 as the StopSignal, as systemd in ubi8-init ignores normal signals to exit (SIGTERM and SIGKILL), but will terminate if it receives SIGRTMIN+3

                ubi8/ubi-init in the Red Hat Container Catalog. Red Hat’s UBI images are free for everyone. I am not affiliated with Red Hat.

                1. 1

                  …I am affiliated with Red Hat and didn’t know this.

                  Whoops. Thank you!

              1. 3

                I love the list of companies the ad features. A bunch of these clearly are only using Xenix internally (like Apple) but some actually resell it like Radio Shack on the TRS-80.

                I used Xenix some back in the day. It was quite the oddball and no fun at all if you had to port code to it from other *NIXen.

                1. 2

                  I’m not an Apple fan and, thus, I’m not too familiar with their history, so take this with a grain of salt – but I don’t think they were just using it internally. Xenix ran on the Apple Lisa. It didn’t work with Lisa’s graphical display, though – instead, you hooked a serial terminal to it. Some commercial applications, like Lyrix, were available for it (see https://www.macintoshrepository.org/23293-lyrix-for-apple-lisa-xenix ). Bitsavers has a copy of Xenix for Lisa (see http://bitsavers.trailing-edge.com/bits/Apple/Lisa/xenix_3.0_rel1.0/ ) so I’m guessing this was an external thing as well. Apple may not have been selling it (software releases appear to be SCO’s) but I think this was used in places other than Apple’s own shops :-).

                  1. 1

                    What a strange product, I can’t imagine what the point of it was! Serial-terminal-only Unix for a $9,995 machine whose primary feature was its mouse-driven, high-res GUI. How many of the 10,000 lifetime sales of the machine could’ve been to people that wanted to put in serial cards and hang dumb terminals off of it?

                    I guess there weren’t too many 68000 machines in 1983, maybe that was some of the appeal? Or maybe somebody at Microsoft or SCO just loved both the Lisa and Xenix… EDIT: or, as another commenter noted: “The eighties were crazy.”

                    1. 1

                      TIL! That’s awesome!

                      this tumblr seems to agree with you :)

                      1. 2

                        I gotta say, hooking a serial terminal to the Lisa sounds like the most stereotypically Unix nerd thing I’ve ever heard :-D. The eighties were crazy.

                  1. 8

                    I have mixed feelings about retrocomputing myself. I pay some amount of attention to certain types of retrocomputing - I watch youtube channels like The 8 bit guy’s or Cathode Ray Dude’s for instance. I can appreciate how computers of the 8 bit era are technically interesting computing devices in their own right, as well as being of nostalgic interest to the generation that grew up using those computers as kids, and as primitive predecessors of what would eventually become the desktop PCs that are now commonplace in our world.

                    At the same time, I can’t help but think that many types of retrocomputer are just too primitive to be really interesting. Every 8 bit computer I’ve seen just seems incredibly limiting. No one actually writes a document or crunches numbers in a spreadsheet on a computer from the 80s other than to see what it was like to do those tasks at that time - certainly not if you have real work you need to get done. Network connectivity outside of dialing up BBSs on a (very slow) modem barely existed, and a computer that can’t fetch data from a network seems incredibly boring to me.

                    I do agree that there’s some value in a computing system that immediately drops you into a programming environment when you turn it on, as many 8 bit-era machines did with the BASIC prompt. Encouraging computer users to also be computer programmers is good - all else being equal. But BASIC itself is an incredibly primitive programming tool, particularly on those machines with a whopping 65 K of RAM. I remember playing with (variants of) BASIC myself as a kid (on much faster machines), and getting bored with it quickly - what I really wanted to do was learn how to make a window with graphics like the other Windows 98 software on my family’s computer could clearly do. Kids learning to program today (or even a decade or two ago) are much better off for being able to open a browser and write Javascript that can do things like “manipulate a 1024x768 jpeg” or “encrypt a message”, even if it’s not literally the first thing they see when they flip on their computer.

                    Video games probably retain the most value today out of all the software written for those machines - of course, the space of video games you could make once hardware got better massively exploded. King’s Quest II may have been a fun game, but so was Quake, or Civilization II, or Doukutsu Monogatari.

                    I agree with the author that personal nostalgia is a large part of the interest in retrocomputing, and I’m too young to remember the 8-bit era myself. The computers I used as a child were already late 90s-vintage machines like iMacs and multi-hundred-MHz desktop PCs running Windows 98, and by the time I was old enough to really understand computers, the PC ecosystem was recognizably just a somewhat worse version of the ecosystem that exists to this day. I certainly didn’t understand “all of” the windows 98 computer my family had when I was a kid, or expect to. If I wanted to be nostalgic for what it was like to compute at that time, I could get into the Serenity OS project - but even that is really only taking the rough visual design of a 90s UNIX GUI, and is really a modern software project in every way that matters.

                    As I alluded to earlier, I think retrogaming is the aspect of retrocomputing that provides the most value for people engaging in it today. Video games were meant to be fun experiences designed around whatever the available hardware could do at the time they were made. If the designers did a good enough job, that experience will still be fun today even if there are other video game experiences made possible by better hardware. A lot of the video games I played as a kid were console video games, so it’s fun to see people doing things like speedrunning romhack’d Super Metroid or empirically testing which of the original 151 Pokemon would make the best starter.

                    1. 7

                      For me, I just don’t care about 8-bit “bitty boxes” - to me, a computer has to have things like an MMU or a network to be interesting. Software is the personality, and an operating system is the root of it. Thus, my interests align more with stuff like PDP-11s at minimum, but also late 80s/90s PC/Mac stuff, workstations, minicomputers, etc.

                      1. 1

                        I enjoy both, but I know exactly what you mean. There were two phases for me growing up: getting introduced to 8-/16-bit systems with minimal OSes (including game consoles) in the ’80s and then discovering Unix, OS/2, NT, etc in the ‘90s. The two “kinds” of systems feel largely unrelated in a way I haven’t really pondered before.

                    1. 1

                      I’d like to hear more about what the starfield layers do.

                      1. 2

                        Funny thing, if the sources I’ve found online (including MAME source) are to be believed, there were only ever two games that used the starfield generator for its intended purpose. Everyone else, if they used it at all, just used it as a no-memory-cost way to get flat backgrounds, by setting all the palette entries equal.

                        From what I’ve been able to tell by sourcediving jtcps1 and this doc:

                        • Each starfield layer had an X and Y scroll coordinate; those and the 15-color palette used by the starfields were all the CPU had control over.
                        • The starfield data was burned into ROM.
                        • The starfields were always at the back of the layer stack.

                        So really just a pattern of dots (presumably enough bigger than one screen that the player wouldn’t get distracted by repetition) that you could palette how you chose and scroll on demand. And two of them so that you could scroll them at different rates and create a 3D-ish effect.

                        1. 1

                          Thank you! I’d imagined it was probably something like that. What an unusual feature.

                      1. 14

                        Is private browsing mode meant to make you anonymous or is it meant to prevent sites from showing up in your browser history and, therefore, autocomplete suggestions? I always thought it was the latter.

                        1. 20

                          When it was invented everyone called it “porn mode”, which was essentially the use case. Vendor marketing said it was for “buying surprise gifts for your loved ones”, but let’s be real. Its main use-case is not having porn sites in our browser history.

                          1. 17

                            Speak for yourself.

                            The primary use case is checking to see if you need credentials, and testing login flows during development, or before sharing links.

                            1. 9

                              The primary use case is checking to see if you need credentials, and testing login flows during development, or before sharing links.

                              Careful with that – as someone condemned with the dark and evil knowledge of (er, blessed with the knowledge to) support OIDC/SAML integrations, private browsing can cause heartburn. Chrome for sure shares cookies across private browsing sessions, and at least used to maintain existing sessions when you opened private browsing.

                              Firefox’s multi-account containers can work great, but if you don’t practice really good hygiene you can have headaches there too.

                              I’ve been bitten by these problems often enough that I simply use Chrome as my “isolation browser”: as soon as I open it I clear all state so I know I’m not going to spend an hour chasing ghosts.

                              1. 4

                                Dead right. This is the primary reason I use Safari as my day-to-day browser. It doesn’t share cookies or any other session data between private tabs. I also like that it’s the most MacOS-native ‘feel’ but that’s icing on the cake.

                              2. 1

                                Ah, don’t be embarrassed. It is fine. Everyone does it.

                              3. 15

                                Last year I actually did buy a surprise gift for my wife using private browsing. I felt so weird, like I was the first person in history to actually do that.

                                1. 0

                                  You probably were

                                2. 4

                                  I think browser vendors who want to compete with Chrome need to lean into this more. Reducing embarrassment is more viscerally appealing than increasing privacy and may be easier to implement. For example, being able to keep things in browser history but hiding them from autocomplete.

                                  1. 1

                                    Either go to porn sites or don’t. Your choice. Hiding it seems silly.

                                    Hiding surprises, or checking what a page looks like when logged out, or temporarily logging in to a different account on a website… So many legitimate uses for this feature, we don’t need to resort to such base assumptions

                                    1. 7

                                      Big difference between “this person, like a large portion of the population, occasionally visits porn sites” and “this person visits these porn sites for this amount of time and searches for these things while they’re there”.

                                      1. 2

                                        Hiding it seems silly

                                        Some people have religious nuts as parents and benefit from hiding their online activities. Not everyone is an independent adult living in a progressive western society.

                                        1. 1

                                        What are the use cases for the general population (non-techies) though? Don’t tell me it is there for QA reasons; if that were the case, it would be part of the developer tools, not in the main menu.

                                          1. 3

                                            Getting around paywalls.

                                      2. 11

                                        Primarily the latter. Browser vendors attempt and pitch a weak version of the former sometimes, too.

                                      1. 3

                                        The fact that it works at all is amazing. However, 6502 is a really tough target for compiled languages. Even something as basic as having a standard function calling convention is expensive.

                                        1. 3

                                          GEOS has a pretty interesting calling convention for some of its functions (e.g. used at https://github.com/mist64/geowrite/blob/main/geoWrite-1.s#L82): Given that there’s normally no concurrency, and little recursive code, arguments can be stored directly in code:

                                          jsr function
                                          .byte arg1
                                          .byte arg2
                                          

                                          function then picks apart the return address to get at the arguments, then moves it forward before returning to skip over the data. A recursive function (where the same call site might be re-entered before leaving, with different arguments) would have to build a trampoline on a stack or something like that:

                                          lda #argcnt
                                          jsr trampoline
                                          .word function
                                          .byte arg1
                                          ...
                                          .byte argcnt
                                          

                                          where trampoline creates jsr function, a copy of the arguments + rts on the stack, messes with the return address to skip the arguments block, then jumps to that newly created contraption. But I’d rather just avoid recursive functions :-)

                                          1. 1

                                            Needing self-modifying code to deal with function calls reminds me of the PDP-8, which didn’t even have a stack - you had to modify code to put your return address in.

                                            1. 1

                                              Are those the actual arguments and self-modifying code is used to get non-constant data there? Or are the various .byte values the address to find the argument, in Zero Page?

                                              That’s pretty compact at the call site, but a lot of work in the called function to access the arguments. It would be ok for big functions that are expensive anyway, but on 6502 you probably (for code compactness) want to call a function even for something like adding two 32 bit (or 16 bit) integers.

                                              e.g. to add a number at address 30-31 into a variable at address 24-25 you’d have at the caller …

                                                  jsr add16
                                                  .byte 24
                                                  .byte 30
                                              

                                              … and at the called function …

                                              add16:
                                                  pla             ; pull the return address (it points at the JSR's last byte)
                                                  sta ARGP
                                                  pla
                                                  sta ARGP+1
                                                  clc             ; push it back advanced past the two argument bytes;
                                                  lda ARGP        ; RTS wants the high byte pushed first, low byte last
                                                  adc #2
                                                  tax
                                                  lda ARGP+1
                                                  adc #0
                                                  pha
                                                  txa
                                                  pha
                                                  ldy #1          ; the arguments sit 1 and 2 bytes past the stacked address
                                                  lda (ARGP),y
                                                  tax
                                                  iny
                                                  lda (ARGP),y
                                                  tay
                                              
                                              add16_q:
                                                  clc
                                                  lda $0000,y
                                                  adc $00,x
                                                  sta $00,x
                                                  lda $0001,y
                                                  adc $01,x
                                                  sta $01,x
                                                  rts
                                              

                                              So the stuff between add16 and add16_q is 28 bytes of code and 54 clock cycles. The stuff in add16_q is 16 bytes of code and 32 clock cycles. The call to add16 is 5 bytes of code and 6 clock cycles.

                                              It’s possible to replace everything between add16 and add16_q with a jsr to a subroutine called, perhaps, getArgsXY. That will save a lot of code (because it will be used in many such subroutines) but add even more clock cycles – 12 for the JSR/RTS plus more code to pop/save/load/push the 2nd return address on the stack (26 cycles?).

                                              But there’s another way! And this is something I’ve used myself in the past.

                                              Keep add16_q and change the calling code to…

                                                  ldx #24
                                                  ldy #30
                                                  jsr add16_q
                                              

                                              That’s 7 bytes of code instead of 5 (bad), and 10 clock cycles instead of 6 – but you get to entirely skip the 54 clock cycles of code at add16 (maybe 90 cycles if you call a getArgsXY subroutine instead).

                                              You may quite often be able to omit the load immediate of X or Y because one or the other might be the same as the previous call, reducing the calling sequence to 5 bytes.

                                              If there’s some way to make add16 more efficient I’d be interested to know, but I’m not seeing it.

                                              Maybe you could get rid of all the PLA/PHA and use TSX;STX usp;LDX #1;STX usp+1 to duplicate the stack pointer in a 16-bit pointer in Zero Page, grab the return address using LDA instead of PLA, and increment the return address directly on the stack. It’s probably not much better, if at all.
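
                                              Roughly, assuming usp is a spare zero-page pair (and ignoring the pathological case of the stack wrapping within page one), that variant might look like this - and it seems to come out a bit bigger and slower than the PLA/PHA version:

                                              add16_alt:
                                                  tsx             ; point usp/usp+1 at $0100+S so (usp),y can reach the stack
                                                  stx usp
                                                  ldx #1
                                                  stx usp+1
                                                  ldy #1
                                                  lda (usp),y     ; stacked return address, low byte
                                                  sta ARGP
                                                  clc
                                                  adc #2
                                                  sta (usp),y     ; bump it past the two inline argument bytes
                                                  iny
                                                  lda (usp),y     ; high byte
                                                  sta ARGP+1
                                                  adc #0
                                                  sta (usp),y
                                                  ldy #1
                                                  lda (ARGP),y    ; first inline argument
                                                  tax
                                                  iny
                                                  lda (ARGP),y    ; second inline argument
                                                  tay
                                                  ; fall through to add16_q as before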

                                              1. 1

                                                These calling conventions are provided for some functions only, and mostly the expensive ones. From the way it’s implemented for BitmapUp, without looking too closely at the macros, it seems they store the return address at a known address and index through that.

                                                GEOS has pretty complex functions and normally uses virtual registers in the zero page, so I guess this is more an optimization for constant calls: no need to have endless lists of lda #value; sta $02; ... in your code. GEOS just copies the inline values into the virtual registers and calls the regular function, so the only advantage of the format is compactness.

                                            2. 2

                                              Likewise, I’m very impressed it works. Aside from you correctly pointing out how weak stack operations are on the 6502, however, it doesn’t generate even vaguely idiomatic 6502 assembly. That clear-screen extract was horrible.

                                              1. 2

                                                The 6502 is best used treating zero page as a lot of registers with the same kind of calling convention as modern RISC (and x86_64) use: some number of registers that are used for passing arguments and return values and for temporary calculations inside a function (and so that leaf functions don’t have to save anything), plus a certain number of registers that are preserved over function calls and you have to save and restore them if you want to use them. The rest of zero page can be used for globals, the same as .sdata referenced from a Global Pointer register on machines such as RISC-V or Itanium.

                                                If you do that then the only stack accesses needed are push and pop of a set of registers. If you generate the code appropriately then you only have to know to save N registers on function entry and restore the same N and then return on function exit. You can use a small set of special subroutines for that, saving code size. RISC-V does exactly the same thing with the -msave-restore option to gcc or clang.

                                                Of course for larger programs you’ll want to implement your own stack (using two zero page locations as the stack pointer) for the saved registers. 256 bytes should be enough for just the function return addresses.
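
                                                As a minimal sketch of the idea - the zero-page addresses and the strlen16 routine here are made up purely for illustration, with a0/a1 as caller-saved argument/result registers:

                                                a0 = $02            ; caller-saved 16-bit argument/result register ($02/$03)
                                                a1 = $04            ; caller-saved 16-bit scratch register ($04/$05)

                                                ; caller: pass a pointer to a NUL-terminated string in a0,
                                                ; get its 16-bit length back in a0
                                                    lda #<msg
                                                    sta a0
                                                    lda #>msg
                                                    sta a0+1
                                                    jsr strlen16

                                                ; leaf function: may clobber a0/a1 freely, saves nothing
                                                strlen16:
                                                    ldy #0
                                                    sty a1          ; 16-bit count kept in a1/a1+1
                                                    sty a1+1
                                                loop:
                                                    lda (a0),y
                                                    beq done
                                                    inc a1
                                                    bne next
                                                    inc a1+1
                                                next:
                                                    iny
                                                    bne loop
                                                    inc a0+1        ; crossed into the next 256-byte page
                                                    jmp loop
                                                done:
                                                    lda a1          ; return the count in a0/a0+1
                                                    sta a0
                                                    lda a1+1
                                                    sta a0+1
                                                    rts

                                                msg: .byte "HELLO", 0

                                                Nothing here touches the hardware stack beyond the JSR/RTS pair itself, which is the point: leaf functions get their working registers for free, and non-leaf functions only spill what they actually use.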

                                                1. 1

                                                  But I wonder how much of the zero page you can use without stepping on the locations reserved for ROM routines, particularly on the Apple II. It’s been almost three decades since I’ve done any serious programming on the Apple II, but didn’t its ROM reserve some zero-page locations for allowing redirection of ROM I/O routines? If I were programming for that platform today, I’d still want to use those routines, so that, for example, the Textalker screen reader (used in conjunction with the Echo II card) would work. My guess is that similar considerations would apply on the C64.

                                                  1. 1

                                                    The monitor doesn’t use a lot. AppleSoft uses a lot more, but that’s ok because it initialises what it needs on entry.

                                                    https://pbs.twimg.com/media/E_xJ5oWUYAAUo3a?format=jpg&name=4096x4096

                                                    Seems a shame now to have defaced the manual, but in my defence I did it 40 years ago.

                                                  2. 1

                                                    Now I’ve looked into the implementation I see they’re doing something like this, but using only 4 zero page bytes as caller-saved registers. This is nowhere near enough!

                                                    Even 32 bit ARM uses 4 registers, which should probably translate to 8 bytes on 6502 (four pointers or 16 bit integers).

                                                    x86_64, which has the same number of registers as arm32, uses six argument registers. RISC-V uses 8 argument registers, plus another 7 “temporary” registers which a called function is free to overwrite. PowerPC uses 8 argument registers.

                                                    6502 effectively has 128 16-bit registers (the size of pointers or int). There is no reason why you shouldn’t be at least as generous with argument and temporary registers as the RISC ISAs that have 32 registers.

                                                    I’d suggest maybe 16 bytes for caller-save (arguments), 16 bytes for temporaries, 32 bytes for callee-save. That leaves 192 bytes for globals (2 bytes of which will be the software stack pointer).
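
                                                    Spelled out as assembler equates, that split might look something like this (the exact addresses are arbitrary; a real layout would have to dodge whatever zero-page locations the host ROM or OS reserves):

                                                    a0 = $00        ; $00-$0F: 8 caller-saved argument/result registers, 16 bits each
                                                    t0 = $10        ; $10-$1F: 8 temporaries, freely clobbered by callees
                                                    s0 = $20        ; $20-$3F: 16 callee-saved registers
                                                    sp = $40        ; $40-$41: software stack pointer for saved registers
                                                                    ; $42-$FF: the remaining globals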

                                                    1. 1

                                                      Where are you going to save them? In the 256 BYTE stack the 6502 has? Even if the stack wasn’t limited, you still only have at most 65,536 bytes of memory to work with.

                                                      1. 1

                                                        Would be cool to see if this stuff were built to expect bank switching hardware.

                                                        1. 1

                                                          I quote myself:

                                                          Of course for larger programs you’ll want to implement your own stack (using two zero page locations as the stack pointer) for the saved registers. 256 bytes should be enough for just the function return addresses.

                                                          64k of total memory is of course a fundamental limitation of the 6502, so is irrelevant to what details of code generation and calling convention you use. Other than that you want as compact code as possible, of course.

                                                  1. 2

                                                    I forgot how strongly he attributed NextStep’s productivity to being “Object Oriented.”

                                                    I wonder what he understood the term to mean. It was probably quite a bit different than what I mean if I use the term.

                                                    1. 5

                                                      Something a bit more like what people more usually call components these days, more than object oriented languages. It’s all about packaging collections of behaviour behind reusable modular abstractions. He’s right, about a lot of it, although the vocabulary is dated, and we have coalesced more of it into and around the idea of APIs

                                                        Remember, the NeXT idea of OOP is dynamic, late-bound, loosely typed message passing, with Smalltalk as the primary influence, not objects as they eventually went mainstream in the more statically bound sense of Java or C++.

                                                      Some of what they were shooting for was objects as closed components that could be distributed and sold like pieces of a construction kit and you’d be able to quickly assemble desktop apps by dragging them together in a visual editor and just serialising that out to dump a working application image. (Which is kind of how NeXT Interface Builder worked)

                                                      Squint and you can see it in today’s apps that tie together APIs from disparate service providers, and we don’t really talk about this in the vocabulary of objects so much any more, but the early roots of SOA do have a lot of it present in CORBA, XML RPC, SOAP etc. And there is that ‘O’ in JSON still ;-)

                                                      1. 3

                                                        I believe I remember the term “software ICs” being used back then.

                                                    1. 2

                                                      Most of you have probably seen this by now but I’ll leave it here for those who haven’t.

                                                      Also…

                                                      1990s Pentium PC WWW

                                                      2000s Laptop Web 2.0

                                                      2010s Smart Phones Apps

                                                      2020s Wearables TBD

                                                      2030s Embeddables TBD

                                                      I’ve seen this table in 2000 and 2010 and now again in 2020. Each time the “wearables” is touted as next decade’s big thing. I think it’s something that we won’t be able to achieve before the year of Linux on the desktop :-).

                                                      Granted, people have been singing dirges for the personal computer since about that same time, too. First it was thin clients (were it not for that stupid slow-ass network!). Then it was phones and tablets (were it not for them simpletons whose work did not consist of forwarding emails and attending meetings). But, you know, if you predict things at a high enough rate, some of them are bound to come true.

                                                      1. 2

                                                        2020 smart watch, fitness armbands

                                                        They are not as dominant as the others though.

                                                        1. 1

                                                          I regularly take walks without my phone, wearing my cellular watch streaming audiobooks and podcasts to my wireless earbuds, responding to messages through the voice assistant. No “smartglasses” yet, but wearables are important today and a huge growth area.

                                                          Still, yeah, doesn’t feel like anywhere near the impact of PCs or smartphones. Once glasses get here, I think it will.

                                                        1. 3

                                                            Very cool! I had actually wanted to experiment with Rust to do a simple 4-op FM synth on an ARM microcontroller.

                                                            For anyone not familiar with FM synth sounds… you’ve definitely heard it if you listened to any pop music from the 80s - either from the Synclavier or the famous Yamaha DX7. Yamaha’s cost-reduced 4-op FM synths (like the DX100) were also a staple of 90s house music.

                                                          1. 4

                                                            Or played on a Sega Genesis / Mega Drive.

                                                            1. 3

                                                              Can we get an in-browser version of the OP-1? ;)

                                                              1. 5

                                                                You just want the cow, be honest with yourself. :-)

                                                                But more seriously, I got a similar idea about making a sampler in the browser à la MPC or SP404. Not sure whether the workflow would fit a no-pad, no-touch device. Or going back to a tracker like Renoise or Sunvox that fits the computer interface.

                                                              2. 2

                                                                You should def. go for it! It would be really cool to get something running on a microcontroller like that, and Rust seems like the perfect language to make that happen.

                                                              1. 48

                                                                I learned how to balance a red-black tree in college, 20+ years ago, and that’s the last time I ever balanced a red-black tree. Unless the job is writing data structures libraries, why would you ask me that?

                                                                I’ve built large, production systems used in the most secure environments in the world. My code is secure, performant, accurate, and safe…but no, I don’t remember how to find all palindromes in a string off the top of my head.

                                                                I remember interviewing at one of the Big Companies. I said I knew C. They asked me which format specifier in printf would do some obscure thing. I didn’t remember. Guess what? I’ve been writing C for…26? years now and I still sometimes look at man pages. I’d be more worried about a developer who didn’t, honestly.

                                                                1. 19

                                                                  Unless the job is writing data structures libraries, why would you ask me that?

                                                                  Additionally, if I asked an engineer to build a data structures library with red-black trees and they started coding without immediately reaching for a description of the operations, invariants to maintain, etc, for a proper red-black tree, I’d be really nervous. It’s like when a waiter doesn’t write down your order.

                                                                  1. 5

                                                                    To be fair, an experienced waiter can probably keep your order in their head…

                                                                    1. 22

                                                                      I know that in some places it’s considered a badge of honour to be able to take everyone’s order without writing it down, but for many customers it just makes the service worse. I literally do not care how my order gets to the chef. All I care about is that it is correct. Writing it down increases my confidence that it will be correct, meaning that in the wait between ordering and getting my food, I can relax, confident that in due time they will bring me the right things, instead of worrying that I’m going to have to spend my evening negotiating with the waiter and waiting for the chef to get my order right by trial and error.

                                                                      In a similar fashion, remembering how to implement a selection of obscure algorithms is really low on the list of priorities for a software engineer. You could almost argue that for interview purposes, you want a problem the interviewee hasn’t met before, so you can actually observe how they go about solving it.

                                                                      1. 9

                                                                        instead of worrying that I’m going to have to spend my evening negotiating with the waiter and waiting for the chef to get my order right by trial and error

                                                                        Everybody’s got their own thing going on, but I can’t help thinking you might be optimizing for the wrong kind of restaurant experience.

                                                                        1. 5

                                                                          At risk of breaking the metaphor, we should optimize for safety first: Don’t serve allergens to patrons who indicate food allergies. This suggests that orders should be written down or tabulated in point-of-sale systems, rather than memorized, and that orders should be systematically assembled rather than designated by petnames.

                                                                      2. 13

                                                                        Whether the waiter can or not, I trust the process less if they don’t write it down.

                                                                        1. 2

                                                                          I think this might be a “restaurant as status experience” thing? The waiter shows off their memory, this demonstrates that they’re a good waiter, which makes this a good restaurant, which makes you a person who eats at a good restaurant.

                                                                          1. 9

                                                                          I don’t know if it is a US thing, but having been a waiter/bartender/manager in multiple bars and restaurants in Europe, I think a lot of folks here seem to have had a bad experience with waiters, or hold a grudge about how to maximize the certainty of getting exactly what they asked for. And for the sibling comments comparing note-taking as a SE with waiting tables, I would love to see how SEs making minimum wage and living on tips would learn to optimize their workflow.

                                                                          I have been trained not to write anything down for any table under five people. I insist on trained: at first we were allowed to take notes for anything, then after a few days or a week you stop taking notes for any table under four, and so on. It became a challenge between colleagues. In short, you develop a kind of memory palace of your restaurant and its tables, and make weird associations between guests and their orders; you also optimize your work, and you are faster and more precise at the end of the day because you will remember longer. Heck, I still have the table layouts of all the places I worked, and the orders of regulars and of random people I happened to see around the city I was living in, burned into my head years later.

                                                                          As a manager, my rule for training my team was to be able to take the order for a table of X, where X was our average table size. It sped up the process, made waiters more aware of the flow of their tables, and better balanced the workload to the bar and the kitchen: timing when to take some orders, and reshuffling the queue to let a two-person table bypass the ten-person table before it reaches the kitchen. And bartenders and cooks have to learn and do the same. A restaurant is never a FIFO sequential process; you have to manage a concurrent/parallel environment where everybody needs to be served at the right time, within a known acceptable lag. Having waiters who can memorize your order and remember it for as long as you are in the restaurant is similar to the session cookie in your browser.

                                                                        2. 0

                                                                          Zeigarnik effect. Waiters don’t have to analyze, break down, reshuffle and regroup their orders. Software developers do that all the time. I don’t trust those who don’t take notes. All software developers take notes. Good and bad ones. Those who don’t - are not software developers, at best, they are code-monkeys.

                                                                      3. 9

                                                                        Also, red-black trees suck. Keeping the colour in every tree node bloats the data – quite probably by 8 bytes for the struct size on a modern machine, and many malloc libraries round up to the next multiple of 16 or 32 bytes. And both red-black and AVL algorithms are complex.

                                                                        Hash tables are generally more useful now, and b-tree like structures make more sense given cache line sizes (or VM pages), but if I do require a balanced binary tree then my go-to now is the scapegoat tree. The code is much simpler and smaller, there is nothing extra in each node, and it requires only a few bytes of extra global storage (for powers of your acceptable unbalance factor) for all trees, plus one integer per tree if you will be allowing nodes to be deleted. I can and have written complete bug-free code for scapegoat tree in 5 minutes in programming contests/exams where standard libraries were not allowed to be used.

                                                                        But, yes, the main point here is that if I need to write code for a data structure or algorithm for my actual job then I research the literature, find the best thing, implement it very carefully (possibly with modifications), put it into production, AND THEN FORGET THE DETAILS to make room in my brain for the next task.

                                                                      1. 1

                                                                        I’m no expert on these things but I’d think for the vast majority of cases, all you’d be interested in is whether a minimum version is installed, so wouldn’t you want to check that rather than if it was the system Ruby?

                                                                        1. 28

                                                                          MIPS is everywhere, still. Including in network gear, wireless, IoT, and other embedded applications.

                                                                          1. 8

                                                                            This. While it seems to me that most high-end network gear is slowly migrating towards ARM, MIPS keeps turning up in odd places. I recently dug around in the weird world of handheld video game consoles designed to run emulators, and found this spreadsheet compiled by the fine folks here. I was surprised to see a relatively large number of CPUs with “XBurst” architecture, which is MIPS32 plus some DSP extensions.

                                                                            I have a friend who recently got an internship at a company to help optimize their AS/400-based database infrastructure, and it looks like the current IBM systems are still backwards-compatible with S/390 programs. So while you might not see s390 much it’s probably not going away quickly.

                                                                            I believe Alpha, PA-RISC and IA-64 are officially deprecated these days, so nobody is making new ones and nobody seems to want to. To my surprise, it appears that people are still manufacturing SPARC hardware though.

                                                                            1. 3

                                                                              Mostly Fujitsu, but even they are doing more aarch64.

                                                                              1. 3

                                                                                it looks like the current IBM systems are still backwards-compatible with S/390 programs

                                                                                My understanding is that IBM Z stuff today is extremely compatible with System/360 programs from the mid-’60s.

                                                                                1. 2

                                                                                  So while you might not see s390 much it’s probably not going away quickly.

                                                                                  For legacy applications on MVS and friends, yeah, but IBM basically killed 31-bit Linux.

                                                                                  To my surprise, it appears that people are still manufacturing SPARC hardware though.

                                                                                  There’s still a market for legacy Solaris systems.

                                                                                  1. 1

                                                                                    How frequently are these legacy Solaris systems updated? How frequently are IBM Z systems updated? I heard (might be unsubstantiated) that some mainframes still run 20 year old Perl, even though the OS gets updates.

                                                                                    1. 1

                                                                                      Depends how much they care; if they do, they’ll move their ancient application onto newer Solaris on newer hardware (i.e. an M8).

                                                                                      The 20-year-old-Perl makes me think you’re talking USS on z/OS (aka MVS); that’s a world I know very little of.

                                                                                  2. 1

                                                                                    IBM i (née AS/400) is all on PowerPC these days. It’s a very different system from s390/mainframe/zOS

                                                                                  1. 2

                                                                                    The piece of information I’m missing:

                                                                                    How is Nissan giving up on the root shell? As far as I understand the article, having an owner-accessible root shell was not their intention to begin with.

                                                                                    1. 6

                                                                                      I think the phrase “gives up” here is intended like “allows against their wishes” not “giving up on”.

                                                                                      1. 5

                                                                                        This paragraph contains the gist of it

                                                                                        After some poking, [ea] discovered the script designed to mount USB storage devices had a potential flaw in it. The script was written in such a way that the filesystem label of the device would be used to create the mount point, but there were no checks in place to prevent a directory traversal attack. By crafting a label that read ../../usr/bin/ and placing a Bash script on the drive, it’s possible to run arbitrary commands on the head unit.

                                                                                        Full report from the author [ea] there : https://github.com/ea/bosch_headunit_root. The write-up is super complete and really interesting. It may be a better link than the news from hackaday.

                                                                                        EDIT: the gist of it : U-Boot + no password set for root + SSH EmptyPassword permitted.

                                                                                        1. 2

                                                                                          Yes, I read the article. What I was confused about was that the title didn’t fit the content.

                                                                                      1. 11

                                                                                        Looking at pictures: Oh, that brings me back to working on SPARC workstations in the late 90s

                                                                                        Raymii: “You can run it on modern linux”

                                                                                        me: “Hmm, no thank you, nostalgia’s not worth that kind of pain”

                                                                                        1. 12

                                                                                          The feeling it still evokes in me is “expensive”. 100% of the machines I saw running it back in the day had gigantic (for the time) Trinitron monitors and just looked… “very official.”

                                                                                        1. 2

                                                                                          Now add Apple to this model, who work in none of those ways accused of being the “Silicon Valley” ways.

                                                                                          1. 2

                                                                                            They’re not really a software company though, software is a means to an end to them - just like with “traditional” companies.

                                                                                            Their main pillar is hardware and they try to shift to services, and the software they ship on their hardware isn’t great (basically living off the NeXT-inheritance from 20 years ago) and from what can be seen with their services, neither is it great there.

                                                                                            1. 3

                                                                                              The problem is that there are no true Scotsmen: no company is a software company. Facebook is an ad broker. Netflix is a media channel. Microsoft is a licensing company. Red Hat is a training company. It just happens that they each use software quite a bit in delivering their “true” business, just like Apple.

                                                                                              1. 4

                                                                                                Yeah, shipping the 2nd most popular desktop, mobile os and web browser is pretty trivial. Any “real” software companies could do it. All the tech has been there for 20 years after all.

                                                                                                1. 2

                                                                                                  iCloud had a rough start (and even more so its predecessors .Mac, MobileMe, etc.) but it seems mostly rock-solid today and has an astronomical amount of traffic. A billion and a half active devices, I believe, with a large proportion using multiple iCloud services all day every day. I’m not saying Apple doesn’t have room for improvements in services, but “Apple is bad at services” is just a decade old meme at this point, IMO.

                                                                                              1. 1

                                                                                                With increased automation of those reference counting operations and the addition of weak references, the convenience level for developers is essentially indistinguishable from a tracing GC now.

                                                                                                Are weak references new in Apple environments? Hasn’t that been table stakes for a refcounting system for some time?

                                                                                                1. 3

                                                                                                  Weak references appeared alongside Automatic Reference Counting (ARC) in 2011. Before then you’d just omit the retain that the code which didn’t own the reference would normally have issued when it received it.

                                                                                                  That was Objective-C, of course, but Swift’s ARC is essentially the same.

                                                                                                1. 11

                                                                                                  I like Apple hardware a lot, and I know all of the standard this-is-why-it-is-that-way reasoning. But it’s wild that the new MacBook Pros only have two USB-C ports and can’t be upgraded past 16GB of RAM.

                                                                                                  1. 18

                                                                                                    Worse yet, they have “secure boot”, where secure means they’ll only boot an OS signed by Apple.

                                                                                                    These aren’t computers. They are Appleances.

                                                                                                    Prepare for DRM-enforced planned obsolescence.

                                                                                                    1. 9

                                                                                                      I would be very surprised if that turned out to be the case. In recent years Apple has been advertising the MacBook Pro to developers, and I find it unlikely they would choose not to support things like Boot Camp or running Linux based OSs. Like most security features, secure boot is likely to annoy a small segment of users who could probably just disable it. A relevant precedent is the addition of System Integrity Protection, which can be disabled with minor difficulty. Most UEFI PCs (to my knowledge) have secure boot enabled by default already.

                                                                                                      Personally, I’ve needed to disable SIP once or twice but I can never bring myself to leave it disabled, even though I lived without it for years. I hope my experience with Secure Boot will be similar if I ever get one of these new computers.

                                                                                                      1. 12

                                                                                                        Boot Camp

                                                                                                        Probably a tangent, but I’m not sure how Boot Camp would fit into the picture here. ARM-based Windows is not freely available to buy, to my knowledge.

                                                                                                        1. 7

                                                                                                          Disclaimer: I work for Microsoft, but this is not based on any insider knowledge and is entirely speculation on my part.

                                                                                                          Back in the distant past, before Microsoft bought Connectix, there was a product called VirtualPC for Mac, an x86 emulator for PowerPC Macs (some of the code for this ended up in the x86 on Arm emulator on Windows and, I believe, on the Xbox 360 compatibility mode for Xbox One). Connectix bought OEM versions of Windows and sold a bundle of VirtualPC and a Windows version. I can see a few possible paths to something similar:

                                                                                                          • Apple releases a Boot Camp thing that can load *NIX, Microsoft releases a Windows for Macs version that is supported only on specific Boot Camp platforms. This seems fairly plausible if the number of Windows installs on Macs is high enough to justify the investment.
                                                                                                          • Apple becomes a Windows OEM and ships a Boot Camp + Windows bundle that is officially supported. I think Apple did this with the original Boot Camp because it was a way of de-risking Mac purchases for people: if they didn’t like OS X, they had a clean migration path away. This seems much less likely now.
                                                                                                          • Apple’s new Macs conform to one of the new Arm platform specifications that, like PREP and CHRP for PowerPC, standardise enough of the base platform that it’s possible to release a single OS image that can run on any machine. Microsoft could then release a version of Windows that runs on any such Arm machine.

                                                                                                          The likelihood of any of these depends a bit on the economics. In the past, Apple has made a lot of money on Macs and doesn’t actually care if you run *NIX or Windows on them because anyone running Windows on a Mac is still a large profit-making sale. This is far less true with iOS devices, where a big chunk of their revenue comes from other services (And their 30% cut on all App Store sales). If the new Macs are tied more closely to other Apple services, they may wish to discourage people from running another OS. Supporting other operating systems is not free: it increases their testing burden and means that they’ll have to handle support calls from people who managed to screw up their system with some other OS.

                                                                                                          1. 2

                                                                                                            Apple’s new Macs conform to one of the new Arm platform specifications

                                                                                                            We already know for sure that they use their own device trees; no ACPI, sadly.

                                                                                                            Supporting other operating systems is not free

                                                                                                            Yeah, this is why they really won’t help with running other OS on bare metal, their answer to “I want other OS” is virtualization.

                                                                                                            They showed a demo (on the previous presentation) of virtualizing amd64 Windows. I suppose a native aarch64 Windows VM would run too.
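
                                                                                                            For what it’s worth, the sanctioned path is Virtualization.framework (macOS 11+). A rough sketch of booting a Linux guest with it looks something like the following; the kernel/initrd/disk paths and sizes are placeholders, the process needs the com.apple.security.virtualization entitlement, and the exact API is worth double-checking against Apple’s docs:

                                                                                                              import Foundation
                                                                                                              import Virtualization

                                                                                                              // Configure a simple Linux guest: boot loader, CPUs, memory, one disk.
                                                                                                              let bootLoader = VZLinuxBootLoader(kernelURL: URL(fileURLWithPath: "/path/to/vmlinuz"))
                                                                                                              bootLoader.initialRamdiskURL = URL(fileURLWithPath: "/path/to/initrd")
                                                                                                              bootLoader.commandLine = "console=hvc0 root=/dev/vda"

                                                                                                              let config = VZVirtualMachineConfiguration()
                                                                                                              config.bootLoader = bootLoader
                                                                                                              config.cpuCount = 2
                                                                                                              config.memorySize = 2 * 1024 * 1024 * 1024  // 2 GiB

                                                                                                              do {
                                                                                                                  // Attach a raw disk image as a virtio block device.
                                                                                                                  let disk = try VZDiskImageStorageDeviceAttachment(
                                                                                                                      url: URL(fileURLWithPath: "/path/to/disk.img"), readOnly: false)
                                                                                                                  config.storageDevices = [VZVirtioBlockDeviceConfiguration(attachment: disk)]

                                                                                                                  try config.validate()

                                                                                                                  let vm = VZVirtualMachine(configuration: config)
                                                                                                                  vm.start { result in
                                                                                                                      if case .failure(let error) = result {
                                                                                                                          print("VM failed to start: \(error)")
                                                                                                                      }
                                                                                                                  }
                                                                                                                  RunLoop.main.run()  // keep the process alive while the guest runs
                                                                                                              } catch {
                                                                                                                  print("Configuration error: \(error)")
                                                                                                              }

                                                                                                            As far as I know the framework only shipped with a Linux boot loader at first, so Windows guests would have to go through a third-party hypervisor built on the lower-level Hypervisor framework.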

                                                                                                          2. 2

                                                                                                            ARM-based Windows is available for free as .vhdx VM images if you sign up for the Windows Insider Program, at least

                                                                                                          3. 9

                                                                                                            In the previous Apple Silicon presentation, they showed virtualization (with of-course-not-native Windows and who-knows-what-arch Debian, but I suspect both native aarch64 and emulated amd64 VMs would be available). That is their offer to developers. Of course nothing about running alternative OS on bare metal was shown.

                                                                                                            Even if secure boot can be disabled (likely – “reduced security” mode is already mentioned in the docs), the support in Linux would require lots of effort. Seems like the iPhone 7 port actually managed to get storage, display, touch, Wi-Fi and Bluetooth working. But of course no GPU because there’s still no open PowerVR driver. And there’s not going to be an Apple GPU driver for a loooong time for sure.

                                                                                                            1. 2

                                                                                                              I think dual-booting has always been a less-than-desirable “misfeature” from Apple’s POV. Their whole raison d’être is to offer an integrated experience where the OS, hardware, and (locked-down) app ecosystem all work together closely. Rip out any one of those and the whole edifice starts to tumble.

                                                                                                              So now they have a brand-new hardware platform with an expanded trusted base, so why not use it to protect their customers from “bad ideas” like disabling secure boot or side-loading apps? Again, from their perspective they’re not doing anything wrong, or hostile to users; they’re just deciding what is and isn’t a “safe” use of the product.

                                                                                                              I for one would be completely unsurprised to discover that the new Apple Silicon boxes were effectively just as locked down as their iOS cousins. You know, for safety.

                                                                                                              1. 3

                                                                                                                They’re definitely not blocking downloading apps. Federighi even mentioned universal binaries “downloaded from the web”. Of course you can compile and run any programs. In fact we know you can load unsigned kexts.

                                                                                                                Reboot your Mac with Apple silicon into Recovery mode. Set the security level to Reduced security.

                                                                                                                Remains to be seen whether that setting allows it to boot any unsigned kernel, but I wouldn’t just assume it doesn’t.

                                                                                                                1. 4

                                                                                                                  They also went into some detail at WWDC about this, saying that the new Macs will be able to run code in the same contexts existing ones can. The message they want to give is “don’t be afraid of your existing workflow breaking when we change CPU”, so tightening the gatekeeper screws alongside the architecture shift is off the cards.

                                                                                                                2. 2

                                                                                                                  I think dual-booting has always been a less-than-desirable “misfeature” from Apple’s POV. Their whole raison d’être is to offer an integrated experience where the OS, hardware, and (locked-down) app ecosystem all work together closely. Rip out any one of those and the whole edifice starts to tumble.

                                                                                                                  For most consumers, buying their first Mac is a high-risk endeavour. It’s a very expensive machine and it doesn’t run any of their existing binaries (especially since they broke Wine with Catalina). Supporting dual boot is Apple’s way of reducing that risk. If you aren’t 100% sure that you’ll like macOS, there’s a migration path away from it that doesn’t involve throwing away the machine: just install Windows and use it like your old machine. Apple doesn’t want you to do that, but by giving you the option of doing it they overcome some of the initial resistance of people switching.

                                                                                                                  1. 7

                                                                                                                    The context has switched, though.

                                                                                                                    Before, many prospective buyers of Macs used Windows, or needed Windows apps for their jobs.

                                                                                                                    Now, many more prospective buyers of Macs use iPhones and other iOS devices.

                                                                                                                    The value proposition of “this Mac runs iOS apps” is now much larger than the value proposition of “you can run Windows on this Mac”.

                                                                                                                    1. 2

                                                                                                                      There’s certainly some truth to that but I would imagine that most iOS users who buy Macs are doing so because iOS doesn’t do everything that they need. For example, the iPad version of PowerPoint is fine for presenting slides but is pretty useless for serious editing. There are probably a lot of other apps where the iOS version is quite cut down and is fine for a small device but is not sufficient for all purposes.

                                                                                                                      In terms of functionality, there isn’t much difference between macOS and Windows these days, but the UIs are pretty different and both are very different from iOS. There’s still some risk for someone who is happy with iOS on the phone and Windows on the laptop buying a Mac, even if it can run all of their iOS apps. There’s a much bigger psychological barrier for someone who is not particularly computer literate moving to something new, even if it’s quite similar to something they’re more-or-less used to. There are still vastly more Windows users than iOS users, though it’s not clear how many of those are thinking about buying Macs.

                                                                                                                      1. 2

                                                                                                                        There are still vastly more Windows users than iOS users, though it’s not clear how many of those are thinking about buying Macs.

                                                                                                                        Not really arguing here, I’m sure you’re right, but how many of those Windows users choose to use Windows, as opposed to having to use it for work?

                                                                                                                        1. 1

                                                                                                                          I don’t think it matters very much. I remember trying to convince people to switch from MS Office ’97 to OpenOffice around 2002; the two were incredibly similar back then, but people were very nervous about the switch. Novell did some experiments just replacing the Office shortcuts with OpenOffice and found most people didn’t notice at all, but the same people were very resistant to switching if you offered them the choice.

                                                                                                                3. 1

                                                                                                                  That “developer” might mean Apple developers.

                                                                                                                4. 3

                                                                                                                  Here is the source of truth from WWDC 2020 about the new boot architecture.

                                                                                                                  1. 2

                                                                                                                    People claimed the same thing about T2-equipped Intel Macs.

                                                                                                                    On the T2 Intels at least, the OS verification can be disabled. The main reason you can’t just install e.g. Linux on a T2 Mac is the lack of support for the SSD (which is managed by the T2 itself). Even stuff like ESXi can be used on T2 Macs - you just can’t use the built-in SSD.

                                                                                                                    That’s not to say that it’s impossible they’ve added stricter boot requirements, but I’d wager that, like other security enhancements in Macs which cause some to clutch their pearls, this too can probably be disabled.

                                                                                                                  2. 10

                                                                                                                    … This is the Intel model it replaces: https://support.apple.com/kb/SP818?viewlocale=en_US&locale=en_US

                                                                                                                    Two TB3/USB-C ports; Max 16GB RAM;

                                                                                                                    It’s essentially the same laptop, but with a non-intel CPU/iGPU, and with USB4 as a bonus.

                                                                                                                    1. 1

                                                                                                                      Fair point! Toggling between “M1” and “Intel” on the product page flips between 2 ports/4 ports and 16GB RAM/max 32GB RAM, and it’s not clear this is a base model/higher tier toggle. I still think this is pretty stingy, but you’re right – it’s not a new change.

                                                                                                                    2. 5

                                                                                                                      These seem like replacements for the base model 13” MBP, which had similar limitations. Of course, it becomes awkward that the base model now has a much, much better CPU/IGP than the higher-end models.

                                                                                                                      1. 2

                                                                                                                        I assume this is just a “phase 1” type thing. They will probably roll out additional options when their A15 (or whatever their next cpu model is named) ships down the road. Apple has a tendency to be a bit miserly (or conservative, depending on your take) at first, and then the next version looks that much better when it rolls around.

                                                                                                                        1. 2

                                                                                                                          Yeah, they said the transition would take ~2 years, so I assume they’ll slowly go up the stack. I expect the iMacs and 13-16” MacBook Pros to be refreshed next.

                                                                                                                          1. 3

                                                                                                                            Indeed. Could be they wanted to make the new models a bit “developer puny” to keep from cannibalizing the more expensive units (higher end mac pros, imacs) until they have the next rev of CPU ready or something. Who knows how much marketing/portfolio wrangling goes on behind the scenes to suss out timings for stuff like this (these are billion-dollar product lines), all in order to hit projected quarterly earnings a few quarters down the road.

                                                                                                                            1. 5

                                                                                                                              I think this is exactly right. Developers have never been a core demographic for Apple to sell to - it’s almost accidental that OS X being a great Unix desktop, coupled with software developers’ higher incomes, made Macs so popular with developers (iOS being an income gold mine helped too, of course).

                                                                                                                              But if you’re launching a new product, you look at what you’re selling best of (iPads and MacBook Airs) and you iterate on that.

                                                                                                                              Plus, what developer in their right mind would trust their livelihood to a 1.0 release?!

                                                                                                                              1. 9

                                                                                                                                I think part of the strategy is that they’d rather launch a series of increasingly powerful chips, instead of starting with the most powerful and working their way down - makes for far better presentations. “50% faster!” looks better than “$100 cheaper! (oh, and 30% slower)”.

                                                                                                                                1. 2

                                                                                                                                  It also means that they can buy more time for some sort of form-factor update while having competent, if not ideal, machines for developers in-market. I was somewhat surprised at the immediate availability given that these are transition machines. This is likely due to the huge opportunity for lower-priced machines during the pandemic. It is prudent for Apple to get something out for this market right now since an end might be on the horizon.

                                                                                                                                  I’ve seen comments about the Mini being released for this reason, but it’s much more likely that the Air is the product that this demographic will adopt. Desktop computers, even if we are more confined to our homes, have many downsides. Geeks don’t always appreciate those downsides, but they drive the online conversations. Fans in the Mini and MBP increase the thermal envelope, so they’ll likely be somewhat more favourable for devs and enthusiasts. It’s going to be really interesting to see what exists a year from now. It will be disappointing if at least some broader changes to the form factor and design aren’t introduced.

                                                                                                                                2. 1

                                                                                                                                  Developers have never been a core demographic for Apple to sell to

                                                                                                                                  While this may have been true once, it certainly isn’t anymore. The entire iPhone and iPad ecosystem is underpinned by developers who pretty much need a Mac and Xcode to get anything done. Apple knows that.

                                                                                                                                  1. 2

                                                                                                                                    Not only that, developers were key to switching throughout the 00s. That Unix shell convinced a lot of us, and we convinced a lot of friends.

                                                                                                                                    1. 1

                                                                                                                                      In the 00s, Apple was still an underdog. Now they rule the mobile space, their laptops are probably the only ones that make any money in the market, and “Wintel” is basically toast. Apple can afford to piss off most developers (the ones who like the Mac because it’s a nice Unix machine) if it believes doing so will make a better consumer product.

                                                                                                                                      1. 2

                                                                                                                                        I’ll give you this; developers are not top priority for them. Casual users are still number one by a large margin.

                                                                                                                                    2. 1

                                                                                                                                      Some points

                                                                                                                                      • Developers for iOS need Apple way more than Apple needs them
                                                                                                                                      • You don’t need an ARM Mac to develop for ARM i-Devices
                                                                                                                                      • For that tiny minority of developers who develop native macOS apps, Apple provided a transition hardware platform - not free, by the way.

                                                                                                                                      As seen by this submission, Apple does the bare minimum to accommodate developers. They are certainly not prioritized.

                                                                                                                                      1. 1

                                                                                                                                        I don’t really think it’s so one-sided towards developers - sure, developers do need to cater for iOS if they want good product outreach, but remember that Apple are also taking a 30% cut on everything in the iOS ecosystem and the margins on their cut will be excellent.

                                                                                                                                  2. 2

                                                                                                                                    higher end mac pros

                                                                                                                                    Honestly trepidatiously excited to see what kind of replacement apple silicon has for the 28 core xeon mac pro. It will either be a horrific nerfing or an incredible boon for high performance computing.

                                                                                                                            2. 4

                                                                                                                              and can’t be upgraded past 16GB of RAM.

                                                                                                                              Note that the RAM is part of the SoC package. You can’t upgrade it afterwards; you must choose the correct amount at checkout.

                                                                                                                              1. 2

                                                                                                                                This is not new to the ARM models. Memory in Mac laptops, and often desktops, has not been expandable for some time.

                                                                                                                              2. 2

                                                                                                                                I really believe that most people (including me) don’t need more than two Thunderbolt 3 ports nowadays. You can get a WiFi or Bluetooth version of pretty much anything, and USB hubs solve the issue when you are at home with many peripherals.

                                                                                                                                Also, some Thunderbolt 3 displays can charge your laptop and act like a USB hub. They are usually quite expensive but really convenient (that’s what I used at work before COVID-19).

                                                                                                                                1. 4

                                                                                                                                  It’s still pretty convenient to have the option of plugging in on the left or right based on where you’re sitting, so it’s disappointing for that reason.

                                                                                                                                  1. 4

                                                                                                                                    I’m not convinced. A power adapter and a monitor will use up both ports, and AFAIK monitors that will also charge the device over Thunderbolt are pretty uncommon. Add an external hard drive for Time Machine backups, and now you’re juggling connections regularly rather than just leaving everything plugged in.

                                                                                                                                    On my 4-port MacBook Pro, the power adapter, monitor, and hard drive account for 3 ports. My 4th is taken up with a wireless dongle for my keyboard. Whenever I want to connect my microphone for audio calls or a card reader for photos I have to disconnect something, and my experiences with USB-C hubs have shown them to be unreliable. I’m sure I could spend a hundred dollars and get a better hub – but if I’m spending $1500 on a laptop, I don’t think I should need to.

                                                                                                                                    1. 2

                                                                                                                                      and AFAIK monitors that will also charge the device over Thunderbolt are pretty uncommon

                                                                                                                                      Also, many adapters that pass through power and have USB + a video connector of some sort only allow 4k@30Hz (such as Apple’s own USB-C adapters). Often the only way to get 4k@60Hz with a non-Thunderbolt screen is by using a dedicated USB-C DisplayPort Alt Mode adapter, which leaves only one USB-C port for everything else (power, any extra USB devices).

                                                                                                                                  2. 1

                                                                                                                                    I’ve been trying to get a Mac laptop with 32GB for years. It still doesn’t exist. But that’s not an ARM problem.

                                                                                                                                    Update: Correction, 32GB is supported in Intel MBPs as of this past May. Another update: see the reply! I must have been ignoring the larger sizes.

                                                                                                                                    1. 3

                                                                                                                                      I think that link says that’s the first 13 inch MacBook Pro with 32GB RAM. I have a 15 inch MBP from mid-2018 with 32GB, so they’ve been around for a couple of years at least.

                                                                                                                                      1. 1

                                                                                                                                        You can get 64GB on the 2020 MBP 16” and I think on the 2019, too.

                                                                                                                                    1. 3

                                                                                                                                      I see some comments about port and RAM limits; the situation is a bit more interesting than most people appreciate. The M1 has the RAM built into the CPU package, which helps space, cost, performance, and power, but of course reduces flexibility. Fine for home and office users, and honestly a lot of developers and other pro users too.

                                                                                                                                      Here’s where it gets fun: I expect the higher-end chips are going to move main memory out of the CPU package and instead include GPU-style high-speed DRAM for high-traffic memory regions. Maybe two stacks of HBM2E for 1TB/s of bandwidth. That will be a big improvement for all kinds of compute-intensive apps and could potentially open the door to high-end engineering apps. For the first time in 20 years the fastest workstations won’t be commodity PCs.

                                                                                                                                      Should be fun.
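
                                                                                                                                      Rough arithmetic behind that figure, using HBM2E’s published per-stack numbers (a 1024-bit interface at up to about 3.6 Gb/s per pin; nothing Apple has announced):

                                                                                                                                        $2 \times 1024\,\text{bits} \times 3.6\,\text{Gb/s} \div 8 \approx 920\ \text{GB/s} \approx 1\ \text{TB/s}$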

                                                                                                                                      1. 1

                                                                                                                                        What about a multiple-level memory thing? (Well, one more in addition to all the levels there already are.) 16 or 32GB of CPU-/GPU-shared super fast memory on the SoC for stuff that needs it plus a few sticks (or, more likely, soldered) of conventional RAM. Would likely require new smarts at the Darwin layer, of course.

                                                                                                                                        1. 3

                                                                                                                                          Yes, that’s roughly what I’m guessing Apple does. There’s some precedent. Intel has shipped chips with in-package eDRAM, AMD has done work on CPU/GPU unified memory space, and some upcoming Intel HPC-targeted chips are announced to include in-package HBM2E. 16/32GB of HBM + slots to take you up to 1TB+ in DDR4 DIMMS seems like a reasonable answer for a tower Mac. 2/4GB + DDR4 soldered on seems likely for a 16” successor. I doubt there would be many software changes outside the kernel and Metal infrastructure.