1. 23

    What I also find frustrating on macOS is the fact you need to download Xcode packages to get basic stuff such as Git. Even though I don’t use it, Xcode is bloating my drive on this machine.

    We iOS developers are also not pleased with the size on disk of an Xcode installation. But you only need the total package if you are using Xcode itself.

    A lighter option is to delete Xcode.app and its related components like ~/Library/Developer, then get its command line tools separately with xcode-select --install. Git is included; iOS simulators are not.
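
    In case it’s useful, here’s a sketch of that route. The printed output line is illustrative rather than captured from a real machine, but the commands themselves are standard:

    $ sudo rm -rf /Applications/Xcode.app ~/Library/Developer
    $ xcode-select --install   # prompts you to install the Command Line Tools
    $ xcode-select -p          # confirm the active developer directory
    /Library/Developer/CommandLineTools
    $ git --version            # git is now provided by the Command Line Tools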

    1. 7

      I’m always surprised when I see people complain about how much space programs occupy on disk. It has been perhaps a decade since I even knew (off the top of my head) how big my hard drive was, let alone how much space any particular program required. Does it matter for some reason that I don’t understand?

      1. 20

        Perhaps you don’t, but some of us do fill up our drives if we don’t stay on top of usage. And yes, Xcode is one of the worst offenders, especially if you need to keep more than one version around. (Current versions occupy 18-19GB when installed. It’s common to have at least the latest release and the latest beta around; I personally need to keep a larger back catalogue.)

        Other common storage hogs are VM images and videos.

        1. 4
          $ df -h / /data
          Filesystem      Size  Used Avail Use% Mounted on
          /dev/nvme0n1p6  134G  121G  6.0G  96% /
          /dev/sda1       110G   95G  9.9G  91% /data
          

          I don’t know how large Xcode is; a quick internet search says it’s about 13GB, and someone else mentioned almost 20GB in another comment here. Neither would fit on my machine unless I deleted some other stuff, and I’d rather not do that just to install git.

          The MacBook Pro comes with 256GB by default, so my 244GB spread out over two SSDs isn’t that unusually small. You can upgrade it to 512GB, 1TB, or 2TB, which will set you back $200, $400, or $800, so it’s not cheap. You can literally buy an entire laptop for that $400, and quite a nice laptop for that $800.

          1. 6

            $800 for 2TB is ridiculous. If I had to use a laptop with soldered storage chips as my main machine, I’d rather deal with an external USB-NVMe adapter.

            1. 2

              I was about to complain about this, but actually checked first (for a comment on the internet!) and holy heck, prices have come down since I last had to buy an SSD.

            2. 1

              I guess disk usage can be a problem when you have to overpay for storage. On the desktop I built at home, my Samsung 970 EVO Plus (2TB NVMe) cost me $250 and the 512GB NVMe for the OS partition was $60. My two 2TB HDDs went into a small Synology NAS for bulk/slow storage.

            3. 4

              It matters because a lot of people’s main machines are laptops, and even at 256GB (the base storage of a MacBook Pro) and not storing media or anything, you can easily fill that up.

              When I started working I didn’t have that much disposable income; I bought an Air with 128GB, and later “upgraded” with one of those 128GB SD-card-slot things. Having stuff like Xcode (but to be honest, even stuff like a debug build of certain kinds of Rust programs) around would take up _so much_ space. Docker images and the like are also an issue, but at least I understand those. Lots of dev tools are ginormous and it’s painful.

              “Just buy a bigger hard drive from the outset” is not really useful advice when you’re sitting there trying to do a thing and don’t want to spend, what, $1500 to resolve this problem.

              1. 1

                I don’t know. For laptops for Unix and Windows (gaming), size hasn’t really been an issue since 2010 or so? These days you can buy at least 512GB without making much of a dent in the price. Is Apple that much more expensive?

                (I’ll probably buy a new one this year and would go with at least a 512GB SSD and 1TB HDD.)

                1. 3

                  Apple under-specs their entry level machines to make the base prices look good, and then criminally overcharges for things like memory and storage upgrades.

                  1. 1

                    Not to be too dismissive but I literally just talked about what I experienced with my air (that I ended up using up until…2016 or so? But my replacement was still only 256GB that I used up until last year). And loads of people buy the minimum spec thing (I’m lucky enough now to be able to upgrade beyond my needs at this point tho)

                    I’m not lying to prove a point. Also not justifying my choices, just saying that people with small SSDs aren’t theoretical

              2. 1

                Yup, it’s actually what’s written on the Homebrew website and what I used at first.

              1. 6

                 I really appreciate the vision behind this project. I like the FreeBSD base system because it’s organized and understandable, but then adding a GUI gives me decision paralysis. A default UI based on proven usability studies sounds like what I want. Big bonus points for bringing back the spatial Finder.

                1. 47

                  This is advocating that you always be a disposable commodity within a labor market. A repackaging of the “free labour” idea from liberalism: that wage labour frees the worker to engage in any contract as they please. But the reality of being an exchangeable commodity is rather different.

                  1. 30

                    You can still be indispensable through your unique contribution and areas of focus that others would not have pioneered. By making it easy for people to follow in your footsteps and take over from you, you are influential, you change the way things work, and people notice that. When it’s for the organization’s betterment they appreciate it too. :)

                    I don’t want to be indispensable in the sense of a bus factor. I do want to be indispensable in the sense of “Wow, it’s a good thing /u/kevinc works here.”

                    1. 16

                      That’s perfectly reasonable, but in order for it to work, there has to be a company at the other end that needs, values, and can recognize innovation and unique contribution. All companies claim they do, because you don’t want to project a boring software-sweatshop image, but realistically, many of them don’t. Only a pretty small fraction of today’s computer industry checks off the “needs” part, and you’ve still got two boxes to go. For many, if not most people in our field, making yourself indispensable in the sense of a bus factor is an unfortunate but perfectly valid – in fact, often the only – career choice that their geography and/or employment allows for.

                      1. 9

                        Well, technically we’re all bus-replaceable. Some of us have enough experience and/or goodwill built up in the company that if you actually do what the article proposes, you won’t be easily replaceable even if you make yourself “replaceable”. It’ll either be too expensive for the company to find and train your replacements, or they’ll lose out on the value you’re bringing.

                        What the article doesn’t mention, though, is that you can’t do any of that stuff if you’re a green junior dev. It’s easy to find a job when you’re good at it and can prove it, but getting kicked out on the street while I was still young in the industry would have scared me shitless.

                        1. 1

                          I agree you want to find a workplace that does value you, and even if you do find that, you have to watch for the organization changing out from under you. Just, on your way there, you can earn some great referrals by giving what you know instead of hoarding it.

                          As an engineer, is it valid to make yourself a wrench in the works entrusted to you? I think no. But to your point, you’re a person first and an engineer second. If survival is on the line, it’s another story.

                          1. 3

                            Just, on your way there, you can earn some great referrals by giving what you know instead of hoarding it.

                            I absolutely agree that it is invalid to make yourself a wrench in the works entrusted to you, but computer stuff is absolutely secondary to many companies out there.

                            Note: I edited my comment because, in spite of my clever efforts at anonymising things, I’m preeeetty sure they can still be traced to the companies in question. I’ll just leave the gist of it: so far, my thing (documentation) has not earned me any referrals. It has, however, earned me several Very Serious Talks with managers, and HR got involved in one of them, too.

                            I know, and continue to firmly believe (just like you, I think) that good work trumps everything else, but I did learn a valuable lesson (after several tries, of course): never underestimate good work’s potential to embarrass people, or to make things worse for a company that’s in the business of selling something other than good work.

                      2. 8

                        I think this is a bit unfair. I’ve worked with people who have hidden information and jealously guarded their position in a company, and it makes it harder to do your job. You have to dance around all sorts of politics, and all changes are viewed with suspicion. You have to learn what any given person is protecting in order to get what you need to do your job. You hear stories about people getting bribed to do their jobs. People won’t tell you how to do things, but will do them so they are irreplaceable. People build systems with an eye towards capturing other parts of the organization.

                        Most of that would go away if people did what was described in the article.

                        1. 9

                          Maybe if IT workers had a better way of protecting their job security – such as a union – there wouldn’t be the motivation to do this kind of thing.

                          (Note: I don’t do this kind of thing, but I totally understand why someone would, and worker solidarity prevents me from criticizing them for it.)

                          1. 2

                            I don’t know if I agree with you in this specific case. It was at a place that never fired anyone. People who were not capable of doing their jobs were kept for years. It seemed to be predicated more on face-saving, inter-team rivalry, and competition for budget.

                        2. 6

                          Yes, I had the same thought as you. It’s true that “if you can’t be replaced, you can’t be promoted”, but since when are people promoted anymore? The outlook of this article is that job security is not something you can always take for granted; indeed, that you can take upward (or at least lateral) mobility for granted. Maybe that’s true for highly-marketable (white, cis-male, young, able-bodied) developers in certain urban areas, but at my age, I wouldn’t want to count on it.

                          1. 4

                            Being a disposable commodity doesn’t necessarily imply low value. You can do something that is highly uniform and fungible, and also well compensated, I think.

                            1. 16

                              you think wrong. Historically, “deskilling” (the term for when a worker becomes standardized and easily replaceable) corresponds to salaries going down. This happens for a variety of reasons: you cannot complain, you cannot unionize easily, you cannot negotiate your salary. You get the money you get only because your employer has no means of finding somebody who can do exactly the same work for less pay. If that becomes possible and you don’t have rights that protect you (minimum wage, collective agreements, industry-wide agreements) or collective organizations that can protect you, salaries go down. Fighting deskilling is not necessarily the most efficient strategy and doesn’t have to be the only one, but giving up on it is certainly no good.

                              On top of that, deskilling is coupled with more alienation, less commitment and in general a much worse working experience, because you know you don’t make a difference. You become less human and more machine.

                              Programming, I believe, naturally fights against deskilling because what can be standardized and therefore automated will eventually be automated. But the industry is capable of generating new (often pointless) jobs on top of these new layers of automation of tasks that before were done manually. Actively pursuing deskilling is unreasonable also from an efficiency point of view, because the same problem of “scale” is already solved by our own discipline. The same is not true for most other professions: a skilled factory worker cannot build the machine he’s using or improve it (with rare exceptions). A programmer can and will if necessary. Deskilling means employing people that will only execute and not be able to control the process or the organization, leaving that privilege and responsibility to managers.

                              1. 7

                                the article is not about deskilling; it’s about communicating your work to your peers. Those are very different things.

                                1. 8

                                  it says explicitly to try to be disposable. Disposability and deskilling are equivalent. The term, in the labor context, is not just used to say “this job should require less skill to be done”. It’s used for any factor that makes you disposable or not, regardless of the level of skill involved; clearly skill plays a relevant role in the vast majority of cases. What he’s advocating is to surrender any knowledge of the company, the platform and so on, so that you can be easily replaced by somebody that doesn’t have that knowledge. You’re supposed to put in extra effort deliberately (not at your boss’s request, and maybe even against company practices) to make this process more frictionless for your employer. That’s what the article is saying.

                                  1. 3

                                    it says explicitly to try to be disposable.

                                    While it does say that, I think that the actual meaning of the article is “make the work you do disposable”, not “make yourself disposable”. That way you can go through life making changes that make it easier for everyone around but also highly profitable for the company so that while the work that you currently are doing can be done by whomever, the potential value you bring at each new thing you do is incalculable. So they’d keep you, of course.

                                    1. 1

                                      What he’s advocating is to surrender any knowledge of the company, the platform and so on, so that you can be easily replaced by somebody that doesn’t have that knowledge.

                                      Are you suggesting that the replacement will not have that knowledge, or will at the moment of replacement have gained that knowledge?

                                      Disposability and deskilling are equivalent.

                                      This is not the case in my mental vocabulary, and I don’t think it is the case in the article linked. Disposability is about upskilling as a team, becoming more engaged in craft, and having a community of practice, so that the community doesn’t rely on a single member to continue to upskill/self-improve.

                                  2. 1

                                    While I agree that deskilling is a thing, it may affect blue-collar workers on an assembly line more than IT professionals (to an extent). Replacing someone isn’t just firing the more expensive person and hiring a cheaper one. It involves onboarding and training, which may take several months and directly translates to lost earnings.

                                    1. 1

                                      It has happened to plenty of cognitive workers throughout the world. Deskilling is also replacing accountants, fraud analysts and many other professions with ML models that live on the work of data labelers somewhere in Pakistan.

                              1. 2

                                The third video’s comparison between structured concurrency and structured programming was eye-opening for me.

                                1. 3

                                  From the title, I thought the article would be about writing declarative code when you can. Code takes fewer steps to understand when instead of having to deduce the result from the steps, you read the result right in the code. Don’t tell me the recipe, show me the sandwich.

                                  But a high-minded practice like that is no help if the code is already written. If you’re maintaining a legacy system — and sooner or later, don’t we all? — you could benefit immensely from this kind of tool. I’d love to see it realized.

                                  1. 2

                                    This feels like a lot more changes than a typical minor Swift version. Maybe that’s because almost all of them concern one topic. We’ve been looking forward to these concurrency features for years, and then they all arrived at once. I hope this means Swift is much more viable for server work.

                                    1. 12

                                      Thanks for posting. That was a delightful read.

                                      1. 36

                                        Better title, “don’t just check performance on the highest-end hardware.” Applies to other stuff too, like native apps — developers tend to get the fastest machines, which means the code they’re building always feels fast to them.

                                        During the development cycle of Mac OS X (10.0), most of the engineers weren’t allowed to have more than 64MB of RAM, which was the expected average end-user config — that way they’d feel the pain of page-swapping and there’d be more incentive to reduce footprint. I think that got backpedaled after a while because compiles took forever, but it was basically a good idea (as was dog-fooding the OS itself, of course.)

                                        1. 4

                                          Given that the easy solution is often the most poorly performing and that people with high-end hardware have more money and thus will be the majority of your revenue, it would seem that optimising for performance is throwing good money after bad.

                                          You are not gonna convince websites driven by profit with sad stories about poor people having to wait 3 extra seconds to load the megabytes of JS.

                                          1. 6

                                            depends on who your target audience is. If you are selling premium products, maybe. But then still, there are people outside of tech who are willing to spend money, just not on tech. So I would be very careful with that assumption.

                                            1. 2

                                              It’s still based on your users and the product you sell. Obviously Gucci, Versace and Ferrari have different audiences but the page should still load quickly. That’s why looking at your CrUX reports and RUM data helps with figuring out who you think your users are and who’s actually visiting your web site.

                                              I don’t own a Ferrari but I still like to window shop. Maybe one day I will. Why make the page load slow because you didn’t bother to optimize your JavaScript?

                                            2. 5

                                              These days your page performance (e.g. Core Web Vitals) is an SEO factor. For public sites that operate as a revenue funnel, a stakeholder will listen to that.

                                              1. 3

                                                I don’t work on websites, but my understanding is that generally money comes from ad views, not money spent by the user, so revenue isn’t based on their wealth. I’m sure Facebook’s user / viewer base isn’t mostly rich people.

                                                Most of my experience comes from working on the OS (and its bundled apps, like iChat). It was important that the OS run well on the typical machines out in the world, or people wouldn’t upgrade or buy a new Mac.

                                                1. 2

                                                  Even if you were targeting only the richest, relying on high-end hardware to save you would be a bad strategy.

                                                  • Mobile connections can have crappy speeds, on any hardware.
                                                  • All non-iPhone phones are relatively slow, even the top-tier luxury ones (e.g. foldables). Apple has a huge lead in hardware performance, and other manufacturers just can’t get equally fast chips for any price.
                                                  • It may also backfire if your product is for well-off people, but not tech-savvy people. There are people who could easily afford a better phone, but they don’t want to change it. They see tech upgrades as a disruption and a risk.
                                                2. 3

                                                  I’ve heard similar (I believe from Raymond Chen) about Windows 95 - you could only have the recommended spec as stated on the box unless you could justify otherwise.

                                                  1. 2

                                                    It would be very useful if the computer could run at full speed while compiling, throttling down to medium speed while running your program.

                                                    1. 1

                                                      Or you use a distributed build environment.

                                                      1. 1

                                                        If you use Linux, then I believe this can be accomplished with cgroups.
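
                                                        For instance, a rough sketch with cgroup v2 (the group name and numbers are made up; cpu.max takes “quota period” in microseconds):

                                                        $ sudo mkdir /sys/fs/cgroup/slowlane
                                                        $ echo "50000 100000" | sudo tee /sys/fs/cgroup/slowlane/cpu.max    # 50ms of CPU per 100ms period
                                                        $ echo $$ | sudo tee /sys/fs/cgroup/slowlane/cgroup.procs           # move this shell into the group
                                                        $ ./your-program                                                    # inherits the ~50% CPU cap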

                                                      2. 2

                                                        They might have loved a distributed build system at the time. :) Compiling on fast boxes and running the IDE on slow boxes would’ve been a reasonable compromise I think.

                                                        1. 1

                                                          most of the engineers weren’t allowed to have more than 64MB of RAM,

                                                          Can OS X even run on that amount of RAM?

                                                          1. 15

                                                            OS X 10.0 was an update to OPENSTEP, which ran pretty happily with 8 MiB of RAM. There were some big redesigns of core APIs between OPENSTEP and iOS to optimise for power / performance rather than memory use. OPENSTEP was really aggressive about not keeping state for UI widgets. If you have an NSTableView instance on OPENSTEP, you have one NSCell object (<100 bytes) per column and this is used to draw every cell in the table. If it’s rendering text, then there’s a single global NSTextView (multiple KiB, including all other associated state) instance that handles the text rendering and is placed over the cell that the user is currently editing, to give the impression that there’s a real text view backing every cell. When a part of the window is exposed and needs redrawing, the NSCell instances redraw it. Most of the objects that are allocated on the drawing path are in a custom NSZone that does bump allocation and bulk free, so the allocation is cheap and the objects are thrown away at the end of the drawing operation.
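
                                                            To make the flyweight idea concrete, here’s a rough sketch of the pattern in Rust (names invented for illustration; this is not the actual AppKit/OPENSTEP API):

                                                            struct Frame { x: f64, y: f64, w: f64, h: f64 }

                                                            // One of these exists per *column*, not per row, and holds no per-row state.
                                                            trait Cell {
                                                                fn draw(&self, value: &str, frame: &Frame);
                                                            }

                                                            struct TextCell; // zero-sized here; under 100 bytes in the OPENSTEP analogy

                                                            impl Cell for TextCell {
                                                                fn draw(&self, value: &str, frame: &Frame) {
                                                                    println!("draw {:?} in a {}x{} box at ({}, {})", value, frame.w, frame.h, frame.x, frame.y);
                                                                }
                                                            }

                                                            // The same cell instance draws every row in turn, which is why the scheme
                                                            // is cheap on memory but intrinsically serial.
                                                            fn draw_column(cell: &dyn Cell, rows: &[String]) {
                                                                for (i, row) in rows.iter().enumerate() {
                                                                    let frame = Frame { x: 0.0, y: 20.0 * i as f64, w: 200.0, h: 20.0 };
                                                                    cell.draw(row, &frame);
                                                                }
                                                            }

                                                            fn main() {
                                                                let rows: Vec<String> = (1..=3).map(|i| format!("row {}", i)).collect();
                                                                draw_column(&TextCell, &rows);
                                                            }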

                                                            With OS X, the display server was replaced with one that did compositing by default. Drawing happened the same way, but each window’s full contents were stored. This was one of the big reasons that OS X needed more RAM than OPENSTEP. The full frame buffer for a 24-bit colour 1024x768 display is a little over 2 MiB. With OPENSTEP, that’s all you needed. When a window was occluded, you threw away the contents and drew over it with the contents of the other window[1]. With OS X, you kept the contents of all windows in memory[2] . If you’ve got 10 full-screen windows, now you need over 20 MiB just for the display. In exchange for this, you get faster UI interaction because you’re not having to redraw on expose events.

                                                            Fast forward to the iPhone era and now you’ve got enough dedicated video memory that storing a texture for every single window was a fairly negligible impact on the GPU space and having 1-2 MiB of system memory per window to have a separate NSView instance (even something big like NSTextView) for every visible cell in a table was pretty negligible and the extra developer effort required to use the NSCell infrastructure was not buying anything important. To make matters worse, the NSCell mechanisms were intrinsically serial. Because every cell was drawn with the same NSCell instance, you couldn’t parallelise this. In contrast, an NSView is stateful and, as long as the controller / model support concurrent reads (including the case that they’re separate objects), you can draw them in parallel. This made it possible to have each NSView draw in a separate thread (or on a thread pool with libdispatch), spreading the drawing work across cores (improving power, because the cores could run in a lower power state and still be faster than one core doing all of the work in a higher power state, with the same power envelope). It also meant that the result of drawing an NSView could be stored in a separate texture (CoreAnimation Layer) and, if the view hadn’t changed, be composited very cheaply on the GPU without needing the CPU to do anything other than drop a couple of commands into a ring buffer. All of this improves performance and power consumption on a modern system, but would have been completely infeasible on the kind of hardware that OPENSTEP or OS X 10.0 ran on.

                                                            [1] More or less. The redraws actually drew a bit more than was needed, and the extra was stored in a small cache, because doing a redraw for every row or column of pixels that was exposed was too slow; asking views to draw a little bit more and caching it meant the reveal looked smooth as a window was gradually exposed. Each window would (if you moved the mouse in a predictable path) draw the bit that was most likely to be exposed next, and that would just be copied into the frame buffer by the display server as the mouse moved.

                                                            [2] Well, not quite all - if memory was constrained and you had some fully occluded windows, the system would discard them and force redraws on expose.

                                                            1. 1

                                                              Thanks for this excellent comment. You should turn it into a mini post of its own!

                                                            2. 3

                                                              Looks as if the minimum requirement for OS X 10.0 (Cheetah) was 128 MB (unofficially 64 MB minimum).

                                                              1. 2

                                                                Huh. You know, I totally forgot that OS X first came out 20 years ago. This 64M number makes a lot more sense now :)

                                                              2. 1

                                                                10.0 could, but not very well; it really needed 128MB. But then, 10.0 was pretty slow in general. (It was released somewhat prematurely; there were a ton of low-hanging-fruit performance fixes in 10.1 that made it much more useable.)

                                                            1. 7

                                                               A text editor is a central tool in my workflow; almost all my work goes through it. (Same for web browsers.) I would find it completely unreasonable to use a proprietary tool for such a core need, especially given the rather good quality of open source alternatives. Evidently other people disagree, and that’s completely fine, but I am constantly surprised that there is a large enough pool to make “proprietary text editor” a sustainable business.

                                                              (Maybe there is a failure among contributors to open source editors to find the right way to commercialize them and make them feel “aww” to the audience that likes to pay for slick software.)

                                                              1. 15

                                                                Whether a tool is proprietary or open source is an abstract property that’s totally unrelated to its user experience.

                                                                1. 8

                                                                  that’s totally unrelated to its user experience.

                                                                  I would consider having the right to read, modify and share the changes I make to an editor a core part of the user experience. I’ve fixed bugs both for myself and for friends in the text editor I currently use, not having the right to do that would certainly sour my experience.

                                                                  1. 4

                                                                    I would consider having the right to read, modify and share the changes I make to an editor a core part of the user experience.

                                                                    That’s fine, but it’s a position that’s shared by statistically zero other people.

                                                                    1. 2

                                                                      That’s fine, but it’s a position that’s shared by statistically zero other people.

                                                                      If your population is the entirety of people who use software, I do not doubt that. If your population is constrained to people using text editors to write programs, I’d want a link to a study before believing you :).

                                                                      1. 1

                                                                        Ehh, fair :p

                                                                    2. 4

                                                                      I would consider having the right to read, modify and share the changes I make to an editor a core part of the user experience.

                                                                      I think that’s the key difference. I’ve literally never even looked at the source code for the vast majority of tools I use. They do what I need them to do, so I have no motivation to change them. I use (and pay for) JetBrains IDEs (which are proprietary) partly because I’ve never even encountered a meaningful (to me) bug or problem with any of them. That being said, FOSS editors provide competition that improves the quality of proprietary editors. So if that’s your cup of tea, then that’s great in my book!

                                                                    3. 2

                                                                      Saying this doesn’t make it true. In practice, some open tools have better user experiences; these are not always coincidental or unrelated to the development model.

                                                                      1. 12

                                                                        Open source GUI tools also can often have innate challenges with UX due to a “too many cooks” effect — there have been many articles written about this, with titles like “why open source usability tends to suck”. I can’t speak for all tools, but in my experience I’ve found proprietary ones to have much better UX than open source ones.

                                                                        (My background is working at Apple from 1991-2007. I’ve worked with gifted UI designers who were passionate about getting things right. It could be really painful as the engineer who had to make all those changes and hone details, but I think the results were worth it. Of course sometimes this goes wrong and features get dropped or too much time gets spent on questionable eye candy, but in general I think it’s the way to go for really good UX.)

                                                                        1. 3

                                                                          Yes, the “too many cooks” effect often happens. I see what you mean based on your experience.

                                                                          I’m speaking more generally, in the hopes of highlighting some design options software developers might be less aware of. To that end, I would like to make two distinctions:

                                                                          First, I want to point out that “user experience” takes many forms. In the case of a text editor, the “primary” experience is the person doing the text editing. But there are also others; e.g. the experience for the extension/plugin designer. It is often said that committees don’t design UIs well, but on the other hand, it can be very valuable to have a small army of people to listen to suggestions, triage them, document rough spots, and ease user pain.

                                                                          Second, I distinguish between (a) code licensing and availability (e.g. open source code) and (b) the governance model. My hypothesis is that the governance model is a larger driving factor when it comes to user experience.

                                                                          This would be an interesting area to study empirically. I personally would enjoy speaking with large numbers of open source contributors to see how they feel about their efforts and the results.

                                                                    4. 10

                                                                      For those who don’t intend to spend cycles on editing the tool itself and just want to get the best thing and use it, their personal definition of “best” will lead them from one tool to another over the years. For such a central piece of workflow, it can be easy to justify some investment of money and/or learning time every now and then.

                                                                      If I recall correctly, the first version of Sublime introduced the minimap UI feature we’ve since seen in Atom and VS Code. It was very fast for a graphical app, especially compared to its then competitor TextMate. Both TextMate and Sublime offered scripting in major languages of their day, Ruby and Python respectively.

                                                                      Sitting back and observing, you can see these editors maybe trying to do too much of the job themselves, repeating work they could have shared, not standing the test of time. But to a serial app-of-the-moment user, that’s just the way it goes.

                                                                      1. 2

                                                                        This here is why we can’t have nice things :(

                                                                        1. 1

                                                                          I’m not familiar with any good open source text editors which weren’t formerly commercial — could you name any that I’ve been missing?

                                                                          1. 2

                                                                            Some examples: emacs, vim, neovim, kakoune, vis, …

                                                                            1. 0

                                                                              That’s the reply I expected, and I leave the interpretation to the reader.

                                                                        1. 1

                                                                          I did a double take initially, because early iPhones are my responsive layout minimum spec for screen width.

                                                                          1. 8

                                                                            Interesting read! I had no idea there was such a wide gulf between iPhone and non-iPhone performance. When I switched to an iPhone a year or two ago I definitely noticed that everything felt faster, but the magnitude of the difference is actually a little shocking.

                                                                            1. 4

                                                                              This is not far from why the M1 chip’s effective performance surprised many people, while for us iOS developers it was simply a bigger leap than we expected. The A-series chips have been improving quickly and steadily for a decade.

                                                                            1. 27

                                                                                               Generally I liked it, but for some reason it discourages square-style borders in designs. One answer treats a square border as a wrong answer, but that’s just a style issue. I don’t see anything wrong with square borders.

                                                                              1. 30

                                                                                Also, having the “skip” button just be text instead of being an actual button is somehow the “correct” design.

                                                                                1. 6

                                                                                  You’re not alone. That’s an aspect of mobile platform standards that plenty of people in the design world aren’t wild about.

                                                                                  1. 11

                                                                                    I am not in the design world but I really want to punch somebody for this. Either give me a choice or don’t give me a choice. Don’t hide half the options under different design. This is the standard cookie bullshit dark pattern.

                                                                                    1. 4

                                                                                      Oh yeah, you’re talking about the “option that preserves your privacy is diminished” dark pattern? Agreed. That much is not a platform standard. What I was referring to as a platform standard that many designers don’t like is for buttons to be borderless.

                                                                                  2. 2

                                                                                    With a single ‘highlighted’ option it makes speed-nexting easier.

                                                                                     Too bad most organisations put the anti-user option as the default whenever they can.

                                                                                  3. 10

                                                                                    What’s not explicitly stated here is that these designs are in the style of popular mobile standards like the stock iOS search bar or Material Design in the share prompt. In mobile app development we often try to keep consistency with other apps. Users switch between apps quickly and they need to be more similar in terms of appearance and layout to keep the friction to a minimum. Colors and fonts are fair game to change, but shapes, capitalization, putting the confirm button on a certain side, etc. are viewed as having a correct (conformant) answer. At least, that’s a common viewpoint.

                                                                                    On a web app, go nuts :) In that case it’s all up to your own design system, and I agree fully.

                                                                                    Another possible interpretation of “correct”: Your designer gave you the intended appearance. You built it. Now it looks like this. Either the design or implementation might have introduced a mistake, but either way the buck stops with you. Is it right? “Did I do this correctly” is an everyday question for mobile app developers.

                                                                                    1. 1

                                                                                      As others have said it’s more about consistency. If there are two rectangular items, and 6 of the 8 corners are rounded, then the smallest change that achieves full consistency is to make all 8 corners rounded (requires changing only 2 corners) rather than making all 8 sharp (requires changing 6 corners). Even more so if rounded corners are the default style of the platform (which they often are these days).

                                                                                    1. 8

                                                                                      Honestly I like working in groups of three or four better than in pairs. You can be in the background for part of the time while two others go back and forth; you can get up and leave for a bit without disrupting anything.

                                                                                      One-on-one situations are more stressful to me. I never get a break from being the direct focus of the other person’s attention.

                                                                                      1. 3

                                                                                        This is so contextual. Generally my experience is the opposite, but only because my pairing sessions are voluntary, laid back, and last only as long as they need to be. I have never had a good experience with any kind of group programming beyond a pair. There is too much time wasted considering people’s feelings and negotiating social dynamics. A good pair can be intimate and almost as relaxed as programming solo.

                                                                                         All that said, I get how this is a very individual thing, influenced by your own personality, the culture in which it takes place, and many more things, and it’s interesting how varied people’s experiences are.

                                                                                        1. 4

                                                                                          It’s definitely a subjective experience. I think it’s important that teams be able to work out their own process to suit individuals’ needs, even discarding group work entirely if it’s not effective for them.

                                                                                          That said, I don’t think time spent considering your teammates’ feelings is wasted time. When social dynamics are recognized they can perhaps be addressed directly.

                                                                                          1. 3

                                                                                            To clarify, I wasn’t saying considering people’s feelings is waste of time. It’s essential. My point is that managing the O(n^2) politics of a group – even a group of friends – while trying to do deep programming work just makes everything less effective. And, for me, a lot less fun. There is no amount of trust or addressing of issues that can minimize the dynamics to a point where it works harmoniously with the focus required to program well. Again, YMMV. This is just a subjective report and not a prescription.

                                                                                      1. 5

                                                                                        You can’t pair or mob and have it be healthy without psychological safety in the team environment. The people in the process have to believe it’s OK to take breaks, OK to make a mistake or show work in progress, OK not to know the answer yet, OK to experiment with the process and mutate it, OK to reject it outright having given it a try. A manager can’t just tell people to work in a group and expect anything better than your average group work in school.

                                                                                        I also just don’t see a great amount of value in pairing or mobbing with just other developers. The benefits are greater when it’s a cross-discipline group, and you can reduce handoffs by merging or overlapping stages of workflow.

                                                                                        1. 3

                                                                                          Is “mob programming” a thing?

                                                                                          In Swedish, it would be a very unfortunate term as “mobbning” (mobbing) is the generic term for bullying.

                                                                                          1. 4

                                                                                            It is a thing. As far as I know, the term hasn’t had a popular revision, but some Americans I know do feel that it needs one.

                                                                                            At my job we use the term “ensemble” to refer to working groups in which the members represent different specialties.

                                                                                        1. 5

                                                                                           On “The importance of unpacking abstractions”, I’ve found that I (and, likely, some others) have the opposite problem, “always unpacking abstractions”, which, when done recursively for any large system, makes you incredibly inefficient, because you end up spending an exorbitant amount of unnecessary time learning instead of doing. (This is mostly a function of my difficulty tolerating black boxes - I really, really want to completely understand all of the layers of a system before I begin to manipulate it.)

                                                                                          That is - the ideal programmer will both have the ability to pierce abstraction layers to understand what’s going on at a lower level, and the ability to manipulate systems without total recursive understanding of every single layer it’s built on - and then employ those abilities appropriately.

                                                                                          1. 1

                                                                                            For me this is a matter of trust. Do I trust the black box to behave a certain way? And if not, what’s it going to take for me to become optimistic about it so I can stay zoomed out?

                                                                                          1. 2

                                                                                            The work project I lead and its testing tooling are in TypeScript. But when it comes to basic scripting, like our CI/CD steps, we reach for bash. I’d rather use the application’s language for business problems and the execution environment’s language for environment problems.

                                                                                              1. 29

                                                                                                Weird, I definitely disagree on expanding this. It just wastes horizontal space and keystrokes for no benefit.

                                                                                                 Maybe I32 and Str instead of i32 and str, but abbreviations for commonly used things are good. You’re not even getting rid of the abbreviation; Int is, after all, short for Integer.

                                                                                                1. 9

                                                                                                  I agree with this (I think lowercase would be fine, too, though).

                                                                                                  I think that Rust overdoes it a little bit on the terseness.

                                                                                                  I understand that Rust is a systems language and that Unix greybeards love only typing two or three characters per thing, but there’s something to be said for being descriptive.

                                                                                                  Examples of very terse things that might be confusing to a non-expert programmer:

                                                                                                  • Vec
                                                                                                  • fn
                                                                                                  • i32, u32, etc
                                                                                                  • str
                                                                                                  • foo.len() for length/size
                                                                                                  • mod for module
                                                                                                  • mut - this keyword is wrong anyway
                                                                                                  • impl

                                                                                                  None of the above bothered me when I learned Rust, but I already had lots of experience with C++ and other languages, so I knew that Vec was short for “vector” immediately. But what if I had come from a language with “lists” rather than “vectors”? It might be a bit confusing.

                                                                                                  And I’m not saying I would change all/most of the above, either. But maybe we could tolerate a few of them being a little more descriptive. I’d say i32 -> int32, Vec -> Vector, len() -> count() or length() or size(), and mut -> uniq or something.

                                                                                                  1. 11

                                                                                                    mut -> uniq

                                                                                                    Definitely this!

                                                                                                     For context, for those who aren’t familiar: &mut pointers are really more about guaranteeing uniqueness than mutability. The property &mut pointers guarantee is that only one points at a given object at a time, and that nothing accesses that object except through them while they exist.

                                                                                                    Mut isn’t really correct because you can have mutability through a & pointer using Cell types. You can have nearly no mutability through a &mut pointer by just not implementing any mutable methods on the type (though you can’t stop people from doing *mut_ptr = new_value()).
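
                                                                                                     A tiny self-contained illustration of both halves of that claim (standard library only):

                                                                                                     fn main() {
                                                                                                         use std::cell::Cell;

                                                                                                         // Mutation through a shared `&` reference, via interior mutability:
                                                                                                         let c = Cell::new(0);
                                                                                                         let shared = &c; // plain `&`, not `&mut`
                                                                                                         shared.set(shared.get() + 1);
                                                                                                         assert_eq!(c.get(), 1);

                                                                                                         // A `&mut` reference guarantees uniqueness, whether or not you mutate:
                                                                                                         let mut x = 10;
                                                                                                         let unique = &mut x;
                                                                                                         // let alias = &x; // error: cannot borrow `x` while `unique` is still in use
                                                                                                         *unique += 1;
                                                                                                         assert_eq!(x, 11);
                                                                                                     }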

                                                                                                    The decision to call this mut was to be similar to let mut x = 3… I’m still unconvinced by that argument.

                                                                                                    1. 4

                                                                                                      Not to mention the holy war over whether let mut x = 3 should even exist, or if every binding is inherently a mutable binding since you aren’t actually prevented from turning a non-mutable binding into a mutable one:

                                                                                                      let x = 3;
                                                                                                      let mut x = x;
                                                                                                      // mutate the ever living crap out of x
                                                                                                      
                                                                                                      1. 4

                                                                                                        My favorite is {x} allowing mutability, because now you’re not accessing x, but a temporary value returned by {}.
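
                                                                                                         A minimal example of the trick, for anyone who hasn’t seen it:

                                                                                                         fn main() {
                                                                                                             let x = vec![1, 2, 3]; // note: not declared `mut`
                                                                                                             // x.push(4);          // error[E0596]: cannot borrow `x` as mutable
                                                                                                             {x}.push(4);           // compiles: `{x}` moves `x` into a temporary,
                                                                                                                                    // and temporaries can be mutably auto-borrowed
                                                                                                         }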

                                                                                                        1. 2

                                                                                                          I never knew about that one! Cute.

                                                                                                    2. 11

                                                                                                      For an example, check out some Swift code. Swift more or less took Rust’s syntax and made it a little more verbose. fn became func, the main integer type is Int, sequence length is .count, function arguments idiomatically have labels most of the time, and so on. The emphasis is on clarity, particularly clarity at the point of use of a symbol — a function should make sense where you find a call to it, not just at its own declaration. Conciseness is desirable, but after clarity.

                                                                                                      1. 2

                                                                                                        Yep. I also work with Swift and I do like some of those choices. I still think the function param labels are weird, though. But that’s another topic. :)

                                                                                                      2. 4

                                                                                                         I think this mostly doesn’t matter - I doubt anyone would pick Rust as their first language, given its complexity, so it’s not really that much of an issue. Keywords are all sort of arbitrary anyway, and you’re just gonna have to learn them. Who’d think go would spawn a thread?

                                                                                                        I, for one, think these are pretty nice - many people will learn Python so they expect len and str, and fn and mod are OK abbreviations. I think the terseness makes Rust code look nice (I sorta like looking at Rust code).

                                                                                                        Though I’d agree on mut (quite misleading) and impl (implement what?).

                                                                                                      3. 2

                                                                                                        Oh, true.

                                                                                                        I don’t care about the exact naming conventions, as long as it is consistent. (This is in fact exactly how I named types in my project though, what a coincidence. :-D)

                                                                                                        In general the random abbreviations of everything, everywhere are pretty annoying.

                                                                                                        1. 3

                                                                                                          It’s the consistency, yes. Why should some types be written in minuscules?

                                                                                                          1. 4

                                                                                                            Lowercase types are primitive types, while CamelCase ones are library types. The former have special support from the compiler and usually map directly onto the machine’s instruction set, while the latter could be implemented as a 3rd party library.
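
                                                                                                            In Rust terms, the split looks like this (a quick sketch):

                                                                                                                fn main() {
                                                                                                                    // Primitive types: lowercase, built into the compiler,
                                                                                                                    // usually mapping directly onto machine words.
                                                                                                                    let n: i32 = -1;
                                                                                                                    let x: f64 = 3.14;
                                                                                                                    let ok: bool = true;

                                                                                                                    // Library types: CamelCase, defined in std and, in principle,
                                                                                                                    // implementable as ordinary library code.
                                                                                                                    let s: String = String::from("hello");
                                                                                                                    let v: Vec<u8> = Vec::new();
                                                                                                                    let _ = (n, x, ok, s, v);
                                                                                                                }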

                                                                                                            1. 4

                                                                                                              Because they are stack-allocated primitive types that implement Copy, unlike the other types which are not guaranteed to be stack-allocated and are definitely not primitive types.

                                                                                                              1. 2

                                                                                                                Because they are used more than anything else :)

                                                                                                                (and really it should be s32 to match u32)

                                                                                                                1. 1

                                                                                                                  Not a good reason. Code is read way more often than it is written.

                                                                                                                  1. 4

                                                                                                                    I don’t see how i32 is less readable. It makes the code overall more readable by keeping lines shorter, and it looks better.

                                                                                                          1. 9

                                                                                                            This comes across to me as Stockholm syndrome. I can’t agree with this premise after seeing how Zig implements color-less async/await:

                                                                                                            https://kristoff.it/blog/zig-colorblind-async-await/

                                                                                                            I highly recommend watching Andrew Kelley’s video (linked in that article) on this topic where he does a survey of function-colored languages and contrasts how Zig handles async/await:

                                                                                                            https://youtu.be/zeLToGnjIUM

                                                                                                            1. 5

                                                                                                              Aside, but I wish more Zig content was in blog posts instead of videos. Youtube is great for reaching certain demographics, but text is just way more efficient for me.

                                                                                                              1. 4

                                                                                                                Imo watching the creator of the language give a presentation provides something you can’t get from text alone. Agree in general though… I’ve come across a few youtube videos that just paste content from Stack Overflow and it’s infuriating :)

                                                                                                                1. 1

                                                                                                                  I’d rather watch a YouTube video in a Stack Overflow answer than read a Stack Overflow answer in a YouTube video :D

                                                                                                              2. 4

                                                                                                                This comes across to me as Stockholm syndrome.

                                                                                                                Stockholm syndrome is such a bombastic phrase for what often boils down to preference. Zig offers optionally explicit effects, but I can see this being unnecessary complexity if you’re writing code that is mostly async with a few portions of synchronous code. I think it boils down to what you find ergonomic for the program you’re writing.

                                                                                                                1. 3

                                                                                                                  I think it boils down to what you find ergonomic for the program you’re writing.

                                                                                                                  I don’t understand your point at all. If given the option of not bifurcating your codebase into regular and async functions, why would you choose to? This is a language design issue, not a program design issue.
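
                                                                                                                  For anyone outside the debate, the bifurcation looks roughly like this in Rust (the file name is hypothetical, and the async body blocks only to keep the sketch dependency-free):

                                                                                                                      use std::io;

                                                                                                                      // Sync "color": callable from anywhere, blocks the thread.
                                                                                                                      fn fetch_sync() -> io::Result<String> {
                                                                                                                          std::fs::read_to_string("data.txt")
                                                                                                                      }

                                                                                                                      // Async "color": the same logic, but callers must await it from
                                                                                                                      // other async fns or hand it to an executor, so the split
                                                                                                                      // propagates through the whole call graph.
                                                                                                                      async fn fetch_async() -> io::Result<String> {
                                                                                                                          // A real program would use an async I/O crate here.
                                                                                                                          std::fs::read_to_string("data.txt")
                                                                                                                      }

                                                                                                                      fn main() -> io::Result<()> {
                                                                                                                          let _contents = fetch_sync();  // runs to completion here
                                                                                                                          let _future = fetch_async();   // just a Future; nothing runs
                                                                                                                                                         // until an executor polls it
                                                                                                                          Ok(())
                                                                                                                      }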

                                                                                                                  1. 1

                                                                                                                    If given the option of not bifurcating your codebase into regular and async functions, why would you choose to?

                                                                                                                    If you are writing a mostly async program, you still need to go through the overhead of calling async and then awaiting it. Sure, you can compile the underlying asynchronous logic away if you later want to use the codebase in a synchronous context, but what if you know your code will never run in a synchronous context? Why bother ensuring that the abstraction for swapping synchronous and asynchronous code keeps working, rather than just going full steam ahead with asynchronous code?

                                                                                                                    1. 3

                                                                                                                      But you might not know. Suppose you have a long-running function that you could either run synchronously on threads or run asynchronously with yields on work-stealing threads. The first will be faster; the second will give you more even p90 and p99 latencies if your requests come in unevenly. To find out, you must simulate your load and apply it to both configurations, so you’ll have to write your code twice. Versus Zig, where you write your code once.

                                                                                                                      https://www.youtube.com/watch?v=kpRK9BC0-I8&t=233s

                                                                                                                      1. 1

                                                                                                                        Sure and in situations where you aren’t sure, I agree, Zig’s compile-time ability to turn async on and off can be a help. But this just goes back to what I said upthread: “I think it boils down to what you find ergonomic for the program you’re writing”

                                                                                                                        1. 3

                                                                                                                          Well, if you are a library writer then you are always going to be unsure. So surely you must concede, then, that Rust is unfriendly to library writers.

                                                                                                                          1. 1

                                                                                                                            So surely you must concede, then, that rust is unfriendly to library writers.

                                                                                                                            I’m not sure what Rust has to do with this. Asynchronous concurrency certainly isn’t specific to Rust.

                                                                                                                            Well if you are a library writer then you are going to always be unsure.

                                                                                                                            This depends on the library you’re writing. The scenario I’m envisioning is quite simple: I’m writing a networked service that I will be deploying onto a set of known hardware. In more generic libraries, yeah I can see Zig’s compile-time async switching being a feature. Thinking a bit more about this, another situation I can see benefiting from compile-time async switching is testing, where you’d like to shut off asynchronous execution for deterministic test behavior. Regardless, I maintain that “I think it boils down to what you find ergonomic for the program you’re writing”.

                                                                                                                2. 3

                                                                                                                  But Zig does have a difference between these functions. There are still two types of functions, and there are limitations on which type can be called when (e.g. main can’t be async for obvious reasons, and you have nosuspend contexts).

                                                                                                                  Zig hasn’t removed the difference, because that’s logically impossible. It only obfuscated the difference syntactically.

                                                                                                                  To me this is a parlor trick of “look, no async!”, but in fact there still is async, and you’re just programming blind. Now you need to be extra paranoid about holding locks across surprise suspension points, about pointers to parent stack frames which may not be there anymore after the next function call, and about calling non-Zig blocking functions which can block an async runtime you didn’t even ask for. Or you mark function calls nosuspend and hey - you have colored them!
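
                                                                                                                  The lock hazard is the same one Rust programmers deal with, except that in Rust the suspension point is at least spelled out; a hedged analogue:

                                                                                                                      use std::sync::Mutex;

                                                                                                                      static COUNTER: Mutex<u64> = Mutex::new(0);

                                                                                                                      async fn yield_point() {} // stand-in for any real suspension

                                                                                                                      // Holding a thread-level lock across a suspension point invites
                                                                                                                      // deadlock: another task scheduled onto this thread can block on
                                                                                                                      // the same lock. In Rust the `.await` marks the hazard; the
                                                                                                                      // complaint above is that Zig makes such points invisible.
                                                                                                                      async fn tick() {
                                                                                                                          let guard = COUNTER.lock().unwrap();
                                                                                                                          yield_point().await; // suspension while `guard` is held
                                                                                                                          drop(guard);
                                                                                                                      }

                                                                                                                      fn main() {
                                                                                                                          let _ = tick(); // constructing the future suffices for the sketch
                                                                                                                      }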

                                                                                                                  1. 3

                                                                                                                    It’s clear you haven’t actually used this in practice, but let me assure you I’ve run into none of these concerns you bring up; it does “just work”, and it works as you would expect. It’s a little bit of a challenge figuring out how to wrangle such a low level async (you have to think about what’s happening on the hardware), and yes the first time you do it you will probably mess up.

                                                                                                                    1. 2

                                                                                                                      To me this is a parlor trick of “look no async!”

                                                                                                                      And yet I have a Redis client, implemented in a single codebase, that can be used by both async and blocking applications. Zig async is not without issues, but the “not having to duplicate effort” tradeoff is a pretty good one and your dismissal is missing its most relevant strength.

                                                                                                                      https://github.com/kristoff-it/zig-okredis

                                                                                                                  1. 6

                                                                                                                    Total outsider here, but my understanding is that Rust newcomers struggle with satisfying the compiler. That seems necessary because of the safety you get, so OK, and the error messages have a great reputation. I would want to design in suggested fixes for each error that would compile, and a way to apply your chosen fix back to the source code. If that’s a tractable problem, I think it could help cut trial and error down to one step and give you meaningful examples to learn from.

                                                                                                                    Maybe add a rusty paperclip mascot…

                                                                                                                    1. 9

                                                                                                                      Actually, a lot of the error messages do offer suggestions for fixes and they often (not always) do “just work”. It’s really about as pleasant as I ever would’ve hoped for from a low-level systems language.
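
                                                                                                                      For instance (the exact diagnostic wording may vary by compiler version):

                                                                                                                          fn main() {
                                                                                                                              let x = 5;
                                                                                                                              // Uncommenting the next line makes rustc emit something like:
                                                                                                                              //   error[E0384]: cannot assign twice to immutable variable `x`
                                                                                                                              //   help: consider making this binding mutable: `mut x`
                                                                                                                              // and tooling can apply that `help` mechanically.
                                                                                                                              // x = 6;
                                                                                                                              println!("{x}");
                                                                                                                          }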

                                                                                                                      1. 3

                                                                                                                        That’s great! Is it exposed well enough to, say, click a button to apply the suggestion in an editor?

                                                                                                                        1. 4

                                                                                                                          In some cases, yes. See https://rust-analyzer.github.io/

                                                                                                                          1. 1

                                                                                                                            In a lot of cases, actually. It becomes too easy sometimes, because I don’t bother trying to figure out why it works.

                                                                                                                          2. 1

                                                                                                                            Yeah, it seems to be. I often use Emacs with lsp-mode and “rust-analyzer” as the LSP server and IIRC, I can hit the “fix it” key combo on at least some errors and warnings. I’m sure that’s less true the more egregious/ambiguous the compile error is.

                                                                                                                        2. 3

                                                                                                                          rusty paperclip mascot…

                                                                                                                          There is this, but it doesn’t seem to have a logo; someone should make one!

                                                                                                                        1. 10

                                                                                                                          One of the common complaints about Lisp is that there are no libraries in the ecosystem. As you see, five libraries are used just in this example for such things as encoding, compression, getting Unix time, and socket connections.

                                                                                                                          Wait, are they really making an argument of “we used a library for getting the current time, and also for sockets” as if that’s a good thing?

                                                                                                                          1. 16

                                                                                                                            Lisp is older than network sockets. Maybe it intends to outlast them? ;)

                                                                                                                            More seriously, Lisp is known for high-level abstraction and is perhaps even more general than what we usually call a general purpose language. I could see any concrete domain of data sources and effects as an optional addition.

                                                                                                                            In the real world, physics constants are in the standard library. In mathematics, they’re a third party package.

                                                                                                                            1. 12

                                                                                                                              Lisp is older than network sockets.

                                                                                                                              Older than time, too.

                                                                                                                              1. 1

                                                                                                                                Common Lisp is not older than network sockets, so the point is moot I think.

                                                                                                                                1. 1

                                                                                                                                  I don’t think so. It seems to me that it was far from obvious in 1994 that Berkeley sockets would win to such an extent and not be replaced by some superior abstraction. Not to mention that the standard had been in the works for a decade at that point.

                                                                                                                              2. 5

                                                                                                                                Because when the next big thing comes out it’ll be implemented as just another library, and won’t result in ecosystem upheaval. I’m looking at you, Python, Perl, and Ruby.

                                                                                                                                1. 4

                                                                                                                                  Why should those things be in the stdlib?

                                                                                                                                  1. 4

                                                                                                                                    I think that there are reasons to not have a high-level library for manipulating time (since semantics of time are Complicated, and moving it out of stdlib and into a library means you can iterate faster). But I think sockets should be in the stdlib so all your code can have a common vocabulary.

                                                                                                                                    1. 5

                                                                                                                                      reasons to not have a high-level library for manipulating time

                                                                                                                                      I actually agree with this; it’s extraordinarily difficult to do this correctly. You only have to look to Java for an example, where you have the built-in Date class (absolute pants-on-head disaster), the built-in Calendar which was meant to replace it but was still very bad, then the 3rd-party Joda library which was quite good but not perfect, followed by the built-in java.time package (Instant and friends) in Java 8, which was designed by the author of Joda and fixed the final few quirks.

                                                                                                                                      However, “a function to get the number of seconds elapsed since epoch” is not at all high-level and does not require decades of iteration to get right.

                                                                                                                                      1. 7

                                                                                                                                        Common Lisp has (some) date and time support in the standard library. It just doesn’t use Unix time, so if you need to interact with things that use the Unix convention, you either need to do the conversion back and forth, or just use a library which implements the Unix convention. Unix date and time format is not at all universal, and it had its own share of problems back when the last version of the Common Lisp standard was published (1994).

                                                                                                                                        It’s sort of the same thing with sockets. Just like, say, C or C++, there’s no support for Berkeley sockets in the standard library. There is some history to how and why the scope of the Common Lisp standard is the way that it is (it’s worth noting that, like C or C++ and unlike Python or Go, the Common Lisp standard was really meant to support independent implementation by vendors, rather than to formalize a reference implementation) but, besides the fact that sockets were arguably out of scope, it’s only one of the many networking abstractions that platforms on which Common Lisp runs support(ed).

                                                                                                                                        We could argue that in 2021 it’s probably safe to say that BSD sockets and Unix timestamps have won and they might as well get imported into the standard library. But whether that’s a good idea or not, the sockets and Unix time libraries that already exist are really good enough even without the “standard library” seal of approval – which, considering that the last version of the standard is basically older than the Spice Girls, doesn’t mean much anyway. Plus, who’s going to publish another version of the Common Lisp standard?

                                                                                                                                        To defend the author’s wording: their remark is worth putting into its own context – Common Lisp had a pretty difficult transition from large commercial packages to free, open source implementations like SBCL. Large Lisp vendors gave you a full on CL environment that was sort of on-par with a hosted version of a Lisp machine’s environment. So you got not just the interpreter and a fancy IDE and whatever, you also got a GUI toolkit and various glue layer libraries (like, say, socket libraries :-P). FOSS versions didn’t come with all these goodies and it took a while for FOSS alternatives to come up. But that was like 20+ years ago.

                                                                                                                                        1. 2

                                                                                                                                          However, “a function to get the number of seconds elapsed since epoch” is not at all high-level and does not require decades of iteration to get right.

                                                                                                                                          GET-UNIVERSAL-TIME is in the standard. It returns a universal time, which is the number of seconds since midnight, 1 January 1900.
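
                                                                                                                                          The two epochs differ by a fixed offset, so conversion is one addition; a small Rust sketch (the constant is the well-known 1900-to-1970 offset, also used by NTP):

                                                                                                                                              use std::time::{SystemTime, UNIX_EPOCH};

                                                                                                                                              // Seconds between 1900-01-01 (Lisp universal time epoch) and
                                                                                                                                              // 1970-01-01 (Unix epoch): 70 years including 17 leap days.
                                                                                                                                              const UNIX_TO_UNIVERSAL: u64 = 2_208_988_800;

                                                                                                                                              fn main() {
                                                                                                                                                  let unix = SystemTime::now()
                                                                                                                                                      .duration_since(UNIX_EPOCH)
                                                                                                                                                      .expect("system clock set before 1970")
                                                                                                                                                      .as_secs();
                                                                                                                                                  println!("unix: {unix}, universal: {}", unix + UNIX_TO_UNIVERSAL);
                                                                                                                                              }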

                                                                                                                                          1. 2

                                                                                                                                            Any language could ignore an existing standard and introduce its own version with its own flaws and quirks, but only Common Lispers would go so far as to call the result “universal”.

                                                                                                                                          2. 1

                                                                                                                                            However, “a function to get the number of seconds elapsed since epoch” is not at all high-level and does not require decades of iteration to get right.

                                                                                                                                            Actually, it doesn’t support leap seconds, so across a leap second the value repeats.

                                                                                                                                          3. 1

                                                                                                                                            Yeah, but getting the current Unix time is not Complicated; it’s just a call to the OS that returns a number.

                                                                                                                                            1. 6

                                                                                                                                              What if you’re not running on Unix? Or indeed, on a system that has no concept of an epoch? Note that the CL standard has its own epoch, unrelated (AFAIK) to any OS epoch.

                                                                                                                                              Bear in mind that Common Lisp as a standard, and a language, is designed to be portable by better standards than “any flavour of Unix” or “every version of Windows since XP” ;-)

                                                                                                                                              1. 1

                                                                                                                                                Sure, but it’s possible they were using that library elsewhere for good reasons.

                                                                                                                                            2. 3

                                                                                                                                              In general, I really appreciate having a single known-good library promoted to the stdlib (the way Go does). Of course, there’s the danger that you standardise something broken (I am also a Ruby dev, and quite a bit of the Ruby stdlib was full of footguns until more recent versions).

                                                                                                                                              1. 1

                                                                                                                                                Effectively that’s what happened, though. The libraries for threading, sockets, etc. converged to de facto standards.