1. 3

    Thanks for sharing this! I really like the idea of reference counting with panic + single ownership, but I haven’t seen that in the wild yet.

    1. 29

      Well written; these were exactly my thoughts when I read this. We don’t need faster programmers. We need more thorough programmers.

      Software could be so much better (and faster) if the market valued quality software more highly than “more features”.

      1. 9

        We don’t need faster programmers. We need more thorough programmers.

        That’s just a “kids these days…” complaint. Programmers have always been fast and sloppy, and bugs get ironed out over time. We don’t need more thorough programmers, just as we don’t need sturdier furniture. Having IKEA furniture is amazing.

        1. 12

          Source code is a blueprint. IKEA spends a lot of time getting their blueprints right. Imagine if every IKEA furniture set had several blueprint bugs in it that you had to work around.

          1. 5

            We’re already close though. We have mature operating systems, language runtimes, and frameworks. Going forward I see the same thing happening to programming that happens to carpentry or cars now. A small set of engineers develop a design (blueprint) and come up with lists of materials. From there, technicians guide the creation of the actual design. Repairs are performed by contractors or other field workers. Likewise, a select few will work on the design for frameworks, operating systems, security, IPC, language runtimes, important libraries, and other core aspects of software. From there we’ll have implementors gluing libraries together for common tasks. Then we’ll have sysadmins or field programmers that actually take these solutions and customize/maintain them for use.

            1. 7

              I think we’re already completely there in some cases. You don’t need to hire any technical people at all if you want to set up a fully functioning online store for your small business. Back in the day, you would have needed a dev team and your own sysadmins, no other options.

              1. 1

                I see the same thing happening to programming that happens to carpentry or cars now. […] From there we’ll have implementors gluing libraries together for common tasks.

                Wasn’t this the spiel from the 4GL advocates in the 80s?

                1. 2

                  Wasn’t this the spiel from the 4GL advocates in the 80s?

                  No, it was the spiel of OOP/OOAD advocates in the 80s. Think “software IC”.

            2. 1

              Maybe, maybe not. I just figured that if I work more thoroughly, I reach my goals quicker, as I have less work to do and rewrite my code less often. Skipping error handling might seem appealing at first, as I reach my goal earlier, but the price is that either I or someone else will have to fix it sooner or later.

              Also, mistakes, or just poor performance, in software nowadays have a huge impact because software is so widespread.

              One example I like to give:

              The Wikimedia Foundation got 21,035,450,914 page views last month [0]. So if we optimize that web server by a single instruction per page view, assuming the CPU runs at 4 GHz with perfectly optimized code at 1.2 instructions per cycle, we can shave off 4.382 seconds of CPU time per month. Assuming Wikipedia runs average servers [1], this means we shave off 1.034 watt hours of energy per month. At an energy price of 13.24 euro cents per kWh [2], a single instruction per page view costs us roughly 0.013 euro cents per month.

              Now imagine you can make the software run 1% faster. At roughly 4.8 billion instructions per page view, that is 48,000,000 instructions saved per view, which suddenly comes to about €6,240 per month in savings. For a 1% overall speedup!
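
              If you want to check the arithmetic, here is the whole calculation as a small program (the ~850 W server draw is my assumption, back-derived from the watt-hour figure above):

                  fn main() {
                      // Inputs from [0][1][2]; 850 W is an assumed average server draw.
                      let page_views: f64 = 21_035_450_914.0;
                      let clock_hz = 4.0e9; // 4 GHz
                      let ipc = 1.2; // instructions per cycle
                      let server_watts = 850.0;
                      let ct_per_kwh = 13.24;

                      // Cost of one instruction per page view, per month:
                      let cpu_seconds = page_views / ipc / clock_hz; // ~4.382 s
                      let watt_hours = cpu_seconds * server_watts / 3600.0; // ~1.034 Wh
                      let cents = watt_hours / 1000.0 * ct_per_kwh; // ~0.0137 ct

                      // A 1% speedup at ~4.8 billion instructions per page view;
                      // prints ~6576 EUR (the €6,240 above uses the rounded 0.013 ct).
                      let saved = 48_000_000.0;
                      println!("~{:.0} EUR per month", saved * cents / 100.0);
                  }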

              High-quality software is not only pleasant for the user. It also saves the planet by wasting less energy and goes easy on your wallet.

              So maybe

              Programmers have always been fast and sloppy, and bugs get ironed out over time. We don’t need more thorough programmers,

              this should change. For the greater good of everyone.

              [0] https://stats.wikimedia.org/#/all-projects/reading/total-page-views/normal|table|2-year|~total|monthly
              [1] https://www.zdnet.com/article/toolkit-calculate-datacenter-server-power-usage/
              [2] https://www.statista.com/statistics/1046605/industry-electricity-prices-european-union-country/

            3. 9

              Software could be so much better (and faster) if the market valued quality software more highly than “more features”

              The problem is there just aren’t enough people for that. That’s basically been the problem for the last 30+ years. It’s actually better than it used to be; there was a time not so long ago when everyone who could sum up numbers in Excel was a programmer and anyone who knew how to defrag their C:\ drive was a sysadmin.

              Yesterday I wanted to generate a random string in JavaScript; I knew Math.random() isn’t truly random and wanted to know if there’s something better out there. The Stack Overflow question is dominated by Math.random() in more variations than you’d think possible (not all equally good, I might add). This makes sense, because for a long time this was the only way to get any kind of randomness in client-side JS. Some answers also mention the newer window.crypto API, which is what I ended up using.

              I can make that judgment call, but I’m not an ML algorithm. And while on Stack Overflow I can add context, caveats, involved trade-offs, offer different solutions, etc., with an “autocomplete code snippet” that’s a lot more limited. And especially as a less experienced programmer, you wouldn’t necessarily know a good snippet from a bad one: “it seems to work”, and without the context a Stack Overflow answer has, you just don’t know. Stack Overflow (and related sites) are more than just “gimme teh codez”; they’re also teaching moments.

              Ideally, there would be some senior programmer to correct them. In reality, due to the limited number of people, this often doesn’t happen.

              We’ll have to wait and see how well it turns out in practice, but I’m worried about an even greater proliferation of programmers who can’t really program but instead just manage to cobble something together by trial-and-error. Guess we’ll have to suffer through even more ridiculous interviews to separate the wheat from the chaff in the future…

              1. 2

                We’ll have to wait and see how well it turns out in practice, but I’m worried about an even greater proliferation of programmers who can’t really program

                I don’t see this as a problem. More mediocre programmers being available doesn’t lower the bar at places that need skilled programmers. Lobste.rs commenters often talk of the death of the open web, for example. If this makes programming more accessible, isn’t that better for the open web?

              2. 6

                We don’t need faster programmers. We need more thorough programmers.

                Maybe we need more than programmers, and should aim to deserve the title of software engineers. Writing code should be the equivalent of nailing wood: whether you use a hammer or an AI-assisted nailgun shouldn’t matter much if you are building a structure that can’t hold the weight it is designed for, or can’t deal with a single plank that is going to break or rot.

                1. 6

                  We don’t need faster programmers. We need more thorough programmers.

                  Not for everything, but given we spend so much time debugging and fixing things, thoroughness is usually faster.

                  1. 6

                    Slow is smooth and smooth is fast.

                1. 3

                  Windows’ short-term stability is okay, but long-term stability is not. I manage a “cluster” of 12 Windows 10 machines used for production, and my experience was that I had to set the machines up by hand after every feature update, as Windows lost or reset crucial settings or uninstalled unsigned drivers. We also have other computers in the company that we are afraid of updating, as they have to run for long stretches; we tried updating once and Windows Update just rebooted the machine in the middle of a process.

                  I cannot speak for personal computing, but Windows 10 isn’t suited for production use.

                  1. 26

                    (Disclaimer: I’m a Microsoft employee.)

                    The way to think about this is there are multiple reasons for a BSOD.

                    1. Windows has a bug.
                    2. A driver has a bug. Note that drivers run in the same protection domain as the kernel, so they can modify almost any system memory.
                    3. Hardware has a bug in that it corrupts memory.

                    The reason that people disagree over stability is that (2) & (3) are much more likely than (1), so crashes can plague particular configurations while leaving others completely unaffected. It’s very hard for mortals to pinpoint a driver or hardware bug, so all users see is the result, not the cause.

                    The part that always frustrates me a little is people who overclock devices, causing hardware errors, and blame the result on Windows. It’s not that Windows is worse than any other kernel, it’s that the people who overclock hardware all seem to run Windows.

                    1. 12

                      My impression is that the Windows kernel is really top notch these days (as opposed, to say, the drivers, display manager, etc, etc).

                      1. 4

                        I agree. The one thing I think Windows must improve is its modularity: letting the user choose which applications and services are installed.

                        There are too many services and features I’d like to be able to remove (or better, choose not to install). There was talk about a Windows Mini Kernel; I want that. I want efficiency.

                        1. 4

                          Have you tried Windows Embedded? Server Core? WinPE?

                          The guts of Windows are fairly modular and composable. The issue is that each of those services provides something, so removing them will affect applications or scenarios in ways that may not be obvious. The monolithic nature of Windows is really a result of trying to ensure that programs work, and work the same way, on each machine.

                          Personally I do a lot of command line development, so I thought Server Core would be an interesting option. Here’s what happened:

                          1. The Visual Studio installer is an Electron application, so it failed because a DirectX DLL wasn’t present;
                          2. I put the relevant DLL there, and the installer launched with a lot of rendering glitches since it’s a pure-GDI non-composited environment, but I got things to install;
                          3. Programs using common controls don’t render correctly, which isn’t a big deal for servers, but makes certain things, like GFlags, nigh incomprehensible;
                          4. …but the end result was that the programs I was writing behaved differently where appcompat shims and services aren’t running. In a lot of ways I don’t miss them, but the consequence is I can’t run my program in this environment and assume it works the same way in a real environment, so it wasn’t useful for development work.
                          1. 2

                            It sounds like a mess. Maybe I should take back my words :-).

                            One of the issues of Windows is the baggage it carries. It is time to put all the prehistoric compatibility under a VM and be done with it.

                            Moreover, I get what you say, and still I’d be happy to have a choice in what to install. Windows is bloated. 30 GB for an OS is too much. The RAM consumption is too much. Performance is getting better, and hopefully one day we’ll have a file system as fast as Linux’s and the margin will be negligible.

                          2. 3

                            I’d love to pay for a gaming build of Windows that only includes necessary components and presumes that I’m competent enough to steward maintenance of my own machine.

                            1. 2

                              If you want a gaming build of Windows, you can buy that. It even comes bundled with a computer optimised for running it.

                        2. 5

                          I worked as a repair tech in a computer shop for about three years; this was over ten years ago so most of my experience is with XP, Vista, and 7. In this time I saw a lot of BSODs.

                          In my experience the overwhelming majority of BSODs are caused by faulty hardware or driver bugs. For example, the Dutch version of AT&T (KPN) handed out these Realtek wireless dongles for a while, but after some update in XP they caused frequent BSODs. I’m going to guess this was Realtek’s fault and not Microsoft’s, and it just happened to work prior to this update (they never released an update to fix this; they also never made Vista/7 drivers). Plenty of customers were quick to blame Microsoft for this though; in some cases, even after I explained all of this to them, they still blamed Microsoft.

                          By far the most common problem though was just faulty memory. By my rough estimate it caused at least half of all problems, if not more, during this time. The rest were a combination of other hardware faults (mainboard, hard drive, etc.) or bad (often third-party) drivers.

                          No doubt BSODs happen due to Windows bugs, but it’s a lot less often than some people think. The biggest issue was actually the lack of tooling. Windows leaves small “minidump” core dumps, but actually reading them and getting an error out isn’t easy. I ended up writing a Python script to read them all and list all the reasons in a Tkinter window, and this usually gave you a pretty good idea what the problem was.

                          1. 3

                            Even though I despise Windows nowadays, I agree with you: BSOD stability isn’t a problem anymore. There are a lot of problems, but kernel stability ain’t one.

                            1. 2

                              I think it is fair that Windows retains some criticism. A microkernel would not suffer a systemic failure from a buggy audio driver, for instance. Linux is another insane system, where driver code for dozens of architectures is effectively maintained on a budget, but I rarely see any crashes on the commodity development box that corporate procured. My Dell laptops running Win7 and Win10 have all crashed frequently.

                              1. 8

                                I think some of the stability that you see on Linux is that the drivers are upstreamed, and so face the same discipline as the rest of the kernel, whereas Windows drivers are often vendor-supplied, and potentially very dodgy. You can easily crash Linux with out-of-kernel-tree drivers, but there are only a few of those that are in common use.

                                1. 1

                                  Much of the audio stack in Windows runs in userspace. You can often fix audio driver crashes by restarting the relevant services. The troubleshooting wizard does this for you.

                                  Linux and Windows are both moving to more device drivers in userspace. CUSE on Linux, for example, and Windows also has a framework for userspace USB drivers. Most GPU drivers are almost entirely userspace, for performance reasons: the devices support SR-IOV or similar and allow the kernel to just map a virtual context directly into the userspace address space, so you don’t need a system call to communicate with the device.

                                2. 1

                                  On the one hand it’s a bit unfair to blame current Windows for earlier transgressions, but it is what it is.

                                  Regarding your point 3): in the 98–XP days I’ve had it SO often that a machine would crash on Windows and run for a week on Linux, so I don’t really buy that point. Hardware defects in my experience are quite reproducible: “every time I start a game -> graphics card”, “every time it runs for longer than a day -> RAM”, etc. Nothing of the “it crashes randomly every odd day” sort has ever been a hardware defect for me (except maybe RAM, and that is sooo rare).

                                  I don’t think I have claimed Windows is unstable since I’ve been using 7 or 10 (and 2000 and XP were OK-ish). But 98 (non-SE), Me, Vista, 95a, and 95b were hot garbage.

                                1. 3

                                  The main news I see is progress on the self-hosted compiler, which they consider a necessary step towards a package manager.

                                  1. 5

                                    The self-hosted compiler is considered necessary because a vibrant package ecosystem suddenly increases the amount of source being compiled by a huge amount. This means that compile times would quickly skyrocket to worse-than-Rust levels, as LLVM takes a huge amount of time already and the stage1 (C++) compiler is not implemented efficiently. The self-hosted compiler should beat the current implementation by at least one order of magnitude in compile speed, if not more.

                                    1. 1

                                      Why worse-than-Rust levels? I understand that the Zig compiler is pretty speedy.

                                  1. 2

                                    On Linux you can get around this by using middle-click copy/paste. Still doesn’t make it a good idea…

                                    1. 1

                                      Well, I use the primary selection buffer as my main clipboard, utilizing the clipboard only for long-term storage. And to my knowledge, browsers cannot have an effect on that buffer (it doesn’t work on the page above).

                                      I don’t think there’s something wrong with pasting code from a website, as you always have to read the code carefully before copying/typing/… it into a shell.

                                    1. 5

                                      Both languages require explicit annotations for nulls (Option in rust, ?T in zig) and require code to either handle the null case or safely crash on null (x.unwrap() in rust, x.? in zig).

                                      Describing Option<T> as “explicit annotation for nulls” has always struck me as missing the point a little (this is not the only essay to use that kind of verbiage to talk about what an option type is).

                                      At one level of abstraction, Rust just doesn’t have nulls: an i32 is always a signed 32-bit integer, a String is always an allocated UTF-8 string, with no possibility that when you start calling methods on a variable of that type, it will turn out that there was some special null value in that type that makes your calls crash the program or cause undefined behavior. This is a good improvement over the many languages that make null implicitly a member of every type, which the programmer then needs to check for.

                                      At a different level of abstraction, null semantics are still something a programmer frequently wants to represent using a language; that is, the idea of a variable either being nothing or else being some value of a specific type. The Rust standard library provides the Option<T> type to represent these semantics, and has some special syntactic support for dealing with it, like the ? operator. But at the end of the day, it’s just an enum type that the standard library defines in the same way as any other Rust type, enum Option<T> { Some(T), None }. If you are writing a program that needs two different notions of nullity for some reason, you can define your own custom type enum MyEnum<T> { None1, None2, Some(T) } using the same common syntax for defining new types.
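
                                      For example, a minimal sketch of such a custom type (the names here are made up for illustration):

                                          // A hypothetical type with two distinct notions of "nothing",
                                          // built with the same enum syntax as the standard library's Option<T>.
                                          enum Cached<T> {
                                              Missing,    // never fetched
                                              Expired,    // fetched once, but stale
                                              Present(T), // a live value
                                          }

                                          fn describe(entry: Cached<i32>) -> String {
                                              match entry {
                                                  Cached::Missing => "no value yet".to_string(),
                                                  Cached::Expired => "value needs refreshing".to_string(),
                                                  Cached::Present(v) => format!("value is {}", v),
                                              }
                                          }

                                          fn main() {
                                              println!("{}", describe(Cached::Missing));
                                              println!("{}", describe(Cached::Present(42)));
                                          }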

                                      1. 5

                                        Since you mention it, it’s interesting that ?T could be a tagged union in Zig:

                                        fn Option(comptime T: type) type {
                                            return union(enum) {
                                                Some: T,
                                                None,
                                            };
                                        }
                                        

                                        Instead it’s … weird. null is a value with type @TypeOf(null). There are implicit casts from T to ?T and from null to ?T, which are the only ways to construct a ?T. There is a special case in == for ?T == null.

                                        I had a quick dig through the issues and I can’t find any discussion about this.

                                        And, out of curiosity:

                                            const a: ?usize = null;
                                            const b: ??usize = a;
                                            std.debug.print("{} {}", .{b == null, b.? == null});
                                        

                                        prints “false true”.

                                        1. 4

                                          I read that like this in Rust:

                                          fn main() {
                                              let a: Option<u8>  = None;
                                              let b: Option<Option<u8>> = Some(a);
                                              println!("{:?}, {:?}", b.is_none(), b.unwrap().is_none());
                                          }
                                          

                                          Which has the same output. This makes sense to me, as b and a have different types. Does Zig normally pass through the nulls? The great thing to me in Rust is that although a and b are different types, they take up the same space in memory (Zig may do the same, I’ve never tested).
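
                                          That part is easy to check; on current rustc the outer None is encoded in the inner discriminant’s spare values (as far as I know this is not a documented layout guarantee, just what the compiler does today):

                                              use std::mem::size_of;

                                              fn main() {
                                                  // u8 has no spare bit patterns, so Option<u8> needs a tag byte...
                                                  assert_eq!(size_of::<Option<u8>>(), 2);
                                                  // ...but that tag byte has unused values, so the outer Option
                                                  // can hide its None in them instead of adding another byte.
                                                  assert_eq!(size_of::<Option<Option<u8>>>(), 2);
                                                  println!("both take {} bytes", size_of::<Option<u8>>());
                                              }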

                                          1. 1

                                            Does Zig normally pass through the nulls?

                                            No, your translation is correct and this is the behavior I would want. But this is something I tested early on because the way ?T is constructed by casting made me suspicious that it wouldn’t work.

                                            Zig may do the same, I’ve never tested

                                            Oh, me neither…

                                               std.debug.print("{}", .{.{
                                                    @sizeOf(usize),
                                                    @sizeOf(?usize),
                                                    @sizeOf(??usize),
                                                    @sizeOf(*usize),
                                                    @sizeOf(?*usize),
                                                    @sizeOf(??*usize),
                                                }});
                                            
                                            [nix-shell:~]$ zig run test.zig
                                            struct:79:30{ .0 = 8, .1 = 16, .2 = 24, .3 = 8, .4 = 8, .5 = 16 }
                                            [nix-shell:~]$ zig run test.zig -O ReleaseFast
                                            struct:79:30{ .0 = 8, .1 = 16, .2 = 24, .3 = 8, .4 = 8, .5 = 16 }
                                            

                                            Looks like it does collapse ?* but not ??.

                                            1. 2

                                              Looks like it does collapse ?* but not ??.

                                              It’s not possible to collapse ?? as that would lose information. Imagine ?void as a boolean which is either null (“false”) or void (“true”). When you now do ??void, you have the same number of states as ?bool.

                                              ??void still requires log2(3) ≈ 1.6 bits to represent, whereas ?void only needs 1 bit.

                                              Collapsing an optional pointer, though, is possible, as Zig pointers don’t allow 0x00… as a valid address, so it can be used as the sentinel for null in an optional pointer. This allows really good integration with existing C projects, as ?*Foo is roughly equivalent to a C pointer Foo * which can always be NULL; that translates well to Zig’s ?*Foo semantics.

                                              Note that there are pointers that do allow 0x00… as a valid value: *allowzero T. An optional of those doesn’t collapse: @sizeOf(?*allowzero T) != @sizeOf(?*T)
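
                                              Rust makes the same trade-off, with raw pointers playing the role of *allowzero T (a quick check):

                                                  use std::mem::size_of;

                                                  fn main() {
                                                      // &u8 can never be null, so None can use the 0 address:
                                                      assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>());
                                                      // *const u8 may legitimately be null (like *allowzero T),
                                                      // so the optional needs a separate tag:
                                                      assert!(size_of::<Option<*const u8>>() > size_of::<*const u8>());
                                                  }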

                                              1. 1

                                                It’s not possible to collapse ?? as that would lose information.

                                                ??void still requires log2(3) ≈ 1.6 bits to represent, whereas ?void only needs 1 bit.

                                                I don’t think you read the sizes carefully in my previous comment. ??void actually uses 16 bits in practice.

                                                std.debug.print("{}", .{.{@sizeOf(void), @sizeOf(?void), @sizeOf(??void)}});
                                                
                                                struct:4:30{ .0 = 0, .1 = 1, .2 = 2 }
                                                

                                                Whereas if we hand-packed it we can collapse the two tags into one byte (actually 2 bits plus padding):

                                                fn Option(comptime T: type) type {
                                                    // packed union(enum) is not supported directly :'(
                                                    return packed struct {
                                                        tag: packed enum(u1) {
                                                            Some,
                                                            None,
                                                        },
                                                        payload: packed union {
                                                            Some: T,
                                                            None: void,
                                                        },
                                                    };
                                                }
                                                
                                                pub fn main() void {
                                                    std.debug.print("{}", .{.{@sizeOf(void), @sizeOf(Option(void)), @sizeOf(Option(Option(void)))}});
                                                }
                                                
                                                struct:17:30{ .0 = 0, .1 = 1, .2 = 1 }
                                                

                                                The downside is that &x.? would have a non-byte alignment, which I imagine is why this is not the default.

                                                But that’s what we were testing above. Not “can we magically fit two enums in one bit”.

                                                1. 1

                                                  Okay, I misread that then, sorry. Zig would still be able to do some collapsing of multi-optionals, as there is no defined ABI for them. It might be enabled in ReleaseSmall, but not in the other modes. But this is just a vision of the future; it’s not implemented at the moment.

                                              2. 1

                                                That makes sense, and it is the same as in Rust: 0 is a valid bit pattern for usize and thus cannot be used for the null pointer optimization. In Rust you’d have to use Option<&Option<&usize>> to collapse everything, since Option<T> is not known to be non-null but references (&) are. It would be neat if both Rust and Zig were able to say that Option<T> is non-null if T is non-null, so you could get this benefit without the need for references (or other [unstable] methods of marking a type non-null).

                                            2. 1

                                              Could those decisions have something to do with C interop? Not sure how much that would affect it, but my inexperienced assumption is that using actual nulls over a tagged union would help with that.

                                              1. 2

                                                Worth noting here that Rust guarantees that Option<T> is represented without a discriminant (tag) when T is a non-nullable pointer type or otherwise has a “niche” you could encode the discriminant in. This even applies to fat pointers like slices or Vec (which have an internal pointer to the allocation, which can never be null).

                                                Or, more visually:

                                                fn main() {
                                                    use std::ptr::NonNull;
                                                    use std::mem::size_of;
                                                    
                                                    assert_eq!(size_of::<Option<&u32>>()              , size_of::<&u32>());
                                                    assert_eq!(size_of::<Option<NonNull<u32>>>(), size_of::<&u32>());
                                                    assert_eq!(size_of::<Option<&[u8]>>()               , size_of::<&[u8]>());
                                                    assert_eq!(size_of::<Option<Vec<u32>>>()       , size_of::<Vec<u32>>());
                                                }
                                                

                                                (NonNull is the non-nullable raw pointer: https://doc.rust-lang.org/std/ptr/struct.NonNull.html)

                                                For that reason, Option can be used in FFI situations.

                                                This is actually a general compiler feature, those composite types are not special-cased. (Declaration of a type as not being nullable is a nightly feature still, though)

                                                https://doc.rust-lang.org/nomicon/ffi.html#the-nullable-pointer-optimization

                                                1. 1

                                                  That’s possible. There is a separate [*c]T for C pointers, and the casts could do the conversions. But maybe that would be expensive.

                                            1. 3

                                              One point that is conspicuously missing is a comparison of resource management (RAII vs defer). It seems to be an area without a clear answer (see this issue’s history: https://github.com/ziglang/zig/issues/782). Was this a non-question in practice?

                                              1. 4

                                                So far I haven’t had any difficulty using defer, but on the other hand most of the code I’ve written leans heavily on arena allocation and I also haven’t put much effort into testing error paths yet. I don’t expect to have much of an opinion either way until I’ve written a lot more code and properly stress tested some of it.

                                                I suspect that defer will be the easy part, and the hard part will be making sure that every try has matching errdefers. There’s a bit of an asymmetry between how easy it is to propagate errors and how hard it is to handle and test resource cleanup in all those propagation paths.
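
                                                For contrast, the RAII side of the comparison: in Rust the cleanup in all those propagation paths comes for free from Drop, so there is no errdefer to forget. A minimal sketch with a made-up resource type:

                                                    struct Resource(&'static str);

                                                    impl Drop for Resource {
                                                        fn drop(&mut self) {
                                                            // Runs on every exit path, including early returns from `?`.
                                                            println!("cleaning up {}", self.0);
                                                        }
                                                    }

                                                    fn acquire(name: &'static str, fail: bool) -> Result<Resource, String> {
                                                        if fail {
                                                            Err(format!("could not acquire {}", name))
                                                        } else {
                                                            Ok(Resource(name))
                                                        }
                                                    }

                                                    fn do_work() -> Result<(), String> {
                                                        let _a = acquire("a", false)?;
                                                        // When this fails, `_a` is still cleaned up; nothing extra to write.
                                                        let _b = acquire("b", true)?;
                                                        Ok(())
                                                    }

                                                    fn main() {
                                                        println!("{:?}", do_work()); // "cleaning up a", then Err(..)
                                                    }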

                                                1. 2

                                                  For me, it is a non-problem. You usually see when a return value needs a deferred cleanup, and it’s just a matter of typing

                                                  var x = try …(…, allocator, …);
                                                  defer x.deinit();
                                                  

                                                  It’s usually pretty obvious when a clean-up is required, and if not, looking at the return value or doc comments is sufficient.

                                                  1. 4

                                                    Does Zig suffer from the same problems with defer that Go does? E.g., it’s often quite tempting to run a defer inside a loop, but since Go’s defer is scoped to functions and not blocks, it doesn’t do what you might think it will do. The syntax betrays it.

                                                    Answering my own question (nice docs! it only took one click from a Google search result): it looks like Zig does not suffer from this problem and runs defers at the end of the enclosing scope.

                                                    1. 2

                                                      No, Zig has defer for block scopes, not function scopes. When I learnt what Go does, I was stunned at how unintuitive it is.

                                                      1. 1

                                                        Yeah, it’s definitely a bug I see appear now and then. I suspect there’s some design interaction here between defer and unwinding. Go of course does the latter, and AFAIK, uses that to guarantee that defer statements are executed even when a “panic” occurs. I would guess that Zig does not do unwinding like that, but I don’t actually know. Looking at the docs, Zig does have a notion of “panic” but I’m not sure what that actually implies.

                                                        1. 1

                                                          Panic calls a configurable panic handler. The default handler on most platforms prints a stack trace and exits. It can’t be caught at thread boundaries like the Rust panic can, so I guess it makes sense that it doesn’t try to unwind.
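
                                                          A minimal sketch of what “caught at thread boundaries” means on the Rust side:

                                                              use std::thread;

                                                              fn main() {
                                                                  // The child thread panics and unwinds; the parent
                                                                  // observes the panic as an Err from join().
                                                                  let handle = thread::spawn(|| panic!("boom"));
                                                                  match handle.join() {
                                                                      Ok(()) => println!("thread finished"),
                                                                      Err(_) => println!("caught the panic at join()"),
                                                                  }
                                                              }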

                                                          1. 1

                                                            Ah yeah, that might explain why Zig is able to offer scope-based defer, whereas defer in Go is tied to function scope.

                                                          2. 1

                                                            A “panic” in Zig is an unrecoverable error condition. If your program panics, it will not unwind anything; it usually just prints a panic message and exits or kills the process. Unwinding is only done for error bubbling.