1. 4

    Who owns the web?

    The users.
    Hence, if someone finds that a given standard puts users at unreasonable risk, the right thing to do is change it, even if that creates discomfort for developers (with real empathy for them).

    The real question here isn’t whether it is hard for developers or fair to them. The question is how high the risk is of leaving things as they are.

    1. 9

      The users should own the web, but in the current ecosystem (ad supported, walled gardens, etc.), the people paying for a large portion of the web are the browser vendor(s) and the major sites that can influence them (Facebook, Amazon, etc.).

    1. 2

      Very good writing.

      For me, I wish Zig would take a step towards being friendlier to scientific programming:

      1. Built-in types for tensors, matrices, and vectors, with their respective operations.
      2. Multiple dispatch like in Julia.
      1. 2

        Another option, based on Pluto.jl, is Neptune.jl.
        It removes the always-on interactivity, which I find to be a better option for larger documents with more operations.

        For me, Pluto.jl is the interactive next step from Jupyter while Neptune.jl is a real replacement for Jupyter.

        1. 1

          I notice that Neptune.jl has ripped out not just the reactivity, but also the dependency analysis. The sane file format is still a big improvement over .ipynb, but there is another question to which Neptune’s answer is not as good as Jupyter’s:

          How do I notice that I’ve run cells out of order (i.e. my session works now, but will fail if I rerun the notebook)?

          • Pluto: There is no out of order, your cells will always run in dependency order. Duplicate and missing definitions will be highlighted.
          • Jupyter: Your cell run numbers will be out of order (no longer increasing from top to bottom).
          • Neptune: No hints. (The Observable/Pluto/Neptune UI doesn’t have cell run numbers because reactivity ensures consistency, but Neptune ditched the reactivity, so…)
          • RStudio:
            • the session log won’t match the order of the cells
            • the values window at the top right shows the current values of variables, this awareness helps
            • like Pluto, it has a sane file format (though Pluto’s is even better).

          It would have been nice if Neptune had kept the dependency analysis: then it could be like Jupyter, but also highlight the cells that are now troubled because they depend on a cell whose value you just changed. In other words:

          • (because of no reactivity) you could alter your notebook code one cell at a time;
          • (because of dependency analysis) Neptune could show you which cells need to be rerun, and which ones to rerun/edit first.

          Inspiration: Mercurial/evolve makes the commit graph easier to shape by allowing intermediate inconsistent states. That lets you, for example, rebase commits B-D out of the middle of a branch:

          • this creates the new, moved, commits B’-D’
          • B-D still exist, and are marked as obsolete
          • the descendants of B-D are marked as troubled: you have to rebase them onto a non-obsolete commit, or delete them, to restore consistency
          • you can see in the commit graph which troubled commits you still have to fix
          1. 1

            For me, I don’t want any dependency analysis. This is just a script like any other script, with some HTML syntactic sugar to be able to produce plots and other visualizations within the document.

            The code should be written to run serially like a native Julia script.

        1. 26

          (Disclaimer: I’m a Microsoft employee.)

          The way to think about this is there are multiple reasons for a BSOD.

          1. Windows has a bug.
          2. A driver has a bug. Note drivers are in the same protection domain so can modify almost any system memory.
          3. Hardware has a bug in that it corrupts memory.

          The reason that people disagree over stability is that (2) & (3) are much more likely than (1), so crashes can plague particular configurations while leaving others completely unaffected. It’s very hard for mortals to pinpoint a driver or hardware bug, so all users see is the result, not the cause.

          The part that always frustrates me a little is people who overclock devices, causing hardware errors, and blame the result on Windows. It’s not that Windows is worse than any other kernel, it’s that the people who overclock hardware all seem to run Windows.

          1. 12

            My impression is that the Windows kernel is really top notch these days (as opposed to, say, the drivers, display manager, etc.).

            1. 4

              I agree. The one thing I think Windows must improve is its modularity: letting the user choose which applications and services get installed.

              There are too many services and features I’d like to be able to remove (or better, choose not to install). There was talk about a Windows mini kernel; I want that. I want efficiency.

              1. 4

                Have you tried Windows Embedded? Server Core? WinPE?

                The guts of Windows is fairly modular and composable. The issue is that each of those services is providing something, so removing them will affect applications or scenarios in ways that may not be obvious. The monolithic nature of Windows is really a result of trying to ensure that programs work, and work the same way, on each machine.

                Personally I do a lot of command line development, so I thought Server Core would be an interesting option. Here’s what happened:

                1. The Visual Studio installer is an Electron application, so it failed because a DirectX DLL wasn’t present;
                2. I put the relevant DLL there, and the installer launched with a lot of rendering glitches since it’s a pure-GDI non-composited environment, but I got things to install;
                3. Programs using common controls don’t render correctly, which isn’t a big deal for servers, but makes certain tools, like GFlags, nigh incomprehensible;
                4. …but the end result was that the programs I was writing behaved differently where appcompat shims and services aren’t running. In a lot of ways I don’t miss them, but the consequence is I can’t run my program in this environment and assume it works the same way in a real environment, so it wasn’t useful for development work.
                1. 2

                  It sounds like a mess. Maybe I should take back my words :-).

                  One of the issues of Windows is the luggage it carries. It is time to put all the prehistoric compatibility under a VM and be done with it.

                  Moreover, I get what you say, and still I’d be happy to have the choice of what to install. Windows is bloated. 30 GB for an OS is too much. The RAM consumption is too much. Performance is getting better, and hopefully one day we’ll have a file system as fast as Linux’s and the margin will be negligible.

                2. 3

                  I’d love to pay for a gaming build of Windows that only includes necessary components and presumes that I’m competent enough to steward maintenance of my own machine.

                  1. 2

                    If you want a gaming build of Windows, you can buy that. It even comes bundled with a computer optimised for running it.

              2. 5

                I worked as a repair tech in a computer shop for about three years; this was over ten years ago so most of my experience is with XP, Vista, and 7. In this time I saw a lot of BSODs.

                In my experience the overwhelming majority of BSODs are caused by faulty hardware or driver bugs. For example, the Dutch version of AT&T (KPN) handed out these Realtek wireless dongles for a while, but after some update to XP they caused frequent BSODs. I’m going to guess this was Realtek’s fault and not Microsoft’s, and it just happened to work prior to this update (they never released an update to fix this; they also never made Vista/7 drivers). Plenty of customers were quick to blame Microsoft for this though; in some cases, even after I explained all of this to them, they still blamed Microsoft.

                By far the most common problem though was just faulty memory. By my rough estimate it caused at least half of all problems, if not more, during this time. The rest were a combination of other hardware faults (mainboard, hard drive, etc.) or bad (often third-party) drivers.

                No doubt BSODs happen due to Windows bugs, but it’s a lot less often than some people think. The biggest issue was actually the lack of tooling. Windows leaves small “minidump” core dumps, but actually reading them and getting an error isn’t easy. I actually wrote a Python script to read them all and list all reasons in a Tkinter window, and this usually gave you a pretty good idea what the problem was.

                1. 3

                  Even if I despise Windows nowadays, I agree with you: BSOD stability isn’t a problem anymore. There are a lot of problems, but kernel stability ain’t one.

                  1. 2

                    I think it is fair that Windows attracts some criticism. A microkernel would not suffer a systemic failure from a buggy audio driver, for instance. Linux is also another insane system, where driver code for dozens of architectures is effectively maintained on a budget, but I rarely see any crashes on my commodity development box that corporate procured. My Dell laptops running Win7 and Win10 have all crashed frequently.

                    1. 8

                      I think some of the stability that you see on Linux is that the drivers are upstreamed, and so face the same discipline as the rest of the kernel, whereas Windows drivers are often vendor-supplied, and potentially very dodgy. You can easily crash Linux with out-of-kernel-tree drivers, but there are only a few of those that are in common use.

                      1. 1

                        Much of the audio stack in Windows runs in userspace. You can often fix audio driver crashes by restarting the relevant services. The troubleshooting wizard does this for you.

                        Linux and Windows are both moving to more device drivers in userspace. CUSE on Linux, for example, and Windows also has a framework for userspace USB drivers. Most GPU drivers are almost entirely userspace, for performance reasons: the devices support SR-IOV or similar and allow the kernel to just map a virtual context directly into the userspace address space, so you don’t need a system call to communicate with the device.

                      2. 1

                        On the one hand it’s a bit unfair to blame current Windows for earlier transgressions, but it is what it is.

                        Regarding your point 3): in the 98–XP days I’ve had it SO often that a machine would crash on Windows and run for a week on Linux, so I don’t really buy that point. Hardware defects in my experience are quite reproducible: “every time I start a game -> graphics card”, “every time it runs for longer than a day -> RAM”, etc. Nothing that “crashes randomly every odd day” has ever been a hardware defect for me (except maybe RAM, and that is sooo rare).

                        I don’t think I have claimed Windows is unstable since I’ve been using 7 or 10 (and 2000 and XP were okish). But 98 (non-SE), Me, Vista, 95a and 95b were hot garbage.

                      1. 3

                        The most usable feature of xmake is the abstraction layer it has.

                        For me a good build system behaves the same with any compiler (at least for the common compilation flags).
                        So I want to be able to use OpenMP, floating-point precision (fast math), optimization level, etc., without worrying about which compiler is used.

                        I think xmake is the only build system which provides this.

                        1. 1

                          This is a really great project.

                          It is the only build system which truly abstracts the compiler (or tries to do so).

                          1. 3

                            Is there an up-to-date review of this OS? Ars Technica style?

                            1. 2

                              The DistroWatch page on Elementary OS has links to several reviews of 5.0 and 5.1; there may be something for you there. The people who maintain DistroWatch are truly community treasures.

                              1. 1

                                I don’t think so.

                              1. 5

                                https://www.turris.com/en/omnia/overview/

                                Open Hardware, Free Software. The only blob is for 5GHz WiFi.

                                Now I just need some libre power line adapters.

                                1. 1

                                  Are there any performance and stability tests?

                                  1. 1

                                    What’s the OS of the router?

                                    1. 1

                                      OpenWrt-based TurrisOS.

                                      1. 1

                                        Does it have a version with 8 LAN ports? What about 2.5 Gbps Ethernet?

                                  1. 2

                                    This is great!

                                    Any chance of getting the UX of drawing with chalk on a board? Having free drawing + math would make it a real scribbling board.

                                    After that, live joint editing :-).

                                    1. 3

                                      What about BLAS and LAPACK?

                                      1. 1

                                        Judging by the badges, there is/was meant to be a project called “NumGo+” that would be the “NumPy for Go+”, and which might be the home for that kind of stuff. But that repo has an initial commit and nothing else.

                                        There is, however, another project, GoNum, that does have the linear algebra stuff (but isn’t related to Go+).

                                      1. 1

                                        Release Notes with no mention of performance and memory footprint? Aren’t those important to users and developers?

                                        1. 4

                                          One of my most recent side-projects was a networked dice roller for my remote table-top sessions

                                          I’m interested in seeing this.

                                          (Don’t know if OP is the article author or if they’re reading this).

                                          1. 2

                                            I’m not the article author.

                                            But I’m exploring making the same choice as the author.

                                            1. 1

                                              Will it affect Neuron in any way? Hopefully Windows support :-)?

                                              1. 2

                                                Having recently switched to Windows myself, I do hope to have Windows support in neuron 2.0.

                                                Somebody in fact is already working on it: https://github.com/srid/neuron/pull/586

                                                Neuron itself will continue to be written in Haskell, but I do see the value of using a .NET language for straightforward cross-platform support!

                                                1. 1

                                                  That’s an interesting plot twist. May I ask how come you switched to Windows?

                                          1. 1

                                            I wonder how fast their sorting algorithm implementations are.

                                            Could anyone link to other similar Toolkits?

                                            1. 2

                                              Not the same thing, but in terms of hash algorithms the ones they offer are far from state of the art (at least in speed.)

                                            1. 1

                                              A few years ago Microsoft proposed JPEG XR. It uses an integer-based compression algorithm, so decompressing and compressing don’t introduce quantization errors.

                                              I wonder why it didn’t get wide support (Microsoft granted free use of the patents).

                                              1. 10

                                                There are some weird things in the specification.

                                                A new open bracket will cancel any preceding unmatched open bracket of its kind.

                                                This suggests that, for example, *foo and *bar* will get “correctly” processed into *foo and <strong>bar</strong>. As the user, I would rather get a warning and be invited to escape the first star, because this is likely to be a mistake on my part. (The “implicit cancellation” rule is not very Strict).

                                                The only form of links CommonMark supports is full reference links. The link label must be exactly one symbol long.

                                                So you cannot write [foo](https://example.com), you have to write [foo][1]. Fine with me. But then “one symbol long”? [foo][1] is allowed but [foo][12] is not; the document recommends using letters once you have more than ten references, so [foo][e] is okay but [foo][example] is not.

                                                I think that this limitation comes from trying to make it easy to parse StrictMark with fairly dumb parser technology. Honestly, while I agree that 10K-lines hand-written parsers are not the way to go for a widely-used document format, I would rather have a good specification that is paired with some tutorials on how to implement decent parsing approaches (for example, recursive-descent on a regex-separated token stream) for unfamiliar programmers, rather than annoying choices in the language design to support poor technical choices.

                                                1. 5

                                                  I totally agree. It would make much more sense to limit labels to a set of digits with no spaces ([12] and [0001] are acceptable) than to a single symbol.

                                                  1. 3

                                                    I agree. To make matters worse, the specification says “one symbol wide”. Sadly, “symbol” does not have a strict definition when it comes to text encoding or parsing. The text can be UTF-16 encoded, where one symbol is actually 2 or more code units. Symbols might be language-dependent: a Czech or Slovak reader might consider “ch” to be one symbol, a Dutch reader might consider “ij” to be one symbol. UTF-8-everywhere fans might be dismayed to know that certain symbols are encoded as multiple codepoints by Unicode itself; for example, while “ю́” (cyrillic small letter yu with acute) looks, walks, and sounds like one symbol, it’s encoded as the sequence U+044E cyrillic small letter yu followed by U+0301 combining acute accent.

                                                    I think the closest thing to what the author intended is “grapheme cluster”, roughly, whatever you can highlight as one unit of text using your cursor is your one symbol. Good luck implementing that in a parser though.
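                                                    The multi-codepoint case is easy to check directly. Here is a minimal Python sketch (stdlib only, my own illustration, not from the spec) showing that the “ю́” above really is two codepoints, and that even NFC normalization cannot collapse it, since Unicode defines no precomposed “yu with acute”:

                                                    ```python
                                                    import unicodedata

                                                    # U+044E CYRILLIC SMALL LETTER YU + U+0301 COMBINING ACUTE ACCENT
                                                    s = "\u044e\u0301"  # renders as a single symbol: ю́

                                                    print(len(s))  # 2 codepoints for one visible symbol

                                                    # NFC composes only where a precomposed codepoint exists;
                                                    # here none does, so the sequence stays two codepoints long.
                                                    print(len(unicodedata.normalize("NFC", s)))  # still 2
                                                    ```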

                                                    1. 1

                                                      a Dutch reader might consider “ij” to be one symbol

                                                      Certainly in the context of computers, I think very few people would, if any, since it’s always written as the two letters “i” and “j”. Outside of that, things are a bit more complicated and it has a bit of a weird/irregular status, but this isn’t something you really need to worry about in this context.

                                                      There’s a codepoint for it, but that’s just a legacy ligature codepoint, just like (U+FB00) for ff, (U+FB06) for st, and a bunch of others. These days ligatures are encoded in the font itself and using the ligature codepoints is discouraged.

                                                      The text can be UTF-16 encoded, where one symbol is actually 2 or more codeunits

                                                      This has nothing to do with UTF-16, which is functionally identical to UTF-8, except that it encodes the codepoints in a different way (2 or 4 bytes, instead of 1 to 4 bytes). I don’t know what you mean with “one symbol is actually 2 or more codeunits” as that’s a Unicode feature, not a UTF-16 feature.

                                                      UTF-8 everywhere fans might be dismayed to know that certain symbols are encoded as multiple codepoints by unicode itself

                                                      Yes, and this works fine in UTF-8?

                                                      I think the closest thing to what the author intended is “grapheme cluster”, roughly, whatever you can highlight as one unit of text using your cursor is your one symbol. Good luck implementing that in a parser though.

                                                      Most languages should have either native support for this or a library for it, and it’s actually not that hard to implement.

                                                      They did mean “codepoint” though, as that is what is in the grammar:

                                                      PUNCT = "!".."/" | ":".."@" | "[".."`" | "{".."~";
                                                      WS = [ \t\r\n];
                                                      WSP = WS | PUNCT;
                                                      LINK_LABEL = CODEPOINT - WSP - "]";
                                                      

                                                      You probably want to restrict this a bit more; there’s much more “white space” and “punctuation” than just those listed, and using control characters, combining characters, format characters, etc. could lead to some very strange rendering artefacts. All of this should really be based on Unicode categories.

                                                      1. 1

                                                        My main point is I can see how a naive implementation might use the built-in length function to check if something is one “symbol” long and it will fail in non-obvious ways for abstract characters that one might consider to be one character long.

                                                        Most languages should have either native support for this or a library for it, and it’s actually not that hard to implement.

                                                        Except they don’t. Here’s an example: the following string consists of 16 grapheme clusters (including spaces), but anywhere from 20 to 22 codepoints.

                                                        Приве́т नमस्ते שָׁלוֹם

                                                        I invite you to use any of your tools that you think would handle this correctly and tell me if any do. And this example is without resorting to easy gotchas, like combining emojis “👩‍👩‍👦‍👦”.
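                                                        To make the failure mode concrete, here is a minimal Python sketch (stdlib only, my own illustration). It contrasts len() with a rough grapheme estimate that simply skips combining marks; this is nowhere near full UAX #29 segmentation (it ignores ZWJ emoji sequences, Hangul jamo, Devanagari conjuncts, etc.), but it already shows why a naive length check misfires:

                                                        ```python
                                                        import unicodedata

                                                        # "Приве́т": the е carries U+0301 COMBINING ACUTE ACCENT
                                                        s = "\u041f\u0440\u0438\u0432\u0435\u0301\u0442"

                                                        print(len(s))  # 7 codepoints

                                                        # Rough estimate: count codepoints that are not combining marks.
                                                        approx = sum(1 for ch in s if unicodedata.combining(ch) == 0)
                                                        print(approx)  # 6 visible characters
                                                        ```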

                                                        1. 2

                                                          My main point is I can see how a naive implementation might use the built-in length function to check if something is one “symbol” long

                                                          Well in this case that would be correct as the specification says it’s a single codepoint.

                                                          I invite you to use any of your tools that you think would handle this correctly and tell me if any do.

                                                          Searching “grapheme ” should turn up a library. Some languages have native support (specifically, IIRC Swift has it, and I thought Rust did too, but I’m not sure) and others may include some support in the stdlib. Like I said, this is not super-hard to implement; the Unicode specifications always make this kind of stuff seem harder than it actually is because of the way they’re written, but essentially they just have a .txt file which lists codepoints that are “grapheme break characters”, and the logic isn’t that hard.

                                                  1. 2

                                                    I’m dreaming of Zig, but with operator overloading to make implementing mathematical expressions bearable. Anyone have experience with Odin?

                                                    1. 2

                                                      I would take Julia’s style of multiple dispatch. I’d also be happy with built-in vector / matrix / tensor data types.

                                                    1. 1

                                                      Is there an option to verify the files’ integrity at both the source and the backup during the process? The troubling thing about backup is that it is automated. But what can guarantee the video I last accessed 8 years ago is still a valid file?

                                                      1. 2

                                                        Could it be used on Windows?

                                                        1. 4

                                                          I haven’t tested bupstash on Windows yet, but it’s something I plan to make work. I suspect it might need some fixes first.

                                                          1. 1

                                                            It also means you have to implement VSS support in bupstash, because backups on Windows without the VSS features won’t make much sense.

                                                        1. 3

                                                          Any chance of moving to C17 for MSVC compatibility?

                                                          1. 2

                                                            MSVC should be able to compile ISO C99, though I do understand the politics behind MSVC’s lack of support for it. Fortunately the CTL containers compile with a C++98 or a C++17 compiler as well, so we might just be in luck with MSVC’s ISO C99 support in their C compiler, at the very least.

                                                            1. 4

                                                              Support for C99 requires supporting VLAs (variable-length arrays), which MSVC doesn’t support. Later versions of C (like C11) made VLA support optional, and hence are officially supported by MSVC.
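                                                              For concreteness, here is a minimal C sketch of the feature in question (my own illustration). The VLA below is valid C99; MSVC rejects it, and a C11/C17 compiler may also opt out (advertising that via __STDC_NO_VLA__):

                                                              ```c
                                                              #include <assert.h>
                                                              #include <stdio.h>

                                                              /* Sums 1..n using a buffer whose size is only known at
                                                                 runtime. The declaration of buf is a C99 VLA: mandatory
                                                                 in C99, optional in C11/C17, never implemented by MSVC. */
                                                              static int sum_first(int n) {
                                                                  int buf[n];  /* VLA: this line is what MSVC rejects */
                                                                  for (int i = 0; i < n; i++)
                                                                      buf[i] = i + 1;
                                                                  int total = 0;
                                                                  for (int i = 0; i < n; i++)
                                                                      total += buf[i];
                                                                  return total;
                                                              }

                                                              int main(void) {
                                                                  assert(sum_first(4) == 10); /* 1+2+3+4 */
                                                                  printf("%d\n", sum_first(4));
                                                                  return 0;
                                                              }
                                                              ```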

                                                            2. 2

                                                              I have updated to C11. Compiling with the Developer Command Prompt: cl /I ctl /std:c11 main.c

                                                            1. 5

                                                              The other takeaway is to backup, and backup often. If I was already backing up every week, I wouldn’t have lost what I lost.

                                                              I guess they discovered that file synchronization is not a backup. Though they do not seem to have learned a lot from this: losing a week of work is also insane. Do hourly incremental backups to multiple locations. restic and Arq are your friends.

                                                              1. 1

                                                                Isn’t backup some kind of file synchronization? Would you define it as one-way synchronization? If it is just one-way synchronization, I think Syncthing supports this kind of mode, doesn’t it?