1. 3

    The features are just right. If they could just get their memory leaks under control.

    1. 5

      I’m not aware of any leaks in the core, although we do cache some things and won’t release that memory, since releasing it would just mean re-calculating the next time it’s needed. Plugins definitely could be leaking. https://forum.sublimetext.com would probably be a good place to debug/diagnose.

      1. 1

        Removed all packages except for Package Control (v3.4.0). Still leaks.

    1. 8

      Starts with the “science” part of:

      The Pulfrich effect […] yields about a 15 ms delay for a factor of ten difference in average retinal illuminance

      Then, telephone-game style, this gets passed along into “eye processing power” – whatever that is – which ultimately becomes “more load for your brain”:

      Using a dark colour scheme to write code requires more eye processing power than using a light colour scheme.

      but more processing power does mean you add more load to your brain, meaning that a dark colour scheme actually is more exhaustive

      Implicating the brain is an unsubstantiated leap. For all we know, this could be due to limitations of the retina – like exposure time with film and CCDs (needing more time in the low-light scenario to capture enough photons to reach activation potential) – or something else entirely.

      1. 3

        I’m not sure if this is just due to the newness of Flutter, but I’ve yet to find any app that doesn’t immediately feel like a Flutter app. Opening the Flutter Gallery app on Windows, I could instantly tell something was wrong, because scrolling wasn’t smooth like it is in other applications. It’s the same on Android: things are just slightly different, but different enough that something feels wrong.

        Compared to something like React Native, I really don’t get the appeal of Flutter.

        1. 1

          Having to throw yet another language (Dart) into the mix makes this even less compelling.

        1. 1

          This looks pretty cool. If I didn’t think Apple’s ultimate goal is to deep-six OS X after herding us all into the iOS plantation, I’d buy a license.

          1. 1

            This did not age well. Appreciated the MINITEL respect though.

            1. 2

              For stateless validations (must be a number between 0 and 100), this is a nice approach. For stateful validations (this e-mail address has already been taken), it should probably be a two-stage process–unless we want to put filesystem/database/etc calls inside our parsers, which seems like a terrible idea.

              1. 3

                Yes, putting some kind of IO (service-call/db/etc) inside a parser would be terrible. I try to tackle stateful validation problems like this:

                1. Model the syntactically-valid data type, and use a parser to “smart-construct” it. So in this case we’d have an %EmailAddress{}. This data type doesn’t tell us anything about whether the email has been claimed or not.

                2. Down the line, when (or if) we actually need to work with email addresses that are unclaimed, we have the service responsible for instantiating them expose a function typed:

                @spec to_unclaimed_email_address(%EmailAddress{}) ::
                        Result.t(%UnclaimedEmailAddress{}, some_error())

                This function does the necessary legwork to either create a truly unclaimed email address, or tell you that it’s not possible with the data you brought it. It still conforms to the ‘railway-oriented style’, but at another level of the architecture.

                Of course this opens up another can of worms in terms of concurrency, but that’s state for you.
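
                A minimal sketch of the two stages in Elixir – the module names and the claimed?/1 lookup are hypothetical stand-ins, not anyone’s real API:

                defmodule EmailAddress do
                  defstruct [:value]

                  # Stage 1: pure syntactic parsing ("smart construction"); no IO here.
                  def parse(raw) when is_binary(raw) do
                    if raw =~ ~r/^[^@\s]+@[^@\s]+\.[^@\s]+$/ do
                      {:ok, %EmailAddress{value: String.downcase(raw)}}
                    else
                      {:error, :invalid_email}
                    end
                  end
                end

                defmodule UnclaimedEmailAddress do
                  defstruct [:value]
                end

                defmodule Accounts do
                  # Stage 2: the only place the stateful (db/service) check lives.
                  def to_unclaimed_email_address(%EmailAddress{value: v}) do
                    if claimed?(v),
                      do: {:error, :already_claimed},
                      else: {:ok, %UnclaimedEmailAddress{value: v}}
                  end

                  # Stand-in for the real repository/service lookup.
                  defp claimed?(v), do: v in ["taken@example.com"]
                end

                # Railway-style usage: each stage short-circuits on {:error, _}.
                with {:ok, email} <- EmailAddress.parse("someone@example.com"),
                     {:ok, unclaimed} <- Accounts.to_unclaimed_email_address(email) do
                  {:ok, unclaimed}
                end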

              1. 3

                If Google has enough power to dictate AMP usage across the web, they have more than enough power to dictate what the browser will and will not do–regardless of who ends up writing and maintaining that browser. This is probably why MS threw in the towel.

                The Mozilla thing is noise. They make most of their money from search partnerships. Hard to say “no” to Google when they’re the ones keeping the lights on:

                https://www.mozilla.org/en-US/foundation/annualreport/2017/#:~:text=Today%2C%20the%20majority%20of%20Mozilla,remain%20the%20subject%20of%20litigation.

                Given their current share of the browser market (and especially poor showing in mobile), I think Mozilla is just a ~$500M/yr antitrust insurance policy:

                https://en.wikipedia.org/wiki/Usage_share_of_web_browsers#Summary_tables

                If the cool kids want something less offensive than the modern web, they will have to make it. “Build it, and they will come.”

                1. 2

                  The dimensionality of the written word is low enough that these methods really shine. It isn’t high art, and has a number of tells, but it did fool quite a few people!

                  The scammers are probably burning the midnight oil as we speak, incorporating this non-profit wizardry into their spam farms. Nigerian princes are about to get an upgrade.

                  1. 3

                    Considering the tone of the article, and the theme running through his war stories, I don’t think learning statistics would have fixed anything here. Sounds more like bad attitudes in resonance. Subtract the bad attitudes, and the statistical aspects could be explained in minutes to anyone with a firm grasp on grade-school arithmetic and common sense.

                    1. 5

                      This is an incredibly awful article; I don’t know where to start.

                      This allows us to instantly see how the nested functions close over the outer functions.

                      We already have something for this. It’s called indentation. Comprehending his example code was no easier than if it were completely unhighlighted. I’m curious whether any other Lobsters found it easier.

                      Syntax coloring isn’t useless, it is childish, like training wheels or school paste. It is great for a while, and then you might grow up.

                      Got it, Real Men ™ program in monochrome. I hear they also only program in Fortran, Lisp, and assembly, unlike all those other, childish languages.

                      I no longer need help in separating operators from numbers.

                      No one uses syntax highlighting to differentiate operators from numbers. The most common use cases by far are highlighting string literals, comments, and keywords. These are all very valuable. Syntax highlighting is a way to quickly find the end of a comment or a long string literal that might otherwise take some conscious effort to find. And keywords are highlighted because, in the syntax of most languages, it’s very easy to mistake one for an identifier.

                      But assistance in finding the functions and their contexts and influences is valuable.

                      Scope coloring does not help you find “influence”. The “context” of code is only understandable as a whole. No amount of syntax highlighting will help you understand why something is inside an “if” instead of its parent scope.

                      1. 3

                        We already have something for this. It’s called indentation.

                        Look more closely at the green encoder references at the bottom. In a language with closures, there isn’t a one-to-one correspondence between indentation and what he has proposed.

                        1. 2

                          Bizarre. I find turning off syntax coloring useful when I’m trying to learn new syntax, precisely because it makes the code harder to read and forces me to slow down and actually parse it.

                          I turn it back on when I want to get work done because it makes me faster.

                        1. 1

                          Ever since the gyms opened back up, it’s been fine. Been working from home a long time now, and we homeschool, so not much to adjust to otherwise.

                          1. 1

                            Yuzo Koshiro’s oeuvre is great for tuning your brain toward the machine and away from everything else. Absolute getting shit done music.

                            Whether or not that’s a good thing, I leave to the philosophers.

                            1. 2

                              Now we just need a collapse computer to go with it.

                              1. 4

                                As far as I can tell, the RC2014 is the “flagship device” for collapseOS: https://rc2014.co.uk/

                                You can hand-assemble it using through-hole soldering only.

                                1. 2

                                  Still too supply-chainy. Z80 through-hole chips are sort of a boutique item. Wouldn’t be surprised if most of the inventory out there is surplus from 20 years ago. If existing inventory survives the event, x86_64, ARM, and existing OSes would be the most probable survivors.

                                  It would be cool if there were something we could fabricate using near-ubiquitous technology and materials. Maybe some kind of laser-printer based lithography process? Like inkjet t-shirt iron-ons, but for integrated circuits.

                              1. 5

                                Much of my beloved profession has become an abomination. 100% javascript stacks are the closest we have to the cleansing fire. Let it burn!

                                1. 2

                                  I’ve experienced a great many of these. Still preferable to a five-second telemetry delay every time I open the calculator on Windows.

                                    Alan Kay mentioned the value of developing hardware and OS in tandem, in one shop. The PC is a disaster as-is, and something has got to give. Rust and formally verified microkernels might buy some time, but anyone who has to reach into the guts of these things knows that’s just lipstick on a pig.

                                  1. 5

                                    It is not surprising that reproducibility is so hit-and-miss these days when so many scientists are this arrogant.

                                    I think mankind would be further along if we had more Knuths and DJBs in the natural sciences–guys who’ll write a check instead of point a finger.

                                    1. 5

                                      Sorry if this is the wrong forum for this question, but what would it take for Plan 9 to become a viable option on servers or desktops today? It seems like many people are enthusiastic about the concepts it is built on and its potential, and have been for years, yet it seems as elusive as the GNU Hurd. Technically it may still be under development, but I’ve never encountered a machine, virtually or in person, that is running Plan 9.

                                      Why isn’t Plan 9 more popular? Is it an organizational problem, where there is no clear leader (person, corporation, or non-profit) pushing Plan 9 forward? Are there too many competing “forks” diluting what development effort exists? Is it a lack of good documentation/tutorials helping people get started developing Plan 9? Is there some licensing issue? Is it lack of hardware support, making it impossible to run on modern hardware? (Then why not run it in a virtual machine / emulator, as Redox OS does while it’s being developed?) Is it a lack of software written for it, or lack of a killer app that makes people want to run it instead of BSD or any other niche OS? Is it a sheer lack of publicity, so that fewer people are aware of its existence than I think? Is Plan 9 actually obsolete, so that people who really look at its design give up and go do something else with their time?

                                      1. 12

                                        I think there are a few factors:

                                          • The developers have very strong opinions – on things like obsessive adherence to the Unix philosophy (check the source for the Plan 9 coreutils), syntax highlighting, mouse use, and so on. The nature of the system seems to make those opinions much harder to disagree with than on other systems. Read the mailing lists or cat-v to get an idea of what I mean. I don’t think this is a bad thing, but it is polarising.
                                          • The mouse is central. This stems from the above, but I think it’s an issue in itself. A lot of the people who are likely to be interested are also likely to be invested in programs such as Vim or Emacs, and telling these people that they have no choice but to use a mouse isn’t going to go down well. The prevalence of laptops these days also means people are less often in a position to use a mouse in the efficient way that is required. Furthermore, the mouse should be three-buttoned, and modern mice rarely are; the scroll wheel doesn’t work as a suitable alternative.
                                          • It works best together. Plan 9 is designed as a distributed operating system, and comes into its own when used on more than just a single personal machine. The fact that no one uses it makes this hard to achieve – a chicken-and-egg situation.
                                          • It’s ugly. Personally I quite like the aesthetic, but it does look like it’s from the 90s, and that’s going to turn a lot of people off. The interface is spartan, and many of its programs don’t come with easy ways to change the colour scheme to what the user might prefer.
                                          • There isn’t a good browser. I hate that I need one as much as the next person, but unfortunately it’s the case.

                                        These are the main things that have stopped me from using the system, and I’ve wanted to make it my main OS on a couple of occasions. Some people have switched, but others have moved to modern systems, bringing the killer apps with them.

                                        That’s the situation as I’ve experienced it anyway.

                                        [edit - just a heads up, apologies if this sounds a bit rushed, I wrote it once and then accidentally C-w’d my tab at the last moment (damn browsers!) and my thought process was a bit scattered the second time around.]

                                        1. 1

                                          Sorry to pick out this one thing, but why won’t the scroll wheel work as a 3rd button? Can’t you just push it without scrolling?

                                          1. 2

                                              The scroll wheel does work as a third button. I wouldn’t call it unsuitable; it’s just less ergonomic than a real middle button.

                                            1. 1

                                                You can, but that’s not really what it’s designed for. Maybe you do, and it works for you, but I find it frustrating because it feels like a wheel, not a button. Even when I disable scrolling with it, it feels like a broken wheel, which is even worse. I used to have a three-button mouse and it just felt better in that regard (it also had a ball, so it was worse in others).

                                          2. 9

                                            @twee answered in detail why Plan 9 as a whole doesn’t get much adoption. But a lot of pieces of Plan 9 have been inspirational to other more popular OSes, and as a research OS, I think that counts as success for Plan 9. Obvious examples are UTF-8, which is everywhere, and the /proc filesystem on Linux that gives “everything is just a file” access to all sorts of kernel internals. A less obvious one is the 9p file server protocol, which made a recent appearance in Windows Subsystem for Linux of all places!

                                            1. 4

                                              Drivers are a huge obstacle for every alternative OS. Even on Linux, the situation is rough.

                                              The ubiquity of virtualization software (and somewhat consistent virtual hardware drivers) has been a boon to alt-OS usage and popularity.

                                              1. 4

                                                  Plan 9 was and still is a beautiful experiment – an example of a research OS developed coherently, with a clear vision, while having a surprisingly decent userspace. However, it’s dangerous to confuse its aesthetic beauty and conceptual simplicity with actual utility for end users. Operating systems exist to provide a hardware abstraction layer and to run end-user applications. It turns out that it’s possible to run the world on something as bloated and messy as Linux, and even, gasp, Windows. As long as the OS does not fall apart (like Windows 98) and has drivers for its target hardware, it’s good enough. Plan 9 clearly steered too far into “pure aesthetics” territory without being an order-of-magnitude improvement for end users.

                                                  Already by the mid-90s, Plan 9 was getting out of touch with the mainstream OSes, and it had not found a niche where it was a winner. The other comments have already mentioned the archaic UI and the mouse-centric workflow that requires a middle button – it is opinionated and rather hostile to the workflows of most power and casual users alike. As for the organizational problems: a successful OS needs the backing of at least one big corp (OpenBSD is the most successful OS that I still consider actually community-driven).

                                                  To be honest, I prefer that Plan 9 stay in history in its clear form, as a myth of a perfect operating system, instead of watching it become something like what Linux is turning into today under the pressure of big enterprises’ needs.

                                              1. 14

                                                Also see a (better) article related to this topic: C Is Not a Low-level Language.

                                                1. 7

                                                  Startup idea: build a CPU that is a fast PDP-11. :)

                                                  1. 8

                                                    Refer to the venerable 68010.

                                                        The 68000 tried, but a couple of mistakes made it fall short of the Popek and Goldberg virtualization requirements (most famously, MOVE from SR was unprivileged, so user code could read the status register). The 68010 fixes these.

                                                    1. 2

                                                      Actually, Intel has been doing that for some time now. There are limits to what they can do (speed of light, cost of static memory…) but much of their work is tailored towards making C programs fast.

                                                      1. 7

                                                            No, I mean one that actually is a fast PDP-11, not one that can pretend to be via a bunch of incredibly leaky abstractions. I don’t know what it would take; I’m also not saying it’s a good idea.

                                                        Suppose we can somehow make all memory access as fast as cache access is now. That would make many optimizations unnecessary, and it would make the C mental model correct again.

                                                        Suppose we can somehow make drivers loadable microcode modules. At least some devices can still be just memory-mapped like in PDP-11, the rest can expose protocols with machine-readable specs. Then writing an OS will not be as easy as in PDP-11 times, but hardware support will be mostly a non-issue and running an experimental OS will be feasible.

                                                            In many regards, a computer that is as fast as modern machines but as easy to reason about as a PDP-11 is a dream machine. If it can be built, I bet it will make it big.

                                                        1. 5

                                                          Suppose we can somehow make all memory access as fast as cache access is now.

                                                              As long as memory modules are physical, they occupy space, and the distance from a given place in memory to the CPU grows roughly as the square root of the memory size. And speed depends on distance. The reason caches are small is rooted in physical constants, not insufficient engineering.
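
                                                              A rough back-of-envelope, generously assuming signals propagate at about half the speed of light:

                                                              0.5c ≈ 1.5 × 10^8 m/s, and one cycle at 4 GHz is 0.25 ns,
                                                              so a signal covers ~3.7 cm per cycle – under 2 cm round trip.
                                                              And quadrupling the memory area roughly doubles the average wire length.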

                                                          1. 3

                                                            It’s not just distance. DRAM is more dense (which partially relates to distance), but to achieve that density, it leaks and requires refresh cycles. Even if it were physically small, and on-die, it would still be slow due to the refresh.

                                                            SRAM is less dense, but does not require refresh, so even large ones that are off-die (not too far off-die though!) will still be faster.

                                                            1. 1

                                                              Are you referring to memory volatility? I don’t see what that has to do with density. Also, isn’t the much-faster CPU cache also volatile?

                                                              1. 1

                                                                All RAM is volatile, but not all RAM requires refresh cycles.

                                                                SRAM elements are larger and more complicated, but don’t need to be refreshed–cache.

                                                                DRAM elements are basically capacitors, so more can fit in the same space, but they have to be periodically refreshed–system memory.

                                                          2. 4

                                                                The PDP-11 is easy to reason about because it’s a microcoded single-issue processor without a cache. You could build such a machine, but it would be too slow to be useful.

                                                                Some DSPs are available with static memory; that’s as close to uniform memory access as you can get today.

                                                                If you are interested in this topic, Computer Architecture: A Quantitative Approach has a great explanation of why this hardware complexity exists.

                                                            1. 5

                                                                  I know why it exists. I also think we’d be better off dropping the C model. I’m not talking about building a PDP-11 with what we have now, but about a hypothetical situation where someone discovers a “tachyon memory” that is as fast as CPU cache is now.

                                                              1. 1

                                                                Wouldn’t that just be… more CPU cache? We could use the same technology for all memory, but it would be crazy-expensive. And that’s not even accounting for how much slower things get once you have to address a large memory space.

                                                            2. 2

                                                                  Oh, I see. Though I don’t think it can really be done: there’s no way around the cache hierarchy, for instance, if you want to bridge the memory gap (slow memory, fast CPU). We should ditch C instead.

                                                                  About hardware support: have you seen Casey Muratori talking about the 30 million line problem? He proposes much the same thing, where hardware vendors would conform to a standard hardware interface (one standard for keyboards, another for mice, another for webcams…), instead of providing drivers to compensate for their non-standard hardware.

                                                              1. 1

                                                                some devices can still be just memory-mapped like in PDP-11, the rest can expose protocols with machine-readable specs

                                                                What is “just” memory mapped as opposed to “protocols”? Something like an XHCI controller is memory-mapped, but you have to speak the protocol into the mapped memory :)

                                                            3. 1

                                                              People have explained why not, but I want to ask: Why? What advantage would that have over other CPUs?

                                                            4. 3

                                                              Also see a (better) article related to this topic: C Is Not a Low-level Language.

                                                              Thanks. I am the author of that article. I am now working on Verona, which aims to build the kinds of abstractions that are easy to map to modern hardware (and to build fast hardware for).

                                                              1. 1

                                                                Pardon the digression: Verona sounds extremely interesting. Why do you include/investigate the test-related features, though? I don’t mean to say that it’s wrong, I just don’t see how it’s connected to your other focus.

                                                                1. 1

                                                                  I don’t think I understand the question, would you mind giving a bit more detail?

                                                                  1. 1

                                                                    It seems to me as if Verona investigates several aspects of one problem area: The three questions you mention seem related, in the sense that a language feature that touches any of those is likely going to touch several. On the other hand systematic testing sounds both interesting and worthwhile, but I don’t see how it’s connected to the others. Is it intended to be? If so, how?

                                                                    1. 3

                                                                        Ah, I understand. Systematic testing is somewhat orthogonal to the others, but it is also enabled by them. The Verona decision to disallow concurrent mutation has a lot of benefits. It means that you can do very fast GC (on a per-region basis) using techniques from the ’80s, from before the need to support concurrent mutation made them complicated – and, because collection is per-region, without needing any stop-the-world phase, and with the option to explicitly schedule region GC if the programmer chooses to use GC’d regions. Another benefit is that the compiler and runtime are now jointly aware of all possible communication between components that live within the abstract machine [1].

                                                                      Contrast this with C. According to the C11 spec, any program that contains data races is undefined behaviour. All thread communication must be via _Atomic variables or other in-memory communication that is guarded by _Atomic accesses (for example, you can use a spinlock built from an atomic_flag to ensure that only one thread is mutating shared state at any given time, and can build more complex isolation out of this). In practice, most concurrent C programs do not do this. In theory, a C compiler could instrument every load and store to _Atomic variables and then enforce a specific repeatable ordering on these, but in practice that would miss a load of things (in particular, large C codebases that predate C11 often use inline assembly instead of compiler intrinsics for atomic operations, so are completely escaping from the abstract machine for concurrency).

                                                                        In contrast, you can take a Verona program and run it on a single core, with explicit yield points at any of the communication paths. In the runtime, we have implemented code (lifting a design completely from Pantazis Deligiannis’s PhD) that does exactly that. This was originally built to test the runtime itself: in CI, we run some tests with a large number of random seeds to make them all execute in different deterministic orders. If one fails, we’ve found a bug and, importantly, we have a reproducible test case: if you run the same test with the same random seed, it will fail in the same way locally that it fails in CI. This has been sufficiently effective at finding bugs that it seemed useful to expose to the programmer.

                                                                      More generally, one goal for Verona is to provide a productive language for writing highly scalable concurrent programs. Being able to test the code well is a fundamental part of this.

                                                                      [1] There are some exceptions. For example, if I do an HTTP PUT in one cown and a GET in another at the same resource, I now have out-of-band communication that the runtime isn’t aware of: it is aware that the GET introduces some nondeterminism from I/O, but not that there is a causal relationship between the PUT and the GET.
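
                                                                        To make the seed-replay idea concrete, here is a toy sketch in Elixir – nothing to do with the actual Verona runtime, just a seeded scheduler choosing interleavings of a racy counter:

                                                                        defmodule SeedReplay do
                                                                          # tasks: %{id => [step]}, where each step is a fun(state) -> state.
                                                                          def run(tasks, seed) do
                                                                            :rand.seed(:exsss, {seed, seed, seed})
                                                                            loop(tasks, %{counter: 0, local: %{}})
                                                                          end

                                                                          defp loop(tasks, state) when map_size(tasks) == 0, do: state

                                                                          defp loop(tasks, state) do
                                                                            # Every scheduling decision comes from the seeded RNG, so the
                                                                            # whole interleaving is reproducible from the seed alone.
                                                                            id = Enum.random(Map.keys(tasks))
                                                                            case Map.fetch!(tasks, id) do
                                                                              [] -> loop(Map.delete(tasks, id), state)
                                                                              [step | rest] -> loop(Map.put(tasks, id, rest), step.(state))
                                                                            end
                                                                          end
                                                                        end

                                                                        # Two tasks doing a non-atomic increment: read counter, then write it + 1.
                                                                        task = fn id ->
                                                                          [
                                                                            fn st -> put_in(st.local[id], st.counter) end,
                                                                            fn st -> %{st | counter: st.local[id] + 1} end
                                                                          ]
                                                                        end

                                                                        tasks = %{a: task.(:a), b: task.(:b)}

                                                                        # Seeds that interleave the two reads lose an update (counter == 1);
                                                                        # re-running such a seed fails identically every time.
                                                                        for seed <- 1..10 do
                                                                          IO.inspect({seed, SeedReplay.run(tasks, seed).counter})
                                                                        end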