Threads for intarga

  1. 11

    As cool as this looks, I feel like a proprietary editor is a hard sell these days.

    1. 3

      Is it? I’ve used Sublime for years, and only really switched to VSCode because it had more extensions and a bigger community. Only having a limited time trial feels like a much bigger obstacle, since developers hate to pay for stuff.

      1. 3

        I don’t mind paying for things but my main concern has become longevity.

        On the Mac in particular, where they frequently break backwards compatibility, I want to know that the software will keep working forever, and every time I buy commercial software for the Mac that expectation gets broken. Their license servers don’t stay up, or they lose interest in it for long enough that Apple switches CPU architectures or macOS versions underneath it and it doesn’t work anymore.

        I’ve spent about $200 on 1password licences and they’re just about to drop support for the way I use them and there’s basically nothing I can do to keep it working once Apple changes some insignificant thing that Agilebits would have to update it for. That might be a Firefox or Safari plugin architectural change or even just an SSL certificate that needs renewing.

        At least with something open source I can go and do that myself

        1. 3

          Paying for stuff isn’t the barrier, at least not for me. It’s the lack of hackability and extensions.

          I guess if they had a robust plugin system, that could make the lack of source easier to swallow, but it’s still unlikely to have many plugins or a big community, because it’s proprietary.

          1. 2

            Sublime was hackable, had extensions, had a robust plugin system. In fact, both Atom and VSCode are very much inspired by Sublime, and this is as well, just from looking at it. The assertion that a piece of software won’t have plugins or a big community **because** it’s proprietary is just incorrect.

            1. 5

              Sublime is from another time, when VSCode and Atom didn’t exist. This is very anecdotal, but most developers I see nowadays are using VSCode, whereas 5-10 years ago most of them were using Sublime.

              I guess this editor has a niche for Rust and Go developers who want an IDE with native performance, but at the cost of extensibility.

      1. -1

        Modal, terminal based text editor written in… Rust? In 2022?

        Wouldn’t C be a better fit here, considering the 70s sensibilities?

        1. 15

          Wait til you find out that people write vim plugins in typescript

          1. 14

            It’s okay to not like things

            1. 4

              I think your sarcasm was lost on people here. I detected and appreciated it.

              1. 2

                Guess I’m dense, could you explain?

                1. 3

                  It’s my impression that Emacs and Vim are largely inspired by development in editors from the 70s and 80s. The joke is that developing tools with their aesthetic would lead one toward C to reflect the time period appropriately.

              2. 2

                Until the new Strict Provenance work, Rust embraced the PDP-11’s model of memory, so it seems very appropriate here.

              1. 1

                Some thoughts to that:

                Universal Clipboard

                That’s an anti-feature to me that KDE Connect may provide, but I also disable it on windows.

                Safari Tab Groups

                Don’t know, I’ve been a tab group/managing/.. user, but since I’ve migrated away and just close everything at the end, I’m a much happier person. AFAIK Opera does some of this.

                AirPlay

                Points for that, though it’s still a proprietary system that costs extra money on every device that can support it. Let’s see about multi-DPI after wayland finally settled.

                Apple Maps

                Google maps/OSM?

                You can still boot Linux on them

                Well you can, but you won’t ever upgrade any drivers; for all we know you’ll have to dual-boot until the end of time.

                Costs

                I did actually think about buying an Apple laptop after the recent M1 success. Too bad they do actually ship only 8GB of hard soldered RAM for everything under 2000€. So you get a beast of a processor, but don’t try opening too many tabs (electron apps) or actually compiling on it, because then you’ll trash your hard soldered SSD over time? So I went with a Lenovo Yoga; that way I’m also saving the 800€+ that an Apple tablet or a ton of adapters would cost. It’s a shame, to be honest.

                1. 10

                  I’ve migrated away and just close everything at the end, I’m a much happier person

                  I respect that, but for my use cases, I need this stuff. I have a lot of long-standing research and projects going. For example, in 2021 I was researching cancer treatments for a family member so they could make an informed decision about their options. I have a few improvements I want to contribute to VirtualBox, and the relevant docs are in a tab group, safe and waiting for me when I’m ready to tackle them. Et cetera.

                  after wayland finally settled.

                  “Initial release: September 30, 2008”

                  Not sure if Wayland reminds me more of Duke Nukem Forever (15 years of development hell later, it’s released but a disappointment) or Windows Vista (“WinFS! Palladium! Avalon! Well ok, none of those, but isn’t that new theme pretty!”).

                  Google maps/OSM?

                  I did mention Marble in my article. The point was to have a desktop app that performed these functions instead of relying on a Web site. I really dislike using applications in a browser and should probably write that up later.

                  you’ll have to dual-boot

                  I’m not sure if by “upgrade any drivers” you meant firmware. Asahi is looking at making firmware updates possible from Linux since they are packaged in a standard format (.scap file) and you can already upgrade them manually using a Mac CLI command (bless). Otherwise there is no reason you have to run the Mac OS on an M1 Mac, Asahi can be the only OS on the drive (though right now you probably wouldn’t like it).

                  8GB

                  …is enough for my Mum to do eight Safari tabs + Photoshop + Telegram on her MBA, but I concede it’d be really nice to have slotted RAM. Unfortunately there are reasons they removed them; I remember the PBG4 slot failure fiasco, and it does drive up cost, thermals, dimensions, etc. Not that I like the idea, but I do understand the point.

                  1. 6

                    I have a lot of long-standing research and projects going

                    Bookmarks seem to be the right feature for this use case.

                    1. 11

                      Bookmarks don’t preserve history. It is possible to use bookmarks in a similar fashion, but I have never been as productive with bookmarks as I am with tab groups.

                      1. 4

                        Preach! I find I think in terms of space. Tabs exist in space. Bookmarks do not. I can find a container or a tab, but bookmarks? Five years later, as I’m cleaning out my bookmarks, I remember how that would have been so useful.

                        1. 1

                          I use Pinboard, with the archiving feature; I can search by tags I’ve applied or some text within the documents. It’s pretty useful!

                        2. 3

                          For some reason, as soon as tabs became a thing, I basically stopped using bookmarks completely. I feel like what I really want is a queue of “this looks interesting” things that slowly dies if I don’t look at it… kind of like tabs: they stay open until I get so annoyed by all of them that I just close them, but it works great to keep stuff around that “oh hey, I might want to read this a bit later”.

                          1. 1

                            I use Reading List for that, but yeah, before I had a Reading List this was another use case for tabs.

                      2. 2

                        Which Wayland implementation is being discussed?

                        I’ve been using one for years and I’m pretty happy with it.

                        Between TV, home display, work display, and internal display, there’s 4 different DPIs/scaling factors going on, and it seems to work just fine?

                        1. 5

                          Wayland implementations are at that critical stage between “works on my machine” and “works on everyone’s machine”. Mine’s pretty well-behaved, non-nvidia, three year-old hardware, and all Wayland compositors I’ve tried break in all sorts of hilarious ways. Sway is basically the only one that’s usable for any extended amount of time but that’s for hardc0re l33t h4x0rz and I’m just a puny clicky-clicker.

                        2. 1

                          Wayland seems to be coming to the next stable Kubuntu release, which makes it “production ready” for me. But I can totally understand the sentiment (sitting on a fullHD + 4K screen with Windows for multi-DPI scaling).

                          Fair point for desktop-app Maps, guess I’m just used to that now.

                          Regarding driver updates you’re right, I misremembered something. What does annoy me though is that you have to use a bunch of binary blobs that you’ll have to live-download from Apple (or first put on a USB stick in a future Asahi release). That feels like the driver blobs on Android custom ROMs and isn’t necessary for any of my Intel laptops.

                          For my daily workload 8GB of RAM isn’t enough, although I’m doing more than office/browsing.

                        3. 9

                          That’s an anti-feature to me that KDE Connect may provide, but I also disable it on windows.

                          Well, it’s a useful feature for a lot of people that use multiple apple devices, including myself.

                          8GB of hard soldered RAM

                          Soldered on RAM is likely the future everywhere, not due to price, but due to engineering and physical constraints. If you want to increase performance, it has to come closer to the die.

                          So you get a beast of a processor, but don’t try opening too many tabs (electron apps) or actually compiling on it, because then you’ll trash your hard soldered SSD over time?

                          This seems to be a bit of a straw man. I haven’t had any issues with swap over the last year and a half of daily driving a MacBook air. Admittedly, it’s 16GB rather than 8GB.

                          I agree with the rest of your points, for the most part.

                          1. 1

                            it’s 16GB rather than 8GB

                            And that’s my point. I’m fine with 16GB, but 8 isn’t enough if I open my daily workload. (Note though that I was apparently wrong, I’d have gotten a decent machine for 1500€ apparently.)

                          2. 4

                            8GB of hard soldered RAM

                            Trying to compare on specs like that misses the forest for the trees IMO, the performance characteristics of that RAM are so different to what we’re used to, benchmarks are the only appropriate way to compare. The M1 beats my previous 16GB machine in every memory-heavy task I give it, if non-swappable RAM is the price I pay for that, I’ll gladly pay it.

                            1. 1

                              That’s very interesting. I’m just looking at my usual memory usage and an IDE + VM + browser are easily over 8GB of RAM. Then add things like rust analyzer or AI completion and you’re at 12GB. Not sure if swapping is good for that.

                            2. 3

                              Too bad they do actually ship only 8GB of hard soldered RAM for everything under 2000€.

                              That’s not true. A MacBook Air with M1, 16GB of RAM and the 256GB base-level SSD costs 1.359€. Selecting a more reasonable 1TB SSD will set you back 1.819€. You can always buy a 2nd choice/refurbished model for 100+€ less. Also, one should consider that the laptop will hold a lot of its value and can be sold easily in a couple of years.

                              1. 2

                                only 8GB of hard soldered RAM for everything under 2000€

                                I’m shocked it’s that expensive in Europe. My M1 Air with maxed out GPU (8-core) and RAM (16 GB), as well as 1 TB SSD, was only $1650 (~1500€).

                                1. 7

                                  It’s not that expensive. E.g. in Germany, the prices are currently roughly:

                                  1. 2

                                    Wait what. I did go to Alternate (which is also a certified repair shop) and I looked on apple.com and couldn’t find that. And even now when I go to apple.com I get a listing saying “up to 16GB”, then click on “buy now” and get exactly two models with 8GB of RAM. Oh wait, I have to change to 14 inches for that o_O

                                    Anyway, if I could, I’d edit my comment, because apparently I wasn’t searching hard enough.

                                    Edit: For 16GB RAM, 512GB SSD you’re at a minimum of 1450 (1.479 on alternate), which is still far too much in my opinion. And 256GB for my workload won’t cut it sadly.

                                    1. 3

                                      Wait what. I did go to Alternate (which is also a certified repair shop) and I looked on apple.com and couldn’t find that. And even now when I go to apple.com I get a listing saying “up to 16GB”, then click on “buy now” and get exactly two models with 8GB of RAM. Oh wait, I have to change to 14 inches for that o_O

                                      You click Buy, then Select the base model with 8GB RAM and then you can configure the options: 8 or 16GB RAM and 256GB storage all the way up to 2TB storage, the keyboard layout, etc. No need to change to the Pro 14”.

                                      For 16GB RAM, 512GB SSD you’re at a minimum of 1450 (1.479 on alternate), which is still far too much in my opinion.

                                      You are moving the goal posts. You said that a MacBook with 16GB costs more than 2000 Euro, while it actually starts at 1200 Euro.

                                      which is still far too much in my opinion.

                                      Each to their own, but MacBooks retain much more value. I often buy a new MacBook every 18-24 months and usually sell the old one at a ~400 loss. That’s 1.5-2 years of a premium laptop for 200-300 Euro per year, which is IMO a very good price. If I bought a Lenovo for 1200-1300 Euro, it’s usually worth maybe 300-400 Euro after two years.

                                      1. 3

                                        The trick is to buy these Lenovos (or some other undesirable brand) second hand from the people who paid for the new car smell, and get 5-8 more years out of them.

                                        1. 5

                                          The trick is to buy these Lenovos (or some other undesirable brand) second hand from the people who paid for the new car smell, and get 5-8 more years out of them.

                                          I can understand that approach, it is much more economically viable. But at least on the Apple side of things, there have been too many useful changes the last decade or so to want to use such an old machine:

                                          • Retina display (2012)
                                          • USB-C (2016), they definitely screwed that up by removing too many ports too early, but I love USB-C otherwise: I can charge through many ports, get high-bandwidth Thunderbolt, DP-Alt mode, etc.
                                          • External 4K@60Hz screens (2015?)
                                          • Touch ID (2016)
                                          • T2 secure enclave (2017)
                                          • M1 CPU (2020)
                                          • XDR display (2021)

                                          These changes have all been such an improvement of computing QoL. Then there are many nice incremental changes, like support for newer, faster WiFi standards.

                                          1. 2

                                            So very much this.

                                            My approach in recent years has been:

                                            Laptops are old Thinkpads, upgraded with more RAM and SSDs. Robust, keyboards are best of breed, screens & ports are adequate, performance is good enough.

                                            Phones are cheap Chinese things, usually for well under £/$/€ 200. Bonuses: dual SIM, storage expansion with inexpensive µSD card, headphone socket; good battery life.

                                            Snags: fingerprint sensors and compass are usually rubbish; 1 OS update ever; no 3rd-party cases or screen protectors. But I don’t mind replacing a £125-£150 phone after 18mth-2Y. I do mind replacing a £300+ phone that soon (or if it’s stolen or gets broken).

                                            1. 2

                                              I think we’re the same person. Phones seem to last about two years in my hands before they develop cracks and quirks that make them hard or impossible to use, regardless of whether it’s a “premium” phone or the cheapest model.

                                              I wish this weren’t the case, but economically, the cheapest (‘disposable’) chinese phones offer the best value for money, even better than most realistic second-hand options that can run LineageOS.

                                              1. 2

                                                :-D

                                                Exactly so. I have had a few phones stolen, and I’ve broken the screens on a few.

                                                It gives me far fewer stabbing pains in the wallet to crack the glass on a cheapo ChiPhone than it did on an £800 Nokia or even a £350 used iPhone. (Still debating fixing the 6S+. It’s old and past it, but was a solid device.)

                                                My new laptop from $WORK is seriously fast, but it has a horrible keyboard and not enough ports, and although it does work with a USB-C docking station, it looks like one with the ports I need will cost me some 50% of the new cost of the laptop itself. >_<

                                                1. 1

                                                  I just bought a refurbished iPhone SE1 for 100 € to replace my old SE which had a cracked screen, dead battery and a glitchy Lightning port. Fixing all that would probably have cost as much. The SE still runs the latest iOS version and has an earphone connection.

                                            2. 1

                                              Thanks for the help with that website.

                                              You are moving the goal posts

                                              My price error does lower the bar of entry a lot, true. I could just stop writing now and pretend that 1500 would be the ideal price and that I regret not buying it. But 1500 is still a lot of money when you can get something very similar that has more features and costs less. I was able to buy a convertible from Lenovo for 1200 that has more ports, a replaceable SSD and a high-end Ryzen, and that has supported Linux (and Windows) since day 1 (so it is not a glorified Android tablet).

                                              I often buy a new MacBook every 18-24 months and usually sell the old one at a ~400 loss.

                                              I’m running phones for 8 years, laptops for 6+ years, desktops for 10 (with some minor upgrades). I wouldn’t want to invest that much time into buying a new one and selling the old one. But I can see your point: you’re essentially leasing Apple hardware.

                                              it’s usually worth maybe 300-400 Euro after two years

                                              If you’re trying to always buy the newest thing available, fair. I’m trying to run these things for a long time because I hate switching my setup all the time and I like being environmentally friendly.

                                              Each to their own

                                              I agree, but I can see now where the difference in our preference comes from and I think that’s worth it.

                                      2. 1

                                        Note though that I’m not commenting too much on the OS aspect, it can be linux or windows, I use both equally. And if not for those 8GB of RAM, I’d have bought an apple laptop last week.

                                      1. 14

                                        Note that the second paragraph is an hypothesis. It is generally accepted that the fact that languages are mutable is the reason that the strong form doesn’t hold, but the relationship between patterns of thought and language is still very fuzzy.

                                        The later hypothesis depends on the idea that programming languages are immutable, and I find that somewhat misleading. There are two ways in which you can modify a language to express thoughts that couldn’t be expressed in the old language:

                                        • Add new vocabulary (or redefine existing vocabulary).
                                        • Add (or modify) grammar.

                                        The first of these is far more common in natural languages. Most fields have some form of jargon, which extends an existing language with new terms to cover domain-specific knowledge. In most programming languages you can add:

                                        • New proper nouns (objects / structures / records)
                                        • New nouns (field / variable names)
                                        • New verbs (methods / functions)
                                        • New adjectives (generic types parameterised on another type)
                                        • New adverbs (higher-order functions)

                                        In some ways, programming languages are more extensible than natural languages because the dictionary, which defines the meaning of the word (in a prescriptivist world view, at least), is carried around with the word. You can use a new word only in a context that carries its definition (via source code or some compiled representation) and that means that you always have a mechanism for providing both the new word and its definition to consumers of your jargon.
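
                                        To make the mapping above concrete, here’s a rough Java sketch (records need Java 16 or later); every name in it is invented purely for illustration:

                                        import java.util.List;
                                        import java.util.function.Function;

                                        class Jargon {
                                            // "Proper noun": a new record naming a specific kind of thing
                                            record Dog(String name) {}

                                            // "Adjective": a generic type parameterised on another type
                                            record Kennel<T>(List<T> occupants) {}

                                            // "Verb": a method acting on a thing
                                            static String greet(Dog dog) { return "Hello, " + dog.name(); }

                                            // "Adverb": a higher-order function that changes how another function is applied
                                            static <A, B> Function<A, B> loudly(Function<A, B> f) {
                                                return a -> { System.out.println("applying loudly"); return f.apply(a); };
                                            }

                                            public static void main(String[] args) {
                                                // "Nouns": variable and field names
                                                Dog fido = new Dog("Fido");
                                                Kennel<Dog> kennel = new Kennel<>(List.of(fido));
                                                System.out.println(loudly((Dog d) -> greet(d)).apply(fido));
                                                System.out.println(kennel.occupants().size());
                                            }
                                        }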

                                        This is a space that I keep trying to persuade psycholinguists to look at more because I think it has some fascinating implications for language design. In particular, I came across a paper about 15 years ago that showed that only about 10-20% of humans naturally think in terms of hierarchies. This was borne out by the success of early versions of iTunes (which replaced a hierarchical file structure with a filter mechanism and was incredibly popular with non-programmers). I hypothesise that the fact that hierarchies are so deeply entrenched in the grammar of most programming languages is a significant factor in why only 10-20% of people find it easy to learn to program. Most languages have hierarchy for namespaces, for scopes, for call stacks, and so on. They’re so pervasive that we don’t notice that they’re there and it’s hard to imagine a language without them, yet the most accessible languages (Excel with its flat grid, visual data-flow languages with their pipelines) don’t have any kind of strict hierarchy.

                                        1. 4

                                          Note that the second paragraph is an hypothesis.

                                          Correct, everything beyond the first paragraph is my own speculation.

                                          The point you make about extending vocabulary is a good one, but I don’t agree with your mapping of parts of speech.

                                          For one, I don’t think nouns map to variable/value/field names. Consider this Java snippet:

                                          Dog fido = new Dog();

                                          Here, it seems clear that fido (variable name) is a proper noun, and it is instead Dog (a type) that is a noun.

                                          This is a bit of a philosophical point, but I also don’t think verbs map well to functions, at least not in a language where they’re first-class. Say I have a function that accepts dough and returns bread. If I can assign that a name and pass it around as I can data, it seems better described as an oven, rather than the action of baking. I would say that function types correspond to nouns, and function names to proper nouns.
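
                                          Here’s a minimal Java sketch of what I mean (Dough, Bread and the oven are all made-up names, and records need Java 16+):

                                          import java.util.function.Function;

                                          class Bakery {
                                              record Dough() {}
                                              record Bread() {}

                                              // The "oven": a function value with a proper-noun name, passed around like any other data
                                              static final Function<Dough, Bread> stoneOven = dough -> new Bread();

                                              // Another function that takes the oven as an ordinary argument
                                              static Bread bakeWith(Function<Dough, Bread> oven, Dough dough) {
                                                  return oven.apply(dough);
                                              }

                                              public static void main(String[] args) {
                                                  System.out.println(bakeWith(stoneOven, new Dough()));
                                              }
                                          }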

                                          This means that the things we can most flexibly define are proper nouns, which are unambiguously not part of the language. Just as my name is not a part of English, my variable/value names are not a part of a programming language.

                                          We can still define new nouns and adjectives, in the form of types and traits, but our power to do this in most languages is very weak, as we can only define new types as hierarchies of existing ones.

                                          1. 4

                                            (Note: not parent!)

                                            This is a bit of a philosophical point, but I also don’t think verbs map well to functions, at least not in a language where they’re first-class. Say I have a function that accepts dough and returns bread. If I can assign that a name and pass it around as I can data, it seems better described as an oven, rather than the action of baking. I would say that function types correspond to nouns, and function names to proper nouns.

                                            This is indeed a bit of a philosophical question, as in, it’s a question that even Aristotle struggled with in De Interpretatione, so it’s literally 2500 years old. (Caveat: if you wanna read it, be careful, because his definitions for ‘noun’ and ‘verb’ map really badly to modern English grammar).

                                            That being said, “a name [denoting an action] that you can pass around as data” would correspond to a verb in the infinitive or present participle form. As in, you can read (defun add (x y) ...) as either “to add two numbers” (FWIW this is what a lot of old English recipe books used for recipe names ;-) ) or “[a description of] adding two numbers”. Aristotle had it easy; he only needed the ancient Greek infinitive to cover all that ground. But it could also be worse – I think modern English grammar no longer distinguishes between participle and gerund form, so those two suffice, but some Romance languages need four forms to cover all that ground (infinitive, participle, gerund and supine).

                                            Verbs in the present participle form act as nouns, so present participle forms that have come to have enough of a denotative sense on their own (like “reading”) are actually listed in the dictionary as nouns. Verbs in the infinitive can act as nouns, too (“To see the world is all he wanted”) but you won’t find “to see” listed as a noun.

                                            Why do these count as verbs? The answer is also language-dependent, and there are more of them in the English language, but there are two of them that are particularly relevant to your example:

                                            • First, and one which Aristotle would’ve likely used as well [1]: “oven” is inherently countable, whereas (edit: a manipulable description of) “the act of baking” is not (and not because it’s a mass noun, like furniture or money). Two distinct ovens carry out the same act of baking, so the act of baking must be distinct from any individual oven. On the other hand, an oven can be built out of bricks, but the act of baking cannot be, so the act of baking must also be distinct from any oven in the world, current or future. (A further argument, that the idea of baking must also be distinct from the idea of an oven, as it would’ve been impossible for the idea of something that performs an action to exist without the idea of that action already being in existence, would’ve got me expelled from the lyceum, but I’m gonna make it just to annoy the school principal, for old time’s sake).
                                            • Second, because the act of baking can have tenses, voices, and takes complements, which an oven does not.

                                            Some of these are a little hard to translate to functions per se (but so is the act of baking) because code inherently has no tense (although I guess one could argue the tense is implied: all code is in the future tense, all logs are in the past tense, and as for the present maybe Zeno had a point about that stupid tortoise of his). However, some of them do readily translate: functions, for example, can take complements (in fact the ones that take arguments don’t even work without them), whereas nouns do not.


                                            1. As in, he would’ve used this to distinguish between “ovens” and “baking” as separate types of things (separate causes, among others). Aristotle’s De Interpretatione is 1/3 semiotics, 1/3 metaphysics, and 1/3 a grammar of the Ancient Greek language, which is why his classification of nouns and verbs maps so badly to modern English grammar (besides the obvious, that ancient Greek is not English, thank God). Aristotle would’ve classified “oven”, “baking” and “baked” as nouns, “is baking” and “is baked” as verbs, and “was baking” as a case of a verb.
                                            1. 2

                                              This was some interesting context, thanks for sharing!

                                              Two distinct ovens carry out the same act of baking, so the act of baking must be distinct from any individual oven.

                                              I fixated a bit on this, and I actually think it makes my case. I can theoretically write many different functions that all accept dough and return bread; these are all individual ovens with proper-noun names. These all have the same type though, which is the abstract concept of baking or oven.

                                              :: Dough -> Bread

                                              a la Haskell

                                              Function<Dough, Bread>

                                              a la Java

                                              1. 5

                                                We’re kind of running into the same problem scholastic philosophers started running into during the Renaissance – reasoning by analogy is attractive but it’s a dangerous thing to do, because there are many things that apply to physical processes which accept dough and return bread, but don’t apply to functions in code.

                                                That being said, this is actually a fun exercise so what the hell :-D.

                                                I can come up with a “function that accepts dough and returns bread” by subjecting it to laser beams of different wavelengths. It doesn’t have to happen in an enclosure that traps heat: water from the outer layer evaporates more quickly than the one inside, so a high-wavelength laser will create a dry outer layer, trapping the more liquid core, which will then be easy to heat up using a lower-wavelength laser (e.g. in the microwave region). Arguably, it’s baking, because it accepts dough and returns bread, but since I’m poking laser beams at the dough, this is a lightsaber, not an oven. Baking can’t be both an oven and a lightsaber. (Although you might be interested to think of it in Platonic terms: both an oven and a lightsaber could be said to partake in the idea of baking, which is a thought I find particularly interesting because it implies Martha Stewart and Darth Vader have a lot of common ground).

                                                Or, coming at it from a more, erm, aristotelic angle. A function that accepts dough and returns bread describes a particular type of change. In order for it to be an oven, an oven (I mean an actual oven, or at least the abstract idea of an oven, although that would also get us expelled from the lyceum) should also describe a particular type of change. But it doesn’t: first of all, an oven isn’t a description, and second, it’s an object (or a class of objects) which can be used to enact many types of changing: the changing of dough into bread, but also the change of clay into pottery (which is sort of like baking but certainly not the same function), or the change of a combustible material into ash, smoke and vapour (which is nothing like baking).

                                                Now, to give ancient philosophers a break: unfortunately, I haven’t gone to school in an English-speaking country so I can’t say for certain but it seems to me that American schools (presumably British ones, too, but I’ve not spoken to enough British people) teach grammar in the same backwards way that they taught me grammar when I was in school. One of the many silly things they tell you, mostly because it makes it very easy for teachers to inflict psychological pain upon their students, is that “the type” and “the meaning” of a word is what determines what part of speech it is: if it’s an object, it’s a common noun, unless it’s a person or a particular object that was given a name, in which case it’s a proper noun (but not just any particular object, that would just be an articulated noun). If it’s an action, it’s a verb.

                                                This causes a great deal of confusion, and not only about nouns and verbs, but also about nouns and adjectives, and especially adjectives and adverbs, where it’s particularly obvious that it’s bollocks. For example, they teach you that adjectives are supposed to describe an object: “health” is obviously a noun, but what about “healthy”? It’s an adjective in “I’ve never seen a healthy Panda” but an adverb in “It’s important to eat healthy”.

                                                In fact, at least in the grammar of European languages (I honestly have no idea how this goes for others), it’s the syntactic behaviour of a word that determines what part of speech it is (edit: that is to say, most modern authors define parts of speech not just in terms of meaning, but also in terms of what syntactic role it plays). Or, to put it in possibly more adequate terms (caveat: I’m not sure if I’m using the correct English terminology here), parts of speech are syntactic classes, rather than semantic categories: words are assigned to one part of speech or another (noun, verb etc.) based on the structural relationships between these elements and other items in a larger grammatical structure, rather than meaning.

                                                (Edit: that’s why a verb, for example, is described not only in terms of what it conveys through its meaning, but also in terms of what it conveys through its relationship with other words in a sentence: it’s not just “what it means”, but also what role it can play in a sentence (a predicate), and how it may be changed in relation to a wider context – i.e. whether it can have tense, voice, number, gender and so on.)

                                                That is why the same word sometimes displays the behaviour of different parts of speech depending on context. For example, “to write” can act as a noun, playing the role of a subject in a sentence: “To write seemed foolish under these circumstances”. But it can also act as a verb: “I intended to write you earlier”.

                                                So the name that denotes a particular sequence of instructions applied to a certain set of arguments (“a function”) can “act as” a verb or a noun without breaking any law :-). That’s why it “feels like a noun” when you do this:

                                                btn->on_click_cb = &my_click_callback;
                                                

                                                but it “feels like a verb” when you do this:

                                                btn->on_click_callback(&btn, &data);
                                                

                                                There’s an even more complicated story about this in the English language about how verb phrases and noun phrases work. But I honestly don’t know it well enough because English isn’t my first language so I never studied English grammar in that much depth (also I honestly hated grammar in school, and that definitely didn’t help)

                                                Edit: oh, yeah. That raises the obvious question: well, if it’s not (just) the meaning of the word that determines what part of speech it is, how come we say “write” is a verb? I mean, I literally just gave an example where “to write” acts as a noun, but if you look up “to write” in the dictionary it says verb. Why do we treat these things – nouns, verbs, adjectives – like they’re totally distinct things that never overlap?

                                                Well, first, for the most part, they don’t, so it’s convenient. Second, there’s actually some tortuous history here, which you can sort of guess from the etymology: “noun” comes from “nomen”, name. It was supposed – “was” as in, back in the Renaissance or so, when we first started compiling complete descriptive grammars of European languages, Latin, that is, because it’s always f&^@ing Latin – to mean exactly that: a word that names something, whatever that is. “Verb” comes from “verbum”, “word”, which seems like it’s entirely arbitrary until you realize the Latins (from which we took the word) inherited the dichotomy of Greek grammar between “onoma” (name) and “rhema” (word, as in saying, utterance, something that is said).

                                                Plato is, I think, the first one who operated with this distinction, between rhema (words that describe actions) and onoma (words that describe those who perform actions). The distinction is a little obscure (here) and there’s been plenty of speculation about how and why Plato came up with those names. The one that’s probably least controversial IMHO, as in, it relies the least on understanding Plato’s metaphysics while making it least likely for the ghost of Plato to haunt you because you’re using it wrong, is that “rhema” are “things that are said of others”, whereas “onoma” are, well, said “others”.

                                                However, it’s worth bearing in mind that Plato was explicitly talking only about a fairly restricted type of sentences. The word that Plato uses is commonly translated as “discourse” (that’s the term used above, too) – which, in the most generous interpretation of Plato’s text, would be defined as “an utterance that can be either true or false”. The fact that we then kept this terminology for every sentence and word out there is entirely on us. But the fact that meaning is insufficient to determine what part of speech a word is (or, to put it another way, that the same words can be a different part of speech in different contexts, so there’s no 1:1 mapping between them) has dawned on many other people since Plato’s time.

                                                tl;dr IMHO you’re not wrong to think of functions as either verbs or nouns; there are plenty of words that act as either in different contexts. I am also extremely fun at parties.

                                                1. 3

                                                  That being said, this is actually a fun exercise so what the hell :-D.

                                                  Damn right it is!

                                                  It is kind of an exercise in backwards reasoning though. In truth, my argument about function types being nouns only holds because the people who invented first-class functions already believed functions should be equivalent to data, and so implemented the language that way.

                                                  I think you’re right on the money with the on_click example, the act of applying a function is what makes it a verb.

                                                  1. 4

                                                    I think you’re right on the money with the on_click example, the act of applying a function is what makes it a verb.

                                                    Yeah, I didn’t want to amend my post with that because it was already way too long but, even though we call them by the same name, on_click in the function call and on_click in the function definition, for example, are instances of different things. We don’t need to summon the ghosts of dead Greek philosophers to figure that out, it’s enough to think of how a compiler would treat the most trivial case – procedures (no arguments) in a language without higher-order functions. The former would be just an alias for an address, whereas the latter is a binding of a scope (and a sequence of instructions in it) to a compilation state. That’s why you can replace the former with an address, but not the latter.

                                                    Ancient Greek philosophers sure had a lot of free time…

                                            2. 2

                                              Your imperative view is valid, but there’s also a relational way of considering verbs: let verbs designate relations, not functions.

                                            3. 2

                                              They’re so pervasive that we don’t notice that they’re there and it’s hard to imagine a language without them, yet the most accessible languages (Excel with its flat grid, visual data-flow languages with their pipelines) don’t have any kind of strict hierarchy.

                                              I think @crazyloglad was interested in some applications of hierarchy-less models for UIs (among others), too, and I’ve no idea if @-mentions get you pinged in any way but maybe they do :-D?

                                              1. 1

                                                @mentioning did not ping me, though I did just stumble upon this after the thread was briefly mentioned on IRC. I take it that you are referring to Pipeworld.

                                                Pipeworld is indeed poking at the intersection of decomposing hierarchies into data streams, presenting that as different aural/visual representations, and recomposing that dynamically into one or several hierarchies by the user, and a handful of other things – though I am at a loss for describing it in a more approachable way; there is a lot to unpack in there and, lacking other incentives, I channeled my inner Diogenes and just dumped some cryptic visuals and feature descriptions.

                                                A lot of the interest was born out of poking around in SCADA systems and the hypotheticals of how these would evolve as more and more cheap compute gets introduced and the ‘we are not connected to a larger communications network’ gneigh-sayers stopped horsing around. Multiple stakeholders have different needs from shared compute over shared constraints, so how should the entirety of the user interface be formed to satisfy and encourage collaboration between them — that sort of reasoning.

                                                Looping back a bit to the thread and article in question, though it dips into the same realm, how about a detour through music and the role of sheet music and its notation from the vantage point of the composer; individual performers; conductor; the orchestra and the audience. How much of the evolution of music notation are we actually peepholing here?

                                              2. 2

                                                Hm interesting hypothesis. I would point to R and Matlab as other examples of mostly non-programmer languages without much hierarchy and namespacing. The users are highly technical but the code I’ve seen uses less namespacing than code from programmers. Actually when they move to Python, there seem to be frequent complaints about imports and the like (which is exacerbated by Python’s complex packaging mechanism)

                                                Shell basically has one namespace for commands / functions, and I think people like that too. However I do think non-programmers do have problems with the file system hierarchy, which good shell scripts will make use of.

                                              1. 3

                                                I’m not sure I’m convinced by the linguistic argument here.

                                                The author writes off the strong hypothesis of linguistic determinism on the following basis:

                                                We can propose a mechanism to explain this starting from two premises: humans are capable of abstract thought in the absence of language, and are capable of modifying their languages. From these, we can conclude not only that the strong version is false, but that the relationship appears to flow in the other direction: one thinks an abstract thought, and modifies their language to express it. Thought determines language.

                                                I get that this is a programming blog post and not a linguistics PhD thesis, but it really skips over a lot of the important details – were things really this simple, nobody would have accepted strong linguistic determinism in the first place. But more importantly, an approach based on the modifiability of natural language sells language itself a bit short, and this has consequences for the rest of the author’s argument.

                                                Yes, the author is absolutely right to point out the ways in which language changes, and to use this as an argument against strong linguistic determinism. But in the context of PLT, I don’t think it’s relevant. The fact of the matter is that in a huge number of cases, speakers do not have the power to modify the language they use to express a given idea. Perhaps they are L2 speakers, or bound to a particular formal standard – it doesn’t matter, because since the development of the very first language families, humans have had all the grammatical tools they need to express anything they want in any language. In light of this, the so-called “weak Sapir-Whorf” hypothesis can be summarised from the perspective of a language user in the following maxim:

                                                All languages can express all ideas, but every language sees the world in a different way.

                                                That is to say, no modification needed! There is nothing I can say in English that can’t be translated roughly into any other language. Where does that leave us? Well, the points about the modifiability of programming languages for the most part still stand, of course. I’m just not convinced that the comparison to natural languages in this context is particularly productive. If programming languages are like natural languages, then it surely shouldn’t particularly matter which one you use: I can express my ideas in German or in Russian with more-or-less the same (average) level of efficiency, even if the structures involved are different. But that isn’t the case when it comes to programming languages – some languages, when compared to others for example, have very tangible differences in the likely ‘correctness’ of the resulting program, or in the effort required on the programmer’s part to write it. That’s because programming languages are not like natural languages, and in particular, exhibit different properties when it comes to modifiability.

                                                (NB: there’s also a lot to be said here about natural language development – the author’s argument depends somewhat on the idea that language changes according to its speakers’ needs, which is by no means a given – but I don’t know nearly enough about this to argue about it!)

                                                1. 2

                                                  but it really skips over a lot of the important details

                                                  Agreed, I don’t have a background in linguistics, so my ideas are lacking nuance and should be taken with a pinch of salt. I tried to make clear with my word choice that this was a speculative exploration of the topic.

                                                  speakers do not have the power to modify the language they use to express a given idea. Perhaps they are L2 speakers

                                                  I would contend this point. Where I live, there’s a language variant specific to L2 speakers. Formal standards do restrict modification, but I did address that in the aside about codified vs ad-hoc languages. L2 speakers are more likely to be held to formal standards, but I think that has more to say about culture and politics than it does about the nature of language.

                                                  All languages can express all ideas, but every language sees the world in a different way.

                                                  This is an interesting idea to explore. Is this the case because there is a certain universality to language, or because humans from different cultures have similar ideas they commonly wish to express on account of being human, and so sculpt their languages to be able to express the same ideas?

                                                  I would offer a different analysis of this with regard to programming languages. General-purpose programming languages are Turing complete, so for any program in one language, I could write a program in any other that represents the same output/state change. In this sense all general-purpose programming languages could be said to be able to express ideas equivalently. How they express those ideas might be different though due to constraints like “no mutable data”, “no first-class functions” etc. This looks similar to how you can express the same idea in German and Russian, but might have to express it differently. From this perspective programming languages come out looking rather similar to natural ones.
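
                                                  For a toy illustration, here is the same idea (summing the squares of a list) sketched in Java, once with mutation and once under a “no mutable data” constraint; the names are made up:

                                                  import java.util.List;

                                                  class SameIdeaTwoWays {
                                                      // Imperative phrasing: mutable accumulator
                                                      static int sumOfSquaresImperative(List<Integer> xs) {
                                                          int total = 0;
                                                          for (int x : xs) {
                                                              total += x * x;
                                                          }
                                                          return total;
                                                      }

                                                      // "No mutable data" phrasing: the same idea as a pipeline over immutable values
                                                      static int sumOfSquaresFunctional(List<Integer> xs) {
                                                          return xs.stream().mapToInt(x -> x * x).sum();
                                                      }

                                                      public static void main(String[] args) {
                                                          List<Integer> xs = List.of(1, 2, 3, 4);
                                                          System.out.println(sumOfSquaresImperative(xs)); // 30
                                                          System.out.println(sumOfSquaresFunctional(xs)); // 30
                                                      }
                                                  }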

                                                  1. 2

                                                    Thanks for your reply, didn’t realise you were the author! It was a very interesting post, so thank you for sharing it here.

                                                    Formal standards do restrict modification, but I did address that in the aside about codified vs ad-hoc languages. L2 speakers are more likely to be held to formal standards, but I think that has more to say about culture and politics than it does about the nature of language.

                                                    I agree with you here, I’m not sure I made my point particularly clear. What I mean is that modifiability is in no way a prerequisite to the free use of language. There are people who speak (a proposed variant of) reconstructed Proto-Indo-European for fun and are in no way limited in the concepts they can express.

                                                    Is this the case because there is a certain universality to language, or because humans from different cultures have similar ideas they commonly wish to express on account of being human, and so sculpt their languages to be able to express the same ideas?

                                                    Well, if you believe Noam Chomsky’s fan club, there is definitely a very significant universal component to language. At any rate, the evidence for this theory is very compelling.

                                                    How they express those ideas might be different though due to constraints like “no mutable data”, “no first-class functions” etc.

                                                    The important thing to note here is that for programming languages, these differences have a significant impact on the end result. We need to develop new programming languages because they allow us to write more efficient/correct/desirable programs more easily. More than 6000 years of natural language development, on the other hand, have not resulted in any improvement to our ability to express ideas using language; language change is just a thing that happens, with consequences for culture but no consequences for the efficiency or usefulness of language as a system. That’s why I hold that modifiability plays a fundamentally different role in PLT.

                                                1. 3

                                                  I like the overall point of comparing human language as a framework for thought in the same way your first programming language is a framework for programming, but I have a few comments about the linguistic half, as someone with a degree in it.

                                                  I think comparing humans to compilers is a bit of a mistake. Compilers have a binary acceptable/unacceptable state, while humans process language on a range of acceptability, and it’s that range of acceptability that allows human language to change. The fact that if I change my compiler it won’t run on your machine isn’t a minor technical difficulty - it’s a fundamental difference.

                                                  This is because language change doesn’t just happen on a communal scale - it happens on an individual level as well. The language a person speaks is always changing, so you always have to be able to change your internal processor.

                                                  This also means that the distinction they draw between codified and ad-hoc languages is correct, except that all natural languages, linguistically speaking, are ad-hoc by their definition, so they’re just redefining the dichotomy they mention.

                                                  As a side note, I think the author is missing the most interesting comparison of natural language and computer languages - HTML. With its explicit “be liberal in what you accept” philosophy, the web allows users to write technically invalid HTML while still displaying acceptable webpages. This is a much better analogy for human language, where you can still understand someone speaking a different dialect.

                                                  1. 2

                                                    This also means that the distinction they draw between codified and ad-hoc languages is correct, except that all natural languages, linguistically speaking, are ad-hoc by their definition, so they’re just redefining the dichotomy they mention.

                                                    It isn’t the same dichotomy though; I did give two examples that this dichotomy classifies differently. I made this distinction to illustrate the same point you made: that computer languages (as we use them today at least) aren’t comparable in terms of evolution to “natural” natural languages, because they can’t be ad-hoc.

                                                    As a side note, I think the author is missing the most interesting comparison of natural language and computer languages - HTML. With its explicit “be liberal in what you accept” philosophy, the web allows users to write technically invalid HTML while still displaying acceptable webpages. This is a much better analogy for human language, where you can still understand someone speaking a different dialect.

                                                    This is an interesting case, but I think it still has the same shortcoming. There’s no way to convey new constructs or reach consensus on them. It really only enables backwards-compatibility, which, I guess, is what it was designed for.

                                                  1. 12

                                                    I don’t think the proposed syllabus really achieves what the author says it does. In education, neat plans often have a way of imploding when they meet students.

                                                    As I see it, the “WHY!?” has two components:

                                                    1. What’s my motivation to learn this?
                                                    2. How do I use this in practice?

                                                    The first is the hardest, especially if you have to teach a large group. Who’s to say they will be interested in your card game problem? Experience tells me most will see it as contrived.

As for the second, I’m fairly certain students will fail to generalise from solving one hyper-specific problem. Yes, you’ve taught them how to make a card game, but will they be able to apply these tools to another problem?

                                                    Teaching is definitely easier when you can pair it with practical examples, but I think that’s how any decent teacher would teach the first syllabus anyway. People with no experience in education often like to armchair speculate on how it ought to be done, but the real problems in education tend to be systemic and institutional, which the teachers usually have no power to affect.

                                                    1. 6

                                                      In my introductory robotics course we follow an approach similar to what he recommends, and although I loved the format and my students were overall very successful in the course, I noticed that other than a handful, students really struggled to take the previous material and apply it to a new domain. They tended to compartmentalize solutions versus seeing them as tools that could be used in other ways. It did improve some as we went but I think, like you said, that there are systemic issues that need to be addressed to change that kind of thinking. No CS course can be expected to overcome that without changes elsewhere too.

                                                    1. 2

                                                      You would close your project off to contributions from anyone outside the collective.

                                                      One of the joys of open source is finding a bug, or missing feature that I care about, being able to fix/implement it myself, and send a patch upstream. If I contribute under GPL, and you accept my contributions, you lose the right to relicense. Well, unless you make me sign a CLA (no thanks).

                                                      1. 1

I have a lot of Open Source projects, and the amount of contributions is minimal compared to the number of users. Even contributors are often additional work, because their changes have to be guided, reviewed, and then supported.

I’ve signed CLAs before and it’s annoying, but I don’t mind otherwise. It’s a small price to pay for otherwise free and open software provided to me, especially if it allows the developers to support themselves without begging for donations while doing unpaid work for me, everyone else, and especially companies making a lot of money using their software.

                                                        1. 2

                                                          If that’s the case for you, then I guess it’s no loss.

For me at least, the point of the GPL is protection against proprietary software, which signing a CLA, and this type of arrangement in general, undermines.

Given that your sentiments, and those of many others here, have little to do with software freedom or copyleft, I wonder if you wouldn’t be better served by a non-commercial license rather than a copyleft one.

                                                          1. 2

For me at least, the point of the GPL is protection against proprietary software, which signing a CLA, and this type of arrangement in general, undermines.

                                                            The Free Software Foundation used to require people to sign over their copyright for contributions to all official GNU projects. If I recall correctly, they did this so they had the freedom to retroactively update code to newer GPL versions. It seems like they’ve relaxed this a bit now.

                                                            1. 2

                                                              I wouldn’t sign a CLA for the FSF either.

                                                              That said, the FSF isn’t using their CLA to sell unlicensed versions of the code.

                                                              1. 4

                                                                That said, the FSF isn’t using their CLA to sell unlicensed [proprietary] versions of the code.

Technically, yes they are, and that is precisely why they want copyright assignment (not a CLA). This used to be where most FSF revenue came from (more than donations). If a company was caught violating the GPL, the FSF would offer them a time-limited retroactive proprietary license to the code. They then have the duration of the proprietary license to comply with the GPL. If they’re not in compliance at the end, then the FSF can take them to court for copyright infringement.

                                                                Note that granting this license is possible for the FSF only because they own all of the copyright. RedHat, for example, couldn’t do the same for Linux because hundreds of other people would still have standing to sue for copyright infringement on their contribution to the Linux kernel.

                                                            2. 0

For me at least, the point of the GPL is protection against proprietary software, which signing a CLA, and this type of arrangement in general, undermines.

Signing a CLA by no means undermines copyleft licenses or their spirit. There can be an AGPL project; you contribute, sign the CLA, and everybody can still use it as AGPL-licensed free software.

                                                              1. 2

                                                                The point of AGPL isn’t to have a badge saying “AGPL”, it’s that people who make derivatives of one’s work have to share their source. This arrangement undermines that, because the entire point of the collective is that you sell the right to strip the license.

                                                                1. 2

                                                                  The root problem is that dual-licensing isn’t an open source business model. It is a proprietary business model. Your money comes from selling proprietary software. This is predicated on the idea that proprietary software is more valuable than F/OSS, which is one that any proponent of Free Software and most advocates of Open Source would reject.

                                                                  You don’t choose a specific F/OSS license in a dual-licensed open/proprietary project because you believe that it’s the best way of ensuring freedom for users or because you think it’s the best way of growing a community, you choose it to give the most friction to any of your downstream consumers who have money. You intentionally make using the open version difficult to encourage people not to use it.

                                                                  This is fine if you want to be running a proprietary software company but still get some marketing points from a try-before-you-buy version that is technically open source but claiming that it’s an open source (or Free Software) business model is misleading.

                                                                  1. 2

Yes, but selling exceptions to the AGPL is no worse than using MIT in terms of software freedom. In fact it is better! In terms of software freedom, MIT < AGPL with selling exceptions < plain AGPL. So if you are okay with contributing to MIT projects, you should be okay with contributing to AGPL projects with a CLA for selling exceptions, software-freedom-wise.

                                                                    https://www.gnu.org/philosophy/selling-exceptions.html is of the same opinion, by the way.

                                                                    1. 1

Exactly. Unless someone is an AGPL maximalist (which I can understand and respect), I don’t know what their qualms with dual-licensing are. It’s an ideological compromise from AGPL purism, but waaaay more pure than liberal OSS licenses, which are like dual-licensing but with an unconditional, perpetual, reciprocity-free license for all commercial applications.

                                                          1. 84

                                                            Stallman was afraid that users would extend his software and not hand over their contributions.

                                                            This gets it backwards. The fear is not “as a developer, I fear that people won’t contribute extensions to my software”, it’s “as a user, I fear that I will be made to use software I cannot audit, extend, or modify”. Given that this is the everyday reality for almost everyone using technology, I’d say the fear is well founded.

                                                            The GPL takes no issue with you modifying software for your own use, the issue comes when you distribute binaries without source.

                                                            1. 16

                                                              I seized on that exact same sentence, to the extent that I was unable to carefully read the rest of the piece.

                                                              IMO it’s a fundamental misunderstanding of Stallman, and so much of the rest of DHH’s analysis rests on the point that I don’t think the piece is coherent. (Of course he can be correct when he says “I won’t let you pay me for my open source” even if his reasoning behind that is incorrect.)

                                                            1. 3

                                                              Also worth noting conjure, which supports a much wider array of lisps (Clojure, Scheme, Racket, Janet, Fennel, Hy), and is written in a lisp itself (Fennel)! It does require neovim though.

                                                              1. 2

                                                                So I’m not familiar with the ECS style of doing things - what’s the granularity of an entity in this system? Does that go down to a point? quad? mesh? The way I understand the examples, lights and meshes are definitely a kind of entity, but could be subdivided?

                                                                1. 2

An entity is just a container of components; I believe points and meshes are generally components contained by an entity, e.g. an enemy entity might contain a location component (a point) and a body component (a mesh).

                                                                  The way I think of it is that entities generally map to objects in OO, components to fields, and systems to methods. The big difference is that systems are not bound to entities, and instead each system queries the set of entities to find ones to act on.
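
To make that mapping concrete, here’s a minimal sketch of the pattern in Go. This isn’t any particular ECS library’s API; Entity, Position, Velocity, World, and movementSystem are made-up names for illustration:

    package main

    import "fmt"

    // Components are plain data, stored separately from any "object".
    type Position struct{ X, Y float64 }
    type Velocity struct{ DX, DY float64 }

    // An entity is just an ID; its "fields" live in component tables.
    type Entity int

    // The world keeps one table per component type, keyed by entity ID.
    type World struct {
        Positions  map[Entity]*Position
        Velocities map[Entity]*Velocity
    }

    // A system is a free function that queries for the components it needs,
    // rather than a method bound to a particular entity.
    func movementSystem(w *World) {
        for e, v := range w.Velocities {
            if p, ok := w.Positions[e]; ok {
                p.X += v.DX
                p.Y += v.DY
            }
        }
    }

    func main() {
        w := &World{
            Positions:  map[Entity]*Position{1: {X: 0, Y: 0}},
            Velocities: map[Entity]*Velocity{1: {DX: 1, DY: 2}},
        }
        movementSystem(w)
        fmt.Println(*w.Positions[1]) // prints {1 2}
    }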

                                                                  1. 2

I think one of the better ways to understand the value of storing entity fields as components is through concurrent systems. The ECS model, despite being used heavily in OO languages, is anti-OO. With OO data modelling you would need two systems that each want to change an entity to run serially, or use some type of locking, because your unit of addressability is the entity object. By storing entity details in components, which are not part of the entity object, a system can address just one type of component for an entity (or set of entities) while another system can address a different component for the same entity. Since the queries are disjoint and neither system has a reference to the entity object, the systems can run concurrently without worrying about conflicts.

                                                                    In some of the bevy examples a system will pass a component back to the engine to get a reference to the entity. If you dig deeply enough you’ll find that the entity reference is an ID rather than a memory reference. This is another lens into the anti-OO aspect of ECS. I’m not familiar with the specific storage techniques used by bevy but at a conceptual level you can think of entities as a table in a relational DB with a single column that is also the primary key. Each component in ECS would then be a different table with a foreign key reference back to the entity. This sparse storage is another way to see the utility of ECS and IMO why it has been popular in C++ game studios. In C++ every entity of a given type would need to store fields for every value that would ever be needed by any entity of that type. If you had a class RpgNpc and you wanted to track whether a player had met the NPC you could easily add a boolean hasBeenMet field to the class. This would incur a storage cost for all NPCs even at the very beginning of the game. With ECS you have the overhead of tracking the HasBeenMetComponent values but you only have those components in memory for NPCs the player has actually met, which would be zero at the beginning of the game.
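
Following that relational analogy, here’s a rough Go sketch of the sparse-storage idea, using the same hypothetical has-been-met example (this is not how bevy actually stores components, just the table-per-component picture):

    package main

    import "fmt"

    type Entity int

    // Each component type is its own "table" keyed by entity ID, like a
    // relational table with a foreign key back to the entity.
    var names = map[Entity]string{}        // every NPC has a name
    var hasBeenMet = map[Entity]struct{}{} // only NPCs the player has met

    func meet(e Entity) { hasBeenMet[e] = struct{}{} }

    func main() {
        names[1], names[2] = "Blacksmith", "Innkeeper"
        meet(1)

        // A "system" that cares about met NPCs queries just that table,
        // instead of iterating over every NPC object in the game.
        for e := range hasBeenMet {
            fmt.Println("already met:", names[e])
        }

        // Storage is proportional to NPCs actually met (1 here),
        // not to every NPC in the game.
        fmt.Println("met entries:", len(hasBeenMet), "of", len(names), "NPCs")
    }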

                                                                1. 46

                                                                  Also – cards on the table here – I don’t share the same generational excitement for moving all aspects of life into an instrumented economy.

                                                                  If Moxie really feels this way, I have to wonder how they justified embedding a crypto scam into Signal.

                                                                  1. 2

                                                                    Link/details?

                                                                    1. 10
                                                                      1. 3

                                                                        I’m ignorant; what about being pre-mined makes it more of a scam than otherwise?

                                                                        1. 11

                                                                          The founders owned all coins in existence, of which they held 85% in reserve. At the ridiculously inflated price they tried to push it at, their reserve would be worth several billion USD

                                                                          1. 9

                                                                            That doesn’t necessarily make it a scam. The owner of a pre-mined coin is basically acting as a bank issuing promissory notes. If they hold 85% in reserve, then that 85% doesn’t really exist until they issue it. If they issue it only in exchange for USD at a fixed rate, then it’s just room for expansion and they can keep the exchange rate fairly stable. If they decide to issue some of it without receiving assets that let them cover their promissory note then that’s a problem. The fact that there’s a public ledger should make it easy for third parties to audit whether they’re doing this.

                                                                            That said, I think the Signal in-app payments thing has exactly the problems Moxie describes. It has centralised control (a single issuer of promissory notes) with none of the advantages of centralisation (fraud protection and so on).

                                                                            1. 1

                                                                              what you describe in your first paragraph is still a scam in that it’s obfuscation to skirt regulations that would apply to actual promissory notes.

                                                                              If they decide to issue some of it without receiving assets that let them cover their promissory note then that’s a problem. The fact that there’s a public ledger should make it easy for third parties to audit whether they’re doing this.

The blockchain designs I’m familiar with don’t include information about assets held in reserve. If they did, it would carry no more weight than a mere promise which could just as well be off the blockchain. Unless I’m missing something?

                                                                              1. 2

The blockchain designs I’m familiar with don’t include information about assets held in reserve. If they did, it would carry no more weight than a mere promise which could just as well be off the blockchain. Unless I’m missing something?

                                                                                I don’t know the details of their implementation, but any transaction that involves coins in the reserved 85% being transferred would show up in most distributed ledger implementations. You can’t see the assets that they receive in exchange but if they start issuing them in any great quantity without demonstrating that they have sufficient assets to back them then that’s quite dubious.

                                                                                what you describe in your first paragraph is still a scam in that it’s obfuscation to skirt regulations that would apply to actual promissory notes.

I don’t disagree here, but operating a bank without complying with banking regulations is a very different kind of scam than the typical pyramid scheme that most cryptocurrencies implement. PayPal made a lot of money by being an unregulated bank until they were caught and forced to keep making money as a regulated bank.

                                                                                1. 1

                                                                                  You can’t see the assets that they receive in exchange but if they start issuing them in any great quantity without demonstrating that they have sufficient assets to back them then that’s quite dubious.

                                                                                  I see your point: If the amount of currency in circulation is public then there is at least some informal accountability, though it is quite weak as has been demonstrated in cases such as Tether.

                                                                                  1. 1

                                                                                    It’s also useful for post-facto auditing. There’s a public log of when they released the currency, a later fraud investigation just needs to discover when they acquired the assets that back that currency. I don’t know how they’ve implemented it, but it would in theory be fairly easy to add actions where a user buys or sells fake money with real money to the blockchain, and the amount of money that they paid. A user could then audit the fact that their transactions appeared on the ledger and a later audit could ensure that all cross-currency transactions that appear on the blockchain are matched by a transaction in their bank account.

                                                                                    1. 1

                                                                                      That would make it less of a scam, as would implementing regulated promissory notes on the blockchain, but at that point what’s the point of a separate fake currency.

                                                                                      1. 1

                                                                                        I’m not the right person to answer that. From my perspective, it lets me do things that my bank lets me do for free (transfer money to other people), only it charges me money to do it and isn’t subject to the same consumer protection laws as a bank transaction.

                                                                        2. 1

                                                                          Oh dear

                                                                      2. 1

                                                                        I believe you answered your own question with the word “scam.”

                                                                      1. 2

Absolutely impressive work, as always! While I’m not a fan of the M1’s locked-down nature and Apple’s hardware practices (which is why I haven’t bought a Mac in almost 10 years now), I understand many prefer using Apple hardware while running Linux.

Given that Windows support is out of the question, this might even make Linux in general more popular among Mac power users.

                                                                          1. 2

From what I understand, that article discusses running Windows in a VM. What’s interesting about the work Asahi are doing is that it runs bare metal.

Linux has always been able to run virtualised on Arm Macs; IIRC Apple even demoed a Fedora VM on stage.

                                                                            1. 1

Yes, it’s a virtualised one in this case. But if there’s enough interest, I wonder what will happen in the future. MS already deploys Arm on their platforms, so it’s not impossible they’ll give it a go.

                                                                          2. 1

I just bought a MacBook Pro, partly because I was interested in trying out macOS, but primarily because I would have needed to wait months for most of the somewhat open alternatives I found. I would love to run Linux on that machine!

                                                                          1. 2

                                                                            A long time ago, all the developers had a common dream. The dream was about interactivity, liveness, evaluation…

                                                                            And after a while, we even forgot that we ever had this dream.

                                                                            This assertion is unfounded, and I definitely don’t agree. There are some developers who love those things, and they definitely haven’t forgotten about it. There’s a reason many developers fawn over REPLs. On the other hand, many developers opt away from REPLs, because the sacrifices necessary to enable them aren’t always worth it.

On the topic of the article, while it’s a cool demo, I don’t personally find it revolutionary, or all that useful. The Tour of Go, many programmers’ introduction to the language, is similarly interactive Go programming in a webpage (albeit with compilation done on the backend). And honestly, when reading programming blogs, I’ve never once found myself wishing the snippets were interactive. If I want to tinker with a snippet, I can edit and compile it locally, in a familiar and comfortable environment.

                                                                            1. 4

                                                                              To build on this, I’ve found that blogs/sites that try to do clever interactive things with the examples completely fail to display any of the code snippet if javascript is turned off in the browser. So it’s beyond useless. Not everyone is browsing with a fast computer.

                                                                              1. 1

The fact that the word “dream” is used is telling. It is not a coincidence; it is the only word the author could use. A dream: something vague, not rationally proven to make sense.

Over the last decade or two, every now and then, some website or programming language does this. Pyret, the Crystal programming language, the Go programming language, the Eloquent JavaScript book. They all have these silly text boxes where one can paste code and click execute. I fail to understand why, and to whom, this would be useful. So you are advocating for the usage of a language, and for interactivity, and for being able to use it “on your computer”, and you do so by encouraging the user to NOT run it on their computer? Seems paradoxical to me. Also, if we are talking about JavaScript, there is no dream to chase. You don’t need toy REPLs; your browser already comes with a proper one. To counter the example by Kay: such a dream is already reality for JavaScript. Just open the browser JavaScript console while on Wikipedia.

                                                                                1. 1

                                                                                  So you are advocating for the usage of a language and for interactivity and all for being able to use it “on your computer”, and you do so by encouraging the user to NOT running it their computer?

                                                                                  I think I’ve missed the point you’re trying to make here. The code in the blog post runs on whatever computer the user is using to browse the internet. How is that NOT “their computer” while opening the javascript console on the very same browser while visiting wikipedia is “their computer”?

                                                                                  1. 1

No, that is exactly my point. If you provide a text box on a webpage, the code is either executed remotely, which is the case for most examples I gave, or it relies on (or is) JavaScript, for which we already have a superior experience in the developer tools.

                                                                              1. 10

                                                                                I feel the golang version of the first example is written (purposely?) to be obtuse compared to D. i.e. using flag instead of os.Args. I don’t disagree with the point being made, but shoddy examples undermine it.

                                                                                1. 7

                                                                                  No one said Go wasn’t verbose, but yeah, the example is silly. It’s also not even correct, as the newlines will be stripped, which is probably not intended.

                                                                                  The more obvious way to write it is something like:

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"
    )

    func main() {
        // Read from stdin by default; fall back to a file named on the command line.
        var (
            err error
            fp  = os.Stdin
        )
        if len(os.Args) > 1 {
            fp, err = os.Open(os.Args[1])
            if err != nil {
                log.Fatal(err)
            }
        }

        // Slurp the whole input and echo it back out.
        text, err := io.ReadAll(fp)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(text))
    }
                                                                                  

                                                                                  (Which doesn’t close the fp, but that’s okay for a short-lived CLI program)

                                                                                  1. 5

                                                                                    Also, as mentioned in another comment, the example in both cases is just… a bad way to solve the problem regardless of the language since it’s buffering the entire input unnecessarily.

                                                                                    I know it’s just a toy example in each case meant to show features of the language, but still.

                                                                                    1. 1

Oh yeah, I always forget about io.Copy() for some reason; usually you want to actually use the data you read in your application, so using io.Copy() is pretty artificial here.
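
Building on that, a streaming version is a fairly small change; here’s a minimal sketch (same stdin-or-file setup as the snippet above, just piping through io.Copy instead of buffering the whole input):

    package main

    import (
        "io"
        "log"
        "os"
    )

    func main() {
        // Same stdin-or-file setup as the buffered version above.
        fp := os.Stdin
        if len(os.Args) > 1 {
            f, err := os.Open(os.Args[1])
            if err != nil {
                log.Fatal(err)
            }
            fp = f
        }

        // io.Copy streams in chunks, so the whole input is never held
        // in memory at once.
        if _, err := io.Copy(os.Stdout, fp); err != nil {
            log.Fatal(err)
        }
    }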

                                                                                1. 6

                                                                                  I wonder if this means that we also need a Xerox PARC for our times – and if our current operating systems can be rescued; the desktop revolution required new ones.

                                                                                  1. 11

                                                                                    A lot can be achieved without throwing the whole OS out. The author notes how iOS largely discards the desktop metaphor, and I would like to add that this was achieved despite it being a fork of OSX.

                                                                                    Personally though, I think the largest obstacle in the way of innovation is how locked-down our operating systems and the devices they run on are. If you run a proprietary OS, you can’t experiment with the environment at all, and this is exacerbated on many modern devices, which are designed to run one (proprietary) OS, and actively try to hinder the use of any other OS. The only people who get to rethink the user environment are designers at Apple/Microsoft/Google, and even they only get to do so within the limits set by their company.

                                                                                    Open source OSs have given us outsized innovation relative to their development resources and popularity in this regard, and it’s a shame that they remain obscure to most.

                                                                                    1. 16

                                                                                      I’ve seen very little UX innovation from open source OSs. Maybe more of it exists but isn’t widely publicized, but in that case why not? Wouldn’t it attract users?

                                                                                      Mostly what I’ve seen in open source Unix desktops is (in the old days) terrible bizarro-world UIs like Motif; then clunky copies of Windows or Mac UIs; well-done copies of Windows or Mac UIs; and some stuff that’s innovative but in minor ways. Again, this is my viewpoint, as a Mac user who peeks at Linux sometimes. Prove me wrong!

                                                                                      Back in the early ‘00s Nautilus looked like it was going to be innovative, then it pretty much died and most of the team came to Apple and built Safari instead (and later contributed to iOS.)

                                                                                      Creating great UX requires eating your dog food, and so much of the Linux world seems to be of the opinion that the terminal and CLI are the pinnacle of UX, so we get advancements like full-color ANSI escape sequences and support for mouse clicks in terminals. I have no doubt this makes CLIs much better, but where does that leave the UX that non-geeks use? A CLI mindset won’t lead to good GUI, any more than a novelist can pick up a brush and make a great painting.

                                                                                      (Updated to add: I wonder if some of the Unix philosophy and principles aren’t contrary to good end-user UX. “Everything is a file” means everything has the limits of files as bags-of-bits. The emphasis on human readable file formats (that it takes a geek to understand) means less interest in non-geek ways to work with that data. And “small pieces loosely joined” results in big tool shelves that are very capable but come with a steep learning curve. Also, in a rich user interface, my experience is that connections between components are more complex to do, compared to streams of text.)

                                                                                      1. 8

                                                                                        Another point in your favour: KDE feels like a slavish clone of whatever Microsoft was doing then. Perhaps not in your favour: Emacs is a “different” UI that people actually use, but the foundation for Emacs was laid by Multics Emacs being in Lisp instead of line noise^W^WTECO, and Lucid doing most of the work bringing it kicking and screaming into the GUI.

I really think the Linux, and perhaps broader nerd, fixation on the VT100 and VT100 accessories is actively damaging. We certainly didn’t perfect UIs in 1977. CLIs deserve better, and maybe when we finally escape the character cell ghetto, we can actually move forward again.

                                                                                        1. 8

                                                                                          I think you’re conflating innovative with polished, and with (“non-geek”) user friendly. Something made by a handful of volunteers is never going to be as polished as what a company with a budget in the billions churns out, and something made by terminal weirdos for terminal weirdos is never going to be user friendly to people who aren’t terminal weirdos.

                                                                                          One project I would point to as innovative, dwm, directly addresses some of the things the author brings up in that blog post. It has a concept of tags, where every window is assigned a number of tags, and the user can select a number of tags to be active. This results in all windows with one of the selected tags appearing on the screen. This directly maps to the concept of “contexts” in the blog post, and takes it a step further by allowing contexts to be combined at will. It also allows the desktop to be decluttered at a keystroke, yet still have all the prior state immediately accessible when needed. dwm is definitely not polished, and it sets an intentional barrier to entry by requiring users to edit a C header and compile it, but it’s hard to argue that it isn’t innovative.

                                                                                          I don’t think it’s surprising to note that Linux people build Linux user environments, or that Mac people don’t like those Linux environments. What I would love to see though, would be the environments Mac people would build if given the chance.

                                                                                          1. 1

                                                                                            I don’t think it’s surprising to note that Linux people build Linux user environments, or that Mac people don’t like those Linux environments.

                                                                                            Is there a definition of “Linux user environment” and “Mac environment” here that I’m unaware of? When I hear “Linux user environment” I simply think of the userland that sits atop the Linux kernel.

                                                                                            1. 2

                                                                                              I meant it in a broader sense. As I see it, there are certain design ideals and philosophy associated with different operating systems, that are broadly shared by their developers and users. These ideals largely shape the user environments those developers build, and those users use.

                                                                                              1. 3

                                                                                                As I see it, there are certain design ideals and philosophy associated with different operating systems, that are broadly shared by their developers and users.

                                                                                                Does such an ideology or philosophy agree for free operating systems though? Is it defined somewhere and used as a guidepost? Do the users agree to this guidepost?

                                                                                                Just look at the history of Linux and BSD and you’ll find enough hand-wringing about the different cultures in the communities and the users they attract. I just don’t think any operating system, free or not, has consensus around its philosophy. Users use computers for a variety of things, and I think a small minority use an operating system specifically to align with a personal philosophy.

                                                                                          2. 7

                                                                                            I’ve seen very little UX innovation from open source OSs.

                                                                                            This is systemic and not limited to open source OSes. There has not been significant UX innovation among commercial operating systems either. Arguably for good reason: People are mostly satisfied. We’ve gotten so good at UX that we are now going in the other direction; using dark patterns to betray our users to get them to do things they don’t want to do. Most “This UX is bad” posts are picking nits compared to what we had to work with in the 90s.

                                                                                            FWIW, I do think the open source community’s work on tiling window managers are commendable. Stacking managers are partially an artifact of low-resolution displays and early skeuomorphic attempts.

                                                                                            As far as the “CLI mindset” - It’s interesting nobody has taken a step back and considered why the CLI continues to proliferate… and if anything gain momentum in the last 15+ years. There was high hopes in the 90s it would be dead within a decade. Many seem to have fallen victim to the appeal to novelty fallacy, and simply assumed newer = better, not fully comprehending the amount of work that went into making CLIs valuable to their users. You can’t blame the greybeards: The CLI power users these days are Zoomers and younger Millennials zipping around with extravagant dotfiles. And while GUI UX has stagnated, the quality of life on the command line has improved dramatically the last 20 years.

                                                                                            1. 3

                                                                                              There has not been significant UX innovation among commercial operating systems either.

                                                                                              Point taken. In large part the web browser took on the role of an overlay OS with its own apps and UX inside a desktop window, and became the locus of innovation. That UX started as a very simple “click an underlined link to go to another page”, exploded into an anarchy of weird designs, and has stabilized somewhat although it’s still pretty inconsistent.

                                                                                              I wonder if this is the way forward for other new UX paradigms: build on top of the existing desktop, as an overlay with its own world. That’s Urbit’s plan, though of course their weird languages and sharecropping ID system aren’t necessary.

                                                                                            2. 5

                                                                                              (Updated to add: I wonder if some of the Unix philosophy and principles aren’t contrary to good end-user UX. “Everything is a file” means everything has the limits of files as bags-of-bits. The emphasis on human readable file formats (that it takes a geek to understand) means less interest in non-geek ways to work with that data. And “small pieces loosely joined” results in big tool shelves that are very capable but come with a steep learning curve. Also, in a rich user interface, my experience is that connections between components are more complex to do, compared to streams of text.)

                                                                                              New reply for your edit: I’m developing the impression the “original sin” with Unix was an emphasis on passing the buck of complexity to the user. A simple system for the developers is not necessarily a simple system for the users.

                                                                                              1. 3

                                                                                                Not even necessarily the end user — the emphasis on ease of implementation leads to a dearth of useful interfaces for the programmer at the next level of abstraction, forcing them to either reinvent basic things like argument parsing and I/O formats in an ad-hoc fashion, impeding composability and leaking abstractions into the next layer, almost like throwing an exception throughout all layers of the system.

                                                                                                1. 4

                                                                                                  This thread is also making me think a lot about this old Gruber post. You can’t just bolt on usability, it needs to be from top to bottom, because the frontend is a Conway’s law manifestation of the backend. Otherwise, you get interfaces that are “what if it was a GTK frontend to ls”.

                                                                                                  1. 1

                                                                                                    When A.T. needs to configure a printer, it’s going to be connected directly to her computer, not shared over a network.

                                                                                                    Ironically, this kind of dates the article now that laptops, wifi printers, tablets, and smartphones are a thing.

                                                                                                    Good article aside from that, though.

                                                                                              2. 4

                                                                                                but where does that leave the UX that non-geeks use

                                                                                                I do want to emphasize that there are geeks that enjoy GUI UXes as well. I have a preference for most of my regularly used tools and applications to be GUIs, and I suspect there are others out there as well.

                                                                                                1. 4

I definitely agree the mainstream open source UIs are somewhat not innovative. But there have been many interesting experiments in the last 20 years, tiling window managers being one of the more popular examples. For me, the tragedy is that the “market leading” environments seem to be making decisions which are expressly designed to kill off alternatives and innovation (stuff like baking the window manager and compositor together in Wayland, requiring Sisyphean approaches like wlroots; client-side decorations).

                                                                                                  1. 7

                                                                                                    Tiling window managers are almost all the exact opposite of innovative - they’re what people were trying to escape from with Windows 1.x!

                                                                                                    1. 2

                                                                                                      This seems… Questionable, I guess, but maybe I’m missing some context since my useful memory of computers starts some time in the late 80s with the Apple II and DOS machines. What widely used interface in that era was a real parallel to something like xmonad?

                                                                                                      1. 2

                                                                                                        There were plenty of “tiling” (really, split-screen) GUIs at the time, and overlapping windows was definitely the main feature of Windows 2.x (not even Windows 1.x had overlapping windows).

                                                                                                        1. 2

                                                                                                          Yeah, fair enough. I guess I think this last 10 or 15 years’ tiling WMs feel pretty qualitatively different to people who grew up on the mainstream computing of late 1980s – early 2000s, but probably much more because of the contrarian model they embrace than because that model’s basic ideas are novel as such. (And of course it’s fair to ask how contrarian that model is now that most computing is done on phones, where there’s scarcely a “windowing” model at all.)

                                                                                                      2. 1

                                                                                                        If you are arguing that there was no innovation from tiling window managers due to “windows 1.0”, then the only polite thing I can say is I fundamentally disagree with you.

                                                                                                        1. 2

                                                                                                          Have you tried Oberon?

                                                                                                2. 2

                                                                                                  I think we do, but I’m not sure who would invest in a modern PARC without expecting a quick turn around and a solid product to hock.

                                                                                                  1. 8

                                                                                                    For better or for worse, many companies shifted R&D to startups, which basically have to create something product-ready with an exit plan funded by VC. I don’t think long-term research can really be done in this model.

                                                                                                    Excuse me Mister Culver. I forgot what the peppers represent.

                                                                                                    1. 1

                                                                                                      Frankly if we had a good method to fund this sort of thing, we’d have leveraged it to fund the load-bearing OSS that forms our common infrastructure.

                                                                                                1. 4

                                                                                                  I feel there’s a lot of posts like this. Seemingly “thought-provoking” things and “radical” new ideas for design. However, I’ve yet to see anyone actually try to implement any of these ideas. It’s getting kind of tiring seeing stuff like this all the time.

                                                                                                  1. 7

                                                                                                    A large part of that is simply that people can’t, because their OS is proprietary. On the Linux/BSD/9front/etc. side of things, people actually can change how their environment works, and there has been a huge amount of experimentation in desktop environment/window manager design.

It’s a little funny every time someone from Windows/macOS land has a radical idea, and it turns out someone implemented it 15 years ago on a Linux system.

                                                                                                    1. 8

It’s a little funny every time someone from Windows/macOS land has a radical idea, and it turns out someone implemented it 15 years ago on a Linux system.

                                                                                                      Sorry, but [citation needed]. In all my years of trying desktop environments in Linux, FreeBSD, and OpenBSD I have yet to see a serious effort at a desktop environment that didn’t basically copy Windows or macOS.

                                                                                                      Sure, there are lots of window managers out there, but that’s not even on the same level as what this OP is talking about. Tiling windows with keyboard shortcuts isn’t particularly innovative. Rotating windows or transparency and other such things are just self-loving “unixporn”: it makes a pretty demo but is functionally inert.

                                                                                                      Free desktop environments lack something key to making anything one might consider novel: tight integration of apps with the environment. No existing desktop envs have it that I have tried, but the closest ones are probably macOS and Windows. And frankly, the Unix philosophy runs counter to this principle, which is probably why it’s never really been on the radar there.

                                                                                                      1. 4

                                                                                                        Sorry, but [citation needed]. In all my years of trying desktop environments in Linux, FreeBSD, and OpenBSD I have yet to see a serious effort at a desktop environment that didn’t basically copy Windows or macOS.

                                                                                                        I didn’t know tiling window managers were “copying Windows or macOS”.

                                                                                                        Ah, reading your post further I see you touch on the subject;

                                                                                                        Tiling windows with keyboard shortcuts isn’t particularly innovative

                                                                                                        This couldn’t be further from the truth. Of course, the keyboard shortcuts aren’t the main appeal. Rather, it’s how the windows are managed. Do you prefer a stack layout like is offered by dwm and XMonad? Perhaps you like manual tiling like i3? Or something like bspwm? The experience varies wildly between these different window managers and I think it’s disingenuous to simply mark them as “not innovative” and “unixporn”. In addition to this, there are plenty of non-tiling window managers that are trying to innovate. Look no further than projects such as 2bwm which are innovating in the floating window manager space as well.

                                                                                                        Free desktop environments lack something key to making anything one might consider novel: tight integration of apps with the environment

I’m in complete agreement with you here. Unix-like operating systems have long prided themselves on having all their parts interchangeable: you can swap out your image viewer or file browser or any other part and still have a functional system. However, this “tight integration” you mention does not exist in Windows either (I can’t speak for macOS, as I haven’t used it myself). How does the image viewer cooperate with the file browser? The text editor (if it even has a proper one, notepad doesn’t seem that powerful…)? If you want a clear example of an OS with proper “tight integration”, you should check out Plan 9, or perhaps even Open Genera if you’re feeling extra adventurous.

                                                                                                        1. 4

                                                                                                          Sorry, but [citation needed]. In all my years of trying desktop environments in Linux, FreeBSD, and OpenBSD I have yet to see a serious effort at a desktop environment that didn’t basically copy Windows or macOS.

                                                                                                          I think a lot about Mark Tarver’s essay when I see “yeah, look how innovative Free Software is, it allows for a greater xterm density”.

                                                                                                          1. 3

                                                                                                            In all my years of trying desktop environments in Linux, FreeBSD, and OpenBSD I have yet to see a serious effort at a desktop environment that didn’t basically copy Windows or macOS.

                                                                                                        Of course there are, but you have to put some effort into finding them, because the mainstream tries to cater to the lowest common denominator. Have you tried something like the acme editor, with its very heavy focus on the mouse and on using any text as active commands? Or something like Vimperator or Conkeror, where everything is fully keyboard-driven, with contextual popover tags that allow you to quickly jump to any link? Hell, even Emacs is sufficiently different (even though it’s old) with its hyper-focus on scriptability.

                                                                                                        Now, these are all separate applications without an overarching design goal, but that’s the nature of open source I guess - everybody is doing their own thing. Coming up with a full OS that implements these paradigms across multiple applications is such a monumental amount of work that no single person could do it. And for a company to invest in a completely new paradigm is quite a gamble, especially because what we currently have sells well enough.

                                                                                                            1. 1

                                                                                                              Hell, even Emacs is sufficiently different (even though it’s old) with its hyper-focus on scriptability.

                                                                                                        It’s not even the hyper-focus on scriptability. For me, the amazing thing about Emacs is that it’s a fully integrated application environment in a way that graphical desktop environments, even highly polished proprietary ones, can only dream of. Every application for Emacs (like eww, elpher, Gnus, mu4e) inherits all the functionality of core Emacs, and all the ways that core has been extended (e.g. selectrum), and is practically always consistent with the other applications. That’s what makes me want to do everything possible within Emacs, and grudgingly leave for the web browser only when I have to…

                                                                                                          2. 2

                                                                                                        You can’t really implement much of this on a Linux/X11 system without throwing all your applications away.

                                                                                                        You can make all kinds of tiling window managers on X11, but the window manager itself works at a very specific level in the display hierarchy, without access to the rest of the system. Try, for example, to make a WM with a Macintosh-style global menu bar at the top, or a RISC OS-style global pop-up menu. You can’t, because the window manager doesn’t know about menus.
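
                                                                                                        To make the point concrete, here’s a minimal sketch (assuming the third-party python-xlib bindings) of everything a WM-level client actually gets to see: bare top-level windows with a geometry and maybe a title string. There is simply no protocol-level object representing an application’s menus for a WM to hook into.

                                                                                                            from Xlib import display               # third-party python-xlib package

                                                                                                            d = display.Display()                  # connect to the X server
                                                                                                            root = d.screen().root                 # the root window the WM operates under
                                                                                                            for w in root.query_tree().children:   # every top-level window
                                                                                                                geom = w.get_geometry()            # position and size: all a WM can rearrange
                                                                                                                name = w.get_wm_name()             # a title string, if the client set one
                                                                                                                print(name, geom.x, geom.y, geom.width, geom.height)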

                                                                                                          3. 1

                                                                                                        I imagine that Windows and macOS represent the vast majority of desktop OSes. How would someone build a new desktop environment on these closed systems?

                                                                                                            1. 3

                                                                                                        macOS, I’m not sure. On Windows it’s still relatively easy to replace the whole shell. Here’s a list of a few: https://en.wikipedia.org/wiki/List_of_alternative_shells_for_Windows

                                                                                                        I remember using a customised Blackbox shell for quite a while back in the XP days.

                                                                                                              1. 3

                                                                                                        I think reinventing the desktop experience requires a lot more than an alternative shell.

                                                                                                                1. 3

                                                                                                        What do you think would have to be replaced to implement most of what the post is talking about, apart from the shell and the open/save dialog box? I believe you could get Windows 99% of the way there with a shell + File Explorer replacement, an indexing service to speed up tags, and some helpers to materialise/provide views like clipboard and browsing history. Sure, you’d still have to deal with paths when installing software, for example, but otherwise you could live in a highly indexed/tagged reality.

                                                                                                                  What are the missing pieces that would go beyond? (Yes, the apps would need to integrate into this world at some point, but realistically we won’t have those apps until the environment itself is proven)
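
                                                                                                        For what it’s worth, the indexing piece doesn’t have to be exotic. Here’s a rough sketch (a hypothetical schema, using Python’s bundled sqlite3 module) of the kind of tag index a shell/File Explorer replacement could query to answer “show me everything tagged X” quickly:

                                                                                                            import sqlite3

                                                                                                            # hypothetical index file maintained by a background helper service
                                                                                                            db = sqlite3.connect("tags.db")
                                                                                                            db.execute("CREATE TABLE IF NOT EXISTS tags (tag TEXT, path TEXT, UNIQUE(tag, path))")
                                                                                                            db.execute("CREATE INDEX IF NOT EXISTS idx_tag ON tags(tag)")

                                                                                                            def tag(path, *labels):
                                                                                                                # attach one or more tags to a file path
                                                                                                                db.executemany("INSERT OR IGNORE INTO tags VALUES (?, ?)", [(l, path) for l in labels])
                                                                                                                db.commit()

                                                                                                            def find(label):
                                                                                                                # return every path carrying the given tag
                                                                                                                return [row[0] for row in db.execute("SELECT path FROM tags WHERE tag = ?", (label,))]

                                                                                                            tag(r"C:\Users\me\Documents\report.docx", "work", "2021")
                                                                                                            print(find("work"))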

                                                                                                          1. 7

                                                                                                            I have two observations on this article.

                                                                                                            First, there is no mention of the “new” wave of federated services that popped up all over the place based on ActivityPub. I find that to be a glaring omission. Even though they didn’t get mass adoption, the number of users the Mastodon network has is impressive for a bunch of open-source projects.

                                                                                                                        Second, I think that throwing the baby out with the bathwater because a corporation has captured a large number of users inside a distributed network is pretty defeatist. Just because Gmail has a large portion of email users doesn’t mean that, as a user, I can’t choose a smaller provider like Tutanota or Proton.

                                                                                                            1. 12
                                                                                                              1. I didn’t mention these on purpose because I don’t have any direct experience with them (personal dislike of social media), and so don’t feel qualified to talk about them. From an outsider’s perspective though, they do seem to fit my case study of XMPP/Matrix etc.

                                                                                                                        2. A lot of people seem to have got the impression that I hate these applications, and/or am somehow against them. I tried to make it clear in the post that I am an active user of almost all the applications I discussed, and only want to see them succeed.

                                                                                                              1. 3

                                                                                                                I’m sorry to sound like the “ackchyually” gang, but I guess my comments were based on how your title makes a sweeping generalization without the article looking into all the options.

                                                                                                                PS. I’m working on an ActivityPub service myself, and that might colour my views. :D

                                                                                                              2. 4

                                                                                                                Quoting an entire paragraph:

                                                                                                                Whenever this topic comes up, I’m used to seeing other programmers declare that the solution is simply to make something better. I understand where this thought comes from; when you have a hammer, everything looks like a nail, and it’s comforting to think that your primary skill and passion is exactly what the problem needs. Making better tools doesn’t do anything about the backwards profit motive though, and besides, have you tried using any of the centralised alternatives lately? They’re all terrible. Quality of tools really isn’t what we’re losing on.

                                                                                                                ActivityPub might be awesome, but that is entirely beside the point the author tries to make.

                                                                                                                1. 2

                                                                                                                        How so? I’m not sure how the paragraph you’re quoting takes away from the fact that there are currently community-driven federated projects and services that are popular and that the author didn’t consider.

                                                                                                                  1. 3

                                                                                                                        The article listed a few examples where decentralization didn’t work out, even though it had the technical merits to be much better than the alternatives. Enumerating all possible examples where it did or didn’t work is out of scope and beside the point - it’s never the technical merits that are lacking in these situations.

                                                                                                                        ActivityPub isn’t used (using the term very broadly here…) by anyone other than hypergeeks like you and me. We might find each other and form communities around our interests, but the vast majority of users are not going to form their own communities like this.

                                                                                                                    1. 1

                                                                                                                        but the vast majority of users are not going to form their own communities like this.

                                                                                                                      That’s fine, but considering this as the only metric for success is a poor choice.

                                                                                                                2. 4

                                                                                                                  Try to run your own mail server, though.

                                                                                                                  1. 3

                                                                                                                        There are options out there that let other people handle the nitty-gritty of running the server, with minimal cost and time investment. I’m running a Purelymail account with multiple domains that I own.

                                                                                                                    1. 3

                                                                                                                      I did that for quite some time. Only switched to a commercial hosted provider because of general sysadmin burnout / laziness, not anything mail specific. The problem of Gmail treating my server as suspicious was easily solved by sending one outgoing email from Gmail to my server.

                                                                                                                      1. 4

                                                                                                                        You get blacklisted to hell once you put your mail server on a VPS with high enough churn in its IP neighborhood.

                                                                                                                        And there is no power on Earth (for now) that would convince GOOG or MSFT to reconsider. Their harsh policies are playing into their hands – startups buy their expensive services instead of running a stupid postfix instance.

                                                                                                                                    We (the IT sector) need to agree on a way to group hosts on a network by their operator in a much finer-grained way, so that ordinary customers are not thrown in the same bag as spammers or victims of a network breach.

                                                                                                                        1. 4

                                                                                                                          You get blacklisted to hell once you put your mail server on a VPS with high enough churn in its IP neighborhood.

                                                                                                                          Is that so? So far the main reason I see for mails not arriving is the use of Mailchimp. No hate on Mailchimp there, just experience, since many companies use their service.

                                                                                                                                    Meanwhile Google is super fine, as long as you make sure SPF and DKIM/DMARC are set up correctly. Oh, and of course reverse DNS (the PTR record) should be set up correctly, like with any server. They are even nice enough to report back why your mail was rejected and what to do about it if you don’t do the above.

                                                                                                                                    This experience is based on Mailchimp usage in multiple companies and on e-mail servers in various setups (new domain, new IP, moving servers, VPS, dedicated hosters, small hosters, big hosters). I haven’t had a case so far where Google rejected an email once the initial SPF/DKIM/PTR setup was correct.

                                                                                                                                    The “suspicious email” stuff is usually analysis of the e-mail itself. The main causes are things like a reply-to on a different domain, or HTML links where the text says example.com but the link actually points somewhere else (for click-tracking purposes, for example).

                                                                                                                                    Not telling anyone they should run a mail server, just throwing in some personal experiences, because the only real-life examples I’ve seen where Google rejected an email were because SPF, DKIM or PTR was misconfigured or missing. For mail that was accepted but thrown into spam, it’s mostly reply-to and links. I have close to no experience with MSFT; I’ve only ever used a small no-name VPS and a big German dedicated hosting provider to send to hotmail addresses, and it worked.
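
                                                                                                                                    For anyone who wants to double-check that list before blaming the receiving side, here’s a small sketch (assuming the third-party dnspython package, version 2.x or later; the domain and IP below are placeholders) that looks up the SPF and DMARC TXT records for the sending domain and the PTR record for the sending IP. DKIM is left out because the selector name is specific to each setup, and a missing record will simply raise an NXDOMAIN error here.

                                                                                                                                        import dns.resolver, dns.reversename   # third-party dnspython package

                                                                                                                                        domain = "example.com"      # placeholder: your sending domain
                                                                                                                                        ip = "203.0.113.10"         # placeholder: your mail server's public IP

                                                                                                                                        # SPF and DMARC live in TXT records; PTR is looked up via the reverse zone
                                                                                                                                        spf = [r.to_text() for r in dns.resolver.resolve(domain, "TXT")
                                                                                                                                               if "v=spf1" in r.to_text()]
                                                                                                                                        dmarc = [r.to_text() for r in dns.resolver.resolve("_dmarc." + domain, "TXT")]
                                                                                                                                        ptr = [r.to_text() for r in dns.resolver.resolve(dns.reversename.from_address(ip), "PTR")]

                                                                                                                                        print("SPF:  ", spf or "missing")
                                                                                                                                        print("DMARC:", dmarc)
                                                                                                                                        print("PTR:  ", ptr)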

                                                                                                                          1. 3

                                                                                                                            Is that so?

                                                                                                                                    I hosted my own Postfix instance on a VPS for years (well, not since last summer or so, but I’ll eventually get back to it). I had my email bounced from hotmail’s servers, and the reason given in the bounce email was that my whole IP block was banned. It tends to resolve itself after a few days; in the meantime, I am dead to hotmail users. Google is even more fickle: I am often marked as spam, including in cases where I was replying to an email.

                                                                                                                            I don’t believe it was a misconfiguration of any kind. I did everything except DKIM, and tested that against dedicated test servers (I believe you can send an email to them, and they respond with how spammy your mail server looks).

                                                                                                                            So yes, it is very much so.

                                                                                                                            1. 2

                                                                                                                              Google is even more fickle. I am often marked as spam, including in cases where I was replying to email.

                                                                                                                                    Same here. After finally setting up DKIM, these hard-to-diagnose, hard-to-debug problems went away completely, AFAICT.

                                                                                                                              1. 1

                                                                                                                                Interesting. Thank you for the response!

                                                                                                                                    Just curious: if you open the detail view of the email, does it say more? When I played with it, it usually did tell you why it thought the mail was spammy.

                                                                                                                                1. 1

                                                                                                                                    Just curious: if you open the detail view of the email, does it say more?

                                                                                                                                    Didn’t think of that: when I send email to my own Gmail account, it does not end up in the spam folder. I have yet to hack into other people’s spam folders. :-)

                                                                                                                                    Right now I can’t run the test because I’m not using my own mail server (I’m currently using my provider’s mail service). I sent an email to myself anyway, and the headers added by Gmail say SPF and DKIM are “neutral” (that is, not configured). I will set them up once I reinstate my own Postfix instance, though.

                                                                                                                              2. 2

                                                                                                                                    It’s a frequent issue with DigitalOcean, for example.

                                                                                                                      1. 7

                                                                                                                        I’m just not seeing how and why any of the topics discussed in the Case Studies have “failed” as decentralized applications. The author defines decentralization as “applications whose main purpose is fulfilled as part of a network, where that network is not reliant on any preordained nodes”, but doesn’t tie this definition back to any of the case studies.

                                                                                                                        1. 30

                                                                                                                        Author here. I have some regrets about this post, and this is one of them. The post was written somewhat haphazardly, with only the intention of freeing some feelings cluttering my mind; I honestly didn’t expect it to get a fraction of the attention it has.

                                                                                                                          In hindsight, I should have included a clarification of what I meant by “Don’t Work”. I’d say broadly there are 3 ways an application can qualify as “not working”:

                                                                                                                          1. It does not achieve its stated goals
                                                                                                                          2. Its network becomes highly centralised
                                                                                                                          3. Its users are largely unable to leverage its decentralised nature

                                                                                                                          Also, I don’t think all the case studies qualify equally. I tried to put them in order of descending applicability, from blockchain, which I regard as an abject failure on (1) and, in the case of bitcoin at least, (2), to BitTorrent, which I only consider to be “working, but slightly suboptimally”.

                                                                                                                          I also should have written differently about blockchain, I’m quite burned out on talking about it and it shows in my writing. I only made the case for (1), but should have made the case for (2) for bitcoin with its centralised mining pool, and the effects that has on its governance. I think I also focused on bitcoin too much, when there are good cases to be made for its descendants, notably Ethereum in light of the DAO attack. I also should have chosen less incendiary words to express my thoughts on blockchain.

                                                                                                                          It is interesting to see what others find worthy of criticism in my writing though. I was sure that “Applications” was the word in my title with the flimsiest definition, but I think it’s the only one that hasn’t received any scrutiny.

                                                                                                                          1. 4

                                                                                                                            Thanks for the detailed reply. I don’t necessarily disagree with anything you wrote in the article, but I wanted a bit more detail about how exactly the cases you brought up “didn’t work”, especially because I think analyzing the explicit failure mode is instructive in how to align incentives toward decentralization.

                                                                                                                          2. 12

                                                                                                                            The failures don’t really fit a single definition, but I think they’re still examples where decentralization is barely working.

                                                                                                                            I see a pattern where, instead of everyone running their own node democratically, people either use Service A with 90% of the market, the underdog Service B, or they try their own thing, which is either specialized or just weird (GitHub/GitLab/self-host, Gmail/Fastmail/self-host, YouTube/Vimeo/self-host).

                                                                                                                            1. 5

                                                                                                                              A pattern I see in the given examples that was not explicitly mentioned: decentralized services that should theoretically be viable consistently don’t come close to centralized (and thus monetized) alternatives in terms of ease of use: hosting your own mail server vs Gmail, Mastodon vs Twitter, git-over-email vs GitHub.

                                                                                                                              Isn’t the fundamental issue that if a service cannot be monetized, the incentive to organize effort around polishing and communicating it (another view of marketing) is not as strong as when a central entity (i.e. a company) can reap the rewards?

                                                                                                                              I suspect that this can even be expressed as a result of how people have historically organized themselves: the most common / natural / primordial way for humans to function is to form close-knit tribes that look after their own members. In a way, a truly decentralized structure violates that form of organization.

                                                                                                                              1. 1

                                                                                                                                Indeed, in practice I know of very few truly decentralized organizations, even outside of software. The clubs and volunteer organizations I’ve participated in all had some form of resource aggregation, leading naturally to some form of structure and thus a power differential, even if it was small. I think it would be interesting to explore the space of decentralization to find the thin line here, but I do think that fundamentally humans enjoy collaborating, and that collaboration creates structure and agglomeration. If humans were antisocial creatures, we would operate independently in a decentralized fashion.