1. 7

    I LOVE the existence of classes like this. Regardless of politics, exploring new ways of organizing software is fascinating to me. I wish this course strayed from Urbit specifics and discussed more forms of “Martian Computing,” such as Collapse OS, Mu, Nebulet, or redshirt. Cutting away legacy code and starting from scratch is an amazing topic that can be explored in a number of ways, and I’m glad classes like this are encouraging that exploration.

    1. 1

      Agreed. I’m obviously interested in what the Urbit project is trying to do, but there’s no reason there shouldn’t be other projects trying to rethink the foundations of personal computing, as alternatives to the ultimately Unix-based world we (largely) live in today.

    1. 4

      “Yeah, we write out one big JSON object to a text file.”

      “How? When? What?”

      Obviously “insane” decisions like these always surprise me. Hindsight is 20/20, but while building software I try my best to focus on the correct solution rather than the fastest solution. This is of course at odds with my manager, who would rather I get it out by Friday. I’m constantly reminded of a comment @ddevault made regarding this dichotomy (relevant lobsters post):

      The culture of doing the wrong thing because the right thing is harder really, really bothers me about the modern software development ethos. We used to care about doing things right. It has nothing to do with being stretched thin - if the throughput doesn’t change, it just takes a bit longer to finish. Well, sometimes the correct solutions take more time.

      I think about this a lot because, as a “Software Engineer”, my job is to engineer solutions to engineering problems. The more I see engineers learn, the more I see them want to do the “right thing” over the “wrong thing,” even if “it just takes a bit longer to finish.” But at the same time, as the product and the demand for updates grow larger and larger, the line between “right” and “wrong” technical decisions blurs.

      A couple of weeks later, a customer wanted to try it out. I wasn’t ready to commit to the data model yet and do it properly in SQL, so I took a shortcut: the object holding all the data was wrapped in a sync.Mutex, all accesses went through it, and on edit the whole structure was passed to json.Marshal and written to disk. This gave us data model persistence in ~20 lines of Go.

      The plan was always to migrate to something else but, uh, we got busy with other stuff and kinda forgot.

      Was this a bad decision? From a software engineering perspective, I would say yes: this data almost certainly should have been shoved in a database. JSON files as databases don’t scale and are susceptible to a number of other issues. From a business perspective, this might have been the “right choice”: the customer was able to try out the product faster than if time had been taken to set up a real database.
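
      To make the shortcut concrete, here is roughly the same pattern in Rust (a sketch with hypothetical types, assuming serde and serde_json; not the code from the story):

      use std::fs;
      use std::sync::Mutex;
      use serde::Serialize;

      #[derive(Serialize, Default)]
      struct AppState {
          users: Vec<String>, // stand-in for the real data model
      }

      // Every access goes through the lock; every edit rewrites the whole file.
      fn add_user(state: &Mutex<AppState>, name: &str) -> std::io::Result<()> {
          let mut s = state.lock().unwrap();
          s.users.push(name.to_string());
          let json = serde_json::to_string_pretty(&*s).expect("state should serialize");
          fs::write("state.json", json)
      }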

      There are, of course, a number of things we can do to balance this, but I think it’s interesting that this problem is ubiquitous among software engineers, and as Drew puts it, part of “the modern software development ethos.”

        1. 1

          Ah, you are correct! I must have missed that in the new stories.

          1. 3

            IMO this link is better than the one I posted, since this is a full write-up of the script.

        1. 5

          This seems like as good a time as any to consolidate my thoughts on Tree Notation.

          Classically, we have had SUNSA technology in the form of combinator logic. There are proofs that the S and K combinators are universal, i.e. Turing-complete. If we want to represent shapes as well, though, then Tree Notation has one obvious flaw: it is only two-dimensional at most. If we want arbitrary dimensionality, then we need opetopes. Short for “operator polytope” and pronounced with two different “o” sounds, the advantage of opetopes is that every way to compose objects is an opetope, even if the objects have high dimensionality. See Higher-Dimensional Categories (p. 63) for illustrations. This approach was recently made viable; we have a computable syntax and type theory, along with the Opetopic graphical proof assistant.
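
          To make the combinator claim concrete, here is a toy SK reducer (my own sketch, nothing from the Tree Notation materials); two rewrite rules are all it takes to get universality:

          #[derive(Clone, Debug, PartialEq)]
          enum Term {
              S,
              K,
              App(Box<Term>, Box<Term>),
          }

          fn app(f: Term, x: Term) -> Term {
              Term::App(Box::new(f), Box::new(x))
          }

          // Reduce to weak head normal form: K a x -> a, and S f a x -> f x (a x).
          fn reduce(t: Term) -> Term {
              use Term::*;
              match t {
                  App(fun, arg) => match reduce(*fun) {
                      App(inner, a) => match (*inner, a, arg) {
                          (K, a, _) => reduce(*a),
                          (App(s, f), a, x) if *s == S => {
                              reduce(app(app(*f, (*x).clone()), app(*a, *x)))
                          }
                          (head, a, x) => app(app(head, *a), *x),
                      },
                      head => app(head, *arg),
                  },
                  atom => atom,
              }
          }

          fn main() {
              use Term::*;
              let i = app(app(S, K), K); // I = S K K
              assert_eq!(reduce(app(i, S)), S); // I S reduces to S
          }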

          As discussed previously here on Lobsters, there are existing experiments to encode legal principles into expert-system-style libraries for logical languages. I have talked with startup founders whose aim is to improve legibility of various legal codes by formalizing them with declarative rules. The principle is not new. Reading lines like:

          I dream of a world where … when I want to start a business I don’t have to fill out forms in Delaware (the codesmell in that last one is so obvious!).

          I’ve incorporated an LLC in Oregon before. I did not have to fill out forms in Delaware. Incorporation in Delaware is done because the Delaware loophole makes it a tax haven. Indeed, nobody needs to fill out forms in Delaware unless they intend to deliberately (and legally) avoid taxes. This anecdote bears on the idea that Tree Notation is suited for real-world applications: the actual pragmatics of the notations we currently use are very different from the idealized descriptions given for Tree Notation. Or, in fewer words, Tree Notation won’t stop people from incorporating in Delaware.

          Let’s address your predictions.

          Prediction 1 is falsified by Iota and Jot, a pair of bitstring languages that are Turing-complete; Iota encodes a one-combinator language, showing that this is the minimal number of combinators required for Turing-completeness.

          Prediction 2 seems strange given how long two-dimensional syntax has existed in popular languages. Famously, Python and Haskell have both had two-dimensional syntax since the 1990s, and they use the same whitespace character as Tree Notation. Additionally, there’s a trivial one-dimensional syntax for each two-dimensional syntax which unravels it onto a single line with a designated line-break character, generalizing the idea, which Tree Notation inherits, that one-dimensional bytestrings in files can have newline characters interpreted as dimensional commas.
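
          That unraveling is a couple of lines of code; a sketch, with the break character an arbitrary pick:

          // Flatten a 2-D (lines-of-symbols) syntax onto one line and back,
          // using U+241E (symbol for record separator) as the dimensional comma.
          fn to_1d(lines: &[&str]) -> String { lines.join("\u{241E}") }
          fn to_2d(s: &str) -> Vec<&str> { s.split('\u{241E}').collect() }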

          Prediction 3 will have to wait until Tree Oriented Programming can be formalized. But given how poorly the terms “structured programming”, “functional programming”, and “object-oriented programming” have been defined over the years, I’m tempted to see this as pure ego. Is this just programming with catamorphisms, à la recursion schemes?

          Prediction 4 can’t be taken seriously. Terms are too nebulous. This smells like hype.

          I dusted off some September 2019 notes on the hope that 70% of TIOBE languages will be founded on Tree Notation by 2027. The summary is that I predict 9 of the top 10 languages on the 2027 TIOBE index, and then use that prediction to argue that the bet’s conditions will not be fulfilled. Languages take a long time to become popular and take over, unless they clearly fill a niche which didn’t exist before.

          1. 4

            I apologize: I’m not going to spend time responding to the other points in your comment right now (I’ll try to remember to come back later), because the link you provided to opetopes made my jaw drop. Somehow I missed this area of research—so many connections! I’ll be mining it for a bit. Thank you so much!

            1. 1

              Fun to read through your 2027 predictions! I added some thoughts in bold https://gist.github.com/breck7/2220c6b6a3890af2c0a031531b4b1ae3

              For my bet to win roughly the following has to happen:

              1. The hypothesis that “2-D+ languages” are intrinsically 10x+ better has to be true (I’m confident but also gulp! :) )
              2. People have to make Tree Languages that “just work” and solve real problems
              3. They have to invent and launch those things with enough time for them to catch on (if nothing is catching on by December 2024, my bet is in trouble)
              4. TIOBE has to consider these new high-level languages for inclusion on their list

              A tall order, but I think it could happen.

              1. 1

                Omfg, THANK YOU. These resources are amazing. I was already drowning in stuff to read but this has jumped way up near the top in the priority queue.

                1. 1

                  My thoughts are still pending on opetopes, but while that settles I wanted to just say thanks again and touch on your other feedback.

                  encode legal principles…The principle is not new.

                  I agree, and I have read up on a lot of research in this domain; I even recently did a bit of advising with one of the big 5, who is kicking the tires here. What I proffer is something very relevant though: a new type of notation that includes an intrinsic ability to count complexity (https://jellypbc.com/posts/ezmntq-counting-complexity), meaning that different encodings of a legal principle will have different complexity measurements, and my hope is that this might provide less room for special interests to hide carve-outs. (It’s much easier to detect hop-ons when you have a scale.) Extend that, and you can see that what I’m talking about with the Delaware situation is just an instance of the general pattern of unnecessary complexity embedded throughout our legal systems. So the only reason I think Tree Notation offers an OOM improvement over other tech in terms of laws is this intrinsic complexity measurement, which is objective. Nothing else has this: in Tree Notation you can start with just the concepts of “.” and “” and build up binary to hex to English to law, etc., through a succession of grammars, without ever leaving the dimensional syntax of Tree Notation.

                  Prediction 1 is not falsified by Iota and Jot. You don’t get to start with any characters. You don’t get to assume anything. All you get is a sheet of grid paper and pen, and you need to build up all your assumptions from there. No imports. No includes. Nothing. You need to define binary, numbers, etc.

                  Python and Haskell are not 2-D languages, despite having some indent-sensitive properties. A tiny fraction of languages, ~50, use indentation, and the ones that do use other things as well. They are not pure 2-D languages, so you don’t get the guaranteed isomorphisms and all the neatness that comes with that. On the second point: while all the current implementations of Tree Notation are of course saved as 1-D bytestrings, this is just because of the limits of current 1-D register technology, and there are kinds of programs and computations you could perform using 2-D and 3-D post-register machines that cannot be done with existing languages (you could technically emulate them in 1-D, but the time constraints would make that impractical).

                  Prediction 3: I do it already, but haven’t come up with the terms. Generally it seems that most of the patterns in object-oriented programming boil down to “simplify your trees”.

                  Prediction 4: I’ve provided no evidence here, but I have worked it out in a few ways and am pretty confident in it. This is more of a fun, far-out one (though I’d be surprised if it turns out to be wrong).

                  Wow! Your list of TIOBE predictions is amazing. I will have to come back to that one as another thread again (separate from opetopes). Thank you!

                  1. 2

                    You don’t get to start with any characters. You don’t get to assume anything. All you get is a sheet of grid paper and pen, and you need to build up all your assumptions from there.

                    The strength of our metatheory is always going to tint our ability to consider the complexity of constructions, based on what we can build from when we “don’t get to assume anything” and have “nothing”. Here, our common ground is not just the pen and paper, but type theory. Iota and Jot provide examples of how to use around three symbols to encode Turing-complete syntax. Similarly, opetopic type theory can be reduced to three symbols.

                    Python and Haskell are not 2-D languages

                    A one-dimensional language has one-dimensional addressing. For example, traditional filesystem files are one-dimensional; they store a string of bytes, and each byte has a one-dimensional index. Common Python and Haskell implementations use two-dimensional indexing to address syntax, viewing source code as a ragged list of lines, with each line containing some dependent number of symbols. This is not just an editor convenience, but part of the definition of how these languages are parsed. I do not practice much Go, but I understand that its canonical formatting requires its parser to be able to faithfully represent and manipulate syntax in 2D.
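
                    A sketch of the distinction (my own example, not from the thread): the same source, addressed both ways:

                    // 1-D addressing: a single byte offset into the file.
                    fn at_1d(src: &str, i: usize) -> Option<u8> {
                        src.as_bytes().get(i).copied()
                    }

                    // 2-D addressing: (line, column) into a ragged list of lines.
                    fn at_2d(src: &str, line: usize, col: usize) -> Option<char> {
                        src.lines().nth(line)?.chars().nth(col)
                    }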

                    1. 1

                      Similarly, opetopic type theory can be reduced to three symbols

                      Has this been done before? I’m super interested in opetopic type theory but the literature for it has a little too much jargon to understand most of it.

                      1. 1

                        In these slides, the syntax only needs three symbols: [, ], and *. Roughly, these can be read as “go down into a new dimension”, “pop out of the current dimension”, and “mark a new address in the current dimension”.

                      2. 1

                        And I think “Iota” and “Jot” and “opetopes” and Tree Notation are probably mostly the same thing, just put in different ways. In my vocab, this is a Tree Language/Tree Program: https://upload.wikimedia.org/wikipedia/commons/0/07/Game_of_life_pulsar.gif, as is this: https://upload.wikimedia.org/wikipedia/commons/a/aa/Rule30-256-rows.png. I think I’m just taking a new attack vector on old ideas, but in a way that I think leads to more practical utility.

                        IIRC the Python pipeline removes the indentation in an early pass and inserts delimiting tokens in its place, but I may be mistaken. I can’t remember off the top of my head how Haskell does it.
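
                        Roughly like this sketch (my approximation of that pass, not CPython’s actual tokenizer):

                        // Turn indentation changes into explicit INDENT/DEDENT tokens,
                        // in the style of CPython's tokenizer.
                        fn indent_tokens(src: &str) -> Vec<String> {
                            let mut stack = vec![0usize];
                            let mut out = Vec::new();
                            for line in src.lines() {
                                if line.trim().is_empty() { continue; } // blank lines don't affect indentation
                                let depth = line.len() - line.trim_start().len();
                                while depth < *stack.last().unwrap() {
                                    stack.pop();
                                    out.push("DEDENT".into());
                                }
                                if depth > *stack.last().unwrap() {
                                    stack.push(depth);
                                    out.push("INDENT".into());
                                }
                                out.push(line.trim().into());
                            }
                            for _ in 1..stack.len() { out.push("DEDENT".into()); } // close what's still open
                            out
                        }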

                  1. 0

                    Here are my notes:

                    • Not requiring semicola is good.

                    • My least favorite parts are the mixed case types and generics with <>… in both cases I would have hoped that people would have stopped repeating these mistakes.

                    • Not sure what to think about ident Type syntax.

                    • The collection literals and the general amount of operator overloading is something I would personally avoid.

                    • Having no “static” is good.

                    • Replacing modifiers with annotations is good, I have done the same.

                    1. 1

                      My least favorite parts are the mixed case types … I would have hoped that people would have stopped repeating these mistakes

                      Are you really bikeshedding about CamelCase vs snake_case? I thought we were over that. 😬 Live and let live.

                      As for generics brackets, familiarity can beat out grammatical convenience. Most people are used to angle brackets from C++, Java, Rust, etc.

                      1. 4

                        Are you really bikeshedding about CamelCase vs snake_case?

                        No. I meant that bool, int, double and string are lower-case while StringBuilder, Box, List, etc, start with an uppercase letter. Looks like an oversight in any language created after 2000.

                        As for generics brackets, familiarity can beat out grammatical convenience.

                        According to that, cars should also be steered as if they were drawn by horses. It’s not about “convenience”, it’s that no one has figured out how to deal with <> without exposing the needless complexity of that choice to users.

                        I prefer languages without needless complexity.

                        Most people are used to angle brackets from C++, Java, Rust, etc.

                        And it has been a costly disaster in all of them. Sure, let’s excuse C++ because it was the least bad option for retrofitting generics into an existing language. But if you are creating a new language with generics, using <> is an inexcusable mistake.

                        1. 1

                          You mean because of parsing ambiguity?

                          1. 1

                            Yes, especially how the syntax ambiguities have a deep impact on the user.

                            To be clear, there are syntactic complexities that are worth it and that can be cleanly hidden from the user. This isn’t one of them, as we have experienced for a few decades now.

                            1. 1

                              FWIW I think that might be solvable on the parser’s end, at least for a java-ish syntax.

                              Source: I wrote a parsing library

                              1. 1

                                Java has not-so-nice user-facing syntax to work around issues, like foo.<Bar>baz().

                                In the end you can solve pretty much everything with unlimited lookahead (see C#); the question really is why you would, when there are much better options available.

                          2. 1

                            Oh, I see. That does seem inconsistent … I just noticed that Nim makes the same mistake. I’m actually happy for scalar primitives like ‘int’ to be lowercase, but if Box and List are capitalized then String should be too.

                            IIRC, angle brackets are troublesome for the lexer because of ambiguity with other tokens. I’m not aware of ways they’ve caused problems (since C++11) for users of languages, who vastly outnumber compiler writers.

                            1. 1

                              I believed that until recently, when Vale found a really elegant solution for disambiguating binary operators from generics: require binary operators to have a space on each side; otherwise the angle brackets are parsed as generics. With that one adjustment (which is already required by a lot of major style guides), all the ambiguity disappeared. It’s nice, because then Vale can use square brackets for other things.
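
                              The rule itself is tiny; a toy, byte-level version (my sketch of the rule as I understand it, assuming ASCII source, not Vale’s implementation):

                              // '<' with a space on each side is the less-than operator;
                              // otherwise it opens a generic argument list.
                              fn opens_generics(src: &[u8], i: usize) -> bool {
                                  src[i] == b'<'
                                      && !(i > 0 && src[i - 1] == b' ' && src.get(i + 1) == Some(&b' '))
                              }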

                              I also like Rust’s solution (the ::<>), it may be a bit awkward-looking but it solves all the problems.

                              It may have been a costly disaster and inexcusable mistake a while ago, but nowadays because of these mitigations we can use angle brackets with no problems. That said, I’m not sure if Skew is employing these mitigations.

                              1. 1

                                I think introducing a (likely the only) whitespace-sensitive rule into a language to solve this issue ad hoc is not a good idea.

                                I also like Rust’s solution (the ::<>), it may be a bit awkward-looking but it solves all the problems.

                                I don’t; I think it’s rather embarrassing for a language to end up with 4 different syntax variations that are only valid in specific contexts.

                                But even if <> did not require such workarounds, I still wouldn’t want to use it, because <> as brackets just reads poorly (especially compared to []).

                                If fate hadn’t forced C++ down this route, would we “invent” <> today? I’d say we wouldn’t.

                                Let’s just let generics with <> die. There is no reason to use them.

                                1. 1

                                  Your claim about readability is subjective. I find <> just as readable as [], and before that, <> was the more readable of the two because it was more familiar. Familiarity is important for readability, especially for a new language. Not many languages use [].

                                  We use whitespace sensitivity all the time, in every language I’ve seen. It’s everywhere once you start looking. For example, “void myfunc() { … }” needs a space between void and myfunc. In our case in Vale, it was an easy choice because having spaces around binary operators is more readable anyway (IMO).

                                  There’s a big reason to use <> over []: the opportunity cost for []. In Vale, once we freed those up, we were able to use [] for tuples.

                                  With all that, it’s a slam-dunk case: <> is the better choice for a new language.

                                  1. 1

                                    < and > are used as binary operators for the first 15 years of everyone’s lives. Whatever it is, it’s not “familiar” to overload them with a different meaning.

                                    With Go joining Eiffel, Scala, Python and Nim, I think the tide is turning anyway. (Similar to how ident: Type replaced Type ident earlier on.)

                                    Being able to use [] for random things is not a plus either. Using () for terms and [] for types is easy and consistent. Other approaches aren’t.

                                    In the end, languages using <> have a steep hill to climb for me to consider them worth a second look.

                          3. 1

                            I think these notes are weak because you’re solely talking about syntax, which I believe is one of, if not the, least important choices in making a programming language. Memory management, runtime overhead, package management, et al. are far more important than using angle brackets for generics.

                            1. 4

                              Ah, the pseudo-intellectual “syntax doesn’t matter” trope.

                              I’d love to live in a world where syntax was good enough to not worry about it. Sadly we don’t live in that world.

                              And to be honest, looking at the syntax is a good way to discard languages early on. If the syntax is bad, the semantics won’t be great either.

                          1. 9

                            M1 is apparently a multi-die package that contains both the actual processor die and the DRAM. As such, it has a very high-speed interface between the DRAM and the processors

                            But it’s not like they’re using HBM on an interposer, this is basic dual channel (LP)DDR4(X) whatever, which doesn’t magically become much faster if you put it on the package. They could clock it a bit higher but no vendor is going full on overclock mode 4800MT/s CL16 out of the box :D

                            The benefit of sticking to RC is much-reduced memory consumption

                            But that’s only for regular app object junk. 8K video frames, detailed 3D scenes, machine learning matrices, and gigantic ELF files full of debuginfo won’t become smaller due to this difference, and on a big system machine with serious usage, memory is mostly occupied by that kind of stuff.

                            uncontended acquire-release atomics are about the same speed as regular load/store on A14

                            Ooh, that’s cool.

                            1. 6

                              this is basic dual channel (LP)DDR4(X)

                                How do you know? After all, alternative memory interfaces like HBM exist, and Apple specifically claims “high-bandwidth, low-latency memory.”

                              But that’s only for regular app object junk.

                              Right, but it’s the “regular object junk” that’s problematic.

                              8K video frames, detailed 3D scenes, machine learning matrices

                              …tend to be nicely contiguous blobs of bytes that can be moved from the (2x faster) SSDs to memory really, really quickly. > 2GB/s on my current Intel MBP.

                              ELF files full of debuginfo

                              Mach-O, not ELF. Those binary files tend to not be all that large, and debug information is stored externally (and even if internal, it’ll be in a section that would at most be mapped, but not read into memory until actually needed).

                              1. 1

                                How do you know?

                                First – the animation in the Special Event where they show a rendered mock-up of how the package is formed. Just checked again, the timestamp is 07:40. Even though it’s a mock-up, you can clearly see the standard shape of a DDR memory chip, itself already covered in its own black plastic with text on it. (HBM stacks look like bare silicon dies located within about a millimeter of the main die, connected by an interposer, not by a regular PCB. Look at an AMD Vega GPU, for example.)

                                Also the fact that the price has not increased compared to the previous models. (HBM is really expensive.)

                                Now let’s look at anandtech – of course, it is LPDDR4X indeed :) (upd: also the aforementioned slide is there, no need to open the video)

                                (upd: also yeah “8x 16b channels” – same width as the typical desktop 2x 64b)

                                Mach-O, not ELF. Those binary files tend to not be all that large, and debug information is stored externally

                                  I’m not on macOS these days, so I’m not talking about macOS specifics; I’m just giving examples of things consuming big memory on big computers.

                                The Firefox debug build’s libxul.so requires >14G memory to link, and the result is >2G (which is larger than the WebAssembly address limit so the new symbolicator used in the Firefox Profiler can’t load it, haha).

                                1. 1

                                  From the anandtech article:

                                  as well as doing a microarchitectural deep dive based on the already-released Apple A14 SoC.

                                    The micro-architectural discussion is of the A14, not the M1. And even that doesn’t say that the memory is any kind of DDR; in fact, the only mention of DDR I could find is in describing one of the comparison machines for the benchmarks.

                                  Firefox

                                  XUL is 110MB on macOS, and as I mentioned, debug information is kept to the side. I do remember running out of memory, a LOT, trying to link lldb on Linux (in a Docker container). There seem to be some pathologies in that neck of the woods, partly with code bloat and partly with linker performance. Friends helped me out with configuring the build and switching to a different linker (lld or gold), that solved the issue. Well, it was still “problematic”, but it did link.

                              2. 5

                                But that’s only for regular app object junk. 8K video frames, detailed 3D scenes, machine learning matrices, and gigantic ELF files full of debuginfo won’t become smaller due to this difference, and on a big system machine with serious usage, memory is mostly occupied by that kind of stuff.

                                  I currently run a Dell XPS 15 with 32 GB of RAM, and I would say that my app usage rarely contains anything you listed. Most of the RAM being used in my machine is regular app object junk, and seeing a large performance jump in managing these allocations would be very beneficial, to me at least.

                                While working, I almost always have these apps open:

                                • 1-3 projects open in IntelliJ
                                • 1-3 terminals
                                • Slack or Telegram
                                • TeamSpeak
                                  • 10-40 tabs open in Firefox

                                  On Fedora 32, these apps usually aren’t exhausting my resources, but I am very curious to see the performance difference with the shiny new M1.

                                1. 1

                                    IntelliJ is implemented in Java, which is equally memory hungry regardless of the OS. Similarly, Slack is an Electron app, which is Chrome under the hood (C++ and JavaScript). Firefox is also C++/JavaScript.

                                  I’m assuming all of those apps use Objective-C APIs (though how they interact with system APIs is really a blind spot for me), so there’s some impact there but I suspect the bulk of their memory use comes from things (Java objects, JS objects) that are similar in size regardless of the platform.

                                    Fwiw, both my work (macOS, 4 years old) and home (Linux, 3.5 years old) laptops currently have 16GB, and it’s mostly ok. Even with 100 Chrome tabs, I have a gig or 2 free. When I’ve run into problems, it’s been analyzing an 8+GB Java heap, or running IntelliJ + a Docker DB + a multi-gig batch process. Optimizing “ordinary app junk” won’t help those use cases (they all manage memory in a way that has nothing to do with Apple’s optimizations to their libraries).

                                  For a new laptop I’d plan on getting 32GB. I’m ok with what Apple released this week because I assume that something with 32GB will come sometime next year. But it doesn’t meet the needs of some developers.

                                  1. 1

                                    Really depends on what you’re doing, for me 16GB is a constant struggle and my buddy oomkiller annoys me to no end.

                              1. 1

                                {{ school }}: struggling with Astrophysics & Automata Theory. There are only 3 weeks left of this semester so it’s going to be pretty tricky to raise those grades!

                                {{ work }}: I’ve been put in charge of integrating a very large third-party codebase into our own. The main challenge is that the third-party code is insane. It has its own preprocessor which optionally uses dependencies depending on which version of the code you’re building. We have a lot of other external code that is imported through a source transformation pipeline, but I’ve had to modify that pipeline quite a bit to accommodate this. Overall, it’s an interesting challenge!

                                {{ side_projects }}: I wish I had time for them! I have a million ideas for my language Alox but I haven’t been able to implement any of them. After getting a job solely using Java, I’m tempted to rewrite what I have because I’m much faster at writing Java than Rust. The biggest challenge was describing the IR in a memory-safe yet mutable way. I ended up almost reimplementing an arena allocator (which I should have used from the start), and that definitely killed my drive to keep working on the code.

                                1. 17

                                  The complaints listed here for Rust are exactly what I have been struggling with!

                                  However, I can’t actually pattern match like this in Rust because I don’t have DLists, I have Arc<Dlist>, and you can’t pattern match through Arcs and Rcs and Boxes.

                                  When trying to create thread-safe recursive data types, I have struggled a lot with managing Arcs. Instead of normal pattern matching, you have to dereference the Arc within your match statement and then use ref to declare that you want to bind a reference to the data rather than move it (I think I’m interpreting that correctly?).

                                  match *instruction {
                                      Instruction::Jump(ref j) => {
                                          // j is bound by reference; the data is not moved out
                                      }
                                      // ... other variants
                                  }

                                  Want to make this mutable? Yikes! Shoving a Mutex in here makes things far more complicated, because that’s another dereference you have to do on every pattern match. Paul ended up doing this:

                                  match args.iter().map(|v| &(**v)).collect::<Vec<_>>().as_slice()
                                  

                                  which I would say is unreadable! I had a huge number of mutable concurrent data structures, and I ended up switching all of them to immutable ones, which simplified things greatly but had its own costs. I wrote this up thoroughly on my own blog.

                                  I’ve been writing Rust for quite a while now, enough to where I start thinking with the borrow checker and I know what structures/patterns won’t work before I write them, but I still hit walls pretty hard. Refactoring that codebase from mutable to immutable was very challenging in Rust, whereas in Java refactoring can be a lot faster. I feel like taking the wrong step in Rust can end up costing much more time compared to other languages, which ends up making prototyping difficult. (Non-concurrent, non-recursive prototyping is substantially easier)

                                  1. 9

                                    This is definitely an area that can be challenging! I think learning the AsRef and AsMut traits can be really useful here. In this case, Arc has an AsRef implementation that gives you an immutable reference to the contained type.

                                    Here’s an example from the Rust Playground of using as_ref on an Arc, along with Mutex::lock to block until the value inside the Mutex is available (to attempt locking without blocking, use Mutex::try_lock; if you have exclusive access and want a mutable reference without locking, use Mutex::get_mut).
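
                                    It goes along these lines (a sketch from memory, not the exact playground code):

                                    use std::sync::{Arc, Mutex};

                                    fn main() {
                                        let shared = Arc::new(Mutex::new(vec![1, 2, 3]));

                                        // AsRef gives an immutable reference to the value inside the Arc.
                                        let m: &Mutex<Vec<i32>> = shared.as_ref();

                                        // lock() blocks until the mutex is free and returns a guard.
                                        let mut data = m.lock().unwrap();
                                        data.push(4);
                                        println!("{:?}", *data); // [1, 2, 3, 4]
                                    }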

                                    1. 3

                                      Question: How does the Rust compiler itself solve these problems? Or does it not have to?

                                      1. 7

                                        The internals of the Rust compiler actually intern a lot of data, tying it to the lifetime of the memory arena containing it. So for type information (the most widely shared information), you’re working with lifetimes rather than Arc. Here’s a section of the rustc dev guide explaining: https://rustc-dev-guide.rust-lang.org/memory.html

                                      2. 3

                                        In my experience with Rust, the best way to write recursive data structures like parse trees is to simply reference nodes by their indices into the backing data structure (usually a Vec).

                                        struct Node {
                                            id: usize,
                                            parent: usize,
                                            children: Vec<usize>,
                                        }
                                        
                                        struct Tree {
                                            nodes: Vec<Node>,
                                            root: usize,
                                        }
                                        

                                        I imagine this isn’t quite as efficient as using a tonne of Arc<> and unsafe but it’s plenty fast enough for anything I’ve ever needed to write and it circumvents a lot of extremely ugly code.
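
                                        For instance, a hypothetical helper for growing the tree stays entirely in safe code:

                                        impl Tree {
                                            // Allocate a node in the backing Vec and wire it to its parent.
                                            fn add_child(&mut self, parent: usize) -> usize {
                                                let id = self.nodes.len();
                                                self.nodes.push(Node { id, parent, children: Vec::new() });
                                                self.nodes[parent].children.push(id);
                                                id
                                            }
                                        }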

                                        1. 2

                                          Seems like a significant compromise on type safety though.

                                          1. 1

                                            That can be mitigated with something like:

                                            type NodeID = usize;
                                            
                                            1. 4

                                              That’s a type alias. You want a newtype, like this:

                                              struct NodeID(usize);
                                              
                                              1. 1

                                                Sorry, yes, you’re quite right.

                                      1. 53

                                        TLDR - Drew hates the modern web, gripes about Mozilla management, and wants the scope of web browsers to be reduced.

                                        Edited to remove snarkiness.

                                        1. 38

                                          I’m forgiving the rantiness because I’m upset and angry about the Mozilla thing too. What I’m taking from this:

                                          1. Google is actively saturating the web with complexity for its own ends, one of which presumably is creating an insurmountable barrier to entry into the browser market
                                          2. All the other existing players but Google clearly have no interest in trying to stay in that market
                                          3. The resulting mess isn’t doing anyone but Google any good

                                          Of course it’s possible my interpretation is somewhat motivated. ;)

                                          I don’t think this really adds any factual information to the pool, but opinion pieces aren’t always a bad thing, and it could start some important conversations about the future of the open web.

                                          1. 18

                                            Mozilla just fired everyone relevant

                                            It feels like Mozilla management finally accomplished what Microsoft and Google tried unsuccessfully for decades:

                                            Burning the company to the ground.

                                          2. 6

                                            Exactly, I feel like this isn’t saying anything that Drew hasn’t already said in the past.

                                            1. 10

                                              And he is right in every word.

                                            1. 4

                                            Whenever these articles come around, there are always comments by people who specialize in the language, or at least know it well enough to point out obvious mistakes. When writing a comparison of languages, does it not make sense to get input from people who know each language really well?

                                            Also, I hate the conclusions. This isn’t saying anything that hasn’t already been said. They read like the front pages of the languages’ respective landing pages. I don’t think “If I build exceptionally/mostly for Linux” is a good reason to pick Go as a language; this entirely misses its strengths. I primarily write Rust on Linux and, even when doing very advanced build configurations, I’ve never run into an issue so big I’d switch to Go.

                                              1. 10

                                              I don’t understand all the hate. The OP states he wants to write about his experience learning two new languages. This should be read as a post-mortem of impressions and lessons learned along the way, not as an olympic-level competition over which language is better. Yes, the author missed Go modules and used async functions instead of threads, but those are normal mistakes for someone picking something up for the first time. This post is a diary of their journey, more akin to a travelogue with some recommendations in it. I think that experts coming here just to point a finger at it and shout “YOU’RE DOING IT WRONG!” are approaching this kind of post wrongly.

                                                1. 3

                                                  When writing a comparison about languages, does it not make sense to get input from people who know the language really well?

                                                  I think it’s fine to write about one’s experiences on a personal blog. It’s a good mechanism for getting thoughts and feedback from others — either the specialists you mention, or just your friends and local community. I don’t think there needs to be an expectation that you hunt down experts in each language and get feedback before publishing on your own website.

                                              This is the first post on a new blog, by someone who describes themselves as a self-taught developer. I doubt this is intended to be the definitive comparison article between Go and Rust. ;-) It’s simply a post about one developer’s experience implementing the same project in two languages.

                                                1. 23

                                                  There are also two environment variables we need to know

                                                  Uhhh… Kinda not really. You don’t need to know about GOROOT or GOPATH or virtualgo.

                                                  Something like this will work.

                                                  $ cd $HOME
                                                  $ mkdir hashtrack && cd $_
                                                  $ go mod init github.com/cuchi/hashtrack
                                                  $ go get github.com/Laisky/graphql
                                                  

                                                  All your code and deps get scoped to the module.

                                                  Go doesn’t have a package manager

                                                  Oookaay, this is just wrong. It might help to read up on Go Modules. Go’s official package manager is built into the go tool. https://blog.golang.org/using-go-modules

                                                  or official registry

                                                  Kinda. There is an official central place to search for modules: https://pkg.go.dev/

                                                  1. 3

                                                    Go doesn’t have a package manager

                                                    Oookaay, this is just wrong. It might help to read up on Go Modules.

                                                    When did this happen? It is news to me (but then again I only try Go every now and then :-)

                                                    1. 2

                                                      Go 1.11. Since it wasn’t proposed until 2018, a lot of people who tried the language years ago aren’t aware of its existence.

                                                      1. 5

                                                      I think the author is mostly just confused about the GOPATH/module situation. Modules are not really a “package manager” in the same sense that npm or whatnot is (it’s better!), and there’s a lot of “legacy” advice and documentation across the internet. Never mind that the tooling is rather weird as well, with things behaving in different modes depending on how/where you invoke them.

                                                        It’s all pretty confusing especially for newcomers; I’ve seen quite a few people get confused by it. I started writing a comment about this last night, but ended up just sending it as an email to the article’s author.

                                                  1. 16

                                                    Here is the issue where this all started. Here is the library documentation that had the error, and you can now see it’s fixed and links to git.sr.ht. I feel like I’ve seen Drew harass OSS developers on GitHub far too often.

                                                Here is the actual source code that contains the regexp filters for domains. One thing to note here is that it assumes any site beginning with gitlab., gitea., or gogs. is a valid instance of those git services, which is in direct contrast to what Drew tells us. He paints this as a malicious attack on decentralized git instances, when you are free to name your git instance appropriately or submit a change that adds your domain. This seems very standard for the service pkg.go.dev is providing, and I don’t think this flamebait is useful.

                                                    1. 8

                                                      you are free to name your git instance appropriately

                                                      s/free/constrained/

                                                      This assumption doesn’t hold in practice. See e.g. salsa.debian.org or source.puri.sm.

                                                      or submit a change that adds your domain

                                                  This doesn’t really fly. If I host a small instance of GitLab/sr.ht/Gitea, I don’t want to chase down every piece of language-specific tooling that requires a special case for my instance, and then wait for the changes to propagate in releases, etc.

                                                      1. 5

                                                    The entire thing is just that the source code links don’t work for self-hosted instances; this is not great, but certainly quite a bit different from the entire repo not working, as the article makes it out to be. When I pointed this out on HN last night I got handwaving “but Google sucks!” replies first, and a slew of personal insults when I pressed the matter.

                                                        This is the kind of attitude these things are written with… meh. But hey, “Google bad” confirmation bias makes it get to the front page anyway 🤷‍♂️

                                                    There is some valid criticism here, as the entire way this works is kinda hacky and confusing; but it worked like that on godoc.org too, which was originally developed outside of Google. It might be a good idea to improve on that, but I can’t blame the devs for continuing with what’s already here, rather than taking the initiative to develop an entirely new specification for this. If you feel such a thing is vitally important then maybe create it?

                                                        I’ve got a few other gripes with pkg.go.dev as well, but wrapping it all in an article like this isn’t helping anything. Indeed, if anything it’s going to put the developers in the defensive.

                                                        I feel like I’ve seen Drew harass OSS developers on GitHub far too often.

                                                        Well, that was a fun read 🙃 (that wm4 fella doesn’t seem … very pleasant … either)

                                                        1. 1

                                                          Thanks for sharing this. I’ve always had a feeling about the way he communicated and expressed himself, but this made it a lot more clear!

                                                          1. -5

                                                            Thanks, that made me uninstall mpv.

                                                            Looking for a new “simple” video player that doesn’t shit into my $HOME now!

                                                          1. 2

                                                            Chapel took the longest total time but consumed the least amount of memory (nearly 6GB RAM peak memory)

                                                            D consumed the highest amount of memory (around 20GB RAM peak memory) but took less total time than Chapel to execute

                                                            Julia consumed a moderate amount of memory (around 7.5 GB peak memory) but ran the quickest

                                                  What an interesting find! Just looking at the graphs, Chapel doesn’t look like an amazing contender: it’s quite close, but not turning any heads. With the memory consumption, however, this becomes much more interesting! The Chapel code came out very readable (imo, looking only at the calculateKernelMatrix example) and ended up using 14 GB less memory than its D counterpart, which looked very similar. I’m more familiar with Julia and D, so I’m interested to see what language features are at play in optimizing the Chapel code for speed and memory.

                                                            1. 1

                                                    Remember that Chapel is designed to let your code run on everything from multicore machines to clusters. Its competition was stuff like C++ with MPI, with the user juggling code and data over potentially thousands of CPUs. Chapel was designed for high productivity, with performance as close to hand-tuned code as the compiler can get. Other contenders were the X10 and Fortress languages.

                                                            1. 1

                                                    I love using Rust for parts of my infrastructure toolchain like this! For a recent work project I wrote a configuration builder that took configuration-template repos and constantly updated running servers with the current configs, and writing it in Rust was a breeze. Tools like this could be written in Python or shell, but I like the type-level safety that Rust provides here.

                                                              1. 6

                                                                I feel like a lot of what is being discussed here has already been talked about at length in various other posts. Is it not odd that there seems to be a collection of C/C++ users who are misrepresenting Rust’s capabilities? Steve already talked about this in his post You can’t “turn off the borrow checker” in Rust which is mentioned in this article. I’ve seen many false statements across Reddit, HN, Discord, etc, that could easily be resolved by reading the documentation. What is causing this? It’s not like Rust’s documentation doesn’t spell out what it restricts.

                                                                All Rust checks are turned off inside unsafe blocks; it doesn’t check anything within those blocks and totally relies on you having written correct code.

                                                      This is objectively false! Granted, the original video is in Russian, but if you’re giving a talk about Rust it seems like it would make sense to learn what unsafe actually does before presenting your idea of it as fact.

                                                      My greater question is: why does this happen so much? Am I disproportionately seeing more false comments about Rust than most people, or is there a real issue here? In contrast, people voicing their opinions on Go ground them in Go’s actual flaws: lack of generics, error handling, versioning, et al. are mentioned. But when it comes to Rust, the argument shifts. Rust has flaws, and they are discussed, but there is quite a lot of misrepresentation, IMO.

                                                                1. 14

                                                                  It seems like a fairly normal human reaction, I think. People have invested large portions of their life towards C++ and becoming important people in C++ spaces. In that group of people, most are deeply sensible geeks that have reasonable reactions to Rust. But there will be some that have their own egos tightly coupled with C++ and their place in the C++ community, that see the claims made by Rust people as some form of aggression - attacking the underpinning of their social status.

                                                                  And.. when that happens, our brains are garbage. Suddenly the most rational person will say the most senseless things. We all do this, I think.. most of us anyway. Some are better than others at calming down before they find themselves with all the lizard brain anger organized on a slide deck, clicking through it on stage.

                                                                  1. 2

                                                          While I love this explanation, I do want to point out the complexity and length of the list of actions one must take to build a misleading slide deck and speak on stage about it with absurd confidence.

                                                                    1. 1

                                                            Hm, that might be true. I think this also happens to a lot of the people attacking GraphQL; they do not want to accept an alternative to REST.

                                                                    2. 6

                                                          I think these are different crowds: people who use Go instead of X vs. C/C++ people looking into Rust. Based on my very limited experience talking to C/C++ developers, they get this sort of Stockholm syndrome when it comes to programming languages, and they always try to defend the shortcomings of their favorite language. UB is fine because… Overflows are fine because… They do not see any value in Rust because their favorite language has it all. I do not know that many Go developers, but the ones I know are familiar with the shortcomings of Go and do not try to downplay them. All of this is anecdotal and might not represent reality, but it’s one potential explanation of what you observed.

                                                                    1. 3

                                                                      Spending a birthday in quarantine and enjoying the end of my sophomore year! Only two left (bar grad school)!

                                                                      I’ve been reading a million papers on dependent type theory and I’m planning out building an implementation of my own. I have a few ideas of how to make it run efficiently, but we’ll see if they stick.

                                                                      1. 0

                                                                        Go is garbage collected, rust is not. That means rust is faster than go, right? No! Not always.

                                                                        Why does it mean that, generally?

                                                                        1. 2

                                                          The assumption is that if you have a garbage collector stopping the world a lot, your application is going to run slower. In small benchmarks like this (although this one was broken), it impacts the perceived performance quite a bit. In practice, while it’s debated, languages with GCs do achieve high performance and can reach speeds similar to manually managed languages. The JVM, Julia, & Go are all great examples of this.

                                                                          1. 0

                                                                            Yes, GC has costs, but so does reference counting. Making such a blanket statement is just misinformation.

                                                                            1. 4

                                                              My (non-expert) understanding is that “GC is slow for managing heap memory” is a misconception, and that in fact the opposite is true – for allocation-heavy workloads, modern GC schemes are faster than traditional malloc/free implementations.

                                                              However, languages without tracing/reference-counting GC tend to use the heap more frugally in the first place, and that creates the perf difference in real-world programs.

                                                                              1. 1

                                                                                The Go compiler does escape analysis and can (to some extent) avoid putting things on the heap if they do not have to be.

                                                                              2. 3

                                                                                But rust doesn’t do reference counting most of the time.

                                                                                1. 2

                                                                Rust only does reference counting when you ask for it by typing Rc, which you generally do yourself.
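
                                                                A quick illustration (my own example, not from the thread): the counts only exist because the code opted into Rc.

                                                                use std::rc::Rc;

                                                                fn main() {
                                                                    let a = Rc::new(String::from("shared"));
                                                                    let b = Rc::clone(&a); // bumps a refcount; no deep copy
                                                                    println!("{}", Rc::strong_count(&a)); // 2
                                                                    drop(b);
                                                                    println!("{}", Rc::strong_count(&a)); // 1
                                                                }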

                                                                            1. 34

                                                              This allows you to use them independently and compose them however you please, which maps well onto the reality of how many projects are organized. Compare this to platforms like GitHub, GitLab, Gitea, and so on, where resources like bug trackers and pull requests map 1:1 to a git repository.

                                                              I think this is a great leg up on the competition. Having repositories be 1:1 with issue trackers always annoyed me on GitHub and resulted in hacky workarounds like sending issues between repositories or having one repo act as “the issue tracker repo”.

                                                                              1. 4

                                                                This is awesome, especially for mirror repositories and stuff like that.

                                                                              1. 1

                                                                 https://jadon.io/blog - I write about programming language design & type theory. I’m working on a set of articles introducing computer science topics without the jargon. So far I have only completed a few, but by the end I hope to have a lot of articles explaining deeper topics like dependent types, CoC, Homotopy Type Theory, etc. I also write about my own language-design adventures.

                                                                                1. 3

                                                                   I must be getting old; words do not have the same meaning anymore. “Reverse engineered” = reading src? …

                                                                                  1. 1

                                                                                    That’s exactly what I was thinking. This isn’t reverse engineering in any sense that I’ve seen that term used. The real title for this post is Rewriting the LastPass CLI in Rust.