1. 8

    I’m not too convinced it has been misnamed. When referring to engineering with tools given to us by Computer Science, I’ve heard most people just say Software Engineering. Keeping “science” in the name highlights all of the (very much non-engineering) theoretical scientific parts of CS: automata theory, type theory, PLT, etc., which of course have their applications within Software Engineering (do Set Theory & Category Theory fit here? I usually throw them in with the “theoretical CS topics” I study).

    1.  

      If anything, I think the discipline has been misnamed for the very opposite reason to the one the author of the article suggests. There are of course exceptions, because that’s how language works, but I generally interpret “science” as describing a discipline that seeks to develop increasingly accurate models of empirical phenomena via experimentation. Those theoretical constructs you mention don’t require any reference to the empirical, because like any purely mathematical object, their development proceeds according to the implications and constraints of logic. Something like “computational mathematics” seems more apt as a descriptor of that line of study, and the study of how those objects inform the development of real computer systems seems best described as “engineering,” a field of inquiry that’s clearly related but has distinct aims in terms of which kinds of investigation it prioritizes.

      I wonder if the semantic ambiguities here might be due to some latent sense among practitioners that being an “applied” discipline makes that discipline lesser, which I think is an easier fiction to maintain when even the engineered artifacts of software seem to exist without recourse to the physical world (of course they do in practice, since some hardware needs to store and run that software, but it’s easy to pretend that that’s not the case).

      1.  

        The question of P and NP is, on its own, enough to justify the word “science”. We have only the barest idea of how to approach the problem mathematically, and most of our progress has been empirical. Quoting Aaronson:

        I like to joke that, if computer scientists had been physicists, we’d simply have declared P ≠ NP to be an observed law of Nature, analogous to the laws of thermodynamics. A Nobel Prize would even be given for the discovery of that law. (And in the unlikely event that someone later proved P = NP, a second Nobel Prize would be awarded for the law’s overthrow.)

    1. 1

      One of my favorite features of Koka is the Perceus reference counting that lets you write code in a functional, immutable style while maintaining performance by reusing unique references.
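
      A rough analogue of that reuse in Rust (my own sketch, not Koka code): Rc::make_mut mutates the value in place when the reference count is 1 and clones only when the value is shared, which is loosely the kind of reuse Perceus performs automatically.

      use std::rc::Rc;

      // If the Rc is the only reference, make_mut hands back a mutable borrow of
      // the existing allocation; otherwise it clones the Vec first.
      fn bump_all(mut xs: Rc<Vec<i64>>) -> Rc<Vec<i64>> {
          for x in Rc::make_mut(&mut xs).iter_mut() {
              *x += 1;
          }
          xs
      }

      fn main() {
          let unique = Rc::new(vec![1, 2, 3]);
          let bumped = bump_all(unique); // refcount 1: updated in place
          println!("{:?}", bumped);

          let shared = Rc::new(vec![1, 2, 3]);
          let keep = Rc::clone(&shared);
          let bumped_copy = bump_all(shared); // shared: the Vec is cloned first
          println!("{:?} {:?}", keep, bumped_copy);
      }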

      1. 1

        I just bought the HoTT book! I’m very excited to read it. There is a HoTT library for Coq and the Arend Theorem Prover by JetBrains which has HoTT built in for those that want something other than Agda.

        1. 1

          Tangentially related is an amazing optimization for virtual calls in ART on Android 8.0, which devirtualizes calls if only one implementation exists (and then possibly inlines it!). I’m unsure if OpenJDK has a similar optimization, but the space for optimizing JVM implementations is still very open.

          1. 2

            Single implementation devirtualisation has been standard practice for all Java JITs for all time - because of the “everything is virtual by default” decision at the core of the language.

            Even .NET does it, despite virtual dispatch not being the default in its primary languages, since there are enough high-frequency calls that are technically overridable but never overridden in practice.

          1. 1

            I wonder how this compares to XLA which uses the same LLVM NVPTX backend but also supports compiling to CPUs. On top of that, is it feasible to target GPUs other than Nvidia’s or is everything else really too slow to care about?

            1. 8

              It’s nice to see languages using QBE, and even better to see that it’s still in (slightly) active development. I’m hoping for more projects like QBE to pop up, or for more languages to follow in Zig’s footsteps and make LLVM an optional dependency by rewriting the parts they need for their specific use case, which for Zig I believe will help with cross-compiling and is already helping with linking.

              1. 1

                There are a lot of people theorizing that this is directly related to recent changes in the Minecraft codebase.

                For many people working in the Minecraft ecosystem, these changes are long overdue! The game has been stuck on an old build of Oracle JRE 8 for years to maintain compatibility with older systems. Many projects (Paper, Velocity) have already stated they’re upgrading to Java 11 for the coming Minecraft release.

                1. 1

                  No, the reason we went with go was because golang’s packaging is so much better than node. Or java. Or C#. We get a single binary.

                  At a previous job, we had many Java servers with jar plugins that required manual scps to the OVH boxes.

                  Building was as easy as:

                  mvn package
                  

                  Testing was as easy as scping the jar to the test box and testing functionality manually. I’ve written many unit tests and integration tests that look like they account for everything you would ever need, but that facade fades as soon as you push it to production and customers instantly find bugs you never thought possible.

                  It sure would be nice if we could spend a couple weeks building the perfect CI/CD pipeline, an elegant deployment process, along with pretty dashboards. But we have software we need to ship in order to get users in order to drive subscriptions.

                  CI/CD seems to be one of the first things I do when making a new project, open source or private. If your deployment process sucks, then it’s going to be extremely challenging to patch bugs in prod.

                  we have a workout video app – we don’t have the same scalability problems as Twitter

                  The current product I’m working on was in a very similar situation. When I started, our servers had very few users and very few transactions to process. Scaling this >100x in the last year has taught us a lot of things, including that you never know when you’ll need to scale. Shit hits the fan, and then you need to upgrade a box or make your server distributed as quickly as possible.

                  1. 1

                    From what I read in the article, it doesn’t seem like they would have a tough time scaling. They run a single static binary server with no server-side state. They could set up a deploy to replicate that to a massive number of instances if needed, and put a load-balancing proxy in front of it. This is bread-and-butter stuff, not exactly rocket science.

                  1. 4

                    In my opinion, it seems a bit unfair to post this here after Drew was banned from this forum.

                    1. 4

                      Why? Their blog posts are shared often enough here as it is, and authors rarely get right-of-reply here anyway.

                      1. 2

                        The blog posts aren’t going to be shared here anymore (the domain was banned too). I generally don’t think it would be appropriate to have a story on this site whose primary intent is to criticize someone who can’t respond.

                        The point about authors not getting a right to reply is a fair one, but I’d counter that one would generally extend an invitation to the person being criticized, should they want a chance to respond.

                      2. 4

                        Drew was banned? What happened?

                        1. 6

                          29 hours ago by pushcx: Please go be loudly disappointed in the entire world (and promote sourcehut) somewhere else.

                          I didn’t know this either! I’m going to miss his commentary on a lot of topics (and not miss his commentary on others). @pushcx would you care to elaborate on this? He seemed to have a pretty positive score on a lot of his comments, even though I didn’t personally agree with many of his opinions. Quickly ctrl-f’ing his comment page, I can only see one comment with a score of 0.

                          1. 7

                            When someone’s account is deleted (whether by themselves or by a ban), their negative-score comments are deleted. There were a lot of those from @drewdevault over a long period.

                            1. 2

                              Is there a particular reason his domain was banned? There are plenty of cranky open-source folks there with questionable use of language in their rants, and I don’t think he was any worse than, say, ESR.

                              1. 7

                                The reason for the domain ban is in the mod log:

                                Reason: I’m tired of merging hot takes, or cleaning up after the results of his trolling and sourcehut promotion.

                                1. 2

                                  Ouch. I had read the mod log earlier and was hoping for some clarification, but okay then.

                      1. 3

                        Here is the paper introducing Souper. The abstract describes an early result from using Souper:

                        Alternately, when Souper is used as a fully automated optimization pass it compiles a Clang compiler binary that is about 3 MB (4.4%) smaller than the one compiled by LLVM.

                        1. 3

                          Note that this won’t be true today, because LLVM developers look at Souper output and implement suggested optimizations. You benefit whether you use Souper or not.

                        1. 7

                          I LOVE the existence of classes like this. Regardless of politics, exploring new ways of organizing software is fascinating to me. I wish this course moved beyond the Urbit specifics and discussed more forms of “Martian Computing” such as Collapse OS, Mu, Nebulet, or redshirt. Cutting away legacy code and starting from scratch is an amazing topic that can be explored in a number of ways, and I’m glad classes like this are encouraging that exploration.

                          1. 1

                            Agreed. I’m obviously interested in what the Urbit project is trying to do, but there’s no reason there shouldn’t be other projects trying to rethink the foundations of personal computing as alternatives to the ultimately Unix-based world we (largely) live in today.

                          1. 4

                            “Yeah, we write out one big JSON object to a text file.”

                            “How? When? What?”

                            Obviously “insane” decisions like these always surprise me. Hindsight is 20/20, but while building software I try my best to focus on the correct solution rather than the fastest solution. This is of course at odds with my manager, who would rather I get it out by Friday. I’m constantly reminded of a comment @ddevault made regarding this dichotomy (relevant lobsters post):

                            The culture of doing the wrong thing because the right thing is harder really, really bothers me about the modern software development ethos. We used to care about doing things right. It has nothing to do with being stretched thin - if the throughput doesn’t change, it just takes a bit longer to finish. Well, sometimes the correct solutions take more time.

                            I think about this a lot because, as a “Software Engineer”, my job is to engineer solutions to engineering problems. The more I see engineers learn, the more I see them want to do the “right thing” over the “wrong thing”, even if “it just takes a bit longer to finish.” But at the same time, as the product and the demand for updates grow larger and larger, the line between “right” and “wrong” technical decisions gets blurred.

                            A couple of weeks later, a customer wanted to try it out. I wasn’t ready to commit to the data model yet and do it properly in SQL, so I took a shortcut: the object holding all the data was wrapped in a sync.Mutex, all accesses went through it, and on edit the whole structure was passed to json.Marshal and written to disk. This gave us data model persistence in ~20 lines of Go.

                            The plan was always to migrate to something else but, uh, we got busy with other stuff and kinda forgot.

                            Was this a bad decision? From a software engineering perspective, I would say yes. This data should have almost certainly been shoved in a database. JSON files as databases don’t scale and are susceptible to a number of other issues. From a business perspective, this might have been the “right choice.” The customer was able to try out the product faster than if time had been taken to use a real database.
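
                            For concreteness, here is a rough sketch of the pattern the quote describes (my own sketch, in Rust rather than the original Go, and assuming the serde derive feature and the serde_json crate): the whole data model sits behind one Mutex, and every edit rewrites a single JSON file.

                            use std::fs;
                            use std::sync::Mutex;

                            use serde::{Deserialize, Serialize};

                            #[derive(Default, Serialize, Deserialize)]
                            struct Model {
                                users: Vec<String>,
                            }

                            struct Store {
                                path: String,
                                model: Mutex<Model>,
                            }

                            impl Store {
                                // Every mutation takes the lock and rewrites the whole file.
                                fn edit(&self, f: impl FnOnce(&mut Model)) -> std::io::Result<()> {
                                    let mut model = self.model.lock().unwrap();
                                    f(&mut *model);
                                    let json = serde_json::to_string_pretty(&*model).expect("model is serializable");
                                    fs::write(&self.path, json)
                                }
                            }

                            fn main() -> std::io::Result<()> {
                                let store = Store {
                                    path: "data.json".to_string(),
                                    model: Mutex::new(Model::default()),
                                };
                                store.edit(|m| m.users.push("alice".to_string()))
                            }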

                            There are, of course, a number of things we can do to balance this, but I think it’s interesting that this problem is ubiquitous among software engineers, and as Drew puts it, part of “the modern software development ethos.”

                              1. 1

                                Ah, you are correct! I must have missed that in the new stories.

                                1. 3

                                  IMO this link is better than the one I posted, since this is a full write-up of the script.

                              1. 5

                                This seems like as good a time as any to consolidate my thoughts on Tree Notation.

                                Classically, we have had SUNSA technology in the form of combinator logic. There are proofs that the S and K combinators alone are Turing-complete. If we want to represent shapes as well, though, then Tree Notation has one obvious flaw: it is only two-dimensional at most. If we want arbitrary dimensionality, then we need opetopes. Short for “operation polytope” and pronounced with two different “o” sounds, the advantage of opetopes is that every way to compose objects is an opetope, even if the objects have high dimensionality. See Higher-Dimensional Categories (p63) for illustrations. This approach was recently made viable; we have syntax and type theory which are computable, along with the Opetopic graphical proof assistant.
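
                                To make the combinator-logic point concrete, here is a minimal sketch of S and K and their two rewrite rules (my own illustration in Rust, not tied to Tree Notation or to opetopes):

                                #[derive(Clone, Debug, PartialEq)]
                                enum Term {
                                    S,
                                    K,
                                    App(Box<Term>, Box<Term>),
                                }

                                use Term::{App, K, S};

                                fn app(f: Term, x: Term) -> Term {
                                    App(Box::new(f), Box::new(x))
                                }

                                // One leftmost-outermost reduction step; None when in normal form.
                                fn step(t: &Term) -> Option<Term> {
                                    if let App(f, x) = t {
                                        if let App(g, b) = f.as_ref() {
                                            // Rule 1:  K b x  ->  b          (here g is K)
                                            if **g == K {
                                                return Some((**b).clone());
                                            }
                                            // Rule 2:  S a b x  ->  (a x) (b x)   (here g is App(S, a))
                                            if let App(h, a) = g.as_ref() {
                                                if **h == S {
                                                    return Some(app(
                                                        app((**a).clone(), (**x).clone()),
                                                        app((**b).clone(), (**x).clone()),
                                                    ));
                                                }
                                            }
                                        }
                                        // Otherwise keep reducing inside the application.
                                        if let Some(f2) = step(f) {
                                            return Some(App(Box::new(f2), x.clone()));
                                        }
                                        return step(x).map(|x2| App(f.clone(), Box::new(x2)));
                                    }
                                    None
                                }

                                fn main() {
                                    // S K K behaves as the identity: ((S K) K) t reduces to t.
                                    let mut t = app(app(app(S, K), K), app(K, S));
                                    while let Some(next) = step(&t) {
                                        t = next;
                                    }
                                    println!("{:?}", t); // App(K, S), i.e. the argument we passed in
                                }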

                                As discussed previously here on Lobsters, there are existing experiments to encode legal principles into expert-system-style libraries for logical languages. I have talked with startup founders whose aim is to improve legibility of various legal codes by formalizing them with declarative rules. The principle is not new. Reading lines like:

                                I dream of a world where … when I want to start a business I don’t have to fill out forms in Delaware (the codesmell in that last one is so obvious!).

                                I’ve incorporated an LLC in Oregon before. I did not have to fill out forms in Delaware. Incorporation in Delaware is done because the Delaware loophole makes it a tax haven. Indeed, nobody needs to fill out forms in Delaware, unless they intend to deliberately (legally) avoid taxes. This anecdote is germane to the idea that Tree Notation is suited for real-world applications; the actual pragmatics of the notations that we currently use are very different from the idealized descriptions given as Tree Notation. Or, in fewer words, Tree Notation won’t stop people from incorporating in Delaware.

                                Let’s address your predictions.

                                Prediction 1 is falsified by Iota and Jot, a pair of bitstring languages which are Turing-complete; Iota encodes a one-combinator language, showing that one is the minimal number of combinators required for Turing-completeness.

                                Prediction 2 seems strange given how long two-dimensional syntax has existed in popular languages. Famously, Python and Haskell both have had two-dimensional syntax since the 1990s, and they use the same whitespace character as Tree Notation. Additionally, there’s a trivial one-dimensional syntax for each two-dimensional syntax which unravels it onto a single line with a designated line-break character, generalizing the idea which Tree Notation inherits that one-dimensional bytestrings in files can have newline characters interpreted as dimensional commas.

                                Prediction 3 will have to wait until Tree Oriented Programming can be formalized. But given how poorly the terms “structured programming”, “functional programming”, and “object-oriented programming” have been defined over the years, I’m tempted to see this as pure ego. Is this just programming with catamorphisms, à la recursion schemes?

                                Prediction 4 can’t be taken seriously. Terms are too nebulous. This smells like hype.

                                I dusted off some September 2019 notes on the hope that 70% of TIOBE languages will be founded on Tree Notation by 2027. The summary is that I predict 9 of the top 10 TIOBE languages for 2027, and then use that prediction to argue that the bet’s conditions will not be fulfilled. Languages take a long time to become popular and take over, unless they clearly fill a niche which didn’t exist before.

                                1. 4

                                  I apologize because I’m not going to spend time responding to the other points in your comment (will try to remember and come back later) because the link you provided to opetopes made my jaw drop. Somehow I missed this area of research—so many connections! I’ll be mining it for a bit. Thank you so much!

                                  1. 1

                                    My thoughts are still pending on opetopes, but while that settles I wanted to just say thanks again and touch on your other feedback.

                                    encode legal principles…The principle is not new.

                                    I agree and have read up on a lot of research in this domain, and even recently did a bit of advising with one of the big 5 who is kicking the tires here. What I proffer is something very relevant though: a new type of notation that includes an intrinsic ability to count complexity (https://jellypbc.com/posts/ezmntq-counting-complexity), meaning that different encodings of a legal principle will have different complexity measurements, and my hope is that it might provide less room for special interests to hide carve-outs. (It’s much easier to detect hop-ons when you have a scale.) You can then extend that to see that what I’m talking about with the Delaware situation is just an instance of the general pattern of unnecessary complexity embedded throughout our legal systems. So the only reason I think Tree Notation offers an OOM improvement over other tech in terms of laws is this intrinsic complexity measurement, which provides an objective measure of complexity. Nothing else has this, as in Tree Notation you can start with just the concepts of “.” and “” and build up binary to hex to English to law etc. through a succession of grammars, without ever leaving the dimensional syntax of Tree Notation.

                                    Prediction 1 is not falsified by Iota and Jot. You don’t get to start with any characters. You don’t get to assume anything. All you get is a sheet of grid paper and pen, and you need to build up all your assumptions from there. No imports. No includes. Nothing. You need to define binary, numbers, etc.

                                    Python and Haskell are not 2-D languages, despite having some indent-sensitive properties. A tiny fraction of languages, ~50, use indentation. But the ones that do use other things as well. They are not pure 2-D languages, and so you don’t get the guaranteed isomorphisms and all the neatness that comes with that. On the second point, while all the current implementations of Tree Notation are of course saved as 1-D bytestrings, that is just because of the limits of current 1-D register technology, and there are kinds of programs and computations you could perform using 2-D and 3-D post-register machines that cannot be done with existing languages (you could technically emulate them in 1-D, but the time constraints would make that impractical).

                                    Prediction 3. I do it already, but haven’t come up with the terms. Generally it seems that most of the patterns in object oriented programming boil down to “simplify your trees”.

                                    Prediction 4. I’ve provided no evidence here, but have worked it out in a few ways and am pretty confident in it, but this is more of a fun, far out one (though I’d be surprised if it turns out to be wrong).

                                    Wow! Your list of TIOBE predictions is amazing. I will have to come back to that one as another thread again (separate from opetopes). Thank you!

                                    1. 2

                                      You don’t get to start with any characters. You don’t get to assume anything. All you get is a sheet of grid paper and pen, and you need to build up all your assumptions from there.

                                      The strength of our metatheory is always going to tint our ability to consider the complexity of constructions, based on what we can build from when we “don’t get to assume anything” and have “nothing”. Here, our common ground is not just the pen and paper, but type theory. Iota and Jot provide examples of how to use around three symbols to encode Turing-complete syntax. Similarly, opetopic type theory can be reduced to three symbols.

                                      Python and Haskell are not 2-D languages

                                      A one-dimensional language has one-dimensional addressing. For example, traditional filesystem files are one-dimensional; they store a string of bytes, and each byte has a one-dimensional index. Common Python and Haskell implementations use two-dimensional indexing to address syntax, viewing source code as a ragged list of lines, with each line containing some dependent number of symbols. This is not just an editor convenience, but part of the definition of how these languages are parsed. I do not practice much Go, but I understand that its canonical formatting requires the parser to be able to faithfully represent and manipulate syntax in 2D.
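
                                      A tiny illustration of the two addressing schemes (my own sketch, in Rust):

                                      fn main() {
                                          // The same character addressed two ways: a 1-D byte offset into the
                                          // file, and a 2-D (line, column) pair over a ragged list of lines.
                                          let src = "def f():\n    return 1\n";

                                          let offset = 13;                               // 1-D address
                                          let lines: Vec<&str> = src.lines().collect();  // 2-D view
                                          let (line, col) = (1, 4);

                                          assert_eq!(src.as_bytes()[offset], lines[line].as_bytes()[col]);
                                          println!("{}", lines[line].as_bytes()[col] as char); // "r"
                                      }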

                                      1. 1

                                        Similarly, opetopic type theory can be reduced to three symbols

                                        Has this been done before? I’m super interested in opetopic type theory, but the literature for it has a little too much jargon for me to understand most of it.

                                        1. 1

                                          In these slides, the syntax only needs three symbols: [, ], and *. Roughly, these can be read as “go down into a new dimension”, “pop out of the current dimension”, and “mark a new address in the current dimension”.

                                        2. 1

                                          And I think “Iota” and “Jot” and “opetopes” and Tree Notation are probably mostly the same thing, just put in different ways. In my vocab, this (https://upload.wikimedia.org/wikipedia/commons/0/07/Game_of_life_pulsar.gif) is a Tree Language/Tree Program, as is this (https://upload.wikimedia.org/wikipedia/commons/a/aa/Rule30-256-rows.png). I think I’m just taking a new attack vector on old ideas, but in a way that leads to more practical utility.

                                          IIRC the Python pipeline removes the indentation in an early pass and inserts delimiting tokens in its place, but I may be mistaken. I can’t remember off the top of my head how Haskell does it.

                                      2. 1

                                        Fun to read through your 2027 predictions! I added some thoughts in bold https://gist.github.com/breck7/2220c6b6a3890af2c0a031531b4b1ae3

                                        For my bet to win roughly the following has to happen:

                                        1. The hypothesis that “2-D+ languages” are intrinsically 10x+ better has to be true (I’m confident but also gulp! :) )
                                        2. People have to make Tree Languages that “just work” and solve real problems
                                        3. They have to invent and launch those things with enough time for them to catch on (if nothing is catching on by December 2024, my bet is in trouble)
                                        4. TIOBE has to consider these new high-level languages for their list

                                        A tall order, but I think it could happen.

                                        1. 1

                                          Omfg, THANK YOU. These resources are amazing. I was already drowning in stuff to read, but this has jumped way up near the top of the priority queue.

                                        1. 0

                                          Here are my notes:

                                          • Not requiring semicola is good.

                                          • My least favorite parts are the mixed case types and generics with <>… in both cases I would have hoped that people would have stopped repeating these mistakes.

                                          • Not sure what to think about ident Type syntax.

                                          • The collection literals and the general amount of operator overloading is something I would personally avoid.

                                          • Having no “static” is good.

                                          • Replacing modifiers with annotations is good, I have done the same.

                                          1. 1

                                            I think these notes are weak because you’re solely talking about syntax, which I believe is one of, if not the, least important choices in making a programming language. Memory management, runtime overhead, package management, et al. are far more important than using angle brackets for generics.

                                            1. 4

                                              Ah the pseudo-intellectual “syntax doesn’t matter” trope.

                                              I’d love to live in a world where syntax was good enough to not worry about it. Sadly we don’t live in that world.

                                              And to be honest, looking at the syntax is a good way to discard languages early on. If the syntax is bad, the semantics won’t be great either.

                                            2. 1

                                              My least favorite parts are the mixed case types … I would have hoped that people would have stopped repeating these mistakes

                                              Are you really bikeshedding about CamelCase vs snake_case? I thought we were over that. 😬 Live and let live.

                                              As for generics brackets, familiarity can beat out grammatical convenience. Most people are used to angle brackets from C++, Java, Rust, etc.

                                              1. 4

                                                Are you really bikeshedding about CamelCase vs snake_case?

                                                No. I meant that bool, int, double and string are lower-case while StringBuilder, Box, List, etc, start with an uppercase letter. Looks like an oversight in any language created after 2000.

                                                As for generics brackets, familiarity can beat out grammatical convenience.

                                                According to that, cars should also be steered as if they were drawn by horses. It’s not about “convenience”, it’s that no one has figured out how to deal with <> without exposing the needless complexity of that choice to users.

                                                I prefer languages without needless complexity.

                                                Most people are used to angle brackets from C++, Java, Rust, etc.

                                                And it has been a costly disaster in all of them. Sure, let’s excuse C++ because it was the least bad option to retrofit generics into an existing language. But if you are creating a new language with generics, using <> is an inexcusable mistake.

                                                1. 1

                                                  Oh, I see. That does seem inconsistent … I just noticed that Nim makes the same mistake. I’m actually happy for scalar primitives like ‘int’ to be lowercase, but if Box and List are capitalized then String should be too.

                                                  IIRC, angle brackets are troublesome for the lexer because of ambiguity with other tokens. Not aware of ways they’ve caused problems (since C++11) for users of languages, who vastly outnumber compiler writers.

                                                  1. 1

                                                    I believed that until recently, when Vale found a really elegant solution to disambiguate binary operators from generics: require binary operators to have a space on each side, otherwise they’re generics. With that one adjustment (which is already required by a lot of major style guides), all the ambiguity disappeared. It’s nice, because then Vale can use square brackets for other things.

                                                    I also like Rust’s solution (the ::<>), it may be a bit awkward-looking but it solves all the problems.
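
                                                    A trivial illustration of the turbofish (mine, not from this thread): in expression position the type arguments need ::<…>, so < can never be read as the less-than operator.

                                                    fn main() {
                                                        // Type arguments in expressions use `::<…>` instead of bare `<…>`.
                                                        let n = "42".parse::<i32>().unwrap();
                                                        let squares = (1..4).map(|x| x * x).collect::<Vec<i32>>();
                                                        println!("{n} {squares:?}");
                                                    }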

                                                    It may have been a costly disaster and inexcusable mistake a while ago, but nowadays because of these mitigations we can use angle brackets with no problems. That said, I’m not sure if Skew is employing these mitigations.

                                                    1. 1

                                                      I think introducing a (likely the only) whitespace-sensitive rule into a language to solve this issue ad hoc is not a good idea.

                                                      I also like Rust’s solution (the ::<>), it may be a bit awkward-looking but it solves all the problems.

                                                      I don’t; I think it’s rather embarrassing for a language to end up with 4 different syntax variations that are only valid in specific contexts.

                                                      But even if <> did not require such workarounds, I still wouldn’t want to use it, because using <> as brackets is just poorly readable (especially compared to []).

                                                      If fate didn’t end up forcing C++ down this route, would we “invent” using <> today? I’d say we wouldn’t.

                                                      Let’s just let generics with <> die. There is no reason to use them.

                                                      1. 1

                                                        Your claim about readability is subjective. I find <> to be equally readable to [], and before that, <> was more readable because it was more familiar. Familiarity is important for readability, especially for a new language. Not many languages use [].

                                                        We use whitespace sensitivity all the time, in every language I’ve seen. It’s everywhere once you start looking. For example, “void myfunc() { … }” needs a space between void and myfunc. In our case in Vale, it was an easy choice because having spaces around binary operators is more readable anyway (IMO).

                                                        There’s a big reason to use <> over []: the opportunity cost for []. In Vale, once we freed those up, we were able to use [] for tuples.

                                                        With all that, it’s a slam-dunk case: <> is a better choice for a new language.

                                                        1. 1

                                                          < and > are used as binary operators for the first 15 years of everyone’s lives. Whatever it is, it’s not “familiar” to overload them with a different meaning.

                                                          With Go joining Eiffel, Scala, Python and Nim, I think the tide is turning anyway. (Similar to how ident: Type replaced Type ident earlier on.)

                                                          Being able to use [] for random things is not a plus either. Using () for terms and [] for types is easy and consistent. Other approaches aren’t.

                                                          In the end, languages using <> have a steep hill to climb for me to consider them worth a second look.

                                                    2. 1

                                                      You mean because of parsing ambiguity?

                                                      1. 1

                                                        Yes, especially how the syntax ambiguities have a deep impact on the user.

                                                        To be clear, there are syntactic complexities that are worth it and that can be cleanly hidden from the user. This isn’t one of them, as we have experienced for a few decades now.

                                                        1. 1

                                                          FWIW I think that might be solvable on the parser’s end, at least for a java-ish syntax.

                                                          Source: I wrote a parsing library

                                                          1. 1

                                                            Java has not-so-nice user-facing syntax to work around issues, like foo.<Bar>baz().

                                                            In the end you can solve pretty much everything with unlimited lookahead (see C#); the question really is why, when there are much better options available.

                                                1. 9

                                                  M1 is apparently a multi-die package that contains both the actual processor die and the DRAM. As such, it has a very high-speed interface between the DRAM and the processors

                                                  But it’s not like they’re using HBM on an interposer; this is basic dual-channel (LP)DDR4(X) whatever, which doesn’t magically become much faster if you put it on the package. They could clock it a bit higher, but no vendor is going full-on overclock mode 4800MT/s CL16 out of the box :D

                                                  The benefit of sticking to RC is much-reduced memory consumption

                                                  But that’s only for regular app object junk. 8K video frames, detailed 3D scenes, machine learning matrices, and gigantic ELF files full of debuginfo won’t become smaller due to this difference, and on a big machine under serious usage, memory is mostly occupied by that kind of stuff.

                                                  uncontended acquire-release atomics are about the same speed as regular load/store on A14

                                                  Ooh, that’s cool.

                                                  1. 6

                                                    this is basic dual channel (LP)DDR4(X)

                                                    How do you know? After all, alternative memory interfaces like HBM exist, and Apple specifically claims “high-bandwidth, low-latency memory” memory.

                                                    But that’s only for regular app object junk.

                                                    Right, but it’s the “regular object junk” that’s problematic.

                                                    8K video frames, detailed 3D scenes, machine learning matrices

                                                    …tend to be nicely contiguous blobs of bytes that can be moved from the (2x faster) SSDs to memory really, really quickly. > 2GB/s on my current Intel MBP.

                                                    ELF files full of debuginfo

                                                    Mach-O, not ELF. Those binary files tend to not be all that large, and debug information is stored externally (and even if internal, it’ll be in a section that would at most be mapped, but not read into memory until actually needed).

                                                    1. 1

                                                      How do you know?

                                                      First – the animation in the Special Event where they show a rendered mock-up of how the package is formed. Just checked again, the timestamp is 07:40. Even though it’s a mock-up, you can clearly see the standard shape of a DDR memory chip, itself already covered in its own black plastic with text on it. (HBM stacks look like bare silicon dies located within about a millimeter of the main die, connected by an interposer, not by a regular PCB. Look at an AMD Vega GPU, for example.)

                                                      Also the fact that the price has not increased compared to the previous models. (HBM is really expensive.)

                                                      Now let’s look at anandtech – of course, it is LPDDR4X indeed :) (upd: also the aforementioned slide is there, no need to open the video)

                                                      (upd: also yeah “8x 16b channels” – same width as the typical desktop 2x 64b)

                                                      Mach-O, not ELF. Those binary files tend to not be all that large, and debug information is stored externally

                                                      I’m not on macOS these days, so I’m not talking about macOS specifics; I’m just giving examples of things that consume a lot of memory on big computers.

                                                      The Firefox debug build’s libxul.so requires >14G memory to link, and the result is >2G (which is larger than the WebAssembly address limit so the new symbolicator used in the Firefox Profiler can’t load it, haha).

                                                      1. 1

                                                        From the anandtech article:

                                                        as well as doing a microarchitectural deep dive based on the already-released Apple A14 SoC.

                                                        The micro-architectural discussion is of the A14, not the M1. And even that doesn’t say that the memory is any kind of DDR; in fact, the only mention of DDR I could find is in describing one of the comparison machines for the benchmarks.

                                                        Firefox

                                                        XUL is 110MB on macOS, and as I mentioned, debug information is kept to the side. I do remember running out of memory, a LOT, trying to link lldb on Linux (in a Docker container). There seem to be some pathologies in that neck of the woods, partly with code bloat and partly with linker performance. Friends helped me out with configuring the build and switching to a different linker (lld or gold), which solved the issue. Well, it was still “problematic”, but it did link.

                                                    2. 5

                                                      But that’s only for regular app object junk. 8K video frames, detailed 3D scenes, machine learning matrices, and gigantic ELF files full of debuginfo won’t become smaller due to this difference, and on a big system machine with serious usage, memory is mostly occupied by that kind of stuff.

                                                      I currently run a Dell XPS 15 with 32 GB of RAM, and I would say that my app usage rarely involves anything you listed. Most of the RAM being used on my machine is regular app object junk, and seeing a large performance jump in managing these allocations would be very beneficial, to me at least.

                                                      While working, I almost always have these apps open:

                                                      • 1-3 projects open in IntelliJ
                                                      • 1-3 terminals
                                                      • Slack or Telegram
                                                      • TeamSpeak
                                                      • 10-40 tabs open in Firefox

                                                      On Fedora 32, these apps usually aren’t overtaking my resources, but I am very curious to see the performance difference with the shiny new M1.

                                                      1. 1

                                                        IntelliJ is implemented in Java, which is equally memory-hungry regardless of the OS. Similarly, Slack is an Electron app, which is Chrome under the hood (C++ and JavaScript). Firefox is also C++/JavaScript.

                                                        I’m assuming all of those apps use Objective-C APIs (though how they interact with system APIs is really a blind spot for me), so there’s some impact there but I suspect the bulk of their memory use comes from things (Java objects, JS objects) that are similar in size regardless of the platform.

                                                        FWIW, both my work (macOS, 4 years old) and home (Linux, 3.5 years old) laptops currently have 16GB, and it’s mostly OK. Even with 100 Chrome tabs, I have a gig or 2 free. When I’ve run into problems, it’s been analyzing an 8+GB Java heap, or running IntelliJ + a Docker DB + a multi-gig batch process. Optimizing “ordinary app junk” won’t help those use cases (they all manage memory in a way that has nothing to do with Apple’s optimizations to their libraries).

                                                        For a new laptop I’d plan on getting 32GB. I’m ok with what Apple released this week because I assume that something with 32GB will come sometime next year. But it doesn’t meet the needs of some developers.

                                                        1. 1

                                                            It really depends on what you’re doing; for me, 16GB is a constant struggle, and my buddy the oomkiller annoys me to no end.

                                                    1. 1

                                                      {{ school }}: struggling with Astrophysics & Automata Theory. There are only 3 weeks left of this semester so it’s going to be pretty tricky to raise those grades!

                                                      {{ work }}: I’ve been put in charge of integrating a very large third-party codebase into our own. The main challenge is that the third-party code is insane. It has its own preprocessor which optionally uses dependencies depending on what version of the code you’re building. We have a lot of other external code that is imported through a source transformation pipeline, but I’ve had to modify that pipeline quite a bit to accommodate this. Overall, it’s an interesting challenge!

                                                      {{ side_projects }}: I wish I had time for them! I have a million ideas for my language Alox but I haven’t been able to implement any of them. After getting a job solely using Java, I’m tempted to rewrite what I have because I’m much faster at writing Java than writing Rust. The biggest challenge was describing the IR in a memory-safe and mutable way. I ended up almost reimplementing an arena allocator (which I should have used from the start), and that definitely killed my drive to keep working on the code.

                                                      1. 17

                                                        The complaints listed here for Rust are exactly what I have been struggling with!

                                                        However, I can’t actually pattern match like this in Rust because I don’t have DLists, I have Arc<Dlist>, and you can’t pattern match through Arcs and Rcs and Boxes.

                                                        When trying to create thread-safe recursive data types, I have struggled a lot with managing Arcs. Instead of normal pattern matching, you have to dereference the Arc within your match statement and then use ref to declare that you want a reference to the data rather than moving it out (I think I’m interpreting that correctly?).

                                                        match *instruction {
                                                            Instruction::Jump(ref j) => {
                                                        

                                                        Want to make this mutable? Yikes! Shoving a Mutex in here makes things far more complicated, because that’s another dereference you have to do on every pattern match. Paul ended up doing this:

                                                        match args.iter().map(|v| &(**v)).collect::<Vec<_>>().as_slice()
                                                        

                                                        which I would say is unreadable! I had a huge number of mutable concurrent data structures, and I ended up switching all of them to immutable ones, which simplified things greatly but had its own costs. I wrote this up thoroughly on my own blog.

                                                        I’ve been writing Rust for quite a while now, enough that I start thinking with the borrow checker and I know what structures/patterns won’t work before I write them, but I still hit walls pretty hard. Refactoring that codebase from mutable to immutable was very challenging in Rust, whereas in Java refactoring can be a lot faster. I feel like taking a wrong step in Rust can end up costing much more time than in other languages, which makes prototyping difficult. (Non-concurrent, non-recursive prototyping is substantially easier.)

                                                        1. 9

                                                          This is definitely an area that can be challenging! I think learning the AsRef and AsMut traits can be really useful here. In particular, Arc has an AsRef implementation that gives you an immutable reference to the contained type.

                                                          Here’s an example from the Rust Playground of using as_ref on an Arc, along with Mutex::lock, which blocks to give you access to the value inside the Mutex (to attempt locking without blocking use Mutex::try_lock; if you have exclusive access and want a mutable reference, use Mutex::get_mut instead).
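
                                                          A minimal sketch of both (my own, not the code behind that Playground link):

                                                          use std::sync::{Arc, Mutex};

                                                          enum Instruction {
                                                              Jump(String),
                                                              Nop,
                                                          }

                                                          // as_ref borrows the value behind the Arc, so the match never tries to
                                                          // move anything out of it.
                                                          fn describe(shared: &Arc<Instruction>) -> String {
                                                              match shared.as_ref() {
                                                                  Instruction::Jump(label) => format!("jump to {label}"),
                                                                  Instruction::Nop => "nop".to_string(),
                                                              }
                                                          }

                                                          fn main() {
                                                              let shared = Arc::new(Instruction::Jump("start".into()));
                                                              println!("{}", describe(&shared));

                                                              // Mutex::lock blocks until the lock is held and yields a guard that
                                                              // dereferences to the inner value; try_lock is the non-blocking variant.
                                                              let guarded = Mutex::new(Instruction::Nop);
                                                              let guard = guarded.lock().unwrap();
                                                              if let Instruction::Nop = &*guard {
                                                                  println!("nop");
                                                              }
                                                          }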

                                                          1. 3

                                                            Question: How does the Rust compiler itself solve these problems? Or does it not have to?

                                                            1. 7

                                                              The internals of the Rust compiler actually intern a lot of data, tied to the lifecycle of the memory arena containing the data. So for type information (the most widely shared information), you’re working with lifetimes rather than Arc. Here’s a section of the rustc dev guide explaining: https://rustc-dev-guide.rust-lang.org/memory.html

                                                            2. 3

                                                              In my experience with Rust, the best way to write recursive data structures like Parse Trees is to simply reference Nodes by their indices in the backing data structure (usually a Vec).

                                                              struct Node {
                                                                  id: usize,
                                                                  parent: usize,
                                                                  children: Vec<usize>,
                                                              }
                                                              
                                                              struct Tree {
                                                                  nodes: Vec<Node>,
                                                                  root: usize,
                                                              }
                                                              

                                                              I imagine this isn’t quite as efficient as using a tonne of Arc<> and unsafe but it’s plenty fast enough for anything I’ve ever needed to write and it circumvents a lot of extremely ugly code.

                                                              1. 2

                                                                Seems like a significant compromise on type safety though.

                                                                1. 1

                                                                  That can be mitigated with something like:

                                                                  type NodeID = usize;
                                                                  
                                                                  1. 4

                                                                    That’s a type alias. You want a newtype, like this:

                                                                    struct NodeID(usize);
                                                                    
                                                                    1. 1

                                                                      Sorry, yes, you’re quite right.

                                                            1. 53

                                                              TLDR - Drew hates the modern web, gripes about Mozilla management, and wants the scope of web browsers to be reduced.

                                                              Edited to remove snarkiness.

                                                              1. 38

                                                                I’m forgiving the rantiness because I’m upset and angry about the Mozilla thing too. What I’m taking from this:

                                                                1. Google is actively saturating the web with complexity for its own ends, one of which presumably is creating an insurmountable barrier to entry into the browser market
                                                                2. All the other existing players but Google clearly have no interest in trying to stay in that market
                                                                3. The resulting mess isn’t doing anyone but Google any good

                                                                Of course it’s possible my interpretation is somewhat motivated. ;)

                                                                I don’t think this really adds any factual information to the pool, but opinion pieces aren’t always a bad thing, and it could start some important conversations about the future of the open web.

                                                                1. 18

                                                                  Mozilla just fired everyone relevant

                                                                  It feels like Mozilla management finally accomplished what Microsoft and Google tried unsuccessfully for decades:

                                                                  Burning the company to the ground.

                                                                2. 6

                                                                  Exactly, I feel like this isn’t saying anything that Drew hasn’t already said in the past.

                                                                  1. 10

                                                                    And he is right in every word.

                                                                  1. 4

                                                      Whenever these articles come around, there are always comments by people who specialize in the language, or at least know it well enough to point out obvious mistakes. When writing a comparison of languages, does it not make sense to get input from people who know the language really well?

                                                      Also, I hate the conclusions. This isn’t saying anything that hasn’t already been said. They read like the front page of the languages’ respective landing pages. I don’t think “If I build exceptionally/mostly for Linux” is a good reason to pick Go as a language. This entirely misses its strengths. I primarily write Rust on Linux and, even when doing very advanced build configurations, I’ve never run into an issue so big I’d switch to Go.

                                                                    1. 10

                                                        I don’t understand all the hate. The OP states he wants to write about his experience learning two new languages. This should be read as a post-mortem of impressions and lessons learned along the way, not as an Olympic-level competition over which language is better. Yes, the author missed Go modules and used async functions instead of threads, but those are normal mistakes for someone picking something up for the first time. This post is a diary of their journey, more akin to a travelogue with some recommendations in it. I think that experts coming here just to point a finger at it and shout “YOU’RE DOING IT WRONG!” are approaching this kind of post wrongly.

                                                                      1. 3

                                                                        When writing a comparison about languages, does it not make sense to get input from people who know the language really well?

                                                                        I think it’s fine to write about one’s experiences on a personal blog. It’s a good mechanism for getting thoughts and feedback from others — either the specialists you mention, or just your friends and local community. I don’t think there needs to be an expectation that you hunt down experts in each language and get feedback before publishing on your own website.

                                                          This is the first post on a new blog, by someone who describes themselves as a self-taught developer. I doubt this is intended to be the definitive comparison article between Go and Rust. ;-) It’s simply a post about one developer’s experience implementing the same project in two languages.