1.  

    I’ve been running big-endian workstations for years.

    I’m curious about your setup. What machines are you running with MIPS? I guess I haven’t really looked into “alternative architectures” since the early 2000s, so I’m quite intrigued by what people are actually running these days.

    1.  

      Haha, I like to think of learning Vim like this:

      Beginner Vim: There are so many commands to memorize!

      Intermediate Vim: Everything fits together so beautifully!

      Advanced Vim: Why are -es and -Es different flags? Why are there separate select and visual modes? Why is the syntax highlighter so janky? Why does undo work normally while recording a macro but undo the entire thing when you’re in the middle of replaying one? What the heck is up with Vimscript? Why is Z only used for ZZ? What’s U even FOR?
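
      For anyone who hasn’t hit that first one: -es and -Es both run Vim non-interactively for batch editing, but -es uses vi-compatible Ex mode while -Es uses Vim’s “improved” Ex mode, which affects which features are available. A quick sketch, file name made up:

          # batch substitution without opening the UI
          vim -Es -c '%s/foo/bar/g' -c 'wq' notes.txt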

      1.  

        My experience with Rust binary size has led me to assume that it’ll be a non-starter in WASM, which is a pity. It’s going to be hard to compete with JS for startup time.
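
        For what it’s worth, the standard size-trimming steps do help on wasm targets; a hedged sketch, with a made-up crate name and no promises about your numbers:

            # Cargo.toml: [profile.release] opt-level = "z", lto = true, panic = "abort"
            cargo build --release --target wasm32-unknown-unknown
            wasm-opt -Oz -o app.min.wasm target/wasm32-unknown-unknown/release/app.wasm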

        1.  

          > Considering i686 has the same limitation

          Didn’t i686 sell many more units to many more people needing a general-purpose system like Debian? And aren’t those people continuing to use non-MIPS systems outside of black-box appliances like the routers they buy, including most FOSS-loving folks?

          Hardly any OS was ever “universal” in an absolute sense of the word. It typically means supporting whatever platforms have a lot of users. MIPS isn’t one of them for desktops or servers; it’s an also-ran in embedded, mainly used in lower-cost applications. It doesn’t surprise me that hardly anyone supports it now. The recent opening of MIPS might shift that in a slightly different direction, though.

          1.  

            If you can handle learning Vim then very little else about using GNU/Linux is going to be painful by comparison ;-p

            1.  

              > The UI seems to have locked you into believing the index is a fundamentally necessary concept, but it’s not.

              Nothing has locked me into believing it’s a necessary concept. It’s not necessary. In fact, for about 7 years I didn’t use the index in any meaningful way.

              I think what you are missing is that I’m not compelled to use it because it’s the default workflow; I’m compelled to use it because it’s useful. It helps me accomplish work more smoothly than I did previously, when I would just make a bunch of tiny commits because I didn’t understand the point of the index, as you still don’t.

              The argument could be made for moving the index behind an option, somehow making commit-only the default workflow. I’m not sure what that would look like with Git, but I don’t think it’s a good idea. It would just encourage people to make a bunch of smaller commits with meaningless commit messages.
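
              As a concrete sketch of that smoother workflow (file names and message made up): stage only the hunks that belong together, verify exactly that state, then commit it as one coherent change:

                  git add -p                # interactively pick the hunks for this change
                  git stash --keep-index    # set aside everything unstaged
                  make test                 # test exactly what will be committed
                  git commit -m "Fix off-by-one in pager"
                  git stash pop             # bring the rest of the work back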

              1.  

                I don’t think that’s true. You would just run into DIFFERENT scalability limits with different design decisions.

                Git was probably strongly tied to the filesystem because it was made in 2005 (the Pentium 4 era) for a lower-performance scenario, by someone who understood the Linux filesystem better than high-performance, distributed applications. It worked for his and the kernel developers’ purpose of managing their one project at their pace. Then wider adoption and design inertia followed.

                It’s 2019. Deploying new capabilities that stay backwards compatible with the 2005 design takes ever-crazier effort, and delivers less exciting results, than building on better, more modern designs would.

                1.  

                  There’s also the wasm case, which is going to become increasingly important.

                  The most interesting single feature of the feedback from this post is the extremely wide variance in how much people care about this. It’s possible I personally care more just because my experience goes back to 8-bit computers, where making things fit in 64k was important (there was also a pretty long stretch programming the 8086 and 80286). Other people are like, “if it’s a 100M executable but is otherwise high quality code, I’m happy.” That’s helping validate my decision to include the localization functionality.

                  1.  

                    You don’t need a secret commit. An index is the same thing. Rewriting a (possibly secret) commit with hg amend --interactive or similar is the same as rewriting the index with git add -p.

                    etc. etc. Git stashes are a way to save multiple indices, too.
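
                    A hedged sketch of the correspondence (hg amend lives in the evolve extension; the mapping is my reading of the parent’s point):

                        # Mercurial: park work in a (possibly secret) WIP commit, refine it interactively
                        hg commit -m "WIP"
                        hg amend --interactive

                        # Git: build up the same state in the index instead
                        git add -p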

                    1.  

                      > simply think of how many things git checkout can do

                      Happily these issues are slowly being fixed.
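
                      For instance, git 2.23 split checkout’s two biggest jobs into purpose-built commands (experimental at first; branch and path names are made up):

                          git switch my-branch        # was: git checkout my-branch
                          git restore src/main.c      # was: git checkout -- src/main.c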

                      1.  

                        > you are prompted to amend the message as well.

                        This is UI clutter unrelated to the underlying concepts. You can get around that with wrappers and aliases. I spoke of a hypothetical git amend above that could be an alias that avoids prompting for a commit message.
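
                        Such an alias is a one-liner; “amend” here is the hypothetical name, not a built-in:

                            git config --global alias.amend 'commit --amend --no-edit'
                            git add -p && git amend    # fold staged hunks into the last commit, no prompt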

                        Don’t git users like to say how the UI is incidental? That once you understand the data structures, everything else is easy? The UI seems to have locked you into believing the index is a fundamentally necessary concept, but it’s not. It’s an artifact of the UI.

                        1.  

                          It’s an attempt at humor and hyperbole. The RISC-V people want to sell open chips. They’re more open than OpenPOWER. Yet Raptor is actually selling workstations whose open cores are more open than what most people buy. POWER itself has been selling for a long time, and a bunch of companies got involved in OpenPOWER, too. They’re definitely competing with RISC-V for a market, and winning. There are probably RISC-V proponents or companies worried that OpenPOWER or a newly-open MIPS with strong ecosystems might sway people over to them instead of RISC-V.

                          So, there’s some truth in it.

                          1.  

                            I haven’t looked for or seen measurements either way, but one assumes this could result in more frequent instruction cache misses if you’re commonly exercising several different monomorphized versions of the same generic function.
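
                            A minimal sketch of the mechanism (names made up): each concrete type gets its own compiled copy of a generic function, while dynamic dispatch shares one copy at the cost of an indirect call:

                                // Monomorphized: the compiler emits one copy per T actually used
                                fn describe<T: std::fmt::Display>(x: T) -> String {
                                    format!("value: {}", x)
                                }

                                // Dynamic dispatch: a single copy, called through a vtable
                                fn describe_dyn(x: &dyn std::fmt::Display) -> String {
                                    format!("value: {}", x)
                                }

                                fn main() {
                                    // describe::<i32> and describe::<f64> are separate machine code,
                                    // so alternating between them touches more instruction cache.
                                    println!("{}", describe(1_i32));
                                    println!("{}", describe(2.5_f64));
                                    println!("{}", describe_dyn(&"shared"));
                                }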

                            1.  

                              They aren’t extremely expensive. They’re cheaper than the low-volume RISC workstations from SGI and Sun that came before them, which were quoted to me at five digits for good workstations; anyone wanting many CPUs would pay six to seven digits. What people are missing is that the Non-Recurring Engineering [1] [2] expenses are huge, must be recovered at a profit, and are divided over the number of units sold. The units sold are many times fewer than for Intel, AMD, and ARM. These boards are also probably more complex, with more QA, than a Pi or something.
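
                              Illustrative arithmetic with made-up numbers: $50M of NRE amortized over 5,000 workstations adds $10,000 to every unit, while the same $50M spread over 50 million units adds $1.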

                              So they’ll cost more unless many times more people buy them, allowing production to scale up and NRE to be recovered at a lower per-unit price. If they don’t, and everything gets DRM’d/backdoored, then everyone who didn’t buy the non-DRM’d/backdoored systems voted for that with their wallet to get a lower per-unit price in the past. Maybe they’re cool with that, too; it’s their choice. Meanwhile, higher-priced products at low volume are acceptable to some buyers trying to send the market a different signal: give us more-inspectable, high-performance products and we’ll reward you with higher profit. That’s Raptor’s market.

                              [1] http://hardwarestartupblog.com/hardware-product-development-manufacturing-cost-vs-nr-cost-nre/

                              [2] https://predictabledesigns.com/the-cost-to-develop-scale-and-manufacture-a-new-electronic-hardware-product/

                              1.  

                                Cool, I can get behind that. Just trying to work out if the primary motivation is disk use or something else.

                                1.  

                                  If every single program on a system requires hundreds of megabytes, things just become unwieldy. Cutting waste is good; pointlessly large programs waste bandwidth everywhere, from disk to RAM to the network.

                                  Ever wonder why Windows Update takes minutes instead of seconds? I often do…

                                  1.  

                                    I’ve used a variety of VCSes over the years, including both git and hg, the latter of which has always felt easier to use than the former: not so much because hg is particularly easy, but because git is particularly confusing. It’s kinda like how Linus’ other major project was for many years, until Ubuntu decided to make it easier to use.

                                    1.  

                                      This is one of those things that’s just repeated over and over and over and assumed to be true.

                                      For the most part it is definitely true. The git command line is notoriously inconsistent, whereas hg is not. It’s also harder to shoot yourself in the foot with hg.

                                      The one thing that (IMO) is a mess with hg is branching, which day-to-day has far more impact than remembering an inconsistent command interface. At a previous employer (whom I’d convinced to move to hg from svn) we ended up with a horribly complex bookmarking strategy to deal with short-lived feature branches.

                                      I’m marginally sad that hg is losing, because it got some things very right. But git on the whole gets more of the things that matter right, despite the warts.

                                      1.  

                                        Bazel, like Buck/Pants/Please, handles the build/testing part of your requirements (and possibly the documentation part), but doesn’t address deployment.

                                        If you run across anything that handles deploy dependencies, I’d love to hear it.

                                        1.  

                                          > The biggest bloat potential comes from large generic functions used with many different parameters.

                                          In this case, is there a drawback aside from the larger binary? I.e., is there a runtime performance impact from the larger binary?
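
                                          Mostly just possible instruction-cache pressure, as noted elsewhere in the thread. The usual mitigation is the “thin generic shim over a concrete inner function” pattern; a hedged sketch with made-up names:

                                              // Only the conversion shim is monomorphized per type;
                                              // the heavy lifting compiles exactly once.
                                              fn process(path: impl AsRef<std::path::Path>) -> usize {
                                                  fn inner(path: &std::path::Path) -> usize {
                                                      // imagine a large function body here
                                                      path.as_os_str().len()
                                                  }
                                                  inner(path.as_ref())
                                              }

                                              fn main() {
                                                  println!("{}", process("a.txt"));               // &str shim
                                                  println!("{}", process(String::from("b.txt"))); // String shim
                                              }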