1. 2

    No really, I promise this isn’t spam! This project is about owning a computer that is simple enough that you could audit it yourself!

    Check out this snippet from the first update:

    Even though the full source code for the Linux kernel and Firefox is published, nobody has the time to personally review every release for potential security problems; we simply trust that others have done a good job, because we have no other choice. Precursor rolls the clock back to the early 2000’s, when mobile computers were powerful enough to be useful for single tasks, while simple enough that individuals or small teams could build them from scratch.

    1. 3

      If it had a cell modem capable of voice and SMS, I’d buy it just on the basis of its form factor. I miss my old Nokia QWERTY phones.

      1. 2

        Well, Nokia still produces their classic lineup :D

        1. 2

          Mine are still fine, hardware-wise, and there are still plenty of used ones out there in decent shape. But the software…? I’m slightly embarrassed by the effort I put into trying to get a Series 40 or Symbian (S60) dev environment set up on a Windows 7 Pro VM, just so I could keep these old friends on software life support. I did a similar thing for my Blackberries, with similarly scant success. There are more fun dead-end closed technologies to bang one’s head against.

          I had in mind something more like this similar effort. Who knows, maybe it could be the Linux to bunnie’s “Betrusted” HURD.

      2. 1

        Super cool project! Thanks for posting.

      1. 2

        I think speed of change is more important than speed of execution, because a change of algorithm will always make more difference than any other factor.

        Similarly, speed of change is better for my employer because business needs change all the time.

        1. 4

          If you’re not an array programmer, it would probably be better to start with another language, or wait a few weeks.

          I wait with bated breath :)

          1. 1

            Same here! I keep bouncing off of APL; BQN looks like something I could finally stick to!

            1. 2

              I’ll work harder to keep my few-weeks promise now that I know people are waiting for me! If you’d like something to start on now, you could take a look at my LambdaConf talk, which introduces array programming for a functionally-inclined audience. BQN gets rid of the syntactic irregularity in the main subject, since it changes Outer Product from a prefix ∘. to a suffix ⌜, in addition to renaming it “Table”. Of the BQN documentation, I’d say Syntax, Blocks, and Functional programming are mostly readable without APL experience. You should also be able to understand the central ideas in the documentation for primitives, although many of the examples will just be too much for a beginner to pick apart.
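
              For the functionally inclined, Table/Outer Product is essentially a nested map over all pairings; here is a minimal Haskell sketch of that reading (the name “table” is just for illustration, it is not BQN or APL syntax):

                -- Apply f to every pairing of elements,
                -- yielding a table (a list of rows).
                table :: (a -> b -> c) -> [a] -> [b] -> [[c]]
                table f xs ys = [[f x y | y <- ys] | x <- xs]

                -- A multiplication table, for instance:
                -- table (*) [1..3] [1..4]
                --   == [[1,2,3,4],[2,4,6,8],[3,6,9,12]]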

          1. 6

            Could this URL be edited to remove the trailing slash? The page looks a lot better this way: https://cheapskatesguide.org/articles/techno-cage.html

            1. 3

              Good idea! Could a mod fix that?

              1. 4

                Fixed, thanks for pointing it out.

            1. 24

              I recently heard the term “app console” for the first time.

              My paraphrasing is “If your relatives can’t load any software they want on their computer, it’s no longer a general purpose computer.”

              My relatives aren’t going to jailbreak their devices, they’ll be limited to whatever is available in their silo.

              I didn’t believe Cory Doctorow when he talked about the coming war on general purpose computing, but now I believe him.

              From the link above:

              we all know what computers are, and iPads are no computers

              1. 8

                The introduction has a lovely bit of Norwegian:

                Freed, we dance
                For an eyeblink, we play
                We thousand small leafships
                we anticipate, on that clear morning light
                

                That’s a wonderful introduction to the story of the seedling!

                1. 18

                  Oh thank goodness it worked. My Norwegian is marginal at best, and I really worried I messed up my article agreement or use of på/i in that poem.

                  1. 3

                    I’m fluent in Swedish rather than Norwegian, but to me “på” fits better, since that preposition translates as “on top of”, whereas “i” would be “inside of” or “encompassed by”; and they are leafships.

                    I did a quick check; it looks like Swedish and Norwegian prepositions work the same way.

                    Nice poetry, thanks for this nifty post!

                    1. 7

                      Thank you! And… that’s what I was hoping for as well. På/i has been such a challenge for me: I once told a friend I was i kjøkkenet (in the kitchen) and he stared at me as if I’d uttered something completely unparseable: one can only be i certain rooms of the house. One is på hytta (upon the cabin) but i huset (in the house). One is i Oslo, but på Røros, because… inland or mountainous towns are something one is on, rather than a coastal city, which one is within, except for places like Skjåk? One is på shops, libraries, and restaurants (I think because there’s a sense that these aren’t just places, but sort of… activities that one has embarked upon?) ANYWAY languages are cool and hard and I like them, CARRY ON

                  2. 1

                    This was a fantastic read. I was a bit hesitant based on some of the other recent interview links that ended up turning into long discourse NOT about the article, but your quote of the Norwegian hooked my interest. Thank you for pulling that out.

                    Highly recommend this for anyone that wants to discuss the “correct” answer to FizzBuzz.

                  1. 5

                    I’ve heard good things about Bazel, though right now my company is using Nix for our build.

                    Does anyone know of a good comparison among Bazel, Nix, and other caching build systems?

                    1. 8

                      The place to start is probably the paper “Build Systems à la Carte”. In its terms, the primary difference is that Nix pairs a suspending scheduler with deep constructive traces, while Bazel pairs a restarting scheduler with (shallow) constructive traces.

                      The more relevant difference is the tracing. Bazel will only hash one layer deep, so it needs to traverse the whole dependency DAG to confirm that all dependencies are in the proper state, while Nix hashes all dependencies transitively, so it can tell quickly what needs to be rebuilt - but in return it needs to rebuild everything if a fundamental dependency changes (i.e. it rebuilds all dependents even if the change doesn’t actually propagate and is “killed” somewhere - e.g. when there is no interface change, there is no need for the dependent to care about the update).

                      That’s at least how I understand it, I’m sure someone can correct me.
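
                      To make that concrete, here is a toy Haskell model of the two keying schemes as I understand them - my own simplification, not how either tool actually stores things (it uses the hashable package):

                        import Data.Hashable (hash)

                        -- A toy package: source content plus dependencies.
                        data Pkg = Pkg
                          { name    :: String
                          , content :: String
                          , deps    :: [Pkg] }

                        -- Nix-style deep key: covers the content and the
                        -- full keys of all dependencies, so one lookup
                        -- suffices, but any upstream change changes every
                        -- downstream key.
                        deepKey :: Pkg -> Int
                        deepKey p = hash (content p, map deepKey (deps p))

                        -- Bazel-style shallow key: covers only the built
                        -- outputs one layer down, so checking freshness
                        -- means walking the DAG, but a rebuild producing
                        -- identical output stops propagating.
                        shallowKey :: (Pkg -> Int) -> Pkg -> Int
                        shallowKey out p = hash (content p, map out (deps p))

                      The early cutoff in shallowKey is exactly the interface-change case above: identical output, identical key, nothing downstream rebuilds.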

                      1. 1

                        Yep, having worked with both, that seems like a good summary.

                      2. 3

                        I was wondering about this, too; was surprised they considered Bazel+Nix, but not Nix independently.

                        My experience with Haskell and Nix has been that while Nix deals well with caching dependency artifacts, you have to switch back to cabal or similar for the main project if you want any kind of caching of partial builds, incremental rebuilds, etc. Do you have a good solution there?

                      1. 9

                        I would say especially in Go, since Go makes concurrency pretty hard compared to systems I’m used to…

                        1. 11

                          What systems are you used to?

                          1. 6

                            He’s probably thinking of Haskell, maybe secondarily Scala.

                            I mostly use Haskell, Rust, and Java and I have to concur.

                            1. 2

                              Yeah, Rust+Tokio is also pretty good.

                            2. 4

                              If I want to do concurrency I’ll always reach for Haskell, but I’m also comfortable in Ruby+EventMachine.

                              1. 2

                                This is super confusing to me. What do you use concurrency for?

                                1. 1

                                  High-throughput network servers and clients, mostly.

                                  1. 2

                                    I think you might be the first person I’ve ever encountered who defaults to Haskell for network servers. This isn’t a criticism in any way, just an expression of mild astonishment.

                                    1. 2

                                      I certainly didn’t use to, but at this point I haven’t been able to find anything else that even comes close in terms of concurrency abilities, especially with the GHC runtime. In something like Ruby+EventMachine or Rust+Tokio you have to manage your async much more explicitly, whereas in GHC Haskell all IO operations are async all the time within the idiomatic programming model. With lower-level systems like Go, you can have thread safety problems and non-atomic operations, whereas in Haskell all IO operations are atomic (unless you use the FFI in unsafe ways) and of course most code is pure and has no possible thread safety problems at all.

                                      Probably more reasons, but that’s what comes to my mind.
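
                                      As a small sketch of what “all IO is async all the time” buys you: ten thousand ordinary-looking blocking workers, multiplexed by the runtime with no explicit async bookkeeping (threadDelay stands in for a network call):

                                        import Control.Concurrent (forkIO, threadDelay)
                                        import Control.Concurrent.MVar
                                          (newEmptyMVar, putMVar, takeMVar)
                                        import Control.Monad (replicateM_)

                                        main :: IO ()
                                        main = do
                                          done <- newEmptyMVar
                                          -- forkIO spawns cheap green threads,
                                          -- not OS threads.
                                          replicateM_ 10000
                                            (forkIO (worker >> putMVar done ()))
                                          -- Wait for every worker to finish.
                                          replicateM_ 10000 (takeMVar done)

                                        worker :: IO ()
                                        worker = threadDelay 100000  -- "blocks" for 0.1s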

                                      1. 1

                                        What kind of RPS and p99 latency do you get with a Haskell service serving HTTP and doing some nontrivial work per request?

                                        1. 1

                                          Looks like the Haskell web server (warp) comes in at 20th on a webserver benchmark from four months ago.

                                          At my last job I did a lightning talk on Python vs Haskell for a simple webapp. I wanted to focus on simplicity of the code, but my coworkers wanted to see benchmarks. Haskell was much faster than Python for 99% of the requests, with laziness and garbage collection making the last fraction of a percent of responses slower than Python’s. Python was slow, but consistently slow.

                                          1. 1

                                            Hmm, I have done some Haskell HTTP stuff, but not for high performance. If you’re really curious about HTTP I’d look up warp benchmarks.

                                            1. 1

                                              OK, then whatever you’ve done: I’m just trying to get a sense of Haskell’s ballpark.

                                        2. 1

                                          I’m also a fan; there are lots of benefits to doing network servers in Haskell.

                                          Perhaps this tour of Go in Haskell would help illustrate some of the benefits?

                              1. 14

                                These stories about bad experiences with Logo are so depressing. The full Logo environment, and the education revolution the researchers were trying out in the lab, were amazing, but as usual the school system can take anything and make it dull and terrible. The easiest way to make a child hate something is to have it covered in school.

                                1. 2

                                  When the exercise is “draw a box” it’s going to be boring. When the exercise is “draw a picture” it will be more exciting. Creative freedom must have been missing.

                                  1. 1

                                    My girlfriend told me yesterday that she hates programming because of Logo in high school. I was surprised.

                                  1. 7

                                    I’m sad to see that this part of Rust development was killed due to Mozilla’s recent massive downsizing. Let’s hope that others pick it up, because the Rust compiler really needs more performance. It’s personally still keeping me away from Rust, as languages like Go prove that one can surpass C/C++ in compilation speeds by a wide margin.

                                    Admittedly, Rust does a lot of static analysis during compilation, but most of the bottleneck comes from the LLVM backend, which does most of the work and is not very fast, due to its generality and the heavy workload it is handed. It would be interesting to see how fast Rust would compile with a custom backend.

                                    1. 12

                                      rustc already has a second compiler backend using Cranelift. It’s in flux, but measurements currently look good. https://github.com/bjorn3/rustc_codegen_cranelift/issues/133#issuecomment-439475172

                                      1. 3

                                        Does anyone know what’s happening to cranelift now that Mozilla has laid off the people who were working on it?

                                        1. 2

                                          I’m a little late to reply, but to answer your question: Cranelift was moved to the Bytecode Alliance last year. I would not be surprised if those people start working at another Alliance member.

                                          1. 1

                                            Cranelift has been part of the Bytecode Alliance for a while now, and it’s doing rather fine, AFAIK.

                                            https://bytecodealliance.org/

                                        2. 10

                                          “Most of the bottleneck comes from LLVM” is broadly true, but it’s an oversimplification; it can very much depend on the workload. I did some research a while back: https://wiki.alopex.li/WhereRustcSpendsItsTime

                                          1. 2

                                            That’s worthy of its own post to Lobsters. I learned a bunch from this post.

                                            1. 5
                                              1. 4

                                                Thanks, but it’s been done ;-)

                                          1. 2

                                            I really enjoy this depth of detail!

                                            1. 3

                                              It would be interesting to see a similar test, but with pg_trgm included in the Postgres test.

                                              1. 1

                                                What does that do?

                                                1. 2

                                                  It creates a trigram index, which helps with searches for fixed strings and some regular expressions.
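
                                                  Roughly, it works by decomposing text into overlapping three-character sequences and indexing those; here is a hedged Haskell sketch of the extraction step (an approximation of the documented behavior, not the actual C implementation):

                                                    import Data.Char (isAlphaNum, toLower)
                                                    import Data.List (nub, tails)

                                                    -- Lowercase, treat non-alphanumerics as
                                                    -- separators, pad each word with two leading
                                                    -- blanks and one trailing blank, then take
                                                    -- every 3-character window. For example,
                                                    -- trigrams "cat" gives
                                                    -- ["  c"," ca","cat","at "].
                                                    trigrams :: String -> [String]
                                                    trigrams s =
                                                      nub [ t | w <- words (map keep s)
                                                              , t <- windows ("  " ++ w ++ " ") ]
                                                      where
                                                        keep c | isAlphaNum c = toLower c
                                                               | otherwise    = ' '
                                                        windows xs =
                                                          [ take 3 ys | ys <- tails xs
                                                                      , length ys >= 3 ]

                                                  An index over those trigram sets is what lets Postgres speed up LIKE '%…%' patterns and some regular-expression matches.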

                                              1. 12

                                                TL;DR: the author says that over 1.5 million records, PostgreSQL will return results in up to 120ms, while Elasticsearch gets them in 20ms. So it’s worth maintaining an Elasticsearch cluster next to your Postgres.

                                                I think that for some critical search, it might make sense. But for most of us, first we need to get to 1.5 million records for this to be relevant. And even then - meh, it’s likely that our search is just fine with 120ms.

                                                1. 3

                                                  I’d also like to see this on Linux; I’ve had weird issues with PostgreSQL on a Mac that I’ve never seen on Linux. Mind you, if this is getting deployed on a Mac server, this quick comparison would be enough for me.

                                                  1. 1

                                                    Yep, cluster sizes, everything. I mean, if you had 1.5 million records that you need FTS on, you probably don’t run this on a desktop machine. That’s what I’m thinking - this probably doesn’t matter so much for the majority of us - the overhead of another technology for a few milliseconds.

                                                  2. 2

                                                    Not only that - full-text search over 1.5 million documents, but with no other filtering? I’m sure something needs that, but I have never worked on a system matching that need.

                                                  1. 2

                                                    I think this Jupyter notebook is related? It’s in the GitHub repo of one of the authors.

                                                    I was hoping to find the working code used for this paper, has anyone else found it?

                                                    1. 1

                                                      I am excited about this project!

                                                      At first glance it appears to be an easy way to describe what I want an FPGA to do, and produce a design that’s easily integrated into other software I’m building.

                                                      I’ve tried and failed to learn enough Verilog to build my own FPGA designs. My ‘sentinel’ question has always been “When FPGAs can be used to accelerate their own synthesis, FPGAs have reached usefulness”. This might be the thing I want!

                                                      1. 2

                                                        When FPGAs can be used to accelerate their own synthesis, FPGAs have reached usefulness

                                                        This reminds me of Wirth, who had a similar mindset on compiler optimizations: as I recall, he would only accept an optimization into his compilers if it made the compiler compile itself faster…

                                                        1. 1

                                                          I’ve been playing with nmigen and I want to give this a try too.

                                                        1. 3

                                                          This is exciting to me!

                                                          Up to this point, the FPGA industry has been extremely proprietary for both software and hardware. I hope this leads to the equivalent of GCC for the FPGA world.

                                                          1. 2

                                                            gcc for the fpga world

                                                            You’ve missed a few years of developments. Refer to SymbiFlow.

                                                          1. 2

                                                            The coolest part of this paper is where it mentions that finger trees are just one choice in a larger solution space (from section 10, heading “One less constructor”):

                                                            We believe that an implementation that allows larger Tuples will behave faster in practice, because then there is less administration related to the recursion. The detailed development in this paper opens up for new designs of Finger Trees that can generalize in three dimensions.

                                                            I’d like to read further research about that!

                                                            1. 2

                                                              The time to repack the Some a and Tuple a values would be a constant factor (growing quadratically?) that scales with their sizes, and it must be taken into account.

                                                              1. 1

                                                                I agree! I’m curious how to measure the trade-offs; any thoughts?

                                                                1. 1

                                                                  It depends on the tree size, the operations, and the system’s memory characteristics. It would be nice to have a profiling auto-tuner that automatically morphs from an unboxed packed array to a standard linked list. I guess the finger tree is somewhere in between these two extremes. Somebody should apply for a research grant for it.

                                                              2. 2

                                                                It’s kind of fascinating how they showed that what looks on the surface like a sort of random, quirky definition - one to four (or, in their simplified version, three) items here, two or three there - is more-or-less minimal for the attributes they’re going for.
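
                                                                Those counts sit directly in the type; here is a minimal Haskell sketch of the Hinze-Paterson shape (measure annotations omitted):

                                                                  -- 1-4 items at each end ("fingers").
                                                                  data Digit a
                                                                    = One a | Two a a
                                                                    | Three a a a | Four a a a a

                                                                  -- 2-3 items at interior nodes.
                                                                  data Node a = Node2 a a | Node3 a a a

                                                                  -- The spine nests: each level stores
                                                                  -- Nodes of the level below.
                                                                  data FingerTree a
                                                                    = Empty
                                                                    | Single a
                                                                    | Deep (Digit a)
                                                                           (FingerTree (Node a))
                                                                           (Digit a)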

                                                              1. 2

                                                                I like this link because my blog post is paraphrased (with explicit permission) in the “big/slow hammer” part!

                                                                1. 19

                                                                  Several times in the last twenty years I’ve wanted to become a kernel contributor, but they’re so far away from being nice and supportive and helpful that I just won’t.

                                                                  I think this is another case where technology will not solve your people problem.

                                                                  1. 19

                                                                    I only understood SAT solvers last year after weeks of solid focus. If you want to know how they work, I wrote two blog posts that might help:

                                                                    SAT solvers part 1 - how to build the perfect spaceship in Endless Sky and SAT solvers part 2 - this is hard.
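
                                                                    If you just want a taste of the core search before reading those, here is a toy Haskell sketch - plain backtracking splitting, with none of the unit propagation, pure-literal elimination, or clause learning that make real solvers fast:

                                                                      -- DIMACS-style: a literal is a non-zero
                                                                      -- Int; negation is arithmetic negation.
                                                                      type Lit    = Int
                                                                      type Clause = [Lit]

                                                                      dpll :: [Clause] -> Bool
                                                                      dpll cs
                                                                        | null cs     = True   -- all satisfied
                                                                        | any null cs = False  -- conflict
                                                                        | otherwise   =
                                                                            dpll (assign l cs)
                                                                              || dpll (assign (-l) cs)
                                                                        where
                                                                          l = head (head cs)
                                                                          -- Make x true: drop satisfied
                                                                          -- clauses, strip falsified literals.
                                                                          assign x =
                                                                            map (filter (/= negate x))
                                                                              . filter (x `notElem`)

                                                                      -- e.g. dpll [[1,2],[-1],[-2,3]] == True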

                                                                    1. 2

                                                                      I wish I could save comments. You’re being added to my org-mode, though!

                                                                      1. 2

                                                                        Hurrah!

                                                                        If you want more along these lines, Lindsey Kuper’s work is great. I suggest the reading list from her recent course on this subject.

                                                                        My “biggest / slowest hammer” quote got paraphrased in the course overview!

                                                                        1. 2

                                                                          You could use a web annotation service like Hypothesis to save comments anywhere.

                                                                          1. 1

                                                                            I’d want to run it myself, and I’d have to set up my Pi server for remote access. I’m comfortable enough using GitLab and automated sync in Emacs for my notes at the moment. Thanks for the suggestion though :)