1. 4

    The Lisp Game Jam starts on Thursday: https://itch.io/jam/spring-lisp-game-jam-2021

    I’m going to be making a game using the TIC-80 retro game dev platform: https://tic.computer

    1. 3

      I saw this, and I’m thinking of joining since my Lisping has improved a lot over the past few weeks.

      Excited to see your work this time; I loved the FPS game from the last jam.

      1. 2

        OOH, been doing a lot of common lisp lately, might join!

      1. 7

        At work, just pushing through the week, trying not to get burnt out.

        Personal is where I have mostly Go things going on:

        • Building a goban (Go board): bought some wood last week; need to set up my plane and start shaping that wood.
        • Continue reading Lessons in the Fundamentals of Go (it’s pretty dense, super interesting, I reread each chapter like 4 times)
        • Continue playing games everyday, trying to improve my reading and strategy
        • Maybe sign up for the league + reviews at Nordic Go Dojo
        • Maybe buy some shell & slate stones

        So mostly continue with my Go obsession

        1. 1

          A go board needs to be shaped? I thought they were just flat with lines burned into them.

          1. 2

            Yeah! Shaped into flatness! hahaha, the wood is pretty rough, needs to be cut, flattened, sanded and treated

          2. 1

            Which kind of design do you plan for your goban, plank-style or more big chunk of wood traditional style?

            1. 2

              A tabletop but thick (5-6cm) traditional goban. When I finish this I will probably do a floor goban with legs if I find the time!

          1. 2

            It’s been some reaaaally long weeks at work. I need to get out of the house. Also need to work a little bit on my own projects to regain sanity.

            1. 2

              $WORK: Big project, lots of work

              $PERSONAL: Caddy made me realize it’s super nice to be able to modify configuration remotely, without files, deploys, and other annoying stuff. I will finish my own “Caddy” for DNS: a little DNS server written in Common Lisp whose configuration I can inspect and change from inside Emacs. After this I will probably do the same for a web server (Caddy is nice, but I don’t like its JSON interface).
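
              As a rough sketch of that idea (Python here purely for illustration; the knobs and the one-line JSON protocol are invented): keep the configuration as ordinary in-process state and expose it over a tiny control socket, so changing it needs no files and no redeploy.

              import json
              import socketserver

              # In-process configuration: no files, no deploys. (Hypothetical knobs.)
              CONFIG = {"upstream": "9.9.9.9", "ttl": 300}

              class ControlHandler(socketserver.StreamRequestHandler):
                  def handle(self):
                      # One JSON object per line, e.g. {"set": {"ttl": 60}};
                      # each request is answered with the full current config.
                      for line in self.rfile:
                          CONFIG.update(json.loads(line).get("set", {}))
                          self.wfile.write((json.dumps(CONFIG) + "\n").encode())

              with socketserver.TCPServer(("127.0.0.1", 5353), ControlHandler) as srv:
                  srv.serve_forever()

              With a Common Lisp image the control socket isn’t even needed: connect SLIME to the running server and inspect or setf the configuration straight from Emacs.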

              1. 4

                Pottery, pottery and lots of pottery. Couldn’t go to the studio this week because of work and I really need it

                1. 1

                  I just learned about kintsugi!

                1. 3

                  So I got my shakuhachi (尺八, literally “shaku eight”; the shaku is a traditional Japanese unit of length, and amusingly the instrument measures 1.8 shaku — one shaku eight sun — not 8 shaku) yesterday in the mail (bought from here). This instrument has an interesting history: for a long time it was only allowed to be played by monks of a particular Buddhist sect, the Fuke sect, which used it for meditation; it was forbidden for everyone else. I’ll be practicing it every waking hour, much to the dismay of my neighbours.

                  It is notoriously difficult to make a note with it. I spent a couple of hours yesterday and a couple of hours today, and it seems like I can finally play a (terrible-sounding) note consistently. I’ll keep practicing over the weekend to maybe make it sound less terrible, and try to learn a simple tune by Monday. I’m starting classes next week.

                  I’ll leave you with a couple of shakuhachi pieces I enjoy listening to:

                  1. 2

                    Have an additional shakuhachi piece: https://youtu.be/jIZbVEGIHGk. A seriously impressive cover of Cory Henry’s solo on Lingus (originally by Snarky Puppy).

                  1. 1

                    Deepen my understanding of the actualism method – which knocks the socks off mindfulness and meditation – by reading more about it, all the while applying it ever more consistently in life.

                    1. 1

                      Why do you say it knocks the socks off mindfulness and meditation? What are you reading about it?

                      1. 1

                        Mindfulness/meditation is really only about acceptance[^1]: waiting for the feeling to pass away on its own (sooner or later). It is not about getting rid of it, or never having it occur again, so that you can spend increasingly more of your time enjoying and appreciating life.

                        There are pages of correspondences on this topic, beginning from http://www.actualfreedom.com.au/richard/selectedcorrespondence/sc-buddhism.htm

                        [^1]: And in Theravada school, it is about replacing “bad” feelings with the “good” feelings (whereas in actualism both get replaced by the felicitous/innocuous feelings). In an actual freedom (ultimate goal), all feelings are gone.

                    1. 5

                      Nice intro.

                      > I may not use J in my everyday programming, but I’m glad I learned it and think it made me a better programmer.

                      This was my experience too, though I continue to use J almost daily even if not professionally.

                      Far more than any other language, including Haskell, learning J felt like dropping acid and blasting away everything I thought I knew about programming.

                      It’s the third paradigm, and unfortunately the least well-known:

                      1. Procedural
                      2. Functional
                      3. Array

                      It really is a whole new mindset.
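
                      For a toy taste of the mindset shift (my own example, in Python/NumPy for accessibility), here is “sum the squares of the even numbers”, first one element at a time, then as operations on the whole collection at once:

                      import numpy as np

                      nums = [3, 1, 4, 1, 5, 9, 2, 6]

                      # Procedural: visit each element and ask "what do I do with this one?"
                      total = 0
                      for n in nums:
                          if n % 2 == 0:
                              total += n * n

                      # Array: select, square, and sum the collection as a whole.
                      a = np.array(nums)
                      total = int((a[a % 2 == 0] ** 2).sum())

                      In J the array version shrinks to a few characters, and, more importantly, the loop disappears from your thinking, not just from the code.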

                      1. 7

                        How did you learn it? What kinds of projects would you recommend for learning such a language and discovering its benefits?

                        1. 4

                          Personally, I stuck to number-crunching à la Project Euler and some Advent of Code challenges.

                          I also tried to use J for bio-signal processing (a set of student labs built around calculating various metrics of heart-rate variability), but after looking at the solution the professor went “Nuh-uh”, and I had to switch to R, which IMO wasn’t that bad of a compromise in terms of paradigm shift.

                          The most interesting point that I don’t see being addressed in most of the J write-ups is how one can use it to create ad-hoc calculi of various sorts, even problem-specific DSLs. Function composition, the monadic/dyadic distinction, and tacit programming surely allow for that, yet the “how” of it isn’t widely discussed.

                          1. 1

                            > I had to switch to R, which IMO wasn’t that bad of a compromise in terms of paradigm shift.

                            That’s a shame: I think R is basically the same as numpy - no real paradigm shift, just more/different functions, so I hope you’ll give Iverson-ish languages another try, because by focusing on the array-ness you might’ve missed something incredible.

                            > in most of the J write-ups is how one can use it to create ad-hoc calculi of various sorts, even problem-specific DSLs

                            I’m very curious to understand better what you’re looking for here. What’s a kind of problem you’d like to see?

                            1. 2

                              > I think R is basically the same as numpy

                              That’s true, and NumPy is basically an ugly cousin of APL ;) What I rather meant is that sacrificing J’s expressiveness was a bearable trade-off in the context of my task, and that in the end I didn’t really need most of it: just reusing R’s libraries and mashing matrices together sufficed. Productivity and acceptance by peers over cleverness and personal satisfaction.

                              > I’m very curious to understand better what you’re looking for here

                              “Notation as a tool of thought”, I think. Is J a means to an end as it is given, or does it permit/encourage building linguistic abstractions on top?

                              APL is often praised for its lyricality, and J specifically describes its parts of speech as nouns, verbs, adverbs, and conjunctions; but does it end there? In J there are ways to assign obverses and adverses to a verb, to define its monadic and dyadic versions, to create adverbs with conjunctions; and then there are tree-manipulating primitives, even ;: for tokenizing, and boxes that permit manipulating J code as data. Is any of that intended for meta-programming, AST building, and ultimately DSL creation?

                              Can an expert in their domain take J and carefully design a set of composable primitives for solving specific problems? If so, what is the process behind that, and what does the end result look like? Is it a notation that reads and writes like poetry, or is it a prosaic library? Is that even a normal practice in the APL family?

                              So, I guess what I am looking for is a walkthrough thru the points I raised, with some interesting problem as its subject; both about the mindset and its execution. I’m too much of a puny human to grasp Aaron Hsu’s thesis on the compiler he built with Dyalog APL, but the basic premise is roughly the same as what I have in mind. His live stream on this compiler’s design and architecture is also worth watching.

                              1. 3

                                > Is J a means to an end as it is given, or does it permit/encourage building linguistic abstractions on top?

                                Permit? Probably.

                                Encourage? I’ve not seen this personally. In fact, the ethos of J/APL seems the opposite of this. The big promise of the language is that with a single set of ~50 primitives you can elegantly solve the vast majority of practical problems. So there’s no need for bespoke DSLs for your different applications.

                                As a side note, my experience with Ruby has convinced me that such DSLs are usually a mistake.

                                1. 3

                                  A case in point: in the above-mentioned talk from Aaron Hsu, he shows how he defined PEG parser combinators and used them to write parsing expressions in a seemingly declarative manner.

                                  Surely that doesn’t qualify as a DSL, but the ability to re-define a problem in terms of these 50+ vector-manipulating primitives is what has always fascinated me in APL. There’s an inherent, almost visceral “spatiality” to this act. At times it feels that the more terse your code is, the closer it gets to some sort of underlying ur-computation that the world is built of.

                                  Perhaps my original question is not about how to extend the language towards the problem, but how to ground the problem in the notation. Or perhaps how to design a notation with APL-ish qualities that will influence how I think with it and create in it?

                                  The APL family posits a certain view of programming, but how does one see everything thru its lens? Is it just linear algebra with mathematical know-how? Practice?

                                  An example from my experience: in the first semester of DSP, I really struggled with the concept of convolution; I simply couldn’t grasp it on an intuitive level. At least in my native language, “convolution” sounds more like “folding” or even “coagulating”, but what folds with what? What is the physical meaning? Implementing it in a C-level language only obscured the question behind semantic noise, and by-the-book definitions didn’t help either.

                                  And then one day I just tried [:+//.*/ in the J console and it clicked. In a sense, if two signals are threads, then their convolution is a helical braid, the one you create by coiling one thread over another: a sum +/ of the oblique /. diagonals formed by the multiplicative * intersections.

                                  There’s a scene in “Matrix: Revolutions” where blinded Neo says to Smith in Bane’s body “I can see you” — that’s the kind of feeling I got from this experience.
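
                                  If you want to see that braid without installing J, here is my NumPy gloss of [:+//.*/ (names mine, not from the thread): build the multiplication table of the two signals (J’s */), then sum every oblique diagonal, i.e. all cells where i + j is the same (J’s +//.).

                                  import numpy as np

                                  f = np.array([1, 2, 3])
                                  g = np.array([4, 5, 6])

                                  # */ : the multiplication table, M[i, j] = f[i] * g[j]
                                  M = np.outer(f, g)

                                  # +//. : sum each oblique diagonal, i.e. all M[i, j] with i + j = n
                                  conv = np.zeros(len(f) + len(g) - 1, dtype=M.dtype)
                                  for i in range(len(f)):
                                      for j in range(len(g)):
                                          conv[i + j] += M[i, j]

                                  print(conv)               # [ 4 13 28 27 18]
                                  print(np.convolve(f, g))  # [ 4 13 28 27 18] -- same thing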

                                  1. 2

                                    > The APL family posits a certain view of programming, but how does one see everything thru its lens? Is it just linear algebra with mathematical know-how? Practice?

                                    Practical answer: doing many problems, getting it “wrong”, and then seeing a more natural way to do it. It seems like you already know this though, based on your very nice example.

                                    Larger answer: I’ve been meaning to write about this, but tl;dr… two words come to mind: geometrical and holistic. Think of the Game of Life in APL. In J/APL thinking, you don’t view the problem from the point of view of individual cells. Instead, you look at the whole plane at once, and imagine eight copies of that plane stacked on top of one another – one plane for each directional shift. Then the neighbor count is simply the +/ of those planes.

                                    This pattern appears a lot. You take the view from above, look at “everything at once,” and there’s a natural and simple way to express that with array computations. It doesn’t always work, though. Some problems aren’t a good fit for array thinking, but a surprising number are.
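
                                    Here is a minimal NumPy rendering of that view from above (my own sketch; np.roll wraps around the edges, matching the |. rotations in the J one-liner quoted in the reply below):

                                    import numpy as np

                                    def life_step(board):
                                        # Eight copies of the plane, one per directional shift.
                                        planes = [np.roll(np.roll(board, dy, axis=0), dx, axis=1)
                                                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                                                  if (dy, dx) != (0, 0)]
                                        # The neighbor count is simply the sum of those planes.
                                        n = sum(planes)
                                        # A cell lives next generation with exactly 3 neighbors,
                                        # or with 2 neighbors if it is already alive.
                                        return ((n == 3) | ((n == 2) & (board == 1))).astype(int)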

                                    1. 3

                                      Yes, the local action (counting neighbors) is applied on the board as a whole, not just to a specific cell; I even have a J one-liner stashed for that somewhere.† Nice example.

                                      Interesting that APL solutions turn out to be fractal in their nature (speaking of ur-computation). Reminds me of Konrad Zuse’s Rechnender Raum and Plankalkül.


                                      † Lo and behold:

                                      L =: [:+.`*./ ],~ 3 4=/ [:+/^:2 (>{;~i:1)|.]   NB. one Life generation
                                      g =: 5 5 $ 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 1 1 1 0   NB. a glider on a 5x5 board
                                      
                                         <"_1 ' o' {~ L^:(i.5) g   NB. render generations 0..4
                                      ┌─────┬─────┬─────┬─────┬─────┐
                                      │     │     │     │     │     │
                                      │  o  │     │     │     │     │
                                      │   o │ o o │   o │  o  │   o │
                                      │ ooo │  oo │ o o │   oo│    o│
                                      │     │  o  │  oo │  oo │  ooo│
                                      └─────┴─────┴─────┴─────┴─────┘
                                      
                                      1. 1

                                        Nice. You might also enjoy this 26 byte version:

                                        (]=3+4=*)[:+/(>,{;~i:1)&|.
                                        

                                        By the way, what did you mean by “APL solutions turn out to be fractal in their nature”?

                                        1. 2

                                          Well, the Moore neighborhood is applied to the playing field as a matrix rotation, but at the same time to each cell on the field (each element of the matrix). So, in a way, the Game of Life field is a cell by itself (with a number of states dependent on the number of cells in it), and, in turn, each cell on it is a playing board. [1]

                                             [ c =: 4=i.3 3
                                          0 0 0 NB. A cell with 8 neighbors.
                                          0 1 0
                                          0 0 0
                                             <"_2 (>{;~i:_1)|. c
                                          ┌─────┬─────┬─────┐ NB. A cell with 8 neighbors?
                                          │1 0 0│0 1 0│0 0 1│
                                          │0 0 0│0 0 0│0 0 0│
                                          │0 0 0│0 0 0│0 0 0│
                                          ├─────┼─────┼─────┤
                                          │0 0 0│0 0 0│0 0 0│
                                          │1 0 0│0 1 0│0 0 1│
                                          │0 0 0│0 0 0│0 0 0│
                                          ├─────┼─────┼─────┤
                                          │0 0 0│0 0 0│0 0 0│
                                          │0 0 0│0 0 0│0 0 0│
                                          │1 0 0│0 1 0│0 0 1│
                                          └─────┴─────┴─────┘
                                          

                                          That was a nod towards your idea about geometry and holistic approach.

                                          1. 1

                                            Gotcha. Thanks.

                                  2. 2

                                    Yeah. One subtlety most people don’t appreciate about “notation as a tool of thought” is that the paper doesn’t mention “abstraction” once. Easy to miss in these modern times where “notation” has implicitly come to mean “for creating new abstractions”.

                                    1. 4

                                      That’s fascinating. I never noticed that before.

                                      Since I have it open now, here’s the relevant paragraph on my previous point:

                                      > The utility of a language as a tool of thought increases with the range of topics it can treat, but decreases with the amount of vocabulary and the complexity of grammatical rules which the user must keep in mind. Economy of notation is therefore important.

                                      (from Notation as a Tool of Thought)

                                      1. 1

                                        True, it doesn’t. But then the whole paper is a process of abstraction (deriving general patterns from concrete examples, addressing the main thesis case-by-case) that uses APL as a conceptual vehicle.

                                        1. 2

                                          The design goal is to create a notation that minimizes the need for new names.

                                          1. 3

                                            > APL is like a beautiful diamond – flawless, beautifully symmetrical. But you can’t add anything to it. If you try to glue on another diamond, you don’t get a bigger diamond. Lisp is like a ball of mud.

                                            Extension of the base language is at odds with its core design principle, but that surely does not preclude us from defining problems in terms of it. My original questions (not well-formed TBH) were more concerned with the ways this notation can be adapted to a particular context (or vice versa).

                                            E.g. the above-mentioned Co-dfns compiler, as far as I understand, implements a Scheme-like subset of Dyalog APL and represents trees as vectors, but not the other way around (i.e. extending the language with a tree-like datatype and sprucing all of that up with s-expressions or something).

                                            Like I said, J has a set of primitives and mechanics that seem language-oriented to me, so I wondered what compiler/interpreter creation looks like from this point of view, on a small, exemplified scale (hence the DSLs), and what the problem-solving mindset (heuristics?) behind it is.

                                            FSM state transitions can be modeled with tables, yes, and an AST, being a graph, can use adjacency matrices with eigen-shticks, but this whole approach feels like black magic and a well-guarded trade secret. Maybe because it’s a fork off the mainstream road and into the academic weeds. I need to check out more of Aaron Hsu’s talks, for sure.

                                            1. 1

                                              As a historical aside, the APL presented in Iverson’s A Programming Language book (pdf) included trees as a basic data structure (page 45 in the book). They weren’t included in APL as first implemented; later, APL2 added nested arrays, which serve a similar purpose. Some of the book’s tree operations are in J.

                                              1. 2

                                                Yes, thanks for mentioning that! The chapter on microprogramming with APL was mind-blowing at the time I read it.

                              2. 2

                                I learned it mainly by doing hundreds of code golf problems, asking questions on the J IRC channel, reading (good chunks of) J for C programmers and Learning J, as well as the J dictionary and the more accessible NuVoc documentation.

                                I’d say it took 1.5–2 years to acquire a feeling of fluency, much longer than with other languages. One crucial practice was solving a problem and then having the solution edited by experts, because unavoidably you’ll apply your current paradigms to J, or simply be unaware of J idioms, rather than solving the problem in a “J way.”

                                The best place to get this kind of help these days is The APL Orchard. There’s a handful of very talented J and APL programmers there willing to help.

                                By the way, you could get the same mind-expansion from learning APL, if you were more inclined that way. The free book on the Dyalog APL site is also good.

                              3. 1

                                I’d classify it as functional. It just uses a multi-dimensional array where others use a linked list. Nearly all operators are side-effect free and data is immutable. There are higher order patterns like map-reduce.

                                1. 3

                                  Array programming languages aren’t “functional” or “immutable” in the way that Erlang and ML are, and I observe that beginners notice this only after a few months, probably because those features show up in array-language applications and not in the puzzles you practice on to learn how to think in arrays. Real applications batch and use lots of global variables with delimited names, like Tcl or Perl: J programmers call them locales, K programmers have the K-tree, and in APL they’re isolates. And so on.

                                  I know other languages have “arrays” but it’s just an unfortunate name collision. Many of us use the term “Iverson languages” instead of “Array languages”, because we’re not trying to confuse things unnecessarily.

                                  1. 2

                                    To add to the sibling comments: the linked article and code golf expose you to only a part of J. For me it opened up a very different mindset, with things like full tacit programming, which comes close to function-level programming [1] (even point-free Haskell feels clumsy in comparison with APL and J), and constructs like hooks and forks [2] that permit composing and chaining functions with ease.

                                    1. 1

                                      It’s true that tacit programming in J is functional, but that misses the larger difference. The way you need to think about and frame a problem to solve it idiomatically is usually quite different in J versus, say, Haskell.

                                  1. 19

                                    I’m probably not the only one with the opinion that rewrites in Rust may generally be a good idea, but Rust’s compile times are unacceptable. I know there are efforts to improve them, but they are so abysmally slow that it really affects me as a Gentoo user. Another point is that Rust is not standardized and is a one-implementation language, which is the same thing that discourages me from looking deeper into Haskell and others. I’m not saying that I categorically reject single-implementation languages, as that would disregard any new language, but a language implementation should be possible without too much work (say, within two man-months). Neither Haskell nor Rust satisfies this condition, and contraptions like Cargo make it even worse, because implementing Rust would also mean more or less implementing the entire Cargo ecosystem.

                                    By contrast, C compiles really fast, is an industry standard, and has dozens of implementations. Another thing we should note is that the original C codebase is a mature one. While Rust’s great ownership and type system may save you from general memory-handling and type errors, it won’t save you from intrinsic logic errors. I don’t weigh that point very heavily, though, because it is an argument that could be made against any new codebase.

                                    What really matters to me is the increase in the diversity of git implementations, which is a really good thing.

                                    1. 22

                                      > but a language implementation should be possible without too much work (say, within two man-months)

                                      Why is that a requirement? I don’t understand your position. Should we not have complex, interesting, or experimental languages just because a person couldn’t write an implementation by himself in two months? Should we discard all the advances Rust and Haskell provide because they require a complex compiler?

                                      1. 5

                                        I’m not saying that we should discard those advances, because there is no mutual exclusion. I’m pretty certain one could work up a pure functional programming language based on linear type theory that provides the same benefits and is possible to implement in a reasonable amount of time.

                                        A good comparison is the web: 10-15 years ago, it was possible for a person to implement a basic web browser in a reasonable amount of time. Nowadays, it is impossible to follow all the new web standards, and you need an army of developers to keep up, which is why more and more groups are giving up on the endeavour (look at Opera and Microsoft as the most recent examples). We are now in a state where almost 90% of browsers are based on WebKit, which turns the web into a one-implementation domain. I’m glad Mozilla is holding up there, but who knows for how long?

                                        The thing is the following: if you choose a language as a developer, you “invest” in its ecosystem, and if the ecosystem for some reason breaks apart, dies, or changes in a direction you don’t agree with, you are forced to put additional work into it.

                                        This additional work can be a lot if you’re talking about proprietary ecosystems, where it more or less means you are forced to rewrite your programs. Rust satisfies the necessary condition of a qualified ecosystem, because it’s open source, but open source systems can also shut you out when the ABI/API isn’t stable. The danger is especially present with the “loose” crate system, which may provide high flexibility, but also means a lot of technical debt, because you have to continually push your code to the newest specs to be able to use your dependencies. However, this is again a question of the ecosystem, and I’d prefer to refer only to the Rust compiler here.

                                        Anyway, I think the Rust community needs to address this and work up a standard for the Rust language. For my part, I won’t be investing my time into this ecosystem until that is addressed in some way. Anything else is just building a castle on sand.

                                        1. 5

                                          > A good comparison is the web: 10-15 years ago, it was possible for a person to implement a basic web browser in a reasonable amount of time. Nowadays, it is impossible to follow all the new web standards, and you need an army of developers to keep up, which is why more and more groups are giving up on the endeavour (look at Opera and Microsoft as the most recent examples). We are now in a state where almost 90% of browsers are based on WebKit, which turns the web into a one-implementation domain. I’m glad Mozilla is holding up there, but who knows for how long?

                                          There is a good argument by Drew DeVault that it is impossible to reimplement a web browser for the modern web.

                                          1. 4

                                            I know Blink was forked from WebKit, but all these years later, don’t you think it’s a little reductive to treat them as the same? If I’m not mistaken, Blink sends nothing upstream to WebKit, and by now the codebases are fairly divergent.

                                        2. 8

                                          I feel ya: on OpenBSD, compile times are orders of magnitude slower than on Linux! For example, ncspot takes ~2 minutes to build on Linux and 37 minutes on OpenBSD (with most features disabled)!!

                                          1. 5

                                            > 37 minutes on OpenBSD

                                            For reals? This is terrifying.

                                            1. 1

                                              Excuse my ignorance – mind pointing me to some kind of article/document explaining why this is the case?

                                              1. 7

                                                There isn’t one. People have looked into it (semarie@, who maintains the Rust port on OpenBSD, being one) with things like the RUSTC_BOOTSTRAP=1 and RUSTFLAGS='-Ztime-passes -Ztime-llvm-passes' env vars. These point to most of the time being spent in LLVM, but no one has tracked down the issue fully, AFAIK.

                                            2. 6

                                              > Another point is that Rust is not standardized and is a one-implementation language

                                              This is something that gives me pause when considering Rust. If the core Rust team does something that makes it impossible for me to continue using Rust (e.g. changes licenses to something incompatible with what I’m using it for), I don’t have anywhere to go and at best am stuck on an older version.

                                              One of the solutions to the above problem is a fork, but without a standard, the fork and the original can vary and no one is “right” and I lose the ability to write code portable between the two versions.

                                              Obviously, this isn’t a problem unique to Rust - most languages aren’t standardized and having a plethora of implementations can cause its own problems too - but the fact that there are large parts of Rust that are undefined and unstandardized (the ABI, the aliasing rules, etc) gives me pause from using it in mission-critical stuff.

                                              (I’m still learning Rust and I’m planning on using it for my next big thing if I get good enough at it in time, though given the time constraints it’s looking like I’ll be using C because my Rust won’t be good enough yet.)

                                              1. 2

                                                The fact that the trademark is still owned by the Mozilla foundation and not the to-be-created Rust Foundation is also likely chilling any attempts at independent reimplementation.

                                              2. 1

                                                As much as I understand your point about the slowness of Rust compile times, I think it is only a matter of time before they shrink.

                                                On the standardization point: Haskell has a standard, Haskell 2010. GHC is the only implementation now, but it has a lot of compiler extensions that are not in the standard. The new Haskell 2020 standard is on its way. Implementing standard Haskell (without all the GHC add-ons) is doable, but that language is way simpler and has its flaws.

                                                1. 2

                                                  The thing is, as you said: you can’t compile a lot of existing code by implementing Haskell 2010 (or 2020, for that matter) if you don’t also ship the “proprietary” extensions.

                                                  1. 1

                                                    It is the same when you abuse GCC or Clang extensions in your codebase. The main difference with Haskell is that you (almost) only have GHC available, and the community has put its effort into it and created an ecosystem of extensions.

                                                    As for C, you could write standard-compliant code that a hypothetical other compiler could compile. I am pretty sure that if we had had only one main C compiler for as long as Haskell has had GHC, the situation would have been similar: lots of language extensions outside the standard, existing solely in that compiler.

                                                    1. 3

                                                      But this is exactly the case: there’s lots and lots of code out there that uses GNU extensions (from gcc). For a very long time, gcc was the only real compiler around, and it led to this problem. Some extensions are so persistent that clang had no other choice but to implement them.

                                                      1. 1

                                                        But did those extensions ever reach the standard? I ask candidly, as I don’t know that much about the evolution of C, its compilers, and the standard.

                                                        1. 4

                                                          There’s a list by GNU of the extensions. I really hate that you can’t enable a warning flag (like -Wextensions) that warns you about using GNU extensions.

                                                          Still, it is not as bad as bashisms (i.e. extensions in GNU bash over POSIX sh): many scripts declare a /bin/sh shebang at the top but are full of bashisms, because they incidentally have bash as their default shell. Most bashisms are just stupid, many people don’t know they are using them, and there’s no flag to enable warnings about them. Another bad offender is the GNU extensions to the POSIX core utilities, especially GNU make: 99% of all makefiles are actually GNU-only and don’t work with POSIX make.

                                                          In general, this is one major reason I dislike GNU: they see themselves as the one and only choice for software (demanding that people call Linux “GNU/Linux”) while introducing tons of extensions that chain their users to their ecosystem.

                                                          1. 2

                                                            Here are some of the GNU C extensions that ended up in the C standard.

                                                            • // comments
                                                            • inline functions
                                                            • Variable length arrays
                                                            • Hex floats
                                                            • Variadic macros
                                                            • alignof
                                                        2. 1

                                                        If I remember correctly, 10 years ago Hugs was still working, and maybe even nhc :)

                                                          1. 1

                                                          Yep :) and yhc never landed after forking from nhc. UHC and JHC seem dead. My main point is that the existence of a standard does not guarantee a multiplication of implementations, or easy movement between compilers/interpreters/JITs/etc. That is a simplification, and it really depends on the community around the language. Look at Common Lisp, with its set-in-stone standard and its many compilers, where you can easily pinpoint what is going to work and what is not. Or Scheme, with a fairly small standard, where you quickly lose the ability to swap between interpreters if you rely on anything implementation-specific.

                                                          Beyond that, everyone has their own checklist of what a programming language must or must not provide for them to learn and use it.

                                                  1. 4

                                                    Moving! New flat, new cat, and a place to practice woodworking at home. Can’t wait!

                                                    1. 6

                                                      Does anyone know of any good resources to undertake the creation of a custom chip? I’ve dabbled in FPGAs and VHDL before but don’t really understand how you go from that to a custom buildable chip.

                                                      1. 2

                                                        Same here. I would like to replicate the function of some old 80s custom chips, but I’m not sure where to start other than “VHDL.” It seems like the OpenROAD project is named as a major EDA component of this initiative, but I’m also unclear on how I’d use it.

                                                        Lots of reading ahead!

                                                      1. 21

                                                        Hi, this is my crazy Linux OS project. I wasn’t really prepared for this to be shared today, so documentation is a bit lacking. I’m happy to answer any questions.

                                                        I just updated the screenshot on the wiki, now featuring the oasis netsurf frontend. I also added a script to build QEMU images using builds.sr.ht. The latest one is available here: https://patchouli.sr.ht/builds.sr.ht/artifacts/~mcf/226248/1b6626238a895943/oasis-qemu.tar.xz

                                                        If you want to try it out, just extract the tarball and run ./run (graphics mode) or ./run -s (serial mode). More information can be found in README.md alongside the image (including how to rebuild from source), which is also available in the home directories inside the image.

                                                        1. 6

                                                            Has any thought been given to the security downsides of static linking? Since Linux doesn’t support static PIE binaries, ASLR is made ineffectual for statically compiled applications.

                                                          1. 10

                                                              > Linux doesn’t support static PIE binaries

                                                            musl and gcc fully support static PIE. If you have a toolchain that supports it, you just need to put -fPIE in your CFLAGS and -static-pie in your LDFLAGS.

                                                            This used to be the default actually, but I just changed it in case someone might try to build with a toolchain from musl.cc, which does not build libc.a with -fPIE so it can’t be linked into a static PIE.

                                                            1. 1

                                                              Awesome! I didn’t know musl supported static PIE. I haven’t really paid much attention to musl (and if I’m being honest, Linux in general.)

                                                          2. 5

                                                            I’m really a big fan of what you have done, everything fits together in such a tidy way, thanks so much!

                                                            1. 1

                                                              I thought it was neat to see both your projects on the same day because they solve some similar problems in different ways. Both are really neat.

                                                              1. 1

                                                                This one isn’t my project, but I agree it solves similar problems in a more idealistic way.

                                                                1. 2

                                                                    That was ambiguous. I meant “both your projects” as in yours and the other person’s.

                                                            2. 4

                                                              Looks neat. As I understand it, your model is closer to a firmware image than a traditional Linux distro (i.e. you build a set of components you want and install them as an atomic set). I can see that being really useful for cloud / server deployments.

                                                              Are you using anything like crunchgen to get some of the benefits of dynamic linking in your programs, or do they all carry copies of the same libraries? I’d love to see a system that generated a single binary for all of the programs in the image, did dead-code elimination and link-time optimisation across the entire thing.

                                                              (Totally unrelated, but I’m oddly pleased by all of the things using NetSurf suddenly. I remember using it on RiscOS back when AltaVista was an exciting new competitor to Yahoo! and Lycos)

                                                              1. 3

                                                                Thanks! Yeah, that seems like a fair comparison. The idea for that stemmed from dissatisfaction with how typical Linux distributions split up source packages into several binary packages (if they even do that at all). With this approach, you select the contents based on whatever criteria you want. Anything that doesn’t get selected doesn’t even get built. Due to the use of static linking, you don’t really have to worry about runtime dependencies. This gives you a lot of control depending on your use case. For example, on my VPS, I use something like

                                                                fs = {
                                                                	-- I need development files from these libraries to rebuild the kernel
                                                                	{'linux-headers', 'musl', 'ncurses', 'elftoolchain', 'libressl', 'zlib'},
                                                                	-- I want the st terminfo file, but I don't need st itself
                                                                	{'st', include={'^share/terminfo/'}},
                                                                	{
                                                                		sets.core, sets.extra,
                                                                		'acme-client', 'dnssec-rr', 'make', 'nginx', 'nsd', 'pounce',
                                                                		exclude={'^include/', 'lib/.*%.a$'},
                                                                	},
                                                                }
                                                                

                                                                On my desktop, I use fs = {exclude={}}, which builds every package, excluding nothing.

                                                                I’m not using anything like crunchgen, so everything carries a copy of everything it links to. However, due to the use of lightweight packages, most binaries are really small anyway. Only a few packages such as mpv or mupdf which link in a lot of libraries have huge binaries (and by huge I still mean < 10M).

                                                                Yes, I’m a big fan of NetSurf. It’s quite a capable browser considering their resources. Unfortunately, more and more sites require the monstrosity that is the modern web browser, so I installed firefox via pkgsrc for those.

                                                                1. 1

                                                                  > on my VPS

                                                                  How do you install your custom image on your VPS? I ask because most VPS providers give you a selection of OS images (built with varying levels of care) that you have to start with.

                                                                  1. 4

                                                                    I started with a Debian image and used that to install oasis on a separate partition. Then I used a rescue image to move/expand oasis to fill the whole drive. I don’t remember the exact procedure I used, it was a few years ago.

                                                                    1. 2

                                                                      Several providers allow you to upload an ISO; Vultr, for example. AWS also allows it, I think.

                                                                      1. 1

                                                                        On top of the providers that support it, there are tricks to get OSes onto VMs of unsupporting providers. I’ll be even more impressed when I see someone get something running on an unsupported ISA. I already have an idea of how that might happen.

                                                                  2. 1

                                                                    Velox seems to be a window manager, not a display server, unless there’s something I missed?

                                                                    Forgive me, I misread the description.

                                                                  1. 13

                                                                    I really dislike that the base protocol is all written in JavaScript. I know this is a volunteer project, but it’s clearly aiming at becoming a basis for the decentralized internet of the future, not just a personal project. Right now, starting a dat/hypercore project in another language is a lot harder than it should be, and that will IMO kill the project.

                                                                    1. 13

                                                                      This Rust implementation looks like it is under active development: https://github.com/Frando/hypercore-protocol-rs

                                                                      I encourage people who don’t like Javascript to chip in there, or get something going in their preferred language. The more implementations, the merrier!

                                                                      1. 2

                                                                        A lot of activity is also happening on https://github.com/datrs/ – after I stepped away from the project in 2019 activity quieted down for a while. But recently things have picked up again thanks to Frando & Bruno.

                                                                        In January a milestone was achieved: datrs running on an Android phone talking to the JS impl on Windows. This is really exciting and the project is in very capable hands!

                                                                        1. 1

                                                                          Is the protocol fairly stable nowadays? Or is it changing/evolving? How hard is it to keep up with upstream?

                                                                    1. 3

                                                                      Continue working on my programmable RTS game (think age of empires with programmable units and buildings). Started a low-volume (1 email every 2-3 weeks) newsletter about it this last weekend to help me with motivation.

                                                                      1. 2

                                                                        Very nice!

                                                                          For a project I’m developing (a programmable RTS game), I am writing a VM, and this may actually come in handy. Performance will be very important, so I may end up writing some really low-level code there. (There’s probably no need for that, but I really love getting really close to the CPU; I’m fascinated by KolibriOS, for example.)

                                                                        1. 1

                                                                          Here’s what tree -dL 1 looks like at $HOME

                                                                          .
                                                                          ├── downloads
                                                                          ├── bin    // local bin folder
                                                                          ├── books  // pdfs, ebooks, docs
                                                                          ├── build  // Build folder for external projects
                                                                          ├── docs   // Documents
                                                                          ├── music
                                                                          ├── org    // personal org files
                                                                            └── proj   // personal code projects
                                                                          

                                                                          It’s actually a lot less clean than this, but this is what it’s supposed to look like (plus $WORK folder which I keep separate from proj)

                                                                          1. 1

                                                                            I’d expand this with two additions and a minor fix:

                                                                            • use poetry to do dependency management and handle virtual environments

                                                                              This allows you to freeze dependencies with a lock file, and it simplifies the virtualenv flow a lot. This could be a way faster solution than using a docker container during development. Poetry also allows you to specify development-only dependencies like pytest and Coverage

                                                                            • use pdoc3 to automatically generate documentation. Could be made available as two make targets:

                                                                              • html - Generates the documentation and writes it to disk. This could also be run with github actions
                                                                              • live - Runs a local server which automatically refreshes the documentation in your browser while you work
                                                                            • Make the targets in the Makefile into .PHONY targets

                                                                            1. 15

                                                                              As the original author of pdoc, please reconsider using pdoc3, which is a hostile fork with subtle swastikas embedded in their homepage.

                                                                              1. 3

                                                                                Wow. That’s awful. I’m sorry they did that to you.

                                                                                1. 1

                                                                                  Wow, that’s messed up. That guy’s avatar is a swastika too. His stupid answer about it being a ‘misunderstanding’ is so typical of that kind of edge lord type, he’s clearly an asshole. Is there a way to report Github accounts?

                                                                                  1. 1

                                                                                      There is a link to block or report a user on that user’s profile page on Github. Regarding this particular user, however: viewed at larger sizes, the icon seems to be a detail of a pattern (repeated in miniature), so there’s probably enough interpretive wiggle room for either GH or the user to argue it’s not “actually” a swastika.

                                                                                    1. 2

                                                                                      Yeah, but combine that with BurntSushi’s issue, and you start seeing an obvious pattern (pun not intended) of behavior.

                                                                                  2. 1

                                                                                    Wow, that sucks, dude. I simply assumed the fork happened due to inactivity on pdoc. The attempt to remove pdoc from the Python wiki, the change of license, and his response to #87 clearly show he is not acting in good faith.

                                                                                    My initial reaction to the use of the swastika was to think of it in its original religious context. His defense of it even seemed kind of respectful (#193, #64). Then I asked my family about it and they immediately called bullshit. That led me to discover his response to #87, the fact that #1675 ever happened, and the fact that he is most likely from Slovenia and therefore most likely neither a Hindu nor a Buddhist himself. I no longer believe his use of the swastika is defensible at all.

                                                                                    1. 4

                                                                                      Yes. These are exactly the kind of tactics used by these people. If you’re curious, you can learn more by reading their playbook: https://m.huffpost.com/us/entry/us_5a2ece19e4b0ce3b344492f2

                                                                                    2. 0

                                                                                      I understand some of the complaints. The whole relicensing and stealing of the project must suck for you, and I’m sorry for that; I’m absolutely not defending that part. But I think the whole swastika thing has been blown out of proportion. Don’t get me wrong: it’s a symbol that right now, at least in the West, carries a negative connotation, and you’re completely within your rights to want to avoid being associated with it, but it’s a really old symbol [1], used for a very long time with a completely different meaning and still used with that meaning in Asia. I think it’s unfair to criticise that part.

                                                                                      1. 5

                                                                                        I know the history. I stand by my criticism. I think it’s pretty plain that this person is acting in bad faith.

                                                                                        1. 0

                                                                                          Yes, I agree that he is probably acting in bad faith. But I don’t think the swastikas are part of the bad faith. I wanted to give some context for the swastika symbol, as the other subthread was talking about reporting him specifically because he has a swastika as his profile picture, and focusing on that.

                                                                                          1. 5

                                                                                            > But I don’t think the swastikas are part of the bad faith.

                                                                                            I do. The very fact that we are debating this is exactly the point. If this person were genuine, it would be crystal clear.

                                                                                            1. 1

                                                                                              To add one more data point, the project website’s footer has the swastikas too, and let’s just say I do not think the choice of juxtaposed quote is coincidental.

                                                                                              Fucking sucks to have your project hijacked by the worst kind of people imaginable, my condolences :(

                                                                                              1. 0

                                                                                                Ew. (I agree they are, and also offer condolences.) I’m feeling a little bit thick as regards the quote, though. What’s wrong with it? Was Yourdon some kind of nazi-adjacent scumbag?

                                                                                                1. 4

                                                                                                  AFAIK nothing wrong with Yourdon, and I imagine the quote was innocent enough in its original context and time. It’s just that here in 2020 (as an American, but I imagine this is true for lots of European readers as well) I associate certain connotations with the word “undocumented,” i.e. undocumented immigrants.

                                                                                                  It fits into this guy’s general pattern of lamely attempted plausible deniability, but particularly given everything else, appropriating a quote that just happens to call an “undocumented” thing “despicable” right next to those swastikas sets off major alarm bells for me.

                                                                                                  1. 2

                                                                                                    That seems entirely likely. My brain just didn’t draw that line. Thanks for explaining.

                                                                                  1. 2

                                                                                    I’ve mostly used Arch these past few years without any issue. My desktop right now has NixOS though

                                                                                    1. 3

                                                                                        Working from home, travelling to see the family for the week (and longer if we get further lockdowns), but most interestingly, I’ll be rewriting an existing Vue.js frontend in Svelte (https://svelte.dev), which makes me all giddy just thinking about it.

                                                                                      1. 1

                                                                                          When I’ve done frontend work these past few years I’ve mostly used React; what does Svelte bring to the table?

                                                                                        1. 2

                                                                                          Small bundle sizes due to Svelte being a compiler instead of only a library, but what I like most is that the code you write actually looks reasonable, and doesn’t look like you’re frantically trying to work around JavaScript’s shortcomings. Check out this login form for example — it’s fully reactive: https://marisa.cloud/aun/tome-svelte/src/branch/master/src/pages/Login.svelte

                                                                                          1. 1

                                                                                            ++ on the small bundle sizes, and when paired with rollup it’s definitely been a space savings for us. I’m really looking forward to it getting official Typescript support.

                                                                                      1. 12

                                                                                        The “quarantine” here in Barcelona, Spain has actually been kind of good for me, doing lots of things I didn’t have time for before. Hope I can create a routine so I can continue doing it after all of this ends.

                                                                                        Personal

                                                                                        • Have started doing exercise three times a day and meditating twice a day.
                                                                                        • Have started playing the mandolin.
                                                                                        • Studying Japanese.
                                                                                        • Start work on my blog.
                                                                                        • Maybe continue learning 3D modeling with Blender.
                                                                                        • Understanding more of NixOS.

                                                                                        Work

                                                                                        • Nothing very interesting, working on a project that should be finished in two weeks.
                                                                                        1. 1

                                                                                          It’s sad that it took a pandemic to bring this about but both my wife and I are very much enjoying the imposed 100% WFH time.

                                                                                          Amazing what you can do when you get back 2+ hours a day of not commuting.