1. 3

    While Nix is still the recommended way to install, we now also have auto-created Docker images for neuron.

    Thanks for doing this! I attempted to try out Neuron once before, and setting up Nix under macOS on an encrypted disk turned out to be a messy and scarcely documented process that I eventually gave up on. I’m looking forward to making another attempt.

    1. 5

      FWIW, if you tried installing Nix on Catalina before June, you may have more luck re-trying it now. There’s still one case the installer doesn’t really solve for (older pre-T2 hardware also using FileVault FDE), but if you don’t fit in that camp the experience shouldn’t be the disaster it was from October to May.

      1. 5

        I unfortunately am in that camp, but thank you for reminding me of exactly what the problematic case is. I may get my hands on a newer Mac soon.

        1. 3

          We got a tip on that case as well, but the only implementation I’m aware of yet is something in Ruby that was written for a company’s dev bootstrap scripts, and we haven’t found someone with the time/interest/hardware to translate and test it yet. (TL;DR: encrypting the new volume and putting the credential in the system keychain reportedly solves race-condition issues we’ve seen with the login keychain that have kept us from trying to handle that case yet.)

          Someone recently expressed interest in tackling it, so I’m hopeful, but until there’s a PR I’ll try to avoid suggesting it’s right around the corner :)

          Oh. Also. A question mark we still have is whether anyone in this camp will object to the installer creating a new volume, generating an encryption passphrase without user input, and putting the credential into the system keychain on its own. Thoughts?

          1. 1

            That sounds great to me, assuming the installer asks first. I haven’t had time to follow developments on this issue on GH, so thank you for the excellent summary!

      2. 3

        Next step, someone needs to build a Whalebrew-friendly image based on Neuron’s or modify it to be compatible with Whalebrew.

        Edit: opened a feature suggestion ticket: https://github.com/srid/neuron/issues/307

      1. 2

        What about key ghosting[1] in such a keyboard solution?

        [1] https://drakeirving.github.io/MultiKeyDisplay/

        1. 6

          The usual solution is to add a diode after every switch.

          1. 1

            What @C-Keen said. Each of the blue-gray segments in this wiring diagram corresponds to a diode:


          1. 6

            I’ve been using Hugo as of late due to ox-hugo, which allows me to write blog posts in Emacs’ Org mode. It’s pretty wonderful for tracking WIP posts and what else to write with Org.

            1. 4

              The other day I was digging into someone’s personal wiki’s implementation and found that they’re using ox-hugo to generate the wiki from their org-roam notes 👏

              1. 5

                That’s not just anyone - that’s actually the maintainer of org-roam, so it makes sense that they’d use it on a personal wiki as well. Cool find.

            1. 1

              It’s super convenient that Rust has &str (pointer + length, immutable) and types can auto-dereference to it. This works as the lowest common denominator for strings, so using a custom small string type is practical.

              This is in contrast with languages like ObjC or Swift, where there is one blessed string type. That type has to be everything for everyone, and needs to support small strings, large strings, fast appends, searches, and everything in between, because any other string type would be a second-class citizen.
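              The &str point can be made concrete. Here is a sketch of a hypothetical inline “small string” type that derefs to &str; the names and the 7-byte capacity are illustrative, not any particular crate’s API:

```rust
use std::ops::Deref;

// Hypothetical "small string" stored inline, up to 7 bytes, no heap.
struct SmallStr {
    len: u8,
    buf: [u8; 7],
}

impl SmallStr {
    fn new(s: &str) -> Option<SmallStr> {
        if s.len() > 7 {
            return None; // doesn't fit inline
        }
        let mut buf = [0u8; 7];
        buf[..s.len()].copy_from_slice(s.as_bytes());
        Some(SmallStr { len: s.len() as u8, buf })
    }
}

impl Deref for SmallStr {
    type Target = str;
    fn deref(&self) -> &str {
        // Valid UTF-8 by construction: built from a &str.
        std::str::from_utf8(&self.buf[..self.len as usize]).unwrap()
    }
}

// Any function taking &str accepts String, SmallStr, and literals alike.
fn shout(s: &str) -> String {
    s.to_uppercase()
}

fn main() {
    let heap = String::from("hello");
    let small = SmallStr::new("hi").unwrap();
    // Deref coercion turns &String and &SmallStr into &str at the call site.
    assert_eq!(shout(&heap), "HELLO");
    assert_eq!(shout(&small), "HI");
}
```

              Because `shout` only asks for the lowest common denominator, the custom type is not a second-class citizen anywhere that accepts &str.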

              1. 3

                Did you know that NSString is a class cluster? It has a number of private subclasses which function as alternate implementations.

                1. 1

                  Yeah, it’s a very clever solution for a type that needs to do a good job for many very different workloads, given that ObjC already pays the cost of dynamic dispatch on every call.

              1. 3

                Isn’t caps lock -> control the canonical remap? It’s on the home row, and caps lock isn’t good for anything else…?

                1. 2

                  Yeah, I really don’t understand why people are advocating for remapping extremely useful keys like enter or A to control. That seems like it would cause way more problems than it solves.

                  1. 5

                    The article says that it’s a dual-function remap - holding enter causes it to act like control, but merely tapping it causes it to continue to act like enter.

                    1. 2

                      That’ll teach me to skim, I guess.

                      I still dislike this though. I don’t like keys that simultaneously act as regular keys and modifier keys. The Windows key on PCs is a prime example: there are a bunch of shortcuts bound to Windows+, but tapping Windows alone causes the start menu to come up. When using Windows, I frequently find myself going for a keyboard shortcut, typing Windows, changing my mind, releasing Windows, and then having to deal with the annoyance of the Start menu appearing.

                      Given that enter is used as a “submit” key on a huge number of programs, I expect it would be even worse.

                      1. 4

                        That’s an implementation problem, not a conceptual problem. The way that xcape (and theoretically any other implementation of this) works (or could work) is that if you hold down the key long enough, even if you release it without pressing anything else, it acts like the modifier key and doesn’t send the “tap” event. My xcape is configured with a 250 ms delay - if I press left shift as part of a chord and then change my mind, I just make sure I hold the shift key for the required quarter of a second, and the left parenthesis (tap functionality) is not emitted.

                        1. 1

                          The way I see it, the existence (and arbitrariness) of that 250 ms parameter is strong evidence that it is a fundamental conceptual problem. I’m with @Kronopath in that having both modifier and non-modifier functionality on the same physical key drives me thoroughly up the wall.

                          And for anyone else who feels similarly, a Firefox setting I was relieved to stumble across a few years ago: in about:config, set ui.key.menuAccessKeyFocuses to false. And if (as I do) you keep the menu bar hidden, note that it will unfortunately auto-revert this setting to true if you show and then hide the menu bar manually (e.g. via a right-click on the tab bar), though you can use F10 to show it temporarily and avoid that particular quirk (or bug?).

                          1. 1

                            I don’t quite see how the existence of a timeout parameter implies a fundamental conceptual problem - could you clarify? The universe, and programming, are filled with timing-related parameters, such as the TCP connection timeout, the length of time you have to hold down a key before it repeats, or the oscillation frequency of a Josephson junction exposed to a particular voltage. This particular parameter isn’t actually arbitrary, either - I chose it myself based on my preferences and after some experimentation to find a comfortable value. This is no more arbitrary than you picking your mouse’s DPI.

                            It’s unfortunate that the functionality “drives you up a wall”, but that doesn’t actually mean that it’s a bad idea. For instance, vim-style modal editing is very unpleasant for some people (mostly people who eventually switch to emacs) - but that doesn’t have anything to do with the soundness of the idea, nor its efficiency.

                            1. 1

                              Sure, it’s not arbitrary for any given individual, but overall it’s a magic number with no clear “right” value, whereas otherwise it’s a simple matter of which keys were pressed and released in which order, no guessing or heuristics involved. Adding time durations as semantically-significant signals where they otherwise didn’t factor in at all is a fundamental change to the signaling mechanism.

                              And I’m not arguing against time-based behavior in general (certainly there are contexts in which it’s appropriate) but I find predictable, synchronous behavior highly preferable unless there’s a very compelling reason to introduce asynchrony. The sort of time-based behavior you’ve got is, ultimately, a race condition – and for people who are okay with it that’s fine, but I don’t like it when software (e.g. Firefox, or Windows as mentioned upthread) bakes it in.

                              1. 1

                                a magic number with no clear “right” value

                                I don’t think that this is right. I see a clear “right” value - that which works for you. Actually, it’s a range of values, bounded on the lower end by how quickly you tap individual keys, and on the upper end by how long you want to have to hold the modifier key in case you decide to “back out” of a chord.

                                no guessing or heuristics involved

                                There’s no guessing or heuristics involved here, either. You know exactly how long you have to press a modifier key for in order to send the “hold” command, as opposed to the “tap” command. The fact that humans don’t have very high precision on their internal timers in order to measure that duration is irrelevant - you can just hold the key until you’re sure it won’t act like a tap. Most people can’t tell the difference between 250 milliseconds and 260, but they sure can tell the difference between 250 and 1000 - and since you can train yourself to rarely decide to back out of key chords, the extra 0.75s is insignificant.

                                Adding time durations as semantically-significant signals where they otherwise didn’t factor in at all is a fundamental change to the signaling mechanism.

                                A fundamental change that is useful, makes perfect logical sense, and is easy to adapt to. I cannot recall the last time I accidentally emitted a parenthesis using space-cadet shift keys (tap to emit a parenthesis, hold to act like shift) over the past few weeks, and it’s pretty obviously useful to take a key that does nothing when tapped by itself (e.g. shift) and make it do something when tapped by itself, in a way that doesn’t reduce the functionality of your computer in any way whatsoever.

                                I find predictable, synchronous behavior highly preferable

                                You can prefer it, but that doesn’t mean that dual-function keys have a “fundamental conceptual problem”. That’s what I’m arguing against - not what you prefer to use, but the idea that there’s somehow something wrong with the idea itself.

                                a race condition

                                This isn’t a race condition. The Wikipedia page on race conditions[1] defines it as “the condition of an electronics, software, or other system where the system’s substantive behavior is dependent on the sequence or timing of other uncontrollable events”. The order in which you press keys is not an “uncontrollable event”. You have complete control over the order in which you press keys with your fingers - or, rather, that’s the model of human hands that most people operate under.

                                Edit: I think I see where you might be getting the idea that this is similar to a race condition - you might be thinking that if you don’t hold a key for the threshold (e.g. 250 ms), it registers as a tap, even if you press another key. The way it actually works is that there are three cases: (1) if you press the key and release it sooner than THRESHOLD later without pressing another, it registers as a tap; (2) if you press the key and don’t release it within THRESHOLD, it acts as a modifier (nothing happens); (3) if you press another key (a chord) before letting go of the first, it acts like a normal chord, regardless of whether you held the dual-function key for longer or shorter than the threshold. That is, you do NOT have to hold the chord for the threshold duration in order for the key to count as a “hold”/modifier instead of a “tap” - if you hold the key and press another, it acts like a modifier, no matter how long you held it.
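                                The three cases condense into a small decision function. This is a sketch of the behavior described above, not xcape’s actual code; the names and the 250 ms threshold are illustrative:

```rust
#[derive(Debug, PartialEq)]
enum KeyAction {
    Tap,      // case 1: released alone before the threshold
    Modifier, // cases 2 and 3: held past the threshold, or chorded
}

const THRESHOLD_MS: u64 = 250;

// held_ms: how long the dual-function key was held before release.
// chorded: whether another key went down while it was held.
fn resolve(held_ms: u64, chorded: bool) -> KeyAction {
    if chorded {
        KeyAction::Modifier // case 3: a chord wins regardless of timing
    } else if held_ms < THRESHOLD_MS {
        KeyAction::Tap // case 1: quick solo press-and-release
    } else {
        KeyAction::Modifier // case 2: held too long, tap is suppressed
    }
}

fn main() {
    assert_eq!(resolve(100, false), KeyAction::Tap);
    // A fast chord is still a modifier - no need to wait out the threshold.
    assert_eq!(resolve(100, true), KeyAction::Modifier);
    // Backing out of a chord: hold past the threshold, release, nothing fires.
    assert_eq!(resolve(400, false), KeyAction::Modifier);
}
```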

                                Windows as mentioned upthread

                                As also mentioned upthread, that’s an implementation problem, not a conceptual problem. You’re taking a bug in an implementation and conflating that with the idea that is represented by the implementation.

                                [1] https://en.wikipedia.org/wiki/Race_condition

                  2. 2

                    A great many Emacs users remap the infrequently used CapsLock key to Control to partially alleviate the problem of reaching the Control keys. That, while useful, is not sufficient for an optimal typing experience, since it breaks the key symmetry between the two sides of your keyboard. Also - your right pinky has to travel much further than your left one while you’re typing. Many people seem to use only the left Control, and I guess they’re not particularly bothered by this, but touch typists like me are generally quite bothered by such things.

                    1. 1

                      Who are you quoting?

                      1. 2

                        @bbatsov in their previous article that they link to in the opening note:

                        Note: Check out my original article from 2013 about the rationale behind this remapping.

                      2. 1

                        Thanks, I missed that link in the original. I still don’t understand though.

                        I don’t see how using Enter instead of CapsLock is more symmetrical. If anything, it’s less? The Enter key is further from the letter home row than CapsLock is.

                        Also - your right pinky has to go much further than your left one, while you’re typing

                        So having the left little finger go a little more to the left (for capslock) is more symmetrical, right? And it already does this for shift anyway (just below capslock).

                        Sorry if it seems like I’m trying to pick nits, I’m really not getting the argument.

                    1. 6

                      If someone’s using another approach to achieve the same result I’d love to hear about it!

                      I use a keyboard that runs the QMK firmware, and my dual-function keys are all defined at the firmware level. QMK lets you fine-tune the parameters of tap-hold behavior, but the defaults are good.

                      I chose to remap “;” to Control. It’s been great.

                      1. 2

                        I had a few of these dual-function keys set up on my QMK board and I found that getting the timing right is tricky, because you can’t hit the key in quick succession very well. Another thing is that if you hit the key and release it, then hit it again without waiting long enough, it might register as a tap instead of a hold.

                        I’m sure these things are fixable in firmware. It’s just another thing you might have to adjust.

                      1. 1

                        And since keywords naturally go between their arguments, there is no need for “operators”, as a very different and special syntax form. You just allow some “binary” keywords to look a little different, so instead of 2 multiply:3 you can write 2 * 3.

                        How does this approach support operator precedence?

                        1. 6

                          It doesn’t. Or at least Smalltalk doesn’t, and neither does my baby, Objective-Smalltalk.

                          There is some form of precedence: unary binds tighter than binary, which binds tighter than keyword.

                          Other than that, evaluation is strictly left-to-right, which apparently was the option that caused the least confusion overall, even if it discarded old conventions.
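                          A sketch of what strict left-to-right evaluation means in practice (illustrative Rust, not an actual Smalltalk implementation): `2 + 3 * 4` folds as `(2 + 3) * 4`.

```rust
// Left-to-right evaluation of a flat chain of binary operators,
// with no precedence table at all.
fn eval_left_to_right(first: i64, rest: &[(char, i64)]) -> i64 {
    rest.iter().fold(first, |acc, &(op, rhs)| match op {
        '+' => acc + rhs,
        '*' => acc * rhs,
        _ => panic!("unsupported operator: {}", op),
    })
}

fn main() {
    // 2 + 3 * 4 evaluated strictly left to right: (2 + 3) * 4 = 20,
    // not the 14 that conventional precedence would give.
    assert_eq!(eval_left_to_right(2, &[('+', 3), ('*', 4)]), 20);
}
```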

                          1. 5

                            Operator precedence is a dangerous set of conventions to learn, or just bypass with a lot of parentheses. It is not a feature, but a side effect.

                          1. 7

                            I really liked this article. Having gotten into functional programming with Elm over the last 2 years, the NonEmpty type is brilliant, and I’m going to re-implement it in Elm. The tie-in of language-theoretic security was a nice touch. I’ve been promoting that at work for a while.

                            it’s extremely easy to forget.

                            Anything that can be forgotten by a developer, will be forgotten.

                            A better solution is to choose a data structure that disallows duplicate keys by construction

                            “Making Impossible States Impossible” by Richard Feldman is a talk on effectively the same concept.
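                            The “by construction” idea can be sketched as a list whose emptiness is simply unrepresentable (an illustrative sketch, not the Haskell Data.List.NonEmpty API):

```rust
// A list that is nonempty by construction: you cannot create one
// without a first element, so the empty case cannot exist.
struct NonEmpty<T> {
    head: T,
    tail: Vec<T>,
}

impl<T> NonEmpty<T> {
    fn new(head: T) -> Self {
        NonEmpty { head, tail: Vec::new() }
    }

    fn push(&mut self, item: T) {
        self.tail.push(item);
    }

    // Total function: no Option needed, because a head always exists.
    fn first(&self) -> &T {
        &self.head
    }

    fn len(&self) -> usize {
        1 + self.tail.len()
    }
}

fn main() {
    let mut xs = NonEmpty::new(1);
    xs.push(2);
    xs.push(3);
    assert_eq!(*xs.first(), 1);
    assert_eq!(xs.len(), 3);
}
```

                            There is no “empty list” branch to forget: the error case the article warns about has been moved from runtime checks into the shape of the type.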

                            1. 8

                              However, sometimes it is quite annoying to conflate the type and structure of the list with the fact that it is nonempty. For example the functions from Data.List don’t work with the NonEmpty type. I think the paper on “departed proofs” linked near the bottom points to a different approach where various claims about a value are represented as separate proof objects.

                              1. 3

                                In this instance, the fact that both [] (lists) and NonEmpty are instances of Foldable will help: http://hackage.haskell.org/package/base-

                              2. 2

                                I ran across a case of the nonempty approach in File.Select.files the other day:

                                Notice that the function that turns the resulting files into a message takes two arguments: the first file selected and then a list of the other selected files. This guarantees that one file (or more) is available. This way you do not have to handle “no files loaded” in your code. That can never happen!

                                1. 1

                                  Nice find! I’ve actually used that before but hadn’t made the connection, haha.

                              1. 13

                                First, let’s stipulate that lots of people don’t know what they’re doing or why in technology development. Unless you’re coding, it doesn’t really feel as if something is going on, i.e., it’s a waste of time. In most cases, it is. Your instincts are correct.

                                  Second, I did not finish reading the article. I did not finish because the author doesn’t know what they’re talking about, or rather they are responding to the way they’ve been taught and have observed stand-ups happening, not how stand-ups actually work.

                                ”…The majority of meetings are a waste of time. And in my opinion, one flavor of meeting that tops the charts in uselessness is the “status update” meeting. You know this meeting— the meeting where everyone gets together to share what they’ve been doing…”

                                  Yes. Shoot for meeting-free work areas. The way you do that is dynamically get together and talk about stuff as needed. The way you do that? Stand-ups. Stand-ups are (in my mind) the only fixed time and place where everybody gets together and asks for help. You don’t give status, you don’t report to anybody, you don’t ask or answer questions unless you can’t make out what the person is saying. You just take a minute, verbally review where you are, and ask for help if you need it. Then we get to work. Maybe somebody has a problem that is going to be huge, so we hang out for an hour or two trying to solve it. Maybe nothing’s going on. Great! Two minutes later we’re all working on our own stuff. Works either way. It’s dynamic.

                                  The problem here is that 1) people are used to status reports, so that’s what they gravitate towards, 2) there’s pressure to build up your work and deny having any problems, and 3) when it’s working right there is no positive feedback. It’s like brushing your teeth. You do it, it’s quick, and if you’re doing it right you never think about it. In fact, the more you think about it, probably the worse you’re doing it.

                                1. 4

                                  In my experience, unless time limits for people talking are heavily enforced, they are largely a waste of time. If no one is willing/able to enforce hard limits for the amount of time any one person has in the stand up to talk, you almost always end up with senior people on the team who enjoy listening to themselves speak taking up 95% of the time, with the remaining 5% of the time given to those who need real help.

                                  1. 9

                                    Yeah they can go wrong in a lot of ways. Turns out having people talk to one another isn’t as simple as writing a for-next loop (grin).

                                    One of the fun things I used to do when teaching standups is a game where a team has to do a standup, but one person is the “ringer” – they’re given a dysfunction to display and the rest of the group has to deal with it. I found it was much easier to teach dysfunctions when people were only playing as opposed to directly addressing them.

                                    There’s also the guy that has to question everything, the one who never needs help, the one talking about nothing to do with work, and so on.

                                    1. 1

                                      That’s a fascinating approach. Thanks for sharing it.

                                1. 23

                                  For some spooky Halloween times, take a midnight stroll through Google’s graveyard!

                                  There’s a lot of hidden terrors in there that time has forgotten.

                                  1. 4

                                    This list is a really neat blast from the past. It’d be cool to see a category for companies that were literally killed by google (e.g. Kiko, a calendar app made just before Google Calendar came out, which Google squashed like a bug).

                                    1. 8

                                      I don’t think even Google can get away with literally killing competitors. Yet.

                                      1. 4

                                          Depends on the country, and whether they use third parties that distance their brand from the act. See Bechtel Corp vs. Bolivian citizens who wanted drinking water as an example. Maybe Coca-Cola in Colombia vs. union organizers.

                                        If anything, I’m surprised at how civil things normally are with rich companies. Probably just because they can use lobbyists and lawyers to get away with most stuff. The failures are usually a drop in their bucket of profit.

                                        1. 4

                                          Perhaps not competitors, but certainly people who get in the way of profits get killed, eg see the case of Shell in Nigeria: http://news.bbc.co.uk/2/hi/africa/8090493.stm

                                          Hundreds of activists are killed every year, we just don’t hear about it much.

                                      2. 1

                                        You joke, but I recall there was (is?) a “storage graveyard” in their Chicago office filled with CDs, cassette tapes, floppies, and other physical media.

                                      1. 3

                                        I clicked through to https://gather.wtf and clicked on an event, then “Attend”. I got an endless spinner and this error in the console:

                                        Error: submitAndWatchExtrinsic (extrinsic: Extrinsic): ExtrinsicStatus:: 1010: Invalid Transaction: Payment

                                        1. 2

                                          Yeah, it’s a hackathon project and though the backend/blockchain is complete, the UI is far from that…

                                          1. 2

                                             It’s not just a UI that has to be added; there also have to be policies about privacy and who gets to see what. Not everyone wants their regular visits to a fetish club recorded on a public, immutable blockchain.

                                        1. 21

                                          The thing is, in 1996 almost everyone on the Internet knew how to type in a URL… Back then, virtually everyone dutifully typed http://www.pepsi.com/ into Mosaic or Netscape

                                          Is that really true? I remember posters for The Matrix that had “AOL Keyword: Matrix” on them instead of a URL.

                                          1. 6

                                             I think what the author of that quote is missing is that back in 1996 pretty much everyone on the internet was either professionally competent with computers or a hobbyist; in either case they took pride in knowing how things work.

                                             By the turn of the millennium, when The Matrix came out, thanks in part to companies like Compaq, Gateway, and Packard Bell there were millions of novices with no real interest in the details of how a computer worked (or why) who were connected to the internet, and around that time search engines had begun to become how people found information on the internet.

                                            Therefore it makes sense that a poster would have an AOL Keyword circa 2000 rather than a web address because the lowest common denominator won out.

                                            1. 4

                                              That was 1999

                                            1. 4

                                              A full accounting of a web minimalist’s digital pollution footprint must include the self-congratulatory blog post we invariably emit as a byproduct.

                                              1. 24

                                                I didn’t submit the link but I did write the blog post, so AMA.

                                                1. 3

                                                  Thank you for stepping in! Your involvement with the community, your presentations, and your general hard work are worthy of envy.

                                                  Three questions:

                                                  In one of the recent talks, where Andrei and Walter were asked what the ‘top 3 things’ are, Andrei mentioned a full, unambiguous language specification. Where does it stand now, and where do you see it fitting on the overall scale of things?

                                                  Second, the mobile platform. Is there community/sponsorship interest in making D a first-class language for Android, iOS (and maybe the Librem)?

                                                  Third, in terms of industry domain focus, are there specific domains/industries you would like to see more interest/sponsorship from?

                                                  Overall, I am glad to see that memory safety (including at compile time) and multi-language interoperability are high on your list/vision. Given D’s maturity, previous investments, current capabilities and market position – those are the right things to focus on.

                                                  1. 4

                                                    Second, the mobile platform. Is there community/sponsorship interest in having D being first-class language for Android, IOS

                                                    It depends on what exactly you mean by first-class, but there is sponsorship for it. I was working on Android last weekend, and D with the NDK already works; it’s just a little clunky. But the end result will be that D works just as well as C++… which isn’t really first class there - only Java and Kotlin have that, and tbh I don’t expect that to change given the Android VM.

                                                    I also toyed with iOS, but am not officially sponsored on that yet. Actually, I think D has better chances there of working just the same as ObjC and Swift… but Xcode doesn’t appear to be extensible, so even compiling to the same code with the same access to the runtime might not count as first class.

                                                    1. 2

                                                      Xcode is not very customizable, but at least supports external build commands. It’s relatively easy to generate an Xcode project file with all the tweaks needed to make Build/Run “just work”, even for submission to the Mac AppStore. I’ve done that for Rust/Cargo: https://gitlab.com/kornelski/cargo-xcode

                                                      1. 1

                                                        Indeed, I’ll have to look at that, thanks! What I did for the proof of concept on iOS was just manually add the D static library. It worked very well in the simulator - the dmd compiler speaks the Objective-C ABI and runtime, so even defining the child class just worked. But it only does x86 codegen… the ARM compilers don’t speak Objective-C yet, but I am sure I can port it over in another weekend or two.

                                                      2. 2

                                                        D with the NDK already works, just it is a little clunky.

                                                        Thank you Adam.

                                                        Yes, I should not have used the ‘first class’ moniker, as, clearly, for the Android platform at least, first class can only mean a JVM language.

                                                        In the larger context, what I meant was ‘easier’ business-logic sharing among Android, iOS and the backend. This is a challenge that seems to fit well with D’s multi-language interoperability vision. And it is a challenge that is yearning to be solved. [1], [2]

                                                        For many small-budget teams, developing Android + iOS (with a common backend), is quite difficult.

                                                        So I was asking more from this angle: integrating D into the IDEs/toolchains of the dominant and upcoming mobile platforms, to provide for this multi-mobile-platform + backend code sharing.

                                                        JS/TypeScript seems to be the usual choice for a common runtime for code sharing across mobile (but, in my view, JS has costs in terms of compile-time error detection, inter-language data passing, and memory and battery utilization; plus it is not a prevalent language for backends). There is also .NET Xamarin (which makes different tradeoffs than the common JS approaches).

                                                        [1] https://medium.com/ubique-innovation/sharing-code-between-ios-and-android-using-c-d5f6e361aa98 [2] https://news.ycombinator.com/item?id=20695806

                                                        1. 6

                                                          There’s a number of annoying problems to solve here (like Apple accepting LLVM bitcode instead of native code, which means you have to generate exactly the bitcode that the current Apple toolchain emits, and the bitcode format isn’t stable). Still, native languages are a good choice there, and having a multitude of native languages would be great.

                                                          1. 2

                                                            generate exactly the bitcode

                                                            whoa, how strict are the checks? (I have never done mobile development before, so I am not a great choice to be doing these projects… it just seemed I was the only one with compiler hacking experience and some free time, so it kind of fell on me by default)

                                                            I read the website and got the impression that they basically did end-user acceptance tests… so I thought if it ran the same way you’d prolly be OK… do they actually scan the code for such strict patterns too? I wouldn’t put it past Apple - I hate them - but it seems a bit crazy.

                                                            1. 3

                                                              They will actually compile the code and ship it to your clients. https://www.infoq.com/articles/ios-9-bitcode/

                                                              So, when I’m saying “exactly”, I mean it must be legal bitcode for the compiler toolchain you are using. This is a major nuisance, as the LLVM Apple ships with Xcode is some internal branch. So, basically the only option is building a custom compiler against Xcode.

                                                              For an effort in Rust to do this, check here. https://github.com/getditto/rust-bitcode

                                                              I’m not well-versed in D, I assume the compiler is not based on LLVM?

                                                              1. 2

                                                                D has 3 backends: GCC, its own, and LLVM (experimental) [1]

                                                                I actually think D fits well into this problem domain of ‘sharing business logic (non-UI)’ code across multiple languages and toolchains of the mobile dev world.

                                                                Because D’s team invested a big effort into multi-backend architecture, and into C++ ABI compatibility across the non-standard ABIs of the C++ compilers.

                                                                [1] https://dlang.org/download.html

                                                                1. 3

                                                                  I wouldn’t call the LLVM one experimental; it is in excellent condition and has been for a long time now. But yeah, the LLVM one is what would surely do the prod builds for iOS… I guess it just needs to be built against the Xcode version, and then hopefully it will work.

                                                                  1. 1

                                                                    ok, thank you. I had an old understanding of D’s LLVM backend. sorry about that.

                                                                2. 2

                                                                  FWIW, bitcode submission is optional and doesn’t seem to have compelling benefits. I’m a full-time iOS developer and have disabled bitcode in all of my recent projects.

                                                                  1. 2

                                                                    Yes, but forcing individual decisions of that kind is not a good habit if you want adoption.

                                                        2. 3

                                                          Thanks for the kind words!

                                                          unambiguous language specification. Where does it stand now

                                                          I think we’re inching towards it.

                                                          are there specific domains/industries you would like to see more interest/sponsorship from?

                                                          All of them? :P

                                                        3. 3

                                                          I’m watching Zig’s progress, and it seems like it’s more minimalistic and modern (as in: type declarations, expression-based rather than statement based, quasi sum types with tagged unions) than D while competing in the “crazy compile time magic” department. Do you have an opinion on whether D could learn from Zig as well?

                                                          1. 1

                                                            I don’t have an opinion on that because I know next to nothing about Zig, other than that they don’t like operator overloading. Which rules it out as a language I’d like to actually use.

                                                            1. 2

                                                              that’s how I feel about the lack of proper sum types + pattern matching :)

                                                          2. 2

                                                            I don’t understand the first point. Are we finally gonna have a good answer for all of the “but the GC…” protestors? Or are you just saying that the GC isn’t enough to ensure memory safety?

                                                            1. 4

                                                              The GC is enough for memory safety for heap allocated memory, but not for the stack.

                                                              As for “but the GC…” protestors, it’s a hard issue since it involves changing the current psychological frame that believes the GC is magically slow.

                                                              1. 4

                                                                Random pre-coffee thoughts on this sort of stuff… In my experience it also involves changing the current psychological frame of GC implementors (and users) that believes speed is the problem. I write video games and robotics stuff, both of which are soft-real-time applications. Making them faster can just be a matter of throwing better algorithms or beefier hardware at them, but even a 10ms pause at some random time for some reason I can’t control is not acceptable.

                                                                I would love to use a GC’ed language for these tasks, but what I need is control. So if I’m learning a new language for this, I need a more powerful API for talking to the GC than “do major collection” and “do minor collection”, which seems sufficient for most GC writers. (Rust has made me stop paying much attention to GC’ed languages, though more powerful APIs do seem to be a bit more common than the last time I checked a few years ago.) I also need documentation on how to write critical code that will not call the GC. Actually, now that I look at D’s GC API, it looks a lot better than most for this task; you can globally enable/disable the darn thing, and the API docs describe both the algorithm and how it’s triggered. So, writing something like a fast-paced video game in D without erratic framerates due to GC stalls seems like it shouldn’t actually be too hard.
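                                                                For what it’s worth, the “disable collection during the hot section, collect in a safe window” pattern can be sketched generically. This is illustrative Python (its gc module happens to expose the same disable/enable/collect shape), not D, and the frame loop is a made-up stand-in:

```python
import gc

def run_frame(state):
    # Stand-in for per-frame work (update + render); it must avoid
    # unbounded allocation while automatic collection is off.
    state["ticks"] += 1

def critical_loop(frames):
    state = {"ticks": 0}  # allocate up front, before the hot section
    gc.disable()          # no automatic collections mid-frame
    try:
        for _ in range(frames):
            run_frame(state)
    finally:
        gc.enable()       # always restore the collector
    gc.collect()          # collect in a known-safe window (e.g. a loading screen)
    return state["ticks"]
```

                                                                In D the analogous calls are GC.disable, GC.enable, and GC.collect in core.memory, and @nogc functions are additionally checked at compile time to never allocate on the GC heap.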

                                                                So, changing the current psychological frame of the “but the GC…” people might be started by demonstrating how you write effective code that uses the GC only where convenient. That way the people who actually need to solve a problem have an easy roadmap, and the people who complain on philosophical grounds look silly when someone with more experience than them says “oh I did what you say is infeasible, it was pretty easy and works well”.

                                                                I dunno, changing people’s minds is hard.

                                                                1. 2

                                                                  I would love to use a GC’ed language for these tasks but what I need is control.

                                                                  As you wrote later, there are API calls to control when to collect. And there’s always the option to not allocate on the GC heap at all and to use custom allocators.

                                                                  I dunno, changing people’s minds is hard.

                                                                  Yep. “GC is slow” is so ingrained I don’t know what to do about it. It’s similar to how a lot of people believe that code written in C is magically faster despite that defying logic, history, or benchmarks.

                                                            2. 1

                                                              One can write the production code in D and have libraries automagically make that code callable from other languages.

                                                              I think it can be big for Python. Given that Python is used in stats, are there any libraries in D that do stats?

                                                              1. 4

                                                                I think the mir libraries cover that to a certain degree.

                                                                1. 2

                                                                  just as Atila mentioned, here are references



                                                              1. 1

                                                                This was published some time ago, but I’ve been looking at it again because the authors just published a writeup about the “mnemonic medium” format they used for the document: How can we develop transformative tools for thought?

                                                                1. 4

                                                                  I saw the Strange Loop demo, and the biggest unanswered question was “how do you refactor an existing function?” This wasn’t answered, and the edit docs don’t cover that case. I think the answer is supposed to be “the tooling takes care of it”, but that sounds risky to me.

                                                                  1. 2

                                                                    My understanding from the demo is that the refactored function would get a new hash and the name associated with the previous function’s hash would get updated. The speaker noted that the name association is stored separately, so a dependency/refactor update is trivial for this reason. Dependent functions can update that reference and that is it (assuming a true refactor maintaining the same type signature). Although I may not be understanding some subtlety of your question.

                                                                    1. 1

                                                                      It’s the “Dependent functions can update that reference and that is it” that I’m hung up on. One of the selling points is that you can have two versions of the same function, which eliminates dependency conflicts. Consider the following case:

                                                                      A -> B -> C
                                                                      A' -> B -> C

                                                                      I discover a bug in C’s implementation and refactor it to C'. The tooling automatically updates B, which calls C, to B', which calls C'. Do we transitively update A? What about A'? What happens when the call chain is now 20 functions deep? Case two:

                                                                      A'' -> B' -> C'
                                                                      A' -> B -> C

                                                                      Turns out there was a second bug in C, and I have not yet pulled C'. I release C'' off C. How do we merge the change with C'? What if there are merge conflicts? Do we end up with a fragmented ecosystem? Case three:

                                                                      A -> B -> C -> D -> E -> F

                                                                      C and F are in separate libraries. I see a bug in C and make C', somebody else at the same time sees a bug in F and pushes F'. What happens to A?

                                                                      1. 1

                                                                        Here is the applicable part of the StrangeLoop talk: https://youtu.be/gCWtkvDQ2ZI?t=1395

                                                                        The speaker’s example relies on/assumes different namespaces (at 26:16), but maybe the suggestion is that if you want to maintain two different versions, then they must ultimately be named differently. So a refactor of an existing type would not actually differentiate itself as a separate version unless you name it something different.

                                                                        That said, since all types are content addressable, you can still give each type a different name. It may be a matter of whether you choose to do that in your source, or you simply keep the one name, in which case the new version implicitly replaces the previous one (similar to git, but at the type level rather than the file level).

                                                                        Do we transitively update A?

                                                                        Correct, this is not answered in the talk. I can only speculate that the IR of hashes is updated to reflect the change unless you give it a new name in the textual/source representation. My guess is that if a fix to C or F is pushed, references will be implicitly updated (from the name C or F to the new hash). The Merkle tree will update accordingly. Of course, if the names of C’ or F’ are changed and pushed, then the existing types will not implicitly update.

                                                                        Again this is speculation, but I am enjoying the conversation.

                                                                        1. 3

                                                                          Some details about propagation: https://twitter.com/unisonweb/status/1173942969726054401

                                                                          The way update propagation works is this: first we visit immediate dependents of the old and update them to point to the new hash. This alters their hash. We repeat for their dependents, and so on…

                                                                          …if the update isn’t type preserving, the ‘todo’ command walks you through a structured refactoring process where the programmer specifies how to update dependents of old hashes.

                                                                          Dependency chains in codebases written by humans tend to be pretty small. If it were even 100 that would be a lot.

                                                                          Once this manual process of updating reaches a “type preserving frontier”, the rest of the way can be propagated automatically.

                                                                          Also, these mappings from old hash to new hash are recorded in a Unison “patch”. A patch can be applied to any Unison namespace

                                                                          Important asterisk: for changes to type declarations, right now all we support is manual propagation. Which can be quite tedious. We are working on fixing this!
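                                                                          The walk described above can be sketched with a toy content-addressed store (names and bodies here are invented for illustration, not Unison’s actual format): each definition’s hash covers its body plus its dependencies’ hashes, so a fix to C necessarily re-hashes its transitive dependents B and A, while unrelated definitions keep their hashes.

```python
import hashlib

def ref_hash(body, dep_hashes):
    # A definition's hash covers its body and its dependencies' hashes.
    h = hashlib.sha256(body.encode())
    for d in sorted(dep_hashes):
        h.update(d.encode())
    return h.hexdigest()[:8]

# Toy codebase: name -> (body, names of direct dependencies).
defs = {
    "C": ("c_body", []),
    "B": ("b_body", ["C"]),
    "A": ("a_body", ["B"]),
    "D": ("d_body", []),  # unrelated definition, for contrast
}

def hashes(defs):
    # Compute every definition's content hash, recursing into dependencies.
    out = {}
    def h(name):
        if name not in out:
            body, deps = defs[name]
            out[name] = ref_hash(body, [h(d) for d in deps])
        return out[name]
    for name in defs:
        h(name)
    return out

old = hashes(defs)
defs["C"] = ("c_body_fixed", [])  # a type-preserving fix to C
new = hashes(defs)

# Exactly the transitive dependents of C pick up new hashes.
changed = {n for n in defs if old[n] != new[n]}  # {"A", "B", "C"}; D is untouched
```

                                                                          The real system then records each old-hash-to-new-hash mapping in a patch, as the tweets describe; this sketch only shows why the re-hashing must propagate.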

                                                                  1. 2

                                                                    When the dimensions and settings of the medium for your visual design are indeterminate, even something simple like putting things next to other things is a quandary. Will there be enough horizontal space? And, even if there is, will the layout make the most of the vertical space?

                                                                      This raises the question: should web pages in 2019 still be putting things next to each other?

                                                                      Every site I can think of that becomes unusable on my smartphone or in my half-width tiled browser windows is one that attempts a 2- or 3-column layout. When I reorganized my own personal site from multiple columns to just one, I was able to delete half of my CSS.

                                                                    1. 3

                                                                      Isn’t this what media queries are for? Not trolling, genuinely curious.

                                                                      1. 3

                                                                          Check out the other articles on the linked site for the case against media queries. TL;DR: kinda yes, but media queries make it hard to build reusable components, because they’re inherently global and thus hardly compose (while you can nest the linked pattern, and it works seamlessly).

                                                                        1. 1

                                                                          I believe so, but it seems many sites don’t implement them flawlessly.

                                                                      1. 5

                                                                        Configuring static site generation for tombrow.com, then using what I learn to set up a homepage for my fiancé. Anyone know if Google Domains would be a nicer place to keep my domains than 1and1 is?

                                                                        Documenting the code I use for keyboard remapping in macOS. I might write a short post to try to persuade people of the benefit of mapping Ctrl-[ to Escape and Fn-hjkl to arrow keys.

                                                                        Cutting some more drawer organizers using the laser at the maker space.