Threads for ansible-rs

  1. 54

    Totally agreed about kebab case. It’s an unusually major quality-of-life improvement.

    I’d also add being allowed to use ? in an identifier. user-record-valid? is pretty clear, both as a function and as a variable.

    1. 55

      The argument I hear against kebab case is that it makes it impossible to write subtraction as foo-bar but like … that’s … good actually? Why are we designing our syntax specifically in order to accommodate bad readability patterns? Just put a space in there and be done with it. Same logic applies to question marks in identifiers. If there’s no space around it, it’s part of the identifier.

      1. 12

        Agreed! (hi phil 👋)

        This is mentioned in the article too in a way. In addition to the readability point you make, the author makes the argument that most of us use multi-word identifiers far, far more often than we do subtractions.

        1. 9

          I dunno, I think there’s a lot of pesky questions here. Are all mathematical operators whitespace sensitive, or just -? Is kebab-case really worth the subtle errors when someone doesn’t type things correctly?

          I format my mathematical operators with whitespace, but I also shotgun down code and might leave out the spaces, then rely on my formatter to correct it.

          Basically, I think kebab-case is nice, but properly reserved for lisps.

          1. 28

            Are all mathematical operators whitespace sensitive?

            Yes, of course! There’s no reason to disallow tla+ as an identifier, or km/h for a variable holding a speed, other than “that’s the way it’s been done for decades”.

            I also shotgun down code and might leave out the spaces, then rely on my formatter to correct it.

            The compiler should catch it immediately since it’d be considered an unrecognized identifier.
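            A toy Python sketch of the rule being argued for here (the grammar is invented for illustration, not taken from any real language): a - with no whitespace around it stays inside the identifier, while a free-standing - is the operator.

```python
import re

# Hypothetical lexer rule: identifiers may contain '-', '+', '/', and a
# trailing '?', so foo-bar, tla+, km/h and valid? are each single tokens.
# A '-' surrounded by whitespace is tokenized as an operator instead.
TOKEN = re.compile(r"""
      (?P<num>\d+)
    | (?P<ident>[A-Za-z][\w+/-]*\??)
    | (?P<op>[-+*/])
    | (?P<ws>\s+)
""", re.VERBOSE)

def tokenize(src):
    return [(m.lastgroup, m.group())
            for m in TOKEN.finditer(src) if m.lastgroup != "ws"]
```

            With this, tokenize("foo-bar") yields one identifier token, while tokenize("foo - bar") yields identifier, operator, identifier, so a dropped space produces an unrecognized identifier at name resolution rather than a silent subtraction.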

            1. 3

              I’m not sure if this is an argument for or against what you’re saying here, but this discussion reminded me of the old story about how Fortran 77 and earlier just ignore all spaces in code:

              There is a useful lesson to be learned from the failure of one of the earliest planetary probes launched by NASA. The cause of the failure was eventually traced to a statement in its control software similar to this:

              DO 15 I = 1.100

              when what should have been written was: DO 15 I = 1,100

              but somehow a dot had replaced the comma. Because Fortran ignores spaces, this was seen by the compiler as:

              DO15I = 1.100

              which is a perfectly valid assignment to a variable called DO15I and not at all what was intended.
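              The space-stripping idea can be illustrated with a toy Python function (a deliberate simplification, not a real Fortran parser): squeeze out the blanks first, then decide what the statement is.

```python
# Toy illustration of FORTRAN 77's space-blindness: delete all blanks,
# then classify the statement. With a comma it's a DO loop header; with
# a dot it collapses into an assignment to a variable named DO15I.
def classify(stmt):
    squeezed = stmt.replace(" ", "").upper()
    if squeezed.startswith("DO") and "," in squeezed:
        return "do-loop header"   # e.g. DO15I=1,100
    if "=" in squeezed:
        return "assignment"       # e.g. DO15I=1.100
    return "other"
```

              classify("DO 15 I = 1,100") is a loop header; with the dot, classify("DO 15 I = 1.100") collapses to an assignment.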


            2. 8

              If I see x-y, I always parse it visually as a single term, not x minus y. I think that’s a completely fair assumption to make.

          2. 16

            I have always found kebab-case easier on the eyes than snake_case, I wish the former was more prevalent in languages.

            1. 14

              Raku (previously known as Perl 6) does exactly this: dashes are allowed in variable names, and require spaces to be parsed as the minus operator.

              1. 6

                Crazy idea: reverse _ and - in your keyboard map :)

                Probably would work out well for programmers. All your variables are easier to type

                When you need to use minus, which is not as often, you press shift

                1. 10

                  More crazy ideas.

                  • Use ASCII hyphen (-) in identifiers, and use the Unicode minus sign (−) for subtraction.
                  • Permit -- (two hyphens) as a synonym for − (minus). Related to the fact that some languages let you write ≤ instead of <=, and so on. Related to the fact that -- turns to – in markdown.
                  • Your text editor automatically converts -- to − and <= to ≤.
                  • This makes more sense if you are viewing source code using a proportional font. Identifiers consume less precious horizontal screen space in a proportional font. Hyphens are shorter than underscores, so it looks better and is nicer to read.
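                  The editor idea can be sketched in a few lines of Python (naive on purpose: it ignores string literals and comments, and the digraph table is invented for illustration):

```python
# Naive editor hook: rewrite ASCII digraphs to the Unicode operators
# discussed above. A real editor would need to skip strings/comments.
DIGRAPHS = {"--": "\u2212", "<=": "\u2264", ">=": "\u2265"}  # minus, <=, >=

def prettify(line):
    for ascii_form, unicode_form in DIGRAPHS.items():
        line = line.replace(ascii_form, unicode_form)
    return line
```

                  So prettify("a -- b <= c") comes back with a true minus sign and a ≤.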
                  1. 15

                    Use ASCII hyphen (-) in identifiers, and use the Unicode minus sign (−) for subtraction.

                    #include <vader.gif> Nooooooo!!!!!!!!!

                    I really don’t like this idea. I’m all for native support for Unicode strings and identifiers. And if you want to create locale-specific keywords, that is also fine. I might even be OK with expanding the set of common operators to specific Unicode symbols, provided there is a decent way to input them. [1]

                    But we should never, ever use two visually similar symbols for different things. Yes, I know, the compiler will immediately warn you if you mixed them up, but I would like to strongly discourage ever even starting down that path.

                    [1] Something like :interpunct: for the “·” for example. Or otherwise let’s have the entire world adopt new standard keyboards that have all the useful mathematical symbols. At any rate, I’d want to think about more symbols a lot more before incorporating it into a programming language.

                    1. 4

                      The hyphen and minus sign differ greatly in length, and are easily distinguished, when the correct character codes and a properly designed proportional font are used. According to The TeXbook (Donald Knuth, page 4), a minus sign is about 3 times as long as a hyphen. Knuth designed the standards we still use for mathematical typesetting.

                      When I type these characters into Lobsters and view in Firefox, Unicode minus sign (−) U+2212 is about twice the width of Unicode hyphen (‐) U+2010. I’m not sure if everybody is seeing the same font I am, but the l and I are also indistinguishable, which is also bad for programming.

                      A programming language that is designed to be edited and viewed using traditional mathematical typesetting conventions would need to use a font designed for the purpose. Programming fonts that clearly distinguish all characters (1 and l and I, 0 and O), are not a new idea.

                    2. 7

                      Sun Labs’ Fortress project (an HPC language from ~15 years ago, a one-time friendly competitor to Chapel, mentioned in the article) had some similar ideas to this, where Unicode chars were allowed in programs, and there were specific rules for how to render Fortress programs when they were printed or even edited. For example:

                      (a) If the identifier consists of two ASCII capital letters that are the same, possibly followed by digits, then a single capital letter is rendered double-struck, followed by full-sized (not subscripted) digits in roman font.

                      QQ is rendered as ℚ

                      RR64 is rendered as ℝ64

                      it supported identifier naming conventions for superscripts and subscripts, overbars and arrows, etc. I used to have a bookmark from that project that read “Run your whiteboard!”
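                      The doubled-capital rule quoted above can be sketched in a few lines of Python (the mapping and function name are mine; the real Fortress renderer handles many more cases):

```python
import re

# Map 'A'..'Z' to the mathematical double-struck capitals at U+1D538.
# Seven letters (C, H, N, P, Q, R, Z) predate that block and live in
# the Letterlike Symbols range instead, so patch them in separately.
DOUBLE_STRUCK = {c: chr(0x1D538 + i)
                 for i, c in enumerate("ABCDEFGHIJKLMNOPQRSTUVWXYZ")}
DOUBLE_STRUCK.update({"C": "\u2102", "H": "\u210D", "N": "\u2115",
                      "P": "\u2119", "Q": "\u211A", "R": "\u211D",
                      "Z": "\u2124"})

def render(identifier):
    """Render a doubled capital plus optional digits, e.g. RR64 -> double-struck R + 64."""
    m = re.fullmatch(r"([A-Z])\1(\d*)", identifier)
    return DOUBLE_STRUCK[m.group(1)] + m.group(2) if m else identifier
```

                      render("QQ") gives ℚ and render("RR64") gives ℝ64, per the rule; anything else passes through unchanged.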

                      the language spec is pretty interesting to read and has a lot of examples of these. I found one copy at

                      1. 9

                        Thanks, this is cool!

                        I feel that the programming community is mostly stuck in a bubble where the only acceptable way to communicate complex ideas is using a grid of fixed width ASCII characters. Need to put a diagram into a comment? ASCII graphics! Meanwhile, outside the bubble we have Unicode; Wikipedia and technical journals are full of images, diagrams, and mathematical notation with sophisticated typography. And text messages are full of emojis.

                        It would be nice to write code using richer visual notations.

                      2. 3

                        Use dieresis to indicate token break, as in some style guides for coöperate:




                        1. 1

                          Nice. All the cool people (from the 1800’s) spell this word diaëresis, which I think improves the vibe.

                          1. 2

                            Ah yes, but if you want to get really cool (read: archaic), methinks you’d be even better served by diæresis, its ligature also being (to my mind at least) significantly less offensive than the Neëuw Yorker style guide’s abominable diære…sizing(?) ;-)

                            1. 3

                              Thank you for pointing this out. I think that diæresis is more steampunk, but diaëresis is self-referential, which is a different kind of cool.

                      3. 7

                        I’ve tried that before and it turns out dash is more common than underscore even in programming. For example terminal stuff is riddled with dashes.

                        1. 7

                          For me, this is not at all about typing comfort, it’s all about reading. Dashes, underscores and camel case all sound different in my head when reading them, the underscore being the least comfortable.

                          1. 10

                            For me, this is not at all about typing comfort, it’s all about reading. Dashes, underscores and camel case all sound different in my head when reading them

                            I am the same way, except they all sound different from my screenreader, not just in my head. I prefer dashes. It’s also a traditional way to separate a compound word.

                            1. 2

                              Interesting, you must have some synesthesia :-)

                              As far as I can tell, different variable styles don’t sound like anything in my head. They make it harder for me to read when it’s inconsistent, and I have to adjust to different styles, but an all_underscore codebase is just as good to me as an all camelCase.

                              I use Ctrl-N in vim so typing underscore names doesn’t seem that bad. Usually the variable is already there somewhere. I also try to read and test “what I need” and then think about the code away from the computer, without referring to specific names

                          2. 5

                            I like ? being an operator you can apply to identifiers, like how it’s used with nullables in C#, or, as I recall, some kind of test in Ruby.

                            1. 6

                              In Ruby, ? is part of the ternary operator and a legal method suffix, so method names like dst? are idiomatic.

                              1. 1

                                Ah, that makes sense. I don’t use Ruby so I wasn’t sure, I just knew I had seen it.

                              2. 3

                                In zig maybe.? resolves maybe to not be null, and errors if it is null.

                                maybe? is different, in my mind.

                                1. 1

                                  In Ruby it’s just convention to name your function valid? instead of the is_valid or isValid you have in most languages. The ? is just part of the function name.

                              1. 29

                                for me, the most exciting thing about golang is that i can easily walk junior engineers through a codebase with 0 prep. i love accessible code that doesn’t require a krang-like brain to intuit. rust is so non-intuitive to me that i’ve bounced off of it several times, despite wanting to learn it - and i’m a seasoned engineer!

                                i didn’t go to school for CS, and i don’t have a traditional background - there are a lot of people like me in the industry. approachability of languages matters, and golang does a fine job.

                                 it obv has warts. but between the inflammatory title & the cherry picked “bad things”, the article winds up feeling really cynical, and makes me feel like the author is probably cynical too.

                                continues to write fun, stable code quickly in golang

                                1. 9

                                  What to you makes the code written in Go’s monotonous style fun?

                                  1. 26

                                    For me—and for most who choose Go—the fun lies in watching your ideas for software come to life. Go is so easy to think in; it enables building stuff without having to fight the language.

                                    1. 24

                                      I’d rather work with a stable language, so that I can be creative in the approach to the problem (not the language expression), than a language where I have to spend significant valuable background mental effort on the choice of words.

                                      1. 6

                                        And you don’t mind having to spend valuable background mental effort on typing if err != nil over and over?

                                        1. 10

                                          I do mind, but I think you can argue it produces low cognitive load

                                          1. 5

                                            after the first few times, it comes naturally for me and i don’t really think about it much. In fact, in situations where it is unnecessary I often have to stop and think about it more.

                                            1. 4

                                              The Rust folks had a similar issue with returning Option and Result and fixed it with the question mark operator.

                                              The error value can be named anything, but the community very quickly settled on naming it err, following the convention started by the standard library. The language designers should have just made that the default, and created a similar construct to the question mark.

                                          2. 19

                                            bc go code mostly looks the same everywhere thanks to gofmt and a strong stdlib, i spend a lot less time thinking about package choice & a lot more time doing implementation. yesterday i wrote a prometheus to matrix alert bot from scratch in 30 minutes - i spent most of that time figuring out what the prometheus API layout was. now that it’s deployed, i have faith that the code will be rock solid for basically eternity.

                                            what’s not fun is writing similar code in ruby/python and having major language upgrades deprecate my code, or having unexpected errors show up during runtime. or, god forbid, doing dep management.

                                            part of that is thanks to go’s stability, which is another good reason to choose it for the sort of work i do.

                                            having a binary means not having to worry about the linux package ecosystem either - i just stick the binary where i want it to run & it’s off to the races forever.

                                            to me, that’s the fun of it. focusing on domain problems & not implementation, and having a solid foundation - it feels like sitting with my back against a wall instead of a window. it saves me significant time & having dealt with a lot of other languages & their ecosystems, golang feels like relief.

                                            1. 4

                                              it’s a language created at a specific workplace for doing the type of work that those workers do. Do you think bricklayers worry about how to make laying bricks fun?

                                              1. 4

                                                continues to write fun, stable code quickly in golang

                                                That’s why I was asking why they found writing Go fun, it wasn’t out of nowhere. I have received some satisfactory answers to that question, too.

                                                1. 3

                                                  If it was possible for brick laying to be fun, I’m sure bricklayers would take it.

                                                  1. 7

                                                    Fun fact, Winston Churchill took up brick laying as a hobby. He certainly seemed to think it was fun!

                                            1. 3

                                              Digression: Lua does something spiritually similar to this with its metatables. It’s pretty great! Basically each value has a “metatable” attached that has some magical methods on it that are called by operators, comparisons, etc. It also includes methods that are called on function application and on method lookup failure, so with these small primitives you can build just about anything.

                                              It is almost a rite-of-passage for a beginning Lua programmer to read up on tables and metatables, and then create their own object-oriented system. You can find many implementations of varying scope and complexity.
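                                               For comparison, here’s a rough Python analogue of those two hooks (Python dunders standing in for Lua’s __add and __index metamethods; the Vec class is invented for illustration):

```python
# Rough Python analogue of a Lua metatable: an operator hook plus a
# hook that fires on attribute lookup failure (like an __index function).
class Vec:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):        # Lua: the __add metamethod
        return Vec(self.x + other.x, self.y + other.y)

    def __getattr__(self, name):     # Lua: an __index fallback function,
        if name == "length2":        # only called when lookup fails
            return self.x ** 2 + self.y ** 2
        raise AttributeError(name)
```

                                               The Lua version keeps these hooks in a separate metatable attached per-value rather than per-class, which is what makes rolling your own object system so easy.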

                                              1. 2

                                                Yep! I’ve done it. It’s fun and magical and mind-bending!

                                              1. 8

                                                I aspire to some day give presentations as well as Mickens does. It feels like a comedy routine, but he actually uses it to make good points about computer security in an accessible way.

                                                1. 6

                                                  James Mickens is the best. This is canon.

                                                  1. 3

                                                    I’ve watched this presentation before, and I will watch it again. Love this guy.

                                                1. 7

                                                  Can someone help me parse “maliciously secure”? Seems it’s used broadly in SMPC research, would love a primer (or rather a pointer to one) on what that claim means.

                                                  1. 9

                                                     Here, maliciously secure is about the threat model. It implies that an adversary may arbitrarily deviate from the protocol, may arbitrarily corrupt data, abort the protocol, and more.

                                                     The Pragmatic MPC book has a much better explanation, though, in chapter 2.3.3:

                                                     A malicious (also known as active) adversary may instead cause corrupted parties to deviate arbitrarily from the prescribed protocol in an attempt to violate security. A malicious adversary has all the powers of a semi-honest one in analyzing the protocol execution, but may also take any actions it wants during protocol execution. Note that this subsumes an adversary that can control, manipulate, and arbitrarily inject messages on the network (even though throughout this book we assume direct secure channels between each pair of parties).

                                                    1. 1

                                                      Appreciate the brief explanation and pointer in the right direction!

                                                    2. 8

                                                      It should mean “it’s secure, and it uses the fact that it’s secure for evil purposes”, but I don’t think that’s what they’re going for.

                                                      1. 2

                                                        … or “The protocol is secure even against certain ‘evil’ adversaries”

                                                        1. 13

                                                          … or “The protocol is secure even against certain ‘evil’ adversaries”

                                                          Isn’t that just “secure”? Because anything less is “not secure”.

                                                          1. 2

                                                            That doesn’t really fall out of the syntax though — you have to read it in an unnatural way.

                                                            Plus I pretty much agree with ansible-rs here. I get it, it’s defining the capabilities of the adversary as being a Mallory, not an Eve, or an Eve who participates but without blowing cover — but A) as soon as you take away even a little bit of context, and try to read it as English, it’s terrible (“Yeah, we’re secure against people who are actually trying to break things for bad reasons!” “Okay, gold star.”), and B) why not just call M-secure “secure”, and E-secure “confidential”?

                                                            Not that I really think there’s a chance of changing anything, I just think it’s an unfortunate sequence of words.

                                                      1. 1

                                                        Unanswered question: What kind of terrible thermal paste was used with the original cooler? I’ve seen stuff get hard, but not turn into super glue.

                                                        Also, if the backplate is sliding around inside the case, it is possible it has damaged the components on that side of the motherboard. I would recommend trying to closely inspect the motherboard for that kind of damage, or at least checking the bottom of the case for debris from such components.

                                                        1. 1

                                                          Unanswered question: What kind of terrible thermal paste was used with the original cooler? I’ve seen stuff get hard, but not turn into super glue.

                                                           It was pre-applied on the heatsink when I got it (if I recall correctly) and I never bothered to check. I did a quick search now, but I couldn’t find the exact make of the paste.

                                                          Also, if the backplate is sliding around inside the case, it is possible it has damaged the components on that side of the motherboard. I would recommend trying to closely inspect the motherboard for that kind of damage, or at least checking the bottom of the case for debris from such components.

                                                          Thanks for the advice. I’ll take a look at some point, but I guess that if it’s working now it’s probably fine.

                                                        1. -4

                                                           It almost seems that Rust doesn’t want the competition from other compilers, like Intel or Nvidia. If Rust would actually become popular outside the blogosphere, commercial compilers are bound to appear.

                                                          1. 17

                                                            That’s a baseless assumption. There’s mrustc and gcc-rs already, and existence of other compilers would benefit the language.

                                                            1. -7

                                                              There’s mrustc and gcc-rs already

                                                              They exist but they don’t work and I bet you know this, not sure why you decided to write this bullshit.

                                                              1. 5

                                                                If you knew that much about them, you’d know that mrustc is working perfectly fine to bootstrap rustc 1.54.0 for systems that aren’t supported. That means it is a rust compiler, otherwise it couldn’t bootstrap the (rust written) rustc. The borrow-rules are irrelevant for that. So all you’re doing is being rude.

                                                                 GCC-RS is in the works and is also pretty much endorsed by everyone - you’re bashing something that went from 0 to halfway there in a tiny fraction of the time rustc has existed. The only thing rustc wants to avoid is the difference you have between clang, gcc and msvc - despite there supposedly being a standard.

                                                                1. 1

                                                                  If you knew that much about them, you’d know that mrustc is working perfectly fine to bootstrap rustc 1.54.0 for systems that aren’t supported. That means it is a rust compiler, otherwise it couldn’t bootstrap the (rust written) rustc. The borrow-rules are irrelevant for that. So all you’re doing is being rude.

                                                                  hmm… why would it need to bootstrap rustc if it’s a rust compiler in its own right?

                                                                  1. 1

                                                                    You don’t “need” to, but for various reasons some people want to.

                                                                2. 1

                                                                  Rust Evangelism Strikeforce

                                                              2. 10

                                                                If someone wanted to create a commercial compiler, they’d be more likely to fork the (permissively licensed) reference implementation and add some secret-sauce optimisations on top. They wouldn’t worry too much about a spec because they’d be using the same front end as the reference implementation. A spec is useful for understanding whether a particular output of the reference compiler is the result of an implementation bug or a language design choice.

                                                                1. 7

                                                                   It almost seems that Rust doesn’t want the competition from other compilers, like Intel or Nvidia.

                                                                  Who is this “Rust” you are talking about? The core team? They have done nothing I’m aware of to deter or discourage alternate implementations.

                                                                  As /u/kornel mentions, there is already mrustc, GCC Rust and even things like cranelift that can eventually become full alternate implementations.

                                                                  If Rust would actually become popular outside the blogosphere, …

                                                                  Real people are using it for real projects, and seeing real benefits. If you don’t like it, that’s fine. But as it turns out, a lot of people like tools that help them write correct and reliable code.

                                                                  … commercial compilers are bound to appear.

                                                                  Eh, maybe. A valid criticism of Rust is that it is itself complex, and the toolchain implementation is big and complicated. It would be a lot of work to create an entire implementation from scratch, and you wouldn’t see benefit for years. If the Rust core team displays bad governance, is slow and / or recalcitrant to accept outside contributions or other issues, I could see a stronger push for a commercial offering. But I think you would have a hard time arguing that is the case right now.

                                                                1. 2

                                                                  I do that here and there. When I was more into Lua, I wrote some functions for functional programming, stuff like map() and friends. Around that time I was also messing around with functions with variable numbers of arguments, it turns out there is quite a bit you can do there in Lua as well. So I’d make things like a map() function that could take another function, and a variable number of tables (lists). So you didn’t need a map2(), map3() and such. I thought it was really neat.
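                                                                   Python’s built-in map already accepts multiple iterables; a sketch of the same variadic idea (the name vmap is mine):

```python
# Variadic map: one function plus any number of equal-length sequences,
# so there is no need for separate map2()/map3() variants.
def vmap(fn, *tables):
    return [fn(*row) for row in zip(*tables)]
```

                                                                   For example, vmap(lambda a, b: a + b, [1, 2, 3], [10, 20, 30]) gives [11, 22, 33].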

                                                                  Then that got me thinking about why Lua even needs to be passing arguments into or out of functions. Why not just use tables? Then that got me into programming language design (again). What if you had a minimal Lua that was built around tables, like Lisp / Scheme is built around linked lists? Where all the code is stored in tables as well, and make meta-programming and/or macros more integrated into the language?

                                                                  That’s me though, I’m always getting off on tangents.

                                                                  Edit: more on map()

                                                                  1. 2

                                                                     What if those tables are just based on content addressing: a lisp where all atoms are quoted bytestrings. What if you could build theorems about the relations between the data and ways to interpret it….

                                                                  1. 11

                                                                    This isn’t exactly a recommendation, but since we are talking about programming games, I must mention the very first one created: Core War. You write programs in assembly (Redcode) that directly battle other programs in a shared memory space. I never got around to actually playing it, but it blew my mind when I first read about it in Scientific American.

                                                                    Oh, and I played Omega a little back in the day.

                                                                    1. 4

                                                                      For a slightly modern Core War built on top of the excellent radare2, see r2wars.

                                                                      1. 3

                                                                        Core War

                                                                         I haven’t played for a very long time but this was the first thing that came to my mind, and in my book it’s definitely a recommendation. It’s super interesting and the architecture of the underlying machine is simple enough that it’s fun but also foreign enough that it’s interesting.

                                                                         The only sort of unpleasant thing is that it’s so old by now (it’s gonna be forty in another two years!) that lots of things have been tried, so it’s hard to come up with something that hasn’t been tried before (and, therefore, with a program that won’t lose to just about anything posted on the Internet). But it’s also rewarding enough to play by yourself, and if you’re playing with a couple of friends who are just starting out, too, it’s pretty fun.

                                                                      1. 8

                                                                        I don’t understand what’s going on here …

                                                                        It doesn’t matter to GNU Grep that fgrep and egrep have been present in Unix since V7. Out they go, because POSIX says so.

                                                                        The presence of extra commands should always conform to POSIX? i.e. a system with XYZgrep is still POSIX compliant?

                                                                        Then, from

If you prefer the old names, you can use your own substitutes, such as a shell script named @command{egrep} with the following contents:

                                                                        It’s already like this on my Ubuntu machine:

                                                                        $ cat $(which egrep)
                                                                        exec grep -E "$@"

So this feels like a non-announcement for most people…? It’s just some cruft removed from upstream GNU grep that doesn’t affect anyone?

I can’t imagine that anyone would actually remove those one-line shell scripts.
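For the record, the whole compatibility shim really is a one-liner. Here’s a minimal sketch (the /tmp path and sample strings are just for illustration) showing that such a wrapper behaves exactly like grep -E:

```shell
# Recreate the one-line egrep wrapper that distros like Ubuntu ship:
# it simply forwards all of its arguments to grep -E.
cat > /tmp/egrep-shim <<'EOF'
#!/bin/sh
exec grep -E "$@"
EOF
chmod +x /tmp/egrep-shim

# ERE alternation works identically through the shim:
printf 'alpha\nbeta\ngamma\n' | /tmp/egrep-shim 'alpha|gamma'
```

The exec means the wrapper doesn’t even leave an extra shell process behind, so the overhead over calling grep -E directly is one fork at startup.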

                                                                        1. 29

In grep-3.8 GNU added an echo "$cmd: warning: $cmd is obsolescent; using @grep@ @option@" >&2 in the upstream {e,f}grep scripts.

In Slackware, at least, many scripts suddenly started spamming the terminal with this arguably useless warning.

                                                                          Ultimately Slackware decided to patch upstream (something rarely done) to undo this change:

                                                                          Wed Sep 7 18:40:44 UTC 2022
                                                                          […]a/grep-3.8-x86_64-2.txz: Rebuilt.
                                                                                 Folks, I rarely veto upstream, but I'm not going to entertain this nonsense.
                                                                                 The egrep and fgrep commands were part of Unix since the 70s, continue to be
                                                                                 included with the BSDs, and frankly, aren't hurting anything. GNU grep
                                                                                 declared them deprecated in 2007 and when they were changed into shell
                                                                                 scripts around 8 years ago I figured that's where it would end. I can see no
                                                                                 logical justification to have these scripts start making noise and then to
                                                                                 eventually pull the rug out from under any code that might be using them, so
                                                                                 I've placed non-noisy versions of them into the package sources and will be
                                                                                 installing those during the build. Given that the -F and -E options are part
                                                                                 of the POSIX standard, these scripts will continue to work fine. That said,
                                                                                 we will be continuing to change our own code over to the recommended syntax
                                                                                 to avoid the minimal overhead incurred compared to using grep directly.
                                                                          1. 8

This is just dumb? I can’t believe anyone would spend time on this.

Having one-line shell script wrappers causes absolutely no maintenance burden.

                                                                            1. 5

                                                                              I would hope that the other distros follow suit. I don’t like the change in behavior.

While I understand the desire to deprecate “old” functionality, it is just inescapable that egrep and fgrep have existed for many decades, and have come to be relied upon by generations of shell script programmers. And scripts that have worked fine for decades will now break in unexpected ways.

                                                                              I’ll make it a point not to use egrep in the future (never used fgrep). But, realistically, what real harm was there in leaving the old /bin/egrep and fgrep scripts? Can anyone actually argue that they were some kind of maintenance burden? Yes, they each take up a sector on the drive when installed, but that marginal cost is very low.

                                                                            2. 8

                                                                              The presence of extra commands should always conform to POSIX? i.e. a system with XYZgrep is still POSIX compliant?

                                                                              I believe the argument goes like this:

                                                                              • POSIX specifies the behavior of grep -E and grep -F, but does not specify egrep or fgrep.
                                                                              • Therefore: if you use grep -E or grep -F, and it behaves in a way that deviates from POSIX’s specification, it is the grep implementation’s bug to deal with. But if you use egrep or fgrep, and it doesn’t behave the way POSIX grep -E/grep -F are specified, it is your bug to deal with, because POSIX didn’t actually guarantee egrep or fgrep would exist or behave in any particular way.
                                                                              • Therefore: the responsible thing to do as a programmer, if you want the bug to be someone else’s to deal with, is always to use grep -E/grep -F, never egrep/fgrep.
                                                                              • Therefore: programmers should be reminded to use grep -E/grep -F.

                                                                              Honestly, I find it very hard to get worked up much about this, given that it is just a warning. I also very much do not subscribe to the “warnings are part of your public API” school of thought.
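To make the grep -E/grep -F distinction concrete, here’s a small sketch (sample strings are made up) of why the -F spelling is worth reaching for:

```shell
# grep -F treats the pattern as a literal string, so regex
# metacharacters like '.' match themselves -- exactly what you
# want when searching logs for, say, an IP address:
printf '10.0.0.1\n10x0y0z1\n' | grep -F '10.0.0.1'

# The same pattern under grep -E matches both lines, because '.'
# means "any character" in an extended regular expression:
printf '10.0.0.1\n10x0y0z1\n' | grep -E '10.0.0.1'
```

The first command prints only the literal 10.0.0.1 line; the second prints both.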

                                                                            1. 4

                                                                              This topic is very interesting to think about.

One key question, missed by the article, is: What will computers look like in 100 years? Massively parallel is a given, but beyond that, will it be higher performance to tightly couple memory to the processor and have NUMA-esque interconnects, or will we have massive processor complexes connected to tebibytes of RAM?

                                                                              One interesting bit from Mill Computing is that they are designing the processor architecture and instruction set based on the constraints of silicon lithography! This leads to choices like having two separate instruction encodings, so that they can have two simpler (and smaller) decode units, coupled with smaller and faster (due to locality to the decoders) instruction caches. The video series goes into much, much more detail and they are worth a watch for anyone interested in computer architecture.

                                                                              And then we need to talk about truly 3D processor architectures, because we won’t be using silicon lithography forever.

                                                                              And then we need to ask, who is writing programs 100 years from now? Mostly other computers. So restricting program keywords to English text, saving code in files, in-line comments, and such may all go by the wayside. Does it make more sense for all the code to be in some kind of database instead? Will there be any code reuse in the conventional sense, or will AGI just custom-write each separate program, making things ultra-optimized?

                                                                              1. 5

It’s very bold to believe that computers will still exist in 100 years. They might exist in some parts of the world that are lucky enough to have, in one place, the necessary resources, the know-how, and an economy that can make use of them to produce simple controllers locally, but it’s clear that personal computers won’t survive the ongoing collapse of industrial civilization. Maybe there will still be some in 50 years, but 100 years is way too much.

                                                                                A nice paper on the subject:

                                                                                1. 2

Oh, we’re certainly racing towards the cliff of un-sustainability. I’m fairly convinced, one way or the other, that human civilization won’t exist as we know it now by 2100. It might be good, or there might be civilizational collapse from ecological or other factors. Or a robot uprising.

                                                                              1. 4

Many home ISPs implement firewall rules to avoid common spam & exploits (e.g. mail-related ports), but I suspect CG-NAT is now the most likely reason to cause this problem.

Tried contacting your ISP? My ISP (in Australia) now CG-NATs home users by default, but they were perfectly happy to put me back on a dedicated IPv4 address after I gave them a reason (“need remote access”). No extra cost, and sticky enough that it doesn’t change (albeit I’m not paying for static, so it might eventually).

                                                                                1. 1

I can try that too. What’s funny is that according to the common services, my public IP address didn’t change.

                                                                                1. 7

                                                                                  Thank you everyone for your suggestions!

                                                                                  I decided to try Tailscale first. I was able to connect remotely with no problem. I haven’t yet activated Tailscale SSH, I was just using the keys and SSH config I had already set up, because that also works with ConnectBot on Android.

                                                                                  It was all rather… easy and hassle-free.

                                                                                  1. 3

                                                                                    Job hunting. I’ve been quite dissatisfied with my current workplace lately. Sigh.

                                                                                    1. 1

                                                                                      Also job hunting. And fixing a toilet :-/ (that shouldn’t take too long)

I have yet to finish either Divinity: Original Sin or D:OS 2, so I may give one or both of them a spin again. I have an incessant desire to experiment with different builds for the character classes.

                                                                                    1. 4

                                                                                      Got the nephews coming over for a night, so we’ll be entertaining them. I also need to polish up my resume and more seriously start looking for a remote developer job. Still debating on the one-page vs. two-page thing.

                                                                                      Recently ordered a VisionFive version 2 board (RISC-V CPU) on Kickstarter, but that won’t be coming in until December. Need to set up a VPN between my cheap cloud server and my home server… AT&T recently decided to not allow incoming connections (on any port), so annoying.

                                                                                      1. 1

                                                                                        VisionFive 2 looks neat, thanks for sharing! Do you have any particular plans for it, or will you just play around with RISC-V? I was considering grabbing one for NAS-like purposes, but USB 3.0 probably won’t cut it there.

                                                                                        1. 1

                                                                                          I’ll be taking a look at code generation in Rust. There is already good support in the main toolchain via LLVM, but the support for RISC-V was dropped in Cranelift a while back.

                                                                                      1. 34

                                                                                        I fear the ramifications of Fuchsia. From where I’m sitting, it looks like Google bootstrapped Android off the back of Linux, didn’t really give much back, and then set out on a campaign to rid themselves of components with licenses that might obligate them to give anything back in the future. Android just keeps getting more and more closed, and what remains open is increasingly useless without adding on proprietary software. They’ve made a mockery of the freedoms granted by the GPL; for most Android users, the only alternative to “whatever OS we decide to give you” is picking some hacked-up mess from a forum that will be maintained for approximately 37 seconds. To me, Fuchsia feels like an attempt to close this loop; once the Linux kernel is out of the picture, Google will have rid itself of all those troublesome GPL components and can forget that whole “open source” thing ever happened.

                                                                                        1. 7

                                                                                          Frankly, this doesn’t make any sense since Fuchsia is open source? Yes, Fuchsia is not GPL, but Google wrote Fuchsia so Google can decide its license.

                                                                                          1. 18

                                                                                            It’s a difference between users having guaranteed rights to the source code now and in the future, vs users depending on continued benevolence of Google.

Sadly, the GPL covers only the kernel. Android is already problematic from a software freedom perspective due to the Play Services dependency, and important components like camera image processing being kept closed-source.

With such a history, do not expect Google to be a good steward of a project they can close as much as they want. Google already keeps Android forks inferior, and Fuchsia gives them even more code they could make closed at any time to make Android forks harder to maintain.

                                                                                            1. 9

But doesn’t this just confirm the statement?

                                                                                              Can decide its license

and thus can do anything they want with it, including the addition of proprietary extensions and APIs that you need to run the Android of the future, closing its source later on, or changing the license such that it’s not free to use.

                                                                                              1. 2

                                                                                                that’s entirely compatible with what /u/jordemort said, is it not?

                                                                                              2. 6

                                                                                                There has definitely been this trend, and it’s not just Google. Amazon also comes to mind.

                                                                                                Businesses tend towards rent-seeking behavior by nature, and often only “donate” when it is a means to that end. Google is neither a charity nor a non-profit.

                                                                                                1. 5

Your theory might coincide with what the interviewee said was the real motivation: the insanity of Google having four or more disjoint teams all separately customizing the Linux kernel?

                                                                                                  From the article:

                                                                                                  “At that time, Fuchsia was never originally about building a new kernel. It was actually about an observation I made: that the Android team had their own Linux kernel team, and the Chrome OS team had their own Linux kernel team, and there was a desktop version of Linux at Google [Goobuntu and later gLinux], and there was a Linux kernel team in the data centers. They were all separate, and that seems crazy and inefficient.”

                                                                                                  1. 9

It doesn’t seem that crazy and inefficient to me. All those teams supported different products with different requirements. A big thing with Android was (finally) getting Binder upstreamed into the Linux kernel… that was a long process. I’ve not heard anything about ChromeOS or the desktop efforts that had IPC mechanism requirements that couldn’t be fulfilled by existing projects like D-Bus.

                                                                                                    ChromeOS is about providing a polished and narrowly focused experience, without the flexibility that a nominal desktop OS should provide. So I don’t see as much overlap there either. And the server team I’m sure was more worried about software defined networking, virtualization, and making sure process scheduling doesn’t bog down on a 64-core machine. Also not necessarily a lot of overlap.

                                                                                                    1. 6

                                                                                                      I think the reasons why an engineer might want to start a project aren’t necessarily the same reasons why management might want to get behind a project.

                                                                                                      1. 1

                                                                                                        so the solution to that inefficiency is not to unify their linux efforts, but to develop an entirely new kernel??

                                                                                                        1. 1

                                                                                                          Maybe they found that it wasn’t efficient to shoehorn basically the same monolithic kernel into everything from mobiles to cloud clusters.

                                                                                                          1. 1

                                                                                                            maybe but that’s not what the interviewee said

                                                                                                      2. 2

                                                                                                        I just got a new phone and installed LineageOS on it. The GPL is doing absolutely nothing to help keep AOSP free and open: the requirements to deploy Google things are due to the Play Store having a monopoly on most apps that people actually need and the fact that a lot of things depend on Play Services and so on.

                                                                                                        I’m looking forward to Fuchsia replacing Linux in Android. It’s a much better kernel design and a better implementation. The main obstacle for Fuchsia at the moment is that Google open source projects are very much Google projects that happen to be open source. They are very bad at building (and not then immediately screwing over) communities.

                                                                                                        1. 1

                                                                                                          yeah it’s bad, but we already knew everything would move in that direction without effective organized resistance.

                                                                                                        1. 1

I’ve always resisted the urge to switch to a different layout than QWERTY for fear of not being able to use someone else’s keyboard. Given that I use my own setup most of the time, this fear is not really justified. But it’s kinda like buying insurance, for that one time when you really need it.

                                                                                                          1. 1

I am in front of a lot of different systems throughout a given week, so switching away from QWERTY is a non-starter for me. The furthest I can go is a Microsoft Natural 3000 keyboard for my main systems. Been using those or the original Microsoft Natural since the 1990s.

                                                                                                            This is a really cool project, thank you for posting it! However:

                                                                                                            escape is too crucial to put on a non-base layer, but at the same time, not as important to deserve a place on the base layer.

                                                                                                            Not important! Heresy! (JK) I was obligated to say that as a decades-long vi/vim/neovim user. Very nice project, though, seriously.

                                                                                                            1. 1

You won’t forget QWERTY if you switch to a new layout, don’t worry. Worst case is that you have to look down at the keycaps when typing…

                                                                                                              1. 1

Yes, this is my experience as well, combined with the annoyance that the keys are all in nonsensical positions.

                                                                                                                If I have to help a colleague who is on qwerty and it takes me too much time to type something out, I will ask them to type it out.

                                                                                                              2. 1

I learned a new alpha layout at the same time as I was getting used to a small ergo keyboard (30 keys in my case). I continued using my regular row-stagger keyboard during the day at work, but would practice on my new layout and keyboard at night. After a few weeks, I felt good enough to start using my new keyboard for work. It has been over a year since, and I still use QWERTY on my laptop’s built-in keyboard and my alt layout on my ergo keyboard. Having the layouts tied to different physical key layouts has made it really easy to keep them straight in my head.

                                                                                                              1. 6

                                                                                                                So the idea is really cool in general. And I get the desire to not modify the compass itself. But if I was to do this, I’d probably go for some kind of stepper motor instead.

                                                                                                                The main downside (from my PoV) of using a strong electromagnet plus unmodified compass is that you can’t then use a normal 9-axis IMU (which has a magnetometer) for orientation of the housing itself.

                                                                                                                1. 6

                                                                                                                  One of the big things missing, IMO, of the Semantic Web is the notion of context. A fact, any fact, is going to be true in some contexts, and false in other contexts. The classic example of this is something like “parallel lines do not intersect” which is true in flat space (2D or 3D), but not (for example) on the surface of a sphere.

                                                                                                                  The knowledge bases I worked with (briefly) had encoded facts like “Tim Cook is the CEO of Apple”. But of course, that is only true for a certain context of time. Before that was Steve Jobs from 1997 to 2011. But the dumps generated from Wikipedia metadata didn’t really have that with much consistency, nor any means to add and maintain context easily.

                                                                                                                  Context in general is needed all over the place for reasoning:

                                                                                                                  • fictional contexts, such as Star Trek or 18th century novels
                                                                                                                  • hypothetical contexts, such as: What if I didn’t go to the store yesterday, would we have run out of milk? … and … What if the user doesn’t have a laptop, only a phone, is the website still usable?
                                                                                                                  • Time, place, social group, etc.

A bare fact is only useful insofar as you know which contexts it applies to.

I don’t know a good way to represent this; a graph may not be ideal. Many facts share a context, too. In 2003, Steve Jobs was an employee at Apple. Ditto for Jony Ive in 2003. And in 2004, etc.

                                                                                                                  How do our own brains organize these facts and the contexts they go with?

                                                                                                                  1. 2

There was some work on context. SPARQL lets you say what context you want to query, for example. I forget what it’s called there…

N3 also had quoting. It was kind of a mess, though.

                                                                                                                    You’ve seen Guha’s thesis and cyc, yes? Fun stuff.

But as noted above, machine learning (GPT-3) has pretty much eclipsed symbolic reasoning on the one side. On the other, we have the stuff that’s no longer considered AI: type systems and SQL.

                                                                                                                    1. 2

                                                                                                                      I’ve read a bit about Cyc and OpenCyc, yes. I haven’t read the book Building Large Knowledge Based Systems by Lenat and Guha though.

                                                                                                                      I haven’t given up on the idea of probabilistic symbolic reasoning, but I realized I’m in the minority here.

                                                                                                                      I still imagine a system where, for example, you receive a document (such as news article) and it is translated into a symbolic representation of the facts asserted in the document. Using existing knowledge it can assign probabilistic truth values to the statements within the article. And furthermore be able to precisely explain the reasoning behind everything, because it could all be traced back to prior facts and assertions in the database.

I can’t help but think such a system ought to be more precise in its reasoning while needing fewer compute resources, and be able to scale to much larger and more complicated reasoning tasks.