Threads for aae

  1. 6

    Hot take here, but I find Linux on the desktop to be insufferable. I want it to be good, I really do, but it just isn’t. I feel like Windows has no place on a server, but for my desktop? I’ve been quite happy with Win11, the Terminal app, and WSL2. I live in the best parts of Linux, and I can actually use my computer with the best parts of Windows.

    Two main uses for my computer, MS Teams and Zoom, are unfortunately filled with gotchas on Linux. They’re both terrible.

    macOS is almost the best of both worlds too, but you’re locked into their hardware, which was a liability until recently (M1).

    1. 24

      Yeah yeah, mention Rust. Rust is too complicated to implement by one person.

      I’m not sure that’s a practical metric by which to judge a tool. The C compilers that provide a practical foundation for modern software development were not implemented by one person either.

      In general Turing completeness is necessary but not sufficient: it’s just one facet of what makes a language practically useful. There are many other properties that end up resulting in costs someone has to pay to use a language; e.g., is it memory safe, or will engineers and users alike be on the hook for an unknown number of egregious memory safety bugs?

      1. 12

        Also mrustc has been implemented mostly by one person.

        1. 2

          I knew this would be brought up; do you know the effort they’ve had to put in to achieve this? An incredible amount.

          1. 8

            It’s 100K lines of code, and the majority of it was developed over a 2-3 year period (with ongoing development to catch up with the evolution of Rust). The number of commits and lines of code happens to be close to TCC’s.

            It does take a couple of shortcuts: it’s a Rust-to-C compiler (no machine code generation) and it doesn’t perform borrow checking (the Rust language is carefully designed to make it optional. Lifetimes are purely a compile-time lint, and don’t affect generated code or its behavior).

            I think overall in terms of implementation difficulty Rust is somewhere between C and C++. Parsing of Rust is much simpler than C++, and Rust has fewer crufty language features than C++ (there’s one way to initialize a variable), but some features are big-ish (borrow checker, type inference).

            How hard it is to implement mainly depends on how good a quality of implementation you want to have. For example, LLVM is 85× larger than mrustc and tcc, with over 130× more commits. It’s a 20-year collaborative effort, likely not possible to do by a single person. The main rustc project is also maximalist like that, because it isn’t merely an effort to get Rust working, but to make it fast, efficient, user-friendly, well-documented, reliable, portable, etc., so much more work went into it beyond just the language implementation.

            1. 2

              I cannot speak for mrustc, but 100k loc for tcc is bullshit. Just counting sources and headers in the top level, I get 55k loc (the remainder is taken up by tests and win32 headers). Close to 40k is taken up by target-specific code. The core compiler is about 10k loc.

              1. 1

                The openhub stats I’ve quoted are for the whole repo, and I see 57K lines of .c and 38K of .h in there. This includes tests, so it’s indeed more than just the compiler.

                1. 2

                  If I run a word count on everything in the ‘src’ directory of mrustc, I get about 130k loc. I therefore conclude that mrustc’s rust compiler is approximately 10x larger and more complex than tcc’s c compiler. Recall that tcc also includes assemblers and linkers, and supports many targets.

              2. 0

                I mean if 3 years is not a lot of effort then cheers to you! You must be an absolute coding beast.

                1. 15

                  I feel like this is a fairly disingenuous and dismissive argument - your original post stated that “Rust is too complicated to implement by one person.” The comment you were responding to was making the point that not only is there an implementation of Rust by primarily one person, but a single-contributor C implementation is a comparable size and would theoretically take a similar amount of effort to implement. People here aren’t trying to say it’s not a lot of effort, but that it does exist and that you may be trivializing the amount of effort needed for a C implementation.

                  1. 3

                    Sorry, I didn’t mean to dismiss anything! Isn’t the statement still true, though, given it’s been mentioned they still got help? … Regardless, the general sentiment is right. I should have said instead that it’s not reasonable!

                    I may very well be trivializing the effort for a C implementation. In my mind, C’s type system, lack of a borrow checker, and other features make its implementation maybe an order of magnitude easier. I could be completely wrong though, and please elaborate if that’s the case!

                    1. 4

                      A non-optimizing C89 or C90 compiler is relatively simple to implement, with only minor inconveniences from the messy preprocessor, bitfields, and the parsing ambiguities of dangling else and typedef (did you know it can be scoped and nested, and that this affects the syntax around it!?). There aren’t any things that are hard per se, mostly just tedious and laborious, because there are a lot of small quirks underneath the surface (e.g. arrays don’t always decay to pointers, sizeof evaluates things differently, there are rules around “sequence points”).

                      There are corners of C that most users don’t use, but a compiler in theory needs to support, e.g. case doesn’t have to be at the top level of a switch, but can be nested inside other arbitrary code. C can generate “irreducible” control flow, which is hard to reason about and hard to optimize. In fact, a lot of optimization is pretty hard due to aliasing, broken const, and the labyrinth of what is and isn’t UB described in the spec.

                      1. 3

                        There are corners of C that most users don’t use, but a compiler in theory needs to support, e.g. case doesn’t have to be at the top level of a switch, but can be nested inside other arbitrary code

                        It’s worth noting that, since you said ‘non-optimising’ these things are generally very easy in a non-optimising compiler. You can compile C more or less one statement at a time, including case statements, as long as you are able to insert labels after you insert a jump to them (which you can with most assembly languages). Similarly, sequence points matter only if you’re doing more than just evaluating expressions as you parse them.

                        The original C compiler ran on a computer that didn’t have enough memory for a full parsed AST and so the language had to support incremental code generation from a single-pass compiler.

          2. 9

            LLVM was originally just Chris Lattner. I think the question isn’t “Can one person build it?” It’s “Can one person build it to the point where it has enough value for other people to work on it too?”

            1. 5

              LLVM was originally just Chris Lattner

              Several of the folks in / formerly in Vikram Adve’s group at UIUC would be quite surprised to learn that.

              1. 1

                I actually looked at Wikipedia first before my comment, but it made it seem like it was Lattner’s project under Adve’s mentorship. I’ll take your word for it that it was a group effort from the start.

            2. 3

              This was my first thought as well. There are a lot of very useful things that are too complicated to be implemented by one person - the current state of Linux probably falls into that category, and I know that at least I wouldn’t want to go back to even a version from 5 years ago, much less back to a version that could have been implemented by a single person.

              1. 2

                …And there are a lot of useful things that are simple enough for one person to implement! :D

                1. 3

                  Ha, I agree with that, was mostly just highlighting that I don’t feel like “too complicated to implement by one person” is a good reason to dismiss Rust’s potential usefulness.

                  For myself, I originally got frustrated with Rust not allowing me to do things; eventually, I realized that it was statically removing bad habits that I’d built in the past. Now I love when it yells at me :)

              2. 1

                [Tool] is too complicated to implement by one person.

                I’m not sure that’s a practical metric by which to judge a tool

                I am. Short term, that means the tool will cost much less: less time to make, fewer bugs, more opportunities for improvement. Long term it means other people will be able to rebuild it from scratch if they need to. At a lower cost.

                1. 3

                  The flip side of this is that the tool will do much less. A wooden hammer is a tool that a single person can make. A hammer with a steel head that can drive in nails requires a lot more infrastructure (smelting the ore and casting the head are probably large enough tasks that you’ll need multiple people before you even get to adding a wooden handle). An electric screwdriver requires many different parts made in different factories. If I want to fix two pieces of wood together, then a screw driven by an electric screwdriver is both easier to use and produces a much better result than a nail driven by a wooden hammer.

                  1. 1

                    Obviously I was limiting my analysis to software tools, where the ability of a single person to make it is directly tied to its complexity.

                    One fair point you do have is how much infrastructure the tool sits upon. Something written in Forth needs almost nothing besides the hardware itself. Something written in Haskell is a very different story. Then you need to choose what pieces of infrastructure you want to depend on. For instance, when I wrote my crypto library I chose C because of its ubiquity. It’s also a guarantee of fairly extreme stability. There’s a good chance that the code I write now will still work several decades from now. If I wanted to maximise safety instead, I would probably have picked Rust.

                    1. 6

                      Obviously I was limiting my analysis to software tools, where the ability of a single person to make it is directly tied to its complexity.

                      My point still applies. A complex software tool allows me to do more. In the case of a programming language, a more complex compiler allows me to write fewer bugs or more features. The number of bugs in the compiler may be lower for a compiler written by a single person but I would be willing to bet that the number of bugs in the ecosystem is significantly higher.

                      The compiler and standard library are among the best places for complexity in an ecosystem because the cost is amortised across a great many users and the benefits are shared similarly. If physical tools were, like software, zero marginal cost goods, then nail guns, pillar drills, band saws, and so on would all be ubiquitous. If you tried to make the argument that you prefer a manual screwdriver to an electric one because you could build one yourself if you needed to, then you’d be laughed at.

                      For instance, when I wrote my crypto library I chose C because of its ubiquity. It’s also a guarantee of fairly extreme stability

                      It also gives you absolutely no help in writing constant-time code, whereas a language such as Low* allows you to prove constant-time properties at the source level. The Low* compiler probably depends on at least a hundred person-years of engineering, but I’d consider it very likely that the EverCrypt implementations of the same algorithms would be safer to use than your C versions.

                      1. 2

                        I reckon amortized cost is a strong argument. In a world where something is built once and used a gazillion times, the cost analysis is very different from something that only has a couple of users. Which is why, by the way, I have a very different outlook for Oberon and Go: the former was used in a single system, and the cost of a more powerful compiler could easily outweigh the benefits across the rest of the system; while Go set out to be used by a gazillion semi-competent programmers, and the benefit of some conspicuously absent features would be multiplied accordingly.

                        Honestly, I’m not sure where I stand. For the things I make, I like to keep it very very simple. On the other hand, if I’m being honest with myself, I have few qualms sitting on a mountain of complexity, provided that foundation is solid enough.

                        Do you have a link to Low*? My search engine is failing me.

                        1. 2

                          Do you have a link to Low*? My search engine is failing me.

                          This paper is probably the best place to start

                2. 1

                  The C compilers that provide a practical foundation for modern software development were not implemented by one person either.

                  Right but there are many C compilers which were written by one person and still work. To me, that’s the important part. Thank you for your thoughts!

                  1. 2

                    Why is that important?

                    1. 1

                      It’s important because, fast forward 300 years, and no one uses your language anymore. It must be reasonable for future humans to write a compiler on their own if they want to run your program.

                      I’m really trying to encourage people thinking beyond their lives in the software realm lately, just as we need to do the same for the ecosystem.

                      1. 3

                        trying to build software to last 300 years seems like it would limit hardware development
                        and everyone implements C compatibility in their new hardware so that people will use it
                        if people can figure out quantum computers and computers not based on binary, they’ll probably need to figure out what the next C will be for that new architecture
                        if you want your software to last 300 years, write it in the most readable and easy-to-understand manner, and preserve its source so people can port it in the future

                        1. 3

                          And this is why C is not good for longevity; languages which are more abstracted are. Thank you for that! Completely agree with what you’re thinking here.

                          1. 3

                            i don’t think the biggest blockers to software longevity are language choices or even hardware, it’s the economy/politics of it… long lasting anything doesn’t fit in well with our throw-away society, and since it can’t be monetized, the capitalist society snubs its nose at it

                            1. 2

                              Hehe, an interesting thread of thought we could travel down here. I’ll just say I agree to a degree.

                        2. 3

                          It’s important because, fast forward 300 years, and no one uses your language anymore. It must be reasonable for future humans to write a compiler on their own if they want to run your program.

                          If you’re considering a person 300 years in the future then you should also consider that they will have tools 300 years more advanced than ours. 30 years ago, writing a simple game like space invaders was weeks worth of programming, now it’s something that you can do in an afternoon, with significantly better graphics. In the same time, parser generators have improved hugely, reusable back ends are common, and so on. In 300 years, it seems entirely feasible that you’d be able to generate a formal specification for a language from a spoken description and synthesise an optimising compiler directly from the operational semantics.

                          1. 1

                            You’re right, I haven’t considered this! I don’t know what to say immediately other than I think this is very important to think about. I’d like to see what others have to comment on this aspect too…!

                            1. 1

                              you should also consider that they will have tools 300 years more advanced than ours.

                              Unless there has been a collapse in between. With climate change and peak oil, we have some serious trouble ahead of us.

                              1. 5

                                In which case, implementing the compiler is one of the easiest parts of the problem. I could build a simple mechanical computer that could execute one instruction every few seconds out of the kind of materials that a society with a Victorian level of technology could produce, but that society existed only because coal was readily accessible. I’ve seen one assessment that said that if the Victorians had needed to use wood instead of coal to power their technology they’d have completely deforested Britain in a year. You can smelt metals with charcoal, but the total cost is significantly higher than with coal (ignoring all of the climate-related externalities).

                                Going from there to a transistor is pretty hard. A thermionic valve is easier, but it requires a lot of glass blowing (which, in turn, requires an energy-dense fuel source such as coal to reach the right temperatures) and the rest of a ‘50s-era computer required fairly pure copper, which has similar requirements. Maybe a post-collapse civilisation would be lucky here because there’s likely to be fairly pure copper lying around in various places.

                                Doping silicon to produce integrated circuits requires a lot of chemical infrastructure. Once you can do that, the step up to something on the complexity of a 4004 is pretty easy but getting lithography to the point where you can produce an IC powerful enough to run even a fairly simple C program is nontrivial. Remember that C has a separate preprocessor, compiler (which traditionally had a separate assembler), and linker because it was designed for computers that couldn’t fit more than one of those in RAM at a time. Even those computers were the result of many billions of dollars of investment from a society that already had mass production, mass mining, and large-scale chemistry infrastructure.

                                C code today tends to assume megabytes of RAM, at a minimum. Magnetic core storage could do something like 1 KiB in something the size of a wardrobe. Scaling up production to the point where 1 MiB is readily available requires ICs, so any non-trivial C program is going to have a dependency on at least ’80s-era computing hardware.

                                TL;DR: If a society has collapsed and recovered to the point where it’s rediscovering computers, writing a compiler for a fairly complex language is going to be very low cost in comparison to building the hardware that the compiler can target.

                                1. 1

                                  Well, I wasn’t anticipating such a hard collapse. I was imagining a situation where salvage is still a thing, or where technology doesn’t regress that far. Still, you’re making a good point.

                                  1. 4

                                    That’s an interesting middle ground. It’s hard for me to imagine a scenario in which computers are salvageable but storage is all lost to the point where a working compiler is impossible to find. At the moment, flash loses its ability to hold charge if not powered for a few years, but spinning rust is still fine, as is magnetic tape, for a much longer period, so you’d need something else to be responsible for destroying them. Cheap optical storage degrades quite quickly but there are archive-quality disks that are rated for decades. If anything, processors are more fragile than storage.

                                    In the event of a collapse of society, I think it’s a lot more likely that copies of V8 would survive longer than any computer capable of running them. The implicit assumption in the idea that the compiler would be a bottleneck recovering from a collapse of society is that information is more easily destroyed than physical artefacts. This ignores the fact that information is infinitely copyable, whereas the physical artefacts in question are incredibly complex and have very tight manufacturing tolerances.

                                    Of course, this is assuming known threats. It’s possible that someone might create a worm that understands a sufficiently broad range of vulnerabilities that it propagates into all computers and erases all online data. If it also propagates into the control systems for data warehouses then it may successfully destroy a large proportion of backups. Maybe this could be combined with a mutated bacterium that ate something in optical disks and prevented recovering from backup DVDs or whatever. Possibly offline storage will completely go out of fashion and we’ll end up with all storage being some form of RAM that is susceptible to EMP and have all data erased by a solar flare.

                                    1. 1

                                      It really depends on what we can salvage, and what chips can withstand salvage operations. In a world where we stop manufacturing computers (or at least high-end chips), I’d expect chips to fail over the years, and the most complex ones will likely go first. And those that don’t will be harder to salvage for various reasons: how thin their connection pins are, ball arrays, multi-layer boards requirements, and the stupidly fast rise times that are sure to cause cross-talk and EMI problems with the hand made boards of a somewhat collapsed future.

                                      In the end, many of us may be stuck with fairly low-end micro controllers and very limited static memory chips (forget about controlling DRAM, it’s impossible to do even now without a whole company behind you). In that environment, physical salvage is not that horrible, but we’d have lost enough computing power that we’ll need custom software for it. Systems that optimise for simplicity, like Oberon, might be much more survivable in this environment.

                                      C code today tends to assume megabytes of RAM, at a minimum.

                                      In this hypothetical future, that is relevant indeed. Also, I believe you. But then the first serious project I wrote in C, Monocypher, requires only a couple of KB of stack memory (no heap allocation) for everything save password hashing. The compiled code itself requires less than 40KB of memory. Thing is, I optimised it for simplicity and speed, not for memory usage (well, I did curb memory use a bit when I heard I had embedded users).

                                      I suspect that when we optimise for simplicity, we also tend to use less resources as a side effect.


                                      Now sure, those simple systems will take no time to rebuild from scratch… if we have the skills. In our world of bigger and faster computers with a tower of abstraction taller than the Everest, I feel most of us simply don’t have those skills.

                                      1. 4

                                        Now sure, those simple systems will take no time to rebuild from scratch… if we have the skills. In our world of bigger and faster computers with a tower of abstraction taller than the Everest, I feel most of us simply don’t have those skills.

                                        While it’s an interesting thought exercise, I think this really is the key point. The effort in salvaging a working compiler to be able to run some tuned C code in a post-apocalyptic future may be significantly higher than just rewriting it in assembly for whatever system you were able to salvage (and, if you can’t salvage an assembler, you can even assemble it by hand after writing it out on some paper. Assuming cheap paper survives - it was very expensive until a couple of hundred years ago).

                                        Most of us probably don’t have the skills to reproduce the massive towers of abstraction that we use today from scratch but my experience teaching children and young adults to program suggests that learning to write simple assembly routines is a skill that a large proportion of the population could pick up fairly easily if necessary. If anything, it’s easier to teach people to write assembly for microcontrollers than JavaScript for the web because they can easily build a mostly correct mental model of how everything works in the microcontroller.

                                        Perhaps more importantly, it’s unlikely that any software that you write now will solve an actual need for a subsistence level post-apocalyptic community. They’re likely to want computers for automating things like irrigation systems or monitoring intrusion sensors. Monocypher is a crypto library that implements cryptosystems that assume an adversary who had access to thousands of dedicated ASICs trying to crack your communications. A realistic adversary in this scenario would struggle to crack a five-wheel Enigma code and that would be something that you could implement in assembly in a few hours and then send the resulting messages in Morse code with an AM radio.

                                        1. 1

                                          Most of us probably don’t have the skills to reproduce the massive towers of abstraction that we use today from scratch but my experience teaching children and young adults to program suggests that learning to write simple assembly routines is a skill that a large proportion of the population could pick up fairly easily if necessary.

                                          I feel a little silly for not having thought of that. Feels obvious in retrospect. If people who have never programmed can play Human Resource Machine, they can probably learn enough assembly to be useful.

                                          Perhaps more importantly, it’s unlikely that any software that you write now will solve an actual need for a subsistence level post-apocalyptic community.

                                          Yeah, I have to agree there.

                            2. 2

                              Today’s humans were able to create Rust, so I don’t see why future humans wouldn’t. Future humans will probably just ask GPT-3000 to generate the compiler for them.

                              If you’re thinking about some post-apocalyptic scenario with a lone survivor rebuilding the civilisation, then our computing is already way beyond that. In the 1960’s you were able to hand-stitch RAM, but to even hold source code of modern software, let alone compile and run it, you need more technology than a single person can figure out.

                              C may be your point of reference, because it’s simple by contemporary standards, but it wasn’t a simple language back when the hardware was possible to comprehend by a single person. K&R C and single-pass C compilers for PDP-11 are unusable for any contemporary C programs, and C is too complex and bloated for 8-bit era computers.

                              1. 1

                                If GPT can do that for us then hey, I will gladly gladly welcome it. I’m not thinking about a post-apocalyptic scenario but I can see the relationship to it.

                              2. 2

                                But why one person? I think we’ll still write software in teams in 2322, if we write software at all by that point instead of flying spaceships and/or farming turnips in radioactive wastelands. The software was written by teams today, and I think, if it needs to be rewritten, it will be rewritten by teams in the future.

                                1. 1

                                  I would also be careful about timespans here. computers haven’t been around for a century yet, so who knows what things will be like 100 years from now? I don’t even know if it’s possible to emulate an ENIAC and run old punch card code on modern hardware, that’s the sort of change we’ve seen in just 75y. maybe multicore x86 machines running windows/*nix/BSD will seem similarly arcane 300y from now.

                              3. 1

                                Wouldn’t a published standard be more important to future programmers? Go might be a wonderful language, but is there a standards document I can read from which an implementation can be written?

                            1. 8

                                It is pretty damning to the Go language that you can’t use any existing code. Just about every other language provides relatively straightforward (if not seamless) interop with any other C ABI. Only in Go have I heard such consistent and negative opinions on the FFI. Java and JNI are close, but it still seems better received, and in that case at least there is a decent reason, because it ruins your portability once you add native code.

                                The fact that someone would recommend “reimplementing a large piece of C code in Go” instead of just binding to it is exposing a huge downside of the language.

                              1. 4

                                  The fact that someone would recommend “reimplementing a large piece of C code in Go” instead of just binding to it is exposing a huge downside of the language.

                                  The main reason is so you “get” effortless portability as a result. I can only think of Zig, where you get some out-of-the-box portability without re-writing your C in Zig (since it ships with libc for various platforms/archs and has all the toolchain nicely set up).

                                1. 2

                                  i immediately thought of zig’s self contained compiler when i saw this post… and i recall things being posted to show how you can integrate zig cc in w/ go/cgo to have portable c compilers

                                  seems like it would be a good thing for these project maintainers to get on board with…

                                  1. 6

                                      I wrote a blog post where I cross-compiled the CGo SQLite library with Zig for all the major OSs without too much fuss.

                                    https://zig.news/kristoff/building-sqlite-with-cgo-for-every-os-4cic

                                2. 3

                                  I can’t wait for Go to have an FFI some day!

                                  1. 1

                                    As mentioned above, I believe this to be simply untrue: Go has an FFI today and it’s called cgo. What is it about cgo that does not make it an FFI?

                                    1. 1

                                      cgo is basically a separate language. It is a separate implementation.

                                      1. 1

                                          I can’t see how it’s a separate language. You embed a bit of C code in a special place within a Go file. The C code is compiled by a C compiler, the Go code by the Go compiler, and from the C and the Go code, cgo generates some interface code to make C names known to the Go compiler and some Go names known to the C compiler. How is cgo (which, to me, is a program) a separate language?
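
                                          For what it’s worth, here’s a minimal sketch of what that looks like in practice (the add function is just a made-up example):

                                              package main

                                              /*
                                              // The comment block sitting right above `import "C"` is the cgo preamble;
                                              // it is compiled as C, and its names become visible to Go as C.<name>.
                                              static int add(int a, int b) { return a + b; }
                                              */
                                              import "C"

                                              import "fmt"

                                              func main() {
                                                  fmt.Println(int(C.add(3, 4))) // calls straight into the C function above
                                              }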

                                        It is a separate implementation.

                                        cgo is a separate implementation of what?

                                  2. 2

                                    Yes 100%, here is my lament from 4 years ago on that topic.

                                    https://news.ycombinator.com/item?id=16741043

                                    A big part of my pain, and the pain I’ve observed in 15 years of industry, is programming language silos. Too much time is spent on “How do I do X in language Y?” rather than just “How do I do X?”

                                    For example, people want a web socket server, or a syntax highlighting library, in pure Python, or Go, or JavaScript, etc. It’s repetitive and drastically increases the amount of code that has to be maintained, and reduces the overall quality of each solution (e.g. think e-mail parsers, video codecs, spam filters, information retrieval libraries, etc.).

                                    There’s this tendency of languages to want to be the be-all end-all, i.e. to pretend that they are at the center of the universe. Instead, they should focus on interoperating with other languages (as in the Unix philosophy).

                                    One reason I left Google over 6 years ago was the constant code churn without user visible progress. Somebody wrote a Google+ rant about how Python services should be rewritten in Go so that IDEs would work better. I posted something like <troll> … Meanwhile other companies are shipping features that users care about </troll>. Google+ itself is probably another example of that inward looking, out of touch view. (which was of course not universal at Google, but definitely there)

                                    This is one reason I’m working on https://www.oilshell.org – with a focus on INTEROPERABILITY and stable “narrow waists” (as discussed on the blog https://www.oilshell.org/blog/2022/02/diagrams.html )

                                    (copy of HN comment in response to almost the same observation!)

                                    I’m also excited about Zig for this reason. e.g. “maintain it with Zig” https://kristoff.it/blog/maintain-it-with-zig/

                                    1. 1

                                          On the other hand, oilshell is not(?) compatible with the piles of bash (and sh, and…) scripts out in the world, so folks have to rewrite them to be compatible with your shell. Is this not contradicting what you said earlier?

                                      1. 3

                                        Hm honest question: Why do you think it’s not compatible?

                                        It’s actually the opposite – it’s the ONLY alternative shell that’s compatible with POSIX sh and bash. It’s the most bash compatible shell by a mile.

                                        Try running osh myscript.sh on your shell scripts and tell me what happens!


                                        The slogan on the front page is supposed to emphasize that, but maybe it’s not crystal clear:

                                        It’s our upgrade path from bash to a better language and runtime.

                                        Also pretty high up on the FAQ is the statement:

                                        http://www.oilshell.org/blog/2021/01/why-a-new-shell.html#introduction

                                        OSH is a shell implementation that’s part of the Oil project. It’s compatible with both POSIX and bash. The goal is to run existing shell scripts. It’s done so since January 2018, and has matured in many regular releases since then.


                                        Nonetheless I think it could be clearer, so I filed a bug to write a project tour and put it prominently on the home page:

                                        https://github.com/oilshell/oil/issues/1127

                                        It is disappointing to me that this hasn’t been communicated after so many years … I suspect that some people actually think the project is impossible. It’s both compatible AND it’s a new language.

                                        I described in the latest blog post how that works:

                                        http://www.oilshell.org/blog/2022/05/release-0.10.0.html#backward-compatibility-the-eternal-puzzle

                                        Here are all your other alternative shell choices, NONE of which have the compatibility of OSH.

                                        https://github.com/oilshell/oil/wiki/Alternative-Shells

                                        (That is why the project is so large and long)

                                        1. 1

                                          Ah, I’m sorry. I skimmed the FAQ but missed that sentence. For some reason, the impression I got from your FAQ is that it’s basically yet another shell that doesn’t offer backwards compatibility. Obviously I was terribly wrong. I’m not sure how to suggest changes that may have prevented that (other than it’s completely my fault for misreading/skimming and getting the wrong impression.) So, sorry for the noise.

                                          1. 1

                                            OK no worries … I think it actually did point out that this crucial fact about the project is somewhat buried. Not entirely buried but “somewhat”.

                                    2. 2

                                          It is simply not true that “you can’t use any existing code” in Go. There’s cgo, and it allows you to call into C code and provides a way for C code to call into Go code - that’s pretty much the definition of using existing code. I think a big reason people are complaining about JNI is the same reason people complain about cgo: because you are dealing with a garbage collected language, there are rules about what you can do with memory and pointers. The same applies to .NET as well.

                                          The fact that someone would recommend “reimplementing a large piece of C code in Go” instead of just binding to it is exposing a huge downside of the language.

                                      As the article points out, in the very first sentence, most people use mattn/go-sqlite3 which is in fact a wrapper around the canonical C SQLite implementation. A “decent reason” (your words) to not use that library is because “it ruins your portability” because “you add native code”. This reason is at play here.

                                          This being said, the shown port to Go is a respectable effort. While it is impressive, I’d probably use one of the bindings to the canonical C code if possible, as it uses a highly tested implementation. If that’s not possible, the cznic port provides an interesting alternative.
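
                                          To make the trade-off concrete, here’s a rough sketch of how similar the two options look from application code. Both expose a database/sql driver, so switching is mostly a matter of the import and the driver name (details are from memory, so treat it as an illustration rather than gospel):

                                              package main

                                              import (
                                                  "database/sql"
                                                  "log"

                                                  // CGo binding to the canonical C SQLite:
                                                  //   _ "github.com/mattn/go-sqlite3"   (driver name "sqlite3")
                                                  // Pure-Go port:
                                                  _ "modernc.org/sqlite" // driver name "sqlite"
                                              )

                                              func main() {
                                                  // Swapping implementations is essentially swapping the blank import
                                                  // above and the driver name below; the database/sql code stays the same.
                                                  db, err := sql.Open("sqlite", "app.db")
                                                  if err != nil {
                                                      log.Fatal(err)
                                                  }
                                                  defer db.Close()

                                                  if _, err := db.Exec("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY)"); err != nil {
                                                      log.Fatal(err)
                                                  }
                                              }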

                                      1. 1

                                            Yes and no. I mean, there is CGO, which you can use. While it’s worse in Go, partly because of threading (especially on Linux), you’ll still find “pure” implementations of things that would usually use a C library; sometimes they are faster, because calling through the FFI might still be slow. Database interfaces are one such example, where people sometimes find the bottleneck to be the FFI.

                                        You also get certain benefits from not using C. I already mentioned the threading part which sometimes bites people in Go, but also you can be sure about memory safety, debugging will be easier, all the people using the project can contribute even when they are not fluent in C, etc.

                                        And if you still want/need to use C, there is CGO.

                                            There certainly have been cases in other languages where I wished a library wasn’t just a wrapper around C, be it in Python, Java, Objective-C/Swift or node.js projects. Given the circumstances they can be a source of headaches.

                                      1. 4

                                            Is it really that hard to not commit secrets? I mean, I just don’t ever put them in with the code, and I never use git add . It just seems like a heavy-handed solution to a problem that’s barely there. Unless this happens a ton elsewhere. I am astounded when I hear the statistics about how many secrets are committed on GitHub, but I wonder if it has more to do with a lack of understanding than with a git flub/accident.

                                        1. 4

                                              I think it’s a coding practices thing. Like Nathaniel Borenstein said, “No ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter.” I never hard code secrets because secrets always come in as environment variables or other parameters, but I think for people who just want to get something done quickly, hard coding seems like the fastest way to do things.
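
                                              In Go terms that habit is only a few lines; the variable name here is just a hypothetical example:

                                                  package main

                                                  import (
                                                      "log"
                                                      "os"
                                                  )

                                                  func main() {
                                                      // The secret is injected by the environment at deploy time,
                                                      // so there is nothing to hard code or commit.
                                                      apiKey, ok := os.LookupEnv("PAYMENTS_API_KEY") // hypothetical name
                                                      if !ok || apiKey == "" {
                                                          log.Fatal("PAYMENTS_API_KEY is not set")
                                                      }
                                                      // ...hand apiKey to whatever client needs it...
                                                      log.Printf("loaded a %d-character API key", len(apiKey))
                                                  }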

                                          1. 3

                                                I think in a perfect world you’re right, however a lot of exploits that cause users’ data to be exposed are caused in part by people checking secrets into source control. My goal with this project was to create something as lightweight and quiet as possible, such that you can mostly forget that it’s installed and get the protection with very little downside. Also, if you work at a company on a web service, the security team might mandate using a security scanner as part of your pre-commit, in which case it’s nice to have a very fast and lightweight option.

                                            1. 1

                                              i like the concept of having a company where the security team might mandate a tool like this

                                              the point i was making is that if someone is committing secrets, they probably don’t realize what they’re doing, and in those cases, they probably won’t understand the need to add this tool to their git pre-commit hooks

                                              in other words, if they were to fully understand what this tool is used for, then the usefulness is probably greatly reduced

                                              which leads me to feel like ultimately the education is what is important… but the security team mandate thing is good, having this run in CI before in a main branch is good too, depending on your repo setup this may make overwriting the history in the repo easier

                                              but best to have it checked before ever committed, which is the point of the tool

                                            2. 2

                                              It’s easy to not commit secrets! That’s why we should make a machine automate it for us.

                                              1. 1

                                                I’m also not sure what’s hard about it.

                                                Never use git add .

                                                Don’t even need to avoid any commands - instead, just don’t have secrets in the repository directory at all. It’s easy and also completely foolproof.

                                                1. 1

                                                  Which breaks down for infrastructure repos that are mostly secrets :P

                                                  1. 2

                                                    i work every day in infrastructure repos and don’t have any secrets committed, so that just sounds like someone’s doing something wrong

                                                    1. 1

                                                        You can try to argue with reality or just accept it. There are setups that are a lot older than e.g. Vault, or anything else based on tokens, has been around.

                                                      I don’t think this is the place for judgment of practices, and I’m not even involved in this game anymore, so don’t read this as defending myself.

                                                      1. 1

                                                        oh, not arguing at all, it just seemed like your statement was a “this won’t work for infrastructure repos”… which was, at least I think, the only logical conclusion to take from your statement, because i think it’s pretty obvious this isn’t going to work for repos that are literally designed to store secrets

                                                        then again, maybe the :P invalidates any attempt at a logical conclusion

                                                  2. 1

                                                    Never use git add . or git commit -a is good advice for other reasons though.

                                                1. 1

                                                        I use Nextcloud in a container, so I have largely not had to worry about PHP versions for a while…

                                                        I’m excited to see what comes of ownCloud’s “infinite scale” rewrite in Go

                                                  1. 3

                                                          Really good work getting put into Zig

                                                    1. 2

                                                      maybe it’s just the operator in me thinking, but i’ve thought for a while you can do pretty powerful, even enterprise grade webapps, with just a true single go binary (compile the templates into the binary) and an sqlite database alongside… you could even use some caddy libraries to do the SSL and perhaps auto let’s encrypt for SSL, just open ports 80 and 443 for your redirects, challenges, and stuff like that, an auth provider if needed… and that’s the operational overhead…

                                                      database backups can happen via the web (either via ui or api calls), and if your use case requires more, litestream is there to stream in real time…

                                                      if object storage is needed, S3 apis are pretty commoditized now too, i wonder what percentage of web apps in use globally could be switched to this model… i imagine most

                                                      i wish i had the skills to create that, as i have some ideas for some fairly straightforward CRUD apps w/ < 1000 users that it seems to me i should have no problem running on a basic VPS, more users should be more than possible as long as the code is efficient

                                                      i’d like to devote some more time to that ideal but i feel like that’s a great stack… it seems like the toolkit is all there to do it…

                                                      maybe we can coin the new stack name… instead of the LAMP stack for linux, apache, mysql and php, we can call it the xSG stack…. xGS rolls off the tongue better,

                                                      x (any OS due to Go’s cross compilation abilities)
                                                      G, golang
                                                      S, sqlite

                                                      S standing for sqlite’s a bit of a stretch (microsoft sql server?)

                                                      maybe “xGSqlite” isn’t too long, but that’s like 5 syllables vs. LAMP’s 1….

                                                      maybe 2 binaries… one frontend routing traffic, and 1 with your webapp, this way your frontend binary could care for upgrades and could do seamless “blue/green” type deployments with your continuously integrated/delivered webapp, but part of me says a little downtime for upgrades and db migrations is not the end of the world, not for your vast majority
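
                                                            for the curious, here’s a rough sketch of the shape of that stack: one binary with the templates compiled in via go:embed, plus sqlite through database/sql… the pure-go modernc.org/sqlite driver is an assumption here (any driver works), as are the templates/ directory and the users table

                                                                package main

                                                                import (
                                                                    "database/sql"
                                                                    "embed"
                                                                    "html/template"
                                                                    "log"
                                                                    "net/http"

                                                                    _ "modernc.org/sqlite" // assumed pure-Go SQLite driver
                                                                )

                                                                //go:embed templates/*.html
                                                                var templateFS embed.FS // templates baked into the single binary

                                                                func main() {
                                                                    tmpl := template.Must(template.ParseFS(templateFS, "templates/*.html"))

                                                                    db, err := sql.Open("sqlite", "app.db")
                                                                    if err != nil {
                                                                        log.Fatal(err)
                                                                    }
                                                                    defer db.Close()

                                                                    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
                                                                        var users int
                                                                        if err := db.QueryRow("SELECT count(*) FROM users").Scan(&users); err != nil {
                                                                            http.Error(w, err.Error(), http.StatusInternalServerError)
                                                                            return
                                                                        }
                                                                        tmpl.ExecuteTemplate(w, "index.html", users)
                                                                    })

                                                                    // TLS / Let's Encrypt could sit in front (caddy) or run in-process via x/crypto/acme/autocert.
                                                                    log.Fatal(http.ListenAndServe(":8080", nil))
                                                                }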

                                                      1. 1

                                                              Wow! I never knew SQLite and Litestream were so amazing. Is there a way to move a Postgres SQL DB to SQLite?

                                                        1. 7

                                                          Only if you’re using very few features of postgres. Any decent orm should be able to pipe from one store to the other. There’s also https://github.com/benbjohnson/postlite but it’s very alpha.

                                                          1. 4

                                                            If you’re low traffic enough for this to make sense, you can “just” dump your Postgres DB, and then import the dump into a SQLite DB after some light text processing. Add an audit log in Postgres if you have a bunch of data and will need to do incremental catch-ups for a low downtime cutover. How many writes/s does your Postgres instance handle?

                                                            1. 2

                                                              If you’re already on postgres I’m curious what you would gain by switching? You would lose a lot of features.

                                                              1. 7

                                                                a whole bunch of operational overhead

                                                                1. 1

                                                                  I’ve never found postgres to have overhead. Especially if you’re going to set up litestream anyway…

                                                                  1. 3

                                                                    simply running a postgres server is overhead

                                                                    yes, this is minimized a lot by using a database service from a cloud provider, but even then, assuming you make absolutely 0 tweaks to your postgres server… you still have much more to do than even just litestreaming to a bucket

                                                                    1. 1

                                                                      really? apt install postgres and then forget… ?

                                                                      1. 4

                                                                        At the very least you have to create some users and set permissions. You will also want to tweak the config a little (especially the memory settings and autovacuum). However, I would put this on the level of effort of litestreaming to a bucket. The real pain comes when you need to do some migrations…

                                                                        1. 3

                                                                          Backups, authentication management, and any form of replication all require work and upkeep.

                                                                          1. 1

                                                                            Tailscale is not a single backend server, it’s a tool that runs on users’ devices. It’s a really good idea to keep complexity low and not have to manage a separate Postgres process on every device, or even one single point of failure on a backend somewhere.

                                                                            SQLite is the smart choice here.

                                                                            1. 1

                                                                              If they’re talking about the client and not the service then that actually makes sense, but I doubt they’re running litestream on client devices…

                                                                1. 7
                                                                  1. 1

                                                                                    I hadn’t expected that that is how they’d explain it.

                                                                  1. 2

                                                                    tell us how you really feel?

                                                                    this reads just like someone’s brain threw up

                                                                    then i realized we’re looking into someone’s personal zettelkasten, basically their diary

                                                                    wonder what he thinks of vscode if he feels like neovim users are unemployable, 😁

                                                                    1. 2

                                                                      Can anyone who’s tried sorbet comment on why you’d want to use this instead of crystal?

                                                                      1. 10

                                                                        your question is ‘why x instead of y’, but these two are not direct competitors/alternatives, though both are related to ‘type checking ruby(-esque) code’.

                                                                        sorbet is a ‘checker’ that can be used on top of an existing ruby codebase, which also means it can be gradually adopted. this is similar to mypy in the python world.

                                                                        crystal, on the other hand, is a separate (though similar looking) programming language with no direct interop with the ruby ecosystem. instead, it compiles to machine code, as opposed to running inside a ruby interpreter/vm.

                                                                        1. 1

                                                                                        yeah crystal is like Elixir… similar syntax but distinctly different and incompatible.

                                                                          1. 1

                                                                            Thanks. “Ability to gradually migrate” does answer the question

                                                                          2. 4

                                                                            completely different languages and runtimes?

                                                                            it’s my understanding that sorbet, and the new type checking functionality built in to ruby 3 using RBS can be implemented gradually in existing ruby projects

                                                                            crystal would require a rewrite

                                                                            1. 1

                                                                              I am more interested in that exactly: What have people’s experiences been between RBS and Sorbet? They seem to approach the same problem, but I suspect they’re not entirely overlapping. As such, I find myself wanting to gather the experiences of someone who has tried both before deciding which I might gradually introduce into a project.

                                                                              1. 6

                                                                                I wrote an online poker virtual table in ruby without any typechecking at the start of the pandemic. It went swimmingly.

                                                                                After playing for a few weeks I realized about a dozen tweaks I wanted to make. Diving back into the code was a little difficult and I grabbed Sorbet (my first go at it) and I found it really helped me keep things straight. I used the types sparingly at first, mainly to document my assumptions about parameters for certain functions. In some places the type signatures got complicated and I took that as a hint to refactor. Decent experience all around. The worst part was the way it loaded gems when probing for type signatures in my dependencies. Thankfully that was a smaller project, probably 2k LoC with a half dozen dependencies. I can’t imagine how a large rails application would fare in that process.

                                                                                Later RBS was released and I figured I’d port my game over to it since the official nature of RBS seemed to challenge Sorbet and the future might demand I make the change. I didn’t like any part of it. The definitions being in a separate file was probably the worst. It meant that those useful notes about my assumptions (in the form of type signatures) were an extra step away. The error messages coming from the Steep tool were significantly less understandable than the Sorbet messages. It ran slower in most cases too.

                                                                                My current day job doesn’t use ruby but if it did I wouldn’t necessarily advocate for my team to adopt either tool. If someone else did, I’d be happy to help bring in Sorbet and I would argue strongly against RBS. The experience of no type checking was better than RBS.

                                                                                1. 1

                                                                                  Thanks for sharing your experiences! I’ll probably try out Sorbet first on a smaller project, and get a feel for it.

                                                                            1. 4

                                                                              To bring this full circle, on the “tiny personal programs” thread, I mention a URL shortener I wrote while I was learning Go. Another post of yours (Writing Go CLIs With Just Enough Architecture) inspired me to rewrite it from scratch with a cleaner overall structure. So, thanks for that.

                                                                              1. 2

                                                                                Great to hear! :-)

                                                                                1. 1

                                                                                  Thanks for the link! Can’t help but notice the lack of mention of Cobra or Viper. Especially when there is a sizable section on the flag package.

                                                                                  I’ve used cobra a couple times now, and see no reason to stop. As a new Gopher I’m still trying to figure out Viper. I wonder what the author’s thoughts on these are.

                                                                                  I find myself making quite a few of these, just like the OP. Most recently I started a new CLI to display kanban boards in the terminal from my Nextcloud Deck instance.

                                                                                  1. 3

                                                                                    I don’t like Cobra or Viper. They seem way too complicated for what they do. The Go standard library flags are fine. A thing a lot of people don’t know is that it accepts -option or --option. If you want short named flags, you can just make two flags point to the same variable. I wrote an extension to the flag package (inspired by @peterbourgon’s ff package) that can read environment variables in as flags and just use that exclusively. I also don’t really believe in subcommands. Just ship two binaries for your two commands. There are very few spaces where Go is a good fit and the storage space matters (if the storage space matters, you’re probably in Rust territory anyway).
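To illustrate the short/long trick with nothing but the standard library (the flag names here are made up for the example, not taken from any particular project):

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Two flag names bound to the same variable give you a short and a long
	// spelling without any third-party library. Since the flag package treats
	// a single and a double dash the same way, -v, --v, -verbose and
	// --verbose all work.
	var verbose bool
	flag.BoolVar(&verbose, "v", false, "verbose output (shorthand)")
	flag.BoolVar(&verbose, "verbose", false, "verbose output")
	flag.Parse()

	fmt.Println("verbose:", verbose)
}
```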

                                                                                    1. 1

                                                                                      Two comments: first, I think I accidentally flagged this while trying to reply. Sorry (and ping @pushcx). I’m still figuring out if I can unflag it.

                                                                                      I can’t speak for the OP, but so far I’m happy just to use flag from the standard library. I don’t mind the lack of long flags, and I like sticking to Go’s standard library and default community style as much as possible. (Update: as @carlmjohnson points out in his comment, I was wrong to imply that you can’t have long options: flag treats -option and --option as equivalent. I think I’ve learned and forgotten this several times already.)

                                                                                      My only complaint about the flag package in the standard library is that I’d like more fine-grained control over where usage messages go. If a user asks for help, then the usage should appear on stdout, but if the usage is printed as the result of user error, then I’d like to send it to stderr. You can do this using the standard library, but it takes a bit of wrangling: https://paste.sr.ht/~telemachus/c1b0b50040dbee19924441d0e1fd39b89eedfc09.
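A rough sketch of that wrangling (this is not the linked paste, and the tool and flag names are invented for illustration): create the FlagSet with ContinueOnError, silence its automatic output, and route the usage text yourself depending on whether Parse returned flag.ErrHelp or a real error.

```go
package main

import (
	"errors"
	"flag"
	"fmt"
	"io"
	"os"
)

func main() {
	fs := flag.NewFlagSet("mytool", flag.ContinueOnError)
	name := fs.String("name", "world", "who to greet")

	// Suppress the FlagSet's automatic usage output so we decide where it goes.
	fs.SetOutput(io.Discard)

	err := fs.Parse(os.Args[1:])
	switch {
	case errors.Is(err, flag.ErrHelp):
		// Help was requested explicitly: usage belongs on stdout.
		fs.SetOutput(os.Stdout)
		fmt.Fprintln(os.Stdout, "usage: mytool [flags]")
		fs.PrintDefaults()
		os.Exit(0)
	case err != nil:
		// A genuine parse error: report the error and the usage on stderr.
		fs.SetOutput(os.Stderr)
		fmt.Fprintln(os.Stderr, err)
		fs.PrintDefaults()
		os.Exit(2)
	}

	fmt.Println("hello,", *name)
}
```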

                                                                                1. 1

I use Cloudflare; regardless of whether I end up using their other services, they offer nice DNS management APIs and provide domain names at cost.

                                                                                  1. 31

                                                                                    LE is clearly a NOBUS project.

                                                                                    This seems like a very baseless accusation.

                                                                                    1. 21

                                                                                      Also a wrong one. LE could impersonate your site, but it will be visible via Certificate Transparency. If you have opted out of getting a cert entirely, then any CA will be able to do this.

                                                                                      1. 4

How could LE impersonate your site without control over DNS? Or are you assuming a bad actor that circumvents the challenges?

                                                                                        1. 1

                                                                                          I used “impersonate” as a shorthand for “issue a valid X.509 certificate for your site”, but yeah - it wouldn’t be useful unless LE could get on-path somehow, e.g. MITM or DNS

                                                                                          1. 1

                                                                                            is it MITM or DNS or MITM AND DNS?

                                                                                            they would need to MITM let’s encrypt’s proof mechanisms (the machines that do the HTTP or DNS challenges)

                                                                                            then of course they’d need to MITM the person they’re attacking too… just seems infeasible. on the other hand if you lose control of DNS all bets are off

                                                                                      2. 12

Especially since there are at least 3 or 4 free/gratis certificate providers using the ACME protocol.

                                                                                        1. 2

                                                                                          Could you point to one or two? Are they broadly supported by major browser vendors?

                                                                                          1. 2

                                                                                            As of now, I know of:

                                                                                            • Let’s Encrypt (USA)
                                                                                            • Buypass (Norway, horrible pun)
                                                                                            • ZeroSSL (USA, EU & UK)

Meaning that if you want to avoid USA-based CAs, your only option is Buypass. If you want to obtain certificates through other means than ACME, I think ZeroSSL is the only option (I know the other two don’t allow it), but they have encumbered this feature with a reCaptcha tracker, so I’ve opted not to try it.

                                                                                            1. 1

                                                                                              Thanks!

                                                                                      1. 30

SQLite has so far never let me down, and has enabled some things that would otherwise be in the “almost impossible” category. I still regularly come across amazing features I had no idea about (like the session extension), and it keeps shipping big new features while always remaining 100% backwards compatible. Upgrading SQLite is a real joy: just replace sqlite3.{c,h} with the latest version, and eh yeah, that’s about it…

                                                                                        fish is a pleasure to use as a daily shell - very thankful that someone has taken on the very thankless task of making a new shell and actually thinking about the ergonomics first (“Finally, a command line shell for the 90s” is the perfect slogan for this)

                                                                                        1. 7

I became a “fisher” last year and I haven’t had a reason to regret that move. Contributed some missing auto-completions to it as well.

                                                                                          1. 4

I love Fish’s ergonomics, but I actually wish it were Bash-compatible. It’s just a bit of a pain when you need to integrate scripts that don’t have a native Fish version; it’s slower and often doesn’t quite work (even with Bass). Nu is another shell to look at, with even more radical choices.

                                                                                            1. 3

                                                                                              I installed Fish a few months ago and haven’t looked back, it is such a pleasure to use.

                                                                                              1. 2

                                                                                                Fish was the big thing I missed when I moved from Linux to Windows.

                                                                                                1. 1

As a daily WSL user, I’m wondering why you’d make the move.

                                                                                                  1. 2

                                                                                                    AutoHotKey.

                                                                                                    1. 1

                                                                                                      i was just about to write “you should write an article about how you use autohotkey” because it seems like it would be interesting

                                                                                                      then i found https://www.hillelwayne.com/post/ahk/

                                                                                                2. 1

I really like fish. Sadly I use reverse-incremental search very often in zsh, and the lack of it makes fish too hard for me to use. fish has very good reverse-prefix-incremental search, but since I commonly aim to resurrect commands from history by a mid-line partial match, it doesn’t work for my use case.

                                                                                                  I was using fish + Starship and now I use zsh + oh-my-zsh + Starship and it’s good enough.

                                                                                                  1. 8

                                                                                                    I use fzf to fill this hole in fish, via its ctrl-r binding

                                                                                                    1. 2

                                                                                                      Absolute game changer! Thank you!

                                                                                                    2. 2

                                                                                                      I have been using bash with the vi editing mode for years and it was only last month that I learned that you can press / in command mode to search command history. That’s completely changed how I use the shell (I can never remember the normal history search shortcut).

                                                                                                      FreeBSD 14 is replacing csh with sh as the default shell and the up-arrow behaviour from csh was the big feature that everyone insisted sh needed before it could replace csh (FreeBSD’s csh up-arrow searched for things in the history matching the typed prefix). I still wouldn’t choose to use either as a daily driver, but they’re now both fine for occasional use.

                                                                                                      1. 1

Impressed you are able to use bash’s vi mode. I find it rather unintuitive and prefer Emacs mode, using C-x C-e to drop into nvim, or, when I’m in Neovim’s terminal, just going to Normal mode.

                                                                                                        Strange because I have vim everywhere else. Just not here.

                                                                                                        1. 2

                                                                                                          I’ve written four books, a PhD thesis, a dozen papers, and at least a couple of hundred thousand lines of code in vim. At this point, convincing my fingers that any text input field is not vim is probably a lost cause. I will probably end this post by typing :wq and then have to delete it and hit post. Even if I don’t have vim mode, I hit b to go back in the command line and then have to delete the ^B.

                                                                                                  2. 2

                                                                                                    Yep, sqlite is the first thing that came to mind when reading this thread. I’ve done a lot of data wrangling over the past year, and sqlite has come in clutch so many times.

                                                                                                    It pairs well with pandas!

                                                                                                  1. 12

                                                                                                    Ruby changed my mind about so many things - importance of type safety, importance of aesthetics, OO & controlled mutation.

It’s something I recommend anyone give a serious go. Don’t read a bunch of books, just make a small fun project. Don’t try to make it FP; you will feel when you’re pulling against the language instead of embracing it. It’s the first language I used that doesn’t feel like it has an agenda (in a good way) other than to just be a pleasure to use.

EDIT: An important aside, because I find people commonly getting this wrong: ruby is much more than rails. Try a project that is just plain ruby. I personally really fell in love with ruby when I started writing my own language in it.

                                                                                                    1. 5

                                                                                                      +1 to that! I love using Ruby outside web development. Creating small scripts and CLIs with Ruby is :chef-kiss:

                                                                                                      1. 3

Don’t try to make it FP; you will feel when you’re pulling against the language instead of embracing it

                                                                                                        Hard disagree, what I love about Ruby is that it blends OOP and FP approaches together.

                                                                                                        1. 1

                                                                                                          upvoted… my ruby is very FP and that’s how i likes it

the ruby style guide, which a big portion of linters is based on and which is more or less considered a standard across most of the industry, encourages you to write in a functional style

when i’m writing ruby i make it my mission to do so; it’s rarely a problem unless i come across a method or gem someone wrote that was meant to be non-functional

                                                                                                        2. 1

                                                                                                          I came at it from a different angle…

I had extensively used Perl and a Smalltalk derivative.

                                                                                                          Meeting Ruby felt like coming home to something better than both.

Although I’ll admit my Ruby code has changed over the years, becoming far, far more FP-like and less Perl-like.

                                                                                                          1. 1

                                                                                                            Yes, indeed, “just plain Ruby” is both a joy and a pleasure! But with great power comes great responsibility yada yada…

                                                                                                          1. 19

If the project allows it, Go and sqlite3 can be a wonderful combination. For a recent web service [1], we had billions of rows (and documents) stored in various sqlite3 databases, between 250 and 600GB in total. Plain vanilla net/http, sqlx and go-sqlite3 got us to hundreds of requests per second - resulting at times in over 10K queries per second (on a single m4.2xlarge equivalent). Now that’s not that impressive, but what makes this setup enjoyable is its simplicity and the prospect of low maintenance.

                                                                                                            [1] lightning talk: https://github.com/miku/dwstalk
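As a rough sketch of the overall shape of such a setup (using plain database/sql rather than sqlx; the table, column, and route names are invented for the example and not taken from the linked service):

```go
package main

import (
	"database/sql"
	"encoding/json"
	"log"
	"net/http"

	_ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver
)

func main() {
	// Hypothetical read-only lookup service backed by a single sqlite3 file.
	db, err := sql.Open("sqlite3", "data.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	http.HandleFunc("/doc", func(w http.ResponseWriter, r *http.Request) {
		var blob string
		err := db.QueryRow("SELECT blob FROM docs WHERE id = ?", r.URL.Query().Get("id")).Scan(&blob)
		if err == sql.ErrNoRows {
			http.NotFound(w, r)
			return
		} else if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]string{"doc": blob})
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```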

                                                                                                            1. 6

                                                                                                              That is pretty impressive! I know this isn’t a tech silver bullet (nothing is) but I do think it’s an underutilized pattern right now. There is a time and a place for giant database servers/services and there’s a time when a file sitting on a hard drive will do just fine.

                                                                                                              1. 2

                                                                                                                Ah, don’t undersell it. IMO, while hundreds of rq/s isn’t interesting for a ping-pong type webserver benchmark, it’s pretty respectable for something that’s actually reading information from persistent storage and doing real work.

                                                                                                                1. 1

                                                                                                                  as a devops evangelist (perhaps more towards the ops side) it’s a refreshing pattern… this is the kind of stuff ops people love to run

                                                                                                                  i wonder if that’s who’s building a lot of the systems designed like this, the type of devs who appreciate what it takes to run a service

                                                                                                                1. 2

                                                                                                                  If it’s stupid, but it works … it’s not stupid.

                                                                                                                  Well, maybe sometimes it’s a little bit dumb. I’ll take dumb but solves a problem over not solving the problem though.

                                                                                                                  1. 1

                                                                                                                    It does expose your TOTP code to the network.

                                                                                                                    1. 1

                                                                                                                      It is a fun hack, nothing anyone should use.

                                                                                                                      1. 2

                                                                                                                        i feel like maybe we should discuss that…

is it exposing your TOTP code to the network? isn’t the whole point of TOTPs that anyone knowing a TOTP would not be able to derive the underlying algorithm?

                                                                                                                        is it even possible to guess a TOTP given knowledge of n previous TOTPs? i do know it’s fairly easy to brute force a TOTP when there is no rate limiting in place, and i think this would definitely be one of those cases

                                                                                                                        1. 2

                                                                                                                          Since it’s time-based, and nothing that I see (from my quick skim) is keeping track of which codes have been used, a network observer who sees what IP addresses you’re talking to should be able to bypass your TOTP protection as long as they connect to the same IP address within that 30 second window or whatever.

                                                                                                                          1. 2

                                                                                                                            I checked a few TOTP implementations out there and not all of them invalidate codes after use. Github for example happily accepts the same code multiple times within the same time period.

I agree that blacklisting codes after use is good practice, but it’s just one more safety measure. Only checking the TOTP without blacklisting is not the same as not checking a TOTP at all.
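As an illustration of what blacklisting amounts to, here is a minimal sketch that remembers which codes were already accepted in the current time step and rejects replays. verifyTOTP is a hypothetical placeholder (e.g. an RFC 6238 implementation), not code from the project being discussed:

```go
package main

import (
	"fmt"
	"strconv"
	"sync"
	"time"
)

// verifyTOTP stands in for whatever actually checks the code against the
// shared secret; hypothetical here.
func verifyTOTP(code string, now time.Time) bool { return code == "123456" }

type replayGuard struct {
	mu   sync.Mutex
	used map[string]bool // key: "<time step>:<code>"
}

func newReplayGuard() *replayGuard {
	return &replayGuard{used: make(map[string]bool)}
}

// Accept returns true only the first time a valid code is seen within its
// 30-second time step; later attempts with the same code are replays.
// In a real system, entries from past time steps could be pruned.
func (g *replayGuard) Accept(code string, now time.Time) bool {
	if !verifyTOTP(code, now) {
		return false
	}
	key := strconv.FormatInt(now.Unix()/30, 10) + ":" + code
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.used[key] {
		return false
	}
	g.used[key] = true
	return true
}

func main() {
	g := newReplayGuard()
	now := time.Now()
	fmt.Println(g.Accept("123456", now)) // true  (first use)
	fmt.Println(g.Accept("123456", now)) // false (replay within the same step)
}
```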

                                                                                                                            1. 2

                                                                                                                              Github for example happily accepts the same code multiple times within the same time period.

                                                                                                                              That’s against the specs and a pretty serious bug. It’s called “one time” for a reason.

                                                                                                                            2. 1

                                                                                                                              If they can guess the IP then they have already broken your TOTP anyway…

                                                                                                                              1. 4

                                                                                                                                Somebody who can watch your IP traffic (watch, not decrypt!) does not need to guess the IP.

                                                                                                                                1. 3

                                                                                                                                  sure, but they still would need the SSH key to access the machine.

                                                                                                                                  1. 1

                                                                                                                                    TOTP is supposed to be the second factor that protects you when someone has stolen your first factor. If your security is only as good as the first factor, then you don’t have 2FA.

                                                                                                                                  2. 2

                                                                                                                                    Oh, sure, so they have a handful of seconds to try cracking your password before it rotates.

                                                                                                                                    1. 1

                                                                                                                                      Absolutely; that’s why a solution like fail2ban is probably the better idea and more comfortable to use.

                                                                                                                                      1. 1

                                                                                                                                        Yes, so at least it would provide that much protection – reducing the window of exposure.

                                                                                                                            3. 1

How? All the IP addresses exist; it just changes the firewall rules. You would have to brute-force the code within the time window to find it, no?

                                                                                                                              1. 2

there’s no TLS for the TOTP “code”; it’s visible in the clear as the destination IP of the connection

                                                                                                                          1. 11

                                                                                                                            With regard to Rails and Django specifically then that is a matter beyond comprehension. Neither Ruby nor Python should ever have been utilized for web development. They are simply ill-suited because they are too slow.

Such a broad statement seems very shallow. Slow in what way? Development is MUCH faster in Rails than using whatever Go framework the author has in mind. And in many applications this is the “speed” that matters. There are plenty of billion-dollar companies doing just fine providing services with Rails; your tiny web app doesn’t need to scale to millions of users.

I don’t disagree that people could learn a lot from coding a Go web application, but to broadly pronounce Ruby (or really even Python) as the wrong tool for any web development job seems ignorant.

                                                                                                                            I agree with the author on the sentiments regarding super heavyweight front-end JavaScript, though. The JavaScript/SPA model is a scourge.

                                                                                                                            1. 12

                                                                                                                              Yes, the author completely misses that the reason that Ruby+Rails and Python+Django became popular is that developer time has been more costly than computing resources for decades now.

                                                                                                                              1. 3

Whose computing resources? Why is wasting the time of users with old computers or phones and slow connections considered such an acceptable tradeoff?

                                                                                                                                1. 5

                                                                                                                                  Ruby and Python run on the server, not the client. The speed would not be affected by the user having a fast or slow computer.

                                                                                                                                2. 1

Actually, the author doesn’t miss it, but explicitly mentions it.

                                                                                                                                3. 5

They also miss the fact that you can have a super slow back-end and still have a fast experience overall. My work’s service is running on rails and is not very fast, but most likely you’ll never notice, because almost every page will be served from a cache PoP close to you almost immediately.

                                                                                                                                  For many services it’s viable to literally rebuild affected views after an update and almost never hit the path that does database queries. Most services are not big enough for this to matter though…

                                                                                                                                1. 15

                                                                                                                                  I don’t get what this buys you over short lived and tightly scoped branches deleted on Pull/Merge…

                                                                                                                                  Sometimes I like having multiple commits in a PR, it tells a story I would like told

                                                                                                                                  1. 6

                                                                                                                                    This is something lost in the translation of changeset evolution in Mercurial to branchless Git: in Mercurial, the entire history of that changeset is cleanly preserved (and shareable!) in the repo history, so you can see the story when you want it, but just the overall change in the common case. Git really doesn’t provide a good facility to do that (at least, not today), which is where git-branchless ends up with the limitation you’re highlighting. I’m unfortunately not sure how you’d really navigate around that without some changes to core Git’s management of refs, at a bare minimum.

                                                                                                                                    1. 1

                                                                                                                                      git-branchless keeps the history of the commit in your repository, but unfortunately, this information can’t easily be synced between machines at present.

                                                                                                                                      I’ve been thinking about if it’s possible to add this to Git while still supporting common workflows. One way is to store obsolescence markers and rewrite edges as files in the source tree, under some .branchless directory. Since these structures form a CRDT, there should be an encoding which is conflict-free under Git’s merge rules as well (for example, say that there is a rewrite edge from abc to def iff there exists a file .branchless/rewrite/abc/def in the source tree).

                                                                                                                                      However, this doesn’t ensure that the pointed-to-objects are actually kept live. There are some encodings which would keep the objects live using a lot of parent commits, which might work when communicating directly with a Git server, or when using services like Github where you can create a multi-parent commit as part of a pull request, but I don’t see a way to support a patch-based workflow, since Git patches fundamentally don’t represent merge commits (or other data).

                                                                                                                                    2. 3

                                                                                                                                      I’m not using a branchless git tool now, but this sounds very attractive to me. I think what it gets you is that the process is automated.

For me as a solo dev, I usually start work on something, realize that I should be in a branch, make a branch (which means thinking up some short arbitrary name for what I’m doing), get to where I think I’m done, do a pull request and review my work, and then finally merge it in, but then I have to go back and clean things up by switching back to main and pulling, deleting the local branch, etc. It’s usually sort of a waste of time, since for me having multiple live branches is the exception. It means I didn’t finish work on a feature before I had to do something else first. If I could get a tool to handle it for me, that would be nice.

                                                                                                                                      1. 3

I also often realize in hindsight that I’ve worked on a branch for “too long”, and Magit has a perfect (IMO) solution for this. It does all of your steps with the two keys “b s” (see magit-branch-spinoff here: https://magit.vc/manual/magit/Branch-Commands.html). This is one of the many user-friendly interfaces Magit provides, and I think it hits a sweet spot between magic and shortcuts to things I already know about.

                                                                                                                                      2. 2

                                                                                                                                        Try it out – it’s really nice. And you can still have multiple commits in a PR.