1. 2

    I think it would be good if Go were to become an enterprise programming language, because it’s absolutely unfit for teaching algorithms – which would mean that universities wouldn’t use it in their programming courses either, and would perhaps consider using something that teaches computational thinking instead of boilerplate coding.

    1. 7

      Why is it unfit for teaching algorithms?

      1. 1

        Thinking about it, “unfit” probably isn’t the best word. Let’s say that if you want to have nice code, you’re going to have to write nice Go, which is its own discipline, seeing as Go has its own peculiar limitations (which might, and often do, make sense from a specific engineering standpoint). And it’s this gap in method between Go and “regular imperative languages” that employ OO, generics, … that would make Go a very unusual choice as a language for an (introductory) algorithms course.

        1. 1

          I’d question whether an algorithms course in computer science should be teaching you OO and generics, or whether that’s simply the way it’s been done up until now because a lot of algorithms courses have been taught in Java.

          I’d point out that many courses are taught using Python. It has open-ended type support, in a sense, but it’s so seamless that it doesn’t actually teach you anything about generics – you simply ignore the issue at that point.

          Regarding OO – the industry has broadened its notions of acceptable software, and OO is no longer a must-have. It hasn’t been for a while. It’s probably better taught in another course, to be honest.

          1. 2

            To be clear, I’m not a fan of using Java either, and I don’t see a necessity to teach OO and Generics (as examples). In most cases, Python would be much more preferable, yes.

            But you seem to have misunderstood my point: what you learn about OO and generics (let’s generalize: software design, something that’s usually burdened onto algorithms courses) in Java is more easily applicable to other languages – since Java ultimately isn’t that different – than what you would learn if you were taught good Go design (for example, the sort package).

      2. 5

        I feel like your response contradicts itself. You think it would be good for enterprise and that’s good because Universities wouldn’t teach it? That sounds bad to me.

        1. 5

          No, universities wouldn’t use it for algorithms classes, as they commonly use Java nowadays, which is what I care about.

      1. 32

        To me the big deal is that Rust can credibly replace C, and offers enough benefits to make it worthwhile.

        There are many natively-compiled languages with garbage collection. They’re safer than C and easier to use than Rust, but by adding GC they’ve exited the C niche. 99% of programs may work just fine with a GC, but for the rest the only practical options were C and C++ until Rust showed up.

        There were a few esoteric systems languages or C extensions that fixed some warts of C, but leaving the C ecosystem has real costs, and I could never justify use of a “weird” language just for a small improvement. Rust offered major safety, usability and productivity improvements, and managed to break out of obscurity.

        1. 38

          Ada provided everything except ADTs and linear types, including seamless interoperability with C, 20 years before Rust. Cyclone was Rust before Rust, and it was abandoned in a similar state to the one Rust was in when it took off. Cyclone is dead, but Ada got a built-in formal verification toolkit in its latest revision – for some, that alone can be a reason to pick it over anything else for a new project.

          I have nothing against Rust, but the reason it’s popular is that it came at the right time, in the right place, from a sufficiently big-name organization. It’s one of many languages based on those ideas that, fortunately, happened to succeed. And no, when it first got popular it wasn’t really practical. None of these points makes Rust bad. One should just always see the bigger picture, especially when it comes to heavily hyped things. You need to know the other options to decide for yourself.

          Other statically-typed languages allow whole-program type inference. While convenient during initial development, this reduces the ability of the compiler to provide useful error information when types no longer match.

          Only in languages that cannot unambiguously infer the principal type. Whether to make a tradeoff between that and support for ad hoc polymorphism is subjective.

          1. 15

            I saw Cyclone when it came out, but at the time I dismissed it as “it’s C, but weird”. It had the same basic syntax as C, but added lots of pointer sigils. It still had the same C preprocessor and the same stdlib.

            Now I see it had a feature set much closer to Rust’s (tagged unions, patterns, generics), but Rust “sold” them better. Rust used these features for Result, which is a simple yet powerful construct. Cyclone could have done that, but didn’t. It kept nullable pointers and added Null_Exception.
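
            To make the Result point concrete, here’s a minimal std-only sketch (not code from the thread): the failure case is part of the type, so the compiler forces callers to handle it, whereas a nullable pointer defers that check to runtime.

            ```rust
            use std::num::ParseIntError;

            // Result is just a tagged union of success and failure; the caller
            // must match on both variants before using the value.
            fn parse_port(s: &str) -> Result<u16, ParseIntError> {
                s.parse::<u16>()
            }

            fn main() {
                match parse_port("8080") {
                    Ok(port) => println!("port: {}", port),
                    Err(e) => println!("invalid: {}", e),
                }
            }
            ```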

            1. 12

              Ada provided everything except ADTs and linear types

              Unfortunately for this argument, ADTs, substructural types and lifetimes are more exciting than that “everything except”. Finally the stuff that is supposed to be easy in theory is actually easy in practice, like not using resources you have already cleaned up.
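
              As a sketch of “not using resources you have already cleaned up” (the Resource type here is invented for illustration): consuming a value by move makes any later use a compile error, so use-after-cleanup is ruled out statically.

              ```rust
              // A toy resource whose cleanup runs in Drop, as it would for a
              // file or socket; the type and names are purely illustrative.
              struct Resource(&'static str);

              impl Drop for Resource {
                  fn drop(&mut self) {
                      println!("cleaned up {}", self.0);
                  }
              }

              // Taking the resource by value moves it; cleanup runs on return.
              fn release(r: Resource) {
                  println!("releasing {}", r.0);
              }

              fn main() {
                  let r = Resource("conn");
                  release(r);
                  // println!("{}", r.0); // rejected: use of moved value `r`
              }
              ```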

              Ada got a built-in formal verification toolkit in its latest revision

              How much of a usability improvement is using these tools compared to verifying things manually? What makes types attractive to many programmers is not that they are logically very powerful (they are usually not!), but rather that they give a super gigantic bang for the buck in terms of reduction of verification effort.

              1. 17

                I would personally not compare Ada and Rust directly as they don’t even remotely fulfill the same use-cases.

                Sure, there have been languages that have done X, Y, Z before Rust (the project itself does not lay false claim to inventing those parts of the language which may have been found elsewhere in the past), but the actual distinguishing factor for Rust that places it into an entirely different category from Ada is how accessible and enjoyable it is to interact with while providing those features.

                If you’re in health or aeronautics, you should probably be reaching for the serious, deep toolkit provided by Ada, and I’d probably side with you in saying those people probably should have been doing that for the last decade. But Ada is really not for the average engineer. It’s an amazing, albeit complex, language that not only represents a long history of incredible engineering but also a very real barrier to entry, one that’s simply incomparable to Rust’s.

                If, for example, I wanted today to start writing from scratch a consumer operating system, a web browser, or a video game as a business venture, I would guarantee you Ada would not even be mentioned as an option to solve any of those problems, unless I wanted to sink my own ship by limiting myself to pick from ex-government contractors as engineers, whose salaries I’d likely be incapable of matching. Rust on the other hand actually provides a real contender to C/C++/D for people in these problem spaces, who don’t always need (or in some cases, even want) formal verification, but just a nice practical language with a systematic safety net from the memory footguns of C/C++/D. On top of that, it opens up these features, projects, and their problem spaces to many new engineers with a clear, enjoyable language free of confusing historical baggage.

                1. 6

                  Have you ever used Ada? Which implementation?

                  1. 15

                    I’ve never published production Ada of any sort and am definitely not an Ada regular (let alone pro) but I studied and had a fondness for Spark around the time I was reading “Type-Driven Development with Idris” and started getting interested in software proofs.

                    In my honest opinion, the way the base Ada language is written (simple, and plain-operator heavy) lends itself really well to extension languages, but it can also make it difficult for beginners to distinguish at times which class of concept is in play, whereas Rust’s syntax draws a clear and immediate distinction between blocks (the land of namespaces), types (the land of names), and values (the land of data). In terms of cognitive load, then, it feels as though these two languages communicate at different levels: Rust communicates in the mode of raw values and their manipulation through borrows, while the lineage of Ada languages communicates at a level that, in my amateur Ada-er view, centers on expressing properties of your program (and I don’t just mean the Spark stuff, obviously). I wasn’t even born when Ada was created, so I can’t say for sure without becoming an Ada historian (not a bad idea…), but this seems like a product of Ada’s heritage (just as Rust was so obviously written to look like C++).

                    To try to clarify this ramble of mine: in my schooling experience, many similarly young programmers are taught almost exclusively to program at an elementary level of abstract instructions, with the details of those instructions removed, and then, after a couple of type-level incantations, get a series of algorithms and their explanations thrown in their face. Learning to consider their programs specifically in terms of expressing properties of those programs’ operations becomes a huge step out of that starting box (one that some don’t leave long after graduation). I think something Rust’s syntax does well (possibly by mistake) is fool the amateur user into expressing properties of their programs by accident, while that expression feels like just a routine on the way to the meat of a program’s procedures. It feels to me that expressing those properties is intrinsic to speaking Ada, and thus presents a barrier intrinsic to the programmer’s understanding of their work – one which, given a different popular curriculum, could probably be rendered as weak as paper to break through.

                    Excuse me if these thoughts are messy (they’ve been edited many times to improve that), but beyond the more popular issue of familiarity, they reflect my honest experience of feeling more quickly “at home” moving from writing Rust to understanding Rust than moving from writing some form of Ada to understanding the program I get.

                2. 5

                  Other statically-typed languages allow whole-program type inference. While convenient during initial development, this reduces the ability of the compiler to provide useful error information when types no longer match.

                  Only in languages that cannot unambiguously infer the principal type. Whether to make a tradeoff between that and support for ad hoc polymorphism is subjective.

                  OCaml can unambiguously infer the principal type, and I still find myself writing the type of top level functions explicitly quite often. More than once have I been guided by a type error that only happened because I wrote the type of the function I was writing in advance.

                  At the very least, I check that the type of my functions match my expectations, by running the type inference in the REPL. More than once have I been surprised. More than once that surprise was caused by a bug in my code. Had I not checked the type of my function, I would catch the bug only later, when using the function, and the error message would have made less sense to me.

                  1. 2

                    At the very least, I check that the type of my functions match my expectations, by running the type inference in the REPL

                    Why not use Merlin instead? Saves quite a bit of time.

                    That’s a tooling issue too of course. Tracking down typing surprises in OCaml is easy because the compiler outputs type annotations in a machine-readable format and there’s a tool and editor integrations that allow me to see the type of every expression in a keystroke.

                    1. 2

                      Why not use Merlin instead? Saves quite a bit of time.

                      I’m a dinosaur who never took the time to learn even of the existence of Merlin. I’m kinda stuck in Emacs’ Tuareg mode. It works for me for small projects (all my OCaml projects are small).

                      That said, my recent experience with C++ and QtCreator showed me that having warnings at edit time is even more powerful than a REPL (at least as long as I don’t have to check actual values). That makes Merlin look very attractive all of a sudden. I’ll take a look, thanks.

                3. 5

                  Rust can definitely credibly replace C++. I don’t really see how it can credibly replace C. It’s just such a fundamentally different way of approaching programming that it doesn’t appeal to C programmers. Why would a C programmer switch to Rust if they hadn’t already switched to C++?

                  1. 43

                    I’ve been a C programmer for over a decade. I’ve tried switching to C++ a couple of times, and couldn’t stand it. I’ve switched to Rust and love it.

                    My reasons are:

                    • Robust, automatic memory management. I have the same amount of control over memory, but I don’t need goto cleanup.
                    • Fearless multi-core support: if it compiles, it’s thread-safe! rayon is much nicer than OpenMP.
                    • Slices are awesome: no array to pointer decay. Work great with substrings.
                    • Safety is not just about CVEs. I don’t need to investigate memory murder mysteries in GDB or Valgrind.
                    • Dependencies aren’t painful.
                    • Everything builds without fuss, even when supporting Windows and cross-compiling to iOS.
                    • I can add two signed numbers without UB, and checking if they overflow isn’t a party trick.
                    • I get some good parts of C++ such as type-optimized sort and hash maps, but without the baggage C++ is infamous for.
                    • Rust is much easier than C++. Iterators are so much cleaner (just a next() method). I/O is a Read/Write trait, not a hierarchy of iostream classes.
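
                    Two of these bullets can be shown with std-only code (a sketch, not the commenter’s code): checked signed arithmetic, and slices that carry their length instead of decaying to a pointer.

                    ```rust
                    fn main() {
                        // Checked arithmetic: signed overflow yields None
                        // instead of undefined behavior.
                        assert_eq!(i32::MAX.checked_add(1), None);
                        assert_eq!(2_i32.checked_add(3), Some(5));

                        // Slices know their length, so a subrange doesn't
                        // decay to a bare pointer the way a C array does.
                        let data = [10, 20, 30, 40];
                        let tail: &[i32] = &data[1..];
                        println!("len={} sum={}", tail.len(), tail.iter().sum::<i32>());
                    }
                    ```
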
                    1. 6

                      I also like Rust and I agree with most of your points, but this one bit seems not entirely accurate:

                      Fearless multi-core support: if it compiles, it’s thread-safe! rayon is much nicer than OpenMP.

                      AFAIK Rust:

                      • doesn’t guarantee thread-safety — it guarantees the lack of data races, but doesn’t guarantee the lack of e.g. deadlocks;
                      • guarantees the lack of data races, but only if you didn’t write any unsafe code.
                      1. 20

                        That is correct, but this is still an incredible improvement. If I get a deadlock I’ll definitely notice it, and can dissect it in a debugger. That’s easy-peasy compared to data races.

                        Even unsafe code is subject to thread-safety checks, because “breaking” of Send/Sync guarantees needs separate opt-in. In practice I can reuse well-tested concurrency primitives (e.g. WebKit’s parking_lot) so I don’t need to write that unsafe code myself.

                        Here’s an anecdote: I wrote some single-threaded batch-processing spaghetti code. Since each item was processed separately, I decided to parallelize it. I changed iter() to par_iter() and the compiler immediately warned me that in one of my functions I’d used a 3rd-party library, which used an HTTP client library, which used an event loop library, which stored some event loop data in a struct without synchronization. It pointed out exactly where and why the code was unsafe, and after fixing it I had an assurance the fix worked.
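
                        That check comes from the Send/Sync traits. A std-only sketch of the same idea (rayon is a third-party crate, so this uses std::thread; the names are illustrative):

                        ```rust
                        use std::sync::Arc;
                        use std::thread;

                        fn main() {
                            // A non-thread-safe handle such as std::rc::Rc is not
                            // Send, so moving it into a thread is a compile error:
                            //     let local = std::rc::Rc::new(5);
                            //     thread::spawn(move || *local); // rejected
                            //
                            // Arc is Send + Sync, so the same shape compiles,
                            // with atomic reference counting underneath.
                            let shared = Arc::new(5);
                            let worker = {
                                let shared = Arc::clone(&shared);
                                thread::spawn(move || *shared * 2)
                            };
                            println!("result: {}", worker.join().unwrap());
                        }
                        ```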

                        1. 6

                          I share your enthusiasm. Just wanted to prevent a common misconception from spreading.

                          Here’s an anecdote: I wrote some single-threaded batch-processing spaghetti code. Since each item was processed separately, I decided to parallelize it. I changed iter() to par_iter() and the compiler immediately warned me that in one of my functions I’d used a 3rd-party library, which used an HTTP client library, which used an event loop library, which stored some event loop data in a struct without synchronization. It pointed out exactly where and why the code was unsafe, and after fixing it I had an assurance the fix worked.

                          I did not know it could do that. That’s fantastic.

                        2. 9

                          Data races in multi-threaded code are about 100x harder to debug than deadlocks in my experience, so I am happy to have an imperfect guarantee.

                          guarantees the lack of data races, but only if you didn’t write any unsafe code.

                          Rust application code generally avoids unsafe.

                          1. 4

                            Data races in multi-threaded code are about 100x harder to debug than deadlocks in my experience, so I am happy to have an imperfect guarantee.

                            My comment was not a criticism of Rust. Just wanted to prevent a common misconception from spreading.

                            Rust application code generally avoids unsafe.

                            That depends on who wrote the code. And unsafe blocks can cause problems that show up in places far from the unsafe code. Meanwhile, “written in Rust” is treated as a badge of quality.

                            Mind that I am a Rust enthusiast as well. I just think we shouldn’t oversell it.

                          2. 7

                            guarantees the lack of data races, but only if you didn’t write any unsafe code.

                            As long as your unsafe code is sound it still provides the guarantee. That’s the whole point, to limit the amount of code that needs to be carefully audited for correctness.

                            1. 2

                              I know what the point is. But proving things about code is generally not something that programmers are used to or good at. I’m not saying that the language is bad, only that we should understand its limitations.

                            2. 1

                              I find it funny that any critique of Rust needs to be prefixed with a disclaimer like “I also like Rust”, to fend off the Rust mob.

                          3. 11

                            This doesn’t really match what we see in our experience: a lot of organisations are investigating a replacement for C, and Rust is on the table.

                            One advantage that Rust has is that it actually lands between C and C++. It’s pretty easy to move towards a more C-like programming style without having to ignore half of the language (this comes from the lack of classes, etc.).

                            Rust is much more “C with Generics” than C++ is.

                            We currently see a high interest in the embedded world, even in places that skipped adopting C++.

                            I don’t think the fundamental difference in approach is as large as you make it (sorry for the weak rebuttal, but that’s hard to quantify). But also: approaches are changing, so that’s less of a problem for us, as long as we are effective at arguing for our approach.

                            1. 2

                              It’s just such a fundamentally different way of approaching programming that it doesn’t appeal to C programmers. Why would a C programmer switch to Rust if they hadn’t already switched to C++?

                              Human minds are sometimes less flexible than rocks.

                              That’s why we still have that stupid Qwerty layout: popular once for mechanical (and historical) reasons, used forever since. As soon as the mechanical problems were fixed, Sholes himself devised a better layout, which went unused. Much later, Dvorak devised another better layout, and it is barely used today. People thinking in Qwerty simply can’t bring themselves to take the time to learn the superior layout. (I know: I’m in a similar situation, though my current layout is not Qwerty.)

                              I mean, you make a good point here. And that’s precisely what makes me sad. I just hope this lack of flexibility won’t prevent C programmers from learning superior tools.

                              (By the way, I would choose C over C++ in many cases; I think C++ is crazy. But I also know ML (OCaml), a bit of Haskell, a bit of Lua… and that gives me perspective. Rust as I see it is a blend of C and ML, and though I have yet to write Rust code, the code I have read so far was very easy to understand. I believe I can pick up the language pretty much instantly. In my opinion, C programmers who only know C, awk and Bash are unreasonably specialised.)

                              1. 1

                                I tried to switch to DVORAK twice. Both times I started to get pretty quick after a couple of days but I cheated: if I needed to type something I’d switch back to QWERTY, so it never stuck.

                                The same is true of Rust, incidentally. Tried it out a few times, was fun, but then if I want to get anything useful done quickly it’s just been too much of a hassle for me personally. YMMV of course. I fully intend to try to build something that’s kind of ‘C with lifetimes’, a much simpler Rust (which I think of as ‘C++ with lifetimes’ analogously), in the future. Just have to, y’know, design it. :D

                                1. 3

                                  I too was tempted at some point to design a “better C”. I need:

                                  • Generics
                                  • Algebraic data types
                                  • Type classes
                                  • Coroutines (for I/O and network code, I need a way out of raw poll(2))
                                  • Memory safety

                                  With the possible exception of lifetimes, I’d end up designing Rust, mostly.

                                  1. 2

                                    I agree that you need some way of handling async code, but I don’t think coroutines are it, at least not in the async/await form. I still feel like the ‘what colour is your function?’ stuff hasn’t been solved properly. Any function with a callback (sort with a key/cmp function, filter, map, etc.) needs an async_ version that takes a callback and calls it with await. Writing twice as much code that’s trivially different by adding await in some places sucks, but I do not have any clue what the solution is. Maybe it’s syntactic. Maybe everything should be async implicitly and you let the compiler figure out when it can optimise things down to ‘raw’ calls.

                                    shrug

                                    Worth thinking about at least.

                                    1. 4

                                      Function colors are effects. There are two ways to solve this problem:

                                      1. To use polymorphism over effects. This is what Haskell does, but IMO it is too complex.
                                      2. To split large async functions into smaller non-async ones, and dispatch them using an event loop.

                                      The second approach got a bad reputation due to its association with “callback hell”, but IMO this reputation is undeserved. You do not need to represent the continuation as a callback. Instead, you can

                                      1. Define a gigantic sum type of all possible intermediate states of asynchronous processes.
                                      2. Implement each non-async step as an ordinary small function that maps intermediate states (not necessarily just one) to intermediate states (not necessarily just one).
                                      3. Implement the event loop as a function that, iteratively,
                                        • Takes states from an event queue.
                                        • Dispatches an appropriate non-async step.
                                        • Pushes the results, which are again states, back into the event queue.

                                      Forking can be implemented by returning multiple states from a single non-async step. Joining can be implemented by taking multiple states as inputs in a single non-async step. You are not restricted to joining processes that were forked from a common parent.

                                      In this approach, you must write the event loop yourself, rather than delegate it to a framework. For starters, no framework can anticipate your data type of intermediate states, let alone the data type of the whole event queue. But, most importantly, the logic for dispatching the next non-async step is very specific to your application.

                                      Benefits:

                                      1. Because the data type of intermediate states is fixed, and the event loop is implemented in a single centralized place, it is easier to verify that your code works “in all cases”, either manually or using tools that explicitly model concurrent processes using state machines (e.g., TLA+).

                                      2. Because intermediate states are first-order values, rather than first-class functions, the program is much easier to debug. Just stop the event loop at an early time and pretty-print the event queue. (ML can automatically pretty-print first-order values in full detail. Haskell requires you to define a Show instance first, but this definition can be generated automatically.)

                                      Drawbacks:

                                      1. If your implementation language does not provide sum types and/or pattern matching, you will have a hard time checking that every case has been covered, simply because there are so many cases.

                                      2. The resulting code is very much non-extensible. To add new asynchronous processes, you need to add constructors to the sum type of intermediate states. This will make the event loop fail to type check until you modify it accordingly. (IMO, this is not completely a drawback, because it forces you to think about how the new asynchronous processes interact with the old ones. This is something that you eventually have to do anyway, but some people might prefer to postpone it.)
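
                                      A minimal sketch of this style in Rust (all names invented for illustration): a sum type of intermediate states, small non-async step functions, and a hand-written event loop over a queue.

                                      ```rust
                                      use std::collections::VecDeque;

                                      // Sum type of all intermediate states of the
                                      // asynchronous processes.
                                      enum State {
                                          FetchRequested(u32),    // a request id to "fetch"
                                          FetchDone(u32, String), // id plus the fetched payload
                                          Finished(String),
                                      }

                                      // Each non-async step maps a state to zero or more
                                      // successor states.
                                      fn step(state: State, results: &mut Vec<String>) -> Vec<State> {
                                          match state {
                                              State::FetchRequested(id) => {
                                                  vec![State::FetchDone(id, format!("payload-{}", id))]
                                              }
                                              State::FetchDone(_, payload) => vec![State::Finished(payload)],
                                              State::Finished(payload) => {
                                                  results.push(payload);
                                                  vec![] // terminal: no successors
                                              }
                                          }
                                      }

                                      // The hand-written event loop: pop a state, dispatch
                                      // the appropriate step, push the successors back.
                                      fn main() {
                                          let mut queue: VecDeque<State> =
                                              vec![State::FetchRequested(1), State::FetchRequested(2)].into();
                                          let mut results = Vec::new();
                                          while let Some(state) = queue.pop_front() {
                                              queue.extend(step(state, &mut results));
                                          }
                                          println!("{:?}", results);
                                      }
                                      ```

                                      Because the states are plain first-order values, the whole queue can be inspected or pretty-printed at any point, which is the debuggability benefit described above.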

                                      1. 3

                                        I agree that you need some way of handling async code, but I don’t think coroutines are it

                                        Possibly. I actually don’t know. I’d take whatever lets me write code that looks like I’m dispatching an unlimited number of threads, but dispatches the computation over a reasonable number of threads, possibly just one. Hell, my ideal world is green threads, actually. Perhaps I should have led with that…

                                        Then again, I don’t know the details of the tradeoffs involved. Whatever lets me solve the 1M connections problem cleanly and efficiently works for me.

                              2. 5

                                I agree with @milesrout. I don’t think Rust is a good replacement for C. This article goes into some of the details of why - https://drewdevault.com/2019/03/25/Rust-is-not-a-good-C-replacement.html

                                1. 17

                                  Drew has some very good points. It’s a shame he ruins them with all the other ones.

                                  1. 25

                                    Drew has a rusty axe to grind: “Concurrency is generally a bad thing” (come on!), “Yes, Rust is more safe. I don’t really care.”

                                    Here’s a rebuttal of that awful article: https://telegra.ph/Replacing-of-C-with-Rust-has-been-a-great-success-03-27 (edit: it’s a tongue-in-cheek response. Please don’t take it too seriously: the original exaggerated negatives, so the response exaggerates positives).

                                    1. 11

                                      So many bad points from this post.

                                      • We can safely ignore the “features per year” comparison, since the documentation it is based on doesn’t follow the same conventions. I’ll also note that, while a Rust program written last year may look outdated (I personally don’t know Rust well enough to make such an assessment), it will still work (I’ve been told breaking changes are extremely rare).

                                      • C is not really the most portable language. Yes, C and C++ compilers, thanks to having decades of work behind them, target more devices than everything else put together. But no, those platforms do not share the same flavour of C and C++. There are simply too many implementation defined behaviours, starting with integer sizes. Did you know that some platforms had 32-bit chars? I worked with someone who worked on one.

                                        I wrote a C crypto library, and went out of my way to ensure the code was very portable, and it is. Embedded developers love it. There was no way, however, to ensure my code was fully portable. I right-shift negative integers (implementation-defined behaviour), and I use fixed-width integers like uint8_t (not supported on the DSP I mentioned above).

                                      • C does have a spec, but it’s an incomplete one. In addition to implementation defined behaviour, C and C++ also have a staggering amount of undefined and unspecified behaviour. Rust has no spec, but it still tries to minimise undefined behaviour. I expect this point will go away when Rust stabilises and we get an actual spec. I’m sure formal verification folks will want to have a verified compiler for Rust, like we currently have for C.

                                      • C has many implementations… and that’s actually a good point.

                                      • C has a consistent & stable ABI… and so does Rust, somewhat? OK, it’s opt-in, and it’s contrived. My point is, Rust does have an FFI which allows it to talk to the outside world. It doesn’t have to be at the top level of a program. On the other hand, I’m not sure what would be the point of a stable ABI between Rust modules. C++ at least seems to be doing fine without that.

                                      • Rust compiler flags aren’t stable… and that’s a good point. They should probably stabilise at some point. On the other hand, having one true way to manage builds and dependencies is a godsend. Whatever we’d use stable compiler flags for, we probably don’t want to depart from that.

                                      • Parallelism and concurrency are unavoidable. They’re not a bad thing; they’re the only thing that can help us cheat the speed of light, and with it single-threaded performance. The ideal modern computer is more likely a high number of in-order cores, each with a small amount of memory, and an explicit (exposed to the programmer) cache hierarchy – assuming performance and energy consumption trump compatibility with existing C (and C++) programs. Never forget that current computers are optimised for running C and C++ programs.

• Not caring about safety is stupid. Or selfish. Security vulnerabilities are often mere externalities, which you can ignore if they don’t damage your reputation to the point of affecting your bottom line. Yay Capitalism. More seriously, safety is a subset of correctness, and correctness is the main point of Rust’s strong type system and borrow checker. C doesn’t just make it difficult to write safe programs, it makes it difficult to write correct programs. You wouldn’t believe how hard that is. My crypto library had to resort to Valgrind, sanitisers, and the freaking TIS interpreter to flush out undefined behaviour. And I’m talking about “constant time” code, with fixed memory access patterns. It’s pathologically easy to test, yet writing tests took as long as writing the code, possibly longer. Part of the difficulty comes from C, not just the problem domain.

Also, Drew DeVault mentions Go as a possible replacement for C? For some domains, sure. But the thing has a garbage collector, making it instantly unsuitable for some constrained environments (either because the machine is small, or because you need crazy performance). Such constrained environments are basically the remaining niche for C (and C++). For the rest, the only thing that keeps people hooked on C (and C++) is existing code and existing skills.

                                      1. 4

Rust compiler flags aren’t stable… and that’s a good point. They should probably stabilise at some point. On the other hand, having one true way to manage builds and dependencies is a godsend. Whatever we’d use stable compile flags for, we probably don’t want to depart from that.

                                        This is wrong, though. rustc compiler flags are stable, except flags behind the -Z flag, which intentionally separates the interface between stable and unstable flags.

                                        1. 2

                                          Okay, I stand corrected, thanks.

                                        2. 0

                                          But the thing has a garbage collector, making it instantly unsuitable for some constrained environments (either because the machine is small, or because you need crazy performance).

                                          The Go garbage collector can be turned off with debug.SetGCPercent(-1) and triggered manually with runtime.GC(). It is also possible to allocate memory at the start of the program and use that.

                                          Go has several compilers available. gc is the official Go compiler, GCC has built-in support for Go and there is also TinyGo, which targets microcontrollers and WASM: https://tinygo.org/

                                          1. 5

                                            Can you realistically control allocations? If we have ways to make sure all allocations are either explicit or on the stack, that could work. I wonder how contrived that would be, though. The GC is on by default, that’s got to affect idiomatic code in a major way. To the point where disabling it probably means you don’t have the same language any more.

Personally, to replace C, I’d rather have a language that disables GC by default. If I am allowed to have a GC, I strongly suspect there are better alternatives than Go. (My biggest objection being “lol no generics”. And if the designers made that error, that kind of casts doubt on their ability to properly design the rest of the language, and I lose all interest instantly. Though if I were writing network code, I would also say “lol no coroutines” at anything designed after 2015 or so.)

                                            1. 1

                                              I feel like GC by default vs no GC is one of the biggest decision points when designing a language. It affects so much of how the rest of a language has to be designed. GC makes writing code soooo much easier, but you can’t easily put non-GC’d things into a GC’d language. Or maybe you can? Rust was originally going to have syntax for GC’d pointers. People are building GC’d pointers into Rust now, as libraries - GC manages a particular region of memory. People are designing the same stuff for C++. So maybe we will finally be able to mix them in a few years.

                                              1. 1

                                                Go is unrealistic not only because of GC, but also segmented stacks, thick runtime that wants to talk to the kernel directly, implicit allocations, and dynamism of interface{}. They’re all fine if you’re replacing Java, but not C.

                                                D lang’s -betterC is much closer, but D’s experience shows that once you have a GC, it influences the standard library, programming patterns, 3rd party dependencies, and it’s really hard to avoid it later.

                                                1. 1

                                                  Can you realistically control allocations? If we have ways to make sure all allocations are either explicit or on the stack, that could work.

IIRC you can programmatically identify all heap allocations in a given Go compilation, so you can wrap the build in a shim that checks for them and fails.

                                                  The GC is on by default, that’s got to affect idiomatic code in a major way.

                                                  Somewhat, yes, but the stdlib is written by people who have always cared about wasted allocations and many of the idioms were copied from that, so not quite as much as you might imagine.

That said - if I needed to care about allocations that much, I don’t think it’d be the best choice. The language was designed and optimized to let large groups (including many clever-but-inexperienced programmers) write reliable network services.

                                          2. 1

I don’t think replacing C is a good use case for Rust, though. C is relatively easy to learn, read, and write to the level where you can produce something simple. In Rust this is decidedly not the case. Rust is much more like a safe C++ in this respect.

                                            I’d really like to see a safe C some day.

                                            1. 6

                                              Have a look at Cyclone mentioned earlier. It is very much a “safe C”. It has ownership and regions which look very much like Rust’s lifetimes. It has fat pointers like Rust slices. It has generics, because you can’t realistically build safe collections without them. It looks like this complexity is inherent to the problem of memory safety without a GC.

As for learning C, it’s easy to get a compiler to accept a program, but I don’t think it’s easier to learn to write good C programs. The language may seem small, but the actual language you need to master includes lots of practices for safe memory management and playing 3D chess with an optimizer that exploits undefined behavior.

                                          1. 3

                                            Overall I do like the idea of using unikernels instead of containers.

There are several arguments in this article which don’t make sense to me.

                                            Unikernels Avoid Vendor Serverless Lock-In

                                            You could replace the word “unikernel” with “container” in this whole section and it would equally apply. I think that the author is specifically talking about poor container usage where people are running many processes instead of just one. Which, to be fair, is all too common, but that’s a different discussion altogether. You could also design bad bloated unikernels.

                                            Unikernels Avoid Kubernetes Complexity

I agree with many of the points in this section criticising the complexity of the big container orchestration tools… but again, this paragraph sounds like the author has only experienced badly crafted containers:

                                            Containers are notorious for eating databases alive. With unikernels, you can pause, stop, restart, and even live migrate with ease with no need to use external software to do any of this because, at the end of the day, they are still just plain old VMs, albeit specially-crafted ones.

                                            1. 7

                                              unikernels get rid of the notion of the “server” and embrace a serverless mindset. That is not to say there is no server, but it’s not one you can ssh into and install a bunch of crap

                                              The “bunch of crap” are crucial tools for monitoring, logging, security (e.g. HIDS), debugging (e.g. strace).

                                              Attackers are still equally able to attack a vulnerability in your application and inject an in-memory payload as before.

                                              Essentially you gave up detection and forensic capabilities.

                                              It’s either working or it’s not.

                                              Until you want to investigate an occasional glitch or performance issue that depends on any non-trivial interaction between application, kernel, drivers, firmware, hardware.

                                              Unikernels Avoid Kubernetes Complexity

                                              Also traditional UNIX processes and OS packages. Nowadays they provide very good sandboxing without the additional layers of filesystems and virtual networking.

                                              1. 0

                                                The logging/monitoring/debugging points are just flat out wrong. This gets brought up way too often and it’s just not correct. I’d appreciate it if people would stop repeating things that simply aren’t true.

                                                Logging works out of the box. Ship it to papertrail/elastic/splunk/whatever. Monitoring works out of the box. Newrelic, prometheus, lightstep, etc. works out of the box.

HIDS? For what? You can’t log in to it - there isn’t even a notion of users.

As for in-memory payloads, go for it - it becomes much, much harder because you can’t spawn new processes, and at least Nanos employs ASLR and the common page protections found in Linux, so imo that becomes way harder to attack.

As for ‘debugging’ firmware/kernel/hardware: a) none of that would be debugged in a unikernel because it’s all virtual, and b) why would you do that on a prod system? That’s the sort of work you do in a development environment - not on a live system.

                                                1. 2

                                                  That’s the sort of work you do in a development environment - not a live system.

                                                  Until your repro is in a live system and the company is losing money so you need to debug it now.

                                                  1. 1

yeh, I’d completely disagree here - most ops people don’t code at all, and frankly, if you are going to “debug” “kernels, drivers, firmware, hardware” on a live prod system, that’s a firing offense in my most humble opinion

                                                    1. 2

                                                      Usually, the issue is in userspace. But when downtime is costing thousands of dollars a minute, and your repro slipped past all of your testing, QA, staged rollouts, and SREs, you go into the vault, get the SSH keys, and debug wherever you can.

                                                      1. 1

                                                        I’d agree with the sentiment that it is usually in userspace 100%.

Thousands/min? Was this measured over the course of a week or over the course of a few hours? If over the week, why did it suddenly become a problem? If over the course of an hour, was it because of a bad deploy that could simply be rolled back?

Also, why would a single instance cause thousands/min in damage and not more than a few servers? If it was a single instance, perhaps you can just kill that one. If it’s infectious, then that points to something non-instance-specific.

As someone that has had to personally deal with shitty code blowing up servers at odd hours of the night repeatedly for years, I get the feeling - I get it :) . I just don’t think the practice of SSHing into random servers is a sustainable process anymore. Definitely not when it’s just splashing water on forest fires vs fixing the root problem.

                                                        1. 1

                                                          So, in the instance I am thinking of, someone else’s rollout changed the query being sent to our service, which broke accounting for a significant portion of the ads served by YouTube.

                                                  2. 1

                                                    You cannot possibly instrument/log every parameter to every function in both user and kernel space for every call. It is way, way too expensive. The same thing goes for per-processor utilisation with subsecond resolution. It has to be done in a targeted way, when it is required.

I feel like people who object to live diagnostics have never heard of perf and BPF, or at least have never used them to solve hard problems.

                                                    Centralised collection of metrics is great for discovering trends and forecasting. It is not of sufficient resolution to diagnose issues in production.

                                                    1. 1

                                                      We’ve heard. :)

                                                      Nanos even has support for both ftrace ( https://github.com/nanovms/nanos/tree/master/tools/trace-utilities ) like functionality and strace like functionality.

                                                      I still hold to the case that if you are resorting to running something like perf in prod you’ve failed to instrument or profile your application beforehand. The author explicitly warns in the docs that you might want to even test before using production as it has had bugs resulting in kernel panics in the past. Many of the problems that people “can’t replicate” in a dev/test environment actually can be and should be.

                                                  3. 0

                                                    The “bunch of crap” are crucial tools for monitoring, logging, security (e.g. HIDS), debugging (e.g. strace).

                                                    Even though I agree this is typically the case, this is not the way “DevOps” is supposed to work. Containers should not have all this stuff in them either. Containers should run 1 process. They can’t always work this way, without rewriting a program, but we should strive for that.

                                                    Attackers are still equally able to attack a vulnerability in your application and inject an in-memory payload as before.

                                                    Yes, but the attack surface is greatly reduced.

                                                    Also traditional UNIX processes and OS packages. Nowadays they provide very good sandboxing without the additional layers of filesystems and virtual networking.

                                                    I can’t help but largely agree with this. However, there are many benefits to the application isolation from VMs, unikernels, and containers, which allow for sysadmin models that are not possible with just processes and packages on a single host.

                                                    1. 4

                                                      So if a container should only run 1 process, what’s the difference between a container and a statically linked program running in a jail?

                                                      1. 1

                                                        Containers should not have all this stuff in them either. Containers should run 1 process.

                                                        Indeed. All the tooling is on the base OS and often works in roughly the same way against normal unix processes and containers.

                                                        Attackers are still equally able to attack a vulnerability in your application and inject an in-memory payload as before.

                                                        Yes, but the attack surface is greatly reduced.

                                                        Not if you consider the point above. The application is still the same codebase (regardless if it’s running as a sandboxed process, a container and so on). You still have to either deploy and run all the ancillary tooling at some point in the stack or have unmanaged black boxes around.

                                                        1. 0

The reduction in attack surface, while true, is not the main security selling point imo. It’s the fact that most common exploits wish to execute other programs, usually many others. If I want to take advantage of your weak WordPress setup and install a cryptominer, not only is it a different program - it’s probably not written in PHP, and thus I won’t be able to install it in a unikernel environment and run it. I’m forced to inject my payload in-memory, as you state, and that makes things rather hard very fast. Most payloads simply try to exec /bin/sh as fast as possible. Can you code a cryptominer with ROP alone? Can you code a MySQL client with ROP alone?

                                                          I’d love to see this.

                                                    2. 0
• Your argument here doesn’t match the header you pasted (serverless), so I’m not sure what you are trying to point out - the security side of things or serverless lock-in? If you can elaborate, I can help provide pointers. If it’s the security side: the fact that it is single-process by architectural design deals a pretty hefty blow to all the malware found in containers today, like the stuff mentioned in https://unit42.paloaltonetworks.com/graboid-first-ever-cryptojacking-worm-found-in-images-on-docker-hub/ . Otherwise, that header is about serverless offerings like Lambda or Google Cloud Run.

• There’s quite a lot of “orchestration” tooling that is arguably necessary in the container ecosystem precisely because they insist on duplicating networking/disk on top of existing virtualization. The point I was trying to make here was that since, at the end of the day, these are virtual machines - you get all that “orchestration” for free. Make sense?

                                                      1. 1

                                                        From Unikernels Avoid Vendor Serverless Lock-In

                                                        However, unlike most serverless offerings that are entirely proprietary to each individual public cloud provider, unikernels stand apart because instead of some half-baked CGI process that is different from provider to provider, they assume the basic unit of compute is a VM that can be shuttled from one cloud to the next and the cloud provider is only responsible from the virtual machine down.

                                                        I don’t understand how this is unique to unikernels. You can shuttle containers from one server (or cloud provider) to the next.

Security - yes, containers and their tooling are terrible here. Unikernels, especially well designed unikernels, really shine in this regard.

There’s quite a lot of “orchestration” tooling that is arguably necessary in the container ecosystem precisely because they insist on duplicating networking/disk on top of existing virtualization. The point I was trying to make here was that since, at the end of the day, these are virtual machines - you get all that “orchestration” for free. Make sense?

                                                        Ok I think I’m getting your perspective now. You’re assuming the container orchestration software is running inside of VMs managed in-turn by VM orchestration software. More abstraction, more code, more attack surface, more problems. Totally agree here, if I’ve got it right now. Maybe you could spell it out a bit more at the start of the article, even though it’s quite clear now on re-read once I’ve already grabbed onto the perspective.

                                                        As an aside, container orchestration software could run directly on a hypervisor, and that’s where things seem to be moving to in the cloud-native industry. Which is going to be… exciting, from a security perspective.

                                                        1. 2
• afaik live migration is still not a production-quality feature for containers, although it has existed in the VM world for over a decade now. The container ecosystem has also invented an entire language to describe the whole concept of persistence, which is trivial in VMs and non-trivial in containers - but this section wasn’t really taking aim at containers

• yeh - the security argument wasn’t really for that. It’s the fact that if you are spinning up a k8s stack on gcloud or AWS, you are inherently already on top of an existing VM stack, and there’s a ton of duplication going on here which makes no sense from a performance or complexity standpoint; the security arguments are a bit different. As far as containers on a hypervisor - I know kube-virt is there, and I see this concept talked about a lot on twitter, but I don’t see much movement there. Regardless, that’s essentially just stuffing a Linux inside of a VM and using the container mechanisms for orchestration. Part of the security story here is not just ‘containers suck at security’ - it’s deeper than that. It’s the fact that Linux is ~30 years old and was built in a time before heavily commercialized virtualization (ala VMware) and before the “cloud” - these two abstractions give us a chance to deal with long-standing architectural issues of the general-purpose operating system

                                                          1. 1

                                                            Regarding live migration, isn’t it rare for public cloud providers to actually support this?

                                                            1. 1

                                                              That’s not a primitive that they traditionally expose as-is you are correct. To do so in a free-for-all environment would probably have some serious security/scheduling ramifications, however, you see it pop up in quite a few places regardless. For instance if you go boot a database on google cloud right now and give it say 8 gig, insert a few hundred thousand rows, you can instantly upgrade that database to 16 gig or 32 gig ram without it actually going down. Behind the scenes they are spinning up a new vm, transparently migrating page by page to the new vm without destroying the live database and then shutting down the old vm. Also, AWS uses it to migrate vms from faulty hardware as well.

                                                              Of course in private cloud situations this is routinely used for backup/DR.

                                                              This is all to say that there are many many features that are simply not possible under a container paradigm.

                                                            2. 1

                                                              it’s the fact that linux is ~30 years old and was built in a time before heavy commercialized virtualization (ala vmware) and before the “cloud”

What’s the connection? UNIX is older, virtualization has existed since the 1960s, and hardware virtualization since the 1970s.

                                                              1. 2

The keyword here being “commercialized virtualization”. We are talking about ESX from VMware here. Anyone using large sets of servers in North America and not using AWS (API on kvm/xen) or Google Cloud (API on kvm) is using Xen from Citrix or ESX from VMware.

We didn’t have this style of offering in the ’90s. Everyone had to use real servers and actually use the unixisms of multiple processes with multiple users on the same box. You’d pop into a Sun box, use commands like who, and see half a dozen users. You could be the BOFH and wall them a message stating you were kicking them off.

Times have changed. Because we not only have access to virtualization in private datacenters but also entire APIs against the ‘cloud’, we can finally drop the chains that have existed in Unix land for so long and solve a lot of the problems, like performance and security, that exist.

                                                                1. 1

                                                                  Isn’t making Linux more fitted for virtualization one of the motivations for systemd?

                                                                  1. 2

                                                                    When I say virtualization I’m speaking about classical machine level virtualization not carving up a linux system with namespaces. The OSTEP book, while one of the more accessible books on operating systems, uses virtualization in a very liberal sense. Ever since then we’ve seen the containerati abuse the term as well and this is where the confusion sets in.

Machine level virtualization has actual hardware capabilities to carve up the iron underneath. Machine level virtualization doesn’t care what the operating system is (Linux, macOS, Windows, etc.); systemd, otoh, is clearly Linux-only (and a very specific set of Linux at that). In fact, today’s virtualization (vs that of 2001) is so sophisticated, with things like passthrough and NIC multiplexing, that it’s possible to run VMs faster than an average Linux on the hardware itself - that’s how good it is today.

                                                                    That is why I’m very hesitant to label namespaces and their friends ‘virtualization’. To me that’s a very different thing.

                                                                    1. 1

                                                                      Thanks for taking the time to clarify!

                                                                  2. 1

                                                                    Is there also an implication with “ Linux is 30 years old” that Linux has not been developed since its inception? That something is old is not an automatic disqualification if it has active development.

                                                                    1. 3

It’s not about the age. It’s the fact that the environment has changed and the core concepts are no longer appropriate. We had to use operating systems that supported multiple processes and multiple users in the ’90s - Linus’ computer was a 286/386 in ’93, and we didn’t have commercialized virtualization or the cloud. Back in the ’70s, when the concept was originally delivered in Unix, they were using computers like the PDP-7 and PDP-11 that took up entire walls and cost half a million dollars. Clearly, back then, the architecture had to be that way.

                                                                      Contrast/compare today when even small 20 person companies can be using tens or even hundreds of vms because they have so much software. We need not mention the big players that pay AWS hundreds of millions of dollars/year or banks that wholly own entire sets of datacenters completely virtualized.

So it’s not the fact that Linux is 30 years old or Unix is 50 years old - it’s concepts like running a database and a webserver on the same machine, or even wanting the notion of an interactive ‘user’ when you have a fleet of thousands of VMs. Most people’s databases don’t even fit on a single machine anymore, and a lot of their web services don’t either - anyone with a load balancer can show you that. We’ve consumed so much software that the operating system has largely inverted itself, yet we are still using the same architecture that was designed for a completely different time period, and it’s not something you can just yank out or seccomp away.

                                                                      1. 2

                                                                        That’s a very interesting perspective. Thanks for explaining in greater detail!

                                                        1. 6

                                                          So basically it does trust-on-first-use PGP for email. It’s a bit misleading to say it’s similar to Signal because of the lack of forward/backward secrecy from ratcheting. Also, there’s no support for group chats beyond pairwise encrypting to everyone.

                                                          The main draw seems to be incremental deployment and ease of use, which are admirable goals that many have tried with PGP. Best of luck.

                                                          1. 5

                                                            Yes, no forward/backward secrecy, which is a serious concern.

                                                            Also, there’s no support for group chats beyond pairwise encrypting to everyone.

                                                            To be fair, that’s all that Signal does for group chats too.

                                                              1. 2

                                                                I don’t even understand why Signal is making this effort which will require a ton of work to verify this implementation is sound. They could have just done what Threema does:

                                                                In Threema, groups are managed without any involvement of the servers. That is, the servers do not know which groups exist and which users are members of which groups. When a user sends a message to a group, it is individually encrypted and sent to each other group member. This may appear wasteful, but given typical message sizes of 100-300 bytes, the extra traffic is insignificant. Media files (images, video, audio) are encrypted with a random symmetric key and uploaded only once. The same key, along with a reference to the uploaded file, is then distributed to all members of the group.

                                                                https://threema.ch/press-files/cryptography_whitepaper.pdf
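                                                                The pairwise fan-out described above can be sketched in a few lines. This is a toy illustration with made-up names, and the XOR “cipher” is only a stand-in for the real per-recipient encryption (NaCl box in Threema’s case) – not usable for anything real:

```python
import os
from hashlib import sha256

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy stand-in for a real cipher: XOR with a SHA-256-derived
    # keystream. Applying it twice with the same key decrypts.
    stream = sha256(key).digest() * (len(plaintext) // 32 + 1)
    return bytes(p ^ s for p, s in zip(plaintext, stream))

def send_group_message(sender, members, shared_keys, text):
    # No server-side group state: one ciphertext per member, each
    # under the pairwise key the sender shares with that member.
    return {m: encrypt(shared_keys[(sender, m)], text.encode())
            for m in members if m != sender}

def send_group_media(sender, members, shared_keys, blob):
    # Media is encrypted ONCE with a random symmetric key; only the
    # small key (plus a reference to the upload) goes out pairwise.
    file_key = os.urandom(32)
    uploaded = encrypt(file_key, blob)
    key_msgs = {m: encrypt(shared_keys[(sender, m)], file_key)
                for m in members if m != sender}
    return uploaded, key_msgs
```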

                                                                1. 1

                                                                  This is exactly how Signal currently works, FYI.

                                                                  1. 1

                                                                    Then why is it changing?

                                                                    1. 2

                                                                      Well you could imagine scenarios where there is logic on the server which infers groups based off of message timing, and then could do things like exclude one person from receiving messages from the group… but I think Signal is fundamentally a dead-end based on its centralized nature anyway…

                                                                  2. 1

                                                                    The new Signal work makes group management secure against a malicious server in addition to reducing the need for pairwise ciphertexts. It prevents old group members from messing with group state (membership, and other metadata), and it allows confidentiality over authenticated access control management.

                                                                    1. 1

                                                                      I just don’t see the value. I see an awful lot of complexity though.

                                                            1. 2

                                                              I used to do distributed systems resilience consulting for a few different blockchain projects. I never saw an anti-sybil mechanism that wasn’t easily gamable with relatively simple race conditions or a rich person splitting their account into many small ones to gain more votes for gaming whatever anti-concentration or democratic mechanisms the specific network is trying to incentivize. Can anyone today with a straight face say that this issue is addressed by a non-government/strong KYC mechanism?

                                                              1. 1

                                                                I don’t have anything to add other than I’ve been down the same path and I agree, I haven’t seen anything that doesn’t ultimately end up with some form of governance/KYC. I do want a verifiable and trust-building solution to exist though.

                                                                1. 1

                                                                  The GNUnet project implements a Sybil-resistant DHT, and tries to prevent Sybil attacks on other layers as well. See this paper (PDF) and try searching for “sybil” if you’re interested.

                                                                  1. 1

                                                                    GNUnet forces participants to calculate a small hash, basically bitcoin-style but cheap enough that the weakest “legitimate user” of the system can still do it. Maybe that’s fine for GNUnet, but once any significant decisions happen based on client participation, it becomes very difficult to build a system that can protect itself from a rich person forcing those decisions to happen in a way that makes them even richer.
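                                                                    That admission hash is essentially a cheap proof-of-work. A minimal sketch with made-up parameters (not GNUnet’s actual scheme):

```python
from hashlib import sha256
from itertools import count

DIFFICULTY = 12  # leading zero bits required; tiny on purpose

def solve(peer_id: bytes) -> int:
    # Find a nonce such that sha256(peer_id || nonce) starts with
    # DIFFICULTY zero bits: ~2**DIFFICULTY hashes per identity,
    # which is what makes minting Sybil identities in bulk costly.
    for nonce in count():
        digest = sha256(peer_id + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
            return nonce

def verify(peer_id: bytes, nonce: int) -> bool:
    # Verification is a single hash, regardless of difficulty.
    digest = sha256(peer_id + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0
```

                                                                    The trade-off is exactly as described: cheap enough for the weakest legitimate device, yet it only multiplies the attacker’s cost per identity – a rich attacker can still simply pay for it.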

                                                                1. 2

                                                                  I’ve been using xo/xo for over a year now on my team’s two biggest projects at work. It’s been great – even if the error messaging isn’t super mature. I see little value in using Go for the improved type-safety over Ruby/Python/Node.js just to throw it all away with a dynamic ORM layer.

                                                                  1. 1

                                                                    I had never seen xo/xo. Yea, now I’m confused – looks very similar. At a first glance do you see any obvious differences?

                                                                    1. 2

                                                                      xo supports more dbs, for one.

                                                                      1. 1

                                                                        xo/xo isn’t infinitely flexible but you specify the templates to use to generate code for each SQL flavor. We’ve had to heavily augment xo/xo’s default templates since they didn’t provide things like *GetMany or *BulkUpsert. At this point I think the template is definitely more original than default. This in turn meant we had to build our own tooling for list filtering that outputs a squirrel struct. Unfortunately, this layer is dynamic currently because the original version of the code was in Node.js and the API for filtering was just sequelize’s JSON filter objects.

                                                                        We haven’t yet gotten into code generation for MySQL but it’s something we’ll be looking into as an optional db backend.

                                                                    1. 1

                                                                      This looks like it has potential to be really nice. Going to try this out on a WIP right now actually.

                                                                      1. 30

                                                                        Good. The web is 30 years of tech debt. It’s worse than the Java standard library, and even that doesn’t have to deal with JavaScript.

                                                                        1. 10

                                                                          There’s absolutely nothing good about this. The advantage of the web stack is that it’s ubiquitous and backwards compatible. You can take a web app from a decade ago, open it in a modern browser, and it’ll just work on pretty much any OS. Meanwhile, good luck trying to compile an iOS app from a year ago.

                                                                          Thanks to web tech we now have lots of mainstream applications on Linux that just wouldn’t exist otherwise. This has done wonders for Linux adoption in my opinion. Now you can run things like VS Code, Slack, and so on without having to hope they work through Wine.

                                                                          While it’s currently less performant, VS Code shows that you can clearly get pretty good results with it. And this will only get better going forward. Just look at how much Js engines have improved over the years. And again, this benefits all platforms out of the box.

                                                                          Meanwhile, you absolutely do not need to use Js nowadays. I’ve been using ClojureScript for around 5 years now, I can’t remember the last time I had to write a single line of Js. The dev experience is also strictly superior to any native tech I’ve used. Here’s an example of what it looks like in practice with immediate live feedback on the code as you’re writing it.

                                                                          This alone is a killer feature for writing UIs in my opinion because you often end up doing a lot of experimentation with component placement. And the amount of time you save by being able to share code across platforms is huge. Maintaining 5 different versions of the app for Windows, Mac, Linux, Android, and iOS is a herculean effort that’s simply beyond what many teams are able to do effectively.

                                                                          Instead of fighting web tech, I think it would be much more productive to look at how this stack can be improved and optimized to address the shortcomings.

                                                                          1. 16

                                                                            Thanks to web tech we now have lots of mainstream applications on Linux that just wouldn’t exist otherwise

                                                                            What I find hilarious about this is that electron apps are absolutely not portable. Like.. OpenBSD recently got an electron port (much… much much to my surprise :D).. but you will never be able to run apps like Spotify.. because they ship binary blobs for the DRM..

                                                                            In essence, you have what looks like a more open ecosystem because Linux is supported.. and Linux is open source!.. and now these apps run on it!… but in fact it isn’t any more open than Microsoft Office… or whatever app you want to compare it to!

                                                                            1. 2

                                                                              There is the technology itself which is open source and allows writing open source apps that work on top of it. There are plenty of open source projects using Electron as well as commercial ones. VS Code is a perfect example of a completely open source app that now runs on many platforms and provides an accessible IDE for thousands of developers.

                                                                              1. 6

                                                                                VScode has non-opensource components.

                                                                                1. 2

                                                                                  I’m not aware of any non-open source components in VSCodium.

                                                                            2. 5

                                                                              Thanks to web tech we now have lots of mainstream applications on Linux that just wouldn’t exist otherwise

                                                                              (my emphasis)

                                                                              That’s great. But the linked submission is about the macOS platform. Is it really so dominant that stopping Electron app updates in the App Store will deal Web tech on the platform a death blow?

                                                                              If nothing else, I’d imagine this would be an opportunity for the competitors to macOS.

                                                                              (edit clarification)

                                                                              1. 2

                                                                                It mostly harms macOS users since it makes it harder for them to get and update the apps they use. I don’t really understand why Apple should decide for me what apps I get to run on my computer or how I run them.

                                                                                1. 1

                                                                                  OK I see your point now regarding who it harms.

                                                                                  As I understand it, this is a halt to the update of Electron apps in the macOS App Store. Surely it’s still possible to install “normally”? I seem to remember just installing Discord from the Discord web page, not via the App Store.

                                                                                  1. 1

                                                                                    But now you have to go through unofficial channels to do it, so it becomes a workaround for the official solution.

                                                                              2. -1

                                                                                I’m with you on the points about the web being ubiquitous and backwards-compatible. I love JavaScript, though, and it’s incredible how much better it has gotten over the years. A lot of people struggle with basics in JS because it isn’t what they’re comfortable with (e.g. prototypal vs classical inheritance) and just give up on that alone. The reason JS gets so much attention and so many improvements, though, is because it is ubiquitous. There is no other language or platform that gets as much attention, as many open source contributions, as many quality of life updates.

                                                                                Most folks who “hate” JS only remember goofy things from ye olde JS or never bothered to learn anything after seeing some syntax they don’t like or “no strict typing.”

                                                                                1. 5

                                                                                  I strongly prefer ClojureScript because it’s a cleaner and simpler language with a more stable ecosystem, and better tooling.

                                                                                  Nowadays Js is huge, and there are many different ways to do any one thing with libraries using all kinds of different patterns to do things. All this directly results in tons of mental overhead that you wouldn’t have otherwise. At the same time, there is a ton of churn in the ecosystem with things changing or becoming obsolete often in a matter of months.

                                                                                  By contrast, Clojure is a focused language that encourages a common way of solving problems. I’ve been using Reagent and re-frame for years now, and the APIs are still largely the same, only things I’ve had to do to move to new versions was update the version numbers in my projects. This is especially nice considering that Reagent uses React internally, and I’ve been protected from the churn there.

                                                                                  ClojureScript s-exp syntax completely obviates the need for things like JSX, and defaulting to immutability makes the code much easier to manage since you can safely do local reasoning about parts of your application.

                                                                                  The ClojureScript compiler can do code pruning down to the function level, something that’s completely impossible to do with Js. It also handles code splitting and minification out of the box. This is incredibly useful for writing web apps where you’re sending your code to the client.

                                                                                  Hot loading just works in ClojureScript while it’s pretty much impossible to get working properly in Js due to pervasive mutability.

                                                                                  And since ClojureScript is hosted I get access to all of the Js ecosystem if I need it.

                                                                                  1. 2

                                                                                    I didn’t make it clear in my original post, but I’m 100% behind people who prefer something like clojurescript, elm, or anything else that can be compiled to run on the web. It’s really a personal preference. I just wanted to make it clear that JS itself is not some sort of abomination that must be avoided at all costs.

                                                                                    1. 1

                                                                                      That’s fair, JS and its ecosystem has come a long way, and you could certainly do worse.

                                                                              3. 9

                                                                                The web is the only application platform that isn’t owned. I’d like to keep it.

                                                                                1. 13

                                                                                  Give Google a few more years and they’ll own it officially, and not just unofficially as they do now.

                                                                                  1. 6

                                                                                    Only if you let them. As it stands now it is still fairly easy to avoid most of their products and services except for reCaptcha and gmail and, unfortunately, except for those who go to a school or work at a company or government institution which has standardised on Google services.

                                                                                    Don’t use Chromium or its closed-source derivative Chrome. Stay away from Chromebooks. Use AOSP-derived Android forks without the Google Services Framework, playstore or Google apps. Use a meta-search engine as a proxy to Google search. Don’t publish on Youtube, use Peertube (self-hosted or otherwise) or Vimeo or some other alternative. No Gmail but Email, please.

                                                                                    This is what I’ve been doing for ages, not out of spite for Google - which, years ago, was seen as one of the good forces on the ‘net against evil Microsoft and greedy SCO/Oracle/etc - but because open protocols and free software just make more sense. That I’ve been vindicated by Google wiping the ‘evil’ out of their corporate motto only confirms this stance.

                                                                                    1. 6

                                                                                      I finally got off all the Google services this year. I replaced gdrive, contacts, and calendar with Nextcloud on a DigitalOcean droplet, and I went with Fastmail for my mail. I’m still on Android, but don’t actually use any of the Google apps. I’ll likely get a FairPhone next and completely avoid anything directly associated with Google. I find Firefox works pretty well as a browser on Android too.

                                                                                      I find I don’t really miss G Suite at all, and Nextcloud has been pretty painless to maintain as well. So, getting off Google isn’t really all that bad nowadays.

                                                                                      1. 8

                                                                                        I’ve got a HP DL380g7 in a sound-insulated, force-ventilated rack which doubles as a drying cabinet, the heat generated by the equipment in top is used to dry produce on 8 racks in the bottom section. I built that enclosure using some left-over supermarket shelves which I found in a dumpster, a stack of lumber, a forced draft fan and the largest car air filter I could find. The drying racks are made of wood with metal meshing to allow for an uninterrupted air flow. The thing is managed using Proxmox and runs, among other things:

                                                                                        • mail: exim/greylistd/spamassassin/dovecot/managesieve, roundcube for web mail
                                                                                        • media: airsonic
                                                                                        • video: peertube
                                                                                        • photo sharing : experimenting with pixelfed, phasing out openphoto
                                                                                        • messaging: xmpp
                                                                                        • revision control front end: gitea
                                                                                        • search: searx, recoll (as a local search plugin for searx)
                                                                                        • ‘cloud’: nextcloud
                                                                                        • ‘office’: libreoffice online (through CODE in Nextcloud)
                                                                                        • backup: rsnapshot

                                                                                        The same server also is used as build server and will soon be used as a virtual router (using pfsense). I’m looking into expanding storage using a Netapp DS4243 or DS4246 used as a JBOD shelf. Services are used by me, family and friends and as remote backup for some of those. In return I’m using their storage as remote backup.

                                                                                        Of course it is not necessary to go to such lengths to become self-reliant when it comes to IT services, for most people a Raspberry Pi with some external storage will suffice.

                                                                                        1. 1

                                                                                          I’ve got a HP DL380g7 in a sound-insulated, force-ventilated rack which doubles as a drying cabinet, the heat generated by the equipment in top is used to dry produce on 8 racks in the bottom section. I built that enclosure using some left-over supermarket shelves which I found in a dumpster, a stack of lumber, a forced draft fan and the largest car air filter I could find. The drying racks are made of wood with metal meshing to allow for an uninterrupted air flow.

                                                                                          do you have anything written or pictures on that, it sounds interesting :)

                                                                                          1. 3

                                                                                            Not really, I could whip up something if there is interest but given the one-off nature of this project caused by it being based around left-over materials it does not lend itself to duplication unless you happen upon the same type of left-overs.

                                                                                            Here’s some images of the (mostly) finished enclosure:

                                                                                            https://imgur.com/a/M4Lbf1K

                                                                                            1. 1

                                                                                              thanks for the pictures! using the waste heat to dry produce is brilliant imho :) it’s always nice to see such builds; even if not directly reproducible, some ideas may be transferred :)

                                                                                      2. 2

                                                                                        I thought you were satirizing until the last paragraph.

                                                                                        1. 2

                                                                                          So we can also ignore HTTP/2 (which was shoved down the IETF’s throat) and HTTP/3 (which is currently being shoved down the IETF’s throat)?

                                                                                          1. 4

                                                                                            That depends on the merits of those protocols and whether there are several independent implementations. The mere fact that something comes from Google does not mean it is untouchable. The idea to re-use a connection (HTTP/2) is valid, as to the merits of HTTP/3 I can not make any statement as I have not looked at this protocol yet.

                                                                                            What can and should be ignored is something like AMP since that does lead to more power to be concentrated into the hands of a single party, i.e. Google. If AMP were to be changed into some form of compressed protocol which can be self-hosted without any outside dependencies this could change.

                                                                                            1. 2

                                                                                              You can reuse a connection on HTTP/1.1 (serially, not concurrently though). And HTTP/2 is basically TCP over TCP. It’s not faster (because the same amount of data is being sent to the same destination) but it does duplicate functionality that’s in the kernel, and it still suffers from head-of-line blocking (only more so). HTTP/3 runs its transport over UDP, which gets around the head-of-line blocking issue, but still duplicates a lot of functionality from the kernel in userspace.

                                                                                              And all of this was developed because it serves Google’s needs.
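                                                                                              The serial reuse in question is just HTTP/1.1 keep-alive. A toy sketch of both sides (minimal local server, nothing like production code) shows why the second request has to wait for the first response:

```python
import http.client
import socket
import threading

def tiny_server(listener):
    # Minimal HTTP/1.1 server: answers two requests on ONE connection.
    conn, _ = listener.accept()
    buf = b""
    for _ in range(2):
        while b"\r\n\r\n" not in buf:
            buf += conn.recv(4096)
        request, _, buf = buf.partition(b"\r\n\r\n")
        path = request.split(b" ")[1].decode()
        body = ("hello " + path).encode()
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: "
                     + str(len(body)).encode() + b"\r\n\r\n" + body)
    conn.close()

def fetch_twice(port):
    # Two requests over one TCP connection via HTTP/1.1 keep-alive.
    # They are strictly serial: the second cannot be answered until
    # the first response is fully drained -- the head-of-line
    # blocking that HTTP/2 multiplexing was designed to avoid.
    conn = http.client.HTTPConnection("127.0.0.1", port)
    bodies = []
    for path in ("/a", "/b"):
        conn.request("GET", path)
        bodies.append(conn.getresponse().read().decode())
    conn.close()
    return bodies
```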

                                                                                              1. 1

                                                                                                Reusing connections in HTTP/1 works but is a hack. HTTP/2 might not be the best way to go (and the same goes for HTTP/3 (or ‘HTTP over QUIC’ [1])) but there are independent implementations. Also, feel free to ignore HTTP/2, the web works fine without it.

                                                                                                Don’t confuse the messenger with the message, it is the message which counts. When Google says ‘route all traffic through us’ (viz. AMP) the answer should be no. When they say ‘re-use a single connection for multiple requests’ the answer can be ‘sounds good’ or ‘fine but this duplicates functionality’ or ‘good plan but this belongs in the kernel’ or something along those lines.

                                                                                                [1] https://en.wikipedia.org/wiki/QUIC

                                                                                        2. 1

                                                                                          They own the web unofficially? Care to elaborate? My reasoning was that there is a W3C/WHATWG standards process that isn’t controlled by one single business (like the app store is) and isn’t tied to shareholder interests. In fact, the web and HTML have some guiding principles that put users first, over website authors or browser makers.

                                                                                          I think that’s damn good in comparison to an app store.

                                                                                          But please let me know what you meant!

                                                                                      3. 0

                                                                                        macOS/iOS are older than the web. If you want to avoid tech debt, dump that shit.

                                                                                        1. 1

                                                                                          What?

                                                                                          1. 1

                                                                                            Objective-C and NeXTStep are 80s tech. Swift and iOS are constrained by their legacy.

                                                                                            1. 1

                                                                                              This feels like a bit of a straw-man argument—I am not sure that this is really any different to anything else. The C programming language is 70s tech, born in the PDP age. The world hates the dangers of C these days (because it’s not Rust or something) and yet the Linux kernel, written in C, runs in seemingly everything. Would you also argue that Linux has been constrained by the legacy of the PDP and that the C language has not evolved at all in that time?

                                                                                              1. 1

                                                                                                I was responding to a claim that the web is tech debt. Straw man vs straw man.

                                                                                      1. 3

                                                                                        Shortcomings like this are what keep me from considering Guix as a serious OS option for infrastructure. I see things improving in these regards, however slowly. I worry that there are too few people in the core Guix community who take operations/sysadmin seriously, and have the necessary experience/perspective. How issues like this get handled will be very telling, I believe.

                                                                                        1. 2

                                                                                          This looks nice. Happy to see it’s in Nim too. Excited to see more projects written with Nim appearing as the language becomes more and more stable.

                                                                                          1. 2

                                                                                            Thank you! Nim is such a pleasure to work with.

                                                                                          1. 7

                                                                                            I for one use permissive licenses in the hope that one day an aerospace company will use my code and it will end up in orbit.

                                                                                            1. 10

                                                                                              Maybe they already do? With a permissive license you have good chances of never finding out.

                                                                                              1. 3

                                                                                                And how would the GPL change that?

                                                                                                1. 2

                                                                                                  Because the aerospace company would have to publish their code.

                                                                                                  1. 11

                                                                                                    s/publish/provide to customers/

                                                                                                    1. 6

                                                                                                      No. The GPL does not require publishing the code of a modified version if it remains private (i.e., not distributed).

                                                                                                      So you have the same chances of never finding out about usage in either case (but the virality of GPL might actually decrease the odds).

                                                                                                      1. 1

                                                                                                        I was referring to this aspect of the license:

                                                                                                        But if you release the modified version to the public in some way, the GPL requires you to make the modified source code available to the program’s users, under the GPL.

                                                                                                        Whether or not that would come into play with the hypothetical aerospace company in question is beside the point.

                                                                                                      2. 0

                                                                                                        Or not.

                                                                                                    2. 1

                                                                                                      https://www.gnu.org/licenses/gpl-faq.en.html#GPLRequireSourcePostedPublic

                                                                                                      I guess what you mean is better chances of finding out?

                                                                                                    3. 7

                                                                                                      I found out that my open source code was being used in nuclear missiles. It did not make me feel good.

                                                                                                      1. 2

                                                                                                        What license were you using?

                                                                                                        1. 2

                                                                                                          GPL

                                                                                                          1. 2

                                                                                                            Interesting that you could have discovered this; I’d presume such things would be quite secretive. I guess there’s nothing you can do to stop them using it either?

                                                                                                            1. 2

                                                                                                              It was a shock. And nope, nothing could be done. In fact, I suspect that Stallman would say that restricting someone from using software for nuclear weapons (or torture devices, or landmines, or surveillance systems) would be a violation of the all-important principle of software freedom.

                                                                                                                1. 1

                                                                                                                  It would be an interesting argument to try to make. The FSF already recognizes the AGPL – which explicitly does not grant Freedom Zero as defined by the FSF – as a Free Software license, and the general argument for that is one of taking a small bit of freedom to preserve a greater amount over time. A similar argument could be made about weapons (i.e., that disallowing use for weapons purposes preserves the greatest amount of long-term freedom).

                                                                                                                  1. 1

                                                                                                                    … Stallman would say … violation of the all important issue of software freedom

                                                                                                                    Restricting use on an ethical basis is quite difficult to implement for practical reasons.

                                                                                                                    1. 1

                                                                                                                      That’s not really the issue. One of the things I dislike about the FSF/Stallman is that they claim, on moral principle, that denying a software license to, let’s say, Infant Labor Camp and Organ Mart Inc. would be wrong. I think that “software freedom” is pretty low down on the list of moral imperatives.

                                                                                                                      1. 1

                                                                                                                        Being able to (legally) restrict the use of my creative output (photographs, in my case) is the reason I retain the “all rights reserved” setting on Flickr. I’d hate to see an image of mine promote some odious company or political party, which is what could happen were I to license it under Creative Commons.

                                                                                                            2. 2

                                                                                                              How did you find out?

                                                                                                              1. 2

                                                                                                                They asked me to advise them.

                                                                                                              2. 2

                                                                                                                For ethical reasons or for fear of some possible liabilities somewhere down the line?

                                                                                                                1. 11

                                                                                                                  What a question. I didn’t want to be a mass murderer.