1. 17

    The libcurl examples we host on the curl web site (and ship in curl tarballs) are mostly done without error checks

    Hopefully the lesson here for the libcurl project is that users will copy-paste anything. All the checks really need to be in the examples.

    1. 12

      This is extra ironic because immediately preceding this section is one about how people don’t read the docs! Then it goes and says that following the examples in the docs is wrong.

      1. 12

        MSDN used to omit error handling from their examples to simplify the code (and warned about this), until they realized that everyone was just pasting the examples into their code as-is, regardless of warnings.

      1. 10

        The problem with this is that if you unknowingly rely on GNU extensions you might believe your scripts will work anywhere, but might not. There’s nothing bad in using them if you know you’re using them. And that’s precisely the problem I see with many GNU fans: they more often than not are totally unaware of the portability issues GNU poses… and how “learning GNU” isn’t exactly “learning how to use UNIX.”

        I don’t see how this is related to GNU at all. I see this in various places:

        • C++ code doesn’t compile under all C++ compilers, because it is using Visual Studio’s C++ extensions,
        • E-mail attachments don’t open under all e-mail clients, because they’re being sent in some Microsoft proprietary format,
        • Word documents are being shared between people, but nobody asks if everyone uses MS Office,
        • Some page doesn’t work in all browsers, because the page uses some Chrome-only API,
        • other examples when searching for “Embrace, Extend, Extinguish”.

        I think that the absence of knowledge is a natural thing, and it’s better to spread the knowledge of how to build software properly, instead of trying to limit the technology we use to some least common denominator out of fear that someone will one day want to use our software in some restricted environment, and THEN it won’t work.

        1.  

          I believe the author’s point is that the examples you gave above are different from GNU because everyone hates them, and most people give GNU a pass when their code results in similar situations.

          I don’t think it’s a very good argument, because as the author even points out, you have the freedom to run GNU code on your machine, so if you find a piece of software that’s “not portable” according to the POSIX standard, it doesn’t prevent you from running it the same way that getting a document in MS Word format does.

        1. 4

          I am a big fan of coverage, but feel that a lot of the debate around the practice largely misses the point(s). So, while I agree that complete or high coverage does not automatically mean that a test suite or the software is good… of course it doesn’t? In the extreme, it’s pretty trivial to reach 100% coverage without testing the actual behaviour at all.

          Coverage is useful for other reasons, for example the one this article ends with:

          Our results suggest that coverage, while useful for identifying under-tested parts of a program, should not be used as a quality target because it is not a good indicator of test suite effectiveness.

          Identifying under-tested parts of a program seems like a pretty important part of a testing strategy to me. Like many advantages of coverage, though, you have to have pretty high coverage for it to be useful. There are other “flavours” of this advantage that I find useful all the time, most obviously dead code elimination. High test coverage at the very least signals that the developers are putting effort into testing, and checking that their testing is actually hitting important pieces of the code. Maybe their test suite is in fact nearly useless, but that seems pretty unlikely, and it could be nearly useless without coverage, too. That said, like any metric, it can be gamed, and pursuing the metric itself can easily go wrong. Test coverage is a means to many useful ends, not an end unto itself.

          The quest for 100% may be a bit of a wank, but I’ve tried that in a few projects before and actually found it quite useful. In particular it highlights issues with code changes that affect the coverage of the test suite in a very simple way. Day-to-day, this means that you don’t need to meticulously pore over the test suite every time any change is made to make sure that some dead code or dead/redundant branches weren’t added. If you don’t have total coverage, doing that is a chore. If you do, it’s trivial: “oh, the number is not 100% anymore, I should look into why”. I regularly end up significantly improving the code during this process. It’s undeniably a lot of work to get there (depending on the sort of project), but once you do, there are a lot of efficiency benefits to be had. If the project has platform-specific or -dependent aspects, then this is even more useful in conjunction with a decent CI system.
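
          For a Python codebase, one way to mechanize that “not 100% anymore, go look into why” signal is coverage.py’s API. This is a rough sketch under assumptions of mine (a pytest suite under tests/, gating on exactly 100%); in practice you’d more likely drive this from your test runner or CI config:

          # check_coverage.py -- run the suite under coverage and flag any regression
          # from 100%. Assumes a pytest suite under tests/ and coverage.py installed;
          # source filtering would normally live in .coveragerc or pyproject.toml.
          import coverage
          import pytest

          cov = coverage.Coverage()
          cov.start()
          exit_code = pytest.main(["tests/"])
          cov.stop()
          cov.save()

          total = cov.report(show_missing=True)  # prints the per-file table, returns total %
          if exit_code != 0 or total < 100.0:
              raise SystemExit("tests failed or coverage fell below 100% -- investigate")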

          As to the article itself, the methodology here seems rather… convenient to me:

          • Programs are mutated by replacing a conditional operator with a different one. This mutation does not affect coverage (except perhaps branch coverage, in exactly one case, if you’re replacing > with >= as they are here). It also hardly seems like a common case (see the sketch after this list).

          • The effectiveness of the test suite as a whole is determined by running random subsets of the tests and seeing if they catch the bug. This is absurd. Test suites are called test suites for a reason. The instant you remove arbitrary tests, you are no longer evaluating the effectiveness of the test suite, full stop. You are - obviously - evaluating the effectiveness of a random subset of the test suite. Who cares about that?
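
          To make the first bullet concrete, here is a hedged sketch (an invented example, not one from the paper) of such a mutant and why coverage can’t see it:

          def is_adult(age):
              if age > 18:             # mutant under study: '>' becomes '>='
                  return True
              return False

          def test_is_adult():
              assert is_adult(30)      # exercises the True branch
              assert not is_adult(5)   # exercises the False branch

          # Original and mutant execute exactly the same lines and branches, so line
          # and branch coverage are identical, and since nothing tests the boundary
          # (age == 18) the whole suite passes against the mutant too.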

          Am I missing something? In short, given this methodology, the only thing these results seem to say to me is: “running a random subset of a test suite is not a reliable way to detect random mutations that change one conditional operator to another”. I don’t think this is at all an indicator of overall test suite effectiveness.

          That said, I have not read the actual paper (paywall), and am assuming that the summary in the article is accurate.

          1.  

            I also find coverage extremely valuable for finding dead or unreachable code.

            I frequently find that unreachable code should be unreachable, e.g. error-handling for a function that doesn’t error when provided with certain inputs; this unreachable-by-design error handling should be replaced with panics since reaching them implies a critical bug. Doing so combines well with fuzz-testing.
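
            A small Python-flavoured sketch of that pattern (the names are invented, and an assertion stands in here for a “panic”):

            def scale(values, factor):
                # A hypothetical validate_request() upstream already rejects factor == 0,
                # so coverage reports this "error handling" branch as never executed.
                if factor == 0:
                    # Before: quietly handled, e.g. `return values`.
                    # After: reaching this line means validation is broken, so fail loudly
                    # (the closest Python analogue to a panic); fuzzing the public entry
                    # point will then surface any input that sneaks through.
                    raise AssertionError("factor == 0 slipped past validate_request()")
                return [v * factor for v in values]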

            It’s also useful for discovering properties of inputs. Say I run a function isOdd that never returns true and thus never allows a certain branch to be covered. I therefore know that somehow all inputs are even; I can then investigate why this is and perhaps learn more about the algorithms or validation the program uses.
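
            And a tiny sketch of the isOdd situation (again, hypothetical names):

            def is_odd(n):
                return n % 2 == 1

            def label(record_id):
                if is_odd(record_id):    # coverage report: this branch is never taken
                    return "odd"
                return "even"

            # If every record_id reaching label() across the whole suite leaves the "odd"
            # branch uncovered, you've learned a property of those ids (they're all even)
            # that is worth investigating.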

            In other words, good coverage helps me design better programs; it’s not just a bug-finding tool.

            This only holds true if I have a plethora of test cases (esp if I employ something like property testing) and if tests lean a little towards integration on the (contrived) “unit -> integration” test spectrum. I.e. only test user-facing parts and see what gets covered, and see how much code gets covered for each user-facing component.

            1.  

              This matches my experience very well. Good point that the sort of test suite is relevant here. I get the impression that the article is coming from more of a purist unit-testing perspective, but this dead code elimination thing is mostly useful when you have a pretty integrated test suite (I agree that this axis is largely contrived).

              I find it particularly nice for non-user-facing things with well-defined inputs and outputs like parsers, servers, and so on. If you have a test suite that mostly does the thing the software actually has to do (e.g. read this file with these options and output this file), in my experience, coverage exposes dead code a lot more often than you expect.

              This has the interesting side-effect that unit tests which only exist to cover internal code are actually harmful in a way, because something useless will still be covered.

              1.  

                I find it particularly nice for non-user-facing things with well-defined inputs and outputs like parsers, servers, and so on. If you have a test suite that mostly does the thing the software actually has to do (e.g. read this file with these options and output this file), in my experience, coverage exposes dead code a lot more often than you expect.

                I think it’s just fine, as long as it’s possible to turn them off and just run the subset of tests for public functions or user-facing code. I typically have a portable Makefile that includes make test-cov, make test, and make test-quick; if applicable, only make test needs to touch all test files.

            2. 1

              I have not read the actual paper (paywall)

              The PDF is on the linked ACM site: https://dl.acm.org/doi/pdf/10.1145/2568225.2568271 – I think you must have misinterpreted something or taken a wrong turn somewhere(?)

              Otherwise there is always that certain site run by a certain Kazakhstani :-)

              1. 1

                Paywalled in the typical ACM fashion as far as I can tell?

                That said, sure, there are… ways (and someone’s found an author copy on the open web now). I’m just lazy :)

                1. 1

                  Skimmed the paper. It seems the methodology summary in the article is accurate, and I stand by my critique of it. To be fair, doing studies like this is incredibly hard, but I don’t think the suggested conclusions follow from the data. The constructed “suites” are essentially synthetic, and so don’t really say anything about how useful of a quality metric or target coverage is in a real-world project.

                  1.  

                    Huh, I can just access it. I don’t know, ACM is weird at times; for a while they blocked my IP because it was “infiltrated by Sci-Hub” 🤷 Don’t ask me what that means exactly, quoting their support department.

                    1.  

                      Hm. Out of curiosity, do you have a lingering academic account, or are you accessing it via some institution’s network? I know I was surprised and dismayed when my magical “free” access to all papers got taken away :)

                      1.  

                        I only barely finished high school, and that was a long time ago. So no 🙃

                        Maybe they’re providing free access to some developing countries (Indonesia in my case), or they just haven’t fully understood carrier grade NAT (my IP address is shared by hundreds or thousands of people, as is common in many countries). Or maybe both. Or maybe it’s one of those “free access to the first n articles, paywall afterwards” things? I don’t store cookies by default (only whitelisted sites), so that could play a factor too.

                2.  

                  Identifying under-tested parts of a program seems like a pretty important part of a testing strategy to me.

                  My interpretation is that test coverage reports can be useful if you look at them in detail to identify specific areas in the code where you thought you were testing it but you were wrong.

                  But test coverage reports are completely useless if you just look at a percentage number on its own and say “the tests for project X are better than project Y because their number is higher”. We have a codebase at work with the coverage number around 80%, and having looked at it in detail, I can tell you that we could raise that number to 90% and get absolutely no actual benefit from it.

                1. 6

                  I have been on the lookout for an indentation-based language to replace Python for some time now as an introductory language to teach students. Python has too many warts (bad scoping, bad implementation of default parameters, a poorly-thought-out distinction between statements and expressions, comprehensions are a language within the language that make students’ lives difficult, and so on). Is Nim the best at this point in this space? Am I missing warts in Nim that just make the grass look greener on the other side? Anyone who has experience with both Nim and Python, can you tell me what the trade-offs are?

                  1. 9

                    I am uncomfortable with statements like (from this article) “if you know Python, you’re 90% of the way to knowing Nim.” The two languages are not IMO as similar as that. It’s sort of like saying “if you know Java, you’re 90% of the way to knowing C++.” Yes, there is a surface level syntactic similarity, but it’s not nearly as deep as with Crystal and Ruby. Nim is strongly+statically typed, doesn’t have list comprehensions, doesn’t capitalize True, passes by value not reference, has very different OOP, etc.

                    That said, there’s definitely evidence that Nim has a smooth learning curve for Pythonistas! This isn’t the first article like this I’ve read. Just don’t assume that whatever works in Python will work in Nim — you don’t want to be like one of those American tourists who’s sure the locals will understand him if he just talks louder and slower :)

                    So yes, Nim is excellent. It’s quite easy to learn, for a high performance compiles-to-machine-code language; definitely easier than C, C++ or Rust. (Comparable to Go, but for various reasons I prefer Nim.) When programming in it I frequently forget I’m not using a scripting language!

                    1. 2

                      Thank you for your perspective. Much appreciated.

                      1. 1

                        passes by value not reference

                        The terminology here is very muddied by C, so forgive me if this sounds obvious, but do you mean that if you pass a data structure from one function to another in Nim, it will create a copy of that data structure instead of just passing the original? That seems like a really odd default for a modern language to have.

                        1. 3

                          At the language level, it’s passing the value not a reference. Under the hood it’s passing a pointer, so this isn’t expensive, but Nim treats function arguments as immutable, so it’s still by-value semantically: if I pass an array or object to a function, it can’t modify it.

                          Obviously you don’t always want that. There is a sort-of-kludgey openarray type that exists as a parameter type for passing arrays by reference. For objects, you can declare a type as ref which makes it a reference to an object; passing such a type is passing the object by reference. This is very common since ref is also how you get dynamic allocation (with GC or more recently ref-counting.) It’s just like the distinction in C between Foo and *Foo, only it’s a safe managed pointer.

                          This works well in practice (modulo some annoyance with openarray which I probably noticed more than most because I was implementing some low-level functionality in a library) … but this is going to be all new, important info to a Python programmer. I’ve seen this cause frustration when someone approaches Nim as though it were AOT-compiled Python, and then starts either complaining or asking very confused questions on the Nim forum.

                          I recommend reading the tutorial/intro on the Nim site. It’s well written and by the end you’ll know most of the language. (Even the last part is optional unless you’re curious about fancy stuff like macros.)

                          (Disclaimer: fate has kept me away from Nim for about 6 months, so I may have made some dumb mistakes in my explanation.)

                          1. 4

                            Gotcha; I see. I wonder if it’d be clearer if they just emphasized the immutability. Framing it in terms of “by value” opens up a big can of worms around inefficient copying. But if it’s just the other function that’s prevented from modifying it, then the guarantee of immutability isn’t quite there. I guess none of the widely-understood terminology from other languages covers this particular situation, so some new terminology would be helpful.

                      2. 5

                        Nim is pretty strongly typed; that is certainly different from Python. I’m currently translating something with Python and TypeScript implementations, and I’m mostly reading the TypeScript because the typing makes it easier to understand. With Nim you might spend time working on typing that you wouldn’t do for Python (or not, Nim is not object oriented), but it’s worth it for later readability.

                        1. 4

                          Nim is less OO than Python, but more so than Go or Rust. To me the acid test is “can you inherit both methods and data”, and Nim passes.

                          Interestingly you can choose to write in OO or functional style, and get the same results, since foo(bar, 3, 4) is equivalent to foo.bar(3, 4).

                          IIRC, Nim even has multimethods, but I think they’re deprecated.

                          1. 1

                            what? don’t you mean foo(bar, 3, 4) and bar.foo(3, 4)? AFAIK the last token before a parenthesis is always invoked as a function.

                            1. 1

                              what? don’t you mean foo(bar, 3, 4) and bar.foo(3, 4)? AFAIK the last token before a parenthesis is always invoked as a function.

                              1. 1

                                Oops, you’re right!

                          2. 4

                            Python has too many warts (bad scoping, bad implementation of default parameters

                            I don’t want to sound like a Python fanboy, but those reasons are very weak. Why do you need to explore the corner cases of scoping? Just stick to a couple of basic styles. Relying on many scoping rules is a bad idea anyway. Why do you need default parameters at all? Many languages have no support for default parameters and do fine. Just don’t use them if you think their implementation is bad.

                            Less is more. I sometimes flirt with the idea of building a minimal indentation-based language with just a handful of primitives, just as a proof of concept of the practicality of something very simple and minimal.

                            1. 6

                              At least for python and me, it’s less a matter of exploring the corner cases in the scoping rules and more a matter of tripping over them involuntarily.

                              I only know three languages that don’t do lexical scoping at this point:

                              1. Emacs Lisp, which does dynamic scoping by default for backwards compatibility but offers lexical scoping as an option and strongly recommends lexical scoping for new code.

                              2. Bash, which does dynamic scoping but kind of doesn’t claim to be a real programming language. (This is wrong but you know what I mean.)

                              3. Python, which does neither dynamic nor lexical scoping, very much does claim to be a real programming language, and has advocates defending its weird scoping rules.

                              I mean, access to variables in the enclosing scope has copy on write semantics. Wtf, python?

                              (Three guesses who started learning python recently after writing a lexically scoped language for many years. Thank you for indulging me.)

                              1. 4

                                It is weirder than copy on write. Not tested because I’m on my iPad, but given this:

                                x = 1
                                def f(cond):
                                   if cond:
                                      x
                                   x = 2
                                

                                f(False) does nothing, but f(True) will throw an UnboundLocalError (the local x is referenced before it’s assigned).

                                1. 4

                                  I think you need nonlocal x but I don’t quite get why this is weird/nonlexical.

                                  It has lexical scoping but requires you to mark variables you intend to modify locally with ‘nonlocal’ or ‘global’ as a speed bump on the way to accidental aliasing. I don’t think I’d call Python “not lexically scoped”.

                                  1. 3

                                    Have you tried declaring a variable inside an if?

                                    if True:
                                        X = 1
                                    print(X)
                                    
                                    1. 1

                                      Yeah, if doesn’t introduce scope. Nonlexical scope doesn’t IMO mean “there exist lexical constructs that don’t introduce scope”, it is more “there exist scopes that don’t match any lexical constructs”

                                      1.  

                                        I just learned the idea of variable hoisting thanks to this conversation. So the bizarre behavior in carlmjohnson’s example can be understood as the later assignment declaring a new local variable that comes into scope at the start of the function, because Python scopes variables to the whole function body rather than to the enclosing expression or block.

                                        I guess I’ve been misusing “lexical scope” to mean expression-level lexical scope.

                                        I still find the idea of function-level scope deeply unintuitive, but at least I can predict its behavior now. So, thanks!

                                        1.  

                                          Yeah I’m not a huge fan either tbh, but I guess I’ve never thought of it as weird cause JavaScript has similar behavior.

                                    2. 2

                                      I agree. This is more of a quirk due to python not having explicit variable declaration syntax.

                                      1.  

                                        It’s multiple weird things. It’s weird that Python has† no explicit local variable declarations, and it’s weird that scoping is per function instead of per block, and it’s weird that assignments are hoisted to the top of a function.

                                        † Had? Not sure how type declarations make this more complicated than when I learned it in Python 2.5. The thing with Python is it only gets more complicated. :-)

                                        Different weird thing: nonlocal won’t work here, because nonlocal only applies to functions within functions, and top level variables have to be referred to as global.
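
                                        A quick sketch of that global/nonlocal distinction:

                                        x = 1            # module level

                                        def outer():
                                            y = 1        # enclosing-function level
                                            def inner():
                                                global x     # module-level names need 'global'...
                                                nonlocal y   # ...'nonlocal' only reaches an enclosing function's scope
                                                x = 2
                                                y = 2
                                            inner()
                                            return y     # 2

                                        outer()
                                        print(x)         # 2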

                                  2. 3

                                    JavaScript didn’t have it either until the recent introduction of declaration keywords. It only had global and function (not block) scope. It’s much trickier.

                                    But I am puzzled by why/how people stumble upon scoping problems. It doesn’t ever happen to me. Why do people feel the urge to access a symbol in a block outside the one where it was created? If you just don’t do it, you will never have a problem, in any language.

                                    1.  

                                      For me it’s all about closures. I’m used to using first class functions and closures where I suspect an object and instance variables would be more pythonic.

                                      But if you’re used to expression level lexical scope, then it feels very natural to write functions with free variables and expect them to close over the thing with the same name (gestures upward) over there.

                                      I’m curious, do you use any languages with expression-level scope? You’re not the first Python person I’ve met who thinks Python’s scoping rules make sense, and it confuses me as much as my confusion seems to confuse you.

                                      1.  

                                        I don’t need to remember complicated scoping rules because I don’t ever use a symbol in a block higher up in the tree than the one it is defined in. Nor do I understand the need to re-assign variables, let alone re-use their names. (Talking about Python now.) Which languages qualify as having expression-level scope? Is that the same as block scope? So… Java, modern JavaScript, C#, etc.?

                                        I am confused. What problems does python pose when using closures? How is it different than other languages in that respect?

                                2. 3

                                  Latest release of Scala 3 is trying to be more appealing to Python developers with this: https://medium.com/scala-3/scala-3-new-but-optional-syntax-855b48a4ca76

                                  So I guess you could make it an option.

                                  1. 2

                                    Thanks, this certainly looks interesting. Would it make a good introductory language, though? By which I mean that I want to explain a small subset of the language to the pupil, and that restricted language should be sufficient to achieve fairly reasonable tasks. The student should then be able to pick up the advanced concepts of the language by self-exploration (and those implementations should be wart-free; for example, I do not want to explain again why one shouldn’t use an array as a default parameter value in Python).
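
                                    For reference, the default-parameter wart being alluded to, in its classic form (the function name is mine):

                                    def append_item(item, bucket=[]):   # the default list is created once,
                                        bucket.append(item)             # when the function is defined...
                                        return bucket

                                    print(append_item(1))   # [1]
                                    print(append_item(2))   # [1, 2] -- ...so calls without the argument
                                                            # share and mutate that same list

                                    # The workaround students then have to be taught instead:
                                    def append_item_fixed(item, bucket=None):
                                        if bucket is None:
                                            bucket = []
                                        bucket.append(item)
                                        return bucket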

                                    1. 2

                                      There is no such thing as a programming language that is “wart free”, and while initially you want to present any language as not having difficulties or weirdness, in the long run you do need to introduce this to the student otherwise they will not be prepared for “warts” in other languages.

                                  2. 1

                                    Depending on what you’re trying to teach, Elm does fit your description of an indentation-based introductory language for teaching students. I know there’s a school that uses Elm for teaching kids how to make games, so it definitely has precedent for being used in education too. Though if you’re looking to teach things like file IO, HTTP servers, or other back-end-specific things, then it’s probably a poor choice.

                                  1. 8

                                    Anyone who’s worked at AWS knows everything is constantly on fire, but they do manage to keep blast radius small enough and overwork their on-calls enough that the chaos is rarely visible to customers.

                                    1. 1

                                      How the heck is this viable to them?

                                      1. 4

                                        It’s what AWS users pay Amazon for, right? Hardware fails, software has bugs, things will catch fire (figuratively or literally). We pay AWS so that Amazon’s workers take care of all that and we don’t have to think about it too much.

                                        1. 1

                                          It’s just fascinating to me that such a process hasn’t been streamlined at this point, I guess.

                                          1. 4

                                            Amazon’s whole thing is basically to shave the margins down to nothing and grease the wheels with human misery. It’s working as designed as far as I can tell.

                                            1.  

                                              There’s some truth there but this statement also misses the forest for the trees.

                                            2. 1

                                              What is streamlined?

                                              1. 1

                                                Not have things on fire all the time?

                                                1. 2

                                                  I don’t think you can “streamline away” disk failures, RAM failures, power supply failures, datacenter cooling system failures, Internet connection failures, and all the other kinds of messy failures which occur when working with vast amounts of physical hardware. And they’re not in control of the software they run for the most part; companies like Netflix can design intelligent systems where a whole bunch of nodes can fail at the same time and other nodes seamlessly take over for the failed nodes, and some workers can take their time to fix the failed nodes whenever it’s most convenient. But that requires fancy distributed software, and one of the core abstractions Amazon provides is that of one highly reliable Linux computer with a fixed, large hard drive and a fixed IP address which never shuts down, and that seriously limits what you can do to engineer your way around downtime caused by hardware failures.

                                                  I’m not an expert in this by any means, it would be interesting to hear more specific details from someone who has done operations work for a cloud provider. But it doesn’t seem that difficult to me to imagine why what AWS is doing is a hard problem to do cleanly.

                                      1. 6

                                        From my reading of the changes, this seems more like a problem with how JodaTime deals with the changes than with the actual changes themselves. If this were done without warning I would understand the author’s complaints, but AIUI these changes have been contemplated and publicised for a while now.

                                        1. 9

                                          Yep!

                                          Technically, the data has moved, not been deleted. But the file containing the moved data is never normally used by downstream systems, thus to all intents and purposes it has been deleted.

                                          So the data is still there, but he’s just going to act like it’s completely vanished because the place it’s moved to is … a file he previously didn’t use.

                                        1. 13

                                          They’re missing the most important (to me) one! Plugins are nerfed in terms of what keys they’re allowed to bind. For “security reasons” it’s impossible for a plugin in Chrome’s extension system (which Firefox tragically copied) to bind to ctrl-n, ctrl-t, or ctrl-p, all critical Emacs shortcuts. So plugins are more or less completely useless for building an Emacs-like browser.

                                          1. 8

                                            True, plugins in the major browsers have been neutered.

                                            Both Chromium and Firefox can be patched to get rid of the keybinding restrictions. Firefox can even be hot patched from the binary so you don’t have to rebuild anything to free up the reserved key bindings.

                                            But yes, it’s sad that there isn’t a better way.

                                            1. 3

                                              Holy smokes, that hot patch is amazing.

                                              If I had found that four years ago I would have saved myself a lot of pain. (In the end I instead switched to a window manager which can remap all keystrokes from Emacs keys to “conventional keys” before the browser even sees them so I don’t have a need for that any more.) But I admire it all the same.

                                              1. 5

                                                Ooh, I’d love to hear more about this. I’m 100% Linux these days but if there’s something that I do miss from macOS it’s that the default readline/Emacs keybindings for cursor navigation work in every text-like GUI element.

                                                1. 9

                                                  Oh man, EXWM’s simulation keys changed my life. I can’t sing their praises highly enough. I went from “once every twenty minutes I want to throw my laptop out the window” to “hey this is great” instantly: https://technomancy.us/184

                                                  I use it mostly to make Firefox bearable again but it works in any X program.

                                                  1. 3

                                                    This is a great fix, but the change in Firefox extensions architecture got rid of lots of other interesting possibilities that made Vimperator / Pentadactyl a really immersive user experience.

                                                    And I say this as an Emacs fanboy who learned vi just to enjoy Vimperator. It was really good, but sadly it’s gone.

                                            2. 3

                                              Every time someone links this WRT Firefox and WebExtensions, I have to link this post because it does a good job at explaining why.

                                              1. 8

                                                I don’t begrudge them removing XUL; it needed to die.

                                                I just wished they replaced it with something actually good.

                                                1. 5

                                                    I agree that XUL had to die; I just wish they hadn’t used that as an opportunity to slam the door on full-access extensions. I also think WebExtensions are a good idea, but I don’t think full-access extensions should have been fully killed.

                                                  My preferred approach would be having two types of extensions:

                                                  • WebExtensions support permissions and have API stability guarantees.
                                                  • Full-access extensions always have root, may break every release (and the Firefox devs won’t feel bad about it) and come with big scary warnings on add-ons.mozilla.org.

                                                  Basically the API for full-access would just be “you may run arbitrary JS in the main process”. Following API changes and not breaking things is the problem of the extension author.

                                                    I think this provides enough reason to prefer WebExtensions where they work and limits the maintenance burden for Firefox devs, while still allowing for truly powerful extensions.

                                                    Honestly, without the old style of extensions I see very little meaningful difference between Firefox and Chrome. The only reason I stick with FF is to resist the Google web monopoly.

                                              1. 6

                                                What do people use for blocking ads on Nyxt? I don’t use many plugins but ublock origin is an important one.

                                                1. 5

                                                  I briefly tried out qutebrowser and luakit a while back and this was a big problem. They both had ad blockers but nothing nearly as good as ublock origin, so I’m back on Firefox.

                                                  1. 2

                                                      Same issue for me. blocker-mode on Nyxt is good enough for general use, but not for ad-heavy parts of the Web of Lies. The GitHub issue on blocking talks about upcoming support for WebExtensions, aimed at running uBlock Origin. But I’d rather see a uBlock-compatible extension in Lisp.

                                                  2. 2

                                                    I use an /etc/hosts based blocklist and it suits me well. Although it’s not perfect it does the job good enough without interfering with my browser at all.

                                                    1. 1

                                                      If you want a more ready-to-go solution than pihole:

                                                      I use nextdns.io as a DNS provider and enabled all the built-in blocklists. I can then selectively add some domains to the allowlist when I see something not working correctly (e.g. the Google Photos CDN), then toggle it back off when done. This works pretty well, and across all devices, browsers, etc.: no add-on needed per browser, just change the DNS server on each device. You can also use their app, but I prefer the manual DNS server config.

                                                      The free plan allows up to 300K DNS queries per month, I believe, and they also show you analytics. For example, on my phone (Android) roughly 53% of queries are blocked (ads) versus only 7% on a laptop. It was pretty evident how much of a spy device a phone is! So I don’t mind paying a few dollars when I exceed 300K, which doesn’t seem likely anytime soon.

                                                      1. 1

                                                        This solution isn’t too bad, but I also like ublock origin for being able to do things like remove ads from YouTube videos. I also have some custom filter lists that I need to occasionally turn off to use websites, and that really needs to be a one-click operation for me.

                                                        Still a great idea for my phone, so I’ll give it a shot!

                                                    1. 4

                                                      Man, there’s so many people that don’t read anything.

                                                      I currently fill the role of IT and it’s absolutely bonkers the stuff people get stuck on. It’s so basic, and the screen is telling you what’s wrong, but people aren’t even parsing it.

                                                      1. 4

                                                        The irony here is that the article claims that people actually do spend a significant amount of their time reading compiler errors.

                                                        1. 3

                                                          The bonus irony is that it’s discussing a paper that, guess what, no one will read! Because it links to a paywalled version of it instead of the author’s copy available freely on their web site: https://people.engr.ncsu.edu/ermurph3/papers/icse17.pdf

                                                      1. 6

                                                        My latest company which hired me however mandated a company MacBook

                                                        Is this common? I have been fortunate to use Linux at every tech company I have worked with over the last 16 years or so. It was a pretty big deal when they first started allowing Linux and Macs as options to Windows, but every one since has allowed Linux.

                                                        Granted, I may be self-selecting somehow. After all, an opportunity with the comment ‘Every developer gets a top-of-the-line MacBook!’ would probably not entice me to apply.

                                                        1. 3

                                                          It’s pretty common for compliance reasons; my last employer required this and my current employer just sent me a macbook a few months ago. They make us keep all our private code on it so the day it arrived I installed virtualbox on it and set up port forwarding for SSH and stuck it in a closet. I basically never touch it except to do OS upgrades which for some reason can’t be accomplished over SSH.

                                                          1. 3

                                                          I wish Linux worked for our developers. We’ve adopted a mandated MacBook too. People who’ve chosen Linux have generally had a much more difficult time getting their systems to work. The drivers are too finicky, and the companies behind the tools we need for video conferencing haven’t chosen to support them on Linux.

                                                            1. 1

                                                          Most companies I worked for gave you the choice of either a ThinkPad with Linux or a MacBook, but I’ve heard from friends about the “here’s your MacBook” approach, with varying degrees of possibility to change it.

                                                          Interestingly, a lot of stuff I’ve been working with simply doesn’t work on Windows, so I’ve been pretty safe from “here’s your Windows laptop”, but friends who do consulting/work at customers for a few weeks/months have told enough stories of getting handed one of those that I’d take a MacBook over that any day.

                                                              1. 1

                                                          Is this common? I have been fortunate to use Linux at every tech company I have worked with over the last 16 years or so. It was a pretty big deal when they first started allowing Linux and Macs as options to Windows, but every one since has allowed Linux.

                                                                very, very common esp. with Bay Area and/or start-upy companies. My last 3 jobs always had work mandated MacBooks. The other option was Windows and I def. never want to go back to that.

                                                              1. 6

                                                                Oooooo, I should join this. By mid-October I will probably be ready for a break from programming language design stuff again, and ten days is an intriguing length of time for a game jam. More than the 48-hour sprints that I’m used to, but less than a month-long endurance run.

                                                                I am intrigued by the mention of Janet+Raylib, but Rule One of a game jam is “don’t use it to learn totally new technology”, so I think that this time Fennel+Love2D is a better choice. I, uh, still don’t know Fennel, but I know Lua vaguely well and I’m very familiar with Love2D, so it will hopefully take a lot less time to get myself up to speed. Anyone have any good suggestions for an ECS lib written in Lua or Fennel, or are such things less necessary in a dynamically typed language?

                                                                1. 2

                                                                  I, uh, still don’t know Fennel, but I know Lua vaguely well

                                                                  You should have no trouble picking up Fennel. If you have some example Lua code and you want to see what it would look like in Fennel, there’s a reverse compiler available: https://fennel-lang.org/see (It might not always give an idiomatic result, for instance no pattern matching or destructuring, but it should be enough to give you the general gist.)

                                                                  Anyone have any good suggestions for an ECS lib written in Lua or Fennel, or are such things less necessary in a dynamically typed language?

                                                                  There’s a bunch of entity-component systems in Lua that you can use trivially from Fennel, including one (written in Lua) by the original author of Fennel: https://github.com/bakpakin/tiny-ecs

                                                                  However, for the type of game that is typically created during a game jam, an ECS usually doesn’t provide enough benefit to justify its overhead.

                                                                1. 71

                                                                  I think it depends heavily on who you are trying to teach. I strongly believe the goal should be to ignite curiosity and empower someone to follow where that curiosity leads them. For one person, this might be writing some C to run on a microprocessor to read data from a sensor. For another, it might be building a Django app. Figuring that out is the hard part, in my opinion. Once you do that, things will fall into place for a while.

                                                                  1. 15

                                                                    Adding to that, motivation is always key when learning anything, and working on stuff you want to work on is the best way to be motivated.

                                                                    1. 8

                                                                      A corollary here is that the easier it is to find an engaging project to write in a language, the easier it is to learn a language. That’s why I feel like the best way to proceed when teaching kids is to start with making a game. Of course, every kid is different; some kids may be able to sustain interest just on the basis of a fascination with logic alone (like Bryan Cantrill in the OP) but if you assume everyone thinks this way going in you’re just going to end up with frustration. Building a game is a starting point that will resonate with a much wider audience!

                                                                      1. 3

                                                                        Some random musings:

                                                                        My first program was to sort the list of games I had; every floppy disk we had contained a bunch of games, something like 10-20 of them. And “I want to play game foo” meant going through the hundreds of disks we had to find it. I numbered the disks and wrote down which games it had, but a handwritten notebook isn’t actually all that much easier, whereas a sorted list is.

                                                                        I remember typing ZZZZZZZZZZZZZ at some point; it was slow as hell, probably somewhere between O(omg) and O(fucked). But it did work and solved a real practical problem I had, and a 12-year old me was very proud of it and happy.

                                                                          In spite of all the new languages, the internet, a plethora of books and courses, etc. I sometimes feel that getting started is, in a way, harder today than it was when I was young all those years ago. You just don’t really run into these kinds of fairly simple practical problems any more: you’d just use a spreadsheet or something like that today. Of course that’s not a bad thing as such, but it does mean there are far fewer problems to solve. I sometimes wonder if I ever would have gotten into programming if I had been born 25 years later.

                                                                          After we got a “proper” Windows 95 PC I stopped programming, as I didn’t really know how to and it was so much harder to get started. The MSX just booted in a BASIC environment and you could program. I didn’t know about Python or Perl and such, and mucking about with Visual Studio was complicated and I didn’t really understand it (it didn’t help that the “Teach yourself C++ in 10 minutes”-book I got from the local bookstore was beyond horrible). I got back into it a few years later by making some mods for Unreal and Deus Ex (which resulted in avoiding OOP, and inheritance in particular, for years to come), and after I started using Linux and FreeBSD I discovered Perl and Python and that you didn’t need an expensive (pirated, of course) copy of Visual Studio to program on modern machines. Things have certainly improved in that area.

                                                                        1. 3

                                                                          I have a similar story where I got started on QBasic and loved the immediacy but I hated DOS. When I learned to program a “proper” GUI for my Mac I got so bummed out by the tedium that I stopped programming for years.

                                                                          That’s part of why I’m excited about environments like TIC-80. They may never ship with the OS, but it has all the immediacy and “batteries included” and none of the “faff around with the environment before you can get it working” problems that most beginners face nowadays.

                                                                        2. 3

                                                                          A corollary here is that the easier it is to find an engaging project to write in a language, the easier it is to learn a language.

                                                                          For some people (i.e. me) this can be tricky. My first steps in programming were in Basic on a ZX Spectrum. Once we got a PC, it was natural to continue with QBasic or something like that there. But I actually stopped for a while then. Because while I could write programs, they would need to run in the Basic interpreter. And therefore, they were not “real” programs, like the other ones I had on the PC. And if it was not “real”, then I didn’t want to waste time on it. It felt more like making toys, in a playground called Basic.

                                                                          Thinking about it now, that was apparently the first manifestation of the programming me that wants to avoid lock-ins and resents unnecessary dependencies.

                                                                      2. 8

                                                                      This is roughly what I came to say. I dropped out of a C++-focused CS program because the projects weren’t engaging enough to be worth suffering through while working nights.

                                                                      Later, I crawled through glass to pick up PHP for some projects that interested me.

                                                                        I imagine not everyone has the luxury of finding that driving project to learn through… But I still suspect it’s best to follow curiosity if at all possible?

                                                                        1. 4

                                                                          Same - I dropped out of my C++ classes, in fact I despised coding until ten years after college. A combination of luck and opportunity changed my life, and I learned Rails. I actively despise the approach in Rails apps now, but I feel like for the place and time learning a web framework which forced me to “just build a cool website” was a life changing and positive experience. I’m now working for a big tech company and life is more or less ok.

                                                                          1. 3

                                                                            I have a very similar story. Grew up with BASIC, “learned” C++/Java at university, hated that, stopped programming for maybe 5 years, got sucked back in by Python, became a professional programmer nine years after graduating with a CS degree.

                                                                          2. 1

                                                                            Just great…. C++ is my next class!!!! I still keep asking myself if I enjoy programming or not, I’ve only written one “real” program in my entire life and it was in BASIC back in the Commodore 64 days. I was an avid radio scanner buff (still am to this day) and spent I don’t know how many painstaking hours writing out a database from a copy of Police Call all so I could query my computer vs. look it up in a book!

                                                                            I have personally struggled in this space; I’ve spent countless hours and money on books, courses, etc. trying to learn C/C++, but it just doesn’t jive with me for whatever reason, and it seems like I’m not alone. Perhaps it will “click” for me someday.

                                                                            1. 1

                                                                              I mean, I wouldn’t say it has to be bad.

                                                                              It wasn’t the C++ that I didn’t like, it was being run ragged by long debug sessions on things I didn’t care about like a program to play tic-tac-toe. If I had cared about programming or the language for its own sake, that might have changed the dynamic a little.

                                                                              ~7 years later (after the PHP project opened my eyes a bit to how code could fit into my projects) the language I ended up really cutting my teeth on was LPC, a C-alike used to write LD/LPMuds. It took a while for things to really click, so it helped to be working on something I found fun.

                                                                          3. 5

                                                                      Adding to that, it is good to begin solving a real problem from the get-go. Working on an imaginary problem may not spark authentic and sustainable curiosity.

                                                                          1. 7

                                                                            Has anyone submitted anything in the past? What’s the most comfortable framework for making games in Lisp/Scheme/etc? I ask as a curious person who is looking for a hygienic framework for creating games in. My issue is finding something that’s easy to cross-compile for other platforms (eg. I use Linux, but would want to compile for Windows friends). I have been mulling over Lua/love2d for a while but never committed since I would prefer a Lisp environment.

                                                                            1. 13

                                                                              You might like https://fennel-lang.org/

                                                                              1. 15

                                                                                I have participated in every one since 2018 using Fennel. The easiest way to get started IMO is using TIC-80 which has built-in support for Fennel and lets you publish your games to play in the browser, so no downloads are required: https://tic80.com

                                                                                Targeting love2d is another popular choice tho. It’s a lot more complex than TIC-80 but it also offers a lot more flexibility: https://love2d.org It’s harder to get love2d games to run in the browser but still really easy to cross-compile from Linux to Windows.

                                                                                (disclaimer: Fennel lead developer here)

                                                                                1. 4

                                                                                  There’s even a starter kit for Love2D and Fennel

                                                                                  https://gitlab.com/alexjgriffith/min-love2d-fennel

                                                                              2. 7

                                                                                Hello, jam organizer here. There have been lots of submissions from a variety of different Lisps in the past. You can check out the past jams on our wiki. Fennel seems to be a popular choice each jam, but I can’t say much more than that, as I am a diehard Common Lisp user :) For Common Lisp, there are a lot of partial solutions, as most people seem to be focused on building extremely general game engines rather than focused engines for a particular game/genre. This is expected in a way, as Common Lisp is extremely performant, and there is no reason we need to be confined to C++ etc. with Unity/Unreal/Godot… it just isn’t there yet, though. I have been working towards that for a good 10 years now… but nothing worth announcing yet… anyway, have fun regardless of which dialect you decide on!

                                                                                1. 1

                                                                                  Thank you! I appreciate the insight. I have indeed looked at Godot in the past but felt it was too much for me to take on. I definitely want to try something out in Common Lisp, so the wiki you linked looks like a great place for me to start doing some research. Thank you for organizing this event!

                                                                                2. 5

                                                                                  For 2D in Common Lisp, popular choices are Sketch and trivial-gamekit (apologies for the self-plug). As the author of the latter framework, I use Travis/AppVeyor/GitHub Actions CI to make builds for different platforms. If there’s any interest, I can probably put together a GitHub Action for building gamekit-based stuff for Linux and Windows. Otherwise, there are existing examples of how to do that with Travis and AppVeyor.

                                                                                  1. 4

                                                                                    Thank you! I don’t mind the self-plug, in fact I’m more inclined to check out your project for responding to my question! I will totally look into your library to see how it works now!

                                                                                  2. 5

                                                                                    Perhaps you might like CHICKEN; it is straightforward to compile static binaries, and cross-compiling to Windows (mingw) from Linux is also supported. There’s hypergiant, a game development toolkit, and on IRC you’ll find a few people interested in game writing too. In CHICKEN 4 we used to have a love2d-inspired framework called doodle, which shouldn’t be too hard to port to CHICKEN 5.

                                                                                    1. 2

                                                                                      I have been doing a bit of practice with Chicken and trying to get familiar with that environment. The cross-compiling is very appealing to me and I was actively looking at hypergiant. I might have to give that another shot!

                                                                                  1. 12

                                                                                    As with Gmail, I believe Flow is the only browser engine written after Google Docs that can run Google Docs.

                                                                                    This is pretty impressive; I would not have expected this to be possible without millions of dollars and years of work. Though I guess since it’s a proprietary codebase we can’t know much about how it was built beyond what’s on the blog.

                                                                                    1. 9

                                                                                      Yeah, I’m quite surprised by this, especially given that the HTML5 standards are now basically of the form “implement exactly what Chrome does based on this pseudocode”. I’m guessing that they’ve worked quite hard to get Google Docs and GMail working, but that doesn’t mean that equally complex web apps that exercise different areas of the web platform standards also work.

                                                                                      1. 15

                                                                                        It’s basically the same problem that the Wine project faced in the early 2000s - things people cared about might work ok, but it was pretty easy to find reasonably modern software that wouldn’t work at all. The more work that goes into something like this, the more sites start to work properly, but I think it’ll look like an S-curve.

                                                                                        1. 1

                                                                                          I think the newer versions of the Google apps mostly just use canvas, which is basically a PostScript drawing model API. I’ve been wondering for a little while at what point it will make sense to implement canvas on top of WebGPU and all of the DOM things on top of canvas. Firefox uses a PDF viewer that’s written in JavaScript, which means that it benefits from everything done to sandbox JavaScript. It might make sense to do the same thing with the rest of the traditional web stack. In modern browsers, calling from JavaScript into DOM methods is increasingly a bottleneck. If the DOM were just a JavaScript construct on top of canvas then that wouldn’t be a problem and DOM calls could even be inlined. That kind of approach would make it possible to share the implementations of most of a browser between lightweight rendering engines and would also make it easier to deprecate things: if you want an old version of various specs, just import the JavaScript implementations of them.

                                                                                          1. 1

                                                                                              I think some of the newer parts of the standard are like that (Bluetooth, whatever), but core HTML5, CSS, and JS were based on multiple browsers.

                                                                                            GMail works fine in Firefox, which is a totally different rendering engine. Google Docs tends to be slow so I switch to Chrome for that, but presumably it works too. Google Maps works, etc.

                                                                                            I assume GMail has to work in Safari too … Apple’s slow development of Safari is probably helping efforts like these.

                                                                                        1. 2

                                                                                          One time as a prank I set my co-worker’s bash $PS1 to C:\>

                                                                                          Setting this as their font in their terminal would have been a great companion prank.

                                                                                          1. 13

                                                                                            Genuine comment (never used Nix before): is it as good as it seems? Or is it too good to be true?

                                                                                            1. 51

                                                                                              I feel like Nix/Guix vs Docker is like … do you want the right idea with not-enough-polish-applied, or do you want the wrong idea with way-too-much-polish-applied?

                                                                                              1. 23

                                                                                                Having gone somewhat deep on both this is the perfect description.

                                                                                                    Nix as a package manager is unquestionably the right idea. However, Nix the language made some choices that turned out to be regrettable in practice.

                                                                                                Docker works and has a lot of polish but you eat a lot of overhead that is in theory unnecessary when you use it.

                                                                                              2. 32

                                                                                                It is really good, but it is also full of paper cuts. I wish I had this guide when learning to use nix for project dependencies, because what’s done here is exactly what I do, and it took me many frustrating attempts to get there.

                                                                                                Once it’s in place, it’s great. I love being able to open a project and have my shell and Emacs have all the dependencies – including language servers, postgresql with extensions, etc. – in place, and have it isolated per project.

                                                                                                1. 15

                                                                                                      The answer depends on what you are going to use Nix for. I use NixOS as my daily driver, running a boring Plasma desktop, and I’ve been using it for about 6 years now. Before that, I used Windows 7, a bit of Ubuntu, a bit of macOS, and Arch. For me, NixOS is a better desktop than any of the others, by a large margin. Some specific perks I haven’t seen anywhere else:

                                                                                                      NixOS is unbreakable. When using Windows or Arch, I was re-installing the system from scratch a couple of times a year, because it inevitably got into a weird state. With NixOS, I never have to do that. On the contrary, the software system outlives the hardware. I’ve been using what feels like the same instance of NixOS on six different physical machines now.

                                                                                                      NixOS allows messing with things safely. That’s a subset of the previous point. In Arch, if I installed something temporarily, it inevitably left some residue on the system. With NixOS, I install random one-off software all the time, I often mix stable, unstable, and head versions of packages, and that just works and is easily rolled back via an entry in the boot menu.

                                                                                                  NixOS is declarative. I store my config on GitHub, which allows me to hop physical systems while keeping the OS essentially the same.

                                                                                                  NixOS allows per-project configuration of environment. If some project needs a random C++ package, I don’t have to install it globally.

                                                                                                  Caveats:

                                                                                                  Learning curve. I am a huge fan of various weird languages, but “getting” NixOS took me several months.

                                                                                                  Not everything is managed by NixOS. I can use configuration.nix to say declaratively that I want Plasma and a bunch of applications. I can’t use NixOS to configure plasma global shortcuts.

                                                                                                  Running random binaries from the internet is hard. On the flip side, packaging software for NixOS is easy — unlike Arch, I was able to contribute updates to the packages I care about, and even added one new package.

                                                                                                  1. 1

                                                                                                        NixOS is unbreakable. When using Windows or Arch, I was re-installing the system from scratch a couple of times a year, because it inevitably got into a weird state. With NixOS, I never have to do that. On the contrary, the software system outlives the hardware. I’ve been using what feels like the same instance of NixOS on six different physical machines now.

                                                                                                    How do you deal with patches for security issues?

                                                                                                    1. 8

                                                                                                          I don’t do anything special, just run the “update all packages” command from time to time (I use the rolling-release version of NixOS, misnamed as unstable). NixOS is unbreakable not because it is frozen, but because changes are safe.

                                                                                                      NixOS is like git: you create a mess of your workspace without fear, because you can always reset to known-good commit sha. User-friendliness is also on the git level though.

                                                                                                      1. 1

                                                                                                            Ah, I see. That sounds cool. Have you ever found an issue when updating a package, rolled back, and then taken the trouble to sift through the changes to take the patch-level changes but not the minor or major versions, etc.? Or do you just try updating again after some time to see if somebody fixed it?

                                                                                                        1. 4

                                                                                                              In case you are getting interested enough to start exploring Nix, I’d personally heartily recommend also exploring the Nix Flakes “new approach”. I believe it fixes most pain points of “original” Nix, with two exceptions not addressed by Flakes: secrets management (which will have to wait for a different time), and documentation quality (which for Flakes is at an even poorer level than that of “Nix proper”).

                                                                                                          1. 2

                                                                                                                I didn’t do exactly that, but when I was using the non-rolling release, I combined the base system, with its older packages, with a couple of packages I kept up to date manually.

                                                                                                    2. 9

                                                                                                      It does what it says on the box, but I don’t like it.

                                                                                                      1. 2

                                                                                                        I use Nixos, and I really like it, relative to how I feel about Unix in general, but it is warty. I would definitely try it, though.

                                                                                                      1. 31

                                                                                                        It’s odd to see C described as boring. How can it be boring if you’re constantly navigating a minefield and a single misstep could cause the whole thing to explode? Writing C should be exhilarating, like shoplifting or driving a motorcycle fast on a crowded freeway.

                                                                                                        1. 17

                                                                                                          Hush! We don’t need more reasons for impressionable youngsters to start experimenting with C.

                                                                                                          1. 10

                                                                                                                  Something can be boring while still trying to kill you. One example is described in Things I Won’t Work With.

                                                                                                            1. 1

                                                                                                              ‘Boring’ is I suspect the author’s wording for ‘I approve of this language based on my experiences’.

                                                                                                              1. 10

                                                                                                                I suspect “boring” is used to describe established languages whose strengths and weaknesses are well known. These are languages you don’t spend any “weirdness points” for picking.

                                                                                                                1. 5

                                                                                                                  Normally I’d lean towards this interpretation, but I’ve read many other posts by this author and he strikes me as being more thoughtful than that. Perhaps a momentary lapse in judgement; happens to everyone I suppose.

                                                                                                                  1. 4

                                                                                                                    ‘Boring’ is I suspect the author’s wording for ‘I approve of this language based on my experiences’.

                                                                                                                    I’m curious if you read the post, and if so, how you got that impression when I said things like “it feels much nicer to use an interesting language (like F#)”, “I still love F#”, etc.

                                                                                                                    Thanks for the feedback.

                                                                                                                    1. 4

                                                                                                                      I found your article pretty full of non-sequiturs and contradictions, actually.

                                                                                                                        boring languages are widely panned. … One thing I find interesting is that, in personal conversations with people, the vast majority of experienced developers I know think that most mainstream languages are basically fine,

                                                                                                                      Are they widely panned or are they basically fine?

                                                                                                                      But when I’m doing interesting work, the boilerplate is a rounding error and I don’t mind using a boring language like Java, even if that means a huge fraction of the code I’m writing is boilerplate.

                                                                                                                      Is it a rounding error or is it a huge fraction? Once the code has been written down, it doesn’t matter how much effort it was to mentally wrestle with the problem. That was a one-time effort, you don’t optimize for that. The only thing that matters is clearly communicating the code to readers. And if it’s full of boilerplate, that is not great for communication. I want to optimize for clear, succinct communication.

                                                                                                                      Of course, neither people who are loud on the internet nor people I personally know are representative samples of programmers, but I still find it interesting.

                                                                                                                      I’m fairly sure, based on this, that you are just commenting based on your own experiences, and are not claiming to have an unbiased sample?

                                                                                                                      To me it basically seems that your argument is, ‘the languages which should be used are the ones which are already used’. The same argument was used against C, C++, Java, Python, and every other boring language you can think of.

                                                                                                                      1. 2

                                                                                                                        Are they widely panned or are they basically fine?

                                                                                                                        I think the point is that the people who spend a lot of time panning boring languages (and advocating their favourite “interesting” one) are not representative of “experienced developers”. They’re just very loud and have an axe to grind.

                                                                                                                        1. 1

                                                                                                                            I’m having a tough time reconciling this notion that only a narrow section of loudmouths criticizes ‘boring languages’ with ‘widely panned’, which to me means ‘panned by a wide or significant section’.

                                                                                                                          But it’s really quite interesting how the experienced programmers who like ‘boring languages’ are the ones being highlighted here. It begs the question, what about the experienced programmers who don’t? Are they just not experienced enough? Sounds like an unquestionable dogma to me. If you don’t like the boring languages in the list, you’re just not experienced enough to realize that languages ultimately don’t matter.

                                                                                                                          Another interesting thing, some essential languages of the past few decades are simply not in this list. E.g. SQL, JavaScript, shell. Want to use a relational database, make interactive web pages, or just bash out a quick script? Sorry, can’t, not boring enough 😉

                                                                                                                          Of course that’s a silly argument. The point is to use the right tool for the job. Sometimes that’s a low-level real-time stuff that needs C, sometimes it’s safety-critical high-perf stuff that needs Ada or Rust, sometimes you need a performant language with good domain modelling and safety properties like OCaml or F#. Having approved lists of ‘boring languages’ is a silly situation to get into.

                                                                                                                          1. 1

                                                                                                                            To be honest, I don’t really see why that’s hard to reconcile at all. Take an extreme example:

                                                                                                                            Let’s say programming language X is used for the vast majority of real world software development. Through some strange mechanism (doesn’t matter), programmers who write language X never proselytize programming languages on the Internet. Meanwhile, among the set of people who do, they almost always have nasty things to say about X. So, all the articles you can find on the general topic are at least critical of X, and a lot of them are specifically about how X is the devil.

                                                                                                                            Is saying that X is “widely panned” accurate? Yes.

                                                                                                                            Of course that’s a silly argument.

                                                                                                                            Yes it is.

                                                                                                                            The point is to use the right tool for the job.

                                                                                                                            Indeed.

                                                                                                                1. 8

                                                                                                                  Does anyone else have the problem that Matrix is incredibly slow? I have a top of the line desktop PC and it takes 20-60 seconds from double clicking the icon until the Element client finishes with all the spinners and freezing and becomes usable. Also, how can I extract conversations in a usable format (e.g. SQLite)? These are my two biggest pain points with Matrix and the reason why I don’t use it.

                                                                                                                  1. 9

                                                                                                                    You’re describing problems with Element, which is fair. It’s an electron app, and it’s memory hungry. So is Signal.

                                                                                                                    The difference is that you can use Matrix without using Element; there are 3rd-party clients out there that work great with much better performance footprints. If you don’t want to use the Signal electron app, or if Signal decides your platform isn’t worth supporting (like arm64) then your only option is to not use Signal.

                                                                                                                    1. 8

                                                                                                                      If Element is showing spinners for tens of seconds, that’s all time spent waiting for the server to respond.

                                                                                                                              Synapse is (or at least was) slow. Back when I tried running it, there was even a Postgres query you had to run occasionally to clean up the things that brought it to a complete halt. Thankfully it seems that Conduit is a server project that will actually be good.

                                                                                                                      1. 2

                                                                                                                        The server stack isn’t much better performance wise, sadly.

                                                                                                                        1. 1

                                                                                                                          It’s an electron app, and it’s memory hungry. So is Signal.

                                                                                                                          Signal Desktop on my laptop has been running for a couple of days. It is using 23.1 MB of RAM (between 3 processes, two using about 1 MB) and is responsive. Restarting it takes a few seconds and it’s usable as soon as it presents the UI.

                                                                                                                          1. 3

                                                                                                                            That experience is completely different from mine, but arguing about it is academic since I don’t even have the option of running it on my machine any more even if I wanted to.

                                                                                                                        2. 2

                                                                                                                          It seems fine as a Weechat plugin for me

                                                                                                                          1. 1

                                                                                                                            Which server do you use? I run my own with (usually nearly) the latest versions of Synapse, and I don’t see any such problem. Starting up the client on my iPhone just now took 4 seconds, on my Linux laptop it took 12 seconds (but normally it’s always running and available as a panel icon).

                                                                                                                            1. 3

                                                                                                                              Same here. A friend of mine chats with me over the Matrix “default” home server and I’ve seen his client freeze like that too, while for me it’s always been fast (I’m using a self-hosted home server). I think there need to be more alternative servers and awareness of how to use such alternatives.

                                                                                                                              1. 1

                                                                                                                                Yeah, I think Matrix.org is slow, because they haven’t been able to scale in proportion to their users. Synapse is overly resource-hungry, but it’s not actually slow unless it’s starved of resources. Small homeservers are pretty much always faster than Matrix.org.

                                                                                                                          1. 2

                                                                                                                                I can’t wait to try Shenandoah and ZGC in production (one of my services does over 1.5K RPS with G1). I moved over to Kotlin a long time ago and am now seeing the same features land in Java, and further JVM improvements get me excited and confident in the future of the JVM ecosystem.

                                                                                                                            1. 12

                                                                                                                              It really is an exciting time to be working in a JVM language. I too moved over to Kotlin a while back, but I still closely follow what’s going on in Java.

                                                                                                                              My hunch is that a lot of people who currently dismiss Java and the JVM as slow bulky dinosaur tech are going to be shocked when some of the major upcoming changes get released. Loom (virtual threads) in particular should drive a stake through the heart of async/await or reactive-style programming outside a small set of niche use cases, without sacrificing any of the scalability wins. Valhalla (user-defined types with the low overhead of primitive types) and Panama (lightweight interaction between Java and native code) will, I suspect, make JVM languages a competitive option for a lot of applications where people currently turn to Python with C extensions.
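
                                                                                                                                  As a concrete illustration of the Loom point, here is a minimal sketch (mine, not from any JEP) of what blocking-style code on virtual threads looks like. It assumes a JDK 21+ runtime for Executors.newVirtualThreadPerTaskExecutor and is written in Kotlin, matching the Kotlin-on-the-JVM setup discussed in this thread:

                                                                                                                                      import java.util.concurrent.Callable
                                                                                                                                      import java.util.concurrent.Executors

                                                                                                                                      // 10,000 concurrent blocking tasks, each on its own cheap virtual thread,
                                                                                                                                      // written in plain blocking style with no reactive or async/await machinery.
                                                                                                                                      fun main() {
                                                                                                                                          Executors.newVirtualThreadPerTaskExecutor().use { pool ->
                                                                                                                                              repeat(10_000) { id ->
                                                                                                                                                  pool.submit(Callable {
                                                                                                                                                      Thread.sleep(100) // parks the virtual thread; the carrier OS thread stays free
                                                                                                                                                      id
                                                                                                                                                  })
                                                                                                                                              }
                                                                                                                                          } // close() waits for the submitted tasks to finish
                                                                                                                                      }

                                                                                                                                  Spawning 10,000 platform threads for the same job would be far heavier, which is exactly the gap that async/await and reactive styles currently fill.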

                                                                                                                              1. 2

                                                                                                                                My hunch is that a lot of people who currently dismiss Java and the JVM as slow bulky dinosaur tech are going to be shocked when some of the major upcoming changes get released

                                                                                                                                I agree with this re the JVM, but isn’t Java mostly releasing language-level changes that are just catch-up with things that have been commonplace elsewhere for years?

                                                                                                                                1. 4

                                                                                                                                  That’s a fair point, sure.

                                                                                                                                  Maybe a better way to frame it is that as language changes roll out, it’ll get harder to point to Java and say, “That’s such an obsolete, behind-the-times language. It doesn’t even have thing X like the other 9 of the top 10 languages have had for years.”

                                                                                                                                  Of course, Java will never (and should never, IMO) be on the bleeding edge of language design; its designers have made a deliberate choice to let other languages prove, or fail to prove, the value of new language features. By design, you’ll pretty much always be able to point to existing precedent for anything new in Java, and it’ll never look as modern as brand-new languages. My point is more that I think the perception will shift from, “Java is obsolete and stagnant” to, “Java is conservative but modern.”

                                                                                                                                2. 1

                                                                                                                                    My Android client app shares code with the backend (both are in Java).
                                                                                                                                    Android’s Java is at about the JDK 8+ level (https://developer.android.com/studio/write/java8-support-table ); the backend is currently on JDK 11.

                                                                                                                                  So sharing the code between client and backend is becoming more challenging.

                                                                                                                                    I think if I move the backend to JDK 17, it will be harder to share code (if I take advantage of JDK 17 features on the backend).

                                                                                                                                  I guess the solution is to move both backend and frontend to Kotlin… but that’s a lot of work without significant business value.

                                                                                                                                  1. 3

                                                                                                                                      Nothing prevents you from using JDK 17 with the Java 8 language feature level, essentially marking any use of new features as a compile error and making sure your compiler produces Java 1.8-compatible bytecode. That’s what we do in our library that needs to be JDK 8+ (but we use JDK 8 on the CI side to compile the final JARs, to be on the safe side). Then you can run that code on JVM 17 on the server and take advantage of the JVM improvements (but not the new language features). We have decided to add Kotlin gradually where it makes sense instead of doing a full rewrite (e.g. when we’d want to use coroutines).
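
                                                                                                                                      For illustration, a sketch of what that setup can look like in a Gradle Kotlin DSL build (Gradle is an assumption on my part here; plain javac --release 8 gives the same effect): the release option turns newer language features and newer APIs into compile errors while still emitting Java 8 bytecode.

                                                                                                                                          // build.gradle.kts (sketch): build on a JDK 17 toolchain, compile at the Java 8 level.
                                                                                                                                          plugins {
                                                                                                                                              java
                                                                                                                                          }

                                                                                                                                          java {
                                                                                                                                              toolchain {
                                                                                                                                                  // The compiler (and test JVM) come from a JDK 17 installation.
                                                                                                                                                  languageVersion.set(JavaLanguageVersion.of(17))
                                                                                                                                              }
                                                                                                                                          }

                                                                                                                                          tasks.withType<JavaCompile>().configureEach {
                                                                                                                                              // Equivalent to `javac --release 8`: Java 8 syntax, Java 8 class files,
                                                                                                                                              // and compilation against the Java 8 API, so anything newer fails the build.
                                                                                                                                              options.release.set(8)
                                                                                                                                          }

                                                                                                                                      With that in place, nothing above the Java 8 level can slip into the shared code from the server side, so the Android build keeps consuming it unchanged.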

                                                                                                                                    1. 2

                                                                                                                                      You could also just stick to 11. It’ll be supported for years.

                                                                                                                                1. 7

                                                                                                                                  This distinction is very similar to the one made in this article, except it splits the module manager into two subcategories:

                                                                                                                                  • Language package managers, e.g. go get, which manage packages for a particular language, globally.
                                                                                                                                  • Project dependency managers, e.g. cargo, which manage packages for a particular language and a particular local project.

                                                                                                                                  To be fair, many package managers play both roles by allowing you to install a package locally or globally. I tend to think that global package installation is an anti-pattern, and the use cases for it are better served by improving the UX around setting up local projects. For example, nix-shell makes it extremely easy to create an ad-hoc environment containing some set of packages, and as a result there’s rarely a need to use nix-env.

                                                                                                                                  1. 13

                                                                                                                                    I tend to think that global package installation is an anti-pattern

                                                                                                                                    From experience, I agree with this very strongly. Any “how to do X” tutorial that encourages you to run something like “sudo pio install …” or “sudo gem install …” is immediately very suspect. It’s such a pain in the hindquarters to cope with the mess that ends up accruing.

                                                                                                                                    1. 3

                                                                                                                                      Honestly I’m surprised to read that this still exists in newer languages.

                                                                                                                                      Back when I was hacking on Rubygems in 2008 or so it was very clear that this was a mistake, and tools like isolate and bundler were having to backport the project-local model onto an ecosystem which had spent over a decade building around a flawed global install model, and it was really ugly. The idea that people would repeat those same mistakes without the excuse of a legacy ecosystem is somewhat boggling.

                                                                                                                                      1. 3

                                                                                                                                        Gah, this is one thing that frustrates me so much about OPAM. Keeping things scoped to a specific project is not the default, global installation of libraries is more prominently encouraged in the docs, and you need to figure out how to use a complicated, stateful workflow involving global ‘switches’ to avoid getting into trouble.

                                                                                                                                        1. 3

                                                                                                                                          One big exception… sudo gem install bundler ;)

                                                                                                                                          (Though in prod I do actually find it easier/more comfortable to just use Bundler from APT.)