Threads for ketralnis

  1. 11

    As cool as this looks, I feel like a proprietary editor is a hard sell these days.

    1. 3

      Is it? I’ve used Sublime for years, and only really switched to VSCode because it had more extensions and a bigger community. Only having a limited-time trial feels like a much bigger obstacle, since developers hate to pay for stuff.

      1. 3

        I don’t mind paying for things but my main concern has become longevity.

        On the Mac in particular, where they frequently break backwards compatibility, I want to know that the software will keep working forever, and every time I buy commercial software for the Mac, it gets broken. Their license servers don’t stay up, or they lose interest in it for long enough that Apple switches CPU architectures or macOS versions underneath it and it doesn’t work anymore.

        I’ve spent about $200 on 1Password licences and they’re just about to drop support for the way I use them, and there’s basically nothing I can do to keep it working once Apple changes some insignificant thing that AgileBits would have to update it for. That might be a Firefox or Safari plugin architectural change, or even just an SSL certificate that needs renewing.

        At least with something open source I can go and do that myself

        1. 3

          Paying for stuff isn’t the barrier, at least not for me. It’s the lack of hackability and extensions.

          I guess if they had a robust plugin system, that could make the lack of source easier to swallow, but it’s still unlikely to have many plugins or a big community, because it’s proprietary.

          1. 2

            Sublime was hackable, had extensions, had a robust plugin system. In fact, both Atom and VSCode are very much inspired by Sublime, and this is as well, just from looking at it. The assertion that a piece of software won’t have plugins or a big community **because** it’s proprietary is just incorrect.

            1. 5

              Sublime is from another time, when VSCode and Atom didn’t exist. This is very anecdotal, but most developers I see nowadays are using VSCode, whereas 5-10 years ago most of them were using Sublime.

              I guess this editor has a niche for Rust and Go developers who want an IDE with native performance, but at the cost of extensibility.

      1. 2

        Bear in mind you don’t need to keep track in memory of all the device:inode pairs for every file seen, unless you have an unusual situation: you just need to look at the link-count of regular files and keep track for those with a link-count greater than 1. So it’s “usually fine”, unless you have people hard-linking entire trees.
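        A rough sketch of that filter in shell (assuming GNU findutils; the directory and file names here are throwaway):

        ```shell
        dir=$(mktemp -d) && cd "$dir"    # scratch directory for the demo
        echo data > a.txt                # link count 1: no need to track it
        ln a.txt b.txt                   # hard link: both names now share one inode
        touch single.txt                 # another link-count-1 file
        find . -type f -links +1         # only the hard-linked names show up
        ```

        Everything with a link count of 1 can stream past without being remembered; only the `-links +1` survivors need their device:inode pair recorded.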

        In an era with bind/loopback mounts, that’s much less likely to be encountered.

        Honestly, the bigger issue will be if you’re trying to rsync across a unified presentation view of the filesystem, where the same underlying FS is bind/loopback-mounted into multiple locations: you’ll no longer have the same information available to detect that this has happened, and the device numbers will be different so existing deduplication will fail. It might mean needing rsync to be aware of OS-specific bind/loopback mechanisms and how they present, and then both unifying/normalizing device numbers and ignoring the link-count and instead tracking every inode encountered.

        1. 1

          Maybe bind mounts should increment the reported hardlink count of all inodes in both themselves and the filesystem they’re duplicating. (Of course I realise the futility of “maybe everyone should change everything everywhere because of this tiny detail that comes up rarely” :) )

          1. 1

            If you stat each directory, then you can tell when you’re crossing a mount point (the device changes between the parent and subdirectory). That gives you a place to look for aliased filesystems. With FreeBSD nullfs mounts, the inode numbers are unchanged, but the device node is different for each mount; I believe the same is true for bind mounts on Linux. This means that you need to do something OS-specific to parse the mount table and understand the remapping.

            It would be nice if stat could be extended to provide an ‘underlying device’ device node field, so that you could differentiate between the device providing the mapping and the raw filesystem. That would be simpler than incrementing the link count because only nullfs and similar things would need to be modified to provide anything other than the default value for this field.
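            The mount-point check above can be sketched in shell (assuming GNU coreutils stat on a Linux system where /proc is mounted):

            ```shell
            # Crossing a mount point shows up as a change in the device number
            # between a directory and its parent; stat's %d prints that number.
            stat -c %d /        # device number of the root filesystem
            stat -c %d /proc    # a different number: /proc is a separate mount
            ```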

        1. 33

          The thing is, I don’t want to “keep up” with web development. I think I knew web development in about 2010, ish, but I’m essentially useless in a “modern” development environment. (How do you know it’s “modern”? All of the javascript dependencies say so in their name, you see!) Meanwhile my knowledge of almost every other branch of development is still perfectly applicable. And I’m not just some old fogey that refuses to learn; I spend a lot of time expanding my knowledge into new fields and getting deeper into the ones I do understand. I take MOOCs, read books, learn new languages and environments, and try building things that I’ve never built before. But every time I pick up a JS project older than about a year, half of it doesn’t work anymore.

          The web’s model of being different every week according to the whims of what amounts to some exuberant teenagers is just incorrect. Code that worked last week not working this week, because browsers or npm or somebody rearranged everything underneath you, and then being blamed for not “keeping up” with the web, isn’t how it should work. Surely accountants aren’t routinely berated for using “old” maths, and bridges keep on not falling apart every time a “new” car drives over them.

          1. 10

            Every time I have to deal with a project using npm, it’s death by a thousand cuts. I’ve had no issue digging up ten year old ruby projects and getting them running again. Node? Forget it.

            Then there is security. Since I started web development, we went from simple HTTP to HTTPS everywhere. Certificates double or triple the complexity of just serving content in the first place. There are old devices that are completely unusable now because their root certificates and/or ciphers are too old. Ugh.

            Docker is nice though. Once you get it working.

            1. 5

              It doesn’t help with old devices, but Caddy makes HTTPS super easy.

              1. 2

                Seconded. Throwing up an HTTPS endpoint is now as easy as falling off a log with Caddy. Big fan.

                I use it as a reverse proxy for my thelounge instance so I can have persistent IRC :)
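                For reference, a minimal sketch of that kind of setup as a Caddy v2 Caddyfile; the hostname and upstream port here are hypothetical:

                ```
                irc.example.com {
                    reverse_proxy localhost:9000
                }
                ```

                Caddy obtains and renews the site’s certificate automatically, which is most of what makes the HTTPS part feel free.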

                1. 4

                  Disagree. If it works out of the box for you, great. If you’re trying to decipher the docs regarding something that is trivial in nginx or Apache, you either don’t find it, or if you find a question on their forums, some moderator usually answers “you’re doing it wrong”, because apparently all their users are idiots and just don’t know what they want.

                  I am using it for one project, and after the kind-of-horrible setup procedure it’s been chugging along happily. Well, except that one time I had to completely rewrite the config because they changed the config format; but that was a 0.x version, so I am not trying to blame them here (though I didn’t see any migration guide either). Except that my experience then was exactly the same as above: can’t find it in the examples, or the docs; see someone ask the exact question I have on the forum, and instead of a solution get just a condescending answer.

                  This may sound overly salty, but it’s the reason I’ve been warning people against using it in production, or at least to be prepared for some pain. If it works for your toy project, fine.

                  1. 2

                    I haven’t tried Caddy, but setting up Let’s Encrypt with nginx was very easy. And when I wanted to have legacy CGI in nginx, it was a couple of minutes of googling.

                    1. 1

                      Yup I am literally using it to reverse proxy my web based irc client :)

                      If it goes pear shaped I will say oh well and find another way to connect to IRC :P

              2. 5

                My trick when I want anything from NPM is to assemble the libraries into a static bundle with browserify. Ironically, a couple of times I had the misfortune to find browserify or its plugins broken at the time of the attempt and had to pin it to an older version. ;)

                I’m also happy that I can compile OCaml code with js_of_ocaml without ever touching NPM.

              1. 2

                All this focus on speed really annoys me. I want elegant, concise yet flexible interfaces, which are just fun to work with. Sure, there are some cases where speed is the bottleneck, but most of the time people read or write code. Speed of development is what’s important. I want this to be the main selling point of great libraries and programming languages.

                I know Julia and its libraries also care about usability and new, innovative ideas. For example, I really like the ideas of Pluto.jl, and just yesterday stumbled upon Latexify.jl, which turns a normal function into a math expression. I just wish that more conference presentations and blog posts in the Julia community would focus on this stuff more.

                1. 1

                  All this focus on [something they want] really annoys me. I want [something else]

                  It’s freely provided volunteer effort. They’re going to work on what they want to have. If you want something else, jump in and help.

                  1. 1

                    Was my tone too harsh? If so, I’m sorry.

                    I was expecting an answer like “you are right, the Julia community is focusing too much on performance” or “focusing on performance is right, because…”.

                    Your comment is completely ad hominem. What do my or your FOSS contributions have to do with this?

                    I’m seriously thinking about giving Julia more than just a spin, but I still have my doubts about it.

                1. 62

                  I don’t get all of the “useless use of cat” hate. Shell is for ad-hoc stream-of-consciousness scripting. It’s not supposed to be perfect or optimised. If you think about the pipeline starting with the file so that

                  cat file | do something | do something else

                  makes more sense than the counterintuitive ordering of

                  do something < file | do something else

                  then more power to you; do what makes sense to you. A human is not going to notice the performance penalty of copying the bytes a few more times in an interactive shell session.

                  The “useless use of cat” meme is just a “gotcha” superiority complex with no bearing on reality besides somebody getting to feel like they “corrected” you. How many other random tiny insignificant performance hits fly by your smugness every day that don’t enter the memeosphere so you don’t get to feel smart about repeating them?

                  If you think it’s a real problem, then detect this case in the shell and change the execution strategy there. You don’t get to feel smug, but you will solve the actual “problem”.

                  1. 16

                    than the counterintuitive ordering of

                    It doesn’t have to be counterintuitive. This works too:

                    < file do something | do something else

                    There’s a slight benefit to the redirecting this way in that the file remains seekable. (It doesn’t with cat)
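                    A quick sketch of the two spellings side by side; they feed the command the same bytes, and only the fd type differs (seekable file vs. pipe). The file name is throwaway:

                    ```shell
                    printf 'one\ntwo\nthree\n' > /tmp/demo.txt
                    cat /tmp/demo.txt | wc -l     # pipe: wc reads a byte stream
                    < /tmp/demo.txt wc -l         # redirection: wc reads the (seekable) file
                    ```

                    Both count the same 3 lines.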

                    1. 18

                      How do you produce shell pipelines? For me, it’s always stepwise: I run something, visually inspect the output, then up-arrow and | to another command that transforms the output somehow. Repeat until I get what I want. < file fails this test, I think?

                      1. 3

                        I don’t follow. Like you, I build shell pipelines “stepwise. I run something, visually inspect the output, then up-arrow and | to another command that transforms the output somehow. Repeat until I get what I want.”

                        How is cat file | better (or worse) than < file? In other words, if my original pipeline begins < file, can’t I keep using up-arrow and tweaking what follows, just as I can with cat file |?

                        Maybe you’re thinking of < file at the end of the pipeline? If so, I think that viraptor’s whole point was that you can move < file to the front of the pipeline—exactly where cat file | sits.

                        1. 5

                          If my original pipeline begins < file . . .

                          That doesn’t pass my test, because < file doesn’t produce any output by itself. If you start off with < file | something else then sure, but I’ve never done that! I find it nonintuitive. But if it works for you, groovy.

                          1. 8

                            Hmm, it displays the file in zsh, but apparently not in bash.

                            But now I know how to annoy both groups!

                            < myfile cat | wc -l
                            1. 1

                              That absolutely makes sense. Thanks for clarifying. (I imagined you were starting with cat file | something. If I first wanted to check the contents of file, I sometimes do the same as you describe: cat file and then cat file | whatever. Other times I do less file first because then I can bounce around in the file more easily.)

                          2. 2

                            I’m not sure that’s something I ever gave any attention. I mean, it’s slightly different and I don’t mind ¯\_(ツ)_/¯

                          3. 1

                            This doesn’t work in /bin/sh on my latest macOS.

                            1. 5

                              Maybe something else is going on?

                              macOS 12.1, bash 3.2 (at /bin/sh), and it works fine here.

                              sh-3.2$ < wtf.c grep assert | sed 's/assert/wtf/'
                              #include <wtf.h>
                              	wtf(sodium_init() != -1);
                              1. 3

                                True, I should have clarified what I was trying to do. Your example works for me, but there are other cases where < doesn’t work while cat does:

                                This doesn’t work (but it works in some other shells):


                                This works:

                                data="$(<file cat)"
                                1. 1

                                  I’m sorry, but I still think that there may be something else going on. I can use "$(<file)" to assign the contents of a file to a variable in bash 3.2 on macOS.

                                  sh-3.2$ data="$(< wtf.c)"
                                  sh-3.2$ printf "${data}\n"
                                  #include <assert.h>
                                  #include <sodium.h>
                                  int main()
                                  	assert(sodium_init() != -1);
                                  	return 0;

                                  What are you trying to do next?

                                  Re your larger point, you say “there are other cases where < doesn’t work, while cat does” and “This doesn’t work (but it works in some other shells).” I think I (sort of?) agree. cat and < are different, and there are an enormous number of differences between different shells and even between different versions of the same shell. (/bin/sh on macOS is currently 3.2, but I usually run bash 5.1 from MacPorts. Those two have a lot of differences.)

                                  Nevertheless, I think that the OP’s point stands: < file do something generally works as well as cat file | do something. I am not at all a purist about UUoC; like ketralnis, I think that people who say UUoC are generally just being jerks. But, all of that said, < file is also important to learn. It often comes in handy, and you can often substitute it for cat file |, though, again, I agree that they are not one-to-one.

                          4. 15

                            I agree! This project is a joke. Perhaps that should be at the end of the README. If cat wanted to not be pipeable, then it wouldn’t be pipeable. If someone actually measures a performance problem because they’re working with a stream of bytes instead of a random-access file descriptor, then they should change it. The left-to-right reading of cat-ing first is nice!

                            1. 3

                              I agree it’s rare that this is an actual problem in a program; to me it’s more that it’s an indicator that the writer might have some things to learn, possibly including:

                              • the various ways you can pipe stuff into a program in addition to |.
                              • that cat can also be used for concatenating files; it’s not just for printing a file’s contents.

                              Once you’ve picked that signal up, dealing with it in a way that’s not about stroking your own ego is of course a good idea.
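                              The concatenating use named above is easy to sketch (throwaway file names):

                              ```shell
                              printf 'first\n'  > /tmp/a.txt
                              printf 'second\n' > /tmp/b.txt
                              cat /tmp/a.txt /tmp/b.txt     # concatenates: prints both files back to back
                              ```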

                            1. 19

                              For what it’s worth, I always thought this is a seriously underappreciated piece of work due to timing.

                              Unlike its Windows counterpart, it’s a native BASIC compiler that generates OMF files and executes a native linker, and distributes a character-based windowing library. This makes it possible to mix-and-match with C or assembly. The UI library is impressive just due to how comprehensive it is - there were many versions of character-based windowing toolkits, but this one allows a similar set of configurability on controls to the Windows version. If you ignore the UI library, it’s the final version of QuickBasic, and it compiles nibbles.bas to run at truly insane speed.

                              It also shipped with a converter to transition projects to Windows, although that was a bit hit-and-miss, since the UI elements really have a “preferred” size on both platforms.

                              To me, this was the QuickBasic for DOS team going out with a mic-drop.

                              1. 8

                                Couldn’t agree more. I used VBDOS a lot as a teenager, learning solely from the really-good included documentation. I used it to make a bunch of little form programs to generate files for video game configuration and batch files. It gave me immediate feedback in a way that was important to that stage of my learning, and was immediately applicable in a way that was important to that stage of my interest.

                                There are so few places like this for kids right now. Hell, even as a professional today I wish it were still this easy to write little convenience stuff

                                1. 3

                                  Is there any comparison out there of, say, a simple GUI CRUD program written in a variety of languages (with simple backends, e.g., SQLite or flat file)? I’m really curious how different popular modern languages and GUI toolkits compare in that sort of domain.

                                  1. 3

                                    Poorly, honestly. I was on the Delphi side, not the VB side, but it took well less than an hour to put together a simple database CRUD app, complete with installer. (My understanding is that VB4 and onward, at a minimum, delivered similar speed for that kind of thing, so I don’t think that was unique to Delphi, but I just don’t feel comfy speaking to it.) The only web framework I’ve seen that can do that so quickly is Rails, but that can only do the client/server setup, whereas Delphi can trivially do local-only or client/server via BDE (and I believe VB could also do either, via Jet, but this is again not something I worked with).

                                    1. 2

                                      It’s been ages since I did this, but there are three distinct bits to this problem:

                                      • Building a UI.
                                      • Defining the business logic.
                                      • Integrating with a back end.

                                      OpenStep + Enterprise Object Framework was probably the best I’ve ever seen at this. For the first part, NeXT’s Interface Builder did something that most tools that have copied it (including Delphi) missed: it was not a UI builder; it was a tool for creating a serialised object graph. Those objects included view objects, controllers, and connections for attaching models and so on to the rest of the system. With EOF (an ORM), those controllers could be connected directly to a database, which gave you the third part almost for free. Unfortunately, NeXT charged around $40K for a license to EOF (I think it was bundled with WebObjects, which was doing Ruby-on-Rails-like things in 2004). I don’t think Apple ever released the Objective-C version of EOF with Cocoa (the later versions of WebObjects and EOF from NeXT were Java and Apple supported them for a while). They provided a cut-down version as CoreData, but it supports only local back ends. GNUstep provided an implementation of EOF (GDL2) but never provided the GUI tools for working with it (and GORM is much worse than even an old version of NeXT’s Interface Builder), so it had very few users.

                                      The modern equivalents are probably things like Power Apps, which use the Excel formula language for business logic and provide black-box connectors to various data sources and a GUI builder.

                                2. 2

                                  I was a huge fan of VBDOS. My only complaint was that the UI for the IDE was slow, and the compiled code was by default much slower than what QuickBasic produced. It may have been possible to tune (compiling in release mode or similar), but the IDE never guided me in that direction.

                                  As a middle schooler learning programming, I loved VBDOS, but the performance pushed me to eventually learn Pascal and C (my computer did not have enough memory to run Turbo C++).

                                  1. 1

                                    It actually compiles to native code? VB for Windows used a crappy VM for the first few versions, IIRC.

                                    1. 2

                                      I remember a lot of arguments about whether VB actually counted as a compiler in the ‘90s. For a high-level language, most of the operations are going to be implemented in high-level service routines in a support library, so there’s a bit of a spectrum:

                                      • Interpreter that walks an AST (or even parses each statement) and calls the service routines.
                                      • Compiler that generates bytecode, interpreter that dispatches to a service routine for each bytecode.
                                      • Compiler that replaces each bytecode with a call to the corresponding service routine.
                                      • Compiler that replaces calls to small service routines with inlined versions.

                                      There is a big performance jump from the first to the second, and a much smaller return for each subsequent one. A decent bytecode interpreter is easily a factor of 10 faster than a simple AST interpreter (ignoring things like Graal, which do a lot of optimisation on the AST as they run). If the bytecodes are each rich operations then they can be individually optimised and the overhead of the bytecode dispatch is small. If your bytecode is ‘add two 32-bit integers’ then there’s a lot of dispatch overhead, but if it’s ‘draw an arc segment with this Bezier’ or ‘print this sequence of variables converting any non-string values to strings’ then the dispatch overhead is negligible. BASIC bytecodes are typically closer to the latter in most programs (and with VB it was pretty easy to offload any non-trivial calculation to another language, so the performance for raw numerical compute didn’t matter to most people. Especially after VB4, when they replaced VBX with OCX and so you could import COM objects from C++ trivially).

                                  1. 28

                                    It’s iffy, but I’m not removing this article, because it’s about comparing the existing technical trade-offs to those of new/proposed systems clumped under “web3” rather than skipping over them to talk about business, scams, and other stuff that’s not topical here. Those two areas are so interdependent that I’m at a loss to offer a bright line dividing them; it’s not even clear which is driving which. Both HN and Reddit have large, broad conversations responding to this article, though, so that conversation is available if you want it. Let’s try to keep Lobsters focused on the technical discussion. (If you can draw a line between the two areas to define topicality in a way that’s reasonably predictable to boosters and skeptics alike, my inbox is open; please also note which religion you would like me to nominate you for sainthood in.)

                                    Also, I’m applying the merkle-trees (née cryptocurrency) tag. It is our second-most-filtered-out tag: currently 293 users, after meta at 1,367.

                                    1. 23

                                      I didn’t see this until now.

                                      Let’s try to keep Lobsters focused on the technical discussion.

                                      That’s kind of difficult. What even is a system? And our political criticism of it is founded in technical concerns (efficiency, scarcity, externalities).

                                      But I’ll take a break. I hope people don’t mistake that break for assent to web3.

                                      As frightening as it is to focus on the technical details of something like this, I do see the value in the strict focus Lobsters has. I’ll turn my computer off and go to bed. Thank you.

                                      1. 2

                                        I’m applying the merkle-trees (nee cryptocurrency) tag. It is our second-most filtered-out tag

                                        People who filter out cryptocurrency posts, i.e. people who don’t like cryptocurrencies, will automatically filter out this post, and miss out on a technical analysis that provides a solid argument against it. I think it would be better to give them a chance to see this post.

                                        1. 17

                                          People who filter out cryptocurrency posts probably don’t want to read a technical analysis about it at all, speaking as one of them

                                      1. 41

                                        I think this, and many other similar cases, stretching back to the Python 2.7 fallout, reveal an interesting divide within the community of Python users.

                                        (This is entirely an observation, not a “but they should support it, if not until the thermal death of the Universe, at least until the Sun is reasonably close to being a red giant” rant. Also, seriously, 5 years is pretty good. It would’ve been pretty good even before the “move fast and break things” era.)

                                        There are, on the one hand, the people who run Python as an application, or at the very least as an operational infrastructure language. They need security updates because running an unpatched Django installation is a really bad idea. Porting their codebase forward is not just necessary, it provides real value.

                                        And then there are the people who run Python for their automated testing framework, for deployment scripts, for project set-up/boilerplate scripts and so on. They are understandably pissed at things like these because their script might as well run on Python 1.4. Porting their codebase forward is a (sometimes substantial!) effort that doesn’t really yield any benefits. Even the security updates are barely relevant in many such environments. Most of the non-security fixes are technically really useless: the bulk of the code has been written X years ago and embeds all the workarounds that were necessary back then, too. Nobody’s going to go back and replace the “clunky workarounds” with a “clean, Pythonic” version, seeing how the code not only works fine but is literally correct and sees zero debugging or further development.

                                        Lots of enterprise people – and this part applies to many things other than Python – still plan for these things like it’s 2001 and their massive orchestration tool is a collection of five POSIX sh scripts and a small C program written fifteen years ago. That is to say, their maintenance budget has only “bugs” and “feature requests” items, and zero time for “keeping up with the hundreds of open source projects that made it feasible to write our SaaS product with less than 200 people in the first place, and which have many other SaaS products to keep alive so they’re not gonna stop for us”.

                                        1. 19

                                          A point you’re passing over (or at least expressing with some incredulity) is that sometimes software is allowed to be “Done”. You’re allowed to write software that can reach an end state where it is not only feature-complete, but adding features to it would make it worse. IMO, it is not only possible for someone to do that, but it is good, because the current paradigm is inherently unstable in the same way that capitalism’s “Exponential Growth” concept is. The idea that software can be maintained indefinitely is a fallacy that relies on unlimited input resources (time, money, interest, etc.), and the idea that new software grows to replace old is outright incorrect when you see people deliberately hunting out old versions of software with specific traits. (Actually, on the whole, features have been lost over time, but that’s a whole different discussion about the fact that the history of software is not taught, with much of it being bound up in long-dead companies and people.)

                                          For people maintaining such software, why would it make sense to rewrite large swathes of the codebase, with a high risk of introducing bugs in the process, many of which have probably already been fixed once? Sure, there’s the “security” aspect of it, and there will always be minor maintenance needed here and there, but rewriting the code to be compatible with non-EOL platforms not only incurs extra weeks or months (or even years) of effort, it invalidates all of the testing that you have accumulated against the current codebase as well.

                                          What made me point this out is that you seem to regard this form of software as a negative, or as a liability. But at least half of modern software development seems to be what is effectively treading water, all due to bad decisions related to the dependencies that are chosen, or the sheer insufferable amount of abstraction cost we have accumulated and are accumulating as an industry. Personally, I envy the people who get to maintain software where there is now very little to do aside from maintenance work.

                                          1. 10

                                            You’re allowed to write software that can reach an end state where it is not only feature-complete, but adding features to it would make it worse.

                                            Doing this requires you to find a language, compiler and/or interpreter, build toolchain, dev tooling, operating system, etc., all of which must be in the “Done” state with hard guarantees. And while you’re free to build your own “Done” software, the key here is that nobody else is obligated to provide you with “Done” software, so you may have to build your own “Done” stack to get it, or pay the going rate for the shrinking number of people who are fluent in languages which are effectively “Done” because the only work happening in those languages these days is maintenance of half-century-old software systems.

                                            Meanwhile, we already have tons of effectively “Done” software in the wild, in the sense that the people who made it have made a conscious decision to never do any type of upgrades or further work on it, and it’s generally not a good thing – especially when it turns out that, say, a few billion deployed devices worldwide are all vulnerable to a remote code execution bug.

                                            1. 5

                                              Not every upgrade has to break backward compatibility. Perl 5 is a good example.

                                              1. 6

                                                Python releases are about as backwards incompatible as Perl releases. I think people just assume every upgrade is bad because of the Python 2 -> 3 upgrade. Worth remembering that realistically nobody tried doing a Perl 5 -> Raku (née Perl 6) migration.

                                                1. 2

                                                  That’s probably because Raku was such a long time coming, and marketed from the start (at least as far back as I can remember, not being a Perl coder) as an “apocalypse”. I think nobody expected to be able to migrate, so nobody even bothered trying. Python, on the other hand, had compatibility packages like “six” and AFAIK it was always intended to be doable at least to upgrade from 2 to 3 (and it was, for a lot of code, quite doable). But then when people actually tried, the nitty-gritty details caused so much pain (especially in the early days) that they didn’t want to migrate. And of course essential dependencies lagging behind made it all so much more painful, even if your own pure Python code itself was easy to port it might be undoable to port a library you’re using.

                                                  So I guess it boils down to expectation management :)
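
                                                  (For readers who never did this migration, the compatibility approach mentioned above usually boiled down to shims like the following. This is a minimal sketch of the pattern; real code typically leaned on the `six` package to bundle dozens of these up. The `urlparse` module move is a real 2-to-3 rename.)

                                                  ```python
                                                  # Dual Python 2/3 compatibility shim of the kind "six" packaged up:
                                                  # try the Python 3 module location first, fall back to the Python 2 name.
                                                  try:
                                                      from urllib.parse import urlparse  # Python 3
                                                  except ImportError:
                                                      from urlparse import urlparse      # Python 2

                                                  print(urlparse("https://example.com/path").netloc)  # prints "example.com"
                                                  ```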

                                              2. 5

                                                This is actually why standards are useful, they allow code to outlive any actually-existing platform. The code I have from 20 years ago that still builds and runs without changes usually has a single dependency on POSIX. I’m not running it on IRIX or BSD/OS like the author was, but it still works.

                                                1. 2

                                                  Not necessarily, I think. One might want to do so in cases where there is external, potentially malicious user input. However, in highly regulated environments, where different parties exchange messages and are liable for their correctness, one can keep their tools without upgrading anything for a long time (or at least until the protocol changes significantly). There is simply no business reason to spend time on upgrading any part of the stack.

                                                  1. 2

                                                    Meanwhile, we already have tons of effectively “Done” software in the wild, in the sense that the people who made it have made a conscious decision to never do any type of upgrades or further work on it, and it’s generally not a good thing

                                                    Good way to carefully misrepresent what I was talking about :)

                                                    1. 1

                                                      It’s more that this really is what “Done” means. My own stance, learned the hard way, is that the only way a piece of software can be “Done” is when it’s no longer used by anyone or anything, anywhere. If it’s in use, it isn’t and can’t be “Done”.

                                                      And the fact that most software that approximates a “Done” state is abandonware, and the problems abandonware tends to cause, is the point.

                                                      1. 1

                                                        I disagree that this is what “Done” means, and I disagree with your implied point that this is in any way “inevitable”.

                                                        1. 1

                                                          The idea that software can be maintained indefinitely is a fallacy that relies on unlimited input resources

                                                          That’s what you wrote above. Which implies that your definition of “Done” involves ceasing work on the software at a certain point.

                                                          My point is that generally you get to choose “no further work will be done” or “people will continue to use it”. Not both. You mention people searching for older versions of software – what do you think a lot of communities do with old software? Many of them continue to maintain that software because they need it to stay working for as long as it’s used, which is incompatible with “Done” status.

                                                          1. 1

                                                            And yet if you read a paragraph down from there, you will see

                                                            “For people maintaining such software,”

                                                  2. 6

                                                    A point you’re passing over (or at least expressing with some incredulity) is that, sometimes software is allowed to be “Done”.

                                                    Oh, I’m passing over it because I prefer to open one can of worms at a time :-D.

                                                    But since you’ve opened it, yeah, I’m with you on this one. There’s a huge chunk of our industry which is now subsisting on bikeshedding existing products because, having run out of useful things to do, it nonetheless needs to do some things in order to keep charging money and to justify its continued existence.

                                                    I don’t think it’s a grand strategy from the top offices, I think it’s a universal affliction that pops up in every enterprise department, sort of like a fungus which grows everywhere, from lowly vegetable gardens to the royal rose gardens, and it’s rooted in self-preservation as much as a narrow vision of growth. Lots of UI shuffling or “improvements” in language standards (cough C++ cough), to name just two offenders, happen simply because without them an entire generation of designers and language consultants and evangelists would find themselves out of a job.

                                                    So a whole bunch of entirely useless changes piggyback on top of a few actually useful things. You still need some useful things, otherwise even adults would call out the emperor’s nakedness and point out that it’s a superfluous (or outright bad) release. But the proportion, erm, varies.

                                                    The impact of this fungus is indeed terrible though. If you were to accrue 20 years’ worth of improvement on top of, say, Windows XP, you’d get the best operating system ever made. But people aren’t exactly ecstatic over Windows 11 because it’s not just 20 years’ worth of improvement, and there’s a lot of real innovation (ASLR, application sandboxing) and real “incremental” improvement (good UTF-8 support) mixed with a whole lot of things that are, at best, useless. So what you get is a really bad pile of brown, sticky material, whose only redeeming feature is that there’s a good OpenVMS-ish kernel underneath and it still runs the applications you need. Even that is getting shaky, though – you can’t help but think that, with so many resources being diverted towards the outer layer, the core is probably getting bad, too.

                                                    Personally, I envy the people who get to maintain software where there is now very little to do aside from maintenance work.

                                                    I was one of them for about two years, let me assure you it is entirely unenviable, precisely because of all that stuff above. Even software that only needs to be maintained doesn’t exist and run in a vacuum: you have to keep it chugging along with the rest of the world. It may not need any major new features, but it still needs to be taught a new set of workarounds every other systemd release, for example. And, precisely because there’s no substantial growth in it, the resources you get for it get thinner and thinner every year, because growth has to be squeezed out of it somehow.

                                                    Edit: FWIW, this is actually what the last part of my original message was about. As @ketralnis mentioned in their comment here, just keeping existing software up and running is not at all as simple as it’s made out to be, even if you don’t dig yourself into a hole of unreasonable dependencies.

                                                  3. 8

                                                    Lots of enterprise people – and this part applies to many things other than Python – still plan for these things like it’s 2001 and their massive orchestration tool is a collection of five POSIX sh scripts and a small C program written fifteen years ago.

                                                    The funny thing is that the Python 2.x series was not the bastion of stability and compatibility people like to claim now, as they look back with nostalgia (or possibly just without experience of using Python back then). Idiomatic Python 2.0 and idiomatic Python 2.7 are vastly different languages, and many things that are now well-known and widely-liked/widely-relied-upon features of Python didn’t exist back in 2.0, 2.1, 2.2, etc. And the release notes for the 2.x releases are full of backwards-incompatible changes you had to account for if you were upgrading to newer versions.
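
                                                    (One concrete example of that 2.x-era churn, my own illustration rather than the commenter’s: PEP 238 changed the meaning of `/` on integers, introducing `//` and `from __future__ import division` partway through the 2.x series, so numeric code could behave differently depending on which 2.x interpreter and which future-imports it ran under.)

                                                    ```python
                                                    # Division semantics changed across the 2.x era (PEP 238, Python 2.2):
                                                    # "/" on two ints meant floor division by default in 2.x, true division in 3.
                                                    print(7 // 2)  # floor division: 3 on every version
                                                    print(7 / 2)   # 3.5 on Python 3 (or 2.x with "from __future__ import division")
                                                    ```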

                                                    1. 8

                                                      People probably remember Python 2 as the “bastion of stability and compatibility” because Python 2.7 was around and supported for 10 years as the main version of the Python 2 language. Which is pretty “bastion of stability and compatibility”-like. I know that wasn’t the intention when 3.0 was released, but it’s what ended up happening, and people liked it.

                                                      1. 4

                                                        So, obviously, the thing to do is to trick the Python core team into releasing Python 4, so that we get another decade of stability for Python 3.

                                                    2. 5

                                                      Is it really a problem in practice, though? If the small tool is actually large enough that porting would take too much time, then there are precompiled releases going back forever, docker images going back to 3.2 at least, and there’s pyenv. That seems like an ok situation to me. Anyone requiring the old version still has it available and just has to think: should I spend effort to upgrade, or spend effort to install the older version?

                                                      1. 25

                                                        Yes, it really is a problem in practice to try to keep your older unchanging code running. It’s becoming increasingly difficult to opt out of the software update treadmill, even (especially!) on things that don’t ostensibly need updating at all.

                                                        Python 3.6 depends on libssl which depends on glibc, the old version of which isn’t packaged for Ubuntu 16.04. But the security update for glibc’s latest sscanf vulnerability that lets remote attackers shoot cream cheese frosting out of your CD-ROM isn’t available on 16.04, and 17 dropped support for your 32-bit processor. And your CD-ROM.

                                                        Sadly you can’t just opt out of the treadmill anymore. The “kids these days” expect constant connectivity, permanent maintenance, and instant minor version upgrades. They leave Github issues on your projects titled “This COBOL parser hasn’t had an update in several minutes, is it abandoned?” They wrap their 4-line scripts with Docker images from dockerhub, and it phones home to their favourite analytics service that crashes if it’s not available. Even if you don’t depend on all of those hosted services (and that’s harder than you think with npm relying on github, apt relying on launchpad, etc), any internet connectivity will drag you in via vital security updates.

                                                        1. 5

                                                          I’m not sure I buy the argument that “kids these days” have anything to do with Ubuntu’s decisions on how long to support their OS, and for what platforms.

                                                          I’d personally jump to one of Debian (for years of few changes), Alpine (if it met my needs) or OpenSUSE Tumbleweed (if rolling was acceptable). I was surprised by the last, but Tumbleweed is actually a pretty solid experience if you’re ok with rolling updates. If not, Debian will cover you for another few years at least.

                                                          If you need an install with a CD drive, that might still be workable. There are a variety of ways to boot the tool, even from an existing grub.

                                                          1. 11

                                                            Maybe it didn’t come through but I mean almost all of that to be hyperbole. The only factual bit is that it really is harder to keep unchanging code running than you’d think, speaking as somebody that spends a lot of time trying to actually do that. It’s easy to “why don’t you just” it, but harder to do in real life.

                                                            Plus the cream cheese frosting. That’s obviously 100% true.

                                                            1. 4

                                                              Plus the cream cheese frosting. That’s obviously 100% true.

                                                              In case anyone is wondering, this is really legit! Back in 2015 or so I used to keep an Ubuntu honeypot machine in the office for this precise reason – it was infected with the cream cheese squirting malware and a bunch of crypto miners, which kept the CPU at 100% and, thus, kept the cream cheese hot. It was oddly satisfying to know that the company was basically paying for (part of) my lunch in such a contorted way, as I only had to supply the cream cheese.

                                                          2. 1

                                                            I was asked a while ago to do some minor improvements to a webshop system that had been working mostly fine for the customer. When I looked into it, it turned out to be a whole pile of custom code which was built on an ancient version of CakePHP, which only supported PHP versions up to 5.3. Of course PHP 5 had been deprecated for a while and was slated to be dropped by the (shared) hosting provider they were using.

                                                            So I cautioned that their site would go down pretty soon, and indeed it did. I tried upgrading CakePHP, but eventually got stuck, not only because the code of the webshop was an absolute dumpster fire (without any tests…), but also because CakePHP made so many incompatible changes in a major release (their model layer for db storage was rewritten from scratch, as I understand it) that updating it was basically a rewrite.

                                                            So after several days of heavy coding, I decided that it was basically an impossible task and had to tell the customer that it would be smarter to get the site rebuilt from scratch.

                                                          3. 3

                                                            It depends on how the whole thing is laid out. I’m a little out of my element here but I knew some folks who were wrestling with a humongous Python codebase in the second category and they weren’t exactly happy about how simple it was.

                                                            For example, lots of these codebases see continuous, but low-key development. You have a test suite for like fourty products, spanning five or six firmware versions. You add support for maybe another one every year, and maybe once every couple of years you add a major new feature to the testing framework itself. So it’s not just a matter of deploying a legacy application that’s completely untouched, you also have to support a complete development environment, even if it’s just to add fifty lines of boilerplate and maybe one or two original test cases a year. Thing is, shipping a non-trivial Docker setup that interacts with the outside world a lot to QA automation developers who are not Linux experts is just… not always a very productive affair. It’s not that they can’t use Docker and don’t want to learn it, it’s just that non-trivial setups break a lot, in non-obvious ways, and their end users aren’t always equipped to un-break them.

                                                            There’s also the matter of dependencies. These things have hundreds of them and there’s a surprising amount of impedance matching to do, not only between the “main package” and its dependencies, but also between dependencies and libraries on the host, for example. It really doesn’t help that Python distribution/build/deployment tools are the way they are.

                                                            I guess what I’m saying is it doesn’t have to be a problem in practice, but it is a hole that’s uncannily easy to dig yourself in.

                                                          4. 4

                                                            It’s also hard to change code from any version of Python, just because it’s so permissive (which is part of the appeal, of course). Good luck understanding or refactoring code- especially authored by someone else- without type hinting, tests, or basic error-level checks by a linter (which doesn’t even ship out of the box).
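
                                                            (A small hypothetical illustration of that point, not from the comment itself: with annotations in place, a checker like mypy can flag a bad caller before the code ever runs, which is exactly the safety net refactoring wants.)

                                                            ```python
                                                            from typing import Dict

                                                            def total_price(quantities: Dict[str, int], prices: Dict[str, float]) -> float:
                                                                """Sum price * quantity per item. The annotations let a checker
                                                                like mypy catch a caller that passes, say, a list of tuples
                                                                instead of a dict -- before anything runs."""
                                                                return sum(prices[item] * qty for item, qty in quantities.items())

                                                            print(total_price({"apple": 2, "pear": 1}, {"apple": 1.5, "pear": 2.0}))  # 5.0
                                                            ```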

                                                          1. 22

                                                              This wastes a lot of time denigrating other people’s free donated labour.

                                                            Fine, your requirements are that you don’t want to install anything and that you don’t like boilerplate, got it. Is “oh god my eyes” and “what the goddamn hell is this supposed to be” necessary? What is it helping? Generally when a library author settles on a solution it’s because they have some constraints. Maybe the boilerplate is because they expect you to customise that bit, or because their approach is generic enough that you need to fill in the gaps to glue it to your particular problem. Maybe those constraints don’t match yours and it’s not a good fit for you but it’s probably not because they’re trying to assault you personally and everyone except you is stupid.

                                                              You’re fully within your rights to just not use it, nothing wrong with that. You find yourself with a wealth of choice, given this opportunity to be choosy, because of the generosity of these people whose work you’re shitting on instead.

                                                            1. 9

                                                              Qt has accessibility. The author states that installing it is more work than they want to do. I’ve installed Qt on various platforms and it’s not been difficult.

                                                              I don’t feel like this is an honest evaluation, but rather a lazy stab at ticking an ‘accessibility’ box without having to put even a small amount of effort into understanding the very basics of some well regarded frameworks.

                                                              1. 3

                                                                Yeah, installing Qt is fairly simple, especially for the amount of stuff it can do. Using Qt with anything but C++ isn’t something on my wish list, as far as my little experience with that goes. Oh, and if the author is already afraid of installing Qt, a “binding generator” will be far too much for them.

                                                              2. 8

                                                                It wasn’t at all a waste of time for the OP to write this. I’m actively interested in writing a GUI program in Rust, and I myself have looked into a few of the libraries mentioned here and come to some of the same conclusions about their immaturity for my use case. It was directly helpful for me to see this analysis of some of the GUI libraries I didn’t try myself.

                                                                I agree that it’s in poor taste to denigrate the work of people releasing an open source library to the world; but it’s not any meaningful kind of denigration to look at open source libraries critically, evaluate whether they meet your needs, and say so honestly and publicly if the answer is “no” or “not yet”. Which is what the OP was doing.

                                                                1. 1

                                                                  Sorry, doing the survey and documenting it wasn’t a waste. A reader has to read it through the lens of the writer’s requirements of course, but that’s always the case (which, again, include not installing anything, a lack of boilerplate, and accessibility features; all legitimate requirements). The unnecessary negativity towards other people’s work in the middle of it is what I’m railing against.

                                                                2. 5

                                                                  Is “oh god my eyes” and “what the goddamn hell is this supposed to be” necessary? What is it helping?

                                                                  Well, at least it’s good to know that people in the Rust community are just as nice to each other as they are to those who aren’t using Rust…

                                                                1. 3

                                                                  The links in the table don’t do anything for me. -1 for accessibility and easy reading. Also, I’m missing a “cross-platform” / OS column.

                                                                  1. 3

                                                                    Also I’m missing a “cross-platform” / OS column

                                                                    Is it? They’re pretty explicit that they care about Windows, I don’t see cross-platformness listed in their constraints at all

                                                                  1. 2

                                                                    Are there any options for storing Extended Attributes?

                                                                    1. 2

                                                                      No, but that’s an interesting idea, if there’s a cross-platform way to read & write them.

                                                                      1. 1

                                                                        Unfortunately it’s OS-dependent, because nobody really uses them. I don’t know how well supported they are; Linux has standard kernel system calls, as does Mac OS X, and the BSDs do too (you can check out exattr.c for implementation details on how I wrapped it). I’m not sure how Windows handles this with respect to NTFS streams.
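
                                                                        (For what it’s worth, Python exposes the Linux syscalls mentioned above directly as `os.setxattr`/`os.getxattr`. A best-effort sketch, assuming Linux and a filesystem that permits `user.*` attributes, might look like:)

                                                                        ```python
                                                                        import os
                                                                        import tempfile

                                                                        def try_set_user_xattr(path: str, name: str, value: bytes) -> bool:
                                                                            """Best-effort: store a user.* extended attribute on a file.
                                                                            os.setxattr/os.getxattr only exist on Linux, and even there
                                                                            the filesystem may refuse (e.g. tmpfs without user_xattr)."""
                                                                            if not hasattr(os, "setxattr"):
                                                                                return False  # platform doesn't expose xattr syscalls
                                                                            try:
                                                                                os.setxattr(path, "user." + name, value)
                                                                                return os.getxattr(path, "user." + name) == value
                                                                            except OSError:
                                                                                return False  # filesystem or permissions refused

                                                                        with tempfile.NamedTemporaryFile() as f:
                                                                            print(try_set_user_xattr(f.name, "demo", b"hello"))  # True or False, depending on platform/fs
                                                                        ```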

                                                                        1. 1

                                                                          And the different platforms have pretty different constraints on them so a unified API would be rough

                                                                          1. 1

                                                                            No, that’s filesystem-dependent not platform-dependent

                                                                    1. 19

                                                                      Unfortunately, being nice won’t get you a lot of money in the modern corporate workplace.

                                                                      Being nice means some people will think you are a weakling.

                                                                      It’s really important to know how and when to be nice and how and when to be assertive (or whatever the opposite of nice is in this context…).

                                                                      For every “guide to being nice” there’s a career article in the vein of “how to get what you want: step 1. stop saying yes all the time”

                                                                      Just as different programming languages can be suited for different jobs, different personalities are suited for dealing with different incarnations of the corporation/society. A friendly personality is great for making friends. But that should not be your primary goal in a workplace.

                                                                      Developers who don’t have social skills and usually seem upset or angry.

                                                                      This has nothing to do with being nice. Telling people who don’t have social skills to ‘just be nice’ is like telling starving people to ‘just be rich’

                                                                      Developers who undermine each other at every turn.

                                                                      Necessary in many modern workplaces in order to compete for limited upward potential.

                                                                      Generally defensive developers.

                                                                      This has more to do with culture around mistakes. Not the fault of the individual.

                                                                      Developers who think that other departments of the company are stupid, that they don’t know what they want.

                                                                      They are stupid. At least for this limited domain. If you are a knowledge worker you rely on other people being stupid in your domain. So to assume that they are not would just not make any sense.

                                                                      1. 38

                                                                        I think a point of this blog post was to be polite when talking to and about your colleagues. Doing that does not imply in any way that you are a weakling. It makes the conversation better and you are more likely to come up with good solutions, in my experience.

                                                                        1. 9

                                                                          My experience in the corporate workplace matches @LibertarianLlama’s post very much, albeit in a somewhat nuanced way (which I suspect is just a matter of how you present your ideas at the end of the day?).

                                                                          For example, being polite when talking to and about your colleagues is important, primarily for reasons of basic human decency. (Also because “being professional” is very much a high-strung nerve game, where everyone tries to get the other to blink first and lose it but that, and how “professional” now means pretty much anything you want it to mean, is a whole other story.)

                                                                          However, there are plenty of people who, for various reasons, will be able to be jerks without any kind of consequences (usually anyone who’s a manager and on good terms with either their manager, or someone who wants to undercut their manager). These people will take any good-faith attempt to keep things nice even in the face of unwarranted assholeness as a license to bring on the abuse. Being polite only makes things worse, and at that point you either step up your own asshole game, or – if that’s an option – you scramble for another job (which may or may not be worse, you never know…).

                                                                          Also, all sorts of things, including this, can be taken as a sign of weakness under some circumstances. Promotions aren’t particularly affected by that. While plenty of incompetent people benefit from favoritism, everyone who’s in an authority position (and therefore also depends on other people’s work to move further up) needs at least some competent underlings in order to keep their job. So they will promote people they perceive as “smart” whether they are also perceived as “weak” or not. But it does affect how your ideas are treated and how much influence you can have. I’ve seen plenty of projects where “product owners” (their actual title varied but you get the point) trusted the advice of certain developers – some of them in different teams, or different departments altogether – to the point where they took various decisions against the advice of the lead developers in said projects, sometimes with disastrous consequences. It’s remarkably easy to have the boat drift against your commands, and then get stuck with the bill when it starts taking on water.

                                                                          Basically, I think all this stuff the blog post mentions works only in organisations where politeness, common sense etc. are the modus operandi throughout the hierarchy, or at least through enough layers of the hierarchy to matter. In most modern corporate workplaces, applying this advice will just make you look like that one weirdo who talks like a nerd.

                                                                          (inb4: yes yes, I know your experience at {Microsoft|Google|Apple|Facebook|Amazon|Netflix|whatever} wasn’t like that at all. My experience at exactly one corporate workplace wasn’t like that either, but I knew plenty of people in other departments who were racking up therapy bills working for the same company. Also, my experience at pretty much all other corporate workplaces was exactly like that, and the only reason I didn’t rack up on therapy bills is that, while I hate playing office politics because it gets in the way of my programming time, if someone messes with me just to play office politics or to take it out on someone, I will absolutely leave that job and fuck them up and torch their corporate carcass in the process just for the fun of it).

                                                                          Edit: I guess what I’m saying is, we all have a limited ability to be nice while working under pressure and all, and you shouldn’t waste it on people who will make a point of weaponizing it against you, even if it looks like the decent thing to do. Be nice but don’t be the office punching bag, that doesn’t do you any good.

                                                                          1. 3

                                                                            I mean, in the case you’re describing, I think it’s still valuable to act nice, like this post describes. You definitely gain more support and generate valuable rapport by being nice, rather than being an asshole. Oftentimes, being able to do something large that cuts across many orgs requires that you have contacts in those other orgs, and people are much more willing to work with you if you’re nice.

                                                                            Nice should be the default. However, when you have to work with an asshole, I think it’s important to understand that the dynamic has changed and that you may need to interact with them differently from other coworkers. Maybe this means starting nice, seeing that they will exploit that, and then engaging far more firmly in the future. Maybe you start with trying to empathize with their position (I don’t mean saying something like “I see where you’re coming from and I feel blah blah,” but by speaking their language, “Yeah dude, this shit sucks, but we have to play ball” or whatever).

                                                                            In general, the default should always be nice, but nice does not mean necessarily not being firm when it’s required (someone wants to explore a new technology, but your team is not staffed for it and you have other priorities that the team needs to meet), and nice does not mean you should put on social blinders and interact with everyone the same way. Part of social interaction is meeting people where they are.

                                                                            1. 4

                                                                              Nice should be the default.

                                                                              Oh, yeah, no disagreement here. We have a word to describe people who aren’t nice by default and that word is “asshole”. You shouldn’t be an asshole. Some people are, and deserve to be treated as such. Whether they’re assholes because they just have a shit soul or they’re pre-emptively being nasty to everyone defensively, or for, um, libertarian reasons, makes very little difference IMHO.

                                                                          2. 14

                                                                            The benefits of being nice find no purchase in the libertarian’s mentality. Keep this in mind when you encounter them. Adjust your approach and expectations accordingly. More generally, try to practice what I call “impedance matching” with them (and with all people). What I mean by that is (1) understand their personality’s API and (2) customize your interface accordingly. Meet them where they are. Then there will be fewer internal reflections in your signaling. Of course, if they proudly undermine you, don’t think you can change them. You’ll have to just keep your chin up and route around that damage somehow.

                                                                            1. 1

                                                                              This corresponds to a very personal and painful lesson that I have recently learned. I would caution against stereotypes, but I’m a bit beaten down by the experience.

                                                                          3. 30

Hard no. I’ve tried to be nice in my 35-year career (at least, never tried to undermine or hurt others) and have nevertheless accumulated what many would see as “a lot of money”. (And I’d have a lot more if I hadn’t kept selling Apple stock options as soon as they vested, in the ‘00s…) Plenty of “nice” co-workers have made out well too.

                                                                            Telling people who don’t have social skills to ‘just be nice’ is like telling starving people to ‘just be rich’

                                                                            The advice in that article is directly teaching social skills.

                                                                            Necessary in many modern workplaces in order to compete for limited upward potential.

                                                                            Funny, I’ve always used productivity, intelligence and social skills to compete. If one has to use nastiness, then either one is lacking in more positive attributes, or at least is in a seriously f’ed up workplace that should be escaped from ASAP.

                                                                            1. 19

                                                                              Unfortunately, being nice won’t get you a lot of money in the modern corporate workplace.

                                                                              I’ve been at a workplace like yours but at my current one most of the most-senior and presumably best-paid folk are incredibly nice and I aspire and struggle to be like them. I’ve learned a lot trying to do so and frankly not being nicer is one of the things holding me back. Consider changing yourself or workplaces, I think you’ll be surprised. I’m disappointed by the “but I have to be an asshole” discourse here, part of growing up professionally for me was leaving that behind.

Unfortunately that version of me also wouldn’t have listened to this advice and would fall into this “what’s with all the unnecessary verbosity?” trap, so I don’t know that this will actually land for anybody.

                                                                              1. 9

                                                                                I did not expect you to be a fan of the modern corporate workplace.

                                                                                I recall one time at a former employer where I pissed off my managers by pointing out to an upper executive that it was illegal, under USA labor laws, to instruct employees to not discuss salaries. I was polite and nice, but I’m sure you can imagine that my managers did not consider my politeness to be beneficial given that I had caught them giving unlawful advice.

                                                                                If you want to be assertive, learn when employers cannot retaliate against employees. I have written confident letters to CEOs, asking them to dissolve their PACs and stop interfering in democracy. This is safe because it is illegal for employers to directly retaliate; federal election laws protect such opinions.

                                                                                It is true that such employers can find pretexts for dismissal later, but the truth is that I don’t want to be employed by folks who routinely break labor or election laws.

                                                                                1. 12

                                                                                  It is true that such employers can find pretexts for dismissal later, but the truth is that I don’t want to be employed by folks who routinely break labor or election laws.

                                                                                  This is one of the best pieces of advice that a young tech worker can receive, and I want to second this a million times, and not just with regard to PACs and federal election laws. Just a few other examples:

                                                                                  • Don’t cope with a toxic workplace, leave and find a place where you won’t have to sacrifice 16 hours a day to make it through the other 8.
• Don’t “cope with difficult managers”, to quote one of the worst LinkedIn posts I’ve seen. Help them get their shit together (that’s basic human decency, yes, if they’re going through a tough patch and unwittingly taking it out on others, by all means lend a hand if you can) but if they don’t, leave the team, or leave the company and don’t sugar-coat it in the exit interview (edit: but obviously be nice about it!). Let the higher-ups figure out how they’ll meet their quarterly objectives with nobody other than the “difficult managers” that nobody wants to work with and the developers who can’t find another job.
                                                                                  • Don’t tolerate shabby workplace health and safety conditions any more than companies tolerate shabby employee work.
                                                                                  • Don’t tolerate illegal workplace regulations and actions (including things like not discussing your salary) any more than companies tolerate employees’ illegal behaviour.

Everyone who drank the recruiting/HR Kool-Aid blabbers about missing opportunities when they hear this but it’s all bullshit, there are no opportunities worth taking in companies like these. Do you really think you’ll have a successful career building amazing things and get rich working in a company that can’t even get its people to not throw tantrums like a ten-year-old – or, worse, rewards people who do? In a company that’s so screwed up that even people who don’t work there anymore have difficulty concentrating at work? In a company that will go to any lengths – including breaking the law! – to prevent you from negotiating a fair deal?

                                                                                  I mean yes, some people do get rich working for companies like these, but if you’re a smart, passionate programmer, why not get rich doing that instead of playing office politics? The sheer fact that there are more people getting treatment for anxiety and PTSD than people with senior management titles at these companies should be enough to realize that success in these places is a statistical anomaly, not something baked in their DNA.

                                                                                  Obviously, there are exceptions. By all means put up with all that if it pays your spouse’s cancer treatment or your mortgage or whatever. But don’t believe the people who tell you there’s no other way to success. It’s no coincidence that most of the people who tell you that are recruiters – people whose jobs literally depend on convincing other people to join their company, but have no means to enact substantial change so as to make that company more attractive.

                                                                              1. 3

I wrote a similar tool for password management within a team. It’s client-server and has different goals, but I’m happy to see people working in this space :)

                                                                                1. 2

                                                                                  What’s the goal of the nondeterminism in your case?

                                                                                  1. 13

I don’t think it’s that nondeterminism is a goal, it’s that determinism is not always a goal. For instance if you need a way to shed 20% of your traffic once it goes over some threshold you could deterministically drop every 5th request, keeping the state and coordination necessary to do so, or you could randomly drop every request with a 20% probability. The latter is much simpler. It’s not that nondeterminism is desirable, it’s that determinism adds more complexity and we don’t really need it for our goals here.
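To make the contrast concrete, here’s a minimal Python sketch (mine, not from any particular system) of the stateless version: each worker decides independently, with no counter to coordinate.

```python
import random

def should_shed(shed_fraction=0.2):
    """Decide, with no shared state or counters, whether to drop this request."""
    return random.random() < shed_fraction

# Over many requests roughly shed_fraction of them get dropped,
# with zero coordination between workers or nodes.
dropped = sum(should_shed() for _ in range(100_000))
print(f"dropped {dropped} of 100,000")
```

The deterministic every-5th-request version would need a counter shared across every worker handling traffic; this needs nothing.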

                                                                                    Or a recommender that scores everything that you’re likely to buy. There’s likely to be something that the algorithm thinks you’re really likely to buy but you aren’t so it will stack it way at the top of the list, so every time you see recommendations it’s always the top item. (If we remove things after you’ve bought them, this result is virtually guaranteed.) We could recompute the scores often and keep track of what you’ve already seen and frequencies of re-showing the same product, enforcing diversity by occasionally promoting items lower in the list. That would work. Or we could multiply the score by a random float, effectively weighting the likelihood by the computed score but still allowing it to look fresh every time you look and getting the diversity that way. If you need determinism then this quick hack isn’t available to you, but if it’s not something that you actually need then it is.
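A sketch of that quick hack (illustrative Python, the names are made up): multiply each score by a uniform random factor before sorting, so high-scoring items still tend to surface but the ordering varies on every request.

```python
import random

def jittered_ranking(scored_items):
    """Rank (item, score) pairs by score * random jitter: strong
    recommendations still tend to come first, but the list looks
    fresh on every page load."""
    return sorted(scored_items,
                  key=lambda item: item[1] * random.random(),
                  reverse=True)

recs = [("blender", 0.9), ("toaster", 0.5), ("kettle", 0.4)]
print(jittered_ranking(recs))  # some ordering biased toward "blender" first
```

No seen-item tracking, no frequency capping, no recomputation: the diversity falls out of the jitter.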

                                                                                    1. 1

                                                                                      Such things are perfectly fine to do, if somewhat tricky to test. But randomizing tests is a big no-no in my book (see my note above). In fact, if you drop requests based on a probability rather than deterministically, it will probably also be much harder to abuse the system in a denial of service attack.

                                                                                    2. 4

                                                                                      Property testing will generate random cases. If you run a test 100 times with random data, you might generate data which breaks a test; that’s good!
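For instance, a hand-rolled property test in Python (stdlib only, just a sketch of the idea): generate random JSON-shaped values and assert a round-trip invariant instead of checking hard-coded examples.

```python
import json
import random
import string

def random_value(depth=0):
    """Build a random JSON-compatible value, nesting lists up to depth 2."""
    choices = [
        lambda: random.randint(-10**6, 10**6),
        lambda: ''.join(random.choices(string.ascii_letters, k=5)),
        lambda: None,
        lambda: random.choice([True, False]),
    ]
    if depth < 2:
        choices.append(lambda: [random_value(depth + 1) for _ in range(3)])
    return random.choice(choices)()

# Property: decoding an encoded value gives back the original.
for _ in range(200):
    v = random_value()
    assert json.loads(json.dumps(v)) == v
```

Libraries like Hypothesis add shrinking and replayable failing examples on top of this, which addresses the flakiness complaint below: a failure comes with the input that reproduces it.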

                                                                                      1. 1

                                                                                        Random tests are just as annoying as flaky tests - they break at inopportune moments and can take a lot of effort to track down. They only sound good in theory, but in practice they’re just a headache.

                                                                                        1. 3

                                                                                          Nah, I use them all day every day. Not a headache, not a lot of effort. Finds bugs all the time and is great documentation.

                                                                                    1. 3

                                                                                      Who’s ready to rewrite Mac System 7 in Rust?! It could really use the memory safety and concurrency.

                                                                                      1. 3

                                                                                        All of the best stuff I remember from classic Mac OS was because there was no memory safety. Most extensions worked by rooting around in the OS’s memory space and changing stuff around at will. There are good reasons not to bring that back to modern OS’s but losing it would make the classic mac notably less functional

                                                                                        1. 2

                                                                                          Look, I get it, we can build The Grouch in as an easter egg.

                                                                                        2. 1

                                                                                          I mean, you really want to start with System 6, no? It’s been a very long time since I did any programming on “Classic” Mac OS, so I could be confused.

                                                                                          1. 4

                                                                                            System 7 was a huge jump, especially in the UX. I wouldn’t go back any further.

                                                                                            1. 1

                                                                                              But it was clunky under the hood, is what I’m vaguely remembering. I do miss System 7/8, though.

                                                                                              1. 3

                                                                                                Oh god yes. Everything before Mac OS X was a dumpster fire under the hood. The Lisa had a pretty sound OS, from what I’ve heard, but the Macintosh project had to hack together a similar UX on a much smaller machine (128KB RAM, 64KB ROM, and 112KB floppies) so a lot of corners were cut. Most significantly it was designed to run only one app at a time, and adding multi-app support was a monstrous kludge that basically put all apps into a single process and single thread.

                                                                                                (Apple knew this was a mess and were working on more solid replacement OSs since at least 1989, but for various reasons those projects all failed until NeXT came along and OS X took shape in 1997-2001.)

                                                                                            2. 4

                                                                                              System 7 drastically improved the mouthfeel; putting the two side by side reveals how much more polish is in there.

                                                                                              1. 4

                                                                                                I’m not going to lie, as an amateur mixologist, that’s the second weirdest reference to mouthfeel I’ve heard.

                                                                                                1. 2

                                                                                                  Okay I’ll bite. What was the weirdest?

                                                                                                  1. 2

mildly sexual/adult language warning (within what YouTube permits)

                                                                                                2. 2

                                                                                                  I’m glad someone is finally talking about the mouthfeel.

                                                                                            1. 2

                                                                                              Can you give some more context here? Aren’t most compiled languages a “Programming language that compiles into a x86 ELF executable”?

                                                                                              1. 2

                                                                                                Not my own stuff. This is a compiler that turns its C-similar language directly into x86 assembly. As in “forget about LLVM, and all the other complexity, just let me translate directly from source code into an assembly language.”

                                                                                                Sure he hopes to also target something else later, including LLVM, but this makes for an interesting project especially when considering the bootstrap-from-nothing problem.
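As a sketch of what “directly into assembly” means (my own toy Python, nothing to do with the linked compiler): even a few lines can turn a token stream into x86-64 text with no IR in between.

```python
def compile_expr(tokens):
    """Translate a flat expression like ['3', '+', '4', '-', '1']
    straight into x86-64 assembly text: no IR, no LLVM."""
    ops = {'+': 'add', '-': 'sub'}
    asm = [f"    mov rax, {tokens[0]}"]          # load the first operand
    for op, val in zip(tokens[1::2], tokens[2::2]):
        asm.append(f"    {ops[op]} rax, {val}")  # fold each operand into rax
    asm.append("    ret")                        # result in rax, per SysV ABI
    return "\n".join(asm)

print(compile_expr(["3", "+", "4", "-", "1"]))
```

A real compiler also needs parsing, register allocation, and calling conventions, but the point stands: the path from source to machine code can be direct, which is exactly what makes bootstrapping from nothing tractable.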

                                                                                              1. 10

                                                                                                I think an important direction for future programming language development is better support for writing single programs that span multiple nodes. It’s been done, e.g. erlang, but it would be nice to see more tight integration of network protocols into programming languages, or languages that can readily accommodate libraries that do this without a lot of fuss.

                                                                                                There’s still some utility in IDLs like protobufs/capnproto in that realistically the whole world isn’t going to converge on one language any time soon, so having a nice way of describing an interface in a way that’s portable across languages is important for some use cases. But today we write a lot of plumbing code that we probably shouldn’t need to.

                                                                                                1. 3

                                                                                                  I couldn’t agree more. Some sort of language feature or DSL or something would allow you to have your services architecture without paying quite so many of the costs for it.

Type-checking cross-node calls, service fusion (i.e. co-locating services that communicate with each other on the same node to eliminate network traffic where possible), RPC inlining (at my company we have RPC calls that amount to just CPU work but they’re in different repos and different machines because they’re written by different teams; if the compiler had access to that information it could eliminate that boundary), something like a query planner for complex RPCs that decay to many other backend RPC calls (we pass object IDs between services but often many of them need the data about those same underlying objects so they all go out to the data access layer to look up the same objects). Some of that could be done by ops teams with implementation knowledge but in our case those implementations are changing all of the time so they’d be out of date by the time the ops team has time to figure out what’s going on under the hood. There’s a lot that a Sufficiently Smart Compiler(tm) can do given all of the information.

                                                                                                  1. 3

                                                                                                    There is also a view that it is a function of underlying OS (not a particular programming language) to seamlessly provide ‘resources’ (eg memory, CPU, scheduling) etc. across networked nodes.

This view is sometimes called Single Image OS (briefly discussed that angle in that thread as well).

Overall, I agree, of course, that creating safe, efficient and horizontally scalable programs should be much easier.

                                                                                                    Hardware is going to continue to drive horizontal scalability capabilities (whether it is multiple cores, or multiple nodes, or multiple video/network cards)

                                                                                                    1. 2

                                                                                                      I was tempted to add some specifics about projects/ideas I thought were promising, but I’m kinda glad I didn’t, since everybody’s chimed with stuff they’re excited about and there’s a pretty wide range. Some of these I knew about others I didn’t, and this turned out to be way more interesting than if it had been about one thing!

                                                                                                      1. 2

                                                                                                        Yes, but: you need to avoid the mistakes of earlier attempts to do this, like CORBA, Java RMI, DistributedObjects, etc. A remote call is not the same as an in-process call, for all the reasons called out in the famous Fallacies Of Distributed Computing list. Earlier systems tried to shove that inconvenient truth under the rug, with the result that ugly things happened at runtime.

                                                                                                        On the other hand, Erlang has of course been doing this well for a while.

                                                                                                        I think we’re in better shape to deal with this now thanks all the recent work languages have been doing to provide async calls, Erlang-style channels, Actors, and better error handling through effect systems. (Shout out to Rust, Swift and Pony!)

                                                                                                        1. 2

                                                                                                          Yep! I’m encouraged by signs that we as a field have learned our lesson. See also:

                                                                                                          1. 1

                                                                                                            Cap’nProto is already on my long list of stuff to get into…

                                                                                                        2. 2

                                                                                                          Great comment, yes, I completely agree.

This is linked from the article, but just in case you didn’t see it, it lists a few attempts at exactly that, including my own.

                                                                                                          1. 2

                                                                                                            This is what work like Spritely Goblins is hoping to push forward

                                                                                                            1. 1

                                                                                                              I think an important direction for future programming language development is better support for writing single programs that span multiple nodes.


                                                                                                              I think the model that has the most potential is something near to tuple spaces. That is, leaning in to the constraints, rather than trying to paper over them, or to prop up anachronistic models of computation.
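For anyone unfamiliar, a tuple space is a shared bag of tuples that processes coordinate through by pattern matching, never by calling each other directly. A toy single-process Python sketch (real systems in the Linda lineage distribute this across nodes):

```python
import threading

class TupleSpace:
    """A toy tuple space: writers put tuples, readers take the first
    tuple matching a pattern (None acts as a wildcard)."""
    def __init__(self):
        self._tuples = []
        self._cv = threading.Condition()

    def put(self, tup):
        with self._cv:
            self._tuples.append(tup)
            self._cv.notify_all()

    def take(self, pattern):
        """Block until a matching tuple exists, then remove and return it."""
        def matches(tup):
            return len(tup) == len(pattern) and all(
                p is None or p == t for p, t in zip(pattern, tup))
        with self._cv:
            while True:
                for tup in self._tuples:
                    if matches(tup):
                        self._tuples.remove(tup)
                        return tup
                self._cv.wait()

space = TupleSpace()
space.put(("job", 42))
print(space.take(("job", None)))  # → ('job', 42)
```

The appeal for node-spanning programs is that producers and consumers never name each other, which leans into asynchrony and partial failure rather than papering over them with a transparent call.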

                                                                                                              1. 1

                                                                                                                better support for writing single programs that span multiple nodes.

                                                                                                                That’s one of the goals of Objective-S. Well, not really a specific goal, but more a result of the overall goal of generalising to components and connectors. And components can certainly be whole programs, and connectors can certainly be various forms of IPC.

                                                                                                                Having support for node-spanning programs also illustrates the need to go beyond the current call/return focus in our programming languages. As long as the only linguistically supported way for two components to talk to each other is a procedure call, the only way to do IPC is transparent RPCs. And we all know how well that turned out.

                                                                                                                1. 1

                                                                                                                  indeed! Stuff like looks promising.

                                                                                                                1. 0

What’s the point of this? Most of the unique features of z/OS are only really useful or interesting if you’re running it on a mainframe, which this person isn’t doing. 90% of the blogpost is the person trying to get a copy of it in the first place, and talking about code licensing bullshit.

                                                                                                                  I don’t see why anyone would go through this trouble except out of curiosity, but as far as I can tell for ‘normal’ use it’s basically just a unix box with some quirks, which along with the earlier licensing BS makes it seem like a lot of effort for very little gain – compare with running something like 9front where it’s a mostly unique system and you can acquire the entire thing for free without much effort.

                                                                                                                  Can someone explain why this is useful / interesting to do?

                                                                                                                  1. 3

                                                                                                                    What’s the point of this?

                                                                                                                    It makes the OP happy. What other justification does he need?

                                                                                                                    1. 1

Ok, that’s cool. But this is a guide on installing it and he doesn’t really give me a reason to do any of that. He said part of his reason for installing it is to pass the knowledge on to the next generation, but he just utterly fails to give any kind of reason on why this is worthwhile knowledge to pass on if you’re not literally working as a sysadmin on Wall Street.

                                                                                                                    2. 2

                                                                                                                      Calling z/OS a “unix box with quirks” is underselling it extremely. It’s quite a bizarre OS branch people know little about, but that’s because IBM has no hobbyist program and you only see it if you’re basically MIS at a Fortune 500.

                                                                                                                      I don’t think there’s too much other than licensing bullshit in the OP either (it’s thin otherwise); he’d be better off using literally anything z/PDT for hyucks.

                                                                                                                      1. 1

                                                                                                                        Calling z/OS a “unix box with quirks” is underselling it extremely.

                                                                                                                        And yet neither this blogpost, nor the wikipage for the operating system does anything to disabuse me of this notion, and there doesn’t seem to be any feature of this that is useful for someone running it on something that isn’t a mainframe.


                                                                                                                      2. 1

                                                                                                                        They say why; it’s the first sentence:

                                                                                                                        Some people retire and buy an open top sports car or big motorbike. Up here in Orkney the weather can change every day, so instead of buying a fast car with an open top, when I retired, I got z/OS running on my laptop for a similar sort of price! This means I can continue “playing” with z/OS and MQ, and helping the next generation to use z/OS. At the end of this process I had the software installed on my laptop, many unwanted DVDs, and a lot of unnecessary cardboard.

                                                                                                                      1. 1

                                                                                                                        Like any “why isn’t ___ more popular” question, I’m sure this will attract a lot of plausible-sounding anecdotes, but the answer is almost certainly that tech fashion just didn’t go that way; it went some other way for no reason at all except happenstance. Why didn’t OS/2 become more popular? You can describe the history there, but there’s no why – it just happened. An identical world with one fewer butterfly probably went another way instead, and its inhabitants are just as sure about why it went that way. If graph databases had become more popular, you’d have the same folk explaining why it was obvious it would happen that way.

                                                                                                                        But for a baseless anecdote of my own: SQL can model graphs, if poorly, and there are some great SQL databases that already exist, so most people with a graph problem just write it as a SQL problem instead, because they learnt SQL in school and already know it, and their friends and coworkers know it too. Writing a database is difficult, hard work, especially if you care about correctness. The number of people in the world who can do it is finite, and for whatever reasonless set of historical circumstances they’re concentrated in the SQL space instead, and that feeds itself.
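To make the “SQL can model graphs, if poorly” point concrete, here’s a minimal sketch (table name and sample data invented for illustration) of the usual workaround: store the graph as an edge table and answer a reachability query with a recursive CTE, using Python’s built-in sqlite3:

```python
import sqlite3

# A tiny directed graph stored the relational way: one row per edge.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edges (src TEXT, dst TEXT)")
conn.executemany(
    "INSERT INTO edges VALUES (?, ?)",
    [("a", "b"), ("b", "c"), ("c", "d"), ("b", "d")],
)

# Graph traversal ("which nodes are reachable from 'a'?") has to be
# expressed as a recursive CTE -- workable, but clumsier than a
# native graph query language.
rows = conn.execute("""
    WITH RECURSIVE reach(node) AS (
        SELECT 'a'
        UNION
        SELECT e.dst FROM edges e JOIN reach r ON e.src = r.node
    )
    SELECT node FROM reach WHERE node != 'a' ORDER BY node
""").fetchall()

reachable = [row[0] for row in rows]
print(reachable)  # ['b', 'c', 'd']
```

The `UNION` (rather than `UNION ALL`) deduplicates visited nodes, which is what makes the recursion terminate even if the graph had cycles – exactly the kind of detail a dedicated graph database would handle for you.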