1. 3

    The really interesting thing is the last paragraph: iXsystems seems to be transitioning to Linux instead of FreeBSD, although that could be misleading, as TrueNAS Core still seems to be just a rebranded, updated FreeNAS.

    1. 4

      Yeah, and I have to say, as a FreeNAS user, I’m not excited about the Linux shift. I understand the move is supposed to make horizontal scaling more feasible, but 1- as a home user, that’s not a compelling feature, and 2- it’s still unclear to me why Linux is better suited for that than FreeBSD. And replatforming is a huge effort that will distract from feature work.

      Of course, I’m not privy to the reasoning around this change, and I’m sure they have good reasons.

      That said, I’m glad it’s remaining open source, since that enables me to continue using my home NAS well beyond the sunset of the FreeBSD branch.

      1. 1

        On the one hand, I love FreeBSD and have nothing but good experiences with it (and FreeNAS), but I chose it because of ZFS, many years ago, on an N54L. It’s a bit low on CPU these days, and for my current needs, being able to painlessly host Docker containers and libvirt VMs for dev work on my NAS would be a boon, so Linux (if it has ZFS) would 100% solve my problem here. So while I’m a little sad, I don’t have a really good reason to object.

    1. 2

      Surprised to see all the love for FastCGI. My recollection is that it was a nightmare to use – very fussy (hard to program for and integrate with), and quite brittle (needing regular sysad intervention).

      1. 2

        I remember trying to set it up once on the server side (~10 years ago?) and it was not fun.

        However as a user on shared hosting, it works great. I’ve been running the same FastCGI script for years, and it’s fast, with no problems. So someone figured out how to set it up better than me (which is not surprising).

        I think the core idea is good, but for a while the implementations were spotty, and it was not well documented in general. There seems to be significant confusion about it to this day, even on this thread of domain experts.

        To me the value is to provide the PHP deployment model and concurrency model (stateless/shared nothing/but with caching), but with any language.

        1. 1

          We ran FastCGI at quite large scale back around 2000 and it was very reliable and not particularly difficult to work with.

          1. 1

            I was using it at mid-scale in the aughts (mod_fastcgi on apache) and it was not a pleasant experience. Maybe our sysads were particularly bad, or maybe our devs just didn’t get the concepts, but I recall others in my local user groups having similar difficulties.

        1. 12

          topic drift… but anyone tried https://pijul.org/, a new VCS, that claimed to solve the exponential merge time?

          1. 7

            Pijul is being rewritten: https://discourse.pijul.org/t/is-this-project-still-active-yes-it-is/451

            I only know, because I was trying to send a patch to a project hosted on nest.pijul.com, but I think Nest is also in a bit of a broken state at the moment. At least, I couldn’t make it work.

            Maybe Pijul works fine locally. It’s been in the rewrite state for quite a bit, though, so the current code is not really maintained.

            1. 4

              Yeah, I want to like pijul, but I was put off by the fact that they refuse to open-source Nest. That means I can’t self-host, and it was a warning sign that the development style in general is not as open as I’d like. That they’ve been in closed rewrite for over a year is another yellow flag.

              I’ll check back in with it in a couple years.

              1. 4

                I couldn’t care less about Nest (although I completely get those who do!), but the fact that they’re on their third rewrite, none of which has existed long enough to actually get stable or usable, is a much bigger issue IMVHO.

                1. 4

                  I’d be fine if they rewrote it ten times, as long as the process was open! Or at least transparent.

                  1. 2

                    Haven’t they been rewriting it for over 2 years now?

                  2. 1

                    Third already? Where did you get that from?

                    1. 1

                      They had two older implementations in OCaml and Scala, and are currently rewriting in Rust. Am I missing something?

                      1. 2

                        Disclaimer: I’m one of the authors.

                        We’ve indeed tried different languages before settling on Rust, and we have had preliminary things in OCaml and Scala. But we’ve had a public version for years now, written in Rust.

                        This comment makes me think that there are actually three different classes of opinions:

                        • those who think the development is “too opaque”, even though they never contributed anything when it was more open, and don’t know anything about the reasons why the new version is not public yet.
                        • those who think there have been too many released versions (or “rewrites”), meaning that the development is actually too open for them.
                        • I can also guess that some don’t really mind not having a half-broken prototype version to complain about, as long as they get a working, open source version in the end.
                  3. 1

                    Something I’ve read before about Pijul is that you can’t self-host without having unlimited access to the Nest’s source code. This is actually false: you can totally set up another machine with Pijul installed and push/pull your patches between the two machines. No need for a web interface for that; SSH is enough.

                    1. 1

                      Sure, and fair enough. I like having a web interface for browsing my shelved projects and examine history (and, rarely, showing to other people), though, and my concerns about openness remain.

                      1. 3

                        As the author of the Nest, I have written a number of times that I believe the Nest should and will be open in the long term, but Pijul is a large open source project to manage already, including its many dependencies (Rust crates Sanakirja and Thrussh, for example). At the moment, opening the Nest would simply not be feasible in terms of work time and extra maintenance. In the future, when things settle down a little on the Pijul front, this situation is likely to change.

              1. 1

                It sounds like a good idea, but I’ve never run into the problem described in the “Why darcs?” section. I don’t see why two people would work on the same feature independently.

                1. 6

                  A couple reasons come quickly to mind:

                  • someone adds a feature at the same time someone else adds cross-cutting functionality (e.g. i18n or l10n)
                  • someone fixes a bug while someone else fixes a different bug that touches the same code

                  This is more important for development that doesn’t have a central coordinating authority, so you can’t say “I’m going to work on X” and ensure nobody else touches X. Lots of open-source projects operate like that.
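                  A toy illustration in Python (invented example) of why such independent edits are tractable: two edits that touch different parts of a file commute, which is the property patch-based systems like darcs and pijul formalize.

```python
# Two independent edits to the same file: one adds an i18n import at
# the top, the other fixes a bug lower down. Applying them in either
# order gives the same result; patch theory calls this commutation.

original = ["def greet():", "    print('helo')"]

def add_import(lines):
    return ["import gettext"] + lines

def fix_typo(lines):
    return [l.replace("helo", "hello") for l in lines]

merged_ab = fix_typo(add_import(original))
merged_ba = add_import(fix_typo(original))
assert merged_ab == merged_ba  # order doesn't matter for independent edits
print(merged_ab)
```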

                1. 6

                  This is by far my favorite version control system. Unlike git, I find branching in darcs intuitive and easy to understand.

                  1. 6

                    Darcs doesn’t support branching in the same sense as Git, so I’m not sure that’s a fair comparison.

                    The “every repo is a branch” and “Master and Working Repositories” workflows also work in Git: clone the upstream repo, then clone local “branch” repos from it. When you’re done, push to the local “master”, and then eventually back to upstream. You’d miss out on a lot of Git functionality, but it should work.

                    Personally, I think the Darcs way of doing it is really annoying. I like being able to keep around experimental branches, WIP features and bug fixes, etc. without cluttering my file system. And I like being able to push them all somewhere and clone them in different places with a single “git clone …”. I know under the hood Git’s keeping all that data around, but it’s hidden away in .git where I don’t have to think about it.

                    1. 5

                      I’d love to hear more about how darcs’ branching is different than git’s. Care to give us some more insight?

                      1. 2

                        Indeed. Unlike git, I never find myself needing to rm -rf . my darcs repo.

                        1. 9

                          I’ve never needed to do that to a git repo either

                          1. 1

                            Neither have I. Hopefully I can at least encourage people to learn/use git reflog. It’s rare that you need it, but it comes in super handy when you’ve made a mistake.

                        2. 1

                          Do you by chance know how darcs supports big non-text files?

                          1. 3

                            It’s been a while, but as far as I remember, Darcs generally works in a way a bit different from git: it downloads all patch metadata, figures out which patches it needs to reconstruct the current work tree and then downloads the data. So binaries are at least not part of the bundle you usually work with.

                            AFAIK, it works quite well with them. I know it’s been a promoted strong-point of pijul.
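                            Roughly, the selection step could look like this (toy Python model; the data layout is invented and is not darcs’ real on-disk format):

```python
# Sketch of the lazy-fetch model described above: inspect patch
# metadata first, decide which patches the checkout actually needs,
# and only then fetch their contents. Data layout is invented.

metadata = [
    {"id": "p1", "touches": ["README"]},
    {"id": "p2", "touches": ["big_binary.dat"]},
    {"id": "p3", "touches": ["README"]},
]

def patches_needed(wanted_files):
    """Select only the patches that affect the files we want."""
    return [p["id"] for p in metadata if set(p["touches"]) & set(wanted_files)]

# A checkout of just README never fetches the patch carrying the binary:
print(patches_needed(["README"]))  # ['p1', 'p3']
```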

                        1. 11

                          This is pretty low-content – not even a summary README (and the NEWS hasn’t been updated in a couple years). Even the doc folder is pretty spare.

                          For those not familiar, DOSEmu has been around since the 90s. Where DOSBox provided a full virtual environment (emulating graphics and CPU), DOSEmu was more like a bunch of wrapper libraries for DOS programs (so it only worked on x86 machines). It was also stagnant for some time (I had assumed it was dead).

                          I would find a post more useful if it addressed some questions like:

                          • Is DOSEmu2 still bound to x86 machines?
                          • Can it run on 64-bit architectures?
                          • What are its benefits/drawbacks compared to: DOSBox, FreeDOS, a full VM like QEMU?
                          • Can it containerize?
                          • Are there plans to port it to non-linux platforms?

                          Alternately:

                          • A dive into the overall architecture, or a deep dive into a specific difficult area of the software
                          • A tutorial on how to set up, use, and troubleshoot problems
                          • A case study on using it to run a tricky piece of old software
                          • An analysis of the differences between it and alternatives, including examples where each runs notably better or errors out. Possibly with benchmarks.

                          A post of a context-free repo with little documentation is disappointing.

                          1. 4

                            I too was very disappointed in the “content”. I actually tried building DOSEmu last night and saw this, hoping that some of the issues I’ve already had to patch have been addressed. Looking at the repo, I have no idea what this offers in comparison to “version 1”.

                          1. 3

                            You seem to be focused on web framework performance, but mention other components (auth middleware, historical data store) that are also going to impact speed of page delivery. It won’t matter how fast your web framework is if each request hits the datastore, for example – the datastore is likely going to be your bottleneck.

                            It’s also unclear whether this app is intended to include the machine learning logic to make trades (in which case you probably don’t want a web app), just offer reporting and analysis (in which case, pretty much any language will do because you’ll spend all the time in your database/warehouse/whatnot), or something else entirely.

                            Can you be more clear about the requirements you need to fill?

                            1. 3

                              I have a hard time believing that this could run in 64k while still leaving enough space to load data into memory (let alone run another program). Any info on memory footprint?

                              1. 3

                                Just a side note: The IBM PC was originally released with 16KB and 64KB options. The first version of DOS used 12KB RAM, leaving 4KB for your application if you only had 16KB.

                                1. 3

                                  Sure. Apple DOS was smaller still (8K, IIRC?). But it didn’t support directories. I think PC-DOS did, but not subdirectories (again, IIRC).

                                  FAT32 didn’t come out until the mid-’90s, though, and it was safe to assume 1MB of RAM or more on a machine. It (rather, MS-DOS and Win95) required a 32-bit processor (with a far more robust instruction set and pool of registers than the 6502) when released. It’s possible that the filesystem portion fits under 64K, but I’d be surprised.

                                  1. 4

                                    I haven’t looked very closely at this implementation, but there are microcontroller FAT32 implementations written in portable C that run in hundreds of bytes of working RAM and single digit kilobytes of code size.

                                    (For example, note the benchmark linked for the “Petit” version is accessing a 2GB SD card from an ATtiny85, which is an 8-bit micro with 512 bytes of RAM and 8KB of program space.)

                                    Of course there are restrictions: how many files you can have open at once, performance, whether you can have any kind of buffering, etc. I note that the 6502 implementation doesn’t support seek, for example (neither does the “petit” one I linked, but the bigger one does.)
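                                    To make the tiny-RAM point concrete, here is a toy Python model of a FAT cluster-chain walk (the table is invented; real implementations read each entry from disk one sector at a time): the only working state is the current cluster number plus one sector-sized buffer.

```python
# A FAT file is a linked list of clusters, so reading it needs almost
# no working RAM: just the current cluster number. The table below is
# a made-up in-memory FAT; real code reads these entries from disk.

END_OF_CHAIN = 0x0FFFFFFF  # FAT32 end-of-chain marker

fat = {2: 5, 5: 6, 6: END_OF_CHAIN}  # cluster -> next cluster

def read_chain(first_cluster):
    """Walk a file's cluster chain, holding only one cluster number."""
    chain = []
    cluster = first_cluster
    while cluster != END_OF_CHAIN:
        chain.append(cluster)   # in real code: read this cluster's sectors
        cluster = fat[cluster]  # one table lookup per step
    return chain

print(read_chain(2))  # [2, 5, 6]
```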

                                2. 2

                                  I’d be interested to see the memory footprint too. It does apparently run on the Commander X16, which has a VIC-20-inspired memory map and uses banking logic to allow for up to 2MB of high RAM for data.

                                1. 4

                                  This is interesting, but seems to miss one of the big historical reasons for incremental backups: size. That’s less of an issue now that storage is so inexpensive, of course, but it’s still something data infrastructure folks need to think about. If you’re doing daily full backups, you can easily end up using several times the storage over a weekly full/daily incremental scheme.

                                  Optimizing for recovery speed is not a bad thing, but the cost of that optimization ought to be considered in any production scenario.

                                  Also, minor nit: PostgreSQL full backups operate differently than described. Basically: 1- tell the DB you’re taking a backup, 2- copy the files, 3- tell the DB you’re done. Informing the DB lets it ensure the filesystem stays stable until the backup is completed.
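                                  For a sense of scale on the size argument, the storage arithmetic with made-up numbers (a 500GB full backup, roughly 5GB of daily churn captured by an incremental, 7 days retained):

```python
# Back-of-the-envelope comparison of daily fulls vs. weekly full plus
# daily incrementals. All numbers are invented for illustration.

FULL_GB = 500        # size of one full backup
INCREMENTAL_GB = 5   # size of one daily incremental
DAYS = 7             # retention window

daily_fulls = FULL_GB * DAYS                                   # 3500 GB
weekly_full_daily_inc = FULL_GB + INCREMENTAL_GB * (DAYS - 1)  # 530 GB

print(daily_fulls, weekly_full_daily_inc)  # 3500 530
```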

                                  1. 2

                                    This is a very good point.

                                    I don’t mention it in the post but the key driver for me looking at it was about joining new members to a cluster. When they first join, you can generally choose to give them a backup to catch up from, or let them do a full copy from the live cluster.

                                  1. 2

                                    Adding features does not improve a language.

                                    1. 6

                                      And “having a small feature set” does not improve a language inherently (perhaps it makes implementors’ lives easier).

                                      There are things that are useful. Things that are less useful. And you should judge things based on those costs instead of establishing strong red lines. Especially when lots of these features are responses to people writing Python code in the wild requesting these kinds of changes to improve real experiences.

                                      1. 2

                                        And you should judge things based on those costs instead of establishing strong red lines.

                                        99.9% of languages cross all red lines and don’t give a damn about doing any cost analysis before doing things¹, so I think it would be nice to have at least one language that upholds some quality standards.

                                        Especially when lots of these features are responses to people writing Python code in the wild requesting these kinds of changes to improve real experiences.

                                        If you do language-design-by-popularity-contest, don’t be surprised if the language looks like it. :-)

                                        ¹ Just have a look at the replies (including yours) having an aneurysm simply for mentioning that adding features doesn’t improve a language. If a feature improves a language, then it’s the job of the people who want to add it to prove this², not for others to argue against.

                                        ² Oh and also: People who want to add a feature should also be responsible to remove an existing feature first.

                                        1. 4

                                          99.9% of languages cross all red lines and don’t give a damn about doing any cost analysis before doing things¹, so I think it would be nice to have at least one language that upholds some quality standards.

                                          See the PEPs.

                                          Just have a look at the replies (including yours) having an aneurysm simply for mentioning that adding features doesn’t improve a language. If a feature improves a language, then it’s the job of the people who want to add it to prove this², not for others to argue against.

                                          Again, see the PEPs for the feature inclusion. Also, there’s no need to talk down on others because they have a different opinion than you.

                                          Oh and also: People who want to add a feature should also be responsible to remove an existing feature first.

                                          Why do you have to remove another feature? Do you not have enough space? If so, a larger storage device is probably a better solution than arbitrarily restricting feature addition because a “feature” has to be removed first. Also, what is a “feature”? How do you define what is considered a “feature”? A method? An object? A function? A property of an object? A modification to the behaviour of an object?

                                          1. -4

                                            Why do you have to remove another feature? Do you not have enough space? If so, a larger storage device is probably a better solution than arbitrarily restricting feature addition because a “feature” has to be removed first. Also, what is a “feature”? How do you define what is considered a “feature”? A method? An object? A function? A property of an object? A modification to the behaviour of an object?

                                            These strawmen are exactly the reason why I didn’t bother with nuance in “adding features does not improve a language” anymore.

                                            People who seem to lack even the most basic understanding about these things, tend to have big opinions about which features the language needs next.

                                            1. 7

                                              I have occasionally jokingly suggested that when someone breaks out the Saint-Exupery quote about perfection, we should further perfect that person by taking something away – perhaps their keyboard! Which is a fun way to point out that such assertions absolutely depend on context and nuance.

                                              Your assertions not only lacked nuance, they verged on straw-man arguments: for example, as another reply pointed out, the Python team do require discussion and debate including pro/con analysis prior to adding new language features, as evidenced by the PEP process and by the lengthy email discussions around many PEPs. So when you asserted your distaste for people who don’t do that, the only possible conclusions to draw are that it’s a red herring because you know Python does do this, or that it’s a misrepresentation because Python does it but you’re not acknowledging it.

                                              On a deeper level, there are at least two common views about features in languages/libraries/products/etc., and those views appear to be fundamentally incommensurable. But neither view is objectively and universally correct, yet many people argue as if their view is, which is not useful.

                                              1. 1

                                                require discussion and debate including pro/con analysis prior to adding new language features

                                                Even by your own wording, it’s not even remotely an equal fight.

                                                One side is “pro”, the other side “against” – which is far removed from a process that says “we want to improve the language, let’s examine whether this can be done by removing something, keeping things the same, or adding something”.

                                                Let’s not even pretend that the pro/con analysis is unbiased or factual, as long as your process treats people who don’t want to add things/remove things as “the bad guys”.

                                                at least two common views

                                                That explains the complete meltdown of one side as seen in this thread, I guess?

                                              2. 3

                                                So you’re deliberately trolling.

                                                1. -4

                                                  No, but these hysterical reactions are exactly what I had in mind when I talked about replies with little value earlier.

                                            2. 2

                                              Here’s a comment where I argue against adding a new feature to Python.

                                              I believe in thinking about costs and benefits when adding language features. I will admit I like shiny stuff, but I also want features that fit in with the rest of the language.

                                              I also write Python on a daily basis, and I know of places where the walrus operator would have made things cleaner, without getting in the way later. I don’t think it is that important (a do-while loop or just a case statement would be much more helpful), and I would rather not have the walrus operator if it meant we could have kept Guido and not started a big acrimonious fight about it.

                                              I just fundamentally dislike the concept of “too many features are bad”. Good features are good, that’s why they’re good!

                                              Now, lots of languages add features that interact badly with each other…. that’s where you’re really getting into actual messes that affect users.
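                                              For what it’s worth, the read-in-a-loop case is the one usually cited for the walrus operator; pre-3.8 code had to duplicate the read call or use a while-True/break dance:

```python
# The walrus operator (PEP 572, Python 3.8) assigns and tests in one
# expression, which tightens the classic chunked-read loop.

import io

buf = io.BytesIO(b"abcdefghij")  # stand-in for a real file

chunks = []
while (chunk := buf.read(4)):  # assign and test in one go
    chunks.append(chunk)

print(chunks)  # [b'abcd', b'efgh', b'ij']
```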

                                              1. 1

                                                My first comment was a direct dismissal of the approach to language design you describe.

                                                There is no misunderstanding or need for further explanation here.

                                          2. 5

                                            I might accept “adding features is not the same as improving a language”.

                                            But features which allow for better code reuse (e.g. function decorators), improve expressivity (e.g. list comprehensions), or provide useful control structures (e.g. iterators and generators) can absolutely improve a language. All of my example features were added to Python after it was released, and they all improve that language.
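                                            Tiny instances of all three (each added to Python well after its initial release):

```python
# Decorators (2.4), list comprehensions (2.0), and generators (2.2+):
# small examples of the three post-release features named above.

import functools

# Function decorator: reuse cross-cutting behavior.
def logged(fn):
    @functools.wraps(fn)
    def wrapper(*args):
        print(f"calling {fn.__name__}{args}")
        return fn(*args)
    return wrapper

@logged
def add(a, b):
    return a + b

# List comprehension: an expressive one-line transform.
squares = [n * n for n in range(5)]

# Generator: a lazy control structure.
def countdown(n):
    while n > 0:
        yield n
        n -= 1

print(add(2, 3), squares, list(countdown(3)))
```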

                                            1. 1

                                              I think we’ll have to agree to disagree on that.

                                              All of the things you have mentioned have costs associated with them, and only very rarely do the improvements rise above that.

                                            2. 4

                                              Lack of features leads to books of programming patterns — they’re called so because they require each reader of the code to actively pattern match blocks of code to identify them as what would just be features in another language.

                                              1. 1

                                                I think we can do way better, we shouldn’t let the misdeeds of 70ies’ languages cloud our judgement.

                                              2. 2

                                                What’s the background on this statement? Are you suggesting that if your language is composed of maximally orthogonal and flexible features then additional features are unnecessary?

                                                1. 1

                                                  Yes.

                                                  Additionally, adding more features after-the-fact will always result in lower quality than if the feature was designed into the language from the start.

                                                  I think C#’s properties are a good example of this, as are Java generics, …

                                              1. 15

                                                I think Git is an excellent candidate for a Rust rewrite. There’s nothing about it that really needs to be in C or benefits from being in C; it’s just one of the many userspace command-line tools that were written in C out of tradition. Rust is a better language than C that results in more maintainable and less bug-prone code. There’s no reason not to use Rust for this kind of program (or at least to avoid C).

                                                1. 17

                                                  There’s no reason not to use Rust for this kind of program (or at least avoid C).

                                                  Portability? Also, bootstrapping Rust is kind of hard :| . Both issues are important for this kind of foundational software.

                                                  But I agree that there are many reasons to use rust for this kind of program.

                                                  1. 1

                                                    How portable is Git? How many C compilers can compile it and for how many systems?

                                                    1. 1

                                                      A quick search shows Git is available on AIX and SPARC systems; you can probably find more with better searches.

                                                  2. 15

                                                    Git was created in 2005, Rust in 2010…

                                                    That said, it’s a good idea to have multiple implementations of the same functionality. It wouldn’t surprise if GitHub was exploring alternatives.

                                                    I’d like to expand on my point in the first sentence - what would have been a viable language for Linus to use to implement Git in 2005 (other than C/C++?). Git was designed as a replacement for BitKeeper to handle the Linux source tree. Using C in that case was an easy choice.

                                                    1. 4

                                                      what would have been a viable language for Linus to use to implement Git in 2005

                                                      Python was used to create Mercurial in 2005.

                                                      1. 1

                                                        And Haskell was used to create darcs in 2003.

                                                    2. 14

                                                      Rust is a better language than C

                                                      That’s a subjective opinion. Rust has some mechanisms that protect against certain classes of programming errors, but it has many many other problems.

                                                      1. 1

                                                        Even without those mechanisms, it would still be a better language.

                                                        1. 1

                                                          By what metric?

                                                          1. 1

                                                            By every one of them. Being better than C is hardly rocket science.

                                                      2. 4

                                                        Rust is a better language than C

                                                        Isn’t git written in Perl? </joke> … sorta

                                                        1. 3

                                                          I would suggest it was written in C as per the opinions of its author rather than out of tradition.

                                                          1. 6

                                                            Rust did not exist at the time.

                                                            1. 1

                                                              Sure, but C++ did.

                                                              1. 1

                                                                And we already know how Linus feels about C++:

                                                                YOU are full of bullshit.

                                                                C++ is a horrible language. It’s made more horrible by the fact that a lot of substandard programmers use it, to the point where it’s much much easier to generate total and utter crap with it. Quite frankly, even if the choice of C were to do nothing but keep the C++ programmers out, that in itself would be a huge reason to use C.

                                                                In other words: the choice of C is the only sane choice. I know Miles Bader jokingly said “to piss you off”, but it’s actually true. I’ve come to the conclusion that any programmer that would prefer the project to be in C++ over C is likely a programmer that I really would prefer to piss off, so that he doesn’t come and screw up any project I’m involved with.

                                                                C++ leads to really really bad design choices. You invariably start using the “nice” library features of the language like STL and Boost and other total and utter crap, that may “help” you program, but causes:

                                                                • infinite amounts of pain when they don’t work (and anybody who tells me that STL and especially Boost are stable and portable is just so full of BS that it’s not even funny)

                                                                • inefficient abstracted programming models where two years down the road you notice that some abstraction wasn’t very efficient, but now all your code depends on all the nice object models around it, and you cannot fix it without rewriting your app.

                                                                In other words, the only way to do good, efficient, and system-level and portable C++ ends up to limit yourself to all the things that are basically available in C. And limiting your project to C means that people don’t screw that up, and also means that you get a lot of programmers that do actually understand low-level issues and don’t screw things up with any idiotic “object model” crap.

                                                                So I’m sorry, but for something like git, where efficiency was a primary objective, the “advantages” of C++ is just a huge mistake. The fact that we also piss off people who cannot see that is just a big additional advantage.

                                                                If you want a VCS that is written in C++, go play with Monotone. Really. They use a “real database”. They use “nice object-oriented libraries”. They use “nice C++ abstractions”. And quite frankly, as a result of all these design decisions that sound so appealing to some CS people, the end result is a horrible and unmaintainable mess.

                                                                But I’m sure you’d like it more than git.

                                                                and

                                                                It sucks. Trust me - writing kernel code in C++ is a BLOODY STUPID IDEA.

                                                                The fact is, C++ compilers are not trustworthy. They were even worse in 1992, but some fundamental facts haven’t changed:

                                                                • the whole C++ exception handling thing is fundamentally broken. It’s especially broken for kernels.
                                                                • any compiler or language that likes to hide things like memory allocations behind your back just isn’t a good choice for a kernel.
                                                                • you can write object-oriented code (useful for filesystems etc) in C, without the crap that is C++.

                                                                In general, I’d say that anybody who designs his kernel modules for C++ is either (a) looking for problems (b) a C++ bigot that can’t see what he is writing is really just C anyway (c) was given an assignment in CS class to do so.

                                                                Feel free to make up (d).

                                                                    Linus
                                                                
                                                        1. 3

                                                          I loved working in Flash. At the time, it was the best way to get your games in front of people with zero friction, in a way that seemed like they’d live forever – SWFs from the 90s still work flawlessly in the latest Flash player, decades later. Seeing the world try to transition over to HTML5 when it clearly wasn’t good enough yet was agonizing to watch.

                                                          I remember the early flash-to-HTML5 projects – these really sucked in terms of performance. Slideshows at best, if your flash animation or game even worked. I was really confused – what were we supposed to do?

                                                          Could flash have been effectively patched to nullify most security holes? Or VM’d? Ruffle (linked by author) seems curiously good so far, we will have to see what history brings.


                                                          On the topic of history: there is enough already to be making some judgements. The hatred of flash was blinding and we’re not necessarily in a better position because of it. Toppling regimes often leads to new ones.

                                                          Flash was accessible (it ran on almost anything) and was easy to get into. Officially, content could only be made with Adobe products, but in practice I'm not sure many students paid for them, and it was software that didn't need servers.

                                                          There was a variety of publishers for flash content. The only gatekeepers were the many website hosts. Your flash anim, game or program could find a decent home pretty easily.

                                                          Flash was considered insecure in terms of protecting you against the author’s wishes, as anyone could make flash content that broke out of your browser and did stuff on your computer. This was considered bad.

                                                          Now we have lots of invasive native software (eg Chrome), phone apps and centralised video hosting sites.

                                                          Native software and phone apps are not accessible. They require extra effort to support every platform, effort which many authors do not or cannot put in. In fact it's easy to argue that phone APIs and ecosystems are designed to make it hard for your software to be cross-platform (amongst the other ways they game your time and effort toward their platforms).

                                                          Native software still has a good variety of publishers (not centralised) but phone apps and video hosts do not. There are now two major gatekeepers for phone apps and pretty much only one gatekeeper for video hosting.

                                                          Native software and phone apps are insecure in terms of protecting you from their author's wishes. Chrome is spyware, plain and simple, the sort that old AV software would have flagged and removed. No AV company could do that to a big software company's products today (perhaps overshadowed by the pivot of many AV vendors to being just as abusive themselves). Phone app and system API vulnerabilities are found and exploited just as flash APIs were.

                                                          We’ve burned down the old stadiums and built new ones, but now with the same or worse security issues, less accessibility and central corporate control instead of community control. Killing flash for its security problems looks like an own goal. Alas, would flash have evolved into the same beasts we have now anyway?

                                                          1. 4

                                                            FWIW, Flash lives on in a more secure way in Flashpoint. It’s an archive of flash games (and shockwave and a few others), with the necessary engines to play them on your local machine. It even includes website fakery to deal with SWFs that do phone-home checks, etc.

                                                            It’s not a complete list of all flash ever on the web, but if you notice something missing, you can nominate it for preservation.

                                                          1. 14

                                                            Given the same task, two developers often come up with solutions that differ in an order of magnitude in size, complexity, dependency count and resource consumption.

                                                            I have witnessed that over and over.

                                                            1. 4

                                                              I code for approximately an hour a day after my kids have gone to bed. (I’m a manager these days, so my job doesn’t involve day-to-day coding.) During that time I can be shockingly productive, cranking out a solid chunk of performant, safe, well-factored code in an amount of time that would have been an extended coffee break back in my “code-slinging” days.

                                                              So, have I discovered the Nirvana of 10x-ness? Could I jump back in to coding and be massively more effective than I was in my 20s or the portion of my 30s where I still considered myself a professional software developer?

                                                              No, and no. I’ve just found a comfortable niche where the problem domain, freedom to choose the tools I want, and lack of 50 competing demands on my time let me go head-down and code and consider only that time when I measure my production.

                                                              In my experience there’s a very strong correlation between “10x coders” and people who are given the space to work, control over their tools, and trust to architect systems as they choose.

                                                              Whether they earn that because of demonstrated, sustained performance or learn to do it after thrashing around in the deep end for a few years is kind of a chicken-and-egg problem, but studies and surveys (sorry to not have links handy, I’m typing this on my phone) show consistently that developers are happiest and feel most effective when offered those freedoms.

                                                              1. 3

                                                                In what kind of setting do two developers have to produce the exact same thing at the same time, without talking to each other about it? And is it always the case that developer A produces something that's objectively 10x better than developer B, or are the roles sometimes reversed? For instance, does it depend on experience in a certain niche?

                                                                1. 4

                                                                  That happens all the time when discussing the scope of new projects.

                                                                  A recent example I have in mind is, the employer wants a place to collect and analyze data that is being generated by their product.

                                                                  • Developer A’s proposal was to build a full data warehouse with SMT, CMT, ETL, Timeseries DB, …
                                                                  • Developer B’s proposal was to ingest the data into an S3 bucket in a regular format, and then plug whatever DB we need when we need it.

                                                                  Without judging which proposal is more appropriate (that varies with context), it's quite obvious to me that the first one is at least 10x bigger.
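
                                                                  Developer B's option fits in a few lines. Here's a minimal sketch, assuming JSON events carrying `id` and `timestamp` fields; the bucket name, key scheme, and field names are all hypothetical, not anything from the actual project:

                                                                  ```python
                                                                  import datetime
                                                                  import json

                                                                  def event_key(event_id: str, ts: datetime.datetime) -> str:
                                                                      # Date-partitioned key layout, a common convention that lets
                                                                      # a query engine plugged in later prune by key prefix.
                                                                      return (f"events/year={ts.year}/month={ts.month:02d}/"
                                                                              f"day={ts.day:02d}/{event_id}.json")

                                                                  def ingest(s3_client, bucket: str, event: dict) -> str:
                                                                      # Write one event to S3 in a regular format and return its key.
                                                                      ts = datetime.datetime.fromisoformat(event["timestamp"])
                                                                      key = event_key(event["id"], ts)
                                                                      s3_client.put_object(Bucket=bucket, Key=key,
                                                                                           Body=json.dumps(event).encode("utf-8"))
                                                                      return key
                                                                  ```

                                                                  With the data laid out like that, whatever DB is eventually chosen can be pointed at the bucket later without rewriting anything, which is the whole point of proposal B.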

                                                                  1. 2

                                                                    One example are test-work situations. Multiple developers who are considered for longer term projects do the same small project.

                                                                    I did not say objectively better. But 10x difference in size, complexity, dependency count and resource consumption. Personally, I think this leads to even more than 10x slower progress when the code evolves. But opinions on this differ widely.

                                                                    1. 2

                                                                      But 10x difference in size, complexity, dependency count and resource consumption.

                                                                      I would probably call that objectively better. But maybe that’s my own biased thinking.

                                                                      For instance, did the developers know that optimizing for these things was a goal? Did the other developers produce something in shorter time? Were they equally experienced?

                                                                      1. 2

                                                                        did the developers know that optimizing for these things was a goal

                                                                        No, because development is a tradeoff between so many goals that stating something like “size is important” would lead to absurd constructs. In my experience, every developer has their style and you cannot change it much. For example a dev who prefers to think in “business objects” over “sql queries” will struggle if you force them to change that type of thinking. Even if their approach creates code that needs ten million times more resources.

                                                                        Did the other developers produce something in shorter time?

                                                                        I would say usually coding time is shorter for the smaller, leaner, less resource hungry solutions with less dependencies.

                                                                        Were they equally experienced?

                                                                        In my experience there is a strong correlation between experience and the type of solutions a developer comes up with. And the time they need to implement them. More experienced developers tend to come up with leaner solutions and need less time to implement them.

                                                                        1. 2

                                                                          In my experience there is a strong correlation between experience and the type of solutions a developer comes up with. And the time they need to implement them.

                                                                          This would seem to fit well with other types of jobs. I think the myth is that there are people who are “magically” 10x better somehow, which makes very little sense to me.

                                                                          1. 5

                                                                            It’s not that they’re magically better. There are practices you can follow that get you those results, but people just don’t seem interested in following them:

                                                                            • Understand the business reasoning behind the project…look for ways of exploiting that knowledge to minimize the amount of work done.
                                                                            • Less code is always easier to debug…or if you can’t debug, throw out and replace.
                                                                            • Clearly defined project goals and constraints and milestones make development fast and visible.
                                                                            • Throwaway prototypes are faster than big projects and teach you more–and may be good enough for the business.
                                                                            • Understanding, deeply, the tools you use and the basic principles of computer science and engineering help you prune solution spaces a lot better.
                                                                            • Identify whether or not the framework/third-party app is actually delivering value over just implementing a smaller bespoke solution yourself. Color picker? Almost certainly. Database extension? Probably. Container orchestration for a static blog? Probably not.
                                                                            • Pseudocode and stepwise refinement of the solution to a problem. If you can’t explain something with bullet points, you don’t understand it, and if you don’t understand it, you’re slow.

                                                                            There are more I think, but that’s a start.

                                                                            Somewhat grumpily: there may not be 10x software engineers, but there sure as hell are 10x embedded AWS salesfolk.

                                                                            1. 2

                                                                              I think your third point is the most important one. Perhaps a 10x developer is gifted at recognizing the business priorities even when the Product folks don’t see how much size/complexity matters?

                                                                          2. 2

                                                                            I would like to think that my style has some flexibility to it, though I’ve definitely landed on the SQL queries side of that divide in the past.

                                                                            Of course, everyone struggles with a new way of thinking at first. Which is why, for me at least, I seek out new ways of thinking, so I can get the struggle done with.

                                                                            1. 1

                                                                              I would say usually coding time is shorter for the smaller, leaner, less resource hungry solutions with less dependencies

                                                                              I’m not sure I agree, here. Maybe if you take “coding time” (as opposed to “dev time”) very strictly, but even then…

                                                                              My anecdata includes a bunch of hackathon projects that, when polished up for production, amounted to far less code. Reducing duplication, refining design for smaller code, replacing bespoke with off-the-shelf libraries, etc, all benefit from taking time.

                                                                    1. 1

                                                                      I like Poly/ML (and SML in general), but a bugfix release with no detailed notes is pretty low-content.

                                                                      1. 7

                                                                        By installing libre office? Oh! Oh! Oh. Sorry, I had to make this bad joke

                                                                        1. 0

                                                          Wasn’t OpenOffice better than LibreOffice, or is my memory playing tricks on me? Or maybe I was just more content with those kinds of GUIs back when OpenOffice used to be popular.

                                                                          1. 1

                                                            From what I remember OpenOffice was great, but over time feature development and stability started to get worse. LibreOffice emerged as the response to OpenOffice.

                                                                            1. 7

                                                                              LibreOffice was forked from OpenOffice around when Oracle acquired Sun. Shortly after, Oracle dumped OpenOffice onto the Apache Foundation, where it’s still being updated.

                                                                              LibreOffice is considered by most to be better maintained and more actively developed, though.

                                                                              1. 1

                                                                                Thanks for the context!

                                                                        1. 2

                                                                          The title is … easy to misread as suggesting that somebody (presumably MS) has deprecated VS Code.

                                                                          1. 3

                                                                            Haha, you are definitely right, I updated the title.

                                                                          1. 3

                                                                            I once knew a physics professor who said something like “I don’t know what the programming language of the future will look like, whether it will have objects or anything else fancy like that. I do know that it will be called FORTRAN.”

                                                                            1. 3

                                                                              For some context - apparently:

                                                                              • “Delores” is “A Thimbleweed Park[-related] mini-adventure [game]”,
                                                                              • this seems to be an official release by Ron Gilbert and his website,
                                                                              • this is a “source available” release (i.e. NOT “open-source”), notably “for personal and hobby use only” and “you can’t release modified games”. (Though I wonder if people, incl. kids, will be sharing some games built on it anyway in hobby circles/forums?)
                                                                              1. 1

                                                                                It also appears that the source for the game engine isn’t available? Or maybe I missed something?

                                                                                1. 2

                                                                                  “I also uploaded the dev build of the Delores engine (including the complete debugger) so you can run all your edits and changes.”

                                                                                  Sounds like you’re right

                                                                                2. 1

                                                                                  It’s also just the source for the game itself, not the engine. It seems akin to Zork and the old Sierra games in that the games are written in a custom DSL and the engine is basically a VM that runs it.

                                                                                1. 1

                                                                                  I’m trying hard to both ignore and retain all the kubernetes expertise I crammed in this week. Ignore so I can recover, retain so I never have to re-learn it.

                                                                                  Let me know if you need help mounting NFS volumes in a Pod, though.
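
                                                                                   For anyone who does need that, a minimal sketch looks something like the following; the Pod name, image, server address, and export path are all placeholders:

                                                                                   ```yaml
                                                                                   apiVersion: v1
                                                                                   kind: Pod
                                                                                   metadata:
                                                                                     name: nfs-client        # hypothetical name
                                                                                   spec:
                                                                                     containers:
                                                                                       - name: app
                                                                                         image: busybox
                                                                                         command: ["sleep", "infinity"]
                                                                                         volumeMounts:
                                                                                           - name: nfs-vol
                                                                                             mountPath: /data
                                                                                     volumes:
                                                                                       - name: nfs-vol
                                                                                         nfs:                        # in-tree NFS volume plugin
                                                                                           server: nfs.example.com   # placeholder server
                                                                                           path: /exports/data       # placeholder export
                                                                                   ```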

                                                                                  1. 1

                                                                                    I’ve been to a bunch of conferences, many good, but only a few I consider really good.

                                                                                    In a really good conference, I find myself with the following pattern:

                                                                                    • Attend a not-too popular talk (I have crowd issues, so I self-extract from anything packed)
                                                                                    • Get exposed to something interesting in that talk
                                                                                    • Digest that something interesting into actual learning, by skipping talks and talking with people (“hallway track”)
                                                                                    • Repeat as much as is feasible (generally not more than once or twice per day)

                                                                                    In good conferences, I still attend things, and still talk with people. But the talking is more about the technical issues at my job (or hobby, depending) than about the presented talks. Being able to talk with knowledgeable, interested people who are outside the culture and assumptions of my workplace, is really helpful. And being able to provide that perspective for someone else is very satisfying.