1. 76

What technology will “come into its own” this year? Is 2021 the Year of the Linux Desktop? Will Rust become the most important language ever? Will Go supplant Python? Is Ruby ready for a renaissance?

What technology is going to be the one to know in 2021 and beyond, in your learned opinion?

(This question is intentionally open-ended, in the interest of driving discussion.)

    1. 78

      Backlash against Kubernetes and a call for simplicity in orchestration; consolidation of “cloud native” tooling infrastructure.

      1. 18

        I’m not sure if we’ve reached peak-k8s-hype yet. I’m still waiting for k8s-shell which runs every command in a shell pipe in its own k8s node (I mean, grep foo file | awk is such a boring way to do things!)

        1. 15

          You must not have used Google Cloudbuild yet. They do… pretty much exactly that, and it’s as horribly over-engineered and needlessly complicated as you can imagine :-D

        2. 4

          I haven’t worked with k8s yet, but to me all of this sounds like you’ll end up with the same problems legacy CORBA systems had: eventually you lose track of what happens on which machine and everything becomes overly complex and slow.

      2. 13

        I don’t know if it will happen this year or not, but I’ve been saying for many years that k8s is the new cross-language J2EE, and just like when tomcat and fat jars began to compete, we’ll see the options you’re discussing make a resurgence. Nomad is probably one that’s already got a good following.

      3. 7

        I understand where you’re coming from but I don’t think it’s likely. Every huge company I’ve worked with has idiosyncratic requirements that make simple deployment solutions impossible. I’m sure there will be some consolidation but the complexity of Kubernetes is actually needed at the top.

        1. 1

          We’ve been on k8s in some parts of our org for 2+ years, and we’re moving more stuff in that direction this year, primarily because of deployments and ease of operation (compared to alternatives).

          We don’t use half of k8s, but things like Nomad are only just now starting to fill that gap. I think we’re probably at least a year off from the backlash though.

      4. 4

        I won’t be surprised if the various FaaS offerings absorb much of the exodus. Most people just want self-healing and maybe auto-scaling with minimal yaml. Maybe more CDN edge compute backed by their KV services.
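        To make the “minimal configuration” appeal concrete, a FaaS deployment reduces to roughly one function; the sketch below uses simplified, made-up event/response shapes rather than any specific provider’s API:

```typescript
// Minimal sketch of a FaaS-style handler: the platform owns scaling,
// restarts, and routing; you supply one function. The HttpEvent and
// HttpResponse shapes here are illustrative stand-ins, not a real API.
interface HttpEvent { path: string; body?: string }
interface HttpResponse { statusCode: number; body: string }

async function handler(event: HttpEvent): Promise<HttpResponse> {
  return { statusCode: 200, body: `hello from ${event.path}` };
}

// Locally you'd just invoke it; in production the platform does this.
handler({ path: "/ping" }).then((r) => console.log(r.body));
```

        The trade is exactly the one described above: you give up control of when and where instances run in exchange for self-healing and auto-scaling with almost no configuration.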

        1. 1

          Which FaaS offerings are good? They are definitely less limited than they used to be, but do they deal with state well and can they warm up fast?

          I haven’t seen any “reviews” of that and I think it would be interesting. Well, there was one good experience from someone doing astronomy on AWS Lambda:

          https://news.ycombinator.com/item?id=20433315

          linked here: https://github.com/oilshell/oil/wiki/Distributed-Shell

          1. 2

            The big 3 cloud providers are all fine for a large variety of use cases. The biggest mistake the FaaS proponents have made is marketing them as “nanoservices” which gives people trauma feelings instead of “chuck your monolith or anything stateless on this and we’ll run it with low fuss”.

            “serverless” and “function as a service” are both terrible names for “a more flexible and less prescriptive app engine”, and losing control of the messaging has really kneecapped adoption up until now.

            Just like k8s, there are tons of things I would never run on it, but there are significant operational savings to be had for many use cases.

      5. 4

        I wish, but I am not hopeful. But I have been on that bandwagon for years now. Simple deploys make the devops folks love you.

      6. 1

        For those in the AWS world, I like what I’ve seen so far of Amazon ECS, though I wish that Fargate containers would start faster (i.e. 5 seconds or less).

    2. 50

      2021 will be the year of the ARM workstation.

      1. 1

        I wouldn’t say workstation just yet, unless you’re talking about laptops or Apple comes out with one.

        1. 4

          They already have. That’s what the Mac Mini is.

          1. 1

            I’m not sure that I’d agree that the mac mini counts as a workstation, but I see iMac Apple workstations by 2023, and the world will follow by 2025.

            1. 2

              I’m not sure that I’d agree that the mac mini counts as a workstation

              Why not?

            2. [Comment removed by author]

    3. 42

      Boring prediction: TypeScript. By the end of 2021 TypeScript will broadly be seen as an essential tool for front-end teams that want to move faster without breaking stuff.

      The alternatives which launched around the same time (Reason, Elm, Flow, etc…) have all fallen by the wayside, and at this point TS is the clear winner. The investment is there, the ecosystem is there, the engineers (increasingly) are there. The broad consensus isn’t quite there in the wider world of all Javascript engineers, but I think it’s coming.

      Eventually good engineers will leave teams that won’t switch to TypeScript, ultimately hobbling those companies. Their lunch will be eaten by the competitors who’re using TypeScript. But there’ll be money to be made dealing with legacy messes of JS / Coffeescript, Backbone, jQuery etc, for the people who’re willing to do it. It’ll be a long-lived niche.

      Knock-on effects will include decreased use of Python in an application / API server role (I know there’s MyPy, but I think TypeScript is ahead) except where it’s coupled to data-sciency stuff. I think something similar will be seen with Go. I don’t know how big these effects will be.


      Unrelated prediction: Mongo will make a comeback. I’ve really disliked working with Mongo, but I was completely wrong about the price of Bitcoin this year so I assume Mongo’s comeback is inevitable.

      1. 10

        Eventually good engineers will leave teams that won’t switch to TypeScript, ultimately hobbling those companies. Their lunch will be eaten by the competitors who’re using TypeScript. But there’ll be money to be made dealing with legacy messes of JS / Coffeescript, Backbone, jQuery etc, for the people who’re willing to do it. It’ll be a long-lived niche.

        This is quite the prediction. I think I can see it happening. Working with a large, untyped JS codebase is a nightmare and can eat through morale quickly.

        1. 10

          I work in a place with a lot of Node JS, but it’s not my day to day. I quickly went from enjoying javascript to hating it. Recently I have been enjoying doing some small stuff on my own again. I think I’ve decided that hell is other people’s javascript.

          1. 6

            I’ve decided that most new JS code I write actually should be TypeScript these days. The tooling around the language is too nice.

            1. 5

              As a hobby I write Ableton plugins in Max MSP. It has JS support, but only ES5, which is from 2009, and gross. Turns out the best modern transpiler to target that is TS! I was so happy when I found out.
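              For what it’s worth, targeting a 2009-era runtime like that is mostly a compiler switch; a minimal tsconfig.json looks something like this (a sketch, exact options depend on the project):

```jsonc
{
  "compilerOptions": {
    "target": "ES5",              // emit 2009-era JavaScript
    "lib": ["ES5"],               // only assume ES5 built-ins exist
    "strict": true,
    "downlevelIteration": true    // correct for-of/spread semantics on ES5
  }
}
```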

            2. 2

              I should probably give TypeScript a go. I was pretty annoyed with JS for a while so I didn’t want to spend my spare time learning TypeScript. Unfortunately, the bits of Javascript that I tend to touch are largely agreed to be the worst in the org and there isn’t a lot of energy to move them over to TypeScript. C’est la vie.

        2. 1

          yep. I’m in the midst of a large feature-add to an angular 1.5.5 site. I’m a bit envious of other teams working with new angular and Typescript.

          I think vosper is right - there’s going to be plenty of work available to those who are willing to maintain older frontend tech.

      2. 7

        Eventually good engineers will leave teams that won’t switch to TypeScript

        Any engineer who will leave purely because the company doesn’t switch to $my_favourite_tech_choice is, by definition, not a good engineer. We’re supposed to be professionals for crying out loud, not children insisting on our favourite flavour of ice cream.

        1. 14

          I’d argue switching companies in order to use better tech is what a professional engineer does. Strong type systems are equivalent to constraints in CAD for mechanical engineering or ECAD for electrical engineering. They are absolutely crucial for proper engineering.

          A mechanical engineer that wants to use something like Solidworks or Onshape at a company using TinkerCAD would not be looked down upon. Engineers need to use the right tools to actually engineer things.

          So yes, switching companies to use and practice with proper tooling is what a damn good engineer does.

          1. 4

            I interviewed a bunch of professional engineers a couple years back. Most of them were stuck on using Excel spreadsheets. One was yelled at by his boss for using python scripts.

            1. 1

              I mean, Excel is a battle-tested functional programming environment which has intermediate states easily visualized and debugged! It has its faults, but I’d imagine it was being used for something like BOM inventory management? In that case it is definitely the right tool compared to Python.

              In any case, yes there are many engineering jobs like that in other engineering fields, but there are also many software engineering jobs which deal exclusively with k8s yaml files, which I’d argue is similar but worse.

              1. 2

                In that specific interview it was for finite element analysis.

          2. 2

            They [Strong type systems] are absolutely crucial for proper engineering.

            Says you. The rigor and engineering approaches “crucial” to systems depends HEAVILY on the domain, but I’m not sure I can identify ANY domain that “requires” strong typing (which is how I interpret “crucial”). Plenty of critical software has been and is being built outside of strong type guarantees. I’m not prepared to dismiss them all as bad or improper engineering. There may be a weak correlation, or just plain orthogonal - hard to say, but your staked position seems to leave no room for nuance.

            I think your analogy muddles the conversation, as most software engineers are not familiar enough with those fields to be able to evaluate and understand the comparison beyond taking your statement as true (maybe it is, maybe it isn’t).

            Defining “a good engineer” and “proper engineering” seems heavily rooted in opinion and personal experience here. That’s not to say it’s fundamentally un-study-able or anything, but I’m not sure how to make any headway in a conversation like this as it stands.

            1. 2

              Says you. The rigor and engineering approaches “crucial” to systems depends HEAVILY on the domain, but I’m not sure I can identify ANY domain that “requires” strong typing (which is how I interpret “crucial”). Plenty of critical software has been and is being built outside of strong type guarantees. I’m not prepared to dismiss them all as bad or improper engineering. There may be a weak correlation, or just plain orthogonal - hard to say, but your staked position seems to leave no room for nuance.

              Sure, no domain “requires” strong typing. No domain “requires” a language above raw assembly either. We use higher level languages because it makes it significantly easier to write correct software, compared to assembly. A strong type system is a significant step above that, making it much easier to write correct software compared to languages without it.

              Having worked at companies using languages such as Python, Javascript and Ruby for their backends, and having worked at companies that have used C++, Java, and Rust for their backends (even given the faults and issues with Java and C++‘s type systems) the difference between the two types of companies is night and day. Drastic differences in quality, speed of development, and the types of bugs that occur. Strong type systems, especially the ones derived from ML languages, make a massive difference in software quality. And sure, few domains really “need” good software, but shouldn’t we strive for it anyway?

              I think your analogy muddles the conversation, as most software engineers are not familiar enough with those fields to be able to evaluate and understand the comparison beyond taking your statement as true (maybe it is, maybe it isn’t).

              I mean, if we can’t pull lessons from other engineering fields what is the point of calling it software engineering? I don’t really know how to respond to this point, because clearly we have different opinions on this topic, each derived from our past experiences. My point is no less strong because you or others are unfamiliar with a domain.

              I encourage you to try out various CAD tools; TinkerCAD and Onshape are both free to use and available online, so there isn’t even a need to download software. I think you will very quickly see the difference between the two: TinkerCAD you will master in a few minutes, while Onshape will likely be unapproachable without tutorials. But if you look at the examples, Onshape is used to produce production, intricately designed mechanical parts. And without the tools and constraints that Onshape provides, you simply can’t do proper mechanical engineering.

              And I really want to highlight that I don’t mean to say that TinkerCAD is bad, or worse in any way. It is incredible for what it is and for opening the door to the world of mechanical design to those who aren’t familiar with it. It is simply not an engineering tool, while Onshape is.

              My analogy really stems from how each tool is used. TinkerCAD you just build the thing you want, using the shapes and manipulation tools it provides. Onshape is different: everything has to be specified. You have to set the lengths of everything. You have to specify what the angles of things are. Certain things are really hard to do, especially organic shapes, because everything has to be parameterized. I hope you’ll agree this is nearly identical to programming languages with and without strong type systems!

              Defining “a good engineer” and “proper engineering” seems heavily rooted in opinion and personal experience here. That’s not to say it’s fundamentally un-study-able or anything, but I’m not sure how to make any headway in a conversation like this as it stands.

              Of course it is rooted in opinion, at least in the US. I believe some other countries have licensed engineering which makes it more of a distinction. Personally I think it is extremely unfortunate that engineering is not a protected term, because I genuinely really think it should be. The state of software engineering feels very similar to the situation of doctors before modern medicine, where the majority were hacks and frauds besmirching the industry as a whole.

              1. 3

                I mean, if we can’t pull lessons from other engineering fields what is the point of calling it software engineering? I don’t really know how to respond to this point, because clearly we have different opinions on this topic, each derived from our past experiences. My point is no less strong because you or others are unfamiliar with a domain.

                That’s the thing: we shouldn’t be pulling lessons until we learn what those lessons are. I had very strong opinions about what software engineering was supposed to look like, then I started interviewing other engineers. Most of what we think “engineering” looks like is really just an ultra-idealized version that doesn’t match the reality. In some ways they are better, true, but in some ways they are much worse.

                EDIT: I just checked and it looks like you used to be a professional electrical engineer? If so then my point about “most people talk about engineering don’t understand engineering” doesn’t apply here.

                Of course it is rooted in opinion, at least in the US. I believe some other countries have licensed engineering which makes it more of a distinction. Personally I think it is extremely unfortunate that engineering is not a protected term, because I genuinely really think it should be. The state of software engineering feels very similar to the situation of doctors before modern medicine, where the majority were hacks and frauds besmirching the industry as a whole.

                Several of the people I interviewed called out the US system as being much better than the European, a sort of “grass is always greener” thing. The only country I know of with ultra-strict requirements on who can be called an engineer is Canada. In pretty much all other countries, you can call yourself whatever you want, but only a licensed engineer can sign off on an engineering project. This means that most professional engineers don’t need to be licensed, and many aren’t.

                1. 1

                  Do you have a write-up of your interviews anywhere? I would be really interested in reading it! You are right I used to be an electrical engineer, and still tinker on the side, but went into software simply because the jobs in electrical engineering are not very interesting. I think there is a lot that software does better than other engineering fields (although I wonder if that is simply due to how new the field is, and that it will inevitably end up with the more bureaucratic rules eventually), but the extreme resistance to strong typing by so many software engineers feels extremely misguided in my opinion and not something you see in other engineering fields.

                  Canada was the primary place I was thinking of, but I’m not particularly familiar with the details of it all. I know the US has some licensing requirements for civil engineering projects (and most of those requirements are written in blood unfortunately). It is almost certainly a “grass is greener,” and I should certainly educate myself on some of the downsides of other countries systems. If you have any resources here I’d appreciate it.

                  1. 2

                    Working on it! I’m in the middle of draft 3, which will hopefully be the final draft. Aiming to have this done by end of February.

              2. 1

                if we can’t pull lessons from other engineering fields what is the point of calling it software engineering? […] My point is no less strong because you or others are unfamiliar with a domain.

                Fair. Your expanded explanation here helped clarify a lot, I appreciate that. One point of interest in your explanation was

                Certain things are really hard to do, especially organic shapes, because everything has to be parameterized

                Doesn’t this seem an admission of there existing domains where - according to analogy - strong type systems might be less suitable? Or would you consider this a breakdown in the analogy?

                As far as the meaning of “engineer” I take that as a very different conversation. I do think that’s murky territory and personally would be in favor of that being a more-regulated term.

                I’ve been doing software development myself for 20 years, and have worked in both strong typed and duck typed. I think they’re both good - there’s trade offs. Rather than wholesale dismissing either, I think discussing those trade offs is more interesting.

                Drastic differences in quality, speed of development, and the types of bugs that occur

                I agree with this statement. I don’t think it implies a clear winner for all development though. While a strong type system guarantees a certain kind of error is not possible, in my practice those kinds of errors are rarely a significant factor, and in fact the “looseness” is a feature that can be leveraged at times. I don’t think that’s always the right trade off to make either, but sometimes creating imperfect organic shapes is ok.

                I would also offer that there can be approaches that give us “both ways” - ruby now has the beginnings of an optional type system.

                I’m tapped out for this thread.

                1. 2

                  Fair. Your expanded explanation here helped clarify a lot, I appreciate that.

                  Yes, sorry about not expanding in my original comment. As you can probably tell, I’m extremely passionate about this idea, and my original comment was made in a bit of haste. I’m glad my expansion was helpful in clarifying my points!

                  Doesn’t this seem an admission of there existing domains where - according to analogy - strong type systems might be less suitable? Or would you consider this a breakdown in the analogy?

                  I completely agree there are domains where strong type systems are less suitable. Especially the more exploratory areas, such as data exploration and data science (although Julia is showing that types can be valuable there as well). I think my distinction is that there is a difference between domains where strong types are not as suitable, like prototyping, and production engineering systems where I think strong types are extremely important.

                  As far as the meaning of “engineer” I take that as a very different conversation. I do think that’s murky territory and personally would be in favor of that being a more-regulated term.

                  I’ve been doing software development myself for 20 years, and have worked in both strong typed and duck typed. I think they’re both good - there’s trade offs. Rather than wholesale dismissing either, I think discussing those trade offs is more interesting.

                  It is truly a murky term, which I think leads to a lot of the communication issues in this area of discussion. I should probably start future conversations like this with a clear definition of what I mean in terms of “engineering,” because I mean it more in terms of production engineering, rather than prototyping engineering. (Unfortunately not a distinction in software, but it is in all of the other engineering fields. Prototyping engineering values malleable materials, like 3D printing, while production engineering is what is shipped to users and it’s very particular about materials used, which minimize cost and maximize strength. Different fields, different requirements, and oftentimes different tools used to even CAD/design).

                  So using the production engineering vs prototyping engineering distinction, I would completely agree that there are trade-offs between strong typed and duck typed languages. I think one is fantastic for prototyping, while the other is fantastic for production. And similar to, say, 3D printing and injection-molding processes, the two can bleed into each other’s fields given the right opportunity. I should not have used “proper engineering” in my original comment, and I should have clarified I meant “production engineering.”

                  However, the number of companies that I’ve been at that have actually treated the two types of languages as different parts of the development life-cycle is exactly one, my current company, and that was after fighting to use Python for a prototype project. (Of course, when the prototype was successful, there was then reluctance to rewrite it in a strongly typed language because “it’s already built!”)

                  Similar to how you would feel frustrated if the new computer you bought had all of its internals built with 3D printed parts and breadboards, even if it worked identically to a computer with injection molded parts and PCBs, that is how I feel about using duck typed languages in production systems used by real users. Sure it works, but they are inherently less reliable and I think it’s poor engineering to ship a system like that (although, of course, there are situations where it is applicable, but not to the extent of the software world today).

                  I agree with this statement. I don’t think it implies a clear winner for all development though. While a strong type system guarantees a certain kind of error is not possible, in my practice those kinds of errors are rarely a significant factor, and in fact the “looseness” is a feature that can be leveraged at times. I don’t think that’s always the right trade off to make either, but sometimes creating imperfect organic shapes is ok.

                  I would also offer that there can be approaches that give us “both ways” - ruby now has the beginnings of an optional type system.

                  Fair enough, although my experience with errors in duck typed languages is certainly different! The jury is still out on the optional type systems, and I’m certainly curious how that affects things. I have a hard time believing they will make a significant difference, simply because the style of coding varies so drastically between the two. With strong types I design type-first, whereas with optional/incremental type systems it’s the other way (unless started from the get-go, but at that point what is the purpose of using a duck-typed language?).

                  I’m tapped out for this thread.

                  I completely agree here. This is an exhausting topic to talk about, probably because everyone has strong opinions on it and so many of the arguments are based on anecdotal data (of which I am definitely guilty sometimes!). In any case, I appreciate you taking the time to go back and forth with me here for a bit. I’ve certainly learned that in order to properly discuss this topic it’s important to be very careful with language and definitions, otherwise it’s just a mess. I guess the English language could use some strong typing, but then poetry would suck, huh? :)

        2. 4

          Start caring about that when everything else about the company makes you happy. Your happiness is more important than being a “good engineer”.

          TS has evolved from a flavor to simply being a better version. The switch to TS is so easy there’s no reason not to. There’s a difference between a challenge and intentionally handicapping yourself.

          1. 6

            I was under the impression that my job is to solve problems for customers, not to “be happy”. And of all the things that make me happy or unhappy at the workplace, something like this is pretty far down the list; using JavaScript is hardly some sort of insufferable burden.

            TypeScript may very well be better; but the last time I used it I found it came with some downsides too, such as the code in your browser not being the same as what you’re writing (so the REPL/debugger is a lot less useful), Typescript-specific attributes (such as types) being hard to inspect since the browser has no knowledge of any of that, a vastly more complicated build pipeline, and I found working with it somewhat cumbersome with all the type casting you need to do for the DOM API (i.e. this kind of stuff). And while the wind has been blowing in the direction of statically typed languages in the last few years, let’s not pretend the good ol’ “dynamic vs. static languages” debate is a done and solved deal. I like typing, but dynamic languages do come with advantages too (and TypeScript’s typing may be optional, but if you’re not using it much then there’s little reason to use it at all).
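            As a small illustration of the DOM-casting friction (using local stand-in types so the sketch runs outside a browser; real code would use Element and HTMLInputElement):

```typescript
// Stand-ins for DOM types so this runs anywhere; in the browser these
// would be the built-in Element and HTMLInputElement interfaces.
interface FakeElement { tagName: string }
interface FakeInput extends FakeElement { value: string }

// querySelector-style lookup: the static type is only "some element or null".
function fakeQuerySelector(sel: string): FakeElement | null {
  return sel === "input" ? ({ tagName: "INPUT", value: "hello" } as FakeInput) : null;
}

// The friction: `.value` isn't on the general element type, so callers
// must cast (or otherwise narrow) before using input-specific fields.
const input = fakeQuerySelector("input") as FakeInput | null;
console.log(input?.value ?? "(not found)");
```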

            Perhaps there are some solutions to these kinds of issues now (or will be soon), but about a year and a half ago I found it all pretty convoluted and decided to just stick with JS for the time being.

            In the end, I understand why it exists and why it’s popular, but like much of today’s frontend I find it hacky, kludgy, and a very suboptimal solution.

            1. 5

              You’re more than your job, you are a human being first. As engineers we have the huge privilege of being able to quit a job and find a new one easily. Happiness is a completely valid reason to do this.

              JS vs TS is a minor point, too small compared to other unknowns in switching companies. But OP said “leave teams”. I actually did just that: we have multiple teams at my company and one started using TS, so I switched. A year later the JS team now solely does bug fixes and no-one is willing to write new JS code. At first management thought it was like when Coffeescript happened, but the dev response was so much bigger that even they understand it’s different this time.

              1. 4

                You’re more than your job, you are a human being first. As engineers we have the huge privilege of being able to quit a job and find a new one easily. Happiness is a completely valid reason to do this.

                Sure, but if you’re made deeply unhappy because you have to write JavaScript instead of TypeScript then you are either in a position of extreme luxury if that’s a problem that even rates in your life, or there seem to be some rather curious priorities going on. I think all of this is classic programming self-indulgent navel-gazing.

                Is TypeScript a better tool? Probably. Would it be a good idea to start a reasonable percentage of new projects in TypeScript? Also probably (depending on various factors). Should we drop all JavaScript projects ASAP just so that we rewrite everything to TypeScript? Ehh, probably not.

                I don’t have any predictions for 2021, but I’ll have one for 2026: we’ll be having the same discussion about WhatNotScript replacing boring old TypeScript and we should all switch to that. And thus the cycle continues.

              2. 2

                “no-one is willing to write new JS code.”

                until something goes very wrong. Not taking a side here, just “no-one is willing” struck me as an odd statement. Who has the unhappy chore of taking care of all that old boring JS?

                1. 2

                  We still do bug fixes. But even then it’s often an opportunity to write TS. The nature of TS makes it so that you gradually port your application. Lots of JS is valid TS and if it isn’t then that’s usually easy to split up or refactor to become valid TS, and TS will help with that refactoring even. JS IDEs provide hints by pretending it’s TS anyways.
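                  As a tiny illustration of that gradual-porting property (a hypothetical function, not anyone’s real code): a plain JS function pasted into a .ts file already parses, and porting is often just adding the annotations:

```typescript
// The JS body is untouched; only the parameter and return annotations
// are new. TypeScript can now check every call site of this function.
function total(items: { price: number; qty: number }[]): number {
  return items.reduce((sum, i) => sum + i.price * i.qty, 0);
}

console.log(total([{ price: 2, qty: 3 }, { price: 1, qty: 4 }])); // prints 10
```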

            2. 5

              We’re supposed to be professionals for crying out loud, not children insisting on our favourite flavour of ice cream.

              I was under the impression that my job is to solve problems for customers, not to “be happy”.

              This is a red herring. You are an economic agent making decisions about your employment to maximize your own utility function, and trying to get what you can out of the market. Some try simply to maximize earnings. Some maximize some combo of earnings and hours worked/stress. Others care about finding meaning in their work. And others care about working with specific technologies, or at least not working with ones they hate.

              All of these things are orthogonal to the concept of “being a professional.”

        3. 2

          I generally strongly agree with your sentiment, and am not a fan of the degree to which developers place their identity in one particular technology over another.

          But in this specific case I imagine OP is associating untyped JS projects with tech debt and difficulty of maintenance that all too often can contribute to low morale. I’ve definitely been there before when it comes to large codebases with foundational technical debt and no type system to help one find their way around. There’s something uniquely frustrating about e.g. a poorly formatted stack trace coming back from a bug monitoring tool due to broken or buggily-implemented sourcemaps. We can definitely debate whether language choice vs other factors (bad processes, low technical budget) contribute more to system quality. My guess is that language is not one of the largest factors, but it’s probably nonetheless significant. Otherwise we wouldn’t hear about stories where people left their jobs due to being sick of the shop tooling.

      3. 4

        Agreed. Although TypeScript is a bit OO for my taste, JavaScript libraries have grown sufficiently complex as to warrant strong typing. The adoption rate is undeniable. Vue, Deno, Babylon… When your stack is written in TypeScript, the cost/benefit scale tips in favor of adopting it downstream.

        Also, Cosmos is heating up, so you could make a case for Mongo’s revival by extension.

        1. 6

          At Notion we have some OO-as-state-encapsulation Typescript on the front end, but we have even more functional Typescript, and plenty of code somewhere in-between. We use the advanced Typescript types like as const inference, mapped types, and conditional types much more than we use inheritance or implements interface.

          Honestly, writing a large from-scratch codebase in Typescript focused on type correctness and make-invalid-states-unrepresentable has been very fun and productive. Our biggest issue with the language is error handling: dealing with all the errors from external libraries, the DOM, exceptions vs Result<S, F>, etc. is the most annoying and error-prone aspect of our codebase. Shoehorning optionals into our style has left me pining for Rust’s try features… and I’ve never really written Rust either…
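          A minimal sketch of the Result<S, F> style being wrestled with here, with hypothetical names (this is not Notion’s actual code):

          ```typescript
          // A discriminated union makes failure impossible to ignore: you must
          // check `ok` before you can touch `value`, and the compiler enforces it.
          type Result<S, F> =
            | { ok: true; value: S }
            | { ok: false; error: F };

          // Wrap an exception-throwing call (a third-party library, the DOM, etc.)
          // so the failure shows up in the type instead of as a runtime surprise.
          function tryCatch<S>(fn: () => S): Result<S, unknown> {
            try {
              return { ok: true, value: fn() };
            } catch (error) {
              return { ok: false, error };
            }
          }

          const parsed = tryCatch(() => JSON.parse('{"a": 1}') as { a: number });
          if (parsed.ok) {
            console.log(parsed.value.a); // narrowing: `value` only exists when ok
          }
          ```

          The friction comes from the boundaries: every interaction with exception-based code needs a wrapper like this.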

          1. 6

            Some stuff we’ve done:

            • write third-party types that basically force you to pass values through validators by saying “this actually returns an opaque InvalidatedResult style thing”
            • remove functions we deem “bad” from the type signatures
            • codegen definition files
            • heavy usage of stuff like never

            I think it’s actually pretty easy to wrap third-party libs for the most part, and it’s basically the “real way” to do most of this. Too many people hem and haw at this idea, but it resolves a lot of stuff come “oh no this lib is actually totally busted” o’clock.
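            The first bullet, types that force values through validators, can be sketched with a branded type; all names here are purely illustrative:

            ```typescript
            // An opaque brand that only the validator can attach. Nothing else
            // can construct a Validated<T>, so every value must pass validation.
            declare const valid: unique symbol;
            type Validated<T> = T & { [valid]: true };

            type Email = Validated<string>;

            function validateEmail(raw: string): Email | null {
              // Toy check; a real validator would do much more.
              return raw.includes("@") ? (raw as Email) : null;
            }

            // Downstream APIs accept only the branded type, so passing an
            // unvalidated string is a compile-time error, not a runtime surprise.
            function sendMail(to: Email): string {
              return `sent to ${to}`;
            }

            const e = validateEmail("user@example.com");
            if (e !== null) {
              sendMail(e); // OK: e has been validated
            }
            // sendMail("raw@string"); // would not compile: plain string lacks the brand
            ```

            The same trick extends to wrapping a whole third-party lib: re-export its surface with the “bad” functions omitted and the rest returning branded results.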

          2. 1

            That sounds amazing! Have you or Notion written any articles describing this setup in more detail? Are there any by others you recommend?

            1. 1

              Unfortunately when I go looking for Typescript advice on the internet, I find mostly shallow blogspam tutorials. I have an idea to take notes whenever I use an advanced TS feature and write an article called “Applying Advanced Typescript Types” — but that’s remained just an idea for a couple of years.

              1. 1

                I’ll keep an eye out for your article in the lobste.rs feed 😉.

        2. 2

          Although TypeScript is a bit OO for my taste

          My limited experience with TypeScript is that it’s only as OO as it would be if you were writing plain JavaScript. Not sure if that makes sense - another way of saying it would be: JS has adopted some OO trappings, like class, but if you aren’t using them in your JS then TypeScript isn’t going to push you in that direction - you can write functional TS to the extent that you could write functional JS; and OO TS to the extent that you could write OO JS.
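          For example, the same behaviour type-checks equally well in either style (toy sketch):

          ```typescript
          // OO style: a class, just as you might write in modern JS.
          class Counter {
            constructor(private n = 0) {}
            increment(): number {
              return ++this.n;
            }
          }

          // Functional style: a closure over plain data; TS infers the same safety.
          const makeCounter = (n = 0) => () => ++n;

          const a = new Counter();
          const b = makeCounter();
          a.increment(); // 1
          b();           // 1
          ```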

          Unless you’re referring more to the naming of new keywords, like interface? I see how those could be associated with popular OO languages, but really there’s nothing making you write actual OO code.

          1. 3

            My limited experience with TypeScript is that it’s only as OO as it would be if you were writing plain JavaScript.

            Anecdotally, after working at a Java and C# shop that picked up TypeScript, everyone’s happy having things work more like those languages (well, mostly C# ;) than like JS. I just wish TypeScript would get typed exceptions already.

          2. 2

            Yes, it is possible to write non-OO TypeScript. And yes, I’m pointing out its emphasis on interfaces and other OO features like class property modifiers.

            I realize that the choice to make TypeScript a superset of JavaScript means that its roots in Scheme are still present. I also realize that typing a Scheme-ish language makes it (if one squints hard enough) an ML-ish language. Nevertheless, we should not be surprised if most TypeScript in the wild looks a lot more like C# and a lot less like fp-ts.

            1. 1

              Nevertheless, we should not be surprised if most TypeScript in the wild looks a lot more like C# and a lot less like fp-ts.

              Makes sense. Perhaps some of this is also due to TypeScript being palatable to people who are comfortable in languages like C# and Java; maybe they’d have stayed away from vanilla JS before (especially if they were exposed in the pre-ES6 days) but might be willing to write TypeScript today? That’s total speculation, though, and I’ve no idea how many people like that there are.

    4. 33

      Nix.

      1. 8

        I really really want 2021 to be the year of Nix.

      2. 8

        See also: “backlash against Kubernetes”

        1. 5

          I struggle to see how building infrastructure on top of a library with howmanythousandslinesofcode of some Standard ML derivative and bash constitutes simplicity in orchestration.

          NixOS is neat, but it’s not simple.

          1. 10

            Hot take: I’d prefer building systemd units that automatically get managed over orchestrating k8s pods any day. Nothing is simple, but Nix provides another approach to managing complexity well.

            1. 4

              I had an idea to rebuild what was once CoreOS’s fleetctl. Orchestration built on top of systemd, without anything more fancy on top.

              1. 3

                I think people generally complain about systemd, among many reasons, because distros started with /etc/init.d, picked up systemd, and started using it in the same way. So “why do we need all this crap” makes sense when all the distro uses is the init system features. But systemd, for better or worse, is really a daemon manager, and the argument for systemd vs. sysvinit with NixOS or a tool like that is much stronger.

          2. 1

            Not sure how something can be too simple. A tool either solves the problem or it doesn’t.

        2. 4

          And a blow against Ansible, Terraform, Salt etc. for free!

        3. 1

          I’m a Nix n00b, but I don’t understand the comparison between Nix/NixOS and Kubernetes. To me, k8s = “distributed” vs Nix = “single host”. Did I miss something?

          1. 1

            NixOps and similar let you do a form of distributed Terraform-style configuration. It’s not automatic scheduling like Kubes, but gets you part of the way there at least. Scheduling would be cool, though. If NixOps could do it, it would beat Kubernetes at its own game.

            And that’s why Nix is a technology to watch. I think its current adoption is partially a backlash against Docker being inefficient - you can generate Docker images with docker-tools in nixpkgs now without ever installing Docker. The principles behind the tech are solid - per-application install prefixes are widely used on clusters, Nix takes it to the next level and uses them as part of the dataflow of building new derivations. In 2021, I bet we’ll see the tools and documentation start to mature in a way that will ultimately set the stage for it being able to do more of what Kubernetes does, but without YAML. (Seriously. I’d rather write Nix than YAML.)

            And Nix has done all this without Docker-style growth hacking or Kubernetes-style aggressive advertising, but nixpkgs is now creeping up on FreeBSD Ports in number of maintainers. The ecosystem has a pretty bright future, IMO.

      3. [Comment removed by author]

      4. 1

        Flakes and the CLI update will finally make it make sense.

    5. 30
      • RISC-V CPUs. Because open standards encourage co-operation across borders.
      • Zig. Because people want to replace C with something more secure, but in a gradual process, and Zig includes a C compiler.
      • MIDI over Bluetooth 4. Because it’s handy.
      • The Gemini protocol, since retrominimalism is still fun.
      • Godot for game development, because it has matured enough.
      • TOML since it seems to be the least disliked configuration format.
      1. 5

        Godot is in a great spot and just keeps getting better. I’m hoping you’re right on this one!

      2. 2

        “…encourage co-operation across borders”: well said! I actually never considered the geopolitical impact of processors. Could you point me towards some reading, or share some insight?

        1. 2

          I’m not an expert on RISC-V, but I know that both MIT and China work on improving it. There’s also political will in Europe to build a European CPU.

      3. 2

        MIDI over Bluetooth 4. Because it’s handy.

        Had to look that up. Latency jitters between 3-10ms. Figures.

        1. 1

          3-10 ms is acceptable latency for MIDI though?

          1. 4

            Drummers will disagree.

            But even then, it isn’t just the 3-10 ms of Bluetooth latency; it’s that plus whatever other latency is in the chain.

            It all adds up.

            1. 5

              (Am drummer) A total latency, i.e. of the whole chain, of 3-10 ms is okay. 3-10 ms for a single MIDI message is 3-10 times more than one would expect :-). Also, jitter matters enormously. A constant latency of 20 ms is annoying, but you can sort of live with it if you really try. Constantly jittering between 5 and 20 ms, on the other hand, feels like you’ve had one too many drinks.

      4. 1

        Two months later, RISC-V, TOML, Gemini and Godot have good traction and Zig is still very promising (and under development - not Rust levels of safety yet).

        MIDI over Bluetooth, on the other hand, has delay and setup issues. I’ve tried it and I have lost faith in it.

        1. 1

          Three months later, there are some MIDI over Bluetooth hardware solutions that claim to be able to send signals with a 3ms delay. I tried the CME WIDI device. So far, it has been a complete hassle to set up on my Android phone, but the delay is very low. It also feels fine to play with a MIDI controller connected to a Linux machine, where Jack then redirects the MIDI signal via Bluetooth to the WIDI device.

    6. 24

      I think it’s going to be the year (and decade) of shell scripts written in YAML … Github Actions, Gitlab runners, Kubernetes config, sourcehut, etc. :)

      I have a few blog posts coming up about that

      1. 12

        Oh. Well, as my grandmother would say, rats.

        1. 9

          This is excellent news for those of us who’ve been writing bash scripts for a long time. We have decades of experience with an idiosyncratic, designed-by-oh-fuck-it language that barely has a syntax and performs in all sorts of surprising ways! This is practically the same thing, it’s a new flavour, there are fewer people who know it, so the consulting rates are higher…

          1. 2

            designed-by-oh-fuck-it language

            😂

        2. 3

          That’s about how I feel … There are a lot of useful platforms locked behind YAML.

          But it looks like there is a way out: JSON is a subset of YAML, so I changed my .travis.yml and sourcehut YAML to be generated. So I have .travis.yml.in, and .travis.yml, the latter of which is just JSON. [1]
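          The generator can be a few lines in any real language. A hypothetical TypeScript version (not the actual .travis.yml.in; file names and commands are made up) emits JSON, which any YAML parser accepts:

          ```typescript
          // Build the CI config in a real language, then emit JSON; because JSON
          // is a subset of YAML, the output can be committed directly as a .yml file.
          const matrix = ["python3.6", "python3.7"];

          const config = {
            language: "minimal",
            // Real logic (loops, helpers) replaces YAML templating here.
            script: matrix.map((py) => `./test.sh ${py}`),
          };

          const yamlCompatible = JSON.stringify(config, null, 2);
          console.log(yamlCompatible); // valid YAML, ready to commit
          ```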

          So I can change the source language to be something else, but I haven’t yet. My configs are like 30 lines now, so it may not be worth it. But I have worked on big services with thousands of lines of config (e.g. when I used to work at Google). I would say that’s the norm there, and tens of thousands of lines is pretty common too.

          I remember someone saying that Facebook runs on hundreds of thousands of lines of configuration like this? https://research.fb.com/wp-content/uploads/2016/11/holistic-configuration-management-at-facebook.pdf

          I’d be curious if people like it or dislike it.


          So it looks like you can already replace YAML with Jsonnet, Cue, probably Dhall, etc. Does anyone actually do it? Anecdotally, it does seem like templating YAML is more popular? I wonder why that is. I only work with a handful of services that accept YAML now.

          https://leebriggs.co.uk/blog/2019/02/07/why-are-we-templating-yaml.html

          I think the functional languages are a little unfamiliar. Oil will use Ruby-like blocks for configuration, sort of like Vagrant or Chef but more succinct.

          Oil should be more familiar to people who know shell / Python / Ruby. If you know those languages, I don’t expect that Jsonnet, Cue, or Dhall feels very familiar. They feel like an “extra” thing. And the last thing we want in infrastructure management is yet another config language. (That’s why I think it makes sense to bundle it into a shell.)

          Ditto for Nix. Nix is very similar to these languages – it’s basically an expression language over dynamically typed JSON-like records, but in this thread there is some negative feedback about that.

          Anyway I want to fix this problem with Oil, but I’m not sure in which cases people would actually accept the extra “compiler”. It seems like people are very eager to template YAML, and embed shell in YAML, which is weird to me. I wonder why that is.

          [1] https://github.com/oilshell/oil/blob/master/.travis.yml.in

          https://github.com/oilshell/oil/blob/master/.travis.yml

      2. 8

        I already spend too much time being a YAMLgineer.

      3. 3

        I’ve actually been working on my own CI system (not yet finished/released) because I got so fed up with this. After I ran out of the Travis credit thing I looked at GitHub Actions, and I just couldn’t get PostgreSQL to work: it just fails (after waiting 5 to 10 minutes, of course) and it’s pretty much impossible to inspect anything to see what’s going on. I already did this song and dance years ago with Travis, and it was painful then, and even more painful now.

        It just sets up an image and starts /run-ci (or another program) from your repo in a container with runc. The script can be written in $anything supported in the container, and that’s it. While this won’t cover 100% of CI use cases, it’s probably suitable for half or more of them, and you can, you know, actually debug it.

      4. 3

        I’ve thought about writing a language that uses YAML syntax but in the style of LISP or XSLT. It would be a total troll language, but I could see some projects actually using it.

        1. 7

          Github almost beat you to it, except:

          if: ${{ github.event.label.name == 'publish' }}
          runs-on: ubuntu-latest
          steps:
            - uses: actions/checkout@v2
          

          clearly needs to be

          if:
            cond:
              op: ==
              left:
                op: .
                left: github
                right:
                  op: .
                  left: event
                  right: ...
              right: publish
          
          runs-on: ubuntu-latest
          steps:
            - uses: actions/checkout@v2
          

          :-)

          from:

          https://lobste.rs/s/oeelem/using_github_issues_as_hugo_frontend_with

          https://github.com/shazow/shazow.net/blob/master/.github/workflows/publish.yml

    7. 22

      Distributed content technology. I want to see more Activity Pub, more PeerTube, Pleroma, Mastodon, Pixelfed.

      I want to see more regular people get good alternatives to Google, Apple and FB Services. I want to see tools to help route around censorship and give us a more free Internet.

      1. 4

        I can see this happening but I think it will be in the form of separate forums and clones of popular sites, not as something federated

    8. 22

      What was 2020 the year of?

      (Not rhetorical, asking as calibration.)

      1. 39

        The year of remote working?

        1. 3

          Seconded. I think it opened a lot of folks up to hiring remote, especially hiring managers / higher ups who may have been skeptical about remote work.

          Hope remote hiring continues to flourish in 2021.

      2. 6

        On the front-end side I think it was another year of “no new flavour of the week”. From where I sit it looks like React is still the dominant UI framework, with Angular trucking along beside it. Those ships have been sailing pretty steadily for a while now. Soon people are going to have to come up with a new trope to make fun of those who work with Javascript ;)

        1. 2

          The fountain of flux has moved to grace the machine learning community with a new Fancy Framework™️ every week.

      3. 3

        Flutter.

        1. 4

          My prediction for Flutter in 2021, is that popularity will wane somewhat as it becomes apparent that Flutter for web and desktop aren’t going to live up to the hype.

          1. 4

            Anything that promises to be great everywhere is going to be average at everything and will have different drawbacks on each platform, making everything much more complicated.

            Flutter isn’t the right approach to cross-platform development, in my opinion. The web is the best we’ve got, otherwise better to just target the platforms in a modular way.

    9. 20

      The fantasy: people realizing that Linux is insecure and switching to a model like seL4 for critical systems.

      The reality: more proprietary Linux drivers for OEM hardware. :-(

      Meta: this post’s ID is “v4crap,” do with that information what you will.

    10. 16

      The year of Prolog! Yes, I’m serious. In recent years we’ve seen a new wave of Prolog environments flourish (Tau, Trealla and Scryer), which this year could reach production-ready status. At least, that’s what I hope, and I’m helping these environments with some patches as well.

      1. 19

        year_of(prolog, 2021).

      2. 6

        There was even a new stable release of Mercury late last year. It’s, uh, I’m not personally betting on it getting wide scale adoption, but I do personally feel that it’s one of the most aesthetically pleasing bits of technology I’ve ever tried.

      3. 5

        A couple of years ago I hacked on a Python type inferencer someone wrote in Prolog. I wasn’t enlightened, despite expecting to be after a bunch of HN posts like this.

        https://github.com/andychu/hatlog

        For example, can someone add good error messages to this? It didn’t really seem practical. I’m sure I am missing something, but there also seemed to be a lot of deficiencies.

        In fact I think I learned the opposite lesson. I have to dig up the HN post, but I think the point was “Prolog is NOT logic”. It’s not programming and it’s not math.

        (Someone said the same thing about Project Euler and so forth, and I really liked that criticism. https://lobste.rs/s/bqnhbo/book_review_elements_programming )

        Related thread but I think there was a pithy blog post too: https://news.ycombinator.com/item?id=18373401 (Prolog Under the Hood)

        Yeah this is the quote and a bunch of the HN comments backed it up to a degree:

        Although Prolog’s original intent was to allow programmers to specify programs in a syntax close to logic, that is not how Prolog works. In fact, a conceptual understanding of logic is not that useful for understanding Prolog.

        I have programmed in many languages, and I at least have a decent understanding of math. In fact I just wrote about the difference between programming and math with regards to parsing here:

        http://www.oilshell.org/blog/2021/01/comments-parsing.html#what-programmers-dont-understand-about-grammars

        But I had a bad experience with Prolog. Even if you understand programming and math, you don’t understand Prolog.

        I’m not a fan of the computational complexity problem either; that makes it unsuitable for production use.

        1. 2

          Same. Every time I look at Prolog-ish things I want to be enlightened. It just never clicks. However, I feel like I know what the enlightenment would look like.

          I don’t fully grok logic programs, so I think of them as incredibly over-powered regexes over arbitrary data instead of regular strings. They can describe the specific shape of hypergraphs and stuff like that. So it makes sense to use it when you have an unwieldy blob of data that can only be understood with unwieldy blobs of logic/description, and you need an easy way to query or calculate information about it.

          I think the master would say “ah, but what is programming if not pattern matching on data”? And at these words a PhD student is enlightened. It seems to make sense both for describing the tree of a running program and for smaller components like conditionals. It also seems like the Haskell folk back their way into similar spaces. But my brain just can’t quite get there.

        2. 2

          Sorry to hear that. For me Prolog is mainly about unification (which is different from most pattern matching I’ve seen, because you need to remember the unifications you’ve done before between variables) and backtracking (which has been criticized for being slow, but modern systems let you use a different strategy for every predicate; the most famous alternative is tabling). For the rest, it should be used as a purely functional language (it isn’t one, and lots of tutorials use side effects, but by staying pure you can reason about a lot of things, making debugging way easier).

          I did Prolog at university (which is not very rare in Europe) and we studied the logic parts of Prolog and where they come from. And yes, it’s logic, but it’s heavily modified from the “usual way” to perform better and it’s not 100% mathematically equivalent (for example, using negation can produce bad results, there’s no occurs check, …), and it uses backward chaining, which is the reverse of what people usually learn. Also, lots of people use ! (cut), which can improve performance by pruning solutions, but it makes the code non-pure and harder to reason about.

          However, what I really liked about Prolog was the libraries built from these simple constructs. There are very useful bidirectional libraries like DCGs (awesome stuff; I did some Advent of Code problems using only these “pattern matching over lists” helpers), findall, clpz, reif, dif, and CHR if you want forward-chaining logic, all available in most Prolog systems.

          Yes, computational complexity is a problem, having backtrackable data structures will always have a penalty but it’s not unfixable and there are ongoing efforts like the recent hashtable library.

          That said, in the end it’s also a matter of preference. I’ve seen in the repo that you consider Haskell easier. In my case it’s just the opposite: Prolog fits my mind better; there are fewer abstractions going on than in Haskell, IMO.

          For some modern Prolog you can checkout my HTTP/1.0 server library made in Prolog: https://github.com/mthom/scryer-prolog/pull/726/files

          1. 1

            FWIW I think I’m more interested in logic programming than Prolog.

            I am probably going to play with the Souffle datalog compiler.

            And a hybrid approach of ML + SMT + Datalog seems promising for a lot of problems:

            https://arxiv.org/abs/2009.08361

            Prolog just feels too limited and domain specific. I think I compared it to Forth, which is extremely elegant for some problems, but falls off a cliff for others.

    11. 16

      Julia hopefully. I hate Python with a passion and don’t want to learn R, so I really want to see Julia emerge as a viable alternative in scientific computing. I love its Lispiness as well. The ecosystem needs some polishing but it is very promising.

      Bayesian modelling. I’m a big fan of explainable models, and Bayesian modelling and credible intervals are both intuitive and rich. With advances in MCMC sampling and ergonomics (with libraries like Stan, Turing.jl, PyMC3, Edward, and Pyro) hopefully we can see a growing number of Bayesian models.

      1. 3

        MIT just did a course on it! I think Julia has been on a slow, successful burn for a while now, and I hope that continues happening!

        Also, have you by any chance read Bayesian Statistics the Fun Way? I read it with a friend and it’s got me interested in pretty much the same niches you’re describing (Julia & Bayesian modelling).

        1. 2

          Also, have you by any chance read Bayesian Statistics the Fun Way?

          I haven’t but I’ve heard a lot about it! My math background is fairly strong so I’ve been working through BDA3, Gelman’s book, instead. I’d love to exchange resources if there’s interest!

      2. 1

        Python and R were my main drivers in research, but I wish I had used Julia, calling out to PyCall and the R equivalent only when stuck. Julia is amazing for scientific programming in the broadest sense of the term, IMHO. One day, who knows.

        Bayesian modelling and Inference: I hope that R-INLA and INLA approach can be ported to other languages (Julia hey).

        1. 1

          Bayesian modelling and Inference: I hope that R-INLA and INLA approach can be ported to other languages (Julia hey).

          Do you know any resources I could look at that show how INLA differs from MCMC techniques?

          1. 2

            A gentle INLA tutorial is an accessible short introduction to INLA and how it differs.

    12. 14

      I hope that 2021 will be a year of slower technology popularisation/adoption, as the lag between the technology that is available and most developers’ understanding of it is unsustainably large.

      Please, nothing new! We don’t need it, we’re all still getting to grips with the old stuff!

    13. 14

      With luck, e-ink. DASUNG teased a 25” e-ink desktop display just recently. Maybe it’s wishful thinking, but I’m really hoping that more people can produce differentiated monitor technology; my eyes look forward to it!

    14. 13

      I’ll take a shot at this.

      What technology will “come into its own” this year?

      RISC-V will accelerate in a significant manner. This is because there will be a bunch of low-cost linux-capable SBCs released this year. There were none in the past decade of RISC-V.

      Is 2021 the Year of the Linux Desktop?

      No. It’s hopeless. YoLD will never happen. At this point, I’m certain some other system will actually do what Linux just couldn’t.

      Will Rust become the most important language ever?

      Doubtful. It is still a new language. Languages take time to gain support; most just disappear. There’s a small chance that, about ten years from now, this will be a relevant question.

      Will Go supplant Python?

      They’re too different (typing) for this to even be considered.

      Is Ruby ready for a renaissance?

      I’m not optimistic for Ruby.

      What technology is going to be the one to know in 2021 and beyond, in your learned opinion?

      HDLs in general (the relevance of special-purpose hardware, FPGA- or ASIC-based, will go up, alongside open hardware).

      seL4, and operating systems built on 3rd generation microkernels.

      And, RISC-V will become big, so it’ll then be good to have some early experience to show with it.

    15. 21

      The Linux Desktop! For the 20th year running…

      1. 12

        Aww, shucks, right when I ordered a Mac! I’m just about to leave the party and you’re telling me it’s starting??

        1. 12

          No better time to contribute to: https://asahilinux.org/ :)

          1. 3

            After 17 years of “no better time to contribute to $linuxthing” I think it’s time I had a break from all this :-).

            (Edit: well, I’m still going to use Linux for all sorts of embedded gadgets. And there’s a driver I wrote, which I’m maintaining, and I’ve been putting off adding support for a new device for a few months now, what with all the global shipping kerfuffle. So I won’t have a break from Linux – what I do want, and will hopefully get, is a break from dealing with… everything else, from systemd to xdg-whatever and from GTK to Wayland, in my spare time. I’ve got a bad case of perpetual beta fatigue.).

            1. 1

              I’ve got a bad case of perpetual beta fatigue.

              Well, good luck then on macOS ;).

              I used macOS full-time from 2007-2018 and part-time from 2018-2020. macOS was really great when I started using it, but it is as buggy as any other desktop OS nowadays, with the big difference that there is no recourse besides filing issues in OpenRadar, which is pretty much the same as sending them to /dev/null. In FLOSS systems like Linux or the BSDs, you can at least report bugs to someone who is listening, or fix them yourself.

              1. 1

                Oh, I’m not having grand expectations. I had two stints with what was OS X back at the time. I used 10.1, I think, for a few months (wasn’t my laptop), and then used OS X 10.4 and 10.5 on two of my machines between roughly 2007 and 2010. It was okay, I liked Linux better but I could see the appeal. Programming Unix-y things under OS X was definitely not fun and I’m not expecting it to be any fun-er but it was solid enough to be useful.

                Then I haven’t touched Macs for a long time until last year, when nobody wanted to fix macOS bugs in one of the projects I’m working in and I figured why not. Based on a few weeks of fiddling with them, Catalina and Big Sur seem to me worse, overall, than 10.4 was at its time (I was too enthusiastic about Aqua to be able to tell much about 10.1) but I can work with them.

                The hardware is the bit I’m more concerned about, honestly. I haven’t peeked under the hood in a while but strictly based on what I can see outside, I want to hug my old iBook G3 and never let it go.

                In FLOSS systems like Linux or the BSDs, you can at least report bugs to someone who is listening, or fix them yourself.

                Until a few months ago when it finally became unnecessary, I used to babysit a GTK application so I got to be great at being talked down to by upstream. Being ignored would be, like, amazing!

                Anyways, it’ll take a while for my MBP to arrive. What’s the worst that can happen? :-D

          2. 3

            I had no idea this existed. I thought it was a non-starter. yey. \o/

            Also, the developer has a serious history of successfully doing this sort of thing. Suddenly I am even more excited by Apple Silicon.

      2. 2

        The year of Gaming on Linux.

        With the recent improvements made to Proton, about 70% of the games work.

      3. 2

        After years on macOS I switched to Windows when I started working in gamedev, and after a few years of putting up with it I finally got fed up enough to switch to Ubuntu over the holiday break. Since getting my dual-boot setup working, I haven’t yet booted into Windows.

        1. 2

          Give it some time :-). This isn’t my first rodeo, I’ve been at it for almost 20 years.

    16. 12

      Things that will keep growing : Rust, Zig, WASM.

      Things that will make a comeback : simpler non-SPA JS frameworks with server-side rendering (Stimulus etc), and Rails by ricochet effect. A “simple, ops-less hosting” platform like Heroku or App Engine will emerge, maybe Vercel.

      Obviously, on the hardware end: ARM laptops / deskops.

      Things that will go down (too much complexity for nothing): Kubernetes (at least used directly).

      Things I would like to see grow but I’m not sure about: unikernels and/or lightweight VMs instead of containers (à la Firecracker).

      5G will probably change some things, too.

    17. 10

      The return of procedural PHP.

      1. 7

        PHP 8 seems like it removed most footguns from the language. It’s pretty impressive.

        1. 3

          I have to admit I initially thought this was a joke, but it really is impressive.

          1. 1

            Yeah. I initially thought it was a joke too, but the changes are real, and remove like 80% of the stuff I used to show to people when I was TAing a security class and had to explain how bad PHP could be. So good on them. It’s probably still trivial to create XSS issues, though. PHP would definitely not be my first choice given lots of alternatives.

            Edit: Here’s a more in depth review

      2. 3

        I programmed professionally in C, C++, and Python for almost 20 years, in that order. I wrote my first PHP program last week and largely liked it :) It’s extremely procedural, but that is natural and appropriate within the stateless/functional CGI style. Procedural in the small but functional in the large.

        https://github.com/oilshell/picdir (code review welcome, as I don’t know PHP well)

        Mentioned here: http://www.oilshell.org/blog/2021/01/blog-roadmap.html#appendix-deferred-blog-topics

        Although to be fair I tried Flask (Python) right after, and I will probably end up using Flask for more things. But PHP has its upsides for sure. Mostly builtin libraries, easy deployment, and a good dev server. Although Flask also has a good dev server.

    18. 8

      2021 will be the year of serverless. I think that until somewhat recently you were either all in or not because many execution environments were very bespoke. In 2021 I think that people will be shoving basically everything they can into lambdas/cloudrun/knative/workers and you’ll run your app as far “up” the stack as possible.

      1. 11

        For companies, maybe, but people still like craft beers and craft servers.

      2. 2

        +1 for this. Especially considering Lambda container runtimes and Aurora PG/MySQL serverless.

    19. 8

      I’m not going to be so brazen as to declare “the one to know.” As someone who’s spent a long time doing web development and the last year neck deep in WebGL, I’d say I’m interested in:

      • WebGPU: Writing vanilla WebGL is pretty unpleasant. A framework like Babylon or Three is pretty much a necessity to be a productive web graphics developer. WebGPU promises a much better abstraction for graphics on the web. Now that Apple, Google, and Microsoft are all behind it, it seems like it’s actually going to happen.
      • glTF: The chief obstacle to the adoption of 3D content on the web has been the lack of a common format. The development of this format and its industry adoption is very promising.
      • WebAssembly: There was a lot of talk about “isomorphic” or “universal” JavaScript about seven years ago. That dream never quite materialized for most teams, in part because JavaScript was derided by established backend developers, but mostly because node’s module system never caught on in the frontend development community. Most seasoned frontend developers who were around back then have Browserify horror stories they can share. (I certainly do.) WebAssembly’s been around for a while. The debugging story is still pretty crude, but the momentum behind Rust, the announcement of the Bytecode Alliance, discussions about supporting garbage collection, and several recent announcements about running wasm on the backend and CDN layers give me hope that there’s enough momentum behind this standard that we can eventually pick languages based on their own merits rather than the runtime for which they happen to be suited.
    20. 7

      This is going to sound a whole lot like self-promotion (and frankly it is), but I’m hoping to finally make secure boot more accessible for people. Most of the aversion to secure boot comes from the poor tooling and the bundle of documentation you need to get through before using it. You shouldn’t need to understand UEFI implementation details to utilize it.

      Most of this work is being written in Go in the form of sbctl and go-uefi.

      https://github.com/Foxboron/go-uefi

      https://github.com/Foxboron/sbctl

    21. 7

      P2P. Maybe it’s still early, but there was a lot of momentum in late 2019 / 2020, and a lot seems to be coming together. Just off the top of my head:

      • The Dat / Hypercore protocol provides a nice layer for building P2P applications, and made some big strides last year: https://hypercore-protocol.org/. Beaker Browser hit 1.0 last year.
      • Matrix launched an early version of their P2P implementation last summer, and it sounds like they’re making good progress on it.
      • Radicle launched P2P git hosting at the tail end of last year.
      • Working remotely means real-time collaborative tech is more necessary. Collaborative data structures like CRDTs don’t need a central server – each peer has the history necessary to reconstruct the document. There’s a lot of overlap between collaborative editing research and P2P tech.

      There are big issues, some probably unsolvable, but the energy seems to be there in a way that it wasn’t 5 years ago. I’m surprised that progress seemed to slow down so much after BitTorrent, but as someone who’s dissatisfied with large tech platforms and doesn’t see federation as the solution, I’m excited to see the progress.

      1. 2

        Most of the P2P software I’ve used (syncthing, for example) involves relay servers for creating the initial P2P connection, which always struck me as inelegant and limiting, not that I have a better solution. Is that still the case? I took a quick glance at some of those links, and it sounds like hyperbee at least is an alternative, if I’m reading it right.

    22. 6

      This question was prompted by a discussion I had about how I felt like all the momentum in programming languages is pointing toward Rust these days, and I felt like there’s no point in keeping my Go current (it’s been languishing for a couple of years now anyway).

      So, I asked this question to see (among other things) if I’m right or wrong.

      1. 12

        What is driving that feeling? Genuinely curious, because I feel the opposite. I am seeing more and more enterprises adopt Go. In conversations I have with other engineers, Rust still feels a little underground.

        I also think that Go and Rust have slightly different use cases & target audiences.

        1. 9

          Well, lobste.rs, for one. I feel like everywhere I look on here people talk about how they’d re-do everything in Rust if they could. The number of Rust advocates I see here seems to dwarf the number of Go advocates. Maybe that’s perception because Rust is “newer” and its advocates louder, but who knows.

          The things that really stuck with me, though, were Linus indicating that he’d be open to allowing Rust in the kernel, Microsoft starting to switch to Rust for infrastructure, and Dropbox migrating their core technologies to Rust.

          I just don’t see stories like that for Go. I don’t know if I’m not looking in the right place, or what.

          1. 22

            Go and Rust more or less solve the same problem (although the overlap isn’t 100%), just in different ways, not too dissimilar to how Perl and Python more or less solve the same problems in very different ways.

            I have the impression that, on average, Go tends to attract people who are a little bit jaded by the Latest Hot New Thing™ churn for 20 years and just want to write their ifs and fors and not bother too much with everything else. This is probably one reason why the Go community has the (IMHO reasonably deserved) reputation for being a bunch of curmudgeonly malcontents. These are not the sort of people who go out and enthusiastically advocate for Go, or rewrite existing tools in Go for the sake of it: they’re happy with existing tools as long as they work.

            Another reason I don’t really like to get involved in Go discussions is because some people have a massive hate-on for it and don’t shy away from telling everyone that Go is stupid and so is anyone using it every chance they get. It gets very boring very fast and I have better things to do than to engage with that kind of stuff, so I don’t. There’s some people like that on Lobsters as well, although it’s less than on HN or Reddit. It’s a major reason why I just stopped checking /r/programming altogether, because if the top comment of damn near every post is “lol no generics” followed by people ranting about “Retards in Go team don’t think generics are useful” (which simply isn’t true) then … yeah… Let’s not.

            1. 14

              Go and Rust more or less solve the same problem

              Hard disagree: Rust is way more useful for actual problems, like C/C++ are. I can write kernel modules in it; there’s almost no way I’d want to do that with Go. Having a garbage collector, or really a runtime at all, means a different use case entirely from being able to run without an OS and target minis. Yes, I know Go can be used for that too, but just due to having a GC you’re limited in how far down the horsepower wagon you can go.

              Beyond that, Rust actually has solutions and community drive (i.e., people using it for those things and upstreaming their work) for programming targets like AVR processors etc… I don’t hate Go; it just strikes me as redundant and not useful for my use cases. Kinda like how if you learn Python there’s not much use in learning Ruby too. And if I’m already using Rust for low-level stuff, why not high-level too?

              I don’t miss debugging goroutine and channel bugs, though; in Go it’s way easier to shoot yourself in a concurrent foot without realizing it. It might be “simple”, but that doesn’t mean it’s without its own tradeoffs. I can read and write it, but I prefer the Rust compiler telling me I’m an idiot for trying to share data across threads to debugging goroutines where two of them read off one channel and one of ’em is never gonna complete because the other already got the value. I’m sure “I’m holding it wrong”, but as I get older these stricter and more formal/functional languages like Rust/Haskell/Idris strike my fancy more. I’m not really talking about generics, but things like monads (Option/Result, essentially) read far better to me than the incessant checking of whether the thing I did is nil. It’s closer to what I’ve done in the past in C with macros etc…

              It’s not that I hate it, though; the language just seems like a step back in helping me do things. Idris, as an example, is the coolest thing I’ve used in years, in that using it was like having a conversation with the compiler and relearning how to move my computation into the type system. It was impressive how concise you can make things in it.

              As a recovering kernel/C hacker, you’d think Go would appeal, but to be honest it just seems like more of the same as C, with fewer ways of stopping me from shooting myself in the foot needlessly.

              But to each their own; typed functional languages just strike me as actually delivering a lot of the things that OO languages from the mid-90s always said could be done with open-form polymorphism but that never seemed to happen.

              In 10 years we’ll see where the ball landed in the outfield so whatever.

              1. 11

                Yes, hence the “more or less”. Most people aren’t writing kernel modules; they’re writing some CLI app, network service, database app, and so forth. You can do that with both languages. TinyGo can be used for microcontrollers, although I don’t know how well it works in practice – it does still have a GC and a (small) runtime (but so does e.g. C).

                I don’t however miss debugging goroutine and channel bugs though, go is way easier to shoot yourself in a concurrent foot without realizing it.

                Yeah, a lot of decisions are trade-offs. I’ve been intending to write a “why Go is not simple”-post or some such, which argues that while the syntax is very simple, using those simple constructs to build useful programs is a lot less simple. In another thread yesterday people were saying “you can learn Go in two days”, but I don’t think that’s really the case (you can only learn the syntax). On the other hand, I’ve tried to debug Rust programs and pretty much failed as I couldn’t make sense of the syntax. I never programmed much in Rust so the failure is entirely my own, but it’s a different set of trade-offs.

                In the end, I think a lot just comes down to style (not everything, obviously, like your kernel modules). I used to program Ruby in a previous life, which is fairly close to Rust in design philosophy, and I like Ruby, but its approach is not without its problems either. I wrote something about that on HN a few weeks ago (the context being “why isn’t Ruby used more for scripting?”)

                1. 4

                  Most people aren’t writing kernel modules; they’re writing some CLI app, network service, database app, and so forth. You can do that with both languages.

                  CLI yes, database probably, but I don’t think Rust’s async or concurrency or whatever story is mature enough to say it’s comparable with Go for network services.

                  1. 1

                    Cooperative concurrency is just more complicated (as a developer, not as a language designer) than preemptive concurrency. The trade-off is that it’s more performant. Someone could build a Rust-like language, i.e. compiler-enforced data race freedom, with green threads and relocatable stacks. And someday someone might. For now, the choice is between performance and compiler-enforced correctness on the Rust side, or “simpler” concurrency on the Go side.

            2. 3

              There’s just a lot of toxicity about programming languages out there, and subjectively it feels particularly bad here. Rust has a lot to like and enough to dislike (abandoned libraries, inconsistent async story, library soup, many ways to do the same thing), but something about its culture just brings out the hawkers. I still heartily recommend giving Rust a try, though you won’t be super impressed if you’ve used Haskell or Ocaml in the past.

              1. 6

                I came to Rust after having used Haskell and the main thing about it that impressed me was precisely that it brought ML-style types to language with no GC that you could write an OS in.

                1. 5

                  with no GC

                  I guess I find this often to be a solution with very few problems to solve. It’s understandable if you’re writing an OS or if you’re working on something real-time sensitive, but as long as the system you’re making can tolerate > 1ms p99 response times, and doesn’t require real-time behavior, Go, JVM languages, and .NET languages should be good enough. One could argue that there exist systems in the 1-10ms range where it’s easier to design in non-GC languages rather than fight the GC, and I can really see Rust succeeding in these areas, but this remains a narrow area of work. For most systems, I think working with a GC keeps logic light and easily understandable. When it comes to expressive power and compiler-driven development, I think both Haskell and Ocaml have better development stories.

                2. 1

                  Rust also has a much cleaner package management story and (ironically) faster compile times than Haskell. And first-class support for mutability. And you don’t have to deal with monad transformers.

                  Haskell is still a much higher level language, though. I experimented last night with translating a programming language core from Haskell to Rust, and I quickly got lost in the weeds of Iterator vs Visitor pattern vs Vec. Haskell is pretty incredible in its ability to abstract out from those details.

            3. 0

              Your descriptions of both advocates and Go haters match my experience exactly.

          2. 14

            I’ve been using Go since around the 1.0 release and Rust for the last year or so. I don’t think either of them is going away any time soon. Go has less visible advocates, but it’s definitely still used all over the place. Datadog has a huge Go repo, GitHub has been looking for Go engineers, etc.

            Rust is more visible because it’s newer and fancier, but it still loses for me in multiple aspects:

            • Development speed - It’s a much pickier language and is much slower to develop in, though it forces you to get things right (or at least handle every case). Rust-Analyzer is great, but still fairly slow when compared to a simpler language.
            • Compilation speed - Go compiles way faster because it’s a much simpler language.
            • Library support/ecosystem - Because Go has been around for quite a while, there are a wealth of libraries to use. Because Rust hasn’t been around as long, many of the libraries are not as mature and sometimes not as well maintained.

            However, Rust makes a number of improvements on Go.

            • Error handling - Rust’s error handling is miles above Go. if err != nil will haunt me to the end of my days.
            • Pattern matching - extremely powerful. This is an advantage Rust has, but I’m not sure how/if it would fit in to Go.
            • Generics - In theory coming to Go soon… though they will be very different feature-set wise

            They’re both great languages, and they have different strengths. For rock-solid systems software, I’d probably look at Rust. For web-apps and services, I’d probably look to Go first.

            1. 4

              Rust also loses massively on concurrency model. Tokio streams are so subpar compared to channels.

              1. 3

                Tokio also has channels - MPSC is the most common variant I’ve seen.

                When Stream is added back to Tokio, these will also impl Stream.

                I do agree that having goroutines as a part of the language and special syntax for channels makes it much easier to get into though.

              2. 1

                Rust’s async/await is definitely more complicated than Go’s green threads, and is almost certainly not worth it if Go has everything you need for your project. However, Rust’s standard threads are extremely powerful and quite safe, thanks to the Sync and Send marker traits and associated mechanics, and libraries like rayon make trivial parallelism trivial. Just yesterday I had a trivially parallel task that was going to take 3 days to run. I refreshed myself on how to use the latest rayon, and within about 10 minutes had it running on all 8 hyperthreads.

            2. 2

              Spot on. They both have their annoyances and use cases where they shine.

          3. 6

            A further difference is that Go code is hard to link with other languages due to its idiosyncratic ABI, threading and heaps. Rust fits in better. Not just in OS kernels but in mobile apps and libraries. (I’m still not a fan of Rust, though; Nim fits in well too and just feels a lot easier to use.)

            1. 1

              I would say that is entirely compiler-dependent, and not a property of the Go language. https://github.com/tinygo-org/tinygo

              1. 3

                As long as the language has highly scalable goroutines, it won’t be using native stacks. As long as the language has [non ref-counted] garbage-collection, it won’t be using native heaps.

              2. 1

                Well, tinygo != go, and e.g. gccgo still reuses the official standard library from the main implementation (which is where lots of the frustrating stuff is located, e.g. the usage of raw syscalls even on platforms like FreeBSD where it’s “technically possible but very explicitly not defined as public API”).

      2. 9

        Golang is awesome. It works without fanfare.

      3. 8

        As someone who dislikes Go quite a lot, and is really enjoying Rust: I think they are different enough they serve different use cases.

        Go is much faster to learn, doesn’t require you to think about memory management or ownership (sometimes good, sometimes very, very bad: Go code tends to be full of thread-safety bugs), and is usually fast enough. For a standard web app that is CPU-bound it’ll work just fine, without having to teach your team a vast new language.

        On the flip side, I’m working on a LD_PRELOADed profiler that hooks malloc(), and that’s basically impossible with Go, and I also need to extract every bit of performance I can. So Rust is (mostly) great for that—in practice I need a little C too because of Reasons. More broadly, anytime you want to write an extension for another language Go is not your friend.

        1. 4

          Go comes with a built-in race detector. What sources do you have for “tends to be full of thread-safety bugs”?

          1. 11

            As someone with a fair bit of go experience: the race detector only works on races you can reproduce locally, as it’s too slow to run in production.

            Rust stops you from doing things that can race; go gives you a flashlight to look for one after you accidentally introduce it.

            1. 1

              That is a good point, but once a developer has encountered the type of code that makes the race detector unhappy, will they not write better code in the future? Is this really a problem for most popular Go projects on GitHub, for instance? Also, the “staticcheck” utility helps a lot.

              1. 3

                Unfortunately, there’s still, even among some experienced programmers, a notion that “a race will happen so rarely we don’t have to worry about this one case”. I’ve also seen the expectation that e.g. “I’m on amd64, so any uint64 access is atomic”, with no understanding of out-of-order execution etc. I assume in Rust this would be a harder sell… though the recent drama with a popular web framework (can’t recall the name now) in Rust seems to show that using unsafe is a kinda similar approach/cop-out (“I just use unsafe in this simple case, it really does nothing wrong”, or “because speed”).

      4. 3

        I think (hope) the world settles into Rust and Go. I don’t see the Bell Labs folks not going 3 for 3 with Go. Rust I enjoyed playing with many months ago and it was a bonus I guess (at the time) that it was backed by Mozilla. Misplaced loyalties abound. Maybe (in a decade or so) lots of things once written in C will Rust over and tcp/http stuff will be written in Go.

    23. 6

      Perl is glissading in the far North as Ruby remains buried deep in the bedrock to be found again. Again and again. But things fall apart; the center cannot hold.

      1. 3

        Yes, but this question was specifically about what rough beast you think will slouch toward Bethlehem to be born this year. ;)

        1. 3

          Oh, the beast… hmmm. Rust. I love rg, for sure. But Rust is approaching that era of “this will solve whatever problem I throw at it!” Also, it’s “new” and “fast”. I will say, rg has been a JOY. And that’s probably Rust “written well”. Not that I have any idea what “well”-written Rust should look like.

    24. 6

      There will be a massive surge of interest in security tools after the recent SolarWinds attacks.

      1. 12

        I admire your optimism.

      2. 7

        Isn’t the “tools” part kind of the problem? There’s never been a shortage of business people throwing cash at vendors for “security products” of varying usefulness (and with bugs in them), but there’s always a shortage of actually incorporating security as a process into everything someone does.

        1. 1

          Too real. At a minimum, it takes developers who think about unintended consequences. At a maximum, something like the Security Development Lifecycle.

    25. 5

      This is my favorite Ask post of all time.

      I think ’21 will be the year of Python.

      Pylance is pushing frontiers forward. GraalPython is pushing performance forward. Mypyc is pushing code generation forward.

      And everyone in the PSF is pushing Python forward.

      I think it may be the year of lisps, as well. I suspect Carp, Janet, and maybe others may emerge as powerful tools for browser apps, while WebAssembly brings user-facing environments forward.

      I’d like to see some advances in genetic programming, volumetric user interfaces, and for God sakes, we gotta figure out the verbal interface to those stupid smart home things.

    26. 4

      I love the promise of WASM and it’s gaining traction both on the server side and the client. This could be the year that WASM explodes in popularity!

    27. 4

      Laptop RISC CPUs

    28. 4

      It’s been a while since I had a hot, new web framework to kick the tires on. Here’s hoping someone builds Rails for Rust or something like that and it completely takes over the world.

      1. 1

        Did you get to play around with Svelte?

    29. 4

      Wireguard

    30. 4

      As it was since 1950s, this year will be the year of… Fortran!

      1. 1

        What kind of educational background does someone need to pursue employment around FORTRAN?

    31. 3

      The year of the mainframe, as a computing model if not as a technology: modern personal computers as dumb terminals, large batched jobs, large data sets stored remotely.

    32. 3

      I personally believe that whenever the https://luna-lang.org Team (now called Enso) finally releases something stable and usable, it will immediately become the most groundbreaking development of that particular year in software technology (even if not immediately recognized as such by many, and derided as “a toy”). Here’s to hoping this will happen in 2021… though they are certainly taking their time, sigh. Kinda “Duke Nukem Forever” mechanics behind that, I believe.

      1. 1

        I totally forgot about Luna (even though it was sitting on my laptop for a while), thanks! If they really pull this off and improve the performance (at least compared to last year, when I tested it last), it could be amazing for them and would expose a little more of Haskell’s potential, never mind that the Enso shift seems to have moved the codebase to Scala.

        Edit: Enso also seems to have changed a lot in terms of approach and visualizations. I am a bit disappointed, to be honest.

        1. 1

          Where do you see something about the “approach and visualizations”? I didn’t see anything visual from them the last time I checked, so I’m really curious what you’ve managed to find.

          Edit: Oh, I found some dev docs! I’d still like to understand what specifically you find to be “a lot different”, and why (and about what) you’re disappointed.

          1. 2

            Let me clarify this.

            About the approach: the shift away from Haskell and the Atom¹ front-end, as understandable as it is for the various reasons given on their dev blog, makes it a whole new stack. It is still impressive to move all this to a custom WebGL renderer in Rust while leveraging GraalVM² to, one day, provide interop between multiple languages. There is also a change in how they communicate about it: from the Luna language to the Enso IDE. It is nitpicking, but you switch from a visual programming language to a tool. All that is more “my little bubble view of the world” than any practical disappointment.

            As for the visual part: the Luna lang website has not been updated to reflect the Enso change. If you look at the last post, you will see the change in the graphical language. From a clear node-oriented visual, the shift puts the text language first and foremost, with a bit of widgetry around it for visualization. I preferred the graphical language used in Luna.

            Let me be clear: what the Enso/Luna team is doing is amazing, and I am still in awe looking at what they have done and what they do. I’m still keeping an eye on it and trying to play with it when I can.

            ¹: It is crazy how Atom (and now, to a lesser extent, VS Code) was used to provide new interfaces. In a less transformative way, Juno for Julia also used it as a backend.

            ²: GraalVM is maybe a bit like Atom, but instead of the front-end it provides a rule-them-all backend. I really see the appeal, but since I never did more than dip my toe into the Java ecosystem, I never followed closely what’s happening with Graal.

    33. 2

      This is a pet peeve of mine, but any prediction/forecast like this is useless without sufficient detail to make it falsifiable. What does one mean by “the year of the ARM workstation”? What market share? In which countries? What does “TypeScript broadly seen as an essential tool” mean? What evidence would one need to know that turned out to be false?

      Vague predictions without sufficient detail to be falsifiable tend to boil down to various fancy ways of saying “popular technology will keep being popular”. By putting it in concrete terms you force yourself to be honest.

      Additionally, there’s a huge difference between 95 % and 55 % confidence. I’d recommend anyone forecasting tech here attach a confidence level to their prediction. That way you can be scored, and we can get an aggregate crustacean prediction-skill score too!

    34. 2

      Simplification of “containers” (using the term very broadly).

      I think there is a strong trend in the direction of static linking and embedding files. Java has fat JARs, there are projects like Bazel, Cosmopolitan, and CloudABI, there are developments around microkernels, and Go is getting out-of-the-box support for embedding files. There is a ton of sandboxing mechanisms that are getting simpler (pledge and unveil on OpenBSD, Capsicum on FreeBSD, and various mechanisms on Linux).

      There are production-ready ways, like Nomad, to run these. To stick with the Nomad example: there are task drivers for JARs, for raw and isolated execution of binaries, and for QEMU (which could be used for microkernels), plus a plugin interface to build additional ones.

      And then there are Nix, developments around WebAssembly, and other approaches. WebAssembly might be a technology that will be used more and more, even if just to replace the last remaining Flash applications.

      With all of these in existence and services overall being developed, configured and run in more similar fashion I think it makes it a lot easier for new approaches to get their foot on the ground.

      As much as I dislike the (blind) hype around many of the developments of the past decade or so, it certainly led to software getting into a state that makes it easier to manage. In other words, it’s not that something like Docker as a container or cloud infrastructure made things simpler, but (also) that software developed with certain limitations in mind helped create a set of expectations, close to a “standard”, a “style guide”, or a set of best-practice rules, that is used almost globally. So overall I think we will see that being harnessed more and more.

      Something else I’ve already noticed in the last one or two years, that is not really a technology and is highly subjective (maybe someone here has more insight?). Some people that exclusively used laptops are switching back to desktop systems or at least use them again in addition.

    35. 2

      2021 will be the year of Notion, once their API is publicly available and integrations start to come.

      Sounds depressing, I know, it’s the industry we live in.

      1. 1

        I wish they just made Android widgets already.. and optimized startup time on Android more.

    36. 2

      For me, 2021 is the year of Elixir and the Phoenix Framework. I have plans to complete some project with them. Elixir seems to have a lot of enthusiasm behind it, but I don’t know how much compared to other languages. But, that’s not super important to me. I really like Elixir, so I’m going all in (besides my usual work in C and Lua). Any other folks tackling Elixir this year?

    37. 2

      What technology will “come into its own” this year?

      Belt drive 3D printers. Hopefully, 2021 will be a good year for me to play with my Creality Ender 3 Pro 3D printer, but I will need to spend some time learning OpenSCAD for my CAD modelling.

      Is 2021 the Year of the Linux Desktop?

      2021 will continue to be the year of the OpenBSD Desktop for me ;~)

      1. 2

        learning OpenSCAD for my CAD modelling

        Honestly, even though “3d models as code” sounds awesome in theory, I much prefer FreeCAD (PartDesign). Maybe it’s just because I was never good at geometry, but designing parts visually with a “feature editing” workflow is much easier and faster.

        (btw, there are interesting alternatives in the model-as-code space too)

        1. 1

          CadQuery looks really interesting - thanks for the link. I have been using Fusion360 and Solidworks for my MSc, but don’t like either enough to justify the price tag. I’ll need to have a play with FreeCAD, but having used OpenSCAD in the past (as it worked well on OpenBSD) I liked the model-as-code approach.

    38. 2

      Tailwind CSS :)

    39. 2

      Virtual Reality

    40. 2

      I recognize that this might be in the land of fantasy rather than solid prophecy, but…

      Nim’s ARC/ORC memory management will reach a level of perceived stability which allows its usage in production. It’s already usable for software where Rust excels, but with much better ergonomics and developer happiness. It will reach top 50 in TIOBE in 2021.

      Realistically, this would require a significant and successful PR effort from the Nim team, which they so far haven’t been able or willing to do.

    41. 1

      I bet Unreal will start eating Unity’s lunch very soon. Unity has been making a lot of weird business decisions, and Unreal has been stepping up its pedagogy (along with being developed by Actual Gamemakers). Stuff like Unreal Blueprints makes it insanely easy for anyone to pick up these tools, and people are starting to realize it.

      Unity still has a good lock on the “people starting out” crowd thanks to all the content, but I think it’ll flip.

      1. 3

        Hopefully Godot will start eating at them both :)

      2. 1

        I would say the royalties they ask for are still a big limitation (5% vs. a flat fee). Lift that and a lot of people would flock to Unreal.

    42. 1

      Server-side driven/rendered sites (Turbolinks, Phoenix LiveView, that new Basecamp stuff, etc.)

      1. 1

        My understanding is that this is servers serving… HTML? As opposed to a JSON API?
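
        As a toy sketch of my understanding (the names here are made up for illustration): a JSON API hands the client data to render with client-side JS, while a server-rendered endpoint ships finished HTML.

        ```typescript
        // Hypothetical example: the same data served two ways.
        type User = { name: string };

        // JSON-API style: serialize data; client-side JS builds the DOM.
        function renderJson(user: User): string {
          return JSON.stringify(user);
        }

        // Server-rendered style (Turbolinks/LiveView-like): ship HTML fragments.
        function renderHtml(user: User): string {
          return `<p>Hello, ${user.name}</p>`;
        }

        console.log(renderJson({ name: "Alice" })); // {"name":"Alice"}
        console.log(renderHtml({ name: "Alice" })); // <p>Hello, Alice</p>
        ```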

    43. 1

      Tools like Render will gain a lot of popularity.

      1. 1

        Weren’t these more popular in the past, when Heroku and App Engine were new? I’m not sure why there would be a huge resurgence of PaaS, or whether Render’s pricing alone can restart that trend.

    44. 1

      TypeScript will become even bigger. As JS codebases keep growing rapidly, old object-oriented approaches will start to appear more in them. Teams trying to use purely functional approaches will start to appreciate the multi-paradigm nature of JS.
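
      As a toy sketch of that multi-paradigm mix (the names here are made up for illustration), here is the same computation in an object-oriented and a functional style:

      ```typescript
      // Object-oriented style: state encapsulated in a class.
      class Cart {
        private items: number[] = [];
        add(price: number): this {
          this.items.push(price);
          return this;
        }
        total(): number {
          return this.items.reduce((a, b) => a + b, 0);
        }
      }

      // Functional style: a pure function over immutable data.
      const total = (prices: readonly number[]): number =>
        prices.reduce((a, b) => a + b, 0);

      console.log(new Cart().add(5).add(10).total()); // 15
      console.log(total([5, 10]));                    // 15
      ```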

    45. 1

      The beginning of the end for a big chunk of ad revenue as a source of income for tech businesses.

      (ok not a technology)