Following up on my post from two years ago:
What technology will “come into its own” in the coming year? Is 2023 the Year of the Linux Desktop? Will Rust replace all other programming languages? Will Copilot make us all obsolete? Will DALL-E become sentient? Is Fortran ready for a renaissance?
What technology is going to be the one to know in 2023 and beyond, in your learned opinion?
(This question is intentionally open-ended, in the interest of driving discussion.)
AI being used in inappropriate ways
Are there inappropriate ways to use AI? Ineffective, maybe
Figured they meant unsavoury, malicious ways.
That’s already happening. I was contacted by multiple AI-based GPT bots on Telegram pretending to be real people over multiple days, eventually trying to sell crypto.
That makes more sense
A lot of these major models come with model cards, that describe appropriate and inappropriate usage, how it was trained, etc.
I made a picture of “a cat that looks like a butt”, and I don’t think that was a very appropriate use of the technology.
Every tool can be used for inappropriate purposes. I believe this is not controversial. If this is not sarcasm and you are serious, I recommend you reevaluate your position.
Pretty sure they just misinterpreted the meaning of “inappropriate” in this context.
There will be a new JavaScript framework.
Only one? Very optimistic
We can only pray.
Only one… thousand.
And, like every year for the last 7 years, it won’t dethrone React or make any particular impact. Commenters will not miss the opportunity to make tired old jokes about JS frameworks, however 🙂
2020 was the year of the Linux desktop, when most Steam games could be played by simply enabling Proton.
2021 was the year of Nix, bringing the first “easier to read than to write” configuration language to the masses (well, at least to the millions maintaining *nix machines).
2022 was the year of Linux gaming, when the Steam Deck was released, solidifying Linux’s position as a good gaming OS.
2023 is hopefully the year of HDR (10+-bit colours) on Linux. And the start of the next AI winter, as people realise that a statistical language model is neither necessary nor sufficient for “intelligence” (whatever that is).
I think you’re right that the era of the Linux desktop is unironically here. A lot of devs are gamers and so have Windows on their home machine. Before, you had to deal with the mess of dual-booting (which basically requires you to learn how UEFI works) so you could boot back and forth between Linux and Windows for projects and gaming. So a lot of people just didn’t deal with it, and kept using Windows. Now Windows is growing steadily more abusive of its users in ways that even devs can’t hide from, and Linux can play nearly all of their games. The list of reasons to stay on Windows, even for dual-booting, is growing pretty small. Some similar logic is probably applying to non-devs. I’m not exactly a trend-setter, and I finally switched over from Windows to Linux this year.
I don’t think you’re right about the AI winter, though. ChatGPT made it clear that the style of internet search that has reigned since about 1995 is on its way out (along with everyone complaining about how search results have gone to crap). It won’t be long before much of people’s non-social-media internet access is mediated by a language model like ChatGPT.
The problem with the last bit is that search results have gone to crap largely because of using machine learning in searching. It used to be the case that constructing search queries was a skill that you could improve at - the search engine would give you back everything that matched what you asked for (or at least the top n matches), and you could refine your query to get narrower and more specific matches. You can’t do that anymore: every search returns a set of results that would be statistically likely to be clicked on by an average person entering one or two of the words from your query. Attempts to refine searches will just be ignored.
I suppose you can argue this is some kind of uncanny valley between logical search and ChatGPT-based search, but I doubt it. Given the confident wrongness of ChatGPT answers to questions, I’d expect that search using it would be actively worse.
God, I hate this with a passion. Yesterday I was searching for (fragments of) a specific error string output by a certain tool. It still returned pages that didn’t even include the words I was looking for. Very frustrating!
Oh, I completely agree ChatGPT will be incredibly annoying, comparable to the email spam problem but for the entire web. Just that we’ll get an AI winter once people realise it’s not very good at anything useful, only at generating statistically plausible text.
Thing is, generating statistically plausible text can be incredibly useful. Not just for creating more spam, but also for actual productivity. For search I would say it is almost completely useless, as you have to go out and verify every little detail it gives you to see whether it is actually correct. It can be helpful for getting ideas on what to search for, though.
Where it shines, for me, is its ability to generate and mangle text. Over the last few days I have used ChatGPT to write technical documentation and review comments. If you look at the end result, about 80% of the actual content was produced by me, but having it turn that into nice readable text still saved me significant time.
People are still figuring out how to use it. Even if there is an AI winter coming for the research part, there will be a lot of activity to transform the current AI into usable tools in 2023.
I think there’s much more accuracy that can be squeezed out of large language models. There’s probably still room to throw more data at it.
I also assume it could be combined with a search engine to fetch relevant references in real time, so that it could automatically verify what it’s saying.
I joined the Linux side because games now work on it. I’m not going back to Windows :D
I really think Rust will become “the” systems language for new project starts, if not in 2023 then in 2024, if it doesn’t have some sort of spontaneous catastrophic failure.
I think there’s going to be a renaissance in on-premises computing, and while The Cloud will continue to grow, there will be more people running their own stuff and/or smaller datacenters will come back into fashion, at least a bit.
I think Android is going to fall below 40% market share in the United States and maybe elsewhere.
That surprised me, but it actually doesn’t require much of a shift. For 2022, these were the stats in the US:
So Android market share needs to drop 0.1% for this prediction to come true.
What do you expect to push Android’s market share down, and why? Are you thinking that iOS will pick up that market share?
iOS, exactly. I have two teachers in my family and for teens, iPhone usage is approaching 100%. Practically no Android devices at all. And these are public school kids in an economically diverse area, so it’s not just rich kids.
We’re gonna have to figure out how to make software that actually runs locally again. The public cloud is too expensive, complicated, and hard to operate safely for most projects and teams, and desktop and mobile apps are increasingly unable to function without broadband. (Don’t even get me started on the entirely-standard, deeply-integrated “analytics” hooks mining every shred of valuable data they can on each and every person to touch most apps/services/sites.)
For me, that means working on smaller things that work independently, and finding ways to repurpose and scale down tools that are normally used to go the other direction. (Tracing tools and load-balancing proxies, for example, are both equally useful for squeezing down onto smaller and smaller hosts as they are for scaling out to giant clusters.)
I’m also thinking a lot about how to make self-hosting more accessible and realistic for folks who don’t already build software all day. Nextcloud, Yunohost, and other distributions of OSS services are a good start, but we really need simpler, better-supported tools for messaging, data management, and publishing that can actually run on the computers people own and have access to.
I think scaling down is going to be increasingly popular. If you look at technologies that have gained significant market traction, they’ve all come from the low end. It’s incredibly rare to build something that targets the Fortune 500 and then have it grow into a mass-market product. It’s far more common for things that target SMEs and even hobbyists to grow up to market dominance.
This is something that the big cloud providers are largely forgetting at the moment. Entry-level offerings from the big players are far too expensive to build a hobby project with. MS DOS, MS Office, and Windows all got popular because they were cheap and easy to pirate for the people who couldn’t afford even the low price, and ran on cheap hardware. UNIX remained niche when it ran on big workstations and became popular only when Linux / Minix / *BSD started running on cheap 386-class hardware that let hobbyists tinker with it. The current big products are vulnerable to disruptive technologies.
2023 will be the year of color e-readers.
So as background, there are two main types of color e-ink tech: Color Filter Array (CFA), and multi-dye.
CFA is where you slap a giant static slab of stained glass on your screen, with a red-green-blue red-green-blue pattern on it that matches up to the pixels. This turns every 3 monochrome pixels into one color “pixel”.
CFA is fundamentally limited by geometry, and cuts both contrast and pixel density down to a third (a 300 PPI monochrome panel becomes an effective 100 PPI in color). It’s just bad. It also can’t be disabled, because it’s literally just a tinted sheet slapped in front of the screen. It’s also the only color tech used in e-readers so far. CFA screens suck hard without a backlight (and half the point of e-ink is that it doesn’t require one), and they’re a technological dead-end.
Multi-dye has really nice contrast and is capable of 300 PPI, just like monochrome. But historically it hasn’t been used in e-readers because it takes too long to refresh. Too long for an e-reader!!! Specifically, it’d take 10-30 seconds to refresh once, because cycling between colors essentially requires one refresh per dye.
Buuuuut E-Ink recently released a multi-dye screen (“Gallery 3”; they call their multi-dye tech ACeP) that refreshes in 1.5 seconds (it can actually do a crappy color refresh in 750ms and a normal monochrome refresh in 350ms). Most importantly, it can monochrome-refresh nearly as fast as a monochrome screen, so it has almost no downsides compared to monochrome. That brings it up from a gimmick to something people might actually want to use for, e.g., reading comics.
So yeah, 2023 will be the first year with a color e-reader that actually has a shot at mass adoption.
I’m thinking of pulling the trigger on a Dasung Paperlike 253 for programming, and have always wondered how far away basic color is for e-ink in the monitor form factor. At least having black, white, and red would be quite the step up, similar to those tricolor Waveshare screens.
WebAssembly in more places.
AI fears.
I hope Zig starts beating Rust as a systems language this year.
I don’t think Rust and Zig really compete, do they? Zig is a “by hand” memory management language.
I don’t understand under what circumstances I would choose Zig if I had already learned Rust, so I see them as competitors.
Zig’s comptime metaprogramming is very competitive with Rust’s const eval and macros, but feels simpler to write to me. I think once Zig hits 1.0 (sometime in 2025?) and there is a larger ecosystem for it, it will be more compelling for people starting new projects. I know I’d be using it more if there were a good selection of math / statistics packages available. @andrewrk has had a few ideas for memory safety that have yet to be implemented / tried out, and I think if there are real safety options it will give Zig a huge opportunity to grab market share.
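For the unfamiliar, here’s a small taste of what comptime looks like (a toy sketch, not from any real package): generics are just ordinary functions that take types as compile-time parameters and return new types.

```zig
const std = @import("std");

// An ordinary function, evaluated at compile time: it takes a type and a
// length and returns a brand-new struct type. No separate macro language.
fn Vec(comptime T: type, comptime n: usize) type {
    return struct {
        data: [n]T,

        fn sum(self: @This()) T {
            var total: T = 0;
            for (self.data) |x| total += x;
            return total;
        }
    };
}

test "comptime-generated type" {
    const v = Vec(f64, 3){ .data = .{ 1.0, 2.0, 3.0 } };
    try std.testing.expectEqual(@as(f64, 6.0), v.sum());
}
```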
This is probably adding on the competition bit: I know Rust and I am looking at Zig. I doubt my ability to write good, fast code in a language that’s as huge as Rust. I also feel that “knowing” Rust isn’t something you do passively, it’s basically a part-time job. It’s not one that I find particularly rewarding, as language design is neither a hobby of mine, nor something I’m professionally interested in, and it takes up a lot of time that I would much rather spend writing useful programs.
On the other hand I doubt my ability to write correct, non-leaky code in a language as hands-off as Zig… For something small and simple, sure, but anything with decent complexity is bound to end in memory management mistakes I think
Oh, Zig is fantastic at telling you if you leak memory. The equivalent of Valgrind’s leak checking is baked into the tooling.
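For the curious, it looks roughly like this (a minimal sketch; the exact deinit() API has shifted a bit between Zig releases):

```zig
const std = @import("std");

pub fn main() !void {
    // GeneralPurposeAllocator tracks every live allocation; deinit()
    // reports a stack trace for anything that was never freed.
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer {
        if (gpa.deinit() == .leak) std.debug.print("leak detected\n", .{});
    }
    const allocator = gpa.allocator();

    const buf = try allocator.alloc(u8, 64);
    defer allocator.free(buf); // delete this line and the leak is reported at exit
}
```

In tests, std.testing.allocator does the same job and fails the test if anything leaks.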
Funny enough I’ve had this same feeling about C++. Conversely, keeping up with C#, Java, and even C doesn’t feel so mentally taxing.
Oh, yeah, I wanted to say “just like C++” but I thought that was going to be a little too inflammatory, and I have C++ PTSD from my last big C++ project. It’s driving me nuts. You would expect a language that has so much more expressive power than C, and can encode so much safer idioms, to have less churn than C, not more.
IMHO this is mostly a failure of the C++ committee, though. The complexity of the language and standard library, and the way it was (mis)managed, have spawned a huge, self-feeding machine of evangelists, consultants, language enthusiasts and experts, and a very unpleasant language-feature hustle culture. I’ve seen a lot of good, clean, smart, super efficient C++ code, and most of it appears to have been made possible by a) a thorough knowledge of compiler idioms and b) ignoring this machine. Unfortunately, the latter is hard to do unless it’s organizationally enforced.
Nah, Zig needs to at least do 1.0 (well, as an alternative, Rust can do 2.0) to start to dream about outcompeting Rust :P
+1 for Zig!
Zig user here, doing all my WebAssembly projects with Zig. It’s quite easy and fun!
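For anyone curious how little ceremony that takes, something like this is the whole module (exact CLI flags vary between Zig versions):

```zig
// add.zig: compile to a .wasm module with something like
//   zig build-lib add.zig -target wasm32-freestanding -dynamic -rdynamic
// `export` puts the function in the module's export table, callable from
// JavaScript via WebAssembly.instantiate().
export fn add(a: i32, b: i32) i32 {
    return a + b;
}
```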
Came here to post exactly this!
I think it won’t quite break through to mainstream mainstream, but I think the reasons and needs will slowly start to become more apparent. In a world where Google Chrome is the dominant operating system, more stuff is running “at the edge”, and the unit of computation is becoming smaller and simpler on the surface, I think WASM and WASI have a strong competitive head start on solving some of these problems.
It won’t quite be the “write once run anywhere” of the JVM era, but I reckon it has a pretty fair shot at getting close enough to be useful!
That comment predicting the popularization of federated social media was right, though I don’t think anyone could have predicted how.
Speaking of, if you’re the same @briankung, I’m the same @lorddimwit. :)
I figured! And yep, I’m @briankung@hachyderm.io. Hi there from lobster space 👋
I truly hope 2023 is the year OSS funding and support models evolve. I would love it if we could form some foundations to ensure that developers of highly-depended-on code (npm packages, gems, or other) have some form of income, if they desire it.
eBPF and io_uring. By now they are very powerful technologies, but quite complex to master and a bit immature from a tooling and ecosystem perspective.
And zig :)
Nix getting adopted by more devs
I wouldn’t pin to 2023 as a year in particular, but:
djot lost me at forcing an empty line after each block. Isn’t the whole point of markdown to be human readable?
There are very heated debates about that :)
I’d say that the point of markdown is to codify exactly the plain text conventions.
The point of djot is to be a real markup language, which allows creation of complex documents, while staying as close to markdown as possible. It’s perhaps better to view it as a better LaTeX/AsciiDoctor rather than as a better markdown.
And yeah, for a markup language you want to make it unambiguous where one block ends and another starts. The price djot pays for that is an extra blank line before nested lists.
The running example is a hard-wrapped paragraph where a stray dash lands at the start of a line, something like this (an illustrative stand-in):
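```
Sales fell by
- you guessed it - exactly 40%.
```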
We don’t want this to be a list; it’s too easy for a lone list marker to end up as the first character of a line by accident.
The blockchain will finally be powered by AI, and will make Web 3.0 an attainable reality. This will revolutionize the way that we interact with people online, and accelerate the Fortran movement of virtual WASM desktops on OpenBSD.
(Feel free to help me train the next generation of ChatGPT in replies.)
2023 is the year of clever people manipulating stable diffusion in clever ways (such as the audio variant recently shared). Everything else is pretty much secondary.
High-Performance Computing will make a rapid and completely inexplicable comeback.
A good friend of mine is a professor of HPC and I think you’re right.
KiCad has taken long strides this year, and I think it will finally rise to prominence in professional use during 2023. I started using KiCad about 5 years ago for hobby projects, and the latest major update (6.0) seriously improved the usability and aesthetics of the interface. When KiCad has features truly on par with the likes of Altium and EAGLE, the paid software that costs thousands of dollars per license per year just won’t be able to compete.
I see it in a similar light. KiCad may do to EAGLE what Blender has done to 3ds Max, Cinema 4D & friends.
The Linux Desktop!
Aren’t *nix users the only people left who have a “desktop” anyway?
The *nix users seem to be the only people left who are serious about anything related to computer science and software development. Is it just me, or does it seem that *nix users are older, leaving a generational gap? Who’s going to write software when the Millennials retire?
GPT-6
Maybe “the year of Matter”? The new local-first open standard for home automation & IoT landing at the end of 2022 might mean a bunch of open software & hardware pushing what’s possible, while being privacy preserving and still having the ability to interop, if desired, with the legacy HomeKit / Alexa / Nest ecosystems.
I have a feeling that in 2023 the decentralised/federated approach will come into its own, now that Mastodon is becoming bigger due to Twitter’s decline in popularity and Matrix/Element are starting to shape up and may become more suitable for casual users. This may result in bigger mindshare for federated systems in general, which leads to new systems being built in this style.
I think if it keeps growing into a competitor that the big companies in that area actually have to worry about, then there will be some form of Embrace, Extend, Extinguish.
Would be cool though if it was otherwise this time.
The Verse language could have a big impact on FP. https://lobste.rs/s/240vwb/beyond_functional_programming_verse
Linux gaming.
Although I think it was already this year.
The Linux Desktop™
What counts as a technology ‘coming into its own’? Something that starts climbing or dominating the charts of the Stack Overflow annual surveys, where the average corporate developer (Java coders and the like) or the intended users feel pressured or motivated to try it? That’s more bottom-up. Or when business managers actually push the tech top-down in their organization? Also: is it end-user facing, or infrastructure (less fanfare, and less prone to investor mania)?
Nix (bottom-up) and things like ChatGPT and Copilot fit the criteria one way or the other, except that the funding environment is risk-off, and infrastructure stuff generally doesn’t get as much press. So Copilot- and ChatGPT-type things, if I had to pick, but the big push by capital allocators - just funding anything that breathes and moves related to the aforementioned - is not going to happen until the financial markets begin to roar again.
We’re at, I believe, the very beginning of a Nix super-cycle, where all sorts of technology will be implemented on top of it and a more user-friendly, UI-oriented approach to managing and using it will emerge. I think it will be bigger than the ChatGPT stuff in a way, but more behind the scenes, becoming ubiquitous but generally unknown to the public. Infrastructure. I think Nix has hit critical mass and will be rolled out, in some form or fashion, quietly and strategically by larger and larger organizations globally, with a feedback loop as more dev-friendly Nix services and tools are built.
I’ve seen multiple people here mention Nix, which I’ve never heard of, and it’s a… 20 year old package manager? There are an unbelievable number of package managers, what makes Nix something that warrants mention?
It has fully reproducible builds, easy mixing and matching of different package versions on the same system without polluting the global namespace and can even run alongside other package managers.
For instance, when I got started at my current job I was using Nix on Debian so I could have a local dev environment for the project in which Java is available, without having to install Java system-wide (ugh).
You can pin specific versions of packages in each environment, so you should never run into any bitrot due to system updates.
It is essentially used for solving the same problems people use Docker for, but without needing VMs.
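As a concrete sketch (the file name is conventional; the package choices are hypothetical), a project-local shell.nix like this gives you a JDK inside `nix-shell` and nowhere else:

```nix
# shell.nix: run `nix-shell` in the project root and Java and Maven exist
# inside that shell only; nothing gets installed system-wide.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = [ pkgs.jdk17 pkgs.maven ];
}
```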
I agree with @sjamaan, but also there is NixOS, which is an entire OS built around nix. It’s a declarative operating system, from boot and hardware configuration to users and configuration of packages.
People talk about building cattle, and then try to use stuff like Chef, Puppet, or Ansible to configure them in a “declarative” way, but it’s not actually declarative: under the hood, a bunch of wacky magic happens to try and turn it into a declarative system. Nix does a much, much better job of making it actually declarative.
NixOS gets us really close to the holy grail of totally declarative systems that make everything reproducible from the source to running binaries on a system.
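To make “declarative from boot to users” concrete, here is a fragment of what a NixOS configuration.nix can look like (the option names are real, the user is made up); `nixos-rebuild switch` rebuilds the running system to match it:

```nix
# Fragment of /etc/nixos/configuration.nix: services, users, and packages
# are all declared in one place, and the system is rebuilt to match.
{ pkgs, ... }:
{
  services.openssh.enable = true;

  users.users.alice = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];
  };

  environment.systemPackages = [ pkgs.git pkgs.htop ];
}
```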
Good point re: “declarative” deployment tools. I have struggled so much just getting reproducibility working properly in Ansible. Even “simple” things like managing cron jobs would be impossible, since existing jobs wouldn’t get removed when you removed them from your recipe. That’s such a mindfuck! And getting it to work requires so many artificial, order-sensitive, stateful constructs that you might as well be doing it all from a shell script.
The Nixpkgs repo is an explicit dependency graph, cryptographically. You can’t just add any old package; it must be built with other packages, all the way up. So, if I’m not mistaken, it took a while to get a lot of old critical software compiled where recompiling was difficult, and it’s a huge Metcalfe’s-law network type of thing.
Good things sometimes take time to build. However, it has hit critical mass imo.
WASM and WASI.
I have hopes for WASI. Standardized interfaces (be they actual standards or de facto standards through everyone using them) tend to result in a lot of innovation, because good (and bad) ideas have a lower barrier to being applied in practice/production.
But maybe it takes a bit longer for widespread use. However, I do think it’s reasonable to assume that important cornerstones of the ecosystem will be created and/or find adoption in 2023.
Encodings. I think we’ll see a bit of a shift in widespread adoption of various encodings, as there have been quite a few new ones and winners and losers are being determined “by the market”, so to speak.
In terms of the many things labeled AI, I think we’ll see a lot of disillusionment and interesting ways they get fooled, hacked, and tricked, and situations where their statistical nature shines through more and more.
I’m really curious about what laws regarding AI and copyright infringement will bring, because they might (or might not) affect copyright law at large.
While not a technology itself, I also expect a rise in vendors trying to lock in customers more again, especially through means of SaaS. I think both the economic situation and how the industry works at the moment are leading towards that, especially with companies and people trying to reduce cost. Maybe the Fediverse example also scares some companies and reminds them that people can leave.
I actually believe more in disillusionment and crashes. From AI to Web 4.0, programming languages, databases, maybe parts of cloud computing, smart homes (and smart devices at large), and JavaScript frameworks older than three months.
Not necessarily saying that all of these things are bad, but they are hopelessly over-hyped and over-marketed. That’s true for a lot of technologies, and even when they are good, it leads to them being used for things they weren’t built and designed for. Someone sees an opportunity to bend them to kinda fit another use case, and then the problems begin.
And of course like fashion trends there will be a lot of “next big things” that we’ll have forgotten about in two years.
cool-retro-term, ChatGPT, Rust, Go, Zig, and ENet
The year of the Linux desktop; I’ll show myself out
On a serious note, I’d want it to be the year of the Julia programming language.
A new social media explosion that isn’t a rehash of prior attempts, hopefully filtered through the lens of a novel backend.
Integration with TypeScript from API vendors that support user-defined interactions. In the JS days it was fine interacting with dynamic Sendgrid templates in an untyped and unsafe way. Now it’s irritating.
I want my Sendgrid templates typed for my particular configuration. And my A/B testing SaaS. No more “I hope I used the right template variables - is this one a string or an array of strings?”
Or at least, this is what I want to happen. The first SaaSes to do it well will eat the legacy systems.