For me that’s shaping up to be the most important Emacs release in the past 10 years. LSP, tree-sitter, Wayland (the pgtk port) - it’s amazing how much progress the community’s making these days!
It seems weird to complain about the Mac Mini’s price just because it’s two years old. By their reported prices and reported benchmarks, the Dev Kit is 63% the price of the mini and offers 69% of the mini’s performance. That makes the mini seem astonishingly well-priced to me, in a market where its maker doesn’t even intend to compete.
But you're considering only the CPU/GPU performance, right? There's also the memory and the storage to think about.
That’s a good point. I didn’t dig into what benchmarks the author used and how well they might reflect overall performance. I picked a set of benchmarks from the article that seemed representative to take a swag at a percentage to compare against the price difference.
I do think the prices on the initial M1 systems have aged really well, overall.
For me the thing that always gets me with Mac pricing is the storage price “gouging” (subjective). I know that’s not what the article is saying, but like … I have a lot of files out there!
No kidding. Their storage prices always make me go smaller than I want to. I think I’d like it if they could do some kind of tiered storage like they used to do with the old “fusion drive” in the iMacs. i.e. Let me buy 256GB or 512GB of the super fast SSD they use now, and sell me a couple of TB of slower SSD built in. I do that with a TB3 attached NVME now, but it’d be a lot nicer if it was internal.
Always be upgrading!
This is profoundly wasteful and should not be encouraged.
I upgraded my CPU a couple years ago but I made a mistake and bought one that didn’t support ECC RAM because I misread the specs (which is easier to do than one may think). This last month I decided to update the CPU and get the ECC RAM. Turns out upgrading the CPU caused thermal problems and I ended up having to get a cooling unit which I then found did not fit in my case, so I had to get a new case as well. I had to refit the whole setup for my desk as the final kicker.
I probably should have thrown in the towel when the CPU was having thermal problems, but I was stupidly stubborn. Now I’ve spent considerably more than I was hoping for a setup that is only marginally better than it was before. And I have a CPU and RAM that I could sell, but they’re not worth very much and the hassle of selling it seems like a waste of my time.
“Always be upgrading” is a great way to produce a bunch of e-waste that we would be better off without. I hope to not have to update anything for about 7-10 years. If an 8-core home server with 64GB of (ECC) RAM is not enough for home computing needs in that time, we’re probably doing something very wrong.
(Side note: gaming culture has made hardware purchasing a total mess. I hope it fades into the sunset some day.)
This is profoundly wasteful and should not be encouraged.
Agreed. To be clear - the “always be upgrading” was meant as a light joke (and a reference to learning as a form of self-upgrade) to wrap up the article on a positive note. :-)
I think of upgradeability as something good, that actually reduces waste, but not something we should be doing all the time. E.g. I’d normally do a CPU upgrade every 4-5 years, but sadly no motherboards have a lifespan that long.
AM4 socket motherboards were supposed to, but AMD broke compatibility in the middle of AM4’s lifetime. I’m hoping they don’t make the same mistake with AM5.
Intel sockets change every 5 minutes though. No chance there.
It’s surprising that AMD would make that mistake given their history. They gained a lot of market share during the era when they supported ‘super 7’ (socket 7 with a few minor tweaks) for CPUs from 133-450MHz, with the same RAM (though you wanted to upgrade to a faster clock ideally), while Intel was pushing everyone to Slot 1 and RAMBUS memory.
The times have been good for AMD recently, so perhaps they decided they have the upper hand this time around. Personally, I don’t see any major advantages of DDR5 over DDR4, and I’m in no rush to switch to it.
Fair point about Intel. I honestly don’t get why they need new sockets every couple of years. I was definitely very disappointed when AM5 was announced, as I had originally expected that Zen 3 would be targeting AM4. And now that AM5 doesn’t support DDR4, changing the CPU means changing the MB + the CPU + RAM, which is almost a new PC. At least they said that all the MB dimensions are exactly the same, so coolers for AM4 should fit AM5 just fine.
I do think the main reason for the constant platform upgrades is to appease the vendors of MBs, RAM, etc - they want to sell something all the time, not every 5 years. There are many players in the computer hardware world and everyone wants their cut.
I just built a gaming computer for my brother to last 10+ years. Planned upgrade of CPU in ~5 years, whenever the last AM5 CPU is released. GPU as needed, whenever prices are good. Every other part of the machine I picked to last through those upgrades. Always plan for upgrades to minimize e-waste. Few gaming PCs live 10+ years for exactly the reason you describe.
That’s a solid plan, hopefully the last AM5 chip won’t be released 2 years from now. I guess I’ll stick with my current CPU for a couple of years more, and then I’ll evaluate my options. I still have high hopes for Intel 4 and Meteor Lake. I love AMD, but I really want Intel to get their act together and build something great again.
Btw, AMD did make the vague announcement that they “are going to support AM4” for many years to come, whatever this means.
I thought as much, although they might have another “special edition” of the Ryzen 5000 lineup in their plans - e.g. some Ryzen 5900X3D or something like that.
Unanswered question: What kind of terrible thermal paste was used with the original cooler? I’ve seen stuff get hard, but not turn into super glue.
Also, if the backplate is sliding around inside the case, it is possible it has damaged the components on that side of the motherboard. I would recommend trying to closely inspect the motherboard for that kind of damage, or at least checking the bottom of the case for debris from such components.
Unanswered question: What kind of terrible thermal paste was used with the original cooler? I’ve seen stuff get hard, but not turn into super glue.
It was pre-applied on the heatsink when I got it (if I recall correctly) and I never bothered to check. I did a quick search now, but I couldn’t find the exact make of the paste.
Also, if the backplate is sliding around inside the case, it is possible it has damaged the components on that side of the motherboard. I would recommend trying to closely inspect the motherboard for that kind of damage, or at least checking the bottom of the case for debris from such components.
Thanks for the advice. I’ll take a look at some point, but I guess that if it’s working now it’s probably fine.
Interesting story! It’s amazing you were able to straighten the pins. Some years ago I bought a used motherboard that had a few pins misaligned, and after straightening them it still didn’t work, so I had to buy a new motherboard.
I’ve recently been (slowly) working through the Cornell class since it came up on Hacker News a couple months back. Your experience mirrors my own. OCaml is a lovely language with a lot of cruft built up over time: the equivalent of finding an old muscle car at an estate sale. Sure, the oil hasn’t been changed in 20 years, but boy would it fly with a little love!
I looked through the workflow thread that you posted and I didn’t see anyone mention how you can use shift + enter to send highlighted code to utop. I can’t remember if there was any extra config. Might not be super helpful given your emacs background though.
Given how you like Clojure and have bounced off of Erlang a couple times I was wondering if you had tried out Elixir. The creator worked with Ruby and had a soft spot for Clojure. Might be right up your alley.
Yeah, I did play with Elixir for a while and it definitely got many things right. It has great dev tooling and a much better standard library than Erlang, but for some reason I’ve always found it more enjoyable to program in Erlang. :D Funny enough, despite my Ruby background I never considered the Ruby-like syntax an improvement. We did a couple of small projects in Elixir on the job and generally the team was happy with it, but we didn’t have use cases where we could fully leverage its strengths (or rather the strengths of BEAM and OTP).
Lots of OSS projects are managed by some company, so I don’t necessarily see this as a bad thing. It might even be better for the project in the long run. This “open letter” seems like an overreaction to me.
That’s a nice piece, and one that shows how much OCaml has improved on that side. Thanks for sharing your experience!
I am really curious to see what OCaml will look like in 2 or 3 years, with OCaml 5 well established.
I listened to this podcast and found it to be very interesting - What is an Operating System? (Signals and Threads) on OCaml and friends. Might be of interest to others in this thread, if a bit late!
This list is somewhat dishonest. It starts with:
ship by default “cool” themes like Solarized (it seems it’s very important to have some dark mode these days)
Yes, this is superficial and better left up to the user.
make Emacs more mouse-friendly (e.g. right clicking would open some context menus, there would be more tooltips on mouse hover and so on)
have a fancy configuration wizard to help with the initial setup of Emacs
These are, however, really powerful things that would greatly help both new and old users!
The mouse support in Emacs is already great, but many experienced users turn it off (menu-bar and tool-bar) to reduce clutter in the UI; adding that back in with context menus provides a way to help both experienced and novice users.
And many new users go straight to unnecessary heavy Emacs configuration frameworks, even if just a few basic customizations are all they want. This then makes life difficult for other experienced users who want to help them, but don’t know how to navigate the heavy-weight framework. Encouraging more use of customize and built-in Emacs features helps everyone.
Yes, this is superficial and better left up to the user.
Particularly when the author himself is responsible for an absolutely terrible emacs port of Solarized, which is best avoided at all costs. The last thing Emacs needs is new users asking why turning on the solarized theme inflicted a bunch of randomly different font sizes on their poor code when they’re expecting the kind of solarized experience they’d get from any other editor’s solarized port, instead.
Well, I’m sorry you feel like this about Solarized and you’re always free to voice your concerns on the issue tracker as well. ;-)
You are missing my point, though - for how many people exactly will bundling a theme like Solarized (or any other popular theme, ported to Emacs up to your standards) and making it the default be the deciding factor in whether or not to use Emacs? :-) After all, most themes are trivial to install even today. There are 20+ themes in the default Emacs repos today, plus a few themes that are bundled with Emacs.
To be clear - I don’t mind making Emacs more approachable to newcomers at all. I’m just skeptical this is going to make a difference for the broader adoption of Emacs. Also I view user-friendliness and ease of use as being orthogonal to being “modern” (e.g. Emacs can be “dated” and user-friendly at the same time), that’s why in general I don’t like it when those two concepts get conflated.
Everyone.
Emacs needs to be as easy to use out of the box as atom. Then, you can configure it endlessly. Just because it’s building material doesn’t mean it needs to arrive in shambles.
The fact that emacs is in such a sorry state doesn’t help anyone. It’s bad for new users that need to do so much work. It’s bad for existing users because it drives so many people away and makes the community smaller.
And maybe. The fact that emacs isn’t modern isn’t a feature. It’s an indication that it hasn’t kept up with the times, that bugs and slowness persist over decades, that there is truly a rotten broken core inside emacs that requires serious changes.
Heavy emacs user for almost 20 years.
Most of the perceived lack of modernity in Emacs comes down to bad defaults, and packages which are near-universally used not being in core. Many of the bad defaults can never be changed, because it’s the policy of the developers to not disrupt the workflow of people who have been using Emacs for 30 years. And getting packages included in core is nearly impossible due to politics and the need for copyright assignment. Even packages that are included in core are generally not activated by default.
I’ve been using Emacs for 30 years now, but I’ve kept up with the times. My Emacs configuration would be as newbie friendly as VS Code if you: enabled cua-mode, re-enabled the menu bar, and told the newbie that the command palette is on M-x. Maybe enabled tab-line-mode to make buffer handling more legible, and turned on treemacs by default rather than having it on a hotkey. But it does depend on quite a few packages from both ELPA and MELPA.
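Concretely, that’s roughly this handful of lines in init.el - a rough sketch rather than a drop-in config, and treemacs here is assumed to already be installed from MELPA:

```emacs-lisp
;; A rough sketch of the newbie-friendly defaults described above.
;; treemacs is assumed to be installed already (M-x package-install RET treemacs).
(cua-mode 1)               ; familiar C-z/C-x/C-c/C-v bindings
(menu-bar-mode 1)          ; keep the menu bar around for discoverability
(global-tab-line-mode 1)   ; show buffers as tabs, which newcomers expect
(when (require 'treemacs nil 'noerror)
  (add-hook 'emacs-startup-hook #'treemacs))  ; project tree visible on startup
;; The "command palette" is simply M-x (execute-extended-command).
```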
I couldn’t agree more. Although, emacs has disrupted people’s configurations and changed defaults plenty of times before. There is no reason why they couldn’t ask us all to add (give-me-plain-emacs) to the start of our current configurations to maintain a bare startup, while giving everyone a much better experience. They’ve toggled modes before, removed functions, etc. Many of these required more than a 1 line change.
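Just to illustrate the idea - give-me-plain-emacs is purely hypothetical, nothing like it exists today - such an escape hatch could be as small as a function that flips the friendlier defaults back off:

```emacs-lisp
;; Hypothetical sketch only; no such function ships with Emacs today.
;; The idea: friendlier defaults are on out of the box, and long-time users
;; opt out with a single call at the top of their init file.
(defun give-me-plain-emacs ()
  "Opt out of the (hypothetical) new-user-friendly defaults."
  (cua-mode -1)
  (menu-bar-mode -1)
  (tool-bar-mode -1)
  (global-tab-line-mode -1))
```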
Many of the bad defaults can never be changed, because it’s the policy of the developers to not disrupt the workflow of people who have been using Emacs for 30 years. And getting packages included in core is nearly impossible due to politics and the need for copyright assignment. Even packages that are included in core are generally not activated by default.
Yeah, that’s what I mean by “there is truly a rotten broken core inside emacs that requires serious changes.” Politics has been strangling emacs for a long time. Every time the word “freedom” appears in an email on emacs-devel, it invariably is a discussion about how to make all of our lives just a little bit worse.
My Emacs configuration would be as newbie friendly as VS Code if you
I think you should try this and see what people do when you set this up for them. In my experience onboarding students, the unfriendliness of emacs goes much deeper than that.
Yup. I’m a 30 year emacs user, and if it came out of the box with cua-mode the first thing I’d do is m-x plain-emacs, without complaint. No reason this community should stay as hidebound as I am.
Most of the perceived lack of modernity in Emacs comes down to bad defaults, and packages which are near-universally used not being in core.
Exactly.
I have been using Emacs for 20 years, and my goal is to end up with a tiny .emacs / init.el.
Emacs is moving towards making that possible, but progress is a bit slower than what would be desirable. A quantum leap came from Emacs core adopting a package manager, and from use-package, which is sort of a de facto configuration standard.
Right now, the only two major annoyances in stock Emacs are make-backup-files and the poor completion frameworks. If I start a stock Emacs on a remote machine, it litters all my directories, and that should not happen by default. The second is easy to fix with vertico, marginalia, orderless, corfu et al. These are great drop-in replacements for the stock completion frameworks, unlike ivy or helm, and another major milestone towards making Emacs simple and easy.
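For reference, both fixes amount to just a few declarative lines; a minimal sketch, assuming the packages are installed from GNU ELPA / MELPA via use-package:

```emacs-lisp
;; Minimal sketch of the two fixes mentioned above.
(setq make-backup-files nil)   ; stop littering directories with foo~ backups

(use-package vertico           ; vertical minibuffer completion UI
  :ensure t
  :init (vertico-mode))

(use-package marginalia        ; annotations next to completion candidates
  :ensure t
  :init (marginalia-mode))

(use-package orderless         ; space-separated, order-independent matching
  :ensure t
  :custom (completion-styles '(orderless basic)))

(use-package corfu             ; in-buffer completion popup
  :ensure t
  :init (global-corfu-mode))
```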
Other major annoyances come from packages. In an ideal world, I should be able to open stock Emacs, install any packages I like and use them with zero configuration. However, there is still a horrible culture of asking users to copy-paste code into .emacs to make basic functionality usable.
poor completion frameworks
I used to be a heavy helm user, but I’m now using the built-in minibuffer completion, thanks to a recent (i.e. only on Emacs 29/git master) change that lets you scroll through completion candidates from the minibuffer. I still need embark and orderless, but being able to drop something as (I thought) vital as Helm is a big step.
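In case it’s useful to anyone, the relevant bits of my setup look roughly like this - a sketch only; embark and orderless come from GNU ELPA, the rest is built-in as of Emacs 29:

```emacs-lisp
;; Rough sketch of a Helm-free setup: built-in minibuffer completion
;; plus orderless matching and embark actions (both from GNU ELPA).
(setq completion-styles '(orderless basic))  ; flexible matching everywhere
(setq completion-auto-select 'second-tab)    ; Emacs 29: second TAB moves point
                                             ; into the *Completions* buffer
(global-set-key (kbd "C-.") #'embark-act)    ; act on the candidate at point
```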
“sorry state” and “a rotten broken core” are pretty harsh statements and it’d be nice if you were a bit more specific as to what exactly is so broken (especially at the core of Emacs).
FWIW - at least from my humble perspective Emacs has improved more in the past 5 years than in the previous 15 years, so I can definitely argue that things are not as bad as many people often portray them to be.
The fact that emacs is in such a sorry state doesn’t help anyone. It’s bad for new users that need to do so much work. It’s bad for existing users because it drives so many people away and makes the community smaller.
Agreed. We should want to build and extend communities. This post on the other hand is very much telling the people who want to do so to get off “my” lawn. Perhaps, instead, it would be better to be humble and learn from the projects that are attracting and building community.
I’m not a big Emacs user. I’m not saying that I don’t like it, just that I don’t use it that much. I keep it there on my task bar as one of the various editors I use for various jobs. One thing that would increase my personal usage of it, though, is speed. But I know that interpreting Elisp is not necessarily quick, and writing a new interpreter while maintaining compatibility with the tens of thousands of Elisp extensions already in the wild is highly improbable.
The latest release of Emacs has native compilation of Elisp (via libgccjit): https://www.emacswiki.org/emacs/GccEmacs
There’s some hope on this front, but I’m not holding my breath - https://www.emacswiki.org/emacs/GuileEmacs
Hey, you might as well look at Lem: https://github.com/lem-project/lem/ it’s built in CL and it is usable right now, for CL and other languages out of the box. It works on the terminal and it has experimental Electron and OpenGL UIs.
A resounding achievement. Congrats to the whole team and to you specifically, Bozhidar! You’re a credit to every language community you’re in.
Shocker, right? :D But there are always some people like Viktor who care about the docs and work to restore the balance in the Source.
Great articles, Viktor! I read all 3 of them in one go and I loved the story of your journey and all the insights you gained along the way. Keep rocking!
Congratulations on the great milestone! It’s nice to see that you’ve managed to be commercially successful with Sidekiq, while staying an independent OSS hacker.
I’ve been a Sidekiq user since day 1 and I worked on adopting Sidekiq in two different companies. I still remember how painful the second migration was for one of the projects, because the code there relied in some places on the db transactions in DelayedJob.
It was also amusing to see that a good recipe to become a tech billionaire is to write a Ruby background job processor. :D
Yay, I’ve been using this branch for months to get the Wayland scaling and I’ve had zero problems.
I think Wayland support is so important that they should cut Emacs 29 ASAP just for it, but we’ll probably have to wait a couple more years for a “stable” release.
I am a Wayland user with a boring desktop use case. I like the:
I feel that last point. When using X11, it feels like I have a choice between no compositor (and the lack of features and slightly buggy rendering that entails) and compton (with the very bad performance that entails). Workspace switching with i3 would make every OpenGL window look initially blank before it pops in half a second later when using compton.
Sway, on the other hand, Just Works.
Everything else on that list is important too of course.
Curious: what specific features do you like about the compositors? I’ve personally found them completely useless.
I don’t remember 100% since it’s a long time ago, but IIRC, a compositor was necessary for reducing screen tearing, and parts of the screen would sometimes fail to update properly without a compositor.
Screen tearing is easier to avoid with a compositor (nothing to do) than without. And in my case, true transparency (this helps me to check if a window is focused, but also for eye candy).
X can do all those things too, except maybe the refresh rate thing - I’m not sure about that. It’s a pity that applications are just being rewritten for Wayland instead of fixing their bugs on X and maintaining full compatibility.
X11 can’t really avoid screen tearing. There are lots of different hacks which each somewhat reduce tearing in certain situations, but I’ve yet to see an X11 system which doesn’t have at least some screen tearing in at least some circumstances – while I’ve yet to see a Wayland system with screen tearing issues.
Fractional scaling on X11 is a hack which partially works sometimes but doesn’t work perfectly.
We’re long overdue for an X11 replacement. We’ve been adding hacks upon hacks upon hacks for far too long, and the fundamental limitations are really starting to show. It’s not enough to “just fix application bugs on X”. Screen tearing by itself is enough reason to throw X11 in the garbage and start fresh with something that’s actually reasonably well designed.
As far as I understand, X cannot have different fractional scaling factors for different monitors, while Wayland can. It’s the main motivation for me to use Wayland, given I have a 1440p 25" monitor and a 2160p 27" one.
I was always curious about fractional scaling. I thought that Wayland didn’t handle it (see https://gitlab.freedesktop.org/wayland/wayland-protocols/-/issues/47). From my understanding, you are rendering at 2x, then downscaling. If you happen to have two monitors, this can be a lot of pixels to render.
From my understanding, you are rendering at 2x, then downscaling.
This is how macOS does it, IIRC.
I think there are non-mainline GDK protocols for it. At least all the non-XWayland applications I use look perfectly crisp.
For me personally, I use Wayland because it’s the only thing supported on my hardware (MNT Reform).
The only thing I use it for is to run XWayland so I can immediately launch Emacs and exwm, and then I run Firefox inside that.
At first glance it appears slower than using Wayland directly, but that’s only before you factor in the time you have to spend retrieving your laptop because you threw it out the window in rage because Firefox’s keybindings (when running outside exwm) are so bad and holding down C-n opened seventeen windows again instead of scrolling down.
Probably, but if you run Firefox outside exwm there’s no way to fix the key bindings. The Firefox extension mechanism bans rebinding C-n and C-p for “security reasons”, making it completely unusable for me.
Supposedly it does https://twitter.com/omgubuntu/status/1379818974532280321?s=20, but I guess exwm doesn’t.
No, the problem is that Firefox doesn’t allow the user to change some keybindings, like C-n. EXWM is the solution to that problem.
For a while now already. Gotta define the MOZ_ENABLE_WAYLAND=1 env variable and it will start in Wayland mode. Have been doing this for 2(?) years now on Sway without issue. (Maybe the env thing is a thing of the past…)
I’m surprised by your question. My (perhaps naive?) understanding is that the Xorg maintainers decided years ago that they don’t like the X protocol anymore for modern graphics stacks, that the stuff that was added on top of it is painful to design/implement/maintain, and that they wanted to start with something fresh with a more solid design. Wayland, as I understand it, is that “something fresh with a more solid design” (but its own protocols are still evolving and probably imperfect, as everything is).
With this understanding, if I have the choice to use both (that is, I’m not depending on workflows that are not supported on Wayland for fundamental reasons (no support for easy forwarding) or due to lack of maturity), it seems natural to use Wayland. I want to use the new stuff that is the future and benefits from a growing userbase to mature (and in turn benefits more users who currently have fewer options), and not the old stuff that keeps working but people dread maintaining.
So: my use case is “moving to the new software stack that was designed from the start to be a better, more robust successor than the old software stack”.
From my PoV Wayland is still in “second system redesign” territory like the early days of PulseAudio. Some people find it useful, some people like to help test the bleeding edge, but it’s not the default in anything I set up and since I’ve never had any issues with X I don’t currently go through the extra work to set it up. The only time I’ve worked with Wayland was to help my mom switch her computer back to X because some app was misbehaving under her Wayland setup.
But if it’s working for you, that’s great of course. Just providing my PoV so hopefully my question is less surprising now.
There is always a bias towards “not fixing what ain’t broken”, so once you know a tool you’re likely to stick with it.
That being said, I don’t think Wayland represents enough progress to consider it a successor to X. It’s a competitor for sure, but I think by the time I move off X it won’t be to Wayland.
With most Linux distros gradually switching to Wayland and Windows adopting it as well for WSL, I think it’s fairly certain that Wayland is going to (mostly) replace X in the next 3-5 years. I doubt some other alternative will emerge before the transition to Wayland is complete.
Desktops built on Wayland are amazing compared to desktops built on X11. I couldn’t go back anymore.
It was one of the first programming languages that I’ve learned and I still have a lot of fond memories of it. Happy birthday!
Tangential, but it’s a bit disappointing to see this (and others) today. I thought the internet had kind of come to the consensus that ‘April Fools’ intentional misinformation posts like this have been done to death, and that we had sort of moved on to creating fun toys (like the Reddit pixel thing).
The internet is made up of billions of people. Why do you expect consensus on anything?
True that. It’s impossible to agree on anything and to please everyone.
I’ve never been big on April Fools’ myself, but I felt the OCaml team played it well this year. It seems, however, we’re at the point where we can’t even agree on whether it’s OK to share something (subjectively) funny here.
It’s OK! People enjoyed it.