Why not simply run the service on a WireGuard network and connect to that? Why pipe your private data through some unknown entity when you don’t have to?
Then you should run it on a proper server in a proper datacenter!
This is solely a question of scale, and not everything on the internet has to be scaled for millions of users, any more than every telephone number needs to be answered by a call center.
I keep a copy of my wedding video on a site that is accessible outside my private network (with appropriate credentials), because sometimes I want to show it to others. I don’t want it to be publicly accessible, I don’t want it hosted on youtube which will complain about copyrighted background music, and I certainly don’t need it often enough that it’s worth paying someone else for hosting.
Accepting these sorts of services is, I think, a key to building a more decentralized internet. I’ve been obsessed with availability and uptime forever - I remember bragging posts about uptime on usenet - but actually, those things don’t matter in the way I expect when the set of affected users is small.
Like Lua, Roc’s automatic memory management doesn’t require a virtual machine, and it’s possible to call Roc functions directly from any language that can call C functions
Was reading up on Roc and saw this, does anyone know what this refers to in practice?
Roc uses reference counting for memory reclamation, which is a local technique, instead of a tracing garbage collector, which is a global technique (“requires a virtual machine”).
which is a local technique
I’m curious what you mean by this? Are you referring to something more like newlisp, which ensures local memory is freed immediately after use? Or did you have something else in mind?
which is a global technique (“requires a virtual machine”)
Nothing about tracing garbage collection requires a virtual machine. A VM does make it easier to discover “roots” and be more precise, but as a counterexample, the Boehm-Demers-Weiser GC just scans the stack for anything that might look like a pointer, and all you, as a programmer, have to do is call GC_{malloc,realloc,free} instead of the typical malloc, realloc, free. It’s incremental and generational, but not precise; it can miss things. (This is a very simplified explanation of Boehm; there are a ton more details here.)
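To make the GC_{malloc,realloc,free} point concrete, here is a minimal, hedged sketch of what using the Boehm collector from plain C looks like. It assumes libgc is installed and linked with -lgc, and it is unrelated to how Roc itself manages memory:

```c
/* Minimal sketch of conservative GC in C with the Boehm-Demers-Weiser
 * collector. Assumes libgc is installed; build with: cc demo.c -lgc */
#include <gc.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    GC_INIT(); /* recommended before the first allocation */

    for (int i = 0; i < 1000000; i++) {
        /* Allocate through the collector instead of malloc(); there is
         * no free(). Unreachable blocks are found by conservatively
         * scanning stacks, registers, and data segments for anything
         * that looks like a pointer. */
        char *p = GC_MALLOC(64);
        strcpy(p, "temporary");
        (void)p; /* drop the reference; the block becomes collectable */
    }

    printf("GC heap size: %lu bytes\n", (unsigned long)GC_get_heap_size());
    return 0;
}
```

No virtual machine is involved; the program is ordinary compiled C, which is the point being made above.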
Tracing garbage collectors do not require a virtual machine; that statement (and not just that statement) is confused.
Feedback: it would be better (and I know it takes time) to explain why it’s confused instead of pointing it out and leaving it at that.
It sounds cool, but I can’t find any code! I clicked around a bit but couldn’t see any examples of real-world code (a hundred or so lines at least of actual non-example game logic) - am I just being blind/lazy?
There are quite a lot of examples, and I managed to find them at some point, but the documentation is so badly put together that I can’t find them anymore even knowing they exist!
One thing I personally wonder about is the compiler - how fast it is and how good the error messages are. Epic doesn’t have a very good track record with that 😅
I like the language design a lot though. I’m implementing an UnrealScript compiler, and the number of keywords you have to churn through is astounding. Good that they solved it with a unified @annotation and <specifier> syntax.
As a compiler developer, it seems weird to me that parametric types have the limitations they do (particularly the fact you can’t use them in vars.) Smells of the compiler being quite jank to me, but I haven’t seen the source code so no clue 😄
I like that they are continuing the legacy of UDK with UEFN/Verse. I’m excited to see what the community will build using the tools.
As a compiler developer, it seems weird to me that parametric types have the limitations they do (particularly the fact you can’t use them in vars.)
As another compiler developer, I suspect this is probably due to something similar to this in ML-derived languages: https://v2.ocaml.org/manual/polymorphism.html#s%3Aweak-polymorphism
I suspect that it’s due to the quasi-relational behavior that they’re working with; I remember Wadler explaining that type annotations are actually predicates which guard annotated variables, and that they are resolved relationally along with all of the other variable-unifying behaviors.
There’s a lot of good stuff here: effects, transactional memory, a concurrency story without colored functions (don’t be fooled by the documentation mentioning async over and over, it’s different stuff), a completely novel way of dealing with failure, planet-scale repl (not yet available), etc.
But I feel like they had a list of features and a specific syntax that they wanted to give to game developers, and they then had to shoehorn “the good bits” (the semantics) into these arbitrary features. Some things are truly baffling, like it has both significant indentation and {}, at programmer’s choice. Wat.
Also I think the documentation is not great. It says reference in the title, but as a reference it is not very thorough; in fact it’s more like a tutorial, but the sequencing is not very pedagogical for a tutorial either. In general the concepts are explained at a very basic level, but then sometimes it throws very technical concepts at you out of the blue, like type polarity. Also a lot of the links are 404 and they hijacked middle click. Ugh. They also publish memes in the documentation. In fact all images published as part of the documentation are kinda pointless.
I suppose I’m not the target user here, but I really wish there was a way to play with the language outside the Unreal editor. Surely this has to be possible, if anything, just for their own development team’s sanity. (Edit: after watching the video linked at the bottom of my post, apparently this is coming; they are going to release it as open source!)
In any case, I am really excited about this new language, quirks and all, and I am eagerly waiting for the next papers the team will publish. The first paper was really great!
Edit: there’s a quick video presentation: https://youtu.be/teTroOAGZjM?t=22512
More thoughts.
Notice how the naming convention is for values to be Capitalized, while types and functions are not. My prediction is that this is because types in Verse will be just (failable?) functions that validate values.
I said failable above, but they might be some other kind of functions. They have a total effect type (functions that are guaranteed to terminate), which could be safely called by the type checker.
In fact I think that class and other constructs are some sort of compile-time macro. I suspect some syntactic idiosyncrasies are about making the language homoiconic.
Verse has effects, but not algebraic effects. The set of effects is finite and fixed, and there’s no way to define arbitrary handlers. All effect processing happens in language constructs like if and for (which might be macros, so maybe there will still be some user-level support for handlers after all). Because it does not have a full-blown effect system, concurrency and monadic composition had to be baked into the language; with an effect system these features could have been implemented in a library instead.
Notice how effects are negative types. They are a promise about what a function will not do, not an indication about what a function might do. This is the opposite of other languages with effects.
defer is not needed if you have linear types. I suspect they considered linear types too difficult for the average programmer, so they went with what they consider a good-enough solution.
The type system is probably very novel, I suspect it is the most interesting part of Verse but so far has not been explained or talked about at all. I suspect it might have some form of dependent types.
A curious thing I found in the API reference: https://dev.epicgames.com/documentation/en-us/uefn/verse-api/versedotorg/verse
As far as I can tell, specifiers like <abstract> and <transacts> are all real types. Weird, huh? Almost as if they planned arbitrarily defined effects from the start, they just don’t document them.
Very interesting. Maybe there are arbitrary effects after all. Seeing them documented this way kind of reminds me of Haskell type classes. Maybe what you put in the function specifier are instances of a specific type class, and the compiler somehow calls into it at compile time to verify your code.
like it has both significant indentation and {}, at programmer’s choice. Wat.
Isn’t that what Haskell has?
Some things are truly baffling, like it has both significant indentation and {}, at programmer’s choice. Wat.
Not surprising, since Haskell has the same feature, and Verse is designed by some of the original designers of Haskell.
All the articles in the series: https://research.swtch.com/telemetry.
Good demo of the system: https://www.youtube.com/watch?v=o4-YnLpLgtk.
Tailscalar here. I didn’t tick “I’m the author” because I’m not actually the author of this doc. But I am from Tailscale. I’m not certain what the correct etiquette for this is, but I do believe this is a genuinely interesting piece of technology so I thought I’d try sharing it.
webdesign feedback: the ToC on the right is longer than a screen height for me, but not independently scrollable - I need to scroll the entire page towards the end for the ToC to scroll too and reveal the last few entries.
I am not sure if this is the right place to bring this up, it’s certainly not related to the posted link, but has there been any work into improving the energy efficiency of tailscale? Tailscale significantly affects my macOS/iOS devices’ battery life. I know that part of the problem is with Go itself, the Go runtime is not optimized for mobile devices, but still…
Never heard this credo applied to Lua before. Seems like an interesting target for it. Do you have a link with more info?
Sadly I don’t, but Lua is pretty much universal as a scripting addition, especially in game engines.
Lua isn’t so much “write once, run anywhere” (there is a Lua bytecode, but it’s not a very common AOT compilation target, and in any case most Lua is meant to run in exactly one environment); a better descriptor would be “it’s everywhere”. Its intention was always to be used as an embedded “scripting” language inside of some bigger system, for convenience (not having to write all of your functionality in C or whatever) and extensibility (since you’re shipping an interpreter, it’s easy to let the users drop in their own code). Much like what ended up happening with JavaScript, except a bit less accidental.
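To make the “embedded scripting language” point concrete, here is a rough sketch (not taken from any particular engine) of a C host that creates a Lua state, exposes one host function to scripts, and runs a user-supplied chunk. The names host_log and log are illustrative; it assumes Lua 5.x development headers and linking with -llua:

```c
/* Sketch of embedding Lua in a C host program. Assumes Lua 5.x is
 * installed; build with something like: cc host.c -llua -lm */
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
#include <stdio.h>

/* A host-provided function that embedded scripts may call as log(...). */
static int host_log(lua_State *L) {
    printf("[host] %s\n", luaL_checkstring(L, 1));
    return 0; /* number of values returned to Lua */
}

int main(void) {
    lua_State *L = luaL_newstate();   /* fresh interpreter state */
    luaL_openlibs(L);                 /* load the standard libraries */
    lua_register(L, "log", host_log); /* expose the C function to Lua */

    /* In a real engine this string would be user- or mod-supplied. */
    const char *script = "for i = 1, 3 do log('tick ' .. i) end";
    if (luaL_dostring(L, script) != 0) {
        fprintf(stderr, "script error: %s\n", lua_tostring(L, -1));
    }

    lua_close(L);
    return 0;
}
```

The host stays in control: it decides which functions scripts can see, and scripts stay small pieces of logic dropped into the bigger system.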
But yeah, now wasm is starting to fill that niche, and people can compile to wasm from (more or less) any language they want. Which is nice, although it does rob the idea of a lot of its simplicity.
My understanding is that this wasn’t done in the past, solely because every (most?) DNS interfaces were blocking. How are they handling that now?
I am afraid you are mistaken, and the linked article gets some details wrong.
On all supported platforms except Plan 9, the only way to do native name resolution is to link to libc (or an equivalent system library). Normally this would make cross-compilation hard, so Go came with its own name resolver.
If Go were a standard toolchain, this would be the best you could do, but the Go toolchain is non-standard. Very early in Go’s development the Go internal linker gained the ability to link to dynamic PE objects without having to have them available at build time. This was required by the Windows port. Slightly later, I added the same feature for ELF, for the Solaris port. Windows and Solaris do not have stable system call ABIs, so the ability to link with the native libraries was crucial, and this special ability was also crucial for being able to cross-compile Go easily.
The internal linker gained the same special capability for Mach-O much later than for PE and ELF. I can’t recall exactly when, but it’s been this way for a couple of years by now. Once it got this capability, it would have been possible to use the system resolver on macOS. I am not sure why this change only happened now.
Notice that I didn’t say anything about blocking vs. non-blocking interfaces. It doesn’t matter; Go can use blocking interfaces just fine. In fact, most system calls are blocking. The Go runtime is a multithreaded system that implements m:n scheduling of goroutines onto threads.
The article says that “[Go is] making the necessary syscalls directly”, but there are no system calls involved in name resolution, it’s all userspace libraries.
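For context on “it’s all userspace libraries”: the native resolution interface is a C library routine such as POSIX getaddrinfo, not a kernel system call. Here is a minimal sketch from C (standard POSIX API; the host name and port are just illustrative, and this is not claimed to be the exact call path Go uses on macOS):

```c
/* Name resolution through the platform's C library: getaddrinfo() is a
 * userspace library call, not a system call. POSIX; minimal error handling. */
#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void) {
    struct addrinfo hints, *res, *p;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;     /* both IPv4 and IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    int err = getaddrinfo("example.com", "443", &hints, &res);
    if (err != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return 1;
    }

    char buf[INET6_ADDRSTRLEN];
    for (p = res; p != NULL; p = p->ai_next) {
        const void *addr = (p->ai_family == AF_INET)
            ? (const void *)&((struct sockaddr_in *)p->ai_addr)->sin_addr
            : (const void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
        printf("%s\n", inet_ntop(p->ai_family, addr, buf, sizeof buf));
    }

    freeaddrinfo(res);
    return 0;
}
```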
I just don’t know who can replace Apple. I have no faith in companies like System76 to make something I enjoy using. I also can’t be bothered to carefully manicure a delicate teetering pile of dot files on a barely supported ThinkPad running Arch and a customized DE. GNOME and KDE are probably acceptable out of the box if I am left with no choice, but I don’t like them at all. KDE is a visual mess, GNOME is minimalist to a fault, and they both break my macOS muscle memory.
The personal computer needs a new Apple.
In my case finding replacement hardware isn’t too hard, but every time I’ve tried to use Linux for day-to-day professional tasks I’ve been driven back to OS X in a single-digit number of weeks. It’s absurd that the desktop experience feels even more shitty and half-baked now than it did 20 years ago when I was running it full-time in college. God help you if you dare to want fonts better rendered than you could expect in the late ’90s on some cheap 1024x768 CRT – HighDPI is still treated like some kind of godless abomination.
JWZ absolutely nailed it with his CADT post; nothing user-facing in Linux has ever been allowed to mature into a usable piece of software, it’s just half-assed rewrites of half-assed rewrites all the way down.
half-assed rewrites of half-assed rewrites all the way down
Right. It’s unbelievable. You would think some big company would be able to sustain investment in a set of software and refine it. Instead, every few years, we get half-assed rewrites as “enterprise solutions” and “shiny new things” for us to adopt or fork.
Big companies are even less able to produce incrementally refined software over many years. They are (necessarily) run in many, many layers - each layer providing an additional opportunity for some manager to “put their stamp” on a product by initiating a half-assed rewrite.
I think desktop environments are dead, sadly. This includes macOS, which had a really interesting ecosystem of apps developed by indie developers and boutique shops such as Omni Group, Panic, etc.
I have found a lot of relief by moving all my computing to three platforms: Web (Firefox), text-mode Elisp (Emacs) and Unix (XTerm). These platforms are alive, dynamic and well maintained. Actually, the Unix plumbing provided by a minimalist Linux distribution (e.g. Arch) or by NixOS is a joy to use. And combined with a good tiling WM, it is very ergonomic.
This also addresses GP’s point about dotfiles. By not using a myriad of ncurses applications, the number of dotfiles I need is minimal. Besides, over the years, I have found that interactive ncurses applications like mutt do not compose well and fail to satisfy Unix design principles. Furthermore, their little configuration DSLs tend to be too rigid. Moving some of these computing needs to Emacs made me much more satisfied.
Linux won’t feel polished if you try to use a large desktop environment. But a minimal setup is likely to be very robust and there is very little churn. I have not reinstalled any of my machines in more than five years, and I always use the same boring programs: Firefox, Emacs, XTerm, screen, git, rsync, openssh, gnupg, etc.
When using Macs, I gravitate towards the same setup. But it feels less cohesive due to the lack of a package manager, and the fact that tons of services and libraries I do not use are installed by default. Obviously, for others this approach might not work well.
I do miss a good laptop manufacturer of PCs, though, as pointed out by the GP. Lenovo is too unfocused. They release far too many models, instead of polishing a few designs. Furthermore, their service and pricing, at least in the EU, is a disaster. Apple is far better.
It’s an uncomfortable secret that vendors always knew, and we laughed at Ballmer’s very uninspired display of it with his developers, developers, erm, seizure? – but it was never about the desktop: it’s all about the apps.
Most of the bright people doing application development are either doing it for the web, or working on emerging technologies like VR. So lots of desktop applications written in the last 5-10 years or so quite plainly suck. Not all, obviously, there are plenty of exceptions (see: Krita) but overall it’s pretty bad. There just aren’t that many people interested in it anymore.
Back in the late 90s, if you wanted to show off, you wrote your own WM. That time’s passed. It’s not really worth doing it even to pad your CV anymore – “a better desktop for PCs” hasn’t been much of a value proposition for more than 10 years now, there’s hardly any growth from that, so large commercial vendors have long stopped investing in it.
This inherently leads to a lot of churn in this space, especially as there’s an increasing fracture between users and developers. There was a survey done by the VES a while back – admittedly on a small sample (like 88 studios or so?) in a narrow niche (the VFX industry – but a niche with loads of money to throw around). It turned out that back in 2021 (or early 2022, I don’t recall the exact timing), about 30-40% of the people in this field were using MATE, which half the Linux desktop dev crowd sneers at for being, like, Gnome for people with terminal nostalgia.
I think desktop environments are dead, sadly. This includes macOS, which had a really interesting ecosystem of apps developed by indie developers and boutique shops such as Omni Group, Panic, etc.
Had? It’s still there.
I have found a lot of relief by moving all my computing to three platforms: Web (Firefox), text-mode Elisp (Emacs) and Unix (XTerm). These platforms are alive, dynamic and well maintained. Actually, the Unix plumbing provided by a minimalist Linux distribution (e.g. Arch) or by NixOS is a joy to use. And combined with a good tiling WM, it is very ergonomic.
This is the most myopic way to look at it.
Had? It’s still there.
Mac ecosystem stagnation is well known, see e.g. https://news.ycombinator.com/item?id=30595383
Panic has stated they can’t make money on iOS. Apple’s cultural push for $1 apps is also impacting macOS. Omni has fired a significant chunk of their workforce.
This is the most myopic way to look at it.
Why? It’s a very common pattern for developers, even on macOS.
In my case finding replacement hardware isn’t too hard
After having worked with a fanless M1 MacBook Air for over a year, I cannot find any non-Apple laptop that comes even close. It’s sad, but since Apple Silicon I don’t see any reason to look elsewhere.
Many Intel MacBook Airs were also excellent Linux machines, super silent and well supported.
I would not be surprised if, eventually, the M1 ends up being a popular Linux machine. Linus used to use an Intel MacBook Air as his daily driver and was test driving an M1 recently.
Some ThinkPads are quite silent, but not fanless, and their computing power is comparable to the M1. Sadly, they are more expensive here in EU. Plus, their sales and customer service is outsourced and has become a big mess.
Interesting. I think many people will come to that same conclusion.
I haven’t tried any Air (other than for a few minutes in shops); the flat keyboards drive me away. I loathe them. I also don’t like modern macOS much; I still run 10.14 and I miss 10.6, after which it all started to go wrong.
So I am gradually collecting elderly Thinkpads from the x20 generation, the last one with decent keyboards, and when they die I don’t know where I’ll go.
But if you like the design and the form-factor, then yes, Apple has an edge that is only growing wider, and nobody has really got anything to rival it.
MS is, shockingly, innovating more in form factor and input. I didn’t see that coming, but in hindsight, I should have. I had Microsoft Mice early on, including the clip-on one that went onto the side of DOS laptops that had Windows on them. OK, I used mine with OS/2, but still…
Yeah I say that meaning only that I could build a tower I was happy with hardware wise pretty easily. There’s nothing really competitive in terms of mobile offerings, particularly when you’re dealing with dodgy Linux support.
HighDPI is still treated like some kind of godless abomination.
Honestly, the push back against wayland is the major reason for this. With Xorg it’s a hack to get it working at all but it is natively supported in wayland.
So every time I have this conversation about “why do we need wayland! I don’t care” I get tired because that same person will complain about problems that only exist in Xorg.
Fractional scaling works better in KDE today (with X), than in Wayland (with any WM/DE), where it doesn’t work at all.
Mind you, it’s still bafflingly bad compared to macOS.
I’m not sure where you hit this issue, but fractional scaling just works out of the box on KDE/Plasma 5 + Wayland for me using a plain install on NixOS. It’s why I switched to Plasma from GNOME a year or so back, since IME very few HiDPI screens actually look “right” using integer scaling. (Presumably 4k @ 42” or so would look fine 1:1, and 2:1 for a 24” or so, but the affordable 27-32” models fit very much into the awkward middle. Laptop displays are all over the map, but tend to lose too much screen space to random UI chrome at 2:1 IMHO.)
MacOS works better in terms of at least giving you a few different scales that render pretty well, but doesn’t magically solve the problem of making UI work well at a wide variety of resolutions and densities.
I’m always slightly surprised by descriptions like that, because for “and a customized DE. GNOME and KDE are probably acceptable out of the box if I am left with no choice” the good alternative is presented as macos with its “this is how it is and you’ll like it” approach. For me some of it is cool, a lot is annoying, and largely you can’t change anything beyond trivial settings.
Of course they will break the macos muscle memory too - the same is true for every change. Every day at work I get the “which modifier was it to skip a word on a mac” game wrong a couple of times.
I mean, you’re used to macos, so whoever comes up with a new solution - you’ll likely see the same issues of unfamiliarity.
the good alternative is presented as macos with its “this is how it is and you’ll like it” approach.
I’m a hypocrite and hold Linux to a higher standard, yes. I don’t expect Apple to accommodate everyone, but they put the work in very early on in the history of the Mac to make a keyboard layout that is ergonomic and suited to power users. It has a lot of consistency in how the modifiers are used, as well as weird poweruser things like being able to navigate within text with readline-like shortcuts (Ctrl+A, Ctrl+E, etc.). Apple cared about someone’s muscle memory there, at least. Way better than reaching over to Home/End, if those keys even exist on your keyboard.
Remember we’re talking about Linux here. These are the people who created Evil mode for emacs to accommodate vim users. Of all the places where key mapping compatibility should be possible, it should be on Linux, but it’s not even on the table. It isn’t possible now because all keyboard shortcuts are set by the application; “client side”. This sounds like an accessibility nightmare, though perhaps I’m wrong. None of this is easy to change in GNOME/GTK. It’s almost doable in Sway and others, but again it requires insane text-only configuration, and still falls apart inside of most Qt/GTK apps.
A GTK4 application that wasn’t built with macOS in mind will use Control-based shortcuts by default and not respond to Command+Q, for example.
Why is it considered necessary, for example, to support different keyboard layouts for different countries and languages? I don’t have to adjust my expectations for how my keyboard works for my locale, but I have to use what’s given to me for basic text selection and navigation. My muscle memory doesn’t matter. I can remap Control and Command and get half way there, but they’re semantically different for about half of the shortcuts to accommodate Windows users’ expectations and it ends up being worse. I have to accept that if I’m only comfortable in macOS I’m stuck here and no one will make ExpatOS for me.
In an application like VSCode, I can’t do the fancy text manipulation tricks I’m used to on macOS on Linux, despite it being the same program, because of how desktop environments steal the Super key for their exclusive use. In some ways this is a nice design idea, but it also reduces the possible key chords within an application.
GTK used to support key themes, and one could enable an Emacs-like (i.e. GNU readline-like) theme to use C-a, C-e, etc: https://wiki.archlinux.org/title/GTK#Emacs_key_bindings
No idea if this still works on GTK 4.
Is it because you’re personally not happy with the choices or do you think there needs to be some solution that is so much one-size-fits-all that it becomes a certain standard solution?
I’m just asking because I’m not really seeing the benefit of System76 - last I checked it was a nightmare to get a machine of theirs in Europe with a reasonable price and/or support story, so they might as well not exist.
I also should probably admit that I’m more on the “I like tinkering” side of things with my x230 with Debian and my old laptop with OpenBSD… but the thing is that I had several ThinkPads in the last years where I just installed Ubuntu and everything worked.
Is it because you’re personally not happy with the choices or do you think there needs to be some solution that is so much one-size-fits-all
Yeah this is pretty much it, and that’s what Apple offered. If you were in a film or music production studio anywhere in the last 20 years, you probably saw a Mac. In different worlds you’ve seen people use the same Macs for different reasons (like me, a programmer, who likes having a real terminal).
that it becomes a certain standard solution?
I think developers alone are a big enough market that it makes sense to have machines with excellent keyboards, great screens, and usable software that Just Works out of the box. The hardware is doable. Apple and Lenovo pull it off, for the most part. Dell could do it. They shouldn’t be cheap, and that’s okay. The hard part is having software that works well on it.
ThinkPads in the last years where I just installed Ubuntu and everything worked.
Ubuntu is a tough one for me. I agree that most things work well out of the box, if you like Canonical’s choices. I want to get to a point with desktop Linux where I don’t have to think about the bootloader and the init system and the package managers (Snap, deb, flatpak, etc.) and display drivers and just use it. It’s better than it has ever been, but it’s not good enough.
Stock GNOME on Fedora is what I use on my desktop computer. Most games run well on it, and it’s as good as any other Linux machine for the backend development work I do. It’s not everything I want from my desktop environment, but it gets me through the day. The GNOME devs are obsessed with minimalism to a fault. The people who dislike the new macOS settings window should have a look at GNOME’s. It’s basically identical and about as usable. Whoop-die-do. It’s responsive and looks good on small screens. Exactly what I need on my desktop computer. I don’t get their priorities. Microsoft is doing a lot of the same stuff with Windows 11, with the added bonus of being an evil corporation.
I also should probably admit that I’m more on the “I like tinkering” side of things with my x230 with Debian and my old laptop with OpenBSD…
I would prefer to use something more like Sway, but it completely stinks out of the box. It requires so much fucking work to chase down themes and helper programs to get the desktop into a usable state. It’s like nothing has been learned since OpenBox and i3. I just do not get the insistence of the developers of these fantastic pieces of software to have the configuration of the graphical interface of my computer done in fifteen thousand dotfiles strewn about my ~ folder. Give me something to start with. Give me a GUI to configure my panels. I shouldn’t have to download and configure wofi or rofi and find a theme that matches just to launch applications in a way that feels nice. It’s absurd.
I see desktop Linux as two extremes:
I lean more towards wanting the power and features of what’s offered by option 1, but I don’t have the patience or will to manicure and maintain such an environment. GNOME is option 2 that lets you do most of what option 1 offers, but doesn’t help you at all. There has to be a middle ground. My view is that it has to come from a company like Apple.
Have you seen Framework laptop? That will be my next machine, I’ll probably replace my desktop with it.
Yes. I almost bought one a while ago. I absolutely love what they’re doing with their brand and their mission. I haven’t tried one personally, but apparently the keyboard leaves something to be desired, and the battery life is mediocre. The kicker for me is still the software, and that’s everyone’s problem with Apple these days anyway.
I’m using one. The keyboard is indeed not great, but hey, laptop keyboards are never great. I haven’t noticed battery life issues but I mostly use it plugged in.
I agree for sure that the software is the challenge with any Linux setup. I have my own practices that I like, and I’d rather have something that needs some time investment to set up but is centered on giving the user control of their device, over something like Apple’s thing that is moving towards a narrow and locked-down vision of what computing should be.
I don’t see Apple’s software problem as being just a matter of worse UI, it’s that their business model has been shifting. With the success of their various app stores, they’re no longer primarily a hardware company, as they had been. This means their incentive over the long run is no longer to keep things open and give users control of their systems.
I want to be clear that my complaint is not with Apple’s workers - I have friends there, who largely share my concerns. My complaint is with the company’s high-level decisions.
I have an M1 MBA which I bought for the exact same reason lots of people buy Windows laptops: because it just works, runs proprietary apps I can’t get on Linux (mainly Lightroom and Office) or that only minimally support it (Zoom, Slack, Discord).
The battery life is just as absurd as advertised and the trackpad is great, but there’s nothing particularly magical about using it aside from the lack of heat + fan noise. Both of those are immaterial for the 90% of the time I’m working at my desk with the Air docked, but it’s nice for the occasional “road-warrior” day. The MBA also won’t drive a single 4k monitor at >30Hz with three out of four USB-C docks I’ve tried, but obviously that’s my fault for expecting a peripheral that works perfectly on an Intel Mac or any other laptop made in the last five years running Windows or Linux to function correctly b/c Macs “just work”.
Overall, Macs today are boringly utilitarian, locked-down tools, much like Windows (when locked down and running a bare minimum of services and crapware) and Ubuntu, Fedora/Red Hat, or any other commercial Linux distro. There are only so many ways to really innovate or improve a “modern OS” when your main concerns are preventing rootkits and clocking better benchmark scores for Chromium than the competition.
Once upon a time Apple’s fixation on software UI/UX and nurturing a broad, sustainable developer ecosystem made a real difference, but now it’s all App Store dark patterns around subscriptions, in-app purchases, and/or ads driving revenue for developers. MS is going a similar direction with their native and Android stores, and Google was there before anyone else with ChromeOS + Android crossover support.
Please don’t ask your readers to workaround Twitter’s crappy design!
Except that my readers are the ones who voluntarily joined Twitter and voluntarily followed me on Twitter.
Edit: It’s a browser problem! Carry on!
Please make your site font less wispy grey and unreadable.
Montserrat Thin must die.
Screenshot of site (rescaled 50% because my monitor is 4K) with Montserrat Thin: https://i.imgur.com/3XIyYkc.png
Screenshot of site in Firefox Reader Mode (Liberation Sans): https://i.imgur.com/IKJa2sZ.png
Note how the second image is actually legible.
In case you don’t trust the screenshots, I can provide photos too.
Correction: Issue happens only on Waterfox Classic, modern Firefox and Chromium are fine.
Looks perfect on macOS: https://i.imgur.com/LfaaF4O.png
Sadly, in industry many people are forced to work in languages with a weak type system, such as Haskell.
This is a joke, right? I’m genuinely confused.
if x < 10: x = x * 2
When I read these lines of instructions, I immediately switch to a particular mode where they cease to be English and become something else: to become code.
Isn’t this the same for everyone? How else?
This is the question I ask myself. Is the experience different for someone who speaks English compared to someone who doesn’t?
Unlike @4ad I do subvocalize when reading code. I even structure my code so it’s more “literate.” I don’t know how exactly to describe that, I suppose in the same way that people have descriptive function names, I try to also have descriptive code structure, so it can be read in 1 pass as often as possible.
When I read code, I just ingest it, there’s no English “translation”, same with math.
I don’t do subvocalization in general, if there’s any relation.
Also, I don’t do syntax highlighting. Whichever process I use for consuming code, syntax highlighting breaks it. I can’t read highlighted code.
This is what is most interesting. It is like listening to music or reading a book. The experience is different for each of us. For instance, I cannot work on code without color highlighting. I need these landmarks on screen to detect and recognize structures.
I had this conversation with one of my friends, who used to listen to classes without ever taking notes, while I would need this process of note-taking to stabilize my memory.
And I do read the code on screen with subvocalization, as I have always read novels and articles.
I still struggle to understand the appeal of wireless earbuds, Airpods or otherwise. Under my value system, the costs are significant while the benefit is small:
While the last consideration alone is, for me, enough to summarily rule out wireless earbuds from my purchasing options, apparently there is no shortage of people who feel that the benefit had in being rid of a cable outweighs all of these costs. Given that any decent set of wired earbuds will have a relatively tangle-free cable and carrying case, I can’t help but wonder whether I am failing to see some key benefit beyond not having to occasionally manage a cable.
In my experience with wireless earbuds - the battery lasts long enough that I basically never worry about it running out, the connection basically never drops out, and the audio quality is surprisingly good.
The reason why I personally went with wireless instead of wired is that, at least at the time, I wasn’t able to find wired in-ears with good ANC. The best tech seems to go into wireless earbuds, so they can end up as the best option all-around.
This is why I wound up with big wireless over-ear headphones. The best-in-class noise cancelling now is all wireless, even though I don’t particularly care about having wires or not.
you must pay a good fraction of $1000 for the mediocre audio quality supported by said wireless connection
The base model 3rd-gen AirPods are priced at $169. The “Pro” version at $249. If that’s “a good fraction of $1000”, then I’m going to start referring $1500 laptops as “a good fraction of $10k”.
Meanwhile: I used to be skeptical. Then I got really really tired of snagging headphone cables on all sorts of things and having them ripped out of my ears or, worse, out of the jack (I once had a pair of decent headphones destroyed by being yanked untimely from the jack). And I decided to try a pair of AirPods.
The audio quality is not “mediocre” by any reasonable measure. I own a pair of genuinely nice over-ear headphones for use at home, and I’ve basically stopped using them, in favor of the AirPods. The audio quality is just fine to my ears, and the added lightness and ability to get up and move around is a huge plus – I can listen to music while I’m puttering around doing chores or cooking or whatever and not have to carry the source device around with me or deal with a heavy headset or worry about snagging a headphone cable on things.
you have to live with the knowledge that after two years you will have introduced yet another sliver of unrecoverable minerals to a landfill somewhere
My first pair of AirPods lasted around five years before the battery life started to decline too much to continue my heavy daily use. I took them with me to an Apple store and handed them over to Apple to recycle as I picked up a new pair.
you have yet another battery to keep charge
The buds charge quickly in the case and get hours of listening time on a charge, in my experience, and the case itself is easy enough to plug in overnight.
you have another object to lose
As I mentioned, my first pair lasted five years, during which I lost them zero times. Including when wearing them on public transit and while out and about walking, shopping, etc. The case is about the same size as my keyring; I don’t lose that all the time, why would I lose the AirPods any more often?
you have yet another flaky wireless connection to contend with
I have owned flaky Bluetooth devices. I have used flaky Bluetooth devices. I have been forced to work with flaky Bluetooth devices. AirPods are what I wish every Bluetooth device could be.
I can’t help but wonder whether I am failing to see some key benefit beyond not having to occasionally manage a cable.
Yes. Also, several of your points are simply factually wrong.
Having been through this cycle before, I’ll just say that Apple did with the AirPods what they did with the iPod: took a product category that historically sucked, and made one that didn’t suck.
I hate cables on and around my body so much. I always catch them on my elbows and yank them. Or stand up to get something and yank them. I got an extra long cable so I could stand up at least and it got caught on other things like my chair, or knocked things off my desk. When I had the chance to buy actually decent wireless headphones for 3x the price of my wired ones I did it without hesitation.
Don’t throw them in a landfill! They’re a fire hazard in trash compactors. You have to keep them forever, bequeath them to your descendants, etc.
I still struggle to understand the appeal of wireless earbuds, Airpods or otherwise.
I have broken multiple cable-attached devices, and physically hurt my ears, trying to use wired headsets on busy public transport.
you have yet another battery to keep charge
Yes, but the battery lasts “forever”, where forever means I never have to be aware of the battery status. Maybe I charge mine every couple of weeks when I feel like it (not because I need it). When was the last time you had to be consciously aware of your wireless keyboard battery status?
you have another object to lose
Same with wired headphones, the number of objects in question is the same.
you have yet another flaky wireless connection to contend with
Not with Apple headphones you don’t. But non-Apple Bluetooth headphones are terrible here, yes.
you must pay a good fraction of $1000 for the mediocre audio quality supported by said wireless connection
The quality supported by the wireless connection is good enough, no bottleneck there, however the airpods themselves (AirPods Pro v1 and v2) are pretty mediocre in terms of audio quality, which I find surprising. I have much better IEMs than Apple AirPods. Surely Apple can do better here. I will note that AirPods Pro v2 (not v1) have the best noise cancelation of any headphone I ever tested.
However, audio quality is not the reason I use these headphones. It’s because they take no space, and I don’t have to deal with any wires. I have an ever-growing collection of real headphones at home, which have incomparable audio quality, but they are simply different products with different use cases.
you have to live with the knowledge that after two years you will have introduced yet another sliver of unrecoverable minerals to a landfill somewhere
Two years is a stretch, mine don’t even last one year, maybe six months. I produce a lot of earwax and these things just get worse and worse. So what? Nothing lasts forever, I get a lot of value from $200 worth of airpods every six months.
For me the convenience is a huge advantage. I listen to audiobooks/podcasts a lot more when it’s easy to start and stop without fiddling with cables. I only spent around $30 on mine. The sound quality is plenty good for me and I’ve never had connectivity issues.
You only have to have a snagged headphone cable pull your $1200 phone out of your pocket and smash it on the ground once, and you’ll get it.
I like them for sleeping. They’re a bit uncomfortable when I sleep on my side, but I got used to it. The noise cancellation is great for fans / AC or partner’s snoring.
I find I actually do things like walking around, exercising, etc. a lot more when I don’t have to deal with the wired headphone ceremony (untangling the cables…) outside. That, and the nature of IEMs with transparency/ANC is a big help for someone who isn’t deaf but has occasional hearing difficulties. The charging also isn’t a big deal; I just throw it on a Qi pad when I’m at a desk using wired headphones (since the cable there is fine).
I use wired IEMs almost all the time I need to listen to audio, but in the gym, while lifting, using wireless earbuds makes life 1000x better.
My GF has a pair of those things. We found an interesting use for them once: being able to listen to the same thing while walking. She wore one and I wore the other. The fact that it wasn’t stereo didn’t really matter, because we were listening to GPS announcements from the screenreader on her phone. She likes them because she has destroyed quite a few headphone wires over the years.
I wouldn’t buy them myself. Old wired headsets are cheap, ubiquitous, reliable, and don’t need a battery. When I wore one, I always felt like it was going to fall out of my ear. I don’t know how people manage to keep them in while exercising.
My GF has a pair of those things. We found an interesting use for them once: being able to listen to the same thing while walking.
FWIW there are 3.5mm splitters, and nowadays iOS can attach multiple Bluetooth headsets for shared listening, at least for music.
Genuine curiosity: for folks wanting multicore OCaml, why not use F# from current dotnet 6.x since it works everywhere, and can compile to native, I believe, too.
A few reasons off the top of my head:
The most common answer to “why use X when you could use Y” is “we already have a bunch of X and rewriting it all in Y is a non-starter”
does ‘everywhere’ include NetBSD, OpenBSD?
F# GUI binding on non-Windows platforms, is there a good framework to pick up?
I think access to the .NET ecosystem is a big plus for F#, but it is not clear that syntax constructs between the two languages map one-to-one. I cannot find links right now, but as a casual reader it seems that OCaml is more advanced.
They absolutely don’t map. There’s a common subset, but that’s about it. F# lacks a module system comparable to ML’s (functors etc.), and its OO is the .NET OO, not OCaml’s object system with structural typing and object type inference.
They certainly are not[1]. ML modules can encode type classes, but not the other way around. ML module types are a form of dependent types[2] while type classes can be encoded into System F[3].
[1] https://dl.acm.org/doi/10.1145/1190215.1190229
[2] https://dl.acm.org/doi/10.1145/3474834
[3] https://okmij.org/ftp/Computation/typeclass.html
ah feck. Really? I’ve read that first paper but it’s quite possible I’ve misunderstood the implications of it. However, [4] presents mechanical translations from one to the other and back. The long version [5] is 170 pages long though, so I haven’t dug through it all; it may be incomplete. [1] does mention that its translation from modules to typeclasses is not very friendly to actual use, though I’d kinda expect it to be the other way around tbh – if modules are more powerful than typeclasses I’d expect it to be harder to translate modules into typeclasses than the other way around.
Alas, as I said at the end of the post, I’m not actually very good at the theory side of this.
[4] https://www.stefanwehr.de/publications/Wehr_ML_modules_and_Haskell_type_classes_SHORT.pdf [5] https://www.stefanwehr.de/publications/Wehr_ML_modules_and_Haskell_type_classes.pdf