My team adopted the Conventional Comments browser extension for use with our GitHub Enterprise instance. It’s been transformative w.r.t. clearly establishing expectations for actions on a PR comment. I almost can’t imagine not having it and generally want to use it on public code hosting and collaboration services.
Which browser extension, if you don’t mind me asking?
https://addons.mozilla.org/en-US/firefox/addon/conventional-comments/
There’s a chrome version too.
The overlap in use cases for these two languages is way smaller than people seem to think.
I do distributed systems and web, and learning Rust made Go an obsolete language as far as I’m concerned. I legitimately cannot think of a use case where I would pick Go over Rust.
Having a team that doesn’t already know Rust is at least one reason I can think of. It takes much less time to get a team up to speed on Go than it does on Rust.
Maybe compilation/test speed (i.e., development velocity) is another.
You are not wrong, but a team knowing one language and not the other can be used to justify any language :)
Well, yes. I believe programmers in general and language zealots in particular underestimate how hard it is for a team to ramp up productively in a new language, and overestimate how productive the team will be once they have ramped up. In other words, the productivity bottleneck in programming is seldom the language itself.
An interesting fact is that Go maps are optimized for certain integer types. Quite a while ago (go 1.13) I did some testing (haven’t done it recently though), and as a result added this comment to one of my codebases:
// go maps are optimized for only certain int types:
// -- results as of go 1.13 on my slow laptop --
// BenchmarkInt     297391227    3.99 ns/op
// BenchmarkInt8     68107761   17.90 ns/op
// BenchmarkInt16    65628482   18.30 ns/op
// BenchmarkInt32   292725417    4.08 ns/op
// BenchmarkInt64   293602374    4.11 ns/op
// BenchmarkUInt    298711089    3.99 ns/op
// BenchmarkUInt8    68173198   17.80 ns/op
// BenchmarkUInt16   67566312   18.10 ns/op
// BenchmarkUInt32  298597942    3.99 ns/op
// BenchmarkUInt64  300239860    4.02 ns/op
// Since we would /want/ to use uint8 here, use uint32 instead
// Ugly and wasteful, but quite a bit faster for now...
subtrees map[uint32]*SomeStruct
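For context, a minimal sketch, not the original code, of the kind of benchmark that produces numbers like the above; the map size is arbitrary, and this would live in a _test.go file and run with go test -bench=. (the Go runtime has specialized fast paths for 32-bit, 64-bit, and string map keys, while 8- and 16-bit integer keys fall back to the generic, slower path):

    package bench

    import "testing"

    const mapSize = 1 << 10

    // BenchmarkUInt32 measures lookups in a map keyed by uint32,
    // one of the key types with a specialized fast path.
    func BenchmarkUInt32(b *testing.B) {
        m := make(map[uint32]int, mapSize)
        for i := uint32(0); i < mapSize; i++ {
            m[i] = int(i)
        }
        var sum int
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            sum += m[uint32(i)%mapSize]
        }
        _ = sum // keep the lookups from being optimized away
    }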
Using one of the optimized map key types might improve the benchmarks a bit. So uint16 in the article may be a poor choice, though I doubt it would change the overall outcome (likely still the slowest).
I thought maps were optimized for 2-byte sized objects as well but I guess that’s not the case.
Using a 4-byte key like int32 does make them go faster.
Reset becomes a bit slower, but Get and Set see some improvements; I only did a lazy run without a high -count and benchstat, so to get the exact % someone would need to do a couple of extra steps.
But if no new map specializations were added, your results are probably still relevant.
No, I definitely hate JIRA for catering to and enabling exactly these kinds of workflows.
… and also for being slow.
… and buggy. Last time I had to use Jira there were half a dozen bugs that really got in the way. Like the formatting language being completely different depending on the screen at which you started editing a ticket, and the automatic conversion from one formatting language to the other being broken, so that occasionally, even if you did everything right, your ticket would end up with piles of doubly escaped garbage instead of formatting. This wasn’t an extension, this was core Jira. Though I suspect that particular issue is fixed now.
Are you on a cloud instance or self-hosted server? I’ve seen both be slow, but self-hosted is usually the worst IMO (under-specced or poorly-configured hardware, I would guess).
Yeah, I’ve experienced slowness with both. Even with self-hosted and throwing oversized hardware at it, it still tends to be quite slow (albeit a little faster than cloud hosted).
Do you also hate the processors that run the instructions to make it possible?
If the feature set of the processor was driven by the sales team trying to make every last sale and meet every requirement no matter how weird, and the engineering team didn’t have a say in how it was designed, yeah I probably would!
I mean, if you put it like that, yeah I do kinda hate modern x86_64 CPUs for those same reasons.
They’ve been trying, for sales reasons, to meet the increasingly ridiculous requirement “more single-thread performance for the same old instructions”. And this has driven them to make increasingly dangerous engineering decisions, resulting in the slew of CPU vulnerabilities we’ve seen, along with mitigations for them that undo most of the performance wins.
I’d say that ties into my reasons for disliking it too. I think many of the “features” added were just to get more sales, no matter how it crippled other things that were working just fine. Meanwhile, highly requested features go unimplemented for years because Atlassian doesn’t think it’ll make them more money.
I am fine with the included (fewer things to have to install) Terminal.app, Mail.app, and Safari.
I also use Raycast, VSCode, macvim, Brave (mostly just for “works best in chrome” sites), The Archive (for Zettelkasten/knowledge archiving), syncthing, IINA (media player), Deckset (presentations), 1password, Affinity graphics suite, limechat, wireguard, monodraw, things.app, numbers, pages, toothfairy, and some misc tools from objective-see, and unixy stuff with homebrew.
This is wonderful news. Now people will be incentivized to set up IPv6, which means the documentation for setting up IPv6 will improve, which means more people will set up IPv6 by default, which eventually means everyone uses IPv6 and static IPs become free.
I don’t see how it will incentivize ISPs to add IPv6 support. I would love to have IPv6 but my ISP doesn’t care (and I can’t switch ISPs).
The only chance of it happening is if both of these things happen:
Websites stop being accessible through IPv4 on a significant scale.
People blame ISPs for that instead of website operators.
Because your suggestion means that AWS customers will spend more money to have IPv4 and, for some mysterious reason, would spend engineering effort to set up IPv6 on top of that. Doubling their costs for what exactly?
Now, if AWS announced that they will stop allocating public IPv4 addresses by 2030, that would certainly get my ISP moving. But even that would not fulfill both parts of my test – the blame would fall on AWS.
For now, I only see a prospect of shared/SNI hosting like GH pages, Netlify, or the good ol’ LAMP hosting being more attractive.
There are government programs to pressure ISPs to add IPv6 support. Depending on the country, of course.
What if the Google front page complained about your ISP being bad whenever you accessed it via ipv4?
My ISP (centurylink) supports ipv6, but it is almost worse than if they didn’t! I think they implemented some transitional version (6rd, also over PPPoE!) and seem to have never updated it since (i.e. they consider it “job done”?). With what seems to be the proliferation of buggy dhcpv6 and prefix delegation, and weird issues with ipv6 auto-address selection[1], getting a stable ipv6 address on an internal network seems nearly impossible. I’ve been tempted to try NAT66 ffs!
[1]: you were originally supposed to be able to use multiple ipv6 networks on the same segment (eg. a GUA/public-routable/globally-unique and a ULA/site-local), and have address selection pick the site-local when it is relevant (via a source address selection algorithm), and the GUA otherwise. I don’t think I ever saw it work right! I think these days site-local addresses are even considered “deprecated”.
There is a long way from “paid/expensive IPv4 addresses” to “IPv6-only services that would force people to get IPv6 connectivity”.
I used to be an IPv6 zealot 10+ years ago. Today I am resigned to the fact that IPv4 will be around forever.
About 10 years ago, my ISP (a former monopoly that is notorious for putting any kind of infrastructure investment off until not doing it will lose them a lot of customers) operated equipment on their backbones that dropped packets that weren’t well-formed IPv4 packets and broke IPv6 even for other ISPs buying transit from them. Now, with their consumer router, every machine on my network has IPv6 connectivity automatically and my browser connects to a surprising number of things with IPv6 without any issues.
I suspect IPv4 will be like old Android releases: people will track the number of customers still using it and eventually decide that it isn’t worth the cost to keep supporting. Once a few companies make that decision it will give cover to others wanting to do the same.
I agree and hope that this is what will happen, especially since cloud providers seem to be among the biggest sources of machines without IPv6 by default.
However, I feel the fee is basically too cheap. Given how expensive AWS is in the first place, this seems more like a way for Amazon to increase revenue than something that will produce a huge push for IPv6.
At $44/yr I don’t see this being a big issue for anyone who spends any significant amount of money on AWS. Maybe it will move the needle on IPv6 adoption a small amount, but I just don’t see it making a big difference. I hope I’m wrong.
At $44/yr I don’t see this being a big issue for anyone who spends any significant amount of money on AWS.
That may be true, but I know a lot of folks on the lower end of things that this will be a significant change for. One thing I do is help non-profits get hosted as cheaply (and easily: I’d rather them NOT have to keep me on speed dial) as possible. Lightsail has been good for that. At $3.50 per month for Lightsail, the cost of the IP will double the cost of everything they are hosting. I completely agree that (usually) won’t break the bank, but a 100% increase in costs is still a 100% increase in costs.
The exception to the above is non-profits I’ve helped keep a web presence after they have gone under, so that their work is not lost. In that case, there are some very specific “free-tier” providers that can be used to keep something going for just the cost of a domain name. AWS will no longer be part of that. I say that acknowledging this is a very niche use case.
Now if only the solution to at least 10% of my networking problems weren’t “disable IPv6 at a system-wide and network-wide level to make sure nothing ever tries to use it, anywhere ever”, I could get on board with this.
Ignoring all my other problems and complaints with IPv6 (notably, that reciting a v6 address is a disaster), “it doesn’t even work 10%+ of the time” is a showstopper that makes me laugh at this in the “please stop trying to make Fetch happen” way.
Then again - freeing up IPv4 addresses in the server space will reduce the need for me to care about IPv6 at all on the client side, as the server sides can NAT their way through the mess transparently to me. So maybe, in the spirit of this article and a few others I’ve read that talk about IPv6 being a flop, this is actually a good thing. Shrug.
I wonder how many systems that communicate via HTTP would see a non-trivial performance increase if they implemented a custom protocol. I don’t think it would really change much, since HTTP overhead isn’t going to be what makes the difference in the number of packets you have to send for something. So maybe this is a good thing, because having a standard, even if it’s a standard that was originally developed for hypertext documents, is still worth something.
Over the course of my career, I’ve come across something like O(100) custom protocols for service-to-service communication, all built with the assumption that e.g. JSON-over-HTTP would be too inefficient. These protocols were almost always underspecified, fragile, and fiendishly difficult to maintain. (No shade to their authors on these points – protocol design is hard!)
At some point, I started applying a test. I would write an end-to-end benchmark for the system as a whole, i.e. not a micro-benchmark of an individual parser or component. I’d run the benchmark with the default custom protocol to capture baseline results. I’d then write an alternative protocol with bog standard HTTP clients and servers, sending and receiving simple JSON objects, using gzip compression. And in almost all cases, gzipped JSON-over-HTTP would exhibit both lower latency and higher throughput than the custom protocol.
I’m not saying this is an absolute truth. There are definitely exceptions. But those exceptions gotta be justified with tests and benchmarks. In short, I agree with you :)
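To make that concrete, here is a minimal sketch, not from the original comment, of the “bog standard” shape being described: a handler returning gzipped JSON using only the Go standard library. The type, field names, and route are made up:

    package main

    import (
        "compress/gzip"
        "encoding/json"
        "net/http"
        "strings"
    )

    // result is a stand-in for whatever payload the service returns.
    type result struct {
        ID    int    `json:"id"`
        Value string `json:"value"`
    }

    func handler(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        if strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
            // Compress the JSON body when the client advertises gzip support.
            w.Header().Set("Content-Encoding", "gzip")
            gz := gzip.NewWriter(w)
            defer gz.Close()
            json.NewEncoder(gz).Encode(result{ID: 1, Value: "hello"})
            return
        }
        json.NewEncoder(w).Encode(result{ID: 1, Value: "hello"})
    }

    func main() {
        http.HandleFunc("/result", handler)
        http.ListenAndServe(":8080", nil)
    }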
These protocols were almost always underspecified, fragile, and fiendishly difficult to maintain.
… And invariably require special tooling to debug. Often said tooling either doesn’t get written, or is ad hoc and thrown together at the last minute out of necessity. There is a lot to be said for a protocol that you can just read and write when you’re in the development phase.
It should be a good practice that each author of such a protocol or format also creates a dissector for Wireshark and some standalone tool and library for parsing and generating.
O(100) = O(1), fyi
(The set of functions which are asymptotically bounded by a straight line of any gradient.)
s/something like O(100)/on the order of 100/
You’re certainly correct.
On the other hand, O(x) also colloquially means “on the order of”, to estimate a quantity to some number as a lowest upper bound. I suppose writing 10 < x < 100 would be more accurate, or x+ = 100, or some other notation that expresses bounds. Obviously he is not talking about the complexity of an algorithm here, so we all understood what he was saying even if he “misused” the notation.
Pieter Hintjens alluded to this pattern in his Cheap and Nasty protocol design article: you want either something easy and simple, or brutally tuned for sheer throughput. Trying to mix them or going straight to Nasty without a good reason is a recipe for pain.
I imagine these days using zstd instead of gzip would widen the gap even further.
You presume that the other protocols have no compression?
It depends on the specifics of the payload, but in general, it’s less impactful than you might think. The difference to gzip is typically O(1-10%), which is usually lost in the noise, especially if you’re using HTTP/2. (Like you should be!)
Why? I’ve seen no difference between http 1.1 and 2 in practice on any of my cases.
Basically, it’s about request multiplexing over connections.
HTTP/1 connections serve one request at a time, which means N concurrent requests require N active connections. But if every active request to a server requires a unique connection, then connection overhead (in the broadest sense) quickly becomes the bottleneck in the system. What you want instead is to mux arbitrarily many logical requests over a single physical connection. This is what HTTP/2 gives you. Makes a huge difference in high-RPS systems.
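As an illustration, not from the comment: Go’s net/http negotiates HTTP/2 automatically over TLS, so concurrent requests from one client share a single TCP connection instead of needing one connection each. A sketch with a placeholder URL:

    package main

    import (
        "fmt"
        "net/http"
        "sync"
    )

    func main() {
        // One client, one transport: with HTTP/2, all of these concurrent
        // requests are muxed over a single TCP connection, rather than
        // ~100 connections as with HTTP/1.
        client := &http.Client{}
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                resp, err := client.Get("https://example.com/") // placeholder URL
                if err != nil {
                    return
                }
                defer resp.Body.Close()
                fmt.Println(resp.Proto) // "HTTP/2.0" when negotiated
            }()
        }
        wg.Wait()
    }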
HTTP/2 is an improvement, but muxing over a single tcp connection is still problematic, as HTTP/2 still exhibits head-of-line blocking at the TCP layer – I imagine in really high RPS systems it would still be suboptimal. HTTP/3 should provide a big improvement in that regard, though.
Yep! Although it’s worth observing that fixing request-per-connection (by moving from HTTP/1 to HTTP/2) is going to deliver performance benefits that are something like an order of magnitude greater than the benefits from fixing TCP head-of-line blocking (by moving from HTTP/2 to HTTP/3) in the nominal case. Both are worth doing! It’s just the classic situation of diminishing returns.
I’ve been keeping my eye on Chimera. I am a big fan[1] of Alpine, and Dinit seems like a very nice choice as well – I wish Alpine had picked it as their new future[2] init system.
Looking forward to trying Chimera out on a server once I get some time.
[1]: I use it on my servers as well as a desktop.
[2]: https://ariadne.space/2021/03/25/lets-build-a-new-service-manager-for-alpine/
I love Void and Alpine Linux in concept, and it sounds like I might love this as well if only for how weird it is… But every time I try to use them for real I just keep banging my head into how much config BS is left up to the user. Debian has just spoiled me too much with the amount of little Mystery Edge Cases it solves for you: if you install a program, it has all the correct paths, config file locations, etc for running on a Debian system. If you install a program that is a service, such as a database or server, then it starts up after install, is enabled so it starts on each boot, and has sane defaults that lock it down from external access. If you install a server that provides stuff for local use like pulseaudio or Xorg, it generally turns itself on and integrates nicely with whatever else you have installed so you don’t need to wire up all the horrible edge-cases by hand.
Spoiled, I know. But my day job is wiring up all the horrible edge-cases of various programs by hand and then packaging the results for other people to use, so I suppose I can’t be arsed to do it for fun.
Hell no, this is exactly why you roll a distribution and not some hand selected binaries. Good defaults by people who had more time to select what makes sense.
Have never understood this perspective of Debian. It’s never seemed to solve edge cases for me, only create them - so many programs modified in completely unexpected and often poorly documented ways that I’ve had to fight with just to get expected behaviour. Void Linux is probably the best Linux I have used (sans NixOS but that doesn’t count so much, it’s too different).
Hence why different distros exist. Solve different problems for different people!
If you install a program that is a service, such as a database or server, then it starts up after install, is enabled so it starts on each boot…
Opinions are funny… The “auto-start after install” is one of my least favorite things about debian. It seems so ridiculous to me that something would start immediately after I install it, before I have even managed to configure it or modify the defaults!
I sympathize! But the defaults are always good, is the thing. A lot of the common stuff doesn’t need configuring (pulseaudio, Xorg), and a lot of the less-common stuff (nginx, postfix) still has sane defaults so you can see that it works at all before tinkering with it.
I guess I have run into too many cases where I have found the defaults irritating (not always good for me, apparently), requiring me to stop the service, configure, clean-up after the defaults (eg. some default database having been created in a place I didn’t want), configure the service, then start it… and if I want to “try a service out”, I can just start it myself.
I have run into many cases over the years of that where “auto start on install” has been problematic and/or annoying.
Luckily I have the choice to just not use debian on any of my “pet” servers[1]!
[1]: This is less of an issue on “cattle” servers, as the configuration there is typically put in place before the package is installed. But startup and run-at-boot are also configured by the same mechanism there, so auto-start is fairly pointless there too.
I have run into many cases over the years of that where “auto start on install” has been problematic and/or annoying.
One such example I have seen recently is glances. Someone installed it on our servers for its useful interface, not realizing that it would also activate a service running a server mode.
Of course the Debian developers have taken the precaution of binding it to localhost, so it was only annoying. But since this is done by hardcoding the IP address in a trivial systemd service file, I am not sure what the value is anyway.
I used Void for a headless build/test machine at a database startup. At the time I had no good reason to install it, I just kinda felt like it. But for building and performance testing it was pretty nice. There were only 7-8 userspace processes running at a time, including my sshd session subprocess, which was surreal. Void really nailed that use case where I specifically wanted my machine doing nothing except handling syscalls and configuring nothing except SSH.
I did have to patch the openssh package to allow building with GSSAPI support once we started using kerberos for git, but that turned out to be pleasantly easy.
I definitely would avoid it for anything production, internet facing, or as a daily driver. It was great for doing exactly what I wanted, but probably only because I didn’t want to do anything interesting.
That’s not a bad idea. Every time I touch Void I’m impressed by how much it doesn’t do. Then I want to use it as a desktop, and so it needs to be running dbus, a sound server, NetworkManager, all that nonsense, and so I have to set all that stuff up by hand. I like my minimalism, but I also like getting shit done and not having to jump through hoops to connect to a coffee shop’s wifi and listen to music on my bluetooth headphones.
I use an SSH terminal server pretty darn frequently though, I might try slapping Void onto it and seeing how it feels. Where did I leave my Raspberry Pi…
on chimera these are pretty much just dinitctl enable networkmanager as root and dinitctl enable wireplumber; dinitctl enable pipewire-pulse as user and that’s… about it (no manual dbus setup, no manual soundserver setup stuff that is not plain basic service enablement, etc)
i don’t consider simplicity to be an excuse for laziness, so you’ll find chimera workflows to be a lot more methodical and less ad-hoc than void/alpine (unhappiness with that was one of the major reasons why i started the project in the first place), and built-in first-class support for user services and login session tracking helps too, as well as being more opinionated (i found the zen of python to be a major inspiration there, particularly “there should be one obvious way to do it” and “simple is better than complex, but complex is better than complicated”)
This looks great. All it is missing right now for my current use-case is log tail support (which indeed appears to be on the roadmap).
Yup, log tail is going to be added in the future according to the roadmap. It will be provided as a separate HTTP API endpoint like /select/logsql/tail, which accepts a query argument with the needed filters and returns newly ingested logs matching the given filters in a streaming manner until the client closes the connection to the server.
Don’t use it as root filesystem on Linux (you can, it apparently takes a bit of work to get it to cooperate with GRUB, it probably isn’t worth the trouble)
I use refind, not grub, but I had no trouble using zfs on root. I put the kernel on the esp, rather than the root, so the bootloader does not need to speak zfs.
zfsbootmenu is also a viable alternative for zfs as a root linux filesystem, without having to deal with the grub nonsense[¹].
[¹]: Grub’s lack of support for many zfs features, forcing you to have a separate boot pool with features disabled.
Don’t use it as swap (you can, it probably won’t work too well)
As others have pointed out, ZFS can allocate memory during a write. At least on FreeBSD (I believe it’s generic OpenZFS code), there is some special hackery that attempts to pre-allocate enough memory to handle a swap transaction but there remain some corner cases where this does not work. In particular, with a normal swap file you typically reserve space on the disk (or, at least, reserve inodes and can just pull blocks off a free list), whereas ZFS has a much more complex notion of allocation.
The root problem is that the ZFS layer doesn’t know that it’s involved in swapping. There’s already some complication from the fact that ZFS’s ARC and the buffer cache are distinct (on FreeBSD, the buffer cache had to learn that some pages were externally owned so that you didn’t end up with two copies of every disk page, one in the buffer cache and one in ARC, not sure how Linux handles this, perhaps it already had an analogous mechanism). Ideally, you’d want ZFS to know that some transactions are from swap, to prioritise these, and to be willing to evict clean pages from ARC to reclaim memory that it needs to handle the swap transactions.
I used swap on ZFS for years without problems, but on more recent installs I’ve just carved out a chunk of space at the start of each disk for swap.
Don’t use it as swap (you can, it probably won’t work too well)
Dunno, just saw docs warning against it. Particularly on the Arch wiki:
On systems with extremely high memory pressure, using a zvol for swap can result in lockup, regardless of how much swap is still available. This issue is currently being investigated in OpenZFS issue #7734
YMMV but to me that sort of thing says “don’t bother”.
Don’t use it as swap (you can, it probably won’t work too well)
Why would it not work well?
Using ZFS for swap is a bad idea because ZFS may attempt to allocate memory during a write, so you can end up with a deadlock.
See: https://github.com/openzfs/zfs/issues/7734
My knowledge is about a decade old, but, from what I remember:
Putting a swap file on ZFS can break things messily because the kernel expects to read and write the file in a fixed location and not have it jump around because of copy-on-write.
Putting a swap partition on a ZVOL was… I think it was less bad but maybe somewhat pointless and risking bad interactions if the kernel decided to swap some of ZFS’s memory.
Swap usage by the kernel is basically “write this block ASAP to free up the ram”. On a direct device, that’s just writing that block. On any filesystem, it goes through finding the right area, potentially allocating a new extent, maybe checksumming, maybe replication, etc.
You’re adding work which is unnecessary and doesn’t provide any features you care about.
Regardless, Safari 16.4 is astonishingly dense with delayed features, inadvertently emphasising just how far behind WebKit has remained for many years and how effective the Blink Launch Process has been in allowing Chromium to ship responsibly while consensus was withheld in standards by Apple. It simultaneously shows how effective the requirements of that process have been in accelerating catch-up implementations. By mandating proof of developer enthusiasm for features, extensive test suites, and accurate specifications, the catch-up process has been put on rails for Apple. The intentional, responsible leadership of Blink was no accident, but to see it rewarded so definitively is gratifying.
I found this pretty boggle-worthy - the repeated use of the word “responsible”, painting blink/chrome as such a bastion of good internet citizenship while pushing their own standards, several of which have serious end user privacy concerns (web usb, etc). I mean, how dare everyone else not ship chrome’s de facto (“we have market share, so we can make our own standards”) standards right away?!
After clicking “about” on the blog it makes a bit more sense how/why the author might make such an assertion.
Yeah. I also note that the “delayed features” that are supposedly holding back the “Open Web” are on average things that landed in Chrome in 2018-2019. It’s as if the browser-pushers want us to think that the web of 2018 is obviously intolerably backward and unusable. Bro, please.
It may have snuck up on you, but 2018 was five years ago. I would count a feature that fails to land on major platforms for that long as effectively dead. It’s not that 2018 was the stone ages, but you don’t want to be in 2018 forever.
Other than security upgrades and fixes, why not? What have we gained in the 5 years since that hasn’t primarily been in service of companies like Google extending their data gathering / advertising pimpage?
A lot of mobile web apps are DOA without the Push API which shipped in Chrome in 2015 and Firefox in 2016. They are a big part of this Safari 16.4 release.
Chrome may be abusing its dominant status to push de facto non-standards, but Safari is actually very behind.
A royalty-free video codec that isn’t ancient and doesn’t suck. HTTP3, which makes a real difference to performance, especially on iffy connections. Lazy loading hints so that heavy content doesn’t have to be loaded unless the user is actually going to see it (without lag-inducing JS hacks). Motion sensing support for phones (or whatever devices have gyros/accelerometers). Tools for making layouts that aren’t ass-backwards when the content is in an RTL language. Some more stuff in general for making pages that look nice even without a gigabyte of tool-generated CSS and JS.
And of course, every millennial’s favorite, the ability for a media query to check whether the system-wide “dark mode” is enabled.
One of the most important things about an open ecosystem which ties in closely to responsibility is sustainability. The level of resources Google puts into adding features into Chromium is nigh-impossible to match sustainably. This isn’t to say that Apple doesn’t have a) the resources, or b) reasons to de-prioritise features, especially ones that duplicate native application functionality to their lucrative App Store, but even MS pulled out of this game with a reasonably competitive modern browser engine in EdgeHTML.
Given how much sway MS/Google have over the browser market with their position in WHATWG and Chromium/Blink’s market share, it’s not really much better than the “dictated standards” of the past, like PDF or OOXML - having an open spec to meet regulatory requirements many governments have about “open data standards”, but driven entirely by stakeholders who effectively control that ecosystem.
It’s the old “Fire and Motion” strategy that Joel Spolsky used to write about in the context of Microsoft. Google can simply fire bursts of new “standard” features at everyone else, and then blame competitors for being unable to keep up. And developers will happily take up the chant of “Safari is the new IE! The new IE! The new IE!” despite the fact that it’s Google and Chrome using the old Microsoft tactics.
I didn’t, and it’s way off-topic for here, but maybe someone in this thread knows:
I have been looking for a bean-to-cup filter machine that has a burr grinder and drips into an insulated (not heated) jug. So far, I have found precisely one such machine to exist, it doesn’t ship outside the USA, and reviews indicate it often breaks after 6 months. Has anyone heard of such a thing being mass produced? Or a kickstarter or similar for one.
Messy desk and desktop is just default KDE - my main interests lie in the terminal emulator anyway. The only mildly interesting thing is an e-ink monitor and the monochrome setup for it.
Thanks for the write-up of the monitor! I am still on the fence about whether to spend the money to try it, so reading about the experiences of others is helpful.
I always wonder how people can work with a desk that is not height-adjustable? You tune your chair following ergonomic guidelines and then the table/keyboard is too high/low to preserve a good 90 degree angle and then what?
(Admittedly, I have worked with a non-adjustable desk and even chair when I was younger, and I have come to regret it.)
I adjust the chair and my elbows lie on its armrests, yeah. I’m of generic height, so it has somehow still worked for me with generic desks, though I’m tempted to get an adjustable one every now and then.
What keyboard is that in the e-ink monitor image? I’ve been looking for a low profile (choc or similar) split keyboard for a while, and haven’t run across anything that I wouldn’t have had to self-assemble (not really interested in doing that).
I’m also using an e-ink monitor. For those of you interested, it can be seen in action. The videos are boring, but in part 3 you can see how the display works.
I’m currently alternating between a normal chair and a kneeling chair. I’ve been having a lot of lower back pain lately, and the kneeling chair really helps with that, but it makes my knees and tail bone hurt. Right now I just switch between standing, kneeling and sitting. I write emails standing, and code sitting and kneeling. I was never able to focus on code standing for some reason but I tend to pace when I write emails, so standing for emails works well.
The orange cloth is used to cover my LCD monitor. Sometimes I need color, speed, or just a second monitor. Unfortunately, it takes like 6 seconds for my monitor to turn on. When I need to switch between screens I keep the glowy one covered.
I hope they also add a way to evaluate the f-string bindings at runtime. I keep running into places where I want to define f"{foo}" but have foo not be bound until the f-string is used at runtime. You can’t do that.
I keep running into places where I want to define f"{foo}" but have foo not be bound until the f-string is used at runtime. You can’t do that.
Yeah, as you mentioned there are some ugly workarounds – you can kind of use a lambda and partial as another example (not that I would necessarily recommend it, though):
>>> import functools
>>> thing = lambda _: f"I like {x} {y}"
>>> t = functools.partial(thing, None)
>>> x = "pickles"
>>> y = "on saturdays"
>>> t()
'I like pickles on saturdays'
>>> y = "on sundays"
>>> t()
'I like pickles on sundays'
str.format() is not a workaround; f-strings do exactly what str.format() does, they are just syntactic sugar for it. It worked before and has kept working since: just construct a format string and use it with str.format().
You are wrong. str.format() has very different semantics and inner workings than f-strings. str.format format strings are not even reusable as f-strings. See https://docs.python.org/3/library/string.html#formatstrings. You can’t even eval arbitrary python with str.format; you can only access attributes or index the given parameters.
I might have worded it poorly, but I’m not wrong. What I meant was that after evaluation, f-strings work the same way as str.format(), which I thought not worth mentioning, because obviously evaluation is the whole point of f-strings over str.format().
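For reference, a quick REPL sketch (with made-up values) of the reusable str.format() template being discussed, next to the arbitrary-expression ability that is f-string-only:

>>> template = "I like {x} {y}"          # deferred: a plain, reusable string
>>> template.format(x="pickles", y="on saturdays")
'I like pickles on saturdays'
>>> template.format(x="herring", y="on sundays")
'I like herring on sundays'
>>> x = "pickles"
>>> f"I like {x.upper()}"                # arbitrary expressions only work in f-strings
'I like PICKLES'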
The syntax should be friendly to hard-wrapping: hard-wrapping a paragraph should not lead to different interpretations, e.g. when a number followed by a period ends up at the beginning of a line.
This then mandates several annoying rules, like requiring a blank line between a paragraph and the start of a list.
(I anticipate that many will ask, why hard-wrap at all? Answer: so that your document is readable just as it is, without conversion to HTML and without special editor modes that soft-wrap long lines.)
But these “special editor modes” are ubiquitous and easy; why bend over backwards to avoid using them? Consider:
Problem: I’m viewing markup and the paragraphs all run off the right edge of the screen.
Solution 1: Invoke the Word-Wrap command.
Solution 2: Add newlines to make the paragraphs wrap. After editing text, shuffle the newlines around to fix the wrapping (or use a command for that, which not every editor has.) If you happen to open the file in a narrower window or need to shrink the window to make room, rewrap the text. If you’re searching for a phrase, I hope your editor knows to match newlines as spaces. Oh, and make sure to insert blank lines before a list or block quote so those don’t get merged into the paragraph.
This decision prioritizes old-school-leaning coders and people old enough to remember typewriters, at the expense of regular users who take word-wrap for granted. That’s bad, because the latter category far outnumbers the first.
One of the great things about Markdown is that it’s, mostly, intuitive to people who haven’t learned it. Using asterisks or underscores for emphasis, and numbers or asterisks for lists, is common already. Give people Markdown for e.g. blog posts or comments, and it often just works. The changes in this syntax to appease the 80-column-TTY clique come at the expense of that usability.
But these “special editor modes” are ubiquitous and easy
It’s not possible to soft wrap text in vi or (neo)vim, at least not how one would normally expect.
If I want to soft wrap text at an arbitrary length (say 80 characters) while my terminal window is 140 characters in width, it’s not possible to do that. I could resize my terminal window, use set columns=80, or use plugins like goyo.vim but that essentially disables half of my screen and I can’t use it to make splits when I want to.
I’m sad that your preferred editor doesn’t have such basic functionality, but maybe it would be better for someone to fix that, than for markup languages to contort their syntax to accommodate this limitation.
From what I see, it is not that common to have the ability to soft-wrap at a given column. In most cases, what I have seen is wrapping at the window boundary.
Additionally, I like to use a style with semantic line breaks (for example, each sentence on a separate line) in longer texts that I write, which then allows me to have more sensible diffs of such documents.
" set overall column width
set columns=90
" For all filetype text files set 'textwidth' to 78 characters and add an 85 column highlight line
autocmd FileType text setlocal textwidth=78 colorcolumn=85
I used to hard-wrap all my markdown entries until I was introduced to Emacs’ visual-line-mode after griping about the lack of hardwrap support in gemtext. Now I just don’t bother justifying text via hard-wrapping.
My point is that standard Markdown has no problems with hard-wrapped input.
Very cool updates in std.crypto. Thanks Frank, et al.!
Why would it not work well?
Using ZFS for swap is a bad idea because ZFS may attempt to allocate memory during a write. So you can end up with a deadlock: the kernel swaps out to free memory, and ZFS needs to allocate memory to complete that very write. See: https://github.com/openzfs/zfs/issues/7734
ZFSBootMenu is also a viable alternative for ZFS as a root Linux filesystem, without having to deal with the GRUB nonsense[¹].
[¹]: GRUB’s lack of support for many ZFS features forces you to keep a separate boot pool with those features disabled.
As others have pointed out, ZFS can allocate memory during a write. At least on FreeBSD (I believe it’s generic OpenZFS code), there is some special hackery that attempts to pre-allocate enough memory to handle a swap transaction but there remain some corner cases where this does not work. In particular, with a normal swap file you typically reserve space on the disk (or, at least, reserve inodes and can just pull blocks off a free list), whereas ZFS has a much more complex notion of allocation.
The root problem is that the ZFS layer doesn’t know that it’s involved in swapping. There’s already some complication from the fact that ZFS’s ARC and the buffer cache are distinct (on FreeBSD, the buffer cache had to learn that some pages were externally owned so that you didn’t end up with two copies of every disk page, one in the buffer cache and one in ARC; I’m not sure how Linux handles this, perhaps it already had an analogous mechanism). Ideally, you’d want ZFS to know that some transactions are from swap, to prioritise them, and to be willing to evict clean pages from ARC to reclaim memory that it needs to handle the swap transactions.
I used swap on ZFS for years without problems, but on more recent installs I’ve just carved out a chunk of space at the start of each disk for swap.
Dunno, just saw docs warning against it, particularly on the Arch wiki. YMMV, but to me that sort of thing says “don’t bother”.
My knowledge is about a decade old, but, from what I remember:
Putting a swap file on ZFS can break things messily because the kernel expects to read and write the file in a fixed location and not have it jump around because of copy-on-write.
Putting a swap partition on a ZVOL was… I think it was less bad, but somewhat pointless, and it risked bad interactions if the kernel decided to swap out some of ZFS’s own memory.
Swap usage by the kernel is basically “write this block ASAP to free up the RAM”. On a direct device, that’s just writing that block. On any filesystem, it means finding the right area, potentially allocating a new extent, maybe checksumming, maybe replicating, etc.
You’re adding work which is unnecessary and doesn’t provide any features you care about.
This seems like a great change to address a very common Go pitfall.
This is so incredibly nerdy. I love it!
Also a big fan of Initial D. Very clever!
Yessss!! Thank you! I want to make a promo video with a great Euro Beat track 😆
I found this pretty boggle-worthy - the repeated use of the word “responsible”, painting blink/chrome as such a bastion of good internet citizenship while pushing their own standards, several of which have serious end user privacy concerns (web usb, etc). I mean, how dare everyone else not ship chrome’s de facto (“we have market share, so we can make our own standards”) standards right away?!
After clicking “about” on the blog it makes a bit more sense how/why the author might make such an assertion.
Yeah. I also note that the “delayed features” that are supposedly holding back the “Open Web” are on average things that landed in Chrome in 2018-2019. It’s as if the browser-pushers want us to think that the web of 2018 is obviously intolerably backward and unusable. Bro, please.
It may have snuck up on you, but 2018 was five years ago. I would count a feature that fails to land on major platforms for that long as effectively dead. It’s not that 2018 was the stone ages, but you don’t want to be in 2018 forever.
Other than security upgrades and fixes, why not? What have we gained in the 5 years since that hasn’t primarily been in service of companies like Google extending their data gathering / advertising pimpage?
WASM threads were released in Chrome in 2019; I would guess many other improvements to WASM have shipped as well.
So… nothing then.
A lot of mobile web apps are DOA without the Push API, which shipped in Chrome in 2015 and Firefox in 2016. It is a big part of this Safari 16.4 release.
Chrome may be abusing its dominant status to push de facto non-standards, but Safari is actually very behind.
The push API is a terrible idea, it sucks hard, and I’m glad to see it fail.
A royalty-free video codec that isn’t ancient and doesn’t suck. HTTP/3, which makes a real difference to performance, especially on iffy connections. Lazy-loading hints so that heavy content doesn’t have to be loaded unless the user is actually going to see it (without lag-inducing JS hacks). Motion-sensing support for phones (or whatever devices have gyros/accelerometers). Tools for making layouts that aren’t ass-backwards when the content is in an RTL language. Some more stuff in general for making pages that look nice even without a gigabyte of tool-generated CSS and JS.
And of course, every millennial’s favorite, the ability for a media query to check whether the system-wide “dark mode” is enabled.
One of the most important things about an open ecosystem, which ties in closely to responsibility, is sustainability. The level of resources Google puts into adding features to Chromium is nigh-impossible to match sustainably. This isn’t to say that Apple doesn’t have a) the resources, or b) reasons to de-prioritise features, especially ones that duplicate native-application functionality from its lucrative App Store, but even MS, with a reasonably competitive modern browser engine in EdgeHTML, pulled out of this game.
Given how much sway MS/Google have over the browser market with their position in WHATWG and Chromium/Blink’s market share, it’s not really much better than the “dictated standards” of the past, like PDF or OOXML - having an open spec to meet regulatory requirements many governments have about “open data standards”, but driven entirely by stakeholders who effectively control that ecosystem.
It’s the old “Fire and Motion” strategy that Joel Spolsky used to write about in the context of Microsoft. Google can simply fire bursts of new “standard” features at everyone else, and then blame competitors for being unable to keep up. And developers will happily take up the chant of “Safari is the new IE! The new IE! The new IE!” despite the fact that it’s Google and Chrome using the old Microsoft tactics.
Desk and desktop
I see you’re into coffee? I noticed the Wacaco sticker.
https://www.home-barista.com/ is the lobste.rs for Coffee.
PS: You might already know about it, but others here might not :-)
Yes, I love coffee and everything about home-made espresso.
Wow, thanks for sharing, I did not know!
I didn’t, and it’s way off-topic for here, but maybe someone in this thread knows:
I have been looking for a bean-to-cup filter machine that has a burr grinder and drips into an insulated (not heated) jug. So far, I have found precisely one such machine to exist; it doesn’t ship outside the USA, and reviews indicate it often breaks after 6 months. Has anyone heard of such a thing being mass-produced? Or a Kickstarter or similar for one?
What type of keyboard is it?
Looks like NuPhy.
Yep! Great keyboard!
Looks nice indeed! I’m going to pick one up and give it a try.
How do you get the windows to organize that way, with the margin between them? Is that a Rectangle config I’m unaware of?
Yep, that’s correct. You can do it through the UI or set it through JSON. Here are my settings -> https://git.0x7f.dev/andreicek/dotfiles/src/branch/master/rectangle/RectangleConfig.json
Thanks! This changes everything for me. :D
Same! I’ve been wanting to do this lately. I didn’t know you could do it with Rectangle itself and thought I’d have to do it with Hammerspoon.
Messy desk and desktop is just default KDE - my main interests lie in the terminal emulator anyway. The only mildly interesting thing is an e-ink monitor and the monochrome setup for it.
Thanks for the write-up of the monitor! I am still on the fence about whether to spend the money to try it, so reading about the experiences of others is helpful.
That e-ink monitor looks really cool!
I always wonder how people can work at a desk that is not height-adjustable. You tune your chair following ergonomic guidelines, and then the table/keyboard is too high or too low to preserve a good 90-degree angle, and then what?
(Admittedly, I have worked with a non-adjustable desk and even chair when I was younger, and I have come to regret it.)
I adjust the chair and rest my elbows on its armrests, yeah. I’m of average height, so it has somehow still worked for me with generic desks, though I’m tempted to get an adjustable one every now and then.
What keyboard is that in the e-ink monitor image? I’ve been looking for a low profile (choc or similar) split keyboard for a while, and haven’t run across anything that I wouldn’t have had to self-assemble (not really interested in doing that).
Mistel Barocco MD650L - an old model full of mini- and micro-USB connectors :) I think they’ve got newer ones with USB-C by now.
I’m also using an e-ink monitor. For those of you interested, it can be seen in action. The videos are boring, but in part 3 you can see how the display works.
desk
I’m currently alternating between a normal chair and a kneeling chair. I’ve been having a lot of lower back pain lately, and the kneeling chair really helps with that, but it makes my knees and tailbone hurt. Right now I just switch between standing, kneeling, and sitting. I write emails standing, and code sitting and kneeling. I was never able to focus on code standing for some reason, but I tend to pace when I write emails, so standing for emails works well.
The orange cloth is used to cover my LCD monitor. Sometimes I need color, speed, or just a second monitor. Unfortunately, it takes like 6 seconds for the LCD to turn on, so rather than switching it off, I keep the glowy one covered until I need to switch screens.
Upvote for the Steam Deck.
I hope they also add a way to evaluate the f-string bindings at runtime. I keep running into places where I want to define `f"{foo}"` but have `foo` not be bound until the f-string is used at runtime. You can’t do that.
There’s a bunch of workarounds. Using `str.format()` is probably the best, but the language `format()` understands is different from f-strings’ templates. You can get real f-strings, but only by doing `eval` or `inspect` hacks; discussion here: https://stackoverflow.com/questions/42497625/how-to-postpone-defer-the-evaluation-of-f-strings
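For instance, deferred rendering via str.format() might look like this minimal sketch (the names here are made up for illustration):

# The template is a plain string; nothing is evaluated at definition time.
template = "Hello, {name}! You have {count} new messages."

# ... later, once the values are actually known:
print(template.format(name="Ada", count=3))
# -> Hello, Ada! You have 3 new messages.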
Yeah, as you mentioned there are some ugly workarounds – you can kind of use a lambda and partial as another example (not that I would necessarily recommend it, though):
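Something along these lines, as a sketch with made-up names:

from functools import partial

# The f-string body isn't evaluated until the lambda is called,
# so greeting and name can be bound much later.
template = lambda greeting, name: f"{greeting}, {name}!"

# partial() pins down some bindings now and leaves the rest for later.
hello = partial(template, "Hello")
print(hello("world"))            # -> Hello, world!
print(template("Hi", "there"))   # -> Hi, there!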
str.format() is not a workaround; f-strings do exactly what str.format() does, they’re just syntactic sugar for it. It worked before and has kept working since to just construct a format string and use it with str.format().
You are wrong. `str.format()` has very different semantics and inner workings than f-strings. `str.format` format strings are not even reusable as f-strings. See https://docs.python.org/3/library/string.html#formatstrings. You can’t even eval arbitrary Python with `str.format`, only access attributes or indices of the given parameters.
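A quick sketch of that difference:

x = 3

# An f-string evaluates arbitrary Python at the point of definition.
print(f"{x * 2 + 1}")  # -> 7

# str.format() only does name lookup plus attribute/index access,
# so the same template text is rejected with a KeyError.
try:
    print("{x * 2 + 1}".format(x=x))
except KeyError as err:
    print("str.format rejects expressions:", err)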
I might have worded it poorly, but I’m not wrong. What I meant was that after evaluating the embedded expressions, f-strings work the same way as str.format(), which I thought not worth mentioning, because obviously evaluation is the whole point of f-strings over str.format().
I was excited by this until I got to
This then mandates several annoying rules, like requiring a blank line between a paragraph and the start of a list.
But these “special editor modes” are ubiquitous and easy; why bend over backwards to avoid using them? Consider:
Problem: I’m viewing markup and the paragraphs all run off the right edge of the screen. Solution: turn on the editor’s word-wrap mode and move on.
This decision prioritizes old-school-leaning coders and people old enough to remember typewriters, at the expense of regular users who take word-wrap for granted. That’s bad, because the latter category far outnumbers the first.
One of the great things about Markdown is that it’s, mostly, intuitive to people who haven’t learned it. Using asterisks or underscores for emphasis, and numbers or asterisks for lists, is common already. Give people Markdown for e.g. blog posts or comments, and it often just works. The changes in this syntax to appease the 80-column-TTY clique come at the expense of that usability.
As someone who loves 80-column-wide blocks of text, I just pipe stuff through `fmt` if I have to.
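In the same spirit, a rough Python equivalent of piping through fmt(1), just as a sketch:

import textwrap

# Reflow a paragraph to 80 columns, similar to what fmt(1) does.
paragraph = (
    "One very long paragraph of Markdown prose that would otherwise "
    "run far past the right edge of the screen in a plain editor."
)
print(textwrap.fill(paragraph, width=80))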
It’s not possible to soft-wrap text in vi or (neo)vim, at least not in the way one would normally expect.
If I want to soft-wrap text at an arbitrary length (say 80 characters) while my terminal window is 140 characters wide, it’s not possible to do that. I could resize my terminal window, use `set columns=80`, or use plugins like goyo.vim, but that essentially disables half of my screen and I can’t use it to make splits when I want to.
I’m sad that your preferred editor doesn’t have such basic functionality, but maybe it would be better for someone to fix that than for markup languages to contort their syntax to accommodate this limitation.
From what I’ve seen, it’s not that common to have the ability to soft-wrap at a given column; in most cases editors wrap at the window boundary.
Additionally, I like to use a style with semantic line breaks (for example, each sentence on a separate line) in longer texts that I write, which then gives me more sensible diffs of such documents.
I thought this was what “textwidth” was for?
I used to hard-wrap all my Markdown entries until I was introduced to Emacs’ `visual-line-mode` after griping about the lack of hard-wrap support in gemtext. Now I just don’t bother justifying text via hard-wrapping.
My point is that standard Markdown has no problems with hard-wrapped input.