I think it’s true that, historically, Haskell hasn’t been used as much for open source work as you might expect given the quality of the language. I think there are a few factors that are in play here, but the dominant one is simply that the open source projects that take off tend to be ones that a lot of people are interested in and/or contribute to. Haskell has, historically, struggled with a steep on-ramp and that means that the people who persevered and learned the language well enough to build things with it were self-selected to be the sorts of people who were highly motivated to work on Haskell and its ecosystem, but it was less appealing if your goals were to do something else and get that done quickly. It’s rare for Haskell to be the only language that someone knows, so even among Haskell developers I think it’s been common to pick a different language if the goal is to get a lot of community involvement in a project.
All that said, I think things are shifting. The Haskell community is starting to think earnestly about broadening adoption and making the language more appealing to a wider variety of developers. There are a lot of problems where Haskell makes a lot of sense, and we just need to see the friction for picking it reduced in order for the adoption to pick up. In that sense, the fact that many other languages are starting to add some things that are heavily inspired by Haskell makes Haskell itself more appealing, because more of the language is going to look familiar and that’s going to make it more accessible to people.
There are tons of tools written in Rust you can name
I can’t think of anything off the dome except ripgrep. I’m sure I could do some research and find a few, but I’m sure that’s also the case for Haskell.
You’ve probably heard of Firefox and maybe also Deno. When you look through the GitHub Rust repos by stars, there are a bunch of ls clones weirdly, lol.
Agree … and finance and functional languages seem to have a connection empirically:
OCaml and Jane St (they strongly advocate it, mostly rejecting polyglot approaches, doing almost everything within OCaml)
the South American bank that bought the company behind Clojure
I think it’s obviously the domain … there is simply a lot of “purely functional” logic in finance.
Implementing languages and particularly compilers is another place where that’s true, which the blog post mentions. But I’d say that isn’t true for most domains.
BTW git annex appears to be written in Haskell. However my experience with it is mixed. It feels like git itself is more reliable and it’s written in C/Perl/Shell. I think the dominating factor is just the number and skill of developers, not the language.
OCaml also has a range of more or less (or once) popular non-fintech, non-compiler tools written in it. LiquidSoap, MLDonkey, Unison file synchronizer, 0install, the original PGP key server…
I think the connection with finance is that making mistakes in automated finance is actually very costly on expectation, whereas making mistakes in a social network or something is typically not very expensive.
Not being popular is not the same as being “ineffective”. Likewise, something can be “effective”, but not popular.
Is JavaScript a super effective language? Is C?
Without going too far down the language holy war rabbit hole, my overall feeling after so many years is that programming language popularity, in general, fits a “worse is better” characterization where the languages that I, personally, feel are the most bug-prone, poorly designed, etc, are the most popular. Nobody has to agree with me, but for the sake of transparency, I’m thinking of PHP, C, JavaScript, Python, and Java when I write that. Languages that are probably pretty good/powerful/good-at-preventing-bugs are things like Haskell, Rust, Clojure, Elixir.
In the past, a lot of the reason I’ve seen people being turned away from using Haskell-based tools has been the perceived pain of installing GHC, which admittedly is quite large, and it can sometimes be a pain to figure out which version you need. ghcup has improved that situation quite a lot by making the process of installing and managing old compilers significantly easier. There’s still an argument that GHC is massive, which it is, but storage is pretty cheap these days. For some reason I’ve never seen people make similar complaints about needing to install multiple versions of Python (though this is less of an issue these days).
The other place where large Haskell codebases are locked up is Facebook - Sigma processes every single post, comment and message for spam, at 2,000,000 req/sec, and is all written in Haskell. Luckily the underlying tech, Haxl, is open source - though few people seem to have found a particularly good use for it; you really need to be working at quite a large scale to benefit from it.
I am working on incorporating feedback from last week’s beta program for the Berkeley Mono typeface. Thanks to everyone who participated. Here is the stack for the ecommerce site and details about the typeface [1]. Server side processing like it’s 2002!
You can also use a perceptually linear colorspace to generate visually pleasing color palettes. I wrote a library a few years ago for the Processing framework: https://github.com/neilpanchal/Chroma
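For the curious, the core idea behind a perceptually uniform palette can be sketched in a few lines: step the hue in CIE LCh at fixed lightness and chroma, then convert Lab to sRGB. This is not Chroma’s API, just a minimal stdlib-only sketch; the constants are the standard D65 ones, and the gamut clipping is naive.

    import math

    def lab_to_rgb(L, a, b):
        # CIE Lab -> XYZ (D65 white point)
        fy = (L + 16) / 116
        fx, fz = fy + a / 500, fy - b / 200
        d = 6 / 29
        finv = lambda t: t**3 if t > d else 3 * d * d * (t - 4 / 29)
        X, Y, Z = 0.95047 * finv(fx), finv(fy), 1.08883 * finv(fz)
        # XYZ -> linear sRGB, then gamma-encode and clip into gamut
        r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
        g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
        b2 = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
        enc = lambda c: 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
        return tuple(min(1.0, max(0.0, enc(c))) for c in (r, g, b2))

    def palette(n, L=65, C=50):
        # Equal hue steps at fixed L and C are roughly equally distinct to the eye.
        hues = (2 * math.pi * i / n for i in range(n))
        return [lab_to_rgb(L, C * math.cos(h), C * math.sin(h)) for h in hues]

    for rgb in palette(6):
        print("#%02x%02x%02x" % tuple(round(255 * v) for v in rgb))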
I haven’t spent a lot of time thinking about this, but here’s my two cents:
The benchmark produces synthetic files which have low entropy and are thus highly compressible by lz4. This results in abnormal I/O bandwidth figures, i.e., small binary files on disk become big files in RAM.
Can you measure the compressibility of the synthetic files?
Small example: imagine the benchmark tool is creating binary files containing a long chain of 0’s. lz4 can compress this file into a very small file. Real data will almost always have a decent amount of entropy, unless it is already in a compressed file format like most pictures or videos. I think ZFS is intelligent enough that it doesn’t compress high-entropy files.
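If you want to sanity-check that, a rough sketch: compress a sample of the benchmark file and look at the output/input ratio (zlib here as a stdlib stand-in for lz4; the filename is a placeholder):

    import zlib

    def compress_ratio(path, chunk=1 << 20):
        raw = comp = 0
        c = zlib.compressobj(level=1)
        with open(path, "rb") as f:
            while block := f.read(chunk):
                raw += len(block)
                comp += len(c.compress(block))
        comp += len(c.flush())
        return comp / raw

    # ~0.0 for all-zero files, ~1.0 for random or already-compressed data
    print(compress_ratio("testfile.bin"))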
Not an expert, but I do run a lot of ZFS RAIDz2 on NVMe on Linux and have done a fair bit of tuning for it. I don’t know which specific thing is giving you such “impossible” numbers, but I’m happy to suggest a few things that might be in play, and maybe how to even squeeze more out of it! (btw, I don’t mean this to come across as patronising; I’m just writing a few things out for readers that haven’t seen it, or for actual experts to tell me I’m doing it wrong!)
Most of the performance is going to be from the ARC, that is, the memory cache. ZFS will aggressively use RAM for caching (on Linux, by default, the ARC will grow to as much as half the physical RAM). You’ve already seen this in Note #3; reducing the RAM reduces throughput. Incidentally, you can tune how much RAM is used for the ARC with zfs_arc_min and zfs_arc_max (see zfs-module-parameters(5)); you don’t have to reduce the system “physical” RAM (though maybe that was more convenient for you to do).
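If you want to watch the ARC while benchmarking, the counters are exposed as kstats on Linux; here’s a quick sketch reading the current size and ceiling (standard OpenZFS paths, two header lines followed by name/type/data columns):

    stats = {}
    with open("/proc/spl/kstat/zfs/arcstats") as f:
        for line in f.readlines()[2:]:
            name, _type, value = line.split()
            stats[name] = int(value)

    print(f"ARC size:  {stats['size'] / 2**30:.1f} GiB")
    print(f"ARC c_max: {stats['c_max'] / 2**30:.1f} GiB")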
Compression gets ZFS a huge amount of throughput, because it’s faster to do a smaller read and decompress it than wait for the IO (turning compression off can actually make things slower, not faster, because it has to hit the disk more). Compression is block level, and as a special case, all-zero blocks are not even written - the block header has a special case that says “this is all-zeroes, length XXX” that ZFS just inflates. Finally, turning off compression doesn’t change the compression state of already-written blocks, so if you’re benchmarking on data that already exists, you’ll need to rewrite it to really “uncompress” it.
In a RAIDzX, data is striped across multiple devices, and reads can be issued in parallel to get bits of the file back and then reassemble them in memory. You have 32 lanes, so you’re probably right in saying you’re not saturating the PCI bandwidth. You’re almost certainly getting stuff as fast as the drives can give it to you.
You’re using 512B blocks. Most current NVMe is running 4K blocks internally. The drive firmware will likely be loading the full 4K block and returning a 512B chunk of it to the system, and keeping the rest of the block cached in its own memory. In sequential reads that’s going to mean almost always 7 out of 8 blocks are going to be served from the drive’s own cache memory, before touching the actual flash cells. (This is well worth tuning, by the way - use flashbench to determine the internal block size of your drive, and then find out how to do a low-level format for your device to switch it to its native block size. Along with an appropriate ashift for your pool, it will let ZFS and the Linux block layer deal in the drive’s native block size all the way through the stack, without ever having to split or join blocks).
ZFS will use a variable blocksize, by default growing blocks as large as 128K. When reading, it will request the entire logical block, composed of multiple physical blocks, from the block layer. If they’re stored sequentially, that can translate to single “range request” on the PCI bus, which may get coalesced into an even larger range, which the drive may be able to service entirely with parallel fetches against multiple flash cells internally.
Not sure which version of ZFS you’re using, but versions before 2.0.6 or 2.1.0 have a serious performance bottleneck on wide low-latency vdevs (GH#12121, GH#12212).
In my experience though, and yours too, ZFS performance is pretty good out of the box. Enough that even though my workload does some things that are outright hostile to a CoW filesystem, the gains have been so good that it hasn’t yet been worth changing the software.
Great list, that’s almost surely what’s at play here. I don’t think the drive/file system speeds are actually being measured.
Some other performance tuning things to think about with zfs: if you have a fast slog vdev you can set sync=always but if your zil is slow you can set sync=disabled to gain a lot of speed at the expense of safety. For some use cases that’s okay.
When I ran mechanical disks I used an Optane drive for my slog vdev and it was so fast I couldn’t measure a performance difference when using sync=always.
I am trying out a few things with fio and will post the results here. There was a suggestion on HN that mirrors what you’re suggesting. I’ll update the article if I find that 157 GB/s is a bogus result.
Edit:
OK folks, party is over. 157 GB/s is a misleading number. fio needs separate files for each thread; otherwise, it will report incorrect bandwidth numbers. See this post; I am in the process of updating the article: https://news.ycombinator.com/item?id=29547346
and then find out how to do a low-level format for your device to switch it to its native block size
What does this mean? I’ve set up aligned partitions and filesystem block sizes (or ashift for ZFS), but I don’t know what a low-level format even means.
All drives (flash and spinners) have a “native” block size. This is the block size that the drive electronics will read from or write to the actual storage media (magnetic platters or flash cells) in a single unit. Sizes vary, but in pretty much every current NVMe SSD the block size is 4KB.
Traditionally though, most drives arrive from the factory set to present a 512B block size to the OS. This is mostly for legacy reasons; back in the mists of time, physical disks blocks were actually 512B, and then the joy of PC backward compatibility means that almost everything ever since starts by pretending to be from 1981, even if that makes no sense anymore.
So, the OS asks the drive what its block size is, and it comes back with 512B. Any upper layers (usually a filesystem, maybe also intermediate layers like cryptoloops) that operate in larger block sizes will eventually submit work to the block layer, and it will then have to split the block into 512B chunks before submitting them to the device.
But, if the device isn’t actually 512B natively, then it has to do more work to get things back into its native block size. Say you write a single 512B block. A drive doing 4K internally will have to fetch the entire 4K block from storage into its memory, update it with the changed 512B, then write it back down. So it’s a bit slower, and for SSDs, doing more writes, so increasing wear.
So what you can do on many drives is a “low-level format”, which is also an old and now meaningless term for setting up the basic drive structure. Among other things, you can change the block size that is exposed to the OS. If you can make it match the native block size, then the drive never has to deal in partial blocks. And if you can set the same block size through the entire stack, then you eliminate partial-block overheads from the entire stack.
I should note here that all this talk of extra reads and writes and wear and whatnot makes it sound like every SSD must be a total piece of crap out of the box, running at glacial pace and wearing itself out while it’s still young. Not so! Drive electronics and firmware are extremely good at minimising the effects of all these, so for most workloads (especially large sequential reads) the difference is barely even measurable.
But if you’re building storage systems that are busy all the time, then there is performance being left on the table, so it can be worth looking at this. My particular workload includes constant I/O of mostly small random reads and writes, so anything extra I can get can help.
I mentioned flashbench before, which is a tool to measure the native block size of a flash drive, since the manufacturer won’t always tell you or might lie about it. It works by reading or writing blocks of different sizes, within and across theoretical block boundaries, and looks at the latency for each operation. For example, you might try to read 4K blocks at 0, 2K, 4K, 6K, etc offsets. If it’s 4K internally, then the drive only has to load a single block at 0, but will have to load two blocks at 2K to cross the block boundary, and this will be visible because it takes just a little longer to do its work. It’s tough to outsmart the drive electronics (for example, current Intel 3DNAND SSDs will do two 4K fetches in parallel, so a naive read of the latency figures can make it look like it actually has an 8K block size internally), but with some thought and care, you can figure it out. Most of the time it is 4K, so you can use that as a starting point.
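To show just the access pattern, here’s a toy sketch of that probing idea. A real measurement (which is what flashbench does) needs O_DIRECT, aligned buffers, and far more statistical care, otherwise the page cache and the drive’s parallelism swamp the signal; the device path and block-size guess here are assumptions.

    import os, statistics, time

    DEV = "/dev/nvme0n1"  # assumed device; reading it requires root
    GUESS = 4096          # candidate internal block size

    def median_read(fd, offset, size=4096, samples=200):
        times = []
        for _ in range(samples):
            t0 = time.perf_counter()
            os.pread(fd, size, offset)
            times.append(time.perf_counter() - t0)
        return statistics.median(times)

    fd = os.open(DEV, os.O_RDONLY)
    within = median_read(fd, 0)           # stays inside one guessed block
    across = median_read(fd, GUESS // 2)  # straddles a guessed boundary
    print(f"within={within:.6f}s across={across:.6f}s")
    os.close(fd)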
On Linux, the nvme list tool can tell you the current block size reported by the drive. Here’s some output for a machine I’m currently right in the middle of reformatting as described above (it was inadvertently introduced to production without having been reformatted, so I’m having to reformat individual drives then resilver, repeatedly, until it’s all reformatted. Just another sysadmin adventure!)
    [fastmail root(robn)@imap52 ~]# nvme list
    Node           SN                   Model                 Namespace  Usage              Format        FW Rev
    -------------  -------------------  --------------------  ---------  -----------------  ------------  --------
    /dev/nvme0n1   PHLJ133000RS8P0HGN   INTEL SSDPE2KX080T8   1          8.00 TB / 8.00 TB  4 KiB + 0 B   VDV10170
    /dev/nvme10n1  PHLJ132601GU8P0HGN   INTEL SSDPE2KX080T8   1          8.00 TB / 8.00 TB  512 B + 0 B   VDV10170
    /dev/nvme11n1  PHLJ133000RH8P0HGN   INTEL SSDPE2KX080T8   1          8.00 TB / 8.00 TB  4 KiB + 0 B   VDV10170
    /dev/nvme12n1  PHLJ131000MS8P0HGN   INTEL SSDPE2KX080T8   1          8.00 TB / 8.00 TB  4 KiB + 0 B   VDV10170
So you can see that nvme10n1 is still on 512B.
And then once you’ve done that, you have to issue a low-level format. I think it might be possible with nvme format, but I use the Intel-specific isdct and intelmas tools. Dunno about other brands, but I expect the info is easily findable especially for high-quality devices.
Do remember though: low-level format destroys all data on the drive. Don’t attempt in-place! And I honestly wouldn’t bother if you’re not sure if you need it, though I guess plenty of people try it “just for fun”. You do you!
I really like it! You should put up a full alphabet if you have one. Also I see you have Ø and Å, but I don’t see Æ/æ, or AE. They’re all part of the Norwegian alphabet.
The example images have pretty low resolution, enough to make it look blurry on a “default scaled” 1440p 27”. Makes it a little hard to judge.
Looks great. I love the shape of the circles. Strong Eurostile vibes, which I don’t recall seeing in a monospace font.
I agree on the “r”. The serif on top seems too angular and not fitting with the rest of the font. I couldn’t find an uppercase “W”, but the lowercase looks good to me.
[Edit] I realize I read what I wanted from the article rather than what was actually written — I’m leaving this here but recognize that it’s not quite what the article is about.
I really like this, and I think this advice extends beyond just Linux troubleshooting. It’s really advice on how to teach people and how people learn. Answers are 20% of the learning process; the other 80% is understanding how to get to the answer, which is critical for developing skills. I could rant about the US education system’s teaching-to-the-test, which focuses on that 20%, and how terrible that is.
One of my roles at my current job is helping people learn Rust, and when someone comes to me with a confusing type error I always make an effort to explain how to read the error message, why the error message occurs, and the various ways to fix it. If I instead just provided the fix, there would be no learning, no growth and no development of self-sufficiency. It takes longer, and sometimes people just want an answer, but I stay firm on explaining what is going on (partially because I don’t want to be fixing everyone’s basic type errors). I wonder if part of the issue with Linux troubleshooting advice is that it doesn’t have that same feedback mechanism — someone not learning doesn’t affect the author of the advice in any way so there is no real push for building self-sufficiency.
Anyway, I think this post was really short and to the point, and I completely agree with the message, but I also think it’s interesting just how much it extends beyond Linux diagnostics and into learning everywhere.
I agree, it does work as “How to write good troubleshooting advice” in general (which IMHO would be a better title anyway).
Dave Jones (EEVBlog) does an excellent job of this in the world of electronics, a playlist of his troubleshooting videos and his approach: https://www.youtube.com/playlist?list=PLvOlSehNtuHsc8y1buFPJZaD1kKzIxpWL
I know this wasn’t the point of the article, but can we please either write pseudo code for this or a safe implementation? :(
I do agree, though, that at least a short comment about the proper way would be good.
The right way would be to use the secrets.compare_digest() function. It’s right there in the standard library and has been since 3.6. It’s an alias of hmac.compare_digest(), which has been in the standard library since 3.3.
More context on why you should do this: if you want to compare two things for security, use the compare_digest() function. If you do a naive comparison in Python, it will stop at the first mismatched character and is therefore prone to timing attacks.
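To make the failure mode concrete, here’s a minimal sketch of the two approaches (the API_KEY secret and the checker functions are invented for illustration; for str arguments compare_digest requires ASCII, so use bytes for arbitrary data):

    import secrets

    API_KEY = secrets.token_hex(32)  # hypothetical stored secret

    def check_naive(supplied: str) -> bool:
        # Vulnerable: == returns as soon as one character differs, so the
        # response time leaks how long the matching prefix is.
        return supplied == API_KEY

    def check_safe(supplied: str) -> bool:
        # Constant-time comparison; secrets.compare_digest is the
        # hmac.compare_digest alias mentioned above.
        return secrets.compare_digest(supplied, API_KEY)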
I didn’t know how to do a safe implementation, but with these comments I’m learning quickly. I’ll write another article on how to do it the right way after learning more!
Working on launching a new backend+frontend for Berkeley Graphics website. Completely rewrote it in Django, took 3 months (part time) which also included learning Django itself.
Stack: Django + HTMX + jQuery + PostgreSQL. I’ll migrate the data to PostgreSQL 15 if it releases on time on Oct 13.
Here are some screenshots:
Account Management System: https://twitter.com/berkeleygfx/status/1578275905255841792
PDF invoices: https://twitter.com/berkeleygfx/status/1578259210579558400
This is a great article. Thanks for writing it!
It’s a perfect example of a language feature that many of us use from time to time, but either don’t understand at all or only barely understand what’s going on under the hood.
Glad that you found it useful
Clear, concise, to the point. Full marks!
I also liked it very much. Just an excellent presentation of a surprising if not esoteric technical topic. Thanks.
JetBrains needed to fire their UI team years ago, when they proudly wrote that one blog post about how they made every icon look the same and monochrome. They obviously don’t understand anything about their end users, humans, and how brains and eyes work.
IntelliJ has been unusable without theming plugins since then, and seeing how one of my coworkers is doing perfectly fine with Eclipse, I always thought it’d be my first choice to try out when the JetBrains design team forces their next abomination upon me. Looks like the time has come.
Agreed, iconography has regressed to the point where people have no idea what its purpose is. Remember FamFamFam icons [1], circa 2005? I think humanity peaked at that time. Few designers have conviction of their own anymore.
[1] http://www.famfamfam.com/
I see a ranty blog post in my future :-).
I really enjoy the icons of Haiku, and the colours… why do we have to live in a world without colour, we have a palette of at least 16M nowadays, and even more with HDR displays…
I didn’t know what those icons were called. Very nostalgic nowadays to me, although they still look great!
It puts a smile on my face whenever I spot Silk/Mini icons being used online. They’ve been around long enough that they sort of just blend into the background of the internet. Big thank you to Mark James for keeping the site around, it’s like a time capsule now.
Great, just what we need: more negative space everywhere. Not happy with this. IDEs should be extremely dense, and the current design is just fine.
My children run ThinkPad X250s; the two eldest recently started coding, using GameMaker Studio. I had to buy them external monitors; the IDEs were literally unusable on 12” screens.
However, I’ve used Emacs without any issues for years on a 12” X220.
Completely disagree with you on that.
One of the reasons I dislike Visual Studio so much is that it crams so many window panes and tabs and buttons and toolbars and other crap in the way that I can hardly focus on writing code.
When I’m working in the garage, I don’t take out every tool I own and put them in front of me while I work on my bike. I only take out the tools I’m using at that time. An IDE should be the same - I don’t need to see every piece of information about everything and have hundreds of tools (buttons) visible - just show me the ones I’m using at that moment.
Have you ever tried hiding stuff? Basically every single thing can be stowed away. The fact of the matter is that you’re not familiar with your own garage. The whole thing is customizable to the teeth.
Learning Django. Has anyone had a similar experience where you build something quickly in Flask, but then it suffers a bit in “cohesiveness”? Hard to describe. There are a bunch of Flask-SomePackageThatDoesOneThing packages in my dependencies. I’ve been reading the Django docs and almost everything is included. Need to generate a sitemap.xml? You got it. I need to hack the auth system so it works with our current passwordless auth (it sends an email login link), and a few other customizations (setting up loggers, prometheus/loki, etc). Overall, Django appears to be impressive. Any suggestions appreciated.
Django’s real killer feature is the admin site. With a bit of customization, it can be super useful for internal use (i.e. within your team or company).
It also mixes well with their ORM, because keys can be strings like field__startswith, etc.
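As a rough illustration of how little code an admin page needs (the Order model and its fields are hypothetical; list_display, search_fields, and list_filter are standard ModelAdmin options):

    from django.contrib import admin
    from .models import Order  # hypothetical model in your app

    @admin.register(Order)
    class OrderAdmin(admin.ModelAdmin):
        list_display = ("id", "customer", "created_at")
        search_fields = ("customer__email",)  # same double-underscore keys as the ORM
        list_filter = ("status",)

    # The same lookup style works in queries:
    # Order.objects.filter(customer__email__startswith="neil")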
The big problem for me in writing complex applications with Flask is circular dependencies.
I really enjoyed the writeup.
As I read of the complexity needed to get fonts rendered, all I could think of is “More code, more security risk, for trivial things.”
I want everything we do to be beautiful. I don’t give a damn whether the client understands that that’s worth anything, or whether the client thinks it’s worth anything, or whether it is worth anything. It’s worth it to me. It’s the way I want to live my life. I want to make beautiful things, even if nobody cares. Sometimes you can’t make everything beautiful, but that’s my intent. And I’m willing to pay for that. Now that’s where money comes in. Because you can get much more quickly to an answer if you don’t worry about those things.
Saul Bass is a legend. Highly recommend his Bell branding presentation[1]. Contemporary branding has regressed so much, it is hard to overstate.
[1] https://www.youtube.com/watch?v=xKu2de0yCJI
More code doesn’t necessarily imply more security risk. More code executing in a privileged context does. Text layout is quite amenable to sandboxing: it needs to be able to read fonts, read strings and bounding boxes, and produce a list of Bézier paths. An implementation that cared about security could run this in a completely unprivileged context.
And then somebody manages to slip in a malicious application name that renders “Deny Access” on top of the “Allow Access” button in the request permission dialog…
Only if the sandboxing is exposed to the user and for a use case like this there is absolutely no reason that it needs to be.
I used to think this way too. Personally I grew up speaking only English, which has a relatively simple script. But as I learned more I realized that I can’t call German hyphenation, Korean line-breaking, Unicode left-to-right, or Mongolian script “trivial things.”
Humans aren’t simple, and languages are a big part of people’s culture. Trying to simplify human typography so it’d be easier to implement on computers is like trying to fit human feet into square shoes because they’d be easier to manufacture!
It’s easy to lose sight of the goal and dismiss things as indulgent when your own personal requirements have already been met.
I don’t know if implementing these language features is more or less complex than the font ligature examples given, so I can’t comment on this. To me the examples given were more stylistic than necessary. I rarely have use for fancy fonts and the attendant complexities of rendering. Also, languages are mutating creatures and technology is one of the things that mutates them.
The stories that could be told about the machines that run manufacturing plants. It’s great because everything is prod and everything will do horrible things if you fuck up. Sometimes you are fixing something on a kiln 3x the size of your house and it breaks and you melt a couple thousand bricks, and sometimes you get told “yeah, don’t worry about the puddles of acid on the ground, we’ve neutralized them”.
Please write a newsletter, and subscribe me to it.
Same
Hot take: Could font designers please just agree that the only valid way to write 0 for technical fonts is with a dot in the middle? 0-with-nothing is irritatingly ambiguous with O, 0-with-a-slash is irritatingly ambiguous with Ø, and I’ve never seen the 0-with-broken-edges actually used outside of Brazilian license plates.
Just pulled some statistics from what people download: https://neil.computer/notes/berkeley-mono-font-variant-popularity/
The dotted-zero is indeed the most popular.
I love slashed zeroes!
I’ve never used Ø or had to.
What a strange coincidence.
An Ø bit my sister once.
Ø bites cån be very painful!
Yes but it’s not common for islands to bite.
Maybe not anymore…
Nah, I like my slashed zeros. You just need properly distinguishable characters. Many font designers get it wrong.
Or just let you choose. There were a few things about those fonts that bothered me initially, but with customisation they became my favourites.
I’m at the sad and tired point in my life where I don’t want things where every nuance is customizable, I want things where the defaults are pretty good. :P
What is your opinion on writing a 0 with a backslash, like in Atkinson Hyperlegible?
Never seen it before in practice! I suppose I have no objective complaints. I might worry a little about dyslexic legibility, but no practical experience with it.
Yeah, I agree. My eyes are pretty bad, and I struggle to read code at even 14pt sometimes. I pretty much exclusively use Source Code Pro as my main programming font because it has the most distinctly different letters, the dot-in-the-middle 0, and NO LIGATURES.
I am seriously impressed by the quality of this font. Very regular, very readable; I would put it at the same level as PragmataPro for coding.
Vertical alignment is correct for arrows (<, -, >, =…). It is possible to choose among multiple styles of zero characters. I have not seen any line height issues in Emacs and in XTerm.
Unicode coverage could be better, but this typeface is brand new, so I guess it will improve.
Thanks for the kind words, this is how it looks on iTerm: Berkeley Mono iTerm screenshot.
It will get better over time with new glyphs and features. We’re planning a condensed version next.
Honestly, you may want to try Input. You can modify it a bit to suit your tastes. I use it simply because it seems to make it very easy to distinguish between curly braces, brackets, and parentheses at very small font sizes, at least better than any others I’ve seen. There are free licenses in addition to commercial ones, so I don’t feel much guilt plugging it.
Love it.
https://input.djr.com/preview/?size=17&language=clike&theme=default&family=InputSans&width=200&weight=200&line-height=0.9&a=0&g=0&i=serif&l=serifs_round&zero=slash&asterisk=0&braces=0&preset=default&customize=please
Cloudflare Registrar has zero markups and excellent DNS management.
https://www.cloudflare.com/products/registrar/
Name popular OSS software, written in Haskell, not used for Haskell management (e.g. Cabal).
AFAICT, there are only two, pandoc and XMonad.
This does not strike me as being an unreasonably effective language. There are tons of tools written in Rust you can name, and Rust is a significantly younger language.
People say there is a ton of good Haskell locked up in fintech, and that may be true, but a) fintech is weird because it has infinite money and b) there are plenty of other languages used in fintech which are also popular outside of it, eg Python, so it doesn’t strike me as being a good counterexample, even if we grant that it is true.
Here’s a Github search: https://github.com/search?l=&o=desc&q=stars%3A%3E500+language%3AHaskell&s=stars&type=Repositories
I missed a couple of good ones:
Still, compare this to any similarly old and popular language, and it’s no contest.
Also Dhall
I think postgrest is a great idea, but it can be applied to very wrong situations. Unless you’re familiar with Postgres, you might be surprised with how much application logic can be modelled purely in the database without turning it into spaghetti. At that point, you can make the strategic choice of modelling a part of your domain purely in the DB and let the clients work directly with it.
To put it differently, postgrest is an architectural tool, it can be useful for giving front-end teams a fast path to maintaining their own CRUD stores and endpoints. You can still have other parts of the database behind your API.
I don’t understand Postgrest. IMO, the entire point of an API is to provide an interface to the database and explicitly decouple the internals of the database from the rest of the world. If you change the schema, all of your Postgrest users break. API is an abstraction layer serving exactly what the application needs and nothing more. It provides a way to maintain backwards compatibility if you need. You might as well just send sql query to a POST endpoint and eliminate the need for Postgrest - not condoning it but saying how silly the idea of postgrest is.
Sometimes you just don’t want to make any backend application, only to have a web frontend talk to a database. There are whole “as-a-Service” products like Firebase that offer this as part of their functionality. Postgrest is self-hosted that. It’s far more convenient than sending bare SQL directly.
With views, one can largely get around the “break the schema, break the API” problem. Even so, as long as the consumers of the API are internal, you control both ends, so it’s pretty easy to just schedule your cutovers.
But I think the best use-case for Postgrest is old stable databases that aren’t really changing stuff much anymore but need to add a fancy web UI.
The database people spend 10 minutes turning up Postgrest and leave the UI people to do their thing and otherwise ignore them.
Hah, I don’t get views either. My philosophy is that the database is there to store the data. It is the last thing that scales. Don’t put logic and abstraction layers in the database. There is plenty of compute available outside of it, and APIs can do the precise data abstraction needed for the apps. Materialized views, maybe, but it still feels wrong. SQL is a pain to write tests for.
Your perspective is certainly a reasonable one, but not one I or many people necessarily agree with.
The more data you have to mess with, the closer you want the messing-with next to the data, i.e. in the same process if possible :) Hence PL/pgSQL and all the other languages that can get embedded into SQL databases.
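A tiny sketch of what “next to the data” can look like, driven from psycopg2; the table and function are invented for illustration:

    import psycopg2

    conn = psycopg2.connect("dbname=app")  # assumed connection string
    with conn, conn.cursor() as cur:
        # The aggregation runs inside the database, next to the rows,
        # instead of shipping them all to the client first.
        cur.execute("""
            CREATE OR REPLACE FUNCTION order_total(oid int) RETURNS numeric
            LANGUAGE plpgsql STABLE AS $$
            BEGIN
                RETURN (SELECT sum(qty * price) FROM order_lines
                        WHERE order_id = oid);
            END;
            $$;
        """)
        cur.execute("SELECT order_total(42)")
        print(cur.fetchone()[0])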
We use views mostly for 2 reasons:
Have you checked row-level security? I think it creates a good default, and then you can use security definer views for when you need to override that default.
Yes, that’s exactly how we use access control views! I’m a huge fan of RLS, so much so that all of our users get their own role in PG, and our app(s) auth directly to PG. We happily encourage direct SQL access to our users, since all of our apps use RLS for their security.
Our biggest complaint with RLS: none(?) of the reporting front ends out there have any concept of RLS or really DB security in general; they AT BEST offer some minimal app-level security that’s usually pretty annoying. I’ve never been upset enough to write one…yet, but I hope someone someday does.
When each user has its own role, that usually means ‘Role explosion’ [1]. But perhaps you have other methods/systems that let you avoid that.
How do you do for example: user ‘X’ when operating at location “Poland” is not allowed to access Report data ‘ABC’ before 8am and after 4pm UTC-2, in Postgres ?
[1] https://blog.plainid.com/role-explosion-unintended-consequence-rbac
Well in PG a role IS a user, there is no difference, but I agree that RBAC is not ideal when your user count gets high as management can be complicated. Luckily our database includes all the HR data, so we know this person is employed with this job on these dates, etc. We utilize that information in our, mostly automated, user controls and accounts. When one is a supervisor, they have the permission(s) given to them, and they can hand them out like candy to their employees, all within our UI.
We try to model the UI around “capabilities”, although it’s implemented through RBAC obviously, and is not a capability-based system.
So each supervisor is responsible for their employees’ permissions, and we largely try to stay out of it. They can’t define the “capabilities”, that’s on us.
How do you do for example: user ‘X’ when operating at location “Poland” is not allowed to access Report data ‘ABC’ before 8am and after 4pm UTC-2, in Postgres ?
Unfortunately PG’s RBAC doesn’t really allow us to do that easily, and we luckily haven’t yet had a need to do something that detailed. It is possible, albeit non-trivial. We try to limit our access rules to more basic stuff: supervisor(s) can see/update data within their sphere but not outside of it, etc.
We do limit users based on their work location, but not their logged-in location. We do log all activity in an audit log, which is just another DB table, and it’s in the UI for everyone with the right permissions (so a supervisor can see all their employees’ activity, whenever they want).
Certainly different authorization system(s) exist, and they all have their pros and cons, but we’ve so far been pretty happy with PG’s system. If you can write a query to generate the data needed to make a decision, then you can make the system authorize with it.
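For anyone who hasn’t seen the pattern discussed above, a minimal sketch of RLS as the default plus a view as the escape hatch (all names invented; run as a suitably privileged role, since views access their base tables with the view owner’s permissions):

    import psycopg2

    conn = psycopg2.connect("dbname=app")  # assumed connection string
    with conn, conn.cursor() as cur:
        # Default: users only see rows they own.
        cur.execute("""
            ALTER TABLE documents ENABLE ROW LEVEL SECURITY;
            CREATE POLICY own_docs ON documents
                USING (owner = current_user);
        """)
        # Override: a view owned by a privileged role exposes a curated
        # subset regardless of the policy above.
        cur.execute("""
            CREATE VIEW published_docs AS
                SELECT id, title FROM documents WHERE published;
        """)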
My philosophy is “don’t write half-baked abstractions again and again”. PostgREST & friends (like Postgraphile) provide selecting specific columns, joins, sorting, filtering, pagination and others. I’m tired of writing that again and again for each endpoint, except each endpoint is slightly different, as it supports sorting on different fields, or different styles of filtering. PostgREST does all of that once and for all.
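For a flavor of what that buys you on every table for free, here’s a sketch against a hypothetical /people endpoint; the operators are PostgREST’s documented query syntax:

    import requests

    resp = requests.get(
        "http://localhost:3000/people",
        params={
            "select": "first_name,age",  # column selection
            "age": "gte.18",             # filtering
            "order": "age.desc",         # sorting
            "limit": "10",               # pagination
            "offset": "20",
        },
    )
    print(resp.json())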
Also, there are ways to test SQL, and databases supporting transaction isolation actually simplify running your tests. Just wrap your test in a BEGIN; ROLLBACK; block.
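A minimal sketch of that pattern with psycopg2 (connection string and table are assumptions): the test sees its own writes, but nothing survives the rollback.

    import psycopg2

    def run_in_rollback(test):
        conn = psycopg2.connect("dbname=test_db")
        try:
            with conn.cursor() as cur:
                test(cur)       # runs inside psycopg2's implicit transaction
        finally:
            conn.rollback()     # discard everything the test wrote
            conn.close()

    def test_insert(cur):
        cur.execute("INSERT INTO items (name) VALUES ('widget')")
        cur.execute("SELECT count(*) FROM items WHERE name = 'widget'")
        assert cur.fetchone()[0] == 1

    run_in_rollback(test_insert)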
Idk, I’ve been bitten by this. Probably ok in a small project, but this is a dangerous tight coupling of the entire system. Next time a new requirement comes in that requires changing the schema, RIP, wouldn’t even know which services would break and how many things would go wrong. Write fully-baked, well tested, requirements contested, exceptionally vetted, and excellently thought out abstractions.
Or just use views to maintain backwards compatibility and generate typings from the introspection endpoint to typecheck clients.
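Concretely, that trick can be as small as re-exposing the old shape after a rename (schemas and names are illustrative):

    import psycopg2

    conn = psycopg2.connect("dbname=app")  # assumed connection string
    with conn, conn.cursor() as cur:
        # Storage renamed name -> full_name; old clients keep reading
        # "name" through a view that PostgREST serves like a table.
        cur.execute("""
            CREATE OR REPLACE VIEW api.users AS
                SELECT id, full_name AS name
                FROM private.users;
        """)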
I’m a fan of tools that support incremental refactoring and decomposition of a program’s architecture w/o major API breakage. PostgREST feels to me like a useful tool in that toolbox, especially when coupled with procedural logic in the database. Plus there’s the added bonus of exposing the existing domain model “natively” as JSON over HTTP, which is one of the rare integration models better supported than even the native PG wire protocol.
With embedded subresources and full SQL view support you can quickly get to something that’s as straightforward for a FE project to talk to as a bespoke REST or GraphQL backend. Keeping the schema definitions in one place (i.e., the database itself) means less mirroring of the same structures and serialization approaches in multiple tiers of my application.
I’m building a project right now where PostgREST fills the same architectural slot that a Django or Laravel application might, but without having to build and maintain that service at all. Will I eventually need to split the API so I can add logic that doesn’t map to tuples and functions on them? Sure, maybe, if the app gets traction at all. Does it help me keep my tiers separate for now while I’m working solo on a project that might naturally decompose into a handful of backend services and an integration layer? Yep, that’s also working out thus far.
There are some things that strike me as awkward and/or likely to cause problems down the road, like pushing JWT handling down into the DB itself. I also think it’s a weird oversight to not expose LISTEN/NOTIFY over websockets or SSE, given that PostgREST already uses notification channels to handle its schema cache refresh trigger.
Again, though, being able to wire a hybrid SPA/SSG framework like SvelteKit into a “native” database backend without having to deploy a custom API layer has been a nice option for rapid prototyping and even “real” CRUD applications. As a bonus, my backend code can just talk to Postgres directly, which means I can use my preferred stack there (Rust + SQLx + Warp) without doing yet another intermediate JSON (un)wrap step. Eventually (again, modulo actually needing the app to work for more than a few months) more and more will migrate into that service, but in the meantime I can keep using fetch in my frontend and move on.
I would add Shake:
https://shakebuild.com
Not exactly a tool, but a great DSL.
I think it’s true that, historically, Haskell hasn’t been used as much for open source work as you might expect given the quality of the language. There are a few factors in play here, but the dominant one is simply that the open source projects that take off tend to be ones that a lot of people are interested in and/or contribute to. Haskell has historically struggled with a steep on-ramp, which means the people who persevered and learned the language well enough to build things with it were self-selected to be the sort who were highly motivated to work on Haskell and its ecosystem; it was less appealing if your goal was to do something else and get it done quickly. It’s rare for Haskell to be the only language someone knows, so even among Haskell developers it’s been common to pick a different language when the goal is a lot of community involvement in a project.
All that said, I think things are shifting. The Haskell community is starting to think earnestly about broadening adoption and making the language more appealing to a wider variety of developers. There are a lot of problems where Haskell makes sense, and we just need the friction of picking it reduced for adoption to pick up. In that sense, the fact that many other languages are adding features heavily inspired by Haskell makes Haskell itself more appealing: more of the language will look familiar, which makes it more accessible to people.
I can’t think of anything off the dome except ripgrep. I’m sure I could do some research and find a few, but I’m sure that’s also the case for Haskell.
You’ve probably heard of Firefox and maybe also Deno. When you look through the GitHub Rust repos by stars, there are a bunch of ls clones weirdly, lol.
Agree … and finance and functional languages seem to have a connection empirically.
I think it’s obviously the domain … there is simply a lot of “purely functional” logic in finance.
Implementing languages and particularly compilers is another place where that’s true, which the blog post mentions. But I’d say that isn’t true for most domains.
BTW, git-annex appears to be written in Haskell. However, my experience with it is mixed: it feels like git itself is more reliable, and it’s written in C/Perl/Shell. I think the dominating factor is just the number and skill of developers, not the language.
OCaml also has a range of more or less (or once) popular non-fintech, non-compiler tools written in it. LiquidSoap, MLDonkey, Unison file synchronizer, 0install, the original PGP key server…
Xen hypervisor
The MirageOS project always seemed super cool. Unikernels are very interesting.
Well, the tools for it, rather than the hypervisor itself. But yeah, I forgot about that one.
I think the connection with finance is that making mistakes in automated finance is very costly in expectation, whereas making mistakes in a social network or something is typically not very expensive.
Git-annex
Not being popular is not the same as being “ineffective”. Likewise, something can be “effective”, but not popular.
Is JavaScript a super effective language? Is C?
Without going too far down the language holy war rabbit hole, my overall feeling after so many years is that programming language popularity, in general, fits a “worse is better” characterization, where the languages that I, personally, feel are the most bug-prone, poorly designed, etc., are the most popular. Nobody has to agree with me, but for the sake of transparency, I’m thinking of PHP, C, JavaScript, Python, and Java when I write that. Languages that are probably pretty good/powerful/good-at-preventing-bugs are things like Haskell, Rust, Clojure, Elixir.
In the past, a lot of the reason I’ve seen people turned away from Haskell-based tools has been the perceived pain of installing GHC, which admittedly is quite large, and it can sometimes be a pain to figure out which version you need. ghcup has improved that situation quite a lot by making the process of installing and managing old compilers significantly easier (the bootstrap is a one-liner; see below). There’s still an argument that GHC is massive, which it is, but storage is pretty cheap these days. For some reason I’ve never seen people make similar complaints about needing to install multiple versions of Python (though this is less of an issue these days).
The other place where large Haskell codebases are locked up is Facebook: Sigma processes every single post, comment and message for spam, at 2,000,000 req/sec, and is all written in Haskell. Luckily the underlying tech, Haxl, is open source, though few people seem to have found a particularly good use for it; you really need to be working at quite a large scale to benefit from it.
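For anyone who hasn’t tried it, the ghcup bootstrap is a one-liner, and juggling compiler versions is a couple more (the version number here is illustrative):

    curl --proto '=https' --tlsv1.2 -sSf https://get-ghcup.haskell.org | sh
    ghcup install ghc 9.2.4   # fetch a specific compiler version
    ghcup set ghc 9.2.4       # make it the active ghc on PATH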
hledger is one I use regularly.
Cardano is a great example.
Or Standard Chartered, which is a very prominent British bank, and runs all their backend on Haskell. They even have their own strict dialect.
GHC.
https://pandoc.org/
I used pandoc for a long time before even realizing it was Haskell. Ended up learning just enough to make a change I needed.
I am incorporating feedback from last week’s beta program for the Berkeley Mono typeface. Thanks to everyone who participated. Here is the stack for the ecommerce site and details about the typeface [1]. Server-side processing like it’s 2002!
[1] https://neil.computer/notes/berkeley-mono-february-update/
You can also use a perceptually linear colorspace to generate visually pleasing color palettes. I wrote a library for the Processing framework a few years ago: https://github.com/neilpanchal/Chroma
This is awesome work! I love how simple your API is:
testColor = new Chroma(ColorSpace.LCH, l, c, h, 255);
The CIE-LCH color space is very attractive: uniform grays by default, and closer to how the eye perceives color.
Thank you for sharing it.
Hijacking your comment to say I’m looking forward to your mono typeface!
I haven’t spent a lot of time thinking about this, but here’s my two cents:
The benchmark produces synthetic files which have low entropy and are thus highly compressible by LZ4. This results in abnormally high apparent I/O bandwidth, i.e., small binary files on disk become big buffers in RAM.
Can you measure the compressibility of the synthetic files?
Small example: imagine the benchmark tool creating binary files containing a long chain of 0’s. LZ4 can compress such a file into a very small one. Real data will almost always have a decent amount of entropy, unless it is already in a compressed file format like most pictures or videos. I think ZFS is intelligent enough that it doesn’t compress high-entropy files.
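A quick way to answer the compressibility question, assuming the lz4 CLI is installed and using a made-up filename:

    orig=$(wc -c < testfile.bin)          # bytes on disk
    comp=$(lz4 -c testfile.bin | wc -c)   # bytes after lz4 compression
    echo "ratio: $(echo "scale=2; $orig / $comp" | bc)"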
I found the problem, it’s to do with the benchmarking library fio: https://news.ycombinator.com/item?id=29547346
Not an expert, but I do run a lot of ZFS RAIDz2 on NVMe on Linux and have done a fair bit of tuning for it. I don’t know which specific thing is giving you such “impossible” numbers, but I’m happy to suggest a few things that might be in play, and maybe even how to squeeze more out of it! (btw, I don’t mean this to come across as patronising; I’m just writing a few things out for readers that haven’t seen it, or for actual experts to tell me I’m doing it wrong!)
Most of the performance is going to be from the ARC, that is, the memory cache. ZFS will aggressively use RAM for caching (on Linux, by default, the ARC will grow to as much as half the physical RAM). You’ve already seen this in Note #3; reducing the RAM reduces throughput. Incidentally, you can tune how much RAM is used for the ARC with zfs_arc_min and zfs_arc_max (see zfs-module-parameters(5)); you don’t have to reduce the system “physical” RAM (though maybe that was more convenient for you to do).
Compression gets ZFS a huge amount of throughput, because it’s faster to do a smaller read and decompress it than to wait for the I/O (turning compression off can actually make things slower, not faster, because it has to hit the disk more). Compression is block-level, and as a special case, all-zero blocks are not even written: the block header has a special case that says “this is all zeroes, length XXX” that ZFS just inflates. Finally, turning off compression doesn’t change the compression state of already-written blocks, so if you’re benchmarking on data that already exists, you’ll need to rewrite it to really “uncompress” it.
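Concretely, on Linux (the 8 GiB value and the dataset name are just examples; both commands need root):

    # cap the ARC at 8 GiB, effective immediately
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
    # ...or persistently across reboots
    echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf
    # see what compression is actually buying you on a dataset
    zfs get compression,compressratio tank/data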
In a RAIDzX, data is striped across multiple devices, and reads can be issued in parallel to get bits of the file back and then reassemble them in memory. You have 32 lanes, so you’re probably right in saying you’re not saturating the PCI bandwidth. You’re almost certainly getting stuff as fast as the drives can give it to you.
You’re using 512B blocks. Most current NVMe is running 4K blocks internally. The drive firmware will likely be loading the full 4K block and returning a 512B chunk of it to the system, keeping the rest of the block cached in its own memory. For sequential reads, that means almost always 7 out of 8 blocks are going to be served from the drive’s own cache memory before touching the actual flash cells. (This is well worth tuning, by the way: use flashbench to determine the internal block size of your drive, and then find out how to do a low-level format for your device to switch it to its native block size. Along with an appropriate ashift for your pool, it will let ZFS and the Linux block layer deal in the drive’s native block size all the way through the stack, without ever having to split or join blocks; see the sketch below.)
ZFS will use a variable block size, by default growing blocks as large as 128K. When reading, it will request the entire logical block, composed of multiple physical blocks, from the block layer. If they’re stored sequentially, that can translate to a single “range request” on the PCI bus, which may get coalesced into an even larger range, which the drive may be able to service entirely with parallel fetches against multiple flash cells internally.
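A sketch of that workflow (device names and pool layout are examples; 4K internal blocks mean ashift=12):

    flashbench -a /dev/nvme0n1 --blocksize=1024   # look for latency steps at block boundaries
    zpool create -o ashift=12 tank raidz2 \
        /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1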
Not sure which version of ZFS you’re using, but versions before 2.0.6 or 2.1.0 have a serious performance bottleneck on wide low-latency vdevs (GH#12121 GH#12212).
In my experience though, and yours too, ZFS performance is pretty good out of the box. Enough that even though my workload does some things that are outright hostile to a CoW filesystem, the gains have been so good that it hasn’t yet been worth changing the software.
Great list, that’s almost surely what’s at play here. I don’t think the drive/file system speeds are actually being measured.
Some other performance-tuning things to think about with ZFS: if you have a fast SLOG vdev you can set sync=always, but if your ZIL is slow you can set sync=disabled to gain a lot of speed at the expense of safety. For some use cases that’s okay.
When I ran mechanical disks I used an Optane drive for my slog vdev and it was so fast I couldn’t measure a performance difference when using sync=always.
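Those are per-dataset settings, e.g. (pool/dataset names invented):

    zfs set sync=always tank/db         # safest; fast if the SLOG is quick (e.g. Optane)
    zfs set sync=disabled tank/scratch  # fastest; can lose the last few seconds of writes on power failure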
I am trying out a few things with fio and will post the results here. There was a suggestion on HN that mirrors what you’re suggesting. I’ll update the article if I find that 157 GB/s is a bogus result.
Edit: OK folks, the party is over. 157 GB/s is a misleading number. The fio tool needs separate files for each thread; otherwise it will report incorrect bandwidth numbers. See this post; I am in the process of updating the article: https://news.ycombinator.com/item?id=29547346
Updated, thanks everyone! - https://neil.computer/notes/zfs-raidz2/#note-5
What does this mean? I’ve set up aligned partitions and filesystem block sizes (or ashift for ZFS), but I don’t know what a low-level format even means.
All drives (flash and spinners) have a “native” block size. This is the block size that the drive electronics will read from or write to the actual storage media (magnetic platters or flash cells) in a single unit. Sizes vary, but in pretty much every current NVMe SSD the block size is 4KB.
Traditionally though, most drives arrive from the factory set to present a 512B block size to the OS. This is mostly for legacy reasons; back in the mists of time, physical disk blocks really were 512B, and the joy of PC backward compatibility means that almost everything ever since starts by pretending to be from 1981, even if that makes no sense anymore.
So, the OS asks the drive what its block size is, and it comes back with 512B. Any upper layers (usually a filesystem, maybe also intermediate layers like cryptoloops) that operate in larger block sizes will eventually submit work to the block layer, and it will then have to split the block into 512B chunks before submitting them to the device.
But if the device isn’t actually 512B natively, it has to do more work to get things back into its native block size. Say you write a single 512B block: a drive doing 4K internally will have to fetch the entire 4K block from storage into its memory, update it with the changed 512B, then write it back down. So it’s a bit slower, and for SSDs it means more writes, increasing wear.
So what you can do on many drives is a “low-level format”, which is also an old and now-meaningless term for setting up the basic drive structure. Among other things, you can change the block size that is exposed to the OS. If you make it match the native block size, the drive never has to deal in partial blocks. And if you set the same block size through the entire stack, you eliminate partial-block overheads from the entire stack.
I should note here that all this talk of extra reads and writes and wear makes it sound like every SSD must be a total piece of crap out of the box, running at a glacial pace and wearing itself out while it’s still young. Not so! Drive electronics and firmware are extremely good at minimising these effects, so for most workloads (especially large sequential reads) the difference is barely even measurable.
But if you’re building storage systems that are busy all the time, then there is performance being left on the table, so it can be worth looking at this. My particular workload includes constant I/O of mostly small random reads and writes, so anything extra I can get can help.
I mentioned flashbench before, which is a tool to measure the native block size of a flash drive, since the manufacturer won’t always tell you, or might lie about it. It works by reading or writing blocks of different sizes, within and across theoretical block boundaries, and looking at the latency of each operation. For example, you might read 4K blocks at 0, 2K, 4K, 6K, etc. offsets. If it’s 4K internally, the drive only has to load a single block at offset 0, but has to load two blocks at 2K to cross the block boundary, and this is visible because the operation takes just a little longer. It’s tough to outsmart the drive electronics (for example, current Intel 3D NAND SSDs will do two 4K fetches in parallel, so a naive read of the latency figures can make it look like the drive has an 8K block size internally), but with some thought and care you can figure it out. Most of the time it is 4K, so you can use that as a starting point.
On Linux, the nvme list tool can tell you the current block size reported by each drive. On a machine I’m currently right in the middle of reformatting as described above (it was inadvertently introduced to production without having been reformatted, so I’m having to reformat individual drives then resilver, repeatedly, until it’s all reformatted; just another sysadmin adventure!), it shows that nvme10n1 is still on 512B.
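If you just want to check one drive, nvme id-ns shows which LBA format is currently in use (device name illustrative):

    nvme id-ns -H /dev/nvme10n1 | grep "LBA Format"
    # the entry marked "(in use)" shows the current data size, e.g. 512 or 4096 bytes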
And then once you’ve done that, you have to issue a low-level format. I think it might be possible with nvme format, but I use the Intel-specific isdct and intelmas tools. Dunno about other brands, but I expect the info is easily findable, especially for high-quality devices.
Do remember though: a low-level format destroys all data on the drive. Don’t attempt it in-place! And I honestly wouldn’t bother if you’re not sure you need it, though I guess plenty of people try it “just for fun”. You do you!
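If your drive does support it through the generic tooling, the nvme-cli version looks something like this; the LBA format index varies per drive (check nvme id-ns -H first), and again, this destroys everything on the namespace:

    nvme format /dev/nvme10n1 --lbaf=1   # switch the namespace to the 4K LBA format (index is drive-specific)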
Any experts here who can shed light on how this is possible? See Note 2 and Note 3 at the end of the article.
I found one source that shows these drives have a 2GB LPDDR4 cache each [1].
So that’s 16GB of faster-than-NAND cache total; plus the 64GB of system memory, that’s 80GB. So maybe that’s skewing the fio numbers?
Continue working on a typeface I’ve been designing for a year now: https://neil.computer/notes/introducing-berkeley-mono/
Feedback is most welcome. The ‘r’ glyph is wonky, and there are issues with the way the uppercase ‘W’ looks, especially at small sizes.
I really like it! You should put up a full alphabet if you have one. Also I see you have Ø and Å, but I don’t see Æ/æ, or AE. They’re all part of the Norwegian alphabet.
The example images have pretty low resolution, enough to make it look blurry on a “default scaled” 1440p 27”. Makes it a little hard to judge.
Looks great. I love the shape of the circles. Strong Eurostile vibes, which I don’t recall seeing in a monospace font.
I agree on the “r”: the serif on top seems too angular and doesn’t fit with the rest of the font. I couldn’t find an uppercase “W”, but the lowercase looks good to me.
I really like it! I also appreciate the use of Univers on your site