1. 3

    Reposting my Reddit comment.


    I was surprised by how well it worked to integrate the fiat-crypto library into Orion. Fiat’s an impressive project. Now, Orion has X25519 with much less effort than a from-scratch implementation would have required, and while avoiding a huge class of vulnerabilities.

    1. 1

      I’ve also found &mut &[T] helpful for parsers when I want to store the changing “remainder” of what’s left to be parsed, and I want some inner function to mutate that remainder in its own scope.
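
      A minimal sketch of that pattern (the helper name `take` is invented):

      ```rust
      // Toy parser helper: the inner function advances the caller's
      // "remainder" in place by reassigning through the outer `&mut`,
      // which is exactly the &mut &[T] trick described above.
      fn take<'a>(input: &mut &'a [u8], n: usize) -> Option<&'a [u8]> {
          if input.len() < n {
              return None;
          }
          let (head, rest) = input.split_at(n);
          *input = rest; // the caller's slice now starts at the unparsed tail
          Some(head)
      }
      ```

      After `take(&mut remainder, 2)` on `[1, 2, 3, 4, 5]`, the call returns the first two bytes and `remainder` is left pointing at `[3, 4, 5]`.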

      1. 2

        I think it’s a phenomenal book. I was a physics major in undergrad and only really had a basic knowledge of some scripting languages for stuff like physical modeling when I wanted to learn more about CS.

        “The Black Book” was a great way to learn about the fundamentals of OS design and abstraction. I particularly like that it presents the OS as a series of “virtualization” methods for various physical resources, i.e. CPU, RAM, and disk. Those connections and that mental model really helped me to build on top of it.

        1. 6

          This reminds me of https://motherfuckingwebsite.com/ and http://bettermotherfuckingwebsite.com/. I tend to use something similar to the 2nd website when mocking out websites in projects. A similar option is classless CSS where there’s styling, but it’s limited to more standard html elements rather than creating new elements out of divs and classes. There’s value to both, but sometimes it’s nice to go back to a more minimal approach.

          I really wish there were more popular brutalist websites.

          1. 7

            brutalist websites

            I don’t think I’ve ever seen a website try to load so many images simultaneously.

            1. 4

              100%. Perfect use-case for some lazy-load scripting.

              1. 1

                Most of the sites seem pretty complex to me.

                This is a site where people can submit their work, and it’s pretty clear that “brutalist” means different things to different people (just like the architectural style).

                1. 1

                  Hah, I forgot about that - good callout - I’ve had that page bookmarked for a while and just opened it for a second to make sure it was what I was thinking of.

                  Brutalism does seem to mean different things for different people, but at least for me, I like when sites don’t overuse css/js but are still completely functional (See GoatCounter for an idea of what I’m thinking of). The brutalist websites page is definitely a good example where an aggressive focus on minimalism hurts the usability of the site. Everything good still needs to be in moderation.

                  1. 1

                    931 MB. Impressive.

                    1. 1

                      Thank you so much for measuring this. I really wanted to but also really didn’t want to.

                1. 1

                  How much does a TLS library need to know about the QUIC protocol? Normally with TCP it just acts as a stream plugin: you feed the socket’s input stream through it and feed your output through it to the socket.

                  But I know QUIC runs over UDP not TCP. It doesn’t just use DTLS?

                  1. 1

                    Without any serious research: if QUIC provides the sequencing, checksumming, and retransmission in user space (instead of in the kernel or hardware, wherever TCP implements them), then the TLS library would need to be able to handle operating at that level of the network stack.

                    1. 1

                      QUIC has TLS bundled into the protocol. So TLS and QUIC are really the same layer if you’re using QUIC. In that sense, a TLS library probably doesn’t need to know much about QUIC. But a QUIC library needs to know everything about TLS.

                      https://www.rfc-editor.org/rfc/rfc9001.html

                    1. 11

                      Regarding a Rust version of the Argon2 module: check out the Orion crate. It’s pure Rust and has easy-to-use APIs for various modern crypto primitives, including Argon2i and XChaCha AEAD. I also know that the maintainer is extremely interested in adapting the API to make it easier and fit people’s use cases.

                      1. 5

                        The container format (bottle) is easily extensible for future compression or encryption algorithms.

                        Why would you tie your archive format to your compression and encryption? Seems like a step backwards to the old ZIP days…

                        1. 13

                          It’s helpful if you want to be able to extract/decrypt a single file. If encryption is a layer on top of the archive, like .tar.gz.age, then you have to decrypt the (potentially very large) archive just to get a single file.

                          Same goes for compression.

                          1. 10

                              There’s a tradeoff. With the tar + {gzip,bzip2,xz,lz4,…} model, you can treat the entire file as a single stream and the compression can easily handle cross-file redundancy. If you have a hundred files of English text then your compression dictionary from the first one can be reused across all of them. The downside of this is that you need to decompress before you can access metadata. A zip file, in contrast, effectively does the compress step separately: the metadata is at the end of the file and contains the index. You can pull a single file out of a zip, which is why it’s used as the basis for things like OpenOffice documents and Java Jar files: it’s easy to pretend that a zip file is a read-only filesystem.

                            There are some interesting middle points in the design space. Some modern compression algorithms (e.g. lz4 and zstd) provide a ‘dictionary mode’ where you can build a dictionary separately and then use that to compress individual files. You could build a compression format on top of this by doing two passes: scan the files to build a dictionary and then do file-at-a-time compression and store the dictionary, the individual compressed files, and a metadata dictionary. You’d then be able to load the dictionary into memory and do fast random reads within the container. You can do this with existing tools and a bit of scripting, if you run, say, zstd over your input files in dictionary mode, then compress each file individually and add the compressed files + the dictionary to a tarball (or pax archive). That doesn’t require a new format, just a modification to the tool. I’m quite tempted to try adding this mode to FreeBSD’s tar.
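
                              The random-access half of that design can be sketched with a toy container in Rust (the Container type and layout here are invented, with identity “compression” standing in for per-file zstd against a shared dictionary):

                              ```rust
                              use std::collections::HashMap;

                              // Invented toy layout: per-file blobs plus an index, zip-style.
                              // Each blob stands in for a file compressed on its own (the shared
                              // dictionary would just be one more blob); reading one file never
                              // touches the rest of the archive.
                              struct Container {
                                  data: Vec<u8>,
                                  index: HashMap<String, (usize, usize)>, // name -> (offset, length)
                              }

                              impl Container {
                                  fn new() -> Self {
                                      Container { data: Vec::new(), index: HashMap::new() }
                                  }

                                  fn add(&mut self, name: &str, blob: &[u8]) {
                                      let offset = self.data.len();
                                      self.data.extend_from_slice(blob);
                                      self.index.insert(name.to_string(), (offset, blob.len()));
                                  }

                                  // Random access via the index alone, like pulling one file from a zip.
                                  fn get(&self, name: &str) -> Option<&[u8]> {
                                      let &(offset, len) = self.index.get(name)?;
                                      Some(&self.data[offset..offset + len])
                                  }
                              }
                              ```

                              Real formats put the index at a known place (e.g. the end of the file) and compress it too, but the access pattern is the same.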

                            I didn’t read too much detail about what this thing is doing because the crypto stuff made me super nervous (there are a lot of difficult problems in designing an encrypted container and the README didn’t contain any discussion of them) so I’m not sure exactly where this sits in the space.

                            1. 1

                              I like your instincts about crypto. :) The mechanism is documented in format.md here.

                              I tried to stick with common patterns and algorithms, with a heavy weight toward DJB’s (of NaCl) work, because it’s often both faster and feels more trustworthy. This is why some of the terminology (“sealed box”, “xchacha”) is weird. (I also like SSH better than GPG because it just feels easier to use.) I’ve run this code by people I trust in this space, but will always welcome new criticism/advice – it’s been a long long time since paramiko.

                              1. 2

                                Thanks. That doesn’t look obviously wrong (and I’m not sufficiently qualified to tell you if it is non-obviously wrong), but it feels like it’s a bit pointless as an integrated thing. If the entire file is encrypted as a stream then you lose the benefits of a structured format. You may as well just do the tar.gz thing and wrap your archive in a separate encryption format. Then you have complete crypto agility (your archive format doesn’t need to know anything about your crypto because it only ever sees plaintext).

                                To me, the value of folding the encryption into the archive format would come from being able to separately decrypt individual files. If I have a 50 GiB archive, being able to separately decrypt the metadata and then decrypt and decompress individual files without having to stream-decrypt the whole thing would be useful. Unfortunately, that’s really hard to get right.

                                By the way, from the writeup it looks as if you’re using NaCl? I’d really recommend looking at libsodium: the APIs are much improved over the original and are very difficult to misuse.

                                1. 1

                                  Yeah, the goal of the unified file format was to make it easier for my friends to do a basic encrypted, compressed archive of a folder without having to use several different tools, some of which (gpg) are user-antagonistic, and some of which (tar) have a ridiculously fragile format. I realize this isn’t for everyone.

                                  Per-block encryption should be possible the same way as per-archive in the current scheme, just moving the bottle down a few layers. It would mean each block has its own key, each sealed for each recipient, so the overhead would increase slightly, but I don’t see why that wouldn’t be okay… though I haven’t tried implementing it. :)

                                  I believe libsodium is just an alternate implementation of the NaCl algorithms. I’m actually using “dryoc”, which is a rust port that’s probably different on a few other axes.

                                  [edit because my thumb hit “send” before I was done]

                                  1. 1

                                    Per-block encryption should be possible the same way as per-archive in the current scheme, just moving the bottle down a few layers. It would mean each block has its own key, each sealed for each recipient, so the overhead would increase slightly, but I don’t see why that wouldn’t be okay

                                    For one thing, it would be difficult to avoid leaking the sizes of the individual files (or you need to explicitly state that this isn’t part of your threat model). Similarly, if the metadata is separately protected then you have to be very careful about vulnerability to known-plaintext attacks (this is what killed the original zip encryption), because the metadata content is often easily guessable (as is the content of an individual file). I believe most of the constructions in NaCl should be resilient against this kind of thing, but it’s the sort of thing that I’d want to see in a threat-model document.

                                    I believe libsodium is just an alternate implementation of the NaCl algorithms.

                                    I believe it is the opposite: it uses the implementation of the algorithms from NaCl (and is, in fact, a fork of NaCl, not a reimplementation), but exposes an API that is much harder to misuse. Some of the comments in your README suggest that NaCl is making you think about things that Sodium explicitly makes sensible decisions on and avoids exposing to the user unless they want to go past the high-level APIs. Not sure if dryoc does the same thing, but there are direct libsodium bindings available for Rust.

                          1. 8

                            A local font stack is all well and good, but Helvetica and Arial, et al are just…well…boring.

                            This is my only disagreement with the article. I prefer to just use the user’s default font. Sure, that is boring but I personally don’t find much value in every site having their own “exciting” font. I just want to see my nice, readable default font almost everywhere. (Some things like games have a reasonable excuse to use their own.)

                            So drop the font stack altogether and just leave the default. Or, if you really must, font: sans-serif (or serif). Or the newfangled ui-sans-serif (if I remember that correctly).

                            Of course the downside is that a lot of browsers ship garbage default fonts…

                            1. 8

                              Of course the downside is that a lot of browsers ship garbage default fonts…

                              This is the real underlying problem. The overwhelming majority of users don’t even know they can change the default font, much less know how to.

                              1. 6

                                100%. Which is super disappointing because it would take browsers very little effort to fix.

                                Firefox doesn’t even default to my system font (which is quite nice), so I need to teach it to follow my system font, which ~0% of users will do.

                              2. 7

                                Helvetica and Arial, et al are just…well…boring.

                                Boring is good! Most of the time, typography’s job is to get out of the way. Browser-default sans-serif is usually what you want, with a slight bump to the size.

                                1. 4

                                  If everyone needs to bump the size maybe we should petition browsers to make the default bigger…

                                  1. 2

                                    Agree! font-size: 16pt; is a much better default.

                                2. 4

                                  I just want to see my nice, readable default font almost everywhere.

                                  This is why I disable custom fonts entirely in my browser. Makes me laugh when a bad website (looking at anything created by Google) uses idiotic icon fonts, but whatever, those sites aren’t worth using anyway.

                                  1. 2

                                    I’ve done this on-and-off but I’m not completely sold. Luckily icon fonts are somewhat falling out of style but I think there are legitimate uses for custom fonts, especially for things that are more wacky and fun. I think it is just a shame that they are abused (IMHO) on just about every website.

                                    1. 4

                                      Not somewhat. Icon fonts haven’t been a recommended practice since 2014, when people shifted to SVG icons via symbols. Their lingering prevalence is likely down to devs who learned icon fonts at one point (especially back when IE compatibility took away many options) and don’t do enough front-end work to keep up with most trends.

                                  2. 3

                                    The people who think to change their browser’s default font are probably the same people who can figure out how to force websites to use the default font.

                                    Shipping a font with a site is the same as any other styling: helpful for the average user, overridable by the power user.

                                    1. 2

                                      I’m kind of interested right now in both improving my website’s typography and getting rid of things like background images and web fonts. The page is already pretty light, but it can be lighter, and I’m questioning the value of the fonts I’m using. I do want to exercise some choice in fonts, though. The ui-sans-serif (etc) look like good options, but so far they’re only supported on Safari. I’m probably going to do some research on a local font stack that doesn’t include awful fonts like Helvetica and Times New Roman, has a reasonable chance of working cross-platform, and falls back to serif, sans-serif, and monospace.

                                      1. 4

                                        I don’t know that many people think Helvetica is “awful”.

                                        Just cut things. Cut the background images entirely; replace with a nice single color. Start by cutting all the web fonts: specify serif and sans-serif and no more than four or five sizes of each. Make sure your margins are reasonable and text column widths are enjoyable. How many colors are you using? Is there enough contrast?

                                        Once you’ve stripped everything back to a readable, comfortable page, see what you actually need to distinguish yourself rather than trying a bunch of trends. What image are you trying to project? Classic and authoritative? Modern? Comedic? Simple? Fashionable? Pick one approach and try to align things to that feel.

                                        1. 1

                                          Helvetica and Times New Roman are de facto standards for a reason. They’re solid, utilitarian typefaces. I wish more sites used them.

                                      1. 2

                                        This sounds like such an obvious thing but we keep seeing this happen over and over again: users write code that uses libcurl functions but they don’t check the return codes.

                                        I have to wonder how many times Rust’s “must use” warnings have saved my butt. This is an easy thing to miss once in a while.

                                        For those who aren’t familiar with Rust, it will yell at you if you don’t do anything with the “Result” return type, which may indicate an error. It’s a warning, not an error, but still very helpful.
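
                                        A small sketch of what that looks like in practice (the function here is invented):

                                        ```rust
                                        // `Result` is #[must_use] in the standard library, so silently
                                        // dropping the return value of `parse_port` trips rustc's
                                        // `unused_must_use` warning.
                                        fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
                                            s.parse::<u16>()
                                        }

                                        fn check_config() {
                                            // parse_port("8080");      // warning: unused `Result` that must be used
                                            let _ = parse_port("8080"); // explicit discard, warning silenced
                                            if let Ok(port) = parse_port("8080") {
                                                println!("listening on {port}");
                                            }
                                        }
                                        ```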

                                        1. 1

                                          gcc (and llvm, and…) has __attribute__((warn_unused_result)) for the same purpose.

                                          1. 2

                                            Which is still helpful even on functions where you might want to use the result only, say, 95% of the time, since you can write a call like (void)someFunction(); to explicitly notify the compiler that you’re ignoring the result on purpose.

                                        1. 1

                                          It seems odd that they don’t compare Superpack to their other homegrown compression tool, Zstd.

                                          1. 3

                                            For Rust, why does this say that you need no_std to get portable binaries? I don’t feel like this is true. If you just want a portable binary within the same OS, you can use the musl libc target and statically compile everything. It’s crazy easy.

                                            1. 1

                                              Suppose you need to write something on a bare metal architecture, no operating system even, just an assembly bootstrap.

                                              1. 1

                                                Ok, sure, then you’d need no_std. But if we’re just talking about a binary being portable between different environments with the same kernel, then Rust + musl libc gets you there.

                                                The biggest hit to portability is dynamic linking. Once you start dynamically linking, you have to worry about what version of your dependencies are on your users’ machines. Or your users have to worry about it, at least.

                                                Portability between different operating systems is a less important problem, though seemingly still interesting given how much attention Cosmopolitan libc gets.

                                                Portability to the extent of “I don’t even need an OS” is probably not that valuable.

                                            1. 1

                                              These are all interesting optimizations (had no idea char was 2x slower to iterate over than the u8s in a &str, though that makes some sense in hindsight).

                                              But I feel like this post is missing a full comparison of all the various algorithms and strategies tested. What was the final result? How much faster did it get?
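
                                              For anyone curious, the two iteration styles being compared look like this (function names are mine; this doesn’t reproduce the benchmark, it just shows where the decoding cost lives):

                                              ```rust
                                              // `bytes()` walks the raw u8s of the &str; `chars()` has to decode
                                              // UTF-8 into 32-bit `char` values as it goes, which is the extra
                                              // work being measured. For an ASCII needle like ' ' the two agree,
                                              // because UTF-8 continuation bytes never collide with ASCII bytes.
                                              fn count_spaces_bytes(s: &str) -> usize {
                                                  s.bytes().filter(|&b| b == b' ').count()
                                              }

                                              fn count_spaces_chars(s: &str) -> usize {
                                                  s.chars().filter(|&c| c == ' ').count()
                                              }
                                              ```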

                                              1. 2

                                                This post is part of a series, and benchmark results were in a previous post (this is seemingly out of order: the author implemented the optimizations first and is now distilling that knowledge).

                                              1. 12

                                                Isn’t this… really big?

                                                1. 15

                                                  It does seem like it. This is, to my knowledge, the first hugely popular I/O library which now lets its users use io_uring in a way which just looks like normal async file I/O.

                                                  Rust seems like it is in a special position in that it’s a language with good enough performance for io_uring to matter, but with powerful enough facilities to make clean abstractions on top of io_uring possible.

                                                  1. 6

                                                    Isn’t the problem that Rust bet the farm on readiness-based APIs and now it turns out (surprise) that completion-based APIs are generally “better” and finally coming to Linux (after Windows completely embarrassed Linux on that matter for like a decade).

                                                    1. 1

                                                      It’s not a problem in practice. Rust’s futures model handles io-uring just fine. There was some debate over how to handle “cancellations” of futures, e.g. when Rust code wants to just forget that it asked the OS for bytes from a TCP socket. But the “ringbahn” research prototype found a clean solution to that problem.

                                                      Actually, that entire blog is a wealth of information about Rust futures.

                                                      1. 1

                                                        found a clean solution

                                                        I’d call that a stretch, considering that the “solution” pretty much foregoes futures altogether (and with that async/await) and largely rolls its own independent types and infrastructure.

                                                        So I’m not seeing how this is evidence for:

                                                        futures model handles io-uring just fine

                                                        I’d say its evidence of the opposite.

                                                        Actually, that entire blog is a wealth of information about Rust futures.

                                                        Actually, that blog is the reason why I asked the question in the first place.

                                                        1. 1

                                                          I’m getting a little out of my depth here, but my understanding is that ringbahn (which inspired the tokio implementation) is meant to be used under the hood by a futures executor, just like epoll/kqueue are used under the hood now. It’s a very thin interface layer.

                                                          Basically, from application code you start up a TCP socket using an async library with io-uring support. Then whenever you read from it and await, the executor will do ringbahn-style buffer management and interface with io-uring.

                                                    2. 1

                                                      There’s also this effort in the hugely popular libuv: https://github.com/libuv/libuv/pull/2322

                                                      (Since libuv isn’t modular it hasn’t officially landed yet, but the way I understand it, both projects are at about the same level of completion)

                                                    3. 4

                                                      Everything about rust is big now =)

                                                      1. 4

                                                        Rust is the epitome of Big FOSS.

                                                    1. 9

                                                      There’s a lot wrong with this rant.

                                                    First and foremost, the post is largely framed as if there are two groups of people: the producers and the consumers of Xorg/Wayland. You can see that framing in its appeals to “supply and demand” and in how it mostly talks about what is most valuable to users while ignoring ease of development as something to care about. Essentially, the post treats FOSS display servers as a product to consume.

                                                      But this is free software. There’s no fundamental difference between producers and consumers. You aren’t paying for the development of a display server. You aren’t owed anything. If the maintainers of Xorg no longer enjoy maintaining it, they should stop. And they did. If that doesn’t work for you, the onus is on you to pick up development where it left off.

                                                      If that model doesn’t work for you, go pick up Windows or macOS, where you are literally paying money to not have these problems. That’s awesome. But people who use Linux as a desktop are opting into a different model where they are using software distributed “WITHOUT WARRANTY”. It’s unreasonable to then complain that the people who screw around with this software in their free time have decided to stop or switch gears to something they enjoy more.

                                                      And then there are some small nitpicks.

                                                      Words like DPI, display scaling, framebuffer and such are meaningless

                                                      Plenty of users have issues with wanting different DPI and display scaling on different monitors. I’m one of them. This is a very real feature of Wayland that impacts my life, and I’m sure many would agree.

                                                      Nvidia makes hardware. Their job is not to sabotage their product so it can work with arbitrary bits of code out there. It is the job of software vendors to make their software work with the hardware in the best fashion (if they want to)

                                                      Open sourcing a driver (or even just making it source-available) is not “sabotaging their product”. AMD does it just fine with their GPUs. I won’t speculate on why Nvidia doesn’t cooperate with open source, but I suspect that the reason would frustrate me.

                                                      1. 1

                                                        If that model doesn’t work for you, go pick up Windows or macOS, where you are literally paying money to not have these problems. That’s awesome. But people who use Linux as a desktop are opting into a different model

                                                        This claim is fundamentally incompatible with… Something that I don’t know if there’s a name for.

                                                        Basically, there are two camps in Linux: the “LibreApple” camp (who want to replace Windows/etc with a FOSS equivalent and improve computing for everyone) and the “PowerUserOS” camp (who want an OS by power users for power users, a lot of whom see Linux’s niche status as a benefit, and who like to say “Linux is like Lego”).

                                                        By definition, LibreApple can’t be exclusive to hackers (“everyone” means “everyone”), which means you either literally teach the entire world to code, or you build Linux with a fundamental difference between producers and consumers.

                                                        Point is, you come down firmly in the “PowerUserOS” camp, to the point where you’re not even acknowledging the existence of LibreApple.

                                                        You could claim that “LibreApple means FSF and PowerUserOS means the open source crowd”, but 1) I’m not so sure that’s always true (see below), 2) you used the term “Free Software” so me describing you as in the “open source crowd” would make things less clear, and 3) there’s enough semantic confusion around the phrases “free software” and “open source” that I wanted to exclude them from my comment if at all possible.

                                                        From point 1 above: For instance, a lot of “LibreApple” people see Steam as a good thing for Linux, despite being a proprietary platform for (mostly) proprietary games. I don’t really want to get into this semantic discussion about FOSS though, I’m just preemptively responding to an expected response.

                                                        1. 1

                                                          The problem is that “LibreApple” doesn’t work. Apple has funds to be Apple. The Linux ecosystem is developed by volunteers for fun. For the majority of FOSS, development will always be by volunteers for fun.

                                                          Maybe if there were a company out there whose business model was “buy our linux-compatible software and we promise it will just work”, then there could be a LibreApple. But I don’t know of any such company. System76 is the closest I can come up with. Buy their stuff and it’ll probably just work. But it’s not like they have tons of control over Wayland/X11, since the display servers are written by volunteers for fun. So unless they want to take on the whole stack, there will still be problems.

                                                          I don’t know if the Linux ecosystem can house a company that’s paid to make the whole software stack Just Work. I hope it can. I know I’d pay for it.

                                                          1. 2

                                                            The Linux ecosystem is developed by volunteers for fun. For the majority of FOSS, development will always be by volunteers for fun.

                                                            This hasn’t been true for a long time. Most of the big projects (for example the kernel, glibc, systemd, GNOME) are backed by big companies. Red Hat is now part of IBM and employs a load of the core developers for key parts of the system. It’s difficult for a volunteer-run project to keep up with these.

                                                            1. 1

                                                              I guess that’s fair. But I think the foundation of what I was saying is still true. If you’re not paying for a product, and instead are essentially relying on the enthusiasm of a volunteer (or a company donating to FOSS, which is not unlike volunteering), then we can’t look at things from the perspective of the software being a product. It’s not a product because no one is buying it.

                                                              That said, I admit the situation becomes more complicated with companies involved. They have more funds to throw at developers to maintain the small details that individual volunteers often don’t want to. But the incentive structure to do things like “support Xorg forever” is still missing. If there were a giant user base paying for Xorg that might stop paying if all their programs stopped working, maybe Wayland wouldn’t exist, or would have better backwards compatibility.

                                                      1. 3

                                                        It’s much worse than that in some languages. Swift, for example, defines grapheme clusters as the unit for the length of a string and so the length of a string can change between releases of your Swift compiler.

                                                        1. 4

                                                          Unfortunately, grapheme clusters are the closest thing there is to a “character”, so any other option (except relying on the ICU library in your OS instead) would yield even more nonsensical results.

                                                          1. 1

                                                            I haven’t yet seen a situation where I actually want to know how many grapheme clusters are in a string. When would this be useful? Maybe if you’re implementing a text rendering engine?

                                                            1. 1

                                                              Not grapheme clusters; characters.

                                                              1. 1

                                                                Ok, but same question. Even if there were a reliable way to count the number of “characters” in a string of UTF8, I’ve never encountered the need to do that. Maybe it’s just the kind of code I work on that it doesn’t come up?

                                                                Even if you’re presenting a string to the user, don’t we 99.9% of the time just pass the UTF8 bytes to a rendering library and let it do the rest?

                                                                On the other hand, I very consistently need the length of the string in bytes for low-level, memory-management type work.
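
(To make the ambiguity concrete: even “number of characters” in the code-point sense depends on Unicode normalization, and none of those counts match the UTF-8 byte length. A quick Python sketch:)

```python
import unicodedata

s1 = "café"                            # "é" as one precomposed code point
s2 = unicodedata.normalize("NFD", s1)  # "é" as "e" + combining acute accent

print(len(s1))                  # 4 code points
print(len(s2))                  # 5 code points, same rendered text
print(len(s1.encode("utf-8")))  # 5 bytes
print(s1 == s2)                 # False, despite looking identical on screen
```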

                                                                1. 2

                                                                  If you write web apps, everything you receive from an HTTP request is effectively stringly-typed simply because of the nature of HTTP. And many of the data types you will convert to as you parse and validate the user’s input will involve rules about length, about what can occur in certain positions, etc., which will not be defined in terms of UTF-8 bytes. They will be defined in terms of “characters”. Which, sure, is a bit ambiguous when you translate into Unicode terminology, but generally means either code points or graphemes.

                                                                  And you can protest until you’re blue in the face that the person who specified the requirements is just wrong and shouldn’t ask for this, but the fact is that the people specifying those requirements sign your paychecks and have the power to fire you, and a large part of your job as a web app developer is to translate their fuzzy human-language specifications into something the computer can do. Which, again, almost never involves replacing their terms with “UTF-8 bytes”.

                                                                  Amusingly this has become a problem for browsers, because the web specs are still written by people who think at least partly like you do, and so have a significant disconnect between things like client-side validation logic setting max length on a text input (which the spec writers tend to specify in terms of either a byte limit or a UTF-16 code-unit limit), and the expectations of actual humans (who use languages that don’t cleanly map one “character” to one byte and/or one UTF-16 code unit).

                                                                  IIRC Twitter actually went for the more human definition of “character” when deciding how to handle all the world’s scripts within its (originally 140, now 280) “character” limit, which means users of, say, Latin script are at a disadvantage in terms of the byte length of the tweets they can post compared to, say, someone posting in Chinese script.

                                                                  1. 2

                                                                    I think the twitter example is an interesting one that I hadn’t thought of. Limits on lengths are usually due to concerns about denial-of-service attacks, but not for Twitter. It’s actually supposed to be a limit on some kind of abstract human idea of “characters”. I don’t envy having to figure that one out.
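
(For illustration, a toy version of such a weighted limit might look like the sketch below. The single range and the weights are made up for the example; Twitter’s real weighting table is more involved.)

```python
def within_limit(text: str, limit: int = 280) -> bool:
    """Toy weighted "character" budget: CJK Unified Ideographs count
    double, everything else counts once. Illustrative only."""
    weight = sum(2 if 0x4E00 <= ord(ch) <= 0x9FFF else 1 for ch in text)
    return weight <= limit

print(within_limit("a" * 280))   # True: 280 Latin letters fit exactly
print(within_limit("漢" * 141))  # False: 141 ideographs weigh 282
```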

                                                        1. 3

                                                          That is a really complicated setup.

                                                          At first, I was thinking that I can’t imagine what you’d want 25Gbit for. But then again, I recently moved from 400Mbit cable to 50-ish DSL and I really, really don’t like DSL. (Side note: they’ve just announced they’re laying fiber in my area; the contract is signed, and this time next year I could be on gigabit.)

                                                          I assume jumping from < 1Gbit to 10+ Gbit is just a natural next step. I mean, yes, I don’t need that speed all the time. But it’d still be nice to click “Download” and have the entire 100+ GB of Elder Scrolls Online here a minute later.

                                                          1. 3

                                                            That is a really complicated setup.

                                                            It seems that he’s not even using multiple subnets - I’d say his setup is a lot simpler than mine. :)

                                                            I could imagine having 25 Gbps at home, but I’ve just started to deploy 10 Gbps in my internal network (between a few hosts) so it might be slightly overkill for me as well. My current max is 1000/100 but I’ve only opted for 100/100 as I don’t need more, and since I can’t have 1000 Mbps in upload…

                                                            1. 2

                                                              might be useful to scan the entire ipv4 address space in a couple of minutes (or even faster)
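
(Back-of-envelope arithmetic, assuming a ~60-byte minimal probe frame; real per-packet overhead varies:)

```python
# Time to send one minimal probe to all 2^32 IPv4 addresses at 25 Gbps.
addresses = 2 ** 32
frame_bytes = 60          # assumed size of a minimal SYN probe on the wire
seconds = addresses * frame_bytes * 8 / 25e9
print(f"{seconds:.0f} s")  # ≈ 82 seconds, if you can actually hit line rate
```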

                                                              1. 3

                                                                If your ISP doesn’t block you, that’s a great way to end up on threat intelligence feeds and labelled as a bot.

                                                                1. 1

                                                                  fine take a few more minutes :)

                                                            1. 5

                                                              tl;dr: They are sum types, just like in Swift or F#.

                                                              1. 5

                                                                I hope we are going to see a post about how Rust immutability is amazing soon. After that we can have articles about every single ML feature that has existed for decades and that Rust successfully implemented.

                                                                1. 8

                                                                  You mean, it’s not a good thing to implement great features of a language that basically nobody is using into a new language that makes things better and more appealing?

                                                                  1. 2

                                                                    No, I mean presenting it as a novel idea when in fact it has been around for 30 years is kind of funny.

                                                                    that basically nobody is using into a new language

                                                                    Factually incorrect. Please have a look at the languages in the ML family.

                                                                  2. 4

                                                                    It’s no secret that Rust was heavily inspired by ML, C, C++, etc. Rust itself has very few new language features, the borrow checker being one of them.

                                                                    But Rust appeals to the low-level systems crowd, and brings with itself a ton of nice-to-haves from ML and the like. So people who were previously stuck with C-like enums suddenly have nice options and want to talk about it.

                                                                    1. 4

                                                                      What’s so bad about highlighting the strengths of a programming language?

                                                                    2. 2

                                                                      PEP 634 is also bringing sum types to Python.

                                                                    1. 3

                                                                      What is stopping the community from building a net new (fully compatible) web browser at this point?

                                                                      I would love to hear from those who have the relevant experience (Chromium/Firefox developers, hobbyist browser developers).

                                                                      I often see the answer that the endeavour is simply too big to attempt at this point. I think if it’s worth doing, then the amount of effort shouldn’t stop people from at least attempting to build something better.

                                                                      I’m intentionally being naive here with the hopes to spark some discussion.

                                                                      1. 4

                                                                        Rendering basic HTML is easy enough. Ensuring complex modern webapps like Google Docs work performantly is multiple orders of magnitude harder. Even Microsoft with all its corporate backing struggled to get the old Edge engine to run competitively.

                                                                        1. 3

                                                          I’m curious what makes it orders of magnitude harder. Is it the number of moving pieces? Is it the complexity of a specific piece needed to make modern web apps work? Maybe existing browser code bases are difficult to understand as a point of reference for someone starting out?

                                                                          1. 5

                                                                            A good way to understand the complexity of modern browsers is to look at standards that Mozilla is evaluating for support.

                                                                            You’ve got 127 standards from “new but slightly different notification APIs” to “enabling NFC, MIDI I/O, and raw USB access from the browser”. Now, obviously lots of these standards will never get implemented - but these are the ones that were important enough for someone to look at and consider.

                                                                        2. 3

                                                                          Drew DeVault goes through some of the challenges here. Short version: enormous complexity.

                                                                        1. 1

                                                                          I have a hard time figuring out how to get matrix setup and working. Like what the backend and frontend are and how they work. Am I not understanding what it is?

                                                                          1. 10

                                                                            TL;DR: If you want to try it out, download the Element client and let it walk you through making an account.

                                                                            You’ll have to choose a Matrix homeserver (like an email provider). If you won’t use it that frequently, the free matrix.org homeserver is good but slow. For more serious use, consider a subscription to Element Matrix Services, where they host a homeserver for you. Or you can try to self-host synapse. I wouldn’t.

                                                                            Other homeservers are being developed right now (Conduit is pretty cool). But none are ready for production just yet. And unfortunately the choice of homeserver is still important because your account will be tied to it. In the future, the “multi-homed accounts” feature will make this initial choice less important (hopefully).


                                                                            There are two basic components to understand if you’re just getting into Matrix, and the two components are best understood as an analog to email, which is really the only popular federated protocol today.

                                                                            There’s the Matrix homeserver, which is like your email provider. It’s where the messages and account information are stored. It’s what actually takes care of sending and receiving your messages. It’s where you sign up. Multiple people can have an account on the same homeserver. Synapse is the most popular homeserver right now, it’s developed by the team that founded Matrix, and it’s considered to be slow (Python) and is slated to be replaced.

                                                                            Then there’s Matrix clients. Just like email, Matrix is standardized. You can use any email client to get/send your Gmail, and you can use any Matrix client to get/send Matrix messages. Element is the most popular Matrix client (again made by the team that created Matrix). It’s the most feature-complete by far. It’s written in Electron, so it’s bloated. But it works fairly well.

                                                                            1. 3

                                                                              I wouldn’t.

                                                                              Can you elaborate ? We use a synapse server at work and it works.

                                                                              1. 1

                                                                                I should have been clearer. I meant that I don’t advise trying to self-host the homeserver at all. Self-hosting anything is a ton of work if done properly. Timely security updates, migrations, frequent (and tested) backups, and reliability all come to mind as things I personally don’t want to have to worry about. Element Matrix Services seems like a good deal for just a few people.

                                                                                1. 3

                                                                  But these challenges aren’t at all Synapse-specific, are they? Updates, migrations, and proper backups are something you have to do with any server that you self-host. And after running a homeserver for a few years, the only migration I ever had to do was from an older to a newer PostgreSQL version, by simply dumping the whole database and reading it back in. All schema migrations are done automatically by Synapse and I never had any problems with that. Hosting a Matrix server with Synapse is easy if you compare it, e.g., to hosting your own email server. And Synapse really is battle-tested because it’s dogfooded at the huge matrix.org homeserver instance.

                                                                                  1. 1

                                                                                    No they’re definitely not specific to Synapse. That was pretty much my point.

                                                                                    And I know Synapse has put a ton of work into being easy to deploy. But I still won’t ever recommend managing infrastructure to anyone. It’s awesome that Synapse makes it as easy as possible for people like us to self-host it, but $5/month is probably well worth the lack of headache for most people.

                                                                              2. 2

                                                                                As far as I can tell, none of the homeserver implementations are ready for self-hosting – unless you disable federation, and then what’s the point?

                                                                                1. 3

                                                                                  I’m not sure where you’re getting that impression. I’m hosting two different Synapse instances myself. I just update them when new releases come out; it’s been relatively painless.

                                                                                  1. 1

                                                                                    Can you please give a reason why you don’t think Synapse is ready for self-hosting? I’ve been doing it for years with enabled federation and I never had any serious problems.

                                                                                    1. 1

                                                                                      Sure. I’ve heard again and again that if you enable federation on Synapse and someone joins a large room, the server bogs down and starts chewing through resources. Has that changed?

                                                                                      Also note that I’d be running it on an old laptop or a raspberry pi, just like I would run any chat server – IRC, Jabber, etc.

                                                                                2. 1

                                                                                  .. I mean, probably? What exactly are you struggling with?

                                                                                  1. 2

                                                                                    Uh oh, now I feel even dumber. The main website has information about something called Synapse and there is “element” which is a frontend I believe, but how do you install a matrix server and start using it?

                                                                                    1. 13

                                                                                      My attempt at clarification:

                                                                                      • Matrix is a protocol for a federated database, currently primarily used for chat
                                                                                      • Synapse is the reference home server (dendrite, conduit, construct etc. are other alternatives)
                                                                                      • Element is the reference client (there are versions of element for the web (electron), android and ios)
                                                                                      • A user account is (currently) local to a home server
                                                                                      • A chat room is federated and not located on a specific home server. The state is shared across home servers of all users that have joined the room.
                                                                                      • There are P2P tests where the client and home server are bundled together on e.g. a mobile phone
                                                                                      • Spaces are a way to organize rooms. Spaces are just a special case of a room and can include other rooms and spaces.
                                                                                      1. 4

                                                                                        Thank you! That clarifies a lot. I was stuck thinking Matrix is the server. So, Matrix is a protocol for a federated database, that’s very interesting and cool.

                                                                                        1. 1

                                                                                          Is it legitimate for me, as a user rather than someone who’s interested in the infrastructure, to just think of Matrix being like a finer-grained version of IRC, where instead of (mainly) a few big networks there are more smaller networks and instead of joining e.g. #linux on freenode, I’d join e.g. #linux:somewhere …

                                                                                          Would I now discover ‘rooms’ by starting from a project’s website, for example, rather than just joining some set of federated servers and looking for rooms with appropriate names?

                                                                                          I just searched for ‘linux room matrix’ and the top hit was an Arch Linux room #archlinux:archlinux.org

                                                                                          (I don’t really want to join a general Linux room - just using it as an example)

                                                                                          1. 3

                                                                                            Well, generally no. Almost all Matrix homeservers are joined together via the federated protocol. So if you join #archlinux:archlinux.org from homeserver A, and your BFF uses homeserver B, you will still see and communicate with each other in that room as if you were both on homeserver A.

                                                                                            One COULD create a non-federated home server, but that’s not the typical use case, and the reasons to do so would be odd. If you are doing for example a chat server for internal chat @ $WORK, using Matrix is probably a terrible idea. Zulip, Mattermost, etc are all better solutions for that use-case.

                                                                                            1. 2

                                                                                              Discovering rooms is currently a bit problematic, as room directories are per server. But a client can query room directories from any server (that allows public queries). Spaces will probably help a lot with room discovery, as they can form deep hierarchies.

                                                                                          2. 8

                                                                                            I did a video to try to explain it last year (which i should update, but it’s still usable, even if it calls Element by its old name of Riot): https://www.youtube.com/watch?v=dDddKmdLEdg

                                                                                            1. 3

                                                                                              I recommend starting off by just creating an account at app.element.io and using the default homeserver so you don’t have to host anything yourself.

                                                                                              1. 2

                                                                                                Synapse is a server implementation and Element is one of the clients.

                                                                                                Installing Synapse: https://matrix.org/docs/guides/installing-synapse

                                                                                                1. 1

                                                                                                  Uh oh, now I feel even dumber.

                                                                                                  Don’t. The Matrix project is pretty terrible at both naming and the new user experience.

                                                                                                  1. 2

                                                                                                    Not trying to hate on them or anything. @ptman’s comment above really helped.

                                                                                                    1. 1

                                                                                                      Yeah, I wish them every success - but what I guess I’ll call the introductory surface of the system is fairly painful.

                                                                                            1. 30

                                                                                              Last month, I had to install a package manager to install a package manager. That’s when I closed my laptop and slowly backed away from it.

                                                                                              The whole article is good, but this was my favorite part.

                                                                                              1. 7

                                                                                                i came here to quote that line too :) i laughed out loud when i hit it.