1. 4

    We hence see essential complexity as “the complexity with which the team will have to be concerned, even in the ideal world”… Note that the “have to” part of this observation is critical — if there is any possible way that the team could produce a system that the users will consider correct without having to be concerned with a given type of complexity then that complexity is not essential.

    I strongly disagree with this and take the opposite stance: there’s a lot of complexity that people think is “accidental” that’s essential. One example is security. Most users will consider an insecure system correct, plenty of times an insecure system flies under the radar long enough to be net positive, and in an ideal world there’d be no attackers. But making things secure is an essential part of complexity; it’s a kind of essential complexity. Another is privacy and safety. Any system that can be used for interpersonal abuse will eventually be used for abuse.

    Overall I think accidental and essential complexity are too broad as definitions and we need a finer grain of meaning.

    1. 8

      Tbf I don’t think you’re disagreeing with the essence of the point.

      You’re making the additional, separate, and correct point that what people consider “the spec” is often not a reflection of what “the spec” truly is, or should be, in a real-world production system.

      They’re saying that, given some spec which we assume is correct, then “if there is any possible way that the team could produce a system that [conforms to that spec] without having to be concerned with a given type of complexity then that complexity is not essential.”

      1. 4

        That’s fair.

        1. 4

          I’d actually defend @hwayne’s point in stronger terms (that I’m not sure he’d agree with).

          The very framing of starting from a single, complete spec and following the spec to build an implementation is misleading at best and outright dangerous at worst. Idealized specs are unknowable in the real world. So yes, what the authors say applies to some idealized spec, but it’s unknowable. Who cares?

          The best use of specs is to nail down aspects of systems. The desired behavior is then cobbled together from a combination of multiple rigorous specs, informal prose and even more informal handwaving. And this is fine! As long as everyone understands where the boundaries are between the different categories.

          (I’ve reached this conclusion partly after reading @hwayne’s https://www.hillelwayne.com/post/why-dont-people-use-formal-methods. Corrections to my understanding most appreciated.)

          1. 2

            I agree with your overall point about the messiness of building real systems, and that the idealized spec is usually unknowable, at least in part.

            That said, I think it’s still a useful conceptual tool for thinking about problems, and I think the accidental/essential complexity distinction is extremely useful for analyzing code and design, in exactly the same way that idealized physics is useful for thinking about the real world, even though real-world applications have friction, air resistance, etc.

            1. 4

              One final attempt to articulate the tiny area of disagreement amidst all our overlapping agreements:

              As I see it, the rhetoric around essential and accidental complexity is deeply tied to getting programs “correct”. The evolution I had in my thinking after that post I linked above was in realizing that it’s meaningless to ask if a program is correct. You can only ask if programs correctly satisfy certain properties.

              If programs don’t have a single global property to optimize for, it’s much harder to reason about essential vs accidental complexity. Some aspect of a program could be essential for one property (e.g. memory safety) but accidental for another (e.g. fault tolerance). So the distinction seems a lot less interesting to me in recent months.

              1.  

                As I see it, the rhetoric around essential and accidental complexity is deeply tied to getting programs “correct”

                Personally, I’ve always heard accidental complexity invoked in the context of what makes something unreadable, or difficult to understand, or bloated. It’s: “why is this thing that should be simple so huge and unwieldy?”

                Based on your about page, that’s very much in line with your areas of research, it looks like. Regardless of the common use of the rhetoric, that’s the part of it I find value in.

                1.  

                  That’s fair! I was thinking about correctness because neither OP nor Fred Brooks’s original paper mention readability.

                  1.  

                    I would add “Hard to Operate” to the list as well. Basically, things that are built such that they are an operator’s nightmare.

          2. 2

            Not sure how that is a disagreement if security, privacy and safety are part of the user requirements.

            1. 2

              They rarely are. Most companies only care about security after they’ve been breached.

              1. 3

                Absent an explicit spec, “accidental” and “essential” are just very subjective terms, and there’s no easy way to pin down a firm boundary between them. Legitimate differences of opinion are highly likely.

                And, as you may be aware, specs are hard. Like, easy to get wrong. Usually incomplete.

            2. 2

              making things secure is an essential part of complexity

              Wait, what? I have no idea what you mean by this. I’ve definitely seen a lot of highly complex software that had not even Clue 1 about security anything.

              1. 1

                Sorry, I mean it’s a kind of essential complexity.

                1. 1

                  Concrete example: OpenSSL, LibreSSL, S2N. Rank by complexity and security; observe correlation.

                  Another one: Ubuntu, FreeBSD, seL4. Same exercise.

                  These are just examples and don’t necessarily prove a more general point. But I’m hard pressed to think of an example where a more complex system is more secure just by virtue of being more complex. Usually it’s the opposite; we make things more secure by reducing the attack surface to something manageable.

                  I guess you’re just thinking of relatively security-oblivious projects that struggle to “add security” as an afterthought, maybe once the horse has already left the barn so to speak. That’s kind of an uphill battle, but I’ll grant it’s an unfortunately commonplace scenario. I’d argue it generally places the defenders into a reactive position, a whack-a-mole arms race where they’re always at least one step behind the attackers. Sounds familiar, right?

                  Making a system more secure may entail making it more or less complex than it was; it really depends on the specific situation. But adding complexity (“essential” or not) generally makes a system less secure, all else being equal.

                  OK, that horse was probably dead already, but man is it really dead now. Sorry, I got a bit triggered back there.

                  1. 4

                    Lemme try clarifying what I meant: I’m arguing that trying to be secure adds complexity, especially if you have other requirements too. But it’s complexity we can’t write off as accidental, even though many users will. So it’s essential complexity, in that we can’t ignore it or externalize it as a requirement, and it makes things more complex.

                    1. 1

                      Ah, OK, you’re thinking of complexity of requirements, where I was thinking complexity of implementation? That makes some sense. (I don’t wanna touch the “essential”-vs-“accidental” distinction, I think it’s a reddish herring, a vintage herring that has outlasted its sell-by date, kinda like I said over there.)

                    2. 1

                      But I’m hard pressed to think of an example where a more complex system is more secure just by virtue of being more complex.

                      Windows NT is quite a bit more secure than DOS.

                2. 1

                  Is it just that there needs to be a finer grained meaning or is it that, in the real world, accidental v. essential complexity doesn’t answer the question of “Is this complexity avoidable?” Avoidable v. unavoidable complexity is a more practical categorization but harder to make, as the paper notes that some forms (but implies that not all) of accidental complexity are unavoidable.

                  That’s not to say that a finer grained categorization of essential and accidental complexity wouldn’t help answer that question, but I’m dubious that the problems we see can be solved solely by avoiding complexity, given things like poor user understanding of what they want software to do: the clarity of problem specification that this paper assumes deteriorates in real-world conditions.

                  1.  

                    I feel like the finer grained meaning from the paper is an elegant way of framing what a lot of people in the industry “feel” right now: FP simply allows devs to discern the difference between accidental and essential complexity, and presents a tradeoff worth taking a risk on. I would love to see a similar post on No Silver Bullet.

                1. 7

                  Unfortunately history doesn’t point to great outcomes with respect to standards and interoperability. Ten or 20 years ago the big battle was interoperable word processing and spreadsheets, not web browsers.

                  That issue seems less important now, but it’s still a problem, and it’s still unresolved.

                  Programmers use text and markdown so they probably don’t feel it, but lots of the world still runs on Word and Excel. Newer companies likely use cloud solutions which ironically don’t have STABLE formats, let alone OPEN ones (and occasionally break).

                  We still have a bunch of silos and “fake” standards. That is, stuff that’s too complicated for anyone besides a single company to implement. I haven’t really followed this mess, but it appears to be still going on:

                  https://en.wikipedia.org/wiki/Microsoft_Office_XML_formats

                  https://en.wikipedia.org/wiki/OpenDocument


                  I’ll also point out that POSIX shell is way behind the state of the art… There is a ton of stuff that is implemented in bash, ksh, zsh, busybox ash, and Oil that’s not in the standard. I’d estimate you could double the size of the POSIX shell standard with features that are implemented by multiple shells.

                  It’s a lot of work and I think basically nobody has the time or motivation to argue it out.

                  I’m still publishing spec tests that show this, and that other shell implementers can use. Example:

                  https://www.oilshell.org/release/0.8.0/test/spec.wwz/survey/brace-expansion.html

                  https://www.oilshell.org/release/0.8.0/test/spec.wwz/survey/dbracket.html

                  All that stuff could be in POSIX. Actually I noticed busybox ash is also copying bash – so bash is a new de facto standard with multiple implementations.

                  Likewise I’ve documented enhancements for other shells to implement:

                  https://www.oilshell.org/release/0.8.0/doc/simple-word-eval.html

                  1. 3

                    Related article about how not just the POSIX shell but the POSIX operating system APIs have become outdated:

                    https://www.usenix.org/publications/login/fall2016/atlidakis

                    https://www.usenix.org/system/files/login/articles/login_fall16_02_atlidakis.pdf

                      Basically OS X, Android, and Ubuntu are all built on top of POSIX, but it’s not really good enough for modern applications, so they have diverging solutions to the same problems. This can be “working as intended” for a while, but if it goes on for too long, then the lack of interoperability impedes progress.

                    1. 2

                      Impediments to progress are sort of relative to some genuinely viable alternative. If there isn’t one, or people aren’t aware of one, we just keep slogging through the tar pits. Those who grow up in the tar pits don’t even notice.

                      1. 1

                        iOS as well, obviously, but that’s a different beast to test, so.

                        1. 3

                          iOS disallows stuff that’s part of POSIX, like fork, so even if all of the facilities are present, they aren’t part of the public API. So it’s not a POSIX OS.

                          1. 1

                            That’s an even better observation.

                            1.  

                                macOS apparently warns you if you do anything after fork except immediately exec’ing.

                      1. 5

                        Part of my answer is that focusing on the technology underpinnings is asking the wrong question, but that’s only half the story. The rest of the reason I think OStatus is on the wrong track:

                        I need to be able to transparently migrate service provider. I should be able to switch from one service provider to another, and, after doing so, should not have to worry about the old provider any more. Such a feature makes it possible to take risky bets on service providers, and if it doesn’t work out, it’s not a permanent loss. People need to be able to make such risky bets, otherwise they’ll only register with service providers that they know will be reliable, and that means big services get bigger and small services can’t get off the ground.

                        That means domain names can’t form part of my permanent identity. If it looks like notriddle@example.com, then it means I’m now chained to example.com, and even if I decide to switch to another provider, I either get to send a message to all my contacts asking them to change, or I get to worry about example.com being able to host a permanent forwarding address. I can also own my own domain, but what happens if I screw up and it expires?

                        1. 2

                          domain names can’t form part of my permanent identity

                          Well.. the alternative is cryptographic keypairs as in SSB. But this makes multi-device hard, and making keys manageable by normal people is also hard (I guess deriving keys from passphrases is convenient but then the hard problem is recovery from loss/theft).

                          I can also own my own domain, but what happens if I screw up and it expires?

                          Don’t screw up? People have been owning personal domains for a couple decades now, with good results mostly.

                          Seems like the bigger barrier to mass adoption of personal domains is having to pay money at all in a world where most online communication has been free :/

                          1. 1

                            In your version you don’t actually really belong to a federation instance anymore. It acts more or less like a cache. I think this is a cool idea but doesn’t really match what most people mean when they discuss federation.

                            It’s probably difficult to form a sense of community in this case or to set up an instance for your friends since people don’t belong to an instance in any meaningful sense.

                            Conceptually, it seems more like p2p to me.

                          1. 6

                            Update: I’m told that newer versions of the compiler handle it just fine but the question still stands (was it just a compiler problem or the call definition has been changed?).

                            Neither. The definition of a borrow got more complex.

                            https://github.com/rust-lang/rfcs/blob/master/text/2025-nested-method-calls.md

                              1. 11

                                We are told that our computers are being stripped of their functionality because they are just too insecure and too complicated for the average “normal” or “normie” to deal with. After all, the problem could not possibly be that the Windows operating system is an insecure piece of junk, reminiscent of a 40-year-old family minivan held together with chewing gum and baling wire.

                                I bet if Microsoft actually managed to fix every single remote code execution and privilege escalation flaw in Windows, that it wouldn’t even cut the number of people who lose their bank account to keyloggers in half.

                                The number and variety of malware out there is well documented for anyone who’s curious. What’s in it? Some of it spreads via drive-by downloads and worms, but a lot of it is just phishing, with spreading methods like:

                                Copy and Paste Scams. Users are invited to paste malicious JavaScript code directly into their browser’s address bar in the hope of receiving a gift coupon in return.

                                Fake Plug-in Scams. Users are tricked into downloading fake browser extensions on their machines. Rogue browser extensions can pose like legitimate extensions but when installed can steal sensitive information from the infected machine.

                                I hate arguments like this, because the OP never says anything that’s false, but rather, they just totally ignore the counterargument. Maybe there isn’t any “The Problem(tm),” but rather a bunch of problems that contribute to the effectiveness of malware as a way to make money.

                                This is, at its root, an ethical problem, and so it’s also a political one. If it seems one-sided to you, then you’ve probably been mindkilled. If you honestly think the existence and effectiveness of phishing is worth it for the sake of free speech and the ability to modify your hardware, then just say so.

                                1. 6

                                  This is an excellent point. If you want to secure the availability of general-purpose computing devices (which I would like to do), it’s important to understand why ordinary consumers of computing devices might not care about having a general-purpose computing device, or why they might value other things (like a reasonable guarantee that their software vendor will make sure they can’t get phished when they log into their bank account) more than the ability to compute freely; and either be able to build general-purpose computers that solve those problems for people, or be able to create a viable software ecosystem that doesn’t rely on that substantial number of people interacting with your hardware and software.

                                1. 4

                                  I’ve been thinking about something similar for a while now. Working on it—slowly, very slowly, maybe two decades will pass and it’ll still be vapourware;

                                  • There is a single ‘blessed’ application runtime for userspace. It is managed and safe. (In the tradition of java, javascript, lua, c#, etc.) This is necessary for some of the later points.

                                     • As Gary Bernhardt points out, this can be as fast as or even faster than running native code directly.

                                    • Since everything is running in ring 0, not only are ‘syscalls’ free, but so is task switching.

                                      • There is thus no reason to use coroutines over ‘real’ threads.
                                  • All objects are opaque. That is:

                                    • All objects are transparently synced to disc.

                                      • Thus, the ‘database’ doesn’t need to be something special, and you can form queries directly with code (this doesn’t necessarily scale as well, but it can be an option, and you can use DSL only for complex queries if necessary)
                                    • All objects may transparently be shared between threads (modulo permissioning; see below)

                                    • All objects may transparently originate in a remote host (in which case changes are not necessarily synced to disc, but are synced to the remote; a la nfs)

                                      • Network-shared objects can also be transparently upgraded to be shared via a distributed consensus system, like raft.
                                  • Instead of a file system, there is a ‘root’ object, shared between all processes. Its form is arbitrary.

                                  • Every ‘thread’ runs in a security domain which is, more or less, a set of permissions (read/write/execute) for every object.

                                    • A thread can shed permissions at will, and it can spawn a new thread which has fewer permissions than itself, but never gain permissions. There is no setuid escape hatch.

                                    • However, a thread with few permissions can still send messages to a thread with more permissions.

                                  • All threads are freely introspectible. They are just objects, associated with a procdir-like object in which all their unshared data reside.
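
                                   To make the security-domain bullets above a bit more concrete, here is a minimal sketch in Rust of the “shed but never gain” rule. Everything in it (the Perm/Domain names, the in-memory HashMap representation) is made up purely for illustration; it is not how a real kernel would store this.

                                       use std::collections::{HashMap, HashSet};

                                       type ObjectId = u64;

                                       #[derive(Clone, Copy, PartialEq, Eq, Hash)]
                                       enum Perm { Read, Write, Execute }

                                       // A "security domain": the per-object permission sets held by a thread.
                                       struct Domain {
                                           perms: HashMap<ObjectId, HashSet<Perm>>,
                                       }

                                       impl Domain {
                                           fn allows(&self, obj: ObjectId, p: Perm) -> bool {
                                               self.perms.get(&obj).map_or(false, |s| s.contains(&p))
                                           }

                                           // Shedding a permission is always allowed.
                                           fn drop_perm(&mut self, obj: ObjectId, p: Perm) {
                                               if let Some(s) = self.perms.get_mut(&obj) {
                                                   s.remove(&p);
                                               }
                                           }

                                           // A child domain must be a subset of its parent: no setuid-style escape hatch.
                                           fn spawn_child(&self, requested: HashMap<ObjectId, HashSet<Perm>>) -> Option<Domain> {
                                               let subset = requested
                                                   .iter()
                                                   .all(|(obj, ps)| ps.iter().all(|p| self.allows(*obj, *p)));
                                               if subset { Some(Domain { perms: requested }) } else { None }
                                           }
                                       }

                                       fn main() {
                                           let mut root = Domain { perms: HashMap::new() };
                                           root.perms.insert(42, HashSet::from([Perm::Read, Perm::Write]));

                                           // A child can hold fewer permissions than its parent...
                                           let child = root
                                               .spawn_child(HashMap::from([(42, HashSet::from([Perm::Read]))]))
                                               .unwrap();
                                           assert!(child.allows(42, Perm::Read) && !child.allows(42, Perm::Write));

                                           // ...but can never gain one it does not already have.
                                           assert!(child.spawn_child(HashMap::from([(42, HashSet::from([Perm::Execute]))])).is_none());

                                           // Even the root can only ever shed what it holds.
                                           root.drop_perm(42, Perm::Write);
                                           assert!(!root.allows(42, Perm::Write));
                                       }

                                   The only interesting part is the subset check: a thread can always drop to fewer permissions but never climb to more, and the escape hatch for doing privileged work is sending a message to a more-privileged thread rather than gaining permissions itself.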

                                  1. 3

                                    pst (since I’m the guy who has to point these out to everyone each time):

                                    • IBM i has a lot of these (object persistence, capabilities, high-level runtime only; older systems didn’t even have unprivileged mode on the CPU), but not all.

                                    • Domain has network paging (since that’s how it does storage architecture), but not most of the others.

                                    • Phantom does persistence for basically pausable/restartable computation. Weird, but interestingly adjacent.

                                    I need to write a blog post about this!

                                    1. 2

                                      Interesting! Never encountered that.

                                      Wiki says it’s proprietary and only works with ppc. Is there any way to play with it without shelling out impressive amounts of $$$ to IBM?

                                      1. 3

                                        If you want your own hardware, you can buy a used IBM Power server for an amount on the order of a few hundred dollars and installation media for a trial is available direct from IBM. While that’ll only work for 70 days before you need to reinstall, back up and restore procedures are fairly straightforward.

                                        If you don’t care about owning the hardware, there’s a public server with free access at https://pub400.com/.

                                        Whichever route you take, you’ll probably want to join ##ibmi on Freenode because you’ll have a lot of questions as you’re getting started.

                                        1. 2

                                          Is there a particular model you recommend of Power? The Talon stuff is way too pricey.

                                          1. 2

                                            If you want it to run IBM i, you’re going to need to read a lot of documentation to figure out what to buy, because it’s all proprietary and licensed, and IBM has exactly 0 interest in officially licensing stuff for hobbyists. It also requires special firmware support, and will therefore not run on a Raptor system.

                                            I think the current advice is to aim for a Power 5, 6, or 7 server, because they have a good balance of cost, not needing a ton of specialized stuff to configure, and having licenses fixed to the server. (With older machines, you really want to have a 5250 terminal, which would need to be connected using IBM-proprietary twinax cabling. Newer machines have moved to a model where you rent capacity from IBM on your own hardware.)

                                             I’d browse eBay for “IBM power server” and look up the specs and license entitlements for each server you see. Given a serial number, you can look up the license entitlements on IBM’s capacity on demand website. For example, my server is an 8233-E8B with serial number 062F6AP. Plugging that into IBM’s website, you see that I have a POD code and a VET code. You can cross reference those codes with this website to see that I have entitlements for 24 cores and PowerVM Enterprise (even though there are only 18 cores in my server, in theory I could add another processor card to add another 6; I’m given to understand that this is risky and may involve needing to contact IBM sales to get your system working again).

                                            You really want something with a PowerVM entitlement, because otherwise you need special IBM disks that are formatted with 520-byte sectors and support the SCSI skip read and skip write commands. You will also need to cross reference your system with the IBM i system map to see what OS versions you can run.

                                            Plan to be watching eBay for a while; while you can find decent machines for €300-500, it’s going to take some time for one to show up.

                                            Also, I’m still relatively new to this whole field; it’s a very good idea to join ##ibmi on freenode to sanity check any hardware you’re considering buying.

                                        2. 1

                                          There’s no emulator, and I’m not holding my breath for one any time soon.

                                          Domain is emulated by MAME, and Phantom runs in most virtualization software though.

                                        3. 2

                                           Hey Calvin, please write a blog post about this.

                                          1. 1

                                            Please do

                                          2. 3

                                            I’ve been working on this but with WebAssembly.

                                            1. 1

                                              I am curious. Is there source code available?

                                              1. 1

                                                It’s still in the planning phase, sadly. I only have so much time given it’s one of my many side projects.

                                            2. 2

                                              Sounds an awful lot like Microsoft Midori. It doesn’t mention transparent object persistence, but much of what you mentioned is there.

                                              1. 1

                                                You might be interested in this research OS, KeyKOS: http://cap-lore.com/CapTheory/upenn/

                                                It has some of what you’re describing: the transparent persistence, and the fine-grained permissions. I think they tried to make IPC cheap. But it still used virtual memory to isolate processes.

                                                I think it also had sort of… permissions for CPU time. One type of resource/capability that a process holds is a ticket that entitles it to run for some amount of time (or maybe some number of CPU cycles?). I didn’t really understand that part.

                                                1. 2

                                                  Looks interesting. (And, one of its descendants was still alive in 2013.) But, I think anything depending on virtual memory to do permissioning is bound to fail in this regard.

                                                  The problem is that IPC can’t just be cheap; it needs to be free.

                                                  Writing text to a file should be the same kind of expensive as assigning a value to a variable. Calling a function should be the same kind of expensive as creating a process. (Cache miss, maybe. Mispredict, maybe. Interrupt, full TLB flush, and context switch? No way.)

                                                  Otherwise, you end up in an in-between state where you’re discouraged from taking full advantage of (possibly networked) IPC; because even if it’s cheap, it’ll never be as cheap as a direct function call. By making the distinction opaque (and relying on the OS to smooth it over), you get a more unified interface.


                                                   One thing I will allow about VM-based security is that it’s much easier to get right. Just look at the exploit list for recent Chrome/Firefox JS engines. Languages like k can be fast when interpreted without JIT, but such languages don’t have wide popularity. Still working on an answer to that. (Perhaps formal verification, à la CompCert.)


                                                  CPU time permissions are an interesting idea, and one to which I haven’t given very much thought. Nominally, you don’t need time permissions as long as you have preemptive multitasking and can renice naughty processes. But there are other concerns like power usage and device lifetime.

                                                  1. 1

                                                     I’ve been imagining a system that’s beautiful. It’s a Smalltalk with files, not images, with a very simple model. Everything is IPC. If you are on a Linux with network sockets, that socket is like every other method call, every addition, every syscall.

                                                    Let’s talk. I like your ideas, and think you might like this system in my mind.

                                                    1. 3

                                                      These sound great until you try and implement any of it, in which case you realise that now every single call might fail and/or simply never return, or return twice, or return to somebody else, or destroy your process entirely.

                                                      Not saying it can’t be done, just saying it almost certainly won’t resemble procedural, OO, or functional programming as we know it.

                                                       Edit: Dave Ackley is looking into this future, and his vision is about as far from what we do now as I’d expect: https://www.youtube.com/user/DaveAckley

                                                      1. 1

                                                        You might want to read up on distributed objects from NeXT in the early 90s.

                                                  2. 1

                                                    This doesn’t solve all of the problems brought up in TFA. The main one is scheduling/autoscale. It is certainly easier—for instance, you can send a function object directly as a message to a thread running on a remote host—but you still have to create some sort of deployment system.

                                                    1. 1

                                                      (sorry, replied to wrong comment)

                                                  1. 1

                                                    I wonder if v2 will make it any easier for folks to torrent over NAT and/or VPN connections.

                                                    1. 2

                                                      No, the spec makes it pretty clear that nothing about the network topology will change.

                                                      However, if you haven’t checked with bittorrent in awhile, you should try again. μTP significantly improved the ability to bypass NAT and firewall. Nowadays, CGNAT is really the only big problem for anyone trying to get incoming connections.

                                                      Technically, μTP and bittorrent v2 are completely orthogonal. However, I expect that the only bittorrent client that implements BitTorrent v2 without also implementing μTP will be WebTorrent.

                                                    1. 6

                                                      When you have eliminated the JavaScript, whatever remains must be an empty page.

                                                      For real now, it’s 2020, enable JavaScript.

                                                      Reading this made me a little bit angry, and I decided not to read your blog post. I think if it had said “Sorry, my site doesn’t work without Javascript enabled!” I would have whitelisted you instead.

                                                      1. 6

                                                        Ditto – I understand the frustration of authors who want to use JS, but this error message is just silly. If anything, the case for keeping JS disabled by default has grown stronger over time.

                                                        1. 4

                                                          Making you angry wasn’t my goal. I’m sorry about that. I changed the message to something friendlier :)

                                                          1. 5

                                                            Why create a web page that can’t be read without using JS? For something complex I could see the need, but this is a simple blog post that could easily be served as a few KB of static HTML.

                                                            And it seems this mechanism is making your site slow and unreliable too. I got the error message, but then a second later it went away, then a second later the text appeared. Not a good UX…

                                                            1. 2

                                                              Why create a web page that can’t be read without using JS?

                                                              I have no reason to think KCreate intended to do this, but if someone wanted to cheat lobste.rs, this would be one way to do it.

                                                               Basically, making an article require JavaScript is a guaranteed way to get a few comments. Lobsters also counts comments as upvotes. Since there’s no “Requires JavaScript” downvote reason, requiring JavaScript will actually make your article rank higher, almost guaranteed.

                                                              1. 4

                                                                That’s an exceedingly uncharitable interpretation. It’s an outstanding problem I’ve pointed out to him and he intends to remove the JS when he gets time. It’s just old code from when he first set up his website.

                                                                1. 2

                                                                  Since there’s no “Requires JavaScript” downvote reason, requiring JavaScript will actually make your article rank higher, almost guaranteed

                                                                  Users can flag the submission as “Broken link” instead of commenting.

                                                                  I’m fine with requiring any submission here to be readable without JS - but that’s currently not a hard requirement for a submission.

                                                                  Edit I just checked every site currently linked from the front page in w3m, and this was the only one that was not readable[1]. This gives one data point towards an informal or formal requirement that every submission should degrade gracefully to be on-topic for lobste.rs.

                                                                   [1] The page that is visible does link to the GitHub source now.

                                                              2. 4

                                                                Thanks - the new message is much better :)

                                                            1. 2

                                                                 This is an interesting story. Thanks for sharing. And yes, it’s quite unsettling to know that no matter what precautions you have taken, someone can take over your domain without any warning. Does it mean we are really helpless here?

                                                              1. 2

                                                                   Any centralized system (and yes, TLDs are currently centralized) is susceptible to this sort of thing. Centralized systems are no longer suitable in today’s world if you want to stay resilient.

                                                                1. 1

                                                                   Do we have any alternatives then? I’m not sure I know much about this area and am curious to know how this can be fixed or even attempted. Thank you.

                                                                  1. 1

                                                                    FreeNet worked pretty well last I tried it. It was just slow, and had little content that interested me (the two are probably related).

                                                              1. 4
                                                                1. The sender’s server sends the email, and the receiver stores it.
                                                                2. The receiver responds with 1MB of pseudo-random noise generated from a secret message-specific seed.
                                                                3. Every hour, the receiver says “append this additional MB, and return the hash”.
                                                                4. Once enough challenges have been passed, the message is moved to the inbox (or spambox!).

                                                                The two servers negotiate ahead of time what the proof-of-care profile will be. The recipient can decide arbitrarily what the profiles are, and multiple profiles can be made available. A trusted sender can be accepted with a low cost. If the email’s author set some priority option, the sending server might choose the “1 GB every minute 10 times” profile instead of the more economical “20 MB every hour 24 times”.

                                                                Either server might have downtime. This is fine. The sending server polls the recipient.

                                                                To reduce bandwidth costs for the recipient, the proof-of-care profiles could include delegation to a third-party — a bit like a CDN, a trusted 3rd party might be in the sender’s data center. Heck, the sender could delegate also, and perhaps they both choose the same 3rd party… (The sender must not get a discount from this.) Note that the only metadata the third parties can glean is “this server sent a message to that server”. Batching might reduce metadata leakage further.

                                                                This scheme can induct itself onto the existing federation. If the sender server doesn’t spaken la francaise, a bounce message can say “write a complaint to your server admin, and keep this webpage open in your browser for an hour”.

                                                                This protocol could be useful outside of email, so it should be independent of SMTP.
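
                                                                 A rough sketch of the challenge loop, just to make the hand-waving concrete. Everything here is illustrative: the names are invented, std’s DefaultHasher stands in for a real cryptographic hash, and 1 KB chunks stand in for the 1 MB ones.

                                                                     use std::collections::hash_map::DefaultHasher;
                                                                     use std::hash::{Hash, Hasher};

                                                                     // Stand-in noise generator; a real scheme needs a proper cryptographic PRF.
                                                                     fn noise(seed: u64, round: u32, len: usize) -> Vec<u8> {
                                                                         (0..len)
                                                                             .map(|i| {
                                                                                 let mut h = DefaultHasher::new();
                                                                                 (seed, round, i).hash(&mut h);
                                                                                 h.finish() as u8
                                                                             })
                                                                             .collect()
                                                                     }

                                                                     fn digest(data: &[u8]) -> u64 {
                                                                         let mut h = DefaultHasher::new();
                                                                         data.hash(&mut h);
                                                                         h.finish()
                                                                     }

                                                                     fn main() {
                                                                         let seed: u64 = 0xC0FFEE; // secret, message-specific seed held by the receiver
                                                                         let rounds = 3;           // e.g. the "20 MB every hour 24 times" profile, shrunk
                                                                         let chunk_len = 1024;     // 1 KB standing in for 1 MB

                                                                         let mut senders_copy: Vec<u8> = Vec::new(); // the sender must keep all of this around
                                                                         let mut passed = 0;

                                                                         for round in 0..rounds {
                                                                             // Receiver: "append this additional chunk, and return the hash."
                                                                             // (The receiver never stores the noise; it can regenerate it from the seed.)
                                                                             let chunk = noise(seed, round, chunk_len);

                                                                             // Sender: store the chunk (that storage cost is the proof of care) and answer.
                                                                             senders_copy.extend_from_slice(&chunk);
                                                                             let answer = digest(&senders_copy);

                                                                             // Receiver: recompute the expected hash from the seed and check the answer.
                                                                             let mut expected: Vec<u8> = Vec::new();
                                                                             for r in 0..=round {
                                                                                 expected.extend_from_slice(&noise(seed, r, chunk_len));
                                                                             }
                                                                             if answer == digest(&expected) {
                                                                                 passed += 1;
                                                                             }
                                                                         }

                                                                         if passed == rounds {
                                                                             println!("enough challenges passed: deliver to the inbox (or spambox!)");
                                                                         }
                                                                     }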

                                                                1. 50

                                                                  Regardless of whether you currently think your existing tools need replacing, I urge you to try ripgrep if you haven’t already. Its speed is just amazing.

                                                                  1. 7

                                                                    I’ll second this sentiment. Your favored editor or IDE probably has a plugin to use ripgrep and you should consider trying that too.

                                                                    1. 6

                                                                       As an experiment I wrote a tiny Go webservice that uses ripgrep to provide a regex-aware global code search for the company I work at. The experiment worked so well over a code base of ~30GB that it will probably replace hound, which we use for this purpose at the moment. I did not even use any form of caching for this web service, so there is still performance to squeeze out.
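
                                                                       The service itself is in Go, but just to show how thin such a wrapper can be, here’s the core of the idea sketched in Rust (shelling out to rg, no HTTP layer or caching, flags picked arbitrarily):

                                                                           use std::process::Command;

                                                                           // Run ripgrep over a directory and return its stdout. A real service would wrap
                                                                           // this in an HTTP handler and limit/stream the output instead of buffering it.
                                                                           fn code_search(pattern: &str, dir: &str) -> std::io::Result<String> {
                                                                               let output = Command::new("rg")
                                                                                   .arg("--line-number")
                                                                                   .arg("--no-heading")
                                                                                   .arg(pattern)
                                                                                   .arg(dir)
                                                                                   .output()?;
                                                                               Ok(String::from_utf8_lossy(&output.stdout).into_owned())
                                                                           }

                                                                           fn main() -> std::io::Result<()> {
                                                                               // e.g. find TODOs anywhere under the current directory
                                                                               let hits = code_search("TODO|FIXME", ".")?;
                                                                               print!("{hits}");
                                                                               Ok(())
                                                                           }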

                                                                      1. 5

                                                                        https://github.com/phiresky/ripgrep-all comes with caching, it’s a wrapper around rg to search in PDFs, E-Books, Office documents, zip, tar.gz, etc

                                                                      2. 3

                                                                        ripgrep and fd have changed the way I use computers. I’m no longer so careful about putting every file in its right place and having deep, but mostly empty, directory structures. Instead, I just use these tools to find the content I need, and because they’re so fast, I usually have the result in front of me in less than a second.

                                                                        1. 5

                                                                          You should look into broot as well (aside, it’s also a Rust application). I do the same as you and tend to rotate between using ripgrep/fd and broot. Since they provide different experiences for the same goal sometimes one comes more naturally than the other.

                                                                          1. 2

                                                                            broot is sweet, thanks for mentioning it. Works like a charm and seems super handy.

                                                                        2. 1

                                                                           3 or 4 years ago it was announced that the VS Code “find in files“ feature would be powered by ripgrep. Anyone know if that’s still the case?

                                                                          1. 1
                                                                        1. 8
                                                                          1. [[nodiscard]] is one of the greatest things to ever happen in C++.
                                                                          2. Assume any pointer (or equivalent) you get from something could be null unless proven otherwise.
                                                                          1. 4

                                                                            Rust has #[must_use] which is its version of the same thing. Here’s an example for those who are curious.
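
                                                                             For anyone who doesn’t want to click through, here’s a tiny toy example of the same idea (mine, not the linked one):

                                                                                 #[must_use = "computing a checksum and ignoring it is almost certainly a bug"]
                                                                                 fn checksum(data: &[u8]) -> u32 {
                                                                                     data.iter().map(|&b| u32::from(b)).sum()
                                                                                 }

                                                                                 fn main() {
                                                                                     checksum(b"hello");         // warning: unused return value of `checksum` that must be used
                                                                                     let _ = checksum(b"hello"); // explicit discard, no warning
                                                                                     assert_eq!(checksum(b"hi"), 209);
                                                                                 }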

                                                                            1. 5

                                                                              Swift has a similar thing, but it’s opt-out instead of opt-in.

                                                                              1. 1

                                                                                Nim also has must-use by default with an explicit discard to ignore. You can add a {.discardable.} pragma at the definition site to opt-out of this safety system if it is particularly foreseeable that using the return value is “optional”.

                                                                                1. 1

                                                                                   I should have said as well that in Rust you can mark types as #[must_use]. The Result type is marked must use in this way.

                                                                                  1. 4

                                                                                     I’m crowning Swift (and Nim and the like) winners here. I’ve done a sweep of Rust’s standard library functions once to find and fix ones that were missing #[must_use] and realized… almost all of them could be #[must_use]. There are exceptions, but they’re exceptions.

                                                                                    1. 2

                                                                                      I agree. I made a post about it in the Rust subreddit and the response was very negative at the thought of having all functions be must_use by default. I was pretty disappointed because they all acted like the false-positives would be overwhelmingly noisy. But I can only assume they haven’t done any Swift (or Nim), because it’s almost never been an issue for me. There are a small handful of functions you’re going to write that return a value that’s so ignorable that it’s normal to not even bind it to something. And if that’s the case (such as the HashMap entry methods), you just opt-out and mark it ignorable…

                                                                                      1. 1

                                                                                        Yeah, on the spectrum of hive mind <--> war zone the /r/rust subreddit is a bit too close to “hive mind” for my tastes. It’s one reason why I quit reddit.

                                                                                      2. 1

                                                                                        Eh, I don’t think it’s “winners” and “losers.” There is a trade-off to having more things be marked must use, that people may be more likely to turn off warnings related to it because they’re too noisy. I think Rust strikes a reasonable balance for the most part.

                                                                                        That said, to each their own!

                                                                                        1. 2

                                                                                          “people will turn off warnings” is not a big danger in Rust. Warning opt-outs are selective, explicit, and — most importantly — scoped. For discarding must_use there’s the let _ = pattern.

                                                                                          Swift has @discardableResult annotation, so it doesn’t have to annoy people. It’s just that the default is flipped to the more common case.

                                                                                          1. 1

                                                                                            They’re not inherently scoped. Yes the let _ = pattern does silence the “must use” warning, but you can do so globally with #![allow(unused_must_use)]. Here’s a Playground link showing this.

                                                                            1. 1

                                                                               Typographical point: I don’t think backticks (`) should be used as quotation marks; rather, the dedicated open and close quote characters (‘ and ’, which are option+] and option+shift+] on a Mac keyboard) or maybe the prime sign (', next to the return button) should be used?

                                                                              1. 2

                                                                                That ship has sailed, thanks to the various text markup languages that are the height of current fads.

                                                                                1. 1

                                                                                  On IRC I surround quotes with double backticks followed by double primes. This is the style used by GNU Info I believe, and is supposed to approximate the opening and closing double quotes.

                                                                                  1. 7

                                                                                    I’ve seen that too, and it bugs me a little bit. It’s not a big deal with monospace fonts, but when you’re using proportional fonts, the kerning of the backtick is usually such that there is a lot of space around it (because it’s supposed to be the accent grave without an actual letter underneath it) unlike an apostrophe or quote mark, so it looks awkward. But it might just be personal preference, and plus the article is set in a monospaced font too. #EmbraceUnicode

                                                                                    1. 4

                                                                                      Yeah, in a proportional font I just use double quotes (in Markdown) and let the SmartyPants filter translate them into HTML entities.

                                                                                    2. 4

                                                                                      On IRC I surround quotes with double backticks followed by double primes. This is the style used by GNU Info I believe, and is supposed to approximate the opening and closing double quotes.

                                                                                      This is inherited from TeX, which jumped through a lot of hoops to make it possible to represent complex typography in 7-bit ASCII typeable on a ‘70s US keyboard. It’s worth noting that the single and double back ticks in TeX are source code, not output. They render as open single or double quotes in the generated output and so are seen only by people editing the TeX source. As I recall (and it’s been over 15 years since I read the TeX papers, so I may be imagining it) they found smart quotes had too many false positives with the kind of read-ahead it was feasible to implement at the time and so needed an annotation to turn them off, and when a rendering pass took several minutes / hours it was more efficient to always use explicit open and close syntax than require the user to go through the typeset manuscript and check every quote mark / apostrophe was correct.

                                                                                      1. 3

                                                                                     I gave up doing that when more IRC clients started rendering pseudo-markdown and people started getting code-quotes in their messages.

                                                                                        Also, the GNU project no longer suggests that style of quoting in its standards:

                                                                                        Although GNU programs traditionally used 0x60 (‘`’) for opening and 0x27 (‘’’) for closing quotes, nowadays quotes ‘`like this’’ are typically rendered asymmetrically, so quoting ‘“like this”’ or ‘‘like this’’ typically looks better.

                                                                                        Aside: is it possible to write a literal backtick in a markdown inline code span?

                                                                                        1. 3

                                                                                          You should be allowed to use HTML <code> tags and backslash-escape the backticks. Unfortunately, lobsters silently strips code tags instead of treating them as equivalent to backtick escapes.

                                                                                          For example <code>\`test\`</code> looks like `test`

                                                                                          This kind of corner-case nonsense is why the Markdown format is supposed to allow HTML. Even if you use further sanitization to limit the set of available HTML tags, I still ought to be allowed to use the HTML powers that are equivalent to what Markdown already gives me.

                                                                                          Here’s the same thing in a GitHub Gist, where it works correctly.

                                                                                          1. 5

                                                                                            I don’t see why lobste.rs should jump through a lot of hoops just to facilitate meta-discussion about Markdown syntax. Not allowing any extra HTML is a reasonable decision on a public forum.

                                                                                            1. 2

                                                                                              Funny thing, GitHub and Lobsters use the exact same Markdown parser and renderer, they just post-process it differently.

                                                                                            2. 2

                                                                                               I gave up doing that when more IRC clients started rendering pseudo-markdown and people started getting code-quotes in their messages.

                                                                                               That’s unfortunate. I use the double-backtick/double-prime quoting for “real quotes”, while I use double-quotes for commentary. I guess no-one really realizes the difference but it matters to me.

                                                                                              I use both Windows and Macs to access IRC and I’m going to find it hard to locate the “real” quote characters - besides, I can’t be sure that they will be handled correctly by IRC clients who can’t handle Unicode! Not to mention Swedish quoting style is different from English - we don’t use opening double quotes, only closing at both ends.

                                                                                              1. 1

                                                                                                Is there actually a way to produce proper quote characters on Windows without using those Alt codes? I use mostly Linux these days, and I always set my keyboard layout to Macintosh to get those and composition (like using option+u U to make a Ü, or option+` a to make an à), but I’ve never figured out how to do that sensibly on Windows.

                                                                                        1. 13

                                                                                           IRC’s lack of federation and agreed-upon extensibility is what drove me to XMPP over a decade ago. Never looked back.

                                                                                          1. 12

                                                                                             Too bad XMPP was effectively embraced/extended/extinguished by Google. In no small part thanks to the lack of message acknowledgement in the protocol, which translated to lost messages and zombie presence; this was especially bad across servers, so it paid to be on the same server (which typically became Google) as the other endpoint.

                                                                                            I did resist that, but unfortunately most of my contacts were in the Google server, and I got isolated from them when Google cut the cord. Ultimately, I never adopted Google Talk (out of principle), but XMPP has never been the same after that.

                                                                                            End to end encryption is also optional and not the default, which makes XMPP not much of an improvement over IRC. My hopes are with Matrix taking off, or a truly better (read: fully distributed) replacement like Tox gaining traction.

                                                                                            1. 5

Showerthought: decentralised protocols need to have some kind of anti-network effect baked into them somehow, where there’s some kind of reward for staying out of the monoculture. I dunno what this actually looks like, though. Feels like the sort of thing some of the blockchain people might have a good answer for.

                                                                                              1. 6

                                                                                                That’s a fascinating idea and I disagree. :D Network effects are powerful for good reason: centralization and economies of scale are efficient, both in resources like computer power, and in mental resources like “which the heck IRC network do I start a new channel on anyway”. What you do need is ways to avoid lock-in. If big popular network X starts abusing its power, then the reasonable response is to pick up your stakes and go somewhere else. So, that response needs to be as easy as possible. Low barriers to entry for creating new servers, low barriers to moving servers, low barriers to leaving servers.

I expect that for any human system you’re going to end up with something like Zipf’s law governing the distribution of who goes where; I don’t have a good reason for saying so, it’s just so damn common. Look at the population of Mastodon servers for example (I saw a really good graphic of sizes of servers and connections between them as a graph of interconnected bubbles once, I wish I could find it again). In my mind a healthy distributed community will probably have a handful of major servers/networks/instances, dozens or hundreds of medium-but-still-significant ones, and innumerable tiny ones.

                                                                                                1. 3

                                                                                                  More and more these days I feel like “efficiency” at a large enough scale is just another way to say “homogeneity”. BBSes and their store-and-forward message networks like FidoNet and RelayNet were certainly less efficient than the present internet, but they were a lot more interesting. Personal webpages at some-isp.com/~whoever might have been less efficient (by whatever metric you choose) than everyone posting on Facebook and Twitter but at least they actually felt personal. Of course I realize to some degree I’m over-romanticizing the past (culturally, BBSes and FidoNet especially, as well as the pre-social-media internet, were a lot more white, male, and cishet than the internet is today; and technologically, I’d gnaw my own arm off to not have to go back to dialup speeds), and having lowered the bar to publish content on the internet has arguably broadened the spectrum of viewpoints that can be expressed, but part of me wonders if the establishment of the internet monoculture we’ve ended up with, where the likes of Facebook basically IS the entire internet to the “average” person, was really necessary to get there.

                                                                                                2. 3

                                                                                                  I think in a capitalist system this is never going to be enough. What we really need is antitrust enforcement to prevent giant corporations from existing / gobbling up 98% of any kind of user.

                                                                                              2. 3

This! Too bad XMPP never really caught on after the explosion of social media; it’s a (near) perfect protocol for real-time text-based communication, and then some.

                                                                                                1. 21

It didn’t simply “not catch on”; it was deliberately starved by Facebook and Google, which disabled federation between their networks and everyone else. There was a brief moment around 2010 when I could talk to all my friends on gTalk and Facebook via an XMPP client, so it did actually work.

                                                                                                  (This was my personal moment when I stopped considering Google to be “not evil”.)

                                                                                                  1. 3

It was neat to have federation with gTalk, but when that died I finally got a bunch of my contacts off Google’s weak XMPP server and onto a better one, and onto better clients, etc. Was a net win for me.

                                                                                                    1. 5

                                                                                                      What are “better clients” these days for XMPP? I love the IDEA of XMPP, but I loathe the implementations.

                                                                                                      1. 6

                                                                                                        Dino, Gajim, Conversations. You may want to select a suitable server from (or check your server via) https://compliance.conversations.im/ for the best UX.

                                                                                                      2. 5

                                                                                                        I don’t have that much influence over my contacts :-)

                                                                                                        1. 6

                                                                                                          This.

                                                                                                          Network effects win out over the network itself, every time.

                                                                                                          1. 1

                                                                                                            I guess neither do I? That’s why it took Google turning off the server to make them switch

                                                                                                        2. 3

IIRC it was Facebook that was the bad actor: it started letting communication go only one way to siphon users from gTalk, which forced Google’s hand.

                                                                                                          1. 5

Google was playing with Google+ at that moment and wanted to build a walled garden, which included its chat app(s). They even invented some “technical” reasons why XMPP wasn’t workable at all (after it had been working for them for years).

                                                                                                            1. 2

                                                                                                              It was weird ever since Android was released. The server could federate with other servers just fine, but Google Talk for Android spoke a proprietary C2S protocol, because the regular XMPP C2S involves keeping a TCP connection perpetually open, and that can’t be done on a smartphone without unacceptable power consumption.

                                                                                                              I’m not sure that truly counts as a “good” technical reason to abandon S2S XMPP, but it meant that the Google Talk server was now privileged above all other XMPP servers in hard-to-resolve ways. It made S2S federation less relevant, because servers were no longer interchangeable.

                                                                                                              1. 1

I’m not sure the way GTalk clients talk to their server had anything to do with how the server talked to others. Even if it did, they could’ve treated it as a technical problem that needed solving rather than as an excuse to drop the whole thing.

                                                                                                                1. 2

                                                                                                                  Dropping federation was claimed at the time (fully plausibly, imo) to be about spam mitigation. There was certainly a lot of XMPP spam around that time.

                                                                                                                2. 1

I have been using regular XMPP c2s on my phones over mobile data continuously since 2009, when I got my first smartphone. Battery life has never been an issue. I think if you have tonnes of TCP connections the battery-life thing can be true, but for one XMPP session the battery impact is a myth.

                                                                                                              2. 3

                                                                                                                AFAIK Facebook never had federated XMPP, just a slightly working c2s bridge

                                                                                                                1. 1

To make sure my memory wasn’t playing any tricks on me I did a quick Google search. It turns out Facebook did use XMPP:

                                                                                                                  To make Facebook Chat available everywhere, we are using the technology Jabber (XMPP), an open messaging protocol supported by most instant messaging software,

                                                                                                                  From: https://www.facebook.com/notes/facebook-app/facebook-chat-now-available-everywhere/297991732130/

I don’t remember the exact move they pulled on Google to siphon users, but I remember thinking it was a scummy one.

                                                                                                                  1. 2

                                                                                                                    That link is talking about their c2s bridge. You still needed a Facebook account to use it. It was not federated.

                                                                                                              3. 2

                                                                                                                That might be your experience but I’m not sure it’s true for the majority.

From my contact list of like 30 people, 20 weren’t using GTalk in the first place (and no one used FB for this, completely separate type of folks) and they all stopped using XMPP independently, not because of anything Google did. And yes, there were interop problems with those 5, but overall I see XMPP’s downfall in popularity as kinda orthogonal to Google, not related.

                                                                                                                1. 3

                                                                                                                  There’s definitely some truth to that, but still, my experience differs greatly. The majority of my contacts used Gtalk back in the day, and once that was off, they simply migrated to more popular, walled garden messaging services. That was the point in time where maintaining my own, self hosted XMPP VPS instance became unjustifiable in terms of the monthly cost and time, simply because there was no one I could talk to anymore.

                                                                                                              4. 4

I often hear this, but I’ve been doing most of my communicating with XMPP continuously for almost 20 years, and it just keeps getting better and the community continues to expand and get work done.

                                                                                                                When I first got a JabberID the best I could do was use an MSN gateway to chat with some highschool pals from Gaim and have them complain that my text wasn’t in fun colours.

                                                                                                                Now I can chat with most of my friends and family directly to their JabberIDs because it’s “just one more chat app” to them on their Android phone. I can send and receive text and picture messages with the phone network over XMPP, and just this month started receiving all voice calls to my phone number over XMPP. There are decent clients for every non-Apple platform and lots of exciting ecosystem stuff happening.

I think good protocols and free movements are slower because there is so much less money and attention, but there’s also less flash-in-the-pan fad adoption and less being left high and dry by corporate M&A, and over time, when the apps you used to compete with are long gone, you stand as what is left and still working.

                                                                                                                1. 4

My experience tells me that the biggest obstacle to introducing open and battle-tested protocols to the masses is the insane friction of installing yet another app and opening yet another account. Most people simply can’t be bothered with it.

I used to do a lot of fun stuff with XMPP back in the day, just like you did, but nowadays it’s extremely hard to get non-geek people around me to join the bandwagon of pretty much anything outside the usual FAANG mainstream stuff. Open protocols, federation, etc. are very foreign concepts to many ordinary people, for reasons I could never fully grasp.

                                                                                                                  Apparently, no one has ever solved that problem, despite many of them trying so hard.

                                                                                                                  1. 2

                                                                                                                    I don’t really use XMPP, but I know that “just one more chat app” never works with almost everyone in my circle of friends. Unfortunately I still have to use Facebook Messenger to communicate with some people.

                                                                                                                  2. 3

When I was building stuff with XMPP, I found it a little difficult to grasp. At its core, it was a very good idea and continues to drive how federation works in the modern world. I’m not sure whether the difficulty had to do with the fact that it used XML and couldn’t be carried as JSON, protobuf, or any other lightweight encoding, or with the extensive list of proposals/extensions in various states of completion that made the topology of the protocol almost impossible to visualize. But in my opinion, it’s not a “perfect” protocol by any means. There’s a good (technical) reason why most IM service operators moved away from XMPP after a while.

                                                                                                                    I do wish something would take its place, though.

                                                                                                                    1. 5

                                                                                                                      Meanwhile it takes about a page or two of code to make an IRC bot.
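For a sense of scale, here’s a rough sketch of such a bot using nothing but the Rust standard library; the server address, nick, and channel are made-up placeholders, and error handling is kept to a minimum:

```rust
// Minimal IRC bot sketch: plain-text commands over a raw TCP socket.
// The server, nick, and channel below are placeholders, not recommendations.
use std::io::{BufRead, BufReader, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    let mut stream = TcpStream::connect("irc.example.net:6667")?;
    stream.write_all(b"NICK examplebot\r\nUSER examplebot 0 * :example bot\r\n")?;

    let reader = BufReader::new(stream.try_clone()?);
    for line in reader.lines() {
        let line = line?;
        // Answer server PINGs to keep the connection alive.
        if let Some(token) = line.strip_prefix("PING ") {
            stream.write_all(format!("PONG {token}\r\n").as_bytes())?;
        }
        // Join a channel once the welcome numeric (001) arrives.
        if line.contains(" 001 ") {
            stream.write_all(b"JOIN #example\r\n")?;
        }
        // Reply when someone says "!hello" in the channel.
        if line.contains("PRIVMSG #example :!hello") {
            stream.write_all(b"PRIVMSG #example :hello!\r\n")?;
        }
    }
    Ok(())
}
```

The protocol is just short, line-oriented text commands, which is why a useful bot fits on a page or two.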

                                                                                                                      1. 4

                                                                                                                        XMPP has gotten a lot better, to be fair – a few years ago, the situation really was dire in terms of having a set of extensions that enabled halfway decent mobile support.

                                                                                                                        It isn’t a perfect protocol (XML is a bit outdated nowadays, for one) – but crucially, the thing it has shown itself to be really good at is the extensibility aspect: the core is standardized as a set of IETF RFCs, and there are established ways to extend the core that protocols like IRC and Matrix really lack.

                                                                                                                        IRC has IRCv3 Capability Negotiation, sure, but that’s still geared toward client-server extensibility — XMPP lets you send blobs of XML to other users (or servers) and have the server just forward them, and provides a set of mechanisms to discover what anything you can talk to supports (XEP-0030 Service Discovery). This means, for example, you can develop A/V calls as a client-to-client feature without the server ever having to care about how they work, since you’re building on top of the standard core features that all servers support.
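To make the service-discovery point concrete, here’s a rough sketch of what a XEP-0030 disco#info exchange can look like on the wire. The JIDs, the id, and the advertised Jingle features are invented for illustration, and the snippet just prints the (abridged) stanzas rather than opening a real XMPP connection:

```rust
// Sketch of XEP-0030 service discovery: one client asks another what it
// supports, and the servers in between only have to forward the stanzas.
fn main() {
    // Query: "what features do you support?"
    let query = r#"<iq type='get' from='alice@example.org/desktop'
    to='bob@example.net/phone' id='disco1'>
  <query xmlns='http://jabber.org/protocol/disco#info'/>
</iq>"#;

    // Abridged reply: the other client advertises Jingle (A/V call) support,
    // which is how calls can be negotiated client-to-client.
    let reply = r#"<iq type='result' from='bob@example.net/phone'
    to='alice@example.org/desktop' id='disco1'>
  <query xmlns='http://jabber.org/protocol/disco#info'>
    <feature var='urn:xmpp:jingle:1'/>
    <feature var='urn:xmpp:jingle:apps:rtp:1'/>
  </query>
</iq>"#;

    println!("{query}\n\n{reply}");
}
```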

Matrix seems to deny the idea that extensibility is required, and to think it can get away with having One True Protocol. I don’t necessarily think this is a good long-term solution, but we’ll see…

                                                                                                                        1. 3

Matrix has the Spec Proposal process for moving the core spec forward. And it has namespacing (with “m.” reserved as the core prefix; the rest should use reverse domain like “rs.lobste.*”) for extension. What do you think is missing?

                                                                                                                          1. 1

                                                                                                                            Okay, this may have improved since I last checked; it looks like they at least have the basics of some kind of dynamic feature / capability discovery stuff down.

                                                                                                                          2. 2

IRCv3 has client-to-client tags, which can carry up to 4096 bytes of arbitrary data per message; they can be attached to any message or sent as a standalone TAGMSG.

                                                                                                                            This is actually how emoji reactions, thread replies, and stuff like read/delivery notifications are implemented, and some clients already made a prototype using it for handshaking WebRTC calls.
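As a rough illustration of what that looks like on the wire (the channel, msgid, and the draft tag names follow the IRCv3 drafts but are purely illustrative; the snippet only prints the raw lines):

```rust
// Sketch of IRCv3 client-to-client tags: a server-delivered message carrying
// a msgid, and a standalone TAGMSG reacting to it with client-only tags
// (the leading '+' marks tags the server simply relays between clients).
fn main() {
    let original =
        "@msgid=63E1033A0A8E :alice!a@example.org PRIVMSG #example :shall we call it a day?";
    let reaction =
        "@+draft/reply=63E1033A0A8E;+draft/react=👍 TAGMSG #example";
    println!("{original}\n{reaction}");
}
```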

                                                                                                                            1. 4

                                                                                                                              Sure. However, message tags are nowhere near ubiquitous; some IRC netadmins / developers even reject the idea that arbitrary client-to-client communication is a good thing (ref).

                                                                                                                              You can get arbitrary client-to-client communication with ircv3 in some configurations. My point is that XMPP allows it in every configuration; in fact, that’s one of the things that lets you call your implementation XMPP :p

                                                                                                                            2. 1

                                                                                                                              I have been using XMPP on mobile without issue since at least 2009

                                                                                                                        2. 2

How is IRC not federated? It’s transparently federated (unlike XMPP/Email/Matrix/ActivityPub/…, it doesn’t require a (user, server) tuple for identification), but it still doesn’t have a central point of failure or just one network.

                                                                                                                          1. 3

                                                                                                                            IRC is not federated because a user is required to have a “nick” on each network they want to participate in. I have identities on at least 4 different disconnected IRC networks.

                                                                                                                            The IRC server to server protocol that allows networks to scale is very nice, and in an old-internet world of few bad actors having a single global network would have been great. But since we obviously don’t have a single global network, and since the network members cannot communicate with each other, it is not a federated system.

                                                                                                                            1. 3

                                                                                                                              Servers in a network federate, true. But it’s not an open federation like email, where anyone can participate in a network by running their own server.

                                                                                                                          1. 4

                                                                                                                            If you’re writing a function like unwrap that may panic, you can put this annotation on your functions, and the default panic formatter will use its caller as the location in its error message.

Millennials reinvent exceptions? ;)

                                                                                                                            1. 8

This isn’t about exceptions, but about attribution of the error. Some errors are the fault of the caller (e.g. it passes parameters that are forbidden by the contract of the function), and some errors are the fault of the function itself (a bug).

With access to the caller’s location you can blame the correct line of code. Exception/panic stack traces are independent of this feature, because Rust didn’t want to pull in a dependency on debug info and the relatively expensive unwinding machinery.
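For context, the annotation under discussion is Rust’s #[track_caller]; here is a minimal sketch (the function and its contract are made up):

```rust
// With #[track_caller], a panic inside this function is reported at the
// caller's file and line, blaming the code that broke the contract.
#[track_caller]
fn must_be_positive(x: i32) -> i32 {
    if x <= 0 {
        panic!("expected a positive value, got {x}");
    }
    x
}

fn main() {
    // The default panic message points at this call site, not at the
    // panic! line inside must_be_positive.
    must_be_positive(-1);
}
```

Full stack traces (via RUST_BACKTRACE, as discussed below) are a separate, heavier mechanism; this attribute only records a single caller location.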

                                                                                                                              1. 1

                                                                                                                                Using a strong type system for error handling predates exceptions.

                                                                                                                                1. 1

Panic is for errors that the type system cannot catch. A pragma that adds caller location to a panic message is a symptom of nostalgia for exception traces.

                                                                                                                                  I don’t know what compiler developers’ reasoning is. As a side observer, I can’t help but think “they could as well be implementing full exception traces now”.

                                                                                                                                  1. 2

                                                                                                                                    They actually do have full exception traces. You turn them on with the RUST_BACKTRACE=1 environment variable, and by making sure you don’t strip debug info.

This is a convenience feature so that you get useful info when you’ve got traces turned off, or when you’re trying to get useful diagnostics from a released executable (if you’re stripping the executable, then the whole point of doing so is to ship less data than you would need for full traces, but there might still be a happy medium between nothing and full exception traces).

                                                                                                                                    1. 1

There’s a substantial runtime cost to full exception traces; that cost is much smaller or nonexistent for the track_caller annotation. For the latter, the compiler inserts some static info to be printed on panic. There will be some binary bloat and potentially some instruction density loss, but the performance impact will be very small. To do full exception traces, you have to ~constantly maintain full unwind tables somewhere and update them on every function call/return. You can already get that info by setting RUST_BACKTRACE, but it is off by default.

                                                                                                                                1. 16

                                                                                                                                  It’s a welcome barrier IMHO. If someone can’t configure an email client properly I would question whether they should be trusted with writing kernel code.

                                                                                                                                  1. 6

                                                                                                                                    You’re not supposed to “trust” totally new contributors either way. That’s why their code is reviewed by a tree of maintainers.

                                                                                                                                  1. 2

                                                                                                                                    In case anyone’s curious, I got the date out of this site’s RSS feed. https://web.archive.org/web/20090123072348mp_/http://matt.might.net/articles/feed.rss

                                                                                                                                    It doesn’t seem to be anywhere on the page itself.

                                                                                                                                    1. 8

                                                                                                                                      I respect that you’re finding your way, but this is one tutorial-level guide article too many for me on Lobste.rs. I firmly believe that this is not the place for this variety of content.

                                                                                                                                      That being said, contrary to the flags available on Lobste.rs, I wouldn’t classify this as “spam” or “off-topic”. Some more helpful descriptions would be that the piece is redundant with other, easily-found works; is not comprehensive enough for this site’s readership; or needs more thought and substance put into it before it is ready for Lobste.rs. (If any mods see this, I am interested in your thoughts.)

                                                                                                                                      Good luck with everything you do; onward and upward!

                                                                                                                                      1. 1

An important point that you should consider: Lobsters treats comments as upvotes (with a cap, but still).

                                                                                                                                        If you don’t like something, you should not comment on it.

                                                                                                                                        1. 1

                                                                                                                                          If you don’t like something, you should not comment on it.

                                                                                                                                          Unless I think there’s something worth discussing; then the system works as intended. I appreciate the reference, and I did not know about that, but I will still speak.

                                                                                                                                      1. 11

We should remember that programmers are petite bourgeoise - not proletarian. By being socially adjacent to investors we typically share in a thin slice of exploitative gains, making us naturally more incentivized to celebrate folks like Elon Musk instead of the vast graph of minimum-wage workers who toil for our same-day delivery purchases that bring us so much delight.

We are the extractors. I disagree with the license because a commune of tech workers owning an equal share of their exploitative ad-tech company or whatever does not make it anti-capitalist at all. It turns otherwise subservient capitalists into a bunch of capitalists who make slow decisions :P

                                                                                                                                        I’m only half joking - those of us who are inclined to move in this direction need to both live progressive lives and work toward the emancipation of others.

Many of us were shocked when GPT-3 started parroting the racism it learned from us. But GPT-3 is not so different in form or function from the systems many of us work on every day - it simply has the ability to talk about its conclusions in a way that is easily understood. Its function essentially boils down to “more of the same”: inferring the current racist and abusive state of our communication and repeating it back in non-novel variations. What systems do you optimize? What current realities do you allow to repeat at lower cost? Maybe your current work is not so different.

                                                                                                                                        Let’s keep iterating on this - while remembering the real racist, sexist and abusive realities that we want our systems to stand against. I don’t want to create a “dictatorship of the tech bros”. I want to limit their damage.

                                                                                                                                        1. 9

                                                                                                                                          We should remember that programmers are petite bourgeoise - not proletarian.

                                                                                                                                          People who work in startups and have stock options, maybe. Not the majority of “dark matter” developers and especially not outsourcees (those to whom work is outsourced).

                                                                                                                                          1. 3

                                                                                                                                            Yeah, that’s a lot of Californian ideology for a leftist. I’m sure Rajesh, working 12 hours a day in a shitty office in Bangalore to maintain a Java behemoth of some Western bank, would totally understand being called a petite bourgeois

                                                                                                                                            1. 1

                                                                                                                                              Who chooses a license?

                                                                                                                                              1. 2

The owners of the company in a for-profit context, who in some cases are the workers if the company is a co-op. For software created outside of a for-profit context, it depends on the organization. For software developed by independent individuals, the individuals themselves.

                                                                                                                                                1. 1

                                                                                                                                                  That doesn’t sound similar to the situation you described above.

                                                                                                                                            2. 2

                                                                                                                                              Sure, but that’s not the target. The target is the people who are in the position of selecting a license for their project. People who work in sweatshops may not have so many side projects on GitHub. The target is essentially us, the users of this site who often have the luxury of having enough time to start a nights and weekends project at all.

                                                                                                                                              1. 3

                                                                                                                                                I think “only the petite bourgeoise use open source software” is a hell of an assumption - and even if it were true, it would be an admission of our failure, not a defense.

                                                                                                                                                1. 4

                                                                                                                                                  Yes. Free software did fail. It certainly failed at what Ramsey Nasser & Everest Pipkin would have wanted it to accomplish, it failed at what I wanted it to accomplish, and it failed at what Richard Stallman wanted it to accomplish. It sounds like it didn’t accomplish your goals, either. Admitting our failure seems like the only honest evaluation of the situation.

                                                                                                                                                  1. -6

                                                                                                                                                    It seems like you’re angry about something.