1. 9

    Joe’s thesis is also extremely approachable and still full of great ideas: http://www.cs.otago.ac.nz/coursework/cosc461/armstrong_thesis_2003.pdf

    1. 3

      if you like this kind of thing, this is the kind of thing you will like :o)

    1. 4

      is rosetta-code not a better resource for this sort of thing?

      1. 3

        i found this to be quite instructive as an overview of various hashing techniques. check it out :)

        1. 2

          Cool! The overall series looks useful, as I’m implementing a key-value store too.

          Of hashing techniques, the only fancy one I’ve tried (in a prior project) is Robin Hood, which I found worked well, increasing both speed and maximum load.
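          To make the idea concrete, here is a toy sketch of the Robin Hood insertion rule (my own illustration, not code from the series; fixed capacity, no resizing or deletion):

```python
# Toy Robin Hood hash table: on a collision, the entry that is
# further from its ideal slot keeps the slot ("steal from the rich").
class RobinHoodMap:
    def __init__(self, capacity=8):
        self.slots = [None] * capacity  # each slot: (key, value, probe_distance)

    def insert(self, key, value):
        i = hash(key) % len(self.slots)
        dist = 0  # how far we are from the key's ideal slot
        while True:
            slot = self.slots[i]
            if slot is None:
                self.slots[i] = (key, value, dist)
                return
            if slot[0] == key:
                self.slots[i] = (key, value, slot[2])  # update in place
                return
            if slot[2] < dist:
                # Displace the entry that is closer to its ideal slot,
                # then keep probing to re-insert the displaced entry.
                self.slots[i] = (key, value, dist)
                key, value, dist = slot
            i = (i + 1) % len(self.slots)
            dist += 1

    def get(self, key):
        i = hash(key) % len(self.slots)
        dist = 0
        while True:
            slot = self.slots[i]
            # Lookups can stop early: past this point the key
            # would have displaced an existing entry.
            if slot is None or slot[2] < dist:
                return None
            if slot[0] == key:
                return slot[1]
            i = (i + 1) % len(self.slots)
            dist += 1
```

          The invariant that no occupant sits closer to its ideal slot than a probing key has already travelled is what evens out probe lengths and lets lookups terminate early, which is where the speed and load-factor gains come from.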

          1. 1

            I found that bidirectional linear probing worked better than Robin Hood: https://github.com/senderista/hashtable-benchmarks

        1. 5

          This is a great writeup of an interesting and attractive build, but looking at that low laptop-like monitor position is making my neck ache… 🙂

          1. 2

            Same! Most ergonomics guides I’ve read advise you to position your monitor with the top edge roughly at eye level. (“Eye level” as you’re looking straight in front of you, not hunched over.) Having a monitor on an adjustable arm is generally one of the big advantages of not using a laptop!

            1. 5

              I think the wide-angle photo of my desk made it look like the monitor was much more reclined than it is. Here’s a side photo.

              1. 1

                would be nice if you could share your window manager config, please?

                2. 1

                  Nope, I got it right at the first photo. It looks as painfully reclined as in the first set of pics.

            1. 3

              That is a really complicated setup.

              At first, I was thinking that I can’t imagine what you’d want 25Gbit for. But then again, I recently moved from a 400Mbit cable connection to 50-ish DSL and I really, really don’t like DSL. (Side note: they’ve just announced they’re laying fiber in my area; the contract is signed, and this time next year I could be on gigabit.)

              I assume jumping from < 1Gbit to 10+ Gbit is just a natural next step. I mean, yes, I don’t need that speed all the time. But it’d still be nice to click “Download” and, just a minute later, have the entire 100+ GB Elder Scrolls Online right here.

              1. 3

                That is a really complicated setup.

                It seems that he’s not even using multiple subnets; I’d say his setup is a lot simpler than mine. :)

                I could imagine having 25 Gbps at home, but I’ve just started to deploy 10 Gbps in my internal network (between a few hosts) so it might be slightly overkill for me as well. My current max is 1000/100 but I’ve only opted for 100/100 as I don’t need more, and since I can’t have 1000 Mbps in upload…

                1. 2

                  might be useful to scan the entire ipv4 address space in a couple of minutes (or even faster)
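                  it does check out, at least on paper. a rough sketch (assuming one minimum-size probe per address at full 25 Gbps line rate, ignoring replies, rate limiting, and your ISP’s patience):

```python
# Back-of-envelope: one minimum-size probe to every IPv4 address at
# 25 Gbps line rate. A 64-byte frame occupies 84 bytes on the wire
# (preamble + frame + inter-frame gap).
addresses = 2 ** 32
wire_bits_per_probe = 84 * 8     # 672 bits per probe
line_rate_bps = 25e9

seconds = addresses * wire_bits_per_probe / line_rate_bps
print(f"{seconds:.0f} s (~{seconds / 60:.1f} min)")  # ~115 s
```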

                  1. 3

                    If your ISP doesn’t block you, that’s a great way to end up on threat intelligence feeds and labelled as a bot.

                    1. 1

                      fine, take a few more minutes :)

                1. 12

                  does this feel exactly like the windows browser saga from around the turn of the century to anyone else too?

                  1. 9

                    Honestly? As someone who remembers the Halloween Documents clearly … not much.

                    Microsoft’s power at the time is nothing compared to the power Google wields now.

                    1. 18

                      I believe the specific complaint @signal-11 is referring to is that IE 4 had a special mode that made connection establishment faster with IIS, but it only worked if the client was IE 4 and the server was IIS, and this was seen as an example of using the browser to get an unfair advantage in the web server market.

                      1. 3

                        yes, that’s exactly what i was referring to.

                  1. 3

                    i have my doubts, fwiw. programming at the level of specifications is still programming, and is quite hard to get right, e.g. try coming up with a specification for a hash table which does not end up describing linear search…

                    1. 9

                      Immutable by default prevents a lot of mistakes. Rust got this 100% correct. When I don’t see const on a variable in C++, I immediately believe that it’s going to change somewhere and I consider it a mistake if it doesn’t.
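                      The same discipline can be sketched outside Rust too; here’s a small illustration of my own in Python, using a frozen dataclass to stand in for immutable-by-default bindings:

```python
from dataclasses import dataclass, replace, FrozenInstanceError

# Immutable by default: accidental writes fail loudly instead of
# silently changing state somewhere far away.
@dataclass(frozen=True)
class Config:
    host: str
    port: int

cfg = Config("localhost", 8080)

try:
    cfg.port = 9090              # rejected at runtime
except FrozenInstanceError:
    print("mutation blocked")

# An intentional change is explicit and produces a new value, much
# like needing `mut` (or dropping `const`) signals intent to mutate.
cfg2 = replace(cfg, port=9090)
print(cfg.port, cfg2.port)       # 8080 9090
```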

                      1. 3

                        afaik, erlang was there first…

                      1. 1

                        iirc, mips is still used within csco routers? there used to be a book called “see mips run” that i used for hacking around in mips asm. quite good too (fwiw).

                        1. 5

                          If you read ‘See MIPS Run’, make sure it’s the 32-bit version. The 64-bit version has a huge number of errors in it.

                          That said, even at my most cranky, MIPS assembly is not something I would ever inflict on someone, no matter how much they’d annoyed me. Between the lack of useful addressing modes, the inconsistent register naming (what is $t0? Depends on the assembler you’re using!), the huge number of pseudos that most MIPS assemblers make look like normal instructions but that will clobber $at, the magic of $25 in PIC modes, branch delay slots, and the exciting logic in the assembler for either letting you fill delay slots, padding them with nops, or trying to fill them from one of your instructions depending on the mode, it’s an awful experience.

                          I’m not really a fan of RISC-V, but RISC-V manages to copy MIPS while avoiding the most awful parts of MIPS. If you want to learn a simple RISC assembly language, RISC-V is a better choice than MIPS. If you want to learn assembly language for a well-designed ISA, learn AArch64. If you want to learn assembly language that’s a joy to write, learn AArch32 (things like stm and ldm, predication, and the fact that $pc is a general-purpose register are great to use for assembly programmers, difficult to use for compilers, and awful to implement).

                          1. 1

                            There’s an implicit “RISC-V is not a well-designed ISA” there.

                            Could you elaborate on what issues you see with RISC-V?

                        1. 2

                          the good thing about standards is that there are so many to choose from that we don’t have to follow…

                          1. 5

                            pjsip, while specific to VoIP/RTC, is really interesting because it’s built from the ground up with a focus on portability.

                            1. 1

                              could you please consider updating the url? the one referenced above doesn’t point to https://www.pjsip.org

                              1. 1

                                Thank you for pointing that out. I don’t seem to be able to edit/update the comment. But, here is the PJSIP git repo as well.

                            1. 1

                              i, fwiw, am personally biased towards ietf-xdr. it provides the minimal thing that is required to exchange data between nodes separated on the network. everything else is up to the endpoints.

                              frankly, the idea of making a procedure call over the network horrifies me :) it just hides so many failure modes…

                              1. 4

                                You might want to read the papers on OKWS. This was the OkCupid web server architecture. It was designed by a few people from MIT and makes clever use of Unix domain sockets.

                                1. 4

                                  MIT 6.858 Computer Systems Security covers OKWS as a case study. Well worth a watch IMO (or at least a skim of the lecture notes).

                                  1. 2

                                    is this the one that you had in mind? could you please let me know? thank you kindly!

                                    1. 2

                                      Not GP, but that seems correct. You can check the GitHub repo too; they have linked the paper there.

                                      1. 2

                                        Yes, that is one of the papers.

                                    1. 3

                                      one place where bash-lsp would be useful :)

                                      1. 23

                                        I used to think this was the case until I realized that Google funds Firefox through noblesse oblige, and so all the teeth-gnashing over “Google owns the Internet” is still true whether you use Chrome directly or whether you use Firefox. The only real meaningful competition in browsers is from Apple (God help us.) Yes, Apple takes money from Google too, but they don’t rely on Google for their existence.

                                        I am using Safari now, which is… okay. The extension ecosystem is much less robust, but I have survived. I’m also considering Brave, but Chromium browsers just gulp down the battery on macOS, so I’m not totally convinced there yet.

                                        Mozilla’s recent political advocacy has also made it difficult for me to continue using Firefox.

                                        1. 19

                                          I used to think this was the case until I realized that Google funds Firefox through noblesse oblige, and so all the teeth-gnashing over “Google owns the Internet” is still true whether you use Chrome directly or whether you use Firefox.

                                          I’m not sure the premise is true. Google probably wants to have a practical monopoly that does not count as a legal monopoly. This isn’t an angelic motive, but isn’t noblesse oblige.

                                          More importantly, the conclusion doesn’t follow–at least not 100%. Money has a way of giving you control over people, but it can be imprecise, indirect, or cumbersome. I believe what Google and Firefox have is a contract to share revenue with Firefox for Google searches done through Firefox’s url bar. If Google says “make X, Y and Z decisions about the web or you’ll lose this deal”, that is the kind of statement antitrust regulators find fascinating. Since recent years have seen increased interest in antitrust, Google might not feel that they can do that.

                                          1. 9

                                            Yes, I agree. It’s still bad that most of Mozilla’s funding comes from Google, but it matters that Mozilla is structured with its intellectual property owned by a non-profit. That doesn’t solve all problems, but it creates enough independence that, for example, Firefox is significantly ahead of Chrome on cookie-blocking functionality - which very much hits Google’s most important revenue stream.

                                            1. 4

                                              Google never has to say “make X, Y and Z decisions about the web or you’ll lose this deal,” with or without the threat of antitrust regulation. People have a way of figuring out what they have to do to keep their job.

                                            2. 17

                                              I’m tired of the Pocket suggested stories. They have a certain schtick to them that’s hard to pin down precisely but usually amounts to excessively leftist, pseudo-intellectual clickbait: “meat is the privilege of the west and needs to stop.”

                                              I know you can turn them off.

                                              I’m arguing defaults matter, and defaults that serve to distract with intellectual junk are not great. At least it isn’t misinformation, but that’s not saying much.

                                              Moving back to Chrome this year because of that, along with some perf issues I run into more than I’d like. It’s a shame, I wanted to stop supporting Google, but the W3C has succeeded in creating a standard so complex that millions of dollars are necessary to adequately fund the development of a performant browser.

                                              1. 2

                                                Moving back to Chrome this year because of that, along with some perf issues I run into more than I’d like. It’s a shame, I wanted to stop supporting Google, but the W3C has succeeded in creating a standard so complex that millions of dollars are necessary to adequately fund the development of a performant browser.

                                                In case you haven’t heard of it, this might be worth checking out: https://ungoogled-software.github.io/

                                                1. 1

                                                  Except that, as of a few days ago, Google is cutting off access to certain APIs, like Sync, that Chromium was using.

                                                  1. 1

                                                    Straight out of the Android playbook

                                              2. 4

                                                Mozilla’s recent political advocacy has also made it difficult for me to continue using Firefox.

                                                Can you elaborate on this? I use FF but have never delved into their politics.

                                                1. 16

                                                  My top of mind example: https://blog.mozilla.org/blog/2021/01/08/we-need-more-than-deplatforming/

                                                  Also: https://blog.mozilla.org/blog/2020/07/13/sustainability-needs-culture-change-introducing-environmental-champions/ https://blog.mozilla.org/blog/2020/06/24/immigrants-remain-core-to-the-u-s-strength/ https://blog.mozilla.org/blog/2020/06/24/were-proud-to-join-stophateforprofit/

                                                  I’m not trying to turn this into debating specifically what is said in these posts but many are just pure politics, which I’m not interested in supporting by telling people to use Firefox. My web browser doesn’t need to talk about ‘culture change’ or systemic racism. Firefox also pushes some of these posts to the new tab page, by default, so it’s not like you can just ignore their blog.

                                                  1. 6

                                                    I’m starting to be afraid that being against censorship is enough to get you ‘more than de-platformed’.

                                                      1. 10

                                                        Really? I feel like every prescription in that post seems reasonable; increase transparency, make the algorithm prioritize factual information over misinformation, research the impact of social media on people and society. How could anyone disagree with those points?

                                                        1. 17

                                                          You’re right, how could anyone disagree with the most holy of holies, ‘fact checkers’?

                                                          Here’s a great fact check: https://www.politifact.com/factchecks/2021/jan/06/ted-cruz/ted-cruzs-misleading-statement-people-who-believe-/

                                                          The ‘fact check’ is a bunch of irrelevant information about how bad Ted Cruz and his opinions are, before we get to the meat of the ‘fact check’ which is, unbelievably, “yes, what he said is true, but there was also other stuff he didn’t say that we think is more important than what he did!”

                                                          Regardless of your opinion on whether this was a ‘valid’ fact check or not, I don’t want my web browser trying to pop up clippy bubbles when I visit a site saying “This has been officially declared by the Fact Checkers™ as wrongthink, are you sure you’re allowed to read it?” I also don’t want my browser maker advocating for deplatforming (“we need more than deplatforming” suggests that deplatforming should still be part of the ‘open’ internet). That’s all.

                                                          1. 15

                                                            a bunch of irrelevant information about how bad Ted Cruz and his opinions are

                                                            I don’t see that anywhere. It’s entirely topical and just some context about what Cruz was talking about.

                                                            the meat of the ‘fact check’ which is, unbelievably, “yes, what he said is true, but there was also other stuff he didn’t say that we think is more important than what he did!”

                                                            That’s not what it says at all. Anyone can cherry-pick or interpret things in such a way that makes their statement “factual”. This is how homeopaths can “truthfully” point at studies which show an effect in favour of homeopathy. But any fact check worth its salt will also look at the overwhelming majority of studies that very clearly demonstrate that homeopathy is no better than a placebo, and therefore doesn’t work (plus, will point out that the proposed mechanisms of homeopathy are extremely unlikely to work in the first place, given that they violate many established laws of physics).

                                                            The “39% of Americans … 31% of independents … 17% of Democrats believe the election was rigged” is clearly not supported by any evidence, and only by a tenuous interpretation of a very limited set of data. This is a classic case of cherry-picking.

                                                            I hardly ever read politifact, but if this is really the worst fact-check you can find then it seems they’re not so bad.

                                                            1. 7

                                                              This article has a few more examples of bad fact checks:

                                                              https://greenwald.substack.com/p/instagram-is-using-false-fact-checking

                                                            2. 7

                                                              Media fact-checkers are known to be biased.

                                                              [Media Matters lobby] had to make us think that we needed a third party to step in and tell us what to think and sort through the information … The fake news effort, the fact-checking, which is usually fake fact-checking, meaning it’s not a genuine effort, is a propaganda effort … We’ve seen it explode as we come into the 2020 election, for much the same reason, whereby, the social media companies, third parties, academic institutions and NewsGuard … they insert themselves. But of course, they’re all backed by certain money and special interests. They’re no more in a position to fact-check than an ordinary person walking on the street … — Sharyl Attkisson on Media Bias, Analysis by Dr. Joseph Mercola

                                                              Below is a list of known rebuttals of some “fact-checkers”.

                                                              Politifact

                                                              • “I wanted to show that these fact-checkers just lie, and they usually go unchecked because most people don’t have the money, don’t have the time, and don’t have the platform to go after them — and I have all three” — Candace Owens Challenges Fact-Checker, And Wins

                                                              Full fact (fullfact.org)

                                                              Snopes

                                                              Associated Press (AP)

                                                              • “Fact-checking was devised to be a trusted way to separate fact from fiction. In reality, many journalists use the label “fact-checking” as a cover for promoting their own biases. A case in point is an Associated Press (AP) piece headlined “AP FACT-CHECK: Trump’s inaccurate boasts on China travel ban,” which was published on March 26, 2020 and carried by many news outlets.” — Propaganda masquerading as fact-checking

                                                              Politico

                                                              1. 4

                                                                I’m interested in learning about the content management systems that these fact checker websites use to effectively manage large amounts of content with large groups of staff. Do you have any links about that?

                                                                1. 3

                                                                  The real error is to imply that “fact checkers” are functionally different from any other source of news/journalism/opinion. All such sources are a collection of humans. All humans have bias. Many such collections of humans have people that are blind to their own bias, or suffer a delusion of objectivity.

                                                                  Therefore the existence of some rebuttals to a minuscule number of these “fact checks” (between 0 and 1% of all “fact checks”) should not come as a surprise to anyone. Especially when the rebuttals are published by other news/journalism/opinion sources that are at least as biased and partisan as the fact checkers they’re rebutting.

                                                                  1. 1

                                                                    The real error is to imply that “fact checkers” are functionally different from any other source of news/journalism/opinion.

                                                                    Indeed they aren’t that different. Fact-checkers inherit whatever bias is already present in mainstream media, which itself is a well-documented fact, as the investigative journalist Sharyl Attkisson explored in her two books:

                                                                    • The Smear exposes and focuses on the multi-billion dollar industry of political and corporate operatives that control the news and our info, and how they do it.
                                                                    • Slanted looks at how the operatives moved on to censor info online (and why), and has chapters dissecting the devolution of NYT and CNN, recommendations where to get off narrative news, and a comprehensive list of media mistakes.
                                                            3. 5

                                                              After reading that blog post last week I switched away from Firefox. It will lead to the inevitable politicization of a web browser where the truthfulness of many topics is filtered through a very left-wing, progressive lens.

                                                              1. 23

                                                                I feel like “the election wasn’t stolen” isn’t a left- or right-wing opinion. It’s just the truth.

                                                                1. 15

                                                                  To be fair, I feel like the whole idea of the existence of an objective reality is a left-wing opinion right now in the US.

                                                                  1. 5

                                                                    There are many instances of objective reality which left-wing opinion deems problematic. It would be unwise to point them out on a public forum.

                                                                    1. 8

                                                                      I feel like you have set up a dilemma for yourself. In another thread, you complain that we are headed towards a situation where Lobsters will no longer be a reasonable venue for exploring inconvenient truths. However, in this thread, you insinuate that Lobsters already has become unreasonable, as an excuse for avoiding giving examples of such truths. Which truths are being silenced by Lobsters?

                                                                      Which truths are being silenced by Mozilla? Keep in mind that the main issue under contention in their blog post is whether a privately-owned platform is obligated to repeat the claims of a politician, particularly when those claims would undermine democratic processes which elect people to that politician’s office; here, there were no truths being silenced, which makes the claim of impending censorship sound like a slippery slope.

                                                                      1. 4

                                                                        Yeah but none that are currently fomenting a coup in a major world power.

                                                                  2. 16

                                                                    But… Mozilla has been inherently political the whole way. The entire Free Software movement is incredibly political. Privacy is political. Why is “social media should be more transparent and try to reduce the spread of blatant misinformation” where you draw the line?

                                                                    1. 5

                                                                      That’s not where I draw the line. We appear to be heading towards a Motte and Bailey fallacy where recent events in the US will be used as justification to clamp down on other views and opinions that left-wing progressives don’t approve of (see some of the comments on this page about ‘fact checkers’)

                                                                      1. 7

                                                                        In this case though, the “views and opinions that left-wing progressives don’t approve of” are the ideas of white supremacy and the belief that the election was rigged. Should those not be “clamped down” on? (I mean, it’s important to be able to discuss whether the election was rigged, but not when it’s just a president who doesn’t want to accept a loss and has literally no credible evidence of any kind.)

                                                                        1. 2

                                                                          I mentioned the Motte and Bailey fallacy being used and you bring up ‘white supremacy’ in your response! ‘White supremacy’ is the default motte used by the progressive left, the bailey being a clampdown on much more contentious issues. It’s this power to clamp down on the more contentious issues that I object to.

                                                                          1. 6

                                                                            So protest clamp downs on things you don’t want to see clamp downs on, and don’t protest clamp downs on things you feel should be clamped down on? We must be able to discuss and address real issues, such as the spread of misinformation and discrimination/supremacy.

                                                                            But that’s not even super relevant to the article in question. Mozilla isn’t even calling for censoring anyone. It’s calling for a higher degree of transparency (which none of us should object to) and for the algorithm to prioritize factual information over misinformation (which everyone ought to agree with in principle, though we can criticize specific ways to achieve it).

                                                                            1. 4

                                                                              We are talking past each other in a very unproductive way.

                                                                              The issue I have is with what you describe as “…and for the algorithm to prioritize factual information over misinformation”

                                                                              Can you not see the problem when the definition of ‘factual information’ is in the hands of a small group of corporations from the West Coast of America? Do you think that the ‘facts’ related to certain hot-button issues will be politically neutral?

                                                                              It’s this bias that I object to.

                                                                              This American cultural colonialism.

                                                                              1. 3

                                                                                Can you not see the problem when the definition of ‘factual information’ is in the hands of a small group of corporations from the West Coast of America?

                                                                                ReclaimTheNet recently published a very good article on this topic

                                                                                https://reclaimthenet.org/former-aclu-head-ira-glasser-explains-why-you-cant-ban-hate-speech/

                                                                                1. 3

                                                                                  That’s an excellent article. Thank you for posting it.

                                                                                  1. 3

                                                                                    You’re welcome. You might be interested in my public notes on the larger topic, published here.

                                                                    2. 3

                                                                      Out of interest, to which browser did you switch?

                                                                2. 2

                                                                  if possible, try vivaldi. being based on chromium, it will be the easiest to switch to, e.g. you can install chromium’s extensions in vivaldi. not sure about their osx support (which seems to be your use-case) though, so ymmv.

                                                                  1. 2

                                                                    humm, i thought this would be real thang though.

                                                                  1. 9

                                                                    When I first came to Go from C#, I found the lack of generics really noticeable and missed them a lot, but over the years of writing Go, I can count the times I have really wanted them on one hand.

                                                                    I am looking forward to getting them in the language, but I hope it doesn’t mean everyone uses them everywhere, and we end up with a bloat of type parameters and type constraints, which C# suffers from if you use type constraints (e.g. in and out, representing covariance and contravariance).

                                                                    1. 1

                                                                      yeah, in my modest experiments with golang, i must confess, i don’t miss generics all that much; i would rather keep fast compile times than have generics any day. nevertheless, it would be a welcome addition to the language.

                                                                      what is even cooler is this

                                                                      “This design does not support template metaprogramming or any other form of compile time programming.”

                                                                      good !

                                                                    1. 31

                                                                      X11 really delivered on the promise of “run apps on other machines and display locally.” XResources let you store your configuration information in the X server. Fonts were stored in your display server or served from a font server.

                                                                      X Intrinsics let you do some amazing things with keybindings, extensibility, and scripting.

                                                                      And then we abandoned all of it. I realize sometimes (often?) we had a good reason, but I feel like X got to show off its true power only briefly before we decided to make it nothing but a local display server with a lot of unused baggage.

                                                                      (The only system that did better for “run apps remotely/display locally” was Plan 9, IMHO.)

                                                                      1. 11

                                                                        A lot of these things were abandoned because there wasn’t a consistent story of what state belonged in the client and server and storing state on the server was hard. For this kind of remote desktop to be useful, you often had thin X-server terminals and a big beefy machine that ran all of the X clients. With old X fonts, the fonts you could display depended on the fonts installed on the server. If you wanted to be able to display a new font, it needed installing on every X server that you’d use to run your application. Your application, in contrast, needed installing on the one machine that would run it. If you had some workstations running a mix of local and remote applications and some thin clients running an X server and nothing locally, then you’d often get different fonts between the two. Similarly, if you used X resources for settings, your apps migrated between displays easily but your configuration settings didn’t.

                                                                        The problem with X11’s remote story (and the big reason why Plan 9 was better) was that X11 was only a small part of the desktop programming environment. The display, keyboard, and mouse were all local to the machine running the X server but the filesystem, sound, printer, and so on were local to the machine running the client. If you used X as anything other than a dumb framebuffer, you ended up with state split across the two in an annoying manner.

                                                                        1. 11

                                                                          Speaking as someone who had to set up an X font server at least once: you didn’t have to have the fonts installed on every X server, you just had to lose all will to live.

                                                                          But yes, X was just one part of Project Athena. It assumed you’d authenticate via Kerberos with your user information looked up in Hesiod and display your applications via X, and I think there was an expectation that your home directory would be mounted over the network from a main location too.

                                                                          Project Athena and the Andrew Project were what could have been. I don’t think anyone expected local workstations to become more powerful than the large shared minis so quickly, and nobody saw the Web transforming into what it is today.

                                                                          1. 4

                                                                            At school I got to use an environment with X terminals, NFS mounts for user data, and NIS for authentication. It worked fairly well, and when you see something like that work, it’s hard to see the world in quite the same way afterwards.

                                                                            As for the web, it’s true that it challenged this setup quite a bit, because it was hard to have enough CPU on large server machines to render web content responsively for hundreds of users. But on the other hand, it seems like we’ve passed the point where sending HTML/CSS/JS to clients is optimal from a bandwidth point of view - it’s cheaper to send an h264 stream down and UI interaction back. In bandwidth-constrained environments, it’s not unimaginable that it makes sense to move back in the other direction, similar to Opera Mini.

                                                                            1. 2

                                                                              Omg what a scary thought!

                                                                            2. 2

                                                                              Andrew was really amazing. I wish it had caught on, but if wishes were horses &c &c &c.

                                                                              1. 1

                                                                                The early version of the Andrew Window Manager supported tiling using a complex algorithm based on constraint solving. They made an X11 window manager that mimicked the Andrew WM.

                                                                                Let me rephrase that: I know they made an X11 window manager that mimicked Andrew but I cannot for the life of me find it. It’s old, it would’ve been maybe even X10 and not X11…

                                                                                So yeah, if you know where that is, it would be a helluva find.

                                                                                1. 2

                                                                                  iirc scwm uses some kind of constraint-solving for window placement. but i am approx. 99% sure that that’s not what you are looking for.

                                                                                  it’s for that remaining 1% that i posted this message :)

                                                                          2. 9

                                                                            We didn’t abandon it; rather, we admitted that it didn’t really work and stopped trying. We spent our time in better ways.

                                                                            I was the developer at Trolltech who worked most on remote X, particularly for app startup and opening windows. It sucked, and it sucked some more. It was the kind of functionality that you have to do correctly before and after lunch every day of every week, and a moment’s inattention breaks it. And people were inattentive — most developers developed with their local X server and so wouldn’t notice it if they added something that would be a break-the-app bug with even 0.1s latency, and of course they added that sooner or later.

                                                                            Remote X was possible, I did use Qt across the Atlantic, but unjustifiable. It required much too much developer effort to keep working.

                                                                            1. 1

                                                                              I wonder how much of that pain was from legacy APIs? I spent some time about 15 years ago playing with XCB when it was new and shiny and it was possible to make something very responsive on top of it (I was typically doing development on a machine about 100ms away). The composite, damage, and render extensions gave you a good set of building blocks as long as everything involved in drawing used promises and you deferred blocking as long as possible. A single synchronous API anywhere in the stack killed it. I tried getting GNUstep to use the latency hiding that XCB enabled and it was a complete waste of time because there were so many blocking calls in the higher-level APIs that the mid-level APIs undid all of the work that you did at the lower level.

                                                                              If, however, you designed your mid- and higher-level drawing APIs to be asynchronous, remote X11 performed very well, but getting people to adopt a completely new GUI toolkit seemed like too much effort.

                                                                              That said, the way that you talk to a GPU now (if you actually care about performance) is via asynchronous APIs because bus round trips can be a bottleneck. A lot of the lessons from good remote drawing APIs are directly applicable to modern graphics hardware. With modernish X11 you can copy images to the server and then send sequences of compositing commands. With a modern GPU, you copy textures into GPU memory and then send a queue of compositing command across the bus.

                                                                              1. 2

                                                                                We at Trolltech avoided legacy APIs, provided our users with good APIs (good enough that few people tried to go past it to the lower-level protocol), and still people had problems en masse.

                                                                                If your test environment has nanoseconds of latency, you as app developer won’t notice that the inner body of a loop requires a server roundtrip, but your users will notice it very well if they have 0.1s of latency. Boring things like entering data in a form would break, because users would type at their usual speed, and the app would mishandle the input.

                                                                                Enter first name, hit enter, enter last name, see that sometimes start of the last name was added to the first-name field depending on network load.

                                                                                Edited to add: I’m not trying to blame here (“fucking careless lazy sods” or whatever), I’m trying to say that coping with latencies that range from nanoseconds to near-seconds is difficult. A lack of high-latency testing hurts, but it’s difficult even with the best of testing. IMO it’s quite reasonable to give up on a high-effort marginal task, the time can be spent better.

                                                                                1. 2

                                                                                  Note that a lot of the problems exist whether or not the code is part of the application process. If you use Windows Remote Desktop around the world, you have 100s of milliseconds of latency. If you press shift+key, it’s not uncommon to see the two events delivered out of order, with incorrect results. (I don’t know exactly how this happens, because a TCP connection implies ordering, so I suspect this is really about client message construction and server message processing.) The engine somehow has to ensure logically correct rendering while avoiding making application UI calls synchronous. Applications may not be aware of it, but the logic is still there.

                                                                                  1. 1

                                                                                    We at Trolltech avoided legacy APIs, provided our users with good APIs (good enough that few people tried to go past it to the lower-level protocol), and still people had problems en masse.

                                                                                    I’ve never looked very closely at Qt, but I was under the impression that most of your drawing APIs were synchronous? I don’t remember promises or any other asynchronous primitives featuring in any of the APIs that I looked at.

                                                                                    1. 3

                                                                                      Neither really synchronous nor really asynchronous… There are three relevant clocks, all monotonic: Time proceeds monotonically in the server, in the app and for the user, and none of the three can ever pause any of the others as would be necessary for a synchronous call, and no particular latency or stable offset is guaranteed.

                                                                                      So, yes, Qt is synchronous, but when people say synchronous they usually have a single-clock model in mind.

                                                                                      There are no promises, but what some people consider synchronous drawing primitives don’t really work. Promises would let you write code along the lines of “do something, then when the server call returns, draw blah blah”. Synchronous code would be “do something; draw blah blah;”. Qt’s way is to react strictly to server events: the way to draw is to draw in response to the server’s asking for that, and you don’t get to cheat and anticipate what the server will ask for, and you can’t avoid redrawing if the server wants you to.

                                                                                      We were careful about the three monotonic times, so the default handling has always been correct. But it’s very easy to do things like change the destination of keyboard input, and forget that the events that arrive at the program after the change may have been sent from the server (or from the user’s mind) either before or after the program performed the change.

                                                                                      A tenth of a second is a long time in a UI. People will type two keystrokes or perform a mouse action that quickly. If you want to avoid latency-induced mistakes you have to think clearly about all three clocks.

                                                                              2. 2

                                                                                People got and continue to get a lot of mileage out of remote x. Even on windows, I believe you can use remote forwarding with putty to an x server like vcxsrv.

                                                                                The biggest killer of remote x was custom toolkits (as opposed to athena and xt) and, ultimately, glx. The former is slow to proxy transparently; the latter is nigh impossible.

                                                                                1. 1

                                                                                  Yeah. I feel that the problem with Xorg isn’t necessarily Xorg itself, but that a lot of the programmatic interfaces are kludgy, and the more complex use cases are specific enough that there has been very little documentation of them. It very likely would fulfill a lot of uses that people have no alternative for, but that knowledge has not been passed on. So instead people see it as a huge bloated mess, only partly because it is, but partly because they simply either don’t know or don’t care about those other workflows and uses.

                                                                                1. 3

                                                                                  fiction : a bunch of stanisław-lem, most notable among those cyberiad, the-invincible, and his-master’s-voice (loved all of them). just started children-of-time (adrian-tchaikovsky), seems kind of okay’ish so far (approx 100-odd pages).

                                                                                  non-fiction : arthur-c-clarke’s ‘how the world was one’. it gives a brief overview of the birth of communication, from laying the first trans-atlantic-cable to geo-stationary satellites. it is quite good actually.

                                                                                  1. 6

                                                                                    i have used the following style for state-machine implementation to great advantage. for larger, more complicated cases i have found that HSMs, or hierarchical state machines, are also a pretty useful toolkit to have around.
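                                                                                    (the linked style isn’t reproduced here, but one common go flavour of this is the state-as-function pattern, where each state returns the next state. the sketch below is my own illustration of that pattern, not necessarily the linked style:)

```go
package main

import "fmt"

// stateFn is a state: it does some work on the machine and
// returns the next state; returning nil halts the machine.
type stateFn func(*machine) stateFn

type machine struct {
	input string
	pos   int
	words []string
}

// stateSkipSpaces advances past blanks, then hands off to stateWord.
func stateSkipSpaces(m *machine) stateFn {
	for m.pos < len(m.input) && m.input[m.pos] == ' ' {
		m.pos++
	}
	if m.pos == len(m.input) {
		return nil // end of input: halt
	}
	return stateWord
}

// stateWord consumes one word, records it, and loops back.
func stateWord(m *machine) stateFn {
	start := m.pos
	for m.pos < len(m.input) && m.input[m.pos] != ' ' {
		m.pos++
	}
	m.words = append(m.words, m.input[start:m.pos])
	return stateSkipSpaces
}

func main() {
	m := &machine{input: "  tiny state machine"}
	// the driver loop: run states until one returns nil
	for state := stateSkipSpaces; state != nil; {
		state = state(m)
	}
	fmt.Println(m.words) // prints [tiny state machine]
}
```

                                                                                    the nice property is that the transition table is just ordinary code: each state names its successors explicitly, and the driver loop is three lines.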

                                                                                    1. 4

                                                                                      Searching for HSMs, I stumbled across this description of behavior trees, which are new to me: https://web.stanford.edu/class/cs123/lectures/CS123_lec08_HFSM_BT.pdf. Neat construction.
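                                                                                      The core construction is small enough to sketch. This is my own minimal illustration, assuming just Success/Failure; real behaviour-tree libraries also have a Running status so that long actions can span multiple ticks:

```go
package main

import "fmt"

// Status is what a node reports each tick.
type Status int

const (
	Failure Status = iota
	Success
)

// Node is anything that can be ticked.
type Node interface{ Tick() Status }

// Leaf adapts a plain function into a leaf node (a condition or action).
type Leaf func() Status

func (l Leaf) Tick() Status { return l() }

// Sequence ticks children in order, failing on the first failure
// ("do A, then B, then C").
type Sequence []Node

func (s Sequence) Tick() Status {
	for _, c := range s {
		if c.Tick() == Failure {
			return Failure
		}
	}
	return Success
}

// Selector ticks children in order, succeeding on the first success
// ("try A, else B"); this is what gives trees their fallback logic.
type Selector []Node

func (s Selector) Tick() Status {
	for _, c := range s {
		if c.Tick() == Success {
			return Success
		}
	}
	return Failure
}

func main() {
	doorOpen := false
	tree := Selector{
		Sequence{
			Leaf(func() Status { // condition: is the door open?
				if doorOpen {
					return Success
				}
				return Failure
			}),
			Leaf(func() Status { fmt.Println("walk through"); return Success }),
		},
		Leaf(func() Status { fmt.Println("open the door"); return Success }),
	}
	tree.Tick() // prints "open the door"
}
```

                                                                                      Sequences give you “do A then B”, selectors give you fallbacks, and composing the two is where the behaviour comes from.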

                                                                                      1. 6

                                                                                        Would recommend this pre-print book for a thorough discussion of behaviour trees.

                                                                                        Edit: And this old paper from Harel (PDF) for hierarchical state charts.

                                                                                        1. 1

                                                                                          yes, i am familiar with harel’s paper, and have used that as a basis for design & development of hsm’s used @day-job. thanks for the book link on behavior trees. looks pretty interesting.

                                                                                          as a side note, for parsing, specifically network protocols where things are strictly (a very loose definition of strict fwiw) specified through rfc’s etc., i have relied mostly on ragel. would love to hear your (and other folks’ as well!) experiences/wisdom in this regard as well.