1. 2

    I like how the article started:

    This is great if you’re used to reading it and if you know which parts to ignore and which parts to pay attention to, but for many people this is too much information.

    Now, I thought it was going to be a bit more along those lines, but it looks more like a simplified version of, say, intodns.com or similar tools that already exist. They cleaned the information up, but what else can you do with it now?

    So it should either stay exactly the same: a super-simplified look into DNS records, for info purposes for muggles - or it should add more features to be actually useful for something else.

    I do bet that the author had a lot of fun making the tool though :)

    1. 10

      hi, author here! I’m curious about what you were expecting – I’m considering adding more features (like maybe reverse DNS), but for me just looking up the value of a given record for a domain is all I want to do 90% of the time.

      1. 2

        I think choosing a name server would be useful.

        1. 2

          I’ve been thinking about that! Can you give an example of a different nameserver you’d like to use instead? (Like, do you want to query the authoritative nameserver for the domain? Or do you want to just look at the results from a different public recursive DNS server?)

          1. 2

            Maybe you want to make requests against your own ISP’s nameserver, whatever it is. Or against the Mullvad public non-logging DNS server, at an IP address I can’t remember off the top of my head. Or maybe you’re setting up your own nameserver and want to test it.

            1. 2

              some people simply do not like to use Google infrastructure, so at least having an alternative would be nice.

              1. 1

                I am mainly interested in and

                1. 1

                  One DNS issue I’ve had to debug before is where there are several DNS servers and one of them is responding with the wrong value. So I look up the relevant NS records and then query each of them individually to see if there’s a mismatch.

                  Another thing I’ve cared about in the past is split-horizon DNS, where a particular server has one A record on the public internet and a different A record inside our LAN, so it could be seen from both. (In retrospect, perhaps doing this with routing entries to make the external IP address reachable on our LAN would have been better?)

                  It comes up rarely, but: “I’m testing our in-house custom DNS server software before we deploy it” (yeah, that sounds like a bad idea, doesn’t it? It was.), so obvs I wanna send queries to it instead of the live NS servers.

                  The last common use I can think of was “dig +trace $name” to quickly get a view of the whole chain from the TLD on down. I used to use this to diagnose issues where there were 3 or more levels of domain servers.
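
To make “query each of them individually” concrete at the wire level, here is a minimal sketch in Python of hand-building a DNS question so it can be aimed at any chosen server, which is what dig’s @server flag does; the packet layout follows RFC 1035, and example.com is just a placeholder:

```python
import secrets
import socket
import struct

def build_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query packet (qtype 1 = A record, class IN)."""
    # Header: random ID, "recursion desired" flag, one question, no other records.
    header = struct.pack(">HHHHHH", secrets.randbits(16), 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels ("example.com" -> \x07example\x03com), zero-terminated.
    labels = name.rstrip(".").split(".")
    qname = b"".join(bytes([len(label)]) + label.encode() for label in labels)
    return header + qname + b"\x00" + struct.pack(">HH", qtype, 1)

def ask(server: str, name: str, timeout: float = 2.0) -> bytes:
    """Send the question to one specific nameserver over UDP port 53."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(build_query(name), (server, 53))
        return sock.recvfrom(512)[0]
```

Sending the same question to each address returned by the NS lookup and diffing the raw answers would surface the out-of-sync server (reply parsing is omitted here).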

          1. 3

            Those were some of the things that you can do with nano that a lot of people don’t seem to realise. It is definitely not up to the level of vim or emacs, but it is a very capable editor.

            I think that this is just the reason why so many people are surprised when someone invests time into nano: If you’re ready to learn an editor and invest your time, why not use something as widespread as vi or as powerful as Emacs? As a TA, I usually recommend people use nano over vim when starting to work in a shell environment, but if they seem interested in learning more, it seems to make sense to look elsewhere?

            1. 3

              I think it depends what the person wants. Nano is in my reckoning quite equivalent to Notepad++, and there are a lot of people out there who find Notepad++ the comfiest environment for them. The features in the article are essentially equivalent to what I use in vi/Vim, except with a modeless paradigm. Vim offers more (I don’t think vi does), but most people really don’t need more. Emacs is slightly different, but once again those features are the core of what I do. The Emacs advantage comes from being a whole interface rather than an editor. In my experience, Nano is as widespread as vi these days, and out of the box is friendlier.

              I don’t know. The point of the article wasn’t really to convince anyone to switch to Nano. Personally I use Emacs and am very content, but I was fed up with Nano being used as a punchline by people who hadn’t even actually looked at it.

              1. 1

                Oooh, I wouldn’t go nearly so far as to call nano equivalent to Notepad++. Notepad, yes, but the ++ has so many “advanced” features, most of them out of the box, that it’s simply not in the same category.

                At least that’s my opinion. But of course, what you use your editors for and what I use them for are probably vastly different.

                1. 2

                  I didn’t necessarily mean in terms of features (and in terms of features, just skimming the documentation, Notepad++ appears to have significantly more than Nano, and more than it itself had last time I used it). I do object to the Notepad comparison, which was what sparked the entire article in the first place. Notepad only allows the editing of plain text files. Nano has multiple features to benefit programmers and fits pretty well in a Unix environment.

                  By comparing Nano to Notepad++, I was definitely focusing on the “comfiness” aspect. That is, a primary goal is to be intuitive as well as powerful. Notepad++ fits in well on Windows and behaves as one would expect a Windows program to behave. I believe that Nano is the Unix equivalent.

                  Out of interest, what do you use your editors for? You’re probably right, I don’t program professionally and mostly do typesetting.

            1. 17

              Might be an unpopular opinion but I’d pick nano over vim any day.

              1. 2

                It may be unpopular but it gets the job done. Personally I’d pick vim, but that’s decades of experience to lean on.

              1. 3

                Now I want to go build emacs on my new laptop! I am curious where the Ryzen 7 4700H stands in this comparison.

                Edit: So, I have an old box with an i7-4790k and 32GB of RAM, and a brand new Tuxedo Pulse 15 laptop with a Ryzen 7 4700H and 32GB of RAM. The RAM is obviously a different generation, and the new baby has NVMe while the box has a SATA SSD, if that matters. I don’t have X installed (appropriate to today’s topic, I guess), so I had to tweak the ./configure line a bit. I’m happy to report that my new laptop builds emacs faster (with -jTHROUGH_THE_ROOF) than my box. Now I finally have justification to upgrade the box (well, no. But still).

                Anyway, if anyone’s curious:


                i7-4790k:

                [zlatko@battlestation ~/Projects/emacs]$ ./configure --with-xpm=ifavailable --with-jpeg=ifavailable --with-gif=ifavailable --with-tiff=ifavailable --with-gnutls=ifavailable
                time make
                real	4m23.558s
                # with all the cores
                time make -j8
                real	0m32.292s

                Ryzen 7 4700H:

                [zlatko@null-boundary ~/projects/emacs]$ ./configure --without-makeinfo --with-x-toolkit=no --with-xpm=ifavailable --with-jpeg=ifavailable --with-png=ifavailable --with-gif=ifavailable --with-tiff=ifavailable --with-gnutls=ifavailable
                time make
                real	8m36.881s
                time make -j16
                real	0m25.751s
                1. 1

                  Interesting that you’re getting such low times. Just tried a build, it spent way too much time single-threadedly “Processing OKURI-NASI entries”… There’s definitely different configuration involved on different machines.

                  1. 1

                    On an “AMD Ryzen Threadripper PRO 3945WX 12-Cores” I got:

                    make -j24  262.39s user 21.60s system 890% cpu 31.899 total                                                                                       

                    May be a little slow because this partition is on an old SATA SSD instead of NVMe.

                  1. 1

                    To those in the know - is the input lag an X thing? This wins my vote.

                    1. 8

                      I would love to see a side-by-side with Catala Lang, which was posted here a while back, though only the Git repo: https://lobste.rs/s/b74svy/catalalang_catala

                      1. 27

                        Hi! Author of both Mlang and Catala here :) So Catala is basically an evolution/reboot of the M language, but this time done right using all the PL best practices.

                        1. 7

                          Wait, are you for real? That is absolutely fascinating! My wife is a lawyer (which makes me not a lawyer) and I am very interested in these types of intersections. Namely where a highly regimented and regulated domain gives rise to some type of formalism once exposed to CS through some “interdisciplinary process”.

                          I have studied DSL design peripherally but would really like to pick your brain about some things. I did once, long ago, design a policy language. Are you open to additional discussions and collaboration?

                          1. 10

                            Ha ha ha, yes, this area is fascinating. I have the impression that there are a lot of people in legaltech all trying to make a DSL to express parts of the law, but with no clue about how to properly make one.

                            I am open to discussions and collaboration, moreover both Mlang and Catala are open-source and accept contributions. Hit me up using the email in the Mlang paper for instance :)

                            1. 2

                              As a lawyer designing my own DSL ;) I would love to know how using Mlang has affected legislation. For example, how do you deal with the law being changed? Does your parliament create updates as “diffs” or as already-“merged” texts? Do you use lawxml? Soo many questions!

                              1. 4

                                French laws are usually written in terms of “diffs”. Also, I had made a prototype that warned when articles of law your program relied on were about to expire: https://twitter.com/DMerigoux/status/1252914283836473345?s=19. I don’t use any form of XML; I just copy-paste the law text to start writing a Catala program. XML would not improve the way Catala programs are written, since the XML structure follows not the logical structure of the law but rather its formatting structure, which we don’t care about when translating it to executable code.
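
(Not the actual prototype, but a toy sketch of how such an expiry warning can work once each encoded article carries the date its current text ceases to apply. The article names and dates below are invented for illustration.)

```python
from datetime import date, timedelta

# Hypothetical mapping: article identifier -> date its current version expires.
ARTICLES = {
    "CGI art. 197": date(2025, 12, 31),
    "CGI art. 200": date(2099, 1, 1),
}

def expiring_soon(articles: dict, today: date, horizon_days: int = 90) -> list:
    """Return the articles whose current text expires within the given horizon."""
    limit = today + timedelta(days=horizon_days)
    return sorted(name for name, expiry in articles.items() if expiry <= limit)
```

A program depending on articles flagged by such a check knows its own output is about to become legally stale.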

                          2. 3

                            Hi Denis - nothing constructive to say except that I am a British CS student and my friends and I are big fans of your work! In fact I think a friend of mine will be basing his undergraduate thesis on your ideas :-)

                            1. 3

                              Thanks Jack! Well if your friend does end up basing his undergrad thesis on Catala or else please drop me an email, I’ll be happy to give feedback or suggest interesting things to look at.

                            2. 2

                              I want to just praise you for the time and effort you put into this space. I’ve recently got into “hobbyist” law myself, specifically Canadian law (http://len.falken.ink/law/101.txt), and instantly had the same thoughts: where are the formal proofs? :) Sure there are tax calculators, and some will creators, but are they rigorous? Can they tell us other properties of a situation?

                              I’m 100% going to play with Catala. This is technology worth spending time on because law governs our every day lives.

                              1. 2

                                but this time done right using all the PL best practices.

                                Does this mean that DGFiP is migrating to something one of the implementers considers not done right?

                                1. 4

                                  I suppose it’s easier to migrate step by step: Improve the tooling, so that everything can be in the open without security concerns and so the system can evolve more easily from its apache cgi-bin roots. That’s what MLang seems to offer.

                                  Once that’s in place, there can be further steps to improve the language (e.g. by introducing Catala) because the foundations are state of the art again. And even if that doesn’t happen, the system is still better off than before because it’s a single system instead of a single system + 25 years of wrappers that extend it ad-hoc.

                                  1. 3

                                    I could not have said it better!

                                  2. 2

                                    Migrating to Mlang improves the compiler, but the M language stays the same. For instance, in DGFiP’s M, there are no user-defined functions. And the undefined value in M is a constant reminder of the “billion dollar mistake”. So yes, we can definitely improve the M language from its 1990 design :)

                                  3. 1

                                    I’m just curious: who is driving all this? Is this simply something you one day decided to go and implement, or were you approached by someone to do this seemingly huge project? How do you get it financed, did you have backing from the start?

                                    Fascinating stuff!

                                    1. 10

                                      I started looking into this after watching this talk: https://youtu.be/EshxZVMURt4. I always wondered whether it was possible for me to play with formal methods outside the traditional application domains like security or safety-critical embedded systems. Then I fell into a rabbit hole :) I started with a Python prototype of French law encoded into SMT, then moved on to try and use the DGFiP code and ended up coding Mlang, then created Catala as the next logical step. I created these in my spare time during my PhD and was helped by some friends who contributed to the open-source repos. I’m only now starting to have institutional backing! During a French PhD, your funding is secured for the whole duration from the start, so I didn’t have to worry about it and could focus on other things. I would say stable, long-term, unconditional funding is what enabled me to create all this. In my opinion, research should promote that instead of the myriad of tiny little funding sources, each requiring a lot of paperwork to fill out. But in that regard I go against the zeitgeist.

                                1. 22

                                  These days, as little as possible. I had a server (a Xen VPS and then a dedicated server) online for something like 10 years, but realized that administering it was occupying too much of my time (a few hours per week) and it was just too expensive ($40/mo). I ran my own mail configuration (Postfix + Courier + enough anti-spam/anti-malware to continually use literally half of my RAM), Apache with probably a dozen sites (PHP, mostly WordPress or my custom sites from long ago), a database server, some IRC bouncer software, iodine DNS tunneling, and a bunch of other crap.

                                  My inattentiveness to it caused some pretty serious data loss: the hard drive died in 2016 and while I’d recently manually backed up my blog by way of converting the WordPress database to Markdown files intending to switch to Jekyll, I learned the hard way that my automated off-site backups using rdiff-backup hadn’t succeeded for nearly two years. Fortunately, all of my email users except me were using a client that kept a full local mirror (most of them Thunderbird; I’d switched to Airmail, argh). So, I lost email from mid-2014 through mid-2016. That sounds awful, but it’s probably not. I’ve not… missed anything. I’ve not even imported the backed-up mail. Nothing of real, immediate value was lost: the only thing measurably impacted was my digital packrat pride.
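
The general lesson in there, that backups fail silently, is cheap to guard against. A minimal sketch, with a hypothetical path and not tied to rdiff-backup, of a staleness check that could run from cron on the backup destination:

```python
import time
from pathlib import Path

def backup_age_days(backup_dir: str) -> float:
    """Days since anything under backup_dir was last modified."""
    mtimes = [p.stat().st_mtime for p in Path(backup_dir).rglob("*") if p.is_file()]
    if not mtimes:
        return float("inf")  # empty tree: treat as infinitely stale
    return (time.time() - max(mtimes)) / 86400
```

Wiring something like backup_age_days("/backups/blog") into a daily cron job that alerts once the age passes a couple of days would have caught the two-year gap early.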

                                  Since then, I’ve decided time and again that it’s nearly always a more efficient use of my time to pay someone else to be attentive. Fastmail costs me around $80/yr for my needs. Google gets around $3/mo for me to dump up to 100 GB of stuff I need to push to others via Drive. Dropbox keeps bothering me about paying but I got so much free space that I use it as a hot landing zone for some automation and backups for apps that integrate with it. The exception is in cost-prohibitive areas, such as backing up my photos and videos. I’ve got a huge NAS for that and budget about $500/year amortized for that data storage. The math works out in my favor versus just about any online service.

                                  In my home network, on that NAS, I’m running Home Assistant for some minor home automation, Miniflux for feed reading, some VPN software, a GitLab Runner so that my side gig’s builds keep building despite the recent reduction of free plan CI minutes, and some other minor things.

                                  Running less stuff means adminning less stuff and reserving my focus for building software and communities.

                                  1. 5

                                    Thank you for posting this.

                                    It’s SUPER important to be very careful about the risk/reward ratio for anything you want to self host.

                                    Mail is SUPER high risk IMO and honestly, from where I sit, very low reward for self-hosting. Sure, you retain custody of all the bits on your side, but you can retain that same custody by using Fastmail and a good IMAP client.

                                    I love Fastmail too BTW and wish they got more love :)

                                    You’ll note that every single one of those services above are:

                                    • Not mission critical to my household AT ALL. As in, they could be down for weeks and I might notice but I wouldn’t be blocked. At all.
                                    • Backed up at several levels on my NAS. For critical VMs, I have them running off a ProxMox storage hosted by my NAS, which is itself backed up to Backblaze B2.
                                    • For services where the data they front is way more important than the service itself (e.g. Gitea) I back the data for those services up in their own NFS share on the NAS. I also have a separate backed up share for container configs.

                                    So I feel like I’ve got my risk/reward dialed in pretty well.

                                    1. 3

                                      I’ve got a huge NAS for that and budget about $500/year amortized for that data storage

                                      I’m curious, what data volumes are we speaking about here? For me, I’m still okay with Google, currently on 200GB tier but even a full TB would still be cheaper and safer than doing it at home.

                                      1. 2

                                        At 200GB? You’re right. You won’t be able to beat a Gdrive or Dropbox.

                                        However when you’re talking terabytes, doing it at home is the only way to fly. I have an 8GB Synology NAS with their default 2 drive redundant setup (Don’t ask me what RAID level it is. I dunno. I took the defaults :) and I back up my 5TB worth of data on Backblaze B2 for ~$20/month. That works pretty well for me.

                                        1. 1

                                          However, when you’re talking terabytes, doing it at home is the only way to fly. I have an 8TB Synology NAS with their default 2-drive redundant setup (don’t ask me what RAID level it is, I dunno, I took the defaults :) and I back up my 5TB worth of data to Backblaze B2 for ~$20/month. That works pretty well for me.

                                          I’ve got some geolocational redundancy with off-site backups of critical, irreplaceable data. One of those off-site NAS devices is EoL now and will get replaced this year.

                                      1. 2

                                        There’s no garbage collector (although C++ allows for one, it is optional, and I’m not aware of any implementations that provide one).

                                        TIL. Is there anything written theorizing on what this might look like?

                                        1. 2

                                          I believe the idea is that it would work automatically, so code written for garbage-collected C++ would be normal C++ code but without the need to explicitly perform heap deallocations (via delete). A few requirements on pointer manipulation were added to support this and a few library functions were added to handle some special cases, but these would generally not be needed. As I said (I’m the author of the post) I’m not aware of any actual implementations of this. There is no requirement for implementations (compilers & runtime libraries) to support garbage collection.

                                          This stackoverflow answer has a few more details if you’re still curious: https://stackoverflow.com/a/15157689/388661

                                        1. 7

                                          This is why, whenever I’m writing anything complicated in bash, I open up my editor first. It saves me a lot of pain later. Sure, you can recover from that pain, and the OP’s solution was somewhat ingenious, but why not avoid the pain in the first place?

                                          1. 3

                                            Work smarter not harder :D

                                            1. 2

                                              Part of the problem with that is that you’re not working on anything complicated in bash. At least it doesn’t start out that way. So you just go “hmm, let’s find all those files, right… soooo… no, I don’t need this one, let’s pipe that out… Oh wait, if I now extract the path I could directly…” And your flow is just arrow-up, tweak, enter. I can totally see the scenario from the article as viable. So you got it running, but will it hold? “Let’s just keep it in screen for two days until the next drop, to check if it worked. If it does, well, we’ll write a proper daemon with all…” Then life comes along, and six months later you’re learning gdb :)

                                              1. 1

                                                My modus operandi is the same, except I stop it and copy it into a file regardless before trying the long run.

                                                Heck, I’d even work directly on an sh file and run that instead. I’ve been burnt enough times that making an sh file first is just the default for me.

                                            2. 1

                                              Yeah exactly… I cannot remember long commands so I just type them into a text editor by default. Pretty much anything more than “rm foo” goes in a script I save somewhere.

                                              I gave some examples in this post: Shell Scripts Are Executable Documentation. Although I have to admit it seems to serve better as documentation for myself than others, since there are a lot of people who don’t like reading shell scripts!

                                            1. 2

                                              This is not a node-specific problem; stale state is a matter of architecture. But yes, you can definitely get these and a lot more quirky problems with the event loop.

                                              1. 31

                                                X11 really delivered on the promise of “run apps on other machines and display locally.” XResources let you store your configuration information in the X server. Fonts were stored in your display server or served from a font server.

                                                X Intrinsics let you do some amazing things with keybindings, extensibility, and scripting.

                                                And then we abandoned all of it. I realize sometimes (often?) we had a good reason, but I feel like X only got to show off its true power only briefly before we decided to make it nothing but a local display server with a lot of unused baggage.

                                                (The only system that did better for “run apps remotely/display locally” was Plan 9, IMHO.)

                                                1. 11

                                                  A lot of these things were abandoned because there wasn’t a consistent story of what state belonged in the client and server and storing state on the server was hard. For this kind of remote desktop to be useful, you often had thin X-server terminals and a big beefy machine that ran all of the X clients. With old X fonts, the fonts you could display depended on the fonts installed on the server. If you wanted to be able to display a new font, it needed installing on every X server that you’d use to run your application. Your application, in contrast, needed installing on the one machine that would run it. If you had some workstations running a mix of local and remote applications and some thin clients running an X server and nothing locally, then you’d often get different fonts between the two. Similarly, if you used X resources for settings, your apps migrated between displays easily but your configuration settings didn’t.

                                                  The problem with X11’s remote story (and the big reason why Plan 9 was better) was that X11 was only a small part of the desktop programming environment. The display, keyboard, and mouse were all local to the machine running the X server but the filesystem, sound, printer, and so on were local to the machine running the client. If you used X as anything other than a dumb framebuffer, you ended up with state split across the two in an annoying manner.

                                                  1. 11

                                                    As someone who had to set up an X font server at least once, you didn’t have to have the fonts installed on every X server, you just had to lose all will to live.

                                                    But yes, X was just one part of Project Athena. It assumed you’d authenticate via Kerberos with your user information looked up in Hesiod and display your applications via X, and I think there was an expectation that your home directory would be mounted over the network from a main location too.

                                                    Project Athena and the Andrew Project were what could have been. I don’t think anyone expected local workstations to become more powerful than the large shared minis so quickly, and nobody saw the Web transforming into what it is today.

                                                    1. 4

                                                      At school I got to use an environment with X terminals, NFS mounts for user data, and NIS for authentication. It worked fairly well, and when you see something like that work, it’s hard to see the world in quite the same way afterwards.

                                                      As for the web, it’s true that it challenged this setup quite a bit, because it was hard to have enough CPU on large server machines to render web content responsively for hundreds of users. But on the other hand, it seems like we’re past the point of sending HTML/CSS/JS to clients being optimal from a bandwidth point of view - it’s cheaper to send an h264 stream down and UI interaction back. In bandwidth-constrained environments, it’s not unimaginable that it makes sense to move back in the other direction, similar to Opera Mini.

                                                      1. 2

                                                        Omg what a scary thought!

                                                      2. 2

                                                        Andrew was really amazing. I wish it had caught on, but if wishes were horses &c &c &c.

                                                        1. 1

                                                          The early version of the Andrew Window Manager supported tiling using a complex algorithm based on constraint solving. They made an X11 window manager that mimicked the Andrew WM.

                                                          Let me rephrase that: I know they made an X11 window manager that mimicked Andrew but I cannot for the life of me find it. It’s old, it would’ve been maybe even X10 and not X11…

                                                          So yeah, if you know where that is, it would be a helluva find.

                                                          1. 2

                                                            iirc scwm uses some kind of constraint-solving for window placement. but i am approx. 99% sure that that’s not what you are looking for.

                                                            it’s for that remaining 1% that i posted this message :)

                                                    2. 9

                                                      We didn’t abandon it; rather, we admitted that it didn’t really work and stopped trying. We spent our time in better ways.

                                                      I was the developer at Trolltech who worked most on remote X, particularly for app startup and opening windows. It sucked, and it sucked some more. It was the kind of functionality that you have to do correctly before and after lunch every day of every week, and a moment’s inattention breaks it. And people were inattentive — most developers developed with their local X server and so wouldn’t notice it if they added something that would be a break-the-app bug with even 0.1s latency, and of course they added that sooner or later.

                                                      Remote X was possible, I did use Qt across the Atlantic, but unjustifiable. It required much too much developer effort to keep working.

                                                      1. 1

                                                        I wonder how much of that pain was from legacy APIs? I spent some time about 15 years ago playing with XCB when it was new and shiny and it was possible to make something very responsive on top of it (I was typically doing development on a machine about 100ms away). The composite, damage, and render extensions gave you a good set of building blocks as long as everything involved in drawing used promises and you deferred blocking as long as possible. A single synchronous API anywhere in the stack killed it. I tried getting GNUstep to use the latency hiding that XCB enabled and it was a complete waste of time because there were so many blocking calls in the higher-level APIs that the mid-level APIs undid all of the work that you did at the lower level.

                                                      If, however, you designed your mid- and higher-level drawing APIs to be asynchronous, remote X11 performed very well, but getting people to adopt a completely new GUI toolkit seemed like too much effort.

That said, the way that you talk to a GPU now (if you actually care about performance) is via asynchronous APIs because bus round trips can be a bottleneck. A lot of the lessons from good remote drawing APIs are directly applicable to modern graphics hardware. With modernish X11 you can copy images to the server and then send sequences of compositing commands. With a modern GPU, you copy textures into GPU memory and then send a queue of compositing commands across the bus.
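The round-trip-avoidance pattern described above can be sketched abstractly. This is illustrative Python only, not real XCB or GPU code; `FakeServer` and its round-trip counter are invented for the example, but the shape matches how XCB lets you issue many requests before reading any replies:

```python
# Illustrative sketch: one round trip per call vs. batching.
# FakeServer is a stand-in for an X server or GPU command queue.

class FakeServer:
    def __init__(self):
        self.round_trips = 0  # each submit costs one network/bus round trip

    def submit(self, requests):
        """Process a whole batch of requests in a single round trip."""
        self.round_trips += 1
        return [req * 2 for req in requests]  # stand-in for real replies

def synchronous_client(server, n):
    # Blocking style: wait for each reply before issuing the next request.
    # With 100ms latency, n calls cost n * 100ms.
    return [server.submit([i])[0] for i in range(n)]

def pipelined_client(server, n):
    # Latency-hiding style: queue everything, then flush once.
    return server.submit(list(range(n)))

sync_server, pipe_server = FakeServer(), FakeServer()
assert synchronous_client(sync_server, 10) == pipelined_client(pipe_server, 10)
print(sync_server.round_trips, pipe_server.round_trips)  # 10 round trips vs 1
```

A single synchronous call hidden anywhere in the stack forces the first pattern, which is exactly how one blocking mid-level API can undo all the latency hiding done below it.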

                                                        1. 2

                                                          We at Trolltech avoided legacy APIs, provided our users with good APIs (good enough that few people tried to go past it to the lower-level protocol), and still people had problems en masse.

                                                          If your test environment has nanoseconds of latency, you as app developer won’t notice that the inner body of a loop requires a server roundtrip, but your users will notice it very well if they have 0.1s of latency. Boring things like entering data in a form would break, because users would type at their usual speed, and the app would mishandle the input.

                                                          Enter first name, hit enter, enter last name, see that sometimes start of the last name was added to the first-name field depending on network load.

                                                          Edited to add: I’m not trying to blame here (“fucking careless lazy sods” or whatever), I’m trying to say that coping with latencies that range from nanoseconds to near-seconds is difficult. A lack of high-latency testing hurts, but it’s difficult even with the best of testing. IMO it’s quite reasonable to give up on a high-effort marginal task, the time can be spent better.

                                                          1. 2

Note that a lot of the problems exist whether or not the code is part of the application process. If you use Windows Remote Desktop around the world, you have 100s of milliseconds of latency. If you press shift+key, it’s not uncommon to see the two events delivered out of order, with incorrect results. (I don’t know exactly how this happens, since a TCP connection implies ordering, so I suspect it’s really about client message construction and server message processing.) The engine somehow has to ensure logically correct rendering while avoiding making application UI calls synchronous. Applications may not be aware of it, but the logic is still there.

                                                            1. 1

                                                              We at Trolltech avoided legacy APIs, provided our users with good APIs (good enough that few people tried to go past it to the lower-level protocol), and still people had problems en masse.

                                                              I’ve never looked very closely at Qt, but I was under the impression that most of your drawing APIs were synchronous? I don’t remember promises or any other asynchronous primitives featuring in any of the APIs that I looked at.

                                                              1. 3

                                                                Neither really synchronous nor really asynchronous… There are three relevant clocks, all monotonic: Time proceeds monotonically in the server, in the app and for the user, and none of the three can ever pause any of the others as would be necessary for a synchronous call, and no particular latency or stable offset is guaranteed.

                                                                So, yes, Qt is synchronous, but when people say synchronous they usually have a single-clock model in mind.

There are no promises, but what some people consider synchronous drawing primitives don’t really work. Promises would let you write code along the lines of “do something, then when the server call returns, draw blah blah”. Synchronous code would be “do something; draw blah blah;”. Qt’s way is to react strictly to server events: the way to draw is to draw in response to the server’s asking for it, and you don’t get to cheat by anticipating what the server will ask for, and you can’t avoid redrawing if the server wants you to.

                                                                We were careful about the three monotonic times, so the default handling has always been correct. But it’s very easy to do things like change the destination of keyboard input, and forget that the events that arrive at the program after the change may have been sent from the server (or from the user’s mind) either before or after the program performed the change.

                                                                A tenth of a second is a long time in a UI. People will type two keystrokes or perform a mouse action that quickly. If you want to avoid latency-induced mistakes you have to think clearly about all three clocks.
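The form bug described earlier (the start of the last name landing in the first-name field) can be modeled as a toy routing problem. All names here are invented for illustration; the point is that events carry a server timestamp, and routing by that timestamp rather than by whichever field happens to be focused when the event is processed gives the correct result even when delivery order is scrambled by load:

```python
# Toy model: keystrokes and a focus change arrive at the app in network
# order, which under load need not match the order the user produced them.

def naive_route(events):
    """Route each key to whichever field is focused when it is processed."""
    first, last = [], []
    focused = first
    for kind, ts, ch in events:
        if kind == "focus":
            focused = last          # Enter was finally processed
        else:
            focused.append(ch)
    return "".join(first), "".join(last)

def timestamped_route(events, focus_ts):
    """Route each key by the server timestamp it carries."""
    first, last = [], []
    for kind, ts, ch in events:
        if kind == "key":
            (first if ts < focus_ts else last).append(ch)
    return "".join(first), "".join(last)

# User typed "Jo", hit Enter at t=100, then typed "Sm"; under load the
# focus change reached the app after the "S" keystroke had already arrived.
arrived = [("key", 90, "J"), ("key", 95, "o"),
           ("key", 110, "S"), ("focus", 100, None), ("key", 115, "m")]

assert naive_route(arrived) == ("JoS", "m")              # the reported bug
assert timestamped_route(arrived, 100) == ("Jo", "Sm")   # correct split
```

Real toolkits have it harder, since the focus change itself happens on the app's clock, not the server's, but this is the flavor of the three-clock bookkeeping involved.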

                                                        2. 2

People got and continue to get a lot of mileage out of remote X. Even on Windows, I believe you can use remote forwarding with PuTTY to an X server like VcXsrv.

The biggest killer of remote X was custom toolkits (as opposed to Athena and Xt) and, ultimately, GLX. The former is slow to proxy transparently; the latter is nigh impossible.

                                                          1. 1

Yeah. I feel that the problem with Xorg isn’t necessarily Xorg itself, but that a lot of its programmatic interfaces are kludgy, and the more complex use cases are specific enough that there has been very little documentation of them. It very likely would fulfill a lot of uses that people have no alternative for, but that knowledge has not been passed on. So instead people see it as a huge bloated mess, only partly because it is, but partly because they simply either don’t know or don’t care about those other workflows and uses.

                                                          1. 1

                                                            I think the article is missing one important line:

                                                            $ head -1 .Xresources 
                                                            ! This script blatantly based on https://aduros.com/blog/xterm-its-better-than-you-thought/
                                                            1. 1

Here’s this year’s edition, nothing censored. Largely the same as last year’s edition. The big one is an early 34” ultrawide curved LG (3440x1440), and next to it I added (again) my old Dell U2412M with its 16:10 ratio. Next to it all is my work workhorse, a fairly decent Dell laptop running corporate Windows. I don’t much like the laptop. What I’m especially happy about is coming soon: a Tuxedo Pulse 15 laptop I’ve ordered. It may even serve as a desktop replacement (to retire my old workstation with its i7-4790K), but we’ll have to see how much I like it first.

                                                                1. 1

                                                                  You say chaos, I say heaven :) That looks like some serious workspace.

                                                                  1. 1

A lot of the fun stuff isn’t visible, but almost everything is used day to day, or at least weekly, as part of Arcan development. Some of the major pieces (servers, android farm, … excluded):

                                                                    Weird Input / Output Devices:

                                                                    • Tobii 4C, 5C eye trackers
                                                                    • 3DConnexion 3D mouse
                                                                    • Captogloves
                                                                    • PS5 Dual Sense
                                                                    • Leap Motion
                                                                    • StreamDeck
                                                                    • Wacom Intuos tablet

                                                                    Capture/Measuring/Basic Electronics:

                                                                    • ElGato Game Capture HD
                                                                    • ElGato Camlink 4k
                                                                    • ColorHug
                                                                    • BladeRF
• Bus Pirate, Jtagulator, Bluetooth LE Friend, OpenVizla 3.2
                                                                    • LabNation SmartScope
                                                                    • SIGLENT 1102CML

                                                                    Displays (need to swap in something with DisplayHDR600 at least):

• AOC G2460PF (for FreeSync / 144Hz)
                                                                    • Asus VG27BQ (for GSync / HDR10 / 165Hz)

Laptops:

                                                                    • Thinkpad Yoga X1 (3rd gen)
                                                                    • Asus ZenBook 15 (For ScreenPad and nvidia/intel hybrid graphics)
                                                                    • Macbook Pro 2015 (“Last time I’ll buy / use Apple laptops”, OSX testing)
                                                                    • Toshiba c380 Chromebook (lower-end target)
                                                                    • Surface Go

                                                                    E-ink/Mobile/Specialized etc:

                                                                    • GPD Pocket 2
                                                                    • reMarkable2
                                                                    • Kobo Forma

VR:

                                                                    • Rift betas + CV1
                                                                    • PSVR
                                                                    • Vive, Vive-Pro
                                                                    • Reverb G1, G2
                                                                    • assortment of SBCs (jetson, most of the fruit-named, computesticks), a few clusterboards fuzzing arm-stuff.
                                                                1. 6

                                                                  My workstation is fairly standard.

                                                                  One monitor hooked up to a kvm switch, the switch hooked up to my work/development laptop and to my gaming laptop.

Dev Laptop is a System76 Lemur Pro

                                                                  Gaming Laptop is a System76 Oryx Pro

Both running Pop!_OS 20.04 LTS currently.

                                                                  WM: i3

                                                                  IRC Client: weechat

                                                                  Shell: bash

Command Executor: rofi

                                                                  Status bar: polybar

                                                                  For password management I use password-store and the pass-clip extension to hook into rofi.

                                                                  Keyboard: https://ibb.co/J3bz4zy

                                                                  KVM Switch: https://ibb.co/gT91nJ0

                                                                  Laptop Stand: https://ibb.co/dkWmbS7

                                                                  ScreenShot: https://ibb.co/fqfm96k

                                                                  1. 3

                                                                    Ha! I have a Lemur Pro (lemp9) as my personal laptop and an Oryx Pro (oryp4) as my work machine!

                                                                    1. 2


                                                                      Yay System76! I just bought a Thelio and am SUPER delighted with it. Both the hardware itself and the excellent pre and post sales support.

                                                                      I also love what they’re doing with Pop!_OS - the tiling window manager Gnome extension is a nice touch, although some accessibility bugs may have me falling back to my usual choice of Plasma or Elementary.

                                                                      1. 1

                                                                        I’ve ordered myself a Tuxedo Pulse 15 (Tuxedo also works with a Clevo, or in this case Tongfang shell), can’t wait to get it and see what it looks like.

                                                                        1. 1

                                                                          Fantastic! I think this trend towards Linux supporting hardware vendors is a SUPER exciting development for broadening the accessibility of open source operating systems.

                                                                          Now if only the non KDE desktops would get on the accessibility bandwagon, we’d be in good shape :)

                                                                    1. 9

                                                                      Sane atomic file I/O is totally broken: https://danluu.com/deconstruct-files/
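The widely used workaround (the standard recipe, not something from the linked article specifically) is to write to a temporary file in the same directory, fsync it, rename over the target, and fsync the directory, treating an fsync failure as fatal rather than retrying. A minimal POSIX-assuming sketch:

```python
import os
import tempfile

def atomic_write(path, data):
    """Write data (bytes) to path atomically: readers see either the old
    file or the new one, never a partial write. An fsync failure is raised,
    not retried, since after a failed fsync the page-cache state is
    unreliable (the problem the linked article describes)."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)  # temp file on same filesystem
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # flush file contents to stable storage
        os.rename(tmp, path)       # atomic replace on POSIX filesystems
        dirfd = os.open(dirname, os.O_RDONLY)
        try:
            os.fsync(dirfd)        # persist the directory entry as well
        finally:
            os.close(dirfd)
    except BaseException:
        try:
            os.unlink(tmp)         # best-effort cleanup of the temp file
        except OSError:
            pass
        raise
```

Even this recipe only narrows the window; as the article notes, error reporting from fsync itself has historically been unreliable across Linux and the BSDs.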

                                                                      1. 1

                                                                        As Linux is idolized by many, it is great to see so many articles that hint at how bad Linux really is.

                                                                        The illusion needs to be broken, for progress to happen.

                                                                        1. 6

                                                                          From the linked article: “OpenBSD and NetBSD behaved the same as Linux (true error status dropped, retrying causes success response, data lost)”

                                                                          So while I agree Linux isn’t ideal, and is still broken in many respects, to claim it’s only Linux is also incorrect.

                                                                          1. 2

                                                                            to claim it’s only Linux

                                                                            But I never claimed it is only Linux?

                                                                            Linux just happens to be way more popular than the other systems mentioned.

                                                                          2. 4

I don’t think it’s idolized, at least not as infallible. If anything, it is idolized because it is open source and people can do these analyses. There are zealots preaching Linux superiority for one reason or another, but they’re mostly focused on the Microsoft hate, in my experience. I was personally never such a zealot, and I use Linux simply because it works best for me out of all the options.

But your experience may vary, and maybe I’m wrong. I’m just saying that pointing out flaws in a thing does not mean the thing is bad.

                                                                        1. 20

                                                                          I thought this was going to be about CPU activity but it’s regarding the network activity from each system. Unsurprisingly Windows is more “chatty”, but to be honest, less so than I expected and there aren’t really any surprises. A few notes from skimming the article as to some connections the author seems unsure about:

                                                                          This is presumably the default DNS domain for Windows when not connected to a corporate domain. The Windows DNS client appends the primary DNS domain of the system to unqualified queries (see DNS devolution for the grotty details).

                                                                          As for the queries, wpad will be Web Proxy Auto-Discovery (which is a security disaster-zone, but that’s another story), the ldap one is presumably some sort of probe for an AD domain controller, and the rest I’m guessing are captive portal or DNS hijacking detection, which could be either Windows or Chrome that’s responsible.

                                                                          No chance this is Windows itself. Pretty much guaranteed to be the Intel graphics driver, specifically the Intel Graphics Command Center which was probably automatically installed.

                                                                          The 4team.biz domains are definitely not Microsoft but some 3rd-party vendor of software within that ecosystem. So it turns out there’s at least one legitimate company out there that actually uses a .biz domain!

                                                                          The rest are largely telemetry, error reporting, Windows updates, PKI updates (CAs, CRLs, etc …), and various miscellaneous gunk probably from the interactive tiles on the Start Menu. Microsoft actually does a half-decent job these days of documenting these endpoints. A few potentially helpful links:

                                                                          1. 2

I thought that a bunch of these are moving to DNS-over-HTTPS with built-in resolver servers, which would then completely bypass his private DNS server?

                                                                            1. 2

                                                                              There was another thing that surprised me, namely that Windows appears to connect to a Spotify-owned domain. I asked the author if he had installed Spotify, which he hadn’t.

                                                                              1. 4

                                                                                Isn’t there a tile for Spotify in W10 by default?

                                                                            1. 23

                                                                              As someone who has to use the Google Cloud UI multiple times a day I was excited to see an article with a performance analysis. This UI is the only webapp I use that regularly kills chrome browser tabs. When it’s not killing tabs it’s giving me multi-second input latency or responding to actions I took several seconds ago.

                                                                              Unfortunately this analysis didn’t go into the depth I’d like. I’d love to see a more opinionated deeper analysis. For instance if I’m reading the analysis correctly it takes 150ms to load the first spinner icon. That’s actually not a great number given it should be served from the CDN. JS parsing and compilation takes 250ms and 750ms respectively. Honestly from a user perspective that’s not even that bad. If the page took 1.15s to load I’d be pretty happy. Then there’s 1s of waiting for the initial angular render. So we’re at a little over 2s.

                                                                              That’s not so bad.

                                                                              Oh wait, it’s 2s until the 2nd spinner. Things just go downhill from there.

All in all, their recommendations are pretty weak: change the priority of a piece of content, remove a bit of unused code. None of those things would fix the UX disaster. Google Cloud Console is like the infamous Fool’s Gold sandwich that involves a jar of peanut butter, a jar of jelly, and a pound of bacon, and this article is recommending they use low-salt bacon.

                                                                              1. 4

As someone else who uses the console daily, I completely agree. Navigating through the GKE workloads takes several seconds per click, even when just going back. Using the browser’s built-in back button dumps you on the wrong page half the time.

                                                                                I also once looked at Firefox’s task manager view and saw that the console was using 5.5GB of RAM across 5 tabs! I don’t understand how a team can allow such egregious memory leaks.

                                                                                1. 1

It’s not so much a memory leak as it is just simple misuse. The article points out some problems: e.g. they load the same 200 KB JS object in two places. This is a problem because 1) if it were fetched as JSON, the scripts loading it would benefit from one another and from the browser cache, and 2) the thing now gets instantiated twice in memory. So that looks very likely like 400 KB of possibly unneeded stuff per tab. Like I’ve said, not strictly a memory leak, just misuse. (Although from a team that maintains cloud infrastructure, you could argue that a client this badly done effectively is a memory leak.)

                                                                                  1. 2

                                                                                    What I’m specifically talking about is definitely a memory leak, it’s different from what you described. I frequently open the details for a Deployment on the GKE workloads page to check the status of a code change and to look at logs. I usually leave the tab open because navigating to that page is so slow.

Over time it creeps up in RAM usage; the worst I saw was a single page taking 2.45GB of RAM. It must be polling for updates in the background and never cleaning up the old state. What’s also amazing to me is that I can run kubectl describe foo and it takes about a second with, pessimistically, 100KB of output data, yet just clicking the refresh status button on the already loaded page with the same data takes several seconds.

                                                                              1. 2

                                                                                This kind of looks like a useful thing. Kind of like Publii but primarily for developers.

                                                                                1. 1

Thanks for your kind words! You’re right on: Meli basically stands as your deployment platform. Once you’ve built your static site, just upload it to Meli from your CI (or anywhere) using our CLI, and it’s instantly available at your-site.your-meli.com. You can also have preview branches like main.my-site.mymeli.com or dev.my-site.mymeli.com, so it is useful when you want previews. Once you’re done developing, all you have to do is point yourdomain.com to yoursite.yourmeli.sh, and you’re done :) Plus you get automatic HTTPS and can serve thousands of requests per second on a cheap VPS, so it’s also very efficient thanks to Caddy :)

                                                                                1. 26

                                                                                  Original CentOS founder Gregory Kurtzer is starting a new 100% compatible RHEL rebuild called Rocky Linux https://github.com/hpcng/rocky

                                                                                  1. 7

Now let’s just hope that the people complaining about IBM’s move with CentOS will go and pay for monthly support, and donate money, time, and equipment, to get everything set up.

                                                                                    1. 2

                                                                                      This is very useful information, thank you :) .

                                                                                      I really think your comment should be the top comment.