1. 3

    It’s nice, and I respect rxi. But this has no AT-SPI 2 integration, nor libatk integration. So you’d have to hand-roll any accessibility yourself.

    1. 4

      Would you really expect that in 1100 lines of C code?

      1. 5

        If some developer, trying to fight the rising tide of complexity, develops an app using this or a similar toolkit, for a task that doesn’t have inherent accessibility barriers, and as a result some blind person (or person with another disability) can’t do their job, I won’t care how few lines of code the toolkit required. Some problems are inherently complex, and developing a user interface that accommodates the full range of human abilities and needs is one of them. Yes, software complexity often gets out of control, but sometimes we react to that by going too far in the other direction.

        P.S. Yes, I’m assuming a world of opaque, proprietary applications that people are compelled to use to make a living. In a world of free software and open protocols, I suppose this wouldn’t be such a problem.

        1.  

          Yes, software complexity often gets out of control, but sometimes we react to that by going too far in the other direction.

          I do think there is a place for very simple software. Even if it isn’t a good idea to use it in a general setting, it still provides a great opportunity for people to learn about topics. It would be very scary to learn about compilers if the only available reference was GCC.

          To be clear, I don’t think that you are advocating for the removal of projects like microui. I just want to provide an alternative view on the usefulness of the project.

    1. 11

      Agile is a manifesto, a movement. The movement created/adopted/popularized a pile of tools from standups to storyboarding to kanban. You’re meant to pick and choose what fits your needs to help you get in sync with your clients/customers – and change your choices if they need change, when the goals change! That’s agility.

      Scrum is prescriptivist management that takes on the facade of Agile. It’s rigid, conformist, and designed for managers to have control over the processes and output. The absolute opposite of agility.

      1. 10

        You can’t have it both ways. Either Agile is a prescriptive system that can be implemented, described, analyzed and criticized.

        Or it’s a lofty goal, like “balance”, “correctness” or “wealth”, that by itself doesn’t give you any meaningful guidance on how to get there.

        Agile proponents seem to be perfectly fine talking about their pet system as a panacea that will Rid us of Bureaucracy and Bad Decisions, but when criticism comes, they quickly turn the tables and shout YOU WERE DOING IT WRONG.

        1. 4

          Scrum can be implemented well IMO, but it’s rare. Often it’s a top-down mandate from management seen as a silver bullet to solve their productivity problems (hint: it’s not a solution, it may even make things worse). I’m at a small startup and we’re using a very loose version of scrum (two-week sprints, story pointing, etc.). It wasn’t mandated; we decided it was a decent rough framework to keep track of our productivity and plan out milestones, and we don’t follow it to a T when it’s not working for us.

          1. 7

            I tend to think of this as “Scrum can be agile, in spite of itself, but it’s rare.”

          2. 2

            There is even a church about Agile: http://www.thechurchofagile.org/

          1. 15

            Could have been titled “The Worst Experience I’ve Had With Emacs” no? Seems like everything else is amazing.

            1. 19

              I admit the title is clickbait; pedantically, it is my best experience with Apple Silicon too!

              1. 18

                Why the compulsion to use a clickbait title? I have a feeling I know but I’d rather you say than me.

                As for the actual article: ARM processors in general seem to be the way of the future. The other day for fun I was looking up processors made in USA and Canada and so many companies are making ARM processors. Whoever is developing software for them is going to see the rewards in a decade.

                1. 7

                  This post was a fluff piece that I wrote on a whim, and I didn’t expect it to be anywhere near as popular as it is. I chose a clickbait title purely because I thought it would be amusing.

            1. 64

              I find Docker funny, because it’s an admission of defeat: portability is a lie, and dependencies are unmanageable. Installing dependencies on your own OS is a lost battle, so you install a whole new OS instead. The OS is too fragile to be changed, so a complete reinstall is now a natural part of the workflow. It’s “works on my machine” taken to its logical conclusion: you just ship the machine.

              1. 17

                We got here because dependency management for C libraries is terrible and completely inadequate for today’s development practices. I also think Docker is a bit overkill, but I don’t think this situation can be remedied with anything short of NixOS or unikernels.

                1. 8

                  I place more of the blame on just how bad dynamic language packaging is (pip, npm), intersected with how badly most distributions butcher their native packages for those same dynamic languages. The rule of thumb in several communities seems to be a recommendation to avoid using native packages altogether.

                  Imagine if instead static compilation was more common (or even just better packaging norms for most languages), and if we had better OS level sandboxing support!

                  1. 3

                    Can you explain what you find bad about pip/npm packaging?

                    1. 2

                      I don’t think npm is problematic enough to justify Docker. It always supported project-specific dependencies.

                      Python OTOH is old enough that by default (if you don’t patch it with pipenv) it expects to use a shared system-global directory for all dependencies. This setup made sense when hard drive space was precious and computers were off-line. Plus the whole v2/v3 thing happened.

                      1. 5

                        by default (if you don’t patch it with pipenv)

                        pipenv is…controversial.

                        It also is not the sole way to accomplish what you want (isolated environments, which are called “virtual environments” in Python; pipenv does not provide that, it provides a hopefully-more-convenient interface to the thing that actually provides that).

                  2. 4

                    Yes, unikernels and “os as static lib” seem the sensible way forward from here to me, also. I don’t know why it never caught on.

                    1. 4

                      People with way more experience than me on the subject have made a strong point about debuggability. Also, existing software and libraries assume that a filesystem and other facilities are there, which aren’t immediately available on unikernels, and rewriting them to be reusable on unikernels is not an easy task. I’m also not sure about the state of the tooling for deploying unikernels.

                      Right now it’s an uphill battle, but I think we’re just a couple of years away; we’ll get there eventually.

                      1. 6

                        Painfully easy to debug with GDB: https://nanovms.com/dev/tutorials/debugging-nanos-unikernels-with-gdb-and-ops - Bryan is full of FUD

                        1. 4

                          GDB being there is great!

                          Now you also might want lsof, netstat, strace, iostat, ltrace… all the tools which exist for telling you what’s going on in the application to kernel interface are now gone because the application is the kernel. Those interfaces are subroutine calls or queues instead.

                          It’s not insurmountable but you do need to recreate all of these things, no? And they won’t be identical to what people are used to.

                          I guess the upside is that making dtrace or an analogue of it in unikernel land is prolly easier than it was in split kernel userspace land: there’s only one address space in which you need to hot patch code. :)

                          1. 2

                            Perhaps some tools you’d put in as plugins, but most of the output from these tools would be better off being exported through whatever you want to use for observability (such as Prometheus). One thing that confuses a ton of people is that they expect to deal with a full-blown general-purpose operating system, which this isn’t. Take your lsof example: suppose I’m trying to figure out which port is tied to which process - well, in this case you already know, because there’s only one.

                            As for things like strace - we actually implemented something similar a year or so ago, as it was vital to figure out which applications were doing what. We also have ftrace-like functionality.

                            Finally, as for tool parity, you are right: if all you are using is Linux, then everything should be relatively the same, but if you jump between, say, macOS and Linux, you’ll find quite a few different flags and names.

                            1. 2

                              It obviously wouldn’t be “identical to what people are used to”, though; that’s kind of the point. And you don’t want a narrow slice of a full Linux system with just the syscalls you use compiled in. It’d be a completely different and much simpler system, designed without having to constantly jump up and down between privilege levels, which would make a regular debugger a lot more effective at tracking a wider array of things than it can now while living in the user layer of a full OS.

                      2. 1

                        Can you clarify further? With your distribution’s package manager and pkg-config, development in C and C++ seems fine. I could see Docker being more of a thing on Windows with C libraries, because package management isn’t really a thing on that OS (although MSYS2 seems to have pacman, which is nice). Also, wouldn’t you use the same C library dependency management inside the container?

                        Funnily enough, we are using Docker at work for non-C languages (dotnet/mono).

                      3. 6

                        That’s exactly what I said at work when we began Dockerization of our services. “We just concluded that dependency management is impossible, so we may as well hermetically seal everything into a container.” It’s sad that we’re here, but there are several reasons both technical and business related why I see containerization as being useful for us at $WORK.

                        1. 5

                          Which is what we used to do back in the 70s and 80s. Then operating systems started providing a common set of interfaces so you could run multiple programs safe from each other (in theory). Then too many holes started opening up: programs relying on specific global shared libs/state that would clash, and too many assumptions about global filesystem layout. Now we’ve got yet another re-implementation of the original idea, just stacked atop and wrapped around the old, crud piling up around us composed of yak hair, old buffer overflows, and decisions made when megabytes of storage were our most precious resource.

                          1. 1

                            What if I told you that you don’t need an OS at all in your Docker container? You can, and probably should, strip it down to the minimal dependencies required.

                            1. -1

                              This is amazing insight. Wow. :O Saving and sending this.

                                1. 1

                                  Thanks for the laugh :’)

                            1. 6

                              Very small and nice to look at, but keep in mind the context and time in which it was written, and be careful not to base a new WM on this: it uses Xlib, not XCB.

                              1. 1

                                xlib is built on top of xcb. See the xcb adoption page.

                                1. 4

                                  It is, and that simplified the xlib codebase, but that doesn’t matter much to library users. XCB is a very thin wrapper around the X11 protocol. XLib provides a set of abstractions above the protocol. The problem with XLib is that it provides the wrong abstractions. It builds synchronous APIs on top of a fundamentally asynchronous protocol, which means it’s almost impossible to write code on top of XLib that performs well on a high-latency link.
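
                                   To make the latency point concrete, here is a rough sketch of my own (not from this comment, assuming the stock libxcb calls) of the request/reply split XCB exposes: you queue several requests first and only collect the replies afterwards, so a high-latency link costs you roughly one round trip instead of one per call.

                                     #include <xcb/xcb.h>
                                     #include <cstdint>
                                     #include <cstdio>
                                     #include <cstdlib>
                                     #include <cstring>

                                     int main() {
                                         xcb_connection_t* conn = xcb_connect(nullptr, nullptr);
                                         if (xcb_connection_has_error(conn)) return 1;

                                         const char* names[] = { "WM_PROTOCOLS", "WM_DELETE_WINDOW", "_NET_WM_NAME" };
                                         xcb_intern_atom_cookie_t cookies[3];

                                         // Phase 1: queue all the requests; nothing blocks here.
                                         for (int i = 0; i < 3; ++i)
                                             cookies[i] = xcb_intern_atom(conn, 0, (uint16_t)std::strlen(names[i]), names[i]);

                                         // Phase 2: collect the replies; the round-trip cost is paid once, not per call.
                                         for (int i = 0; i < 3; ++i) {
                                             xcb_intern_atom_reply_t* reply = xcb_intern_atom_reply(conn, cookies[i], nullptr);
                                             if (reply) {
                                                 std::printf("%s -> %u\n", names[i], reply->atom);
                                                 std::free(reply);
                                             }
                                         }

                                         xcb_disconnect(conn);
                                         return 0;
                                     }

                                   The equivalent Xlib XInternAtom loop would block on a full round trip for every name, which is exactly the synchronous-over-asynchronous mismatch described above.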

                                  1. 1

                                    You’d think this problem would’ve motivated a different design back in the old days, when high-latency links were common, yet everyone used Xlib. What accounts for that?

                                    1. 3

                                      I think it is because the problem is greatly exaggerated. There are some specific functions that have the round-trip latency problem - XInternAtom, XGetWindowProperty, and querying extensions, which I know do this… there are probably more, but I can’t think of them right now.

                                      XInternAtom was particularly problematic in the day, so they added XInternAtoms - the batch version - and you can intern all the atoms you actually need in one go at startup. Problem solved. Extensions were not as popular back then, but again, a relatively small number of calls at startup in most programs and not a huge deal. (Higher-level toolkits may not use this optimization, though, or may request a lot more atoms than they actually need, making the problem look worse than it actually is.)

                                      XGetWindowProperty is used in protocols like copy-paste, but that’s in response to specific events and comes accompanied by data, so a little lag there isn’t a big deal… unless you use a higher-level library that treats the whole clipboard exchange as a synchronous operation. But Xlib doesn’t do that as a whole, it just gets the next chunk.

                                      So I’d question whether Xlib is actually the problem in the first place.
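
                                      For illustration, the batching looks roughly like this (my own sketch, using the stock Xlib call): everything the program needs is interned in a single call at startup instead of one blocking round trip per atom.

                                        #include <X11/Xlib.h>
                                        #include <cstdio>

                                        int main() {
                                            Display* dpy = XOpenDisplay(nullptr);
                                            if (!dpy) return 1;

                                            // XInternAtoms is the batch version: intern everything needed in one go at startup,
                                            // instead of paying a blocking round trip per XInternAtom call.
                                            const char* names[] = { "WM_PROTOCOLS", "WM_DELETE_WINDOW", "_NET_WM_NAME" };
                                            Atom atoms[3];
                                            XInternAtoms(dpy, const_cast<char**>(names), 3, False, atoms);

                                            for (int i = 0; i < 3; ++i)
                                                std::printf("%s -> %lu\n", names[i], atoms[i]);

                                            XCloseDisplay(dpy);
                                            return 0;
                                        }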

                                      1. 1

                                        So what was the point of XCB? To spray XML over everything and make names longer?

                                    2. 1

                                      What specific APIs would you blame? I just said this in another comment but I can’t think of very many.

                                    3. 3

                                      Yes, but (unless I’m very much mistaken) Xlib is supposed to be deprecated, and new applications are to use XCB directly.

                                  1. 1

                                    You can have this same experience in 2021 if you write GUI applications using the FOX toolkit. Applications look like Windows 95, and you can use the FOX control center so that all FOX based applications have the same theme and colors.

                                    1. 1

                                      That temporary is used to construct the parameter value s in the set_s. The argument to the constructor of this s is a temporary – so it’s of type string &&.

                                      Is it true that all temporaries constructed by the compiler are available as r-value references (or are they just r-values)? Is there a part of the standard in C++11 that guarantees this?

                                      1. 2

                                        Yes, temporaries are rvalues by definition, since they don’t have a “name” — they’re not held by or pointed to by a variable. That means a temporary can safely be destroyed (moved out of) as part of the expression, since there’s no way to observe its value afterwards.

                                        (I am not a licensed C++ guru. But I use rvalues a lot.)

                                        1. 2

                                          as r-value references (or are they just r-values)?

                                          If you have an rvalue, you have an rvalue reference (or at least, you can get one for free). I’m not great with the terminology but I think of the temporary itself as being the rvalue. An rvalue reference can bind to such a temporary. For example, int &&r = 5 + 6; compiles just fine (and does pretty much what you’d expect; as a bonus, the lifetime of the temporary is extended to the scope of the reference, so you don’t immediately get a dangling reference).

                                          1. 1

                                            For example, int &&r = 5 + 6; compiles just fine (and does pretty much what you’d expect;

                                            That’s a really weird line of code. R-value references are usually for function arguments, where temporaries are natural. Honestly, I had no expectations whatsoever what would happen.

                                            However, it seems to only work for local variables. In this code b is an l-value reference:

                                            #include <stdio.h>
                                            
                                            struct a {
                                                int&& b;
                                            
                                                a(int&& c) : b(c) {}
                                            };
                                            
                                            int main() {
                                                a a(5+6);
                                                printf("%d\n", a.b);
                                            }
                                            

                                            And so it doesn’t compile (assigning an r-value to an l-value reference).

                                            1. 1

                                              I had no expectations whatsoever what would happen.

                                              I meant that it assigns r a reference to a temporary object (which is created via the expression 5 + 6), just as it reads.

                                              However, it seems to only work for local variables. In this code b is an l-value reference:

                                              No, b is declared as an rvalue reference: int&& b; - just as c.

                                              However, referring to c (or to b for that matter) still gives an lvalue (otherwise any use would perform a move). You need to explicitly std::move a value to get an rvalue reference:

                                              #include <utility>   // for std::move
                                              ...
                                                  a(int&& c) : b(std::move(c)) {}
                                              

                                              The same would be true if c were a local variable; there is nothing special about local variables in this regard.

                                              The difference between something declared as an rvalue reference (&&) vs an lvalue reference (&) is what it can bind to. An rvalue reference can bind to an rvalue; a non-const lvalue reference can’t. (A const lvalue reference can bind to an rvalue too, but that’s a separate story.)
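
                                              To tie the thread together, here is a small self-contained sketch (my own example, not from the original comments) of the binding rules discussed above; it should compile as C++11 or later.

                                                #include <cstdio>
                                                #include <utility>  // std::move lives here, not in <algorithm>

                                                void take(int&& v) { std::printf("got rvalue %d\n", v); }

                                                int main() {
                                                    int&& r = 5 + 6;        // OK: rvalue reference binds to the temporary,
                                                                            // whose lifetime is extended to r's scope.
                                                    const int& cr = 5 + 6;  // also OK: a const lvalue reference binds to an rvalue.
                                                    // int& lr = 5 + 6;     // error: a non-const lvalue reference cannot.

                                                    take(10 + 1);           // a temporary binds directly to the int&& parameter.
                                                    // take(r);             // error: r has a name, so it is an lvalue here...
                                                    take(std::move(r));     // ...and needs std::move to be treated as an rvalue again.

                                                    std::printf("%d %d\n", r, cr);
                                                    return 0;
                                                }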

                                        1. 2

                                          HelenOS might be interesting too.

                                          1. 34

                                            If there are any questions or remarks, I am right here!

                                            1. 15

                                               I wish I could upvote this story multiple times. The perfect combination of being approachable, while still being packed with (to me) new information. Readable without ever being condescending.

                                              One thing I learned was that DNA printers are a thing nowadays. I had no idea. Are these likely to be used in any way by amateur hackers, in the sense that home fusion kits are fun and educational, while never being useful as an actual energy source?

                                              1. 14

                                                 So you can actually paste a bit of DNA into a website and they’ll print it for you. They ship it out by mail in a vial. Where it breaks down is that before you inject anything into a human being… you need to be super duper extra totally careful. And that doesn’t come from the home printer. It needs labs with skilled technicians.

                                                1. 7

                                                  Could any regular person make themselves completely fluorescent using this method? Asking for a friend.

                                                2. 5

                                                  You may be interested in this video: https://www.youtube.com/watch?v=2hf9yN-oBV4 Someone modified the DNA of some yeast to produce spider silk. The whole thing is super interesting (if slightly nightmarish at times if you’re not a fan of spiders).

                                                  1. 1

                                                    So that’s going to be the next bioapocalypse then. Autofermentation but where as well as getting drunk, you also poop spider silk.

                                                3. 8

                                                  Love the article. Well done.

                                                  1. 5

                                                    Thanks for the awesome article! Are there any specific textbooks or courses you’d recommend to build context on this?

                                                    1. 12

                                                      Not really - I own a small stack of biology books that all cover DNA, but they cover it as part of molecular biology, which is a huge field. At first I was frustrated about this, but DNA is not a standalone thing. You do have to get the biology as well. If you want to get one book, it would have to be the epic Molecular Biology of the Cell. It is pure awesome.

                                                      1. 2

                                                        You can start with molecular biology and then a quick study of bio-informatics should be enough to get you started.

                                                        If you need a book, I propose this one, it is very well written IMO and covers all this stuff.

                                                      2. 2

                                                         Great article! I just have one question. I am curious why this current mRNA vaccine requires two “payloads”? Is this because it’s so new and we haven’t perfected a single shot, or is there some other reason?

                                                        1. 2

                                                          It’s just the way two current mRNA vaccines were formulated, but trials showed that a single shot also works. We now know that two shots are not required.

                                                          1. 2

                                                             The creators of the vaccine say it differently here: https://overcast.fm/+m_rp4MLQ0 If I remember correctly, they claim that one shot protects you but doesn’t prevent you from being infectious, while the second makes sure that you don’t infect others.

                                                          2. 2

                                                            As I understand it[1] a shot of mRNA is like a blast of UDP messages from the Ethernet port — they’re ephemeral and at-most-once delivery. The messages themselves don’t get replicated, but the learnt immune response does permeate the rest of the body. The second blast of messages (1) ensures that the messages weren’t missed and (2) acts as a “second training seminar”, refreshing the immune system’s memory.

                                                            [1] I’m just going off @ahu’s other blogs that I’ve read in the last 24 hours and other tidbits I’ve picked up over the last 2 weeks, so this explanation is probably wrong.

                                                            1. 1

                                                               Not an expert either, but I think this is linked to the immune system response: like with some other vaccines, the immune system starts to forget, so you need to remind it what the threat was.

                                                            2. 1

                                                              I enjoyed the article, reminded me of my days at the university :-)

                                                              So here are some quick questions in case you have an answer:

                                                              • Where does the body store info about which proteins are acceptable vs not?
                                                              • How many records can we store there?
                                                              • Are records indexed?
                                                               • How does every cell in the body get this info?
                                                              1. 12

                                                                 It is called negative selection. It works like this:

                                                                 1. The body creates lots of white blood cells by random combination. Each cell has random binding sites that bind to specific proteins, and it will attack them.
                                                                 2. Newly created white blood cells are set loose in a staging area, which is presumed to be free of threats. All cells that trigger an alarm in the staging area kill themselves.
                                                                 3. White blood cells, negatively selected not to react to the body itself, mature and are released to production.
                                                                1. 1

                                                                  Interesting, thanks for sharing!

                                                                2. 5

                                                                  How does info spread through the body

                                                                  I came across this page relatively recently and it really blew my mind.

                                                                  glucose is cruising around a cell at about 250 miles per hour

                                                                  The reason that binding sites touch one another so frequently is that everything is moving extremely quickly.

                                                                  Rather than bringing things together by design, the body can rely on high-speed stochastic events to find solutions.

                                                                  This seems related, to me, to sanxiyn’s post pointing out ‘random combination’ - the body:

                                                                  • Produces immune cells which each attack a different, random shape.
                                                                  • Destroys those which attack bodily tissues.
                                                                  • Later, makes copies of any which turn out to attack something that was present.

                                                                  This constant, high-speed process can still take a day or two to come up with a shape that’ll attack whatever cold you’ve caught this week - but once it does, that shape will be copied all over the place.

                                                                  1. 2

                                                                    I did some projects in grad school with simulating the immune system to model disease. Honestly we never got great results because a lot of the key parameters are basically unknown or poorly characterized, so you can get any answer you want by tweaking them. Overall it’s less well understood than genetics, because you can’t study the immune system in a petri dish. It’s completely fascinating stuff though: evolution built a far better antivirus system for organisms than we could ever build for computers.

                                                                  2. 1

                                                                     Is there any information on pseudouridine and tests on viruses incorporating it into their RNA?

                                                                     The one reference in your post said that there is no machinery in cells to produce it, but the wiki page on it says that it is used extensively in the cell outside of the nucleus.

                                                                     It seems incredibly foolhardy to send out billions of doses of the vaccine without running extensive tests, since, naively, any virus that mutated to use it would make any disease we have encountered so far seem benign.

                                                                    1. 1

                                                                      From https://en.wikipedia.org/wiki/Pseudouridine#Pseudouridine_synthase_proteins:

                                                                      Pseudouridine are RNA modifications that are done post-transcription, so after the RNA is formed.

                                                                      That seems to mean (to me, who is not a biologist) that a virus would have to grow the ability to do/induce such a post-processing step. Merely adding Ψ to sequences doesn’t provide a virus with a template to accelerate such a mutation.

                                                                      1. 1

                                                                        And were this merely a nuclear reactor or adding cyanide to drinking water I’d agree. But ‘I’m sure it will be fine bro’ is how we started a few hundred environmental disasters that make Chernobyl look not too bad.

                                                                        ‘We don’t have any evidence because it’s obvious so we didn’t look’ does not fill me with confidence given our track record with biology to date.

                                                                         Something like pumping rats full of pseudouridine up to their gills, then infecting them with rat HIV for a few dozen generations and measuring whether any of the virus starts incorporating pseudouridine into its RNA, would be the minimum study I’d start considering as proof that this is not something that can happen in the wild.

                                                                        1. 2

                                                                          As I mentioned, I’m not a biologist. For all I know they did that experiment years ago already. Since multiple laymen on this forum came up with that concern within a few minutes of reading the article, I fully expect biologists to be aware of the issue, too.

                                                                           That said, in a way we have that experiment already going on continuously: quickly evolving viruses (such as influenza) that mess with the human body for generations. Apparently they encountered pseudouridine regularly (and were probably at times exposed to PUS1-5 and friends that might have swapped out a U for a Ψ in a virus accidentally) but still didn’t incorporate it into their structure despite the presumed improvement to their fitness (while eventually leading our immune system to incorporate a response to it).

                                                                           Which leads me to the conclusion that

                                                                          1. I’d have to dig much deeper to figure out a comprehensive answer, or
                                                                          2. I’ll assume that there’s something in RNA processing that makes it practically impossible for viruses to adopt that “how to evade the immune system” hack on a large scale.

                                                                          Due to lack of time (and a list of things I want to do that already spans 2 or 3 lifetimes) I’ll stick to 2.

                                                                  1. 21

                                                                     Agree that CPU and disk (and maybe RAM) haven’t improved enough to warrant a new laptop, but a 3200x1800 screen really is an amazing upgrade I don’t want to downgrade from.

                                                                    1. 6

                                                                       I love my new 4K screen for text stuff. Sadly, on Linux it seems to be a pain in the ass to scale this appropriately and correctly, even more so with different resolutions between screens. So far Windows does this quite well.

                                                                      1. 4

                                                                        Wayland can handle it ok, but Xorg doesn’t (and never will) have support for per-display DPI scaling.

                                                                        1. 3

                                                                          I don’t see myself being able to afford a 4k screen for a few years but if you just scale everything up, what’s the advantage?

                                                                          1. 4

                                                                            The text looks much crisper, so you can use smaller font sizes without straining your eyes if you want more screen real estate. Or you can just enjoy the increased readability.

                                                                            Note: YMMV. Some people love it and report significantly reduced eye strain and increased legibility, some people don’t really notice a difference.

                                                                            1. 2

                                                                              I use a much nicer font on my terminals now, which I find clearer to read. And I stare at terminals, dunno, 50% of my days.

                                                                             This is a Tuxedo laptop (I think it’s the same whitelabel that System76 sells), which didn’t feel expensive to me.

                                                                              1. 1

                                                                               hah I’m also using a Tuxedo one, but the font is far too tiny on that screen to work with every day

                                                                                1. 1

                                                                                  Which tuxedo laptop has 4k?

                                                                                  1. 1

                                                                                   I can’t find them anymore either. They used to have an option for the high-res display. I got this one a bit over a year ago:

                                                                                    1 x TUXEDO InfinityBook Pro 13 v4  1.099,00 EUR
                                                                                     - QHD+ IPS matt | silber/silber | Intel Core
                                                                                    i7-8565U
                                                                                    ...
                                                                                    Summe: 1.099,00 EUR
                                                                                    
                                                                                    1. 1

                                                                                       how was your driver experience? I’ve had to re-send mine twice due to problems with the CPU/GPU hybrid stack. Though mine is now 3? years old.

                                                                                      1. 2

                                                                                        Drivers are fine, it all simply works. Battery could last longer.

                                                                                    2. 1

                                                                                      Yeah ok. I just ordered a Pulse 15. Also wanted a 4k display but didn’t see it anywhere. thanks

                                                                                  2. 1

                                                                                    well, you have a much sharper font and can go nearer if you want (like with books). I get eye strain over time from how pixelated text can appear to me in the evening. Also you can watch higher-res videos, and all in all it looks really crisp. See also your smartphone: mine already uses a 2K screen, and you can see how clean the text is.

                                                                                    You may want to just get a 2K screen (and maybe 144 Hz?) as that may already be enough for you. I just took the gamble and wanted to test it. Note that I probably got a model with an inferior backlight, so it’s not uniform around the edges when I’m less than 50 cm away. I also took the IPS panel for the superior viewing angle, as I’m also using it for watching movies. YMMV.

                                                                                    My RTX 2070 GPU can’t play games like Destiny at 4K/60 FPS without 100% GPU usage, and the FPS drops the moment I’m doing more than walking around. So I’ll definitely have to buy a new one if I want to use that.

                                                                                  3. 1

                                                                                    I also just got a new 4K monitor, and that’s bothering me too. It’s only a matter of time before I fix the glitch with a second 4K monitor… Maybe after Christmas.

                                                                                    1. 2

                                                                                      I ended up doing that. It sucks, but Linux is just plain bad at HiDPI in a way Windows/macOS is not. I found a mixed DPI environment to be essentially impossible.

                                                                                  4. 2

                                                                                    This is where I’m at too. I’m not sure I could go back to a 1024x768 screen, or even a 1440x900 one. I have a 1920x1200 XPS 13 that I really enjoy, which is hooked up to a 3440x1440 ultrawide.

                                                                                    Might not need all the CPU power, but the screens are so so nice!

                                                                                    1. 2

                                                                                      And the speakers.

                                                                                      I love my x230, but I just bought an M1 Macbook Air, and god damn, are those speakers loud and crisp!

                                                                                      1. 1

                                                                                        For me it’s also screen size and brightness that are important. I just can’t read the text on a small, dim screen.

                                                                                        1. 1

                                                                                          Oh I’d love to have a 4k laptop. I’m currently using a 12” Xiaomi laptop from 2017 with 4GB of RAM and a 2k display. After adding a Samsung 960 evo NVMe and increasing Linux swappiness this is more than enough for my needs - but a 4k display would just be terrific!

                                                                                        1. 5

                                                                                          There’s actually an alternative method called std::get_if, IMO much more readable than std::visit. The second thing is that lots of C++’s constructs are for library use only, not really usable from the point of view of a standard application developer.
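
                                                                                          For anyone who hasn’t seen it, a minimal sketch of my own (not from the article) showing the two styles side by side: std::get_if returns a pointer to the alternative (or nullptr), while std::visit dispatches a callable over whichever alternative is held.

                                                                                            #include <cstdio>
                                                                                            #include <string>
                                                                                            #include <type_traits>
                                                                                            #include <variant>

                                                                                            int main() {
                                                                                                std::variant<int, std::string> v = std::string("hello");

                                                                                                // std::get_if: returns a pointer to the alternative if it's the active one,
                                                                                                // or nullptr otherwise - reads like ordinary if/else.
                                                                                                if (auto* s = std::get_if<std::string>(&v)) {
                                                                                                    std::printf("string: %s\n", s->c_str());
                                                                                                } else if (auto* i = std::get_if<int>(&v)) {
                                                                                                    std::printf("int: %d\n", *i);
                                                                                                }

                                                                                                // std::visit: exhaustive over all alternatives, but noticeably noisier.
                                                                                                std::visit([](const auto& x) {
                                                                                                    if constexpr (std::is_same_v<std::decay_t<decltype(x)>, int>)
                                                                                                        std::printf("int: %d\n", x);
                                                                                                    else
                                                                                                        std::printf("string: %s\n", x.c_str());
                                                                                                }, v);
                                                                                                return 0;
                                                                                            }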

                                                                                          1. 4

                                                                                             The second thing is that lots of C++’s constructs are for library use only, not really usable from the point of view of a standard application developer.

                                                                                            Yes but generally when a library is added to some application, we like to have some idea of how that library works because an application developer will need to support the application and all code linked in. Large libraries like Qt have community and paid support but smaller libraries will have less reason to be adopted if they are using lots of C++ “magic” that the team might not understand.

                                                                                             For myself, I learned C (and can understand it very well; it’s not a difficult language) before learning C++. I am not a C++ expert by any means. If I’m adding a library and it’s using a lot of new C++ that I can’t easily grok, I might fall back on some C-based library instead.

                                                                                            1. 1

                                                                                               Not sure I agree. It should be the role of the documentation to explain how the library works and how to use it. The library itself should be written with maintainability and extensibility in mind. Writing high-level code (using templates or metaprogramming) is generally done to enable easy extension without much increase in maintenance cost. The fact that such code is not understood by most people shouldn’t really be that important; the ease of use of the interface is more important than the implementation itself.

                                                                                              1. 2

                                                                                                That puts you at the mercy of the library. Not all libs are perfect, or perfectly documented. Sometimes behavior can be surprising or different from the documentation, and then what? You accept defeat? Nah, it’s good to be able to read into library implementations to better understand them. Not ALL the time, but it’s a skill I’ve found that divides programmers - some will get absolutely stuck on libraries as black boxes, and others will dive in and look to build a better understanding. I have an opinion about which is better.

                                                                                                1. 2

                                                                                                   Well, that’s true, it’s better to be able to read the source code as well rather than only the documentation. But I don’t think it has anything to do with using advanced C++; rather, it’s a matter of good code vs bad code. You can write low-level C++ that is completely unreadable. You can also write high-level, generic C++ code that reads easily.

                                                                                                   The second thing is that I don’t think a person is at the mercy of the library if the library uses advanced C++, or that the person is forced to treat it like a black box. The person can always choose to learn more advanced C++. I think that most of the problems with templates and metaprogramming come from people who have already found their sweet spot in C++98 and refuse to move forward. I’m not saying this is a bad approach to take, but if someone does take it, then it’s good to realize that other people have their sweet spots in other parts of the language, so it’s good to let others use theirs ;).

                                                                                                   I personally would like to have more functional features in C++, but at the same time I have friends who get a fever when they see a constexpr function. I respect their choice, but I refuse to submit to it – by this logic we would still be using Fortran (because it’s possible to write anything in Fortran, so there’s no need for new things).

                                                                                                   But well, having written that, I generally try to avoid C++, because it’s too far behind other languages, and switching to C++ sometimes feels like taking a step back.

                                                                                                  1. 1

                                                                                                    I have no problem with functional programming. I like Scheme, Standard ML and Rust. The problem with these new features in C++ is that they are always implemented poorly. I consider sum types an absolute must for any new language I’m going to use, yet I’m not touching std::variant with a ten-foot pole. Similarly, I love parametrized types in Rust, but template code in C++ is a horror story.

                                                                                                    It’s not about “going forward” or “advanced”, it’s about how poor of a job the C++ committee is doing, when it comes to creating an intelligible and nice-to-use language. Those horrific error messages are never going away, not because the compilers are bad, but because the language won’t let them be any better.

                                                                                                    1. 1

                                                                                                       I’m not sure if I’m able to influence your pretty fatalistic view of C++ and the committee, but the ‘concepts’ feature promises better error messages, improving the ease of use of the language. Also, what about C++11 range-based loops, C++11 initializers for member variables, the C++17 if statement with initializer, and C++20 designated initializers? Those were changes made purely for ease of use.

                                                                                                       A good example of the “poor job” that the committee does is its attempt to reduce the complexity of the language by trying to fix the mess in references (rvalues, lvalues, etc.) after the introduction of std::move.
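
                                                                                                       For reference, a tiny sketch of my own (not from the comment; needs -std=c++20) of the ease-of-use features mentioned above - in-class member initializers, range-based for, if with initializer, and designated initializers:

                                                                                                         #include <cstdio>
                                                                                                         #include <map>
                                                                                                         #include <string>
                                                                                                         #include <vector>

                                                                                                         struct Point {
                                                                                                             int x = 0;   // C++11: in-class member initializers
                                                                                                             int y = 0;
                                                                                                         };

                                                                                                         int main() {
                                                                                                             Point p{ .x = 3, .y = 4 };                       // C++20: designated initializers

                                                                                                             std::vector<int> xs{1, 2, 3};
                                                                                                             for (int v : xs)                                 // C++11: range-based for loop
                                                                                                                 std::printf("%d ", v);

                                                                                                             std::map<std::string, int> m{{"answer", 42}};
                                                                                                             if (auto it = m.find("answer"); it != m.end())   // C++17: if with initializer
                                                                                                                 std::printf("\n%s = %d\n", it->first.c_str(), it->second);

                                                                                                             std::printf("point: %d %d\n", p.x, p.y);
                                                                                                             return 0;
                                                                                                         }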

                                                                                          1. 4

                                                                                             Starting cycling again after 15 years. I take advantage of working from home to go cycling at least one hour every day: I’ve lost weight, I’m feeling better mentally, I sleep better at night, my cycling fitness is improving day by day, and I know my body better every day.

                                                                                            1. 1

                                                                                              Indoor or outdoor? If indoor, what vendor/model are you using? I’m looking at getting a cycling bike (Schwinn I think because I don’t have Peloton money) due to the colder weather.

                                                                                              1. 2

                                                                                                 I live in Italy and the weather right now is still good for cycling, so I prefer to go outdoors. I also have a turbo trainer that I use when it’s raining or when I don’t have time to go out in daylight. It’s just a simple one (Tacx Blue) without any frills. Anyway, I don’t like indoor training much - I think it’s quite boring - but I recognize that an indoor training session is a big boost for fitness and lets you do a more accurately structured workout.

                                                                                            1. 3

                                                                                               Still parsing Ragnarök Online files (currently .grf, .spr) as part of my project of writing an actual alternative client for the game, which aims to have modern features and multi-platform support. Specifically, I’m working on the .spr files, which basically contain the texture atlas of the game’s sprites.

                                                                                              Also, working on a blog post to document the whole process.

                                                                                              1. 1

                                                                                                Sounds interesting. What language are you writing a client in?

                                                                                                1. 1

                                                                                                   In Go, but I’m only parsing the data files for now; the client work will start as soon as I manage to finish that.

                                                                                              1. 2

                                                                                                I might be wrong but I’ve heard that some IDEs like CLion and I think even Visual Studio have trouble with very large C++ codebases. Which IDE are Mozilla employees using with the Firefox code base?

                                                                                                1. 5

                                                                                                  There are some nuances here that many will miss as it requires you to know something about the architecture of the X server. The main thing is to not conflate ‘xfree86’ with rest of what you think of as X, hence why the blog post separates out XFree86-the-project from ‘xfree86 the hardware driver’.

                                                                                                   As someone who knows almost nothing about Xorg’s architecture, I am still missing these nuances. What is xfree86 the hardware driver? If I built Xorg without it today and tried to run a graphical environment, what would happen?

                                                                                                  1. 11

                                                                                                     xfree86 is a hardware driver. Xvfb == can draw to a framebuffer, Xvnc == exposes a virtual screen as a VNC server, Xephyr == treats its output as an X11 client (nesting), Xwayland == maps its outputs to Wayland-like surfaces (a bit special, in a terrible way).

                                                                                                     A (big) point is that xfree86 is the one that BOTH uses KMS/GBM (what some refer to as the modern graphics stack “DRI2/DRI3”) AND acts as an actual driver (think UNIVBE in MS-DOS if you are that old). It is a complete historical document: you can find ISA VGA controller code all the way up to today’s things. This is what the maintainer doesn’t want to touch. This is the part of X that is actually really bad.

                                                                                                     The suggested compromise is that the library @emersion has been working on could replace that. Doing so fixes one of the core issues with current Xorg, significantly improves the odds of the same code being used in Wayland compositors, and brings both projects forward.

                                                                                                    1. 2

                                                                                                      Thank you, I think I get it now! It sounds like an abstraction and compatibility layer like terminfo, but for graphical displays.

                                                                                                      1. 2

                                                                                                        What is the difference between hw/xfree86 and driver/xf86-video-ati ?

                                                                                                        1. 4

                                                                                                           I will re-check the code later to make sure, but my memory is that driver/xf86-video-ati (and all the others) attach to the “xfree86” driver DDX, so think of those as ‘plugins’ that go into it.

                                                                                                    1. 6

                                                                                                      The problem with Wayland is that there are basically 3 “options”:

                                                                                                      • Sway
                                                                                                      • KDE
                                                                                                      • GNOME

                                                                                                      What is the Wayland solution for those of us on XFCE, IceWM, FVWM, or WindowMaker?

                                                                                                      1. 7

                                                                                                        I use Emacs’s exwm as my window manager because it translates my key bindings from Emacs-style to conventional-style before sending them on to the X clients. Without this it is very difficult for me to use a web browser, since I always end up with twenty-four new windows due to holding down Ctrl-n in order to scroll down, or seventeen print dialogs due to trying to scroll up.

                                                                                                         It used to be possible to fix this in the browser with an extension, but a few years ago Firefox copied Chrome’s web extension mechanism, and now it prevents extensions from doing this for “security reasons”, making the browser absolutely infuriating to use outside exwm.

                                                                                                         Anyway, I’m kind of tired of reading about how awful X is. Sure, it has a lot of shortcomings. Who cares. No amount of streamlined technical improvement is worth putting up with “I have to fight the urge to throw my laptop out the window multiple times per day because of the rage caused by shitty key bindings”. These two types of problems are nowhere near the same order of magnitude.

                                                                                                        1. 6

                                                                                                          I think XFCE (err wait no, MATE? I think MATE) is going with Mir for some reason. (Which, in case you missed it, is just a Wayland compositor now, they abandoned the custom protocol.) But let me plug Wayfire (which I contribute to reasonably often).

                                                                                                          Wayfire is the Compiz of Wayland — a generic compositor to build desktop environments on top of, with a flexible plugin system to add whatever functionality you want, and with a standard collection of plugins that includes many old favorites like wobbly windows and desktop cube :)

                                                                                                          1. 3

                                                                                                            dwl is a “dwm for Wayland”; it’s obviously not for everyone, but the code is just under 2,000 lines (similar to dwm). I haven’t looked at it in detail, but this seems to indicate that it’s not much harder to write a WM for Wayland than it is for Xorg.

                                                                                                            There are probably a few other smaller WMs out there as well.

                                                                                                            1. 4

                                                                                                              Yeah, but it’s using wlroots, so it’s not a true “from-scratch” implementation. I know, the X-libraries are also pulled in for dwm and bring complexity, but they are part of the “standard”.

                                                                                                              1. 6

                                                                                                                I have the impression that wlroots is more or less the unofficial “standard”.

                                                                                                                The end result is a binary that’s actually smaller than dwm on my system (44k vs 49k, stripped), so it doesn’t seem like there’s some needless bloat going on.

                                                                                                                1. 2

                                                                                                                  What’s the difference between an in-process wlroots library, and an out-of-process X.org server? Doesn’t dwm depend on X.org server just like dwl depends on wlroots?

                                                                                                                  1. 1

                                                                                                                    See my other comment, which hopefully clears that up.

                                                                                                              2. 3

                                                                                                                The wlroots wiki has a pretty big list of compositors. Many of them are smaller compositors.

                                                                                                                1. 3

                                                                                                                  Are they working, though?

                                                                                                                  After seeing the previous post about X, I decided to give wayland another try (my last try was before wlroots was created…).

                                                                                                                  I wanted to use wio (listed on the wlroots page). It’s been 2 days, and I still couldn’t get it started. When run from a TTY, it freezes my laptop and I must hard-reboot it.

                                                                                                                  I looked into other projects from the page, and many compositors are, in fact, abandonware (not a single commit in a year or so). wlroots is a fine project, pushed forward by sway. Other compositors (including those written by Drew Devault himself) are lagging behind.

                                                                                                                  Thanks to wlroots, wayland is getting closer to full adoption every day, but it seems to be moving really fast, to the point that only sway can really keep up with it. This is true from the point of view of distros too. I run debian buster, and with the standard repositories, there is no way you can even compile wlroots. The meson version required is 0.54 (0.49 in the repository), and wayland-server must be at 1.18, the very latest version (1.16 in the repository). In my limited experience, it seems to still be moving too fast to be a drop-in replacement.

                                                                                                                  I will however try to get it working in the next few days. I am really eager to try a wayland-based desktop and form my own opinion about it. I really hope KDE, Gnome and sway aren’t the only viable options.

                                                                                                                  1. 5

                                                                                                                    wlroots is intentionally moving fast, depending on the newest dependencies instead of having support code and conditionals for previous versions. If you’re using a distro that doesn’t ship with the newest dependencies, you don’t need to pick the latest version of wlroots, you can compile an older one.

                                                                                                                    A lot of these smaller compositors aren’t working well, a lot don’t have docs, a lot are one-man projects, a lot are abandoned. I don’t think this is specific to Wayland though.

                                                                                                                    Of all of these projects, I’d say the more mature ones are Sway, Wayfire, Cage, Hikari, River and Librem5. Waymonad hasn’t seen development in a while but got pretty far. There are maybe others that are usable, apologies if I missed one.

                                                                                                                    If it wasn’t clear enough: all of these projects need help :P

                                                                                                                    1. 2

                                                                                                                      wlroots is intentionally moving fast, depending on the newest dependencies instead of having support code and conditionals for previous versions.

                                                                                                                      It is a totally sane approach. It has the drawback of obsoleting all the projects that depend on it after each release. You say that all the wayland projects need help, and I understand it as “wlroots is moving so fast, that a one-man project can’t keep up with it”. My take on it is that wlroots might still not be ready for stable use, and as a result, none of the projects that depend on it can have a stable version.

                                                                                                                      I wrote my own window manager for X, and it is now stable enough that I almost don’t touch the code anymore. The xcb lib isn’t changing, and hasn’t changed in a long time, so my one-man project is totally up to date with upstream. If I were to write my own compositor (and I want to!), I would have to constantly update it because wlroots changes so often, just for my project to remain relevant. In this regard, I think it’s more “safe” for me to wait for wlroots to reach a stable state (with a stable API), before starting my project.
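
                                                                                                                      For anyone curious, here is a minimal, hypothetical sketch of the xcb core loop such a WM is built on (not the actual code of the WM mentioned above, just an illustration); every call in it has been part of xcb for years:

                                                                                                                        #include <stdio.h>
                                                                                                                        #include <stdlib.h>
                                                                                                                        #include <xcb/xcb.h>

                                                                                                                        int main(void) {
                                                                                                                            xcb_connection_t *c = xcb_connect(NULL, NULL);   /* connect to $DISPLAY */
                                                                                                                            if (xcb_connection_has_error(c)) return 1;
                                                                                                                            xcb_screen_t *s = xcb_setup_roots_iterator(xcb_get_setup(c)).data;

                                                                                                                            /* Ask to intercept map/configure requests on the root window. */
                                                                                                                            uint32_t mask = XCB_EVENT_MASK_SUBSTRUCTURE_REDIRECT |
                                                                                                                                            XCB_EVENT_MASK_SUBSTRUCTURE_NOTIFY;
                                                                                                                            xcb_change_window_attributes(c, s->root, XCB_CW_EVENT_MASK, &mask);
                                                                                                                            xcb_flush(c);

                                                                                                                            xcb_generic_event_t *ev;
                                                                                                                            while ((ev = xcb_wait_for_event(c))) {
                                                                                                                                if ((ev->response_type & ~0x80) == XCB_MAP_REQUEST) {
                                                                                                                                    /* A real WM would pick geometry and borders here before mapping. */
                                                                                                                                    xcb_map_window(c, ((xcb_map_request_event_t *)ev)->window);
                                                                                                                                    xcb_flush(c);
                                                                                                                                }
                                                                                                                                free(ev);
                                                                                                                            }
                                                                                                                            xcb_disconnect(c);
                                                                                                                            return 0;
                                                                                                                        }

                                                                                                                      It only handles map requests and needs to be linked with -lxcb; a real WM would also check that no other WM already owns substructure redirection, and handle configure requests, focus and destroy notifications.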

                                                                                                                      1. 2

                                                                                                                        I understand it as “wlroots is moving so fast, that a one-man project can’t keep up with it”

                                                                                                                        No, that’s not what I mean. They don’t need help to update to newer wlroots versions, as the effort to do so isn’t huge at all. I mean they need help to finish their project: fix bugs, add features, etc.

                                                                                                                        Many of the wlroots-based compositors that are usable as a daily driver today are a one-man project.

                                                                                                                        But you’re right in saying that if you want to write a compositor and never touch the code again, wlroots isn’t ready yet. wlroots still moves.

                                                                                                                        EDIT: additionally, breaking the API and requiring up-to-date dependencies are two different things. If wlroots depends on newer libraries, this doesn’t mean it needs to break its API and require downstream changes. But yeah, right now wlroots does both.

                                                                                                              1. 3

                                                                                                                While While std::regex is easy to read/write, it’s actually not very performant. There are other C++ libraries (or even C ones like pcre) that should probably be used when you are benchmarking Rust vs C++.

                                                                                                                Please see: https://old.reddit.com/r/cpp/comments/e16s1m/what_is_wrong_with_stdregex/
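
                                                                                                                As a rough, hedged illustration, reaching for PCRE2 from C looks something like this (the pattern below is a made-up placeholder, not the one from the benchmark):

                                                                                                                  #define PCRE2_CODE_UNIT_WIDTH 8
                                                                                                                  #include <pcre2.h>
                                                                                                                  #include <stdio.h>
                                                                                                                  #include <string.h>

                                                                                                                  int main(void) {
                                                                                                                      PCRE2_SPTR pat = (PCRE2_SPTR)"[a-z]{2,}";   /* placeholder pattern */
                                                                                                                      PCRE2_SPTR sub = (PCRE2_SPTR)"x abc y";
                                                                                                                      int err;
                                                                                                                      PCRE2_SIZE off;

                                                                                                                      /* Compile once, reuse for every line you scan. */
                                                                                                                      pcre2_code *re = pcre2_compile(pat, PCRE2_ZERO_TERMINATED, 0, &err, &off, NULL);
                                                                                                                      if (re == NULL) return 1;

                                                                                                                      pcre2_match_data *md = pcre2_match_data_create_from_pattern(re, NULL);
                                                                                                                      int rc = pcre2_match(re, sub, strlen((const char *)sub), 0, 0, md, NULL);
                                                                                                                      printf("match: %s\n", rc > 0 ? "yes" : "no");

                                                                                                                      pcre2_match_data_free(md);
                                                                                                                      pcre2_code_free(re);
                                                                                                                      return 0;
                                                                                                                  }

                                                                                                                Link against libpcre2-8 (for example with the flags from pkg-config --cflags --libs libpcre2-8). Most of the win over std::regex comes from compiling the pattern once and reusing it.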

                                                                                                                1. 2

                                                                                                                  If you care about performance (and comparing the two langs..) then it’s probably better to get rid of the regex entirely. For this example the rx is literally 2 or more of a character class. Just doing that test inline is a “one if statement” state machine. I personally feel regex is not “carrying its own abstraction weight” here, though I realize rx abstraction power was (maybe?) part of the point of the C++ post.
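
                                                                                                                  Roughly what I mean, sketched in C (the actual character class isn’t reproduced in this thread, so isalpha() is only a stand-in):

                                                                                                                    #include <ctype.h>
                                                                                                                    #include <stdbool.h>

                                                                                                                    /* True if the string contains two or more consecutive characters
                                                                                                                       from the class; isalpha() stands in for the real class. */
                                                                                                                    static bool has_run_of_two(const char *s) {
                                                                                                                        int run = 0;
                                                                                                                        for (; *s != '\0'; s++) {
                                                                                                                            run = isalpha((unsigned char)*s) ? run + 1 : 0;
                                                                                                                            if (run >= 2) return true;
                                                                                                                        }
                                                                                                                        return false;
                                                                                                                    }

                                                                                                                    int main(void) {
                                                                                                                        return has_run_of_two("a1b2cc3") ? 0 : 1;   /* "cc" is a run of two */
                                                                                                                    }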

                                                                                                                  1. 4

                                                                                                                    Language speed comparisons are hard to do, because you can always tune small example code to death. C/C++/Rust can one-up each other on this one until only raw syscalls and SIMD intrinsics remain.

                                                                                                                    1. 1

                                                                                                                      I agree. With very little adjustment, the Nim was like 8.5x faster than the Rust on the same input/same machine for just one file. Here I mostly meant to steer away from a “this regex lib is faster” rabbit hole when regexes themselves are overkill for the “endswith” and “2 or more charclass” use cases of this small example code. (Maybe they wouldn’t be overkill if patterns got bubbled up to the user, but it often bothers me when people reflexively reach-for-regex)

                                                                                                                1. 4

                                                                                                                  I think all they need to do is fix the README.md? That should be relatively quick. If not, I guess they can remove extractor/youtube.py from the repo and people can host youtube-dl “plugins” on other sites.

                                                                                                                  1. 1

                                                                                                                    I’m puzzled. It says it’s a reimplemented readability.js, but uses libxml for parsing? Uh, no. That won’t get you good html parsing. There ought to be compliant and real-world-capable html5 parsers out there, no?

                                                                                                                    Python has html5lib, Rust html5ever. Both super well tested.

                                                                                                                    1. 5

                                                                                                                      As someone who doesn’t know much about HTML parsing libraries, but has used Nokogiri (libxml2 based) for some basic scraping scripts, what’s wrong with libxml2?

                                                                                                                      1. 4

                                                                                                                        libxml2 in html mode is the most competent html tagsoup parser I’ve ever used.

                                                                                                                        1. 2

                                                                                                                          There is gumbo, which is packaged for most distributions (gumbo-dev, etc).

                                                                                                                          1. 2

                                                                                                                            You can do HTML with libxml: http://xmlsoft.org/html/libxml-HTMLparser.html
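
                                                                                                                            A small, hypothetical example of parsing tag soup with that API (not code from the project being discussed):

                                                                                                                              #include <stdio.h>
                                                                                                                              #include <string.h>
                                                                                                                              #include <libxml/HTMLparser.h>
                                                                                                                              #include <libxml/tree.h>

                                                                                                                              int main(void) {
                                                                                                                                  const char *html = "<p>unclosed <b>tag soup";
                                                                                                                                  /* Recover from broken markup and keep the error output quiet. */
                                                                                                                                  htmlDocPtr doc = htmlReadMemory(html, (int)strlen(html), "mem.html", NULL,
                                                                                                                                                                  HTML_PARSE_RECOVER | HTML_PARSE_NOERROR |
                                                                                                                                                                  HTML_PARSE_NOWARNING);
                                                                                                                                  if (doc == NULL) return 1;
                                                                                                                                  xmlNodePtr root = xmlDocGetRootElement(doc);
                                                                                                                                  printf("root element: %s\n", root ? (const char *)root->name : "(none)");
                                                                                                                                  xmlFreeDoc(doc);
                                                                                                                                  xmlCleanupParser();
                                                                                                                                  return 0;
                                                                                                                              }

                                                                                                                            Build with the flags from pkg-config --cflags --libs libxml-2.0. htmlReadMemory builds a DOM even from badly broken markup, which is what tag-soup parsing needs.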

                                                                                                                            1. 1

                                                                                                                              HTML4 though. 😕

                                                                                                                          1. 11

                                                                                                                            This could be painful for the BSDs. I doubt anybody wants to require rust in base.

                                                                                                                            1. 20

                                                                                                                              Rust keeps growing, so it’s only going to get harder to ignore. The focus should be on what needs to be improved to have it in, rather than how to keep it away.

                                                                                                                              1. 10

                                                                                                                                It is less about ignorance and more about compatibility.

                                                                                                                                “Such ecosystems come with incredible costs. For instance, rust cannot even compile itself on i386 at present time because it exhausts the address space.

                                                                                                                                Consider me a skeptic – I think these compiler ecosystems face a grim bloaty future.”

                                                                                                                                https://marc.info/?l=openbsd-misc&m=151233345723889

                                                                                                                                1. 3

                                                                                                                                  I wasn’t implying. I was stating a fact. There has been no attempt to move the smallest parts of the ecosystem, to provide replacements for base POSIX utilities. As a general trend the only things being written in these new languages are new web-facing applications, quite often proprietary or customized to narrow roles. Not Unix parts.

                                                                                                                                  There may also be compatibility issues but, in 2017, that is a pretty ignorant thing to say about Rust when the uutils project had already existed for years. Or maybe Theo is simply sneering about GNU’s implementation of the POSIX utilities, which uutils attempts to emulate.

                                                                                                                                  edit: there is also redox-os’s BSD-compatible coreutils project, started in early 2017.

                                                                                                                                  1. 3

                                                                                                                                    Theo buried the lede in the email. In 2017, it was his considered opinion that it was essentially impossible to include Rust in the OpenBSD base distribution because Rust couldn’t be compiled by base.

                                                                                                                                    It’s possible that in 2020, this situation has improved, which would mean that a necessary precondition to include Rust programs in OpenBSD base has been fulfilled. It still needs to be proven that replacing tried and true utilities written in C with ones written in Rust would result in meaningful improvements of the quality of the OpenBSD system as a whole.

                                                                                                                                    1. 4

                                                                                                                                      It may have improved for i386 (I don’t know) but there are additional platforms that OpenBSD needs to consider.

                                                                                                                                      Today there are platforms that use clang and some other platforms that can’t use clang and are stuck with the last gcc, but at least 99% of OpenBSD base is just C, so generally most things can compile with either clang or gcc. Bringing rust into base then escalates this “haves” vs “have-nots” situation, and then you start to have “tier 1” (amd64, arm64, i386?) vs “tier 2” (sparc64, macppc, etc). Maybe that is already happening though. I imagine NetBSD will be in a similar situation.

                                                                                                                                      1. 4

                                                                                                                                        You’re correct, I didn’t even consider the other architectures that OpenBSD supports.

                                                                                                                                        So to summarize

                                                                                                                                        1. Rust isn’t supported on all platforms
                                                                                                                                        2. Rust is too large and complicated to be built from source as part of OpenBSD’s base
                                                                                                                                        3. Rust offers dubious advantages for OpenBSD
                                                                                                                                        1. 1

                                                                                                                                          Rust offers dubious advantages for OpenBSD

                                                                                                                                          In the context of mesa, or of OpenBSD using Rust internally?

                                                                                                                                          I think those are 2 different arguments. OpenBSD might not want to commit to using (or supporting) Rust, but it cannot decide for another project whether to use Rust for reasons that project considers good.

                                                                                                                                          1. 1

                                                                                                                                            In the context of OpenBSD using Rust internally (as part of base).

                                                                                                                                        2. 3

                                                                                                                                          Hm, so in general, the whole thing would start with upstreaming an OpenBSD target and working with upstream to get it maintained and then getting support in for things they need. i386 is a hard case though, because LLVM support may be lacking (I have no assessment of that).

                                                                                                                                          The hard reality here is that all projects need to find maintainers for those things, and they are few and far between.

                                                                                                                                          On the Rust side in general, I’m happy to see that OpenBSD currently maintains their own patches. I tried to encourage OpenBSD folks I met to upstream them, but have yet to see that happen. We’re generally happy to accept them given bandwidth and maintenance commitment (bandwidth is a regular issue, but we’re always working to improve that).

                                                                                                                                          https://github.com/openbsd/ports/tree/master/lang/rust/patches

                                                                                                                                          1. 1

                                                                                                                                            Speaking for illumos, for which support ought to appear natively in the next Rust stable release, upstreaming has been a generally pleasant experience. It all took way longer than I expected, at least in part due to the long pipeline from master to stable, but everybody with whom I interacted along the way has been polite and helpful!

                                                                                                                                            1. 2

                                                                                                                                              Thanks for letting me know! BTW, this is the first time I’ve heard the pipeline from master to stable described as “long”, but I can understand where you are coming from!

                                                                                                                                              I’m happy to hear that the interaction was pleasant. That’s a good baseline to get the contribution speed up!

                                                                                                                                      2. 1

                                                                                                                                        I think you should understand the other aspects of it: Rust, at the time, was not able to compile itself on i386.