Threads for pbsds

    1. 3

      The whole set of console architectures documented on that page is a treasure trove!

    2. 20

      I’m a bit surprised by the negative comments here. The sluggishness of Element is one of the worst problems for adoption. I am very glad it’s being worked on.

      And indeed, I just tried out Element X; it’s very fast and comparable to WhatsApp and co. Very impressive!

      1. 4

        It’s a step in the right direction, but I think many feel burned by the performance of Element. Since switching to FluffyChat, my view of Matrix has totally changed for the better: cleaner UX and far better performance when fetching updates on already-joined rooms, rivaling that of sliding sync.

        1.  

          It’s a bit unfortunate if folks won’t try Element X (which is light-years ahead of both Element and FluffyChat) because of bad experiences with Element. :/

    3. 14

      Hey Lobsters,

      I’ve rebuilt fx from the ground up and I’m eager to share the improvements:

      • Handling of large JSON files? Check. 🚀
      • Enhanced TUI experience with themes. 🌈
      • Faster JSON navigation using Dig Fuzzy Search (just hit .). 🐾
      • Comprehensive regex search across the entire JSON. 🔍
      • Long strings now wrap as they should. 📜
      • Embracing all things JSON: comments, trailing commas, and streams. ✏️

      The entire rewrite focused on speed and usability. If you’ve found fx useful, I’d appreciate any support through GitHub sponsors.

      Feedback and thoughts are always welcome!
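
      Not fx-specific, but since “JSON streams” (mentioned in the feature list above) may be unfamiliar, here is a minimal Python sketch of the format; it assumes nothing about fx’s own implementation and simply parses concatenated JSON values one after another.

          import json

          def iter_json_stream(text):
              """Yield successive JSON values from a concatenated stream."""
              decoder = json.JSONDecoder()
              pos = 0
              while pos < len(text):
                  while pos < len(text) and text[pos].isspace():
                      pos += 1  # skip whitespace between values
                  if pos == len(text):
                      break
                  value, pos = decoder.raw_decode(text, pos)
                  yield value

          print(list(iter_json_stream('{"a": 1}\n{"b": 2}\n[3, 4]')))
          # -> [{'a': 1}, {'b': 2}, [3, 4]]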

      1. 3

        I’m using fx on a daily basis. Great tool and experience using it. Thanks for this!

        1. 1

          Awesome) Thanks.

      2. 2

        What’s the difference between this and jq? It seems to be the TUI experience? Might be worth having a comparison somewhere.

        Website looks really nice :)

        1. 2

          Seems like the interactive exploration UI is what mostly sets it apart from jq, and it’s also what makes it seem similar at first glance to jless. A comparison with that would be interesting too.

          1. 2

            I do like fx for what it offers in terms of exploring datasets. Based on your (valid) comment comparing it with jq, I’m wondering if you know about ijq, which is very much an “interactive jq”, from gpanders here.

            https://sr.ht/~gpanders/ijq/

            It’s a great tool.

            1. 6

              It would be great to merge these two concepts — rich JSON navigation with live processing in either jq or JS languages.

              Cue someone telling me this has been an Emacs mode for years… :)
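
              For the sake of illustration, a toy sketch of the “live processing” half of that wish, in Python rather than jq or JS: re-evaluate an expression against the parsed document on every line of input, REPL style. This is only meant to show the idea; none of it reflects how fx, ijq, or jless actually work.

                  import json
                  import sys

                  # Usage: python live.py data.json, then type expressions such as
                  #   doc["users"][0]["name"]
                  doc = json.load(open(sys.argv[1]))
                  for line in sys.stdin:
                      expr = line.strip()
                      if not expr:
                          continue
                      try:
                          print(eval(expr, {"doc": doc}))  # toy only: eval() is unsafe on untrusted input
                      except Exception as exc:
                          print(f"error: {exc}")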

      3. 2

          When I’ve navigated to the value I care about, can I extract its path, like I get with gron?

        1. 1

          Yes. Easy. Just press y and p.

    4. 1

      It’s nice to see that there are three (to my knowledge) genuine attempts at making a new CSS engine: Servo, Ladybird and Ekioh Flow. The field of KHTML/WebKit/Blink and Firefox is a bit anemic. And what happened to Edge? Wasn’t it kinda decent?

    5. 2

      Looks like Pagefind isn’t available through Nix. It seems like the build product is (more or less) a single binary, so conceptually it should be easy enough to package with Nix, but some include_bytes calls during the build are failing to find the files they’re supposed to.

        1. 2

          Ooh, thank you!

      1. 3

        No good release goes untarnished 🫣

      2. 2

        That one was on me. I’m packaging it in nixpkgs, and they used a git dependency in Cargo, which nixpkgs doesn’t like that well…

    6. 2

      Mozilla should get funded by the EU

    7. 1

      This is essentially an improvement on this paper, but where the Gaussians have view-dependent color, do not have to be inscribed into the shape, and where they use 500k Gaussians instead of ~50. Great stuff.

    8. 8

      I really want Matrix to succeed, but the issues are plentiful.

      The fact that self-hosting Synapse in a performant manner is no trivial feat (this is slowly improving), compounded by the fact that no mobile client yet supports sliding sync (Element X when?), makes my user experience in general very miserable. Even the element-desktop client has horrible performance, unable to make use of GPU acceleration on nearly all of my devices.

      1. 12

        unable to make use of GPU acceleration on nearly all of my devices

          As an IRC user, do I want to know why an instant messaging client would need GPU acceleration? :x

        1. 8

            It’s nothing particularly novel to Matrix: rendering UIs on the CPU tends to use more battery than the hardware component whose entire goal is rendering, and it’s hard to hit the increasingly high refresh rates expected solely via CPU rendering.

          1. 3

              A chat application ought to do very infrequent redraws: basically when a new message comes in or whenever the user is composing, worst case when a 10 fps GIF is being displayed. I find it concerning that we now need GPU acceleration for something as simple as a chat to render itself without feeling sluggish.

            1. 8

              Rendering text is one of the most processor-intensive things that a modern GUI does. If you can, grab an early Mac OS X machine some time. Almost all of the fancy visual effects that you get today were already there and were mostly smooth, but rendering a window full of text would have noticeable lag. You can’t easily offload the glyph placement to the GPU, but you can render the individual glyphs and you definitely can composite the rendered glyphs and cache pre-composited text blocks in textures. Unless you’re doing some very fancy crypto, that will probably drop the power consumption of a client for a plain text chat protocol by 50%. If you’re doing rich text and rendering images, the saving will be more.
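
              As a rough illustration of the caching idea described above (not taken from any particular client), here is a Python sketch: rasterise each (font, size, glyph) combination once and reuse the cached bitmap when laying out a line. rasterise_glyph is a hypothetical stand-in for a real rasteriser such as FreeType.

                  from functools import lru_cache

                  def rasterise_glyph(font: str, size: int, glyph: str) -> bytes:
                      # Stand-in for the expensive CPU rasterisation step; a real client
                      # would call into FreeType, CoreText, etc. here.
                      return bytes(size * size)

                  @lru_cache(maxsize=4096)
                  def cached_glyph(font: str, size: int, glyph: str) -> bytes:
                      return rasterise_glyph(font, size, glyph)  # paid only on a cache miss

                  def composite_line(font: str, size: int, text: str) -> list[bytes]:
                      # Glyph placement (shaping, kerning) still happens on the CPU; only the
                      # per-glyph rasterisation is amortised away. The composited result could
                      # itself be cached in a texture, as suggested above.
                      return [cached_glyph(font, size, ch) for ch in text]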

              1. 4

                The downside with the rugged texture-atlas approach is that the distribution of glyphs in the various cached atlases in every process tends to be substantially re-invented across multiple graphics sources and makes up quite a bit of your local and GPU RAM use. The number of different sizes, styles and so on isn’t that varied unless you dip into some kind of opinionated networked document, and even then the default is the default.

                My point is that there is quite some gain to be had by somehow segmenting off the subsurfaces and somewhat splitting the load – a line-packing format in lieu of the pixel-buffer one, with the LTR/RTL toggles, codepoint or glyph-index lookup (so the client needs to know at least the GSUB of the specific font set) and attributes (bold, italic, colour, …) one way, and kerning feedback for picking/selection the other.

                That’s actually the setup (albeit there’s work to be done specifically in the feedback / shaping / substitution area) done in arcan-tui. Initial connection populates font slots and preferred size with a rough “how does this fit a monospaced grid w/h” hint. Clients using the same drawing properties share a glyph cache. We’re not even at the atlas (or worse, SDFs) stage, yet the savings are substantial.

                1. 3

                  The downside with the rugged texture-atlas approach is that the distribution of glyphs in the various cached atlases in every process tends to be substantially re-invented across multiple graphics sources and makes up quite a bit of your local and GPU RAM use

                  I’m quite surprised by this. I’d assume you wouldn’t render an entire font, but maybe blocks of 128 glyphs at a time. If you’re not doing sub-pixel AA (which seems to have gone out of fashion these days), it’s 8 bits per pixel. I’d guess a typical character size is no more than 50x50 pixels, so that’s around 300 KiB per block. You’d need quite a lot of blocks to make a noticeable dent in the > 1GiB of GPU memory on a modern system. Possibly less if you render individual glyphs as needed into larger blocks (maybe the ff ligature is the only one that you need in that 128-character range, for example). I’d be really surprised if this used up more than a few tens of MiBs, but you’ve probably done the actual experiments so I’d be very curious what the numbers are.
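
                  For what it’s worth, the back-of-envelope numbers behind that estimate (using the sizes assumed in the paragraph above):

                      glyphs_per_block = 128
                      bytes_per_glyph  = 50 * 50              # 50x50 px, 8 bits per pixel
                      block_size       = glyphs_per_block * bytes_per_glyph
                      print(block_size / 1024)                # ~312.5 KiB, the "around 300 KiB" figure

                      # Even a hundred such blocks (sizes x styles x scripts) stay modest
                      # next to >1 GiB of GPU memory on a typical desktop.
                      print(100 * block_size / 2**20)         # ~30.5 MiB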

                  That’s actually the setup (albeit there’s work to be done specifically in the feedback / shaping / substitution area) done in arcan-tui. Initial connection populates font slots and preferred size with a rough “how does this fit a monospaced grid w/h” hint. Clients using the same drawing properties share a glyph cache. We’re not even at the atlas (or worse, SDFs) stage, yet the savings are substantial.

                  That sounds like an interesting set of optimisations. Can you quantify ‘substantial’ at all? Do you know if Quartz does anything similar? I suspect it’s a bit tricky if you’ve got multiple rounds of compositing, since you need to render text to some texture that the app then renders into a window (possibly via multiple rounds of render-to-texture) that the compositor composes onto the final display. How does Arcan handle this? And how does it play with the network transparency?

                  I recall seeing a paper from MSR at SIGGRAPH around 2005ish that rendered fonts entirely on the GPU by turning each bezier curve into two triangles (formed from the four control points) and then using a pixel shader to fill them with transparent or coloured pixels on rendering. That always seemed like a better approach since you just stored a fairly small vertex list per glyph, rather than a bitmap per glyph per size, but I’m not aware of any rendering system actually using this approach. Do you know why not? I presume things like font hinting made it a bit more complex than the cases that the paper handled, but they showed some very impressive performance numbers back then.
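
                  For what it’s worth, that description sounds like Loop and Blinn’s SIGGRAPH 2005 paper on resolution-independent curve rendering (an attribution I’m guessing at from memory). If so, the quadratic case assigns the three control points the texture coordinates (0,0), (1/2,0) and (1,1), lets the rasteriser interpolate them, and has the pixel shader evaluate the implicit form

                      f(u, v) = u^2 - v, \qquad \text{fill the pixel where } f(u, v) \le 0

                  with an analogous but more involved test for cubics.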

                  1. 3

                    I’m quite surprised by this. I’d assume you wouldn’t render an entire font, but maybe blocks of 128 glyphs at a time. If you’re not doing sub-pixel AA (which seems to have gone out of fashion these days), it’s 8 bits per pixel.

                    You could’ve gotten away with an alpha-coverage-only 8-bit texture had it not been for those little emoji fellows; someone gave acid to the LOGO turtles and now it’s all technicolour rainbow – so full RGBA it is. While it is formally not a requirement anymore, there are old GPUs around and you can still get a noticeable difference when textures are a nice power-of-two (POT), so you align to that as well. Then come the quality nuances when rendering scaled: since accessibility tools like to zoom in and out, you want those to look pretty and not alias or shimmer too badly. The better way for that is still mip-mapping, so there is a point to rasterising at a higher resolution, switching that mipmap toggle on and having the GPU sort out which sampling level to use.

                    That sounds like an interesting set of optimisations. Can you quantify ‘substantial’ at all? Do you know if Quartz does anything similar?

                    There was already a big leap for the TUI cases in not having W*H*BPP*2 or so pixels to juggle around, render to texture or buffer to texture, and pass onwards (that could be another *4 because with GPU pipelines and locking semantics you easily get drawing-to, in-flight, queued, presenting).

                    The rest was that the font rendering code we have is mediocre (it was 2003 and all that ..) and made some choices that don’t fit here. We cache on fonts, then the rasterizer caches on resolved glyphs, and the outliner/shaper caches on glyph lookup. I don’t have the numbers available, but at napkin level I got it to around 50-75% overhead versus the uncompressed size of the font. Multiply that by the number of windows open (I drift towards the upper two digits of active CLI shells).

                    The size of a TPACK cell is somewhere around 8 bytes or so, using UCS4 even (you already needed the 32 bits due to having font-index addressing for literal substitution), then add some per-line headers. It also does I- and P-frames, so certain changes (albeit not scrolling yet) are more compact. I opted against trying to be overly tightly packed as that has punished people in the past, and for the network case ZSTD just chews that up into nothing. It’s also nice having an annotation-compact, text-only intermediate representation to juggle around. We have some subprojects about to leverage that.

                    Do you know if Quartz does anything similar? I suspect it’s a bit tricky if you’ve got multiple rounds of compositing, since you need to render text to some texture that the app then renders into a window (possibly via multiple rounds of render-to-texture) that the compositor composes onto the final display. How does Arcan handle this? And how does it play with the network transparency?

                    I don’t remember what Quartz did or how their current *Kits do it, sorry.

                    For Arcan itself it gets much more complicated and is a larger story, as we are also our own intermediate representation for UI components and nest recursively. The venerable format-string-based ‘render_text’ call at the Lua layer forces local rasterisation of text, as some genius thought it a good idea to allow arbitrary embedding of images and other video objects. There’s a long checklist of things to clean up, but that’s after I close down the network track. Thankfully a much more plastic youngling is poking around in those parts.

                    Speaking of networking – depending on the network conditions we outperform SSH when it starts to sting. The backpressure from things like ‘find /’ or ‘cat /dev/random’ resolves and renders locally and with actual synch in the protocol you have control over tearing.

                    I recall seeing a paper from MSR at SIGGRAPH around 2005ish that rendered fonts entirely on the GPU by turning each bezier curve into two triangles (formed from the four control points) and then using a pixel shader to fill them with transparent or coloured pixels on rendering.

                    AFAIR @moonchild has researched this more than me as to the current glowing standards. Back in ‘05 there was still a struggle getting the text part to behave, especially in 3D. Weighted channel-based hinting was much more useful for tolerable quality as well, and that was easier as a raster preprocess. Eventually Valve set the standard with SDFs, which is still(?) the dominant solution today (recently made its way natively into FreeType), along with quality optimisations like multi-channel SDFs.

                    1. 1

                      Thanks. I’m more curious about the absolute sizes than the relative savings. Even with emoji, I wouldn’t expect it to be a huge proportion of video memory on a modern system (even my 10-year-old laptop has 2 GiB of video memory). I guess it’s more relevant on mobile devices, which may have only this much total memory.

                      1. 1

                        I will try to remember to actually measure those bits myself; I can’t find the thread where C-pharius posted it on Discord because, well, Discord.

                        The savings are even more relevant if you hope to either a. at least drive some machines from an FPGA’d DIY graphics adapter instead of the modern monstrosities, b. accept a 10-15 year rollback in terms of available compute should certain conflicts escalate, or c. try to consolidate GPU processing onto a few victim machines or even VMs (though the latter are problematic, see below) – both of which I eventually hope for.

                        I layered things such that the Lua API looks like a balance between ‘animated display postscript’ and ‘basic for graphics’ so that packing the calls in a wire format is doable and asynchronous enough for de-coupling. The internal graphics pipeline also goes through an intermediate-representation layer intended for a wire format before that gets translated to GL calls for the same reason – at any time, these two critical junctions (+ the clients themselves) cannot be assumed/relied upon to be running on the same device / security domain.

                        Public security researchers (CVE/bounty hunters) have in my experience been pack animals as far as targeting goes. Mobile GPUs barely did their normal job correctly, and absolutely not securely, for a long time, and little to nothing could be heard. From DRM- (as in corporate malware) unfriendly friends I’ve heard of continuous success bindiffing Nvidia blobs. Fast > Features > Correct > Secure seems generally to be the priority.

                        With DRM (as in direct rendering manager) the same codebase hits BSDs and Linux alike, and for any VM compartmentation, VirGL cuts through it. The whole setup is massive. It evolves at a glacial pace and its main job is different forms of memcpy where the rules for src, dst, size and what happens to the data in transit are murky at best. “Wayland” (as it is apparently now the common intersection for several bad IPC systems) alone would’ve had CVEs coming out the wazoo had there been an actual culture around it; we are still waiting around for conformance tests, much less anything requiring more hygiene. Fuzzing is non-existent. I am plenty sure there are people harvesting and filling their barns.

                      2. 1

                        An amusing related curiosity I ran across while revisiting a few notes on some related topic - https://cgit.freedesktop.org/wayland/wayland/tree/NOTES?id=33a52bd07d28853dbdc19a1426be45f17e573c6b

                        “How do apps share the glyph cache?”

                        That’s the notes from the first Wayland commit covering their design axioms. Seems like they never figured that one out :-)

          2. 3

            Ah, that makes sense, thanks. I’m definitely sympathetic to the first problem.

        2. 1

          With irssi I’m using GPU acceleration because my terminal emulator is OpenGL-based.

      2. 4
        1. 1

          Sadly I’m blocked by no support for SSO

          1. 4

            as your link says:

            Account creation and SSO will come with OIDC. OIDC will come in September.

            the code’s there and works; just needs to be released and documented. NB that shifting to native OIDC will be a slightly painful migration though; some of the old auth features may disappear until reimplemented in native OIDC, which may or may not be a problem for you.

      3. 4

        If you’re on Android, note that an early release of Element X just hit the Play Store yesterday: https://element.io/blog/element-x-android-preview/.

    9. 9

      Vaultwarden for my passwords.

      1. 2

        Yep, I hear great things about Vaultwarden; I’m a 1Password user though. :)

    10. 12

      Am I the only one that found this hilarious? The pointed references to “the manufacturer” got me laughing first. “It’s a little awkward that we beat the big corporation…” got me again. And then the subtle “the proprietary compiler won’t use it when compiling our test shaders” (which of course ended up being “but our compiler will”) got me a third time.

      1. 17

        Eh, the phrasing makes it sound like Apple isn’t supporting it because they haven’t worked it out yet, rather than because they deliberately deprecated OpenGL on their platforms 5 years ago.

        1. 3

          But you’d hope they’d at least sell an OpenGL dongle

    11. 3

      I’ve yet to try this one, but it’s been recommended to me a few times. I’m still using go-jira, although it’s very broken with the latest JIRA versions and doesn’t seem to be an active project (my admittedly incomplete PR is languishing like dozens of others).

      Extending go-jira is… interesting. You write weird little embedded shell scripts in YAML files that are executed by sub-processes of the main binary in different phases.

      1. 6

        The go-jira name is simply amazing. Doesn’t seem like they lean into the pun though?

        1. 1

          I know! A wasted opportunity.

      2. 2

        I knew a guy at Netflix who turned me on to go-jira, but at the time (and again as little as a year ago) it wasn’t working for me with my employer’s internally hosted JIRA. jira-cli at least works.

    12. 2

      I love FreeBSD to bits. My only gripe is that the services in the package manager are not configured with a watchdog by default.

    13. 1

      Doesn’t systemd also have an mDNS service? I wonder why it’s rarely the default on systemd distributions.

    14. 1

      Cross-platform? Does it support Windows? FreeBSD?

    15. 48

      This is wonderful news. Now people will be incentivized to set up IPv6, which means the documentation for setting up IPv6 will improve, which means more people will set up IPv6 by default, which eventually means everyone uses IPv6 and static IPs become free.

      1. 29

        I don’t see how it will incentivize ISPs to add IPv6 support. I would love to have IPv6 but my ISP doesn’t care (and I can’t switch ISPs).

        The only chance it will happen is if both things happen:

        1. Websites stop being accessible thru IPv4 on a significant scale.
        2. People blame ISPs for that instead of website operators.

        Because your suggestion means that AWS customers will spend more money to have IPv4 and, for some mysterious reason, would spend engineering effort to set up IPv6 on top of that. Doubling their costs for what exactly?

        Now, if AWS announced that they will stop allocating public IPv4 addresses by 2030, that would certainly get my ISP moving. But even that would not fulfill both parts of my test – the blame would fall on AWS.

        For now, I only see a prospect of shared/SNI hosting like GH pages, Netlify, or the good ol’ LAMP hosting being more attractive.

        1. 3

          There are government programs to pressure ISPs to add IPv6 support. Depending on the country, of course.

        2. 2

          What if the Google front page complained about your ISP being bad when you access it via IPv4?

        3. 2

          My ISP (CenturyLink) supports IPv6, but it is almost worse than if they didn’t! I think they implemented some transitory version (6rd, also over PPPoE!) and seem to have never updated it since (e.g. they consider it “job done”?). With what seems to be the proliferation of buggy DHCPv6 and prefix delegation, plus weird issues with IPv6 auto-address selection[1], getting a stable IPv6 address on an internal network seems nearly impossible. I’ve been tempted to try NAT66 ffs!

          [1]: You were originally supposed to be able to use multiple IPv6 networks on the same segment (e.g. a GUA/publicly-routable/globally-unique and a ULA/site-local), and have address selection pick the site-local one when it is relevant (via a source address selection algorithm) and the GUA otherwise. I don’t think I ever saw it work right! I think these days site-local addresses are even considered “deprecated”.

      2. 10

        There is a long way from “paid/expensive IPv4 addresses” to “IPv6-only services that would force people to get IPv6 connectivity”.

        I used to be an IPv6 zealot 10+ years ago. Today I am resigned to the fact that IPv4 will be around forever.

        1. 13

          I used to be an IPv6 zealot 10+ years ago. Today I am resigned to the fact that IPv4 will be around forever.

          About 10 years ago, my ISP (a former monopoly that is notorious for putting any kind of infrastructure investment off until not doing it will lose them a lot of customers) operated equipment on their backbones that dropped packets that weren’t well-formed IPv4 packets and broke IPv6 even for other ISPs buying transit from them. Now, with their consumer router, every machine on my network has IPv6 connectivity automatically and my browser connects to a surprising number of things with IPv6 without any issues.

          I suspect IPv4 will be like old Android releases: people will track the number of customers still using it and eventually decide that it isn’t worth the cost to keep supporting. Once a few companies make that decision, it will give cover to others wanting to do the same.

      3. 6

        I agree and hope that this is what will happen, especially since cloud providers seem to be one of the major sources of machines without IPv6 by default.

        However, I feel like the charge is basically too cheap. Given how expensive AWS is in the first place, it feels more like a way to increase Amazon’s revenue than something that will create a huge push for IPv6.

        1. 6

          At $44/yr I don’t see this being a big issue for anyone who spends any significant amount of money on AWS. Maybe it will move the needle on IPv6 adoption a small amount, but I just don’t see it making a big difference. I hope I’m wrong.

          1. 8

            At $44/yr I don’t see this being a big issue for anyone who spends any significant amount of money on AWS.

            That may be true, but I know a lot of folks on the lower end of things that this will be a significant change for. One thing I do is help non-profits get hosted as cheaply (and easily: I’d rather them NOT have to keep me on speed dial) as possible. Lightsail has been good for that. At $3.50 per month for Lightsail, the cost of the IP will double the cost of everything they are hosting. I completely agree that (usually) won’t break the bank, but a 100% increase in costs is still a 100% increase in costs.
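
            Roughly the numbers behind that, assuming the announced charge of $0.005 per public IPv4 address per hour:

                lightsail_per_year = 3.50 * 12             # $42.00 for the cheapest Lightsail plan
                ipv4_per_year      = 0.005 * 24 * 365      # $43.80, the ~$44/yr figure mentioned above
                print(ipv4_per_year / lightsail_per_year)  # ~1.04: the address alone roughly doubles the bill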

            The exception to the above is non-profits that have gone under, where I’ve helped them keep a web presence so that their work is not lost. In that case, there are some very specific “free-tier” providers that can be used to keep something going for just the cost of a domain name. AWS will no longer be part of that. I say that acknowledging this is a very niche use case.

      4. 3

        Now if only the solution to at least 10% of my networking problems weren’t “disable IPv6 at a system-wide and network-wide level to make sure nothing ever tries to use it, anywhere ever”, I could get on board with this.

        Ignoring all my other problems and complaints with IPv6 (notably, that reciting an IP address for v6 is a disaster), “it doesn’t even work 10%+ of the time” is a showstopper that makes me laugh at this in the “please stop trying to make Fetch happen” way.

        Then again - freeing up IPv4 addresses in the server space will reduce the need for me to care about IPv6 at all on the client side, as the server sides can NAT their way through the mess transparently to me, so maybe in the spirit of this article 1 and a few others I’ve read that talk about IPv6 being a flop, this is actually a good thing. Shrug.

    16. 3

      The ability to add two numbers, producing a third number being the arithmetic sum of the prior two numbers.

    17. 3

      For me, the gTile gnome extension strikes the perfect balance. With a few quick key presses I can precisely position each window on a grid with a visual overlay to show where it will end up.

      I tend to work with overlapping windows, rather than tiled windows. For instance, rather than doing a half / half split between vscode and chrome, I’ll give each 7/8 of the screen so only one is fully visible at a time but I can easily click to bring one forward.

      1. 1

        I stay with GNOME largely thanks to gTile and the GNOME overview. I’m on the lookout for a good gTile alternative on Windows.

    18. 3

      A great alternative to pprint is gron, which lets you glean the global structure more easily.
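
      For anyone who hasn’t seen gron: it flattens JSON into one greppable “path = value” line per leaf. A rough Python sketch of the idea (gron’s real output also includes intermediate object/array lines):

          import json

          def flatten(value, path="json"):
              """Yield gron-style assignment lines for every leaf value."""
              if isinstance(value, dict):
                  for key, child in value.items():
                      yield from flatten(child, f"{path}.{key}")
              elif isinstance(value, list):
                  for index, child in enumerate(value):
                      yield from flatten(child, f"{path}[{index}]")
              else:
                  yield f"{path} = {json.dumps(value)};"

          for line in flatten({"user": {"name": "pbsds", "repos": [1, 2]}}):
              print(line)
          # json.user.name = "pbsds";
          # json.user.repos[0] = 1;
          # json.user.repos[1] = 2;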

    19. 1

      What if I have a set of version restrictions? I’d like to use as few instances of nixpkgs as possible.