1. 3

    On systemd-based systems, the initramfs /init is actually systemd itself

    Seems to be a small shell script on systemd-based Arch Linux.

    1. 1

      I’ve been thinking about this too. Nice to know that the signed pages webextension already exists!

      signed pages is not able to prevent the loading of the website even if the signature is invalid

      They should be able to implement this functionality for Firefox. They’re already using a blocking request filter, they definitely can prevent the page from being rendered if the check fails.

      the keys in localStorage

      Hm, why not just encrypt them with a key derived from the password? Or is there “remember password” functionality for convenience?

      1. 1

        The idea is that when you’re logged in, your keys stay in localStorage, and if you reload the HTML file (at which point it could have been tampered with), you are logged out and have to enter your password again. Password auto-fill can leak your password to an adversary too, so you need to disable that, which is a big inconvenience.

        1. 1

          What’s the point of using localStorage if you are logged out when you reload? Sharing the logged-in state across multiple browser tabs?

          1. 1

            Yes, this, and also a soft reload (clicking a link or entering cryptpad.fr in the location bar) will hit the cache; only F5 flushes the cache. Something I forgot to mention in the blog is that verifying the cross-domain iframe HTML is kind of a pain, since iframes don’t (yet) allow an integrity attribute. My solution was to load it manually first with the fetch() API (which supports integrity) and then load it in the iframe afterwards, hoping for a cache hit.
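            Roughly like this; a sketch of the idea, not CryptPad’s actual code (the URL and SRI hash are made up):

            // Hypothetical inner-frame URL and expected integrity hash.
            const url = 'https://sandbox.example.com/inner.html';
            const expected = 'sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=';

            // fetch() accepts an `integrity` option, so this rejects if the
            // response body doesn't match the expected hash.
            fetch(url, { integrity: expected })
              .then(() => {
                // The verified response should now be in the HTTP cache, so the
                // iframe load below will hopefully be served from that entry.
                const frame = document.createElement('iframe');
                frame.src = url;
                document.body.appendChild(frame);
              })
              .catch((err) => {
                // Hash mismatch or network failure: refuse to render the iframe.
                console.error('integrity check failed', err);
              });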

      1. 8

        I use LineageOS for microG, not compiled from source currently, but I did compile CyanogenMod (LineageOS predecessor) a couple times in the past (when a new version was available, but official builds for my device weren’t).

        microG is the only real substitute for Play Services, it provides a FOSS client for Google push notifications and stuff.

        1. 1

          I mention this fairly frequently so I hope I don’t sound like a broken record: one security downside to microG is that you need to enable signature spoofing so that it can impersonate the official Google Play Services.

          Personally, I’m willing to give up push notifications for proprietary apps. There are plenty of FOSS apps that don’t depend on Google Cloud Messaging.

          1. 5

            But the impersonation also requires a permission. Only microG is allowed to impersonate Play Services, not any random app you have installed. I’m perfectly fine with that.

        1. 31

          At this point most browsers are OS’s that run (and build) on other OS’s:

          • language runtime - multiple checks
          • graphic subsystem - check
          • networking - check
          • interaction with peripherals (sound, location, etc) - check
          • permissions - for users, pages, sites, and more.

          And more importantly, is there any (important to the writers) advantage to them becoming smaller? Security maybe?

          1. 10

            Browsers rarely link out to the system. FF/Chromium have their own PNG decoders, JPEG decoders, AV codecs, memory allocators or allocation abstraction layers, etc. etc.

            It bothers me that everything is now shipping as an Electron app. Do we really need every single app to have the footprint of a modern browser? Can we at least limit them to the footprint of Firefox 2?

            1. 9

              But if you limit it to the footprint of Firefox 2, then computers might be fast enough. (a problem)

              1. 2

                New computers are no longer faster than old computers at the same cost, though – Moore’s law ended in 2005 and consumer stuff has caught up with the lag. So the only speed-up from replacement is from clearing out bloat, not from actual hardware improvements in processing speed.

                (Maybe secondary storage speed will have a big bump, if you’re moving from hard disk to SSD, but that only happens once.)

                1. 3

                  Moore’s law ended in 2005 and consumer stuff has caught up with the lag. So the only speed-up from replacement is from clearing out bloat, not from actual hardware improvements in processing speed.

                  Are you claiming there have been no speedups due to better pipelining, out-of-order/speculative execution, larger caches, multicore, hyperthreading, and ASIC acceleration of common primitives? And the benchmarks magazines post showing newer stuff outperforming older stuff were all fabricated? I’d find those claims unbelievable.

                  Also, every newer system I’ve had since 2005 was faster than the one before. I recently had to use an older backup. Much slower. Finally, performance isn’t the only thing to consider: the newer process nodes use less energy and yield smaller chips.

                  1. 2

                    I’m slightly overstating the claim. Performance increases have dropped from exponential to incremental, and come from piecemeal optimization tricks that can only really be done once, chasing gains that used to be a straightforward result of increased circuit density.

                    Once we’ve picked all the low-hanging fruit (simple optimization tricks with major & general impact) we’ll need to start seriously milking performance out of multicore and other features that actually require the involvement of application developers. (Multicore doesn’t affect performance at all for single-threaded applications or fully-synchronous applications that happen to have multiple threads – in other words, everything an unschooled developer is prepared to write, unless they happen to be mostly into unix shell scripting or something.)

                    Moore’s law isn’t all that matters, no. But it matters a lot with regard to whether or not we can reasonably expect to defend practices like Electron apps on the grounds that we can maintain current responsiveness while making everything take more cycles. The era where the same slow code can be guaranteed to run faster on next year’s machine without any effort on the part of developers is over.

                    As a specific example: I doubt that even in ten years, a low-end desktop PC will be able to run today’s version of slack with reasonable performance. There is no discernible difference in its performance between my two primary machines (both low-end desktop PCs, one from 2011 and one from 2017). There isn’t a perpetually rising tide that makes all code more performant anymore, and the kind of bookkeeping that most web apps spend their cycles in doesn’t have specialized hardware accelerators the way matrix arithmetic does.

                    1. 4

                      Performance increases have dropped from exponential to incremental, and come from piecemeal optimization tricks that can only really be done once, chasing gains that used to be a straightforward result of increased circuit density.

                      I agree with that totally.

                      “Multicore doesn’t affect performance at all for single-threaded applications “

                      Although largely true, people often forget a way multicore can boost single-threaded performance: simply letting the single-threaded app have more time on a CPU core since other stuff is running on the others. Some OS’s, especially RTOS’s, let you control which cores apps run on specifically to exploit that. I’m not sure if desktop OS’s have good support for this right now, though. I haven’t tried it in a while.
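                      For what it’s worth, Linux does let you play with this from the shell via taskset; a rough sketch (core numbers and program names are made up):

                      taskset -c 3 ./single-threaded-app   # run the app on CPU 3 only
                      taskset -c 0-2 make -j3              # keep the noisy build on CPUs 0-2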

                      “There isn’t a perpetually rising tide that makes all code more performant anymore, and the kind of bookkeeping that most web apps spend their cycles in doesn’t have specialized hardware accelerators the way matrix arithmetic does.”

                      Yeah, all the ideas I have for it are incremental. The best illustration of where the rest of the gains might come from is Cavium’s Octeon line. They have offloading engines for TCP/IP, compression, crypto, string ops, and so on. On the rendering side, Firefox is switching to GPUs, which will take time to fully utilize. On the JavaScript side, maybe JITs could have a small, dedicated core. So there’s still room for speeding the Web up in hardware. Just not Moore’s-law gains without developer effort, like you were saying.

            2. 9

              Although you partly covered it, I’d say “execution of programs” is good wording for JavaScript since it matches browser and OS usage. There are definitely advantages to them being smaller. A guy I knew even deleted a bunch of code out of his OS and Firefox to achieve that, on top of a tiny backup image. Dude had a WinXP system full of working apps that fit on one CD-R.

              As far as secure browsers go, I’d start with designs from high-assurance security, bringing in mainstream components carefully. Some are already doing that. An older one inspired Chrome’s architecture. I have a list in this comment. I’ll also note that there were few of these because high-assurance security defaulted to just putting a browser in a dedicated partition that isolated it from other apps on top of security-focused kernels. One browser per domain of trust. Also common were partitioned network stacks and filesystems that limited the effect one partition using them had on the others. QubesOS and GenodeOS are open-source software that support these, with QubesOS having great usability/polish and GenodeOS architecturally closer to high-security designs.

              1. 6

                Are there simpler browsers optimised for displaying plain ol’ hyperlinked HTML documents that also support modern standards? I don’t really need 4 tiers of JIT and whatnot to make web apps go fast, since I don’t use them.

                1. 12

                  I’ve always thought one could improve on a Dillo-like browser for that. I also thought compile-time programming might make various components in browsers optional, so you could actually tune it to the amount of code or attack surface you need. That would require lots of work for mainstream stuff, but a project like Dillo might pull it off.

                  1. 10
                    1. 3

                      Oh yeah, I have that on a Raspberry Pi running RISC OS. It’s quite nice! I didn’t realise it runs on so many other platforms. Unfortunately it only crashes on my main machine; I will investigate. Thanks for reminding me that it exists.

                      1. 1

                        Fascinating; how had I never heard of this before?

                        Or maybe I had and just assumed it was a variant of suckless surf? https://surf.suckless.org/

                        Looks promising. I wonder how it fares on keyboard control in particular.

                        1. 1

                          Aw hell; they don’t even have TLS set up correctly on https://netsurf-browser.org

                          Does not exactly inspire confidence. Plus there appears to be no keyboard shortcut for switching tabs?

                          Neat idea; hope they get it into a usable state in the future.

                        2. 1

                          AFAIK, it doesn’t support “modern” non-standards.

                          But it doesn’t support JavaScript either, so it’s way more secure than mainstream ones.

                        3. 7

                          No. Modern web standards are too complicated to implement in a simple manner.

                          1. 3

                            Either KHTML or Links is what you’d like. KHTML would probably be the smallest browser you could find with a working, modern CSS, JavaScript and HTML5 engine. Links only does HTML <=4.0 (including everything implied by its <img> tag, but not CSS).

                            1. 2

                              I’m pretty sure KHTML was taken to a farm upstate years ago, and replaced with WebKit or Blink.

                              1. 6

                                It wasn’t “replaced”; Konqueror supports multiple backends, including WebKit, WebEngine (Chromium) and KHTML. KHTML still works relatively well for showing modern web pages according to HTML5 standards and fits OP’s description perfectly. Konqueror allows you to choose your browser engine per tab, and even switch on the fly, which I think is really nice, although this means keeping every engine you’re currently using loaded in memory.

                                I wouldn’t say development is still very active, but it’s still supported in KDE Frameworks; they still make sure that it builds at least, along with the occasional bug fix. Saying that it was replaced is an overstatement. Most KDE distributions do ship other browsers by default though, if any, and I’m pretty sure Falkon is set to become KDE’s browser these days, which is basically an interface for WebEngine.

                            2. 2

                              A growing part of my browsing is now text-mode browsing. Maybe you could treat full graphical browsing as an exception and go to the minimum footprint most of the time…

                          2. 4

                            And more importantly, is there any (important to the writers) advantage to them becoming smaller? Security maybe?

                            User choice. Rampant complexity has restricted your options to 3 rendering engines if you want to function in the modern world.

                            1. 3

                              When reimplementing malloc and testing it out on several applications, I found out that Firefox (at the time; I don’t know if this is still true) had its own internal malloc. It was allocating a big chunk of memory at startup and then managing it itself.

                              Back then I thought this was a crazy idea for a browser, but in fact it follows exactly the idea of your comment!

                              1. 3

                                Firefox uses a fork of jemalloc by default.

                                1. 2

                                  IIRC this was done somewhere between Firefox 3 and Firefox 4 and was a huge speed boost. I can’t find a source for that claim though.

                                  Anyway, there are good reasons Firefox uses its own malloc.

                                  Edit: apparently I’m bored and/or like archeology, so I traced back the introduction of jemalloc to this hg changeset. This changeset is present in the tree for Mozilla 1.9.1 but not Mozilla 1.8.0. That would seem to indicate that jemalloc landed in the 3.6 cycle, although I’m not totally sure because the changeset description indicates that the real history is in CVS.

                              2. 3

                                In my daily job, this week I’m working on patching a modern JavaScript application to run on older browsers (IE10, IE9 and IE8 + GCF 12).

                                The hardest problems are due to the differing implementation details of the same-origin policy.
                                The funniest problem has been one of the frameworks used, which used “native” as a variable name: when people speak about the good parts of JavaScript, I know they don’t know what they are talking about.

                                BTW, if browser complexity addresses a real problem (instead of being a DARPA weapon to take control of foreign computers), that problem is the distribution of computation across long distances.

                                Such a problem was not addressed well enough by operating systems, despite some mild attempts, such as Microsoft’s CIFS.

                                This is partially a protocol issue, as NFS, SMB and 9P were all designed with local networks in mind.

                                However, IMHO browser OSes are not the proper solution to the issue: they were designed for different goals, and they cannot abandon those goals without losing market share (unless they retain that share with weird marketing practices, as Microsoft did years ago with IE on Windows and Google is currently doing with Chrome on Android).

                                We need better protocols and better distributed operating systems.

                                Unfortunately it’s not easy to create them.
                                (Disclaimer: browsers as OS platforms and JavaScript’s ubiquity are among the strongest reasons that make me spend countless nights hacking on an OS.)

                              1. 3

                                Looks like the result at the top is pulled from the “min” column because it’s more favorable to nuster :)

                                1. 3

                                  I’m not sure how to interpret the requests per second. They are all over the place on nginx but the average is pretty close to nuster.

                                  1. 1

                                    Not min; it’s from “finished in 29.51s, 338924.15 req/s, 48.81MB/s” and “finished in 90.56s, 110419.16 req/s, 15.62MB/s”.

                                  1. 3

                                    Caches, caches, caches. Pretty much every language package manager accumulates stuff in ~/.folders.
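                                    A quick way to see how much has piled up (the directory list is illustrative; adjust it to the package managers you actually use):

                                    du -sh ~/.cache ~/.npm ~/.cargo ~/.m2 ~/.gradle ~/.gem 2>/dev/null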

                                    1. 4

                                      One of the downsides of mailing lists: git send-email is scary. Using it for the first time fills you with dread. You worry about sending everything right.
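                                      The mechanics are only a couple of commands; it’s getting the details right that fills you with dread (the list address here is illustrative):

                                      git format-patch -1 HEAD                       # writes 0001-*.patch
                                      git send-email --to=dev@lists.example.org \
                                          --cc=maintainer@example.org 0001-*.patch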

                                      Then possibly there’s a moderation delay (e.g. on freedesktop lists).

                                      Then the project owners don’t see your mail for a long time because they use goddamn Gmail, which in its infinite wisdom counts you as a spammer if your domain has strict SPF policies and you send mail to mailing lists, which mail others on your behalf from a server you didn’t approve…

                                      1. 1

                                        Laptop-grade sounds promising, I guess. Cortex cores so far have been really weak. Hopefully this is actually better.

                                        1. 6

                                          I think their direction is more exciting than the hardware itself, to be honest. The current crop of Snapdragon 835 PCs are appealing to me, because I’d rather have outstanding battery life than top-tier performance (they’re obviously nowhere close.) So, that-but-better is an attractive prospect to me.

                                        1. 2

                                          macOS Mojave?? I wonder if they’ve anticipated the amount of Fallout: New Vegas jokes coming their way…

                                          Also, “UIKit for the desktop”, didn’t Iconfactory try that a few years ago?

                                          1. 42

                                            GitLab is really worth a look as an alternative. One big advantage of GitLab is that the core technology is open source. This means that anybody can run their own instance. If the company ends up moving in a direction that the community isn’t comfortable with, then it’s always possible to fork it.

                                            There’s also a proposal to support federation between GitLab instances. With this approach there wouldn’t even be a need for a single central hub. One of the main advantages of Git is that it’s a decentralized system, and it’s somewhat ironic that GitHub constitutes a single point of failure.

                                            1. 17

                                              Federated GitLabs sound interesting. The thing I’ve always wanted though is a standardised way to send pull requests/equivalent to any provider, so that I can self-host with Gitea or whatever but easily contribute back and receive contributions.

                                              1. 7

                                                git has built-in pull requests. They go to the project mailing list, and people code review via normal inline replies. Glorious.
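                                                Roughly, with illustrative refs and addresses:

                                                git request-pull v1.0 https://example.org/repo.git my-feature   # summary mail for a branch
                                                git format-patch origin/master                                   # or mail the patches themselves
                                                git send-email --to=dev@lists.example.org *.patch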

                                                1. 27

                                                  It’s really not glorious. It’s a severely inaccessible UX, with basically no affordances for tracking that review comments are resolved, for viewing different slices of commits from a patchset, or integrating with things like CI.

                                                  1. 7

                                                    I couldn’t tell if singpolyma was serious or not, but I agree, and I think GitHub and the like have made it clear what the majority of devs prefer. Even if it was good UX, if I self-host, setting up a mail server and getting people to participate that way isn’t exactly low-friction. Maybe it’s against the UNIX philosophy, but I’d like every part of the patchset/contribution lifecycle to be first-class concepts in git. If not in git core, then in a “blessed” extension, à la hub.

                                                    1. 2

                                                      You can sort of get a tracking UI via Patchwork. It’s… not great.

                                                      1. 1

                                                        The only one of those GitHub is better at is integration with CI. It also has an inaccessible UX (it doesn’t even work on my mobile devices; I can’t imagine if I had accessibility needs…), doesn’t track when review comments are resolved, and has no UX facility for viewing different slices; you have to know git stuff to get the links.

                                                      2. 3

                                                      I’ve wondered about a server-side process (either listening on HTTP, polling a mailbox, etc.) that could parse the format generated by git request-pull and create a new ‘merge request’ that can then be reviewed by collaborators.
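                                                      For reference, the output such a process would parse looks roughly like this (hashes, dates and the URL are placeholders):

                                                      The following changes since commit 1234abc:

                                                        an earlier commit subject (2018-06-01 00:00:00 +0000)

                                                      are available in the Git repository at:

                                                        https://example.org/repo.git my-feature

                                                      for you to fetch changes up to 5678def:

                                                      …followed by a separator line, a shortlog and a diffstat.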

                                                        1. 2

                                                        I always find it funny that the people who argue email is a technology with many inherent flaws that cannot be fixed are usually the same people who advocate using the built-in git features that run over email…

                                                      3. 6

                                                        Just re: running your own instance, gogs is pretty good too. I haven’t used it with a big team so I don’t know how it stacks up there, but I set it up on a VPS to replace a paid Github account for private repos, where it seems fast, lightweight and does everything I need just fine.

                                                        1. 20

                                                          Gitea is a better maintained Gogs fork. I run both Gogs on an internal server and Gitea on the Internet.

                                                          1. 9

                                                      Yeah, stuff like gogs works well for private instances. I do find the idea of having public federated GitLab instances pretty exciting as an alternative to GitHub for open source projects though. In theory this could work similarly to the way Mastodon works currently. Individuals and organizations could set up GitLab servers that would federate between each other. This could allow searching for repos across the federation, tagging issues across projects on different instances, and potentially failover if instances mirror content. With this approach you wouldn’t be relying on a single provider to host everybody’s projects in one place.

                                                          2. 1

                                                            Has GitLab’s LFS support improved? I’ve been a huge fan of theirs for a long time, and I don’t really have an intense workflow so I wouldn’t notice edge cases, but I’ve heard there are some corners that are lacking in terms of performance.

                                                            1. 4

                                                              GitLab has first-class support for git-annex which I’ve used to great success

                                                          1. 3

                                                            My first recommendation is to use a fast terminal with a bitmap font. The reason I say bitmap is because it will keep up with the display of a live capture.

                                                            In any reasonable terminal, bitmap shouldn’t be faster, because glyphs from vector fonts are rendered only once and cached into a glyph atlas in memory and then painted exactly like a bitmap. With Alacritty, that happens on the GPU :)

                                                            1. 3

                                                              In any reasonable terminal, bitmap shouldn’t be faster, because glyphs from vector fonts are rendered only once and cached into a glyph atlas in memory and then painted exactly like a bitmap.

                                                              Nah. In bitmap fonts, a pixel is either set or not. Vector fonts tend to generate partially covered pixels that are handled by alpha blending, which is more expensive than the and+or you use to blit a hard-edged shape.

                                                              The trend to push everything onto the GPU is quite harmful, especially for those of us not running mainstream OSen.

                                                              1. 2

                                                                You don’t have to do alpha blending every time, just cache the glyph together with the background color.

                                                                1. 1

                                                                  But do terminals actually do that? Background color can change any time at any cell.

                                                                  I think that would be a silly way to spend memory.

                                                            1. 12

                                                              Unmentioned: Hardware RAID generally has battery backup so writes are completed even if the power fails (or the kernel panics). Software and Fake RAID can’t do that.

                                                              1. 4

                                                                Or, put another way: having that allows them to (legitimately) acknowledge writes before the data has actually hit the disk platters and hence offer better write performance – i.e. without it they would presumably (hopefully!) wait to acknowledge writes until the data actually had hit the platters, rather than “cheating” and losing data on a power loss.

                                                                That said, with the SSDs that are now easily available you can achieve a similar effect using host-side software layers like bcache/dm-cache in writeback mode.
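                                                                A rough sketch of that with bcache (device names are illustrative, and make-bcache destroys what’s on them):

                                                                make-bcache -B /dev/sdb1         # slow backing device
                                                                make-bcache -C /dev/nvme0n1p1    # fast cache device
                                                                echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
                                                                echo writeback > /sys/block/bcache0/bcache/cache_mode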

                                                                1. 1

                                                                  Generally that is the best option; it only breaks down when the drives lie about syncing to disk (some cheap SSD and HDD controllers still do, since it gets better benchmark results).

                                                                2. 1

                                                                  More unmentioned: with a copy-on-write FS like ZFS, you won’t ever get corruption from incomplete writes, because writes are atomic.

                                                                1. 12

                                                                  Wow, that’s a lot of bloat, and a great demonstration of why I don’t use Gnome (or KDE).

                                                                  I’m much happier with StumpWM, which just does its job and doesn’t try to integrate with everything.

                                                                  1. 12

                                                                    Unfortunately, if you want Wayland — and I do, as it really has made all my vsync/stuttering/tearing issues go away; Fedora is now as smooth as my mac/windows — your choices are limited. Sway is starting to look good but otherwise there’s not much at the minimal end of the spectrum.

                                                                    If I have to choose between GNOME and KDE, I pick GNOME for the same reasons the author of this piece does. I was hoping the tips would come down to more than “uninstall tracker, evolution daemons et al. and hope for the best”. I’ve done that before on Fedora and ended up wrangling package dependencies in yum. I really wish GNOME/Fedora would take this sort of article to heart and offer a “minimal GNOME” option which is effectively just gnome-shell.

                                                                    1. 3

                                                                      Why is Wayland adoption going so poorly? Is it because few distributions have it as the default, or is it because it’s harder? I see many tiling WMs written in 50 different languages, and it seems that Sway is slowly making its way to being a usable WM, but it looks like slow adoption from my point of view.

                                                                      1. 4

                                                                        It is a slow adoption, and I’m not particularly sure why. Most (all?) of the tiling WMs for X leverage Xlib or XCB, right? Perhaps it’s just taken some time for a similarly mature compositor lib to appear for Wayland (indeed, Sway is replacing its initial use of wlc with wlroots, which may end up being that).

                                                                        As for why Wayland in general isn’t more prevalent, I’d guess compatibility. X is just so well established that replacing it is inherently a lot of work in the “last mile”. Fedora/GNOME/Wayland works great for me with my in-kernel open AMD driver. Maybe it’s not as good for Intel iGPUs? Maybe it’s not so good on Nvidia systems? Maybe it doesn’t work at all on arm SoC things? I have no idea, but I can easily understand distros holding off on making it default.

                                                                        1. 3

                                                                          Maybe it’s not so good on Nvidia systems?

                                                                          Exactly, the proprietary driver does not support GBM, they’ve been pushing their own thing (EGLStreams) that compositors don’t want.

                                                                          Maybe it’s not as good for Intel iGPUs? Maybe it doesn’t work at all on arm SoC things?

                                                                          Everything works great with any open drivers, including VC4 for the RPi.

                                                                          1. 2

                                                                            Maybe it’s not as good for Intel iGPUs?

                                                                            Just a data point: I got a new ThinkPad recently and installed Linux on it, together with GNOME 3. Only yesterday I discovered it had been running on Wayland the whole time, with no apparent problems whatsoever. And that includes working with a dock with two further displays attached, and Steam games. Even the touch panel on the screen works without any further config.

                                                                        2. 1

                                                                          Unfortunately, if you want Wayland — and I do, as it really has made all my vsync/stuttering/tearing issues go away; Fedora is now as smooth as my mac/windows

                                                                          And effortless support for multiple displays with different DPIs, plus better isolation of applications. I completely agree, when I switched to Wayland on Fedora 25 or 26, it was the first time I felt in a long time that the Linux desktop is on par again with macOS and Windows (minus some gnome-shell bugs that seem to have been mostly fixed now).

                                                                          At some point, I might switch to Sway. But with Sway 0.15, X.org applications are still scaled up and blurry on a HiDPI screen (whereas they work fine in GNOME). I’ll give it another go once Sway 1.0 is out.

                                                                          1. 1

                                                                            not much at the minimal end of the spectrum

                                                                            Weston! :)

                                                                            My fork even has fractional scaling (Mac/GNOME style downscaling) and FreeBSD support.

                                                                            1. 1

                                                                              There’s a Wayland for FreeBSD? I thought Wayland had a lot of Linux-specific stuff in it?

                                                                              1. 3

                                                                                Sure, there is some, but who said you can’t reimplement that stuff?

                                                                                • libwayland, the reference implementation of client and server libraries, uses epoll. We have an epoll implementation on top of kqueue.
                                                                                • Most compositors use libinput to read from input devices, and libinput:
                                                                                  • reads from evdev devices (via libevdev but that’s a really thin lib). We have evdev support in many drivers, including Synaptics (with TrackPoint support).
                                                                                  • uses libudev for device lookup and hotplug. We have a partial libudev implementation on top of devd.
                                                                                • For GPU acceleration, compositors need a modern DRM/KMS/GBM stack with PRIME and whatnot. We have that.
                                                                                • Compositors also need some way of managing a virtual terminal (vt), this is the fun part (not).
                                                                                  • direct vt manipulation / setuid wrapper (weston-launch) is pretty trivial to modify to support FreeBSD, that’s how Weston and Sway work right now
                                                                                  • I’m building a generic weston-launch clone: loginw
                                                                                  • ConsoleKit2 should work?? I think we might get KDE Plasma’s kwin_wayland to work on this??
                                                                                  • there were some projects aimed at reimplementing logind for BSD, but they didn’t go anywhere…
                                                                                1. 1

                                                                                  For GPU acceleration, compositors need a modern DRM/KMS/GBM stack with PRIME and whatnot. We have that.

                                                                                  Do NVidia’s drivers use the same stack, or are they incompatible with the Wayland port? I’d give Wayland a try, but it seems hard to find a starting point… I’m running CURRENT with custom Poudriere-built packages, so patches or non-standard options aren’t a problem, I just can’t find any info on how to start.

                                                                                  1. 2

                                                                                    No, proprietary nvidia drivers are not compatible. Nvidia still does not want to support GBM, so even on Linux, support is limited (you can only use compositors that implemented EGLStreams, like… sway 0.x I think?) Plus, I’m not sure about the mode setting situation (nvidia started using actual proper KMS on Linux recently I think?? But did they do it on FreeBSD?)

                                                                                    It should be easy to import Nouveau to drm-next though, someone just has to do it :)

                                                                                    Also, you can get it to work without hardware acceleration (there is an scfb patch for Weston), but I think software rendering is unacceptable.

                                                                            2. 1

                                                                              I tried to give Wayland a try twice, on both my media PC and a new laptop. It’s still really not there yet. I use i3 on X11, and Sway is really buggy, lacks a lot of backwards-compatibility stubs (notification tray icons are a big one), and just doesn’t quite match i3 yet. Weston, the reference compositor, had a lot of similar problems when I used it with my media PC.

                                                                              I want to move on to Wayland, and I might give that other i3 drop-in for Wayland a try in the future, but right now it’s still not there yet.

                                                                          1. 2

                                                                            GnuPG is — in addition to being an OpenPGP client — also an S/MIME client

                                                                            Wat.

                                                                            I mean, I know it has all the things, but I did not expect this one in particular.

                                                                            1. 2

                                                                              All the GDPR stuff is full of “business organization company …” What about individuals operating non-commercial websites? Specifically outside the EU?

                                                                              Let’s say I operate a forum in Russia, some EU citizens come and save their home addresses into their profiles on the forum and then want me to erase all the things. I reply with “sorry, your data is stored in 10 blockchains, it’s immutable”. They complain to their local data protection thingy. What happens next?

                                                                              1. 0

                                                                                /r/Buttcoin is probably a better place for this than lobste.rs :D

                                                                                Sad to see HTC picking up on the blockchain hype.

                                                                                1. 2

                                                                                    Isn’t ProtonMail in the browser? That seems like a hell of a big leap of trust to make already. You’d be better off encrypting it yourself (and hosting it yourself too; well, this is debatable, but the fewer actors involved the better, no?).
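                                                                                    E.g. the old-school route keeps the crypto in a native tool entirely (key ID and filename are illustrative):

                                                                                    gpg --encrypt --sign -r alice@example.org message.txt   # writes message.txt.gpg
                                                                                    # then send the .gpg file through whatever provider you like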

                                                                                  1. 2

                                                                                    If you assume that the adversary could change protonmail’s encryption script, that adversary also could’ve modified the crypto software in your OS’s repository…

                                                                                    1. 5

                                                                                      I think superpat was implying that anything depending on a browser makes its security equivalent to finding a hole in the browser, and that happens a lot. In high-assurance security, the standard way to do secure email was a combination of proxies and/or guards. The proxies are what matters here: they sat in their own process between the native client and the network and handled the crypto. You could write them pretty memory-safely. You couldn’t do that if relying on a common browser.

                                                                                      1. 2

                                                                                        ^^this

                                                                                  1. 1

                                                                                    I love Docker. I really do, but here’s the thing that we’ll regret about Docker, and we don’t need a massive article to explain it.

                                                                                    You are letting regular users do this:

                                                                                    echo 'echo "Hello." > /etc/motd' | docker run -i -v /etc:/etc -- bash

                                                                                    Hooray! I made a file as root! I don’t even need sudo or su any more!

                                                                                    Here’s hoping you’re not running the Docker HTTP server locally so that I can do that to you over HTTP in a coffee shop. Wait… You’re not, right?

                                                                                    While I’m at it, maybe I’ll flip the sticky bit on your shell or cat my system root over to another server!

                                                                                    This is great!

                                                                                    1. 2

                                                                                      What, does that actually work!?

                                                                                      1. 6

                                                                                        No. Connecting to the Docker daemon requires root, so that command will fail unless you’ve modified things so regular users can access the daemon.
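                                                                                        Concretely, access is gated by the socket’s group ownership, and the usual “modification” is putting a user in the docker group, which is effectively granting root (the username is illustrative):

                                                                                        ls -l /var/run/docker.sock      # srw-rw---- root docker
                                                                                        sudo usermod -aG docker alice   # alice can now do the /etc trick above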

                                                                                        1. 1

                                                                                          Obviously this assumes your user is in the docker group. I’m suggesting that - if we want to point fingers at Docker - the bigger issue is these kinds of things happening in places like development environments.

                                                                                          I’m much more worried about someone being able to SCP a developer’s root device because they are running the Docker API w/out auth than I am worried about Docker failing in production, for instance. People who run Docker in production generally know how it works. People running Docker on workstations generally have no idea, and will happily paste commands as they’re told.

                                                                                          1. 0

                                                                                            and will happily paste commands as they’re told.

                                                                                            This is not a Docker-specific problem. I can give you a long list of non-Docker commands that will do bad things if pasted blindly.

                                                                                            1. 1

                                                                                              The difference, specifically, is that it surprises most users. I’ve worked with people on Docker projects who didn’t realize that when you launch a container, the process runs as your host system’s root user. Although this is obvious to someone who knows how Docker works, it’s not obvious to everyone, and it’s dangerous.

                                                                                              Anyway, my original point was mostly sarcastic, and apparently that didn’t come across well.

                                                                                        2. 2

                                                                                          Yeah. When you launch a container, it is running on the same kernel as the Linux host (no VM), as the root user. If you mount a directory as a volume, you can essentially access it as root. A lot of people don’t realize the permissions they are giving Docker containers, and I personally believe that this is probably the most concerning issue with Docker right now.

                                                                                          We have all of our development on managed remote servers, because running local services in Docker is super dangerous if you don’t understand how Docker works internally imo.

                                                                                          1. 2

                                                                                            Isn’t that one of the limitations gVisor is trying to remedy by “controlling” all syscalls?

                                                                                            1. 1

                                                                                              Hmm! Maybe! gVisor looks interesting but I haven’t read much on it yet :)

                                                                                        3. 2

                                                                                          Well, hopefully no one would do -v /etc:/etc on random containers that don’t actually need to touch the host’s /etc.
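                                                                                          And when a container legitimately does need host files, a read-only mount at least limits the blast radius (the image name is illustrative):

                                                                                          docker run -i -v /etc:/etc:ro some-image                      # can read but not modify /etc
                                                                                          docker run -i -v /etc/app.conf:/etc/app.conf:ro some-image    # better: mount only what it needs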

                                                                                          1. 2

                                                                                            My concerns are 100x more about workstations than servers. So many developers are running Docker and don’t even know what it does.

                                                                                            How many people just copy/paste docker run commands to get work done? This is a social exploit w/ Docker users, but I do believe that it’s still a valid exploit and I don’t think that invalidating it is helping anyone.

                                                                                            1. 0

                                                                                              Agreed, the stupidity of the user can’t really be blamed on the tool. The user can just as easily do dumb things with standard Linux tools.

                                                                                          1. 2

                                                                                            mdocml is small and has minimal dependencies, but it has runtime dependencies - you need it installed to read the man pages it generates. This is Bad.

                                                                                            mdoc is part of the system. I guess not on Linux??

                                                                                            1. 3

                                                                                              mdoc is part of the system on Linux too.

                                                                                              1. 3

                                                                                                Depends on the Linux.

                                                                                                1. 1

                                                                                                  Do you have any particular distribution in mind where it isn’t?

                                                                                              2. 1

                                                                                                Guess what? There is life outside Unix! :-D

                                                                                              1. 5

                                                                                                Tested drm-v4.15 for FreeBSD, turns out it works — even with AMDGPU DC! (On Polaris, but looking forward to someone testing if Vega works.)

                                                                                                Updated weston-rs to use libweston’s new head-based output API.

                                                                                                Updated the SDKs/dependencies of freepass’s Android version (which doesn’t really work yet, but now I can work on it again maybe).