1. 1

    I wonder what they have been using that data for. Were they doing something boring with it like selling it to advertisers or were they doing something interesting like, say, maybe finding malware distribution sites by finding URLs that have really strong positive correlation with malware-infected computers?

    1. 2

      I wonder how many projects are collecting data just for the heck of it, because everyone else is doing that too and mayyybe it’ll be useful one day?

    1. 7

      This line of reasoning forms a reasonable argument against TDD and ‘early testing’ in general, but

      For other types of code, your time is better spent carefully re-reading your code or having it reviewed by a peer.

      is a false dichotomy. You should reread your code and have it reviewed and write tests.

      In my experience the time writing and executing tests that verify that even small functions do what I think they do is worth it. When I am being lazy and think I can do without a test, and the reviewer doesn’t catch either the missing test or the bug, it simply happens too often that I am quickly reproved when the code hits production.

      I have started to enjoy writing tests, exactly because they provide confidence my code does what it is supposed to do and will not bother everyone else by containing a stupid bug that could have been prevented by spending an additional 30 minutes writing some tests.
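      As a concrete (made-up) illustration of the kind of small-function test meant here — the function and all its cases are hypothetical, not from any project in the thread:

      ```python
      # Hypothetical small function plus a few unit tests for it. The
      # function is trivial, but the empty-input and oversized-chunk
      # cases are exactly the kind of thing that slips past review.

      def chunk(items, size):
          """Split a list into consecutive chunks of at most `size` items."""
          if size <= 0:
              raise ValueError("size must be positive")
          return [items[i:i + size] for i in range(0, len(items), size)]

      def test_chunk():
          assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
          assert chunk([], 3) == []            # empty input: easy to get wrong
          assert chunk([1, 2], 5) == [[1, 2]]  # size larger than the list

      test_chunk()
      ```

      Thirty minutes spent on tests like these is cheap compared to debugging the same bug in production.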

      1. 8

        is a false dichotomy

        Kinda, but. Everything you do consumes time. Review and tests included. If time were of no concern and you wanted maximum quality, maybe you’d do formal proof, verification, review, and then, if you don’t trust your proof, maybe you’d do all the tests you want. When time and money are limited, you opt for what gives the most bang for the tick-tock. I’m more and more convinced that testing has become such a religion that some places exercise it without regard for cost versus benefit. Of course, it’s damn hard to measure. But I’ve seen too many bad tests (that confuse the developer when they break and they need to figure out WTF is wrong and what to do about it), tests that don’t actually test anything useful, tests that are redundant because the functionality is tested multiple times at multiple levels, test batteries that have grown so big people don’t actually run them because it takes forever…

        1. 2

          All true: it all depends on what your process is currently like and how it can improve.

          I mainly want to warn against the implicit assumption that the trade-off in time has to be made among QA activities such as rereading code, reviewing and testing. The time for testing can also come from implementing fewer features or hiring an additional engineer. Or from starting to write unit tests and discovering that doing so actually saves time because of the reduction in debugging and rework.

          Damn hard to measure, but either you start measuring it, or you find some other way to honestly appraise the value of certain activities (root-cause analysis of bugs, and of the ways they could have been prevented, is one), or you don’t improve, because you are only shifting time between activities that add value instead of adding time and evaluating the results.

      1. 28

        I guess the author hasn’t worked on large code bases. Tests avoid regressions without needing to analyse the impact on the whole codebase.

        One of the points of tdd is in fact that it encourages small interfaces. Even in cases where the interface is not stable, having a suite of tests that captures the expected scenarios and behaviors allows developers to make changes (including during initial development) and know the impact on existing functionality.

        There are cases where manual testing makes more sense, but I’ve found those to be the exception rather than the rule. Generally scripts of no more than 1000 lines with a single well defined purpose. Of course those can then be integration tested, and also manually tested as a single unit.

        1. 14

          The post isn’t “don’t test,” it’s “mostly avoid unit tests.”

          I’m kinda inclined to agree with the author here, though I think it really depends on what sort of software you’re working on. There are projects that inevitably have lots and lots of easily unit-testable interfaces. And there are projects that are inherently very stateful, making it difficult to unit test without lots of mocking. You can still do integration (or black box, or whatever) testing.

          I’m kinda in this boat at work, with a highly stateful system. There are a few things you could break out into a rather easily testable API that takes an input and outputs a result that you can check, but these are generally trivial, and if these parts don’t work, then the integration tests would reveal it anyway. So why maintain redundant unit tests? That’s not where the hard parts are.

          You could try highly stateful, highly mocked unit tests for the stateful parts but (without experience) I’ll say that it’s probably going to be high maintenance effort for poor yield. Keep changing the mocks as internals change. I’m concerned that they still wouldn’t catch the hard bugs.

          The hard bugs relate to transient behavior of the system. Threads being an obvious case. Flow control. Changes in system state that affect other components. IME unit tests are really bad at catching these types of bugs. And I’ve watched people struggle to write test cases for that type of bug.

          I wish the system I was working on were as easy to test as SQLite, but no…

          1. 6

            The biggest advantage of unit tests for stateful systems, in my experience, is that for the most part they point directly to the place in the code that is busted. Integration tests tend to cover a lot of ground, but it can be hard to pinpoint what went wrong.

            My policy on this is always evolving, but usually I will use integration tests as my main line of defense for a brand new feature. This will cover the most ground but let me get something out there. Then, once I find issues outside of the happy path, I tend to target them with more unit-y tests.

            In practice, often this means that a bug in production turns into “factor out this problematic code to make it more testable”, then writing a test against it.

            This leaves the integration tests out of most edge case testing, but means that when other people hit corner cases they have a well documented example to work off of.

            If I write a specific test on an edge case, it’s more likely to be seen as intentional than if it’s just a part of an integration test.

            1. 4

              As an SRE and distributed database engineer, I despise traditional unit tests for large stateful systems. I touched on this a lot in this talk on testing complex systems. Your biases while writing the implementation are the same as your biases when writing the tests, so don’t write tests at all. Use tools that generate tests for you, because that pushes the bias space into the realm of broad classes, which our puny minds can stumble into much more easily than enumerating interleaving spaces of multiple API calls or realistic hardware phenomena over time. You can apply this technique to anything, not just simple pure functions. This paper by John Hughes goes into plenty of specifics on how to do this, too.

              We can build our systems so that these randomized tests can shit out deterministic regression tests that facilitate rapid debugging without pinning ourselves to a single point in statespace as unit / example / whatever tests do.

              Unit tests and integration tests that explore a single script just create code that is immune to the test, but not necessarily reliable.
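              A minimal sketch of the generated-test idea described above, using only the standard library rather than any particular tool (the encoder and the property are invented for illustration; real tools like QuickCheck or Hypothesis also shrink failing inputs for you):

              ```python
              import random

              def rle_encode(s):
                  """Toy run-length encoder: 'aaab' -> [('a', 3), ('b', 1)]."""
                  out = []
                  for ch in s:
                      if out and out[-1][0] == ch:
                          out[-1] = (ch, out[-1][1] + 1)
                      else:
                          out.append((ch, 1))
                  return out

              def rle_decode(pairs):
                  return "".join(ch * n for ch, n in pairs)

              # Instead of hand-picking examples, generate many random inputs
              # and check a property that must hold for all of them:
              # decode(encode(x)) == x. The fixed seed makes any failure a
              # deterministic, reproducible regression test.
              rng = random.Random(42)
              for _ in range(1000):
                  s = "".join(rng.choice("ab") for _ in range(rng.randint(0, 20)))
                  assert rle_decode(rle_encode(s)) == s, repr(s)
              ```

              The point is that the generator explores interleavings and edge cases you would never think to script by hand.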

              1. 3

                let me start off by saying I like things like generative testing and am always looking for ways to integrate that kind of tooling into projects.

                I have found that for enterprise software, where you have a pretty heterogeneous system with a lot of edge cases around existing data in the system, it’s hard (in a holistic sense, not in a tooling sense) to really make cross-cutting declarations about the behaviour of the system. X will be true, but only when settings Y and Z are toggled in this way, and only during this time of day. Often you end up probing the database for certain sorts of global conditions that affect things in a cross-cutting manner.

                You can decouple systems to make this flow better, but often there’s intrinsic difficulties, where your best bet is to isolate X truth-ness. But when you have calculateXTruthiness: Bool -> Bool -> Bool -> Bool, the value of generative testing goes down a decent amount because it’s just a predicate! Meanwhile you do get at least a bit of value from some unit tests at least to document known correct behaviours (from a business rules perspective).

                It’s all a spectrum, but it can be slim pickings in enterprise software for generative testing. Your best bet is to refactor to pull out “systemic” parts of your code to make it easier to test, even if your top layer remains messy as a consequence of reality being tricky.

                A lot of the time there are simply not many overall properties to eke out of your system beyond “whatever the system is doing already” (because backwards compatibility is so important nowadays in the kinds of systems we build).
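                To make that concrete: for a predicate with a handful of boolean inputs, like the hypothetical calculateXTruthiness above, the whole input space is small enough to enumerate outright, which is why generative testing adds little. A sketch with a made-up business rule:

                ```python
                from itertools import product

                def calculate_x_truthiness(setting_y, setting_z, off_hours):
                    # Invented rule standing in for the comment's example:
                    # X is true only when Y and Z are toggled on and it is
                    # not currently off-hours.
                    return setting_y and setting_z and not off_hours

                # Three booleans give only 2**3 = 8 cases; checking them all
                # is both a complete test and documentation of the rule.
                expected_true = {(True, True, False)}
                for combo in product([False, True], repeat=3):
                    assert calculate_x_truthiness(*combo) == (combo in expected_true)
                ```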

                1. 2

                  Whatever your expectations of a thing are, you will almost always have success in violating them through a sequence of generated interactions if you built the thing with scripted unit and integration tests. If you have no expectations, then your job is done and you can look busy in other ways :P

          2. 10

            Integration tests tend to be more useful for broad regression detection. Often, the failures in a system come from mistaken assumptions about the behavior of other modules interfaces, and not from within the module itself. If I had a choice, I would prefer a handful of end to end tests over the same amount of time invested in unit tests. Or even better, a mix of integration tests to cover end to end issues, with unit tests on subtle or hairy core algorithms.

            It’s not a choice between unit testing and manual testing – there are other types of automated test.

          1. 2

            I just switched to OpenBSD for e-mail using the following stack:

            Inbound: opensmtpd -> spampd(tag) -> opensmtpd -> clamsmtpd(tag) -> opensmtpd -> procmail -> dovecot(Maildir)
            Outbound: opensmtpd -> dkim_proxy -> opensmtpd(relay)

            I don’t use the spamd/greylisting up front like a lot of tutorials suggest, but spampd (SpamAssassin) seems to get the majority of it.

            My old stack was similar, but used postfix on openSUSE. I really like the opensmtpd configuration; loads simpler than postfix. However, I wish it supported the filters that the other MTAs do. It had filter support for a bit, but it was clunky and subsequently removed. That makes it difficult (impossible?) to run things like rspamd.

            1. 5

              rspamd has an MDA mode, so you can do like

              accept from any for local virtual { "@" => mike } deliver to mda "rspamc --mime --ucl --exec /usr/local/bin/dovecot-lda-mike" as mike
              

              and dovecot-lda-mike is

              #! /bin/sh
              exec /usr/local/libexec/dovecot/dovecot-lda -d mike
              

              smtpd is really really really good. For some reason the email software ecosystem is a mess of insane configs and horrible scripts, but my smtpd.conf is 12 lines and the only script I use (that rspamd one) is going to go away when filters come back. smtpd is so good I went with an MDA instead of a web app to handle photo uploads to my VPS. It’s one line in smtpd.conf and ~70 lines of python, and I don’t have to deal with fcgi or anything like that.
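              For the curious, a hedged sketch of what such a photo-upload MDA might look like — the actual script isn’t shown in the thread, and the destination path and details here are invented. smtpd hands the raw message to the command on stdin:

              ```python
              # Sketch of an MDA that saves image attachments from a mail
              # message. smtpd would pipe the raw message to this script.
              import sys
              from email import policy
              from email.parser import BytesParser
              from pathlib import Path

              UPLOAD_DIR = Path("/var/photos")  # hypothetical destination

              def save_photos(raw_bytes, dest):
                  msg = BytesParser(policy=policy.default).parsebytes(raw_bytes)
                  saved = []
                  for part in msg.walk():
                      if part.get_content_maintype() == "image":
                          name = part.get_filename() or "unnamed.jpg"
                          # Strip any directory components from the filename.
                          path = Path(dest) / Path(name).name
                          path.write_bytes(part.get_payload(decode=True))
                          saved.append(path)
                  return saved

              if __name__ == "__main__" and not sys.stdin.isatty():
                  save_photos(sys.stdin.buffer.read(), UPLOAD_DIR)
              ```

              A real version would also want sender checks, since anyone who can mail the address can trigger it.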

              1. 1

                smtpd is so good I went with an MDA instead of a web app to handle photo uploads to my VPS

                Oh that’s a clever idea. I’ve been using ssh (via termux) on my phone but that is so clumsy.

              2. 5

                I do greylisting on my email server [1] and I’ve found that it reduces the incoming email by 50% up front—there are a lot of poorly written spam bots out there. Greylisting up front will reduce the load that your spam system will have to slog through, for very little cost.

                [1] Yes, I run my own. Been doing it nearly 20 years now (well over 10 at its current location) so I have it easier than someone starting out now. Clean IP, full control over DNS (I run my own DNS server; I also have access to modify the PTR record if I need to) and it’s just me—no one else receives email from my server.
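                A sketch of the greylisting idea for readers unfamiliar with it — this is the policy in miniature, not spamd’s actual implementation:

                ```python
                import time

                class Greylist:
                    """Toy greylisting policy: temporarily reject the first
                    delivery attempt from an unknown (ip, sender, recipient)
                    tuple; accept retries after a delay. Poorly written spam
                    bots never retry, which is why this cheap trick drops so
                    much traffic up front."""

                    def __init__(self, delay=300):
                        self.delay = delay       # seconds a tuple must wait
                        self.first_seen = {}

                    def check(self, ip, sender, rcpt, now=None):
                        now = time.time() if now is None else now
                        key = (ip, sender, rcpt)
                        if key not in self.first_seen:
                            self.first_seen[key] = now
                            return "tempfail"    # SMTP 4xx: "try again later"
                        if now - self.first_seen[key] >= self.delay:
                            return "accept"
                        return "tempfail"

                g = Greylist(delay=300)
                assert g.check("1.2.3.4", "a@x", "b@y", now=0) == "tempfail"
                assert g.check("1.2.3.4", "a@x", "b@y", now=600) == "accept"
                ```

                Legitimate MTAs retry as a matter of course, so the cost to real mail is mostly a one-time delivery delay.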

                1. 2

                  I’m the author/presenter of the tutorial. If I may, I suggest looking at my talk this year at BSDCan: Fighting Spam at the Frontline: Using DNS, Log Files and Other Tools in the Fight Against Spam. In those slides I talk about using spf records (spf_fetch, smtpctl spfwalk, spfwalk standalone) to whitelist IPs and mining httpd and sshd logs for bad actors and actively blacklisting them.

                  For those who find blacklisting a terrifying idea, in the presentation I suggest configuring your firewall rules so that your whitelists always win. That way, if Google somehow get added to your blacklists, the whitelist rule will ensure Gmail can still connect.

                  I also discuss ways to capture send-to domains and add them to your whitelists so you don’t have to wait hours for them to escape the greylists.

                  1. 1

                    I didn’t find SPF to be all that great, and it was nearly the same three years earlier. Even the RBLs were problematic, but that was three years ago.

                    As for greylisting, I currently hold them for 25 minutes, and that might be 20 minutes longer than absolutely required.

                  2. 1

                    Greylisting is the best. Back when my mailserver was just on a VPS it was the difference between spamd eating 100% CPU and a usable system.

                1. 1

                  I do not like the click-baity title, but I kept it the way it was. There are some interesting extensions listed that I never heard of before.

                  1. 3

                    It looks like there are some “interesting” extensions nobody else has heard of before.

                    Like that Web Security addon, which appears to send all your navigation data to their servers over plain http, using some homegrown “crypto” to obfuscate the details. According to their privacy policy, they build a profile for advertising purposes.

                  1. 3

                    The problem turns out to be some obscure FUSE mounts that the author had lying around in a broken state, which subsequently broke the kernel namespace system. Meanwhile, I have been running systemd on every computer I’ve owned for many years and have never had a problem with it.

                    Does this not seem a bit melodramatic?

                    1. 9

                      From the twitter thread:

                      Systemd does not of course log any sort of failure message when it gives up on setting up the DynamicUser private namespace; it just goes ahead and silently runs the service in the regular filesystem, even though it knows that is guaranteed to fail.

                      It sounds like the system had an opportunity to point out an anomaly that would guide the operator in the right direction, but instead decided to power through anyways.

                      1. 8

                        A lot like how continuing to run in a degraded state is a plague that affects distributed systems. Everybody thinks it’s a good idea (“some service is surely better than no service”) until it happens to them.

                        1. 3

                          At $work we prefer degraded mode for critical systems. If they go down we make no money, while if they kind of sludge on we make less but still some money while we firefight whatever went wrong this time.

                          1. 8

                            My belief is that inevitably you could be making $100 per day, would notice if you made $0, but are instead making $10 and won’t notice this for six months. So be careful.

                            1. 4

                              We have monitoring and alerting around how much money is coming in, that we compare with historical data and predictions. It’s actually a very reliable canary for when things go wrong, and for when they are right again, on the scale of seconds to a few days. But you are right that things getting a little suckier slowly over a long time would only show up as real growth not being in line with predictions.
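                              The kind of canary described might look like this in miniature (the thresholds and numbers are invented for illustration):

                              ```python
                              def revenue_alert(current, history, drop_threshold=0.5):
                                  """Toy revenue canary: alert when the current rate falls
                                  below a fraction of the historical baseline. It catches
                                  the $100 -> $0 outage immediately, but not a slow slide
                                  to $90, which is the parent comment's worry."""
                                  baseline = sum(history) / len(history)
                                  return current < baseline * drop_threshold

                              history = [100, 98, 103, 101]       # dollars per interval
                              assert revenue_alert(0, history)        # hard outage: fires
                              assert revenue_alert(10, history)       # 90% drop: fires
                              assert not revenue_alert(90, history)   # slow degradation: silent
                              ```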

                          2. 2

                            I tend to agree that hard failures are nicer in general (especially to make sure things work), but I’ve also been in scenarios where buggy logging code has caused an entire service to go down, which… well that sucked.

                            There is a justification for partial service functionality in some cases (especially when uptime is important), but like with many things I think that judgement calls in that are usually so wrong that I prefer hard failures in almost all cases.

                            1. 1

                              Running distributed software on snowflake servers is the real plague to point out.

                              1. 1

                                Everybody thinks it’s a good idea “some service is surely better than no service” until it happens to them.

                                So if the server is over capacity, kill it and don’t serve anyone?

                                Router can’t open and forward a port, so cut all traffic?

                                I guess that sounds a little too hyperbolic.

                                But there’s a continuum there. At $work, I’ve got a project that tries to keep going even if something is wrong. Honest, I’m not sure I like how all the errors are handled. But then again, the software is supposed to operate rather autonomously after initial configuration. Remote configuration is a part of the service; if something breaks, it’d be really nice if the remote access and logs and all were still reachable. And you certainly don’t want to give up over a problem that may turn out to be temporary or something that could be routed around… reliability is paramount.

                                1. 2

                                  And you certainly don’t want to give up over a problem that may turn out to be temporary

                                  I think that’s close to the core of the problem. Temporary problems recur, worsen, etc. I’m not saying it’s always wrong to retry, but I think one should have some idea of why the root problem will disappear before retrying. Computers are pretty deterministic. Transient errors indicate incomplete understanding. But people think a try-catch in a loop is “defensive”. :(
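                                  A sketch of the distinction being drawn: retry only errors you have a concrete reason to believe are transient, rather than wrapping everything in a try-catch loop (the error classification here is invented for illustration):

                                  ```python
                                  import time

                                  def call_with_retry(op, attempts=3, delay=0.01):
                                      """Retry only errors we believe are transient (here:
                                      TimeoutError); anything else propagates immediately,
                                      instead of being swallowed by a blind retry loop."""
                                      for i in range(attempts):
                                          try:
                                              return op()
                                          except TimeoutError:
                                              if i == attempts - 1:
                                                  raise
                                              time.sleep(delay * (2 ** i))  # exponential backoff

                                  calls = {"n": 0}
                                  def flaky():
                                      calls["n"] += 1
                                      if calls["n"] < 3:
                                          raise TimeoutError
                                      return "ok"

                                  assert call_with_retry(flaky) == "ok"
                                  ```

                                  Bounding the attempts and naming the transient error class keeps the retry from hiding an incompletely understood failure forever.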

                            2. 4

                              So you never had legacy systems (or configurations) to support? I read Chris’ blog regularly, and he works at a university on a heterogeneous network (some Linux, some other Unix systems) that has been running Unix for a long time. I think he started working there before systemd was even created.

                              1. 3

                                Why do you say that the FUSE mounts were broken? As far as we can see they were just set up in an uncommon way https://twitter.com/thatcks/status/1027259924835954689

                                1. 3

                                  It does look brittle that broken FUSE mounts prevent ntpd from running. IMO the most annoying part is the debuggability of the issue.

                                  1. 2

                                    Yes, it seems melodramatic, even to my anti-systemd ears. It’s a documentation and error reporting problem, not a technical problem, IMO. Olivier Lacan gave a great talk last year about good errors and bad errors (https://olivierlacan.com/talks/human-errors/). I think it’s high time we start thinking about how to improve error reporting in software everywhere – and maybe one day human-centric error reporting will be as ubiquitous as unit testing is today.

                                    1. 2

                                      In my view (as the original post’s author) there are two problems in view. That systemd doesn’t report useful errors (or even notice errors) when it encounters internal failures is the lesser issue; the greater issue is that it’s guaranteed to fail to restart some services under certain circumstances due to internal implementation decisions. Fixing systemd to log good errors would not cause timesyncd to be restartable, which is the real goal. It would at least make the overall system more debuggable, though, especially if it provided enough detail.

                                      The optimistic take on ‘add a focus on error reporting’ is that considering how to report errors would also lead to a greater consideration of what errors can actually happen, how likely they are, and perhaps what can be done about them by the program itself. Thinking about errors makes you actively confront them, in much the same way that writing documentation about your program or system can confront you with its awkward bits and get you to do something about them.

                                  1. 4

                                    It’s really interesting to get an idea of how people are taking advantage of BSD! I now have a much nicer idea of why people are going to it (and am a bit tempted myself). That feeling of having to go through ports and simply not having 1st-class support for some software seems… rough for desktop usage though.

                                    1. 3
                                      1. 1

                                        I mean “someone talks to me about an application and I’m interested in trying it out on my system”?

                                        I feel like the link to the CVE database is a bit of an unwarranted snipe here. I’m not talking too much about security updates, just “someone released some software and didn’t bother to confirm BSD support so now I’m going to need to figure out which ways this software will not work”.

                                        To be honest I don’t really think that having all userland software come in via OS-maintained package managers is a great idea in the first place (do I really need OS maintainers looking after anki?). I’m fine downloading binaries off the net. Just nicer if they have out of the box support for stuff. I’m not blaming the BSDs for this (it’s more the software writer’s fault), just that it’s my impression that this becomes a bit of an issue if you try out a lot of less used software.

                                        1. 4

                                          As an engineer that uses and works on a minority share operating system, I don’t really think it’s reasonable to expect chiefly volunteer projects to ship binaries for my platform in a way that fits well with the OS itself. It would be great if they were willing to test on our platform, even just occasionally, but I understand why they don’t.

                                          Given this, it seems more likely to expect a good experience from binaries provided by somebody with a vested interest in quality on the OS in question – which is why we end up with a distribution model.

                                          1. 2

                                            Yep, this makes a lot of sense.

                                            I’m getting more and more partial to software relying on their host language’s package manager recently. It’s pretty nice for a Python binary to basically always work so long as you got pip running properly on your system, plus you get all the nice advantages of virtual environments and the like letting you more easily set things up. The biggest issue being around some trust issues in those ecosystems.

                                            Considering a lot of communities (not just OSes) are getting more and more involved in distribution questions, we might be getting closer to getting things to work out of the box for non-tricky cases.

                                            1. 8

                                              software relying on their host language’s package manager

                                              In general I’m not a fan. They all have problems. Many (most?) of them lack a notion of disconnected operation when they cannot reach their central Internet-connected registry. There is often no complete tracking of all files installed, which makes it difficult to completely remove a package later. Some of the language runtimes make it difficult to use packages installed in non-default directory trees, which is one way you might have hoped to work around the difficulty of subsequent removal. These systems also generally conflate the build machine with the target machine (i.e., the host on which the software will run) which tends to mean you’re not just installing a binary package but needing to build the software in-situ every time you install it.

                                              In practice, I do end up using these tools because there is often no alternative – but they do not bring me joy.

                                              Operating system package managers (dpkg/apt, rpm/yum, pkg_add/pkgin, IPS, etc) also have their problems. In contrast, though, these package managers tend to at least have some tools to manage the set of files that were installed for a particular package and to remove (or even just verify) them later. They also generally offer some first class way to install a set of a packages from archive files obtained via means other than direct access to a central repository.

                                              1. 3

                                                 For development I use the “central Internet-connected registry”; for production I use DEB/RPM packages in a repository:

                                                • forces you to limit the number of dependencies you use, since otherwise it’s too much work to package them all;
                                                • forces you to choose high-quality dependencies that are easy to package or already packaged;
                                                • makes sure every dependency is buildable from source (depending on the language);
                                                • gives you an “offline” copy of the dependencies, protecting against “left-pad” issues;
                                                • runs the dependencies’ unit tests during the package build, great for QA!;
                                                • gives you (PGP-)signed packages that are verified using the distribution’s tools.

                                                There are probably more benefits that escape me at the moment :)

                                      2. 1

                                        That feeling of having to go through ports and simply not having 1st-class support for some software seems… rough for desktop usage though

                                        What kind of desktop software do you install from these non-OS sources?

                                        1. 2

                                          Linux is moving more and more towards Flatpak and Snap for (sandboxed) application distribution.

                                          1. 2

                                            I remember screwing around with Flathub on the command line in Fedora 27, but right now on Fedora 28, if you enable Flatpak in the Gnome Software Center thingy, it’s actually pretty seamless - type “Signal” in the application browser, and a Flatpak install link shows up.

                                            With this sort of UX improvements, I’m optimistic. I feel like Fedora is just going to get easier and easier to use.

                                      1. 8

                                        Speaking as a C programmer, this is a great tour of all the worst parts of C. No destructors, no generics, the preprocessor, conditional compilation, check, check, check. It just needs a section on autoconf to round things out.

                                        It is often easier, and even more correct, to just create a macro which repeats the code for you.

                                        A macro can be more correct?! This is new to me.

                                        Perhaps the overhead of the abstract structure is also unacceptable..

                                        Number of times this is likely to happen to you: exactly zero.

                                        C function signatures are simple and easy to understand.

                                        It once took me 3 months of noodling on a simple http server to realize that bind() saves the pointer you pass into it, and so makes certain lifetime assumptions about it. Not one single piece of documentation I’ve seen in the last 5 years mentions this fact.

                                        1. 4

                                          It once took me 3 months of noodling on a simple http server to realize that bind() saves the pointer you pass into it

                                          Which system? I’m pretty sure OpenBSD doesn’t.

                                          https://github.com/openbsd/src/blob/4a4dc3ea4c4158dccd297c17b5ac5a6ff2af5515/sys/kern/uipc_syscalls.c#L200

                                          https://github.com/openbsd/src/blob/4a4dc3ea4c4158dccd297c17b5ac5a6ff2af5515/sys/kern/uipc_syscalls.c#L1156

                                          1. 2

                                            Linux (that’s the manpage I linked to above). This was before I discovered OpenBSD.

                                            Edit: I may be misremembering and maybe it was connect() that was the problem. It too seems fine on OpenBSD. Here’s my original eureka moment from 2011: https://github.com/akkartik/wart/commit/43366d75fbfe1. I know it’s not specific to that project because @smalina and I tried it again with a simple C program in 2016. Again on Linux.

                                              1. 1

                                                Notice that I didn’t implicate the kernel in my original comment, I responded to a statement about C signatures. We’d need to dig into libc for this, I think.

                                                I’ll dig up a simple test program later today.

                                                1. 2

                                                  Notice that I didn’t implicate the kernel in my original comment, I responded to a statement about C signatures. We’d need to dig into libc for this, I think.

                                                  bind and connect are syscalls; libc would have only a stub doing the syscall, if anything at all, since they are not part of the standard library.

                                          2. 2

                                            Perhaps the overhead of the abstract structure is also unacceptable..

                                            Number of times this is likely to happen to you: exactly zero.

                                            I have to worry about my embedded C code being too big for the stack as it is.

                                            1. 1

                                              Certainly. But is the author concerned with embedded programming? He seems to be speaking of “systems programming” in general.

                                              Also, I interpreted that section as being about time overhead (since he’s talking about the optimizer eliminating it). Even in embedded situations, have you lately found the time overheads concerning?

                                              1. 5

                                                I work with 8-bit AVR MCUs. I often found myself having to cut corners and avoid certain abstractions, because that would have resulted either in larger or slower binaries, or would have used significantly more RAM. On an Atmega32U4, resources are very limited.

                                            2. 1

                                              Perhaps the overhead of the abstract structure is also unacceptable..

                                              Number of times this is likely to happen to you: exactly zero.

                                              Many times, actually. I see FSM_TIME. Hmm … seconds? Milliseconds? No indication of the unit. And what is FSM_TIME? Oh … it’s SYS_TIME. How cute. How is that defined? Oh, it depends upon operating system and the program being compiled. Lovely abstraction there. And I’m still trying to figure out the whole FSM abstraction (which stands for “Finite State Machine”). It’s bad enough to see a function written as:

                                              static FSM_STATE(state_foobar)
                                              {
                                              ...
                                              }
                                              

                                              and then wondering where the hell the variable context is defined! (a clue—it’s in the FSM_STATE() macro).

                                              And that bind() issue is really puzzling, since that hasn’t been my experience at all, and I work with Linux, Solaris, and Mac OS X currently.

                                              1. 1

                                                I agree that excessive abstractions can hinder understanding. I’ve said this before myself: https://news.ycombinator.com/item?id=13570092. But OP is talking about performance overhead.

                                                I’m still trying to reproduce the bind() issue. Of course when I want it to fail it doesn’t.

                                            1. 37

                                              I’ve been very happy with pass, a command-line tool that stores passwords and notes in a git repository. Being a directory of text files, it’s easy to use standard command-line tools on or tinker with programmatically. There’s a thriving ecosystem of plugins, tools, and clients.

                                              I also use autopass for autofilling in X applications. As time goes on, I fill in more and more autotype fields to check ‘remember me’ boxes and other non-standard fields. It’s really convenient. (One annoyance is that if any password files are not valid YAML, autopass errors to stdout without opening a window, so I hit my hotkey and nothing happens.)
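For anyone who hasn't tried it, everyday usage is just a handful of commands (the key ID and entry names here are examples):

```shell
pass init "MY-GPG-KEY-ID"       # create the store, encrypted to your key
pass insert mail/example.com    # add a password interactively
pass generate web/lobste.rs 24  # generate a random 24-character password
pass show mail/example.com      # decrypt and print
pass -c mail/example.com        # copy to clipboard, cleared after 45s
pass git log                    # it's a git repo: full history for free
```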

                                              1. 11

                                                One more vote for pass, I’ve been a happy user for years now. It was missing a proper browser extension, so I built one: Browserpass. It’s no longer maintained by me due to lack of time, but the community is doing a far better job at maintaining it than I possibly could, so that’s all good!

                                                1. 10

                                                  Pass looks pretty neat, but the reason I stick with KeePass(XC) is that Pass leaks metadata in the filenames - so your encryption doesn’t protect you from anyone reading the name of every site you have an account with, which is an often overlooked drawback IMO.

                                                  1. 5

                                                    Your filenames don’t have to be meaningful though. It would be relatively trivial to extend pass to use randomly generated names, and then use an encrypted key->value file to easily access the file you want.

                                                    On the other hand, if someone already has that access to your device, accessing ~/.mozilla/firefox/... or analogous other directories with far more information is just as trivial, and has probably more informational value.
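A sketch of the random-name idea (untested; the `.index` entry name and the layout of the mapping are inventions of mine):

```shell
# Store a new password under a random name so the filenames leak nothing,
# and record the human-readable mapping inside an encrypted index entry.
name=$(head -c16 /dev/urandom | od -An -tx1 | tr -d ' \n')
pass insert "$name"
{ pass show .index 2>/dev/null; echo "mail/example.com $name"; } \
    | pass insert --multiline --force .index

# Look the random name up later:
pass show "$(pass show .index | awk '$1 == "mail/example.com" { print $2 }')"
```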

                                                    1. 3

                                                      Then you’re working around a pretty central part of pass’s design, which I don’t really like. It should be better by default.

                                                      wrt your second point, if you give up when they can read the filesystem, why even encrypt at all? IMO the idea is you should be able to put your password storage on an untrusted medium, and know that your data are safe.

                                                      1. 12

                                                        if you give up when they can read the filesystem, why even encrypt at all?

                                                        Because in my opinion, there’s a difference between an intruder knowing that I have a “mail” password, and them actually knowing this password.

                                                  2. 5

                                                    The QR code feature of pass is neat for when you need to login on a phone.
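From memory, newer pass versions can render the code themselves, and older setups can pipe the first line through qrencode (check your version before relying on either):

```shell
pass show --qrcode mail/example.com                       # pass >= 1.7, if I recall correctly
pass show mail/example.com | head -n1 | qrencode -t utf8  # older versions
```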

                                                    1. 2

                                                      Huh, you made me read the man page and learn about this - it’s really cool! What’s your usage like for this though? Just use any barcode reader and then copy paste in the password box?

                                                      1. 1

                                                        A barcode reader I trusted, but yeah - it’s a good hack because I usually have my laptop, which has full disk encryption.

                                                        1. 2

                                                          Yeah, when you said that all I could think of was the barcode scanner that I used to use where it would store the result of each barcode scanned in a history file… Not ideal :)

                                                    2. 2

                                                      Seems like the Android version’s maintainer is giving up. (Nice, 80k lines of code in just one dep…)

                                                      The temptation to nih it is growing stronger but I don’t have enough time :(

                                                    1. 8

                                                      KeePass has clients that work on the 3 operating systems in question, and I’ve had good luck using Syncthing to share the password file between computers, but the encryption of the database means that any good sync utility can work with it.

                                                      1. 4

                                                        I’ve used KeePassX together with Syncthing on multiple Ubuntus and Androids for two years now. By now I have three duplicate conflict files which I keep around because I have no idea what the difference between the files is. Once I had to retrieve a password from such a conflict file as it was missing in the main one.

                                                        Not perfect, but works.

                                                        Duclare, using ssh instead of SyncThing would certainly work since the database is just a file. I prefer SyncThing because of convenience.
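If it helps: KeePassXC ships a CLI that can merge one database into another, which might let you fold those conflict files back in. I haven't verified how it handles Syncthing's conflict copies (the filename below follows Syncthing's naming pattern but is made up), so back up first:

```shell
# Merge entries from the conflict copy into the main database
# (prompts for both passwords; main.kdbx is modified in place).
cp main.kdbx main.kdbx.bak
keepassxc-cli merge main.kdbx "main.sync-conflict-20180101-123456.kdbx"
```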

                                                        1. 2

                                                          Duclare, using ssh instead of SyncThing would certainly work since the database is just a file.

                                                          Ideally it’d be automated and integrated into the password manager though. Keepass2android does support it, but it does not support passwordless login, and I don’t recall it ever showing me the server’s fingerprint and asking if that’s OK. So it’s automatically logging in with a password to a host run by who knows. Terribly insecure.

                                                          1. 1

                                                            I had the same situation. 3 conflict files and merging is a pain. I’ve switched to Pass instead now.

                                                          2. 2

                                                            I’ve used KeePass for a few years now too. I tried other password managers in the meantime but never got quite satisfied, not even with pass, though that one was just straight up annoying.

                                                            I’ve had a few conflicts over the years but usually Nextcloud is rather good at avoiding conflicts here and KPXC handles it very well. I think Syncthing might cause more problems as someone else noted, since nodes might take a while to sync up.

                                                          1. 4

                                                            I personally find this API fairly frustrating. This call can have three different semantics, depending on the values and context in which you call it. This contributes to complexity.

                                                            I notice that in the userspace diff, the “lock unveil” functionality is never used, even in cases where unveil is added to the pledge string. As far as I understand it, this means that if an attacker obtained code execution, they’d simply be able to undo the unveil with unveil("/", "rwx"). That’s unintuitive and likely to be a regular source of programming errors.

                                                            Grabbing some comments I made on IRC last night on how I’d pursue this API:

                                                            22:10:09 <Alex_Gaynor> If I was doing this API, I'd probably do `sandbox_context *sandbox_context_create(void)` and then a bunch of `sandbox_context_add_X(sandbox_context *, ...)` with appropriate signatures, and then a `sandbox_context_apply(sandbox_context *)` and basically a default `sandbox_context` had no permissions, and then you can add back whatever you want, and calling `sandbox_apply` a second time on a process killed the process or something
                                                            22:10:34 <Alex_Gaynor> (Or maybe was allowed, as long as the permissions were a strict subset of what was already applied)
                                                            22:12:33 <Alex_Gaynor> Oh, and they should add a platform-specific `posix_spawn_...` thing to take a `sandbox_context` so that it's applied right at `exec`, before any user code runs.
                                                            22:13:23 <Alex_Gaynor> Basically the two properties I've found useful in sandboxing are: a) It should be extremely easy to see what capabilities your process has, you want them all in one place, and defaulted to "nothing" so basically the permissions are what you have written down, b) It should be extremely easy to draw a perimeter around what your process already does, and slowly wittle it down by basically deleteing "adds".
                                                            
                                                            1. 2

                                                              I notice that in the userspace diff, the “lock unveil” functionality is never used, even in cases where unveil is added to the pledge string.

                                                              I only saw two or three diffs where it isn’t clear if the unveil pledge is later revoked. All others that add unveil to pledge also have a pledge without unveil soon after the unveil calls. It’s possible the two or three cases also have it but it’s just not visible in the diff.

                                                              So in these programs, that attack does not work unless you get your RCE during the initialization phase.

                                                              Even if unveil was never locked, it can still protect against all-too-common path traversal style bugs (especially in web crapps) that leak data without RCE.

                                                            1. 2

                                                              What if I don’t want to align to the adjacent line?

                                                              1. 1

                                                                Well, then you have to add a blank line in between.

                                                              1. 4

                                                                Having PRIMARY and CLIPBOARD is a good thing and once you get used to it, it’s like having two clipboards.

                                                                Shame he never tells how to actually use them both. Afaict only the primary selection is usable with the default binds.

                                                                XTerm.VT100.translations: #override \n\
                                                                        Ctrl Shift <Key>C: copy-selection(CLIPBOARD) \n\
                                                                        Ctrl Shift <Key>V: insert-selection(CLIPBOARD)
                                                                

                                                                Now if only I could get all the other software to support them both as well.

                                                                EDIT: Another tip. If you find the font sizes available in the menu to be ridiculous, they’re pretty easy to change.

                                                                XTerm*faceSize1: 8
                                                                XTerm*faceSize2: 10
                                                                XTerm*faceSize3: 13
                                                                XTerm*faceSize4: 16
                                                                XTerm*faceSize5: 20
                                                                XTerm*faceSize6: 26
                                                                

                                                                faceSize1 corresponds to “Unreadable.”

                                                                Now would someone give me key binds to decrease/increase font size? :-)

                                                                1. 6

                                                                  You might want to read X Selections, Cut Buffers, and Kill Rings for how to use the PRIMARY and CLIPBOARD selections in X Windows.

                                                                  1. 1

                                                                    It doesn’t, and can’t really explain how to use them because there is no way to use them in X. Instead, you have to use them in applications running under X and each application does its own thing. I still don’t know if there’s a way to copy to clipboard in xterm without creating a custom bind.

                                                                    1. 0

                                                                      Does it also work with X?

                                                                      1. 1

                                                                        If by “X” you mean “the graphical interface that runs on Linux” then yes, it works, because that is X Windows.

                                                                        1. -1

                                                                          Eh, the developers would disagree, but what do they know?

                                                                          1. 1

                                                                            Where did this X Windows meme even start?

                                                                            Some lamer back in 1995 thinking it sounded cool and having it go viral on Usenet?

                                                                            1. 2

                                                                              Where did this X Windows meme even start?

                                                                              I don’t know. Probably people who think it’s the X-TREME version of Microsoft Windows.

                                                                              1. 1

                                                                                It’s mentioned in The Unix-Haters Handbook as a reliable tool for getting Unix weenies angry.

                                                                                1. 1

                                                                                  I’m pretty sure “X Windows” is much older than that (as is MS Windows). I vaguely recall reading about “X Windows” in Byte magazine in 1993 or so.

                                                                                  The comp.windows.x newsgroup goes back to at least 1987 (https://groups.google.com/forum/message/raw?msg=comp.windows.x/TtNRIfTKqsw/i7hzWBiDfkgJ), a month after X11 was created. They even refer to it as “x-windows”.

                                                                                  1. 1

                                                                                    Could it have been a different implementation? Cuz I remember doing the RTFM thing way back when, and it was very clear about not being “X Windows”, though didn’t specify why.

                                                                                    Sorry if this is explained in the link. Can’t be arsed with Google. Usenet used to come without opt-in spying.

                                                                                2. 1

                                                                                  Well, excuse me for using outdated terminology then. Would it have been better had I said “You might want to read X Selections, Cut Buffers, and Kill Rings for how to use the PRIMARY and CLIPBOARD selections in X”?

                                                                                  1. 1

                                                                                    Not outdated, just incorrect.

                                                                          2. 3

                                                                            Now would someone give me key binds to decrease/increase font size? :-)

                                                                            I have the following:

                                                                            *VT100*translations: #override \
                                                                                Meta <Key> minus: smaller-vt-font() \n\
                                                                                Meta <Key> plus: larger-vt-font() \n\
                                                                                Super <Key> minus: smaller-vt-font() \n\
                                                                                Super <Key> plus: larger-vt-font() \n\
                                                                            

                                                                            and either the meta or super key works as expected.

                                                                          1. 5

                                                                            I’m playing around with libtls (per advice). I’ve already proved to myself that it can be used in an event-based server, and now I’m playing around with trying to get it integrated into our network flow, which in this case means writing a Lua wrapper for it. [1]

                                                                            [1] There are two Lua modules for libtls that I’ve found, but neither one meets my criteria, namely, using the callback mechanism to control the network. The changes are extensive enough that I find it easier to write my own version.

                                                                            1. 1

                                                                              Please write about your experience with libtls once you know how it pans out :)

                                                                            1. 9

                                                                              Many of the author’s experiences speaking with senior government officials match my own.

                                                                              However, there’s one element that I think is very easily lost in this conversation, and which I want to highlight: there is no group I spend more time trying to convince of the importance of security than other software engineers.

                                                                              Software engineers are the only group of people I’ve ever had push back when I say we desperately need to move to memory safe programming languages. All manner of non-engineers, when I’ve explained the damages wrought by C/C++, and how nearly every mass-vulnerability they know about has a shared root cause, generally understand why this is an important problem, and want to discuss ideas about how do we resolve this.

                                                                              Engineers complain to me that rewriting things is hard, and besides if you’re disciplined in writing C and use sanitizers and fuzzers you’ll be ok. Rust isn’t ergonomic enough, and we’ve got a really good hiring pipeline for C++ engineers.

                                                                              If we want to build software safety into everything we do, we need to get engineers on board, because they’re the obstacle.

                                                                              1. 11

                                                                                People don’t even use sanitizers and fuzzers, so I’m not sure why you would expect them to rewrite in Rust. It’s literally 1000x less effort.

                                                                                As far as I can tell, CloudFlare’s CloudBleed bug would have been found if they compiled with ASAN and fed about 100 HTML pages into it. You don’t even have to install anything; it’s built right into your compiler! (both gcc and Clang)

                                                                                I also don’t agree that “nearly every mass vulnerability has a shared root cause”. For example, you could have written ShellShock in Rust, Python, or any other language. It’s basically a “self shell-code injection” and has very little to do with memory safety (despite a number of people being confused by this.)
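The canonical ShellShock probe shows how little memory has to do with it: it is just bash executing trailing commands while importing a function definition from the environment. Safe to run; a patched bash prints only `test`:

```shell
# A function definition in the environment with trailing commands --
# no memory corruption involved anywhere.
env x='() { :;}; echo vulnerable' bash -c "echo test"
# unpatched bash: prints "vulnerable" then "test"
# patched bash:   prints only "test"
```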

                                                                                The core problem is the sheer complexity and number of lines of unaudited code, and the fact that core software like bash has exactly one maintainer. There are actually too many people trying to learn Rust and too few people maintaining software that everybody actually uses.

                                                                                In some sense, Rust can make things worse, because it leads to more source code. We already have memory-safe languages: Python, Ruby, JavaScript, Java, C#, Erlang, Clojure, OCaml, etc.

                                                                                Software engineers should definitely spend more time on security, and need to be educated more. But the jump to Rust is a non-sequitur. Rust is great for kernels where the above languages don’t work, and where C and C++ are too unsafe. But kernels are only a part of the software landscape, and they don’t contain the majority of security bugs.

                                                                                I would guess that most data breaches these days have nothing to do with memory safety, and have more to do with bugs similar to the ones in the OWASP top 10 (e.g. XSS, etc.)

                                                                                https://www.owasp.org/images/7/72/OWASP_Top_10-2017_%28en%29.pdf.pdf


                                                                                Edit: as another example, Mirai has nothing to do with memory safety:

                                                                                https://en.wikipedia.org/wiki/Mirai_(malware)

                                                                                All it does is try default passwords, which gives you some idea of where the “bar” is. Rewriting software in Rust has nothing to do with that, and will actually hurt because it takes effort and mindshare away from solutions with a better cost/benefit ratio. And don’t get me wrong, I think Rust has its uses. I just see people overstating them quite frequently, with the “why don’t more people get Rust?” type of attitude.

                                                                                1. 2

                                                                                  There were languages like Opa that tried to address what happened on web app side. They got ignored just like people ignore safety in C. Apathy is the greatest enemy of security. It’s another reason we’re pushing the memory-safe, higher-level languages, though, with libraries for stuff likely to be security-critical. The apathetic programmers do less damage on average that way. Things that were code injections become denial of service. That’s an improvement.

                                                                                2. 2

                                                                                  Not only software engineers: almost the entire IT industry has buried its head in the sand and is trying desperately hard to hide from the problem, because “security is too hard”. We are pulling teeth to get people to even do the minimal upgrades to things. I recently had a software vendor refusing to support anything other than TLS 1.0. After many exchanges back and forth, including an article from Microsoft (and basically every other sane person) saying they were dropping all support of older TLS protocols because of their insecurity, they finally said, OK, we will look into it. I’m sure we all have stories like this.

                                                                                  If you can’t even bother to take the minimum of steps to upgrade your security stacks after more than a decade (TLS 1.0 was released in 1999 and TLS 1.2 is almost exactly a decade old now) because it’s “too hard”, trying to get people to move off of memory-unsafe languages like C/C++ is a non-starter.

                                                                                  But I agree with you, and the author.

                                                                                  1. 2

                                                                                    I would like to use TLS 1.3 for an existing product. It’s in C and Lua. The current system is network-driven using select() (or poll() or epoll() depending upon the platform). The trouble I’m having is finding a library that is easy, or even a bit complicated but sane, to use. The evented nature means I am notified when data comes in, and I want to feed this to the TLS library instead of having the TLS library manage the sockets for me. But the documentation is dense, the tutorials only cover blocking calls, and that’s when they’re readable! Couple this with the whole “don’t you even #$@#$# think of implementing crypto” that is screamed from the rooftops and no wonder software engineers steer away from this crap.

                                                                                    I want a crypto library that just handles the crypto stuff. Don’t do the network, I already have a framework for that. I just need a way to feed data into it, and get data out of it, and tell me if the certificate is good or not. That’s all I’m looking for.

                                                                                    1. 2

                                                                                      OpenBSD’s libtls.

                                                                                      1. 2

                                                                                        TLS 1.3 is not quite ready for production use, unless you are an early adopter like Cloudflare. Easy-to-use APIs that are well-reviewed are not there yet.

                                                                                        Crypto libraries: OpenBSD’s libtls like @kristapsdz mentioned, or libsodium/NaCl, or OpenSSL. If it’s just for your internal connections and you don’t actually need TLS, just talking to libsodium or NaCl for an encrypted stream of bytes is probably your best bet, using XSalsa20+Poly1305. See: https://latacora.singles/2018/04/03/cryptographic-right-answers.html

                                                                                        TLS is a complicated protocol (TLS 1.3 reduces a LOT of complexity, but it’s still very complicated).

                                                                                        If you are deploying to Apple, Microsoft, or OpenBSD platforms, you should just tie to the OS-provided services that provide TLS. Let them handle all of that for you (including the socket). Apple and MS platforms have high-level APIs that will do all the security crap for you. OpenBSD has libtls.

                                                                                        On other platforms (Linux, etc.), you should probably just use OpenSSL. Yes, it’s a fairly gross API, but it’s pretty well-maintained nowadays (5 years ago, it would not qualify as well-maintained). The other option is libsodium/NaCl.

                                                                                        1. 1

                                                                                          Okay, fine. Are there any crypto libraries that are easy to use for whatever is current today? My problem is: a company that is providing us information today via DNS has been invaded by a bunch of hipster developers [1] who drank the REST Kool-Aid™, so I need a way to make an HTTPS call in an event-driven architecture and not blow our Super Scary SLAs with the Monopolistic Phone Company (which would cause the all-important money to flow the other way), so your advice to let OS-provided TLS services control the socket is a non-starter.

                                                                                          And for the record, the stuff I write is deployed to Solaris. For reasons that exceed my pay grade.

                                                                                          So I read the Cryptographic Right Answers you linked to and … okay. That didn’t help me in the slightest.

                                                                                          The program I’m working on is in C, and not written by me (so it’s in “maintenance mode”). It works, and rewriting it from scratch is probably also a non-starter.

                                                                                          Are you getting a sense of the uphill battle this is?

                                                                                          [1] Forgive my snarky demeanor. I am not happy about this.

                                                                                          Edit: further clarification on what I have to work with.

                                                                                          1. 1

                                                                                            I get it, it sucks sometimes. I’m guessing you are not currently doing any TLS at all? So you can’t just upgrade the libraries you are currently using for TLS, whatever they are.

In my vendor example, the vendor implemented TLS (1.0) and then promptly stopped. They have never bothered to upgrade to newer versions of TLS. I don’t know the details of their implementation, obviously, since it’s closed source; but unless they went crazy and wrote their own crypto code, upgrading their crypto libraries is probably all that’s required. I’m not saying that’s necessarily easy, but it’s something everyone should do at least once a decade, just to keep the code from rotting to death. TLS 1.2 becomes a decade-old standard next month.

                                                                                            I don’t work on Solaris platforms (and haven’t in at least a decade, so you are probably better off checking with other Solaris people). Oracle might have a TLS library these days, I have no clue. I tend to avoid Oracle land whenever possible. I’m sorry you have to play in their sandbox.

I agree the Crypto Right Answers page isn’t useful for you, since you just want TLS; its target audience is developers who need more than TLS. I used it here mostly as evidence for why I recommended XSalsa20+Poly1305 for symmetric encryption. Again, you know you need TLS, so it’s not a useful document for you at this point.

Event-driven I/O is possible with OpenSSL, but it’s not super easy; see: https://www.openssl.org/docs/faq.html#PROG11. Then again, nothing around event-driven I/O is super easy. Haproxy and Nginx both manage to do it, and both are open source, so you have working code you can go examine. Plus it might give you access to developers who have done event-driven I/O with TLS. I haven’t ever written that implementation, so I can’t help with those specifics.

OpenSSL is working on making their APIs easier to use; it’s a long, slow haul, but it’s definitely a known problem and they are working on it.

As for letting the OS do the work for you: you are correct, there are definitely use cases where it won’t work, and it seems you fit the bill. For most applications, letting the OS do it is generally the best answer, especially around crypto, which is hard to get right, and of course it only applies to the platforms that offer such services (Apple, MS, etc.). Which is why I started there ;)

Anyways, good luck! Sorry I can’t just point you to a nice easy example. Maybe someone else around here can.

                                                                                            1. 1

                                                                                              I’m not even using TCP! This is all driven with UDP. TCP complicates things but is manageable. Adding a crap API between TCP and my application? Yeah, I can see why no one is lining up to secure their code.

                                                                                              1. 1

                                                                                                I think there is a communication issue here.

The vendor you are connecting with over HTTPS supports UDP packets on a REST API interface? Really? Crazier things have happened, I guess.

                                                                                                I think what you are saying is you are doing DNS over UDP for now, but are being forced into HTTPS over TCP?

                                                                                                DNS over UDP is very far away from a HTTPS rest API.

                                                                                                Anyways, for being an HTTPS client, against a HTTPS REST API over TCP, you have 2 decent options:

                                                                                                Event driven/async: use libevent, example code: https://github.com/libevent/libevent/blob/master/sample/https-client.c

But most people will be boring and use something like libcurl (https://curl.haxx.se/docs/features.html) and do blocking I/O. If they have enough network load, they will set up a pool of workers.

                                                                                                1. 2

                                                                                                  Right now, we’re looking up NAPTR records over DNS (RFC-3401 to RFC-3404). The summary is that one can query name information for a given phone number (so 561-555-5678 is ACME Corp.). The vendor wants to switch to a REST API and return JSON. Normally I would roll my eyes at this but the context I’m working in is more realtime—as in Alice is calling Bob and we need to look up the information as the call is being placed! WE have a hard deadline with the Monopolistic Phone Company to provide this information [1].

                                                                                                  We don’t use libevent but I’ll look at the code anyway and try to make heads and tails.

[1] Why are we querying a vendor for this? Well, it used to be in house, but now “we lease this back from the company we sold it to - that way it comes under the monthly current budget and not the capital account.” (At least, that’s my rationale for it.)

                                                                                                  1. 2

Tell me how it goes. FWIW, you might want to take a quick look at mbed TLS. Sure, it wants to wrap a socket fd in its own context and use read/write on it, but you can still poll that fd and just call the relevant mbedtls function when data comes in. It also supports non-blocking operation.

                                                                                                    https://tls.mbed.org/api/net__sockets_8h.html#a2ee4acdc24ef78c9acf5068a423b8c30 https://tls.mbed.org/api/net__sockets_8h.html#a03af351ec420bbeb5e91357abcfb3663

                                                                                                    https://tls.mbed.org/api/structmbedtls__net__context.html

https://tls.mbed.org/kb/how-to/mbedtls-tutorial (non-blocking I/O isn’t covered in the tutorial, but it doesn’t change things much)

                                                                                                    I’ve no experience with UDP (yet – soon I should), but if you’re doing that, well, mbedtls should handle DTLS too: https://tls.mbed.org/kb/how-to/dtls-tutorial (There’s even a note relevant to event based i/o)

                                                                                                    We use mbedtls at work in a heavily event based system with libev. Sorry, no war stories yet, I only got the job a few weeks ago.

                                                                                                    1. 1

                                                                                                      Right, let’s add MORE latency for a real-time-ish system. Always a great idea! :)

                                                                                    1. 15

                                                                                      Seemed too good to be true.

I tried Infer on our standard build environment, but it depends on newer libs than you get on RHEL/CentOS 7. So I moved on to the Docker image, but it ran out of memory while building Infer. So I moved on to using the prebuilt binaries in a custom Docker image running Ubuntu, and after getting the deps right, I can finally run Infer on a trivial one-line C program. Unfortunately, the thing segfaults if I try to use it on real code.

                                                                                      1. 5

The most glaring omission in the post is Infer from Facebook. I would rate Infer as the most impressive open source C/C++ static analyzer, by far.

                                                                                        1. 3

                                                                                          ugh, I’ve been trying to package it for arch and it’s such a pain in the ass. It uses a bunch of ocaml libraries that didn’t previously have packages and it bundles a custom version of clang with its own modifications and extensions. Oh, and due to requiring a custom clang, builds can be over half an hour before anything goes wrong.

                                                                                          1. 2

                                                                                            Whoa, if that thing does what it says on the tin, I’m super interested.

                                                                                            I hope it does.

                                                                                            Cppcheck did not.

                                                                                            EDIT: A nasty nest of segfaults is all I can get out of it. Maybe I’ll check back next year.

                                                                                          1. 20

                                                                                            Kinesis Advantage. I’ve been using them for almost twenty years, and other than some basic remapping, I don’t customize.

                                                                                            1. 2

                                                                                              Ditto, I’m at a solid decade. I cannot recommend them enough.

                                                                                              1. 2

Also Kinesis Advantage, for over a decade. On the hardware side I’ve only mapped Esc to where Caps Lock would be. On the OS side I’ve got a customized version of US Dvorak with the Scandinavian alphabet.

                                                                                                I’d like to try a maltron 3d keyboard with integrated trackball mouse. It’s got better function keys too, and a numpad in the middle where there’s nothing except leds on the kinesis.

                                                                                                1. 2

                                                                                                  Me too. I remap a few keys like the largely useless caps-lock and otherwise I don’t program it at all. It made my wrist pain disappear within a couple weeks of usage though.

                                                                                                  1. 2

My only “problem” with the Kinesis, and it’s not even my problem, was that the office complained about the volume of the clicks while I was on a call taking notes.

So I switch between the Kinesis and an Apple or Logitech BT keyboard on those occasions.

                                                                                                    1. 1

                                                                                                      You can turn the clicks off! I think the combo is Prgm-\

                                                                                                      1. 2

Yeah, it’s not that click, it’s the other one from the switches :-)

I can be a heavy typist, and for whatever reason these keys stand out to others on the other end of the microphone more than I expected.

                                                                                                    2. 2

I prefer the Kinesis Freestyle2. I like the ability to move the two halves farther apart (broad shoulders), and the tilt has done wonders for my RSI issues.

                                                                                                      1. 2

Similar; largely I like that I can put the Magic Trackpad in between the two halves and have something that feels comparable to using the laptop keyboard. I got rid of my mouse years ago, but I’m fairly sold on a trackpad’s potential.

                                                                                                        I’ve sometimes thought about buying a microsoft folding keyboard and cutting/rewiring it to serve as a portable setup. Have also thought of making a modified version of the nyquist keyboard to be a bit less ‘minimal’ - https://twitter.com/vivekgani/status/939823701804982273

                                                                                                    1. 12

Anybody with a brain knows that open plan offices are just plain bad in just about every way except one: they’re dirt freaking cheap, which is why they’re everywhere, and that’s unlikely to change anytime soon.

                                                                                                      I’ve worked in this business long enough to remember when, even as a lowly sysadmin, I had either my own office or a shared office with one other person.

                                                                                                      Those were the days :)

                                                                                                      1. 1

                                                                                                        I’ve worked in this business long enough to remember when, even as a lowly sysadmin, I had either my own office or a shared office with one other person.

                                                                                                        I’ve worked in this business for three weeks and I’m hacking away alone in what used to be the boss’ office.

                                                                                                        The plan is to set up my very own remote office too.

                                                                                                      1. 21

Stylus is using the same theme database without collecting your history.

                                                                                                        1. 7

                                                                                                          +1

But the problem is: how do we ensure that Stylus (or any alternative) won’t become the next “Stylish”?

                                                                                                          1. 7

                                                                                                            I’ve written a couple of my own extensions, partly for this reason. For certain complicated or common needs (like ad-blocking) I have no choice but to find an extension I trust and use it. But in other cases I just end up writing my own because I can’t find something that doesn’t feel sketchy.

Ironically, one of my extensions was recently removed from the Firefox add-ons site because some incidental code in a dependency (that isn’t even used at runtime) makes a network request.

                                                                                                            1. 1

                                                                                                              I’ve written a couple of my own extensions, partly for this reason.

This is the “hacker’s approach” that I prefer.
Everyone should be able to hack software for their own needs.

                                                                                                              For certain complicated or common needs (like ad-blocking) I have no choice but to find an extension I trust and use it.

                                                                                                              Well, actually you can also review them, if the sources are available.

                                                                                                              1. 6

                                                                                                                Well, actually you can also review them, if the sources are available.

                                                                                                                Certainly an important part of the process, but both major browsers push updates to extensions silently, and there’s no guarantee that the code my browser runs is the same code that was in the OSS repository. It’s a crap situation all-around, really.

                                                                                                                1. 4

                                                                                                                  This is the “hacker’s approach” that I prefer.

                                                                                                                  I prefer it too, but as far as I can tell webextensions goes out of its way to make this tedious and annoying.

                                                                                                                  I’ve tried building webextensions from source, and as far as I can tell there is no way to permanently install them. You can only install them for a single session at a time. (Hopefully there’s a workaround someone can suggest, but I didn’t find one at the time.) It was pretty appalling from a hackability/software-freedom perspective, so I was pretty surprised to see it coming from Mozilla.

                                                                                                                  1. 2

I don’t know about Mozilla, but I made my own permanently installed extension for a Chromium-based appliance, precisely to avoid the risk of updates or unavailability due to internet outages.

                                                                                                              2. 4

                                                                                                                Consumers should demand that extensions don’t improperly use personal info, and that the browser vendors only allow extensions that adhere to these rules.

                                                                                                                1. 17

                                                                                                                  Consumers should demand that extensions don’t improperly use personal info

Do you know any consumer who wants extensions to sell their personal info?
I mean, it’s like relying on consumer demand for pencils that do not explode.

Yes, they might ask for it… if only they knew they should!
(I’m not just being sarcastic: perfect information symmetry is a theoretical assumption of free-market efficiency.)

                                                                                                                  1. 2

                                                                                                                    I was being half sarcastic. Marketing is basically information arbitrage, after all.

                                                                                                                    But as a practical matter I believe voluntary regulation is the way forward for this. Laws are struggling to catch up, although it would be interesting to see how GDPR applies here.

                                                                                                                    1. 5

                                                                                                                      I believe voluntary regulation is the way forward for this.

Gentlemen’s agreements work in a world of gentlemen.
In a worldwide market, cheating is too easy. It’s too easy to hide.

The reception of the GDPR shows how much we can trust companies’ “voluntary regulation”.

                                                                                                                      Laws are struggling to catch up

True. That’s largely because many politicians rely on corporate “experts” to make up for their own ignorance.

                                                                                                                  2. 3

In theory, the permissions system should govern this. For example, I can imagine a theming extension needing permission to access page content; but it should be easy to make it work without any external communication, e.g. no network access, read-only access to its own data directory (themes could be separate extensions, and rely on the extension manager to copy them into place), etc.

                                                                                                                    1. 2

It can leak data to its server by modifying just CSS, without even touching the DOM, for example by adding background images whose URLs encode the data. I don’t know if it’s even possible to design a browser extension system where extension effects are decently isolated.

However, these exfiltration hacks might attract attention more easily than a plain XHR would.

                                                                                                                      1. 1

Hmm, yes. I was mistakenly thinking of a theme as akin to rendering given HTML to a bitmap, when in fact it’s more like a preprocessor whose output is fed to the browser engine. With no way of distinguishing original page content from extension-provided markup, you’re right that it’s easy to exfiltrate data.

I can think of ways around this (e.g. setting a dirty bit on anything coming from the theme, or extending cross-domain policies somehow, etc.), but it does seem I was being a bit naive about how hard it would be.

                                                                                                                  3. 2

Theoretically, you could audit the GitHub repo (https://github.com/openstyles/stylus) and build it yourself. Unfortunately, that doesn’t seem too feasible.

                                                                                                                    1. 1

For this reason I install the absolute minimum of extensions. I usually only have Privacy Badger installed, as I’m fairly sure the EFF won’t sell out.