1. 1

    Has anyone seen what the other two packages mentioned in the email are/were?

    (Seems even if they were accidentally installed by someone they won’t do any harm, but seems odd not to name them so people can check.)

    1. 3

      I found someone on reddit mentioning balz and minergate as the other two packages.

      1. 1

        Thanks!

    1. 6

      OK but the tag line is asinine. As a regular user of a Linux distribution it is actually impossible for me to take the time to do a full analysis on every package I install to get work done.

      SOME level of trust has to be there or else the whole idea of a Linux distro can’t work.

      1. 10

        Well, AUR specifically isn’t part of the actual Arch distro. It’s no safer than the curl | bash invocations on github.

        1. 4

          But it makes you wonder whether there is a middle ground between the AUR and the community repository. Have a Git{Hub,Lab,ea} repository where the community can submit pull requests for new packages and updates, but the pull requests are reviewed by trusted users or developers. And then build the packages on trusted infrastructure.

          1. 9

            This is how the OpenBSD ports tree works. Anyone can send a new port or an update to the ports@ mailing list. It then gets tested & committed by developers.

            In this specific instance, I think what hurt Arch here is tooling that is too good. The community developed a lot of automation tools that boil down third-party package installs to pressing enter a bunch of times - even with all the warnings present, people stopped reviewing the packages. If I recall correctly, the main point of the AUR was to gather community packages and then promote the popular ones (by votes) to trusted repositories - essentially, the promotion to trusted repos lost meaning as everyone can install yaourt/pacaur or the $pkgmgr du jour and just go on with their life.

          2. 2

            It’s no safer than the curl | bash invocations on github.

            Highly disagree. Using the AUR without any supporting tools like pacaur, you clone a git repository to retrieve the PKGBUILD and supporting files, so you have the opportunity to review them. With pacaur, you’re shown the PKGBUILD at first install so you can make sure nothing’s malicious, and then you’re shown diffs when the package version updates. That’s MUCH better than curl | bash already.

            1. 1

              Also, while you shouldn’t rely on others to spot malicious code, the fact that the malicious modifications were spotted and reverted after about 9 hours shows that the AUR is subject to at least slightly more scrutiny than random scripts from github or elsewhere are.

              Admittedly, it doesn’t sound like this particular attack was very sophisticated or well hidden.

        1. 2

          are we really giving free advertising to a company that offers large sums of money to anyone that introduces vulnerabilities to open source OSes?

          1. 1

            Financial opportunities for FOSS hackers is how I read it. They could even hit a rival BSD or Linux to make money to pay for developer time, features and/or security reviews, on their own project.

            I at least considered doing something like that at one point. Although I didn’t, I wouldn’t be surprised if someone rationalized it away: greater good of what money bought; fact that vulnerabilities were already there waiting to be found in product that will be hacked anyway; blame demand side where FOSS and commercial users willingly use buggy/risky software for perceived benefits instead of security-focused alternatives.

            1. 1

              The amount of trust people need to put in others for a functioning FOSS world is very high. Groups facing strong financial incentives to betray their surroundings have to behave in an extremely paranoid way, and it’s far easier to introduce a vulnerability in your own project than to find one in another.

              Suppose I find a vulnerability and report it to security-officer@somebsd.org, and they haven’t fixed it yet. What am I supposed to conclude? That they are behind on handling tickets (it happens), or that the security officer had 500,000 reasons to stay quiet?

              What about the person who is creating the release - he can do the build with an extra change. Are all builds that aren’t reproducible suspicious now?

              Suppose you do find a vulnerability in your own project. You can see who introduced this. Are you kicking them out of your project or assuming it’s a mistake?

              Yes, I should review the work of others and I do, but there’s a limit for how much one person can check.

              1. 1

                re vulnerability brokers in general

                You’re giving me a lot of examples but missing or disagreeing with a fundamental point. I’m going straight for it instead. It’s been bugging me a lot in the past few years. It’s that most users and developers want their product to be vulnerable to achieve other goals. They willingly choose against security in a lot of ways. Users will go with a product that has lots of failures or hacks even when safer ones are available, because it has X, Y, or Z traits that they think are worth that. The companies usually go with profit and/or feature maximization even when they can afford to boost QA or simplify. Both commercial and FOSS developers often use unsafe languages (or runtimes), limited security tooling, or small amounts of code review. These behaviors damn-near guarantee a lot of this software is going to be hacked. They do it anyway.

                So, the market is pro-getting hacked to the point they almost exclusively use things with lots of prior CVE’s. The network effects and oligopolistic tactics of companies mean there’s usually just a few things in each category. Black hats and exploit vendors are putting lots of time and money into the bug hunting that those suppliers aren’t doing and customers are voting for with their wallet. There’s going to be 0-days found in them. If there’s damage to be done, it will be done just as each party decided with their priorities. With that backdrop, will your bug in a Linux or BSD make a difference to whether folks buying from Zerodium will hack that platform? Probably not. Will it make a difference as to who gets paid and how much if you choose responsible disclosure over them? Probably so.

                To drive that home, Microsoft, IBM, Google, and Apple all have both the brains and money to make their TCBs about as bug-proof as they can get. If they care about security, then that’s a good thing to do. If their paying users care, then it’s even more of a good thing to do. They spend almost nothing on preventative security compared to what they make on their products and services. They don’t care. They’ll put the vulnerabilities in themselves just to squeeze more profit out of customers. Letting a broker have them before someone else isn’t making much difference. That’s at least one damage-assessment angle.

                I think about it differently if the customer is paying a lot extra for what’s supposed to be good security. I think the supplier should be punished in the courts or something for lying, with the cost high enough that they either start doing security or stop lying about what they’re not doing. Also, I think suppliers who have put good effort in shouldn’t be punished over a little slip or a new class of attack. I’d rather people finding those get paid so well by the companies and/or a government fund that they don’t go to vulnerability brokers most of the time. I’m just not having much sympathy for either users or suppliers griping about vulnerability brokers if they both favor products they know will get hacked because they accepted the tradeoffs. Whereas projects that focus on balancing features and security with strong review often languish with low revenues or (for FOSS) hardly any financial contributions.

                re suspicious builds

                All software is insecure and suspicious until proven otherwise by strong review. That’s straight-up what security takes. Since you mentioned it, the guy (Paul Karger) who invented the compiler attack that Thompson demoed laid out some requirements for dealing with threats like that. Reproducible builds don’t begin to cover it, especially malicious developers. For the app, you need precise requirements, design, security policy, and proof that they’re all consistent with nothing bad added or good subtracted. Then, a secure repo like the one described here. Object code validation like in the DO-178C regulations if worried about compilers. Manual per app, or use a certifying compiler like CompCert after it is validated. Then, Karger et al. recommended all of that be sent via a protected channel to customers so they can re-run the analyses/tests and build from source locally. All of that was what would be required to stop people like him from doing a subversion attack. Those were 1970s to early 1990s era requirements they used in military and commercial products.

                re someone introduces vulnerability

                I’d correct the vulnerability. I’d ask them if it was a slip-up or if they’d like to learn more about preventing that. I’d give them some resources. I’d review their submissions more carefully, throwing some extra tooling at them, too. Anyone who keeps screwing up will be out of that project. People who improve will get a bit less review. However, as you saw above, my standard for secure software would already include some strong review plus techniques for blocking root causes of code injection and (if needed) covert channels. Half-assed code passing such a standard should usually not lead to big problems. If it does, they or their backers are so clever you aren’t going to beat them by ejecting them anyway. Ejection is symbolic. Just fix the problem. Add preventative measures for it if possible.

                Notice what I’m doing focuses on the project deliverables and their traits instead of the person. That’s intentional. If I have to trust them, my process is doing it wrong. At the least, I need more peer review and/or machine checks in it. As Roger Schell used to say, software built with the right methods is trustworthy enough that you can “buy it from your worst enemy.” He oversold it but it seems mostly true on low-to-mid-hanging fruit.

            2. 1

              free advertising to a company

              or a heads-up for people running those systems that a vendor is actually restocking exploits targeting those platforms. Which implies that either the exploits they had for the platform were recently patched, or they were actually approached by a customer for targeted exploitation.

            1. -2

              while you’re at it don’t use email at all, just use signal because PGP can’t protect you from security leaks in your mail client

              1. 3

                And what protects you from security leaks in your Signal app? Signal Desktop recently had several CVEs issued.

                https://www.cvedetails.com/vulnerability-list/vendor_id-17912/year-2018/Signal.html

                1. 1

                  yeah, I realize my sarcasm didn’t come off well

                2. 1

                  Just write your own mail client, or stick with mutt. (I’m contemplating both. I have betrayed mutt, and I’m “homesick” now.)

                  Also, nobody is going to protect you from security leaks in your Signal client, and then you have an OS underneath in both cases…

                  I think GPG and plain text email are OK-ish for most threats, and hold up about as well as any of the alternatives.

                  1. 1

                    I was making a joke… but as I understand it, you won’t have these issues if your mail client doesn’t render HTML or doesn’t make external HTTP requests. Pretty much all mail clients can be set that way; many have it as the default.

                    1. 1

                      Yes, or you can set up a paranoid firewall that way…

                1. 3

                  My friend has a very interesting hybrid setup:

                  You can see how it works on his hacking/CTF/education YouTube streams. The one covering the setup is unfortunately in Polish only, but the above Google doc describes it fully in English.

                  1. 4

                    If the book is so bad, then what is the publisher doing? Isn’t it their job to weed out bad content?

                    1. 6

                      I wanted to explore that question some more in the post, but it got out of scope and is really its own huge topic.

                      The short version is that perhaps, as readers, we think they are asking “Is this content any good?” when what they’re really asking is, “Will this sell?”

                      1. 5

                        In the preface of the second edition it says that the first edition was reviewed “by a professional C programmer hired by the publisher.” That programmer said it should not be published. That programmer was right, but the publisher went ahead and published it anyway.

                        Can you expand slightly on this? I understand that the second edition contains a blurb that someone they hired reviewed the 1st edition and decided it should never be published. I’m slightly lost in meaning here.

                        1. Did they hire a person for the second edition to review the first edition, and the conclusion was that it should not have been published?
                        2. Or did they hire a person to review the first edition, the conclusion was not to publish, but they still decided to publish and included a blurb about it in the second edition?

                        I guess the question is: did they know before publishing that it was this bad?

                        Additionally was the second edition reviewed by the same person and considered OK to be published?

                        1. 5

                          Here’s a longer excerpt from the second edition’s preface.

                          Prior to the publication of the first edition, the manuscript was reviewed by a professional C programmer hired by the publisher. This individual expressed a firm opinion that the book should not be published because “it offers nothing new—nothing the C programmer cannot obtain from the documentation provided with C compilers by the software companies.”

                          This review was not surprising. The reviewer was of an opinion that was shared by, perhaps, the majority of professional programmers who have little knowledge of or empathy for the rigors a beginning programmer must overcome to achieve a professional-level knowledge base of a highly technical subject.

                          Fortunately, that reviewer’s objections were disregarded, and “Mastering C Pointers” was released in 1990. It was an immediate success, as are most books that have absolutely no competition in the marketplace. This was and still is the only book dedicated solely to the subject of pointers and pointer operations using the C programming language.

                          To answer your question, then, all we can conclude is that a “professional C programmer” reviewed the first edition before it was published, recommended against publishing it, but the book was published anyway. If the quoted portion were the reviewer’s only objection, then we could surmise that the reviewer didn’t know much either, or didn’t actually read it.

                          1. 1

                            little knowledge of or empathy for … a beginning programmer

                            This is an important point that I feel has been left out of the discussion of this book. Yes, the book contains harmful advice that should not be followed. It is probably dangerous to make this text available to beginners, and it serves as little more than an object of ridicule for more experienced readers.

                            However, I think there is something to be gained from a more critical analysis that doesn’t hinge on the quality or correctness of the example. This reviewer takes a step in the right direction by trying to look at Traister’s background and trying to interpret how he arrived at holding such fatal misconceptions about C programming from a mental model seemingly developed in BASIC.

                            Traister’s code examples are in some cases just wrong and non-functioning, but in other cases I can understand what he wanted to achieve even if he has made a serious mistake. An expert C programmer has a mental model informed by their understanding of the memory management and function call semantics of C. A beginner or someone who has experience in a different sort of language will approach C programming from their own mental model.

                            Rather than pointing and laughing at his stupidity, or working to get this book removed from shelves, maybe there’s something to be gained by exercising empathy for the author and the beginner programmer. Are the mistakes due to simple error, or do they arise from an “incorrect” mental model? Does the “incorrect” mental model actually make some sense in a certain way? Does it represent a possibly common misconception for beginners? Is it a fault of the programmer or the programming language?

                            1. 1

                              …an opinion that was shared by, perhaps, the majority of professional programmers who have little knowledge of or empathy for the rigors a beginning programmer must overcome…

                              What utter nonsense. This is inverse-meritocracy: claiming that every single expert is blinded by their knowledge & experience. Who are we to listen to then?

                              It seems like they’d prefer lots of terrible C programmers cropping up right away, to a moderate number of well-versed C programmers entering the craft over time. Which, now that I think about it, is a sensible approach for a publisher to take.

                        2. 3

                          Cynically? The publisher’s job is to make money. If bad content makes them money, they’ll still publish it.

                          1. 2

                            Exactly. There’s tons of news outlets, magazines, and online sites that make most of their money on fluff. Shouldn’t be surprised if computer book publishers try it. The managers might have even sensed IT books are BS or can risk being wrong individually given how there’s piles of new books every year on the same subjects. “If they want to argue about content, let them do it in the next book we sell!” ;)

                            1. 2

                              I recommend a scene from Hal Hartley’s film “Fay Grim” (the sequel to “Henry Fool”) here. At one point, Fay questions the publisher’s decision to publish a work (‘The Confessions’) of her husband - she only read “the dirty parts” but still recognized the work as “really, really bad”.

                              Excerpted from a PopMatters review: “One proposal, from Simon’s publisher Angus (Chuck Montgomery), will lead to publication of Henry’s (admittedly bad) writing and increased sales of Simon’s poetry (on which royalties Fay and Ned depend to live). (Though the writing is, Fay and Angus agree, “bad,” he asserts they must press on, if only for the basest of reasons: “We can’t be too hard-line about these things, Fay. Anything capable of being sold can be worth publishing.”)”

                        1. 6

                          Team lobste.rs, @lattera, @nickpsecurity?

                          1. 5

                            Haha. I would love it if I had the time to play. Perhaps next year. Thanks for the ping, though. I’ve forwarded this on to a few of my coworkers who play CTFs.

                            1. 4

                              I’d love to if I hadn’t lost my memory, including of hacking, to that injury. I never relearned it since I was all-in with high-assurance security at that point which made stuff immune to almost everything hackers did. If I still remembered, I’d have totally been down for a Lobsters hacking crew. I’d bring a dozen types of covert channels with me, too. One of my favorite ways to leak small things was putting it in plain text into TCP/IP headers and/or throttling of what otherwise is boring traffic vetted by NIDS and human eye. Or maybe in HTTPS traffic where they said, “Damn, if only I could see inside it to assess it” while the data was outside encoded but unencrypted. Just loved doing the sneakiest stuff with the most esoteric methods I could find with much dark irony.

                              I will be relearning coding and probably C at some point in future to implement some important ideas. I planned on pinging you to assess the methods and tooling if I build them. From there, might use it in some kind of secure coding or code smashing challenge.

                              1. 5

                                I’m having a hard time unpacking this post, and am really starting to get suspicious of who you are, nickpsecurity. Maybe I’ve missed some background posts of yours that explains more, and provides better context, but this comment (like many others) comes off…almost Markovian (as in chain).

                                “If I hadn’t lost my memory…” — of all the people on Lobsters, you seem to have the best recall. You regularly cite papers on a wide range of formal methods topics, old operating systems, and security, and even in this post you discuss techniques for “hacking” which, just sentences before, you said you “can’t remember how to do.”

                                You regularly write essays as comments…some of which are almost tangential to the main point being made. These essays are cranked out at a somewhat alarming pace. But I’ve never seen an “authored by” submitted by you pointing outside of Lobsters.

                                You then claim that you need to relearn coding, and “probably C” to implement important ideas. I’ve seen comments recently where you ask about Go and Rust, but would expect, given the number of submissions on those topics specifically, you’d have wide ranging opinions on them, and would be able to compare and contrast both with Modula, Ada, and even Oberon (languages that I either remember you discussing, or come from an era/industry that you often cite techniques from).

                                I really, really hate to have doubt about you here, but I am starting to believe that we’ve all been had (don’t get me wrong, we’ve all learned things from your contributions!). As far as I’ve seen, you’ve been incredibly vague with your background (and privacy is your right!). But, that also makes it all the more easy to believe that there is something fishy with your story…

                                1. 11

                                  I’m not hiding much past what’s private or activates distracting biases. I’ve been clear when asked on Schneier’s blog, HN, maybe here that I don’t work in the security industry: I’m an independent researcher who did occasional gigs if people wanted me to. I mostly engineered prototypes to test my ideas. Did plenty of programming and hacking when younger for the common reasons and pleasures of it. I stayed in jobs that let me interact with lots of people. Goal was social research and outreach on big problems of the time like a police state forming post-9/11 which I used to write about online under aliases even more than tech. I suspected tech couldn’t solve the problems created by laws and media. Had to understand how many people thought, testing different messages. Plus, jobs allowing lots of networking mean you meet business folks, fun folks, you name it. A few other motivations, too.

                                  Simultaneously, I was amassing as much knowledge as I could about security, programming, and such trying to solve the hardest problems in those fields. I gave up hacking since its methods were mostly repetitive and boring compared to designing methods to make hacking “impossible.” Originally a mix of public benefit and ego, I’d try to build on work by folks like Paul Karger to beat the worlds’ brightest people at their game one root cause at a time until a toolbox of methods and proven designs would solve the whole problem. I have a natural, savant-like talent for absorbing and integrating tons of information but a weakness for focusing on doing one thing over time to mature implementation. One is exciting, one is draining after a while. So, I just shared what I learned with builders as I figured it out with lots of meta-research. My studies of work of master researchers and engineers aimed to solve both individual solutions in security/programming (eg secure kernels or high-productivity) on top of looking for ways to integrate them like a unified, field theory of sorts. Wise friends kept telling me to just build one or more of these to completion (“focus Nick!”). Probably right but I’d have never learned all I have if I did. What you see me post is what I learned during all the time I wasn’t doing security consulting, building FOSS, or something else people pushed.

                                  Unfortunately, right before I started to go for production stuff beyond prototypes, I took a brain injury in an accident years back that cost me most of my memory, muscle memory, hand-eye coordination, reflexes, etc. Gave me severe PTSD, too. I can’t remember most of my life. It was my second, great tragedy after a triple HD failure in a month or two that cost me my data. All I have past my online writings are mental fragments of what I learned and did. Sometimes I don’t know where they came from. One of the local hackers said I was the Jason Bourne of INFOSEC: didn’t know shit about my identity or methods but what’s left in there just fires in some contexts for some ass-kicking stuff. I also randomly retain new stuff that builds on it. Long as it’s tied to strong memories, I’ll remember it for some period of time. The stuff I write-up helps, too, which mostly went on Schneier’s blog and other spaces since some talented engineers from high-security were there delivering great peer review. Made a habit out of what worked. I put some on HN and Lobsters (including authored by’s). They’re just text files on my computer right now that are copies of what I told people or posted. I send them to people on request.

                                  Now, a lot of people just get depressed, stop participating in life as a whole, and/or occasionally kill themselves. I had a house to keep in a shitty job that went from a research curiosity to a necessity since I didn’t remember admining, coding, etc. I tried to learn C# in a few weeks for a job once like I could’ve before. Just gave me massive headaches. It was clear I’d have to learn a piece at a time like I guess is normal for most folks. I wasn’t ready to accept it plus had a job to re-learn already. So, I had to re-learn the skills of my existing job (thank goodness for docs!), some people stuff, and so on to survive while others were trying to take my job. Fearing discrimination for disability, I didn’t even tell my coworkers about the accident. I just let them assume I was mentally off due to stress many of us were feeling as Recession led to layoffs in and around our households. I still don’t tell people until after I’m clearly a high-performer in the new context. Pointless since there’s no cure they could give but plenty of downsides to sharing it.

                                  I transitioned out of that to other situations. Kind of floated around keeping the steady job for its research value. Drank a lot since I can’t choose what memories I keep and what I have goes away fast. A lot of motivation to learn stuff if I can’t keep it, eh? What you see are stuff I repeated the most for years on end teaching people fundamentals of INFOSEC and stuff. It sticks mostly. Now, I could’ve just piece by piece relearned some tech in a focused area, got a job in that, built up gradually, transitioned positions, etc… basically what non-savants do is what I’d have to do. Friends kept encouraging that. Still had things to learn talking to people especially where politics were going in lots of places. Still had R&D to do on trying to find the right set of assurance techniques for right components that could let people crank out high-security solutions quickly and market competitive. All the damage in media indicated that. Snowden leaks confirmed most of my ideas would’ve worked while most of security community’s recommendations not addressing root causes were being regularly compromised as those taught me predicted. So, I stayed on that out of perceived necessity that not enough people were doing it.

                                  The old job and situation are more a burden now than useful. Sticking with it to do the research cost me a ton. I don’t think there’s much more to learn there. So, I plan to move on. One, social project failed in unexpected way late last year that was pretty depressing in its implications. I might take it up again since a lot of people might benefit. I’m also considering how I might pivot into a research position where I have time and energy to turn prior work into something useful. That might be Brute-Force Assurance, a secure (thing here), a better version of something like LISP/Smalltalk addressing reasons for low uptake, and so on. Each project idea has totally different prerequisites that would strain my damaged brain to learn or relearn. Given prior work and where tech is at, I’m leaning most toward a combo of BFA with a C variant done more like live coding, maybe embedded in something like Racket. One could rapidly iterate on code that extracted to C with about every method and tool available thrown at it for safety/security checks.

                                  So, it’s a mix of indecision and my work/life leaving me feeling exhausted all the time. Writing up stuff on HN, Lobsters, etc about what’s still clear in my memory is easy and rejuvenating in comparison. I also see people use it on occasion with some set to maybe make waves. People also send me emails or private messages in gratitude. So, probably not doing what I need to be doing but folks were benefiting from me sharing pieces of my research results. So, there it is all laid out for you. A person outside security industry going Ramanujan on INFOSEC and programming looking for its UFT of getting shit done fast, correct, and secure (“have it all!”) while having day job(s) about meeting, understanding, and influencing people for protecting or improving democracy. Plus, just the life experiences of all that. It was fun while it lasted. Occasionally so now but more rare.

                                  1. 4

                                    Thank you for sharing your story! It provides a lot of useful context for understanding your perspective in your comments.

                                    Putting my troll hat on for a second, what you’ve written would also make a great cover story if you were a human/AI hybrid. Just saying. :)

                                    1. 1

                                      Sure. I’m strange and seemingly contradictory enough that I expect confusion or skepticism. It makes sense for people to wonder. I’m glad you asked since I needed to do a thorough writeup on it to link to vs scattered comments on many sites.

                                  2. 0

                                    I have to admit similar misgivings (unsurprisingly, I came here via @apg and know @apg IRL). For someone so prolific and opinionated you have very little presence beyond commenting on the internet. To me, that feels suspicious, but who knows. I’m actually kind of hoping you’re some epic AI model and we’re the test subjects.

                                    1. 0

                                      Occam’s Razor applies. ‘A very bright human bullshitter’ is more likely than somebody’s research project.

                                      @nickpsecurity, have you considered “I do not choose to compete” instead of “If only I hadn’t had that memory loss”?

                                      I, for one, will forgive and forget what I’ve seen so far. (TBH, I’m hardly paying attention anyway.)

                                      But, lies have a way of growing, and there is some line down the road where forgive-and-forget becomes GTFO.

                                      1. 1

                                        have you considered “I do not choose to compete” instead of “If only I hadn’t had that memory loss”?

                                        I did say the way my mind works makes it really hard to focus on long-term projects to completion. Also, I probably should’ve been doing some official submissions in ACM/IEEE but polishing and conferencing was a lot of work distracting from the fun/important research. If I’m reading you right, it’s accurate to say I wasn’t trying to compete in academia, market, or social club that is the security industry on top of memory loss. I was operating at a severe handicap. So, I’d (a) do those tedious, boring, distracting, sometimes-political things with that handicap or (b) keep doing what I was doing, enjoying, and provably good at despite my troubles. I kept going with (b).

                                        That was the decision until recently when I started looking at doing some real, public projects. Still in the planning/indecision phase on that.

                                        “But, lies have a way of growing, and there is some line down the road where forgive-and-forget becomes GTFO.”

                                        I did most of my bullshitting when I was a young hacker trying to get started. Quite opposite of your claim, the snobby, elitist, ego-centered groups I had to start with told you to GTFO by default unless you said what they said, did what they expected, and so on. I found hacker culture to be full of bullshit beliefs and practices with no evidence backing them. That’s true to this day. Just getting in to few opportunities I had required me to talk big… being a loud wolf facing other wolves… plus deliver on a lot of it just to not be filtered. I’d have likely never entered INFOSEC or verification otherwise. Other times have been personal failures that required humiliating retractions and apologies when I got busted. I actually care about avoiding unnecessary harm or aggravation to decent people. I’m sure more failures will come out over time with them costing me but there will be a clear difference between old and newer me. Since I recognize my failure there, I’m focusing on security BSing for rest of comment since it’s most relevant here.

                                        The now, especially over past five years or so, has been me sharing hard-won knowledge with people with citations. Most of the BS is stuff security professionals say without evidence that I counter with evidence. Many of their recommendations got trashed by hackers with quite a few of mine working or working better. Especially on memory safety, small TCB’s, covert channels, and obfuscation. I got much early karma on HN in particular mainly countering BS in fads, topics/people w/ special treatment, echo chambers, and so on. My stuff stayed greyed out but I had references. They usually got upvoted back by the evening. To this day, I get emails thanking me for doing what they said they couldn’t since any dissenting opinion on specific topics or individuals would get slammed. My mostly-civil, evidence-based style survived. Some BS actually declined a bit since we countered it so often. Just recently had to counter a staged comparison here which is at 12 votes worth of gratitude, high for HN dissenters. The people I counter include high-profile folks in security industry who are totally full of shit on certain topics. Some won’t relent no matter who concrete the evidence is since it’s a game or something to them. Although I get ego out of being right, I mainly do this since I think safe, secure systems are a necessary, public good. I want to know what really works, get that out there, and see it widely deployed.

                                        If anything, I think my being a bullshitting hacker/programmer early on was a mix of justified and maybe overdoing it vs a flaw I should’ve avoided. I was facing locals and an industry that’s more like a fraternity than meritocracy, itself constantly reinforcing bullshit and GTFO’ing dissenters. With my learning abilities and obsession, I got real knowledge and skills pretty quickly switching to current style of just teaching what I learned in a variety of fields with tons of brainstorming and private research. Irritated by constant BS, I’ve swung way in the other direction by constantly countering BS in IT/INFOSEC/politics while being much more open about personal situation in ways that can cost me. I also turned down quite a few jobs offers for likely five to six digits telling them I was a researcher “outside of industry” who had “forgotten or atrophied many hands-on skills.” I straight-up tell them I’d be afraid to fuck up their systems by forgetting little, important details that only experience (and working memory) gives you. Mainly admining or networking stuff for that. I could probably re-learn safe/secure C coding or something enough to not screw up commercial projects if I stayed focused on it. Esp FOSS practice.

                                        So, what you think? I had justification for at least some of my early bullshit quite like playing the part for job interviews w/ HR drones? Or should’ve been honest enough that I never learned or showed up here? There might be middle ground but that cost seems likely given past circumstances. I think my early deceptions or occasional fuckups are outweighed by the knowledge/wisdom I obtained and shared. It definitely helped quite a few people whereas talking big to gain entry did no damage that I can tell. I wasn’t giving bad advice or anything: just a mix of storytelling with letting their own perceptions seem true. Almost all of them are way in my past. So, really curious what you think of how justified someone entering a group of bullshitters with arbitrary, filtering criteria is justified in out-bullshiting and out-performing them to gain useful knowledge and skills? That part specifically.

                                        1. 2

                                          As a self-piloted, ambulatory tower of nano machines inhabiting the surface of a wet rock hurtling through outer space, I have zero time for BS in any context. Sorry.

                                          I do have time for former BSers who quit doing it because they realized that none of these other mechanical wonders around them are actually any better or worse at being what they are. We’re all on this rock together.

                                          p.s. the inside of the rock is molten. w t actual f? :D

                                          1. 2

                                            Actually, come to think of it, I will sit around and B.S. for hours, in person with close friends, for fun. Basically just playing language games that have no rules. It probably helps that all the players love each other. That kind of BS is fine.

                                            1. 1

                                              I somehow missed this comment before or was dealing with too much stuff to respond. You and I may have some of that in common since I do it for fun. I don’t count that as BS people want to avoid so much as just entertainment since I always end with a signal its bullshit. People know it’s fake unless tricking them is part of our game, esp if I owe them a “Damnit!” or two. Even then, it’s still something we’re doing voluntarily for fun.

                                              My day-to-day style is a satirist like popular artists doing controversial comedy or references. I just string ideas together to make people laugh, wonder, or shock them. Same skill that lets me mix and match tech ideas. If shocking stuff bothers them, tone it way down so they’re as comfortable as they let others be. Otherwise, I’m testing their boundaries with stuff making them react somewhere between hysterical laughter and “Wow. Damn…” People tell me I should Twitter the stuff or something. Prolly right again but haven’t done it. Friends and coworkers were plenty fun to entertain without any extra burdens.

                                              One thing about sites like this is staying civil and informational actually makes me hide that part of my style a lot since it might piss a lot of people off or risk deleting my account. I mostly can’t even joke here since it just doesn’t come across right. People interpret via impression those informational or political posts gave vs my in-person, satirical style that heavily leans on non-tech references, verbal delivery, and/or body language. Small numbers of people face-to-face instead of a random crowd, too, most of the time. I seem to fit into that medium better. And trying to be low-noise and low-provocation on this site in particular since I think it has more value that way.

                                              Just figured I’d mention that since we were talking about this stuff. I work in a pretty toxic environment. In it, I’m probably the champion of burning jerks with improv and comebacks. Even most naysayers pay attention with their eyes and some smirks saying they look forward to next quip. I’m a mix of informative, critical, random entertainment, and careful boundary pushing just to learn about people. There’s more to it than that. Accurate enough for our purposes I think.

                                            2. 1

                                              Lmao. Alright. We should get along fine then given I use this site for brainstorming, informing, and countering as I described. :)

                                              And yeah it trips me out that life is sitting on a molten, gushing thing being supplied energy by piles of hydrogen bombs going off in a space set to maybe expand into our atmosphere at some point. That is if a stray star doesn’t send us whirling out of orbit. Standing in the way of all of this is the ingenuity of what appear to be ants on a space rock whose combined brainpower got a few off of it and then back on a few times. They have plans for their pet rock. Meanwhile, they scurry around on it making all kinds of different visual, IR, and RF patterns for space tourists to watch for a space buck a show.

                                1. 2

                                  As terrible as this is, I bet the company didn’t lose a single sale from this. That’s part of why IoT is so horrible: there is just no reason to make a secure system when the general public won’t care.

                                  1. 1

                                    The devil’s advocate take is that the lock is still roughly as secure as some random Masterlock that kids use on lockers.

                                    Most locks exist more as signposting and a way of preventing errant access. You definitely don’t want to be using this to protect against motivated actors… but that was true even without these exploits?

                                    That being said, the random Masterlock at least requires someone to physically fiddle with it to get it open.

                                    1. 2

                                      I think the main difference is non IoT locks actually require some effort to unlock. If these IoT locks take over, someone will just make an app that automatically scans the area for devices and lets you hack them with a button press.

                                      1. 1

                                        yeah this is a very real possibility. I’m a strong believer in the gradient of security, but the idea of just walking down the street and being able to unlock all the doors is very scary

                                        (also: why does this even need to be on the internet?? We made electronic devices before bluetooth low energy, it really feels like we should be able to make a lot of this stuff in an offline way)

                                      2. 1

                                        2018/06/16: Tapplock took the API down after pressure because it was exposing GDPR-protected data.

                                        That’s why I actually like the GDPR. I bet before that law came into force the vendor would not have reacted at all. Now they face a huge fine and, most of all, are obliged by law to inform customers about the potential breach of their data.

                                    1. 4

                                      Nice article. How do you feel about the size of the language? One thing that keeps me from looking at Rust seriously is the feeling that it’s more of a C++ replacement (kitchen sink included) vs a C replacement.

                                      The Option example feels like it’s dropped a bit too early: you started by showing an example that fails, then jumped to a different code snippet to show nicer compiler error messages, without ever going back and showing how the error path is handled with the Option type.
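
                                      For illustration, a minimal sketch of what I mean by handling the error path with Option (hypothetical code, not the snippet from the post) could look like:

                                          // Hypothetical lookup that can fail; the None arm is the error path.
                                          fn find_user(id: u32) -> Option<&'static str> {
                                              match id {
                                                  1 => Some("alice"),
                                                  2 => Some("bob"),
                                                  _ => None,
                                              }
                                          }

                                          fn main() {
                                              // Handle both arms explicitly instead of unwrapping.
                                              match find_user(3) {
                                                  Some(name) => println!("found {}", name),
                                                  None => println!("no such user"),
                                              }

                                              // Or collapse the error path into a default value.
                                              let name = find_user(3).unwrap_or("anonymous");
                                              println!("{}", name);
                                          }

                                      Even a couple of lines like that at the end would close the loop on the failing example.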

                                      You should also add Ada to the list of languages to explore; you will be surprised how many of the things you found nice or interesting were already done in the past (nice compiler errors, infinite loop semantics, a very rich type system, a high-level language yet with full bare-metal control).

                                      1. 2

                                        Thank you for commenting! I agree that Rust’s standard library feels as big as C++’s, but I haven’t been too bothered by the size of either one. To quote Bjarne Stroustrup’s “Foundations of C++” paper, “C++ implementations obey the zero-overhead principle: What you don’t use, you don’t pay for [BS94]. And further: What you do use, you couldn’t hand code any better.” I haven’t personally noticed any drawbacks of having a larger standard library (aside from perhaps binary size constraints, but you would probably end up including a similar amount of code anyway, just code that you wrote yourself). Beyond the performance of standards-grade implementations of common data structures, my take is that having a standardized interface to them improves readability quite a bit - when you go off to look through a codebase, the semantics of something like a hashmap shouldn’t be surprising. It’s a minor draw, but I feel like I have to learn a new hash map interface whenever I go off to grok a new C codebase.
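
                                        To make that concrete, a tiny sketch (hypothetical values) of the standardized interface in question, using std’s HashMap - the same calls you’d see in any Rust codebase:

                                            use std::collections::HashMap;

                                            fn main() {
                                                // Same insert/get/iterate API everywhere, rather than a
                                                // per-project, hand-rolled hash map to learn.
                                                let mut scores: HashMap<&str, u32> = HashMap::new();
                                                scores.insert("blue", 10);
                                                scores.insert("red", 25);

                                                match scores.get("blue") {
                                                    Some(v) => println!("blue -> {}", v),
                                                    None => println!("no entry"),
                                                }

                                                for (team, score) in &scores {
                                                    println!("{}: {}", team, score);
                                                }
                                            }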

                                        I’ll definitely take a look at Ada, seems like a very promising language. Do you have any recommendations for books? I think my friend has a copy of John Barnes’ Programming in Ada 2012 I can borrow, but I’m wondering if there’s anything else worth reading.

                                        Also, thank you for pointing out the issue with the Option example, I’ll make an edit to the post at some point today.

                                        1. 5

                                          It’s funny how perspectives change; to C and JavaScript people, we have a huge standard library, but to Python, Ruby, Java, and Go people, our standard library is minuscule.

                                          1. 2

                                            I remember when someone in the D community proposed to include a basic web server in the standard library. Paraphrased:

                                            “Hell no, are you crazy? A web server is a huge complex thing.”

                                            “Why not? Python has one and it is useful.”

                                          2. 2

                                            What you don’t use, you don’t pay for [BS94]

                                            That is true; however, you have little impact on what others use. Those features will leak into your code via libraries or teammates using features you might not want. Additionally, when speaking about the kitchen sink I didn’t only mean the standard library; the language itself is much larger than C.

                                            I think my friend has a copy of John Barnes’ Programming in Ada 2012 I can borrow, but I’m wondering if there’s anything else worth reading.

                                            Last I did anything related to Ada was somewhere around 2012. I recall the Barnes books were well regarded but I don’t know if that changed in any significant way.

                                            For casual reading, the Ada Gems from AdaCore are fun and informative reads.

                                            1. 2

                                              I’ll definitely take a look at Ada, seems like a very promising language. Do you have any recommendations for books? I think my friend has a copy of John Barnes’ Programming in Ada 2012 I can borrow, but I’m wondering if there’s anything else worth reading.

                                              I recommend Building High Integrity Applications in SPARK. It covers enough Ada to get you into the meat of SPARK (the compile time proving part of Ada) and goes through a lot of safety features that will look familiar after looking at Rust. I wrote an article converting one of the examples to ATS in Capturing Program Invariants in ATS. You’ll probably find yourself thinking “How can I do that in Rust” as you read the book.

                                          1. 5

                                            Congratulations to Lua, Zig, and Rust on being in C’s territory. Lua actually beat it. Nim and D are nearly where C++ is but not quite. Hope Nim closes that gap and any others given its benefits over C++, esp readability and compiling to C.

                                            1. 1

                                               To be clear, and a little pedantic, Lua =/= LuaJIT.

                                              1. 1

                                                 The only thing I know about Lua is that it’s a small, embeddable, JIT’d scripting language. So, what did you mean by that? Do Lua the language and LuaJIT have separate FFIs or something?

                                                1. 5

                                                   I think just that there are two implementations. One is just called “Lua”; it’s an interpreter written in C and supposedly runs pretty fast for a bytecode interpreter. The other is LuaJIT and runs much faster (and is the one benchmarked here).

                                                  1. 1

                                                    I didn’t even know that. Things I read on it made me think LuaJIT was the default version everyone was using. Thanks!

                                                      1. 2

                                                        I waited till I was having a cup of coffee. Wow, this is some impressive stuff. More than I had assumed. There’s a lot of reuse/blending of structures and space. I’m bookmarking the links in case I can use these techniques later.

                                                      2. 2

                                                        I think people doing comparative benchmarks very often skip over the C Lua implementation because it isn’t as interesting to them.

                                                    1. 4

                                                      Extra context: LuaJIT isn’t up to date with the latest Lua either, so they’re almost different things, sorta.

                                                      LuaJIT is extremely impressive.

                                                1. 5

                                                  As exciting as this is, I’m wary about depending on GNU tools. I understand that providing an OpenBSD-culture-friendly implementation would require extra work and could be a maintenance nightmare, with two different codebases for shell scripts, but perhaps gmake could be replaced with something portable.

                                                  1. 12

                                                    This version of WireGuard was written in Go, which means it can run on exactly 2 (amd64, i386) of the 13 platforms supported by OpenBSD.

                                                    The original WireGuard implementation, written in C, is a Linux kernel module.

                                                    A dependency on gmake is the least of all portability worries in this situation.

                                                    1. 18

                                                      While it’s unfortunate that Go on OpenBSD only supports 386 and amd64, Go does support more architectures that are also supported by OpenBSD, specifically arm64 (I wrote the port), arm, mips, and power. I have also implemented Go support for sparc64, but for various reasons this wasn’t integrated upstream.

                                                      Go also supports power, and it used to run on the power machines supported by OpenBSD, but sadly now it only runs on more modern power machines, which I believe are not supported by OpenBSD. However, it would be easy to revert the changes that require more modern power machines. There’s nothing fundamental about them, just that the IBM maintainer refused to support such old machines.

                                                      Since Go supports both OpenBSD and the architectures mentioned, adding support in Go for OpenBSD+$GOARCH is only a few hours of work, so if there is interest there would not be any problem implementing this.

                                                      I can help and offer advice if anyone is willing to do the work.

                                                      1. 3

                                                        Thanks for your response! I didn’t know that go supports so many platforms.

                                                        Go support for sparc64, but for various reasons this wasn’t integrated

                                                        Let me guess: Nobody wanted to pay the steep electricity bill required to keep a beefy sparc64 machine running?

                                                        1. 23

                                                          No, that wasn’t the problem. The problem was that my contract with Oracle (who paid me for the port) had simply run out of time before we had a chance to integrate.

                                                          Development took longer than expected (because SPARC is like that). In fact it took about three times longer than developing the arm64 port. The lower-level bits of the Go implementation have been under constant churn, which prevented us from merging the port because we were never quite synced up with upstream. We were playing a whack-a-mole game with upstream. As soon as we merged the latest changes, upstream had diverged again. In the end my contract with Oracle had finished before we were able to merge.

                                                          This could all have been preventable if Google had let us have a dev.sparc64 branch, but because Google is Google, only Google is allowed to have upstream branches. All other development must happen at tip (impossible for big projects like this, also disallowed by internal Go rules), or in forks that then have to keep up.

                                                          The Go team uses automated refactoring tools, or sometimes even basic scripts to do large scale refactoring. As we didn’t have access to any of these tools, we had to do the equivalent changes on our side manually, which took a lot of time and effort. If we had an upstream branch, whoever did these refactorings could have simply used the same tools on our code and we would have been good.

                                                          I estimate we spent more effort trying to keep up with upstream than actually developing the sparc support.

                                                          As for paying for electricity, Oracle donated one of the first production SPARC S7-2 machines (serial number less than 100) to the Go project. Google refused to pay for hosting this machine (that’s why it’s still sitting next to me as I type this).

                                                          In my opinion, after being involved with Go since the day of the public release, I’d say the Go team at Google is unfortunately very unsympathetic to large scale work done by non-Google people. Not actively hostile - they thanked me for the arm64 port, and I’m sure they are happy somebody did that work - but indirectly hostile, in the sense that the way the Go team operates is not compatible with large scale outside contributions.

                                                          1. 1

                                                            Having to manually follow automated tools has to suck. I’d be overwhelmed by the tedium or get side-tracked trying to develop my own or something. Has anyone attempted a Go-to-C compiler to side-step all these problems? I originally thought something like that would be useful just to accelerate all the networking stuff being done in Go.

                                                            1. 2

                                                              There is gccgo, which is a frontend for gcc. Not quite a transpiler but it does support more architectures than the official compiler.

                                                              1. 1

                                                                Yeah, that sounds good. It might have a chance of performing better, too. The thing working against that is that the official Go compiler is designed to optimize that language, while gccgo is just co-opted for it. Might be interesting to see if any of the servers or whatever perform better with gccgo. I’d lean toward LLVM, though, given it seems more optimization research goes into it.

                                                              2. 2

                                                                The Go team wrote such a (limited) transpiler to convert the Go compiler itself from C to Go.

                                                                edit: sorry, I misread your comment - you asked for Go 2 C, not the other way around.

                                                                1. 1

                                                                  Hey, that’s really cool, too! Things like that might be a solution to security of legacy code whose language isn’t that important.

                                                            2. 1

                                                              But these people are probably more than comfortable with cryptocurrency mining 🙃

                                                            3. 3

                                                              Go also supports power, and it used to run on the power machines supported by OpenBSD, but sadly now it only runs on more modern power machines, which I believe are not supported by OpenBSD. However, it would be easy to revert the changes that require more modern power machines. There’s nothing fundamental about them, just that the IBM maintainer refused to support such old machines.

                                                              The really stupid part is that Go since 1.9 requires POWER8… even on big-endian systems, which is pointless because most people running big-endian PPC are doing it on pre-POWER8 systems (there are still a lot!) or on a big-endian-only OS (AIX and OS/400). You tell upstream, but they just shrug at you.

                                                              1. 3

                                                                I fought against that change, but lost.

                                                              2. 2

                                                                However, it would be easy to revert the changes that require more modern power machines.

                                                                Do you have a link to a revision number or source tree which has the code to revert? I still use a macppc (32 bit) that I’d love to use Go on.

                                                                1. 3

                                                                  See issue #19074. Apparently someone from Debian already maintains a POWER5 branch.

                                                                  Unfortunately that won’t help you, though - sorry for speaking too soon. We only ever supported 64-bit power, so if macppc is a 32-bit port, this won’t work for you.

                                                                  1. 3

                                                                    OpenBSD/macppc is indeed 32-bit.

                                                                    I kinda wonder if, say, an OpenBSD/power port is feasible; fast-ish POWER6 hardware is getting cheap used (around $200) and not hard to find (and again, all pre-P8 POWER hardware in 64-bit mode is big-endian only). It all depends on developer interest…

                                                                    1. 3

                                                                      Not to mention that one Talos board was closer to two grand than eight or ten. Someone could even sponsor the OpenBSD port by buying some devs the base model.

                                                                      1. 3

                                                                        Yeah, thankfully you can still run ppc64be stuff on >=P8 :)

                                                              3. 2

                                                                This version of WireGuard was written in Go, which means it can run on exactly two architectures (amd64 and i386).

                                                                That and syspatch make me regret buying an EdgeRouter Lite instead of saving up for an apu2.

                                                              4. 2

                                                                I’m a bit put off by the dependency on bash on all platforms. Can’t this be achieved with a more portable (POSIX sh) script instead?

                                                                1. 3

                                                                  You don’t have to use wg-quick(8) – the thing that uses bash. You can instead set things up manually (which is really easy; wireguard is very simple after all), and just use wg(8) which only depends on libc.
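
                                                                  For example, here is a rough sketch of the manual route (the interface name, key path, peer key, endpoint, and addresses below are all made-up placeholders for illustration; adjust for your own setup):

                                                                  # start the Go userspace implementation (wireguard-go) on a tun interface
                                                                  wireguard-go tun0

                                                                  # configure keys and a peer with wg(8) - no bash involved
                                                                  wg set tun0 listen-port 51820 private-key /etc/wireguard/private.key
                                                                  wg set tun0 peer <peer-public-key> endpoint vpn.example.com:51820 allowed-ips 10.0.0.1/32

                                                                  # confirm the configuration
                                                                  wg show tun0

                                                                  Assigning an address to the interface and adding routes is then done with ifconfig(8)/route(8) as usual - that’s the part wg-quick normally automates for you.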

                                                                  1. 2

                                                                    I think the same as you - I’m sure it is possible to achieve the same results using portable scripts. I’m aware of the conveniences bash offers, but it is big, slow, and prone to bugs.

                                                                1. 14

                                                                  They in fact don’t chase fads. Other things they ignored include virtualization, a desktop experience, and a business model like Red Hat’s for supporting the developers. They always do their own thing. Whether that’s better or not varies a lot. ;)

                                                                  1. 3

                                                                    Hey, not sure if this post was sarcasm, but OpenBSD does have virtualization now, called vmm. And it’s actually known for having a pretty good desktop experience relative to other BSDs; they even have mocked up enough systemd stuff to get GNOME 3 running.

                                                                    1. 1

                                                                      They ignored it for a long time with the kind of mockery you see here. It’s one of the few times they reversed their position, by building a hypervisor.

                                                                      As far as the desktop goes, you can install a 3rd party desktop. You used to be able to do that for MS-DOS, too, but we wouldn’t call it a “Microsoft” desktop. I think an OpenBSD-focused project would be pretty different from vanilla Gnome. It could even be just a standardized integration of select components, like individual developers probably already do. Note that there have been attempts to build OpenBSD distros for things like LiveCDs. I don’t know if they were project members, though.

                                                                      1. 1

                                                                        I’m not sure what you’re saying. Are you saying that in order for OpenBSD to be considered not ignoring the desktop experience they have to build their own desktop environment? Why do that if they can leverage what already exists and works?

                                                                        1. 3

                                                                          Why do that if they can leverage what already exists and works?

                                                                          That’s not how the OpenBSD project does things, haha. You’re talking like a Linux user now, with Gnome itself coming from that kind of people. ;) The same crowd was behind most desktop environments that could go mainstream with lay people. That kind of thing just doesn’t happen with OpenBSD’s work since they don’t target such people. They mostly seem to use terminals, with a preference for a base system focused on terminal use. Some are content with 3rd party stuff on OpenBSD for this purpose. Some are on Macs, which were a great answer to a usable desktop with UNIX’s power underneath.

                                                                          To further illustrate the point, OpenBSD people like to build their own stuff, consistent with their philosophy of doing things with a focus on a specific, technical audience (especially themselves: the main users). They have a part of the site where they highlight a lot of “OpenBSD’s” contributions to software. Those are the things they’re proud of. They certainly can and do use third party software on an individual basis. That’s not the default experience of OpenBSD, though, which is a narrower set of software done in a specific way.

                                                                          I did try to imagine an OpenBSD desktop when looking for potential business models they could use to get more support. If they built one, I predicted it would be lightweight, consistent, well-documented, use their custom libraries, and work with their security mitigations. It would also be the default graphical environment you get after installing OpenBSD. It would probably be an option in the installer, given many will use terminals. Major apps would have packages that installed them into this environment, automatically setting up any necessary configuration, both functional and security-related. That would use their internal tooling as well to reduce crud. It would be tested or certified on a small set of high-quality hardware their developers were able to build good drivers for. That would be a desktop deserving the OpenBSD branding that might also sell commercially.

                                                                          Gnome on OpenBSD isn’t an OpenBSD desktop. It’s a Linux desktop experience running on a terminal- and server-focused BSD. Totally different.

                                                                          1. 4

                                                                            OpenBSD actually comes with the Calm Window Manager (cwm); some developers use it, but not all of us. If I had to hand-wave at what’s popular among developers I would say: i3, cwm, fvwm, ratpoison, dwm, xfce4 - in no particular order.

                                                                            I think both of you are right to a degree. @apy is correct that OpenBSD can be deployed as a perfectly fine desktop, and m:tier proved that by providing Gnome-based workstations for corporations. When @apy says:

                                                                            And it’s actually known for having pretty good desktop experience relative to other BSDs

                                                                            I think he has in mind the pretty good situation with OpenBSD on laptops (suspend/hibernate, wifi, many things working out of the box), which is a result of many developers running the system as daily drivers on their main machines.

                                                                            That said, you are correct that the project goals are not focused on providing a mainstream desktop. If the desktop became a big focus for the project, I still believe it would not be any different from the work already done with cwm. OpenBSD doesn’t cater to mainstream needs; if you dogfood your software, you tend to make what you like to eat.

                                                                            OpenBSD works for me as a desktop; I use it for gaming, work and every other computer related task. It runs on 3 of my laptops and on my server. If it doesn’t work for you yet, and you’re the type of person that wants to put in the work required to make that happen, you will find a supportive community. However, if you only want a working desktop without putting in any effort, you are better off picking up a mainstream operating system.

                                                                            1. 2

                                                                              I appreciate your reply. I forgot about cwm. It’s not quite a full desktop environment, but it does address the most important functionality. Interestingly, its description matches some of my predictions for what one would look like on OpenBSD. I should probably try using it at some point since it’s keyboard-focused. Been meaning to try one that was.

                                                                              I didn’t know about m:tier’s desktops. Thanks for the tip, along with the picture of what people are using. All that collectively could be helpful in the event anyone wanted to tackle a project creating a more complete desktop. Gotta know people’s preferences to start with. I doubt there will be a lot of interest in that, though. One advantage of just using something like Gnome is people only need to be trained on one thing across several platforms. Pretty intuitive, too, for basic use.

                                                                              As far as gaming goes, it still trips me out that there’s an OpenBSD gaming scene at all given how niche it is. Unexpected but neat. Aside from the Reddits, are there any go-to introductory links for the best games and/or what goes into porting one to OpenBSD? I figure the latter part could have interesting challenges if the game wasn’t originally made for OpenBSD.

                                                                              1. 2

                                                                                We are pretty active on freenode in #openbsd-gaming. thfr made amazing progress with porting FNA-based games, which accounts for some pretty decent and quite new indie titles (https://github.com/rfht/fnaify#status). He is also now making strides into VR on OpenBSD.

                                                                                We also have a GOG mix listing games with a native OpenBSD engine available (so excluding dosbox etc.). We also game every Saturday, usually Quake 1, 2 or 3. This Saturday we spent 8 hours on a 3v3 game of Wesnoth :)

                                                                  1. 8

                                                                    To prevent mutt from auto-invoking GPG, use the following in your ~/.muttrc:

                                                                    set pgp_decrypt_command = "false"
                                                                    set pgp_auto_decode = no
                                                                    set pgp_use_gpg_agent = no
                                                                    set crypt_autopgp = no
                                                                    set crypt_verify_sig = no
                                                                    set crypt_use_gpgme = no

                                                                    I found it still calling pgp_decrypt_command even after setting all the other variables, hence preemptively setting it to "false", as we don’t know what triggers the vuln.

                                                                    1. 8

                                                                      At least by using mutt/neomutt, we’ve secured ourselves against HTML-based exfiltration attacks. :)

                                                                      1. 4

                                                                        Most HTML-aware MUAs these days don’t auto-load external resources either.

                                                                        1. 4

                                                                          Still, I find it a bit worrying that mutt is so eager to shell out to a command by default and apparently ignores the auto-decode flag - I wonder if there are more, less popular formats that make it try calling random stuff. @fcambus found plenty in Lynx when he started pledging it.

                                                                          1. 1

                                                                            Maybe it’s just a bug?

                                                                        2. 5

                                                                          The efail paper (warning: pdf) has a table that shows mutt has no exfiltration channels. I believe pgp to be safe with mutt in the context of the efail attacks.

                                                                          1. 1

                                                                            yeah, when I wrote the comment the paper was not available yet (or I wasn’t yet aware it was published).

                                                                        1. 1

                                                                          My impressions are as follows.

                                                                          The interface is bad, the email notifications are useless and don’t distinguish between ‘hey, someone sent you a direct message/mentioned you on a channel’ and ‘here is a dump of messages from last week’.

                                                                          Handling e2e encryption keys and device verification is terrible, including tying the device key to the browser user agent - I had to re-authenticate my browser after Chrome UA changed on OpenBSD.

                                                                          There are some messages that nothing except my phone can decrypt, and ‘requesting’ keys doesn’t help with it at all.

                                                                          The interface and service feels sluggish.

                                                                          e2e encryption is not enabled by default.

                                                                          I love the idea of Matrix & the Riot client, but it lacks a lot of polish at this point in time. It’s annoying enough that I do not use it daily; I take a look at the OpenBSD Riot channel every few weeks - that’s all.

                                                                          1. 1

                                                                            e2e encryption is not enabled by default.

                                                                            I agree with a lot of what you said (although I disagree about the degree to which it is a problem). For this one, I’m not sure it is a negative. E2E encryption is still in beta, so turning it on by default would probably produce the opposite complaint from a lot of people, possibly even you, given your earlier statements on its quality. It also cannot be undone, so it would make public channels annoying. I also don’t really think public channels need e2e given anyone can join them. Maybe direct chats should be e2e by default once it’s ready, I’m not sure. But I do believe there is a valid argument for e2e encryption being off by default.

                                                                            1. 1

                                                                              For group channels - sure. I do believe, however, that e2e for direct messages should be on by default. Especially if they consider e2e encryption still a beta - it needs huge usage exposure before people start relying on it in the real world for serious stuff.

                                                                              1. 1

                                                                                I find your statement kind of confusing. You are suggesting we opt people into e2e encryption by default, but at the same time it’s not ready for serious stuff. IMO, letting people opt themselves in, slowly working out the bugs, and eventually transitioning people into it by default sounds like a more pleasant user experience than dropping everyone into a buggy solution. I can see merits to your suggestion, but my values prefer a slower solution.

                                                                                1. 2

                                                                                  IMO, letting people opt themselves in and slowly work out bugs and eventually transition people into it by default sounds like a more pleasant user experience than dropping everyone into a buggy solution.

                                                                                  I think it will lead to it remaining non-default forever and people sending messages without turning e2e encryption on. Defaults matter.

                                                                                  I also believe it’s better to expose as many users as possible to the e2e feature now - people using Matrix today are most likely technical already. It’s harder to change defaults when things go mainstream.

                                                                                  1. 1

                                                                                    I think it will lead to it remaining non-default forever and people sending messages without turning on e2e encryption on

                                                                                    Maybe! It’s hard to tell the future. At least anyone who is sufficiently motivated can write a client which does default to e2e encryption or can make a PR to Riot that defaults to it, etc etc (it’s a client decision not a server decision). I feel like you’re being overly pessimistic, but we’ll find out!

                                                                                  2. 2

                                                                                    IMO, letting people opt themselves in and slowly work out bugs and eventually transition people

                                                                                    They need to just fix the bugs so we don’t have to slowly opt people in. Most of the private or FOSS alternatives to proprietary software fail due to user experience. Those developing them should’ve learned that by now. I’d hold off on new features where possible to just fix everything people have reported. Then, do iterations as follows: build some stuff with good diagnostics or logging built in; fix the bugs people report; build some more stuff; fix some more stuff; maybe trim anything that turned out unnecessary. Just rinse and repeat, maintaining a good user experience with core functionality that works well. If there are bugs, they should be in rarely-used features.

                                                                                    1. 4

                                                                                      They need to just fix the bugs so we don’t have to slowly opt people in.

                                                                                      This statement is ridiculous. It’s an open source project with limited resources. Yes, it would be nice if they could just fix the bugs. Wouldn’t life be great in every project if that could just happen.

                                                                                      Those developing them should’ve learned by now.

                                                                                      It’s new people developing every project; it’s not Ocean’s Eleven, where the same crew gets together on every job. The number of people who can program is growing at an insane rate, and most of them are green.

                                                                                      Then, do iterations as follows: …

                                                                                      Feel free to run an open source project like this. But this isn’t a company with top-down management; it’s a bunch of actors in the world doing whatever they are doing, and things happen. There is no one in control.

                                                                                      1. 2

                                                                                        This statement is ridiculous. It’s an open source project with limited resources. Yes, it would be nice if they could just fix the bugs. Wouldn’t life be great in every project if that could just happen.

                                                                                        There are open source projects that fix their bugs. There are others that ignore the bugs to work on other parts of the project, like new features. So it’s not ridiculous: it’s a course of action proven by other projects that focus on quality and polishing what they have. Many projects and products do ignore that approach, though, in favor of endlessly adding features.

                                                                                        Now, it might be acceptable to ignore bugs if users love the core functionality enough to work around them. Maybe the new features would be justified. That happens with a lot of software. However, bugs in the basic use of a chat client that is not in wide demand - bugs its competitors don’t have - are going to be a deal-breaker for a wide audience. It’s already a hard, uphill sell to get people to use private, encrypted clients like Signal that work. People mostly cite network effects of existing ecosystems, but also things like visuals and breakage of some features. Really petty given the benefits and developers available, but you’ve got to play to the market’s perception. Leaving the alternatives broken in whatever ways you were noticing just makes that hard sell worse, both for that project and for any others that get mentally associated with that experience down the line. As in, people stop wanting to try encrypted chat programs when the last two or three were buggy as hell or had poor UI. It can even hurt the credibility of people recommending them.

                                                                                        “Feel free to run an open source project like this.”

                                                                                        There are groups that do. They have fewer contributors but higher quality. Another alternative is one person who does care spending extra time on fixing bugs or QA-checking contributions. I’m usually that guy at my job, doing a mix of the stuff people overlook and the normal stuff. There are people doing it in FOSS projects. This one clearly needs at least one person doing that. Maybe one more person if someone is already doing it but overloaded.

                                                                                        When it comes down to it, though, I said the group wanting a lot of people to switch to their chat client should fix the problems in it. Your counter implies they shouldn’t fix the problems in it. I’m assuming you meant they should keep doing more features or whatever they’re doing while ignoring the problems. I think for chat clients, fixing problems that would reduce or block adoption should be one of the highest priorities. Even a layperson would tell you they want their new tech to work about as well on its main functions as the old one it’s replacing. The old ones work really well, so the new one needs to. It’s that simple to them.

                                                                                        1. 1

                                                                                          There’s open source projects that fix their bugs.

                                                                                          Your counter implies they shouldn’t fix the problems in it.

                                                                                          OK, I think we are talking about different things then, because that is not what I meant at all. I’m not saying they don’t fix their bugs; I’m saying they are slowly working a new feature out. Maybe it’s a language barrier, but that is what I meant here:

                                                                                          and slowly work out bugs and eventually transition

                                                                                          I think it’s better to give people a new feature they can opt into than force them into something broken.

                                                                                          1. 2

                                                                                            Maybe a misunderstanding. Your original writeup suggested they had bugs in quite a few things, including E2E messaging. E2E should be on by default due to its importance. So I’m just saying that fixing E2E messaging bugs especially should be a high priority, since it’s important and should stay on by default. Plus anything else causing problems in daily use.

                                                                                            1. 1

                                                                                              But that depends on what problem you think Matrix is solving. Currently it’s replacing Slack and IRC, both of which mostly focus on public rooms that anyone can join. E2E encryption doesn’t do much for you in those places. For direct messages, yeah it probably should be on by default. For the private rooms I’m in, we turned it on.

                                                                                              So if one thinks Matrix is the next step in IRC or replacing Slack, then E2E encryption isn’t a high priority for you.

                                                                                              So, Im just saying that fixing esp E2E messaging bugs should be high priority since it’s important and should stay on by default. Plus anything else causing problems in daily use.

                                                                                              It’s easy to dictate project priorities from an arm chair.

                                                                                              1. 1

                                                                                                Currently it’s replacing Slack and IRC, both of which mostly focus on public rooms that anyone can join. E2E encryption doesn’t do much for you in those places.

                                                                                                That makes more sense. I assumed it had a privacy focus since someone mentions it in every thread on stuff like Signal, and given the line on its homepage. If it’s just a Slack replacement, E2E wouldn’t make sense by default.

                                                                                                “It’s easy to dictate project priorities from an arm chair.”

                                                                                                It really isn’t. There’s always lots of debate that follows that consumes time and energy. ;)

                                                                                      2. 3

                                                                                        Totally agree! Leaving bugs in the code is just stupid. You’d think they should’ve learnt that by now.

                                                                              1. 5

                                                                                This poses an interesting problem for anti-cheat systems like VAC. It’s not impossible to detect this kind of hack, but could it then be used to trick VAC into banning legitimate players?

                                                                                I’m not aware of any stories about VAC false positives. Trust in VAC seems almost absolute. So anything like the above happening could turn hairy quick.

                                                                                1. 3

                                                                                    Honestly, games seem like a security issue waiting to happen. Almost every part of them is designed without security in mind (except on consoles, and even there only to an extent) in exchange for performance. Now with Vulkan, they have much lower-level access to the GPU than they did before, allowing for greater risks involving GPU drivers. Their network protocols are likely highly exploitable, as this article shows.

                                                                                  VAC has mostly dealt with script kiddies. Once the cheating world develops far more advanced methods, then I think Valve et al will have a hell of a time.

                                                                                  1. 4

                                                                                    VAC has mostly dealt with script kiddies. Once the cheating world develops far more advanced methods, then I think Valve et al will have a hell of a time.

                                                                                      Steam itself has millions of active users, most of them with a credit card on file. Games are a big target not only for cheating - they’re a lucrative target for criminals, and I’m surprised wide-scale exploitation of them is not yet a thing.

                                                                                1. 8

                                                                                    I have huge hopes that this will result in getting e2e encryption out of beta and improving the UI for verification and key management. I have huge hopes for Matrix/Riot; I want to see it succeed.

                                                                                  1. 5

                                                                                      One of the Slack channels I’m on is evaluating it now as a possible switch; it clearly still needs work, but it’s not the worst thing. I’ve heard the server is pretty terrible to run, so I’ve only experienced it from a client perspective. I’m hoping that France invests in making the system better; I’d rather use Matrix/Riot than Slack/Discord/any of the other 100 proprietary chat apps.

                                                                                  1. 11

                                                                                        Can you tell me why your blog requires javascript to display the content? It’s extremely frustrating: it’s all there, but after the initial blip the site just turns white with only the header remaining visible.

                                                                                    https://i.imgur.com/mQHB3N0.png

                                                                                    1. 2

                                                                                      Fixed.

                                                                                      1. 2

                                                                                        Works fine with FF’s reader view mode even with javascript blocked. He has a bit of javascript that loads typekit and toggles opacity on success. As to why?

                                                                                        1. 2

                                                                                          Looks like there’s an “opacity: 0” in the CSS.

                                                                                          1. 1

                                                                                            so i just have to disable css too? :)

                                                                                        1. 2

                                                                                          cough anti-adblock cough

                                                                                          1. 6

                                                                                            It has one? I use ublock origin and have javascript disabled by default. The article is readable with no annoying anti-adblock with javascript disabled ;)

                                                                                          1. 8

                                                                                            I run OpenBSD as my only operating system on:

                                                                                                • my daily driver (a ThinkPad T420) that I use for work, gaming & everything else (OpenBSD -current)
                                                                                                • the Lenovo G50-70 which is a daily driver for my wife - currently running OpenBSD 6.3 (just updated from 6.2)
                                                                                            • our server on vultr running OpenBSD 6.2 (soon to be updated to 6.3)
                                                                                            • an asus intel atom eeepc running snapshots/-current and serves as a backup machine for hacking on stuff

                                                                                                I do have a fallback work-assigned laptop with Linux that I haven’t booted even once this year. I do, however, use the PS4 extensively for additional gaming and for streaming Netflix/HBO Go.

                                                                                            1. 2

                                                                                                  How has your experience been with suspending/hibernating? When I bought a ThinkPad X41, I first installed OpenBSD, but the fact that every time I suspended the device the screen permanently blanked until I forcefully rebooted really prevented me from using it.

                                                                                              1. 5

                                                                                                If there is a TPM config option in the BIOS, try to disable the TPM and try again (not sure if this applies to the x41 but it applies to some of the more recent models).

                                                                                                1. 5

                                                                                                      Suspend & hibernate work perfectly on both laptops I mentioned in my post. Keep in mind, a lot will depend on the hardware model and the amount of time since you tried (OpenBSD is not standing still).

                                                                                                  1. 3

                                                                                                        I have a ThinkPad X41 that has been running OpenBSD from new, and both suspend and hibernate work on it.

                                                                                                        Sometimes when it comes out of hibernation/sleep the X desktop does appear to come up blank, but if you press the brightness keys (Fn + Home on my X41) the screen restores as normal. I sometimes see this on my Toshiba laptop as well, though I have not noticed it on my X41 recently.

                                                                                                  2. 2

                                                                                                        This isn’t completely related, but I also use (Free)BSD on vultr. I’m not really a sysadmin type and barely know what I’m doing, but I like it.

                                                                                                  1. 3

                                                                                                    C4n w3 h4z pr0p3r BBC0de, l1nk3d n0 w0rk5 4nd h4z n0 1mG t4g. I c4n+ 1nl1n3 m3 c00l p1cs?

                                                                                                    http://jo.zan.hu/poen/pictures/pictures_2/fuck_mod_perl.jpg