59 results for "tedunangst.com"

  1. 3

    I’m all for browsers making it easier to manage your own trust roots. But realistically the end user is a lot more imperfect than Firefox most of the time; good defaults are by far the most important part of what the likes of Firefox need to be doing, and I’d actually consider WoSign/StartCom/… a success story - Firefox et al did the right thing, and sent a much stronger message than individuals acting alone ever would. Government surveillance is the kind of thing that requires collective action to counter - uncoordinated individual opposition doesn’t cut it.

    Heck, I’m probably one of the most paranoid 0.1% of users, but I never curated my root certificate list. There are only so many hours in the day, I have things to be doing, I’m not going to evaluate an individual CA for every website I go to. At best I’d use a list run by the EFF or someone, but really that someone might as well be Firefox.

    I don’t know what he’s proposing, but it’s hard to imagine what advantage it offers that he can’t get by using a CA-signed certificate. Installing the Ted CA doesn’t stop another CA from signing tedunangst.com. It does give Ted the authority to sign certificates for other websites, which I don’t want - 150 root CAs is pretty bad but 151 is still worse. If he has mechanisms for publishing fingerprints out of band, why not just do that with his site’s certificate? If he doesn’t trust the CAs, there are any number of mechanisms - HPKP, DANE,… - for increasing authentication while remaining compatible with the existing CA system, which, for all its flaws, is pretty effective. If he’s not willing to cooperate with the most effective, widely deployed security mechanism then screw him; I can live without his blog, it’s not worth the amount of time it would take me to figure out whether he’s just being awkward or actually wants to compromise my security. If he really wants to run a CA, he can go through the process to get it approved by Firefox; they’re far more capable of doing audits than I am.

  2. 5

    But how do I know that I’m downloading the bona fide tedu certificate? I’ve clicked through the HTTPS warnings, so all bets are off as to whether the content I’m being served has been manipulated by a 3rd party. In theory, an attacker could mitm the connection, spoof the certificate with his own, and change the content at https://www.tedunangst.com/ca-tedunangst-com.crt so that I download his cert instead. How can I verify that this hasn’t happened? If the cert has been issued by a root CA and is valid for the page, I know that the content hasn’t been tampered with, assuming I trust the certificate issuer / the root CA selection process.

    I agree that implicitly trusting a group of organisations to not fuck up and/or act maliciously is not ideal, but I don’t think everybody using self-signed certs solves the problem either.

  3. 7

    Ha! This is the first time I’ve ever seen anyone (other than enterprise IT departments) serve an out-of-band root CA trust certificate, in case someone needs it for their blog.

    https://www.tedunangst.com/ca-tedunangst-com.crt

    Pretty awesome!

  4. 1

    Hah, so says the REO Speedwagon fan! :P

  5. 5

    From my “user but not OpenBSD developer” perspective, HAMMER looks like a reasonable option worth considering. Unfortunately it’s in a bit of a no-man’s land at the moment as HAMMER2 is under development but not yet ready for prime time. Some work has been done on porting NetBSD’s WAPBL (I believe it works, but there are some issues and it’s therefore not in-tree yet).

    Pity ZFS has so much against it, as @tedu noted.

  6. 2

    I read most of the article, but I’m not sure what inks actually is. It’s 90% up my alley, but I’m missing a line or two explaining what it is - just looking at the page wasn’t too revealing for me.

  7. 17

    [Edit: I forgot to add, Google generated two different files with the same SHA-1, but that’s dramatically easier than a preimage attack, which is what you’d need to actually attack either Git or Mercurial. Everything I said below still applies, but you’ve got time.]

    So, first: in the case of both Mercurial and Git, you can GPG-sign commits, and that will definitely not be vulnerable to this attack. That said, since I think we can all agree that GPG signing every commit will drive us all insane, there’s another route that could work tolerably in practice.

    Git commits are effectively stored as short text files. The first few lines of these are fixed, and that’s where the SHA-1 shows up. So no, the SHA-1 isn’t going anywhere. But it’s quite easy to add extra data to the commit, and Git clients that don’t know what to do will preserve it (after all, it’s part of the SHA-1 hash), but simply ignore it. (This is how Kiln Harmony managed to have round-trippable Mercurial/Git conversions under the hood.) So one possibility would be to shove SHA-256 signatures into the commits as a new field. Perfect, right?
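    The storage format described above is easy to demonstrate. Here is a minimal sketch (not Git’s actual code) of how Git hashes a commit object, with a hypothetical `sha256` header added among the fixed ones; because the extra line is part of the hashed bytes, older clients carry it along even though they don’t interpret it:

```python
import hashlib

# Git hashes an object as: "<type> <byte length>\0" + body.
# A commit body is plain text: fixed headers, blank line, message.
def git_object_sha1(obj_type: str, body: bytes) -> str:
    header = f"{obj_type} {len(body)}".encode()
    return hashlib.sha1(header + b"\x00" + body).hexdigest()

# A commit with a hypothetical extra "sha256" header. The tree hash is
# the well-known empty-tree id; the sha256 value here is just filler.
commit = b"""tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904
author Example <ex@example.com> 1487000000 +0000
committer Example <ex@example.com> 1487000000 +0000
sha256 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08

add a file
"""

# The unknown header is hashed with everything else, so any client that
# round-trips the raw object preserves it even if it ignores its meaning.
print(git_object_sha1("commit", commit))
```

    Stripping the `sha256` line changes the commit’s SHA-1, which is exactly why the downgrade attack below needs a colliding replacement rather than a simple deletion.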

    Well, there are some issues here, but I believe they’re solvable. First, we’ve got a downgrade vector: intercept the push, strip out the SHA-256, replace it with your nefarious content that has a matching SHA-1, and it won’t even be obvious to older tools anything happened. Oops.

    On top of that, many Git repos I’ve seen in practice do force pushes to repos often enough that most users are desensitized to them, and will happily simply rebase their code on top of the new head. So even if someone does push a SHA-256-signed commit, you can always force-push something that’ll have the exact same SHA-1, but omit the problematic SHA-256.

    The good news is that while the Git file format is “standardized,” the wire format still remains a bastion of insanity and general madness, so I don’t see any reason it couldn’t be extended to require that all commits include the new SHA-256 field. I’m sure this approach also has its share of excitement, but it seems like it’d get you most of the way there.

    (The Mercurial fix is superficially identical and practically a lot easier to pull off, if for no other reason than because Git file format changes effectively require libgit2/JGit/Git/etc. to all make the same change, whereas Mercurial just has to change Mercurial and chg clients will just pick stuff up.)

  8. 6

    Funny how different people look at the same thing and see different problems. I look at all these bugs and I see a lack of type safety. ftp has a bug when following redirects because it doesn’t distinguish between the filename and the local viewer to run, it just smooshes them together into a single string. Shellshock because Bash doesn’t distinguish between function/variable definitions and commands to execute, it just accepts them all in the same input string. ImageTragick because ImageMagick doesn’t distinguish between scripts and image data, it just accepts whatever as input. ed and pdflatex vulnerabilities because again there’s no distinction made between data and executable commands. The Vim/tmux issue because tmux is apparently configured by sending <F19> followed by some text over its keyboard input channel. (I think the NSF vulnerability is a fairly ordinary vulnerability that doesn’t follow the pattern of the rest of the examples; it’s not really a feature interaction, just a bug in an emulator, though I suppose the emulator implementation technique could be blamed.)

    Back in the 1940s, theorists realised that the lambda calculus (i.e. all programming languages of the time) was subtly but deeply flawed and that types were needed to fix it. And they added types, and added generics to make them practical to use, and got on with things. Unfortunately no-one told the inventors of Unix or C, so these things carried the fundamental brokenness within them, and in any case C is so bad at expressing structured values that this same author recommends passing strings instead and parsing them, perpetuating this kind of vulnerability as soon as someone wants to change that value in response to user input.
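    The ftp-style bug pattern above can be sketched in a few lines. This is a hypothetical illustration (the type and function names are invented, not from any real ftp client): wrapping the two meanings in distinct types means an attacker-supplied string from a redirect can only ever become a filename, never the program to run:

```python
from dataclasses import dataclass

# Distinct wrapper types instead of one smooshed-together string.
@dataclass(frozen=True)
class Filename:
    path: str

@dataclass(frozen=True)
class ViewerCommand:
    program: str

def save_download(target: Filename) -> str:
    # Only ever treated as a path to write to, never executed.
    return f"saving to {target.path}"

def run_viewer(cmd: ViewerCommand, target: Filename) -> str:
    # The program comes from local configuration; the path may be remote.
    return f"running {cmd.program} on {target.path}"

# An attacker-chosen name from a redirect can only be wrapped as a
# Filename, so there is no code path where it is run as a command.
remote_name = "|sh evil.sh"
print(save_download(Filename(remote_name)))
print(run_viewer(ViewerCommand("less"), Filename(remote_name)))
```

    In a language with a real static type checker, passing a `Filename` where a `ViewerCommand` is expected would be rejected at compile time rather than becoming a runtime surprise.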

    I don’t think Unix is salvageable. Constant vigilance can reduce your defect rate by a factor of 10 or 100, and ad-hoc mitigation measures like ASLR and pledge can reduce exploitability of defects by a similar rate, but that still just makes exploitability of a system a question of scale. The OpenBSD team have removed many features, but they still have more than 2 features and more than 100 lines of code, and probably always will do. Any new systems I produce will be OCaml unikernels. They’ll still get exploited for the time being because they have to actually run somewhere and at the moment that’s Xen and Xen’s codebase is pretty bad, but “run this unikernel image” is something that seems at least feasible to implement with less than 100 lines and fewer than 2 features.

  9. 5

    PGP signing scares me a little. You’re creating cryptographic proof that you wrote the thing, and it becomes pretty hard to deny that once you’re caught with the private key that signed a message by Dread Pirate Roberts. Even if you’re not the kingpin of a vast illegal operation, maybe one day someone will sign some song lyrics and create indisputable proof that they may owe Sony Music some royalties.

    There are also replay attacks. Unless the message itself contains a lot of context that disproves it, I can take a signed message, add my own context around it and claim it proves something it doesn’t (“I asked someone what their favourite song was and they replied with a signed message that contained these lyrics”). Others can dispute my context of course, but I can keep handwaving and appeal to mathematics while others have to rely on those darned logic and reason things that humans are so bad at (this is possibly a similar problem to signify’s “untrusted comment” header). I’m nervous for people who sign every outgoing message they send to mailing lists because it’s only a matter of time before they sign something that they’ll either regret later (I wish there was cryptographic proof that I wrote everything that I’ve ever written on the internet, said nobody ever) or that can be taken out of context. This is worse than DKIM because at least DKIM usually includes some important headers (To/From/Date/Subject) in the signature.

    One of the only signed messages I’ve ever sent was an absentee vote in an election for the Galactic Empire*. I was careful to include the date and specifically what my vote was for, but if I wasn’t and my message just said:

    ---BEGIN PGP MESSAGE---
    I vote for Darth Vader
    ---END PGP MESSAGE---
    

    I would look pretty stupid a year later when whoever I sent my vote to conspires and claims that I voted for Vader that year, even after the whole Death Star scandal. Or someone intercepts my message and claims I was voting Vader for Worst Boss of the Year. Even the way I did it, I still created proof that on a specific day of a specific month of a specific year in the election for Supreme Commander of the Imperial Fleet, I voted for Vader, who then went on to blow up Alderaan. Darn, knew I should have voted for that other guy. Wish I could deny that now.
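    The “include the date and what the vote was for” defence amounts to binding context into the signed bytes. A minimal sketch of the idea (a hypothetical protocol, with HMAC standing in for a real signature scheme):

```python
import hashlib
import hmac
import json

# Demo key only; a real scheme would use an asymmetric signature.
KEY = b"demo key, not a real secret"

def sign_with_context(message: str, *, purpose: str, date: str, to: str) -> str:
    # Serialize message plus context deterministically, then tag it all.
    payload = json.dumps(
        {"msg": message, "purpose": purpose, "date": date, "to": to},
        sort_keys=True,
    ).encode()
    return hmac.new(KEY, payload, hashlib.sha256).hexdigest()

# The same words signed for different purposes produce different tags,
# so a verifier can reject a tag presented with the wrong context.
a = sign_with_context("I vote for Darth Vader",
                      purpose="Supreme Commander election", date="3 ABY",
                      to="election@empire.example")
b = sign_with_context("I vote for Darth Vader",
                      purpose="Worst Boss of the Year", date="3 ABY",
                      to="election@empire.example")
print(a != b)  # True
```

    With the context inside the signed payload, replaying the vote in a different election (or a different contest entirely) fails verification instead of relying on humans to argue about surrounding claims.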

    I would like deniable signing for 1-on-1 messages (I think reop does this, IIUC?): the message is encrypted and signed by a combination of your private key and the recipient’s public key, so it’s impossible for an outside party to determine whether the sender or the recipient wrote it. The signature is only useful to the specific person it was sent to, since they know that if they didn’t write it, the other person must have (unless this becomes a plot point in a Memento sequel). Now your friend can confirm those really are your Signal safety numbers, but no officer, I didn’t send those safety numbers to that degenerate, what are you talking about.
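    The core trick behind that kind of deniability can be sketched very simply. This is a hypothetical illustration, not reop’s actual construction: derive a shared key (in practice via Diffie-Hellman between the sender’s private key and the recipient’s public key, faked here with a placeholder string) and MAC the message with it. The recipient knows the message is genuine, because they didn’t write it themselves, but an outsider only sees a tag that either party could have produced:

```python
import hashlib
import hmac

# Stand-in for a real DH shared secret between the two parties.
shared_key = hashlib.sha256(b"DH(sender_priv, recipient_pub)").digest()

def tag(message: bytes, key: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

msg = b"these really are my Signal safety numbers"
sender_tag = tag(msg, shared_key)

# The recipient can compute the identical tag, which is precisely why
# the tag proves nothing to a third party about who wrote the message.
recipient_tag = tag(msg, shared_key)
print(hmac.compare_digest(sender_tag, recipient_tag))  # True
```

    Symmetric authentication like this is roughly how deniable messaging protocols (OTR, Signal) get authentication without transferable proof.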

    *This did happen but I may have changed some details to make it sound more interesting than it was

  10. 2
  11. 2

    Yup. I recommend http://www.rssboard.org/rss-profile for reference, which is lamentably difficult to stumble upon serendipitously. It includes recommendations based on surveys of publishers and aggregators in the wild… well, from 10 years ago, but still.

    Hm, if that peril is also the reason you don’t have a <guid>… that would be nice, because in absence of it, aggregators must guess how to identify an item as being the same one throughout edits. For flak you can just switch the <link> to <guid> I think (you never change those URLs, right?)… or have both if you worry about edge-case aggregators. For inks, I’ve noticed you number the blocks in the HTML, so you already have an identifier to reuse – keep the <link> and add a <guid isPermaLink="false">, probably with a tag: URL, maybe tag:www.tedunangst.com,2016:inks:37 (where only the trailing number varies; the date is just any point in time you controlled the domain, it can be constant). That would go a long way to ensuring that your updates to items do come through as updates, rather than showing up as dupes. (That’s part of the reason I sed your feed – I’d get dupes all the time when you edited your inks tags, which you do quite a bit, whereas metadata doesn’t figure into the deduping in Liferea, so now I only get dupes anymore when you actually update the item description.)
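    The suggestion above would look something like this in the feed. The item title and link here are hypothetical placeholders; only the tag: URI format comes from the text:

```xml
<item>
  <!-- title and link are hypothetical placeholders -->
  <title>An example inks item</title>
  <link>https://www.tedunangst.com/example-item</link>
  <!-- stable identifier that survives edits to the item -->
  <guid isPermaLink="false">tag:www.tedunangst.com,2016:inks:37</guid>
</item>
```

    Because the guid never changes, an aggregator treats edited items as updates rather than new entries, regardless of what happens to the link or metadata.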

  12. 3

    BTW, @tedu blogged about his experiences with OpenBSD on a T5120.

  13. 16

    I just happened by Lobste.rs after many months, and will take a couple of paragraphs to explain my history before the past week, if anyone can bear with me.

    For the past couple of years I’ve been researching ways to write software that make it easier for newcomers to understand rather than for insiders to maintain. In this quest I built a toy OS and Basic-like language called Mu which tries to “rhyme with” the design process by which Unix was coevolved with C. The big additional design constraint compared to Unix+C is to make the OS primitives testable. I want to be able to pretend in tests that we run out of memory or disk, that a context switch happens between these two instructions, and so on.

    My hypothesis is that having the ability to easily write such tests from day 1 would radically impact the culture of an eco-system in a way that no bolted-on tool or service at higher levels can replicate: it would enable new kinds of tests to be written and make it easier to be confident that an app is free from regression if all automated tests pass. This would make the stack easy to rewrite and simplify by dropping features, without fear that a subset of targeted apps might break.

    As a result people might fork projects more easily, and also exchange code between disparate forks more easily (copy the tests over, then try copying code over and making tests pass, rewriting and polishing where necessary). The community would have in effect a diversified portfolio of forks, a “wavefront” of possible combinations of features and alternative implementations of features instead of the single trunk with monotonically growing complexity that we get today. Application writers who wrote thorough tests for their apps (something they just can’t do today) would be able to bounce around between forks more easily without getting locked in to a single one as currently happens.

    There are more details on my site, perhaps starting from my mission statement. I’ll also add below a couple of hopefully tantalizing bullet lists.

    A. The zen of Mu:

    • tests, not interfaces
    • be rewrite-friendly, not backwards-compatible
    • be easy to port rather than portable
    • global structure matters more than local hygiene

    B. Mu’s vision of utopia:

    • Run your devices with 1/1000th the code.
    • 1000x more forks for open source projects.
    • Make simple changes to (some carefully chosen fork of) any project in an afternoon, no matter how large it is. Gain an hour’s worth of understanding for an hour’s worth of effort, rather than a quantum leap in understanding after a week or month of effort.
    • Projects don’t slow down with age, they continue to evolve just as fast as when they were first started.
    • All software rewards curiosity, allowing anyone to query its design decisions, gradually learn how to tweak it, try out increasingly radical redesign ideas in a sandbox. People learn programming as an imperceptible side effect of tinkering with the projects they care about.
    • Habitable digital environments.
    • A literate digital society with widespread skills for comprehending large-scale software structure and comparing-and-contrasting similar solutions, even if most people can’t write an OS. (I don’t think anybody is literate by this definition today. All we can do easily is read our own programs that we wrote recently.)

    (I know this is all extremely unconventional and risky. I might well be wasting my life barking up the wrong tree. But it seems promising enough to be worth one lifetime. And if someone can persuade me that it’s a bad idea I’ll gratefully take what’s left of my life back.)

    Over 2 years I’ve at least failed to disprove some aspects of the hypothesis in Mu, and I’ve had some success teaching with it and getting feedback on the design in the process. I’ve managed to convince one person – smalina – to join me in my quest[1], and the two of us have been investigating ways to bring the lessons out of Mu and into a ‘real’ stack, one with a real compiler and libraries that would permit more ambitious programs to be written, and with an eco-system of apps that we can try to port to our approach. The challenge here is a chicken-and-egg problem: we can’t make radical changes to a real-world OS without (at least) years of understanding all the things it’s trying to do. After much soul-searching we’ve decided to experiment with OpenBSD, not least because of tedu’s writings which show that at least a few people understand this platform from end to end. I have zero confidence that that’s true of Linux anymore[2][3][4][5], at least past the kernel.


    Anyway, after that lengthy prologue, here’s what we’ve been doing in recent weeks:

    • Building a testable network interface. Both of us have extremely limited knowledge here, so we’re currently building a fake network just at the level of HTTP that will let us record and replay network events and so turn manual tests into reproducible automated ones.
    • Getting into the guts of OpenBSD. So far we’ve gotten the ports tree to build from source (that was easy), but we’ve run into issues like how modifying /bin/ls/ls.c seems to unnecessarily recompile a large chunk of the ports tree.
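    The record-and-replay idea in the first bullet can be sketched as follows. This is a hypothetical illustration of the approach, not their actual code: a fake network keyed by (method, url) that in record mode wraps a real fetch and saves responses, and in replay mode serves the recording, turning a one-off manual test into a reproducible automated one with no network involved:

```python
class FakeHTTP:
    """A record/replay test double for HTTP at the request level."""

    def __init__(self, recording=None):
        self.recording = dict(recording or {})

    def record(self, real_fetch, method, url):
        # Hits the real network once and remembers the response.
        response = real_fetch(method, url)
        self.recording[(method, url)] = response
        return response

    def replay(self, method, url):
        # Deterministic and offline: serve the recorded response.
        return self.recording[(method, url)]

# Stand-in for a real fetch function, so this sketch runs offline.
def fake_real_fetch(method, url):
    return (200, b"hello from " + url.encode())

net = FakeHTTP()
net.record(fake_real_fetch, "GET", "http://example.com/")
print(net.replay("GET", "http://example.com/"))
```

    A recording like this can be serialized to disk and checked in alongside the test, which is what makes flaky network-dependent tests reproducible.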

    Any critique of all this would be appreciated. Also help or pointers from people knowledgeable about OpenBSD. (Heh, I’d started to write about another issue in OpenBSD that had me stumped, but figured it out as I was writing. So this comment has already paid for itself.)


    [1] He’s the one who reminded me of lobste.rs a couple of days ago, and here I am.

    [2] http://queue.acm.org/detail.cfm?id=2349257

    [3] http://web.archive.org/web/20131205090841/http://deadmemes.net/2010/10/19/fear-and-loathing-in-debianubuntu-or-who-needs-etcmotd

    [4] http://landley.net/aboriginal/history.html

    [5] http://harmful.cat-v.org/cat-v; http://harmful.cat-v.org/software; etc.

  14. 8

    I think there was some issue with how the kernel did scheduling.

    tedu wrote a brief summary a few months ago

  15. 6

    Is the aim to be “above it”, though? One of the better tedu articles is basically just laughing at all the absurd and stupid things people do to seed random. There are doubtless other articles past and future that work in the same way.

    I completely agree that we don’t want to replicate the toxic environment of other communities: we want commenters to always be courteous and civil towards one another, even in disagreement. I humbly also assert that we want to avoid falling prey to content marketing and news spam. I understand your concern about the precedent this sort of thing would set.

    That said, people are going to post this sort of thing whether we like it or not. If you want it to stop, you have to flag such articles and say in the comments why, and that you had done so… and then people will kvetch and downvote for meta interruptions. Ask me how I know this.

    So, we can either label these stories and hotmod them so they dissipate rapidly, burn karma policing the submission comment threads, or we can do nothing and watch them roll in now and again.

  16. -3

    Perhaps we can look at another example which I consider worse UI. It’s quite similar, but even using the process of elimination I had trouble knowing what to do.

    http://www.tedunangst.com/flak/images/twitterubuntu.jpg

    So here we see that I have four buttons labeled F152, F147, ubuntu, and F150. They don’t really even look like buttons, but I’m savvy enough to figure that out.

    Which button do I click? I know I don’t want ubuntu, so we can rule that out.

    That leaves three options. I watch CNN, so I’m familiar with the Twitter platform and I expect there are buttons for reply, retweet, and report to SJW authorities. I just don’t know which button is which.

    I can peer about for a while, but there’s no eliminating any more buttons. I’ve got three buttons, not even a 50/50 chance. Maybe if I could eliminate two other options so that at least I’m down to one button that either replies or does something else, I’d take my chances. But I can’t even make it that far.

  17. 2

    The only reason to have a problem with the GPL is if you want to do non-free software or you want to help others make non-free software or you have anything to do with non-free software. If non-free software isn’t important to you, then you should have no problem with the GPL.

    Or your code is licensed under a free, open source license that is GPL-incompatible. OpenSSL is the best example, because of how many projects have had problems with it.

    For example it has led to hilarious busywork like offlineimap having to get all their contributors to agree to a relicense to prevent Debian from pulling their project. And Postgres, which had similar problems. Or stunnel, which includes an OpenSSL exception, but the author asserts that it doesn’t apply to forks such as LibreSSL (see also: tedu’s post about stunnel).

    There are other niggly bits in the GPL that are only acceptable because they are widely ignored and never enforced. Is there any project at all that complies with GPLv2 section 2.a? Is there any small town electronics store selling Android phones complying with sections 3.a or 3.b? (They can’t use 3.c as selling a phone isn’t noncommercial).

    Or, the P2P problem. So much for the joke that seeding Linux distributions is the only legitimate use of BitTorrent.

  18. 3

    I completely agree that GPG is a royal pain to use. Ideally, someone would come and write something new.

    Relevant: http://www.tedunangst.com/flak/post/reop

  19. 3

    Perhaps it is a response to my blog post: http://www.tedunangst.com/flak/post/now-or-never-exec

    There was also a rather acrimonious thread about this very topic a mere 13 years ago as well: http://marc.info/?t=105058908400003&r=1&w=2

  20. 2

    tedu recently also had a post about this: http://www.tedunangst.com/flak/post/openbsd-laptops