1. 3

    Reminds me of the format we wanted to use in Firefox OS next, which never came to be.

    There’s also an IETF draft following a similar goal: https://datatracker.ietf.org/doc/html/draft-yasskin-webpackage-use-cases-01

    1. 3

      One idea I had to solve the fundamental insecurity of web-based crypto was to add subresource integrity support to service workers.

      Service workers are persistent and are essentially a piece of JavaScript which can intermediate all HTTP requests to an origin, which means a trusted service worker could verify the integrity of all loaded resources according to an arbitrary policy. Then you only need to secure the service worker itself. If you could specify the known hash of a service worker JS file, you could know that all future resource loads from the origin would be intermediated by that JS. Presumably, the service worker JS would change rarely and be publicly auditable. (If never being able to change its hash is too inconvenient, it could chainload a signed service worker file; you can implement arbitrary policies.)
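
      Roughly, the idea in service-worker terms (a minimal sketch of the proposal, not an existing platform feature; the path and digest below are made-up placeholders):

          // A pinned allowlist of subresource digests, baked into the (itself pinned) SW.
          const PINNED = new Map([
            ["/app.js", "q2koFWvcFN+HN07EfIMO0C/qOY8M7XDDVgbBBR3XRRI="], // placeholder
          ]);

          self.addEventListener("fetch", (event) => {
            const url = new URL(event.request.url);
            if (!PINNED.has(url.pathname)) return; // unpinned requests pass through
            event.respondWith((async () => {
              const response = await fetch(event.request);
              const bytes = await response.clone().arrayBuffer();
              const digest = await crypto.subtle.digest("SHA-256", bytes);
              const b64 = btoa(String.fromCharCode(...new Uint8Array(digest)));
              if (b64 !== PINNED.get(url.pathname)) {
                return new Response("integrity check failed", { status: 500 });
              }
              return response;
            })());
          });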

      This creates a TOFU (trust-on-first-use) environment. One logical extension, if this were implemented, would be a browser extension which preseeds such service workers with their known hashes, similarly to HTTPS Everywhere.

      I created an issue suggesting this at the Subresource Integrity WG: https://github.com/w3c/webappsec-subresource-integrity/issues/66

      1. 3

        I am one of the editors of the SRI spec, but I am currently on a month-long vacation in remote places without internet access (except for this comment, written on a potato).

        Having said that, you’ll likely be interested in this web hacking challenge of mine from last year. It involves SRI and Service Workers: https://serviceworker.on.web.security.plumbing/

        I’ve summarized my findings here: https://frederik-braun.com/sw-sri-challenge.html

      1. 3

        Reverse emulating? TLDR: a DIY cartridge that’s pushing the boundary of the console (and default cartridge) hardware.

        E.g., (Mis)using rereads to increase addressable storage capacity.

        1. 14

          Making the best of my paternity leave and starting a two-month bike trip with the whole family.

          See you all in August :-)

          1. 5

            Paternity leave is great.

          1. 2

            I’ll admit that I only recently learned Rust and had never heard of event sourcing before, but this post explains the concept. And it contains some good Rust code, well explained for learners.

            1. 2

              I guess I should be thankful that my circuit breaker just cuts off electricity when there is too much load so it is like a forced reboot at least once a month.

              The FBI recommends any owner of small office and home office routers reboot the devices to temporarily disrupt the malware and aid the potential identification of infected devices. Owners are advised to consider disabling remote management settings on devices and secure with strong passwords and encryption when enabled. Network devices should be upgraded to the latest available versions of firmware.

              I would like to learn more about this. I am pretty sure Verizon has a backdoor to my WiFi router FiOS-G1100. Does anyone else have this router? What do you see when you go to http://myfiosgateway.com/#/monitoring ? I see

              UI Version: v1.0.294
              Firmware Version: 02.00.01.08
              Model Name: FiOS-G1100
              Hardware Version: 1.03

              1. 2

                Access to your router is likely not publicly routed. I can’t access that web page (connection failed).

                1. 1

                  Ah, I should have mentioned you need to be at home behind your FiOS G1100 router, log in, and click on System Monitoring in the top right corner.

                  Here’s the router/modem in question: https://www.verizon.com/home/accessories/fios-quantum-gateway/

                2. 1

                  Why do you think Verizon has a backdoor?

                  1. 2

                    They, along with other ISPs, took tens to hundreds of millions of dollars to backdoor their networks for the NSA. That was in the leaks. You should assume they might backdoor anything else.

                    1. 1

                      Got a link to the specific leaks?

                      1. 1

                        Forbes article.

                    2. 2

                      One man’s backdoor is another man’s mass provisioning service.

                      1. 1

                        Maybe I used an incorrect technical word. I meant to say I think they can remotely access and configure the modem / router.

                        1. 1

                          ISPs backdooring home routers isn’t unknown, where here I use “backdooring” to mean “the ISP can log in and make changes even though most home users don’t know they can do this”. Some use it to push out router firmware updates (for their preferred models).

                      1. 10

                        I’ve been in conversations online in various places about getting Firefox’s revenue off ads. One of my ideas was enterprise features licensed at a nice price. Like with Open Core, making the enterprise features paid has almost no effect on the individuals that make up the majority of their users.

                        “a little something extra for everyone who deploys Firefox in an enterprise environment. …”

                        Then, they start adding that stuff in for free. So much for that idea.

                        1. 9

                          They could start with a Windows Server GPO that was easy to install and configure. There’s no bigger Firefox advocate than me, yet I’m forced to use Chrome on my network because it was so easy to configure high-security policies for it, whereas I gave up trying to do the same for Firefox.

                          1. 4

                            Bookmarking that idea in case I ever get a chance to talk to their management about this stuff. :)

                            1. 9

                              Thanks Nick! I’m no manager but I can take it from here (on Monday, because I’m off for the rest of the week) :-))

                              @jrc: Are you willing to expand on that hardship? AFAIU our project managers have worked with some enterprises to hear about their needs. This is in part because the enterprise mailing list we have doesn’t contain enough vocal enterprises willing to talk about their pain points in the open.

                              Did you try the GPO features we just released with Firefox 60? What were you trying to do that didn’t work? Is there anything else you were missing?

                              For everyone else reading this, please answer those questions as well and I’m happy to forward the whole thread.

                              1. 2

                                I’m not jrc, and this isn’t specifically related, but my biggest problem with Firefox largely boils down to the fact that it’s not portable. It’s one of the few programs that isn’t already working when I get a new computer and plug in my drive. I just did it again today, and while I use Sync, losing my open tabs (in the session I’m using), cookies, extension data, and everything else that goes along with my previous session isn’t great.

                                1. 4

                                  Sorry to pile onto that, but on a slightly related note: It’s embarrassing that Firefox is still dumping folders into $HOME instead of following the applicable standard.

                                  1. 1

                                    Update! Please read through the policy templates repo and file issues there.

                                    1. 1

                                      No fix for this and I don’t think that’s the appropriate place for it. :-/

                                2. 1

                                  Hi! Sorry I didn’t see your reply or I would have commented back sooner. To answer your question, it’s been a couple years since I tried it. However, I’m about to upgrade to Windows Server 2016, so I will give it another go with Firefox and document the experience.

                                  I can say off the top of my head, on my particular network, I’m looking to:

                                  • Browse websites and do nothing else.
                                  • Easily lock out the ability to print or to change any configuration settings at all, including visibility of toolbars, Firefox Sync, managing search engines, anything like that.

                                  I’d also like to be able to easily (1) install and (2) configure settings for add-ons, manage mass deployment of updates to those add-ons, etc.
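
                                  If I understand the new policy engine correctly, something like this sketch is what I’d hope to express in a policies.json (policy names are my guesses from skimming Mozilla’s policy-templates repo, unverified; the extension URL and ID are placeholders):

                                      {
                                        "policies": {
                                          "BlockAboutConfig": true,
                                          "DisableAppUpdate": true,
                                          "DisableDeveloperTools": true,
                                          "DisableFirefoxAccounts": true,
                                          "DisablePrivateBrowsing": true,
                                          "DontCheckDefaultBrowser": true,
                                          "Extensions": {
                                            "Install": ["https://example.com/addons/approved.xpi"],
                                            "Locked": ["approved@example.com"]
                                          }
                                        }
                                      }

                                  From what I’ve read, that file goes into a distribution/ folder in the install directory, and the same settings should be reachable through the ADMX templates for GPO; whether printing or toolbar visibility can be locked down, I don’t know yet.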

                                  1. 1

                                    Thanks for the feedback. Great to hear you’ll give it a try. I suppose that not exactly 100% of your requirements will be satisfied, but I’d love to see a blog post about your endeavors (unless it’s shattering criticism ;))

                                  2. 1

                                    Update! Please read through the policy templates repo and file issues there.

                            1. 2

                              I’ve used nethogs before. This looks much nicer.

                              1. 12

                                When people tell me to stop using the only cryptosystem in existence that has ever - per the Snowden leaks - successfully resisted the attentions of the NSA, I get suspicious, even hostile. It’s triply frustrating when, at the end of the linked rant, they actually recognize that PGP isn’t the problem:

                                It also bears noting that many of the issues above could, in principle at least, be addressed within the confines of the OpenPGP format. Indeed, if you view ‘PGP’ to mean nothing more than the OpenPGP transport, a lot of the above seems easy to fix — with the exception of forward secrecy, which really does seem hard to add without some serious hacks. But in practice, this is rarely all that people mean when they implement ‘PGP’.

                                There is a lot wrong with the GPG implementation and a lot more wrong with how mail clients integrate it. Why would someone who recognises that PGP is a matter of identity for many of its users go out of their way to express their very genuine criticisms as an attack on PGP? If half the effort that went into pushing Signal was put into a good implementation of OpenPGP following cryptographic best practices (which GPG is painfully unwilling to be), we’d have something that would make everyone better off. Instead these people make it weirdly specific about Signal, forcing me to choose between “PGP” and a partially-closed-source centralised system, a choice that’s only ever going to go one way.

                                1. 9

                                  I am deeply concerned about the push towards Signal. I am not a cryptographer, so all I can do is trust other people that the crypto is sound, but as we all know, the problems with crypto systems are rarely in the crypto layers.

                                  On one hand we know that PGP works; on the other hand, we have had two game-over vulnerabilities in Signal THIS WEEK. And the last Signal problem was very similar to the one in “not-really-PGP”, in that the Signal app passed untrusted HTML to the browser engine.

                                  If I were a government trying to subvert secure communications, investing in Signal and tarnishing PGP is what I would try to do. What better strategy than to push everyone towards closed systems where you can’t even see the binaries and that are not under the user’s control: the exact same devices with GPS, under constant surveillance.

                                  My mobile phone might have much better security mechanisms in theory, but I will never know for sure, because neither I nor anyone else can really check. In the meantime, we know for sure what a privacy disaster these mobile phones are. We also know for sure, from the various leaks, that governments implant malware on mobile devices, and we know that both manufacturers and carriers can install software, or updates, on devices without user consent.

                                  Whatever the PGP replacement might be, moving to closed systems that are completely unauditable and not under the user’s control is not the solution. I am not surprised that some people advocate for this option. What I find totally insane is that a good majority of the tech world finds this position sensible. Just find any Hacker News thread and you will see that any criticism of Signal is downvoted to oblivion, while the voices of “experts” preach PGP hysteria.

                                  PGP will never be used by ordinary people. It’s too clunky for that. But it’s used by some people very successfully, and if you try to push this small but very important group of people towards your “solution”, I can only suspect foul play. Signal does not compete with PGP. It’s a phone chat app. As Signal does not compete with PGP, why do you have to spend this insane amount of effort to convince an insignificant number of people to drop PGP for Signal?

                                  1. 4

                                    I can’t for the life of me imagine why a CIA-covert-psyops-agency funded walled garden service would want to push people away from open standards to their walled garden service.

                                    Don’t get me wrong, Signal does a lot of the right things, but a lot of claims are made about it implying it’s as open as PGP, which it isn’t.

                                    1. 2

                                      What makes Signal a closed system?

                                      https://github.com/signalapp

                                      1. 12

                                        Not Signal: iOS and Android, and all the secret operating systems that run underneath.

                                        As for Signal itself, moxie forced F-Droid to take down Signal because he didn’t want other people to compile it. He said he wanted people only to use his binaries, which, even if you are OK with it in principle, mandates the use of the Google Play Store on Android. If this is not a dick move, I don’t know what is.

                                        1. 3

                                          I’m with you on Android and especially iOS being problematic. That being said, Signal has been available without Google Play Services for a while now. See also the download page; I couldn’t find it linked anywhere on the site but it is there.

                                          However, we investigated this for PRISM Break, and it turns out that there’s a single Google binary embedded in the APK I just linked to. Which is unfortunate. See this GitHub comment.

                                          1. 2

                                            because he didn’t want other people to compile Signal. He said he wanted people only to use his binaries

                                            Ehm… he chose the wrong license in this case.

                                      2. 4

                                        As I understand it, the case against PGP is not with PGP in and of itself (the cryptography is good), but with the ecosystem, that is, the toolchain in which one uses it. Because it is advocated for use in email, and securing email, it is argued, is nigh on impossible, it is irresponsible to recommend PGP-encrypted email for general consumption, especially for journalists.

                                        That is, while it is possible to use PGP via email effectively, it is incredibly difficult and error-prone. These are not qualities one wants in a secure system and thus, it should be avoided.

                                        1. 4

                                          But the cryptography isn’t good. His case in the blog post intentionally sets aside all of the crypto badness. Example: the standard doesn’t allow any hash function other than SHA-1, which has been proven broken. The protocol itself disallows flexibility here to avoid ambiguity, which means there is no way to change it significantly without breaking compatibility.

                                          And so far, it seems, people wanted compatibility (or switched to something else, like Signal)

                                        2. 4

                                          Until this better implementation appears, an abstract recommendation for PGP is a concrete recommendation for GPG.

                                          Imagine if half the effort spent saying PGP is just fine went into making PGP just fine.

                                          1. 2

                                            I guess that’s an invitation to push https://autocrypt.org/

                                          2. 3

                                            When people tell me to stop using the only cryptosystem in existence that has ever - per the Snowden leaks - successfully resisted the attentions of the NSA, I get suspicious, even hostile.

                                            Without wanting to sound rude, this is discussed in the article:

                                            The fact of the matter is that OpenPGP is not really a cryptography project. That is, it’s not held together by cryptography. It’s held together by backwards-compatibility and (increasingly) a kind of an obsession with the idea of PGP as an end in and of itself, rather than as a means to actually make end-users more secure.

                                            OpenPGP might have resisted the NSA, but that’s not a unique property. Every modern encryption tool or standard has to do that or it is considered broken.

                                            I think most people, unless they are heavily involved in security research, don’t know how encryption/auth/integrity protection are layered. There are a lot of layers in what people just want to call “encryption”. OpenPGP uses the same standard crypto building blocks as everything else, and unfortunately putting those lower-level primitives together is fiendishly difficult. Life also went on since OpenPGP was created, meaning that those building blocks, and how to put them together, have changed in the last few decades; cryptographers learned a lot.

                                            One of the most important things that cryptographers learned is that the entire ecosystem, the system as a whole, counts. Even Snowden was talking about this when he said that the NSA just attacks the endpoints, where most of the attack surface is. So while the cryptography bits in the core of the OpenPGP standard are safe, if dated, that’s not the point. Reasonable people can’t really use PGP safely, because we would have to have a library that implements the dated OpenPGP standard in a modern way, clients that interface with that modern library in a safe and thought-through way, and users that know enough about the system to satisfy its safety requirements (which are large for OpenPGP).

                                            Part of that is attitude; most of the existing projects implementing the standard just don’t seem to take a security-first stance. Who is really looking towards providing a secure overall experience to users under OpenPGP? Certainly not the projects bickering over where to attribute blame.

                                            I think people kept contrasting this with Signal because Signal gets a lot of things right in contrast. The protocol is modern and it’s not impossibly demanding on users (ratcheting key rotation, anyone?); there is no security blame game between Signal the desktop app vs. Signal the mobile app vs. the protocol when a security vulnerability happens, OWS just fixes it with little drama. Of course Signal-the-app has downsides too, like the centralization, however that seems like a reasonable choice. I’d rather have a clean protocol operating through a central server that most people can use than a standard/protocol that is unusable from the POV of most users. We’re not yet at the point where we can have all of decentralization, security, and ease of use.

                                            1. 2

                                              OpenPGP might have resisted the NSA, but that’s not a unique property. Every modern encryption tool or standard has to do that or it is considered broken.

                                              One assumes the NSA has backdoors in iOS, Google Play Services, and the binary builds of Signal (and any other major closed-source crypto tool, at least those distributed from the US) - there’s no countermeasure and virtually no downside, so why wouldn’t they?

                                              there is no security blame game between Signal the desktop app vs signal the mobile app vs the protocol when a security vulnerability happens, OWS just fixes it with little drama.

                                              Not really the response I’ve seen to their recent desktop-only vulnerability, though I do agree with you in principle.

                                              1. 3

                                                Signal Android has been reproducible for over two years now. What I don’t know is whether anyone has independently verified that it can be reproduced. I also don’t know whether the “remaining work” in that post was ever addressed.

                                                1. 2

                                                  The process of verifying a build can be done through a Docker image containing an Android build environment that we’ve published.

                                                  Doesn’t such a process assume trust in whoever created the image (and in whoever created each of the layers it was based on)?

                                                  A genuine question, as I see the convenience of Docker and how it could lead to more verifications; but on the other hand, it creates a single point of failure that is easier to attack.

                                                  1. 1

                                                    That question of trust is the reason why, if you’re forced to use Docker, you should build every layer yourself from the most trustworthy sources. It isn’t even hard.

                                            2. 1

                                              the only cryptosystem in existence that has ever - per the Snowden leaks - successfully resisted the attentions of the NSA

                                              I’m pretty ignorant on this matter, but do you have any link to share?

                                              There is a lot wrong with the GPG implementation

                                              Actually, I’d like to read the opinion of GPG developers here, too.

                                              Everyone makes mistakes, but I’m pretty curious about the technical allegations: it seems like they did not consider the issue something to be fixed in their own code.

                                              There might be pretty good security reasons for this.

                                              1. 3

                                                To start with, you can’t trust the closed-source providers, since the NSA and GCHQ are throwing $200+ million at both finding 0-days and paying vendors to put backdoors in. Covered here. From there, you have to assess open-source solutions. There’s a lot of ways to do that. However, the NSA sort of did it for us in slides where GPG and Truecrypt were the worst things for them to run into. Snowden said GPG works, too. He’d know, given he had access to everything they had that worked and didn’t. He used GPG and Truecrypt. The NSA had to either ignore those people or forward them to TAO for a targeted attack on browser, OS, hardware, etc. The targeted-attack group only has so much personnel and time. So, this is a huge increase in security.

                                                I always say that what stops the NSA should be good enough to stop the majority of black hats. So, keep using and improving what is a known-good approach. I further limit risk by just GPG-encrypting text or zip files that I send/receive over untrusted transports using strong algorithms. I exchange the keys manually. That means I’m down to trusting the implementation of just a few commands. Securing GPG in my use case would mean stripping out anything I don’t need (most of GPG), followed by hardening the remaining code manually or through automated means. It’s a much smaller problem than clean-slate, GUI-using, encrypted sharing of various media. Zip can encode anything. Give the files boring names, too. The untrusted email provider is Swiss, in case that buys anything against any type of attacker.
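
                                                Concretely, the whole workflow is a couple of commands (key ID and filenames are placeholders):

                                                    # zip whatever you're sharing; zip can encode anything
                                                    zip -r boring-name.zip notes/
                                                    # encrypt to the recipient's manually-exchanged public key
                                                    gpg --encrypt --recipient 0xDEADBEEF boring-name.zip
                                                    # send boring-name.zip.gpg over the untrusted transport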

                                                As far as the leaks go, I had a really hard time getting you the NSA slides. Searching with specific terms in either DuckDuckGo or Google used to take me right to them. It doesn’t anymore. I’ve had to fight with them, narrowing terms down with quotes, trying to find any Snowden slides, much less the good ones. I’m getting Naked Security, FramaSoft, pharma spam, etc. even on pages 2 and 3, but not Snowden slides past a few, recurring ones. Even mandating the Guardian in terms often didn’t produce more than one Guardian link. Really weird that both engines’ algorithms are suppressing all the important stuff despite really-focused searches. Although I’m not putting on the conspiracy hat yet, the relative inaccuracy of Google’s results compared to about any other search I’ve done over the past year, for both historical and current material, is a bit worrying. Usually it has excellent accuracy.

                                                NSA Facts is still up if you want the big picture about their spying activities. OK, after spending an hour, I’m going to have to settle for giving you this presentation calling TAILS or Truecrypt a catastrophic loss of intelligence. TAILS was probably temporary, but the TrueCrypt derivatives are worth investing effort in. Anyone else have a link to the GPG slide(s)? @4ad? I’m going to try to dig it all up out of old browser or Schneier conversations in the near future. We need at least those slides so people know what was NSA-proof at the time.

                                                1. 2

                                                  Why would TAILS be temporary? If anything this era of cheap devices makes it more practical than ever.

                                                  1. 3

                                                    It was secure at the time, since neither mass collection nor TAO teams could hack it. Hacking it requires one or more vulnerabilities in the software it runs. The TAILS software includes complex components such as Linux and a browser with a history of vulnerabilities. We should assume that security was temporary and/or would disappear if usage went up enough to budget more attacks its way.

                                                    1. 2

                                                      I’d still trust it more than TrueCrypt just due to being open-source.

                                                      What would it take to make an adequate replacement for TAILS? I’m guessing some kind of unikernel? Are there any efforts in that direction?

                                                      1. 1

                                                        Well, you have to look at the various methods of attack to assess this:

                                                        1. Mass surveillance attempting to read traffic through protocol weaknesses with or without a MITM. They keep finding these in Tor.

                                                        2. Attacks on the implementation of Tor, the browser, or other apps. These are plentiful, since it’s mostly written in a non-memory-safe way. Also, having no covert-channel analysis on components processing secrets means there are probably plenty of side channels. There are also new attacks on hardware appearing, with a network-oriented one even having been published.

                                                        3. Attacks on the repo or otherwise MITMing the binaries. I don’t think most people are checking for that. The few that do would make attackers cautious about being discovered. A deniable way to see who is who might be a bitflip or two that would cause the security check to fail. Put it in random, non-critical spots to make it look like an accident during transport. Whoever re-downloads doesn’t get hit with the actual attack.

                                                        So, the OS and apps have to be secure, with some containment mechanisms for any failures. The protocol has to work. All of this must be checked against any subversions in the repo or during transport. All this together in a LiveCD. I think it’s doable, minus the anonymity protocol working, which I don’t trust. So, I’ve usually recommended dedicated computers bought with cash (especially netbooks), WiFi hotspots, cantennas, getting used to human patterns in those areas, and spots with minimal camera coverage. You can add Tor on top of it, but the NSA focuses on that traffic. They probably don’t pay attention to the average person on WiFi using generic sites over HTTPS.

                                                        1. 1

                                                          Sure. My question was more: does a live CD project with that kind of aim exist? @josuah mentioned heads, which at least avoids the regression of bringing in systemd, but doesn’t really improve over classic TAILS in terms of not relying on Linux or a browser.

                                                          1. 2

                                                            An old one named Anonym.OS was an OpenBSD-based Live CD. That would’ve been better on the code-injection front at least. I don’t know of any current offerings. I just assume they’ll be compromised.

                                                        2. 1

                                                          I think that is the reason why https://heads.dyne.org/ has been made: replacing the complex software stack with a simpler one, with the aim of avoiding security risks.

                                                          1. 1

                                                            Hmm. That’s a small start, but it’s still running Linux (and with a non-mainstream patchset, even), so I don’t think it answers the core criticism.

                                                    2. 2

                                                      Thanks for this great answer.

                                                      Really weird that both engines’ algorithms are suppressing all the important stuff despite really-focused searches.

                                                      If you can share a few of your search terms, I guess that a few friends would find them pretty interesting for their research.

                                                      For sure this teaches us a valuable lesson. The web is not a reliable medium for free speech.

                                                      From now on, I will download interesting documents about such topics from the internet and donate them (with other, more neutral DVDs) to small public libraries around Europe.

                                                      I guess that slowly, people will go back to librarians if search engines don’t search carefully enough anymore.

                                                      1. 2

                                                        It was variations, with and without quotes, on terms I saw in the early reports. They included GPG, PGP, Truecrypt, Guard, Documents, Leaked, Snowden, and catastrophic. I at least found that one report that mentions it in combination with other things. I also found, but didn’t post, a PGP intercept that was highly classified but said they couldn’t decrypt it. Finally, Snowden kept maintaining that good encryption worked, with GPG being one tool he used personally.

                                                        So, we have what we need to know. From there, we just need to make the programs we know work more usable and memory-safe.

                                                1. 6

                                                  As others have alluded to, this is the classic plight of early “Web 2.0” successes: they thought they could keep their service “free” by using advertiser support. Only once nobody cared and everybody was enjoying their free lunch did Twitter, among many others, start to clamp down.

                                                  What I would LOVE to see is widespread acceptance of the idea that advertiser funding is a fatally flawed model. One way for Twitter to go with this is to offer a “pro” option which would be ad free and paid, and also allow full and open access to all of its APIs, including the ones they’ve nuked in recent years.

                                                  One of the things that drew me to Twitter was its diverse ecosystem of users and clients, because developers had free rein to innovate using their platform. Clearly the future for this kind of innovation lies with tools like Mastodon and Pleroma, but as I say above, it’s not too late for companies like Twitter to make bold moves and fix the broken model before it destroys them.

                                                  1. 2

                                                    I find the “pro” strategy appealing, but I can’t think of a big site that’s succeeded with it. I’ve seen a lot of sites try and it doesn’t really seem to last. I don’t have numbers available, but I suspect that advertising revenue substantially outweighs subscription revenue most of the time.

                                                    1. 4

                                                      Can’t remember where I heard it, but on some sites the value of a user (to advertisers) who would use a pro option exceeds what said user is willing to pay.

                                                      Not sure if true or not, but it has stuck in my mind.

                                                      1. 2

                                                        Yes, that’s what I was suggesting.

                                                        1. 1

                                                          Oh! This is super interesting for a completely different discussion I’ve been having recently. Can you do me a favor and try to find out where you got that?

                                                        2. 2

                                                          I did some googling wondering if I could find some real data on this, and failed. Flickr comes to mind, which was in fact quite successful and is still much loved, despite having been bought by that roving dumpster fire that is Yahoo, and recently by SmugMug.

                                                          1. 1

                                                            This is not at all an apples-to-apples comparison, but The Guardian (a newspaper/media co) now makes more from subscribers than from advertising. It’s a far cry from saying “this model works!” (the same article notes they still posted a loss) but I think it’s promising.

                                                          2. 2

                                                            What I would LOVE to see is widespread acceptance of the idea that advertiser funding is a fatally flawed model. One way for Twitter to go with this is to offer a “pro” option which would be ad free and paid, and also allow full and open access to all of its APIs, including the ones they’ve nuked in recent years.

                                                            This may be an unpopular opinion, but I don’t think social networks offer enough value for enough people to pay in the “pro” model. It might work on a small scale, but I don’t think it can work for a network as large as Twitter.

                                                            1. 2

                                                              You may be right. That would have me leaning towards the idea that behemoths like Twitter will need to go full-on closed-system: draconian advertising for everyone and no third-party anything, which will drive away the minority who really care (who should likely be seeking safe harbor in open networks like Mastodon at this point anyway).

                                                              I personally feel that if someone could make a Mastodon or Mastodon-like server simple enough to deploy that grandma could do it, Mastodon would really take off in a big way.

                                                          1. 4

                                                            I’m a newbie Rustacean and I need to second what’s said in this blog post: people are truly excellent to each other. Great community and interesting tech!

                                                            1. 1

                                                              “but WebAssembly is designed to run safely on remote computers, so it can be securely sandboxed without losing performance.”

                                                              Assuming the hardware works correctly. This assumption is failing more often now. You pretty much have to know what the code is going to do up front to securely run it.

                                                              1. 5

                                                                WebAssembly is the new SWF (Adobe Flash)?!? A binary format that we expect to run and do “totally not malicious” things on our computers, delivered via our web browser.

                                                                I think it’s certainly the case that WASM has its use cases, and this may be one of them. But I’m so much more skeptical of it all than I am excited.

                                                                1. 4

                                                                  We’re already running untrusted JavaScript. A binary representation of JavaScript would be exactly equally safe. WASM is a binary representation of a language that’s simpler than JS and has the same or fewer capabilities. Whether JS is safe or not can be debated, but WASM isn’t a step down; it’s at worst a lateral move.

                                                                  You could argue that it’s harder to inspect WASM (you need to translate it into its textual representation, and then read a language much lower level than JS), but really, reading through minified and obfuscated JS, or JS compiled from C, isn’t exactly easy either.

                                                                  1. 1

                                                                    I haven’t looked at it deeply yet. I have no opinion. Highly skeptical by default, due to the Worse is Better effect in web tech.

                                                                    1. 1

                                                                      The APIs that come to talk to the environment outside of the wasm runtime are going to be the deciding factor here. Safely crunching numbers doesn’t get you very far.

                                                                      1. 1

                                                                        Periodic reminder: The Flash plugin was also supposed to be a set of safe APIs.

                                                                        1. 1

                                                                          Sure. I’m not arguing for or against WASM. I’m just saying that current wasm doesn’t have any of those APIs at all.
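
                                                                          Concretely, in JS terms (a minimal sketch; wasmBytes and the main export are stand-ins for whatever module you load):

                                                                              // The import object is the module's entire view of the outside world.
                                                                              // Pass in nothing, and it can do nothing but compute on its own memory.
                                                                              async function run(wasmBytes) {
                                                                                const { instance } = await WebAssembly.instantiate(wasmBytes, {
                                                                                  env: { log: (x) => console.log("wasm says:", x) }, // its one capability
                                                                                });
                                                                                instance.exports.main(); // assumes the module exports a `main`
                                                                              }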

                                                                          1. 1

                                                                            That’s fair.

                                                                  1. 10

                                                                    Good write-up of clickjacking, IMHO the best one. The reveal button is neat, too.

                                                                    Shameless plug: this can be prevented if you put Google things into dedicated Firefox Container Tabs (e.g., to isolate work things); First-Party Isolation also prevents these attacks :P

                                                                    1. 3

                                                                      If there’s one thing you should take out of that blog post, then it’s the link to this conversation on Stack Overflow about using regular expressions for higher-level (i.e., non-regular) grammars.

                                                                      1. 5

                                                                        For websites: Firefox Sync :-) Everything that isn’t a website or is important enough to have more than 3 copies (laptop, workstation, phone) lives in a KeePass file, hosted on a Nextcloud instance.

                                                                        1. 2

                                                                          Do note that Firefox Sync has a pretty nasty security flaw: your passwords are ultimately protected by your Firefox Account password — so you need to make sure that it’s a high entropy one (like 52ICsHuwrslpDl6fbjdvtv, not like correct horse battery staple). You also need to make sure that you never log into your Firefox Account online: Mozilla serve the login UI with JavaScript, which means that they can serve you malicious JavaScript which steals your password (this is worse than a malicious browser, because someone might actually notice a malicious browser executable, but the odds of detecting a single malicious serve of a JavaScript resource are pretty much nil).

                                                                          I use pass, with git-remote-gcrypt to encrypt the pass repo itself (unfortunately, pass has a security flaw in that it doesn’t encrypt filenames).

                                                                          1. 2

                                                                            I’m pretty sure the password isn’t used directly, but is run through PBKDF2 on the client to derive a crypto key.
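
                                                                            Something along these lines with WebCrypto (a sketch of the general technique, not necessarily Sync’s exact scheme; salt handling, iteration count, and output length here are illustrative):

                                                                                // Stretch a password into key material client-side, so the server
                                                                                // only ever sees a derived value rather than the password itself.
                                                                                async function stretchPassword(password, salt, iterations = 100000) {
                                                                                  const enc = new TextEncoder();
                                                                                  const baseKey = await crypto.subtle.importKey(
                                                                                    "raw", enc.encode(password), { name: "PBKDF2" }, false, ["deriveBits"]);
                                                                                  const bits = await crypto.subtle.deriveBits(
                                                                                    { name: "PBKDF2", hash: "SHA-256", salt: enc.encode(salt), iterations },
                                                                                    baseKey, 256);
                                                                                  return new Uint8Array(bits); // 256 bits of derived key material
                                                                                }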

                                                                            1. 3

                                                                              This does not protect you from physical access (if you ever leave your computer unlocked). It took me 10 seconds to discover that Firefox lets anyone see the plain password of every account.

                                                                              https://i.imgur.com/lbxmMow.png

                                                                              1. 3

                                                                                If you use a master password, you have to enter that to see the plain password in that dialog.

                                                                                1. 1

                                                                                  That makes more sense.

                                                                                2. 2

                                                                                  True! IMHO physical access should be countered with something else: lock screens, hard-disk encryption, etc.

                                                                                  1. 1

                                                                                    Yes, of course, if there is physical access there is not much hope left: even with SSH, if ssh-agent is running, or a terminal with a recent sudo, much damage can be done.

                                                                                    What did surprise me is how fast and easy it is to go straight to the password.

                                                                                3. 1

                                                                                  Yes, but that doesn’t add any entropy: if your password is “love123”, it’s still low-entropy, even if it’s stretched.

                                                                                  Remember, too, that the client-side stretching is performed by JavaScript Mozilla delivers when you attempt to log in — as I noted, they could deliver malicious JavaScript at a whim (or under duress …).
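
                                                                                  To put rough numbers on the entropy point (back-of-the-envelope, assuming ~100,000 PBKDF2 iterations, i.e. about 2^17 of extra work per guess): a weak password with ~2^20 plausible candidates costs an attacker ~2^37 hash operations in total, which is minutes to hours on commodity hardware. The same 17 bits of stretching on a ~130-bit random password change nothing that matters: it was already unguessable.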

                                                                            1. 2

                                                                              The protocol is fun (I must admit I only know the basics). I once wrote a Python script that seeds to a single predefined IP/port combination. The scenario: a friend is downloading something which you already have, and you want to support him, but not the whole BitTorrent network.

                                                                              1. 2

                                                                                I find it a little ironic that, after using the open-web browser, I am not able to inspect the sessionstore-backups/recovery.jsonlz4 file after a crash to recover some textfield data, because Mozilla Firefox uses a non-standard compression format which cannot be examined with lzcat or even with lz4cat from ports.

                                                                                The bug report about this lack of open formats was filed 3 years ago, and suggests lz4 was actually standardised long ago, yet this is still unfixed in Firefox.

                                                                                Sad state of affairs, TBH. The whole choice of a non-standard format for users’ data is troubling; the lack of progress on this bug, after several years, no less, is even more so.

                                                                                1. 15

                                                                                  https://bugzilla.mozilla.org/show_bug.cgi?id=1209390#c10 states that when Mozilla adopted LZ4 compression there wasn’t a standard to begin with. Yeah, no one has since migrated the format to the standard variant, which sucks, but it isn’t like they went out of their way to hide things from the user.

                                                                                  It was probably unwise for Mozilla to shift to using that compression algorithm when it wasn’t fully baked, though I trust that the benefits outweighed the risks back then.

                                                                                  1. 14

                                                                                    This will sound disappointing to you, but your case is as edge-caseish as it gets.

                                                                                    It’s hard to prioritize those things over things that affect more users. Note that other browser makers have security teams larger than all of Mozilla’s staff. Mozilla has to make those hard decisions.

                                                                                    These jsonlz4 data structures are meant to be internal (but you’re still welcome to use the open-source implementation within Firefox to mess with them).

                                                                                    1. 2

                                                                                      I got downvoted twice for “incorrect” though I tried my best to be neutral and objective. Please let me know what I should change to make these statements more correct, and why. I’m happy to have this conversation.

                                                                                      1. 0

                                                                                        Priorities can be criticized.

                                                                                        Mozilla obviously has more than enough money that they could pay devs to fix this — just sell Mozilla’s investment in the CliqZ GmbH and there would be enough to do so.

                                                                                        But no, Mozilla sets its priorities as limiting what users can do, adding more analytics and tracking, and more cross promotions.

                                                                                        Third-party cookie isolation still isn’t fully done, while at the same time money is spent on adding more analytics to AMO, on CliqZ, on the Mr Robot addon, and even on Pocket, which still isn’t open source.

                                                                                        Mozilla has betrayed every single value of its manifesto, and has set priorities opposite of what it once stood for.

                                                                                        That can be criticized.

                                                                                        1. 11

                                                                                          Wow, that escalated quickly :) It sounds to me like you’re already arguing in bad faith, but I think I’ll be able to respond to each of your points individually in a meaningful and polite way. Maybe we can uplift this conversation a tiny bit? However, I’ll do this with my Mozilla hat off, as this is purely based on public information and I don’t work on Cliqz or Pocket or any of those things you mention. Here we go:

                                                                                          • Cliqz: Mozilla wants a web with more than just a few centralized search engines. For those silos to end, decentralization and experimentation are required. Cliqz attempts to do that
                                                                                          • Telemetry respects your privacy
                                                                                          • You can isolate cookies easily, either based on custom labels (“Multi-Account Containers”) or based on the first-party domain (i.e., the website in the URL bar). The former is in the settings; the latter is behind a pref (privacy.firstparty.isolate). For your convenience, there’s also an add-on for first party isolation
                                                                                          • Cross Promotions: The web economy is based on horrible ads that are annoying and track users. To show that ads can be profitable without tracking or annoying users, Mozilla shows sponsored content (opt-out, btw) by computing the recommendations locally on your own device
                                                                                          • Some of the Pocket source code is already open source. It’s not a lot, that’s true. But we consider that a bug.
                                                                                          1. 2

                                                                                            As someone who has also gotten into 1-3 arguments against Firefox, I guess you’ll always have to deal with criticism that is nitpicking, because you’ve written “OSS, privacy-respecting, open web” on your chest. Still, it is obvious you won’t implement an lz4 file upgrade mechanism (oh boy is that funny when it’s only some tiny app and its SQLite tables), because there are much more important things than two users not being able to use their default tools to inspect the internals of Firefox.

                                                                                            1. 2

                                                                                              Sure, but it’s obvious that somehow Mozilla has enough money to buy shares in a subsidiary of one of the largest advertising and tracking companies (Burda, the company best known for shitty ads and its tabloids, owns CliqZ and retains majority control).

                                                                                              And yet, there’s not enough left to actually fix the rest.

                                                                                              And no, I’m not talking about Telemetry — I’m talking about the fact that about:addons and addons.mozilla.org use proprietary analytics from Google, and send all page interactions to Google. If I wanted Google to know what I do, I’d use Chrome.

                                                                                              Yet somehow Mozilla also had enough money to convert all its tracking from the old, self-hosted Piwik instance to this.

                                                                                              None of your arguments fix the problem that Mozilla somehow sees it as higher priority to track its users and invest in tracking companies than to fix its bugs or promote open standards. None of your arguments even address that.

                                                                                              1. 3

                                                                                                The about:addons code that used Google Analytics has been fixed and now uses the telemetry APIs, adhering to the global control toggle. I will update with the link when I’m not on a phone.

                                                                                                Either way, Google Analytics runs under a Mozilla-customized privacy policy that prevents Google from using the data.

                                                                                                If your tinfoil hat is still unimpressed, you’ll have to block those addresses via /etc/hosts (no offense.. I do too).

                                                                                            2. 3

                                                                                              I won’t comment on the rest of your comment, but this is really a pretty tiny issue. If you really want to read your sessionstore as a JSON file, it’s as easy as git clone https://github.com/Thrilleratplay/node-jsonlz4-decompress && cd node-jsonlz4-decompress && npm install && node index.js /path/to/your/sessionstore.jsonlz4. (that package isn’t in the NPM repos for some reason, even though the readme claims it is, but looking at the source code it seems pretty legit)

                                                                                              Sure, this isn’t perfect, but dude, it’s just an internal data structure which uses a slightly non-standard format, but which still has open-source tools to easily read it - and looking at the source code, the format is only slightly different from regular lz4.

                                                                                        1. 3

                                                                                          For you

                                                                                          What did he mean by this?

                                                                                          1. 8

                                                                                            It’s purely cosmetic; the code is left unchanged.

                                                                                            1. 3

                                                                                              It’s not very readable for someone who isn’t used to those symbols.

                                                                                              1. 2

                                                                                                I meant that this probably makes code harder to read for anyone standing behind you.

                                                                                              1. 4

                                                                                                TL;DR: most of the keyboard shortcuts here work in basically every piece of software that reads text input.

                                                                                                I use CTRL+L (clear the screen) and CTRL+U (cut everything before the cursor) daily in my shell.

                                                                                                1. 2

                                                                                                  I use Ctrl-W (delete previous word), Ctrl-A (go to beginning of line), and Ctrl-R (search previous lines) a lot.

                                                                                                  1. 2

                                                                                                    I used exactly these for 15 years, until I discovered that I can switch on vi mode. Now the first line that I type when logged in on a (foreign) Linux box is “set -o vi”. I wish all terminal REPL applications used readline so that I could use vi-mode line editing everywhere. But that’s not the case everywhere.
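
                                                                                                    For the readline-based ones, at least, you can set it once in ~/.inputrc instead of per shell; as you say, it only helps programs actually linked against readline:

                                                                                                        # ~/.inputrc - ask every readline-based program for vi keybindings
                                                                                                        set editing-mode vi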

                                                                                                1. 8

                                                                                                  If he says he disabled telemetry, maybe this is a feature that isn’t telemetry? Maybe search is “designed” to search on the web and you have to find a different toggle.

                                                                                                  I’m not defending their practice here. Just pointing out that these are common patterns:

                                                                                                  • people look for global switches
                                                                                                  • software changes and new features have their own switches