Odd, because I didn’t read the XKCD comic as making fun of security people for saying ‘voting machines won’t work, stay away’ at all. I read it as saying voting machines won’t work and that we should stay away from them. And to that I have to say: I totally agree. Voting works fine as it is: done by humans, counted by humans, entirely on paper with not a computer or network in sight.
Elections are really hard regardless of whether they’re run by computers, but we haven’t come close to figuring out the computer side of it. What’s worse is that adding computers into the mix was an excuse to go back on well-tested election rules, such as secret voting. No, we can’t have voting over the internet or via mobile phones or anything like that.
We should really go back to limiting computer involvement in elections to UI, with the papertrail as the official record of votes. Involving computers in the actual process adds such a huge leap of complexity that it excludes most people from ever being able to verify results. Everyone can verify paper ballots.
Not really sure why you’d even want computers as UI. The ‘UI’ of a piece of paper you tick a box on really is quite good.
All I can say is that I’m glad that New Zealand has never (at least to my knowledge) involved computers in actual voting. Not even UI. I hope that the complete disaster that was our recent attempt at doing a census online[0] will help dissuade anyone from trying to do elections online as well.
[0]: Somehow they managed to simplify the census, put it online, reduce the number of questions and get fewer responses than before even though it’s still mandatory. What. And in return for significantly reducing the amount of information we get from the census, now they have a mandatory incredibly invasive survey of a randomly selected few percent of the population.
The reason for fewer responses may have little to do with technology and more to do with that notorious citizenship question.
What’s worse is that adding computers into the mix was an excuse to go back on well-tested election rules, such as secret voting. No, we can’t have voting over the internet or via mobile phones or anything like that.
There are designs and protocols for that. We could even have diverse suppliers on the hardware side to mitigate the oligopoly risks. The question is, “Should we?” I think traditional, in-person methods combined with optical scanning are still the best tradeoff. The remote protocols might still be useful to reduce cost or improve accuracy for some mail-in votes, though.
I absolutely agree. Voting should be as simple for voters to understand as possible. Introducing an electronic device makes it auditable only to experts and even they might have a difficult job given the many layers at which things can go wrong (including hardware vulnerabilities).
One of the reasons people advocate electronic voting is its lower cost. Personally, I think this argument is totally wrong. Cost is a factor, but not the most important one: not having elections would be cheaper still.
And let’s face it, how significant is the cost of having elections really? The 2008 general election in NZ cost about $36 million. Sounds like a lot, but that’s $12 million per year: 1/1719th of the Government’s budget. Spending 0.058% of the budget to ensure we have safe and fair elections is pretty insignificant really, it’s about as much as is spent on Parliament and its services and buildings etc, and about half as much as the Police earn the Government in fines from summary infringement notices (speeding tickets etc).
100% agree. I counted votes in the last federal election of Germany and that is some serious work, but totally worth it and very hard to tamper with.
This blogpost is a good example of fragmented, hobbyist security maximalism (sprinkled with some personal grudges based on the tone).
Expecting Signal to protect anyone specifically targeted by a nation-state is a huge misunderstanding of the threat models involved.
Talking about threat models, it’s important to start from them and that explains most of the misconceptions in the post.
Were tradeoffs made? Yes. Have they been carefully considered? Yes. Signal isn’t perfect, but it’s usable, high-level security for a lot of people. I don’t say I fully trust Signal, but I trust everything else less. Turns out things are complicated when it’s about real systems and not fantasy escapism and wishes.
Expecting Signal to protect anyone specifically targeted by a nation-state is a huge misunderstanding of the threat models involved.
In this article, resistance to governments constantly comes up as a theme of his work. He also pushed for his tech to be used to help resist police states, as with the Arab Spring example. Although he mainly raised the baseline, the tool has been pushed for resisting governments, and articles like that could increase the perception that it was secure against governments.
This nation-state angle didn’t come out of thin air from paranoid security people: it’s the kind of thing Moxie talks about. In one talk, he even started with a picture of two activist friends jailed in Iran, in part to show the evils that motivate him. Stuff like that only made the things Drew complains about (centralization, control, and dependence on cooperating with surveillance organizations) stand out even more due to the inconsistency. I’d have thought he’d have made signed packages for things like F-Droid sooner if he’s so worried about that stuff.
A problem with the “nation-state” rhetoric that might be useful to dispel is the idea that it is somehow a god tier where suddenly all other rules become defunct. The Five Eyes are indeed “nation states” and have capabilities that are profound; see the DJB talk speculating about how many RSA-1024 keys they’d likely be able to factor in a year given such-and-such developments, and what you can do with that capability. That’s scary stuff. On the other hand, this is not the “nation state” that is Iceland or Syria. Just looking at the leaks from the “Hacking Team” affair, a lot of “nation states” are forced to rely on some really low-quality stuff.
I think Greg Conti depicts it rather well in his “On Cyber” setup (sorry, don’t have a copy of the section in question): a more reasonable threat model of capable actors you do need to care about is that of organized crime syndicates, which seems more approachable. Nation states are something you are afraid of if you are a political actor or in conflict with your government, where the “we can also waterboard you into compliance” factor enters your threat model; organized crime hits much more broadly. That’s Ivan with his botnet of internet-facing XBMC^H Kodi installations.
I’d say the “hobbyist, fragmented maximalism” line is pretty spot on, with a dash of “confused”. As for the ‘threats’ of the Google Play Store (test it: write some malware and see how long it survives; they are doing things there …), the odds of any other app store (F-Droid, or the ones from Samsung, HTC, Sony et al.) being completely owned by much less capable actors are way, way higher. Signal does a good enough job (perhaps a signal-to-threat ratio?) of making reasonable threat actors much less potent. Perhaps not worthy of “trust”, but worthy of day-to-day business.
Expecting Signal to protect anyone specifically targeted by a nation-state is a huge misunderstanding of the threat models involved.
And yet, Signal is advertising with the face of Snowden and Laura Poitras, and quotes from them recommending it.
What kind of impression of the threat models involved do you think does this create?
Whichever ones are normally in the media for information security saying the least amount of bullshit. We can start with Schneier, given he already does a lot of interviews and writes books laypeople buy.
He encourages use of stuff like that to raise the baseline, not to stop nation states. He also constantly blogged about the attacks and legal methods they used to bypass technical measures. So his reporting was mostly accurate.
We counterpoint him here or there, but his incentives and reputation are tied to delivering accurate info. Moxie’s incentives would, if he’s selfish, lead to lock-in to questionable platforms.
We’ve had IRC since the 1990s; ever wonder why Slack became a thing? Ossification of a decentralized protocol.
I’m sorry, but this is plain incorrect. There have been many expansions of IRC, including the most recent effort, IRCv3: a collection of extensions to IRC that add notifications, etc. Not to mention the killer point: “All of the IRCv3 extensions are backwards-compatible with older IRC clients, and older IRC servers.”
If you actually look at the protocols? Slack is a clear case of Not Invented Here syndrome. Slack’s interface is not only slower, but does some downright crazy things (Such as transliterating a subset of emojis to plain-text – which results in batshit crazy edge-cases).
If you have a free month, try writing a slack client. Enlightenment will follow :P
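A rough sketch of why tagged IRCv3 messages stay backwards-compatible: tags are an optional prefix that servers only send to clients which negotiated the capability, and once stripped, what remains is an ordinary IRC line. This is a simplified parser for illustration, not a complete RFC 1459/IRCv3 implementation:

```python
def parse_irc_line(line):
    """Split an (optionally tagged) IRC line into tags, prefix, command, params."""
    tags = {}
    if line.startswith('@'):
        # IRCv3 message tags: "@key=value;key2=value2 " before the normal line.
        raw_tags, line = line[1:].split(' ', 1)
        for item in raw_tags.split(';'):
            key, _, value = item.partition('=')
            tags[key] = value
    prefix = None
    if line.startswith(':'):
        prefix, line = line[1:].split(' ', 1)
    if ' :' in line:
        head, trailing = line.split(' :', 1)
        params = head.split() + [trailing]
    else:
        params = line.split()
    return tags, prefix, params[0], params[1:]

# An old client never receives the tag prefix, and the rest parses unchanged:
print(parse_irc_line(':nick!u@h PRIVMSG #chan :hello world')[2])  # PRIVMSG
```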
I’m sorry, but this is plain incorrect. There have been many expansions of IRC, including the most recent effort, IRCv3: a collection of extensions to IRC that add notifications, etc. Not to mention the killer point: “All of the IRCv3 extensions are backwards-compatible with older IRC clients, and older IRC servers.”
Per IRCv3 people I’ve talked to, IRCv3 blew up massively on the runway, and will never take off due to infighting.
There are swathes of people still using Windows XP.
The primary complaint of people who use Electron-based programs is that they take up half a gigabyte of RAM to idle, and yet they are in common usage.
The fact that people are using something tells you nothing about how Good that thing is.
At the end of the day, if you slap a pretty interface on something, of course it’s going to sell. Then you add in that sweet, sweet Enterprise Support, and the Hip and Cool factors of using Something New, and most people will be fooled into using it.
At the end of the day, Slack works just well enough Not To Suck, is Hip and Cool, and has persistent history (Something that the IRCv3 group are working on: https://ircv3.net/specs/extensions/batch/chathistory-3.3.html)
At the end of the day, Slack works just well enough Not To Suck, is Hip and Cool, and has persistent history (Something that the IRCv3 group are working on […])
The time for the IRC group to be working on a solution to persistent history was a decade ago. It strikes me as willful ignorance to disregard the success of Slack et al. over open alternatives as mere fashion in the face of many meaningful functionality differences. For business use-cases, Slack is a better product than IRC, full stop. That’s not to say it’s perfect or that I think it’s better than IRC on all axes.
To the extent that Slack did succeed because it was hip and cool, why is that a negative? Why can’t IRC be hip and cool? But imagine being a UX designer and wanting to help make some native open-source IRC client fun and easy to use for a novice. “Sisyphean” is the word that comes to mind.
If we want open solutions to succeed we have to start thinking of them as products for non-savvy end users and start being honest about the cases where closed products have superior usability.
IRC isn’t hip and cool because people can’t make money off of it. Technologies don’t get investment because they are good, they get good because of investment. The reason that Slack is hip/cool and popular and not IRC is because the investment class decided that.
It also shows that our industry is just a pop culture and couldn’t give a shit about good tech.
There were companies making money off chat and IRC. They just didn’t create something like Slack. We can’t just blame the investors when they were backing companies making chat solutions whose management stuck with what didn’t work long-term or for a huge audience.
IRC happened before the privatization of the internet, so the standard didn’t lend itself well to companies making good money off of it. Things like Slack are designed for investor optimization, versus things like IRC, which were designed for use and openness.
My point was there were companies selling chat software, including IRC clients. None pulled off what Slack did. Even those doing IRC commercially, or making money off it, didn’t accomplish what Slack did, for some reason. It would help to understand why that happened. Then the IRC-based alternative can try to address that, from features to business model. I don’t see anything like that when most people who like FOSS talk about Slack alternatives. And they’re not Slack alternatives if they lack what Slack customers demand.
Thanks for clarifying. My point can be restated as: there is no business model for federated and decentralized software (until recently; see cryptocurrencies). Note that most open and decentralized tech of the past was government funded and therefore didn’t face business pressures. This freed designers to optimise for other concerns instead of the business ones Slack optimises for.
To the extent that Slack did succeed because it was hip and cool, why is that a negative? Why can’t IRC be hip and cool?
The argument being made is that the vast majority of Slack’s appeal is the “hip-and-cool” factor, not any meaningful additions to functionality.
Right, as I said I think it’s important for proponents of open tech to look at successful products like Slack and try to understand why they succeeded. If you really think there is no meaningful difference then I think you’re totally disconnected from the needs/context of the average organization or computer user.
That’s all well and good, I just don’t see why we can’t build those systems on top of existing open protocols like IRC. I mean: of course I understand, it’s about the money. My opinion is that it doesn’t make much sense to insist that opaque, closed ecosystems are the way to go. We can have the “hip-and-cool” factor, and all the amenities provided by services like Slack, without abandoning the important precedent we’ve set for ourselves with protocols like IRC and XMPP. I’m just disappointed that everyone’s seeing this as an “either-or” situation.
I definitely don’t see it as an either-or situation, I just think that the open source community typically has the wrong mindset for competing with closed products and that most projects are unapproachable by UX or design-minded people.
Open, standard chat tech has had persistent history and much more for decades in the form of XMPP. Comparing to the older IRC on features isn’t really fair.
The fact that people are using something tells you nothing about how Good that thing is.
I have to disagree here. It shows that it is good enough to solve a problem for them.
I don’t see how Good and “good enough to solve a problem” are related here. The first is a metric of quality, the second is the literal bare minimum of that metric.
Alternative distribution mechanisms are not used by 99%+ of the existing phone userbases, providing an APK is indeed correctly viewed as harm reduction.
I’d dispute that. People who become interested in Signal seem much more prone to be using F-Droid than, say, WhatsApp users. Signal tries to be an app accessible to the common person, but few people really use it or see the need… and often they are free software enthusiasts or people who are fed up with Google and surveillance.
More likely, sure, but that doesn’t mean many of them actually reach that threshold of effort.
I usually link to Betteridge’s Law when I write a post like this, but didn’t this time.
Apparently a significant portion of people found the title to be clickbait-y, but I thought it was a pretty straightforward question. Oh well!
This knee-jerk reaction against “clickbait” kind of annoys me. Imo there is nothing wrong with an article having a title that attempts to engage a reader and pique their interest. I would also much rather a title pose a question and answer it in the article, rather than containing the answer in the title itself. (The latter can lead to people just reading the title and missing any nuance the article conveys).
I agree. Clickbait really implies that the article has no meaningful content. If the article is actually worth reading, it’s not clickbait, it’s catchy.
“WebAssembly is not the return of Java Applets and Flash.”
Edit: I did enjoy the article, however.
Edit2: As a site comment:
I had no idea what the “kudos” widget was, moved my mouse to it, saw some animation happening, and realized I just “upvoted” a random article, with no way to undo it. Wonderful design. >.<
A lot of cryptographers call these ciphers “vanity national ciphers”, basically because we don’t need region- or country-specific crypto, and these are no more secure than what’s already out there in global academia anyway.
It’s taking up space in the OpenSSL codebase etc.
DNSSEC as a standard is just really badly done. The closest comparison is OpenPGP (as it relates to emails), technically both standards want to increase security but due to design issues, both spectacularly failed to improve the status quo.
It’s good that DNSSEC hasn’t been deployed widely enough for it to “fail closed”.
When people tell me to stop using the only cryptosystem in existence that has ever - per the Snowden leaks - successfully resisted the attentions of the NSA, I get suspicious, even hostile. It’s triply frustrating when, at the end of the linked rant, they actually recognize that PGP isn’t the problem:
It also bears noting that many of the issues above could, in principle at least, be addressed within the confines of the OpenPGP format. Indeed, if you view ‘PGP’ to mean nothing more than the OpenPGP transport, a lot of the above seems easy to fix — with the exception of forward secrecy, which really does seem hard to add without some serious hacks. But in practice, this is rarely all that people mean when they implement ‘PGP’.
There is a lot wrong with the GPG implementation and a lot more wrong with how mail clients integrate it. Why would someone who recognises that PGP is a matter of identity for many of its users go out of their way to express their very genuine criticisms as an attack on PGP? If half the effort that went into pushing Signal were put into a good implementation of OpenPGP following cryptographic best practices (which GPG is painfully unwilling to follow), we’d have something that would make everyone better off. Instead these people make it weirdly specific to Signal, forcing me to choose between “PGP” and a partially-closed-source centralised system, a choice that’s only ever going to go one way.
I am deeply concerned about the push towards Signal. I am not a cryptographer, so all I can do is trust other people that the crypto is sound, but as we all know, the problems with crypto systems are rarely in the crypto layers.
On one hand we know that PGP works; on the other hand we have had two game-over vulnerabilities in Signal THIS WEEK. And the last Signal problem was very similar to the one in “not-really-PGP”, in that the Signal app passed untrusted HTML to the browser engine.
If I were a government trying to subvert secure communications, investing in Signal and tarnishing PGP is what I would try to do. What better strategy than to push everyone towards closed systems where you can’t even see the binaries, and that are not under the user’s control. The exact same devices with GPS, under constant surveillance.
My mobile phone might have much better security mechanisms in theory, but I will never know for sure, because neither I nor anyone else can really check. In the meantime we know for sure what a privacy disaster these mobile phones are. We also know for sure, from the various leaks, that governments implant malware on mobile devices, and we know that both manufacturers and carriers can install software, or updates, on devices without user consent.
Whatever the PGP replacement might be, moving to the closed systems that are completely unauditable and not under the user’s control is not the solution. I am not surprised that some people advocate for this option. What I find totally insane is that a good majority of the tech world finds this position sensible. Just find any Hacker News thread and you will see that any criticism towards Signal is downvoted to oblivion, while the voices of “experts” preach PGP hysteria.
PGP will never be used by ordinary people. It’s too clunky for that. But it’s used by some people very successfully, and if you try to dissuade this small but very important group of people from moving towards your “solution”, I can only suspect foul play. Signal does not compete with PGP. It’s a phone chat app. As Signal does not compete with PGP, why do you have to spend this insane amount of effort to convince an insignificant number of people to drop PGP for Signal?
I can’t for the life of me imagine why a CIA-covert-psyops-agency funded walled garden service would want to push people away from open standards to their walled garden service.
Don’t get me wrong, Signal does a lot of the right things but a lot of claims are made about it implying it’s as open as PGP, which it isn’t.
What makes Signal a closed system?
Not Signal, iOS and Android, and all the secret operating systems that run underneath.
As for Signal itself, moxie forced F-Droid to take down Signal, because he didn’t want other people to compile Signal. He said he wanted people only to use his binaries, which even if you are ok with in principle, on Android it mandates the use of the Google Play Store. If this is not a dick move, I don’t know what is.
I’m with you on Android and especially iOS being problematic. That being said, Signal has been available without Google Play Services for a while now. See also the download page; I couldn’t find it linked anywhere on the site but it is there.
However, we investigated this for PRISM Break, and it turns out that there’s a single Google binary embedded in the APK I just linked to. Which is unfortunate. See this GitHub comment.
because he didn’t want other people to compile Signal. He said he wanted people only to use his binaries
Ehm… he chose the wrong license in this case.
As I understand it, the case against PGP is not with PGP in and of itself (the cryptography is good), but with the ecosystem: that is, the toolchain in which one uses it. Because it is advocated for use in email, and securing email, it is argued, is nigh on impossible, it is irresponsible to recommend PGP-encrypted email for general consumption, especially for journalists.
That is, while it is possible to use PGP via email effectively, it is incredibly difficult and error-prone. These are not qualities one wants in a secure system and thus, it should be avoided.
But the cryptography isn’t good. His case in the blog post deliberately sets aside all of the crypto badness. Example: the standard doesn’t allow any hash function other than SHA-1, which has been proven broken. The protocol itself disallows flexibility here to avoid ambiguity, and that means there is no way to change it significantly without breaking compatibility.
And so far, it seems, people wanted compatibility (or switched to something else, like Signal)
Until this better implementation appears, an abstract recommendation for PGP is a concrete recommendation for GPG.
Imagine if half the effort spent saying PGP is just fine went into making PGP just fine.
I guess that’s an invitation to push https://autocrypt.org/
When people tell me to stop using the only cryptosystem in existence that has ever - per the Snowden leaks - successfully resisted the attentions of the NSA, I get suspicious, even hostile.
Without wanting to sound rude, this is discussed in the article:
The fact of the matter is that OpenPGP is not really a cryptography project. That is, it’s not held together by cryptography. It’s held together by backwards-compatibility and (increasingly) a kind of an obsession with the idea of PGP as an end in and of itself, rather than as a means to actually make end-users more secure.
OpenPGP might have resisted the NSA, but that’s not a unique property. Every modern encryption tool or standard has to do that or it is considered broken.
I think most people, unless they are heavily involved in security research, don’t know how encryption/auth/integrity protection are layered. There are a lot of layers in what people just want to call “encryption”. OpenPGP uses the same standard crypto building blocks as everything else, and unfortunately putting those lower-level primitives together is fiendishly difficult. Life also went on after OpenPGP was created, meaning that those building blocks, and how to put them together, have changed over the last few decades; cryptographers learned a lot.
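A tiny illustration of the “putting primitives together is fiendishly difficult” point — a hypothetical sketch, not anything from OpenPGP itself: even with a perfectly sound MAC primitive, the naive way of checking the tag introduces a timing side channel, which is why helpers like Python’s hmac.compare_digest exist.

```python
import hashlib
import hmac

def sign(key: bytes, message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    # The primitive is fine; the classic integration bug is writing
    # "tag == sign(key, message)", which leaks timing information about
    # how many leading bytes of the forged tag were correct.
    return hmac.compare_digest(tag, sign(key, message))
```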
One of the most important things cryptographers learned is that the entire ecosystem, the system as a whole, counts. Even Snowden was talking about this when he said that the NSA just attacks the endpoints, where most of the attack surface is. So while the cryptographic bits in the core of the OpenPGP standard are safe, if dated, that’s not the point. Reasonable people can’t really use PGP safely, because that would require a library that implements the dated OpenPGP standard in a modern way, clients that interface with that library in a safe and thought-through way, and users who know enough about the system to satisfy its safety requirements (which are large for OpenPGP).
Part of that is attitude, most of the existing projects for implementing the standard just don’t seem to take a security-first stance. Who is really looking towards providing a secure overall experience to users under OpenPGP? Certainly not the projects bickering where to attribute blame.
I think people kept contrasting this with Signal because Signal gets a lot of things right by contrast. The protocol is modern and it’s not impossibly demanding on users (ratcheting key rotation, anyone?); there is no security blame game between Signal the desktop app vs. Signal the mobile app vs. the protocol when a security vulnerability happens, OWS just fixes it with little drama. Of course Signal-the-app has downsides too, like the centralization, but that seems like a reasonable choice. I’d rather have a clean protocol operating through a central server that most people can use than an unusable (from the PoV of most users) standard/protocol. We’re not there yet where we can have all of decentralization, security, and ease of use.
OpenPGP might have resisted the NSA, but that’s not a unique property. Every modern encryption tool or standard has to do that or it is considered broken.
One assumes the NSA has backdoors in iOS, Google Play Services, and the binary builds of Signal (and any other major closed-source crypto tool, at least those distributed from the US) - there’s no countermeasure and virtually no downside, so why wouldn’t they?
there is no security blame game between Signal the desktop app vs signal the mobile app vs the protocol when a security vulnerability happens, OWS just fixes it with little drama.
Not really the response I’ve seen to their recent desktop-only vulnerability, though I do agree with you in principle.
Signal Android has been reproducible for over two years now. What I don’t know is whether anyone has independently verified that it can be reproduced. I also don’t know whether the “remaining work” in that post was ever addressed.
The process of verifying a build can be done through a Docker image containing an Android build environment that we’ve published.
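For what it’s worth, the verification step boils down to a byte-for-byte comparison of the artifact you built inside that environment against the one that was shipped. A minimal sketch (the file paths are hypothetical, and this says nothing about whether you should trust the build environment itself, which is a separate question):

```python
import hashlib

def sha256_file(path: str) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(65536), b''):
            h.update(chunk)
    return h.hexdigest()

def builds_match(local_artifact: str, published_artifact: str) -> bool:
    # Identical digests imply identical bytes: the build reproduced.
    return sha256_file(local_artifact) == sha256_file(published_artifact)
```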
Doesn’t such a process assume trust in whoever created the image (and in whoever created each of the layers it was based on)?
A genuine question, as I see the convenience of Docker and how it could lead to more verification; but on the other hand it creates a single point of failure that is easier to attack.
That question of trust is the reason why, if you’re forced to use Docker, you build every layer yourself from the most trustworthy sources. It isn’t even hard.
the only cryptosystem in existence that has ever - per the Snowden leaks - successfully resisted the attentions of the NSA
I’m pretty ignorant on this matter, but do you have any link to share?
There is a lot wrong with the GPG implementation
Actually, I’d like to read the opinion of GPG developers here, too.
Everyone makes mistakes, but I’m pretty curious about the technical allegations: it seems like they did not consider the issue to be fixed in their own code.
There might be pretty good security reasons for this.
To start with, you can’t trust the closed-source providers, since the NSA and GCHQ are throwing $200+ million at both finding 0-days and paying vendors to put backdoors in. Covered here. From there, you have to assess open-source solutions. There are a lot of ways to do that. However, the NSA sort of did it for us in slides where GPG and Truecrypt were the worst things for them to run into. Snowden said GPG works, too. He’d know, given he had access to everything they had that worked and didn’t. He used GPG and Truecrypt. The NSA had to either ignore those people or forward them to TAO for a targeted attack on the browser, OS, hardware, etc. The targeted-attack group only has so much personnel and time. So, this is a huge increase in security.
I always say that what stops the NSA should be good enough to stop the majority of black hats. So, keep using and improving what is a known-good approach. I further limit risk by just GPG-encrypting text or zip files that I send/receive over untrusted transports using strong algorithms. I exchange the keys manually. That means I’m down to trusting the implementation of just a few commands. Securing GPG in my use-case would mean stripping out anything I don’t need (most of GPG), followed by hardening the remaining code manually or through automated means. It’s a much smaller problem than clean-slate, GUI-using, encrypted sharing of various media. Zip can encode anything. Give the files boring names, too. The untrusted email provider is Swiss, in case that buys anything against any type of attacker.
As far as the leaks go, I had a really hard time getting you the NSA slides. Searching with specific terms in either DuckDuckGo or Google used to take me right to them. It doesn’t anymore. I’ve had to fight with them, narrowing terms down with quotes, trying to find any Snowden slides, much less the good ones. I’m getting Naked Security, FramaSoft, pharma spam, etc. even on pages 2 and 3, but no Snowden slides past a few recurring ones. Even mandating the Guardian in the terms often didn’t produce more than one Guardian link. It’s really weird that both engines’ algorithms are suppressing all the important stuff despite really focused searches. Although I’m not putting my conspiracy hat on yet, the relative inaccuracy of Google’s results, compared to about any other search I’ve done over the past year for both historical and current material, is a bit worrying. Usually it has excellent accuracy.
NSA Facts is still up if you want the big picture about their spying activities. OK, after spending an hour, I’m going to have to settle for giving you this presentation calling TAILS or Truecrypt a catastrophic loss of intelligence. TAILS was probably temporary, but the TrueCrypt derivatives are worth investing effort in. Anyone else have a link to the GPG slide(s)? @4ad? I’m going to try to dig it all up out of old browser or Schneier conversations in the near future. We need at least those slides so people know what was NSA-proof at the time.
Why would TAILS be temporary? If anything this era of cheap devices makes it more practical than ever.
It was secure at the time, since neither mass collection nor TAO teams could hack it. Hacking it requires one or more vulnerabilities in the software it runs. The TAILS software includes complex components such as Linux and a browser with a history of vulnerabilities. We should assume that security was temporary and/or would disappear if usage went up enough to budget more attacks its way.
I’d still trust it more than TrueCrypt just due to being open-source.
What would it take to make an adequate replacement for TAILS? I’m guessing some kind of unikernel? Are there any efforts in that direction?
Well, you have to look at the various methods of attack to assess this:
Mass surveillance attempting to read traffic through protocol weaknesses with or without a MITM. They keep finding these in Tor.
Attacks on the implementation of Tor, the browser, or other apps. These are plentiful since it’s mostly written in a non-memory-safe way. Also, having no covert-channel analysis on components processing secrets means there are probably plenty of side channels. There are also increasingly many new attacks on hardware, with a network-oriented one even having been published.
Attacks on the repo or otherwise MITMing the binaries. I don’t think most people are checking for that. The few that do would make attackers cautious about being discovered. A deniable way to see who is who might be a bitflip or two that would cause the security check to fail. Put it in random, non-critical spots to make it look like an accident during transport. Whoever re-downloads doesn’t get hit with the actual attack.
So, the OS and apps have to be secure with some containment mechanisms for any failures. The protocol has to work. These must be checked against any subversions in the repo or during transport. All this together in a LiveCD. I think it’s doable, minus the anonymity protocol working, which I don’t trust. So, I’ve usually recommended dedicated computers bought with cash (esp. netbooks), WiFi’s, cantennas, getting used to human patterns in those areas, and spots with minimal camera coverage. You can add Tor on top of it, but NSA focuses on that traffic. They probably don’t pay attention to the average person on WiFi using generic sites over HTTPS.
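The deniable bitflip trick mentioned above is easy to picture: one flipped bit makes an integrity check fail in a way that looks like transport corruption rather than tampering. A minimal sketch (the file contents here are made up):

```python
# A single flipped bit in a downloaded artifact makes its checksum
# mismatch, indistinguishable from accidental corruption in transit.
import hashlib

original = b"pretend this is a downloaded LiveCD image"
tampered = bytearray(original)
tampered[10] ^= 0x01  # flip one bit in a non-critical spot

good = hashlib.sha256(original).hexdigest()
bad = hashlib.sha256(bytes(tampered)).hexdigest()
assert good != bad  # the hash/signature check fails; the user just re-downloads
```

Whoever re-downloads and passes the check the second time has, from the attacker’s perspective, revealed that they verify their downloads.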
Sure. My question was more: does a Live CD project with that kind of aim exist? @josuah mentioned heads, which at least avoids the regression of bringing in systemd, but doesn’t really improve over classic TAILS in terms of not relying on Linux or a browser.
An old one named Anonym.OS was an OpenBSD-based Live CD. That would’ve been better on the code-injection front at least. I don’t know of any current offerings. I just assume they’ll be compromised.
I think that is the reason why https://heads.dyne.org/ was made: replacing the complex software stack with a simpler one, with the aim of avoiding security risks.
Hmm. That’s a small start, but still running Linux (and with a non-mainstream patchset even), I don’t think it answers the core criticism.
Thanks for this great answer.
Really weird that both engines’ algorithms are suppressing all the important stuff despite really-focused searches.
If you can share a few of your search terms, I guess a few friends would find them pretty interesting for their research.
For sure this teaches us a valuable lesson. The web is not a reliable medium for free speech.
From now on, I will download interesting documents about such topics from the internet and donate them (with other more neutral DVDs) to small public libraries around Europe.
I guess that slowly, people will go back to librarians if search engines don’t search carefully enough anymore.
It was variations, with and without quotes, on terms I saw in the early reports. They included GPG, PGP, Truecrypt, Guard, Documents, Leaked, Snowden, and catastrophic. I at least found that one report that mentions it in combination with other things. I also found, but didn’t post, a PGP intercept that was highly classified but said they couldn’t decrypt it. Finally, Snowden kept maintaining that good encryption worked, with GPG being one tool he used personally.
So, we have what we need to know. From there, just need to make the programs we know work more usable and memory safe.
This is the best article I’ve read on this topic and it pretty clearly demonstrates why the community and attitude around a standard matters.
This is a nice summary, thanks for sharing it. Combined with this tweet: https://twitter.com/kellabyte/status/996429414970703872
…I’m inclined to wonder how much time/bandwidth would be saved at larger sites if people cleaned these up, although I suspect that “size of HTTP headers” is not the worst bottleneck for most people.
For most sites the comparison goes something like javascript > unoptimized images > cookie size > other http headers for bytes/load time wasted.
I suspect the impact is minimal. It’s a few hundred bytes at worst, and the site is probably more affected by 3rd party adtech or unoptimized pictures.
Somewhat related, but even small changes to the request/response can have large impact on the bandwidth consumed.
From Stathat “This change should remove about 17 terabytes of useless data from the internet pipes each month” https://blog.stathat.com/2017/05/05/bandwidth.html
Optimized images alone would most likely save a lot more, since they can save a lot more too. A recent Google blog loaded an 8 MB GIF image to show a few-second-long animation in a 250x250 thumbnail. Two minutes in ffmpeg reduced that to about 800 KB.
Imagine if people did this on sites with more traffic than some random google product announcement blog…
A quick search shows me that the article doesn’t include the word WebAssembly, so it probably misses a large part of the picture. I’m not worried if WebAssembly wins, and JS winning implicitly implies that.
I would actually wait for GDPR to kick in before deleting Facebook, or any other online account for that matter, so that keeping user information even after a user has requested deletion is simply against the law.
I don’t think the fines for violating GDPR are large enough to make Facebook think twice about ignoring it. Short of dissolving Facebook and seizing its assets under civil forfeiture, no civil or criminal penalty seems severe enough to force it to consider the public good.
don’t think the fines for violating GDPR are large enough
Actually, they are very large:
Up to €20 million, or 4% of the worldwide annual revenue of the prior financial year, whichever is higher [0]
Based on 2017 revenue [1] of $40B, that’s $1.6 billion
But it’s not just the fines. The blowback from the stock hit and shareholder loss, as well as cascading PR impact, is a high motivator too.
[0] https://www.gdpreu.org/compliance/fines-and-penalties/ [1] https://www.statista.com/statistics/277229/facebooks-annual-revenue-and-net-income/
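The ceiling quoted above is just a max of two figures; as a quick sketch (revenue number from this thread, currency conversion ignored):

```python
# GDPR-style fine ceiling: the greater of EUR 20 million or 4% of
# worldwide annual revenue from the prior financial year.
def gdpr_max_fine(annual_revenue: float) -> float:
    return max(20e6, 0.04 * annual_revenue)

print(gdpr_max_fine(40e9))   # Facebook-scale revenue: 1600000000.0
print(gdpr_max_fine(100e6))  # small company: the 20M floor applies, 20000000.0
```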
0.04 << 1 until you can quantify the cascading PR impact. It will not affect their day-to-day operations from an economic standpoint.
I would be curious to know how many people have actually taken action on their FB usage based on the recent CA news outbreak. I am willing to bet it’s minuscule.
The fines are per distinct issue (not number of people affected). If Facebook breaches GDPR with multiple issues, then Facebook could get hit by a large percentage of their annual revenues.
Nice summary of the crypto bits. I’d only remark on the backup suggestion though: just use borgbackup with whatever storage provider fits you.
There is no excuse these days not to have an auto-updating CMS. Even Wordpress has that (well, Wordpress being Wordpress, they then broke the autoupdater in a patch release, but I digress).
Even? I don’t see too many CMSes nowadays, but isn’t that an almost unique feature of Wordpress? Which other CMSes support auto-updating functionality?
And Wordpress auto-updating covers only Wordpress itself, not plugins or themes.
I dunno. Giving an application (Wordpress, Drupal, etc.) with a fucking atrocious security history permission to overwrite its own files doesn’t seem that smart to me.
It’s a hierarchy (from worst to least worst)
Tag yourself ;)
That isn’t least worst. “Auto update” implies the software can overwrite itself. That’s the whole problem.
It isn’t that much of a problem if one is able to verify the modifications, e.g. by using reproducible builds and updating them in place.
If the app server (e.g. either php-fpm or Apache w/ mod_php) has write permissions to the files it executes, and an attacker finds an RCE vulnerability, all the reproducible builds in the world won’t help the average Joe whose site just got hosed or compromised to serve malware or whatever.
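The separation being argued for can be sketched like this (paths are illustrative; in production the app-server user additionally wouldn’t own the files, so it couldn’t chmod them back):

```python
# Strip write bits from deployed code so a compromised app process
# can't rewrite the files it executes.
import os
import stat
import tempfile

webroot = tempfile.mkdtemp()
index = os.path.join(webroot, "index.php")
with open(index, "w") as f:
    f.write("<?php echo 'hi'; ?>")

# Read-only for everyone (0o444); the PHP worker only needs to read it.
os.chmod(index, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)
print(oct(stat.S_IMODE(os.stat(index).st_mode)))  # 0o444
```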
Remote code execution is also pretty bad without the app server having write permission on the files, and, given the way most Wordpress plugins, for example, are developed, there will be RCE opportunities. Auto-upgrades greatly decrease the chance that those will be exploited in the wild. So sadly it’s a tradeoff which makes sense for the target audience, i.e. hobby sites and businesses on shared hosting or so, without a dedicated ops team.
I’m quite happy that the MitM options didn’t make it into the standard.
Though I’m missing encrypted SNI, I hope we get that soon.
It was interesting to see the reason why encrypted SNI wasn’t included - too much complexity to achieve it.
Regarding encrypted SNI, this Internet-Draft, last updated March 1, summarizes the proposals that appear to be furthest along.
~~People~~ The Media have been acting very surprised about all the news around Facebook which has been popping up for the last few weeks.
But frankly, I find exactly ~~these reactions~~ the coverage far more surprising. I mean, didn’t everyone already kind of know that this has been going on if you use Facebook? People don’t have to be told that something unusual is going on. Just look at their app permissions (or their business model). What probably ~~shocks~~ irritates most people is the fact that they can’t go on telling themselves that everything is fine.
Edit: I would like to clarify – my issue isn’t who knew what and who didn’t. I am talking about the popular reaction and the narrative in which these events are being placed, which I believe to be wrong. I don’t understand why people see this as trolling?
I see this sort of comment a lot, and I think it’s wrong headed and counterproductive:
There’s a difference between a general belief that Facebook doesn’t respect your privacy and a very specific “they collected this data, unnecessarily, and stored it in perpetuity”
Chastising people for not having been aware in the past doesn’t encourage them to be more proactive in the future, it pushes them to just stop caring entirely. If you want people to be more upset and take action, use this opportunity to push them forwards, not lecture them for having been late to the party.
I’m not blaming Facebook users or trying to act as if I were superior. I mean, I use WhatsApp on a (far too) regular basis, and have a pretty good feeling that it is going on there too. And I understand why they are using it.
But in the end, what else were they supposed to be doing with the data? The people I am “concerned” with are those who are talking about this the most, acting as if nobody would have guessed that this could be happening in a million years. If anything, this seems to be the harmful thing to do, since it seems to neglect that Facebook isn’t doing this because they are evil or something, but anyone, any social network with a similar history, size and system of operation, would have to do the same. The crime is intrinsic in the form.
Personally, I don’t know specifically what Facebook or other companies are doing. However, I know that they are in the business of data collection so this is not shocking. What they do specifically depends on what they are able to do technically.*
If they were doing something outside their scope of business, like raise a great old one from the void, then I might be shocked.
* That something might be technically feasible might be shocking, but that’s another story.
E.g., “Facebook scraped call data from Android” vs. “Android leaks call data to third party apps.”
That is kind of what I am trying to say. It isn’t surprising, and this fact should be emphasized. Sadly, @alex_gaynor misunderstood me a bit, in that I want people to understand why this shouldn’t be surprising. It is their business model, and no matter who or what, something along these lines happening was ultimately unavoidable.
What they do specifically depends on what they are able to do technically
And what they have to do as a business to always be a step ahead of their competition! And again, this isn’t anyone’s individual responsibility, just as nobody is to blame when a player is ahead in Mensch ärgere Dich nicht and others lose.
Maybe a bit snarky, but let me draw some parallels with this take:
“The Big Bang? Why are you interested in it now? It happened 13 billion years ago. It obviously happened, otherwise we wouldn’t be here at all. Why study it? Pretty much everybody knows about cosmology. Add some fundamental laws and, well the current state of the universe naturally follows.”
The point is: not even close to the number of people you think knew knew. Those who knew didn’t know details. Those who had some details didn’t have certainty. Those who knew, had details and certainty didn’t reach large enough numbers to have a public debate about this issue.
People see it as trolling because no one can be certain over the internet if anyone is actually surprised or not. A lot of people feign surprise to puff themselves up. An obnoxiously obvious version of this would be “Not only did I know about this breaking news before everyone else, I was so certain of it that I believed it was universal knowledge! I’m shocked, shocked that people did not understand this as well as me, a genius.” You didn’t write like this, but feigning surprise is common enough that any expression of surprise is received very skeptically.
Ok, I understand that, but I hope I clarified my position in my other responses. Looking back at my original phrasing, I understand the possibility for misunderstanding. “Media coverage” might have been a better word to use instead of “reactions”, which could be understood to be too general.
And despite Facebook being devoid of ethics and morality, despite them abusing their users and their data, people will keep using Facebook by the billions. It’s hopeless; people just don’t care enough.
The network effects are so strong that competition is, for all intents and purposes, impossible. Google Plus is the canonical case study here, though I’m sure there’s an entire graveyard full of them. Facebook’s value is that it has all the people on it, and any competitor will by definition start without any people, which gives it no value proposition to pull people off Facebook.
Unfortunately it’s still the most viable platform for certain things. I use Facebook almost exclusively to buy and sell event tickets at the last minute. In the past 2 years I have bought tickets from the actual ticket vendor for only 2 out of 20+ shows. Facebook provides a web of trust that no other platform can match. I would hesitate to buy a ticket from “edmfan1337,” but some random person with years of photos, a job, a school, and hundreds of friends is way more trustworthy. Often I’ll even have a mutual friend or two for events that are local. I’d love if there were some other platform equally viable, but I am not really interested in technical solutions involving third party guarantees or other “secure” systems. It’s better to deal with real people who can come to agreements and make compromises.
I think there’s something far more sinister going on here. We don’t really have free media in today’s world. It looks free, but there are only a few major players and a lot of major advertisers controlling those outlets. At work we have a CNN feed in the entry way. 90% of the time the word Trump is on the screen. It’s all Trump all the time. Unlike 1984 with its 2 minute hate, for several decades we’ve been living in a 24/7 hate.
These types of stories are designed to keep us scared or to put the population down a certain path. I have a feeling Zuckerberg pissed off someone recently. Maybe it’s someone in the 1% trying to put him in his place after he talked about running for President. Maybe he pissed off some board members at Google. It doesn’t take much. Someone with the means just needs to get one or two publications to start down the path and soon the rest of the media follows because it’s what people want and it sells.
I don’t believe in sinister masterminds controlling things from behind the scenes. And, in retrospect, my usage of the word “media” didn’t help much to clarify what I intended to say. Maybe “popular discourse” would have been better?
Regarding the points you brought up, I just believe that Trump is an easy-to-report topic that a lot of people (in some perverse sense) enjoy hearing about. And why should a media network not talk about it, if there’s a “marketplace of attention”? One should also avoid falling into cognitive biases. Trump gets mentioned a lot, on the one hand because his policies are controversial, but also because he is the president of the USA… It’s not like Obama or Bush were minor political actors. And “the 1%” is really a void term. It means nothing, and just gives space for one’s own imagination. Some things just happen, randomly, and there isn’t an overarching narrative one can coherently place them in.
Google contributes surprisingly little back in terms of open source compared to the size of the company and the number of developers they have. (They do reciprocate a bit, but not nearly as much as they could.)
For example, this is really visible in areas where they do some research and/or set a standard, like compression algorithms (zopfli, brotli) or network protocols (HTTP/2, QUIC): the code and glue they release is minimal.
It’s my feeling that Google “consumes”/relies on a lot more open source code than they contribute back to.
Go? Kubernetes? Android? Chromium? Those four right there are gargantuan open source projects.
Or are you specifically restricting your horizon to projects that aren’t predominantly run by Google? If so, why?
I’m restricting my horizon for projects that aren’t run by Google because it better showcases the difference between running and contributing to a project. Discussing how Google runs open source projects is another interesting topic though.
Edit: running a large open source project for a major company is in large part about control. Contributing to a project where the contributor is not the main player running the project is more about cooperation and being a nice player. It just seems to me that Google is much better at the former than the latter.
It would be interesting to attempt to measure how much Google employees contribute back to open source projects. I would bet that it is more than you think. When you get PRs from people, they don’t start off with, “Hey so I’m an engineer at Google, here’s this change that we think you might like.” You’d need to go and check out their Github profile and rely on them listing their employer there. In other words, contributions from Google may not look like Contributions From Google, but might just look like contributions from some random person on the Internet.
I don’t have the hat, but for the next two weeks (I’m moving teams) I am in Google’s Open Source office that released these docs.
We do keep a list of all Googlers who are on GitHub, and we used to have an email notification for patches that Googlers sent out before our new policy of “If it’s a license we approve, you don’t need to tell us.” We also gave blanket approval after the first three patches approved to a certain repo. It was ballpark 5 commits a day to non-Google code when we were monitoring, which would exclude those which had been given the 3+ approval. Obviously I can share these numbers because they’re all public anyway ;)
For reasons I can’t remember, we haven’t used the BigQuery datasets to track commits back to Googlers and get a good idea of where we are with upstream patches now. I know I tried myself, and it might be different now, but there was some blocker that prevented me doing it.
I do know that our policies about contributing upstream are less restrictive than other companies, and Googlers seem to be happy with what they have (particularly since the approved licenses change). So I disagree with the idea that Google the company doesn’t do enough to upstream. It’s on Googlers to upstream if they want to, and that’s no different to any other person/group/company.
So I disagree with the idea that Google the company doesn’t do enough to upstream.
Yeah, I do too. I’ve worked with plenty of wonderful people out of Google on open source projects.
More accurately, I don’t even agree with the framing of the discussion in the first place. I’m not a big fan of making assumptions about moral imperatives and trying to “judge” whether something is actually pulling its weight. (Mostly because I believe it’s unknowable.)
But anyway, thanks for sharing those cool tidbits of info. Very interesting! :)
Yeah, sorry I think I made it sound like I wasn’t agreeing with you! I was agreeing with you and trying to challenge the OP a bit :)
Let me know if there’s any other tidbits you are interested in. As you can tell from the docs, we try to be as open as we can, so if there’s anything else that you can think of, just ping me on this thread or cflewis@google.com and I’ll try to help :D
FWIW I appreciate the effort to shed some light on Google’s open source contributions. Do you think that contributions could be more systemic/coordinated within Google though, as opposed to left to individual devs?
Do you think that contributions could be more systemic/coordinated within Google though, as opposed to left to individual devs?
It really depends on whether a patch needs to be upstreamed or not, I suppose. My gut feeling (I have no data for this; it’s an entirely personal opinion, not representative of my employer) is that teams as a whole aren’t going to worry about it if they can avoid it… often the effort to convince the upstream maintainers to accept the patch can suck up a lot of time, and if the patch isn’t accepted then that time was wasted. It’s also wasted time if the project is going in a direction that’s different to yours, and no-one really ever wants to make a competitive fork. It’s far simpler and a 100% guarantee of things going your way if you just keep a copy of the upstream project and link that in as a library with whatever patches you want to do.
The bureaucracy of upstreaming, of course, is working as intended. There does have to be guidance and care to accepting patches. Open source != cowboy programming. That’s no problem if you are, say, a hobbyist who is doing it in the evenings here and there, where timeframes and so forth are less pressing. But when you are a team with directives to get your product out as soon as you can, it generally isn’t something a team will do.
I don’t think this is a solved problem by any company that really does want to commit back to open source like Google does. And I don’t think the issue changes whether you’re a giant enterprise or a small mature startup.
This issue is also why you see so many more open source projects released by companies, rather than companies working with existing software: you know your patches will be accepted (eventually) and you know it’ll go in your direction. It’s a big deal to move a project to community governance, as you then lose that guarantee.
Cool! You should really explain to Google your build process!
And to everybody else, actually.
Because a convoluted and long build process concretely reduces the freedom that an open source license gives you.
Cool! You should really explain to Google your build process!
Google explained it to me actually. https://chromium.googlesource.com/chromium/src/+/lkcr/docs/linux_build_instructions.md#faster-builds
Because a convoluted and long build process concretely reduces the freedom that an open source license gives you.
Is the implication that Google intentionally makes the build for Chromium slow? Chromium is a massive project and uses the best tools for the job and has made massive strides in recent years to improve the speed, simplicity, and documentation around their builds. Their mailing lists are also some of the most helpful I’ve ever encountered in open source. I really don’t think this argument holds any water.
The amount Google invests in securing open source software basically dwarfs everyone else’s investment, it’s vaguely frightening. For example:
I don’t think anyone else is close, either by number (and severity) of vulnerabilities reported or in proactive work to prevent and mitigate them.
Google does care a lot about security and I know of plenty of positive contributions that they’ve made. We could probably spend days listing them all, but in addition to what you’ve mentioned, Project Zero, pushing the PKI towards sanity, Google Summer of Code (of which I was a recipient about a decade ago), etc. all had a genuinely good impact.
OTOH Alphabet is the world’s second largest company by market capitalization, so there should be some expectation of activity based on that :)
Stepping out of the developer bubble, it is an interesting thought experiment to consider whether it would be worth trading every open source contribution Google ever made for changing the YouTube recommendation algorithm to stop promoting extremism. (Currently I’m leaning towards yes.)
If you mean it would be written in Rust in 2018, nope. Most platforms of interest to SQLite are still tier-2/tier-3 support by Rust at best.
I believe it. But what platforms are at issue? Edit: yikes, the tier 1 list is way more limited than I thought. Never mind this question.
Also, for projects starting in 2018, the question isn’t what Rust supports today, but what platforms you’re willing to bet Rust will support in 5-10 years. Hopefully that list is bigger.
We’ve been talking about re-vamping the tier system, because it doesn’t do a great job of recognizing actual support. For example, ARM is a Tier 1 platform for Firefox, so stuff gets checked out and handled quite a bit, but given the current rules of how we classify support, it appears like it’s a lot less than it actually is.
in 5-10 years. Hopefully that list is bigger.
We recently put together a working group to work on ease of porting rust to other platforms, so yeah, we expect it to grow. The hardest part is CI, honestly. Getting it going is one thing, but having someone who’s willing to commit to fixing breakage in a timely manner without bringing all development to a halt is tough for smaller/older platforms.
This blogpost is a bit of a mixed bag of advice:
It’s trivial to store usernames and email addresses in all lowercase and transform any input to lowercase before comparing.
Bullshit. Unicode case-folding is anything but trivial for non-ASCII codepoints. You’d better use case-folding only for internal/login purposes, in no user-visible way, and even then you have to stick to a single case-folding algorithm forever (or migrate very carefully to another one by converting your existing user data). Mismatch between case-folding at registration vs. login vs. anywhere else? Boom, huge security problems. It is treacherously hard to get this right.
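A concrete illustration in Python (the function names are made up): `lower()` and `casefold()` disagree on common codepoints, so mixing them between registration and login silently produces two different keys for the “same” name.

```python
def naive_key(name: str) -> str:
    return name.lower()     # what the blog post suggests

def folded_key(name: str) -> str:
    return name.casefold()  # full Unicode case folding

# German sharp s: the two algorithms give different canonical forms.
assert naive_key("Straße") == "straße"
assert folded_key("Straße") == "strasse"
# A user stored as naive_key("Straße") won't match a login attempt
# typed in uppercase and run through the same naive normalization:
assert naive_key("Straße") != naive_key("STRASSE")
```

And this is before Unicode normalization (NFC vs. NFD) even enters the picture.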
A while ago I was in charge of a large project to move thousands of sites over to HTTPS. Between automation, Let’s Encrypt, fixing mixed content, TLS-terminating load balancers, certificate inventory management, monitoring and auditing, the effort wasn’t trivial. It was also absolutely necessary, and this is on the high end of what has to be done by any entity dealing with certs on the planet.
For the single legacy site case that this guy mentions, it is usually a matter of two simple headers:
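(Guessing at the two headers meant, since the original list isn’t reproduced here; most likely HSTS plus a CSP upgrade directive:)

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
Content-Security-Policy: upgrade-insecure-requests
```

The first pins browsers to HTTPS on return visits; the second makes the browser fetch legacy http:// subresources over HTTPS, which takes care of most mixed content.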
That’s it. Done. Legacy site converted.
Until HTTP gets the clickthrough treatment and browsers default to HTTPS first, every HTTP page load is a potential invitation to inject anything into that connection including 0days, cryptocurrency mining malware and tracking/advertising code.
I think ads are the worst way to support any organization, even one I would rate as highly as Mozilla. However, people are reluctant to give support any other way, so we get to suffer all the negative sides of ads.
I just donated to Mozilla with https://donate.mozilla.org, please consider doing the same if you think ads/sponsored stories are the wrong path for Firefox.
Mozilla has more than enough money to accomplish their core task. I think it’s the same problem as with Wikimedia; if you give them more money, they’re just going to find increasingly irrelevant things to spend it on. Both organizations could benefit tremendously from a huge reduction in bureaucracy, not just more money.
I’ve definitely seen this with Wikimedia, as someone who was heavily involved with it in the early years (now I still edit, but have pulled back from meta/organizational involvement). The people running it are reasonably good and I can certainly imagine it having had worse stewardship. They have been careful not to break any of the core things that make it work. But they do, yeah, basically have more money than they know what to do with. Yet there is an organizational impulse to always get more money and launch more initiatives, just because they can (it’s a high-traffic “valuable” internet property).
The annual fundraising campaign is even a bit dishonest, strongly implying that they’re raising this money to keep the lights on, when doing that is a small part of the total budget. I think the overall issue is that all these organizations are now run by the same NGO/nonprofit management types who are not that different from the people who work in the C-suites at corporations. Universities are going in this direction too, as faculty senates have been weakened in favor of the same kinds of professional administrators. You can get a better administration or a worse one, but barring some real outliers, like organizations still run by their idiosyncratic founders, you’re getting basically the same class of people in most cases.
So Mozilla does something bad, and as a result I am supposed to give it money?? Sorry, that doesn’t make any sense to me. If they need my money, they should convince me to donate willingly. What you are describing is a form of extortion.
I donate every month to various organizations: EFF, ACLU, Wikipedia, OpenBSD, etc. So far Mozilla has never managed to convince me to give them my money. On the contrary, why would I give money to a dysfunctional, bureaucratic organization that doesn’t seem to have a clear and focused agenda?
They may be a dysfunctional bureaucratic organisation without a focused agenda (wouldn’t know as I don’t work for it) which would surely make them less effective, but shouldn’t the question instead be how effective they are? Is what they produce a useful, positive change and can you get that same thing elsewhere more cost-effectively?
If I really want to get to a destination, I will take a run-down bus if that is the only transport going there. And if you don’t care about the destination, then transport options don’t matter.
They may be a dysfunctional bureaucratic organisation without a focused agenda (wouldn’t know as I don’t work for it) which would surely make them less effective, but shouldn’t the question instead be how effective they are? Is what they produce a useful, positive change and can you get that same thing elsewhere more cost-effectively?
I am frequently in touch with Mozilla, and while I sometimes feel like I’m tilting at windmills, other parts of the org are very quick-moving and highly cost-effective. For example, they do a lot of very efficient training for community members, like the open leadership training and the Mozilla Tech Speakers. They run MDN, a prime resource for web development and documentation. Mozilla Research has a high reputation.
Firefox itself is being constantly rebuilt and actively developed. MozFest is one of the best conferences you can go to in this world if you want to talk tech and social subjects.
I still find their developer relationship very lacking, which is probably the most visible part to us, but hey, it’s only one aspect.
The fact that Mozilla is going to spend money on community activities and conferences is why I don’t donate to them. The only activity I and 99% of people care about is Firefox. All I want is a good web browser. I don’t really care about the other stuff.
Maybe if they focused on what they’re good at, their hundreds of millions of dollars of revenue would be sufficient and they wouldn’t have to start selling “sponsored stories”.
The only activity I and 99% of people care about is Firefox.
This is a very easy statement to throw around. It’s very hard to back up.
Also, what’s the point of having a FOSS organisation if they don’t share their learnings? This whole field is fresh and we have maintainers hurting left and right, but people complain when organisations do more than just code.
To have a competitive web browser we can trust, plus exemplary software in a number of categories. Mozilla could’ve been building trustworthy versions of useful products like SpiderOak, VPN services, and so on. Any revenue from business licensing could wean them off ad revenue over time.
Instead, they waste money on lots of BS. Also, they could do what I say plus community work. It’s not either-or. I support both.
To have a competitive web browser we can trust, plus exemplary software in a number of categories. Mozilla could’ve been building trustworthy versions of useful products like SpiderOak, VPN services, and so on. Any revenue from business licensing could wean them off ad revenue over time.
In my opinion, the point of FOSS is sharing and I’m pretty radical that this involves approaches and practices. I agree that all you write is important, I don’t agree that it should be the sole focus. Also, Mozilla trainings are incredibly good, I have actually at some point suggested them to sell them :D.
Instead, they waste money on lots of BS. Also, they could do what I say plus community work. It’s not either-or. I support both.
BS is very much in the eye of the beholder. I also haven’t said that they couldn’t do what you describe.
Also, be aware that they often collaborate with other foundations and bring knowledge and connections into the deal, not everything is funded from the money MozCorp has or from donations.
“Also, Mozilla trainings are incredibly good, I have actually at some point suggested them to sell them :D.”
Well, there’s a good idea! :)
That’s a false dichotomy because there are other ways to make money in the software industry that don’t involve selling users to advertisers.
It’s unfortunate, but advertisers have so thoroughly ruined their reputation that I simply will not use ad supported services any more.
I feel like Mozilla is so focused on making money for itself that it’s lost sight of what’s best for their users.
That’s a false dichotomy because there are other ways to make money in the software industry that don’t involve selling users to advertisers.
Ummm… sorry? The post you are replying to doesn’t speak about money at all, but about what people care about.
Yes, advertising and Mozilla is an interesting debate, and it’s also not like Mozilla is only doing advertisement. But flat-out criticism of the kind “Mozilla is making X amount of money” or “Mozilla supports things I don’t like” is not it.
This is a very easy statement to throw around. It’s very hard to back up.
Would you care to back up the opposite, that over 1% of Mozilla’s userbase supports the random crap Mozilla does? That’s over a million people.
I think my statement is extremely likely a priori.
I’d venture to guess most of them barely know what Firefox is beyond how they do stuff on the Internet. They want it to load up quickly, let them use their favorite sites, do that quickly, and not toast their computer with malware. If on a mobile or tablet, maybe add not using too much battery. Those probably represent most people on Firefox along with most of its revenue. Some chunk of them will also want specific plugins to stay on Firefox, but I don’t have data on their ratio.
If my “probably” is correct, then what you say is probably true too.
This is a valid point of view, just shedding a bit of light on why Mozilla does all this “other stuff”.
Mozilla’s mission statement is to “fight for the health of the internet”, notably this is not quite the same mission statement as “make Firefox a kickass browser”. Happily, these two missions are extremely closely aligned (thus the substantial investment that went into making Quantum). Firefox provides revenue, buys Mozilla a seat at the standards table, allows Mozilla to weigh in on policy and legislation and has great brand recognition.
But while developing Firefox is hugely beneficial to the health of the web, it isn’t enough. Legislation, proprietary technologies, corporations and entities of all shapes and sizes are fighting to push the web in different directions, some more beneficial to users than others. So Mozilla needs to wield the influence granted to it by Firefox to try and steer the direction of the web to a better place for all of us. That means weighing in on policy, outreach, education, experimentation, and yes, developing technology.
So I get that a lot of people don’t care about Mozilla’s mission statement, and just want a kickass browser. There’s nothing wrong with that. But keep in mind that from Mozilla’s point of view, Firefox is a means to an end, not the end itself.
I don’t think Mozilla does a good job at any of that other stuff. The only thing they really seem able to do well (until some clueless PR or marketing exec fucks it up) is browser tech. I donate to the EFF because they actually seem able to effect the goals you stated and don’t get distracted with random things they don’t know how to do.
What if, and bear with me here, what they did ISN’T bad? What if instead they are actually making a choice that will make Firefox more attractive to new users?
The upside is that at least Mozilla is trying to make privacy-respecting ads instead of simply opening up the floodgates.
For my own projects I use cron exclusively.
At work we use cron for system-level tasks (e.g. backups) and Celery for application-level tasks (e.g. periodically poll inventory from warehouses), with RabbitMQ as its backend.
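The Celery side of that setup can be sketched roughly like this: a periodic task registered on Celery beat, with RabbitMQ as the broker. This is a hedged configuration sketch, not our actual code — the app name, broker URL, and the warehouse-polling task are placeholders.

```python
# Sketch: Celery app using RabbitMQ (AMQP) as the message broker,
# with a periodic application-level task scheduled via Celery beat.
# App name, broker credentials, and task body are hypothetical.
from celery import Celery
from celery.schedules import crontab

app = Celery("inventory", broker="amqp://guest:guest@localhost:5672//")

@app.task
def poll_warehouse_inventory():
    # Application-level work, e.g. fetch stock levels from warehouse APIs.
    ...

# Beat schedule: run the poll every 15 minutes, analogous to a crontab entry.
app.conf.beat_schedule = {
    "poll-inventory": {
        "task": "inventory.poll_warehouse_inventory",
        "schedule": crontab(minute="*/15"),
    },
}
```

You would then run a worker (`celery -A inventory worker`) and the scheduler (`celery -A inventory beat`) alongside it; system-level jobs like backups stay in plain cron.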
Also, think about monitoring those tasks, especially backups. A lot of people don’t and it’s a recipe for disaster. I have started using https://cronhub.io/ recently but there are other similar services such as https://cronitor.io/, or you can roll your own like I used to do.
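The “roll your own” variant is simple enough to sketch: wrap the cron job and ping a health-check URL only when it exits cleanly, so a missed ping means a failed or stuck job. The URL and command below are placeholders, and the `ping` callable is injectable purely so the behavior can be exercised without a network.

```python
# Minimal dead-man's-switch style cron monitoring (cronhub/cronitor pattern).
# The monitoring URL is hypothetical; swap in your service's check URL.
import subprocess
import urllib.request

def run_and_report(cmd, ping_url, ping=None):
    """Run cmd; ping the monitoring endpoint only if it succeeded."""
    if ping is None:
        ping = lambda url: urllib.request.urlopen(url, timeout=10)
    result = subprocess.run(cmd)
    if result.returncode == 0:
        ping(ping_url)  # no ping within the expected window => alert fires
    return result.returncode

# Crontab usage would look like:
#   0 2 * * * /usr/bin/python3 backup_wrapper.py
```

The key property is that the monitoring service alerts on the *absence* of pings, so it catches the job never running at all — something a log-scraping approach tends to miss.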
I would like to second this post.
The programming language/framework-specific scheduling parts don’t matter all that much, but the message bus/backend parts do. RabbitMQ and other AMQP solutions are pretty good; try to avoid a simple key-value-store-based backend such as Redis.
Any specific reason for avoiding Redis/key-value stores? I’ve only had one such experience (resque-php) and the main downside seemed to be the need for polling, but honestly I don’t know if that’s because of Redis or because of resque-php’s implementation. I’d like to hear more about that!
It’s too simplistic. I mean, it works for very basic usage, but once you start caring about things like HA, backups, wider usage (so multiple vhosts, in RabbitMQ terminology), or logging/monitoring, it kind of shows how inadequate it is.
Redis clustering is not that nice. Introspectability is also on the wrong level: you don’t generally care about the key/value parts, you care about the message-bus parts, and since Redis isn’t aware of those it can’t help you with them.
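The polling downside raised earlier in the thread can be illustrated without any broker at all. With a simple key-value backend the consumer typically loops check-sleep-check, paying latency up to the poll interval, whereas an AMQP broker pushes to a blocked consumer that wakes exactly when a message arrives. A stdlib `queue.Queue` stands in for the backend here — this is a conceptual sketch, not how either Redis or RabbitMQ is actually wired.

```python
# Polling vs. blocking consumption, sketched with a stdlib queue.
import queue
import threading
import time

q = queue.Queue()

def polling_worker(results):
    # Polling style: check, sleep, repeat.
    # Worst-case delivery latency ~= the poll interval, plus wasted wakeups.
    while True:
        try:
            item = q.get_nowait()
        except queue.Empty:
            time.sleep(0.05)  # poll interval
            continue
        results.append(item)
        return

def blocking_worker(results):
    # Blocking style: woken as soon as an item arrives,
    # closer to AMQP push delivery to a waiting consumer.
    results.append(q.get())

results = []
t = threading.Thread(target=blocking_worker, args=(results,))
t.start()
q.put("job-1")  # delivered immediately; no poll interval involved
t.join()
```

(Redis does offer blocking list operations, but as the comment above notes, the broker still has no notion of consumers, acknowledgements, or routing — it only sees keys and values.)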