I recently discovered how horribly complicated traditional init scripts are whilst using Alpine Linux. OpenRC might be modern, but it’s still complicated.
Runit seems to be the nicest I’ve come across. It asks the question “why do we need to do all of this anyway? What’s the point?”
It rejects the idea of forking and instead requires everything to run in the foreground:
/etc/sv/nginx/run:
#!/bin/sh
exec nginx -g 'daemon off;'
/etc/sv/smbd/run:
#!/bin/sh
mkdir -p /run/samba
exec smbd -F -S
/etc/sv/murmur/run:
#!/bin/sh
exec murmurd -ini /etc/murmur.ini -fg 2>&1
Waiting for other services to load first does not require special features in the init system itself. Instead you can write the dependency directly into the service file in the form of a “start this service” request:
/etc/sv/cron/run:
#!/bin/sh
sv start socklog-unix || exit 1
exec cron -f
Where my implementation of runit (Void Linux) seems to fall flat on its face is logging. I hoped it would do something nice like redirect stdout and stderr of these supervised processes by default. Instead you manually have to create a new file and folder for each service that explicitly runs its own copy of the logger. Annoying. I hope I’ve been missing something.
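For the record, the conventional (if manual) shape of such a per-service logger is a log/ subdirectory whose run script execs svlogd(8); runsv then pipes the main script's stdout into it. A minimal sketch for the nginx example above (the log directory path is my own choice):

```shell
#!/bin/sh
# /etc/sv/nginx/log/run -- runsv pipes the stdout of ../run into
# this script's stdin; svlogd writes and rotates it on disk.
mkdir -p /var/log/nginx
exec svlogd -tt /var/log/nginx
```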
The only other feature I can think of is “reloading” a service, which Aker does in the article via this line:
ExecReload=kill -HUP $MAINPID
I’d make the argument that in all circumstances where you need this you could probably run the command yourself. Thoughts?
Where my implementation of runit (Void Linux) seems to fall flat on its face is logging. I hoped it would do something nice like redirect stdout and stderr of these supervised processes by default. Instead you manually have to create a new file and folder for each service that explicitly runs its own copy of the logger. Annoying. I hope I’ve been missing something.
The logging mechanism works this way to be stable: logs are only lost if both runsv and the log service die.
Another thing about separate logging services is that stdout/stderr are not necessarily tagged; adding all this stuff to runsv would just bloat it.
There is definitely room for improvement, as logger(1) has been broken for some time in the way Void uses it at the moment (you can blame systemd for that).
My idea to simplify logging services and centralize how logging is done can be found here: https://github.com/voidlinux/void-runit/pull/65.
For me the ability to exec svlogd(8) from vlogger(8) to have a more lossless logging mechanism is more important than the main functionality of replacing logger(1).
Instead you can write the dependency directly into the service file in the form of a “start this service” request
But that neither solves starting daemons in parallel, nor starting them at all if they are run in the ‘wrong’ order. Depending on the network being set up, for example, adds complexity to each of those shell scripts.
I’m of the opinion that a DSL of whitelisted items (systemd) is much nicer to handle than writing shell scripts, along with standardized commands instead of having to know which services accept ‘reload’ vs ‘restart’ or some other variation in commands; those kinds of niceties are gone when each shell script is an interface of its own.
The runit/daemontools philosophy is to just keep trying until something finally runs. So if the order is wrong, presumably the service dies if a dependent service is not running, in which case it’ll just get restarted. So eventually things progress towards a functioning state. IMO, given that a service needs to handle the services it depends on crashing at any time anyways to ensure correct behaviour, I don’t feel there is significant value in encoding this in an init system. A dependent service could also be moved to another machine, in which case this wouldn’t work at all.
It’s the same philosophy as network-level dependencies. A web app that depends on a mail service for some operations is not going to shutdown or wait to boot if the mail service is down. Each dependency should have a tunable retry logic, usually with an exponential backoff.
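That kind of retry logic is small enough to live in the start script itself. A hypothetical sketch in shell (the helper name and the 60-second cap are made up):

```shell
# Hypothetical helper: run a check command until it succeeds,
# doubling the delay between attempts up to a 60-second cap.
retry_with_backoff() {
    delay=1
    while ! "$@"; do
        echo "not ready, retrying in ${delay}s" >&2
        sleep "$delay"
        delay=$((delay * 2))
        [ "$delay" -gt 60 ] && delay=60
    done
}

# e.g. wait for a hypothetical mail service before proceeding:
# retry_with_backoff nc -z mail.example.com 25
```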
But that neither solves starting daemons in parallel, or even at all, if they are run in the ‘wrong’ order.
That was my initial thought, but it turns out the opposite is true. The services are retried until they work. Things definitely run in parallel: there is no ‘exit’ in these scripts, so there is no way of running them in a linear (non-parallel) fashion.
Ignoring the theory: void’s runit provides the second fastest init boot I’ve ever had. The only thing that beats it is a custom init I wrote, but that was very hardware (ARM Chromebook) and user specific.
Dependency resolving on daemon manager level is very important so that it will kill/restart dependent services.
runit and s6 also don’t support cgroups, which can be very useful.
Dependency resolving on daemon manager level is very important so that it will kill/restart dependent services
Why? The runit/daemontools philosophy is just to try to keep something running forever, so if something dies, just restart it. If one restarts a service, then either those that depend on it will die or they will handle it fine and continue with their lives.
either those that depend on it will die or they will handle it fine
If they die, and are configured to restart, they will keep bouncing up and down while the dependency is down? I think having dependency resolution is definitely better than that. Restart the dependency, then the dependent.
It’s a computer, it’s meant to do dumb things over and over again. And presumably that faulty component will be fixed pretty quickly anyways, right?
It’s a computer, it’s meant to do dumb things over and over again
I would rather have my computer do less dumb things over and over personally.
And presumably that faulty component will be fixed pretty quickly anyways, right?
Maybe; it depends on what went wrong precisely, how easy it is to fix, etc. We’re not necessarily just talking about standard daemons - plenty of places run their own custom services (web apps, microservices, whatever). The dependency tree can be complicated. Ideally once something is fixed everything that depends on it can restart immediately, rather than waiting for the next automatic attempt which could (with the exponential backoff that proponents typically propose) take quite a while. And personally I’d rather have my logs show only a single failure rather than several for one incident.
But, there are merits to having a super-simple system too, I can see that. It depends on your needs and preferences. I think both ways of handling things are valid; I prefer dependency management, but I’m not a fan of Systemd.
I would rather have my computer do less dumb things over and over personally.
Why, though? What’s the technical argument? daemontools (and I assume runit) sleeps 1 second between retries, which for a computer is basically equivalent to being entirely idle. It seems to me that a lot of people just get a bad feeling about running something that will immediately crash.
Maybe; it depends on what went wrong precisely, how easy it is to fix, etc. We’re not necessarily just talking about standard daemons - plenty of places run their own custom services (web apps, microservices, whatever).
What’s the distinction here? Also, with microservices the dependency graph in the init system almost certainly doesn’t represent the dependency graph of the microservice as it’s likely talking to services on other machines.
I think both ways of handling things are valid
Yeah, I cannot provide an objective argument as to why one should prefer one to the other. I do think this is a nice little example of the slow creep of complexity in systems: adding a pinch of dependency management here because it feels right, a teaspoon of plugin system there because we want things to be extensible, and a deciliter of proxies everywhere because of microservices. I think it’s worth stepping back every now and again to consider where we want to spend our complexity budget. I, personally, don’t want to spend it on the init system, so I like the simple approach here (especially since with microservices the init dependency graph doesn’t reflect the reality of the service anymore). But as you point out, positions may vary.
Why, though? What’s the technical argument
Unnecessary wakeup, power use (especially for a laptop), noise in the logs from restarts that were always bound to fail, unnecessary delay before restart when restart actually does become possible. None of these arguments are particularly strong, but they’re not completely invalid either.
We’re not necessarily just talking about standard daemons …
What’s the distinction here?
I was trying to point out that we shouldn’t make too many generalisations about how services might behave when they have a dependency missing, nor assume that it is always ok just to let them fail (edit:) or that they will be easy to fix. There could be exceptions.
Perhaps wandering off topic, but this is a good way to trigger even worse cascade failures.
e.g., an RSS reader that falls back to polling every second if it gets anything other than a 200. I retire a URL, and now a million clients start pounding my server with a flood of traffic.
There are a number of local services (time, dns) which probably make some noise upon startup. It may not annoy you to have one computer misbehave, but the recipient of that noise may disagree.
In short, dumb systems are irresponsible.
But what is someone supposed to do? I cannot force a million people using my RSS tool not to retry every second on failure. This is just the reality of running services. Not to mention all the other issues that come up with not being in a controlled environment and running something loose on the internet such as being DDoS’d.
I think you are responsible if you are the one who puts the dumb loop in your code. If end users do something dumb, then that’s on them, but especially, especially, for failure cases where the user may not know or observe what happens until it’s too late, do not ship dangerous defaults. Most users will not change them.
In this case we’re talking about init systems like daemontools and runit. I’m having trouble connecting what you’re saying to that.
N.B. bouncing up and down ~= polling. Polling always intrinsically seems inferior to event based systems, but in practice much of your computer runs on polling perfectly fine and doesn’t eat your CPU. Example: USB keyboards and mice.
USB keyboard/mouse polling doesn’t eat CPU because it isn’t done by the CPU. IIUC the USB controller generates an interrupt when data is received. I feel like this analogy isn’t a good one (regardless). Checking a USB device for a few bytes of data is nothing like (for example) starting a Java VM to host a web service which takes some time to read its config and load its caches only to then fall over because some dependency isn’t running.
Sleep 1 and restart is the default. It is possible to get different behavior by adding a ./finish script alongside the ./run script.
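For example, a hypothetical ./finish script that lengthens the pause after a failed run (runsv executes ./finish, if present, each time ./run exits, passing the exit code as $1):

```shell
#!/bin/sh
# /etc/sv/myservice/finish -- hypothetical; runsv runs this whenever
# ./run exits. $1 is the exit code of ./run (-1 if killed by a signal).
# Sleeping here stretches the restart delay beyond the default 1s.
[ "$1" != 0 ] && sleep 10
exit 0
```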
I really like runit on Void. I do like the simplicity of systemd unit files from a package manager perspective, but I don’t like how systemd tries to do everything (consolekit/logind, mounting, xinetd, etc.)
I wish it just did services and dependencies. Then it’d be easier to write other systemd implementations, with better tooling (I’m not a fan of systemctl or journalctl’s interfaces).
You might like my own dinit (https://github.com/davmac314/dinit). It somewhat aims for that - handle services and dependencies, leave everything else to the pre-existing toolchain. It’s not quite finished but it’s becoming quite usable and I’ve been booting my system with it for some time now.
I’d make the argument that in all circumstances where you need this you could probably run the command yourself. Thoughts?
It’s nice to be able to reload a well-written service without having to look up what mechanism it offers, if any.
Runit’s sv(8) has a reload command which sends SIGHUP by default.
The default behavior (for each control command) can be changed in runit by creating a small script under $service_name/control/$control_code.
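For instance, a hypothetical control script that makes `sv reload` (control code h, normally SIGHUP) check the config first; per runsv(8), exiting 0 tells runsv not to send the default signal itself:

```shell
#!/bin/sh
# /etc/sv/nginx/control/h -- hypothetical; runsv executes this instead
# of sending SIGHUP when "sv reload nginx" is issued.
nginx -t || exit 1                 # refuse to reload a broken config
kill -HUP "$(cat /run/nginx.pid)"  # pid file path is an assumption
```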
I was thinking of the difference between ‘restart’ and ‘reload’.
Reload is only useful in a narrow set of circumstances. I have not been in environments where it is necessary; restart has always done me well. I assume the primary use cases are high-uptime web servers and databases.
My thoughts were along the lines of: if you’re running a high-uptime service, you probably don’t mind the extra effort of typing ‘killall -HUP nginx’ instead of ‘systemctl reload nginx’. In fact I’d prefer that over the risk of the init system re-interpreting a reload as something else, like reloading other services too, and bringing down my uptime.
I hoped it would do something nice like redirect stdout and stderr of these supervised processes by default. Instead you manually have to create a new file and folder for each service that explicitly runs its own copy of the logger. Annoying. I hope I’ve been missing something.
I used to use something like logexec for that, to “wrap” the program inside the runit script, and send output to syslog. I agree it would be nice if it were builtin.
Why did this end up as a Slack channel rather than IRC? Many, if not most, of these tools have an IRC presence, and frankly I just like not having to run multiple Slack instances, which have numerous drawbacks.
IRC works really well when you have a core team who spend a lot of time in a topic-specific channel where visitors can stop by and engage with them. That’s great for channels focused on individual open source tools, as you point out. One of Slack’s strengths for community channels is that it handles low volume conversation better because you can check it intermittently and catch up on what you missed. Being able to respond to questions that were asked when you weren’t around is a big plus.
Our intention here definitely wasn’t to try to take the place of project-specific IRC channels at all. We just felt that there wasn’t really place to talk about experiences with different tools and tactics at a higher level. We get emails from people all the time about topics like this, and our hope is that making these sort of discussions public will be helpful to the community.
It’s absolutely not the case that I fear you’re trying to displace people from project channels to Slack; apologies if it came across that way. It’s more frustration that, for a community that’s fairly well established on IRC, there’s pressure to fragment across platforms.
While you present the argument that Slack is better for low-traffic communities, I’m not sure I agree. You mainly rely on these points:
Both of these points are well covered by IRC. While it’s true that the core protocol doesn’t cover them, it’s now practically standard to use a bouncer service that provides this, or to self-host your own. A list of ZNC-specific providers can be found here: https://wiki.znc.in/Providers, and additionally there are services like https://www.irccloud.com/ that provide the entire system, including a web client.
Slack has some downsides, like really poor community management, instead deferring to out-of-band systems to deal with things like harassment, essentially showing its colours as a business service offering. For example, there’s no ability for an individual user to ignore another user they don’t get along with or are being harassed by; Slack instead suggests this be resolved with HR policies.
“Slack is better than IRC” is like saying “gmail is better than SMTP”.
Slack owns everything:
People who appreciate running the programs they use themselves go to IRC (getting their hands dirty).
People who prefer not to be involved in maintaining anything go to Slack (living in the “cloud”).
This is how I get my hands dirty: on a server: $ abduco -A irc weechat.
You can even have this in a laptop .bashrc:
alias irc='ssh user@your-server.tld abduco -A irc weechat'
And then you have the same feature of “being able to respond to questions that were asked when you weren’t around”. :)
This reminds me to strive for efficient (and hopefully simple) designs that solve the problem, instead of doing the extra work to cover 30% more use cases.
Once you have found a simple design and documented it well, other developers will be able to do it again without too much effort.
Regardless of whether you like daemontools or hate it, you might be able to do a supervision suite in this style, like other people have done.
When people tell me to stop using the only cryptosystem in existence that has ever - per the Snowden leaks - successfully resisted the attentions of the NSA, I get suspicious, even hostile. It’s triply frustrating when, at the end of the linked rant, they actually recognize that PGP isn’t the problem:
It also bears noting that many of the issues above could, in principle at least, be addressed within the confines of the OpenPGP format. Indeed, if you view ‘PGP’ to mean nothing more than the OpenPGP transport, a lot of the above seems easy to fix — with the exception of forward secrecy, which really does seem hard to add without some serious hacks. But in practice, this is rarely all that people mean when they implement ‘PGP’.
There is a lot wrong with the GPG implementation and a lot more wrong with how mail clients integrate it. Why would someone who recognises that PGP is a matter of identity for many of its users go out of their way to express their very genuine criticisms as an attack on PGP? If half the effort that went into pushing Signal was put into a good implementation of OpenPGP following cryptographic best practices (which GPG is painfully unwilling to be), we’d have something that would make everyone better off. Instead these people make it weirdly specific about Signal, forcing me to choose between “PGP” and a partially-closed-source centralised system, a choice that’s only ever going to go one way.
I am deeply concerned about the push towards Signal. I am not a cryptographer, so all I can do is trust other people that the crypto is sound, but as we all know, the problems with crypto systems are rarely in the crypto layers.
On one hand we know that PGP works, on the other hand we have had two game over vulnerabilities in Signal THIS WEEK. And the last Signal problem was very similar to the one in “not-really-PGP” in that the Signal app passed untrusted HTML to the browser engine.
If I were a government trying to subvert secure communications, investing in Signal and tarnishing PGP is what I would try to do. What better strategy than to push everyone towards closed systems where you can’t even see the binaries, and that are not under the user’s control? The exact same devices with GPS and under constant surveillance.
My mobile phone might have much better security mechanisms in theory, but I will never know for sure, because neither I nor anyone else can really check. In the meantime, we know for sure what a privacy disaster these mobile phones are. We also know for sure, from the various leaks, that governments implant malware on mobile devices, and we know that both manufacturers and carriers can install software, or updates, on devices without user consent.
Whatever the PGP replacement might be, moving to the closed systems that are completely unauditable and not under the user’s control is not the solution. I am not surprised that some people advocate for this option. What I find totally insane is that a good majority of the tech world finds this position sensible. Just find any Hacker News thread and you will see that any criticism towards Signal is downvoted to oblivion, while the voices of “experts” preach PGP hysteria.
PGP will never be used by ordinary people. It’s too clunky for that. But it’s used by some people very successfully, and if you try to dissuade this small but very important group of people from PGP towards your “solution”, I can only suspect foul play. Signal does not compete with PGP. It’s a phone chat app. As Signal does not compete with PGP, why spend this insane amount of effort to convince an insignificant number of people to drop PGP for Signal?
I can’t for the life of me imagine why a CIA-covert-psyops-agency funded walled garden service would want to push people away from open standards to their walled garden service.
Don’t get me wrong, Signal does a lot of the right things but a lot of claims are made about it implying it’s as open as PGP, which it isn’t.
What makes Signal a closed system?
Not Signal, iOS and Android, and all the secret operating systems that run underneath.
As for Signal itself, moxie forced F-Droid to take down Signal because he didn’t want other people to compile it. He said he wanted people to use only his binaries, which, even if you are OK with it in principle, on Android mandates the use of the Google Play Store. If this is not a dick move, I don’t know what is.
I’m with you on Android and especially iOS being problematic. That being said, Signal has been available without Google Play Services for a while now. See also the download page; I couldn’t find it linked anywhere on the site but it is there.
However, we investigated this for PRISM Break, and it turns out that there’s a single Google binary embedded in the APK I just linked to. Which is unfortunate. See this GitHub comment.
because he didn’t want other people to compile Signal. He said he wanted people only to use his binaries
Ehm… he chose the wrong license in this case.
As I understand it, the case against PGP is not with PGP in and of itself (the cryptography is good), but with the ecosystem, that is, the toolchain in which one uses it. Because it is advocated for use in email, and securing email is, it is argued, nigh on impossible, it is irresponsible to recommend PGP-encrypted email for general consumption, especially for journalists.
That is, while it is possible to use PGP via email effectively, it is incredibly difficult and error-prone. These are not qualities one wants in a secure system and thus, it should be avoided.
But the cryptography isn’t good. The case in the blog post intentionally sets aside all of the crypto badness. Example: the standard doesn’t allow any hash function other than SHA-1, which has been proven broken. The protocol itself disallows flexibility here to avoid ambiguity, and that means there is no way to change it significantly without breaking compatibility.
And so far, it seems, people wanted compatibility (or switched to something else, like Signal)
Until this better implementation appears, an abstract recommendation for PGP is a concrete recommendation for GPG.
Imagine if half the effort spent saying PGP is just fine went into making PGP just fine.
I guess that’s an invitation to push https://autocrypt.org/
When people tell me to stop using the only cryptosystem in existence that has ever - per the Snowden leaks - successfully resisted the attentions of the NSA, I get suspicious, even hostile.
Without wanting to sound rude, this is discussed in the article:
The fact of the matter is that OpenPGP is not really a cryptography project. That is, it’s not held together by cryptography. It’s held together by backwards-compatibility and (increasingly) a kind of an obsession with the idea of PGP as an end in and of itself, rather than as a means to actually make end-users more secure.
OpenPGP might have resisted the NSA, but that’s not a unique property. Every modern encryption tool or standard has to do that or it is considered broken.
I think most people, unless they are heavily involved in security research, don’t know how encryption/auth/integrity protection are layered. There are a lot of layers in what people just want to call “encryption”. OpenPGP uses the same standard crypto building blocks as everything else, and unfortunately putting those lower-level primitives together is fiendishly difficult. Life also went on since OpenPGP was created, meaning that those building blocks, and how to put them together, have changed in the last few decades; cryptographers learned a lot.
One of the most important things that cryptographers learned is that the entire ecosystem, the system as a whole, counts. Even Snowden was talking about this when he said that the NSA just attacks the endpoints, where most of the attack surface is. So while the cryptography bits in the core of the OpenPGP standard are safe, if dated, that’s not the point. Reasonable people can’t really use PGP safely, because that would require a library that implements the dated OpenPGP standard in a modern way, clients that interface with that modern library in a safe and thought-through way, and users that know enough about the system to satisfy its safety requirements (which are large for OpenPGP).
Part of that is attitude, most of the existing projects for implementing the standard just don’t seem to take a security-first stance. Who is really looking towards providing a secure overall experience to users under OpenPGP? Certainly not the projects bickering where to attribute blame.
I think people kept contrasting this with Signal because Signal gets a lot of things right in contrast. The protocol is modern and it’s not impossibly demanding on users (ratcheting key rotation, anyone?), and there is no security blame game between Signal the desktop app vs Signal the mobile app vs the protocol when a security vulnerability happens; OWS just fixes it with little drama. Of course Signal-the-app has downsides too, like the centralization, but that seems like a reasonable choice. I’d rather have a clean protocol operating through a central server that most people can use than an unusable (from the PoV of most users) standard/protocol. We’re not there yet where we can have all of decentralization, security, and ease of use.
OpenPGP might have resisted the NSA, but that’s not a unique property. Every modern encryption tool or standard has to do that or it is considered broken.
One assumes the NSA has backdoors in iOS, Google Play Services, and the binary builds of Signal (and any other major closed-source crypto tool, at least those distributed from the US) - there’s no countermeasure and virtually no downside, so why wouldn’t they?
there is no security blame game between Signal the desktop app vs signal the mobile app vs the protocol when a security vulnerability happens, OWS just fixes it with little drama.
Not really the response I’ve seen to their recent desktop-only vulnerability, though I do agree with you in principle.
Signal Android has been reproducible for over two years now. What I don’t know is whether anyone has independently verified that it can be reproduced. I also don’t know whether the “remaining work” in that post was ever addressed.
The process of verifying a build can be done through a Docker image containing an Android build environment that we’ve published.
Doesn’t such a process assume trust in whoever created the image (and in whoever created each of the layers it was based on)?
A genuine question, as I see the convenience of Docker and how it could lead to more verifications; on the other hand, it creates a single point of failure that is easier to attack.
That question of trust is the reason why, if you’re forced to use Docker, you should build every layer yourself from the most trustworthy sources. It isn’t even hard.
the only cryptosystem in existence that has ever - per the Snowden leaks - successfully resisted the attentions of the NSA
I’m pretty ignorant on this matter, but do you have any link to share?
There is a lot wrong with the GPG implementation
Actually, I’d like to read the opinion of GPG developers here, too.
Everyone makes mistakes, but I’m pretty curious about the technical allegations: it seems like they did not consider the issue something to be fixed in their own code.
This might have pretty good security reasons.
To start with, you can’t trust the closed-source providers, since the NSA and GCHQ are throwing $200+ million at both finding 0-days and paying providers to put backdoors in. Covered here. From there, you have to assess open-source solutions. There are a lot of ways to do that. However, the NSA sort of did it for us in slides where GPG and Truecrypt were the worst things for them to run into. Snowden said GPG works, too. He’d know, given he had access to everything they had that worked and didn’t. He used GPG and Truecrypt. The NSA had to either ignore those people or forward them to TAO for a targeted attack on browser, OS, hardware, etc. The targeted-attack group only has so much personnel and time. So, this is a huge increase in security.
I always say that what stops the NSA should be good enough to stop the majority of black hats. So, keep using and improving what is a known-good approach. I further limit risk by just GPG-encrypting text or zip files that I send/receive over untrusted transports, using strong algorithms. I exchange the keys manually. That means I’m down to trusting the implementation of just a few commands. Securing GPG in my use case would mean stripping out anything I don’t need (most of GPG), followed by hardening the remaining code manually or through automated means. It’s a much smaller problem than clean-slate, GUI-using, encrypted sharing of various media. Zip can encode anything. Give the files boring names, too. My untrusted email provider is Swiss, in case that buys anything against any type of attacker.
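That encrypt-and-send workflow is only a couple of commands. A sketch using symmetric encryption, since the keys are exchanged manually (filenames and passphrase handling are my own choices; `--pinentry-mode loopback` assumes GnuPG 2.1+):

```shell
# Sender: encrypt a file with a strong symmetric cipher; the
# passphrase in ./pass was exchanged out of band. The output gets
# a deliberately boring name.
printf 'secret notes\n' > notes.txt
printf 'correct horse battery staple' > pass
gpg --batch --pinentry-mode loopback --passphrase-file pass \
    --symmetric --cipher-algo AES256 -o report.dat notes.txt

# Recipient: decrypt with the same passphrase.
gpg --batch --pinentry-mode loopback --passphrase-file pass \
    -o notes.out -d report.dat
```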
As far as the leaks, I had a really hard time getting you the NSA slides. Searching with specific terms in either DuckDuckGo or Google used to take me right to them. They don’t anymore. I’ve had to fight with them, narrowing terms down with quotes, trying to find any Snowden slides, much less the good ones. I’m getting Naked Security, FramaSoft, pharma spam, etc. even on pages 2 and 3, but no Snowden slides past a few recurring ones. Even mandating the Guardian in the terms often didn’t produce more than one Guardian link. Really weird that both engines’ algorithms are suppressing all the important stuff despite really focused searches. Although I’m not putting on my conspiracy hat yet, the relative inaccuracy of Google’s results, compared to about any other search I’ve done over the past year for both historical and current material, is a bit worrying. It usually has excellent accuracy.
NSA Facts is still up if you want the big picture about their spying activities. OK, after spending an hour, I’m going to have to settle for giving you this presentation calling TAILS or Truecrypt a catastrophic loss of intelligence. TAILS was probably temporary, but the TrueCrypt derivatives are worth investing effort in. Anyone else have a link to the GPG slide(s)? @4ad? I’m going to try to dig it all up out of old browser or Schneier conversations in the near future. We need at least those slides so people know what was NSA-proof at the time.
Why would TAILS be temporary? If anything this era of cheap devices makes it more practical than ever.
It was secure at the time, since neither mass collection nor TAO teams could hack it. Hacking it requires one or more vulnerabilities in the software it runs. The TAILS software includes complex components such as Linux and a browser with a history of vulnerabilities. We should assume that security was temporary and/or would disappear if usage went up enough to budget more attacks its way.
I’d still trust it more than TrueCrypt just due to being open-source.
What would it take to make an adequate replacement for TAILS? I’m guessing some kind of unikernel? Are there any efforts in that direction?
Well, you have to look at the various methods of attack to assess this:
Mass surveillance attempting to read traffic through protocol weaknesses with or without a MITM. They keep finding these in Tor.
Attacks on the implementation of Tor, the browser, or other apps. These are plentiful since it’s mostly written in a non-memory-safe way. Also, having no covert-channel analysis on components processing secrets means there are probably plenty of side channels. There are also increasingly new attacks on hardware, with a network-oriented one even being published.
Attacks on the repo, or otherwise MITMing the binaries. I don’t think most people are checking for that. The few that do would make attackers cautious about being discovered. A deniable way to see who is who might be a bitflip or two that causes the security check to fail. Put it in random, non-critical spots to make it look like an accident during transport. Whoever re-downloads doesn’t get hit with the actual attack.
So, the OS and apps have to be secure, with some containment mechanisms for any failures. The protocol has to work. These must be checked against any subversion in the repo or during transport. All this together in a LiveCD. I think it’s doable, minus the anonymity protocol working, which I don’t trust. So, I’ve usually recommended dedicated computers bought with cash (esp. netbooks), WiFi adapters, cantennas, getting used to human patterns in those areas, and spots with minimal camera coverage. You can add Tor on top of it, but the NSA focuses on that traffic. They probably don’t pay attention to the average person on WiFi using generic sites over HTTPS.
Sure. My question was more: does a live CD project with that kind of aim exist? @josuah mentioned heads which at least avoids the regression of bringing in systemd, but doesn’t really improve over classic tails in terms of not relying on linux or a browser.
An old one named Anonym.OS was an OpenBSD-based Live CD. That would’ve been better on the code-injection front at least. I don’t know of any current offerings. I just assume they’ll be compromised.
I think that is the reason why https://heads.dyne.org/ was made: replacing the complex software stack with a simpler one, with the aim of avoiding security risks.
Hmm. That’s a small start, but still running Linux (and with a non-mainstream patchset even), I don’t think it answers the core criticism.
Thanks for this great answer.
Really weird that both engines’ algorithms are suppressing all the important stuff despite really-focused searches.
If you can share a few of your search terms I guess that a few friends would find them pretty interesting, with their research.
For sure this teaches us a valuable lesson. The web is not a reliable medium for free speech.
From now on, I will download interesting documents about such topics from the internet and donate them (with other, more neutral DVDs) to small public libraries around Europe.
I guess that slowly, people will go back to librarians if search engines don’t search carefully enough anymore.
It was variations, with and without quotes, on terms I saw in the early reports. They included GPG, PGP, Truecrypt, Guard, Documents, Leaked, Snowden, and catastrophic. I at least found that one report that mentions it in combination with other things. I also found, but didn’t post, a PGP intercept that was highly classified but said they couldn’t decrypt it. Finally, Snowden kept maintaining that good encryption worked, GPG being one he used personally.
So, we have what we need to know. From there, just need to make the programs we know work more usable and memory safe.
On my Mac, I use MacPass (compatible with KeePass .kdb files) and I sync with Dropbox. On Android, I use KeePassDroid and Dropbox. I also use the password manager built into Chrome to sync passwords between my Mac and Android (to log in to sites like Lobsters).
If you ever leave your computer unlocked, it takes me 4 seconds to retrieve a password from Chrome’s built-in password manager:
3-dot option button > settings > search for “pass” > click on the eye to see the password.
I have stolen a password this way, and anyone who can use a GUI can do it.
Just to make it clear:
According to your screenshot, it looks like you are not using a Mac; but at least, it looks like the threat you mentioned doesn’t exist on a Mac.
Since you knew about the vulnerability, you could block the threat.
I’d recommend this way of doing things for people storing passwords in Chrome on a Mac. It looks like they chose great defaults for this.
For websites: Firefox Sync :-) Everything that isn’t a website or is important enough to have more than 3 copies (laptop, workstation, phone) lives in a keepass file, hosted on a nextcloud instance.
Do note that Firefox Sync has a pretty nasty security flaw: your passwords are ultimately protected by your Firefox Account password — so you need to make sure that it’s a high entropy one (like 52ICsHuwrslpDl6fbjdvtv, not like correct horse battery staple). You also need to make sure that you never log into your Firefox Account online: Mozilla serve the login UI with JavaScript, which means that they can serve you malicious JavaScript which steals your password (this is worse than a malicious browser, because someone might actually notice a malicious browser executable, but the odds of detecting a single malicious serve of a JavaScript resource are pretty much nil).
I use pass, with git-remote-gcrypt to encrypt the pass repo itself (unfortunately, pass has a security flaw in that it doesn’t encrypt filenames).
I’m pretty sure the password isn’t used directly but derived into a crypto key using PBKDF2 on the client.
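That matches how password-based key derivation generally works; here is a minimal sketch with Python’s standard library (the salt, iteration count, and output length are illustrative, not Mozilla’s actual parameters):

```python
import hashlib

def derive_key(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Stretch a password into a 32-byte key with PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac(
        "sha256", password.encode("utf-8"), salt, iterations, dklen=32
    )

# Only the derived key would ever leave the client, never the password itself.
key = derive_key("correct horse battery staple", b"example-salt")
```

The iterations exist purely to make each guess expensive for an attacker who obtains the derived values.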
This does not protect you from physical access (if you ever leave your computer unlocked). It took me 10 seconds to discover that Firefox lets anyone see the plain-text password of every account.
True! IMHO physical access should be countered with something else: lock screens, hard-disk encryption, etc.
Yes, of course, if there is physical access there is not much hope left: even with SSH, if ssh-agent is running, or a terminal has a recent sudo, much damage can be done.
What did surprise me is how fast and easy it is to go straight to the password.
Yes, but that doesn’t add any entropy: if your password is ‘love123,’ it’s still low-entropy, even if it’s stretched.
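To make the entropy point concrete, a quick back-of-the-envelope in Python (assuming the attacker knows the alphabet and the length, which is the conservative assumption):

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Entropy of a uniformly random string: length * log2(alphabet size)."""
    return length * math.log2(alphabet_size)

# 'love123' drawn from lowercase + digits, 7 chars (and it isn't even random):
weak = entropy_bits(36, 7)      # ~36 bits
# a random 22-char alphanumeric string like '52ICsHuwrslpDl6fbjdvtv':
strong = entropy_bits(62, 22)   # ~131 bits
# Key stretching multiplies the attacker's cost per guess; it never adds to
# these numbers, so a ~36-bit password stays brute-forceable offline.
```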
Remember, too, that the client-side stretching is performed by JavaScript Mozilla delivers when you attempt to log in — as I noted, they could deliver malicious JavaScript at a whim (or under duress …).
I would not recommend storing any password “in the cloud” (Dropbox, hosted password managers), as it is heavily targeted by surveillance and attacks.
An advantage of git is that you can use it over SSH (again, I would not recommend GitHub for passwords).
Why not keep the passwords on a USB flash drive on your keyring (the physical one)? You then have all your keys in a safe place, with no leak possible except at the moment you want to log in. Then even if you lose your laptop, you do not lose any precious password.
Putting them on an encrypted file, drive, or partition might also be a more reasonable choice, as long as you know where your computer is going.
gpg encrypted file somewhere. With a simple grep script if I need a password, and a vim plugin to edit gpg files if I need to add/update something.
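A sketch of that setup in Python (the `gpg` invocation and the file name `passwords.gpg` are assumptions about a typical layout); the point is that the plaintext only ever lives in memory:

```python
import subprocess

def match_lines(text: str, pattern: str) -> list[str]:
    """The 'simple grep': case-insensitive substring match, line by line."""
    return [l for l in text.splitlines() if pattern.lower() in l.lower()]

def grep_passwords(path: str, pattern: str) -> list[str]:
    """Decrypt a gpg file and grep it, without writing plaintext to disk."""
    out = subprocess.run(
        ["gpg", "--quiet", "--decrypt", path],
        capture_output=True, check=True, text=True,
    )
    return match_lines(out.stdout, pattern)

# e.g. grep_passwords("passwords.gpg", "github")
```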
Always a little wary when this is the first line:
At Microsoft, the core of our vision is “Any Developer, Any App, Any Platform”
Orlly?
But I’m interested…
To be fair, that puts them miles ahead of almost everybody else working on platforms these days :(
All the action right now seems to be on “JS tool of the week” and “build for Android and iOS with one codebase”
I sincerely don’t understand this hate against Microsoft.
Yeah, sure, the Ballmer days sucked and they really screwed up. But since Satya Nadella took the role, there seems to have been quite a shift in the company’s mindset. Plus all the open-source things they’ve been doing in the past few years.
This is true in any language, unfortunately. Choice quote that resonates with anybody who’s worked with GTK: “if you just want a few buttons and some text, using GTK is like mowing a lawn with a helicopter.” (GTK is usable at all thanks entirely to glade.)
Absolutely doesn’t resonate with me. Came here to complain about this quote actually.
I don’t understand this reasoning. Having one toolkit that scales from a few buttons to GIMP/Inkscape is the point of any reasonable toolkit. And it’s not like GTK always requires a complicated setup or whatever. Creating a window is really simple. Adding a button looks like, well, adding a button.
By the way, the problem with OCaml is that no one made a working gobject-introspection library, and it’s stuck with GTK2 :(
I get it to some extent: some simple use cases are fine, but others are complex. For example, if you want a simple list or tree view, you essentially have to implement a bunch of MVC boilerplate to do so.
Unlike GTK (at least I think), some GUIs are built as a separate rendering program that communicates through a stream, so it is possible to swap that program for another. But does it solve the problem?
The web grew way too complex to permit a trivial implementation.
Ncurses somehow feels bloated too. Though it can be bypassed: ANSI escape sequences seem to be supported by most (if not all) terminal emulators.
The only exception I can think of is with Racket; writing cross-platform code that uses GTK is quite pleasant with that tool set. https://gitlab.com/technomancy/world-color/blob/master/world-color.rkt
You may want to change
\[033\[3;9H] into \033[3;9H and
/x1b[31m into \033[31m.
\033 (octal 33) is the same as \x1b (hexadecimal 1b).
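Easy to verify: octal 033 and hex 1b are two spellings of the same byte (ESC, decimal 27), so the resulting strings are identical. In Python:

```python
# Octal \033 and hex \x1b both denote the ESC byte (decimal 27).
assert "\033" == "\x1b"
assert ord("\033") == 27

# So these two ways of writing "red" are the same escape sequence:
red_octal = "\033[31m"
red_hex = "\x1b[31m"
assert red_octal == red_hex

print(red_octal + "red on an ANSI terminal" + "\033[0m")
```

The same equivalence holds in C string literals and in `printf(1)` in the shell.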
I think there’s blame to go around. For example, part of it is web designers trying to treat the web like a print medium and forcing devs to reproduce designs with pixel-perfect fidelity. If you care that much about pixels, you simply haven’t understood what a web page is.
Once I saw a designer who gave advice such as “internal margins always 5px”, “everything clickable this colour”, “no space between these kinds of elements”, along with illustrations and mockups to give an idea.
One can put these rules straight into a CSS stylesheet, attach it to any kind of HTML content, and then it works!
TL;DR: “we take your privacy and security very seriously” starts with “we take your privacy and security”
iOS and Apple being less evil because of their “Security White Papers”, why not; but quoting them as a decent alternative, I don’t know:
These Internet services have been built with the same security goals that iOS promotes throughout the platform. These goals include secure handling of data, whether at rest on the device or in transit over wireless networks; protection of users’ personal information; and threat protection against malicious or unauthorized access to information and services. Each service uses its own powerful security architecture without compromising the overall ease of use of iOS. – https://www.apple.com/business/docs/iOS_Security_Guide.pdf page 49
This reads like every company: they do their best not to let the data leak into the hands of evil hackers. That does not mean that they do not take a peek themselves.
Also, how can one quote macOS without quoting any BSD regarding privacy?
OK, now I have iOS 10, macOS, an Apple HomePod, and I am using Siri everywhere. I am protected by the Apple Privacy Policy, which says:
When we use data to create better experiences for you, we work hard to do it in a way that doesn’t compromise your privacy. One example is our pioneering use of Differential Privacy, where we scramble your data and combine it with the data of millions of others. So we see general patterns, rather than specifics that could be traced back to you. These patterns help us identify things like the most popular emoji, the best QuickType suggestions, and energy consumption rates in Safari. – https://www.apple.com/privacy/
So data are being gathered, analyzed, and used by Apple (but not by other companies).
Your iOS device can collect analytics about your iOS device and any paired Apple Watch and send it to Apple for analysis. The collected information does not identify you personally and can be sent to Apple only with your explicit consent. Analytics may include details about hardware and operating system specifications, performance statistics, and data about how you use your devices and applications. When it’s collected, personal data is either not logged at all, removed from reports before they’re sent to Apple, or protected by techniques such as Differential Privacy.
The information we gather from Differential Privacy helps us improve our services without compromising individual privacy. For example, in iOS 10, this technology helped improve Lookup Hints in Notes.
We now identify commonly used data types in the Health app and web domains in Safari that cause performance issues. This information will allow us to work with developers to improve your experience without revealing anything about your individual behavior.
If you give your explicit consent, Apple can improve intelligent features by analyzing how you use iCloud and the data from your account. Analysis happens only after the data has gone through privacy-enhancing techniques so that it cannot be associated with you or your account. – https://www.apple.com/privacy/approach-to-privacy/
So Apple collects data and uses it to improve their services without letting humans read the names, but machines can. Google does this too: no human ever reads the name of one of its clients, so privacy is preserved, right?
This also does not keep Apple from using the data to sell advertising themselves: the data will not be used by third-party companies, but the advertising itself will come from third-party companies.
Apple harnesses machine learning to enhance your experience — and your privacy. We’ve used it to enable image and scene recognition in Photos, and more. Now we’re allowing developers to use our frameworks to create powerful new app experiences that don’t require your data to leave your device. That means apps can analyze user sentiment, classify scenes, translate text, tag music, and more without putting your privacy at risk. – https://www.apple.com/lae/privacy/approach-to-privacy/
How can one put in the same sentence “analyze user sentiment” and “privacy”?
So that means Apple holds a whole lot of the user’s data and still has access to it. Yet again, it cannot do anything against the NSA:
U.S. National Security Orders demand that Apple provide information in response to U.S. National Security legal authorities. They are not counted as Device Requests or Account Requests. In the second half of 2016, Apple received between 5,750 and 5,999 National Security Orders. Apple reports National Security Orders to the extent allowed by law. Though we would like to be more specific, by law this is the most precise information we are currently allowed to disclose. – https://www.apple.com/lae/privacy/government-information-requests/
To me, privacy is not only about who has legal access to users’ data, but about where the data is in the first place.
Afterthought: while it is not yet satisfying, Apple’s position is still better than:
We collect information to provide better services to all of our users – from figuring out basic stuff like which language you speak, to more complex things like which ads you’ll find most useful, the people who matter most to you online, or which YouTube videos you might like. – https://policies.google.com/privacy
So I agree it fits as a “more private alternative to Google”.
Some of the ‘alternatives’ are a bit more iffy than others. For any service that you don’t have the source to or can’t self-host (telegram, protonmail, duckduckgo, mega, macOS, siri to name a few), you’re essentially trusting them to uphold their privacy policy and to respect your data (now, but also hopefully in the future).
And in some cases it seems to me that it’s little more than fancy marketing capitalizing on privacy-conscious users.
Telegram group messages aren’t even e2e encrypted, Telegram has access to full message content. The only thing Telegram is good at is marketing, because they’ve somehow convinced people they’re a secure messenger.
To be fair, they at least had the following going for them:
Maybe there was more, but these were the arguments I could think of on the spot. I agree that it isn’t enough, but it’s not like their claim was unsubstantiated. It just so happened that other services started adopting some of Telegram’s features, making them lose their edge over the competition.
Also the client UX is pretty solid imho. Bells and whistles are not too intrusive, and stuff works as you’d expect.
Regarding its security: the FAQ discusses which security model they offer in each chat mode.
https://github.com/nikitavoloboev/privacy-respecting/commit/3dfb4baf3d5fc9a90dd82c9fc41f898e0a04802e
Bit more accurate now, I hope?
I’m much less worried about the source code than I am the incentives of the organization behind the software. YMMV, of course.
Even if you have source code, it’s difficult to verify a service or piece of software (binary) matches that source code.
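Right; the comparison itself is trivial, and the hard part is getting a reproducible build so that a local compile is byte-identical to the shipped binary. A sketch of the trivial half (the file names are placeholders):

```python
import hashlib

def sha256sum(path: str) -> str:
    """Hash a file in chunks, like the sha256sum(1) tool."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Only meaningful if the build is reproducible:
# assert sha256sum("vendor-release.bin") == sha256sum("local-build.bin")
```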
Yes, but then if anything feels wrong, it becomes possible to find an alternative provider for the same software.
Still… Hard to beat the privacy of a hard drive at home accessed through SFTP.
I was checking email SaaS providers last weekend, as the privacy policy changes at my current provider urge me not to renew my subscription when it ends. I found mostly the same offers, and to be honest, none seemed convincing to me.
For example, the Tutanota offer seemed questionable: they keep me so secure that the email account can only be accessed by their own email client; no free/open protocol is available. Only their mail client can be used, and they use a proprietary encryption scheme for my own benefit… OK, it is open-sourced, but come on… I cannot export my data in a meaningful way to change providers. So what kind of encryption scheme is it? RSA-2048+AES, not the GPG/PGP “standards”, hosted in Germany, pretty much a surveillance state… This makes their claims questionable at the least.
I can’t really say it costs any effort at all if you are satisfied with the minimum. Here are the steps, as the author suggested:
So no apache module to setup? :P
No mention of gopher clients! How are you supposed to see other people’s posts? I found this one: http://gopher.quux.org:70/devel/gopher/Downloads/ which seems to work pretty well. I remember back in the day Firefox/Netscape used to support gopher:// URLs, but I’m pretty sure that’s no longer the case.
I use OverbiteFF on Firefox. Lynx also supports gopher. But where was the author’s gopher site? If it’s so easy (and it is [1]) why did he not do it himself? Seems odd.
[1] Not only do I run gopher but I wrote my own server, mainly to serve up my blog.
The original article is posted on gopher here: gopher://sdf.org/0/users/dbucklin/posts/how_gopher.txt
Lynx is a fantastic gopher browser and there are several new ones also in active development. There’s sacc(1) from the folks at bitreich.org and also VF-1 if you prefer more of a REPL style interface.
I’m going to take this rare opportunity to plug my gopher client: https://github.com/enkiv2/misc/blob/master/ncgopher.py – not because it’s particularly good, but because it’s a good illustration of how straightforward a featureful gopher client is to write.
I’m aware of a couple people on mastodon making much more polished & featureful clients. I can’t remember their names offhand, unfortunately.
You can use elinks, lynx, cgo, sacc (that you can try via ssh at ssh://kiosk@bitreich.org), clic, curl to download…
Most browsers can start an external program after downloading a file (xdg-open by default). Gopher has text menus but is not text-only.
Even plain netcat/telnet works, given how simple the protocol is. If all you want is to get a document from gopher: printf '/0/%s\r\n' "$url" | nc "$host" 70 > file.
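The same idea works in a few lines of Python with raw sockets, since the whole protocol is “send the selector plus CRLF, read until the server closes” (the host and selector below are placeholders):

```python
import socket

def gopher_request(selector: str) -> bytes:
    """A gopher request is just the selector followed by CRLF (RFC 1436)."""
    return selector.encode("utf-8") + b"\r\n"

def gopher_fetch(host: str, selector: str = "", port: int = 70) -> bytes:
    """Fetch one gopher document: write the request, read until EOF."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(gopher_request(selector))
        chunks = []
        while data := sock.recv(4096):
            chunks.append(data)
        return b"".join(chunks)

# e.g. gopher_fetch("sdf.org", "/users/dbucklin/posts/how_gopher.txt")
```

An empty selector requests the server’s root menu.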
Firefox dropped the gopher:// protocol support. moz://a :/
Indeed it isn’t (and I think even the Firefox add-ons that added back support don’t work anymore…)
Haiku’s network protocol client layer has first-class Gopher support, and since our WebKit port uses our internal protocol stack, you can browse Gopher in WebPositive.
“You are not an individual” kinds of messages can be offensive, but this time it seems it is for a noble cause: making people react and claim their individuality, rejecting systems that abuse them.
Even I was offended, as indeed I use the web from time to time; I connect to servers (via a web browser) without a VPN, so I give them my address, and cross-referencing enables companies to build an accurate profile of me.
Even the least “techie” person uses the web out of necessity and is affected.
The presence of advertising everywhere means it does actually work; otherwise, companies would have given up. So let’s admit that we are all guided by what advertising tells us, making us nothing more than data on an ad-sense hard drive, evolving according to the messages that appear in a small square to the right of the article we are reading.
Thanks for the thoughtful reply. Yes, I intended “you are not an individual” to be provocative. It’s built into the way the industry even talks. “We can target individual profiles,” “We have a database of 50 million profiles,” etc. Switching from “people” to “profile” makes it easier to do things that are morally questionable.
A hub for getting started with gopher is the “gopher lawn”, which is a hand-made index of many things you can find on Gopher.