Global warming is important, but realistically we can’t address it until we have regained political stability (and significantly improved on the pre-Trump status quo). Goals for the next 10 years are:
If I can make impacts on longer term issues during that time, great, but it’s hard to think about right now.
So, essentially you’re saying that since Trump was elected we are collectively incapable of doing anything but running in circles shouting about imminent fascism? Any efforts to improve technology wrt. environmental impact cannot realistically be expected to succeed, because politics? Seems like a terrible, self-defeating attitude to me.
Global warming is not a technological problem insofar as you can’t just invent a widget to solve global warming. Even if your widget is something like “planetary scale air filter”, you will not be able to build or operate it without social/political backing. Also:
If I can make impacts on longer term issues during that time, great
It’s not a black and white issue, and it’s not going to be ‘solved’ by one major breakthrough. Their point is just that there’s no reason why the current political situation in the USA needs to bring everything to a halt. If you don’t have the time or headspace to deal with it right now, that’s absolutely okay (what matters is you’re aware of it)! Everyone’s circumstances are different, but collectively, we can’t afford to just put it on hold, and it doesn’t have to be at the expense of other important issues. If anything, I’d hope that it might have the power to bring people closer together (if a threat to humanity can’t do that, what can?).
Yes, you’re right that we can’t solve this problem with technical solutions. Other commenters notwithstanding..
What makes you think that? Climate change is in many ways a technical problem, how do you think we are going to solve it if not by adapting our technology?
Did mere technology or lobbying/sales decide what kinds of power plants will be all over many countries? Did technology itself create the disposable culture that adds to waste or did user demand? Is there a technological solution in sight for the methane emissions from cattle whose beef is in high demand? On other side, would we be storing endless amounts of data in these data centers appearing everywhere if technology didn’t make storage and computing so cheap? And is there a technological solution to avoiding them throwing that stuff away on a regular basis when customers want new stuff or manager want metrics to change? Is there a technological solution to getting people who neither care nor are legally required to care to stop doing damaging behaviors?
Sounds more like people-oriented decisions are causing most of the problem. Even if you create a beneficial technology, those people might create new practices or legislation that reduce or counter its benefits. Actually, that’s the default thing they do, and they’re doing it right now on a massive scale. I think we just got lucky with low-power chips/appliances, since longer-lasting batteries and cheaper utility bills are immediate benefits for most people that just happen to help the environment on the side.
It is obviously not merely technology that got us here. But these problems are all about technology on a fundamental level and if we want things to change, we need the tech that makes these changes viable. No point lobbying for an alternative that does not exist.
Sounds more like people-oriented decisions are causing most of the problem.
Always an interplay of technology- and people-oriented decisions. But changing technology is much easier compared to changing people, which has resulted in utter dystopia many times.
Even if you create a beneficial technology, those people might create new practices or legislation that reduce or counter its benefits.
Same with well-intentioned legislation. But companies have no intrinsic incentive not to use beneficial technology, only to inflate its impact for marketing purposes (like the faked car emissions). They do have an incentive to game legislation, otherwise there would be no point to that legislation (in general; individual cases might profit from being good examples).
This is pretty far off-topic, and most likely to result in a bunch of yelling back and forth between True Believers.
Flagged.
EDIT:
OP didn’t even bother to link to the claimed “increasing evidence”. This is a bait thread. Please don’t.
Shrug. I find the complete lack of political awareness at most of the tech companies I’ve worked at to be rather frustrating and I welcome an occasional thread on these topics in this venue.
It’s possible that many of your coworkers are more politically aware than they let on, and deliberately avoid talking about it in the workplace in order to avoid conflict with people who they need to work with in order to continue earning money.
All work is political. “Jesus take the wheel” for your impact on the world through your employment decisions is nihilistic.
Not trumpeting all your political views in the workplace does not mean completely ignoring political incentives for employment or other decisions. I’m not sure what made you think GP is advocating that.
Obviously “off-topic-ness” is subjective, but so far your prediction re: yelling back and forth hasn’t happened. Perhaps your mental model needs updating… maybe your colleagues are better equipped to discuss broad topics politely than you previously imagined?
Obviously “off-topic-ness” is subjective, but so far your prediction re: yelling back and forth hasn’t happened.
Probably because everyone on this site is good and right-thinking — or knows well enough to keep his head down and his mouth shut.
(Which has nothing to do with the truth of either side’s beliefs; regardless of truth, why cause trouble for no gain?)
To me, the people on this site definitely handle these discussions better. Hard to say how much better, given that’s subjective. Let’s try for objective criteria: there are fewer flame wars, more people sticking to the facts as they see them versus comments that are pure noise, and moderation techniques usually reduce the worst stuff without censoring or erasing civil dissenters. If those metrics are sound, then the Lobsters community is objectively better at political discussions than many sites.
Here are some articles for your reading.
https://www.newscientist.com/round-up/worse-climate/
Some of those articles link to other articles. You can get pretty deep if you want.
These all seem to say one thing: climate change is going to be worse faster than some other prediction said. But that does not even remotely address your claim that “organized human life might not be possible by the end of the century and possibly sooner”. What on earth makes you think you know anything about what conditions humans need to organize?
This is a good point. I guess my “evidence” would be past civilization collapse as a result of environmental destruction like what happened on Easter Island.
At this point you can’t easily lease non open-plan office space because landlords believe it’s undesirable.
The landlords believe it’s undesirable because they really appreciated everyone paying more for a less-developed space.
One nit: GPG 2.1+, I think, actually does start gpg-agent automatically on demand. But the main reason it does that is that, for some unknown reason, the GPG people decided to move most of the system’s functionality out of the gpg binary and into something like five daemons that you now have to have running all the time, which muddies the whole thing up. Why the old system was inadequate, other than being old and not shiny, is beyond me. (If someone knows, I would love to find out.)
That all being said I am very happy to see this experiment. PGP is awful. I’d love to see it finally die.
GPG contains lots of engineering effort to make sure that keys are not accidentally leaked to swap, and that applications using gpg under the hood have a safe method of doing so.
I haven’t seen the GPG threat model fully documented but it’s definitely much more involved than Enchive’s.
They split the program into communicating parts to help isolate the address spaces of the executables.
See the section of Neal’s talk starting at 0:31:45 - https://begriffs.com/posts/2016-11-05-advanced-intro-gnupg.html
I finally made time to look at this, thanks! AFAICT there are two reasons he gave (I watched until about 40 minutes left in the video; the dumb player UI won’t show me how far in that is):
Honestly I don’t see why either of those things couldn’t just be accomplished with the exact same architecture except using regular subprocesses instead of daemons. Can anyone give a reason that isn’t the case?
Reason 2 in particular seems like a lot of engineering to support a vaguely defined future scenario which may or may not show up, ever, and certainly does not exist now.
They broke a whole bunch of things when they moved to that new architecture, and much of it was never fixed. It’s one of the reasons I stopped using GnuPG directly.
Is there any well-known PGP alternative other than this? Based on history, I can’t blindly trust code that was written by one human being and isn’t battle-tested.
In any case, props to them for trying to start something. PGP does need to die.
A while ago I found http://minilock.io/, which sounds interesting as a PGP alternative. I haven’t used it myself, though.
Its primitives and an executable model were also formally verified by Galois using their SAW tool. Quite interesting.
This is mostly a remix, in that the primitives are copied from other software packages. It’s also designed to be run under very boring conditions: running locally on your laptop, encrypting files that you control, in a manual fashion (an attacker can’t submit 2^## plaintexts and observe the results), etc.
Not saying you shouldn’t be ever skeptical about new crypto code, but there is a big difference between this and hobbyist TLS server implementations.
I’m Enchive’s author. You’ve very accurately captured the situation. I didn’t write any of the crypto primitives. Those parts are mature, popular implementations taken from elsewhere. Enchive is mostly about gluing those libraries together with a user interface.
I was (and, to some extent, still am) nervous about Enchive’s message construction. Unlike the primitives, it doesn’t come from an external source, and it was the first time I’ve ever designed something like that. It’s easy to screw up. Having learned a lot since then, if I was designing it today, I’d do it differently.
As you pointed out, Enchive only runs in the most boring circumstances. This allows for a large margin of error. I’ve intentionally oriented Enchive around this boring, offline archive encryption.
I’d love if someone smarter and more knowledgeable than me had written a similar tool — e.g. a cleanly implemented, asymmetric archive encryption tool with passphrase-generated keys. I’d just use that instead. But, since that doesn’t exist (as far as I know), I had to do it myself. Plus I’ve become very dissatisfied with the direction GnuPG has taken, and my confidence in it has dropped.
I did invent the KDF, but it’s nothing more than SHA256 applied over and over on random positions of a large buffer, not really a new primitive.
It always bothers me when I see an update saying it needs over 80 megabytes for something doing crypto. Maybe no problems will show up that leak keys or cause a compromise, but that’s a lot of binary. I wasn’t giving it my main keypair either. So I still use GPG to encrypt/decrypt text or zip files I send over untrusted mediums. I use Keybase mostly for extra verification of other people and/or its chat feature.
Something based on nacl/libsodium, in a similar vein to signify, would be pretty nice. asignify does apparently use asymmetric encryption via cryptobox, but I believe it is also written/maintained by one person currently.
https://github.com/stealth/opmsg is a possible alternative.
Then there was Tedu’s reop experiment: https://www.tedunangst.com/flak/post/reop
I didn’t read this novel, but I do think systems like SVN are under-appreciated.
At one point I set up a dropbox like system for completely non-technical employees to use. They just had to edit their documents in MS-{word,powerpoint,excel}, hit save, and then hit sync or whatever in TortoiseSVN and I basically never got any “I deleted everything, what is a staging area? omg I am freaking out right now” sort of support requests. The system never degraded when they accidentally committed massive files to the repo and then went “oops” and deleted them.
Good experiences had by all. Git would not have worked.
I’d argue that what you want here is not a VCS and is in fact a document store. There are several good ones out there. I say manage your documents with tools for managing documents and manage your source with tools designed for that.
Out of genuine curiosity, do you have recommendations for a good document store that does versioning?
I’ve committed stuff in Git because it’s only been me caring about said stuff, and also sync things with ownCloud, but there could be something better, esp. wrt versioning and non-geeks.
I’ve not used any of the open source offerings - there are several and Google can help you find them. I’ve used commercial ones though like DocStar and … Oy I can’t remember the name of the other one :)
I never knew Dropbox does versioning, that’s pretty cool! I should have specified that I’d prefer something self-hosted/OSS but maybe I should look deeper into Dropbox.
Oh my god you’re right. Edit: none of this comment is sarcasm! genuinely dumbfounded
git checkout branch1
git treesame-commit branch2 # a complex program
is the same as
git checkout branch1
branch1ref=$(git log --format=%H -1)
git reset --hard branch2
git reset $branch1ref
git add .
git commit -m "treesame-commit" # no complex program
and this whole time I believed there was no way to do this!
In my literally 7 years of using my own custom tool that writes git commit objects by hand, how have I never realized that? :headdesk:
Of course, to make the latter really solid you’d also want a git clean -dffx, git submodule sync, git submodule update --init --recursive, etc. In some sense, it’s hard to beat the concreteness of writing out the exact tree hash you want into a commit object, so frankly I’ll probably still keep using my tool but I do feel a bit sheepish.
Another person on reddit pointed out git read-tree and git checkout-index, which can also solve the problem.
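For anyone curious, the read-tree route can be sketched end-to-end in a throwaway repo. This is a minimal demo (all file and branch names are made up), assuming only a stock git install:

```shell
#!/bin/sh
set -e
# Throwaway repo with two branches whose trees differ.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
echo one > a.txt && git add a.txt && git commit -qm base
main=$(git symbolic-ref --short HEAD)   # "master" or "main", depending on git version
git branch branch2
echo two > b.txt && git add b.txt && git commit -qm on-first-branch
git checkout -q branch2
echo three > c.txt && git add c.txt && git commit -qm on-branch2

# Back on the first branch, load branch2's tree into the index and
# commit it: the new commit's tree is identical to branch2's.
# (The working tree is left stale; a git checkout-index -f -a plus a
# git clean would sync it.)
git checkout -q "$main"
git read-tree branch2
git commit -qm treesame-commit

# The new commit's tree hash matches branch2's tree hash exactly.
test "$(git rev-parse 'HEAD^{tree}')" = "$(git rev-parse 'branch2^{tree}')" && echo treesame
```

Since `git commit` records whatever is in the index, no complex program is needed; `read-tree` just swaps the index wholesale.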
Nope! Just dumbfounded by my own stupidity. I agree though, I don’t know how to convey the dumbfoundedness without sounding sarcastic. Internetting is hard.
Someone who controls your network will simply drop the DNSKEY/DS records, so DNSSEC would not have provided any protection for “MyEtherwallet”. People who have already visited it were (hypothetically) protected by TLS, and people who hadn’t, would have received bogus records anyway.
So DNSSEC could, in an ideal setting, provide a benefit similar to HPKP, but why wait? HPKP is here now.
Furthermore, “DNSSEC wasn’t easy to implement” is a massive understatement.
No, they can’t drop your DS records, because those reside in the parent TLD’s zone. They would have to also hack your domain registrar to do that.
That’s completely wrong.
If someone controls your network, they don’t need to hack anyone else: They can feed you whatever they want.
If someone controls your network, they don’t need to hack anyone else: They can feed you whatever they want.
That’s only true of non-DNSSEC-signed records. DNSSEC is a PKI that allows one to cryptographically delegate authority over a zone. In practice, this means the root zone signs the public keys of the authoritative top-level domains, and TLDs in turn sign the public keys of the owners of regular domain names. These keys can then be used to sign any arbitrary DNS record. So, if you have a validating local resolver, it can use the public key of the root zone to cryptographically validate the chain of trust from ICANN down to the authoritative nameserver for a domain.
DNSCurve isn’t a bad idea, I think lookup privacy is a good thing and I would much prefer to trust Google or Cloudflare than my local ISP for unsigned domain names. That being said, it doesn’t fix the massive problem of computers trusting the DNS cache of a 10-year-old router controlled by malware. It’s also really unhelpful when people claim that DNSCurve is some sort of alternative to DNSSEC.
Seriously, go read Dan Kaminsky’s critique of the DNSCurve proposal.
DNSSEC is a PKI that allows one to cryptographically delegate authority over a zone
Which an attacker guarantees you’ll never see.
This isn’t a hypothetical attack: Your computer asks your ISP’s nameservers, and it strips out all the DNSSEC records. Unless your computer expects those records, it won’t ever be able to tell you anything is wrong.
if you have a validating local resolver, it can use the public key of the root zone to cryptographically validate the chain of trust from ICANN down to the authoritative nameserver for a domain.
If you don’t, and for some reason use a “validating local resolver” on another machine, you have nothing.
Even if you have a validating-capable resolver, if you never see that com has DS record 30909 8 2 E2D3C916F6DEEAC73294E8268FB5885044A833FC5459588F4A9184CF C41A5766, you might never learn that there are keys to be found for cloudflare.com.
Even if you do have a validating-capable resolver, and you see that com has DS record 30909 8 2 E2D3C916F6DEEAC73294E8268FB5885044A833FC5459588F4A9184CF C41A5766 you still can’t visit google.com safely.
And what about .ae? Or other roots?
DNSSEC supporters are happy enough to ignore the problem of deploying DNSSEC, like it’s somehow someone else’s problem.
That being said, it doesn’t fix the massive problem of computers trusting the DNS cache of a 10-year-old router controlled by malware.
What are you talking about?
It’s also really unhelpful when people claim that DNSCurve is some sort of alternative to DNSSEC.
It’s annoying that DNSSEC “supporters” hand-wave the fact that DNSSEC has no security, and doesn’t have a deployment plan except “do it”.
IPv6 is at 23% deployment. After more than twenty years. DNSSEC is something like 0.5% of the dot-com. After more than twenty years (although admittedly they completely changed what DNSSEC was several times in that time). DNSSEC isn’t a real thing. It’s not even a spec for a real thing. How can I possibly take it seriously?
Seriously, go read Dan Kaminsky’s critique of the DNSCurve proposal.
Have you read it?
It’s bonkers. It admits DNSSEC is a moving target that hasn’t yet been implemented “in all its glory” and puts this future fantasy version of DNSSEC that has been fully deployed and had all operating systems, routers and applications rewritten, against DNSCurve.
Kaminsky is as brain-damaged as those IPv6 nutters, waiting for some magic moment for over twenty years that simply never came – and the only way his “critique” would have any value at all is if it were printed on bog roll.
For what it’s worth: I think DNSCurve solves a problem I don’t have, but it attracts no ire from me.
This isn’t a hypothetical attack: Your computer asks your ISP’s nameservers, and it strips out all the DNSSEC records. Unless your computer expects those records, it won’t ever be able to tell you anything is wrong.
A resolver can refuse to perform DNSSEC validation or even strip out records, but a local resolver can detect this and even work around it.
if you have a validating local resolver, it can use the public key of the root zone to cryptographically validate the chain of trust from ICANN down to the authoritative nameserver for a domain.
If you don’t, and for some reason use a “validating local resolver” on another machine, you have nothing.
What do you mean by using a validating local resolver on another machine? It’s local, there is no other machine.
If you are saying that most clients rely on their router (or whatever) to do DNSSEC validation, then yes, that router can perform a MITM attack. It’s still more secure than trusting every single upstream DNS resolver, but we need to move to local validation. The caching layer provided by DNS is a byproduct of the limited computing resources of 1985.
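For concreteness, a local validating resolver is mostly a configuration exercise these days. As a sketch with Unbound (file paths vary by distro, and `unbound-anchor` can bootstrap the root trust anchor):

```conf
# /etc/unbound/unbound.conf (fragment; path is distro-dependent)
server:
    # Keep the root zone trust anchor current across RFC 5011 rollovers.
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
    # Fail hard on bogus signatures instead of passing them through.
    val-permissive-mode: no
```

A quick sanity check is `dig +dnssec example.com @127.0.0.1` and looking for the `ad` (authenticated data) flag in the reply.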
Even if you have a validating-capable resolver, and you never see that com has DS record 30909 8 2 E2D3C916F6DEEAC73294E8268FB5885044A833FC5459588F4A9184CF C41A5766 then you might never learn that you might find keys for cloudflare.com.
It sounds like you are describing a broken resolver.
Even if you do have a validating-capable resolver, and you see that com has DS record 30909 8 2 E2D3C916F6DEEAC73294E8268FB5885044A833FC5459588F4A9184CF C41A5766 you still can’t visit google.com safely.
I believe the local resolver would just ask com for a DS record for google.com and receive either a DS record or an NSEC record. If it doesn’t receive one of those two records, then you are correct: you can’t visit google.com safely. It’s no different than an HTTPS downgrade attack.
And what about .ae? Or other roots?
If we can get people to stop claiming that “DNSSEC does nothing for security” and make use of the cool stuff you can do with DNSSEC, then the market will force the last 10% of ccTLDs to adopt it.
DNSSEC supporters are happy enough to ignore the problem of deploying DNSSEC, like it’s somehow someone else’s problem.
I personally am working very, very hard on addressing every pain point there is. There are a lot of moving pieces and the standards left some holes until recently. I believe captive portals and VPN domains are thorny issues, but these issues can be addressed in an incremental fashion.
It doesn’t help when people make erroneous claims about DNSSEC based on an incorrect understanding of DNS, DNSSEC, DNSCurve, and decentralized naming systems.
That being said, it doesn’t fix the massive problem of computers trusting the DNS cache of a 10-year-old router controlled by malware.
What are you talking about?
DNSCurve relies on trusting the DNS resolver above you. For most people that is a 10-year-old router which has never gotten a security update. Best-case scenario is someone switching to Google DNS or Cloudflare - but with proper encryption, no upstream resolver would be capable of performing MITM attacks.
It’s annoying that DNSSEC “supporters” hand-wave the fact that DNSSEC has no security
I have patiently responded to every single claim you have made about DNSSEC’s security model. Please refrain from repeating this claim until you have figured out how a MITM attacker can force a local validating resolver to accept forged DS or NSEC records.
IPv6 is at 23% deployment. After more than twenty years. DNSSEC is something like 0.5% of the dot-com. After more than twenty years (although admittedly they completely changed what DNSSEC was several times in that time). DNSSEC isn’t a real thing. It’s not even a spec for a real thing. How can I possibly take it seriously?
So was IPv6 until ~6 years ago - now there is exponential growth. DNSSEC is at a similar tipping point: the basic security model was worked out a long time ago, but there were plenty of sharp corners until recently (large key sizes, NSEC3, etc). If we can stop people from claiming that the security model is broken then Cloudflare and other big providers will pour money into taking business away from the HTTPS certificate authorities.
It’s also a necessity for decentralized DNS, which gives us an environment where we can implement everything without having to wait for legacy infrastructure to catch up.
Seriously, go read Dan Kaminsky’s critique of the DNSCurve proposal.
It’s bonkers. It admits DNSSEC is a moving target that hasn’t yet been implemented “in all its glory” and puts this future fantasy version of DNSSEC that has been fully deployed and had all operating systems, routers and applications rewritten, against DNSCurve.
The post is mainly useful for explaining how DNSSEC and DNSCurve relate to one another. While the grand vision is the eventual goal, there are incremental benefits and huge gains can be had by simply making the application DNSSEC aware. For example, browsers are already switching to doing DNS resolution themselves, so the work required isn’t much more involved than that of upgrading to TLS.
For what it’s worth: I think DNSCurve solves a problem I don’t have, but it attracts no ire from me.
Then why hate on DNSSEC but evangelize DNSCurve? You can happily ignore DNSSEC as an end user or even as a system admin. If you care about security, well, that’s a different story.
If they hack your primary nameserver and keep the zone signed, then maybe, as long as you’re running the primary. But as for the original claim that “they can drop your DNSKEYs and DS”: no. They can drop the DNSKEY, but the DS resides in the parent zone, and as long as it’s there, resolvers will expect DNSSEC-validated responses, which they won’t get.
If someone controls your network, whenever you request a DNSSEC “protected” domain, you will never know because the attacker can drop whatever records they want. DNSSEC clearly offers nothing.
If someone controls the network of a website, they don’t need to interfere with the nameservers. They can simply MITM the traffic. Since they can request a TLS certificate from anyone who does HTTP or mail validation, DNSSEC still offers nothing. This is true whether they control the network by broadcasting “invalid” BGP routes, or whether they attack the physical infrastructure.
Why are you defending this snake oil? “Hack[ing] your primary nameserver” is a pointless strawman that nobody cares about: “your primary nameserver” is likely controlled by Amazon or someone else competent. Your webserver is controlled by you, who lack the experience to identify the (complex) services at risk and properly secure them.
If someone controls your network, whenever you request a DNSSEC “protected” domain, you will never know because the attacker can drop whatever records they want. DNSSEC clearly offers nothing.
I don’t know what you mean by this. For starters, are you assuming “your network” includes all the nameservers? Let’s assume so. If DNSSEC is enabled, they can’t alter any of the DNS responses because they will break DNSSEC validation for aware resolvers. Sure, they can drop queries but what does that buy them other than a DDOS? They can’t stand up a fake site.
Why are you defending this snake oil? “Hack[ing] your primary nameserver” is a pointless strawman that nobody cares about: “your primary nameserver” is likely controlled by Amazon or someone else competent. Your webserver is controlled by you, who lack the experience to identify the (complex) services at risk and properly secure them.
Clearly you haven’t been paying attention. The entire chain of events that started my posts about this was a BGP hijack used to impersonate Route53 nameservers by hijacking Amazon IP space; those were the nameservers MyEtherWallet was using. From there the attackers stood up fake nameservers which directed victims to a fake MyEtherWallet site. That’s exactly what happened, so why don’t you go tell the people who had their wallets drained not to worry about it, because it’s all a pointless strawman.
So DNSSEC could, in an ideal setting, provide a benefit similar to HPKP, but why wait?
Doing PKI at the DNS level means that we can leverage it for every network and application protocol … including public key pinning. It would also enable us to do cool stuff like encrypt at both the network and application layers.
It’s just that sysadmins didn’t want the headache of key management, so everyone engaged in bikeshedding. It didn’t help that a few (well intentioned!) security researchers threw shade on DNSSEC for not solving Zooko’s triangle or offering encryption for DNS lookups.
So now we are stuck with Mozilla funding Let’s Encrypt to the tune of $2 million/year, and non-HTTPS applications are forced to replicate all of the infrastructure required for a PKI. Which, in practice, means that it’s either non-existent (SSH) or barely functioning (GPG).
HPKP is here now.
Sadly, HPKP has been deprecated by Chrome. But, FWIW, these standards existed long before HPKP.
It didn’t help that a few (well intentioned!) security researchers threw shade on DNSSEC for not solving Zooko’s triangle or offering encryption for DNS lookups.
The current system (CAs) is human-meaningful, secure, and federated (not exactly decentralized). It’s not perfect, but there are ways to improve the last point, so that we have more control over badly behaving CAs. But even as implemented, that’s better than human-meaningful, secure, and a single point of failure (DNSSEC).
So now we are stuck with Mozilla funding Let’s Encrypt to the tune of $2 million/year and non-HTTPS applications are forced to replicated all of the infrastructure required for a PKI.
You can use x509 certificates from Let’s Encrypt to secure any IP connection. What’s the problem?
It’s not perfect, but there are ways to improve the last point, so that we have more control over badly behaving CAs.
For non-decentralized naming systems, the (abstract) DNSSEC chain of trust looks (roughly) like this:
Government -> ICANN -> Registrar -> DNS Provider -> Local Validating Resolver -> Browser
HTTPS certificate authorities “validate” control over a domain by checking DNS records (either TXT or via an email). Their chain of trust looks like this:
Government -> ICANN -> Registrar -> DNS Provider -> ~650 CAs [1] -> Browser
The best way to exercise more control over them is to cut them out of the trust chain entirely. Or switch to a decentralized naming system … which also relies on DNS (and thus DNSSEC) for compatibility reasons:
Blockchain -> Lightclient w/ DNSSEC auto-signer -> Browser
But even as implemented, that’s better than human meaningful, secure, and a single point of failure (DNSSEC).
In terms of the security model, DNS is still a single point of failure. If you don’t like managing PKI you can always outsource it to someone … just like you do with HTTPS certificates.
If I want to compromise you, attacking your DNS resolver doesn’t mean I’ve also attacked PayPal’s CA even if they used their DNS resolver to verify ownership of paypal.com
My point is that one can trick one of the ~650 CAs into generating an X509 certificate by hacking their upstream DNS client or performing a MitM attack. This would be pretty easy for any large network operator to pull off.
Doing PKI at the DNS level means that we can leverage it for every network and application protocol … including public key pinning. It would also enable us to do cool stuff like encrypt at both the network and application layers.
What exactly are you referring to: DNSCurve?
DNSSEC doesn’t offer anything like this.
It’s just that sysadmins didn’t want the headache of key management, so everyone engaged in bikeshedding.
Paul Vixie, June 1995: “This sounds simple but it has deep reaching consequences in both the protocol and the implementation – which is why it’s taken more than a year to choose a security model and design a solution. We expect it to be another year before DNSSEC is in wide use on the leading edge, and at least a year after that before its use is commonplace on the Internet”
Paul Vixie, November 2002: “We are still doing basic research on what kind of data model will work for DNS security. After three or four times of saying NOW we’ve got it THIS TIME for sure there’s finally some humility in the picture … Wonder if THIS’ll work? … It’s impossible to know how many more flag days we’ll have before it’s safe to burn ROMs … It sure isn’t plain old SIG+KEY, and it sure isn’t DS as currently specified. When will it be? We don’t know… There is no installed base. We’re starting from scratch.”
It didn’t help that a few (well intentioned!) security researchers threw shade on DNSSEC for not solving Zooko’s triangle or offering encryption for DNS lookups.
Or the fact DNSSEC creates DDOS opportunities, introduced lots of bugs in the already buggy BIND, and still offers no real security.
No thanks.
So now we are stuck with Mozilla funding Let’s Encrypt to the tune of $2 million/year
DNSSEC has received millions of US tax dollars and offers nothing, while Let’s Encrypt actually provides some transport security. Hrm…
non-HTTPS applications are forced to replicate all of the infrastructure required for a PKI. Which, in practice, means that it’s either non-existent (SSH) or barely functioning (GPG).
I don’t see how DNSSEC even begins to solve these problems.
FWIW: Almost everything is HTTPS anyway.
Sadly, HPKP has been deprecated by Chrome
It is sad. Firefox and others still support it, and HSTS + Certificate Transparency is probably good enough anyway.
Doing PKI at the DNS level means that we can leverage it for every network and application protocol … including public key pinning. It would also enable us to do cool stuff like encrypt at both the network and application layers.
What exactly are you referring to: DNSCurve?
No, using DNSSEC to bootstrap the public keys for … any cryptographic protocol. Just as DANE can be used to distribute the TLS keys for an HTTPS server, SSHFP records can be used to publish the public keys for a given SSH server. AWS, for example, could just publish SSHFP records when they provision a new instance and you would have end-to-end verification for your SSH connection. No need for Amazon to partner with Let’s Encrypt or force SSH clients to switch to X509 certificates.
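For illustration, here’s a minimal Python sketch of how an SSHFP RDATA value is derived from an OpenSSH public key line (per RFC 4255/6594 the fingerprint is a hash of the base64-decoded key blob; the table and function name here are my own):

```python
import base64
import hashlib

# OpenSSH key-type names -> SSHFP algorithm numbers (RFC 4255 / RFC 6594).
SSHFP_ALGO = {"ssh-rsa": 1, "ssh-dss": 2, "ecdsa-sha2-nistp256": 3, "ssh-ed25519": 4}

def sshfp_rdata(pubkey_line):
    """RDATA for an SSHFP record: '<algo> 2 <hex sha256 of the decoded key blob>'."""
    keytype, b64blob = pubkey_line.split()[:2]
    digest = hashlib.sha256(base64.b64decode(b64blob)).hexdigest()
    return "%d 2 %s" % (SSHFP_ALGO[keytype], digest)  # fingerprint type 2 = SHA-256
```

A provisioning system could publish that string as the SSHFP record for the new host, and a client resolving over validated DNSSEC could compare it against the key offered at connect time.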
Since DNSSEC makes it simple to publish arbitrary public keys for a domain, you can use something like TCPCrypt to encrypt connections at the transport level. Transport level encryption reduces information leakage (SNI headers for HTTPS, what application you are using, network level “domain” fronting, etc) and mitigates flaws in any application layer encryption.
WRT your Paul Vixie quotes: they are 16 years old. I’ve tried really hard to find showstopper issues, but when you dig into criticisms of DNSSEC they boil down to complaints about DNS, problems that have already been fixed, or gripes about the complexity of managing PKI.
Or the fact DNSSEC creates DDOS opportunities
DNS reflection attacks are a thing because there are tens of thousands of public DNS resolvers willing to send DNS record requests to anyone. The worst offenders here are ANY requests, which return all records associated with a domain. The public key and signature used to verify a DNS response do not incur that much overhead.
The response from DNS providers hasn’t been to rip out DNSSEC, but to rate limit requests that produce large responses. More fundamental changes include switching to TCP, ingress filtering of spoofed UDP packets, supporting edns_client_subnet, and shutting down public DNS servers.
introduced lots of bugs in the already buggy BIND
Please do not blame DNSSEC for BIND being a buggy POS.
and still offers no real security.
DNSSEC prevents a wide range of attacks. Are you seriously arguing that removing trust in every DNS server between yourself and the registrar doesn’t materially improve security? What about removing trust in the ~650 CAs capable of producing an HTTPS certificate? Wouldn’t you like to live in a world where TCP, SSH, email, IRC, etc. can take advantage of PKI instead of opportunistic crypto?
non-HTTPS applications are forced to replicate all of the infrastructure required for a PKI. Which, in practice, means that it’s either non-existent (SSH) or barely functioning (GPG).
I don’t see how DNSSEC even begins to solve these problems.
Publish a DNS record with the public key for the encryption protocol you would like to use (see: SSHFP, DANE, PGP).
It is sad. Firefox and others still support it, and HSTS + Certificate Transparency is probably good enough anyway.
As a decentralized domain name nerd, I strongly disagree. We need a standard way for naming systems to declare the public keys for their services. Seriously, we have to sign Tor domains with HTTPS certificates from DigiCert because the browser doesn’t support DANE.
No, using DNSSEC to bootstrap the public keys for … any cryptographic protocol.
This is some fantasy version of DNSSEC that doesn’t exist yet and likely never will: Browsers don’t do DANE because it’d piss people off.
Are you seriously arguing that removing trust in every DNS server between yourself and the registrar doesn’t materially improve security?
Yes.
Until you know that you’re supposed to be seeing a DS/DNSKEY chain, every recursive resolver (and every stub resolver) gains nothing, and risks tricking people into thinking they have some security because they installed something called DNSSEC.
As a decentralized domain name nerd, I strongly disagree.
Well, you’re wrong. Decentralising trust just creates multiple single points of failure unless you’re willing to wait for consensus, in which case you might as well use HSTS+Certificate Transparency (and your favourite mirror).
This is some fantasy version of DNSSEC that doesn’t exist yet and likely never will: Browsers don’t do DANE because it’d piss people off.
It would only piss off people who think DNSSEC is a bad thing. Chrome actually implemented it but it was removed due to lack of critical mass. I’m thinking of pitching Cloudflare on pushing for DANE.
Until you know that you’re supposed to be seeing a DS/DNSKEY chain, every recursive resolver (and every stub resolver) gains nothing, and risks tricking people into thinking they have some security because they installed something called DNSSEC.
If the parent zone is signed and has a DS key for the child zone, then your local resolver would know that the child zone is supposed to be signed.
As a decentralized domain name nerd, I strongly disagree.
Well, you’re wrong.
No, I’m not. This was a major issue with Namecoin: we had to MITM every HTTPS connection to check the certificate against the blockchain records then replace it with a local certificate. There was no uniform way of making this work: the hack required tweaking for every OS and application and prevented users from selecting their own SOCKS5 proxy. The entire team agreed that DANE was the only way forward and we even got DigiCert to ensure that they used DANE when minting their .onion certs.
Decentralising trust just creates multiple single-points of failure
Um, what?
unless you’re willing to wait for consensus
Consensus from the Blockchain?
DNSSEC would be cool if it allowed multiple CA hierarchies. I really like the idea of a global key/value store that you bootstrap other secure protocols on top of. DNS could be the basis for that in theory.
But DNSSEC as standardized bubbles up to a single government run CA per TLD, and that’s much less cool.
Not liking government control of the domain name system is not a valid reason to dislike cryptographic verification of DNS records … especially since decentralized naming systems also need a protocol for signing their DNS records :D
Maybe @steveno or someone else can ELI5 this to me why is this advantageous over traditional, platform-agnostic, and dependency-less symlinking in a bash script? Cf. my dotfiles and the install script.
Salt’s declarative nature means that you’re mostly describing the end state of a system, not how to get there.
So instead of saying “copy this stuff to this directory and then chmod” you say “I want this other directory to look like this”. Instead of saying “install these packages” you say “I want this to be installed”. You also get dependency management so if you (say) just want to install your SSH setup on a machine you can say to do that (and ignore your window manager conf).
If your files are grouped well enough and organized enough you can apply targeted subsets of your setup on many machines based off of what you want. “I want to use FF on this machine so pull in that + all the dependencies on that that I need”. “Install everything but leave out the driver conf I need for this one specific machine”
This means that if you update these scripts, you can re-run salt and it will just run what needs to run to hit the target state! So you get recovery from partial setup, checking for divergences in setups, etc for free! There’s dry run capabilities too so you can easily see what would need to change.
This is a wonderful way of keeping machines in sync
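For readers who haven’t used Salt, a state file describing such an end state might look roughly like this (package names and paths are illustrative, not taken from the linked repo):

```yaml
# dotfiles/init.sls -- a minimal sketch of declarative state
vim:
  pkg.installed: []

/home/me/.vimrc:
  file.managed:
    - source: salt://dotfiles/vimrc
    - require:
      - pkg: vim
```

Re-running `salt-call state.apply` only performs the steps that diverge from this description, and `salt-call state.apply test=True` is the dry run mentioned above.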
Looking at my repository right now, there isn’t any advantage. You could do everything I’ve done with a bash script. The beauty of this setup for me, and I really should have stated this initially, is that I can have multiple machines all share this configuration really easily. For example, my plan is to buy a Raspberry Pi and set up an encrypted DNS server. All I need to do is install salt on the Pi and it gets all of this set up just like my NUC currently has. I can then use salt to target specific machines and have it set up a lot of this for me.
The beauty of this setup for me, and I really should have stated this initially, is that I can have multiple machines all share this configuration really easily
You can also do this with a shell script.
All I need to do is install salt
With shell scripts you don’t need to install anything.
As I previously stated, given what’s currently in this repository, there isn’t anything here that you couldn’t do with a shell script. That’s missing the point though. Salt, or ansible, or chef, provide you with a way to manage complex setups on multiple systems. Salt specifically (because I’m not very familiar with ansible or chef) provides a lot of other convenient tools like salt-ssh or reactor as well.
I feel like your point is just that shell script is Turing complete. Ok. The interesting questions are about which approach is better/easier/faster/safer/more powerful.
If you’re targeting different distributions of linux or different operating systems entirely, the complexity of a bash script will start to ramp up pretty quickly.
I disagree, I use a shell script simply because I use a vast array of Unix operating systems. Many of which don’t even support tools like salt, or simply do not have package management at all.
I have a POSIX sh script that I use to manage my dotfiles. Instead of it trying to actually install system packages for me, I have a ./configctl check command that just checks if certain binaries are available in the environment. I’ve found that this approach hits the sweet spot since I still get a consistent environment across machines but I don’t need to do any hairy cross-distro stuff. And I get looped in to decide what’s right for the particular machine since I’m the one actually going and installing stuff.
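That check-only approach is easy to sketch. Assuming a hypothetical equivalent of the `configctl check` subcommand, in Python it’s little more than:

```python
import shutil

REQUIRED = ["git", "tmux", "vim"]  # illustrative tool list, not from any real config

def check(binaries=REQUIRED):
    """Like a hypothetical `configctl check`: report present/missing tools, install nothing."""
    found = [b for b in binaries if shutil.which(b)]
    missing = [b for b in binaries if shutil.which(b) is None]
    return found, missing
```

The script stays portable because it never touches a package manager; the human decides how to install whatever turns up in `missing`.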
The beauty of this setup for me, and I really should have stated this initially, is that I can have multiple machines all share this configuration really easily.
Have to agree with @4ad on this one. I have to use remote machines where I don’t have sudo rights and/or which are often completely bare bones (e.g., not even git preinstalled). My goal, in essence, is a standardized, reproducible, platform-agnostic, dependency-less dotfile environment which I can install with as few commands as possible and use as fast as possible. I don’t see how adding such a dependency benefits me in this scenario. I’m not against Ansible-like dotfile systems, but, in my opinion, using such systems for this task seems like overkill. Happy to hear otherwise, though.
So, this might be a good time to float an idea:
None of this would be an issue if users brought their own data with them.
Imagine if users showed up at a site and said “Hey, here is a revocable token for storing/amending information in my KV store”. The site itself never needs to store anything about the user, but instead makes queries with that auth token to modify its slice of the user’s store.
This entire problem with privacy and security would go away, because the onus would be on the user to keep their data secure–modulo laws saying that companies shouldn’t (and as a matter of engineering and cost-effectiveness, wouldn’t) store their own copies of customer data.
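As a toy sketch of the idea (all names hypothetical; no persistence, crypto, or networking), the site-facing API could be as small as:

```python
import secrets

class UserStore:
    """Toy user-controlled KV store with revocable, scope-limited access tokens."""

    def __init__(self):
        self._data = {}
        self._tokens = {}  # token -> key prefix ("slice") the holder may touch

    def grant(self, prefix):
        """Mint a token allowing access to keys under `prefix`."""
        token = secrets.token_hex(16)
        self._tokens[token] = prefix
        return token

    def revoke(self, token):
        self._tokens.pop(token, None)

    def _check(self, token, key):
        prefix = self._tokens.get(token)
        if prefix is None or not key.startswith(prefix):
            raise PermissionError("token revoked or out of scope")

    def put(self, token, key, value):
        self._check(token, key)
        self._data[key] = value

    def get(self, token, key):
        self._check(token, key)
        return self._data[key]
```

The site keeps only the token; the user can revoke it at any time and the site’s access to that slice disappears.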
Why didn’t we do this?
http://remotestorage.io/ did this. I’ve worked with it and it’s nowhere near usable. There are so many technical challenges (esp. with performance) you face on the way that result in you basically having to process all user data client-side while storing the majority of data server-side. It gets more annoying when you attempt to introduce any way of interaction between two users.
We did try this, saw that it’s too hard (and for some services an unsolved problem) and did something else. There’s no evil corporatism in that, nor is it a matter of making profit, even if a lot of people especially here want to apply that imagination to everything privacy-related. It’s human nature.
basically having to process all user data clientside
If I go to a site, grant that site a token, couldn’t that server do processing server side?
It gets more annoying when you attempt to introduce any way of interaction between two users.
Looking at remotestorage it appears there’s no support for pub/sub, which seems like a critical failing to me. To bikeshed an example, this is how I see something like lobste.rs ought to be implemented:
User data is stored in servers called pods (like remotestorage), which hold each user’s data. A person can sign up at an existing pod or run their own, fediverse-style.
These pods support pub/sub over websocket.
A particular application sits on an app server. That app server subscribes to a list of pods for pub/sub updates, for whatever users that have given that application permission. On top of these streams the app server runs reduce operations and keeps the result in cache or db. A reduce operation might calculate something like, give me the top 1000 items sorted by hotness (a function of time and votes), given streams of user data.
A user visits the site. The server serves the result instantly from its cache.
Additionally the pub/sub protocol would have to support something like resuming broken connections, like replay messages starting from point T in time.
Anyway, given this kind of architecture I’m not sure why something like lobste.rs for example couldn’t be created - without the performance issues you ran into.
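To make the reduce step concrete, here’s a toy Python sketch of an app server folding pod events into a cached hotness ranking (the event schema and the decay formula are invented for illustration):

```python
import heapq
import math
import time

def hotness(votes, created_at, now):
    # Toy hotness: votes decayed by age in hours, Hacker-News-ish shape.
    age_h = max((now - created_at) / 3600.0, 0.0)
    return votes / math.pow(age_h + 2.0, 1.5)

class FrontPage:
    """Reduce a stream of pod events (submissions, votes) into a top-N ranking."""

    def __init__(self):
        self.items = {}  # item_id -> {"votes": int, "created_at": float}

    def on_event(self, ev):
        # In the real architecture these arrive over the pods' pub/sub channel.
        if ev["type"] == "submit":
            self.items[ev["id"]] = {"votes": 0, "created_at": ev["ts"]}
        elif ev["type"] == "vote":
            self.items[ev["id"]]["votes"] += 1

    def top(self, n, now=None):
        now = now or time.time()
        return heapq.nlargest(
            n, self.items,
            key=lambda i: hotness(self.items[i]["votes"],
                                  self.items[i]["created_at"], now))
```

Because the ranking is just a fold over event streams, a slow pod delays only its own events; the cached result can be served to visitors immediately.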
If I go to a site, grant that site a token, couldn’t that server do processing server side?
If your data passes through third-party servers, what’s the point of all of this?
The rest of your post is to me, with all due respect, blatant armchair-engineering.
The pub/sub stuff completely misses the point of what I am trying to say. I’m not talking about remotestorage.io in particular.
Lobste.rs is a trivial usecase, and not even an urgent one in the sense that our centralized versions violate our privacy, because how much privacy do you have on a public forum anyway? Let’s try something like Facebook. When I post any content at all, that content will have to be copied to all different pods, making me subject to the lowest common denominator of both their privacy policies and security practices. This puts my privacy at risk. Diaspora did this. It’s terrible.
Let’s assume you come up with the very original idea of having access tokens instead, where the pods would re-fetch the content from my pod all the time instead of storing a copy. This would somewhat fix the risk of my privacy (though I’ve not seen a project that does this), but:
If your data passes through third-party servers, what’s the point of all of this?
It decouples data and app logic, which makes it harder for an application to leverage its position as middleman to the data you’re interested in by doing stuff like selling your data or presenting you with ads. Yet you put up with that because you are still interested in the people there. If data runs over a common protocol, you’re free to replace the application side of things without being locked in. For example, I bet there’s some good content on Facebook, but I never go there because I don’t trust that company with my data. I wish there were some open source, privacy-friendly front end to the Facebook network that would let me interact with people there without sitting on Facebook’s servers. Besides that, if an application changes its terms of use, maybe you signed up trusting the application, but now you’re faced with a dilemma: reject the ToS and lose what you still like about the application, or accept crappy new terms.
The rest of your post is to me, with all due respect, blatant armchair-engineering.
Ha! Approaching a design question by first providing an implementation without discussion seems pretty backwards to me. Anyway, as far as I’m concerned I’m just talking design. Specifically I’m criticizing what I perceive as a deficiency in remotestorage’s capabilities. And arguing that a decentralized architecture doesn’t have to be slow, is at least as good as a centralized architecture, and better, in many regards, for end users.
Let’s try something like Facebook. When I post any content at all, that content will have to be copied to all different pods,
No, I was saying that this would be published to subscribing applications. There could be a Facebook application. And someone else could set up a Facebook-alternative application, with the same data, but a different implementation. Hey, you could even run your own instance of Facebook-X application.
making me subject to the lowest common denominator of both their privacy policies and security practices.
If you grant an application access to your data, you grant it access to your data. I don’t see a way around that puzzle in either a centralized or decentralized architecture. If anything, in a decentralized architecture you have more choices. Which means you don’t have to resign yourself to Facebook’s security and privacy policies if you want to interact with the “Facebook” network. You could move to Facebook-X.
Now the slowest pod is a bottleneck for the entire network. Especially stuff like searching through public postings. How do you implement Twitter moments, global or even just local (on a geographical level, not on network topology level) trends?
What I was describing was an architecture where pods just store data. Apps consume and present it. If I have an app, and I subscribe to X pods, there’s no reason I have to wait for the slowest pod’s response in order to construct a state that I can present users of my app.
So for something like search, or Twitter moments, you would have an application that subscribes to whatever pods it knows about. Those pods publish notifications to the app over web socket, for example whenever a user tweets. Your state is a reduction over these streams of data. Let’s say I store this in an indexed lookup like ElasticSearch. So every time a user posts a tweet, I receive a notification and add it to my instance of ElasticSearch. Now someone opens my app, maybe by going to my website. They search for X. The app queries the ElasticSearch instance. It returns the matching results. I present those results to the user’s browser.
Fetching the data from my pod puts the reader’s privacy at risk.
Hmm, I’m not sure if we’re on the same page. In the design I laid out, the app requests this data, not the pod.
With respect, “social media” and aggregator sites are red herrings here. They can’t be made to protect privacy by their very nature.
I’m more thinking about, say, ecommerce or sites that aren’t about explicitly leaking your data with others.
“With respect, “social media” and aggregator sites are red herrings here. They can’t be made to protect privacy by their very nature.”
Sure they can. Starting with Facebook, they can give privacy settings per post, defaulting to things like Friends Only. They could even give different feeds for stuff like Public, Friends Only, or Friends of Friends. They can use crypto with transparent key management to protect as much of the less-public plaintext as possible. They can support E2E messaging. They can limit discovery options for some people where they have to give you a URL or something to see their profile. Quite a few opportunities for boosting privacy in the existing models.
Far as link aggregators, we have a messaging feature that could be private if it isn’t already. Emails and IP’s if not in public profile. The filters can be seen as a privacy mechanism. More to that point, though, might be things like subreddits that were only visible to specific, invited members. Like with search, even what people are looking at might be something they want to keep private. A combo of separation of user activities in runtime, HTTPS and little to no log retention would address that. Finally, for a hypothetical, a link aggregator might also be modified to easily support document drops over an anonymity and filesharing service.
Because the most formidably grown businesses of late are built on the ability to access massive amounts of user data at random. Companies simply don’t know how to make huge money on the Internet without it.
We did. They’re called browser cookies.
The real problems are around an uneducated, consumption-driven populace: who can resist finding out “which spice girl are you most like?” But would we be so willing to find out if it meant we got a president we wouldn’t like?
It is very hard for people to realise how unethical it is to hold someone responsible for being stupid, but we crave violence: serving food, working in an office, or driving a taxi gives us no thrill that can compare. Television and media give us this violence, an us versus them: Hillary versus Urine Hilarity, or The Corrupt Incumbent versus a Chance to Make America Great Again, or even Kanye versus anybody and everybody.
How can we make a decision to share our data? We can never be informed of how it will be used against us.
The GDPR does something very interesting: It says you’re not allowed to use someones data in a way they wouldn’t want you to.
I wish it simply said that, but it’s made somewhat complicated by a weird concept of “data”. It’s clear that things like IP addresses aren’t [by themselves] your data, and even a name like John Smith isn’t data. Software understands data, but not the kind of “data” the GDPR is talking about. Pointing to “you” and “data” is a fairly thick bit of regulation if you don’t want to draw a box around things and prevent sensible people from interpreting forms of “data” nobody has yet thought of.
But keep it simple: Would that person want you doing this? Can you demonstrate why you think that is and convince reasonable people?
I’m doing a fair bit of GDPR consulting at the moment, and whilst there’s a big task in understanding their business, there’s also a big task getting them to approach their compliance from that line of questioning: How does this make things better for that person? Why do they want us to do this?
We’re not curing cancer here, fine, but certainly there are degrees.
Browser cookies is something that crossed my mind after I suggested this, but my experience as a web dev makes me immediately suspect of them as durable stores. :)
I agree with your points though.
This still doesn’t solve problems with tracking, because companies have already started to require GDPR opt-in to use their products (even when using the product doesn’t necessarily require data tracking), or to use their products without a degraded user experience.
See cloudflare, recaptcha, facebook, etc.
“You can’t use this site without Google Analytics having a K/V-auth-token”, “We will put up endless ‘find-the-road-sign’ captchas if we can’t track you”, etc.
It’s a mistake to think you can “GDPR opt-in”. You can’t.
You have to prove that the data subject wants this processing. One way to do this is to ask for their consent and make them as informed as possible about what you’re doing. But they can decide not to, and they can even decide to revoke their consent at any time until you’ve actually finished the processing and erased their data.
These cookie/consent banners are worse than worthless; a queer kind of game people like Google are playing to try to waste time of the regulators.
We will put up endless ‘find-the-road-sign’ captchas if we can’t track you
I’ve switched to another search engine for the time being. It’s faster, the results are pretty good, and I don’t have to keep fiddling with blocking that roadblock on Google’s properties.
Hi, dabmancer.
I want to tell you a story… I skimmed your laptop.txt and found no pictures. I went back to the parent… menu, still didn’t find any pictures.
So I decided to contact you and ask for pics! I was just about to ssh into a tilde and weechat into the local ircd to ask who knows much about gopher when I realized that whoever responded would just browse your whole hole to find contact information–and I can do that, the floodgap proxy works fine from work.
AND, your guestbook works. :) My message was delivered already, well before I tapped out this rambling, pointless message. Cheers! p.s. send laptop pics
I didn’t realize I was reading this through a Gopher proxy until I read this comment. I just thought I was on a mailing list reader.
I really should set up a gopher server to serve up all the content on my website, in a Docker container, just because I can.
I wrote my own gopher server mainly to mirror my blog to gopherspace. It wasn’t that hard.
Oh shit, it was a Gopher! Given a prior thread, I guess this one should be on list for coolest, modern Gophersites. The FloodGap homepage is itself really neat, too.
Running a gopher hole is pretty easy. I run mine off pygopherd, which is nice in that it will turn directories into gophermaps with type hinting, but if you plan to write your own maps a gopher server is only a handful of lines of code.
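To back that up, here’s a toy gopher server in Python: it really is just a loop that reads one selector line and writes tab-separated menu lines back. (The item types and terminating “.” follow the protocol; the “fake” selector for info lines is a common convention, and the port/paths are placeholders.)

```python
import socketserver

HOST, PORT = "localhost", 70  # gopher's standard port; pick 7070 to run unprivileged

def menu_line(itemtype, display, selector, host=HOST, port=PORT):
    """One gophermap line: item type + display string, selector, host, port."""
    return "%s%s\t%s\t%s\t%d\r\n" % (itemtype, display, selector, host, port)

class GopherHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # A gopher request is a single selector line terminated by CRLF.
        selector = self.rfile.readline().strip().decode("utf-8", "replace")
        if selector == "":  # empty selector means the root menu
            body = (menu_line("i", "hello from a toy gopher hole", "fake")
                    + menu_line("0", "about this hole", "/about.txt"))
        else:
            body = "3selector not found\terror\t(null)\t0\r\n"
        self.wfile.write(body.encode() + b".\r\n")  # "." line ends the menu

# to serve: socketserver.TCPServer((HOST, PORT), GopherHandler).serve_forever()
```

That’s the whole protocol for menus; serving files is just writing their bytes instead of menu lines.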
Now that you mention it, I do need to take pictures. My email is dabmancer@dread.life, for anyone interested (I did not get your email if you sent one already). I’ll try to respond to every email that I get (and also be helpful). I’m glad the stuff works. The whole point of gopher is that it’s too simple to go wrong.
I use an iPhone 6s, the only google service is maps, and it only has permission to run when I open it (or services like Lyft, which use google services under the hood – however, I mostly stick to Firefox Focus, Whatsapp, Signal, and Mail). Caldav and carddav work pretty seamlessly – I set them up once targeting Zoho PIM (~$23 a year – I am not the product!) and haven’t thought about it again.
Designers are moving away from “native look and feel”, which is an exciting trend because each application can have a (carefully thought out, but) unique rendering method. Java Swing was a no-go in 200x because it didn’t feel like windows, but writing a mini-toolkit in your language of choice makes a lot more sense in 2018+ because the platform no longer matters (the 4+ target platforms look nothing alike anyway!). Google is doing this with Dart, but it could be done in Ocaml, etc.
Jerk has many different interpretations, though. The person who cancels your under-performing project because continued investment is unwise is sort of being a jerk, but is also necessary. Software folks can be incredibly fragile in the face of criticism, business needs, and politics. So lets shut down any instances of office harassment or other abusive, narcissistic behavior, but (completely independent of that very valiant goal!) be realistic that we’re not living in a fantasy conflict-free zone of ponies and cupcakes.
Maybe this is my own weird career path talking, but I wish I could get my colleagues to understand that feedback (solicited or not) is far better for everyone than none, and that even the best teams have imperfect communication routines (and being a team player does mean having some level of skin thickness to work specific disagreements). I.e:
blatantly interrupting a colleague in a meeting
Sometimes people interrupt each other. It’s best to avoid, but sometimes it’s for a good reason. Most of the time the right course of action is to either say, “excuse me, I’d appreciate it if you don’t interrupt me(/them) while I’m(/they’re) talking”, or just get over it, depending on the situation.
to subtly belittling
Emphasis mine.
Relatively misleading title - I thought we were getting an interesting new take on or analysis of context switching costs, process duplication overhead, etc. but the article was mostly “don’t call system or similar with an untrusted argument” which seems obvious.
“Subprocesses are a code smell” seems to me to be a wholly unsubstantiated claim in the article. Subprocesses which kick off any command/program an attacker wishes? Definitely more than a code smell. Use of subprocesses at all though?
It boils down to “don’t blacklist, whitelist”. The example git commit -m "<userdata>" is super safe if userdata matches [A-Za-z0-9 ]+ (beware: [A-z] would also match the punctuation between Z and a).
What would ‘the correct API’ be for the case @ec mentions?
libgit2. Maybe.
I mean, it depends on why am I letting someone make commit messages?
Sure, there’s lots of situations where libgit2 might be appropriate, but it’s a big dep. What am I doing, really?
Maybe I just write the git objects directly. It’s not hard.
But there’s also lots of cases where I would probably just use system. I don’t see what’s so hard about quoting/escaping, since it’s easy to make the shell do it for you:
if (0 == setenv("message", text, 1)) system("git commit -m \"$message\" -a");
@ec is right about input sanitising, though: Do I want someone to make a 64kb git commit message? Do I want a commit message that contains evil strings? If I try to build a blacklist, at which point is it good enough? This is an important point, it just has nothing to do with subprocesses.
It depends on the language, but use the exec* functions or something that wraps them. In Python, subprocess lets you pass in a list of arguments, which sidesteps shell escaping entirely.
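A sketch of that whitelist-plus-argv-list approach in Python (the pattern and length cap are arbitrary choices for illustration):

```python
import re

SAFE_MESSAGE = re.compile(r"[A-Za-z0-9 .,:!?-]+")  # whitelist, not blacklist

def commit_argv(message):
    """Validate untrusted text, then build an argv list no shell will ever parse."""
    if len(message) > 200 or not SAFE_MESSAGE.fullmatch(message):
        raise ValueError("rejected commit message")
    return ["git", "commit", "-m", message]

# usage (not run here): subprocess.run(commit_argv(user_text), check=True)
```

Because the message travels as a single argv element, there is no quoting step for an attacker to escape from; the whitelist is belt-and-braces on top of that.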
I’m honestly surprised intelligence agencies hadn’t already thought of this for barium meal traps. Or I suppose they have to pretend to be wowed.
Projects based on the elimination of trust have failed to capture customers’ interest because trust is actually so damn valuable
I think the key misunderstanding he has about cryptocurrencies (one shared even by many people who know a lot about them) is that behind the “trustless world” veil lies a utopian and optimistic view of people. The central principle of why this stuff actually works is that MOST PEOPLE ARE TRUSTWORTHY. Otherwise you wouldn’t create a system that requires more than 51% of participants to do the right thing.
Cryptocurrency advocates actually deep down believe people can govern themselves. I believe people can govern themselves. I think people are good and trustworthy by default. What cryptocurrencies do is reward the trustworthy, and punish those that are not.
If this isn’t a “people are great” view of the world, I don’t know what is. In my opinion, those that believe we NEED police and we NEED the government and we NEED the military, those people are the true pessimists and have a vulgar view of humanity.
Yes, this “people are great” viewpoint is exactly the naivety that fuels many libertarian fantasies. But it is wrong, and it isn’t even self-consistent: you can’t simultaneously believe that people are self-interested rational economic agents AND essentially good too. Those are different, typically mutually exclusive things.
Being a self-interested rational economic agent means you will skirt tax law to better yourself, pollute the environment instead of paying more for expensive eco-friendly options, ignore homelessness because it isn’t your problem, etc. And most people do those things, so we know that most people CAN’T be trusted to make better choices. People are fundamentally jerks, and need nannying for grownups (i.e. government). It is ignorant to think otherwise.
Be careful with who you mean when you say “libertarian”. Are you talking about the American Libertarian, or the European variety? Judging from your criticism, you clearly mean American Libertarian. Keep in mind the cryptocurrency community isn’t solely composed of American-style Libertarians.
I think your criticism that rational self-interest contradicts the “people are great” viewpoint is valid. However, your pessimistic conclusion about people is wrong. Do you believe that people are rational economic agents?
And most people do those things, so we know that most people CAN’T be trusted to make better choices.
You are clearly proving my point, which is that the true pessimists are those who believe everyone is fundamentally a jerk and that therefore we need a state. Just because we have one now doesn’t prove people are jerks. Slave systems lasted for thousands of years, and feudalism lasted another thousand. The current system of countries won’t last forever either. Progress is made.
What I think is true, is that people are good at adapting to any environment they are in, and if they grow up in that environment, they see it as normal. In other words, we are great at normalizing anything. It doesn’t mean people’s nature is the same as that environment.
This is why I believe the computer is man’s greatest invention. Computers have the potential to amplify and enhance our abilities. Douglas Engelbart envisioned them as extensions of our mind.
We have these amazing tools to connect and enrich humanity, to let people grow and become super people. That’s why I write software. Yes, the money helps, but please don’t tell anyone I would be doing this for free too; let’s keep it our little secret ;-)
You are clearly proving my point
Yes, I’m saying pessimism on this subject is the correct position based on the actions of real life people.
It doesn’t mean people’s nature is the same as that environment.
If you can somehow change human nature so we naturally take care of each other without coercion, I’m all in. But I don’t see how software development or absence of government will suddenly make people care about, say, the poor.
I think you have it exactly backwards. It’s the environment that is creating uncaring people, not their nature. The coercion is what is creating uncaring people. Proof is that other environments have caring people. Our environment promotes certain traits over others. Are people capable of not caring? Of course. Is that the natural state of man? Hell no.
When the Europeans came to the Americas, they found egalitarian cultures. These cultures were not primitives who were naive about civilization. It turns out the Americas had a gigantic civilization along the Mississippi that rivaled Europe, but it disappeared hundreds of years before the Europeans visited. These tribes were refugees of civilization. In other words, they structured their culture to prevent the horrors of civilization from happening again. Unfortunately, it made them vulnerable to extermination by the barbarians.
The coercion is what is creating uncaring people. Proof is that other environments have caring people.
That’s not proof, there are many other variables involved: like the size of the population, the technology level of the civilization, etc. Sociology rarely has such black and white explanations.
A bigger issue, though, is that even if you are right, we have no viable transition plan. If you were to drop government regulation tomorrow, it would take decades for all of these “environment-created jerks” to die off, and if it turns out you’re wrong, we are stuck that way indefinitely.
At least with regulation we can now provide basic human care to each other: shelter, food, medicine. That’s a much less bad worst case than your way (even if the best case isn’t quite so kumbaya).
We also don’t need to speculate about this, given that Republicans usually vote anti-regulation. We’ve seen them deregulate quite a few things. One of those resulted in the near collapse of our financial system. More recently, their work against net neutrality is reinforcing some abusive practices. There are similar problems in food, medicine, and legal liability for companies. The amount of evil goes up after the regulations are removed. Those evils are, of course, the very reason the regulations were there.
Potholes are there because the voters and government are doing nothing. It takes some tiny group operating at a loss to cover the problem others are generating. The problem will continue as this one does in many areas. The solution is so rare it’s newsworthy.
If anything, the link proves pessimism is justified despite the occasional efforts by great folks like that.
Really? Just picture yourself saying something similar right before the American Civil War:
“If anything, these abolitionists prove pessimism is justified despite the occasional efforts by great folks like that.” Doesn’t that sound funny?
You always have to cherry-pick to back your optimistic predictions. Pick something in my lifetime. Nah, I’ll make it easier: just the past 18 years. In that time span, people will have been born, gone through school, and entered college with the ambition to change the world. What have they seen turn major problems on their head across the nation in terms of the big causes? Last I checked, we’ve been losing ground, with quite a few of the protests not going so well. The villains even won several times, from the Patriot Act to Middle Eastern policy to the 2008 bailout plus immunity, to reduced liability for pharma murder, to brief wins on net neutrality followed by recent rollbacks. We also got a Middle Eastern-style dictator who beat the fascist, both of whom mock the important parts of the Constitution.
What would you show, not tell, the person wanting to get active that was a revolutionary good happening in their lifetime? Or even major resistance that worked with persistence on the topic over time vs fire-and-forget, temporary victory? I’m not getting much popping in my head.
This one is easy: Rojava is happening now and is the best example I can think of. It is still a fight, but they are kicking some major ass. Read their constitution. Watch this documentary.
I was talking about the U.S.. It’s cool people are fighting to make things better in other countries, though.
Antifa in the US are pretty cool. The white supremacists are saying Antifa are winning.
I’d really like some references about the Mississippi civilization you are talking about.
I knew nothing about it!
Here’s a fun article to start with, from an infotainment site. Just Google anything you find questionable. ;)
Just Google anything you find questionable.
Doesn’t work: I’ve just searched “Google” on Google… :-D
(Funny read, thanks)
The only shot these decentralized networks/social things have of accomplishing their dream is to ditch federation and deploy true P2P via desktop/mobile apps. Think old Skype.
Any server requirement more than a non-proxying NAT hole puncher is a death sentence for decentralized services targeting mainstream users (unless you get big investments into a handful of quasi-centralized servers). The network needs to run on a handful of techie users fronting $10/month in server resources, or it just won’t scale.
People can install apps, they can’t manage servers.
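The “non-proxying NAT hole puncher” mentioned above can be surprisingly small. As a hedged illustration (the function, room-name protocol, and port are my own inventions, not from any of the projects discussed), a UDP rendezvous server only has to tell two peers each other’s publicly observed endpoints; the peers then send packets directly to each other to punch through their NATs:

```python
# Minimal sketch of a UDP rendezvous ("hole punching") server.
# Peers send a room name; once two different peers have registered under
# the same room, each is told the other's (address, port) as observed by
# the server, and they can then exchange packets directly.
import socket

def rendezvous(port=3478):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    waiting = {}  # room name -> (addr, port) of the first peer to register
    while True:
        data, addr = sock.recvfrom(64)
        room = data.decode().strip()
        if room in waiting and waiting[room] != addr:
            peer = waiting.pop(room)
            # tell each side the other's observed public endpoint
            sock.sendto(f"{peer[0]}:{peer[1]}".encode(), addr)
            sock.sendto(f"{addr[0]}:{addr[1]}".encode(), peer)
        else:
            waiting[room] = addr
```

The server never proxies traffic, so its cost stays tiny regardless of how much the peers exchange, which is the point the comment is making.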
People should just implement these things using email as the communication substrate. The servers are already out there and federated.
This is the conclusion I came to as well. The “liked by whom” data is metadata that would have to be stored somewhere, though.
That can be sent around via email as well. Likes can be implemented as a CRDT, so people may not always see a consistent view, but it converges over time.
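As a hedged sketch of the CRDT idea (the class and its names are invented for illustration), likes can be modelled as a grow-only set keyed by the liking user. Replicas merge by set union, which is commutative, associative, and idempotent, so replicas converge regardless of the order or duplication of updates:

```python
# Grow-only set (G-Set) CRDT for "likes" on a post, identified by user.
# State can travel over email or any other channel; any two replicas
# that exchange and merge state end up identical.
class LikeSet:
    def __init__(self, likes=None):
        self.likes = set(likes or ())

    def like(self, user):
        self.likes.add(user)

    def merge(self, other):
        # union is commutative, associative, and idempotent, so merge
        # order and repeated merges don't matter
        self.likes |= other.likes

    def count(self):
        return len(self.likes)
```

Note that a plain G-Set cannot express “unlike”; supporting removal would need a two-phase or observed-remove set, which is a modest extension of the same idea.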
You don’t have to use their personal email addresses, they can create free ones on gmail or wherever. I’m just saying to use email as the underlying protocol.
That doesn’t help. Even if it was a randomly chosen email, the sender and receiver are in the clear for the network to see and construct the social graph. Even if you rotate emails it’s probably still reconstructible.
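To see why, note that an observer only needs the (sender, receiver) pairs to rebuild the social graph; a toy sketch (addresses invented for illustration):

```python
# An observer of email metadata sees (sender, receiver) pairs even when
# message bodies are encrypted; collecting them reconstructs the graph.
from collections import defaultdict

def build_graph(observed_mail):
    graph = defaultdict(set)
    for sender, receiver in observed_mail:
        graph[sender].add(receiver)
        graph[receiver].add(sender)
    return graph
```

Rotating addresses only relabels the nodes; timing and traffic patterns typically let the observer link the old labels to the new ones.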
Not all federation has expensive costs. Pleroma can run on a Raspberry Pi. The Pleroma/Mastodon/GNU Social network is around a million users right now, so I’m not sure that argument holds. That being said, I would also love to see “old Skype”-style apps. Ring.cx is a good example of this working well. Decentralization and federation don’t have to be mutually exclusive, and we should stop thinking about this space as an either/or.
This is the approach I took with Firestr: just download the app and run it. The only server component is a non-proxying NAT hole puncher. I took this approach because that’s exactly what I thought: no user is going to run a server, so it has to be an all-in-one experience where you just run the app and go.
Isn’t Skype a bad example of “true decentralisation” (I assume you mean “distributed”), since a central server managed usernames, statuses, and IP address discovery (if I’m not wrong)? Attempts at truly P2P networks (let’s take the standard example of IM/video chat, like Tox) suffer from cryptic user names (i.e. DHT codes), the need for both parties to be simultaneously online for messages to be delivered, and, most of the time, a “hacky” feel to the whole setup. The last issue could be avoided by good cooperation between a design/UX/UI and developer team, but I don’t see any way around the first two without setting some absolute standards (e.g. reference servers).
It works for certain use cases (for example, FireChat for physically co-located crowds, or Tox for anonymous chat), but this doesn’t do what most people want, which has sadly always been what centralized systems are intrinsically good at: deferring responsibility to validated identities, transmitting information, and guaranteeing operation on behalf of users through some other instance that is usually legally accountable.
I 100% agree. This is the approach we’ve taken with Peergos. You can create your account by running the desktop version, or you can sign up on our central server (or anyone else’s), but your identity, social graph, etc. have nothing to do with that choice of server. All that choice decides is where your data is initially stored. Through the magic of IPFS it’s accessible from anywhere; we only need at least one server storing each user’s files to guarantee no loss.
Moving an account you created on our server to your own (desktop or cloud instance) is trivial and doesn’t lose any data, metadata or social connections. This gives both a nice on-boarding experience, and also allows us to satisfy a wide range of threat models. The average user can just log in to a server via a web browser exactly like facebook. More discerning users can run their own server, in the cloud or at home.
The problem turned out to be some obscure FUSE mounts that the author had lying around in a broken state, which subsequently broke the kernel namespace setup. Meanwhile, I have been running systemd on every computer I’ve owned for many years and have never had a problem with it.
Does this not seem a bit melodramatic?
From the twitter thread:
It sounds like the system had an opportunity to point out an anomaly that would guide the operator in the right direction, but instead decided to power through anyways.
Continuing to run in a degraded state is a plague that affects distributed systems. Everybody thinks “some service is surely better than no service” is a good idea until it happens to them.
At $work we prefer degraded mode for critical systems. If they go down we make no money, while if they kind of sludge on we make less but still some money while we firefight whatever went wrong this time.
My belief is that, inevitably, you could be making $100 per day, would notice if you made $0, but are instead making $10 and won’t notice for six months. So be careful.
We have monitoring and alerting around how much money is coming in, that we compare with historical data and predictions. It’s actually a very reliable canary for when things go wrong, and for when they are right again, on the scale of seconds to a few days. But you are right that things getting a little suckier slowly over a long time would only show up as real growth not being in line with predictions.
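A minimal sketch of such a canary (the threshold and the function are illustrative, not the commenter’s actual setup) just compares current income against the historical mean and flags large deviations:

```python
# Flag when the current period's revenue deviates from the historical
# baseline by more than `sigmas` standard deviations.  History would
# come from the same time-of-day/day-of-week window in practice.
from statistics import mean, stdev

def revenue_alert(history, current, sigmas=3.0):
    mu = mean(history)
    sd = stdev(history)
    return abs(current - mu) > sigmas * sd
```

As the comment notes, this catches sharp drops within seconds to days, but a slow decline shifts the baseline along with the signal, which is why it only shows up as growth missing predictions.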
I tend to agree that hard failures are nicer in general (especially to make sure things work), but I’ve also been in scenarios where buggy logging code has caused an entire service to go down, which… well that sucked.
There is a justification for partial service functionality in some cases (especially when uptime is important), but like with many things I think that judgement calls in that are usually so wrong that I prefer hard failures in almost all cases.
Running distributed software on snowflake servers is the plague to point out.
So if the server is over capacity, kill it and don’t serve anyone?
Router can’t open and forward a port, so cut all traffic?
I guess that sounds a little too hyperbolic.
But there’s a continuum there. At $work, I’ve got a project that tries to keep going even if something is wrong. Honestly, I’m not sure I like how all the errors are handled. But then again, the software is supposed to operate rather autonomously after initial configuration. Remote configuration is part of the service; if something breaks, it’d be really nice if the remote access and logs were still reachable. And you certainly don’t want to give up over a problem that may turn out to be temporary or something that could be routed around… reliability is paramount.
I think that’s close to the core of the problem. Temporary problems recur, worsen, etc. I’m not saying it’s always wrong to retry, but I think one should have some idea of why the root problem will disappear before retrying. Computers are pretty deterministic. Transient errors indicate incomplete understanding. But people think a try-catch in a loop is “defensive”. :(
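One way to retry without a blind try-catch loop is bounded exponential backoff that re-raises after a few attempts, so a fault that keeps recurring surfaces as a hard failure instead of being masked forever. A sketch (parameter values are illustrative):

```python
# Retry an operation with exponential backoff, but only a bounded
# number of times: a genuinely transient fault gets a second chance,
# while a persistent fault propagates as a loud, debuggable error.
import time

def retry(op, attempts=4, base_delay=0.1):
    for i in range(attempts):
        try:
            return op()
        except Exception:
            if i == attempts - 1:
                raise  # persistent fault: fail hard, don't loop forever
            time.sleep(base_delay * 2 ** i)
```

The bound encodes exactly the point above: you retry only when you have some reason to believe the root cause will disappear, and you give up visibly when it doesn’t.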
So you never had legacy systems (or configurations) to support? I read Chris’s blog regularly, and he works at a university on a heterogeneous network (some Linux, some other Unix systems) that has been running Unix for a long time. I think he started working there before systemd was even created.
Why do you say that the FUSE mounts were broken? As far as we can see, they were just set up in an uncommon way: https://twitter.com/thatcks/status/1027259924835954689
It does look brittle that broken FUSE mounts prevent ntpd from running. IMO the most annoying part is the debuggability of the issue.
Yes, it seems melodramatic, even to my anti-systemd ears. It’s a documentation and error reporting problem, not a technical problem, IMO. Olivier Lacan gave a great talk last year about good errors and bad errors (https://olivierlacan.com/talks/human-errors/). I think it’s high time we start thinking about how to improve error reporting in software everywhere – and maybe one day human-centric error reporting will be as ubiquitous as unit testing is today.
In my view (as the original post’s author) there are two problems in view. That systemd doesn’t report useful errors (or even notice errors) when it encounters internal failures is the lesser issue; the greater issue is that it’s guaranteed to fail to restart some services under certain circumstances due to internal implementation decisions. Fixing systemd to log good errors would not cause timesyncd to be restartable, which is the real goal. It would at least make the overall system more debuggable, though, especially if it provided enough detail.
The optimistic take on ‘add a focus on error reporting’ is that considering how to report errors would also lead to a greater consideration of what errors can actually happen, how likely they are, and perhaps what can be done about them by the program itself. Thinking about errors makes you actively confront them, in much the same way that writing documentation about your program or system can confront you with its awkward bits and get you to do something about them.
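As a small illustration of what “good errors” might mean here (the check and its wording are hypothetical, not from systemd or the talk), a human-centric error names the failing resource, what was expected, and a concrete next step:

```python
# A human-centric error: instead of silently failing or printing a bare
# errno, say which path failed, what was expected, and what to try next.
import os

def check_mount(path):
    if not os.path.ismount(path):
        raise RuntimeError(
            f"{path} is not a mount point; expected a mounted filesystem "
            f"here. Try `findmnt {path}` to inspect it, or unmount any "
            f"stale entries before restarting the service."
        )
```

The content of the message is the product of exactly the exercise described above: to write it, the author had to enumerate what can go wrong at this step and what the operator can do about it.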