I can’t decide if Let’s Encrypt is a godsend or a threat.
On one hand, it lets you support HTTPS for free.
On the other, they are accumulating enormous power worldwide.
Agreed, they are quickly becoming the only game in town when it comes to TLS certs. Luckily they are a non-profit, so they have more transparency than, say, Google, which took over our email.
It’s awesome that we have easy, free TLS certs, but there shouldn’t be a single provider for such things.
Is there anything preventing another (or another ten) free CAs from existing? Let’s Encrypt just showed everyone how, and their protocol isn’t a secret.
OpenCA tried for a long time, and I think it has pretty much given up by now: https://www.openca.org/ just exists in its own little bubble.
Basically nobody wants to certify you unless you are willing to pay through the nose and are considered friendly to the established way of doing things. LE bought their way in, I’m sure, to get their cert cross-signed; that’s how they managed it so “quickly”, and it still took YEARS.
I’ve created lots of CAs, trusted by at most 250 people. :)
Of course it’s not easy to make a new generally-trusted CA — nor would I want it to be. It’s a big complicated expensive thing to do properly. But if you’re willing to do the work, and can arrange the funding, is anything stopping you? I don’t know that browser vendors are against the idea of multiple free CAs.
Obviously I was not talking about the technical stuff.
One of my previous bosses explored the matter. He already had the technical staff, but he wanted to become an official authority. This was around 2005.
After a while (and a lot of money spent on legal consulting) he gave up.
He said: “it’s easier to open a bank”.
In a sense, that’s reasonable, as European law aims to protect citizens from unsafe organisations.
But, it’s definitely not a technical problem.
Luckily they are a non-profit
Linux Foundation is a 501(c)(6) organization, a business league that is not organized for profit and no part of the net earnings goes to the benefit of any private shareholder or individual.
The fact that all shareholders benefit from its work without a direct economic gain doesn’t mean it has the public good at heart, much less the public good of the whole world.
It sounds a lot like another attempt to centralize the Internet, always around the same center.
It’s awesome that we have easy, free TLS certs, but there shouldn’t be a single provider for such things.
And such certificates protect people from a lot of relatively cheap attacks. That’s why I’m in doubt.
Probably, issuing TLS certificates should be a public service free for each citizen of a state.
Oh Jeez. Thanks, I didn’t realize it was not a 501(c)(3). When LE was first coming around they talked about being a non-profit and I just assumed. That’s what happens when I assume.
Proof, so we aren’t just taking @Shamar’s word for it:
Linux Foundation Bylaws: https://www.linuxfoundation.org/bylaws/
Section 2.1 states the 501(c)(6) designation with the IRS.
My point stands, that we do get more transparency this way than we would if they were a private for-profit company, but I agree it’s definitely not ideal.
So you think local cities, counties, states and countries should get in the TLS cert business? That would be interesting.
It’s true the Linux Foundation isn’t a 501(c)(3) but the Linux Foundation doesn’t control Let’s Encrypt, the Internet Security Research Group does. And the ISRG is a 501(c)(3).
So your initial post is correct and Shamar is mistaken.
The Linux Foundation will provide general and administrative support services, as well as services related to fundraising, financial management, contract and vendor management, and human resources.
This is from the page linked by @philpennock.
I wonder what is left to do for the Let’s Encrypt staff! :-)
I’m amused by how easily people forget that organisations are composed of people.
What if Linux Foundation decides to drop its support?
No funds. No finance. No contracts. No human resources.
Oh and no hosting, too.
But hey! I’m mistaken! ;-)
Unless you have inside information on the contract, saying LE depends on the Linux Foundation is pure speculation.
I can speculate too. Should the Linux Foundation withdraw support there are plenty of companies and organisations that have a vested interest in keeping LetsEncrypt afloat. They’ll be fine.
Agreed.
Feel free to think that it’s a philanthropic endeavour!
I will continue to think it’s a political one.
The point (and as I said, I cannot answer it yet) is whether the global risk of a single US organisation being able to break most HTTPS traffic worldwide is worth the benefit of free certificates.
Any trusted CA can MITM, though, not just the one that issued the certificate. So the problem is (and always has been) much, much worse than that.
Good point! I stand corrected. :-)
Still, note how it’s easier for the certificate issuer to go unnoticed.
What’s Linux Foundation got to do with it? Let’s Encrypt is run by ISRG, Internet Security Research Group, an organization from the IAB/IETF family if memory serves.
They’re a 501(c)(3).
LF provide hosting and support services, yes. Much as I pay AWS to run some things for me, which doesn’t lead to Amazon being in charge. https://letsencrypt.org/2015/04/09/isrg-lf-collaboration.html explains the connection.
Look at the home page, top-right.
The Linux Foundation provides hosting, fundraising and other services. LetsEncrypt collaborates with them but is run by the ISRG:
Let’s Encrypt is a free, automated, and open certificate authority brought to you by the non-profit Internet Security Research Group (ISRG).
An IMAP server handles N users and can have M shared folders readable by all of them. Rather than present every user with the complete list, each user has a maintained subscription list, and mail clients (used to? mine still does) default to showing only the subscribed folders, with a toggle to see all folders and subscribe/unsubscribe as desired.
I for one actively use this, with mail folders. Some lists which I might occasionally want to delve into local copies of, I unsub from. The mail still flows in, I don’t need to get notified for it, but I can search it locally when I want.
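For anyone who hasn’t poked at this layer of IMAP, here’s a minimal sketch of the subscription mechanics using Python’s standard imaplib (the host, credentials, and folder names are made up):

```python
import imaplib

# Hypothetical server and credentials.
M = imaplib.IMAP4_SSL("imap.example.org")
M.login("user", "app-password")

# LIST returns every folder visible to this user...
typ, all_folders = M.list()

# ...while LSUB returns only the subscribed folders, which is what
# many clients display by default.
typ, subscribed = M.lsub()

# Drop a busy list folder from the default view; mail still flows
# in and remains searchable locally.
M.unsubscribe("lists/some-busy-list")

# Re-subscribe whenever you want it visible again.
M.subscribe("lists/some-busy-list")

M.logout()
```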
You haven’t even touched on the two vaguely-compatible revisions of the ACL flags and their meanings and how the permissions need to map from one to the other. ;) Nor that for the sake of letting some ancient servers continue to be inefficient, clients are barred by spec from certain behaviors across mailboxes. The IMAP police will tell you how wrong and evil you are for wanting, say, a count of total/new/unread mail across 30 mailboxes. Sometimes you just have to ignore the spec and say “this tool is for competently written IMAP servers” and go ahead and issue a bunch of STATUS commands.
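For illustration, a rough sketch of that “bunch of STATUS commands” approach with Python’s imaplib (the server and mailbox names are hypothetical):

```python
import imaplib

M = imaplib.IMAP4_SSL("imap.example.org")
M.login("user", "app-password")

# One STATUS per mailbox, no SELECT required; a competently
# written server answers these cheaply.
for mbox in ["INBOX", "lists/exim-users", "lists/sks-devel"]:
    typ, data = M.status(mbox, "(MESSAGES UNSEEN)")
    print(mbox, data)  # e.g. [b'INBOX (MESSAGES 1024 UNSEEN 3)']

M.logout()
```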
Disambiguating async notifications from part of the response to a given command almost requires an organically grown codebase handling the entire history of IMAP, rather than having a clean model.
There’s a reason that the IETF Working Group to turn IMAP+SMTP into something which mobile clients could use for sane handling of attachments ended up being called “lemonade”. When life hands you …
[full disclosure: I’m the Phil referenced; I’m not an SKS maintainer, but did write various wiki pages and do have patches in the codebase]
The attacks causing disks to fill are problems with specific keys breaking reconciliation and triggering transaction failures in BDB, leading to many GB of disk usage by those unable to get the broken key.
On-disk size has gone from around 6GB to 40+GB in the space of a couple of weeks, and that’s what’s knocked a bunch of SKS systems offline, repeatedly. All the decades of cruft is an order of magnitude less disk space than that caused by a couple of keys designed to break SKS.
Also, Kristian is one of the SKS developers, but is not the original developer. He, like everyone else involved, is a volunteer with a day-job unrelated to SKS.
I’ve been on the SKS devel mailing-list for probably 8 years (guess) and I’ve never seen hostility to the idea that SKS should change or to any reasonable proposal for doing so. I’ve seen various levels of resignation and annoyance at (1) people who propose changes without thinking through how to deal with the fundamental SKS reconciliation algorithm; (2) people who make demands that others do work for them, but never contribute patches themselves. The Almighty Designers who sketch out a non-viable proposal and can’t understand why others aren’t prepared to leap to do the work to make their vision a reality.
In stark contrast, in March Andrew Gallagher posted (thread “SKS apocalypse mitigation”) and took on board the points about algorithm and design issues and himself put in the effort to design something which might work. Haven’t seen code yet, but he’s demonstrated how easy it is to get a productive discussion if you’re willing to take account of engineering design constraints; so many before have instead pouted and stomped their feet and said “well that should be fixed”.
Hockeypuck has been around for a few years; it’s gained a little traction, but is not a silver bullet: it peers by using the SKS reconciliation algorithm and what’s needed is a design approach to change how reconciliation happens, not just a different codebase. SKS itself is GPLv2, Hockeypuck is AGPLv3, both are available for folks to work on and propose changes.
Thank you for the reply; I have added an edit about why the servers have gone offline. Could you send me the link to Andrew Gallagher’s thread? I would be interested in reading it. (Edit: I found the link.)
Thanks. As a user, even though I enjoyed reading the post and am aware of the issues, I always like to hear/read the other side of the story/argument.
Yeah, I know someone who runs a keyserver and they are getting absolutely sick of responding to the GDPR troll emails.
Love the idea of using ActivityPub (the same technology involved in Mastodon) for keyservers. That’s really smart!
Offtopic: Excuse me.
I think it depends on some conditions, so not everybody is going to see this every time. But when I click on Medium links I tend to get this huge dialog box come up over the entire page saying something about registering. It’s really annoying. I wish we could host articles somewhere that doesn’t do this.
My opinion is that links should be links to some content. Not links to some kind of annoyware that I have to click past to get to the real article.
Could you give an example? That sounds like a pleasant improvement, but I don’t know exactly what you mean by a cached link.
I started running uMatrix and added rules to block all 1st party JS by default. It does take a while to white list things, yes, but it’s amazing when you start to see how many sites use Javascript for stupid shit. Imgur requires Javascript to view images! So do all Square Space sites (it’s for those fancy hover-over zoom boxes).
As a nice side effect, I rarely ever get paywall modals. If the article doesn’t show, I typically plug it into archive.is rather than enable javascript when I shouldn’t have to.
I do this as well, but with Medium it’s a choice between blocking the pop-up and getting to see the article images.
I think if you check the ‘spoof &lt;noscript&gt; tags’ option in uMatrix then you’ll be able to see the images.
How timely! Someone at the office just shared this with me today: http://makemediumreadable.com
From what I can see, the popup is just a begging bowl, there’s actually no paywall or regwall involved.
I just click the little X in the top right corner of the popup.
But I do think that anyone who likes to blog more than a couple of times a year should just get a domain, a VPS and some blog software. It helps decentralization.
I use the kill sticky bookmarklet to dismiss overlays such as the one on medium.com. And yes, then I have to refresh the page to get the scroll to work again.
On other paywall sites, when I can’t scroll (perhaps because I removed some paywall overlay to get at the content below), I’m able to restore scrolling by finding the overflow-x CSS property and altering or removing it. …Though, that didn’t work for me just now on medium.com.
Actually, it’s the overflow: hidden; CSS that I remove to get pages to scroll after removing some sticky div!
I run an SKS keyserver, have some patches in the codebase, wrote the operations documents in the wiki, etc.
Each keyserver is run by volunteers, peering with each other to exchange keys. The design was based around “protection against government attempts to censor keys”, dating from the first crypto wars. They’re immutable append-only logs, and the design approach is probably about dead. Each keyserver operator has their own policies.
I am a US citizen, living in the USA, with a keyserver hosted in the USA. My server’s privacy statement is at https://sks.spodhuis.org/#privacy but that does not cover anyone else running keyservers. [update: I’ve taken my keyserver down, copy/paste of former privacy policy at: https://gist.github.com/philpennock/0635864d34a323aa366b0c30c7360972 ]
You don’t know who is running keyservers. It’s “highly likely” that at least one nation has some acronym agency running one, at some kind of arms-length distance: it’s an easy and cheap way to get metadata about who wants to communicate privately with whom, where you get the logs because folks choose to send traffic to you as a service operator. I went into a little more depth on this over at http://www.openwall.com/lists/oss-security/2017/12/10/1
Thanks for this info.
Fundamentally, GDPR is about giving the right to individuals to censor content related to themselves.
A system set out to thwart any censorship will fall afoul of GDPR, based on this interpretation.
However, people who use a keyserver are presumably A-OK with associating their info with an append-only immutable system. Sadly, GDPR doesn’t really take this use case into account (I think; I am not a lawyer).
I think what’s important to note about GDPR is that there’s an authority in each EU country that’s responsible for handling complaints. Someone might try to troll keyserver sites by attempting to remove their info, but they will have to make their case to this authority. Hopefully this authority will read the rules of the keyserver and decide that the complainant has no real case based on the stated goals of the keyserver site… or they’ll take this as a golden opportunity to kneecap (part of) secure communications.
I still think GDPR in general is a good idea - it treats personal info as toxic waste that has to be handled carefully, not as a valuable commodity to be sold to the highest bidder. Unfortunately it will cause damage in edge cases, like this.
@gerikson, you make really good points there about the GDPR.
Consenting people are not the entire focus here, though; it’s about current and potential abuse of the servers, and about people who have not consented to their information being posted and who have no way to get it removed.
The supervisory authorities won’t ignore that; this is why the keyservers need to change, to prevent further abuse and their own extinction.
They also won’t give this case special consideration, just like the recent ICANN case, where ICANN wanted storing your information publicly with your domain to be a requirement and was rejected outright. The keyservers are not necessary to the functioning of the keys you upload, and a big part of the GDPR is processing data only for as long as necessary.
Someone recently made a point about the term below, non-repudiation.
In digital security, non-repudiation means:
- A service that provides proof of the integrity and origin of data.
- An authentication that can be asserted to be genuine with high assurance.
Keyservers don’t do this! You can have the same email address as anyone else, and even the maintainers and creator of the SKS keyservers state this, recommending you check through other means, such as by telephone or in person, to see whether keys are what they appear to be.
I also don’t think this is an edge case; I think it’s a wake-up call to rethink the design of the software and catch up with the rest of the world, quickly.
Lastly, I don’t approve of trolling; if you’re doing it just for the sake of doing it, DON’T. But if you genuinely feel the need to submit a “right to erasure” request because you did not consent to having your data published, please do it.
Thank you for the link: http://www.openwall.com/lists/oss-security/2017/12/10/1. It’s a fantastic read and makes some really good points.
It’s easy for anyone to get hold of recent dumps from the SKS servers; just yesterday I hunted through a recent dump of 5 million+ keys looking for interesting data. I will be writing an article about it soon.
I totally agree; it has been bothering me as well, and I am considering starting up my own self-hosted blog. I also don’t like Medium’s method of charging for access to people’s stories without giving them anything.
I’m thinking of setting up a blog platform, like Medium, but totally free of bullshit for both the readers and the writers. Though the authors pay a small fee to host their blog (it’s a personal website/blog engine, as opposed to Medium which is much more public and community-like).
If that could be something that interests you, let me know and I’ll let you know :)
Correction: it turns out you can get paid if you sign up for their partner program, but I think it requires approval and such.
hey @pushcx, is there a feature where we can prune a comment branch and graft it on to another branch? asking for a friend. Certainly not a high priority feature.
No, but it’s on my list of potential features to consider when Lobsters gets several times the comments it does now. For now the ‘off-topic’ votes do OK at prompting people to start new top-level threads, but I feel like I’m seeing a slow increase in threads where promoting a branch to a top-level comment would be useful enough to justify the disruption.
The less successful person didn’t write much code, and he had excellent reasons why: I’m too busy! The person who made the request can’t wait! I have 100 other things to do today! Nobody’s allocating time for me to write code!
Google has this fixed: SREs can spend at most 50% of their time doing manual admin work.
Certainly wasn’t true when I was an SRE at Google. The person on my team who documented everything he did manually (such that the first automated system for doing that work was called “Electric [hisname]”) was penalized for rolling up his sleeves to do the grotty work and logging things as Tom advocates here, while others, who were all about parroting rules on how much could be in shell and so on, were studiously nowhere to be found in the aftermath of an Emergency Power Off.
I know which people I’d rather have on my team again. Tom’s article is excellent.
Ah, to be clear, I have never worked at Google; I just read that from an official source. (I think it was Google’s book on SRE.)
Slide 42, figure 6’s “Representation of a break statement” (Greenfoot, 2006) is really intriguing as an example of using color and graphics to make potential problems much easier to spot. Has anyone seen a plugin for Vim which can do something like this, either for C or for any langserver backend? While C macros might make it awkward, there are some codebases (and a few security-critical bugs) which would benefit from being able to have this sort of view while reading them.
Please stop. DNSSEC is not a solution. Let it die already.
https://ianix.com/pub/dnssec-outages.html
“Reminder: you could publish the DNSSEC root RSA secret keys on Pastebin and nothing on the Internet that matters would break.”
edit: oh I forgot about this gem
“Overlooking some DNSSEC outages because they’re so frequent: By default, Unbound ignores for up to 24 hours any DNSSEC failure resulting from expired RRSIGs.”
Let what die? DNSSEC? It’s at over 50% of all .NL domains and generally on an upwards trend. The number of mail-systems being protected with DANE (TLSA records in DNSSEC-signed domains) is ever-increasing, since the only alternative for MX delivery is MTA-STS (spec still in draft, has gone through incompatible changes, and bakes in the same failure modes which led us to reject TLSA Usages 0 and 1 for DANE/SMTP).
Every Internet technology ever has led to outages in the early days of deployment, until people figured out how to make tools more robust … and even then has led to reductions in the frequency of outages, not to eliminating them. The questions are “what’s the failure mode?” and “will things improve?”. We see enough outages on a per-domain basis caused by inept management of DNS itself, without DNSSEC, that I don’t see DNSSEC as moving the needle on outage frequency here.
I do see more folks outsourcing their DNS management (eg, AWS Route 53, CloudFlare) and as we’ve seen from CloudFlare’s DNSSEC support, this pays off in getting professionally managed DNS+DNSSEC by people who understand it.
The Internet is full of sites which enumerate mistakes and try to say that the existence of mistakes by individuals means the technology should die. Finding one website which does this for DNSSEC does not mean that DNSSEC is dying.
Oh, and I agree that DNSSEC is ugly and problematic, but for verifying authenticity of name resolution, it’s the only solution we’ve got today. So today, it’s what we deploy. Let’s not abandon something which works, just because it’s not perfect.
I am very annoyed because I wrote a 3 page rebuttal to every point and accidentally force closed my browser when switching apps.
tl;dr it’s a dead RFC from 1997. Its usage is measurably on the decline. We peaked at ~1% of the important domains (.net, .com, and .org).
They tried to use DANE for IRC and nobody wanted it. They removed DANE code from Irssi.
DANE for SMTP is a poor argument with the existence of LetsEncrypt. This argument is so tired I don’t know why it persists.
If you can convince Green, Ptacek, Bernstein, or Marlinspike that DNSSEC is worth having I will rescind my statements. But it’s not going to happen. It’s awful, adds vulnerabilities to DNS resolvers, and has too many failure modes which are completely opaque to end users/applications.
DNSSEC is basically Wayne’s ex-girlfriend Stacy. “It’s over. Get the net!”
If you want security here’s what you do: you use dnscrypt or equivalent to a large provider like OpenDNS. They have the means to actively monitor for cache poisoning and other attacks worldwide in real-time. Voila, you know your DNS isn’t being tampered with.
For browsers/HTTPS, we have a semi-working model now without DNSSEC. I can’t speak authoritatively to the trade-offs which apply there.
SMTP I can speak authoritatively on: I added the initial DNSSEC support to Exim (although Jeremy Harris later picked it up and did the bulk of the work to take it to full DANE support) and talked extensively with Viktor Dukhovni of Postfix on the DANE spec, refining the text which became the RFCs.
For Submission/Submissions service, or smarthost identity configuration, Let’s Encrypt is a sufficient answer.
For MX delivery, LE buys you nothing. For TLS security, you need an identity which you can verify, and that identity cannot be derived by insecure means. With email to an MX, the only verifiable identity is the domain, and the mail-domain is rarely in the certificate SAN list. To fix this, you need a way to map from the domain to a host identity, securely. Further, it needs to be done in such a way that one external domain important to your organization (“when the CEO starts shouting about mail not going through”) can’t force new CAs into your trust-store that then apply to all other domains. This is why DANE for SMTP prohibits TLSA Usage fields 0 and 1, and this is one of the severe flaws in MTA-STS.
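To make that mapping concrete, here’s a rough sketch of the lookup a DANE-aware sender performs, using the dnspython library (the domain is a placeholder, and a validating DNSSEC resolver is assumed; real MTAs do this internally):

```python
import dns.resolver

domain = "example.org"  # the mail-domain we're delivering to

# Step 1: find the MX hosts for the domain.
for mx in sorted(dns.resolver.resolve(domain, "MX"),
                 key=lambda r: r.preference):
    mx_host = mx.exchange.to_text().rstrip(".")

    # Step 2: TLSA records for SMTP on that MX host provide the
    # secure domain-to-host-identity mapping.
    try:
        tlsa = dns.resolver.resolve(f"_25._tcp.{mx_host}", "TLSA")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        continue  # no DANE published for this host

    for rr in tlsa:
        # DANE for SMTP only permits Usages 2 and 3 (DANE-TA and
        # DANE-EE), per the prohibition described above.
        print(mx_host, rr.usage, rr.selector, rr.mtype, rr.cert.hex())
```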
My current recommendation for MTA security for MX hosts is to get a Let’s Encrypt cert, setup DANE referencing that, and also set up the MTA-STS publishing side to let senders such as Gmail work, IF you’re willing to keep tracking the MTA-STS drafts for further breaking changes. This is what I set up for exim.org and for some of my own domains.
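For anyone wondering what “setup DANE referencing that” amounts to, here’s a sketch of deriving a “3 1 1” (DANE-EE, SPKI, SHA-256) TLSA value from a PEM certificate with the cryptography package (the file path and hostname are placeholders):

```python
import hashlib
from cryptography import x509
from cryptography.hazmat.primitives.serialization import (
    Encoding,
    PublicFormat,
)

# Hypothetical path to the MX host's Let's Encrypt certificate.
with open("/etc/ssl/certs/mx.example.org.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# Selector 1 = SubjectPublicKeyInfo; matching type 1 = SHA-256.
spki = cert.public_key().public_bytes(
    Encoding.DER, PublicFormat.SubjectPublicKeyInfo
)
digest = hashlib.sha256(spki).hexdigest()

# The record to publish in the (DNSSEC-signed) zone:
print(f"_25._tcp.mx.example.org. IN TLSA 3 1 1 {digest}")
```

Matching on the public key rather than the whole certificate means routine Let’s Encrypt renewals don’t invalidate the record, as long as the underlying key is reused.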
Thus Let’s Encrypt solves absolutely nothing for MX SMTP.
“If you want security” … I say that you first need to break down what you mean by “security” and who “you” is. When it comes to DNS, there’s authenticity and there’s privacy. dnscrypt provides privacy between you and whomever you talk to, as does DNS-over-HTTPS and DNS-over-TLS. dnscrypt does not provide any protection against tampering, whether at the resolver provider (under court order) or between them and the upstream.
If “you” is an end-user or home operator, then you can carefully pick a DNS resolver and choose one who don’t actively tamper with the results for profit (and where you trust the jurisdiction, etc), then using an external provider with very-local-to-you resolvers, or with client-subnet support, pays off and gets you fast easy wins and is usually worth doing. If you pick one which does DNSSEC validation for you and has privacy/integrity between you and them, then you’re in a strong position. Google, CloudFlare, censurfridns, Verisign Labs, these are decent choices.
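One rough way to check whether the resolver you picked actually validates is to look for the AD flag in its responses, sketched here with dnspython (the resolver address is just an example):

```python
import dns.flags
import dns.resolver

r = dns.resolver.Resolver(configure=False)
r.nameservers = ["1.1.1.1"]  # example public resolver

# Send the DO bit so the resolver reports validation status.
r.use_edns(0, dns.flags.DO, 1232)

# Query a signed zone; AD (Authenticated Data) set in the reply
# means the resolver validated the DNSSEC chain on our behalf.
ans = r.resolve("ietf.org", "A")
print(bool(ans.response.flags & dns.flags.AD))
```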
If you’re a mail-server operator with bulk DNS traffic, that’s less tenable. There’s a reason that for decades now it’s been best practice for MTA operators for domains handling any non-trivial traffic to have a local resolver, either on-subnet or on-host. Thus the large external providers don’t help.
Urgh, what a horrible page. @johnblood this page is a clusterfuck, hope your ad revenue is nice.
I’m not convinced: the cost of the extension boards is insanely high given what it would cost to just shove it all on one board, and two of them don’t have active components, so you need to purchase the actual hardware (e.g. the WNIC or SFP module) on top, not to mention antennas for WiFi.
The super professionally produced video makes it seem even more like crowdfunding fodder to make a buck.
cz.nic appears to be a non-profit; I’m not familiar with Czech law, but section 46 of their statutes prohibits disbursements to their member base, and it’s an association of legal entities, not a share-based structure. The statutes: https://www.nic.cz/files/nic/doc/Stanovy__20170701_AJ.pdf
So, no “making a buck”; I believe that the people involved are all salaried. cz.nic have been doing good solid open source software work for many years. It honestly looked to me like a fun video put together in the spirit of crowd-funding, relying upon “humor” and editing away anyone going “uhm” or “er”.
I backed the Turris Omnia and am Very Happy with the resulting product, as it’s by far the best home router I’ve owned. It’s things like “actually pushes out software updates with security fixes, in good time” which help keep it that way. So I backed the Mox too, for more ad-hoc use.
Given how much confusion is created by systems which do allow “foo.bar” and “foobar” to be different email addresses in the same domain, for different users, Gmail saying “we won’t allow that” is wonderful. Given how often people don’t correctly write down dots or whatever when copying email addresses, Gmail’s behavior is also good for getting the mail to just flow.
Saying Netflix shouldn’t have to have insider knowledge misses that (1) they made assumptions which required that insider knowledge, and (2) most sites make insider assumptions. Continuing with 2 for now: every site is allowed to have whatever rules they want for the left-hand-side (LHS), and per the standards the left-hand-side is case-sensitive. If I want “bar@” and “bAr@” to be different email addresses, that’s my business. Any email handling system which generally loses case of the LHS is, technically, broken. The federation used by email allows whatever systems are responsible for a given domain to have complete control over the semantics of the LHS.
In practice, the most widely deployed LHS canonicalization is almost certainly “be case-insensitive”, followed by “have sub-addresses with + or perhaps -”. IMO, the Gmail dot handling is incredibly sane and everyone running mail-systems should seriously consider it.
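To illustrate, here’s a sketch of that Gmail-style canonicalization (a convention for domains known to use these semantics, emphatically not a universal rule):

```python
def canonicalize_gmail_style(address: str) -> str:
    """Case-fold, drop dots in the local part, strip +subaddress.

    Only valid for domains known to use these semantics; per the
    standards, the local part is otherwise opaque to outsiders.
    """
    local, _, domain = address.partition("@")
    local = local.lower().split("+", 1)[0].replace(".", "")
    return f"{local}@{domain.lower()}"

# Both forms canonicalize to the same mailbox under these rules.
assert (canonicalize_gmail_style("Fred.Bloggs+netflix@gmail.com")
        == canonicalize_gmail_style("fredbloggs@gmail.com"))
```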
If I went out filing bugs against systems which made the case-insensitive assumption, then I’d be dismissed as a crazy person. In practice we (almost) all accept that some assumptions will be made. If you want to be safe, or not have to make assumptions, then validate the email addresses used at signup.
A friend had some issues with his wife because four different people had signed up for Ashley Madison using his email address (first-name @ gmail.com) and A-M never validated. Perhaps the potential consequences here highlight why not validating email addresses at sign-up or email address change should be interpreted (legally) as reckless negligence. If you’re going to decide that you don’t need to validate, then you assume responsibility for knowing about the canonicalization performed by every recipient domain. So the author of this piece is flat wrong: the moment Netflix decided to not bother validating email addresses, while also using email addresses as authentication identifiers, they assumed complete responsibility for the security consequences of having correct information about canonicalization used in every domain, to keep their authentication identifiers distinct.
(disclosure: as well as the hat, I’m also a former Gmail SRE, but had nothing to do with this feature)
About 40 years too late to decide to start restricting what can be on the LHS. That’s entirely up to the domain. You can have empty strings, SQL injection attacks, path attacks and more, because you can have fairly arbitrary (length-restricted) strings, if you use double-quotes. The LHS without quotes is an optimization for simple cases.
Given that there exist today domains where the dot matters, where fred.bloggs != fredbloggs and the two belong to different people, any site which disallows dots at sign-up will cut off legitimate users.
Just validate.
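A minimal sketch of what “just validate” means at signup time (the token scheme and names here are hypothetical, not anyone’s actual implementation):

```python
import secrets

pending = {}  # token -> email; a real system would persist this

def start_signup(email: str) -> str:
    """Issue a single-use token and mail the confirmation link."""
    token = secrets.token_urlsafe(32)
    pending[token] = email
    # Until the link is followed, the address proves nothing.
    return f"https://signup.example.org/confirm?token={token}"

def confirm(token: str):
    """Only a confirmed address becomes an authentication identifier."""
    return pending.pop(token, None)
```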
I can’t help but wonder at having does-added methods which override the self-same method and using this to implement a state machine.
There are various command-line concoctions such as password-store which stores PGP-encrypted files in a Git repo, but that doesn’t improve my situation over 1Password. I would still have to manually look up passwords and copy them to the clipboard. These command-line packages also lack mobile apps and syncing.
That’s not completely true. I use pass with syncing via a private Git repository; there’s a Firefox plugin with autofill support, and good mobile clients exist for both Android and iOS. It’s the best password management system I’ve used (I was a 1Password user for about 3 years before that). Being able to do git log to see password history for a website is awesome. Bonus point: the OTP plugin works like a charm.
The major problem with pass is that the mobile clients don’t support encrypted git remotes, which is a huge problem: anyone with read access to the remote repo can see what your accounts are.
Given that git is distributed and makes it very easy to push from any client to any remote, it’s a pretty safe assumption that one day you’ll accidentally push to another remote where you realize shortly after doing so that this was A Bad Plan.
The key to this work is throwing out old assumptions and requiring explicit guest support.
Historically, VM systems “had” to be able to boot guests which didn’t need to know they were in a VM, but the guest could optionally implement dedicated “hardware” drivers to have more optimized I/O than through emulated devices. Still, you could take the install media for various OSes and install them all.
This project requires explicit guest support for basic boot-up. Which is great, if your model is around managing everything in the guest and you can make that demand. They reap major benefits from doing so, and there’s no reason everyone creating images for deployment should be held back because the target system is also trying to be compatible with stuff you’ll never deploy. But it’s very much a case of needing the guest to be compiled explicitly for the target hosting platform.
Since the competition is structured containerization with something like a Dockerfile defining entry-points, environmental dependencies, etc, this is not different. It’s a great trade-off. But it is made possible by the target audience having moved and adapted to a world of on-demand machine instances and container workloads.
Link now 404s; going from http://vjolt.org/archives/older-volumes/ to volume 2 issue 1, we see:
- The Use of Encrypted Coded, and Secret Communications is an Ancient Liberty Protected by the United States Constitution [html] [Adobe .pdf format]
By John A. Fraser III
Neat troll at the end:
id also like to thank andrew loyd weber, inventer of the world wide web for making the internet
[sic]
Author is generous. Originally from UK, moving from NL to US in 2006 I ended up remarking to colleagues that the US banking system was like moving back to the 1970s. 11 years later I’m still using passwords for bank website authentication, with knowledge of a bank account number being a closely held secret.
IBAN fee-free international transfers to friends or for paying bills (same day in-country, instant if same bank chain); fee-free cross-bank ATM withdrawals; sane security for web sign-in or initiating transfers; banking websites which don’t require you to lower the browser security settings to work; PIN-less on-card small-balance cash so you’re not typing your PIN into everything (paying for parking or using vending machines), all stuff I am still waiting for. Well, aside from the browser security settings: American banks have mostly caught up there.
IBAN fee-free international transfers to friends or for paying bills (same day in-country, instant if same bank chain)
SEPA Instant Credit Transfer is launching in November and will hopefully see support from banks sometime next year. It will allow instant (under 10 seconds) transfers across banks.
Dryly amusing: my .sig for years (a decade or so ago) used to contain a short example of just how zsh does handle NULs in strings.
Loosely speaking though, the moment that you’re using the beyond-POSIX features of bash or zsh for anything other than REPL control, that’s a sign that you’re entering technical debt territory and should be rewriting, now that the shell prototype has confirmed what needs to happen and what the general failure modes are.