Based on reading all this stuff and using many package managers, including dep but not yet including vgo, it seems like there are two different things happening here.
First, major miscommunication between Go core and the dep committee. Like, disastrously major. Enough said.
Second, crucially different assumptions in the design. The dep folks seem to be coming from a concentrated study of existing reality, inelegant and unpleasant though it may be, and bringing in lessons learned in other communities. Russ seems to be coming from a position of deciding the way things “should” be and assuming the community will act accordingly. Which is totally the Go way of doing things!
If you’ve maintained a Rails project with 30 gems that work OK but haven’t had a maintainer for years, the assumption of a singular, very active library maintainer whose library just gets jettisoned by everyone if they misbehave doesn’t ring true. But the Go community, so far, seems different, so I’m willing to see how the experiment comes out over the next 10 years.
It seems to me it is dep that is trying to be elegant, enforcing a single-major-version constraint. The real world is inelegant, so vgo considers multiple-major-version support non-negotiable.
That’s a good point…they’ve both tried to accommodate some real-world problems, while dismissing others. The vgo approach does seem more Go-y (Goful? Goesque?). The parallel with the Go regex library makes sense (can’t handle some languages that Perl regexes can, but never goes exponential, either).
I read the whole thing to see how the contradiction between “information-leaking attacks that can be carried out by merely viewing a web page” and “if all the software running on your computer was software you could trust” would be resolved. Ah: “blocking by default, even when the code is marked as Free Software, might be a safer policy.” So…it wasn’t. I think the article could be reduced to the advice “only run audited code”, which is charming but not very practical.
Another youngster who equates “intuitive” with “like Unix” (I assume, since no other definition is given). :)
She is shocked to find that “a data set is a file”, despite the term “data set” being the older one. When called on this in the comments, she says “the term “data set” refers to a broad collecting of constructs that have wildly different behavior and purposes” — well, the “Unix philosophy” is literally “everything is a file”, and it’s hard to find a broader collection of constructs than that!
Surprise, computer systems still exist that aren’t Unix and don’t share any heritage with it. You know what? They all used to be like that.
> Another youngster who equates “intuitive” with “like Unix” (I assume, since no other definition is given). :)
Like Unix, which is like VMS, which is like MS-DOS, which is like Windows except to the extent Windows has moved away from MS-DOS and become more like either VMS or Unix. MS-DOS itself is CP/M with more Unix grafted onto it, which you can see if you try to name a file CON.
My point is, intuitive means familiar, and Unix is what’s familiar outside of a few insular realms. Acting all surprised at this is odd, at this point.
> Surprise, computer systems still exist that aren’t Unix and don’t share any heritage with it. You know what? They all used to be like that.
Quiet, or you’ll awaken the people who know OSes which don’t share any heritage with CTSS! ;)
The systems that exist in the Serious IBM Business world are not very accessible to outsiders. Unix is everywhere, Unix has won, of course “like Unix” is intuitive for most of us.
Also, seriously, touch some.file is objectively easier than filling out a form with 16 (sixteen!) fields :D
Windows, Mac OS X, UNIX, iOS, and Android. Those collectively won. The intuitive feel will connect to one or more of them depending on audience background.
Look at the man page of find and then talk to me about intuitive. :)
Oh, mentioning the form reminds me of another thing that bugged me — not realizing a 327x terminal is page-based, not character stream based. You fill out forms on the screen and submit the whole page at once. So of course you have to type the command in the right place. Just like in a web browser, which I’m guessing is “intuitive” now, but sure wasn’t in, e.g., 1994.
Unmentioned: Hardware RAID generally has battery backup so writes are completed even if the power fails (or the kernel panics). Software and Fake RAID can’t do that.
Or, put another way: having the battery allows them to (legitimately) acknowledge writes before the data has actually hit the disk platters, and hence offer better write performance. Without it they would presumably (hopefully!) wait to acknowledge writes until the data actually had hit the platters, rather than “cheating” and losing data on a power loss.
That said, with the SSDs that are now easily available you can achieve a similar effect using host-side software layers like bcache/dm-cache in writeback mode.
Generally that is the best option; it only fails when the drives lie about syncing to disk (some cheap SSD and HDD controllers still do, since it gets better benchmark results).
More unmentioned: with a copy-on-write FS like ZFS, you won’t ever get corruption from incomplete writes, because writes are atomic.
There is a French sporting goods retailer called Go Sport, and the new Go logo immediately made me think of their logo.
I don’t know if he’s literally saying you don’t need to know SQL, the language —in which case whatever, OK — or that you don’t need to know how databases work (constraints, concurrency, join performance, etc.) — in which case no, you do need to know that stuff or you will find yourself in bewildering pain once your application becomes large or successful enough to be interesting.
That is classic Rails app evolution, though — you can get so much done without knowing what you’re doing, then a few years later you have a terrifying ball of misconceptions.
Oracle does not do things for the benefit of the community; everything it does is for the benefit of its bottom line. This technology looks fascinating, but frankly I’m not going to take the time to figure out what Oracle’s angle is here to get the sales and legal divisions involved, or convince myself that, for the first time in my experience, there isn’t one. (Oracle v. Google was just the last of many straws.)
I keep reading this, but I’ve only seen it once, thankfully - and I’ve been in my share of workplaces, having spent about half my career as a contractor. Am I a lucky freak, or is it just not actually that common?
Well, for another anecdotal data point… In my 30 year career I’ve seen many “necessary nice people”, and several executives who were jerks, but I can’t think of any “necessary jerk” individual contributors. There were certainly some jerks, but they didn’t seem necessary.
They do enable good dramatic situations, though, so I can understand why they’re popular in literature. And for obvious reasons they’re overrepresented in real life stories of office harassment.
What do you mean by “necessary nice person”?
Is it “A is the only person who knows how to do X?”, ie https://en.m.wikipedia.org/wiki/Bus_factor
Sometimes, but more often it’s “A can do X twice as fast as anybody else” or “A knows who knows how to do X for any value of X”.
What I’ve seen is people whose lack of social finesse has been papered over because “well programmers haha”
(I’ve had this advantage as well)
I have never seen another job position where being bad at humans is considered acceptable. I am, of course, all for giving people opportunities to improve, but the bar is set so much lower than basically any other job
I think the people who truly embody this personality type (or a combination of traits that makes for this type of personality) typically find their way towards the top… if they’re good at what they do. Those who are truly lacking empathy, who land further along the sociopath/psychopath scales, tend to vie for positions at the top. They take large risks and, if they’re good at it, they jump in to fill positions the moment they can.
I agree; I have encountered few of these Necessary Jerks on my teams myself in my 15+ years in tech. There were one or two, but none that were really that bad or with whom I couldn’t find some common ground and get along. There were more people who were incompetent, which is annoying, but so long as they’re nice and trying… eh, everyone needs a job. There are people who are incompetent, refuse to learn, and are shitheads about it, and you wonder why the hell they still have a job – and you just gotta be as nice as you can (they are a lesson in patience).
From what I’ve heard, teams with necessary jerks typically just have really shitty management. I think this post from a few months ago really encapsulates that type of work environment:
https://startupsventurecapital.com/you-fired-your-top-talent-i-hope-youre-happy-cf57c41183dd
For the type of person I mentioned at the beginning of this comment, I recommend the book The Dictator’s Handbook. It’s pretty eye-opening as far as what it really takes to grab and hold onto a position of power, like being a CEO. Spoiler alert: knowing anything useful about your business or technology, or even caring about your employees/staff, has very little to do with it.
This is kind of the idea behind the nanopass framework. E.g.:
https://github.com/nanopass/nanopass-framework-scheme/blob/master/tests/new-compiler.ss#L9
Also http://andykeep.com/pubs/dissertation.pdf
I’ve been using this for my senior project for compilers, and it’s been quite nice. I really like how it does verification of your IR at each pass (really helps debugging), and how declarative each transformation is.
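The core idea (many tiny passes, with the IR verified after every one) is easy to sketch outside Scheme too. Here’s a hypothetical toy version in Python; the pass names and IR shape are made up for illustration, not taken from the real nanopass framework:

```python
# Toy nanopass-style pipeline: several tiny passes, with the IR
# validated after each one so a broken pass is caught immediately.

def check_ir(ir):
    # Every node must be a (op, args) tuple with a string op.
    for node in ir:
        assert isinstance(node, tuple) and len(node) == 2, node
        op, args = node
        assert isinstance(op, str), node

def fold_constants(ir):
    out = []
    for op, args in ir:
        if op == "add" and all(isinstance(a, int) for a in args):
            out.append(("const", (sum(args),)))
        else:
            out.append((op, args))
    return out

def remove_dead(ir):
    return [(op, args) for op, args in ir if op != "nop"]

PASSES = [fold_constants, remove_dead]

def compile_ir(ir):
    check_ir(ir)
    for p in PASSES:
        ir = p(ir)
        check_ir(ir)  # verification between passes, as in nanopass
    return ir

program = [("add", (1, 2)), ("nop", ()), ("print", ("x",))]
print(compile_ir(program))  # [('const', (3,)), ('print', ('x',))]
```

Each pass stays small and declarative-ish, and the checker pinpoints which pass broke the invariants.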
I can’t sing enough praise for WireGuard. I’ve set up IPSec (strongswan and OpenIKED), OpenVPN, tinc, and pretty much every VPN software under the sun, nothing holds a candle to WireGuard. The adoption of formal methods, the smart cryptographic choices, and the code quality have made me a daily user.
This doesn’t worry you? I’ve lost so many hours fighting IPsec, and dream about using WireGuard, but…
> WireGuard is not yet complete. You should not rely on this code. It has not undergone proper degrees of security auditing and the protocol is still subject to change. We’re working toward a stable 1.0 release, but that time has not yet come.
For my own personal infrastructure, not a bit; it’s beta software, I knew the risks and accounted for them when setting things up. I have been following WireGuard from its very early stages and its code quality already far exceeds that of almost every other VPN software. I am not suggesting people rely on it until 1.0, but it is something you should play with, because it made me fine with dealing with its beta state instead of fighting IPSec/OpenVPN.
I don’t know that it’s off-base to think that startups shouldn’t hire junior developers. A dev team has to reach a certain size, a certain profitability, and a certain probability of having a future before it can reasonably afford to maintain a developer training pipeline.
I would suggest that part of the problem with the industry is that there are too many startups. The Paul Graham “do a startup first” method works for the upper end of the developer bell curve, but for a lot of people (and arguably for the industry as a whole) the traditional method of joining a company big enough to have some training infrastructure and doing your apprenticeship there before launching on your own works better.
At my current company we started up with a team of ~10 developers, all experienced. It was a terrible mistake in my opinion.
Because everyone was experienced and we were a startup under huge pressure to ship, we cut way too many corners. We knew everyone was good enough to deal with the mess we were creating, at least temporarily.
But in the long run, we had to pay it all down, and it cost us way more than we initially thought. Write the missing documentation, remove needlessly over-complicated code, untangle a mess of inter-dependencies… We are not even done with it yet, and it has been four years.
Had we had a balanced team, that would not have happened. Juniors would have forced us to document more, write more straightforward code they could understand, do code reviews and not have so many parts of the code base owned by a single person with no one else really understanding how it works.
I knew it was the problem, because my previous company (also a startup, since then acquired by Google) had been the opposite. Most hires were juniors, trained internally. It resulted in a cohesive team with the same code culture (no clashing styles, useless debates on which tools to use…) and processes that led to much cleaner code, even though the problems we were solving were probably harder. Training sure takes time, but eventually it pays off, especially if you have good retention (which we did).
I was just a developer on my current team when I joined, now I am in a position where I am in charge of leading it and preventing those issues in the future. One of my first decisions was to hire juniors, and I can testify that it is working so far.
Note that I am not saying that there should not be seniors and that you should never cut corners at a startup. Actually, knowing when and how to cut corners is probably one of the most important skills of a senior developer at a startup in my opinion. When you have to add features to your production code before Tuesday next week because [something something involving VCs], that’s when you need someone who can do that dirty hack that will work short term, but who also knows how to avoid catastrophic bugs without the time to write enough tests, and how the hack will be reversed in the future without too much impact. That is the one situation where you take two or three senior developers you trust, lock yourselves into a room and do your thing. But hopefully, that doesn’t happen too often.
At size 10, it is probably time for more balance. But that experience sounds more like “our experienced developers made a mistake” than “having experienced developers was a mistake”. I mean, experience shouldn’t dictate cutting corners unnecessarily — quite the opposite. If you really had to cut that many corners to survive, then having to manage juniors as well might have been deadly, and cleaning up afterward is the tradeoff you consciously made (and one that most startups have to make).
Ideally you aren’t in so much of a race against time and can “plan for success” from the beginning by starting a dev training pipeline right away, but that seems like a luxury you can’t count on at first.
All that said, the worst imbalance is to have no sufficiently experienced developers — see any number of bizarrely mismanaged blockchain startups in the news — which is what makes me think “too many startups”.
Great, but when oh when will you stop telling me not to use it? IPsec has stolen too much of my life…
I really love SQLite, and reading accounts like this is great. BUT, note this is all reads with no inserts/updates/deletes. SQLite’s Achilles heel for being really useful?
Additionally, though this test was focused on read performance, I should mention that SQLite has fantastic write performance as well. By default SQLite uses database-level locking (minimal concurrency), and there is an “out of the box” option to enable WAL mode to get fantastic read concurrency — as shown by this test.
You’d be surprised. Serializing writes on hardware with 10ms latency is pretty disastrous, giving parallel-write databases a huge advantage over SQLite on hard drives. But even consumer solid state drives are more like 30us write latency, over 300 times faster than a conventional hard drive.
Combine with batching writes in transactions and WAL logging and you’ve got a pretty fast little database. Remember, loads of people loved MongoDB’s performance, even though it had a global exclusive write lock until 2013 or something.
People really overestimate the cost of write locking. You need a surprising amount of concurrency to warrant a sophisticated parallel write datastructure. And if you don’t actually need it, the overhead of using a complex structure will probably slow your code down.
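For anyone curious, the two tricks mentioned above (WAL mode, and batching writes into one transaction) are each about a line of code. A minimal sketch using Python’s built-in sqlite3 module and a throwaway on-disk database:

```python
import os
import sqlite3
import tempfile

# WAL mode needs a real file, so use a scratch on-disk database.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
print(conn.execute("PRAGMA journal_mode=WAL").fetchone()[0])  # wal

conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, msg TEXT)")

# Batch many inserts into a single transaction: one commit (and one
# fsync) for the whole batch instead of one per row.
rows = [("event %d" % i,) for i in range(1000)]
with conn:  # opens a transaction; commits on clean exit
    conn.executemany("INSERT INTO events (msg) VALUES (?)", rows)

print(conn.execute("SELECT COUNT(*) FROM events").fetchone()[0])  # 1000
```

With WAL enabled, readers no longer block the writer (and vice versa), which is the read concurrency the article’s test shows.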
Sounds like you might like the “COST” metric…. https://lobste.rs/s/dyo11t/scalability_at_what_cost
Given that they run all of expensify.com on a single (replicated) Bedrock database, that would pass my “really useful” test, at least. :)
The project page itself warns about that. When toying with ideas, I thought about a front-end that sort of acted as a load balancer and cache that basically could feed writes to SQLite at the pace it could take with excesses held in a cache of sorts. It would also serve those from its cache directly. Reads it could just pass onto SQLite.
This may be what they do in the one or two DBs I’ve seen submitted that use SQLite as a backend. I didn’t dig deep into them, though. I just know anything aiming to be a rugged database should consider it, because the level of QA work that’s gone into SQLite is so high most projects will never match it. That’s the kind of building block I like.
Now I’ll read the article to see how they do the reads.
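For what it’s worth, that write-buffering front-end idea can be sketched in a few lines: a queue absorbs writes at whatever rate they arrive, a single background thread drains them into SQLite, and reads pass through. This is a toy under stated assumptions (queued writes aren’t durable, and the read path just waits for the queue to drain), not a real design:

```python
import queue
import sqlite3
import threading

class BufferedFrontEnd:
    """Toy front end: writes are queued and drained into SQLite by one
    background thread; reads go straight to SQLite. Illustrative only --
    queued writes have no durability guarantee until drained."""

    def __init__(self, path):
        self.conn = sqlite3.connect(path, check_same_thread=False)
        self.conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
        self.q = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def put(self, k, v):
        self.q.put((k, v))  # returns immediately; write is asynchronous

    def get(self, k):
        self.q.join()  # toy read-your-writes: wait for pending writes
        row = self.conn.execute("SELECT v FROM kv WHERE k=?", (k,)).fetchone()
        return row[0] if row else None

    def _drain(self):
        while True:
            k, v = self.q.get()
            self.conn.execute("INSERT OR REPLACE INTO kv (k, v) VALUES (?, ?)", (k, v))
            self.conn.commit()
            self.q.task_done()

fe = BufferedFrontEnd(":memory:")
fe.put("answer", "42")
print(fe.get("answer"))  # 42
```

A real version would batch the queued writes into transactions (and think hard about what “acknowledged” means), but the shape is the same.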
Devil’s advocate. If you are going to give up on easy durability guarantees, you could also try just disabling fsync and letting the kernel do the job you are describing.
I’ve been trying to make posts shorter where possible. That’s twice in a few days someone’s mentioned something I deleted: the original version mentioned strong consistency with a cluster. I deleted it thinking people would realize I wanted to keep the properties that made me select SQLite in the first place. Perhaps it’s worth being explicit there. I’ll note I’m brainstorming way out of my area of expertise: databases are black boxes I never developed myself.
After this unforgettable paper, I’d be more likely to do extra operations or whatever for durability, given I don’t like unpredictability or data corruption. It’s why I like SQLite to begin with. It does help that a clean-slate front-end would let me work around such problems, even more so for a memory-based one… depending on implementation. Again, I’m speculating out of my element a bit since I’ve never built databases. Does your line of thinking still have something that might apply, or was it just for a non-durable front end?
I just realized this is the Apple-elegant version of control-alt-delete! (And given the ridiculous placement of the volume-up and power buttons, nearly as awkward.)
Please no. If you want structured configs, use yaml. JSON is not supposed to contain junk, it’s a wire format.
But YAML is an incredibly complex and, truth be told, rather surprising format. Every time I get it, I convert it to JSON and go on with my life. The tooling and support for JSON is a lot better; I think YAML’s place is on the sidelines of history.
> it’s a wire format
If it’s a wire format not designed to be easily read by humans, why use a textual representation instead of binary?
If it’s a wire format designed to be easily read by humans, why not add convenience for said humans?
Things don’t have to be black and white, and they don’t even have to be specifically designed to be something. I can’t know what Douglas Crockford was thinking when he proposed JSON, but the fact is that since then it did become popular as a data interchange format. That means it was good enough, and better than the alternatives at the time. And it still has its niche despite a wide choice of alternatives along the spectrum.
What I’m saying is that adding comments is not a sure-fire way to make it better. It’s a trade-off, with the glaring disadvantage of being backwards incompatible. Which warrants my “please no”.
http://hjson.org/ is handy for human-edited config files.
The solutions exist!
https://github.com/json5/json5
I don’t know why it’s not more popular, especially among go people.
There is also http://json-schema.org/
I had to do a bunch of message validation in a node.js app a while ago. Although as Tim Bray says the spec’s pretty impenetrable and the various libraries inconsistent, once I’d got my head round JSON Schema and settled on ajv as a validator, it really helped out. Super easy to dynamically generate per message-type handler functions from the schema.
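The “generate a per-message-type handler from the schema” trick looks roughly like this. The snippet below is a hand-rolled toy in Python, not real JSON Schema or ajv; the schema shape and all the names are made up to show the idea:

```python
# Toy schema-driven validation: each message type gets a handler
# generated from its (made-up, simplified) schema. A real app would
# use a proper validator like ajv (JS) or jsonschema (Python).
SCHEMAS = {
    "create_user": {"required": {"name": str, "age": int}},
    "delete_user": {"required": {"name": str}},
}

def make_handler(msg_type, action):
    schema = SCHEMAS[msg_type]
    def handler(msg):
        # Validate before dispatching to the actual business logic.
        for field, ftype in schema["required"].items():
            if not isinstance(msg.get(field), ftype):
                raise ValueError("%s: bad or missing field %r" % (msg_type, field))
        return action(msg)
    return handler

create = make_handler("create_user", lambda m: "created %s" % m["name"])
print(create({"name": "ada", "age": 36}))  # created ada
```

The payoff is that the validation logic lives in data, so adding a message type is just a new schema entry plus an action.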
I think this only shows that JSON chose tradeoffs that make it more geared to being edited by software, with the advantage of being human-readable for debugging. JSON as config is not appropriate. There are so many more appropriate formats (TOML, YAML, or even INI come to mind); why would you pick the one that doesn’t allow comments or nice sugar such as trailing commas or multiline strings? I like how Kubernetes uses YAML for its configuration files but seems to work internally with JSON.
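For completeness, the usual workaround for JSON’s missing comments and trailing commas is a tiny pre-processor in front of a strict parser. A naive Python sketch (it assumes no “//” or stray commas inside string literals, which holds for most hand-written configs but makes this far from a general parser):

```python
import json
import re

def loads_relaxed(text):
    """Parse 'JSON with comments and trailing commas' by stripping the
    extras first, then handing the result to the strict stdlib parser."""
    text = re.sub(r"//[^\n]*", "", text)        # strip // line comments
    text = re.sub(r",\s*([}\]])", r"\1", text)  # drop trailing commas
    return json.loads(text)

cfg = loads_relaxed("""
{
  // where to listen
  "host": "127.0.0.1",
  "port": 8080,   // dev default
}
""")
print(cfg["port"])  # 8080
```

This is essentially what JSON5/hjson tooling does for you, with a real grammar instead of regexes.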
IMO YAML is not human-friendly, being whitespace-sensitive. TOML isn’t great for nesting entries.
Sad that JSON made an effort to be human-friendly but missed that last 5% that everyone wants. Now we have a dozen JSON supersets which add varying levels of complexity on top.
And a metric ton of stuff you do not want! (Not to mention…what humans find XML friendly?)
This endless cycle of reinvention of S-expressions with slightly different syntax depresses me. (And yeah, I did it too.)
Honestly, I can’t help but feel very bearish on ETH. I really like the idea, but I think the implementation is poor, and the community’s values are poorly aligned with making it a success.
The most important construct in ETH that sets it apart from other currencies is the Smart Contract. I don’t believe, though, that these are either smart or contracts. Whether or not you agree with the resolution of the DAO hack, the fact that we consider a hack to be something that can be “resolved” indicates we do see smart contracts as programs that can and should be changeable to better meet their intent.
Based on the DAO and a number of other issues with smart contracts, I don’t think they are smart: the language’s design is so poorly adapted to the kind of verification needed to make robust contracts. It isn’t smart.
Based on the community’s willingness to fork over contract actions they don’t agree with, they aren’t contracts either. In real life, if you’re duped by a creative but legal (as judged by the legal process, or in this case the execution on the blockchain) interpretation, you need to suck it up and move on. In Ethereum, you can fork, and in practice the group that led to the fork of ETH was a minority. Smart contracts aren’t contracts because, by the decision of a few, they can be rewritten without the agreement of all involved parties.
Ultimately, if I were looking to do non-hobbyist business, either as the business or a customer, for these reasons I wouldn’t feel comfortable using Ethereum.
I am not a lawyer, but I did grow up with one, and I’m pretty sure a legal but clever and tricky contract has legal grounds to be thrown out in court.
As a kid I was curious if the “tiny fine print that you couldn’t read” could really be used to trick someone. It can’t. The legal system is very aware of the distinction, it’s called acting in good faith.
Again: not a lawyer, not legal advice, don’t make choices based on what I’ve said, but it’s not as cut and dried as you claim it is.
And contracts with “bugs” in them (i.e., that don’t accurately represent the intent of the parties) aren’t taken literally either. There are rules/principles about how to interpret them that are much more nuanced than that. Only a programmer who doesn’t get out much would think that a better approach is to eliminate the potential for ambiguity and then always interpret contracts literally.
I generally understand your point and agree with it, but what I’m suggesting is that the execution of a smart contract is the legal process in this context.
It’s not that it’s right or wrong that the contract was interpreted/executed in a given way; it’s that after the field has been set and the dice cast, going back in time and rewriting the execution because some definition of majority (usually a minority in practice) didn’t win is the issue.
Changing how the outcome played out after the fact that it was interpreted and executed feels (in the context of a smart contract being interpreted by the legal process of the block chain) like an extrajudicial action by people who lost out.
The legal system has been dealing with smartasses since before your ancestors were deloused.
Think of it like the efficient market hypothesis: People have been banging on legal systems for so long that you can reasonably assume that all of the interesting stuff has been found, and is either a known technique or is already illegal. There might be exceptions to this, but the fact the system is administered by humans who exercise human judgement closes a lot of novel loopholes, as well.
I’d go one step further and assert that, in legal systems that have been functioning for centuries and are thoroughly debugged, some obvious glaring flaws will continue to exist, but they are those that are actively maintained by some group which has an extraordinary amount of power and stands to gain an extraordinary amount of wealth from them.
I used to think this way, until I realized that all these high-profile bugs in applications on Ethereum have very little to do with the code in Ethereum.
The DAO is a good example. It was not written by the core Ethereum project. It was a distributed application written by unrelated developers, and crowdfunded by a token sale. Blaming the Ethereum project for DAO’s code quality is like blaming the Unix developers for a segfault in some third-party app.
You don’t have to blame the core developers for the DAO contract code’s bugs to blame them for forking the block chain to “fix” the bugs for THE DAO developers.
Those are two separate acts from two separate groups of people.
On the other hand, one of the Ethereum founders was responsible for the Parity bug.
I agree with you but think the conclusion you draw is incorrect. While Solidity itself is not a bug, the language is part of the design of Ethereum, and by using a language (Solidity) that is so poorly adapted to verification, it has made it easier for users to write buggy contracts.
C is buggy, but that didn’t kill Unix.
Unless a credible competitor appears, I think Ethereum will continue to dominate the smart contracts space.
C isn’t buggy. Solidity isn’t buggy. Their use in the systems mentioned has led to more bugs, and an environment more user- and developer-hostile, than if they had been replaced with other languages.
I agree that Solidity won’t kill Ethereum, but a credible competitor will. I think it is almost a certainty that the biggest shining star of a more mature smart-contract blockchain system will be better verifiability in the language. It might not be the immediate killer of Ethereum; it might not even be the technology that kills the Ethereum killer. But I really do think a verifiable-in-practice language will be a requisite feature for any smart contract technology that isn’t as known for being a massive footgun as Ethereum is.
I should have been more precise.
While C itself is not a bug, the language itself is part of the design of Unix, and by using a language (C) that is so poorly adapted to verification, it’s made it easier for users to write buggy programs.
Buggy programs didn’t kill Unix, so I doubt Ethereum is in danger.
Visual Studio Code is amazing. There are tons of reasons here, but even without them I think the consistent performance on both Windows and Mac says a lot.
Consistent performance on Windows, Mac, AND LINUX! I’ve been using it for Go development on Arch for a while and it’s extremely good. To the point where I’m thinking of switching from Sublime Text entirely. I was really resistant to trying it (M$) but it’s probably the nicest GUI editor/semi-IDE-thing that I’ve used.
It’s not perfect (my comparison is evil-mode in Emacs which is close to perfect) but it’s good enough. Basic editing/movement is great, but it runs into trouble with things like multiple-cursor support (it tries to implement block-visual mode with multiple cursors and sometimes gets into a…situation).
Oh, me too, and maybe I should have mentioned I do miss Spacemacs terribly in VS Code. But most people wouldn’t file that under “vim emulation”. :) [Edit: and also it occurs to me that macros run really, really slow, so I switch back to Emacs for complex editing.]
It has “fine” vim emulation, but not good enough to feel natural when I’m pairing with my coworker who uses it.
It has some okay keystroke emulation, but I miss a lot of the more niche features of vim, like page marks, bufdo, and good macros. I realize it’s all of the stuff that makes vim “vim” to me, and not just modal editing.
https://forums.developer.apple.com/thread/79235
This was known behavior as of November 13th.
Your comment should be at the top. It looks like Apple should have responded two weeks ago. It would be interesting to study how widely this bug has been exploited. Does anybody have an estimate of how many people could have seen that solution post on the developer forum?
> Does anybody have an estimate how many people could have seen that solution post on the developer forum?
One fewer than should have seen it.
So odd… The solution of entering “root” twice is given as if that’s just kind of a normal thing to do if you need to create an admin account. Is this behavior perhaps actually intentional, but should only work if there are no existing admin accounts?
Here is the security patch: https://support.apple.com/en-us/HT208315
While writing (as opposed to “having written”) software, I think my favorite experiences share these characteristics:
I can’t decide if Let’s Encrypt is a godsend or a threat.
On one hand, it lets you support HTTPS for free.
On the other, they are accumulating enormous power worldwide.
Agreed, they are quickly becoming the only game in town when it comes to TLS certs. Luckily they are a non-profit, so they have more transparency than, say, Google, who took over our email.
It’s awesome that we have easy, free TLS certs, but there shouldn’t be a single provider for such things.
Is there anything preventing another (or another ten) free CAs from existing? Let’s Encrypt just showed everyone how, and their protocol isn’t a secret.
OpenCA tried for a long time, and I think has pretty much given up: https://www.openca.org/ They just exist in their own little bubble now.
Basically nobody wants to certify you unless you are willing to pay through the nose and are considered friendly to the existing way of doing things. LE bought their way in, I’m sure, to get their cert cross-signed, which is how they managed so “quickly”, and it still took YEARS.
Have you ever tried to create a CA?
I’ve created lots of CAs, trusted by at most 250 people. :)
Of course it’s not easy to make a new generally-trusted CA — nor would I want it to be. It’s a big complicated expensive thing to do properly. But if you’re willing to do the work, and can arrange the funding, is anything stopping you? I don’t know that browser vendors are against the idea of multiple free CAs.
Obviously I was not talking about the technical side.
One of my previous bosses explored the matter. He already had the technical staff, but he wanted to become an official authority. This was around 2005.
After a while (and a lot of money spent on legal consulting) he gave up.
He said: “it’s easier to open a bank”.
In a sense, that’s reasonable, as European law wants to protect citizens from unsafe organisations.
But, it’s definitely not a technical problem.
Linux Foundation is a 501(c)(6) organization, a business league that is not organized for profit and no part of the net earnings goes to the benefit of any private shareholder or individual.
The fact that all shareholders benefit from its work without a direct economic gain doesn’t mean it has the public good at heart. Even less the public good of the whole world.
It sounds a lot like another attempt to centralize the Internet, always around the same center.
And such certificates protect people from a lot of relatively cheap attacks. That’s why I’m in doubt.
Probably, issuing TLS certificates should be a public service free for each citizen of a state.
Oh jeez. Thanks, I didn’t realize it was not a 501(c)(3). When LE was first coming around they talked about being a non-profit and I just assumed. That’s what happens when I assume.
Proof, so we aren’t just taking @Shamar’s word for it:
Linux Foundation Bylaws: https://www.linuxfoundation.org/bylaws/
Section 2.1 states the 501(c)(6) designation with the IRS.
My point stands, that we do get more transparency this way than we would if they were a private for-profit company, but I agree it’s definitely not ideal.
So you think local cities, counties, states and countries should get in the TLS cert business? That would be interesting.
It’s true the Linux Foundation isn’t a 501(c)(3) but the Linux Foundation doesn’t control Let’s Encrypt, the Internet Security Research Group does. And the ISRG is a 501(c)(3).
So your initial post is correct and Shamar is mistaken.
This is from the page linked by @philpennock.
I wonder what is left to do for the Let’s Encrypt staff! :-)
I’m amused by how easily people forget that organisations are composed of people.
What if Linux Foundation decides to drop its support?
No funds. No finance. No contracts. No human resources.
Oh and no hosting, too.
But hey! I’m mistaken! ;-)
Unless you have inside information on the contract, saying LE depends on the Linux Foundation is pure speculation.
I can speculate too. Should the Linux Foundation withdraw support there are plenty of companies and organisations that have a vested interest in keeping LetsEncrypt afloat. They’ll be fine.
Agreed.
Feel free to think that it’s a philanthropic endeavour!
I will continue to think it’s a political one.
The point (and as I said, I cannot answer it yet) is whether the global risk of a single US organisation being able to break most HTTPS traffic worldwide is worth the benefit of free certificates.
Any trusted CA can MITM, though, not just the one that issued the certificate. So the problem is (and always has been) much, much worse than that.
Good point! I stand corrected. :-)
Still, note how it’s easier for the certificate issuer to go unnoticed.
What’s Linux Foundation got to do with it? Let’s Encrypt is run by ISRG, Internet Security Research Group, an organization from the IAB/IETF family if memory serves.
They’re a 501(c)(3).
LF provide hosting and support services, yes. Much as I pay AWS to run some things for me, which doesn’t lead to Amazon being in charge. https://letsencrypt.org/2015/04/09/isrg-lf-collaboration.html explains the connection.
Look at the home page, top-right.
The Linux Foundation provides hosting, fundraising and other services. LetsEncrypt collaborates with them but is run by the ISRG: