If only JSON had allowed trailing commas in lists and maps.
And /* comments! */
And 0x... hex notation…
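For what it's worth, the strictness being lamented is easy to demonstrate with any standard parser (Python's `json` module here):

```python
import json

# Standard JSON rejects trailing commas, comments, and hex literals alike.
for text in ('{"a": 1,}', '{"a": 1 /* comment */}', '{"a": 0x1F}'):
    try:
        json.loads(text)
        print("accepted:", text)
    except json.JSONDecodeError:
        print("rejected:", text)
```

All three inputs are rejected by a conforming parser.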
Please no. If you want structured configs, use yaml. JSON is not supposed to contain junk, it’s a wire format.
But YAML is an incredibly complex and, truth be told, rather surprising format. Every time I'm handed it, I convert it to JSON and go on with my life. The tooling and support for JSON is a lot better; I think YAML's place is on the sidelines of history.
it’s a wire format
If it’s a wire format not designed to be easily read by humans, why use a textual representation instead of binary?
If it’s a wire format designed to be easily read by humans, why not add convenience for said humans?
Things don’t have to be black and white, and they don’t even have to be specifically designed to be something. I can’t know what Douglas Crockford was thinking when he proposed JSON, but the fact is that since then it did become popular as a data interchange format. It means it was good enough and better than the alternatives at the time. And it still has its niche despite a wide choice of alternatives along the spectrum.
What I’m saying is that adding comments is not necessarily a sure-fire way to make it better. It’s a trade-off, with the glaring disadvantage of being backwards incompatible. Which warrants my “please no”.
http://hjson.org/ is handy for human-edited config files.
The solutions exist!
I don’t know why it’s not more popular, especially among go people.
There is also http://json-schema.org/
I had to do a bunch of message validation in a node.js app a while ago. Although as Tim Bray says the spec’s pretty impenetrable and the various libraries inconsistent, once I’d got my head round JSON Schema and settled on ajv as a validator, it really helped out. Super easy to dynamically generate per message-type handler functions from the schema.
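The pattern described — generating a per-message-type validator from a schema — can be sketched without any particular library (the message types and schema shape below are made up for illustration; real JSON Schema and ajv are far richer):

```python
# Hypothetical message-type schemas: required field name -> expected type.
SCHEMAS = {
    "signup": {"email": str, "password": str},
    "ping":   {"seq": int},
}

def make_validator(schema):
    """Return a function that checks one message dict against one schema."""
    def validate(msg):
        return all(isinstance(msg.get(field), expected)
                   for field, expected in schema.items())
    return validate

# One validator per message type, generated up front from the schemas.
validators = {name: make_validator(s) for name, s in SCHEMAS.items()}
```

At dispatch time a handler just looks up `validators[msg_type]`, which is roughly the “dynamically generate per message-type handler functions” idea.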
One rather serious problem with json5 is its lack of unicode.
I think this only shows that JSON has chosen trade-offs that make it more geared toward being edited by software, while keeping the advantage of being human-readable for debugging. JSON as config is not appropriate. There are so many more appropriate formats (TOML, YAML, or even INI come to mind); why would you pick the one that doesn’t allow comments or nice sugar such as trailing commas or multiline strings? I like how Kubernetes uses YAML for its configuration files but seems to work internally with JSON.
IMO YAML is not human-friendly, being whitespace-sensitive. TOML isn’t great for nesting entries.
Sad that JSON made an effort to be human-friendly but missed that last 5% that everyone wants. Now we have a dozen JSON supersets which add varying levels of complexity on top.
“anything whitespace sensitive is not human friendly” is a pretty dubious claim
Not even being ironic here. It has everything you’d want.
And a metric ton of stuff you do not want! (Not to mention…what humans find XML friendly?)
This endless cycle of reinvention of S-expressions with slightly different syntax depresses me. (And yeah, I did it too.)
Keep this shit off lobsters.
Remember (the first?) bitcoin fork? On August 8th, 2010, ~184,467,440,737 bitcoins were created out of thin air. Bitcoin had to be forked to fix it.
It is totally unrelated. Bitcoin forks address weaknesses in the protocol, which have to be corrected so that it works as it should. The problematic Ethereum fork happened because the protocol worked exactly as it should: running programs without human supervision.
Great comment on Hacker News about what happened:
Here’s how I’ve used git to push to deploy in the past.
This then evolved to where I used git tags to determine deployment with something along the lines of this:
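The tag-driven variant might look something like this sketch — the naming convention `deploy-<env>-<version>` is an assumption for illustration, not necessarily what the author actually used:

```python
import re

# Assumed convention: tags like "deploy-prod-1.2.3" trigger a deploy.
TAG_RE = re.compile(r"^deploy-(?P<env>prod|staging)-(?P<version>[\w.]+)$")

def parse_deploy_tag(tag):
    """Return (environment, version) for a deploy tag, or None otherwise."""
    m = TAG_RE.match(tag)
    return (m.group("env"), m.group("version")) if m else None
```

A post-receive hook would then parse each pushed ref this way and kick off the deploy for the matched environment.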
I see a lot of what he talks about echoed in Effective Go.
I can’t help but feel their resignation is a mistake.
From my layperson’s perspective, their resignation is a public recognition and declaration that the EFF no longer has confidence in the W3C’s ability to operate within the goals and bounds of its original mission.
If that is the case, I agree with the sentiment, and with the EFF’s protest.
Given how the disagreement has developed so far, it was the only thing for them left to do.
Though quite tragic, in its most literal, classical sense, I agree.
What do they achieve by resigning? What will they miss out on? I honestly don’t know enough to judge or feel much about this, but either way it looks like this is a bridge burnt in a demonstration of principles. Perhaps they garner upvotes and support – in principle. Does that give them more than they gained or could have gained from W3C membership in the future?
[Comment removed by author]
Quite right–as illustration of this, part of the reason this has succeeded was that Mozilla caved in an effort to preserve marketshare in the face of less ideologically-pure browsers. Their tacit approval of DRM emboldened the others to push it through, and now it can be pointed at in defense of the odious thing.
Well, they only joined W3C to veto EME. The veto didn’t work, so what’s left for them to do?
Suppose you had joined a chess club, and one way or another it turned out to be a rape club travelling the world raping random people. Should you resign? Or should you stay to ‘correct the course’?
I mean, you do gain a lot from the membership of said rape club, namely the opportunity to rape random people.
What would you achieve from resigning?
Could you make your point without the references to sexual violence next time? It’s neither necessary to making your point nor kind to those in your audience who might have been on the receiving end of similar, which is two out of the three strikes.
Some argue that this might not be the only high-paying company with anti-tab guidelines: https://google.github.io/styleguide/cppguide.html#Spaces_vs._Tabs
Google also made Go which uses tabs.
We use tabs for indentation and gofmt emits them by default. Use spaces only if you must.
OK, this is totally personal preference here, but - this is another nail in Go’s coffin as far as I’m concerned.
It’s 2017. Why tabs? WHY? :)
Will this help OpenBSD as well?
Originating from NetBSD, the *BSD wireless stacks have diverged quite a bit at this point, both in design and implementation. FreeBSD can certainly serve as a design reference for OpenBSD wireless code. Copying code is possible, too, but not verbatim because of several differences in the core wireless data structures. Porting wifi drivers between systems takes a bit of effort for the same reasons.
JWT != JOSE.
Reprising and reformatting something I wrote on that other site about this:
The problem with JWT/JOSE is that it’s too complicated for what it does. It’s a meta-standard capturing basically all of cryptography which wasn’t written by or with cryptographers. Crypto vulnerabilities usually occur in the joinery of a protocol. JWT was written to maximize the amount of joinery.
Negotiation: Good modern crypto constructions don’t do complicated negotiation or algorithm selection. Look at Trevor Perrin’s Noise protocol, which is the transport for Signal. Noise is instantiated statically with specific algorithms. If you’re talking to a Chapoly Noise implementation, you cannot with a header convince it to switch to AES-GCM, let alone “alg:none”. The ability to negotiate different ciphers dynamically is an own-goal. The ability to negotiate to no crypto, or (almost worse) to inferior crypto, is disqualifying.
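The “alg: none” problem is concrete: the token header is attacker-controlled, so any verifier that honors it will accept unsigned tokens. Forging one requires no JWT library at all — just encoding:

```python
import base64, json

def b64url(data: bytes) -> str:
    # JWT-style base64url without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# An attacker-forged token: the header claims no signature is needed.
header  = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "admin"}).encode())
forged  = f"{header}.{payload}."  # empty signature segment

# A verifier that trusts the header's "alg" field would accept this.
```

This is exactly the class of vulnerability that hit several real JWT libraries; a statically instantiated protocol like Noise has no equivalent failure mode.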
Defaults: A good security protocol has good defaults. But JWT doesn’t even get non-replayability right; it’s implicit, and there’s more than one way to do it.
Inband Signaling: Application data is mixed with metadata (any attribute not in the JOSE header is in the same namespace as the application’s data). Anything that can possibly go wrong, JWT wants to make sure will go wrong.
Complexity: It’s 2017 and they still managed to drag all of X.509 into the thing, and they indirect through URLs. Some day some serverside library will implement JWK URL indirection, and we’ll have managed to reconstitute an old inexplicably bad XML attack.
Needless Public Key: For that matter, something crypto people understand that I don’t think the JWT people do: public key crypto isn’t better than symmetric key crypto. It’s certainly not a good default: if you don’t absolutely need public key constructions, you shouldn’t use them. They’re multiplicatively more complex and dangerous than symmetric key constructions. But just in this thread someone pointed out a library — auth0’s — that apparently defaults to public key JWT. That’s because JWT practically begs you to find an excuse to use public key crypto.
These words occur in a JWT tutorial (I think, but am not sure, it’s auth0’s):
“For this reason encrypted JWTs are sometimes nested: an encrypted JWT serves as the container for a signed JWT. This way you get the benefits of both.”
There are implementations that default to compressing plaintext before encrypting.
There’s a reason crypto people table flip instead of writing detailed critiques of this protocol. It’s a bad protocol. You look at this and think, for what? To avoid the effort of encrypting a JSON blob with libsodium and base64ing the output? Burn it with fire.
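The “JSON blob + libsodium + base64” alternative really is that small. A stdlib-only Python sketch of the shape — using HMAC for authentication only, whereas libsodium’s secretbox would additionally give you confidentiality — looks like this (`seal`/`open_` are made-up names):

```python
import base64, hashlib, hmac, json, os

KEY = os.urandom(32)  # shared symmetric key

def seal(claims: dict) -> str:
    """Serialize claims, append an HMAC-SHA256 tag, base64 the result."""
    blob = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(KEY, blob, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(blob + tag).decode()

def open_(token: str) -> dict:
    """Verify the tag in constant time, then parse the claims."""
    raw = base64.urlsafe_b64decode(token)
    blob, tag = raw[:-32], raw[-32:]
    if not hmac.compare_digest(tag, hmac.new(KEY, blob, hashlib.sha256).digest()):
        raise ValueError("bad tag")
    return json.loads(blob)
```

No negotiation, no header, no algorithm agility: one construction, statically chosen — the property the critique says JWT gives away.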
I have a related but somewhat OT question. In one of the articles linked to by the article, they say this:
32 bytes of entropy from /dev/urandom hashed with sha256 is sufficient for generating session identifiers.
What purpose does the hash serve here besides transforming the original random number into a different random number? Surely the only reason to use hashing in session ID generation is if there’s no good RNG available in which case one might do something like hash(IP, username, user_agent, server_secret) to generate a unique token? (And in the presence of server-side session storage there’d be no point to including the secret in the hash because its presence in the session table would prove its validity.)
hash(IP, username, user_agent, server_secret)
Yeah, if urandom is actually good, then hashing it serves no real purpose. (In fact if you want to get mathematical, it can only decrease the randomness, but luckily by an absolutely negligible amount). Certain kinds of less-than-great randomness can be improved by hashing (as a form of whitening), but no good urandom deserves to be treated that way.
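In other words, with a trustworthy CSPRNG the hash is a no-op. Python’s stdlib shows both shapes side by side:

```python
import hashlib, os, secrets

# The PHP-style construction: CSPRNG output passed through SHA-256.
hashed = hashlib.sha256(os.urandom(32)).hexdigest()

# With a good urandom, just using the random bytes directly is equivalent:
plain = secrets.token_hex(32)

# Both are 64 hex characters; the hash step adds nothing.
```

The second form is what `secrets` exists for, and it makes the intent obvious.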
The reason for that is PHP is weird. PHP hashes session entropy with MD5 by default. Setting it to SHA256 just minimizes the entropy reduction by this step. There is no “don’t hash, just use urandom” configuration directive possible (unless you’re rolling your own session management code, in which case, please just use random_bytes()).
This is no longer the case in PHP 7.1.0, but that blog post is nearly two years old.
Thanks for that very thorough dissection of JWT. Are there web app frameworks/stacks that do have helpfully secure and well-engineered defaults that you’d recommend?
The post itself offers a suggestion (at the bottom): use libsodium.
The author refers to Fernet as a JWT alternative. https://github.com/fernet/spec/blob/master/Spec.md
However, Fernet is not nearly as comprehensive as JOSE and does not appear to be a suitable alternative.
Hah, it seems the article changed a few times, and not just the title…
And comments on https://datatracker.ietf.org/wg/cose/documents/ ?
I built a tiny lib for working with json configs and go’s flag lib: https://github.com/zamicol/jsonflags
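The pattern — command-line flags that can also be filled from a JSON config file, with flags winning — is small enough to sketch (in Python here, for illustration only; the linked library does this for Go’s flag package, and the flag names below are invented):

```python
import argparse, json

def parse_args(argv, config_text="{}"):
    """Precedence: explicit flags beat config-file values, which beat defaults."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--host", default="localhost")
    parser.add_argument("--port", type=int, default=8080)
    # Values from the JSON config replace the built-in defaults.
    parser.set_defaults(**json.loads(config_text))
    return parser.parse_args(argv)
```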
Are passwords ‘broken’ in general? I don’t see the need to fix them.
Passwords are pretty insecure. Most people reuse them. Compromising their password in one place (eg. Neopets) will allow you to assume their identity on most other services (Google, Facebook, Amazon, maybe a bank).
Is it passwords that are insecure or the person behind the password? If you use a password manager that lets you generate strong and unique passwords then I don’t see an issue. Then if a site gets compromised you just need to regenerate a new password for that site and not worry about the others you may have used the old password on.
At the end of the day, it comes down to making a conscious effort to be smart about how you maintain your online identities.
From a policy perspective, there’s not much difference between “this cannot be used securely” and “this is not used securely”. The net result in either case is poor security, and bemoaning the fact that people don’t pick secure passwords doesn’t solve the problem.
Obviously from a personal perspective, there’s a lot you can do to maximize the security of your passwords (starting with generating distinct long, random passwords for everything and keeping them in a password manager). But your good password practices don’t really matter to Google, or anybody else using those passwords to authenticate you.
Many security issues will be closely linked in some way to people. If your security mechanisms can’t account for the soft exploits, it’s an insecure system.
The problem is that most people don’t do this, and there’s only so much a site can do to encourage its users to do this. In the end, it can’t tell if your password is used elsewhere, or from a password manager, or anything else. What they can do is look for some method of authentication that avoids or augments the password, to (hopefully) provide a greater degree of security by default.
Passwords are absolutely broken.
LinkedIn was hacked 4 years ago, 164 million accounts compromised, and we just find this out in the past month?
Https? There’s no way to ensure that it’s even set up properly. The DROWN attack and heartbleed are both great examples. https://thehackernews.com/2016/03/drown-attack-openssl-vulnerability.html
Depending on any multi-use token for authentication should be considered poor security.
Https? There’s no way to ensure that it’s even set up properly.
I’m not sure what this has to do with passwords.
They’re typically transmitted over HTTPS; doubly a problem if they’re reused.
If this is a problem for passwords, isn’t it also a problem for biometric data sent to a backend?
Edit: in fact, all biometric data is “reused”, so wouldn’t that be even worse because once someone captures whatever data your fingerprint is turned into they can use that with any system that uses the same type of data?
KoreLogic’s analysis of the LinkedIn hash dump shows some interesting issues with the use and generation of passwords.
I’m not sure what the solution is - but for me passwords are part of the problem.
Here’s the thing: You’d have a similar problem with storing biometric data. Biometrics are strictly equivalent to a reasonably strong password, from the server’s perspective.
They’re quite different from the user’s perspective, but I’d argue that they’re a step backwards because they’re immutable.
Wasn’t the hack acknowledged way back in 2012? The new thing you’re hearing about is that the hacked passwords are finally being used.
What you’re saying is that passwords are broken because LinkedIn is a bad company and some people implement HTTPS wrong. That makes no sense.
What would you do instead, in the context of a web application that needs to authenticate a user?
Great write up!
Does git have an equivalent to Fsmonitor?
It looks like it’s being actively worked on: http://marc.info/?l=git&w=2&r=1&s=watchman&q=b
Strange no one is discussing it more.
I love the idea. I think it’s about time passwords die, one way or another.
I wish I could say “Because it’s a solved problem? SSL client certificates have been around for ages.” but alas I know of only one public website that uses SSL client certificates for authentication. (And it’s an SSL CA)
Linked Data Server https://databox.me/ uses client certificates.