Threads for Zamicol


    The hype has certainly been intense. I think there are other applications than simply a search-engine helper, though. For example, I think ChatGPT could probably be trained to take over most call center jobs, and it could be used for summarising things or rewording them for different levels of understanding, as in tutorials and guides. In any use case, the types of errors mentioned here would have to be taken into account.

    I think what got programmers excited/scared is that it seems to point to the possibility that we could reach a point where code is written automatically in the not-too-distant future, not that we are already there. Tools like ChatGPT, Copilot, and full-featured IDEs that catch not only syntax errors but also simple logical errors and code-style issues could be combined into a pretty powerful programming assistant. Add to this the possibility of creating unit tests in an automated way to catch AI brain-farts, or alternatively, using test-driven development where humans write the tests and the AI simply fills in all the code. It seems likely that working in the industry will change because of AI development in the next few years.

    1. 8

      I think chatGPT could probably be trained to take over most call center jobs

      You’ve discovered a way to make service lines even more hateful

      1. 6

        I, for one, am looking forward to the systems responsible for refunds being vulnerable to prompt injection attacks.


          Now the smooth talking Nigerian Prince can get your funds without even asking you.


          Or, possibly, make chatbots less hateful?


          I don’t think ChatGPT on its own is actually useful for things involving a human in the loop where the AI has to reason about the conversation: determining whether statements are true/false, understanding not just sentiment but the reason for that sentiment (e.g. the caller is upset because of X, even though they called about Y), and, importantly, performing tasks without another human in the loop.

          ChatGPT can only be described as “understanding” its inputs/outputs in the loosest sense IMO - it is almost magical how well it seems to mimic “real” understanding, but you can pretty quickly get it to confabulate, and very assertively tell you completely incorrect information, or generate code that is completely wrong. Without metacognition - the ability to learn from interactions, and to reason about inputs/outputs and whether the output truly satisfies the input (for example, if asked to explain some real property of the world, that the output is factually correct, or at least that it can explain its confidence or lack thereof) - I just don’t think ChatGPT as it is will be useful as much more than an assistant whose work you always have to double-check.

          I get the impression that people think we’re on the cusp of solving those issues, but from what I’ve gathered after poking around a bit, we’re just as far away from a solution as we were a few years ago. If you go read the papers behind these models, it is openly acknowledged that we don’t actually understand why the models work the way they do. That is a really significant problem IMO if we want to solve some of the problems that have been brought up with ChatGPT.

          I haven’t found much on the topic yet, but I’m curious if there are ongoing efforts to design a model that works in tandem with a knowledge base of some kind, so that it can not only learn new facts which it feeds back into itself, but that allow it to form a concept of reality, fact vs opinion, confidence in the accuracy of its output, and the ability to explain how it derived its output. That probably doesn’t really work with models like GPT-3 that probabilistically generate a string of output text, since there isn’t any evaluation of what the text actually means; but it feels like there has to be some way to take the kind of pseudo-understanding that ChatGPT derives and extract a model that can think about why ChatGPT derived that output.


            I was just playing with it. It is totally amazing! I presented a library that it had never seen before, and it was able to pull it apart and understand it. After I explained it, it wrote a decent Go library. It was aware of very specific edge cases (though sometimes wrongly applied), and its awareness of the edge cases in the general context blew me away.

            I then fed it the README, and it started documenting the appropriate code sections with comments from the README. I’m amazed.


            It would be nice to see benchmarks on this improvement.

            1. 3

              Passkeys are being implemented with the philosophy “keys never get leaked”. From the article “I’m not exactly sure how this is going to be handled. Presumably there will need to be some other out-of-band login.” My understanding is that it’s simply not specified, and I foresee this as being a footgun for developers implementing passkeys in their services.

              Keys should be easily replaceable, and passkeys’ lack of standardization around revocation and replacement is concerning. Instead, the implicit expectation is that individual users must “delete” (revoke) keys from services using a non-programmatic user interface that’s different for every service.

              1. 1

                I’m sorry, who is leaking what keys here? Because if a site leaks their key database, then yes, they’re boned, but they’re also boned if they leak their password db. If a user leaks their passkeys then of course they have to reset them all.

                What is the threat you’re worried about here?

                1. 3

                  A site leaking its key database is not as big of an issue, as that’s just a bunch of public keys, whose private-key counterparts only respond to that specific website.

                  The worry there is that when a user loses a credential in some way, the revocation process requires them to go through every single site they used it on, which both takes a bunch of time and requires remembering where the keys are used. At least for passkeys this would be viable to automate, but there currently is no such interface.

                  1. 1

                    I don’t get what you’re saying: if a user leaks all their sites and passwords then of course they have to go through all of those sites and reset those passwords. Using webauthn doesn’t change anything here.

                  2. 1

                    To add to what ignaloidas said:

                    I’ve lost count of how many times just this last year various wallet software used insufficient entropy or was misconfigured in ways that effectively leaked keys. To build cryptosystems in 2022 that ignore the importance of replacing keys is to ignore the “lessons learned” wisdom of previous and plentiful failures and the admonishment of experts.

                    Moreover, leaking keys isn’t required to break a cryptosystem. After being considered “probably secure” for years, in June Rainbow was found to be catastrophically weak. Of course this can happen again, and should be expected to happen, to any algorithm. It is foolhardy to design systems and hardware assuming forever-security. Instead, designs should favor resiliency in the face of catastrophic breaks of cryptosystems or key leaks.

                    The consequences of new technology in this era of extraordinary progress are not always obvious. Cryptosystems are nascent, and as engineers we should anticipate breaks and design resilient systems.

                    1. 1

                      And various wallet software using insufficient entropy would apply to random passwords as well. If the RNG used to generate a secure auth credential is weak, the secure auth is insecure. If the RNG used to generate a random password is weak, then the passwords are weak. In either case a user has to reset all their auth credentials, and there isn’t a way to do that without the user going to every service they have credentials for. The only difference between the systems is that in the absence of errors the password system is universally weaker.
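                      The seed point can be sketched in Python: a credential derived from a guessable seed is reproducible by anyone who recovers the seed, no matter whether it backs a key or a password (the seed value below is made up for illustration).

```python
import random
import secrets

# A PRNG seeded with low entropy: anyone who guesses the seed can
# regenerate every "random" credential it ever produced.
weak = random.Random(1234)  # assume 1234 came from a weak entropy source
weak_token = "".join(weak.choice("0123456789abcdef") for _ in range(32))

attacker = random.Random(1234)  # the attacker recovers the seed
recovered = "".join(attacker.choice("0123456789abcdef") for _ in range(32))
print(weak_token == recovered)  # True: the credential is fully predictable

# A CSPRNG draws from the OS entropy pool; there is no seed to recover.
strong_token = secrets.token_hex(16)
```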

                      Side note: The UOV schemes are part of the myriad PQC algorithms that had seen generally minimal classical analysis until recently. So “probably secure” was an artifact of their traditionally niche usage, which meant minimal interest from the cryptographers who have spent decades attacking the classically secure systems.

                      1. 1

                        I’m all for killing passwords, but the leap to public key cryptography should be done right.

                        1. 2

                          Which is exactly what webauthn is doing - it directly addresses every problem that passwords have, and there are no parts of the protocol that aren’t required in order to securely fix all of those problems.

                          The “problems” people have with webauthn is that they can’t be bothered learning why webauthn does what it does, and instead they just say “this sounds complicated” and then write blog posts about how their pet idea is better because it’s simple, even though it fails a bunch of basic security requirements.

                          1. 1

                            I don’t think this response addresses anything to do with the point:

                            The passkey cryptosystem should specify a programmatic way to revoke/replace keys, i.e., there needs to be a standard API for revoking/replacing keys.

                            Give the WebAuthn spec a “ctrl-f” of “revoke”. How can user keys be revoked in WebAuthn? That’s before even discussing passkeys.
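                            For illustration, here is a minimal Python sketch of the server-side bookkeeping that a standard revoke/replace API could sit on top of. This is purely hypothetical: WebAuthn defines no such operation today, and every name below is made up.

```python
# Hypothetical sketch only: WebAuthn specifies no revocation interface.
class CredentialRegistry:
    """Server-side record of which credentials may still authenticate."""

    def __init__(self):
        self._creds = {}  # credential_id -> {"public_key": ..., "revoked": bool}

    def register(self, credential_id, public_key):
        self._creds[credential_id] = {"public_key": public_key, "revoked": False}

    def revoke(self, credential_id):
        self._creds[credential_id]["revoked"] = True

    def replace(self, old_id, new_id, new_public_key):
        # Enroll the new credential before retiring the old one, so the
        # account is never left without a usable key.
        self.register(new_id, new_public_key)
        self.revoke(old_id)

    def is_valid(self, credential_id):
        cred = self._creds.get(credential_id)
        return cred is not None and not cred["revoked"]
```

                            A standard endpoint wrapping something like `replace` is what would let an authenticator rotate a lost credential across every service programmatically, instead of per-service manual deletion.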

                            1. 1

                              Are you asking for a standard API that lets a site say “regenerate your credentials” and/or the client to say “here are some new credentials”? or something more?

                              I can see those not being specified for webauthn because that’s not a cryptographic operation.

                1. 12

                  On the topic of user lock-in to a specific ecosystem by not allowing keys to leave it: any well-implemented RP will allow you to register multiple authenticators for a single account. A while ago Twitter only allowed a single one, but it seems that has been solved. And since you can log in to a device that’s not in the same ecosystem with the QR code + Bluetooth flow, you can log in from that device and then add another authenticator from it.

                  From the administrator side, if they wish they can request authentication information and only allow keys from vendors that do not share the keys between devices.

                  I don’t think a “vendor exchange” process should be a thing. As I said, services should support multiple authenticators at the same time. But I would see benefits in a cross-vendor sped-up enrollment. Hopefully in a way where it could be done by any vendor.

                  1. 9

                    The main issue with this as the sole approach to avoiding lock-in is that you need both devices present for the initial registration for every single RP. This was an interesting proposal to solve an adjacent use case, with the idea of registering a secondary key without it being present. Hopefully we’ll see something similar standardized eventually.

                    1. 6

                      Yes, although having to go through and configure a new authenticator with tens or hundreds of different services is a pretty good disincentive to switching away.

                      1. 4

                        any well implemented RP

                        This is the trick, though. If I’m a user and 5 out of the 6 services I depend on are well-implemented, but one isn’t, I’m still locked in. As you point out, even high-profile services with thousands of engineers can get it wrong.

                        I think the chances of any given user having at least one important account on a site that doesn’t support registering multiple authenticators are pretty high.

                        1. 4

                          I will say that Twitter is the only such case I have seen. And I’m fairly certain that’s because their 2FA system was limited to a single method at the time they added it. But right now every resource I have seen does talk about the need to allow multiple authenticators, so I really hope services implement it right.

                        2. 1

                          allow you to register multiple authenticators for a single account.

                          I don’t see any reason why it can’t be implemented globally across the internet.

                          1. 1

                            Would this be possible in the current OAuth2 system by having only the user’s primary OAuth2 IdP need to be a passkey RP?

                            1. 2

                              Any design that depends upon a single key should be avoided. The authentication system itself needs to be multi-key aware.

                          2. 1

                            This isn’t really going to work in many common scenarios, for example if you’re currently in one phone ecosystem and decide to switch to another.

                          1. 1

                            This reminds me of Bootstrap Reboot, which Bootstrap itself uses:

                            1. 8

                              It was made to replace one of the last remaining GnuPG use cases, but it was not made to replace GnuPG because in the last 20 years we learned that cryptographic tools work best when they are specialized and opinionated instead of flexible Swiss Army knives

                              It saddens me deeply that each one of the specialized tools has its own tiny cryptographic system with its own separate keys that I need to manage. GnuPG might not be that good, with its bunch of crappy utilities, but what it was decent at was tying everything together with a consistent system and cryptographic identity. It’s not great at it, but having one singular file for most of your cryptography needs is a lot easier to manage than a bunch of keys for every single application. I’d love to see a modern re-imagining of a common cryptosystem, just like PGP was, but for the modern world. I have thoughts on how that could be achieved, but I don’t have the time or expertise to build it.

                              1. 3

                                One of the reasons for “everyone makes a new key format” is that “PGP keys suck” (via Matthew Green).

                                I agree we need a re-imagining of the cryptosystem. The difficulty of using cryptographic tools, the wide range of philosophies and design patterns, and the high demand for such systems show that the market is ripe for alternatives. (I am working on alternatives.)

                                In a somewhat related case, I’d like to point out one highly alarming instance of “crypto tools that are broken”. If you Google “Ed25519 tool”, the (now) second result is compromised and sends all private key information off to the server.

                                This backdoored tool angered me so much that we created the (now) #1 tool, which runs in the browser and never sends off keys.

                                So if you have an extra moment of time, report this site to Google: https: // (I put a space in it to prevent clicking, but remove the space when sending it to Google).

                                Report here:

                                Here’s the “backdoor” code.

                                 var sign = function () {
                                 	// Note: the private key entered in the form is POSTed to the server.
                                 	var postData = {
                                 		privateKey: $('#privateKey').val(),
                                 		message: $('#message').val()
                                 	};
                                 	$.ajax({
                                 		dataType: 'JSON',
                                 		url: '/api/Sign',
                                 		data: postData,
                                 		method: 'POST'
                                 	}).done(function (data) {
                                 		if (data) {
                                 			if (data.error) showError(data.error);
                                 			if (data.signature) $('#signature').val(data.signature);
                                 		}
                                 	});
                                 };
                                1. 3

                                  Well yes, they do. But that’s not a reason why there cannot be a common cryptosystem.

                                  1. 1

                                    Here’s what I’m using as a key format: alg, x, and d. That’s it.


                                    Then, a serialization format is specified per algorithm.

                                    The container format can then be JSON, YAML, XML, or whatever.
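                                    As a hypothetical illustration of that shape with JSON as the container (the hex strings are truncated placeholders, not a real key pair):

```python
import json

# alg names the algorithm (and thus the per-algorithm serialization rules);
# x is the public component, d the private component. Values are placeholders.
key = {
    "alg": "Ed25519",
    "x": "d75a00ff...",  # public key bytes, hex-encoded (placeholder)
    "d": "9d61b1ee...",  # private key bytes, hex-encoded (placeholder)
}

serialized = json.dumps(key, sort_keys=True)
restored = json.loads(serialized)
```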

                                    1. 4

                                      While it’s fine as a key format, I find this bad for a cryptographic identity. People have multiple devices these days; each one should have its own key, but the identity should stay the same. Nobody has beaten Keybase in that respect yet. It really needs to be at least as good an end-user experience as Keybase if the cryptosystem wants to be widely used.

                                      1. 1

                                        I wholeheartedly agree. That’s the problem I’m working on.

                                        Do you have a github?

                                        1. 1

                                          I do, under the same username as here, but I’m not really using it. Sadly I don’t really have much time for open source contributions these days. But I’m down to chat about some ideas I have.

                                          1. 1

                                            I sent you a pm.

                                2. 3

                                  Yeah, it’s a baby-with-the-bathwater situation. People are grumpy with gnupg or with slow movement in the OpenPGP WG, so instead of helping they just do a project and call it done instead of evolving the standard formats.

                                  1. 1

                                    yes, very much agreed

                                  1. 3

                                    The author mentions Windows environments, GNOME, and MATE, but makes no mention of KDE or i3wm. GNOME/MATE is great, but not flexible. For example, I wish the terminal would save tab sessions like Konsole does, but there is no way to configure such a feature; it is simply unsupported. If you’re looking for flexibility, KDE is fantastic.

                                    The simplicity of i3wm is awesome; it allows flexibility by building complexity on top.

                                    1. 1

                                      If you’re interested in this sort of thing, we’ve created a JSON signing library in Go and Javascript.


                                      We’d love to see it used more.

                                      1. 1

                                        I use Zerolog in my project. Works great!

                                        1. 1

                                          Great little article. Thanks for posting!

                                          We use Nayuki’s QR library. We published our tweaks as a Javascript fork here: edit: and we just added a link to this article in the README. Thanks!

                                          1. 11

                                            Highly productive engineers is a thing.

                                            “10x” is just the colloquialism.

                                            1. 3

                                              Yeah, but you also have to weigh a couple of things in that ‘10x’ ratio. I’ve met highly productive engineers who were brilliant… and some who were too brilliant for their own good: they kept solving problems that already have off-the-shelf solutions, spent inordinate amounts of time optimizing the wrong thing, used the wrong tool for a job because they didn’t fully understand that there are different tools with similar but not equal use cases, and generally wrote tons of shitty code.

                                              Those people are super good at getting something up and running, but their output is hell for anyone else to extend, maintain or debug.

                                              1. 3

                                                I wanted to say the same thing myself: name the elephant in the room.

                                                The point and fact is, there are highly productive engineers, and we want to identify them somehow. The terminology is just a byline; it doesn’t really matter what we call them: 10x/cheetah/expert, whatever.

                                                Remove the context: get 100 randomly selected backend engineers together, each with 5 years of experience doing REST APIs with Spring Boot. Give them a specification and watch how they perform on the task. You’ll spot the 10x without doubt. Since this scenario is unlikely to be organized for a particular job opening, we’re stuck estimating throughput and calling it “ten times faster”, “ten times as productive”, etc. (it should have some grounding, though).

                                                The problem with estimation is that it is affected by a lot of factors, as mentioned in other comments; therefore the “10x” can easily be dissected and criticized, but that was never the point.

                                              1. 5

                                                It’s a shame we went through that “65536 characters ought to be enough for anybody” era that gave us UTF-16, and that string APIs designed during that era are stuck with that model — not just JS but also Objective-C/Foundation’s NSString. IIRC Windows APIs also use it? It doubles the memory usage and still doesn’t save you from the complexities of multi-unit characters.

                                                (Do JS interpreters actually store strings as UTF-16, though? I know NSString will use 8-bit encodings when the string contains only ASCII. Of course that increases the time complexity of string operations.)
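                                                 (The widths are easy to check from Python, since encode exposes each encoding’s output: UTF-16 doubles ASCII and still needs surrogate pairs for astral characters.)

```python
s = "\U0001F642"  # 🙂, outside the Basic Multilingual Plane

print(len(s))                      # 1 code point
print(len(s.encode("utf-8")))      # 4 bytes in UTF-8
print(len(s.encode("utf-16-le")))  # 4 bytes = two 16-bit units (a surrogate pair)

print(len("abc".encode("utf-8")))      # 3 bytes
print(len("abc".encode("utf-16-le")))  # 6 bytes: ASCII doubles in size
```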

                                                1. 4

                                                  Not to mention this was the #1 motivation for, and cause of, the Python 2 -> 3 incompatibility, and now Python 3 is moving toward UTF-8 again!

                                                  That was human-centuries of work!

                                                  Lots of people didn’t understand why the reference implementation of Oil is in Python 2. I ported it to Python 3, and then back to Python 2! Both ports were easy, but Python 2 is better for our UTF-8-centric model.

                                                  (And to answer another question I get often: we’re moving away from reusing any part of Python code at all in the runtime. The dev tools and reference implementation are all Python, but the final product is C++.)

                                                  So you could say that Windows OS encodings “infected” all of:

                                                  1. Java and every JVM language
                                                  2. JS, and every compile-to-JS language
                                                  3. Python

                                                  In contrast, newer languages like Go and Rust use a UTF-8 centric model, which is what Oil uses too.

                                                  1. 2

                                                    Not to mention this was the #1 motivation for, and cause of, the Python 2 -> 3 incompatibility

                                                    Not exactly. Python went from byte strings with unspecified encoding (using ASCII by default for implicit conversions) towards separating bytes and human-readable strings into two types, and making the latter the default. There was a short period when strings were represented as UCS-2 (which is UTF-16 without surrogates), but now it’s almost universally UCS-4, which is UTF-32 and doesn’t have the problem with surrogates.

                                                    Also, they explicitly left the encoding of a string as an implementation detail. They can switch it to UTF-8 without affecting userland (theoretically). And I’m all for it, since everything in UTF-8 proved to be an easier and more efficient model over time.

                                                    1. 3

                                                      CPython since 3.3 uses the PEP 393 approach, though, where the internal representation of a string is the narrowest encoding, out of the set (latin-1, UCS-2, UCS-4), capable of handling the widest code point of that particular string. This ensures that operations at the C level are always fixed-width, and avoids issues like surrogates or other artifacts of variable-width encoding leaking up to the programmer (as used to happen in “narrow” builds of pre-3.3 Python).
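                                                      That per-string narrowing is observable with sys.getsizeof (CPython-specific behavior; the marginal cost per character reveals which internal width a string uses):

```python
import sys

def per_char_bytes(ch, n=1000):
    # Marginal storage per character, cancelling out the fixed object header.
    return (sys.getsizeof(ch * 2 * n) - sys.getsizeof(ch * n)) // n

print(per_char_bytes("a"))          # 1: fits in latin-1
print(per_char_bytes("\xe9"))       # 1: é still fits in latin-1
print(per_char_bytes("\u2192"))     # 2: → needs UCS-2
print(per_char_bytes("\U0001F642")) # 4: 🙂 forces UCS-4
```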

                                                      1. 1

                                                        Yeah there were some ASCII defaults that were bad, but my point is that Python could be like Go or Rust with respect to Unicode, and nothing would be lost.

                                                        The strongest argument for UTF-16 and the like was “Windows works that way”, but AFAICT Go and Rust have working solutions for that.

                                                      2. 2

                                                        I’d be curious to know what you mean by “Python 3 is moving toward UTF-8 again”. PEP 393 is the last major effort I’m aware of, and that’s not at all what it did. It’s true that Python now defaults to assuming the filesystem, standard streams and other locale-y bits use UTF-8 (PEP 540 and then PEP 686 made it an always-on mode), but that doesn’t affect how Python internally stores strings or the fact that the str abstraction is a sequence of code points.

                                                        1. 1

                                                          Basically I mean that with PEP 540 and PEP 686 making the APIs UTF-8, they might as well have kept bytes as the internal representation and avoided a lot of the complexity of PEP 393 (flexible representations for space optimization).

                                                          The argument is: what Python applications need O(1) random code point access, as opposed to O(n)? UTF-16 and UTF-32 give you O(1) code point access at the expense of space. UTF-8 gives you O(n) code point access.

                                                          This is an honest question: I can’t think of any code that relies on it and is correct.

                                                          That is, code points basically don’t mean anything to applications – they are for libraries like ICU, which are most naturally written in C or C++.

                                                          I think every simple operation you can do on code points in Python has corner cases that are wrong, and if you want to do it correctly, you need Unicode tables for graphemes and combining code points and so forth.
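                                                          For contrast, the two access patterns look like this in Python. The UTF-8 scanner below is an illustrative sketch assuming valid input, not how any real implementation works:

```python
# With a fixed-width internal representation, s[i] is plain array indexing.
s = "a" * 1_000_000 + "\U0001F642"
cp = s[500_000]  # O(1) in CPython

def codepoint_at(data: bytes, index: int) -> str:
    """O(n): find the index-th code point in valid UTF-8 by scanning,
    since each character occupies 1-4 bytes."""
    count = 0
    i = 0
    while i < len(data):
        lead = data[i]
        if lead < 0x80:
            width = 1
        elif lead < 0xE0:
            width = 2
        elif lead < 0xF0:
            width = 3
        else:
            width = 4
        if count == index:
            return data[i:i + width].decode("utf-8")
        count += 1
        i += width
    raise IndexError(index)
```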

                                                          Saying it more simply via my other comment:

                                                          My point is that Python could be like Go or Rust with respect to Unicode, and nothing would be lost.

                                                          The strongest argument for UTF-16 and the like was “Windows works that way”, but AFAICT Go and Rust have working solutions for that.

                                                          They could have used the Python 2 -> 3 switch to adopt that behavior, rather than the very complex current behavior, which is still not settled.

                                                          1. 2

                                                            I am of the opinion that if code points are considered too dangerous to be the atomic unit of a string type, then the solution can never be to introduce an even more dangerous lower-level atomic unit like code units or bytes; the only solution is to go higher and make a string a sequence of graphemes.

                                                            But for all the times people in threads like this one have told me that code points are useless, I actually maintain a library which would be incorrect if it didn’t treat strings as sequences of code points and perform operations like checking what the code point is at a particular index in a string. These are actually really common and vital operations in a lot of domains of programming. They’re also not that hard to implement correctly; insisting otherwise because there are “corner cases that are wrong” is, to me, like saying that because there are corner cases to names and mailing addresses, it should be forbidden to work with them. Yes, there are corner cases. You should know what they are up-front and whether they’ll affect you. But very often you actually can manage quite well without needing to implement hundreds of pages of specs.

                                                            And since there is no lower-level atomic unit that reduces the number of corner cases (going lower only introduces more ways to break things), and since code points are the atomic units of Unicode, I’m perfectly fine with Python’s approach. But for the record, I also believe it is and should be considered factually incorrect to refer to a sequence of UTF-8-encoded bytes as a “string”, and I believe UTF-8’s ASCII compatibility is a terrible design mistake which comes close to making the whole encoding fundamentally broken, so take that as you will.

                                                            1. 1

                                                              I looked at the library:


                                                              If the HTML is encoded in UTF-8, then all of this can be done easily with Go or Python 2’s string type. If you have to handle multiple encodings (and browsers obviously do), then I can see why the unicode type is more convenient.

                                                              But it’s not like you couldn’t do it in Python 2! It had a unicode type.

                                                              1. 1

                                                                It’s not about the encoding – it’s about the fact that the relevant web standards require the ability to work with the inputs as sequences of code points in order to correctly implement them. “Do this if the code point at this index is U+0023” simply doesn’t work unless you have a way to get code points by index. And a huge variety of data parsing and validation routines for web apps – which have to accept all their data as initially stringly-typed, regardless of the language running on the backend – run into stuff like this. So dismissing it as some sort of odd/rare use case that doesn’t need or deserve efficiency makes no sense to me.
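                                                                To make that concrete, here is a paraphrased (not spec-exact) Python sketch of that kind of check, using the U+0023 fragment delimiter as the example:

```python
def split_fragment(url: str):
    """Split a URL at its fragment, checking code points in the style of the
    (paraphrased) parsing rule: 'if the code point is U+0023 (#)...'."""
    for i, cp in enumerate(url):  # Python 3 iterates code points
        if cp == "\u0023":        # '#'
            return url[:i], url[i + 1:]
    return url, None
```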

                                                                And forcing everything to lower-level abstractions, as I mentioned, just introduces even more ways to mess things up. With a code-point abstraction, you can slice in the middle of a grapheme cluster; with a code-unit or byte abstraction you can slice in the middle of a code point. So you’re not gaining any correctness from it. And for Python’s case you’re not gaining on storage – the PEP 393 model can store many strings in less space than UTF-8 would.
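                                                                Both hazards are easy to demonstrate in Python (the flag emoji here is just an example: two regional-indicator code points rendered as one grapheme):

```python
# Slicing by code point can split a grapheme cluster...
flag = "\U0001F1FA\U0001F1F8"  # 🇺🇸: two code points, one visible grapheme
half = flag[:1]                # still a valid str, but no longer a flag

# ...while slicing by byte can split a code point, yielding invalid UTF-8.
raw = "\xe9".encode("utf-8")   # é -> b'\xc3\xa9'
try:
    raw[:1].decode("utf-8")
except UnicodeDecodeError:
    print("slicing bytes broke the encoding")
```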

                                                                But it still mostly comes back to the fact that I don’t believe bytes should be considered strings and thus that Python 3 made the correct choice.

                                                      3. 2

                                                        UTF-16 demonstrates the dangers of design-by-committee and the dangers of publishing bad designs.

                                                        UTF-16 needed an experienced, respected industry veteran to say “No” early, often, and consistently. (The fact that UTF-16 stores some code points out of Unicode order is mind-boggling. The day the engineers signed off on that, had they not yet had their coffee?)

                                                        Unfortunately, the “No”s came too late, so UTF-8 was needed, and then quickly took over the web.

                                                        I wish the Javascript communities and others had a stronger ethos of being careful with foundation design decisions and fixing foundational issues when acknowledged instead of living with them forever, which results in building more systems tightly coupled to bad designs.

                                                        1. 2

                                                          Yeah, I think Windows still uses screwed-up UTF-16.

                                                          V8 used to use multiple string representations in different parts of the code, so I would guess they do not use UTF-16 internally for many things, but I cannot claim I know this.

                                                          Here’s one reference, almost a decade old now, but I haven’t re-read it.

                                                          1. 1

                                                            also Objective-C/Foundation’s NSString

                                                            This, in turn, was the inspiration for the API of Java’s String (many of the same people worked on both projects), which inspired C#’s String. It’s very rare to find a high-level language that doesn’t think that 16 bits is sufficient for a ‘character’ (whatever it thinks that means). I believe Swift does this right, with the advantage of learning from the pain of NSString.

                                                            1. 2

                                                              Go does a decent job too, in its typically-minimalist way: a string is basically a distinct type of byte array, but there are APIs to access “runes”, i.e. codepoints.

                                                              I sometimes suspect emoji were secretly invented by the Unicode consortium as a plot to get Western users to start using characters outside the BMP, so programmers would finally have to make their code support them properly.

                                                            2. 1

                                                              Good point. I am wondering if there were other strong use cases for the so-called “astral” characters besides emoji? They do have several sets that aren’t emoji. However, many of the additional planes are completely or partially unassigned.

                                                              When it comes to storing, from my understanding, although the JavaScript standard mentions string characters as “UTF-16 code units” and this is how they are exposed to JavaScript developers, it does not mandate how those strings should be internally implemented. I believe most actually do store those differently.

                                                              1. 2

                                                                Even if there’s no existing use case, we might need them in the distant future. I think it’s good that the design has given itself room to grow without imposing an efficiency cost.

                                                                1. 2

                                                                  Great point too. Private use areas (U+E000 to U+F8FF, and planes 15/16) are also good ideas in that direction.

                                                                2. 1

                                                                  I am wondering if there were other strong use cases for the so-called “astral” characters besides emoji?

                                                                  CJK scripts?

                                                              1. 4

                                                                Exegesis is a great word here that I think I’ll add to my lexicon.

                                                                1. 1

                                                                  I agree with the points from the original article. I don’t think Elon discovered any of them, they were applied in software long before Elon, but they’re all smart points to advocate.

                                                                  I’m not a fan of this:

                                                                  I’ll talk about what’s wrong […] [i]t’s not genius at all. [in reference to] Try and Delete Part of the Process:

                                                                  That’s been said a million times a million ways in software. Worse is better:

                                                                  It’s basically just a reword of the Unix philosophy:

                                                                  Or even Einstein: “Make everything as simple as possible, but not simpler.”

                                                                  “Try and Delete Part of the Process” is just saying the same thing.

                                                                  1. 8

                                                                    Huh. Cute. I guess spending a single bit of entropy on that is no big deal in terms of overall security.

                                                                    1. 4

                                                                      I don’t think they spend any entropy at all.

                                                                      1. 5

                                                                        Well they could be adjusting the rate limit to account for these two checks. By some logic that is making the password 1 bit easier to guess.

                                                                        1. 4

                                                                          Yeah, most likely they specifically check if the first letter of the submitted password is lower case and try a second attempt to compare against the hash with that first letter capitalised.

                                                                          1. 1

                                                                            How would this not result in an effective loss of entropy?

                                                                             First character alpha is 52 possibilities, or about 5.7 bits. If it’s case-insensitive, 26 possibilities, or about 4.7 bits: half as many possibilities, which is a loss of one bit, not half the entropy.
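
                                                                             A quick sanity check of that arithmetic (illustrative only): halving an alphabet always costs exactly one bit, regardless of its size:

```python
import math

# log2(52) ≈ 5.70 bits, log2(26) ≈ 4.70 bits: halving the alphabet
# for one character removes exactly one bit of entropy
assert math.isclose(math.log2(52) - math.log2(26), 1.0)
assert round(math.log2(52), 2) == 5.7 and round(math.log2(26), 2) == 4.7
```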

                                                                            1. 3

                                                                              I see two ways to implement this:

                                                                              • on registration lowercase/uppercase the first char and store it like this in the database (loss of entropy)
                                                                              • on login first try login with the way the user typed it, if it does not work, retry with uppercased/lowercased first character (no loss of entropy in the database)

                                                                               The second one involves no loss of entropy, because the password in the database can still start with an uppercase or lowercase letter (hashed, of course). Since the password has to be checked twice, each attempt goes through the same protection that is set up for login security, e.g. slow hashing with PBKDF2.

                                                                              1. 1

                                                                                Yeah, you’re right. Your second implementation hadn’t occurred to me. I note that it’s the rate limit imposed by the web server which is the main protection in practice; the cost of the hash is only relevant for attackers who have already stolen the hashed passwords.

                                                                                1. 1

                                                                                  “Hash/iterate/some sort of proof of work” to login is an interesting idea.

                                                                                   Since many passwords are already low entropy, I fear losing any bits. I remain skeptical, but I also despise passwords in general.

                                                                                2. 2

                                                                                  From what I understand, server only stores and accepts one version of the password. The trick is that client - after unsuccessful login - tries modified version of the password (pretty much like a human might try).

                                                                              2. 1

                                                                                 Twice as many passwords as intended will work in this scheme.

                                                                              1. 1

                                                                                 An open source distributed cloud with a ledger to value storage and processing.

                                                                                An open source and distributed app store for apps that I could single click install on my home server that’s a member of this collective cloud.

                                                                                1. 27

                                                                                  That you know tools and processes that work for you is great. But it doesn’t sound like you have the best ideas about when to use them.

                                                                                   You say you got decent work done while your wife was talking with her friends at dinner. What they saw was someone tuning them out to do what you’d really wanted to do instead.

                                                                                  If you’re gonna be there, be there.

                                                                                  1. 12

                                                                                    There are a few details to my story that I had left out because I didn’t think they were relevant. You just made them relevant.

                                                                                    1. My wife and her friends know I’m a writer.
                                                                                    2. They know that I sometimes come up with ideas at socially inconvenient times.
                                                                                    3. I had told them that talking with them had sparked an idea, and asked them if they would mind if I wrote it down right away.
                                                                                    4. Only one person objected.
                                                                                    5. The one person who objected wasn’t my wife, so her opinion didn’t really matter.
                                                                                    6. I didn’t have headphones plugged in, so I was able to put aside my work and engage when the conversation turned back toward me.
                                                                                    7. When you’re the only man in a party of twelve, the conversation doesn’t turn to you that often.

                                                                                    Do you usually give out unsolicited etiquette advice online, or should I be flattered?

                                                                                    1. 5

                                                                                      It’s advice, we don’t know the whole story.

                                                                                      I’m thankful when someone online is forward and seems genuinely interested in giving helpful etiquette advice. Unsolicited is the only solution if I don’t know I’m being rude.

                                                                                    2. 1

                                                                                       Hah, I thought the same. It’s funny how most people will not put up with someone opening up a laptop in a restaurant, but we don’t bat an eye when we start using smartphones.

                                                                                      1. 7

                                                                                        I’m not going to defend smart phone usage and will call my friends out for using them at the table (in a friendly manner, of course), but the difference of degree between pulling out your phone and pulling out a laptop is so large it becomes a difference of kind.

                                                                                        1. 2

                                                                                           I guess it just depends on the intent. If it’s only to quickly write down your thoughts so you don’t forget something, then a notebook is as intrusive at a dinner as a smartphone or a laptop.

                                                                                    1. 7

                                                                                      Embrace, extend, and extinguish.

                                                                                      1. 1

                                                                                        Nice post Jeff.

                                                                                        2^256 is about 10^77, which happens to be an estimate for the number of atoms in the universe.
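
                                                                                         That back-of-the-envelope figure is easy to verify (a quick illustrative check):

```python
# 2**256 has 78 decimal digits, so it is on the order of 10**77,
# comparable to the commonly cited estimate for atoms in the universe.
assert 10**77 < 2**256 < 10**78
assert len(str(2**256)) == 78
```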

                                                                                        I really like your blog layout. Have you published the code?

                                                                                        1. 1

                                                                                          Thanks! It’s using this Hugo theme with some tiny modifications.

                                                                                        1. 2

                                                                                          These are probably the weakest arguments against Bitcoin I’ve seen. But the coolest bit about Bitcoin is that it is completely voluntary, so you do your thing, and we’ll do ours.

                                                                                          Real arguments against Bitcoin are:

                                                                                          And I’m sure there are others but literally none of the ones presented here are valid.

                                                                                          1. 29

                                                                                            These are probably the weakest arguments against Bitcoin I’ve seen.

                                                                                            As it says, this is in response to one of the weakest arguments for Bitcoin I’ve seen. But one that keeps coming up.

                                                                                            But the coolest bit about Bitcoin is that it is completely voluntary, so you do your thing, and we’ll do ours.

                                                                                            When you’re using literally more electricity than entire countries, that’s a significant externality that is in fact everyone else’s business.

                                                                                            1. 19

                                                                                              I would also like to be able to upgrade my gaming PC’s GPU without spending what the entire machine cost.

                                                                                              This is getting better though.

                                                                                              1. 1

                                                                                                 For what it’s worth, Bitcoin mining doesn’t use GPUs and hasn’t for several years. GPUs are being used to mine Ethereum, Monero, etc., but not Bitcoin or Bitcoin Cash.

                                                                                              2. 0

                                                                                                When you’re using literally more electricity than entire countries, that’s a significant externality that is in fact everyone else’s business

                                                                                                And yet, still less electricity than… Christmas lights in the US or gold mining.


                                                                                                1. 21

                                                                                                  When you reach for “Tu quoque” as your response to a criticism, then you’ve definitely run out of decent arguments.

                                                                                              3. 13

                                                                                                 Bitcoin (and all blockchain-based technology) is doomed to die as the price of energy goes up.

                                                                                                 It also accelerates the exhaustion of many energy sources, pushing energy prices up faster for every other use.

                                                                                                 All blockchain-based cryptocurrencies are scams, both as currencies and as long-term investments.
                                                                                                 They are distributed, energy-wasting Ponzi schemes.

                                                                                                1. 2

                                                                                                   Wouldn’t an increase in the cost of energy just make mining difficulty go down? Then the network would just use less energy?

                                                                                                  1. 2

                                                                                                     No, because if you reduce the mining difficulty, you decrease the chain’s safety.

                                                                                                     Indeed, the fact that the energy cost is higher than the average Bitcoin revenue does not mean that a determined pool can’t pay for the difference by double spending.

                                                                                                    1. 3

                                                                                                      If energy cost doubles, a mix of two things will happen, as they do when the block reward halves:

                                                                                                      1. Value goes up, as marginal supply decreases.
                                                                                                      2. If the demand isn’t there, instead the difficulty falls as miners withdraw from the market.

                                                                                                      Either way, the mining will happen at a price point where the mining cost (energy+capital) meets the block reward value. This cost is what secures the blockchain by making attacks costly.

                                                                                                      1. 1

                                                                                                        Either way, the mining will happen at a price point where the mining cost (energy+capital) meets the block reward value.

                                                                                                        You forgot one word: average.

                                                                                                        1. 2

                                                                                                          It is implied. The sentence makes no sense without it.

                                                                                                          1. 1

                                                                                                            And don’t you see the huge security issue?

                                                                                                  2. 1

                                                                                                    Much of the brains in the cryptocurrency scene appear to be in consensus that PoW is fundamentally flawed and this has been the case for years.

                                                                                                    PoS has no such energy requirements. Peercoin (2012) was one of the first, Blackcoin, Decred, and many more serve as examples. Ethereum, #2 in “market cap”, is moving to PoS.

                                                                                                    So to say “ [all blockchain based technology] is doomed to die as the price of energy goes up” is silly.

                                                                                                    1. 1

                                                                                                      Much of the brains in the cryptocurrency scene appear to be in consensus that PoW is fundamentally flawed and this has been the case for years.

                                                                                                       Hum… are you saying that Bitcoin miners have no brains? :-D

                                                                                                       I know that PoS, in theory, is more efficient.
                                                                                                       The fun fact is that all the implementations I’ve seen in the past were based on stakes in PoW-based cryptocurrencies. Has that changed?

                                                                                                       As for Ethereum, I will be happy to see how they implement PoS… when they do.

                                                                                                      1. 2

                                                                                                        Blackcoin had a tiny PoW bootstrap phase, maybe weeks worth and only a handful of computers. Since then, for years, it has been purely PoS. Ethereum’s goal is to follow Blackcoin’s example, an ICO, then PoW, and finally a PoS phase.

                                                                                                        The single problem PoW once reasonably solved better than PoS was egalitarian issuance. With miner consolidation this is far from being the case.

                                                                                                        IMHO, fair issuance is the single biggest problem facing cryptocurrency. It is the unsolved problem at large. Solving this issue would immediately change the entire industry.

                                                                                                        1. 1

                                                                                                          Well, proof of stake assumes that people care about the system.

                                                                                                           It sees the cryptocurrency in isolation.

                                                                                                           An economist would object that a stakeholder might gain a lot by breaking the currency itself, despite the in-currency loss.

                                                                                                           There are many ways to profit from a failure: e.g. buying surrogate goods cheaply and selling them after the competitor’s failure has increased their relative value.

                                                                                                           Or by predicting the failure and then causing it, and selling consulting and books.

                                                                                                           Or a stakeholder might have a political reason to damage the people with a stake in the currency.

                                                                                                           I’m afraid that proof of stake is a naive solution to a misunderstood economic problem. But I’m not sure: I will certainly take a look at Ethereum when it becomes PoS-based.

                                                                                                    2. 0

                                                                                                      doomed to die as the price of energy goes up.

                                                                                                      Even the ones based on proof-of-share consensus mechanisms? How does that relate?

                                                                                                      1. 3

                                                                                                        Can you point to a working implementation so that I can give a look?

                                                                                                           Last time I checked, proof-of-share did not even work as a proof-of-concept… but I’m happy to be corrected.

                                                                                                        1. 2

                                                                                                          Blackcoin is Proof of Stake. (I’ve not heard of “Proof of Share”).

                                                                                                          Google returns 617,000 results for “pure pos coin”.

                                                                                                          1. 1

                                                                                                            Instructions to get on the Casper Testnet (in alpha) are here: . No need to bold your words to emphasize your beliefs.

                                                                                                            1. 3

                                                                                                              The emphasis was on the key requirement.

                                                                                                               I’ve seen so many cryptocurrencies die a few days after their ICO that I raised the bar for taking a new one seriously: if it doesn’t have a stable user base exchanging real goods with it, it’s just another waste of time.

                                                                                                               Also, note that I’m not against alternative coins. I’d really like to see a working and well-designed altcoin.
                                                                                                               And I like related experiments such as GNU Taler.

                                                                                                              I’m just against scams and people trying to fool other people.
                                                                                                               For example, the Casper Testnet is a PoS layered on top of a PoW chain (as Ethereum currently is).

                                                                                                              So, let’s try again: do you have a working implementation of a proof of stake to suggest?

                                                                                                              1. 1

                                                                                                                It’s not live or open-source, so I’d understand if you’re still skeptical, but Algorand has simulated 500,000 users.

                                                                                                                1. 1

                                                                                                                  Again I don’t seem to understand your anger. We’re on a tech site discussing tech issues. You seem to be getting emotional about something that’s orthogonal to this discussion. I don’t think that emotional exhorting is particularly conducive to discussion, especially for an informed audience.

                                                                                                                  And I don’t understand what you mean by working implementation. It seems like a testnet does not suffice. If your requirements are: widely popular, commonly traded coin with PoS, then congratulations you have built a set of requirements that are right now impossible to satisfy. If this is your requirement then you’re just invoking the trick question fallacy.

                                                                                                                  Nano is a fairly prominent example of Delegated Proof of Stake and follows a fundamentally very different model than Bitcoin with its UTXOs.

                                                                                                                  1. 3

                                                                                                                    No anger, just a bit of irony. :-)

                                                                                                                     By a working implementation of a software currency I mean not just code and a few beta testers, but a stable user base that uses the currency for real-world trades.

                                                                                                                    Actually, that’s probably the minimal definition of a “working implementation” of any currency, not just software ones.

                                                                                                                    I could get a little lengthy about vaporware, marketing, and scams if I have to explain why unused software is broken by definition.
                                                                                                                    I develop an OS myself that literally nobody uses, and I would never sell it as a working implementation of anything.

                                                                                                                    I will look into Nano and Delegated Proof of Stake (and I welcome any direct links to papers and code… really).

                                                                                                                    But frankly, the sarcasm is due to a little disgust I feel for proponents of PoW/blockchain cryptocurrencies (to date, the only real ones I know of that work, despite being broken as actual long-term currencies): I can understand non-programmers who sell what they buy from programmers, but any competent programmer should just say “guys, Bitcoin was an experiment, but it’s pretty evident that it has been turned into a big Ponzi scheme. Keep out of cryptocurrencies, or you are going to lose your real money for nothing.”

                                                                                                                    To me, programmers who don’t explain this are either incompetent enough to talk about something they do not understand, or are trying to profit from those other people by selling them their tokens (directly or indirectly).

                                                                                                                    This does not mean in any way that I don’t think a software currency can be built and work.

                                                                                                                    But as a hacker, my ethics prevent me from using people’s ignorance against them, as those who sell them “the blockchain revolution” do.

                                                                                                                2. 2

                                                                                                                  The problem is that in the blockchain space, hypotheticals are pretty much worthless.

                                                                                                                  Casper I do respect, they’re putting a lot of work in! But, as I note literally in this article, they’re discovering yet more problems all the time. (The latest: the security flaws.)

                                                                                                                  PoS has been implemented in a ton of tiny altcoins nobody much cares about. Ethereum is a great big coin with hundreds of millions of dollars swilling around in it - this is a different enough use case that I think it needs to be regarded as a completely different thing.

                                                                                                                  The Ethereum PoS FAQ is a string of things they’ve tried that haven’t quite been good enough for this huge use case. I’ll continue to say that I’ll call it definitely achievable when it’s definitely achieved.

                                                                                                          2. 4

                                                                                                            ASICBoost was fixed by SegWit. Bitcoin isn’t subject to ASICBoost anymore, but Bitcoin Cash is.

                                                                                                            1. 2

                                                                                                              Covert ASICBoost was fixed with SegWit; overt ASICBoost is still being used: