1.  

    Our team of five neural networks, OpenAI Five, has started to defeat amateur human teams at Dota 2.

    OpenAI Five plays 180 years’ worth of games against itself every day, learning via self-play. It trains using a scaled-up version of Proximal Policy Optimization running on 256 GPUs and 128,000 CPU cores.

    If that doesn’t ease your fears about an impending AI apocalypse, I don’t know what will.

    1.  

      That actually makes my AI fears worse. But that’s because they are not the exact stereotypical AI fears.

      What the article says is: if you can afford ten times more computing resources, you get better chances of achieving superhuman results than by using novel approaches. Train once, run cheaply. So, capital matters, labor qualification doesn’t: economies of scale with huge up-front costs and small recurring costs. That’s how you get badly broken monopoly markets, no? And of course then it breaks because someone spends on servers and not on human time to make sure nothing stupid happens.

      Yes, OpenAI on its own will probably try to release something already trained and with predictable failure risks; and this is likely to improve the situation overall — I like what they want to do and what they do, I am just afraid of what they find (and thanks to them for disclosing it).

    1. 1

      Sculpt is a general-purpose OS on top of the Genode microkernel.

      They seem to be porting some specific driver subsystems from Linux and BSD (with some driver isolation measures).

      1. 31

        at this point most browsers are OS’s that run (and build) on other OS’s:

        • language runtime - multiple checks
        • graphic subsystem - check
        • networking - check
        • interaction with peripherals (sound, location, etc) - check
        • permissions - for users, pages, sites, and more.

        And more importantly, is there any (important to the writers) advantage to them becoming smaller? Security maybe?

        1. 11

          Browsers rarely link out to the system. FF/Chromium have their own PNG decoders, JPEG decoders, AV codecs, memory allocators or allocation abstraction layers, etc. etc.

          It bothers me that everything is now shipping as an Electron app. Do we really need every single app to have the footprint of a modern browser? Can we at least limit them to the footprint of Firefox 2?

          1. 10

            but if you limit it to the footprint of firefox2 then computers might be fast enough. (a problem)

            1. 2

              New computers are no longer faster than old computers at the same cost, though – Moore’s law ended in 2005 and consumer stuff has caught up with the lag. So, the only speed-up from replacement is from clearing out bloat, not from actual hardware improvements in processing speed.

              (Maybe secondary storage speed will have a big bump, if you’re moving from hard disk to SSD, but that only happens once.)

              1. 3

                Moore’s law ended in 2005 and consumer stuff has caught up with the lag. So, the only speed-up from replacement is from clearing out bloat, not from actual hardware improvements in processing speed.

                Are you claiming there have been no speedups due to better pipelining, out-of-order/speculative execution, larger caches, multicore, hyperthreading, and ASIC acceleration of common primitives? And the benchmarks magazines post showing newer stuff outperforming older stuff were all fabricated? I’d find those claims unbelievable.

                Also, every newer system I’ve had since 2005 was faster. I recently had to use an older backup machine. Much slower. Finally, performance isn’t the only thing to consider: newer process nodes use less energy and give smaller chips.

                1. 2

                  I’m slightly overstating the claim. Performance increases have dropped from exponential to incremental: what once was a straightforward result of increased circuit density now comes from piecemeal optimization tricks that can only really be done once.

                  Once we’ve picked all the low-hanging fruit (simple optimization tricks with major & general impact) we’ll need to start seriously milking performance out of multicore and other features that actually require the involvement of application developers. (Multicore doesn’t affect performance at all for single-threaded applications or fully-synchronous applications that happen to have multiple threads – in other words, everything an unschooled developer is prepared to write, unless they happen to be mostly into unix shell scripting or something.)

                  Moore’s law isn’t all that matters, no. But, it matters a lot with regard to whether or not we can reasonably expect to defend practices like electron apps on the grounds that we can maintain current responsiveness while making everything take more cycles. The era where the same slow code can be guaranteed to run faster on next year’s machine without any effort on the part of developers is over.

                  As a specific example: I doubt that even in ten years, a low-end desktop PC will be able to run today’s version of slack with reasonable performance. There is no discernible difference in its performance between my two primary machines (both low-end desktop PCs, one from 2011 and one from 2017). There isn’t a perpetually rising tide that makes all code more performant anymore, and the kind of bookkeeping that most web apps spend their cycles in doesn’t have specialized hardware accelerators the way matrix arithmetic does.

                  1. 5

                    Performance increases have dropped from exponential to incremental: what once was a straightforward result of increased circuit density now comes from piecemeal optimization tricks that can only really be done once.

                    I agree with that totally.

                    “Multicore doesn’t affect performance at all for single-threaded applications”

                    Although largely true, people often forget a way multicore can boost single-threaded performance: simply letting the single-threaded app have more time on a CPU core since other stuff is running on another. Some OS’s, esp RTOS’s, let you control which cores apps run on specifically to utilize that. I’m not sure if desktop OS’s have good support for this right now, though. I haven’t tried it in a while.
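
                    (For what it’s worth, desktop Linux does expose this; a minimal Python sketch using the Linux-only stdlib call:)

                    ```python
                    import os

                    # Pin the calling process (pid 0 means "self") to core 2, leaving
                    # the remaining cores to everything else. Linux-only stdlib API.
                    os.sched_setaffinity(0, {2})
                    print(os.sched_getaffinity(0))  # -> {2}
                    ```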

                    “There isn’t a perpetually rising tide that makes all code more performant anymore, and the kind of bookkeeping that most web apps spend their cycles in doesn’t have specialized hardware accelerators the way matrix arithmetic does.”

                    Yeah, all the ideas I have for it are incremental. The best illustration of where the rest of the gains might come from is Cavium’s Octeon line. They have offloading engines for TCP/IP, compression, crypto, string ops, and so on. On the rendering side, Firefox is switching to GPU’s, which will take time to fully utilize. On the Javascript side, maybe JIT’s could have a small, dedicated core. So, there’s still room for speeding the Web up in hardware. Just not Moore’s law without developer effort like you were saying.

          2. 9

            Although you partly covered it, I’d say “execution of programs” is good wording for JavaScript since it matches browser and OS usage. There are definitely advantages to them being smaller. A guy I knew even deleted a bunch of code out of his OS and Firefox to achieve that on top of a tiny backup image. Dude had a WinXP system full of working apps that fit on one CD-R.

            As far as secure browsers go, I’d start with designs from high-assurance security, bringing in mainstream components carefully. Some are already doing that. An older one inspired Chrome’s architecture. I have a list in this comment. I’ll also note that there were few of these because high-assurance security defaulted to just putting a browser in a dedicated partition that isolated it from other apps on top of security-focused kernels. One browser per domain of trust. Also common were partitioned network stacks and filesystems that limited the effect one partition’s use of them had on others. QubesOS and GenodeOS are open-source software that support these, with QubesOS having great usability/polish and GenodeOS being architecturally closer to high-security designs.

            1. 6

              Are there simpler browsers optimised for displaying plain ol’ hyperlinked HTML documents that also support modern standards? I don’t really need 4 tiers of JIT and whatnot to make web apps go fast, since I don’t use them.

              1. 12

                I’ve always thought one could improve on a Dillo-like browser for that. I also thought compile-time programming might make various components in browsers optional, so you could actually tune it to the amount of code or attack surface you need. That would require lots of work for mainstream stuff, though. A project like Dillo might pull it off.

                1. 10
                  1. 3

                    Oh yeah, I have that on a Raspberry Pi running RISC OS. It’s quite nice! I didn’t realise it runs on so many other platforms. Unfortunately it just crashes on my main machine; I will investigate. Thanks for reminding me that it exists.

                    1. 2

                      Fascinating; how had I never heard of this before?

                      Or maybe I had and just assumed it was a variant of suckless surf? https://surf.suckless.org/

                      Looks promising. I wonder how it fares on keyboard control in particular.

                      1. 1

                        Aw hell; they don’t even have TLS set up correctly on https://netsurf-browser.org

                        Does not exactly inspire confidence. Plus there appears to be no keyboard shortcut for switching tabs?

                        Neat idea; hope they get it into a usable state in the future.

                      2. 1

                        AFAIK, it doesn’t support “modern” non-standards.

                        But it doesn’t support Javascript either, so it’s way more secure than mainstream ones.

                      3. 8

                        No. Modern web standards are too complicated to implement in a simple manner.

                        1. 3

                          Either KHTML or Links is what you’d like. KHTML would probably be the smallest browser you could find with a working, modern CSS, javascript and HTML5 engine. Links only does HTML <=4.0 (including everything implied by its <img> tag, but not CSS).

                          1. 2

                            I’m pretty sure KHTML was taken to a farm upstate years ago, and replaced with WebKit or Blink.

                            1. 6

                              It wasn’t “replaced”: Konqueror supports multiple backends, including WebKit, WebEngine (Chromium) and KHTML. KHTML still works relatively well for showing modern web pages according to HTML5 standards and fits OP’s description perfectly. Konqueror allows you to choose your browser engine per tab, and even switch on the fly, which I think is really nice, although it means keeping every engine you’re currently using loaded in memory.

                              I wouldn’t say development is still very active, but it’s still supported in the KDE frameworks, they still make sure that it builds at least, along with the occasional bug fix. Saying that it was replaced is an overstatement. Although most KDE distributions do ship other browsers by default, if any, and I’m pretty sure Falkon is set to become KDE’s browser these days, which is basically an interface for WebEngine.

                          2. 2

                            A growing part of my browsing is now text-mode browsing. Maybe you could treat full graphical browsing as an exception and go to the minimum footprint most of the time…

                        2. 4

                          And more importantly, is there any (important to the writers) advantage to them becoming smaller? Security maybe?

                          user choice. rampant complexity has restricted your options to 3 rendering engines, if you want to function in the modern world.

                          1. 3

                            When reimplementing malloc and testing it out on several applications, I found out that Firefox (at the time, I don’t know if this is still true) had its own internal malloc. It was allocating a big chunk of memory at startup and then managing it itself.

                            At the time I thought this was a crazy idea for a browser, but in fact it follows exactly the idea of your comment!
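
                            (To make that concrete, a toy Python sketch of the “one big chunk, managed internally” pattern, i.e. a bump/arena allocator. Firefox’s real allocator is far more sophisticated; this only illustrates the shape of the idea:)

                            ```python
                            class Arena:
                                """Toy bump allocator: one big up-front allocation, after which
                                'allocating' just advances an offset into the same buffer."""

                                def __init__(self, size: int):
                                    self.buf = bytearray(size)  # the single big startup chunk
                                    self.top = 0

                                def alloc(self, n: int, align: int = 8) -> memoryview:
                                    start = (self.top + align - 1) & ~(align - 1)  # round up
                                    if start + n > len(self.buf):
                                        raise MemoryError("arena exhausted")
                                    self.top = start + n
                                    return memoryview(self.buf)[start:start + n]

                            arena = Arena(1 << 20)    # reserve 1 MiB at startup
                            block = arena.alloc(256)  # later requests bypass the system allocator
                            ```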

                            1. 3

                              Firefox uses a fork of jemalloc by default.

                              1. 2

                                IIRC this was done somewhere between Firefox 3 and Firefox 4 and was a huge speed boost. I can’t find a source for that claim though.

                                Anyway, there are good reasons Firefox uses its own malloc.

                                Edit: apparently I’m bored and/or like archeology, so I traced back the introduction of jemalloc to this hg changeset. This changeset is present in the tree for Mozilla 1.9.1 but not Mozilla 1.8.0. That would seem to indicate that jemalloc landed in the 3.6 cycle, although I’m not totally sure because the changeset description indicates that the real history is in CVS.

                            2. 3

                              In my daily job, this week I’m working on patching a modern Javascript application to run on older browsers (IE10, IE9 and IE8+ GCF 12).

                              The hardest problems are due to the differing implementation details of the same-origin policy.
                              The funniest problem has been one of the frameworks in use, which used “native” as a variable name: when people speak about the good parts of Javascript, I know they don’t know what they are talking about.

                              BTW, if browser complexity addresses a real problem (instead of being a DARPA weapon to get control of foreign computers), that problem is the distribution of computation over long distances.

                              That problem was not addressed well enough by operating systems, despite some mild attempts, such as Microsoft’s CIFS.

                              This is partially a protocol issue, as NFS, SMB and 9P were all designed with local networks in mind.

                              However, IMHO browser OS’s are not the proper solution to the issue: they are designed for different goals, and they cannot abandon those goals without losing market share (unless they retain such share with weird marketing practices, as Microsoft did years ago with IE on Windows and as Google is currently doing with Chrome on Android).

                              We need better protocols and better distributed operating systems.

                              Unfortunately it’s not easy to create them.
                              (Disclaimer: browsers as OS platforms and Javascript’s ubiquity are among the strongest reasons that make me spend countless nights hacking on an OS)

                            1. 2

                                A large enough private segment is hard to protect completely, and leveraging more and more systems in the protected segment is a well-documented intrusion technique.

                                I think after it became undeniable that the NSA obtained data from Google by NSLs but also had some covert monitoring of the unencrypted inside-the-private-datacenter communication, it became more typical to encrypt everything. At some point things might get bad enough that people will encrypt even inside-the-box traffic…

                              A lot of flows are IO-limited anyway (or sometimes RAM-amount-limited, or external-network-limited), so encryption might not be as much of an extra expense as it would be if everything was CPU-limited.

                              It might be reasonable to add a small hidden segment for that one thing that you cannot afford to encrypt in-transit — if there is actually a problem — but encrypting everything is a good default.

                              1. 2

                                  This is one part awesome, one part disappointing. Yes, he installed it on his phone, but as far as I could tell, it also left his phone completely worthless.

                                1. 2

                                  I see no indication that the Android system has suffered in any way. It’s just for managing non-Android software on the phone (lazy people use a Debian chroot for that).

                                1. 9

                                    If you don’t mind the tinfoil, this could well be a shakedown test to see how Russia might deal with partitioning of the network in a time of relative peace, rather than being surprised by it at some other time.

                                  Then again, that’s the sort of idle speculation I’d give back in my HN days.

                                  1. 3

                                    Maybe not the intention, but I can’t imagine the data point would go unnoticed.

                                    1. 3

                                      According to the timeline, it seems to be related to Telegram.

                                      Here’s my tinfoil take :)

                                      Russia banned the Telegram app at the beginning of the month [1]. They basically blacklisted its domains.

                                      Telegram started to use Google App Engine as a domain front [2].

                                      I guess Russia is trying to prevent domain fronting for future ban cases. I guess it is easier for them to send a takedown notice to a Russian cloud provider than to an American one.

                                      [1]: https://www.nytimes.com/2018/04/13/world/europe/russia-telegram-encryption.html

                                      [2]: https://en.wikipedia.org/wiki/Domain_fronting

                                      1. 2

                                        Probably not the intention, because running the blocklist updates in that mode means that an external party can easily force a block of something critical inside Russia at a moment when neither the blocklist operators nor the ISPs have spare capacity to react sanely. People who are qualified to understand your point also know that Roskomnadzor is not qualified to prevent the risk I describe.

                                        But some note-taking about unexpected dependency chains will be done anyway.

                                        1. 1

                                          If you were to pile some more tinfoil on, what else might we expect to see from Russian authorities?

                                        1. 3

                                          A little under an hour ago OVH popped in to the VPS provider Slack server I’m on and said they were blocked. I haven’t seen any bounces or received any support requests for my network. Is anyone out there having connection trouble?

                                          1. 5

                                            I would imagine they’re going to block all the popular cloud services that no one in Russia uses for “legitimate” reasons, but which are quite popular outside of Russia. Russia has a pretty big hosting industry and a plethora of VPS providers (in fact, many virtualisation technologies (e.g., Virtuozzo/OpenVZ) and hosting tools (e.g., ISPmanager and the ISPBSD fork of FreeBSD) come out of Russia), so I’d wager that fewer home-run shops/startups are actually affected than most western folks realise.

                                            1. 2

                                              Well, those serving the external market are annoyed by the need to access their own deployments via tunnels, but this is a solvable problem so far. Those who deployed on Amazon but didn’t depend on the fancy stuff have probably redeployed to local providers at least as a backup for local connections. But quite a few are hit, and then there are branches of international companies…

                                              At some point, though, someone (apparently anyone can deploy a Telegram proxy) might remote-order VMs in some Russian datacenters, and deploy proxies that use some spoofing to hide which remote connections are relevant.

                                          1. 2

                                            To minimize noise, we no longer allow pull requests from contributors unaffiliated with the project or the changes proposed. Specifically, pull requests will be restricted if:

                                             • There’s no explanation of changes in the body of the commit, and
                                             • The author is not a bot account, and
                                             • The author is not the owner or a member of the owning organization, and
                                             • The author doesn’t have push access to the head and the source branches

                                            Doesn’t this directly conflict with the idea of “great first issue”? “This would be a great first issue…if you can convince me to give you push access to the repo before I know whether or not you are going to commit good code.”

                                             Or am I misunderstanding? I guess if the conditions are treated as one boolean AND, an explanation in the commit body alone is good enough?
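
                                             (Reading the quoted conditions as one boolean AND gives something like the sketch below; the field names are hypothetical, not GitHub’s actual API:)

                                             ```python
                                             def pr_is_restricted(pr) -> bool:
                                                 # All four conditions must hold at once; hypothetical field
                                                 # names modeled on the quoted policy.
                                                 return (not pr.has_explanation_in_body
                                                         and not pr.author_is_bot
                                                         and not pr.author_is_owner_or_org_member
                                                         and not pr.author_has_push_access)
                                             ```

                                             With that reading, an explanation in the commit body alone flips the first condition and the PR goes through, no push access needed.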

                                            1. 3

                                               What’s been disallowed is PRs from non-committers with no explanation. Having a commit comment would remove the restriction and allow the PR.

                                              At least that is my understanding.

                                              1. 1

                                                 But can’t bots, and drive-by PRs, then… make boilerplate commit bodies and leave boilerplate explanations? I get that it might make humans behave a bit better, but it likely leads to modified behavior that is just as bad: “Round of changes” with an explanation of “I made a round of changes.”

                                                1. 2

                                                   I can only speak to my experience working as a committer on Pony. We do have a bit of an issue with humans doing drive-by, no-comment commits. We have no issue with bots doing it. I hear, from talking to other maintainers I know, that the drive-by, no-comment commits from humans get worse as your project grows larger. I assume the change was made to address this concern/problem that I know some open source maintainers raised with GitHub.

                                                  1. 2

                                                     I get that, and I have also seen it happen. But it’s still easily thwarted with crap messages instead of no messages, which means the problem isn’t solved.

                                                     To be clear, I don’t think it’s actually solvable if your goal is allowing everyone to contribute, unless you change the social behavior. A better tool, in my mind, would be a button that closes the PR with an explanation of the social rules of contribution and an invitation to resubmit after reviewing them and making changes.

                                              2. 2

                                                Given the motivation, I hope that it is enough to have push access to the source branch of the PR — i.e. you can PR your changes, but you need some conditions to PR changes by someone else.

                                              1. 2

                                                 I think the claim that escaping is unlikely to get done correctly is too strong — getting shell escaping right once is not harder than getting HTML generation with escaped user content done right, and single-quoting requires escaping only the single quote itself (although in an annoying way).

                                                And many useful system APIs have process granularity.
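
                                                 (The whole single-quote rule fits in a few lines. A Python sketch; the stdlib’s shlex.quote implements the same idea:)

                                                 ```python
                                                 def sh_single_quote(s: str) -> str:
                                                     # Inside single quotes the shell interprets nothing, so the only
                                                     # character needing care is the quote itself, spelled '\''
                                                     return "'" + s.replace("'", "'\\''") + "'"

                                                 print(sh_single_quote("it's; rm -rf /"))  # 'it'\''s; rm -rf /'
                                                 ```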

                                                1. 3

                                                  Just use the correct API and you do not need to escape.
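
                                                  (In Python terms, this is the difference between handing the shell a command string and handing exec an argument vector; user_pattern is a stand-in for hostile input:)

                                                  ```python
                                                  import subprocess

                                                  user_pattern = "foo; rm -rf /"  # hypothetical hostile input

                                                  # The argv list goes straight to exec*(); no shell ever parses it,
                                                  # so nothing needs quoting or escaping.
                                                  subprocess.run(["grep", "-r", "--", user_pattern, "."], check=False)
                                                  ```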

                                                1. 1

                                                  I think the article does ask a good question — how to uncouple identity from both moderation policy and server reliability at once.

                                                  Maybe something based on SRV records could help? The user can set up just a few DNS records to establish the long-term account name as a synonym for an actual account on an actual server.
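
                                                  (Something like this hypothetical zone snippet, with a made-up service label; the stable name just delegates to wherever the account currently lives:)

                                                  ```
                                                  ; hypothetical records: the long-term identity alice.example.com
                                                  ; points at the server that currently hosts the account
                                                  _fedid._tcp.alice.example.com. 3600 IN SRV 10 0 443 current-host.example.net.
                                                  ```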

                                                  I guess Patreon supporters would get the feature of having this account’s messages be stored/shown/forwarded as belonging to the long-term name, not the local account. Message archive import could also be possible, but that would be a more long-term goal.

                                                  1. 5

                                                    Re: severity of the trigger: the problem with the firmware repository is zero-padding of the mode (I really hope this doesn’t give you enough entropy for creating SHA1 collisions), and there are tools that do that by mistake when generating git repositories without using git itself.

                                                    So, this is a good reminder that git sometimes defaults to speed over safety, but not a definite sign of an ongoing compromise of the linux-firmware repository. (Of course, there might be an ongoing compromise that is not visible to this specific check — I am just saying the reported issue is not strong evidence in favour of it.)

                                                    1. 1

                                                      (Of course, there might be an ongoing compromise that is not visible to this specific check — I am just saying the reported issue is not strong evidence in favour of it.)

                                                      This is a good point, and I wonder, is git still fundamentally vulnerable to sha1 hash collisions? If so, there’s little defense against that and this won’t catch it.

                                                      1. 2

                                                        I think git verifies whether an object is vulnerable to a collision similar in structure to the published one, and there is definitely work being done to migrate to another hash function, but I think it is still not complete.

                                                        Note that the best undetectable attack would be taking over the repository to replace a preexisting (non-attacker-controlled) object with a different one with the same length and hash. This is harder than the published attack, but certainly possible combinatorially.
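
                                                        (The length requirement comes from the object format itself; a quick Python illustration of how git derives a blob ID:)

                                                        ```python
                                                        import hashlib

                                                        def git_blob_id(data: bytes) -> str:
                                                            # git hashes "blob <size>\0" + contents, so a replacement object
                                                            # can only collide if it also has exactly the same length
                                                            header = b"blob %d\x00" % len(data)
                                                            return hashlib.sha1(header + data).hexdigest()

                                                        print(git_blob_id(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a
                                                        ```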

                                                        You could also just push an extra malicious commit with a false author identity and no signature, and hope some maintainer would mistakenly pull it and commit on top of it. Much simpler, and for blobs it can be quite effective.

                                                    1. 4

                                                      It’s always nice to see such bright optimism, both in using a new generation of tools as a go-to learning tool, and in estimating the current situation.

                                                      «This is table stakes for a package manager» — if only it were true for general-purpose (not single-ecosystem) package managers, Nix would probably be trying to solve very different problems… You are right that it is nice to have, but it is not always there when you need it.

                                                      And an interesting property of verification is that it forces you to pay attention to details. So it doesn’t even matter whether your model describes things correctly; you are still forced to think about the small details. If you ever look at the real thing, you have a prepared state of mind to consider it.

                                                      1. 4

                                                        «This is table stakes for a package manager» — if only it were true for general-purpose (not single-ecosystem) package managers, Nix would probably be trying to solve very different problems…

                                                        I’m just gonna call this a point in favor of formal methods, since it made the upgrade bug so incredibly obvious that I assumed nobody would fall for it!

                                                        1. 2

                                                          Nobody has fallen for the bug you have correctly described.

                                                          Instead, a lot of people use package managers that specify compatibility ranges for packages, and that can also upgrade a lot of packages by the logic «new GCC needs a new glibc, which forces an upgrade of a lot of things».

                                                          Then they use SAT solvers to find out if the requested changes to the current system are possible to implement amid all the version mess. Then it turns out that a typical installation is too large for precise SAT solving, and they use heuristics to limit the scope and sometimes miss obvious solutions (as in: «you complain that you cannot find a solution where an unrelated package X is still installed; let me add it to the explicit part of my request, much better now»). Then a user wants two versions of a program that are not supported to be installed at once, and finds out it is impossible.
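
                                                          (A toy illustration of why this gets hairy, with hypothetical packages and a brute-force stand-in for the SAT solver:)

                                                          ```python
                                                          from itertools import product

                                                          # Hypothetical packages and versions, purely for illustration.
                                                          versions = {"gcc": [7, 8], "glibc": [226, 227], "app": [1]}
                                                          # (package, version) -> list of (package, version) requirements
                                                          requires = {
                                                              ("gcc", 8): [("glibc", 227)],  # "new GCC needs new glibc"
                                                              ("app", 1): [("glibc", 226)],  # app only works with old glibc
                                                          }

                                                          def solutions(goal):
                                                              """Brute-force stand-in for the SAT solver: try every assignment
                                                              of one version per installed package, keep the consistent ones."""
                                                              names = list(versions)
                                                              for combo in product(*(versions[n] for n in names)):
                                                                  picked = dict(zip(names, combo))
                                                                  if picked[goal[0]] != goal[1]:
                                                                      continue
                                                                  ok = all(picked[p] == v
                                                                           for pv, reqs in requires.items() if picked[pv[0]] == pv[1]
                                                                           for p, v in reqs)
                                                                  if ok:
                                                                      yield picked

                                                          print(list(solutions(("gcc", 7))))  # one consistent system
                                                          print(list(solutions(("gcc", 8))))  # [] -- the upgrade is unsatisfiable
                                                          ```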

                                                          And along the way the package manager developers have always used whatever tools for this task they could take from automated formal methods (which was «none» at the beginning of the journey). Formal methods cannot save you from inherited axioms about tradeoffs in resources (these axioms were obviously true in the 1980s, but nowadays the situation is more complicated).

                                                          1. 3

                                                            Since you’re bringing logic into this, you might find this paper interesting: it implemented make in Prolog. I found it when I was looking into the concept of cheating around verified implementations of some software by using first-order logic for specification with a verified Prolog instead. Mostly knocks out the coding step. Keeps the specs and code closer together, if nothing else.

                                                          2. 2

                                                            Separately, re: the value of formal methods. Your article actually shows their value more as a mental discipline, a method to force attention to details — after all, you explicitly acknowledge you didn’t check whether your model matches what you verify. I would say that implementing your final approach would result in something closer to Flatpak or Docker than to Nix.

                                                            An example: your naive model with explicit and requires has a problem roughly symmetrical to the one you found, but less interesting. It requires a formally longer demonstration, so you were still pushed to look for a workaround for a more realistic problem. This actually happens a lot — after all, a lot of proofs in inconsistent naive set theory are directly applicable in many other set theories, hopefully including consistent ones! You don’t know what your solution corresponds to in technical terms, but if you ever need to learn the model of a similar package manager, you are already mentally prepared.

                                                            I think what matters a lot is that you think about the details, but are prevented from coupling that thinking with code. At the very least, a formal specification written by a person prohibited from coding helps understanding even when it is impossible to formally verify. I would expect that going in the polar opposite direction from formal methods — something like hiring people who can translate between many languages, including ones they don’t actually speak, but who cannot program because programming languages are too rigid and unforgiving to translate — might also get you code documentation with attention to details but separate from the code, and reading that translation would also be quite enlightening. I guess the intersection of companies that want to experiment in crazy directions and companies that want to spend money on code comprehension and correctness is too small, so nobody has tried.

                                                        1. 3

                                                          This article attacks the validity of the study cited in the Mozilla blog post as justification that anti-female bias in code reviews for open source projects exists in the first place.

                                                          That said, I do think that concealing your demographic details from open-source projects is a good idea for a number of reasons, not the least of which is that it’s none of the project’s business. I don’t have a problem with maintainers using software features like this to conceal contributors’ identities, or with contributors creating and making use of false identities when contributing code.

                                                          1. 4

                                                            It can even be beneficial to read a proposed patch without knowing if the submitter is the same person as the submitter of an unrelated recent patch, because negotiations around genuine tradeoffs can easily get emotional, and leaking emotions between two unrelated heated conversations doesn’t help.

                                                          1. 1

                                                            This seems really cool. I’d love to have email more under my own control. I also need 100% uptime for email though, so it’s hard to contemplate moving from some large hosted service like Gmail.

                                                            1. 4

                                                              If email is that important to you (100% uptime requirement), then what’s your backup plan for a situation where Google locks your account for whatever reason?

                                                              1. 1

                                                                Yeah, that’s true. I mean I do have copies of all my email locally, so at least I wouldn’t lose access to old email, but it doesn’t help for new email in that eventuality.

                                                              2. 3

                                                                Email does have the nifty feature that (legit) mail servers will keep retrying SMTP connections to you if you’re down for a bit, so you don’t really need 100% uptime.

                                                                Source: ran a mail server for my business for years on a single EC2 instance; sometimes it went down, but it was never a real problem.

                                                                1. 1

                                                                  True. I rely on email enough that I’m wary of changing a (more or less) working system. But I could always transition piece by piece.

                                                                2. 3

                                                                   If you need 100% delivery, then you can just list multiple MX records. If your primary MX goes down (ISP outage, whatever), then your mail will just get delivered to the backup. My DNS registrar / provider offers a backup MX service, and I have them configured to just forward everything to Gmail. So when my self-hosted email is unavailable, email starts showing up via Gmail until the primary MX is back online. Provides peace of mind when the power goes out, or my ISP has outages, or we’re moving house and everything is torn apart.
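
                                                                   (The setup is just two records; hypothetical names, and the lower preference value is tried first:)

                                                                   ```
                                                                   example.com.  3600  IN  MX  10 mail.example.com.      ; primary, self-hosted
                                                                   example.com.  3600  IN  MX  20 backupmx.example.net.  ; provider's backup MX
                                                                   ```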

                                                                  1. 1

                                                                    That’s a good system that seems worth looking into.

                                                                  2. 2

                                                                     Note that email resending works. If your server is unreachable, the sending mail server will actually try the secondary MX server, and if both are down, it will retry half an hour later, then a few more times up to 24 hours later, 48 hours if you are lucky. The sender will usually receive a notification if the initial attempts fail (and a second one when the sending server gives up).

                                                                    On the other hand, if your GMail spam filter randomly decides without a good reason that a reply to your email is too dangerous even to put into the spam folder, neither you nor the sender will be notified.

                                                                    1. 1

                                                                      And I have had that issue with GMail, both as a sender and a receiver, of mail inexplicably going missing. Not frequently, but it occurs.

                                                                  1. 2

                                                                    … What? SAT is already NP-complete. This paper talks about constructing a problem related to SAT and proving that that problem is also NP-hard. As far as I was able to digest from my quick read, it does nothing to establish the relationship of the new problem to SAT in terms of complexity, and it’s hard to see how that approach could prove anything about P even if it did.

                                                                    The writing is also kind of unclear, but I’m hesitant to ding a preprint just for that because it’s possible that English is not the first language of its author. But of course for a paper on a topic this important to be taken seriously, it would almost have to have perfect writing and come from a well-known name in the field.

                                                                    1. 5

                                                                      The paper is better than that, but there is still probably a mistake. I mean, a serious one, because I have already found a fixable one.

                                                                      1. The author defines yet another NP-Complete function. I think it is even claimed to be previously known.

                                                                      2. The author says that the new literal-compatibility-satisfiability is «isotonic» — nondecreasing in every variable.

                                                                      3. The author remarks that polynomial-time algorithm implies a polynomial-size boolean circuit.

                                                                      The author claims that a boolean circuit for a nondecreasing function can be built out of AND and OR with a constant overhead compared to the optimal circuit. The proof ends up listing some NOT operations in the final count… I will not be surprised if this is fixable, but it is a warning sign that not all the details are fully polished yet.

                                                                      The author presents a way to assign weights to all the elements of an AND-OR circuit for the NP-Complete function in question; the claim is that every element has a polynomial weight but the total weight is exponential. There is nothing a priori wrong with such an approach.

                                                                      (4) has a typo-level mistake and (5) could easily hide a few mistakes. I am not currently in the mood to check, and statistically it is likely that there are mistakes and someone will post a detailed explanation next week. Or maybe (4) is actually substantially wrong.
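
                                                                      (To make the «isotonic» condition in (2) concrete, a brute-force Python check; majority is the classic function computable by an AND/OR-only circuit:)

                                                                      ```python
                                                                      from itertools import product

                                                                      def is_monotone(f, n: int) -> bool:
                                                                          """Nondecreasing in every variable: flipping any input 0 -> 1
                                                                          must never flip the output 1 -> 0."""
                                                                          for bits in product((0, 1), repeat=n):
                                                                              for i in range(n):
                                                                                  if bits[i] == 0:
                                                                                      flipped = bits[:i] + (1,) + bits[i + 1:]
                                                                                      if f(*bits) > f(*flipped):
                                                                                          return False
                                                                          return True

                                                                      # Monotone functions are exactly those computable by circuits of
                                                                      # AND and OR gates (plus constants), with no NOT gates.
                                                                      maj3 = lambda x, y, z: (x & y) | (y & z) | (x & z)
                                                                      print(is_monotone(maj3, 3))             # True
                                                                      print(is_monotone(lambda x: 1 - x, 1))  # False: NOT is not monotone
                                                                      ```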

                                                                      1. 1

                                                                        Interesting. I appreciate your explanation, I wasn’t able to focus on it enough to understand that.

                                                                    1. 4

                                                                      As usual, David apparently fails or refuses to understand how and why PoW is useful and must attack it at every opportunity (using his favorite rhetorical technique of linking negatively connoted phrases to vaguely relevant websites).

                                                                      That said, the article reminds me of a fun story - I went to a talk from a blockchain lead at <big bank> a while back and she related that a primary component of her job was assuring executives that, in fact, they did not need a blockchain for <random task>. This had become such a regular occurrence that she had attached this image to her desk.

                                                                      1. 10

                                                                        What would you consider a useful situation for PoW? In the sense that no other alternative could make up for the advantages in some real life use-case?

                                                                         But otherwise (and maybe it’s just me, since I agree with his premise) I see @David_Gerard as taking the opposite role to popular blockchain (over-)advocates, who claim that the technology is the holy grail for far too many problems. Even if one doesn’t agree with his conclusions, I enjoy reading his articles and find them very informative, since he doesn’t just oppose blockchains from an opinion-based position; he also seems to have the credentials to do so.

                                                                        1. 1

                                                                           Replying to @gerikson as well. I personally believe that decentralization and cryptographically anchored trust are extremely important (what David dismissively refers to as “conspiracy theory economics”). We know of two ways to achieve this: proof of work, and proof of stake. Proof of stake is interesting but has some issues and trade-offs. If you don’t believe that PoW mining is some sort of anti-environmental evil (I don’t), it seems to generally offer better properties than PoS (like superior surprise-fork resistance).
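
                                                                           (For anyone who wants the mechanism rather than the politics: proof of work in miniature, as a toy Python sketch with a vastly simplified difficulty rule compared to Bitcoin’s real target arithmetic:)

                                                                           ```python
                                                                           import hashlib

                                                                           def mine(header: bytes, difficulty: int) -> int:
                                                                               """Toy proof of work: find a nonce whose hash starts with
                                                                               `difficulty` zero hex digits. Verifying costs one hash;
                                                                               finding the nonce costs ~16**difficulty hashes on average."""
                                                                               nonce = 0
                                                                               while True:
                                                                                   h = hashlib.sha256(header + str(nonce).encode()).hexdigest()
                                                                                   if h.startswith("0" * difficulty):
                                                                                       return nonce
                                                                                   nonce += 1

                                                                           print(mine(b"block header bytes", 4))  # cheap to check, costly to produce
                                                                           ```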

                                                                          1. 13

                                                                            I personally believe that decentralization and cryptographically anchored trust are extremely important

                                                                             I personally also prefer decentralised or federalised systems, when they have a practical advantage over a centralized alternative. But I don’t see this being the case with most applications of the blockchain. Bitcoin, as a prime example, is to my knowledge too slow, too inconvenient, too unstable and too resource-hungry to have a practical application as a real substitute for money, either digital or virtual. One doesn’t have the time to wait 20m or more whenever one pays for lunch or buys some chewing gum at a corner shop, just because some other transactions got picked up first by a miner. It’s obviously different when you want to do something like micro-donations or buying illegal stuff, but I just claim that this isn’t the basis of a modern economy.

                                                                             Cryptography is a substitute for authority, that is true, but I don’t believe that this is always wanted. Payments can’t be easily reversed, addresses mean nothing, clients might lose support because the core developers arbitrarily change stuff. (I for example am stuck with 0.49mBTC from an old Electrum client, and I can’t do anything with it, since the whole system is a mess, but that’s rather unrelated.) This isn’t really the dynamic basis on which capitalism has managed to survive for this long. But even disregarding all of this, it simply is true that Bitcoin isn’t a proper decentralized network like BitTorrent. Since the role of the wallet and the miner is (understandably) split, these two parts of the network don’t scale equally. In China gigantic mining farms are set up using specialized hardware to mine, mine, mine. I remember reading that there was one farm that predominated over at least 10% of the total mining power. All of this seems to run contrary to the proclaimed ideals. Proof of Work, well, “works” in the most abstract sense that it produces the intended results on one side, at the cost of disregarding everything that can be disregarded, irrespective of whether it should be or not. And ultimately I prioritise other things over an anti-authority fetish, as do most people — which reminds us that even if everything I said were false, Bitcoin just doesn’t have the adoption to be significant enough to anyone but Crypto-Hobbyists, Looney Libertarians and some soon-to-fail startups in Silicon Valley.

                                                                            1. 5

                                                                              there was one farm that predominated over at least 10% of the total mining power

                                                                              There was one pool that was at 42% of the total mining power! such decentralization very security

                                                                                1. 5

                                                                                  To be fair, that was one pool consisting of multiple miners. What I was talking about was a single miner controlling 10% of the total hashing power.

                                                                                  1. 7

                                                                                    That’s definitely true.

                                                                                    On the other hand, if you look at incident reports like https://github.com/bitcoin/bips/blob/master/bip-0050.mediawiki — the pool policies set by the operators (often a single person has this power for a given pool) directly and significantly affect the consensus.

                                                                                    Ghash.io itself did have incentives to avoid giving reasons for accusations that would tank Bitcoin, but being close to 50% makes a pool a very attractive attack target: take over their transaction and parent-block choice, and you take over the entire network.

                                                                                2. 0

                                                                                  But I don’t see this being the case with most applications of the blockchain.

                                                                                  Then I would advise researching it.

                                                                                  One doesn’t have the time to wait 20m or more whenever one pays for lunch or buys some chewing gum at a corner shop

                                                                                  Not trying to be rude, but it’s clear whenever anyone makes this argument that they don’t know at all how our existing financial infrastructure works. In fact, it takes months for a credit card transaction to clear to anything resembling the permanence of a mined bitcoin transaction. Same story with checks.

                                                                                  Low-risk merchants (digital goods, face-to-face sales, etc.) rarely require the average 10 minute (not sure where you got 20 from) wait for a confirmation.

                                                                                  If you do want permanence, Bitcoin is infinitely superior to any popular payment mechanism. Look into the payment limits set by high-value fungible goods dealers (like gold warehouses) for bitcoin vs. credit card or check.

                                                                                  Bitcoin just doesn’t have the adoption to be significant enough to anyone but Crypto-Hobbyists, Looney Libertarians and some soon-to-fail startups in Silicon Valley.

                                                                                  Very interesting theory - do you think these strawmen you’ve put up collectively have hundreds of billions of dollars? As an effort barometer, are you familiar with the CBOE?

                                                                                  1. 10

                                                                                    Please try to keep a civil tone here.

                                                                                    Also, it’s hard to buy a cup of coffee or a Steam game or a pizza with bitcoin. Ditto stocks.

                                                                                    1. -4

                                                                                      It’s hard to be nice when the quality of discourse on this topic is, for some reason, abysmally low compared to most technical topics on this site. It feels like people aren’t putting in any effort at all.

                                                                                      For example, why did you respond with this list of complete non-sequiturs? It has nothing to do with what we’ve been discussing in this thread except insofar as it involves bitcoin. I feel like your comments are normally high-effort, so what’s going on? Does this topic sap people’s will to think carefully?

                                                                                      (Civility is also reciprocal, and I’ve seen a lot of childish name-calling from the people I’m arguing with in this thread, including the linked article and the GP.)

                                                                                      Beyond the fact that this list is not really relevant, it’s also not true; you could have just searched “bitcoin <any of those things>” and seen that you can buy any of those things pretty easily, perhaps with a layer of indirection (just as you need a layer of indirection to buy things in the US if you already have EUR). In that list you gave, perhaps the most interesting example in bitcoin’s disfavor is Steam; Steam stopped accepting bitcoin directly recently, presumably due to low interest. However, it’s still easy to buy games from other sources (like Humble) with BTC.

                                                                                      1. 6

                                                                                        IMO, your comments are not very inspiring for quality. As someone who does not follow Bitcoin or the Blockchain all that much, I have not felt like any of your comments addressed anyone else’s comments. Instead, I have perceived you as coming off as defensive and with the attitude of “if you don’t get it you haven’t done enough research because I’m right” rather than trying to extol the virtues of the blockchain. Maybe you aren’t interested in correcting any of what you perceive as misinformation on here, and if so that’s even worse.

                                                                                        For example, I do not know of any place I can buy pizza with bitcoin. But you say it is possible, perhaps with a layer of indirection. I have no idea what this layer of indirection is and you have left it vague, which does not incline me to trust your response.

                                                                                        In one comment you are very dismissive of people’s Bitcoins getting hacked, but as a lay person, I see news stories on this all the time with substantial losses and no FDIC, so someone like me considers this a major issue but you gloss over it.

                                                                                        Many of the comments I’ve read by you on this thread are a similar level of unhelpful, all the while claiming the person you’re responding to is some combination of lazy or acting dumb. Maybe that is the truth but, again, as an outsider, all I see is the person defending the idea coming off as kind of a jerk. Maybe for someone more educated on the matter you are spot on.

                                                                                        1. 5

                                                                                          There is a religious quality to belief in the blockchain, particularly Bitcoin. It needs to be perfect in order to meet expectations for it: it can’t be “just” a distributed database, it has to be better than that. Bitcoin can’t be “just” a payment system, it has to be “the future of currency.” Check out David’s book if you’re interested in more detail.

                                                                                    2. 8

                                                                                      In fact, it takes months for a credit card transaction to clear to anything resembling the permanence of a mined bitcoin transaction. Same story with checks.

                                                                                      But I don’t have to wait months for both parties to be content that the transaction is successful, only seconds, so this is really irrelevant to the point you are responding to, which is that if a Bitcoin transaction takes 10m to process then I have to wait 10m for my transaction to be done, which people might not want to do.

                                                                                      1. -1

                                                                                        Again, as I said directly below the text you quoted, most merchants don’t require you to wait 10 minutes - only seconds.

                                                                                      2. 5

                                                                                        Then I would advise researching it.

                                                                                        It is exactly because I looked into the inner workings of Bitcoin and the Blockchain - as a proponent I have to mention - that I became more and more skeptical about it. And I still do support various decentralized and federated systems: BitTorrent, IPFS, (proper) HTTP, Email, … but just because the structure offers the possibility for a decentralized network, doesn’t have to mean that this potential is realized or that it is necessarily superior.

                                                                                        Not trying to be rude, but it’s clear whenever anyone makes this argument that they don’t know at all how our existing financial infrastructure works. In fact, it takes months for a credit card transaction to clear to anything resembling the permanence of a mined bitcoin transaction. Same story with checks.

                                                                                        The crucial difference being that, let’s say, the cashier nearly instantaneously hears some beep and knows that it isn’t his responsibility, nor that of the shop, to make sure that the money is transferred. The bank, the credit card company or whoever has signed a binding contract that makes this technical part of the process their problem, and if they don’t handle it, they can be sued, since there is an absolute regulatory instance - the state - in the background. This mutual delegation of trust gives everyone a sense of security (regardless of how true or false it is) that makes people spend money instead of hoarding it, investing in projects instead of trading it for more secure assets. Add Bitcoin’s aforementioned volatility, and no reasonable person would want to use it as their primary financial medium.

                                                                                        If you do want permanence, Bitcoin is infinitely superior to any popular payment mechanism.

                                                                                        I wouldn’t consider 3.3 to 7 transactions per second infinitely superior to, for example, Visa with an average of 1,700 t/s. If you think about it, there are far more than just 7 purchases being made each second around the whole world, so this isn’t realistically feasible. But on the other side, as @friendlysock noted, Bitcoin makes up for it by not having too many things you can actually buy with it: the region I live in has approximately a million inhabitants, but according to CoinMap, even by the most generous measures, only 5 shops (within a 30 km radius) accept it as a payment method. And most of those just offer it to promote themselves anyway.
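
                                                                                        That single-digit ceiling falls straight out of the protocol’s parameters. A back-of-envelope sketch, using commonly cited ballpark figures (≈1 MB blocks, ≈250-byte transactions, a 10-minute block interval) rather than exact consensus constants:

                                                                                        ```python
                                                                                        # Rough Bitcoin throughput estimate from ballpark parameters.
                                                                                        BLOCK_SIZE_BYTES = 1_000_000   # ~1 MB block size limit
                                                                                        AVG_TX_BYTES = 250             # commonly cited average transaction size
                                                                                        BLOCK_INTERVAL_S = 600         # one block every ~10 minutes

                                                                                        txs_per_block = BLOCK_SIZE_BYTES / AVG_TX_BYTES   # ~4000 transactions
                                                                                        tps = txs_per_block / BLOCK_INTERVAL_S            # ~6.7 tx/s
                                                                                        print(f"~{tps:.1f} tx/s, vs ~1700 tx/s average for Visa ({1700 / tps:.0f}x)")
                                                                                        ```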

                                                                                        Very interesting theory - do you think these strawmen you’ve put up collectively have hundreds of billions of dollars? As an effort barometer, are you familiar with the CBOE?

                                                                                        (I prefer to think of my phrasing as an exaggeration and a handful of other literary devices, rather than a fallacy, but never mind that.) I’ll give you this: it has been a while since I’ve properly engaged with Bitcoin, and I was always more interested in the technological than the economic side, since I have a bit of an aversion to libertarian politics. And it might be true that money is invested, but that still doesn’t change anything about all the other issues. It remains a bubble, a volatile, unstable, unpredictable bubble, and as is typical for bubbles, people invest disproportionate sums into it - which in the end is what makes it a bubble.

                                                                                        1. 0

                                                                                          The crucial difference being that, let’s say, the cashier nearly instantaneously hears a beep and knows that it isn’t his responsibility, nor that of the shop, to make sure that the money is transferred.

                                                                                          Not quite. The shop doesn’t actually have the money. The customer can revoke that payment at any time in the next 90 or 180 days, depending. Credit card fraud (including fraudulent chargebacks) is a huge problem for businesses, especially online businesses. There are lots of good technical articles online about combating this with machine learning, which should give you an idea of the scope of the problem.

                                                                                          makes people spend money instead of hoarding it,

                                                                                          Basically any argument of this form (including arguments for inflation) doesn’t really make sense given the existence of arbitrage.

                                                                                          Add Bitcoin’s aforementioned volatility, and no reasonable person would want to use it as their primary financial medium.

                                                                                          So it sounds like it would make people… spend money instead of hoarding it, which you were just arguing for?

                                                                                          I wouldn’t consider 3.3 to 7 transactions per second infinitely superior to, for example, Visa with an average of 1,700 t/s.

                                                                                          https://lightning.network
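
                                                                                          The idea, caricatured: two parties lock funds in a channel with one on-chain transaction, exchange any number of off-chain balance updates, and settle with one more on-chain transaction. A grossly simplified sketch (it ignores HTLCs, signatures, penalty transactions, and routing; the names are illustrative, not the protocol’s):

                                                                                          ```python
                                                                                          # Toy payment channel: only open and close ever touch the blockchain.
                                                                                          class Channel:
                                                                                              def __init__(self, alice_btc: float, bob_btc: float):
                                                                                                  self.balances = {"alice": alice_btc, "bob": bob_btc}  # 1 on-chain tx to open

                                                                                              def pay(self, frm: str, to: str, amount: float) -> None:
                                                                                                  # Each payment is just a mutually agreed balance update, never mined.
                                                                                                  assert self.balances[frm] >= amount, "insufficient channel balance"
                                                                                                  self.balances[frm] -= amount
                                                                                                  self.balances[to] += amount

                                                                                              def close(self) -> dict:
                                                                                                  return dict(self.balances)  # 1 on-chain tx to settle

                                                                                          ch = Channel(alice_btc=0.5, bob_btc=0.5)
                                                                                          for _ in range(10_000):             # 10,000 payments, zero blocks consumed
                                                                                              ch.pay("alice", "bob", 0.00001)
                                                                                          print(ch.close())                   # ≈ {'alice': 0.4, 'bob': 0.6}
                                                                                          ```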

                                                                                          as @friendlysock Bitcoin makes up for it by not having too many things you can actually buy with it

                                                                                          This is just patently wrong. Web stores that take Bitcoin directly are substantial in both number and traffic volume, and even the number of physical stores (at least in the US) is impressive given that it’s going up against a national currency. How many stores in the US take even EUR directly?

                                                                                          Anything you can’t buy directly you can buy with some small indirection, like a BTC-USD forex card.

                                                                                          It remains a bubble, a volatile, unstable, unpredictable bubble

                                                                                          It’s certainly volatile, and it’s certainly unstable, but it may or may not be a bubble depending on your model for what Bitcoin’s role in global finance is going to become.

                                                                                          1. 5

                                                                                            Not quite. The shop doesn’t actually have the money. The customer can revoke that payment at any time in the next 90 or 180 days, depending

                                                                                            You’ve still missed my point - it isn’t important whether the money has actually been transferred, but that there is trust that a framework behind all of this will guarantee that the money will be there when it has to be, as well as a protocol specifying what has to be done if the payment is revoked, if a purchase is to be undone, etc.

                                                                                            Credit card fraud (including fraudulent chargebacks) is a huge problem for businesses, especially online businesses.

                                                                                            Part of the reason, I suspect, is that the Internet was never made to be a platform for online business - but I’m not going to deny the problem. I’m certainly not a defender of banks and credit card companies - just an opponent of Bitcoin.

                                                                                            Basically any argument of this form (including arguments for inflation) doesn’t really make sense given the existence of arbitrage.

                                                                                            Could you elaborate? You have missed my point a few times already, so I’d rather we understand each other instead of having two monologues.

                                                                                            So it sounds like it would make people… spend money instead of hoarding it, which you were just arguing for?

                                                                                            No: if it’s volatile, people won’t buy into it in the first place. And if a currency is unstable, like Bitcoin inflating and deflating all the time, people don’t even know what to do with it, if it were their main asset (which, as I understand it, is what you are promoting, but nobody does this). I doubt it will ever happen, since if prices were that insecure, the whole economy would suffer, because all the “usual” incentives would be distorted.

                                                                                            https://lightning.network

                                                                                            I hadn’t heard of this until you mentioned it, but it seems quite new, so time has yet to test this yet-another-Bitcoin-related project. Even disregarding that it will again need first to make a name for itself, then be accepted, then adopted, etc., from what I gather it’s not the ultimate solution (though I might be wrong), especially since it seems to encourage centralization, which I believe is what you are so afraid of.

                                                                                            This is just patently wrong. Web stores that take Bitcoin directly are substantial in both number and traffic volume,

                                                                                            Sure, there might be a great quantity of shops (which, as I mentioned, use Bitcoin as a medium to promote themselves), but I - and, from what I know, most people - don’t really care about these small, frankly often dodgy online shops. Can I use it to pay directly on Amazon? Ebay? Sure, you can convert it back and forth, but all that means is that Bitcoin and other cryptocurrencies are just an extra step for lifestylists and hipsters, with no added benefit. And these shops don’t even accept Bitcoin directly; to my knowledge they always just convert it into their national currency - i.e. the one they actually use, and the one Bitcoin’s value is always compared to. What even is Bitcoin without the USD, the currency it hates but can’t stop comparing itself to?

                                                                                            and even the number of physical stores (at least in the US) is impressive given that it’s going up against a national currency.

                                                                                            The same problems apply as I’ve already mentioned, but I wonder: have you actually ever used Bitcoin to pay in a shop? I’ve done it once and it was a hassle - in the end I just paid with regular money like a normal person, because it was frankly too embarrassing to have the cashier hunt for the right QR code, me take out my phone, wait for me to get an internet connection, try and scan the code, wait, wait, wait… And that is of course only if you decide to make a trip to some place you’d usually never go, to buy something you don’t even need, just for the sake of spending the money.

                                                                                            It’s OK when you’re buying drugs online or doing something with microdonations, but otherwise… meh.

                                                                                            How many stores in the US take even EUR directly?

                                                                                            Why should they? And even if they do, they convert it back to US dollars, because that’s the common currency - there isn’t really a point in a currency (one could even question whether it is one) when nobody you economically interact with uses it.

                                                                                            Anything you can’t buy directly you can buy with some small indirection, like a BTC-USD forex card.

                                                                                            So a roundabout payment over a centralized instance - this is the future? Seriously, this dishonesty of Bitcoin advocates (and Libertarians in general) is why you guys are so unpopular. I am deeply disgusted that I ever advocated for this mess.

                                                                                            It’s certainly volatile, and it’s certainly unstable, but it may or may not be a bubble depending on your model for what Bitcoin’s role in global finance is going to become.

                                                                                            So you admit that it has none of the necessary preconditions to be a currency… but for some reason it will… do what, exactly? If you respond to anything I mentioned here, at least tell me this: what is your “model” for what Bitcoin’s role is going to be?

                                                                                    3. 14

                                                                                      Why don’t you believe it is anti-environmental? It certainly seems to be pretty power-hungry. In fact, its hunger for power is part of why it’s effective. All of the same arguments about using less power should apply.

                                                                                      1. -1

                                                                                        Trying to reduce energy consumption is counterproductive. Energy abundance is one of the primary driving forces of civilizational advancement. Much better is to generate more, cleaner energy. Expending a few terawatts on substantially improved economic infrastructure is a perfectly reasonable trade-off.

                                                                                        Blaming bitcoin for consuming energy is like blaming almond farmers for using water. If their use of a resource is a problem, you should either get more of it or fix your economic system so externalities are priced in. Rationing is not an effective solution.

                                                                                        1. 10

                                                                                          on substantially improved economic infrastructure

                                                                                          This claim definitely needs substantiation, given that in practice bitcoin does everything worse than the alternatives.

                                                                                          1. -1

                                                                                            bitcoin does everything worse than the alternatives.

                                                                                            Come on David, we’ve been over this before and discovered that you just have a crazy definition of “better” explicitly selected to rule out cryptocurrencies.

                                                                                            Here’s a way Bitcoin is better than any of its traditional digital alternatives: bitcoin transactions can’t be busted. As you’ve stated before, you think going back on transactions at the whim of network operators is a good thing, and as I stated before, I think that’s silly. This is getting tiring.

                                                                                            A few more, for which you no doubt have some other excuse for why each is actually a bad thing: Bitcoin can’t be taken without the user’s permission (let me guess: “but people get hacked sometimes”, right?). Bitcoin doesn’t impose an inflationary loss on its users (“but what will the Fed do?!”). Bitcoin isn’t vulnerable to economic censorship (I don’t know if we’ve argued about this one; I’m guessing you’re going to claim that capital controls are critical for national security or something). The list goes on, but I’m pretty sure we’ve gone over most of it before.

                                                                                            I’ll admit that bitcoin isn’t a panacea, but “it does everything worse” is clearly a silly nonsensical claim.

                                                                                          2. 4

                                                                                            Reducing total energy consumption may or may not be counterproductive. But almost every industry I can name has a vested interest in being more power-efficient for its particular use of energy. The purpose of a car isn’t to burn gasoline; it is to get people places. If it can do that with less gasoline, people are generally happier with it.

                                                                                            PoW, however, tries to maximize power consumption via second-order effects, with the goal of making it expensive to try to subvert the chain. It’s clever because it leverages economics to keep it in everyone’s best interest not to fork, but it’s not the same as something like a car, where reducing energy consumption is part of the value-add.

                                                                                            I think that this makes PoW significantly different than just about any other use of energy that I can think of.

                                                                                            1. 3

                                                                                              Indeed. The underlying idea of Bitcoin is to simulate the mining of gold (or any other finite, valuable resource). By ensuring that an asset is always difficult to procure (a block reward every 10 minutes, block reward halving every 4 years), there’s a guard against some entity devaluing the currency (literally by fiat).

                                                                                              This means of course that no matter how fast or efficient the hardware used to process transactions becomes, the difficulty will always rise to compensate for it. The energy per hash calculation has fallen precipitously, but the number of hash calculations required to find a block has risen to compensate.
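
                                                                                              A sketch of those two mechanisms; the constants (2016-block retarget window, 10-minute target spacing, 210,000-block halving interval) are Bitcoin’s published parameters, but the code is an illustration, not the consensus implementation:

                                                                                              ```python
                                                                                              RETARGET_WINDOW = 2016      # blocks between difficulty adjustments
                                                                                              TARGET_SPACING_S = 600      # one block per 10 minutes
                                                                                              HALVING_INTERVAL = 210_000  # blocks between reward halvings (~4 years)

                                                                                              def next_difficulty(old: float, actual_window_s: float) -> float:
                                                                                                  """Raise difficulty if the last 2016 blocks came too fast, lower if too slow."""
                                                                                                  expected = RETARGET_WINDOW * TARGET_SPACING_S  # two weeks
                                                                                                  ratio = expected / actual_window_s
                                                                                                  # Each adjustment is clamped to a factor of 4 in either direction.
                                                                                                  return old * max(0.25, min(4.0, ratio))

                                                                                              def block_subsidy_btc(height: int) -> float:
                                                                                                  """50 BTC at genesis, halved every 210,000 blocks."""
                                                                                                  return 50.0 / 2 ** (height // HALVING_INTERVAL)

                                                                                              # Faster hardware just makes blocks arrive early, which raises difficulty:
                                                                                              print(next_difficulty(1.0, (RETARGET_WINDOW * TARGET_SPACING_S) / 2))  # 2.0
                                                                                              print(block_subsidy_btc(0), block_subsidy_btc(420_000))                # 50.0 12.5
                                                                                              ```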

                                                                                        2. 6

                                                                                          We’ve been doing each of those for a long time without proof of work. There are lots of systems that are decentralized, with parties that have to look out for each other a bit. The banking system is an example: it has protocols and lawyers to take care of most problems, and things work fine most of the time. There are also cryptographically anchored trust systems, like trusted timestamping and CAs, which do what they’re set up to do within their incentives. If we can do both in isolation without PoW, we can probably do both together without PoW, using some combination of what’s already worked.

                                                                                          I also think we haven’t even begun to explore the possibilities of building more trustworthy charters, organizational incentives, contracts, and so on. The failings people cite in centralized organizations are almost always about for-profit companies or strong-arming governments whose structure, incentives, and culture are prone to causing problems like that. So maybe we eliminate the root cause instead of the tools the root cause uses to bring problems, since it will probably just bring new forms of problems. Regulation, disruption, or bans of decentralized payment are what I predicted the response would be, and some reactions are already happening. They just got quite lucky that big banks like Bank of America got interested in subverting it through the legal and financial system for their own gains. Those heavyweights are probably all that held the government dogs back - ironically, the same ones that killed Wikileaks by cutting off its payments.

                                                                                      2. 8

                                                                                        In what context do you view proof-of-work as useful?

                                                                                        1. 11

                                                                                          You have addressed 0 of the actual content of the article.

                                                                                        1. 5

                                                                                          A point that the author doesn’t make (because the post is about setting up usernames for a new site) is that you can have requirements for new registrations that are stricter than for existing usernames, although it will be less efficient.

                                                                                          You can forbid new usernames that look similar to existing ones, even if there are false positives (well, one name from each cluster can still exist). You can forbid new usernames that differ only in case from existing ones, even if there are already some confusing username pairs in your DB. You can even allow logging in with a username in a different case, unless two existing usernames match; a sketch of this follows below.

                                                                                          Even if you do not solve a problem, you can freeze its scale, and prevent it from later becoming a noticeable load on support (or from being unexpectedly abused).
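
                                                                                          A minimal sketch of that registration-time check, assuming plain casefold() normalization (a real deployment might also fold visually confusable characters; the class and method names here are mine, not from the article):

                                                                                          ```python
                                                                                          # Stricter rules for new registrations than for existing (grandfathered) names.
                                                                                          class UsernameRegistry:
                                                                                              def __init__(self, existing_names):
                                                                                                  self.names = set(existing_names)   # exact legacy names keep working
                                                                                                  self.by_key = {}                   # normalized key -> colliding names
                                                                                                  for name in existing_names:
                                                                                                      self.by_key.setdefault(name.casefold(), set()).add(name)

                                                                                              def register(self, name: str) -> bool:
                                                                                                  """Reject any new name that collides with an existing one, even loosely."""
                                                                                                  key = name.casefold()
                                                                                                  if key in self.by_key:             # freeze the problem's scale
                                                                                                      return False
                                                                                                  self.names.add(name)
                                                                                                  self.by_key[key] = {name}
                                                                                                  return True

                                                                                              def resolve_login(self, typed: str):
                                                                                                  """Allow case-insensitive login only when it is unambiguous."""
                                                                                                  if typed in self.names:
                                                                                                      return typed
                                                                                                  candidates = self.by_key.get(typed.casefold(), set())
                                                                                                  return next(iter(candidates)) if len(candidates) == 1 else None

                                                                                          reg = UsernameRegistry({"Alice", "ALICE", "bob"})  # legacy collision kept
                                                                                          assert not reg.register("alice")                   # but no new ones allowed
                                                                                          assert reg.resolve_login("BOB") == "bob"           # unambiguous: ok
                                                                                          assert reg.resolve_login("alice") is None          # ambiguous: require exact
                                                                                          ```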

                                                                                          1. 3

                                                                                            I don’t think the MitM vector is clearly described in the article (or in the blog post it links to, for that matter). Anyone care to elaborate on why this is MitM-able?

                                                                                            1. 2

                                                                                              From reading the article, this is better described not as MitM but as reducing the security of a popular workflow back to a level equivalent to software wallets. Although I could probably find a way to explain why it is, in some sense, MitM.

                                                                                              The idea of a hardware wallet is partly that the limited protocols it uses make it very hard to attack; the ability of a worm running under your user account on your desktop to manipulate your payments is removed, unless the worm finds a vulnerability in narrow-scope software.

                                                                                              In this case, one of the workflows involves doing something in JavaScript on the desktop side, while verification on the token side is optional. This means there is a workflow where manipulating your browser is enough to trick you into making a different payment than you expected.
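
                                                                                              A contrived illustration of that failure mode (the function names are hypothetical, not the wallet’s actual API): when on-device confirmation is skipped, the token blindly signs whatever the compromised desktop hands it.

                                                                                              ```python
                                                                                              # Hypothetical sketch of the described workflow flaw; not a real wallet API.
                                                                                              def desktop_build_tx(dest: str, amount: int) -> dict:
                                                                                                  return {"dest": dest, "amount": amount}

                                                                                              def compromised_browser(tx: dict) -> dict:
                                                                                                  tx["dest"] = "attacker-address"  # malware silently rewrites the payment
                                                                                                  return tx

                                                                                              def token_sign(tx: dict, confirm_on_device: bool) -> str:
                                                                                                  if confirm_on_device:
                                                                                                      # The token's own display shows dest/amount; malware can't fake this.
                                                                                                      raise RuntimeError("would show tx on the device and wait for approval")
                                                                                                  return f"signed({tx['dest']}, {tx['amount']})"  # blind signing

                                                                                              tx = compromised_browser(desktop_build_tx("merchant-address", 100))
                                                                                              print(token_sign(tx, confirm_on_device=False))
                                                                                              # signed(attacker-address, 100) - the token attested a spoofed payment
                                                                                              ```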

                                                                                              1. 2

                                                                                                That I could understand, but (in my very humble opinion) it sounded more like a CSRF-like vulnerability than MitM. Either way, that’s just semantics :)

                                                                                                1. 2

                                                                                                  It just depends on what you would call end-to-end. I think the idea of calling it MitM is that you don’t trust your desktop and want to trust only the hardware wallet. You still use your desktop for part of the communication, for convenience and the network connection and things like that. It turns out that a program taking over the desktop can take over a part of the process that should have been unmodifiable without infiltrating the hardware wallet.

                                                                                                  So the MitM is the desktop being able to spoof too much when used to facilitate interaction between you, the hardware token, and the global blockchain.

                                                                                            1. 2

                                                                                              Anyone have a copy?

                                                                                              1. 2

                                                                                                Just google “iboot github” and find a not-yet-DMCA’d link. Currently https://github.com/emrakul2002/iboot works.

                                                                                                1. 1

                                                                                                  Apparently the original upload has been taken down, but there are more copies that can easily be found on the same site. I would assume that a lot of people have copies by now…

                                                                                                1. -1

                                                                                                  Eventually we will stop investing in chemical rocketry and do something really interesting in space travel. We need a paradigm shift in space travel, and chemical rockets are a dead end.

                                                                                                  1. 7

                                                                                                    I can’t see any non-scifi future in which we give up on chemical rocketry. Chemical rocketry is really the only means we have of putting anything from the Earth’s surface into Low Earth Orbit, because the absolute thrust needed to do that must be very high, unlike what you’re presumably alluding to (electric propulsion, lasers, sails), which only works once in space, where you can do useful propulsion orthogonally to the local gravity gradient (or just in weak gravity). But getting to LEO is still among the hardest parts of any space mission, and getting to LEO gets you halfway to anywhere in the universe, as Heinlein said.
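
                                                                                                    To put numbers on “the absolute thrust must be very high”: thrust-to-weight has to exceed 1 just to leave the pad. A trivial check, using ballpark public figures (the Falcon 9 and ion-thruster values are commonly cited approximations, assumed here for illustration):

                                                                                                    ```python
                                                                                                    g = 9.81                 # m/s^2
                                                                                                    thrust_n = 7_600_000     # ~7,600 kN: Falcon 9 sea-level thrust (ballpark)
                                                                                                    mass_kg = 549_000        # ~549 t fully fueled (ballpark)
                                                                                                    print(thrust_n / (mass_kg * g))       # ~1.4: enough to lift off

                                                                                                    ion_thrust_n = 0.25      # ballpark for a flown electric thruster
                                                                                                    print(ion_thrust_n / (mass_kg * g))   # ~5e-8: useless at the pad, fine in orbit
                                                                                                    ```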

                                                                                                    Beyond trying to reuse the first stage of a conventional rocket, as SpaceX is doing, there are some other very interesting chemical technologies that could greatly ease space access, such as the SABRE engine being developed for the Skylon spaceplane. The only other approach I know of that’s not scifi (e.g. space elevators) is nuclear rockets, in which a working fluid (like hydrogen) is heated by a fissioning core and accelerated out of a nozzle. The performance is much higher than chemical propulsion, but the appetite to build and fly such machines is understandably very low, because of the risk that an explosion on ascent, or a breakup on reentry, would spread a great deal of radioactive material in the high atmosphere over a very large area.

                                                                                                    But in summary, I don’t really agree with - or, more charitably, am not sure I’ve understood - your point, and would be interested to hear what you actually meant.

                                                                                                    1. 3

                                                                                                      I remember being wowed by Project Orion as a kid.

                                                                                                      Maybe Sagan had a thing for it? The idea in that case was to reuse fissile material (after making it as “clean” as possible to detonate) for peaceful purposes instead of military aggression.

                                                                                                      1. 2

                                                                                                        Atomic pulse propulsion (i.e. Orion) can theoretically reach 0.1c, so that’s the nearest star in about 40 years. If we could find a source of fissile material in the solar system (one that doesn’t have to be launched from Earth) and refine it there, interstellar travel could really happen.
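
                                                                                                        (Sanity-checking that figure, with 4.24 light-years as the commonly quoted distance to Proxima Centauri:)

                                                                                                        ```python
                                                                                                        distance_ly = 4.24            # Proxima Centauri, in light-years
                                                                                                        speed_c = 0.1                 # cruise speed as a fraction of c
                                                                                                        print(distance_ly / speed_c)  # ~42 years, ignoring acceleration phases
                                                                                                        ```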

                                                                                                        1. 1

                                                                                                          The moon is a candidate for fissile material: https://www.space.com/6904-uranium-moon.html

                                                                                                      2. 1

                                                                                                      The problem with relying on a private company funded by public money, like SpaceX, is that they won’t be risk-takers; they will squeeze every last drop out of existing technology. We won’t know what reasonable alternatives could exist, because we are not investing in researching them.

                                                                                                        1. 2

                                                                                                        I don’t think it’s fair to say SpaceX won’t take risks, considering this is a company that has almost failed financially pursuing its visions, and that has very ambitious goals for the next few years (which, I should mention, require tech development and innovation, and are risky).

                                                                                                        Throwing money at research doesn’t magically create new tech; intelligent minds do. Most of our revolutionary advances in tech have been brainstormed without public or private funding: one or more people had a bright idea and pursued it. This isn’t something people can just do on command. It’s also important to consider that people who fail to bring their ideas to fruition may still have paved the path for future development by others.

                                                                                                          1. 1

                                                                                                          I would say that they will squeeze everything out of existing approaches; «existing technology» sounds a bit too narrow. And unfortunately, improving the technology by combining well-established approaches is a stage that cannot be made too cheap, because they do need to build and break full-scale vehicles.

                                                                                                          I think that the alternative approaches for getting from inside the atmosphere into orbit will include new things developed without any plans to use them in space.

                                                                                                        2. 2

                                                                                                          What physical effects would be used?

                                                                                                        I think that relying on new physics, or on contiguous objects a few thousand kilometers in size more than 1 km above the ground, is not just a paradigm shift; anything like that would be nice, but its absence doesn’t make what we currently have a disappointment.

                                                                                                        The problem is that we want to go from «immobile inside the atmosphere» to «very fast above the atmosphere». By continuity, this needs to pass either through «quite fast in the rarefied upper atmosphere» or through «quite slow above the atmosphere».

                                                                                                        I am not sure there is a currently known effect that would allow hovering above the atmosphere without orbital speed.

                                                                                                        As for accelerating through the atmosphere - and I guess chemical air-breathing jet engines don’t count as a move away from chemical rockets - you either need to accelerate the gas around you, or you need to carry reaction mass.

                                                                                                        In the first case, since you need to overcome drag, you need some of the air you push back to fly backwards relative to Earth. So you need to accelerate some amount of gas to multiple kilometers per second; I am not sure there are any promising ideas for hypersonic propellers, especially in a rarefied atmosphere. I guess that once you reach the ionosphere, something large and electromagnetic could work, but there is a gap between the height where anything aerodynamic has flown (actually, a JAXA aerostat, so maybe «aerodynamic» is the wrong term) and the height where ionisation starts rising. So it could be feasible or infeasible, and maybe a new idea would first have to be developed for some kind of in-atmosphere transportation.

                                                                                                        And if you carry your reaction mass with you, you then need to eject it fast. Presumably, you would want to make it gaseous and heat it up, and you want high throughput. I think that even if you assume you have a lot of electrical energy, splitting water into hydrogen and oxygen, liquefying them, then burning them in flight is actually pretty efficient. But then the vehicle itself will be a chemical rocket anyway, and will use chemical rocket engineering as practiced today. Modern methods of isolating nuclear fission from the atmosphere via double heat exchange reduce throughput. Maybe some kind of nuclear fusion with electromagnetic redirection of the heated plasma could work - maybe it could even be more efficient than running a reactor on the ground to split water - but nobody yet knows at what scale energy-positive nuclear fusion can run.

                                                                                                        All in all, I agree there are directions that could maybe become a better way of starting from Earth than chemical rockets, but I think there are many scenarios where the current development path of chemical rockets will be the more efficient one to reuse and continue.
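
                                                                                                        For intuition on why exhaust velocity dominates that trade-off, here is the textbook Tsiolkovsky relation (nothing specific to this thread; the delta-v and exhaust-velocity values are commonly cited ballparks):

                                                                                                        ```python
                                                                                                        import math

                                                                                                        def propellant_fraction(delta_v_ms: float, exhaust_v_ms: float) -> float:
                                                                                                            """Tsiolkovsky: delta_v = v_e * ln(m0/mf); returns the share of launch
                                                                                                            mass that must be propellant to reach the given delta-v."""
                                                                                                            mass_ratio = math.exp(delta_v_ms / exhaust_v_ms)  # m0 / mf
                                                                                                            return 1 - 1 / mass_ratio

                                                                                                        DV_TO_LEO = 9_400  # m/s, typical figure including gravity and drag losses
                                                                                                        print(propellant_fraction(DV_TO_LEO, 4_400))  # H2/O2 (~4.4 km/s): ~0.88
                                                                                                        print(propellant_fraction(DV_TO_LEO, 9_000))  # hotter exhaust: ~0.65
                                                                                                        ```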

                                                                                                          1. 2

                                                                                                          What do you mean by “chemical rockets are a dead end”? For escaping planetary orbits, there really aren’t many options. For interstellar travel, however, ion drives and solar sails have already been tested and deployed, and they have strengths and weaknesses. So there are multiple use cases here, depending on the option.

                                                                                                            1. 1

                                                                                                              Yeah right after we upload our consciousness to a planetary fungal neural network.