1. 1

    Ah, Haskell code golf. It’s like APL, but for Monads instead of matrices!

    1. 27

      Sometimes I like to think that I know how computers work, and then I read something written by someone who actually does and I’m humbled most completely.

      1. 11

        A lot of this complexity seems down to the way Windows works, though. As a Linux user, the amount of somewhat confusing/crufty stuff going on in a typical Windows install boggles the mind; it’s almost as bad as Emacs.

        1. 11

          I guess to me it doesn’t feel like there’s much Windows-specific complexity here, just a generally complex issue: a bug in v8’s sandboxed runtime and how it interacts with low-level OS-provided virtual memory protection and specific lock-contention behavior, which only expressed itself by happenstance for the OP.

          Some of this stuff just feels like irreducible complexity, though my lack of familiarity with Windowsisms (function naming style, non-fair locks, etc.) probably doesn’t help there.

          1. 5

            How does CFG work with Chrome on Linux?

            1. 2

              Do you mean CFI?

              CFG is MS’s Control Flow Guard: a combination of compile-time instrumentation from MSVC and runtime integration with the OS. CFI on Linux (via clang/LLVM), in contrast, is entirely compile-time AFAIK, with basically no runtime support.

              See:

              for more details on the differences.
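              To illustrate the shared idea, here’s a toy sketch in Python (nothing like the real compiler instrumentation in either CFG or clang’s CFI; every name here is made up): indirect calls are only allowed to land on a pre-approved set of targets, with a guard checking that at call time.

              ```python
              # Toy illustration of the idea behind CFG/CFI, not how either is
              # actually implemented: indirect calls may only land on a
              # pre-approved set of valid targets.
              def greet():
                  return "hi"

              def farewell():
                  return "bye"

              # In real CFG/CFI this set is computed by the compiler/linker.
              VALID_TARGETS = {greet, farewell}

              def guarded_call(fn):
                  # Stand-in for the check instrumentation inserts before an
                  # indirect call.
                  if fn not in VALID_TARGETS:
                      raise RuntimeError("CFI violation: invalid call target")
                  return fn()

              print(guarded_call(greet))  # a legitimate target goes through
              ```

              The real mechanisms differ mainly in where that check lives: purely in the emitted code (clang CFI) versus partly in the OS (CFG).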

              1. 2

                Yes and no. :) The linux CFI implementation doesn’t include the JIT protection feature in CFG that’s implicated in the bug, so I’m not sure it’s fair to characterize this as “cruft”.

                1. 2

                  The CFI implementation in llvm isn’t a “linux CFI implementation.” :)

                  As OpenBSD moves towards llvm on all architectures, it can take advantage of CFI, just as HardenedBSD already does. :)

                2. 1

                  llvm’s implementation of CFI does have the beginnings of a runtime support library (libclang_rt.cfi). HardenedBSD is working on integrating Cross-DSO CFI from llvm, which is what uses the support library.

              2. 4

                Linux just has its own weirdnesses in other places.

                That said, memory management seems to be a source of strange behaviour regardless of OS.

            1. 2

              Semmle are a spin-out from Oxford University - a genuine formal methods success story, I believe (although they probably wouldn’t want that term anywhere near their marketing :) )

                1. 3

                  Thanks for that. It’s bad enough when journalistic articles fail to link to the original research, but this is an article by the research institute itself!

                1. 5

                  The conflation of “correct” with the absence of bugs makes this argument hard to critique.

                  1. No code is bug-free.
                  2. Software with bugs is not correct.
                  3. Simple code is easier to make correct.
                  4. Simple code is easier to make fast.
                  5. Therefore: prioritise simple code, to make future work easier.

                  If I’m understanding this argument correctly, then I see several fallacies. The biggest problem is the assertion that no code can ever be correct. Then why even try for simplicity? What is the heuristic for knowing if my simple code is correct… enough?

                  I think there might be something in another assertion: correct code is easier to make simple. Most first cuts at a problem are complicated. But it’s hard as hell to get to simple without knowing what logic can stay and what logic is extraneous.

                  You need to take problems apart, identify smaller problems within them and ruthlessly remove scope until you find the basic problem you can apply a basic solution to.

                  This proposal defers the conflict. Once I have a “basic” problem and solution… then what?

                  This argument ignores the jump from not-working code to working code. This argument assumes “simple” code is easy to understand and easy to change.

                  Get off my lawn. 😉

                  1. 3

                    The conflation of “correct” with the absence of bugs makes this argument hard to critique.

                    Yeah. For me, coming from a theoretical CS background, correctness is meaningless without a formal specification. If you don’t have a spec, how can you even know what correctness is? Code that you debug into existence alongside a kind of “I know it when I see it” spec can be perfectly functional and useful, but not “correct” in any meaningful sense.

                    (Sometimes of course, you debug something into existence & then realise that there’s an underlying structure to the problem that you hadn’t seen before. Bonus! Now you can spec it out and catch future errors, if you care enough to do that.)

                  1. 33

                    First they…

                    Ah, you all know how it goes. Congratulations to the RISC-V guys on making another step on that particular ladder.

                    1. 2

                      We currently use CVS at $job for a large number of our projects, mostly because that’s what had been in use here for some 30 years. It’s great for our use-case (one branch, many developers), but we’ve been migrating projects one-by-one to git very slowly.

                      Git feels much more powerful, but also more complex for basic operations like merging and rebasing. Our integration/workflow with Eclipse could be hindering some of this, but it’s almost second nature to use CVS in the IDE.

                      I do use git quite a bit for personal projects, but mostly just through the terminal and with minimal merging/branching.

                      1. 2

                        magit under emacs is really, really good if you’re jonesing for proper editor integration with your revision control system.

                        1. 1

                          I used CVS for a few things back in the day, and SVN more - but I can’t remember either well enough to know how you’d even handle things like rebasing - to me that’s a term that almost doesn’t make sense outside of modern VCSes. So when you said it’s more complex, I was surprised. I do ‘git merge’ and ‘git rebase’ regularly; it’s an everyday part of the workflow. Rebasing in particular makes keeping long-lived branches in a cleanly mergeable state a much more sane proposition.

                          +1 for magit in emacs also, btw. It’s a power tool for git.

                          Cheap branching and local commits are the biggest selling point for git’s usability over CVS/SVN, because I can be so much more confident that I’m not going to lose work. I can save commits, use temporary local branches, and never worry about accidentally destroying my local changes while doing a merge.

                        1. 3

                          No, it’s vector multiplication disguised as a Markov chain.

                          1. 3

                            That seems like a category mistake to me, whereas the title of the article doesn’t.

                            A Markov chain may be a specific pattern of vector multiplications, but that pattern makes all the difference. Markov chains and vector multiplications are on a different level. On the other hand ‘deep learning’ and ‘Markov chain’ are terms for alternative patterns of vector multiplications, one a lot more involved than the other.
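                            The point is easy to make concrete. A minimal sketch (the two-state transition matrix is entirely made up): each step of a Markov chain is literally one vector-matrix product, and that specific pattern, iterated, is the whole algorithm.

                            ```python
                            import numpy as np

                            # Toy two-state weather chain (sunny, rainy);
                            # the probabilities are invented for illustration.
                            P = np.array([[0.9, 0.1],   # sunny -> sunny/rainy
                                          [0.5, 0.5]])  # rainy -> sunny/rainy

                            state = np.array([1.0, 0.0])  # start: certainly sunny
                            for _ in range(50):
                                state = state @ P  # one step = one vector-matrix product

                            # The chain converges to its stationary distribution pi,
                            # which satisfies pi = pi @ P.
                            print(np.round(state, 3))  # close to [5/6, 1/6]
                            ```

                            The same multiplications arranged differently (nonlinearities between them, learned weights) give you a neural network, which is why the pattern, not the primitive, is what matters.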

                            1. 2

                              There’s a video on YT somewhere of a talk by a physicist (IIRC) on why deep learning is so ridiculously effective - it pretty much boils down to the same reason that mathematics is so unreasonably effective in describing physical systems in general, i.e. (handwaving extremely wildly from memory) that physical systems tend to be simple functions of their inputs (albeit with many, many inputs!) where causality is preserved. This is what makes it possible for RNNs and the like to approximate physical systems in various ways, because the nature of said physical systems is exactly what permits approximations of the information content of the system to be at least partially valid instead of being a total loss.

                              (I tried to find the video, but there are too many terrible ones on the same topic these days. I’ll have another look later.)

                            2. 4

                              No, it’s a monoid in the category of endofunctors.

                            1. 4

                              If the book is so bad, then what is the publisher doing? Isn’t it their job to weed out bad content?

                              1. 6

                                I wanted to explore that question some more in the post, but it got out of scope and is really its own huge topic.

                                The short version is that perhaps, as readers, we think they are asking “Is this content any good?” when what they’re really asking is, “Will this sell?”

                                1. 5

                                  In the preface of the second edition it says that the first edition was reviewed “by a professional C programmer hired by the publisher.” That programmer said it should not be published. That programmer was right, but the publisher went ahead and published it anyway.

                                  Can you expand slightly on this? I understand that the second edition contains a blurb that someone they hired reviewed the 1st edition and decided it should never be published. I’m slightly lost in meaning here.

                                  1. Did they hire a person for the second edition to review the first edition, where the conclusion was ‘that should not have been published’?
                                  2. Or did they hire a person to review the first edition, where the conclusion was not to publish, but they published anyway and included a blurb about it in the second edition?

                                  I guess the question is: did they know before publishing that it was this bad?

                                  Additionally, was the second edition reviewed by the same person and considered OK to publish?

                                  1. 5

                                    Here’s a longer excerpt from the second edition’s preface.

                                    Prior to the publication of the first edition, the manuscript was reviewed by a professional C programmer hired by the publisher. This individual expressed a firm opinion that the book should not be published because “it offers nothing new—nothing the C programmer cannot obtain from the documentation provided with C compilers by the software companies.”

                                    This review was not surprising. The reviewer was of an opinion that was shared by, perhaps, the majority of professional programmers who have little knowledge of or empathy for the rigors a beginning programmer must overcome to achieve a professional-level knowledge base of a highly technical subject.

                                    Fortunately, that reviewer’s objections were disregarded, and “Mastering C Pointers” was released in 1990. It was an immediate success, as are most books that have absolutely no competition in the marketplace. This was and still is the only book dedicated solely to the subject of pointers and pointer operations using the C programming language.

                                    To answer your question, then, all we can conclude is that a “professional C programmer” reviewed the first edition before it was published, recommended against publishing it, but the book was published anyway. If the quoted portion were the reviewer’s only objection, then we could surmise that the reviewer didn’t know much either, or didn’t actually read it.

                                    1. 1

                                      little knowledge of or empathy for … a beginning programmer

                                      This is an important point that I feel has been left out of the discussion of this book. Yes, the book contains harmful advice that should not be followed. It is probably dangerous to make this text available to beginners, and it serves as little more than an object of ridicule for more experienced readers.

                                      However, I think there is something to be gained from a more critical analysis that doesn’t hinge on the quality or correctness of the example. This reviewer takes a step in the right direction by trying to look at Traister’s background and trying to interpret how he arrived at holding such fatal misconceptions about C programming from a mental model seemingly developed in BASIC.

                                      Traister’s code examples are in some cases just wrong and non-functioning, but in other cases I can understand what he wanted to achieve even if he has made a serious mistake. An expert C programmer has a mental model informed by their understanding of the memory management and function call semantics of C. A beginner or someone who has experience in a different sort of language will approach C programming from their own mental model.

                                      Rather than pointing and laughing at his stupidity, or working to get this book removed from shelves, maybe there’s something to be gained by exercising empathy for the author and the beginner programmer. Are the mistakes due to simple error, or do they arise from an “incorrect” mental model? Does the “incorrect” mental model actually make some sense in a certain way? Does it represent a possibly common misconception for beginners? Is it a fault of the programmer or the programming language?

                                      1. 1

                                        …an opinion that was shared by, perhaps, the majority of professional programmers who have little knowledge of or empathy for the rigors a beginning programmer must overcome…

                                        What utter nonsense. This is inverse-meritocracy: claiming that every single expert is blinded by their knowledge & experience. Who are we to listen to then?

                                        It seems like they’d prefer lots of terrible C programmers cropping up right away, to a moderate number of well-versed C programmers entering the craft over time. Which, now that I think about it, is a sensible approach for a publisher to take.

                                  2. 3

                                    Cynically? The publisher’s job is to make money. If bad content makes them money, they’ll still publish it.

                                    1. 2

                                      Exactly. There’s tons of news outlets, magazines, and online sites that make most of their money on fluff. Shouldn’t be surprised if computer book publishers try it. The managers might have even sensed IT books are BS or can risk being wrong individually given how there’s piles of new books every year on the same subjects. “If they want to argue about content, let them do it in the next book we sell!” ;)

                                      1. 2

                                        I recommend a scene from Hal Hartley’s film “Fay Grim” (the sequel to “Henry Fool”) here. At one point, Fay questions the publisher’s decision to publish a work (‘The Confessions’) by her husband - she only read “the dirty parts” but still recognized the work as “really, really bad”.

                                        Excerpted from a PopMatters review: “One proposal, from Simon’s publisher Angus (Chuck Montgomery), will lead to publication of Henry’s (admittedly bad) writing and increased sales of Simon’s poetry (on which royalties Fay and Ned depend to live). (Though the writing is, Fay and Angus agree, “bad,” he asserts they must press on, if only for the basest of reasons: “We can’t be too hard-line about these things, Fay. Anything capable of being sold can be worth publishing.”)”

                                  1. -2

                                    Proof-of-work cryptocurrency mining is bad, stupid and damaging.

                                    In the words of the author, people who spread this nonsense without substantiating it and addressing the arguments why it’s wrong are “bad, stupid, and damaging”.

                                    1. 5

                                      Your link doesn’t make a claim, it’s just a link to a Twitter search saying proof of work is good. Did you mean another link?

                                      1. -2

                                        Your link doesn’t make a claim

                                        I posted four links.

                                        1. 10

                                          So you did, sorry!

                                          One link compares, literally, the entire financial system and everything it does to Bitcoin, and asserts - without numbers - that surely Bitcoin is more energy efficient than the existing system. (The other links are blank slogans.)

                                          Since you can’t be bothered making an effort post, I will:

                                          In 2015, there were approximately 430 billion cashless transactions.

                                          The world produced about 24,000 TWh of energy in 2015 - oil, gas, coal, renewables, the lot.

                                          If Bitcoin handled all of those transactions at 215 kWh per transaction, that’d be about 92,000 TWh - nearly four times all the energy produced in the world.

                                          Does the existing financial infrastructure consume anywhere near that much energy? I strongly suspect it doesn’t.

                                          Those numbers are, of course, fuzzy as hell. Feel free to post better ones, or indeed any.
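                                          For what it’s worth, the multiplication itself is easy to sanity-check. Everything below simply takes the figures above (430 billion transactions, 215 kWh per transaction, 24,000 TWh of world production) as assumptions, not verified data.

                                          ```python
                                          # All inputs are the assumptions quoted above.
                                          transactions = 430e9   # cashless transactions, 2015
                                          kwh_per_tx = 215       # claimed kWh per Bitcoin transaction
                                          world_twh = 24_000     # approx. world energy production, TWh

                                          total_twh = transactions * kwh_per_tx / 1e9  # 1 TWh = 1e9 kWh
                                          share = total_twh / world_twh
                                          print(f"{total_twh:,.0f} TWh ({share:.0%} of world energy)")
                                          ```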

                                          1. -6

                                            without numbers

                                            This was addressed in the second and third links, which you haven’t really responded to. I’ll wait for that, thank you.

                                            The other links are blank slogans.

                                            No, they aren’t. They make very technical arguments on a variety of topics. This is self-evident to anyone who bothers to visit the links. So, this shows you’re a dishonest person, with no argument.

                                            If Bitcoin handled all of those transactions at 215 kWh per transaction, that’d be about 10,000 TWh - or about 40% of all the energy in the world.

                                            That’s not how Bitcoin handles such a volume of transactions. Bitcoin’s blockchain is literally designed to not be capable of handling that many transactions. Thus, your point is again moot.

                                            And yet, pretty soon “Bitcoin” will be handling that number of transactions, and it will be doing so at far less than “215 kWh per transaction” or whatever number you invent. It has already begun.

                                            Again, you’re just showing your immense ignorance of this subject.

                                            1. 3

                                              And you’re being rude. Please either engage positively or don’t post at all.

                                              1. -2

                                                You and the OP are the ones being rude.

                                      2. 5

                                        A bunch of Twitter links do not an argument make.

                                        1. -5

                                          If you know how to read, they do. I see Lobsters has been infested with trolls.

                                        2. 4

                                          You’d have done better just highlighting the one link that makes your points best. That’s this one. It actually has good points if we’re comparing Bitcoin to the existing financial system. It’s misleading, though, given it counts every card or chip used in the main financial system… tiny, cheap things… without counting all the devices that would be necessary to securely do Bitcoin. I bet the computers or embedded devices Bitcoin users use cost more to make than smartcards with 16-bit MCUs connecting to standard servers over a secure tunnel.

                                          The no-branch and no-cash advantages do exist against most banks. Your comparison leaves off those where they don’t: branchless banks (i.e. online banks) or digital payment systems (e.g. Venmo or PayPal). Unlike the cryptocurrencies, those centralized alternatives grabbed plenty of the bankers’ market, with one becoming a transforming force when partnering with eBay. PayPal achieved its goals by fixing real problems people had with the financial system, using the easiest methods possible and reusing what was already proven to work. That’s what alternatives should be doing. The cryptocurrencies only look much better in performance and energy usage if compared to the most inefficient, wasteful models in centralized finance. Compared to the digital ones (especially lean ones), they don’t have strong advantages for most users: only disadvantages like slower transactions, more energy use, more costly computers, riskier protocols due to higher complexity, lower longevity, and unclear risk on disputes if one hits a court.

                                          (@David_Gerard, you might find this last one useful later.)

                                          I’m saving the best part about your comparison for last: it should be an AND instead of an OR. It’s totally wrong to compare them in isolation. Bitcoin is a failed currency primarily used for speculation, with the intent of getting someone richer in an existing currency (i.e. the financial system). It and most cryptocurrencies also use the existing financial system for investments into them, payments, the devices they run on, their energy use, cash backing of some assets, conferences/meetups, and probably your personal account on Patreon. The cryptocurrencies are using the current financial system to bootstrap their vision of the future (or just defraud people… it varies). Until a transition happens, cryptocurrencies need their own energy use and the existing financial system’s energy use. They combine. It’s not one or the other until cryptocurrency users or developers are no longer using the financial system. That is quite a long shot, even harder to believe than cryptocurrencies going mainstream to begin with.

                                          1. 2

                                            thank you! I have a lengthy effortpost in the works on proof-of-work and the bad excuses for it, and will definitely be noting that point :-)

                                            1. -2

                                              You’d have done better [..]

                                              It’s absurd Lobsters allows this deceitful nonsense at all. The dude is spreading disinfo and knows he’s doing it.

                                              1. 3

                                                I just pointed out your link was spreading disinfo given it pretended Bitcoin operated in isolation instead of with the financial system. It’s a wasteful system that depends on and adds to everything you mentioned. It can only be said to use less energy if it’s self-sustaining and eliminated the other stuff by replacing it in the large. Instead, most Bitcoin use is happening side-by-side with it sustaining both systems. They add together.

                                                1. -2

                                                  It’s a wasteful system that depends on and adds to everything you mentioned.

                                                  That’s not true. Bitcoin doesn’t have a single branch for example. I could go on, but I tire of arguing against liars. It’s not productive.

                                                  1. 3

                                                    If it’s not true, then at least the following would be true:

                                                    1. All personnel and hardware involved in Bitcoin are paid for only in Bitcoin. That’s development, mining, promotion, meetings, etc. It has replaced the financial system for at least its own needs among its own supporters.

                                                    2. People move money into Bitcoin. It then stays there, since they use Bitcoin as a currency. Most aren’t moving things into and out of the financial system to profit off of Bitcoin; that would make it just another financial instrument in the regular financial system, used like many others. It’s also fairly stable, so your money isn’t here today and gone tomorrow.

                                                    3. Bitcoin isn’t backed by physical cash at any level. You’d need the banks, Brinks, Fort Knox, etc. behind the scenes at that point. Bitcoin could still vastly reduce the amount of that, but it would still depend on some of it. It hasn’t eliminated it.

                                                    If any of these aren’t true, then Bitcoin is using the current financial system to operate, because it hasn’t replaced it or eliminated the need for it. If No. 1 isn’t true, that’s especially interesting, given they’re the people who say it will replace the current financial system. If it hasn’t for them, then why should the rest of us depend on it?

                                                    1. -2

                                                      Bitcoin doesn’t depend on the current financial system. I’m not going to waste any time convincing someone as smart as you about that.

                                          1. 6

                                            Some of these are also on Google Play, but whatever:

                                            • ConnectBot - ssh client
                                            • K-9 mail is the best Android IMAP client I’ve found
                                            • Orgzly lets you make org-mode style notes.
                                            1. 1

                                              As someone who has spent wayyyy too much time in the world of CSP I applaud this effort.

                                              1. 2

                                                How did you spend the time? golang?

                                                1. 2

                                                  Was working on a proof tool for CSP for a number of years.

                                                  1. 1

                                                    Interesting, because to quote Hoare’s original paper:

                                                    However, this paper also ignores many serious problems. The most serious is that it fails to suggest any proof method to assist in the development and verification of correct programs.

                                                    Not sure what the current status is with proof tools.

                                                    1. 2

                                                      The common thing to use with CSP-like formalisms in industry was the SPIN model checker. It was used a lot. Various temporal logics have also been modeled in proof assistants. People don’t do it a lot, but I think that’s more lack of interest than difficulty.

                                                      Anyway, I took a look and found CSP-Prover. They had an initial paper, then another in the search results about deadlock detection. Hope y’all enjoy it.

                                                      1. 1

                                                        IIRC, SPIN is the best of the open-source CSP-capable checkers. Not looked at it for a few years though.

                                                1. 8

                                                  At one extreme, Arch Linux and Debian Unstable, where you can do just about anything but there’s only about 98% chance it’ll work on a given day.

                                                  At the other extreme, CentOS/Red Hat Enterprise Linux, where there’s a specific set of things that are guaranteed 100% reliable, and pretty much everything else will break.

                                                  In between are macOS and Windows, where most things work and keep working, but sometimes you reach for that one extra thing and wind up having to upgrade your OS. It’s a pain, but consider it an amortized cost over not spending five minutes a day tracing through shell scripts to find out where a particular thing is failing.

                                                  1. 3

                                                    To be fair, I haven’t had unstable break on me for years. But I do tend to mostly track testing rather than unstable & keep a close eye on what apt wants to upgrade.

                                                    1. 2

                                                      I agree, I’ve had a pretty good experience with Debian unstable in the past 10 years or so (prior to that it used to break more). Contrary to the name, it isn’t really the anything-goes staging area (at least not anymore). To upload to unstable, a package is supposed to have been at least minimally tested, should build cleanly from source, should install cleanly without dependency issues, etc. There is a more anything-goes staging area for packages still working out the bugs, experimental. But you wouldn’t run it as a distribution, only pull in specific packages from there to test.

                                                    2. 3

                                                      macOS itself does not break on every update*, but Homebrew packages break very often. After each update, half of them stop working because some library has the wrong version, or for some other reason. Packages often fail to install, or install in a broken state. Sometimes newer versions of packages have features disabled by default or removed entirely (i.e. GUI dropped). I know Homebrew is mostly a hobby project, but I had no such bad experience in Gentoo or Arch. And without Homebrew, macOS is just a glamorous runtime for Photoshop.

                                                      * Except that starting from 10.13, the update from 10.12 failed for me, leaving the system in a non-booting state; then, after fixing that, each minor update started to install twice, and now it even offers me an ancient security update from January. This is literally macOS Vista.

                                                    1. 3

                                                      Google can push whatever updates they like through the Google services layer on an Android phone: it wouldn’t surprise me if they’d do just that when there are security holes being actively exploited in the wild, even if the user has explicitly turned off updates on their phone.

                                                      1. 4

                                                        Right. I am operating under the assumption that 1. Google did it and 2. They had some good reason.

                                                        But, I want details!

                                                        Hm, I found this… https://source.android.com/security/bulletin/2018-04-01

                                                        And this https://groups.google.com/forum/#!forum/android-security-updates

                                                        But, neither seem to be the sort of place where I’d see google developers discussing whether or not to push updates.

                                                        Anyway, it doesn’t seem that an update to some specific app was pushed. It seems that the “auto update” feature was temporarily turned on, then off again.

                                                        1. 1

                                                          Hmm, I don’t have an Android phone but out of the blue my Apple Watch gave me a notification that I needed to update. The patch notes say “This is a security update” and Apple doesn’t comment on security updates, and I can’t find anything in the news about it. Wonder if it’s related.

                                                      1. 2

                                                        These are probably the weakest arguments against Bitcoin I’ve seen. But the coolest bit about Bitcoin is that it is completely voluntary, so you do your thing, and we’ll do ours.

                                                        Real arguments against Bitcoin are:

                                                        And I’m sure there are others but literally none of the ones presented here are valid.

                                                        1. 29

                                                          These are probably the weakest arguments against Bitcoin I’ve seen.

                                                          As it says, this is in response to one of the weakest arguments for Bitcoin I’ve seen. But one that keeps coming up.

                                                          But the coolest bit about Bitcoin is that it is completely voluntary, so you do your thing, and we’ll do ours.

                                                          When you’re using literally more electricity than entire countries, that’s a significant externality that is in fact everyone else’s business.

                                                          1. 19

                                                            I would also like to be able to upgrade my gaming PC’s GPU without spending what the entire machine cost.

                                                            This is getting better though.

                                                            1. 1

                                                              For what it’s worth, Bitcoin mining doesn’t use GPUs and hasn’t for several years. GPUs are being used to mine Ethereum, Monero, etc. but not Bitcoin or Bitcoin Cash.

                                                            2. 0

                                                              When you’re using literally more electricity than entire countries, that’s a significant externality that is in fact everyone else’s business

                                                              And yet, still less electricity than… Christmas lights in the US or gold mining.

                                                              https://coinaccess.com/blog/bitcoin-power-consumption-put-into-perspective/

                                                              1. 21

                                                                When you reach for “Tu quoque” as your response to a criticism, then you’ve definitely run out of decent arguments.

                                                            3. 13

                                                              Bitcoin (and all blockchain based technology) is doomed to die as the price of energy goes up.

                                                              It also accelerates the exhaustion of many energy sources, pushing energy prices up faster for every other use.

                                                              All blockchain-based cryptocurrencies are scams, both as currencies and as long-term investments.
                                                              They are distributed, energy-wasting Ponzi schemes.

                                                              1. 2

                                                                Wouldn’t an increase in the cost of energy just make mining difficulty go down? Then the network would just use less energy?

                                                                1. 2

                                                                  No, because if you reduce the mining difficulty, you decrease the chain’s safety.

                                                                  Indeed, the fact that the energy cost is higher than the average bitcoin revenue does not mean that a sufficiently determined pool can’t cover the difference by double spending.

                                                                  1. 3

                                                                    If energy cost doubles, a mix of two things will happen, as they do when the block reward halves:

                                                                    1. Value goes up, as marginal supply decreases.
                                                                    2. If the demand isn’t there, instead the difficulty falls as miners withdraw from the market.

                                                                    Either way, the mining will happen at a price point where the mining cost (energy+capital) meets the block reward value. This cost is what secures the blockchain by making attacks costly.
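The equilibrium described above can be put into a toy model (all numbers here are hypothetical, chosen purely for illustration, not real network data):

```python
# Toy model of mining economics: miners enter or leave until the total
# mining cost (energy + capital) roughly equals the block reward value.
# All parameters are hypothetical.

def equilibrium_hashrate(reward_usd, cost_per_hash_usd):
    """Hashrate at which total mining cost equals the block reward value."""
    return reward_usd / cost_per_hash_usd

reward = 100_000.0      # USD value of one block reward (hypothetical)
cost_per_hash = 1e-12   # USD of energy+capital per hash (hypothetical)

h1 = equilibrium_hashrate(reward, cost_per_hash)

# If the energy cost doubles and the coin's price doesn't move,
# the sustainable hashrate (and thus the difficulty) halves:
h2 = equilibrium_hashrate(reward, 2 * cost_per_hash)

print(h1 / h2)  # ratio is 2.0
```

The same model explains the other branch: if demand pushes the reward's dollar value up along with the energy cost, the equilibrium hashrate stays put instead of falling.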

                                                                    1. 1

                                                                      Either way, the mining will happen at a price point where the mining cost (energy+capital) meets the block reward value.

                                                                      You forgot one word: average.

                                                                      1. 2

                                                                        It is implied. The sentence makes no sense without it.

                                                                        1. 1

                                                                          And don’t you see the huge security issue?

                                                                2. 1

                                                                  Much of the brains in the cryptocurrency scene appear to be in consensus that PoW is fundamentally flawed and this has been the case for years.

                                                                  PoS has no such energy requirements. Peercoin (2012) was one of the first, Blackcoin, Decred, and many more serve as examples. Ethereum, #2 in “market cap”, is moving to PoS.

                                                                  So to say “ [all blockchain based technology] is doomed to die as the price of energy goes up” is silly.

                                                                  1. 1

                                                                    Much of the brains in the cryptocurrency scene appear to be in consensus that PoW is fundamentally flawed and this has been the case for years.

                                                                    Hum… are you saying that Bitcoin miners have no brain? :-D

                                                                    I know that PoS, in theory, is more efficient.
                                                                    The fun fact is that all the implementations I’ve seen in the past were based on stakes in PoW-based cryptocurrencies. Has that changed?

                                                                    As for Ethereum, I will be happy to see how they implement the PoS… when they do.

                                                                    1. 2

                                                                      Blackcoin had a tiny PoW bootstrap phase, maybe weeks worth and only a handful of computers. Since then, for years, it has been purely PoS. Ethereum’s goal is to follow Blackcoin’s example, an ICO, then PoW, and finally a PoS phase.

                                                                      The single problem PoW once reasonably solved better than PoS was egalitarian issuance. With miner consolidation this is far from being the case.

                                                                      IMHO, fair issuance is the single biggest problem facing cryptocurrency. It is the unsolved problem at large. Solving this issue would immediately change the entire industry.

                                                                      1. 1

                                                                        Well, proof of stake assumes that people care about the system.

                                                                        It sees the cryptocurrency in isolation.

                                                                        An economist would object that a stakeholder might gain a lot by breaking the currency itself, despite the in-currency loss.

                                                                        There are many ways to gain value from a failure: eg buying surrogate goods for cheap and selling them after the competitor’s failure has increased their relative value.

                                                                        Or by predicting the failure and then causing it, and selling consulting and books.

                                                                        Or a stakeholder might have a political reason to damage the people with a stake in the currency.

                                                                        I’m afraid that proof of stake is a naive solution to a misunderstood economic problem. But I’m not sure: I will surely take a look at Ethereum when it is PoS based.

                                                                  2. 0

                                                                    doomed to die as the price of energy goes up.

                                                                    Even the ones based on proof-of-share consensus mechanisms? How does that relate?

                                                                    1. 3

                                                                      Can you point to a working implementation so that I can give a look?

                                                                        Last time I checked, proof-of-share did not even work as a proof of concept… but I’m happy to be corrected.

                                                                      1. 2

                                                                        Blackcoin is Proof of Stake. (I’ve not heard of “Proof of Share”).

                                                                        Google returns 617,000 results for “pure pos coin”.

                                                                        1. 1

                                                                          Instructions to get on the Casper Testnet (in alpha) are here: https://hackmd.io/s/Hk6UiFU7z# . No need to bold your words to emphasize your beliefs.

                                                                          1. 3

                                                                            The emphasis was on the key requirement.

                                                                            I’ve seen so many cryptocurrencies die a few days after their ICO that I raised the bar for taking a new one seriously: if it doesn’t have a stable user base exchanging real goods with it, it’s just another waste of time.

                                                                            Also, note that I’m not against alternative coins. I’d really like to see a working and well designed alt coin.
                                                                            And I like related experiments such as GNU Taler.

                                                                            I’m just against scams and people trying to fool other people.
                                                                            For example, the Casper Testnet is a PoS built on top of a PoW (as Ethereum currently is).

                                                                            So, let’s try again: do you have a working implementation of a proof of stake to suggest?

                                                                            1. 1

                                                                              It’s not live or open-source, so I’d understand if you’re still skeptical, but Algorand has simulated 500,000 users.

                                                                              1. 1

                                                                                Again I don’t seem to understand your anger. We’re on a tech site discussing tech issues. You seem to be getting emotional about something that’s orthogonal to this discussion. I don’t think that emotional exhorting is particularly conducive to discussion, especially for an informed audience.

                                                                                And I don’t understand what you mean by working implementation. It seems like a testnet does not suffice. If your requirements are: widely popular, commonly traded coin with PoS, then congratulations you have built a set of requirements that are right now impossible to satisfy. If this is your requirement then you’re just invoking the trick question fallacy.

                                                                                Nano is a fairly prominent example of Delegated Proof of Stake and follows a fundamentally very different model than Bitcoin with its UTXOs.

                                                                                1. 3

                                                                                  No anger, just a bit of irony. :-)

                                                                                  By working implementation of a software currency I mean not just code and a few beta tester but a stable userbase that use the currency for real world trades.

                                                                                  Actually, that’s probably the minimal definition of “working implementation” for any currency, not just software ones.

                                                                                  I could get a little lengthy about vaporware, marketing and scams if I had to explain why an unused piece of software is broken by definition.
                                                                                  I develop an OS myself that literally nobody uses, and I would never sell it as a working implementation of anything.

                                                                                  I will look to Nano and delegated proofs of stake (and I welcome any direct link to papers and code… really).

                                                                                  But frankly, the sarcasm is due to a little disgust I feel for proponents of PoW/blockchain cryptocurrencies (to date, the only real ones I know of that work, despite being broken as actual long-term currencies): I can understand non-programmers who sell what they buy from programmers, but any competent programmer should just say “guys, Bitcoin was an experiment, but it’s pretty evident that it has been turned into a big Ponzi scheme. Keep out of cryptocurrencies! Or you are going to lose your real money for nothing.”

                                                                                  To me, programmers who don’t explain this are either incompetent enough to talk about something they do not understand, or are trying to profit from those other people, selling them their tokens (directly or indirectly).

                                                                                  This does not mean in any way that I don’t think a software currency can be built and work.

                                                                                  But as a hacker, my ethics prevent me from using people’s ignorance against them, as those who sell them “the blockchain revolution” do.

                                                                              2. 2

                                                                                The problem is that in the blockchain space, hypotheticals are pretty much worthless.

                                                                                Casper I do respect, they’re putting a lot of work in! But, as I note literally in this article, they’re discovering yet more problems all the time. (The latest: the security flaws.)

                                                                                PoS has been implemented in a ton of tiny altcoins nobody much cares about. Ethereum is a great big coin with hundreds of millions of dollars swilling around in it - this is a different enough use case that I think it needs to be regarded as a completely different thing.

                                                                                The Ethereum PoS FAQ is a string of things they’ve tried that haven’t quite been good enough for this huge use case. I’ll continue to say that I’ll call it definitely achievable when it’s definitely achieved.

                                                                        2. 4

                                                                          ASICboost was fixed by segwit. Bitcoin isn’t subject to ASICboost anymore, but Bitcoin Cash is.

                                                                          1. 2

                                                                            Covert asicboost was fixed with segwit, overt is being used: https://mobile.twitter.com/slush_pool/status/977499667985518592

                                                                        1. 3

                                                                          I’m in awe.

                                                                          To quote someone’s review of Annihilation: “It’s super upsetting! In a good way!”

                                                                          1. 15

                                                                            Java, XML, Soap, XmlRpc, Hailstorm, .NET, Jini, oh lord I can’t keep up. And that’s just in the last 12 months!

                                                                            Oh simpler times when we only had 7 new technologies in the last 12 months. Also after I read that I realized this was published in 2001 and it suddenly made a lot more sense.

                                                                            All they’ll talk about is peer-to-peer this, that, and the other thing. Suddenly you have peer-to-peer conferences, peer-to-peer venture capital funds, and even peer-to-peer backlash with the imbecile business journalists dripping with glee as they copy each other’s stories: “Peer To Peer: Dead!”

                                                                            s/peer-to-peer/blockchain/g, this may have been from 2001 but it’s still so relevant

                                                                            1. 2

                                                                              What are the 2018 equivalents? Obviously Blockchain: Is there anything else that has that ‘new hotness’ quality which makes it irresistible to neophiles?

                                                                              1. 8

                                                                                IOT, AI/ML, Serverless and of course: microservices

                                                                                1. 3

                                                                                  Oo yes. Docker et al definitely qualify.

                                                                                  1. 2

                                                                                    I forgot the most important one: kubernetes

                                                                                2. 1

                                                                                  Also of interest is the converse: what are the things that have recently lost (or are in the process of losing) this quality?

                                                                                  1. 3

                                                                                    Peer to peer.

                                                                                    1. 3

                                                                                      recently… :)

                                                                                    2. 2

                                                                                      I’m hearing less about big data and nosql

                                                                                      1. 1

                                                                                        Big data has folded into AI/ML, or just analytics

                                                                                        1. 2

                                                                                          On top of it, we have a new fad of stronger-consistency DB’s with SQL layers. One of few fads I like, too. I hope they design even more. :)

                                                                                1. 25

                                                                                  Not fixing the problem but rather leaving it up to see if someone else fixes it isn’t an experiment, it’s just passive-aggressive.

                                                                                  1. 8

                                                                                    She did tell her manager at least. I mean she wanted to see if anyone else would pick up on it.

                                                                                    1. 3

                                                                                      Given the circumstances, I think at the minimum a written notification to the manager to get their sign-off would have been a good idea.

                                                                                      These things are always easy in hindsight of course.

                                                                                  1. 3

                                                                                    I put Ubuntu on an old Macbook Pro that I wasn’t using anymore.

                                                                                    Everything worked beautifully except for the trackpad. It was incredibly sensitive. Over the years, I’ve come to rest my thumb on the trackpad. OSX was smart enough to ignore it. I couldn’t get Ubuntu to. In addition to that, I couldn’t ever get the trackpad to react anywhere near as well as it worked on OSX.

                                                                                    Too bad, it would have been a nice way to repurpose an old machine.

                                                                                    1. 1

                                                                                      Modern libinput is supposed to do palm / thumb rejection. Was it installed?

                                                                                      1. 1

                                                                                        Yes, it didn’t work well.

                                                                                    1. 1

                                                                                      If I understand the post correctly, this seems like too big and obvious a failure. I kind of can’t believe Debian and Ubuntu never thought about that.

                                                                                      Did someone try injecting a manipulated package? I’d assume that the signed manifest contains not only URLs and package versions but also some kind of shasum, at least?

                                                                                      1. 2

                                                                                        Looks like that’s exactly what apt is doing: it verifies the checksum served in the signed manifest: https://wiki.debian.org/SecureApt#How_to_manually_check_for_package.27s_integrity

                                                                                        The document mentions it uses MD5 though, so maybe there’s a vector for collisions here, but it’s not as trivial as the post indicates, I’d say.

                                                                                        Maybe there’s marketing behind it? Packagecloud offers repositories with TLS transport…

                                                                                        1. 2

                                                                                          Modern apt repos contain SHA256 sums of all the metadata files, signed by the Debian gpg key & each individual package metadata contains that package’s SHA256 sum.

                                                                                          That said, they’re not wrong that serving apt repos over anything but https is inexcusable in the modern world.

                                                                                          1. 2

                                                                                            You must live on a planet where there are no users who live behind bad firewalls and MITM proxies that break HTTPS, because that’s why FreeBSD still doesn’t use HTTPS for … anything? I guess we have it for the website and SVN, but not for packages or portsnap.

                                                                                            1. 1

                                                                                              There’s nothing wrong with being able to use http if you have to: https should be the default however.

                                                                                              1. 1

                                                                                                https is very inconvenient to do on community-run mirrors

                                                                                                See also: clamav antivirus

                                                                                                1. 1

                                                                                                  In the modern world with letsencrypt it’s nowhere near as bad as it used to be, though.

                                                                                                  1. 1

                                                                                                    I don’t think I would trust third parties to be able to issue certificates under my domain.

                                                                                                    It is even more complicated for clamav where servers may be responding to many different domain names based on which pools they are in. You would need multiple wildcards.

                                                                                            2. 1

                                                                                              each individual package metadata contains that package’s SHA256 sum

                                                                                              Is the shasum of every individual package not included in the (verified) manifest? That would be a major issue then, as it could be forged alongside the package.

                                                                                              But if it is, then forging packages should require SHA256 collisions, which should be safe. And package integrity verified.

                                                                                              Obviously, serving via TLS won’t hurt security, but (given that letsencrypt is fairly young) it depends on a centralized CA structure and adds additional costs - and arguably adds a little more privacy about which packages you install.

                                                                                              1. 3

                                                                                                A few days ago I was searching this same topic when, after seeing the apt update log, I found this site with some ideas about it: https://whydoesaptnotusehttps.com, including the point about privacy.
                                                                                                I think the cost of intermediate cache proxies and the bandwidth for the distribution servers probably adds up to more than the cost of a TLS certificate (many offer alternative torrent files for the live CD to offload this cost).

                                                                                                Also, the packagecloud article implies that serving over TLS removes the risk of MitM, but it just makes it harder, and without certificate pinning only a little. I’d mostly put this article down to the marketing approach; there are calls to action sprinkled through the text.

                                                                                                1. 1

                                                                                                  https://whydoesaptnotusehttps.com

                                                                                                  Good resource, sums it up pretty well!

                                                                                                  Edit: It doesn’t answer the question of whether SHA256 sums for each individual package are included in the manifest. But if not, all of this would make no sense, so I assume and hope so.

                                                                                                  1. 2

                                                                                                    Hi. I’m the author of the post – I strongly encourage everyone to use TLS.

                                                                                                    SHA256 sums of the packages are included in the metadata, but this does nothing to prevent downgrade attacks, replay attacks, or freeze attacks.

                                                                                                    I’ve submitted a pull request to the source of “whydoesaptnotusehttps” to correct the content of the website, as it implies several incorrect things about the APT security model.

                                                                                                    Please re-read my article and the linked academic paper. The solution to the bugs presented is simply to use TLS, always. There is no excuse not to.

                                                                                                    1. 2

                                                                                                      TLS is a good idea, but it’s not sufficient (I work on TUF). TUF is the consequence of this research, you can find other papers about repository security (as well as current integrations of TUF) on the website.

                                                                                                      1. 1

                                                                                                        Yep, TUF is great – I’ve read quite a bit about it. Is there an APT TUF transport? If not, it seems the best APT users can do is use TLS and hope someone writes apt-transport-tuf :)

                                                                                                      2. 1

                                                                                                        Thanks for the post and the research!

                                                                                                        It’s not that easy to switch to HTTPS: a lot of repositories (including the official Ubuntu ones) do not support it. Furthermore, most cloud providers provide their own mirrors and caches. There’s no way to verify whether the whole “apt chain” of package uploads, mirrors, and caches uses HTTPS. Even if you enforce HTTPS, the described vectors (if I understood correctly) remain an issue in the mirror/cache scenario.

                                                                                                        You may be right that the current mitigations for these vectors are not sufficient, but a security model for package management that relies on TLS alone is not sufficient either; the mitigation needs to be something else, e.g. signing packages and verifying them upon installation.

                                                                                                  2. 2

                                                                                                    Is the shasum of every individual package not included in the (verified) manifest? That would be a major issue then, as it could be forged alongside the package.

                                                                                                    Yes, there’s a chain of trust: the checksum of each package is contained within the repo manifest file, which is ultimately signed by the Debian archive key. It’s a bit like a git repository: a chain of SHA256 sums of which only the final one needs to be signed to trust the whole.
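                                                                                                    That chain can be sketched with plain hashlib (a toy illustration with made-up file contents; in the real repository the files are Release, Packages, and the .deb itself, and the GPG verification of Release is omitted here):

```python
import hashlib

# The signed Release file lists the SHA256 of Packages,
# and Packages lists the SHA256 of each .deb.
deb = b"fake .deb contents"
deb_sha = hashlib.sha256(deb).hexdigest()

packages = f"Package: hello\nSHA256: {deb_sha}\n".encode()
packages_sha = hashlib.sha256(packages).hexdigest()

# Release is the only file that needs a signature; trusting it
# transitively trusts every hash below it.
release = f"SHA256:\n {packages_sha} {len(packages)} main/binary-amd64/Packages\n"

assert hashlib.sha256(packages).hexdigest() in release       # link 1: Release -> Packages
assert hashlib.sha256(deb).hexdigest() in packages.decode()  # link 2: Packages -> .deb
print("chain verified")
```

                                                                                                    A tampered .deb breaks link 2, and a tampered Packages file breaks link 1, so an on-path attacker can’t substitute packages without also breaking the signature on Release.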

                                                                                                    There are issues with HTTP downloads: e.g. they reveal which packages you fetch, so by inspecting the data flow an attacker could learn which attacks are likely to succeed against you. But package replacement on the wire isn’t one of them.