1. 4

    As usual, David apparently fails or refuses to understand how and why PoW is useful and must attack it at every opportunity (using his favorite rhetorical technique of linking negatively connoted phrases to vaguely relevant websites).

    That said, the article reminds me of a fun story - I went to a talk from a blockchain lead at <big bank> a while back and she related that a primary component of her job was assuring executives that, in fact, they did not need a blockchain for <random task>. This had become such a regular occurrence that she had attached this image to her desk.

    1. 10

      What would you consider a situation where PoW is useful, in the sense that no other alternative could match its advantages in some real-life use case?

      But otherwise, and maybe it’s just me since I agree with his premise, I see @David_Gerard as taking the opposite role to popular blockchain (over-)advocates, who claim that the technology is the holy grail for far too many problems. Even if one doesn’t agree with his conclusions, I enjoy reading his articles and find them very informative, since he doesn’t just oppose blockchains from an opinion-based position; he also seems to have the credentials to do so.

      1. 1

        Replying to @gerikson as well. I personally believe that decentralization and cryptographically anchored trust are extremely important (what David dismissively refers to as “conspiracy theory economics”). We know of two ways to achieve this: proof of work and proof of stake. Proof of stake is interesting but has some issues and trade-offs. If you don’t believe that PoW mining is some sort of anti-environmental evil (I don’t), it seems to generally offer better properties than PoS (like superior surprise-fork resistance).

        1. 13

          I personally believe that decentralization and cryptographically anchored trust are extremely important

          I personally also prefer decentralised or federated systems, when they have a practical advantage over a centralized alternative. But I don’t see this to be the case with most applications of the blockchain. Bitcoin, as a prime example, is to my knowledge too slow, too inconvenient, too unstable and too resource-hungry to have a practical application as a real substitute for money, either digital or virtual. One doesn’t have the time to wait 20m or more whenever one pays for lunch or buys some chewing gum at a corner shop, just because some other transactions got picked up first by a miner. It’s obviously different when you want to do something like micro-donations or buying illegal stuff, but I just claim that this isn’t the basis of a modern economy.

          Cryptography is a substitute for authority, that is true, but I don’t believe that this is always wanted. Payments can’t be easily reversed, addresses mean nothing, clients might lose support because the core developers arbitrarily change stuff. (I for example am stuck with 0.49mBTC from an old Electrum client, and I can’t do anything with it, since the whole system is a mess, but that’s rather unrelated.) This isn’t really the dynamic basis which capitalism has managed to survive on for this long.

          But even disregarding all of this, it simply is true that Bitcoin isn’t a proper decentralized network like BitTorrent. Since the role of the wallet and the miner is (understandably) split, these two parts of the network don’t scale equally. In China gigantic mining farms are set up using specialized hardware to mine, mine, mine. I remember reading that there was one farm that predominated over at least 10% of the total mining power. All of this seems to run contrary to the proclaimed ideals. Proof of Work, well, “works” in the most abstract sense that it produces the intended results on one side, at the cost of disregarding everything that can be disregarded, irrespective of whether it should be or not. And ultimately I prioritise other things over an anti-authority fetish, as do most people - which reminds us that even if everything I said is false, Bitcoin just doesn’t have the adoption to be significant enough to anyone but Crypto-Hobbyists, Looney Libertarians and some soon-to-fail startups in Silicon Valley.

          1. 5

            there was one farm that predominated over at least 10% of the total mining power

            There was one pool that was at 42% of the total mining power! such decentralization very security

              1. 5

                To be fair, that was one pool consisting of multiple miners. What I was talking about was a single miner controlling 10% of the total hashing power.

                1. 7

                  That’s definitely true.

                  On the other hand, if you look at incident reports like https://github.com/bitcoin/bips/blob/master/bip-0050.mediawiki — the pool policies set by the operators (often a single person has this power for a given pool) directly and significantly affect the consensus.

                  Ghash.io itself did have incentives to avoid giving reasons for accusations that would tank Bitcoin, but being close to 50% makes a pool a very attractive attack target: take over their transaction and parent-block choice, and you take over the entire network.

              2. 0

                But I don’t see this to be the case with most applications of the blockchain.

                Then I would advise researching it.

                One doesn’t have the time to wait 20m or more whenever one pays for lunch or buys some chewing gum at a corner shop

                Not trying to be rude, but it’s clear whenever anyone makes this argument that they don’t know at all how our existing financial infrastructure works. In fact, it takes months for a credit card transaction to clear to anything resembling the permanence of a mined bitcoin transaction. Same story with checks.

                Low-risk merchants (digital goods, face-to-face sales, etc.) rarely require the average 10 minute (not sure where you got 20 from) wait for a confirmation.

                If you do want permanence, Bitcoin is infinitely superior to any popular payment mechanism. Look into the payment limits set by high-value fungible goods dealers (like gold warehouses) for bitcoin vs. credit card or check.

                Bitcoin just doesn’t have the adoption to be significant enough to anyone but Crypto-Hobbyists, Looney Libertarians and some soon-to-fail startups in Silicon Valley.

                Very interesting theory - do you think these strawmen you’ve put up have collective hundreds of billions of dollars? As an effort barometer, are you familiar with the CBOE?

                1. 10

                  Please try to keep a civil tone here.

                  Also, it’s hard to buy a cup of coffee or a Steam game or a pizza with bitcoin. Ditto stocks.

                  1. -4

                    It’s hard to be nice when the quality of discourse on this topic is, for some reason, abysmally low compared to most technical topics on this site. It feels like people aren’t putting in any effort at all.

                    For example, why did you respond with this list of complete non-sequiturs? It has nothing to do with what we’ve been discussing in this thread except insofar as it involves bitcoin. I feel like your comments are normally high-effort, so what’s going on? Does this topic sap people’s will to think carefully?

                    (Civility is also reciprocal, and I’ve seen a lot of childish name-calling from the people I’m arguing with in this thread, including the linked article and the GP.)

                    Beyond the fact that this list is not really relevant, it’s also not true; you could have just searched “bitcoin <any of those things>” and seen that you can buy any of those things pretty easily, perhaps with a layer of indirection (just as you need a layer of indirection to buy things in the US if you already have EUR). In that list you gave, perhaps the most interesting example in bitcoin’s disfavor is Steam; Steam stopped accepting bitcoin directly recently, presumably due to low interest. However, it’s still easy to buy games from other sources (like Humble) with BTC.

                    1. 6

                      IMO, your comments are not doing much to raise that quality. As someone who does not follow Bitcoin or the blockchain all that much, I have not felt like any of your comments addressed anyone else’s comments. Instead, I have perceived you as coming off as defensive, with the attitude of “if you don’t get it you haven’t done enough research, because I’m right”, rather than trying to extol the virtues of the blockchain. Maybe you aren’t interested in correcting any of what you perceive as misinformation on here, and if so that’s even worse.

                      For example, I do not know of any place where I can buy pizza with bitcoin. You say it is possible, perhaps with a layer of indirection, but I have no idea what this layer of indirection is and you have left it vague, which does not lead me to trust your response.

                      In one comment you are very dismissive of people’s Bitcoins getting hacked, but as a lay person, I see news stories on this all the time with substantial losses and no FDIC, so someone like me considers this a major issue but you gloss over it.

                      Many of the comments I’ve read by you on this thread are similarly unhelpful, all the while claiming the person you’re responding to is some combination of lazy and acting dumb. Maybe that is the truth but, again, as an outsider, all I see is the person defending the idea coming off as kind of a jerk. Maybe for someone more educated on the matter you are spot on.

                      1. 5

                        There is a religious quality to belief in the blockchain, particularly Bitcoin. It needs to be perfect in order to meet expectations for it: it can’t be “just” a distributed database, it has to be better than that. Bitcoin can’t be “just” a payment system, it has to be “the future of currency.” Check out David’s book if you’re interested in more detail.

                  2. 8

                    In fact, it takes months for a credit card transaction to clear to anything resembling the permanence of a mined bitcoin transaction. Same story with checks.

                    But I don’t have to wait months for both parties to be content that the transaction is successful, only seconds, so this is really irrelevant to the point you are responding to, which is that if a Bitcoin transaction takes 10m to process then I have to wait 10m for my transaction to be done, which people might not want to do.

                    1. -1

                      Again, as I said directly below the text you quoted, most merchants don’t require you to wait 10 minutes - only seconds.

                    2. 5

                      Then I would advise researching it.

                      It is exactly because I looked into the inner workings of Bitcoin and the blockchain - as a proponent, I have to mention - that I became more and more skeptical about it. And I still do support various decentralized and federated systems: BitTorrent, IPFS, (proper) HTTP, Email, … but just because a structure offers the possibility of a decentralized network doesn’t mean that this potential is realized, or that it is necessarily superior.

                      Not trying to be rude, but it’s clear whenever anyone makes this argument that they don’t know at all how our existing financial infrastructure works. In fact, it takes months for a credit card transaction to clear to anything resembling the permanence of a mined bitcoin transaction. Same story with checks.

                      The crucial difference being that, let’s say, the cashier nearly instantaneously hears a beep and knows that it isn’t his responsibility, nor that of the shop, to make sure that the money is transferred. The bank, the credit card company, or whoever has signed a binding contract laying out that this technical part of the process is what they have to take care of, and if they don’t, they can be sued, since there is an absolute regulatory instance - the state - in the background. This mutual delegation of trust gives everyone a sense of security (regardless of how true or false it is) that makes people spend money instead of hoarding it, investing in projects instead of trading it for more secure assets. Add Bitcoin’s aforementioned volatility, and no reasonable person would want to use it as their primary financial medium.

                      If you do want permanence, Bitcoin is infinitely superior to any popular payment mechanism.

                      I wouldn’t consider 3.3 to 7 transactions per second infinitely superior to, for example, Visa with an average of 1,700 t/s. If you think about it, there are far more than just 7 purchases being made per second around the whole world, so this isn’t realistically feasible. But on the other side, as @friendlysock Bitcoin makes up for it by not having too many things you can actually buy with it: the region I live in has approximately a million or so inhabitants, but according to CoinMap, even by the most generous measures, only 5 shops (within a 30km radius) accept it as a payment method. And most of those just offer it to promote themselves anyway.

                      Very interesting theory - do you think these strawmen you’ve put up have collective hundreds of billions of dollars? As an effort barometer, are you familiar with the CBOE?

                      (I prefer to think of my phrasing as exaggeration and a handful of other literary devices, rather than a fallacy, but never mind that.) I’ll give you this: it has been a while since I’ve properly engaged with Bitcoin, and I was always more interested in the technological than the economical side, since I have a bit of an aversion towards libertarian politics. And it might be true that money is invested, but that still doesn’t change anything about all the other issues. It remains a bubble, a volatile, unstable, unpredictable bubble, and as is typical for bubbles, people invest disproportionate sums into it - which in the end makes it a bubble.

                      1. 0

                        The crucial difference being that, let’s say, the cashier nearly instantaneously hears a beep and knows that it isn’t his responsibility, nor that of the shop, to make sure that the money is transferred.

                        Not quite. The shop doesn’t actually have the money. The customer can revoke that payment at any time in the next 90 or 180 days, depending. Credit card fraud (including fraudulent chargebacks) is a huge problem for businesses, especially online businesses. There are lots of good technical articles online about combatting this with machine learning which should give you an idea of the scope of the problem.

                        makes people spend money instead of hoarding it,

                        Basically any argument of this form (including arguments for inflation) doesn’t really make sense with the existence of arbitrage.

                        Add Bitcoin’s aforementioned volatility, and no reasonable person would want to use it as their primary financial medium.

                        So it sounds like it would make people… spend money instead of hoarding it, which you were just arguing for?

                        I wouldn’t consider 3.3 to 7 transactions per second infinitely superior to, for example, Visa with an average of 1,700 t/s.

                        https://lightning.network

                        as @friendlysock Bitcoin makes up for it by not having too many things you can actually buy with it

                        This is just patently wrong. The number of web stores that take Bitcoin directly is substantial (both in number and traffic volume), and even the number of physical stores (at least in the US) is impressive given that it’s going up against a national currency. How many stores in the US take even EUR directly?

                        Anything you can’t buy directly you can buy with some small indirection, like a BTC-USD forex card.

                        It remains a bubble, a volatile, unstable, unpredictable bubble

                        It’s certainly volatile, and it’s certainly unstable, but it may or may not be a bubble depending on your model for what Bitcoin’s role in global finance is going to become.

                        1. 5

                          Not quite. The shop doesn’t actually have the money. The customer can revoke that payment at any time in the next 90 or 180 days, depending

                          You’ve still missed my point - it isn’t important whether the money has actually been transferred, but that there is trust that a framework behind all of this will guarantee that the money will be there when it has to be, as well as a protocol specifying what has to be done if the payment is to be revoked, if a purchase is to be undone, etc.

                          Credit card fraud (including fraudulent chargebacks) is a huge problem for businesses, especially online businesses.

                          Part of the reason, I would suspect, is that the Internet was never made to be a platform for online businesses - but I’m not going to deny the problem. I’m certainly not a defender of banks and credit card companies - just an opponent of Bitcoin.

                          Basically any argument of this form (including arguments for inflation) doesn’t really make sense with the existence of arbitrage.

                          Could you elaborate? You have missed my point a few times already, so I’d rather we understand each other instead of having two monologues.

                          So it sounds like it would make people… spend money instead of hoarding it, which you were just arguing for?

                          No - if it’s volatile, people won’t buy into it in the first place. And if a currency is unstable, like Bitcoin inflating and deflating all the time, people don’t even know what to do with it if it were their main asset (which is what I understand you are promoting, but nobody does that). I doubt it will ever happen, since if prices were that uncertain, the whole economy would suffer, because all the “usual” incentives would be distorted.

                          https://lightning.network

                          I hadn’t heard of this until you mentioned it, but it seems quite new, so time has yet to test this yet-another-Bitcoin-related project that has popped up. Even disregarding that it will again need to first make a name for itself, then be accepted, then adopted, etc., from what I gather it’s not the ultimate solution (though I might be wrong), especially since it seems to encourage centralization, which I believe is what you are so afraid of.

                          This is just patently wrong. The number of web stores that take Bitcoin directly is substantial (both in number and traffic volume),

                          Sure, there might be a great quantity of shops (which, as I mentioned, use Bitcoin as a medium to promote themselves), but I - and from what I know, most people - don’t really care about these small, frankly often dodgy online shops. Can I use it to pay directly on Amazon? Ebay? Sure, you can convert it back and forth, but all that means is that Bitcoin and other cryptocurrencies are just an extra step for lifestylists and hipsters, with no added benefit. And these shops don’t even accept Bitcoin directly; to my knowledge they always just convert it into their national currency - i.e. the one they actually use, and the one Bitcoin’s value is always compared to. What is even Bitcoin without the USD, the currency it hates but can’t stop comparing itself to?

                          and even the number of physical stores (at least in the US) is impressive given that it’s going up against a national currency.

                          The same problems apply as I’ve already mentioned, but I wonder: have you actually ever used Bitcoin to pay in a shop? I’ve done it once and it was a hassle - in the end I just paid with regular money like a normal person, because it was frankly too embarrassing to have the cashier find the right QR code, me take out my phone, wait for me to get an internet connection, try and scan the code, wait, wait, wait…. And that is of course only if you decide to make a trip to some place you’d usually never go, to buy something you don’t even need, just for the sake of spending the money.

                          Ok when you’re buying drugs online or doing something with microdonations, but otherwise… meh.

                          How many stores in the US take even EUR directly?

                          Why should they? And even if they do, they convert it back to US dollars, because that’s the common currency - there isn’t really a point in a currency (one could even question if it is one), when nobody you economically interact with uses it.

                          Anything you can’t buy directly you can buy with some small indirection, like a BTC-USD forex card.

                          So a roundabout payment over a centralized instance - this is the future? Seriously, this dishonesty of Bitcoin advocates (and Libertarians in general) is why you guys are so unpopular. I am deeply disgusted that I ever advocated for this mess.

                          It’s certainly volatile, and it’s certainly unstable, but it may or may not be a bubble depending on your model for what Bitcoin’s role in global finance is going to become.

                          So you admit that it has none of the necessary preconditions to be a currency… but for some reason it will… do what exactly? If you respond to anything I mentioned here, at least tell me this: what is your “model” for what Bitcoin’s role is going to be?

                  3. 14

                    Why don’t you believe it is anti-environmental? It certainly seems to be pretty power hungry. In fact, its hunger for power is part of why it’s effective. All of the same arguments about using less power should apply.

                    1. -1

                      Trying to reduce energy consumption is counterproductive. Energy abundance is one of the primary driving forces of civilizational advancement. Much better is to generate more, cleaner energy. Expending a few terawatts on substantially improved economic infrastructure is a perfectly reasonable trade-off.

                      Blaming bitcoin for consuming energy is like blaming almond farmers for using water. If their use of a resource is a problem, you should either get more of it or fix your economic system so externalities are priced in. Rationing is not an effective solution.

                      1. 10

                        on substantially improved economic infrastructure

                        This claim definitely needs substantiation, given that in practice bitcoin does everything worse than the alternatives.

                        1. -1

                          bitcoin does everything worse than the alternatives.

                          Come on David, we’ve been over this before and discovered that you just have a crazy definition of “better” explicitly selected to rule out cryptocurrencies.

                          Here’s a way Bitcoin is better than any of its traditional digital alternatives; bitcoin transactions can’t be busted. As you’ve stated before, you think going back on transactions at the whim of network operators is a good thing, and as I stated before I think that’s silly. This is getting tiring.

                          A few more, for which you no doubt have some other excuse for why this is actually a bad thing; Bitcoin can’t be taken without the user’s permission (let me guess; “but people get hacked sometimes”, right?). Bitcoin doesn’t impose an inflationary loss on its users (“but what will the fed do?!”). Bitcoin isn’t vulnerable to economic censorship (don’t know if we’ve argued about this one; I’m guessing you’re going to claim that capital controls are critical for national security or something.). The list goes on, but I’m pretty sure we’ve gone over most of it before.

                          I’ll admit that bitcoin isn’t a panacea, but “it does everything worse” is clearly a silly nonsensical claim.

                        2. 4

                          Reducing total energy consumption may or may not be counterproductive. But almost every industry I can name has a vested interest in being more power-efficient for its particular usage of energy. The purpose of a car isn’t to burn gasoline; it is to get people places. If it can do that with less gasoline, people are generally happier with it.

                          PoW, however, tries to maximize power consumption via second-order effects, with the goal of making it expensive to try to subvert the chain. It’s clever because it leverages economics to keep it in everyone’s best interest not to fork, but it’s not the same as something like a car, where reducing energy consumption is part of the value add.

                          I think that this makes PoW significantly different than just about any other use of energy that I can think of.
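
                          For anyone who hasn’t looked at the mechanism, here is a toy sketch of the hash lottery in Python (real Bitcoin double-SHA256es an 80-byte header and encodes the target in a compact form, but the principle is the same): each extra bit of difficulty roughly doubles the expected number of hashes, and therefore the energy spent.

                          ```python
                          import hashlib

                          def mine(header: bytes, difficulty_bits: int) -> int:
                              """Search for a nonce whose hash falls below the target.

                              Expected work is ~2**difficulty_bits hash evaluations, which is
                              why difficulty translates directly into energy consumption.
                              """
                              target = 1 << (256 - difficulty_bits)
                              nonce = 0
                              while True:
                                  digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
                                  if int.from_bytes(digest, "big") < target:
                                      return nonce
                                  nonce += 1

                          # Example: a low difficulty finishes quickly; every +1 bit doubles the work.
                          print(mine(b"example block header", 16))
                          ```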

                          1. 3

                            Indeed. The underlying idea of Bitcoin is to simulate the mining of gold (or any other finite, valuable resource). By ensuring that an asset is always difficult to procure (a block reward every 10 minutes, block reward halving every 4 years), there’s a guard against some entity devaluing the currency (literally by fiat).

                            This means of course that no matter how fast or efficient the hardware used to process transactions becomes, the difficulty will always rise to compensate for it. The energy per hash calculation has fallen precipitously, but the number of hash calculations required to find a block has risen to compensate.
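
                            To make the compensation mechanism concrete, here is a simplified sketch of the retargeting rule (Bitcoin adjusts every 2016 blocks and clamps the change to a factor of 4; the numbers below are illustrative only):

                            ```python
                            def retarget(old_difficulty: float, actual_seconds: float) -> float:
                                """Scale difficulty so blocks keep arriving roughly every 10 minutes,
                                no matter how fast or efficient the mining hardware becomes."""
                                expected_seconds = 2016 * 10 * 60            # one retarget period at 10 min/block
                                ratio = expected_seconds / actual_seconds    # > 1 if blocks arrived too quickly
                                ratio = max(0.25, min(4.0, ratio))           # clamp, as Bitcoin does
                                return old_difficulty * ratio

                            # Hardware got twice as fast, so the period took half as long:
                            # difficulty doubles, and the hash count per block doubles with it.
                            print(retarget(1.0, 2016 * 10 * 60 / 2))  # -> 2.0
                            ```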

                      2. 6

                        We’ve been doing each of these for a long time without proof of work. There are lots of systems that are decentralized, with parties that have to look out for each other a bit. The banking system is an example. They have protocols and lawyers to take care of most problems. Things work fine most of the time. There are also cryptographically-anchored trust systems like trusted timestamping and CAs, which do what they’re set up to do within their incentives. If we can do both in isolation without PoW, we can probably do both together without PoW, using some combination of what’s already worked.

                        I also think we haven’t even begun to explore the possibilities of building more trustworthy charters, organizational incentives, contracts, and so on. The failings people speak of with centralized organizations are almost always about for-profit companies or strong-arming governments whose structure, incentives, and culture are prone to causing problems like that. So maybe we should eliminate the root cause instead of the tools the root cause uses to bring problems, since it will probably just bring new forms of problems. Regulation, disruption, or bans of decentralized payment are what I predicted the response would be, with some reactions already happening. They just got quite lucky that big banks like Bank of America got interested in subverting it through the legal and financial system for their own gains. Those heavyweights are probably all that held the government dogs back. Ironically, the same ones that killed Wikileaks by cutting off its payments.

                    2. 8

                      In what context do you view proof-of-work as useful?

                      1. 11

                        You have addressed 0 of the actual content of the article.

                      1. 7

                        This is a mess.

                        • Much of the technical complexity of the web has been generated by web designers who refuse to understand and accept the constraints of the medium. Overhauling the design when the implementation becomes intolerably complex is only an option when you are the designer. This luxury is unavailable to many people who build websites.
                        • Suggesting that CSS grid is somehow the reincarnation of table-based layout is astonishingly simple-minded. Yes, both enable grid-based design. CSS grid achieves this without corrupting the semantic quality of the document. They’re both solutions to the same problem. But there are obvious and significant differences between how they solve that problem. It’s hard to fathom how the author misses that point.
                        • The fetishization of unminified code distribution is really bizarre. The notion that developers should ship uncompressed code so that other developers can read that code is bewildering. Developers should make technical choices that benefit the user. Code compression, by reducing the bandwidth and time required to load the webpage, is very easily understood as a choice for the user. The author seems to prioritize reliving a romanticized moment in his adolescence when he learned to build websites by reading the code of websites he visited. It’s hard not to feel contempt for someone who would prioritize nostalgia over the needs of someone trying to load a page from their phone over a poor connection so they can access essential information like a business address or phone number.
                        • New information always appears more complex than old information when it requires updates to a mental model. This doesn’t mean that the updated model is objectively more complex. It might be more complex. It might not be more complex. The author offers no data that quantifies an increased complexity. What he does offer is a description of the distress felt by people who resist updating their mental model in response to new information. Whether or not his conclusions are correct, I find here more bias than observation.
                        1. 8

                          CSS grid achieves this without corrupting the semantic quality of the document.

                          When was the last time you saw a page that follows semantic guidelines? Pages are so full of crap and dynamically generated tags that hope was lost a long time ago. Developers heard “don’t use tables” and took it so far that they will put tabular data in floating divs. Are you kidding me?! Don’t even get me started about SPAs.

                          The fetishization of unminified code distribution is really bizarre.

                          The point is, I think, that the code should not require minifying and only contain the bare minimum to get the functionality required. The point is to have 1kbyte unminified JS instead of 800kbyte minified crap.

                          1. 4

                            New information always appears more complex than old information when it requires updates to a mental model.

                            I feel like you completely missed his point here. He isn’t just talking about how complex the new stuff is. He even said flexbox was significantly better and simpler to use than “float”. What he is resisting is the continual reinvention that goes on in webdev. A new build tool every week. A new flavor of framework every month. An entire book written about loading fonts on the web. Sometimes you legitimately need that new framework or a detailed font-loading library for your site. But frankly, even if you are a large company, you probably don’t need most of the new fad-of-the-week that happens in web dev. Flexbox is probably still good enough for your needs. React is a genuine improvement for the state of SPA development. But 3-4 different build pipelines? No, you probably don’t need that.

                            And while we are on the subject

                            CSS grid achieves this without corrupting the semantic quality of the document.

                            Nobody cares about the semantic quality of the document. It doesn’t really help you with anything. HTML is about presentation and it always has been. CSS allows you to modify the presentation based on what is presenting it. But you still can’t get away from the fact that how you lay things out in the html has an effect on the css you write. The semantic web has gone nowhere and it will continue to go nowhere because it’s built on a foundation that fundamentally doesn’t care about it. If we wanted semantic content we would have gone with xhtml and xslt. We didn’t because at heart html is about designing and presenting web pages not a semantic document.

                            1. 3

                              Nobody cares about the semantic quality of the document.

                              Anybody who uses assistive technology cares about its semantic quality.

                              Anybody who chooses to use styles in Word documents understands why they’d want to write documents with good semantic quality.

                              You still can’t get away from the fact that how you lay things out in the html has an effect on the css you write.

                              That’s… the opposite of the point.

                              All of the cycles in web design – first using CSS at all (instead of tables in the HTML) and then making CSS progressively more powerful – have been about the opposite:

                              How you lay things out on the screen should not determine how the HTML is written.

                              Of course the CSS depends on the HTML, as you say. The presentation code depends on the content! But the content should not depend on the presentation code. That’s the direction CSS has been headed. And with CSS Grid, we’re very close to the point where content does not have to have a certain structure in order to permit a desired presentation.

                              And that’s my main issue with the essay: it presents this forward evolution in CSS as cyclical.

                              (The other issue is that the experience that compelled the author to write the article in the first place – the frenetic wheel reinvention that has taken hold of the Javascript world – is wholly separate from the phases of CSS. As far as that is concerned, I agree with him: a lot of that reinvention is cyclical and essentially fashion-driven, is optional for anyone who isn’t planning on pushing around megabytes of Javascript, and that anyone who is planning on doing that ought to pause and reconsider their plan.)

                              If we wanted semantic content we would have gone with xhtml and xslt.

                              Uh… what? XHTML is absolutely no different from HTML in terms of semantics and XSLT is completely orthogonal. XML is syntax, not semantics. It’s an implementation detail at most.

                              1. 3

                                If you are building websites, please do more research and reconsider your attitude about semantic markup. Semantic markup is important for accessibility technologies like screen readers. RSS readers and search indexes also benefit from semantic markup. In short, there are clear and easily understood reasons to care about the semantic web. People do care about it. All the front-end developers I work with review the semantic quality of a document during code reviews, and the reason they care is that it has a real impact on the user.

                                1. 2

                                  Having built and relied on a lot of semantic web (lowercase) tech, this is just untrue. Yes, many devs don’t care to use even basic semantics (h1/section instead of div/div), but that doesn’t mean there isn’t enough good stuff out there to be useful, or that you can’t convince them to fix something for a purpose.

                                  1. 1

                                    I don’t know what you worked on, but I’m guessing it was niche, or else you spent a lot of time dealing with sites that most emphatically didn’t care about the semantic web. The fact is that a few sites caring doesn’t mean the industry cares. The majority don’t care. They just need the web page to look just so on both desktop and mobile. Everything else is secondary.

                              1. 2

                                Honestly, I don’t see why this post is resonating with people so much (which it clearly is!). Most of the author’s technical points are incorrect, or fail to acknowledge the objective superiority of the newer solution. And most of his issues appear to be self-inflicted.

                                No one is saying you need all these fancy new build tools and package managers for brochureware sites. But they are extremely handy when building actual applications.

                                1. 9

                                  objective superiority of the newer solution

                                  Is that so? Even though these were somewhat different topics (object-oriented programming, syntax highlighting, etc.), a lot of things that people used to call objectively superior turn out to be “subjectively superior” at best, if one actually bothers to look at them in an objective way.

                                  Other than that I am inclined to claim that it’s really hard to define superiority of software or even techniques. Few people would argue about the superiority of an algorithm without a use case, yet people do the same thing with technologies and call them better, without mentioning the use case at all.

                                  I think a problem of our times is that one loses track of the complexity and anatomy of problems, seeing only a very small part of a problem. Then we try to fix it and on the way move the problem to another place. When that new problem bothers us enough, we repeat the process.

                                  This looks similar to looking for something like the perfect search or index algorithm for every use case, even disregarding limits such as available memory. It’s good that people love to go and build generic abstractions. It’s of extreme importance in IT, but it’s easy to end up in a state where progress kind of goes in a circle, when disregarding limitations and trying to find a “one size fits all”.

                                  In web development this would be a framework for both real-time and non-real-time use, for REST-based microservice architectures but also supporting RPC and real-time streaming, as well as being a blog engine and what not, while also being very low-level, going down to the HTTP or even TCP level, and making all of that equally easy.

                                  This sounds great, and it’s certainly not impossible. However, it still might not be the right way to go, and someone will always find some use case that they don’t see covered well enough - something that isn’t easy enough for their use case out of the box and that can be done more easily by simply writing it from scratch, maybe just with some standard library.

                                  I don’t say that projects like that are bad. However, since they get reinvented over and over, I think that instead of trying to invent tools for everything, it would be worthwhile to strive for completing or extending the set of tools to pick from.

                                  And I think that is what’s starting to happen more and more anyway. I would even say that the frameworks we see today are a symptom of it. The set of tools is growing, and instead of being multitools like they used to be a decade ago (and therefore not working well with others), they nowadays seem more like tool belts, with many already available tools in them.

                                  Or to say it in more technical terms: frameworks nowadays (compared to a decade or so ago) are less like huge libraries forcing you into a corset, and more like software or library distributions, with blueprints and maybe manuals on how things can be done.

                                  1. 7

                                    No one is saying you need all these fancy new build tools and package managers for brochure-ware sites. But they are extremely handy when building actual applications

                                    Except that I see people say that all the time. I see them say it at work. I see them say it on social media. I see them say it at conferences. There’s always some reason why that fancy new tool is needed for the static site they are working on. They need it so they can use LESS or SASS for the CSS. They need it so that they can use react to build the html statically before they serve it… (Yes. I’ve really heard someone say that.). They need it because that one metrics javascript tracking library is only available from npm and they can just use that other build tool to ensure it’s in the right place.

                                    This post resonates with people because, while they understand that it should be the way you say it is, they can see people saying clearly silly things with a whole lot of unreasonable excitement everywhere they look. It’s so prevalent that when they see someone in web dev saying something so eminently reasonable, they can’t help but stand up and applaud. It’s not a problem with the technologies themselves. It’s more of a problem with the way the culture looks to the people observing it.

                                    1. 1

                                      Except that I see people say that all the time.

                                      Have an example? I’ve not seen that. I’ve seen lots of tutorials showing how to do $simpleThing with $complexTool, but that’s just because small examples are necessary. I’ve not seen any claims that $complexTool is required for $simpleThing.

                                      1. 2

                                        It might be a matter of emphasis. But when the only examples you can find for your responsive static brochure site are the examples you reference above, it sends a perhaps unintended message that this is how you do those things. I can’t point to specific examples around in-person conversations, for obvious reasons. But in a way you make my point for me. It’s the reason why, when you go to many sites that should be just HTML, CSS and a small amount of JavaScript, you end up downloading MBs of JavaScript. From the outside looking in, it certainly appears that as an industry we’ve decided that this is how you do things, so why fight it?

                                  1. 4

                                    At the other end of the spectrum, I feel like everything that used to be hard is pretty easy now, or at least way easier. Compilers, debuggers, static analysers, programming languages… writing systems software and embedded stuff is so much easier than it used to be (very possible that I was just doing it wrong before, too.)

                                    1. 6

                                      His article is primarily about webdev which seems to be uniquely on a fast moving treadmill of continual change and reinvention. The areas you mention are older and more established and most people agree on the right way to do them which means we’ve automated the right ways quite a bit. In the browser and JS worlds the right way changes every few months which means the tools and automation changes every few months too. That’s a lot of mental overhead that’s not directly related to the problem you want to solve usually.

                                      1. 2

                                        Oh, yes, I know and agree. I keep up with web stuff even though I don’t do it often. I guess my comment was only tangentially related.

                                      2. 2

                                        Great point. Even on the web, it would’ve been way harder to achieve the current level of functionality. There would have been an uphill learning process for components without StackOverflow or JavaScript-based testbeds for practice. The Web 2.0-style functionality also required native proxies/plugins on the client, stuff like Perl on the server, and so on. Let’s not forget that the portable, auto-updating, look/act-the-same-everywhere native apps they effectively replaced are still hard to build if you want an experience similar to native apps on each platform.

                                        It was always hard to balance these conflicting goals in or outside of a browser. The trick was building manageable solutions to problems that we then stick with. The churn and endless reinvention of basics are what makes the web a pain in many places. I say many places since some take a saner approach.

                                      1. 3

                                        Part of me wonders if RedHat is hedging against Docker collapsing under the weight of its own codebase and APIs. CoreOS implemented rkt, which is the other container runtime that k8s supports. And Docker has not been impressing me with their quality of late, in either my attempts to read their code or to consume their APIs.

                                        1. 5

                                          This is exactly what I was looking for: a hybrid programming/interactive CAD modeler, a high-level language and native performance. OpenJSCAD was the best option I could find before, but the quality of the generated models was very low.

                                          1. 4

                                            Seems like LISP and CAD are a match made in heaven. The first real job I had in tech was with ICAD, a company that made a solid modeling / CAD system in LISP:

                                            https://en.wikipedia.org/wiki/ICAD_(software)

                                            It was pretty amazing. Boeing used it to model the wing of the 757 and GE used it to model turbine blades for generators and subs.

                                            1. 3

                                              I believe that, to this day, AutoCAD uses a Lisp dialect for its scripting as well.

                                              1. 2

                                                A number of CAD systems have LISP embedded for scripting. Something that made ICAD different though is it was actually written in Franz Allegro Common LISP. When you bought the system you were also buying Franz.

                                                That drove the total sticker price up a good bit.

                                          1. -2

                                            It is, after all, a truth universally acknowledged, that a program with good use of data types, will be free from many common bugs.

                                            I’m afraid it is only so for the value of “universally” meaning “other fans of type safety”. And that’s a rather constrained subset.

                                            1. 2

                                              A language being free from many common bugs does indeed change the set of common bugs for that language. But that doesn’t mean that it is indeed free from many common bugs of the languages without those safety features. You absolutely do gain a certain amount of safety the stronger your type system is and that does provide a fair amount of value that shouldn’t be dismissed just because there are still possible bugs. Type systems in many cases manage to push the remaining bugs into the “spec”, which has the benefit of allowing you to fix the bug by fixing the spec and then having the compiler tell you that you have indeed actually fixed said bug.

                                              For developers who can leverage this power it’s really useful and increases productivity. For those who can’t yet leverage this power it just feels like the language is slowing you down.
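
                                              As a minimal illustration of “pushing a bug into the spec” (Python with type hints and a checker like mypy; the function names are made up for the example):

                                              ```python
                                              from typing import Optional

                                              def find_user(user_id: int, users: dict[int, str]) -> Optional[str]:
                                                  """The 'spec' now says the user may be missing."""
                                                  return users.get(user_id)

                                              def greet(user_id: int, users: dict[int, str]) -> str:
                                                  name = find_user(user_id, users)
                                                  # mypy rejects `return "Hello, " + name` at this point because `name`
                                                  # may be None; the missing-user case must be handled before it type-checks.
                                                  if name is None:
                                                      return "Hello, stranger"
                                                  return "Hello, " + name
                                              ```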

                                              1. 0

                                                I’m not trying to discuss the issue itself; I’m pointing out that that “truth” is not universally accepted, not even close.

                                                Talking about the issue, one of the assumptions by type-safety advocates I have qualms with is the word “many” here:

                                                A language being free from many common bugs does indeed change the set of common bugs

                                                Another is this implied assumption that any gain in safety is good, without considering the disadvantages:

                                                You absolutely do gain a certain amount of safety the stronger your type system is and that does provide a fair amount of value that shouldn’t be dismissed just because there are still possible bugs.

                                                This is a trivialization of the opposing view. Strong type safety systems may be dismissed not because “there are still possible bugs” but because, on balance, they remove too few bugs for too much effort. Simple as that.

                                                1. 1

                                                  You’re shifting the goalposts a little. Sure, maybe the balance of safety versus effort is not worth it for you. But that doesn’t mean that there isn’t a much smaller set of bugs with type safety than without, all else being equal.

                                            1. 3

                                                Thanks for this submission. My hunch is that an architecture which makes e.g. caching and speculative execution an observable part of the API is the better approach. AFAIU MIPS does something similar and compilers learned to deal with it.

                                              1. 1

                                                My own hunch is that we should be avoiding impure operations like getting the current time.

                                                This post seems to be talking about trusting high-assurance languages for critical/sensitive tasks, and how those guarantees can be undermined if we run arbitrary machine code. That problem seems too difficult to me: surely a better langsec approach would be for the arbitrary code to be in a high-assurance language, with the only machine code we execute coming from trusted critical/sensitive programs?

                                                I would think a langsec approach to e.g. preventing timing attacks in Javascript is to make Javascript (the language) incapable of timing. Or, at least, providing a logical clock (also used for the interleaving of concurrent event handlers) rather than allowing access to the actual time.
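
                                                 A toy sketch of the logical-clock idea (not how any real JS engine does it; the names are made up): scripts can observe the order of events but not how much wall-clock time passed between them, which is exactly what a timing side channel needs.

                                                 ```python
                                                 class LogicalClock:
                                                     """Advances only when an event is processed, never with real time."""
                                                     def __init__(self) -> None:
                                                         self.ticks = 0

                                                     def now(self) -> int:
                                                         return self.ticks

                                                 def run_handlers(handlers, clock: LogicalClock) -> None:
                                                     # Each handler sees a timestamp that encodes ordering, not duration,
                                                     # so a cache-timing probe has nothing precise to measure.
                                                     for handler in handlers:
                                                         handler(clock.now())
                                                         clock.ticks += 1
                                                 ```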

                                                1. 2

                                                  For the vast majority of uses of a computer at some point the application will need to know what time it is. Avoiding impure operations is throwing up your hands on general computing as a useful tool. I don’t think this is quite what you meant to say though. Can you clarify?

                                                  1. 2

                                                    The clock is a sensor and needs to be treated as such with permissions and similar. Many applications don’t have a need for the clock.

                                                1. 12

                                                    I don’t understand what the author’s problem with Haskell is; I bet Haskell is more widely used in industry than Rust (I for one am working in a Haskell-first shop). Haskell has also recently gained impressive momentum, to the point that we have a shortage of Haskell talent (despite the influx of new folks) rather than a shortage of Haskell jobs, which was quite the opposite just a couple of years ago.

                                                  1. 5

                                                    Your anecdotal experience is directly countered by mine. It is nearly impossible for me to find work doing Haskell.

                                                      Comparing Haskell to Rust industry usage is comparing items in the bottom few percent of languages. The relative numbers may be impressive, but they say nothing about the likelihood of adoption.

                                                    1. 3

                                                        My point is: why use Haskell as a negative example? Haskell is a language that rose to its position purely on its own merits, not on a multi-million-dollar marketing budget backed by a mega-corporation, nor on its similarity to a popular language. And it’s also a fairly good position. So I don’t get the author’s point.

                                                      1. 4

                                                        Regardless of popularity contest/whatever else, I do agree that I find these kinds of comparisons to be in bad taste.

                                                        1. 3

                                                          Rust would be lucky to become Haskell. It should be aspired to.

                                                        2. 2

                                                          BTW, since it’s relevant to the discussion, I’ll use this opportunity to plug this here… we’re hiring :) https://www.picussecurity.com/careers/backend-developer.html Remote is OK, but we’re a fast growing Turkish company so we can currently probably offer, say, southern-europe kind of a salary. But that’ll probably change soon.

                                                      1. 1

                                                          cmake: you need to create a build dir and compile in it (at least most projects using cmake expect that). That would be a great feature, but you are required to delete this directory and create it again if something goes wrong (the same make clean problem, but in berserk mode).

                                                          I still can’t figure out how its language works, or why everything is based on setting global variables instead of return values. Some of these global variables are set in a “cache”, so you need to delete your build directory and start again.

                                                          Most people are copying snippets from one config to another, just like in autotools.

                                                          ninja: build targets are treated as real files with fixed paths on the filesystem instead of as discardable data. But it is still a great replacement for make when used with other build systems.

                                                          bazel, buck: great concepts, but not very usable for projects outside Google and Facebook. It is very hard to include external dependencies — the officially documented way to do it is to put the source code of the external dependency into the repo that holds all the projects of your megacorp (a monorepo), and then create build files for each dependency by hand. By comparison, cmake has the very handy ExternalProject_Add. And there is almost no cross-compilation; I tried to make Windows binaries with these tools with no success.

                                                          Buckaroo is for adding external projects to Buck easily, but it doesn’t have the concept of build flags (which is crucial in the C world), and has no cross-compilation support. At least it has a good collection of Buck build files for various popular libs.

                                                          P.S. I can barely write code in C/C++, but I am familiar with these tools as a user of software that is built with them.

                                                        1. 2

                                                          re: Bazel and external dependencies.

                                                            The recommended way is actually to use a workspace rule in your WORKSPACE file, the most common one being new_http_archive.

                                                            In practice these work absolutely fine. You can also, if you wish, check in the whole dependency and get the benefits of a monorepo, but that’s not necessary if a monorepo is not something you care about.
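
                                                            As a rough sketch of what such a workspace rule looks like (Starlark, i.e. Python-like syntax; the dependency name, URL and BUILD file path here are placeholders, and you would normally pin a sha256 as well):

                                                            ```python
                                                            # WORKSPACE
                                                            new_http_archive(
                                                                name = "zlib",                                # hypothetical external dependency
                                                                url = "https://zlib.net/zlib-1.2.11.tar.gz",  # placeholder URL
                                                                strip_prefix = "zlib-1.2.11",
                                                                build_file = "third_party/zlib.BUILD",        # hand-written BUILD file for the dep
                                                            )
                                                            ```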

                                                        1. 3

                                                          The number one thing I hate about most build systems is that they aren't hermetic. Some of them get closer than others. But aside from Bazel and Nix almost none of them are truly hermetic. (edit: Buck and Pants are similar to Bazel in their goals, but I've not evaluated them)

                                                          Hermeticity

                                                          A build system should enforce hermeticity for any given build to the extent that the operating system and language support it. If it doesn't, a whole host of problems will occur, including but not limited to:

                                                          • The It compiles on my machine problem.
                                                          • The It doesn’t compile on my machine problem.
                                                          • The I forgot to specify this build dependency problem.
                                                          • The I have to “make clean && make all” again problem.
                                                          Language agnostic hermeticity.

                                                          I almost never work on projects for work that are a single compiled language. At a minimum I'll have to compile some variant of javascript. You can't have a hermetic build for a project if you need two separate build systems to compile it. It's nice when a language includes build tooling out of the box, but as soon as you need to support multiple languages with dependencies between them, you need a build system that supports them both and also enforces hermetic builds for them both.

                                                          Specify all the inputs.

                                                          You can’t have hermetic builds if you can’t specify all the inputs for a build and the flow of those inputs through the various task dependencies. The nice thing about this is that you get some amount of incremental builds for free. The build system can determine whether any given build task has had its inputs change and just reuse the outputs if they haven’t. You also have a foundation for distributed builds and distributed build caching to help speed up builds even further for really large organizations and codebases.
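
                                                          To make that last point concrete, here is a toy, language-agnostic sketch (not any particular build tool's API; the function names are made up) of using a content hash over all declared inputs as both the staleness check and the cache key:

                                                              import hashlib, json, os, pickle

                                                              CACHE_DIR = ".build-cache"  # hypothetical local cache location

                                                              def input_key(task_name, input_paths, flags):
                                                                  """Hash every declared input (file contents plus flags) into one cache key."""
                                                                  h = hashlib.sha256()
                                                                  h.update(task_name.encode())
                                                                  h.update(json.dumps(sorted(flags)).encode())
                                                                  for path in sorted(input_paths):
                                                                      with open(path, "rb") as f:
                                                                          h.update(f.read())
                                                                  return h.hexdigest()

                                                              def run_task(task_name, input_paths, flags, build_fn):
                                                                  """Re-run build_fn only if some declared input changed since the cached run."""
                                                                  key = input_key(task_name, input_paths, flags)
                                                                  cached = os.path.join(CACHE_DIR, key)
                                                                  if os.path.exists(cached):
                                                                      with open(cached, "rb") as f:
                                                                          return pickle.load(f)           # inputs unchanged: reuse the previous outputs
                                                                  outputs = build_fn(input_paths, flags)  # inputs changed (or first run): rebuild
                                                                  os.makedirs(CACHE_DIR, exist_ok=True)
                                                                  with open(cached, "wb") as f:
                                                                      pickle.dump(outputs, f)
                                                                  return outputs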

                                                          1. 2

                                                            A competent CPU engineer would fix this by making sure speculation doesn’t happen across protection domains. Maybe even a L1 I$ that is keyed by CPL.

                                                            I feel like Linus of all people should be experienced enough to know that you shouldn’t be making assumptions about complex fields you’re not an expert in.

                                                            1. 22

                                                              To be fair, Linus worked at a CPU company, Transmeta, from about ‘96 - ‘03(??) and reportedly worked on, drumroll, the Crusoe’s code-morphing software, which speculatively morphs code written for other CPUs, live, to the Crusoe instruction set.

                                                              1. 4

                                                                My original statement is pretty darn wrong then!

                                                                1. 13

                                                                  You were just speculating. No harm in that.

                                                              2. 15

                                                                To be fair to him, he’s describing the reason AMD processors aren’t vulnerable to the same kernel attacks.

                                                                1. 1

                                                                  I thought AMD were found to be vulnerable to the same attacks. Where did you read they weren’t?

                                                                  1. 17

                                                                    AMD processors have the same flaw (that speculative execution can lead to information leakage through cache timings) but the impact is way less severe because the cache is protection-level-aware. On AMD, you can use Spectre to read any memory in your own process, which is still bad for things like web browsers (now javascript can bust through its sandbox) but you can’t read from kernel memory, because of the mitigation that Linus is describing. On Intel processors, you can read from both your memory and the kernel’s memory using this attack.

                                                                    1. 0

                                                                      Basically both will need the patch, which I presume will lead to the same slowdown.

                                                                      1. 9

                                                                        I don’t think AMD needs the separate-address-space-for-kernel patch (KAISER), which is responsible for the slowdown.

                                                                2. 12

                                                                  Linus worked for a CPU manufacturer (Transmeta). He also writes an operating system that interfaces with multiple chips. He is pretty darn close to an expert in this complex field.

                                                                  1. 3

                                                                    I think this statement is correct. As I understand it, part of the problem in Meltdown is that a transient code path can load a page into cache before page access permissions are checked. See the Meltdown paper.

                                                                    1. 3

                                                                      The fact that he is correct doesn’t prove that a competent CPU engineer would agree. I mean, Linus is (to the best of my knowledge) not a CPU engineer, so he’s probably wrong when it comes to grasping all the constraints of the field.

                                                                      1. 4

                                                                        So? This problem is not quantum physics, it has to do with a well known mechanism in CPU design that is understood by good kernel engineers - and it is a problem that AMD and Via both avoided with the same instruction set.

                                                                        1. 3

                                                                          Not a CPU engineer, but see my direct response to the OP, which shows that Linus has direct experience with CPUs, from his tenure at Transmeta, a defunct CPU company.

                                                                          1. 5

                                                                            from his tenure at Transmeta, a defunct CPU company.

                                                                            Exactly. A company whose innovative CPUs didn’t meet the market’s needs and were shelved on acquisition. What he learned at a company making unmarketable, lower-performance products might not tell him much about the constraints Intel faces.

                                                                            1. 11

                                                                              What he learned at a company making unmarketable, lower-performance products might not tell him much about the constraints Intel faces.

                                                                              This is a bit of a logical stretch. Quite frankly, Intel took a gamble with speculative execution and lost. The first several years were full of errata for genuine bugs, and now we finally have a userland-exploitable issue with it. Often security and performance are at odds. Security engineers often examine / fuzz interfaces looking for things that cause state changes. While the instruction execution state was not committed, the cache state change was. I truly hope Intel engineers will now question all the state changes that happen due to speculative execution. This is Linus’ bluntly worded point.

                                                                              1. 3

                                                                                (At @apg too)

                                                                                My main comment shows consumers didn’t pay for more secure CPUs. So, that’s not really a market requirement, even if it might prevent costly mistakes later. Their goal was making things go faster over time with acceptable watts despite poorly-written code from humans or compilers, while remaining backwards compatible with locked-in customers running worse, weirder code. So, that’s what they thought would maximize profit. That’s what they executed on.

                                                                                We can test if they made a mistake by getting a list of x86 vendors sorted by revenues and market share. (Looks.) Intel is still a mega corporation dominating in x86. They achieved their primary goal. A secondary goal is no liabilities dislodging them from that. These attacks will only be a failure for them if AMD gets a huge chunk of their market like they did beating them to proper 64-bit when Intel/HP made the Itanium mistake.

                                                                                Bad security is only a mistake for these companies when it severely disrupts their business objectives. In the past, bad security was a great idea. Right now, it mostly works, with the equation maybe shifting a bit in the future as breakers start focusing on hardware flaws. It’s sort of an unknown for these recent flaws. It all depends on the mitigations and on how many of those replacing CPUs will stop buying Intel.

                                                                              2. 3

                                                                                A company whose innovative CPUs didn’t meet the market’s needs and were shelved on acquisition.

                                                                                Tons of products over the years have failed based simply on timing. So, yeah, it didn’t meet the market demand then. I’m curious about what they could have done in the 10+ years after they called it quits.

                                                                                might not tell him much about the constraints Intel faces.

                                                                                I haven’t seen confirmation of this, but there’s speculation that these bugs could affect CPUs as far back as Pentium II from the 90s….

                                                                            2. 1

                                                                              The fact that he is correct doesn’t prove that a competent CPU engineer would agree.

                                                                              Can you expand on this? I’m having trouble making sense of it. Agree with what?

                                                                        1. 12

                                                                          Docker has not been very good software for my team at all. We’ve managed to trigger non-stop kernel semaphore leak bugs as well as LVM filesystem bugs, some of them going through multiple different attempted fixes. And any attempt to figure it out yourself by reading their code is stymied by the weird Moby/Docker disconnect that seems to be there.

                                                                          If you are thinking about running Docker by yourself, and not in someone else’s managed Docker solution, then beware. It’s very sensitive to the kernel you are running and the filesystem drivers you are using it with. As far as I can tell, if you aren’t running in Amazon’s or Google’s hosted Docker solutions, you are in for a bad time. And only Amazon is actually running Docker; Google just sidestepped the whole issue by using their own container technology under the hood.

                                                                          The whole experience has soured me on Docker as a deployment solution. It’s wonderful for the developer but it’s a nightmare for whoever has to manage the docker hosts.

                                                                          1. 11

                                                                            A few things that bit me:

                                                                            • containers don’t report real memory limits. Running top will report all 32GB of system memory even if the container is limited to 2GB. Scala/Java or other JVM apps aren’t aware of this limit, so you have to wrap the Java process with -X memory-limit flags; otherwise your container will get killed (you don’t even get an OutOfMemory exception) and marathon/k8s/whatever scheduler will start a new one. Eventually most interpreters (python, ruby, jvm, etc.) will have built-in support to check cgroup memory limits, but for now it’s a pain (see the sketch after this list).
                                                                            • Not enough tooling in the container. I don’t want to have to apt-get nc each time I rebuild a container to see if my network connections work. I’ve heard good things about sysdig bridging this gap though.
                                                                            • Tons of specific Kernel flags (really only matters if you use Gentoo or you compile your own kernel).
                                                                            • Weird network establishment issues. If you expose a port on the host, it will be available before it’s available to a linked container. So if you want to do a check to see if something like a database is ready, you have to do it in a container.
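
                                                                            Below is a rough, hypothetical sketch of the cgroup check mentioned in the first bullet: a process (or the wrapper script that then sets the JVM's -Xmx) reads the cgroup v1 limit file instead of trusting what top reports. The path is the conventional cgroup v1 mount point; adjust it for your distro.

                                                                                CGROUP_LIMIT = "/sys/fs/cgroup/memory/memory.limit_in_bytes"  # cgroup v1 layout

                                                                                def container_memory_limit_bytes():
                                                                                    try:
                                                                                        with open(CGROUP_LIMIT) as f:
                                                                                            limit = int(f.read().strip())
                                                                                    except OSError:
                                                                                        return None  # not running under a cgroup v1 memory limit
                                                                                    # An "unlimited" cgroup reports a huge sentinel value; treat that as no limit.
                                                                                    return limit if limit < 1 << 60 else None

                                                                                limit = container_memory_limit_bytes()
                                                                                if limit:
                                                                                    # e.g. size a JVM heap at roughly 75% of the container limit before exec'ing java -Xmx...
                                                                                    print("container limit: %d MiB" % (limit // (1024 * 1024)))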

                                                                            I’m sure there are more. Overall I actually do like Docker, despite some of the weirdness. However, I hate how we have k8s/marathon/nomad/swarm… there’s no one scheduler or scheduler format, and if you switch from one to the other, you’re redoing a lot of tooling, labels, and config to get all your services to connect together. Consul makes me want to stab myself. DC/OS uses up 2GB ~ 4GB of RAM just for the fucking scheduler on each node! k8s is a nightmare to configure without a team of at least three, and really ten. None of these solutions scale up from one node to a ton easily (minikube is a hack).

                                                                            Containers are nice. The scheduling systems around them can go die in a fire.

                                                                            1. 4
                                                                              containers don’t report real memory limits
                                                                              

                                                                              [X] we’ve been bitten by this. It also has implications for monitoring, so you get double the fun.

                                                                              Not enough tooling in the container.
                                                                              

                                                                              [X] we’ve established our own baseline container images.

                                                                              Weird network establishment issues.
                                                                              

                                                                              [X] container and k8s networking was, at least until a few months ago, a mess.

                                                                              Consul makes me want to stab myself.

                                                                              [X] we hacked our own

                                                                              without a team of at least three and really ten.

                                                                              [X] confirmed, we’re throwing money and people at it.

                                                                              None of these solutions scale up from one node to a ton easily (minikube is a hack).

                                                                              [X] I’ve thrown up my hands on having a working developer environment without running it on a cloud provider. We can’t trust minikube to behave sufficiently similarly to staging and production.

                                                                              Containers are nice. The scheduling systems around them can go die in a fire.

                                                                              I’m not even sure containers are that nice; the idea of containers is nice, but the execution is still half-baked.

                                                                              1. 2

                                                                                Why do you need so many people to operate kubernetes well? And what is it enabling, to make that kind of expenditure worth it?

                                                                                1. 2

                                                                                  We’re developing a commercial turn-key, provider-independent platform based on it. Dog-fooding our own stuff has exposed many sharp bits and rough edges.

                                                                                  1. 1

                                                                                    Thanks.

                                                                            2. 7

                                                                              I’ve had a positive experience with Triton. It doesn’t support all of Docker’s features, since, like Google, they opted for emulating Docker and apparently decided some things weren’t worth having, but for the features Triton does support, it Just Works.

                                                                              Of course, that means getting used to administering a different ecosystem.

                                                                              1. 1

                                                                                I love the idea of Triton, but having rolled it out at a past position, I can say honestly that I would not recommend it. There is no high availability for many of the internal services by default (you need to roll your own replicas, etc.), and there is no routing across networks (static routes and additional interfaces in every instance are not a good solution). I love Joyent as a company, and their products have a great hypothetical appeal to me as a technologist, but there are just too many “buts” to justify spending the kind of money they charge for the solution they offer.

                                                                                1. 2

                                                                                  I’m just curious how old the version of Triton was, because it has had software-defined networking for ~3 years or so. Was there a limitation with it?

                                                                              2. 2

                                                                                That stinks, but sounds more like a critique of the Linux kernel? Are you running anything custom?

                                                                                Newer Docker defaults to overlayfs (no more aufs), and runs fine for us on stock Debian 9 kernels (without the extra modules package, or any dkms modules). This is both on bare metal and the AMIs Debian provides. Though we run on plain ext4, without LVM.

                                                                                1. 4

                                                                                  My experience is purely anecdotal so shouldn’t be taken as more than that.

                                                                                  However, we aren’t on anything custom. We’re running the latest CentOS kernels for everything and we keep them patched. The bugs aren’t in the Linux kernel; it’s the way Docker does things when it sets up the cgroups and manages them. My early experimentation with other container runtimes seems to indicate that they don’t have the same problems.

                                                                                  Just searching for the word hang in the moby project shows 171 open bugs and 521 closed. From a cursory examination, most of them look very similar to our issues. For us they tend to manifest as a deadlock in the Docker engine, which then causes the managed containers to go unhealthy and start a reboot loop. We’ve had to have cronjobs run and kill the Docker daemons periodically in the past to keep things up and running.

                                                                                  1. 2

                                                                                    Maybe there are bugs in the way Docker sets up cgroups too, but you mentioned kernel semaphore leaks and LVM bugs which seem to be squarely in the kernel? Which seems to track to me - I know when systemd started exposing all this Linux kernel-specific stuff, they were the first really big consumer so they also exposed lots of kernel bugs.

                                                                              1. 1

                                                                                One important thing I usually see missing from “composition vs inheritance” discussions: contracts. Once you think in terms of class contracts, it’s obvious when you should use one or the other. If you inherit from a class, then

                                                                                • You may not strengthen any preconditions.
                                                                                • You may not weaken any postconditions.
                                                                                • You may not weaken any class invariants.

                                                                                Do you want all of that to hold? Use inheritance. Need something else? Composition.
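
                                                                                As a small, made-up illustration of the first rule: the subclass below strengthens its parent’s precondition, so code written against the parent’s contract breaks when it’s handed the child.

                                                                                    class Account:
                                                                                        """Contract: deposit(amount) accepts any amount > 0."""
                                                                                        def __init__(self):
                                                                                            self.balance = 0

                                                                                        def deposit(self, amount):
                                                                                            if amount <= 0:
                                                                                                raise ValueError("amount must be positive")
                                                                                            self.balance += amount

                                                                                    class AuditedAccount(Account):
                                                                                        """Breaks the contract: strengthens the precondition to amount >= 100."""
                                                                                        def deposit(self, amount):
                                                                                            if amount < 100:
                                                                                                raise ValueError("audited accounts only accept deposits >= 100")
                                                                                            super().deposit(amount)

                                                                                    def pay_interest(account: Account):
                                                                                        # Written against Account's contract, so 5 is a perfectly valid deposit.
                                                                                        account.deposit(5)

                                                                                    pay_interest(Account())  # fine
                                                                                    try:
                                                                                        pay_interest(AuditedAccount())  # the subclass is not substitutable
                                                                                    except ValueError as e:
                                                                                        print("contract violated:", e)

                                                                                If you find yourself needing the stricter rule, that’s usually the hint to reach for composition instead.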

                                                                                1. 3

                                                                                  The problem with those rules is that almost no programming language enforces them when you use inheritance. Which means you have to rely on all the developers who interact with your codebase in any way to follow them, and some of them (e.g. library authors) don’t know anything about your codebase. It’s a recipe for much sadness down the road.

                                                                                  You are much better off just not using inheritance 99.9% of the time. And the other .1% of the time you’re probably still going to cause some poor maintenance programmer a world of pain when they deal with the fallout.

                                                                                  1. 3

                                                                                    Eiffel, D, and Ada all do. It’s a bit more accurate to say no popular language enforces them :p

                                                                                1. 7

                                                                                  Well, what can I say, there is already a reply article: http://blog.breakthru.solutions/re-moving-from-php-to-go-and-back-again/

                                                                                  1. 11

                                                                                    I find arguments of the style “why did Facebook do X if there weren’t issues” (in this case, build HHVM) or “Uber uses it for service development” very useless. It is interesting from the perspective of someone building an ecosystem; it’s not interesting for users who aren’t building the next Facebook or Uber.

                                                                                    Facebook is - measured against all the software development happening - a fringe thing. Their practices and decisions are hard to apply to smaller scales, even if their tech speakers say otherwise.

                                                                                    1. 4

                                                                                      Yeah, his analysis of Go as a language revealed a highly limited understanding of it. I suspect he kept trying to write OO PHP and then got frustrated when it didn’t work the way he thought it did.

                                                                                    1. 4

                                                                                      My stance is the same as it’s been with Bitcoin issues: if the core devs of the platform (go-ethereum included) are able to reach rough consensus around the issue, then I support it; otherwise I do not.

                                                                                      EDIT: See also this tweet from Bob Summerwill:

                                                                                      Also EVERYBODY IN THE WORLD is welcome to be part of the process coming to consensus on how we move forward on this question of trapped funds.

                                                                                      See https://www.reddit.com/r/ethereum/comments/7d1szw/link_discussion_on_stuck_ether_recovery_options/?st=jb2js13y&sh=0b523e45

                                                                                      Join https://gitter.im/ethereum/ether-recovery

                                                                                      Nobody is going to be “bamboozled by Parity”. We all work in the open.

                                                                                      My personal stance on the issue is I’m split:

                                                                                      On one hand rescuing the funds doesn’t seem to hurt anyone. On the other hand, hard-forking every time devs screw up a smart contract sets an interventionist precedent that could lead to Bad Things™ down the road.

                                                                                      So, giving Parity Tech a figurative “get out of jail free” card on this, by hard forking, damages the long-term prospects of the whole system to Parity’s benefit. It was their mistake, so IMO they should own up to it and at least cover some of the damages.

                                                                                      Take this situation to its logical extreme: if each time an Ethereum developer makes a smart contract mistake the system hard forks, well, it’s absolutely no different than a centrally managed financial system.

                                                                                      Do the people writing the software never take responsibility for their actions?

                                                                                      Is it always “no HF” when someone outside of the core dev group makes a mistake, and “HF” when people inside the core dev group make a mistake?

                                                                                      Selective enforcement like this leads to corruption.

                                                                                      1. 6

                                                                                        I don’t generally subscribe to slippery-slope arguments. I do think that the prevalence of requests to do hard forks suggests that the rhetoric around smart contracts is off the mark. It’s probably time for Ethereum to come up with some policy guidelines around smart-contract error resolution, so the debate can be more focused.

                                                                                      1. 3

                                                                                        I find this somewhat reminiscent of Urbit in its “let’s just start over from scratch” approach. These kinds of projects are fascinating yak shaves to watch, even if they will never find mainstream application.

                                                                                        1. 10

                                                                                          Honestly, I can’t help but feel very bearish on ETH. I really like the idea, but I think the implementation is poor, and the community’s values are poorly aligned with making it a success.

                                                                                          The most important construct in ETH that sets it apart from other currencies is the smart contract. I don’t believe, though, that these are either smart or contracts. Whether or not you agree with the resolution of the DAO hack, the fact that we consider it a hack to be in some way resolved indicates we do see smart contracts as programs that can and should be changeable to better meet the intent.

                                                                                          Based on the DAO and a number of other issues with smart contracts, I don’t think they are smart: the design of the language is so poorly adapted to the kind of verification needed to make robust contracts. It isn’t smart.

                                                                                          The community’s willingness to fork over contract actions they don’t agree with means they aren’t contracts either. In real life, if you’re duped by a creative but legal (as judged by the legal process, or in this case the execution on the blockchain) interpretation, you need to suck it up and move on. In Ethereum, you can fork, and in practice the group that led to the fork of ETH was a minority. Smart contracts aren’t contracts, because by the decision of a few they can be rewritten without the agreement of all involved parties.

                                                                                          Ultimately, if I were looking to do non-hobbyist business, either as the business or a customer, for these reasons I wouldn’t feel comfortable using Ethereum.

                                                                                          1. 19

                                                                                            I am not a lawyer, but I did grow up with one, and I’m pretty sure a legal but clever and tricky contract has legal grounds to be thrown out in court.

                                                                                            As a kid I was curious if the “tiny fine print that you couldn’t read” could really be used to trick someone. It can’t. The legal system is very aware of the distinction, it’s called acting in good faith.

                                                                                            Again, not a lawyer, not legal advice, don’t make choices based on what I’ve said, but it’s not as cut and dry as you claim it is.

                                                                                            1. 15

                                                                                              And contracts with “bugs” in them (i.e., that don’t accurately represent the intent of the parties) aren’t taken literally either. There are rules/principles about how to interpret them that are much more nuanced than that. Only a programmer who doesn’t get out much would think that a better approach is to eliminate the potential for ambiguity and then always interpret contracts literally.

                                                                                              1. 7

                                                                                                I generally understand your point and agree with it, but what I’m suggesting is that the execution of a smart contract is the legal process in this context.

                                                                                                It’s not that it’s right or wrong that the contract was interpreted/executed in a given way; it’s that, after the field has been set and the dice cast, going back in time and undoing the execution, because some definition of a majority (usually a minority in practice) didn’t win, is the issue.

                                                                                                Changing how the outcome played out, after it was interpreted and executed, feels (in the context of a smart contract being interpreted by the legal process of the blockchain) like an extrajudicial action by people who lost out.

                                                                                                1. 5

                                                                                                  The legal system has been dealing with smartasses since before your ancestors were deloused.

                                                                                                  Think of it like the efficient market hypothesis: People have been banging on legal systems for so long that you can reasonably assume that all of the interesting stuff has been found, and is either a known technique or is already illegal. There might be exceptions to this, but the fact the system is administered by humans who exercise human judgement closes a lot of novel loopholes, as well.

                                                                                                  1. 3

                                                                                                    I’d go one step further and assert that, in legal systems that have been functioning for centuries and are thoroughly debugged, some obvious glaring flaws will continue to exist, but they are those that are actively maintained by some group which has an extraordinary amount of power and stands to gain an extraordinary amount of wealth from them.

                                                                                                2. 4

                                                                                                  I used to think this way, until I realized that all these high-profile bugs in applications on Ethereum have very little to do with the code in Ethereum.

                                                                                                  The DAO is a good example. It was not written by the core Ethereum project. It was a distributed application written by unrelated developers, and crowdfunded by a token sale. Blaming the Ethereum project for DAO’s code quality is like blaming the Unix developers for a segfault in some third-party app.

                                                                                                  1. 3

                                                                                                    You don’t have to blame the core developers for the DAO contract code’s bugs to blame them for forking the blockchain to “fix” the bugs for the DAO developers.

                                                                                                    Those are two separate acts from two separate groups of people.

                                                                                                    1. 1

                                                                                                      On the other hand, one of the Ethereum founders was responsible for the Parity bug.

                                                                                                      1. 1

                                                                                                        I agree with you but think the conclusion you draw is incorrect. While Solidity itself is not a bug, the language itself is part of the design of Ethereum, and by using a language (Solidity) that is so poorly adapted to verification, it’s made it easier for users to write buggy contracts.

                                                                                                        1. 1

                                                                                                          C is buggy, but that didn’t kill Unix.

                                                                                                          Unless a credible competitor appears, I think Ethereum will continue to dominate the smart contracts space.

                                                                                                          1. 2

                                                                                                            C isn’t buggy. Solidity isn’t buggy. But their use in the systems mentioned has led to more bugs, and to a more user- and developer-hostile environment, than if other languages had been used instead.

                                                                                                            I agree that Solidity won’t kill Ethereum, but a credible competitor will. I think it is almost a certainty that the biggest shining star of a more mature smart-contract blockchain system will be better verifiability in the language. It might not be the immediate killer of Ethereum, and it might not even be the technology that kills the Ethereum killer, but I really do think that a language that is verifiable in practice will be a requisite feature for a smart-contract technology that isn’t known, as Ethereum is, for being a massive footgun.

                                                                                                            1. 1

                                                                                                              Wait, since when is C buggy?

                                                                                                              1. 1

                                                                                                                I should have been more precise.

                                                                                                                While C itself is not a bug, the language itself is part of the design of Unix, and by using a language (C) that is so poorly adapted to verification, it’s made it easier for users to write buggy programs.

                                                                                                                Buggy programs didn’t kill Unix, so I doubt Ethereum is in danger.

                                                                                                      1. 5

                                                                                                        While parser combinators tend not to have an explicit tokenizer step, I find it useful to still maintain the distinction between tokenization and AST building when using them.

                                                                                                        It’s much easier to write them without getting lost if you separate the types of parsing they each represent. It also forces you to focus on the primitives of your grammar separately from the way they combine to build the ASTs.

                                                                                                        I also find that it makes adding error handling and reporting slightly easier when using parser combinators, an area that has historically given a lot of people trouble.

                                                                                                        1. 1

                                                                                                          I agree with you, and you can easily write a tokenizer and an AST builder using parser combinators: http://parsy.readthedocs.io/en/latest/howto/lexing.html

                                                                                                          In fact, this is possible because Parsy works on any iterable: strings, lists, sets, etc. Parsy handles token lists exactly like strings.

                                                                                                          1. 2

                                                                                                            Parsy looks cool. I haven’t used it but it looks like something I could see myself using. You are right that good parser combinator libraries will work with any iterable so separating the two is usually not that much effort and the gains are well worth it.

                                                                                                            1. 1

                                                                                                              Yes. I really like Parsy too.

                                                                                                          2. 1

                                                                                                            It should also be faster to do separate tokenization if you have any backtracking.

                                                                                                            I was just discussing this on subreddit:

                                                                                                            https://www.reddit.com/r/oilshell/comments/7fjl5t/any_idea_on_the_completeness_of_shellchecks_parser/dqfxz8b/

                                                                                                            My argument was that you should do the easy thing with the fast algorithm (lexing in linear time with regexes), and the hard thing with the powerful but slow algorithm (backtracking, whatever flavor of CFG, etc.)

                                                                                                            I did the same thing with PEGs. PEGs are usually described as “scannerless”, but there’s no reason you can’t lex first and operate on tokens rather than characters.
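
                                                                                                            A tiny, hypothetical sketch of that split (not Oil’s or ShellCheck’s actual code): one linear regex pass produces tokens, and the recursive, potentially backtracking part then runs over the token list rather than over characters.

                                                                                                                import re

                                                                                                                # Fast pass: linear-time lexing with a single regex alternation.
                                                                                                                TOKEN_RE = re.compile(r"\s*(?:(?P<NUM>\d+)|(?P<OP>[+()])|$)")

                                                                                                                def lex(src):
                                                                                                                    tokens, pos = [], 0
                                                                                                                    while pos < len(src):
                                                                                                                        m = TOKEN_RE.match(src, pos)
                                                                                                                        if not m:
                                                                                                                            raise SyntaxError("bad character at %d" % pos)
                                                                                                                        pos = m.end()
                                                                                                                        if m.lastgroup:
                                                                                                                            tokens.append((m.lastgroup, m.group(m.lastgroup)))
                                                                                                                    return tokens

                                                                                                                # Slow/powerful pass: recursive descent over tokens for "1 + (2 + 3)"-style input.
                                                                                                                def parse_expr(tokens, i=0):
                                                                                                                    node, i = parse_term(tokens, i)
                                                                                                                    while i < len(tokens) and tokens[i] == ("OP", "+"):
                                                                                                                        rhs, i = parse_term(tokens, i + 1)
                                                                                                                        node = ("+", node, rhs)
                                                                                                                    return node, i

                                                                                                                def parse_term(tokens, i):
                                                                                                                    kind, value = tokens[i]
                                                                                                                    if kind == "NUM":
                                                                                                                        return int(value), i + 1
                                                                                                                    if (kind, value) == ("OP", "("):
                                                                                                                        node, i = parse_expr(tokens, i + 1)
                                                                                                                        return node, i + 1  # skip the closing ")"
                                                                                                                    raise SyntaxError("unexpected token %r" % (tokens[i],))

                                                                                                                print(parse_expr(lex("1 + (2 + 3)"))[0])  # ('+', 1, ('+', 2, 3))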

                                                                                                          1. 7

                                                                                                            The but operator has a brother: an infix does operator. It behaves very similarly, except it does not clone.

                                                                                                            Perl6 is like C++. They jumped the shark long ago: Both these languages have become so complex and arcane.

                                                                                                            For a language to work for me, I need to be able to grok the “base language” – hold the whole thing in my head. The complexity should reside in the libraries.

                                                                                                            1. 2

                                                                                                              I agree they both have a problem saying “no” to new features, but at least I understand the use cases for features that get tacked onto C++.

                                                                                                              Sometimes Perl6 seems like an elaborate troll or esoteric joke language.

                                                                                                              1. 1

                                                                                                                In terms of grokking “the language”, there is not much difference between a language feature and a library. If the library is out of your problem domain, you can ignore it; if the language feature feels arcane, you can ignore it, too. I know some C++ projects that have rules allowing only a strict set of template features. So be it. You won’t need to know anything about SFINAE to work productively in those projects.

                                                                                                                1. 5

                                                                                                                  Right up until you are trying to debug a problem happening in an upstream library that you don’t control.