1. 6

    I’ve been working remotely for a few months, though only recently full time. I love it. I feel very confident that my employer is getting much higher-quality hours out of me than when I worked in an office. I feel more productive, and I believe I actually am. A lot of the talking I did in an office didn’t really matter much for getting work done. For random conversations, I just send someone an email or talk to them on chat; that has worked fine for me.

    1.  

      Now switch to 4 day work-weeks and be amazed that you can be even more productive!

      1.  

        Already there actually :) And yes, I am more productive! But we’ll see how it works in the long run. Will I acclimate to 4-day work weeks as we did to 5-day work weeks so long ago, with productivity dropping once the novelty wears off?

    1. 5

      I spoke to the FreeBSD folks at FOSDEM about laptop compatibility, as I’d also had issues. The advice they gave was that hardware support is best in -CURRENT, so laptop users should treat it as a rolling release. I have yet to try that out.

      1. 2

        Are there binary releases of -CURRENT, or is the advice to “rolling recompile” the kernel & base system daily/weekly? 😒

        1. 2

          The advice is to compile from source, but rather than rebuilding on a fixed schedule, to track the -current mailing list and see what folks are talking about.

          1. 2

            You have to recompile if you want evdev support in most input device drivers (options EVDEV_SUPPORT in config is still not on by default >_<).

            1.  

              I run TrueOS which tracks current + the DRM changes. I’m on UNSTABLE, but with Boot Environments it’s not a problem (the last UNSTABLE release actually broke quite a bit for me so I just rolled back, without issue). I suggest it if you want to track the latest stuff but don’t care to do it yourself. cc @oz @leeg

              1.  

                That’s interesting, I’ll have a look. Thanks!

          1. 5

            It seems odd to spend so much space discussing the complexities of software development, only to conclude that the answer is empiricism. Surely the number of variables and their non-linear effects make experimentation difficult. I think $10M is an extremely tiny fraction of what a reproducible experiment would cost, and it would take a long time to run. You need a huge scale in number of projects and also lengthy longitudinal studies of their long-term impacts. And after you did all that the gigantic experiment would be practically certain to perturb what it’s trying to measure: programmer behavior. Because no two projects I’m on get the same me. I change, mostly by accumulating scar tissue.

            Empiricism works in a tiny subset of situations where the variables have been first cleaned up into orthogonal components. I think in this case we have to wait for the right perspective to find us. We can’t just throw money at the problem.

            1. 5

              What else can we do? Our reason is fallible, our experiences are deceitful, and we can’t just throw our hands up and say “we’ll never know”. Empiricism is hard and expensive, but at least we know it works. It’s gotten us results about things like n-version programming and COCOMO and TDD and formal methods. What would you propose we do instead?

              1. 4

                Empiricism is by no means the only thing that works. Other things that work: case studies, taxonomies, trial and error with motivation and perseverance. Other things, I’m sure. All of these things work some of the time – including empiricism. It’s not like there’s some excluded middle between alchemy and science. Seriously, check out that link in my comment above.

                I’m skeptical that we have empirically sound results about any of the things you mentioned, particularly TDD and formal methods. Pointers? For formal methods I find some of @nickpsecurity’s links kinda persuasive. On some mornings. But those are usually case studies.

                Questions like “static or dynamic typing” are deep in not-even-wrong territory. Using empiricism to try to answer them is like a blind man in a dark room looking for a black cat that isn’t there.

                Even “programming” as a field of endeavor strikes me as a false category. Try to understand that domain you’re interested in well enough to automate it. Try a few times and you’ll get better at it – in this one domain. Leave the task of generalization across domains to future generations. Maybe we’ll eventually find that some orthogonal axis of generalization works much better. Programming in domain X is like ___ in domain X more than it is like programming in domain Y.

                You ask “what else is there?” I respond in the spirit of Sherlock Holmes: “when you have excluded the impossible, whatever is left, however unlikely, is closer to the answer.” So focus on your core idea that the number of variables is huge, and loosen your grip on empiricism. See where that leads you.

                1. 5

                  I think we’re actually on the same page here. I consider taxonomies, ethnographies, case studies, histories, and even surveys as empirical. It’s not just double blind clinical studies: as Making Software put it, qualitative findings are just as important.

                  I reject the idea that these kinds of questions are “not even wrong”, though. There’s no reason to think programming is any more special than the rest of human knowledge.

                  1. 2

                    Ah ok. If by empiricism you mean, “try to observe what works and do more of that”, sure. But does that really seem worth saying?

                    It can be hard psychologically to describe a problem well in an article and then not suggest a solution. But sometimes that may be the best we can do.

                    I agree that programming is not any more special than the rest of human knowledge. That’s why I claim these questions are not even wrong. Future generations will say, “sure static typing is better than dynamic typing by about 0.0001% on average, but why did the ancients spend so much time on that?” Consider how we regard ancient philosophers who worried at silly questions like whether truth comes from reason or the senses.

                    Basically no field of human endeavor had discovered the important questions to ask in its first century of existence. We should spend more time doing and finding new questions to ask, and less time trying to generalize the narrow answers we discover.

                    1. 3

                      Ah ok. If by empiricism you mean, “try to observe what works and do more of that”, sure. But does that really seem worth saying?

                      It’s not quite that. It’s all about learning the values and limitations of all the forms of knowledge-collection. What it means to do a case study and how that differs from a controlled study, where ethnographies are useful, etc. It’s not “observe what works and do more of that”, it’s “systematically collect information on what works and understand how we collect and interpret it.”

                      Critical in all that is the recognition that the information we collect by using “reason” alone is minimal and often faulty, yet it’s how almost everybody interprets software. That and appealing to authority, really.

                      Basically no field of human endeavor had discovered the important questions to ask in its first century of existence. We should spend more time doing and finding new questions to ask, and less time trying to generalize the narrow answers we discover.

                      The difference is that we’ve already given software control over the whole world. Everything is managed with software. It guides our flights and runs our power grid. Algorithms decide whether people go to jail or go free. Sure, maybe code will look radically different in a hundred years, but right now it’s here and present and we have to understand it.

                      1. 2

                        It is fascinating that we care about the same long-term problem but prioritize sub-goals so differently. Can you give an example of a more important question than static vs dynamic typing that you want to help answer by systematically collecting more information?

                        Yes, we have to deal with the code that’s here and present. My answer is to reduce scale rather than increase it. Don’t try to get better at running large software projects. Run more small projects; those are non-linearly more tractable. Gradually reduce the amount of code we rely on, and encourage more people to understand the code that’s left. A great example is the OpenBSD team’s response to Heartbleed. That seems far more direct an attack on existing problems than any experiments I can think of. Experiments seem insufficiently urgent, because they grow non-linearly more intractable with scale, while small-scale experiments don’t buy you much: if you don’t control for all variables you’re still stuck using “reason”.

                        1. 2

                          Can you give an example of a more important question than static vs dynamic typing that you want to help answer by systematically collecting more information?

                          Sure. Just off the top of my head:

                          • How much does planning ahead improve error rate? What impacts, if any, does agile have?
                          • What are the main causes of cascading critical failures in systems, and what can we do about them?
                          • When it comes to maximizing correctness, how much do intrinsic language features matter vs processes?
                          • I don’t like pair programming. Should I be doing it anyway?
                          • How do we audit ML code?
                          • How much do comments help? How much does documentation help?
                          • Is goto actually harmful?

                          Obviously each of these has ambiguity and plenty of subquestions in it. The important thing is to treat them as things we can investigate, and to recognize that investigating them is important.

                          1. 0

                            Faced with Cthulhu, you’re trying to measure how the tips of His tentacles move. But sometimes you’re conflating multiple tentacles! Fthagn!

                            1. 1

                              As if you can measure tentacles in non-Euclidean space without going mad…

                              1. 1

                                Now you’re just being rude. I made a good faith effort to answer all of your questions and you keep condescending to me and insulting me. I respect that you disagree with me, but you don’t have to be an asshole about it.

                                1. 2

                                  Not my intention at all! I’ll have to think about why this allegory came across as rude. (I was more worried about skirting the edge when I said, “is that really worth talking about?”) I think you’re misguided, but I’m also aware that I’m pushing the less likely theory. It’s been fun chatting with you precisely because you’re trying to steelman conventional wisdom (relative to my more outré idea), and I find a lot to agree with. Under it all I’ve been hoping that somebody will convince me to return to the herd so I can stop wasting my life. Anyway, I’ll stop bothering you. Thanks for the post and the stimulating conversation.

                        2. 2

                          “try to observe what works and do more of that”

                          That is worth saying, because it can easily get lost when you’re in the trenches at your job, and can be easy to forget.

                    2. 2

                      What else can we do?

                      If describing in terms of philosophies, then there’s also reductionism and logic. The hardware field turning analog into digital Legos, and Oberon/Forth/Simula for software, come to mind for that. Maybe model-driven engineering. They break software into fundamental, well-understood primitives that then compose into more complex things. This knocks out tons of problems, but not all.

                      Then there’s the logical school that I’m always posting about, as akkartik said, where you encode what you want, the success/failure conditions, and how you’re achieving them, and prove you are. Memory safety, basic forms of concurrency safety, and type systems in general can be done this way. Two of those have eliminated entire classes of defects in enterprise and FOSS software using such languages. The CVE list indicates the trial-and-error approach didn’t work as well. ;) Failure detection/recovery algorithms, done as protocols, can be used to maintain reliability in all kinds of problematic systems. Model-checking and proof have been most cost-effective in finding protocol errors, esp. deep ones. Everything being done with formal methods also falls into this category. Just highlighting high-impact stuff. Meyer’s Eiffel Method might be said to combine reductionism (language design/style) and logic (contracts). Cleanroom, too. Experimental evidence from case studies showed Cleanroom was very low defect, even on first use by amateurs.

                      Googled a list of philosophies. Let’s see. There’s the capitalism school that says the bugs are OK if profitable. The existentialists say it only matters if you think it does. The phenomenologists say it’s more about how you perceived the failure from the color of the screen to the smell of the fire in the datacenter. The emergentists say throw college grads at the problem until something comes out of it. The theologists might say God blessed their OS to be perfect with criticism not allowed. The skeptics are increasingly skeptical of the value of this comment. The… I wonder if it’s useful to look at it in light of philosophy at all given where this is going so far. ;)

                      I look at it like this. We have most of what we want out of a combo of intuition, trial-and-error, logic, and peer review. This is a combo of individuals’ irrational activities with rational activity on the generating and review sides of ideas. I say apply it all, with empirical techniques used basically just to catch nonsense from errors, bias, deception, etc. The important thing for me is whether something is working, for what problems, at what effort. If it works, I don’t care at all whether there are studies about it with enough statistical algorithms or jargon used in them. However, at least how they’re tested and vetted… the evidence they work… should have rigor of some kind. I also prefer ideological diversity and financial independence in reviewers to reduce the collusion problem science doesn’t address enough. A perfectly-empirical study with 100,000+ data points refuting my logic that Windows is insecure is less trustworthy when the people who wrote it are Microsoft employees wanting an NDA for the data they used, eh?

                      I’ll throw out another example that illustrates it nicely: CompCert. Most empiricists might tell you that compiler is an outlier that proves little to nothing about formal verification in general. Partly true. Skepticism’s followers might add we can’t prove that this problem, and only this problem, was the one they could express correctly with logic, if they weren’t misguided or lying to begin with. ;) Well, they use the logical school of specifying stuff they prove is true. We know from testing-vs-formal-verification analysis that testing or trial-and-error can’t ensure the invariants, due to state space explosion. Even that is a mathematical/logical claim, because otherwise you gotta test it haha. The prior work with many formal methods indicates they reduce defects a lot in a wide range of software, at high cost, with simplicity of the software required. Those generalizations have evidence. The logical methods seem to work within some constraints. CompCert pushes those methods into new territory in specification but reuses a logical system that worked before. Can we trust the claim? Csmith throws CPU-years of testing against it and other compilers. Its defect rate bottoms out, mainly spec errors, unlike just about any compiler ever tested that I’ve seen in the literature. That matches the prediction of the logical side, where errors in proven components, about what’s proven, should be rare to nonexistent.

                      So, the empirical methods prove certain logical systems work in specific ways, like ensuring the proof is at least as good as the specs. We should be able to reuse logical systems proven to work to do what they’re proven to be good at. We can put less testing into components developed that way when resources are constrained. Each time something truly new is done like that, we review and test the heck out of it. Otherwise, we leverage it, since things that logically work for all inputs to do specific things will work for the next input with high confidence, given we vetted the logical system itself already. Logically or empirically, we can therefore trust methods grounded in logic as another tool. Composable black boxes connected in logical ways, plus rigorous testing/analysis of the boxes and the composition methods, are the main ways I advocate doing both programming and verification. You can keep applying those concepts over and over, regardless of the tools or paradigms you’re using. Well, so far in what I’ve seen anyway…

                      @derek-jones, tag you’re it! Or, I figure you might have some input on this topic as a devout empiricist. :)

                      1. 2

                        Googled a list of philosophies. Let’s see. There’s the capitalism school that says the bugs are OK if profitable. The existentialists say it only matters if you think it does. The phenomenologists say it’s more about how you perceived the failure from the color of the screen to the smell of the fire in the datacenter. The emergentists say throw college grads at the problem until something comes out of it. The theologists might say God blessed their OS to be perfect with criticism not allowed. The skeptics are increasingly skeptical of the value of this comment. The… I wonder if it’s useful to look at it in light of philosophy at all given where this is going so far. ;)

                        Awesome, hilarious paragraph.

                        We have most of what we want out of a combo of intuition, trial-and-error, logic, and peer review. This is a combo of individuals’ irrational activities with rational activity on the generating and review sides of ideas. I say apply it all, with empirical techniques used basically just to catch nonsense from errors, bias, deception, etc. The important thing for me is whether something is working, for what problems, at what effort. If it works, I don’t care at all whether there are studies about it with enough statistical algorithms or jargon used in them.

                        Yes, totally agreed.

                      2. 2

                        What else can we do? Our reason is fallible, our experiences are deceitful, and we can’t just throw our hands up and say “we’ll never know”.

                        Why not? The only place these arguments are questioned or really matter is when it comes to making money, and software has ridiculous margins, so maybe it’s just fine not knowing. I know high-risk activities like writing airplane and spaceship code matter, but those folks seem to not really have much contention about whether their methods work. It’s us folks writing irrelevant web services that get all uppity about these things.

                    1. 3

                      Can we stop using the term technical debt? It was invented by Ward Cunningham to explain to business people the risk and benefit of cutting corners. We are not finance people, and like all analogies it breaks down quickly. Who incurs the debt and who pays it back? It’s unclear.

                      1. 2

                        Who incurs the debt

                        First the developers. They have to do more work. Then the users. Code tends to be delivered to them later because TD adds to time.

                        and who pays it back?

                        Your boss. Either he invests in giving you time to fix the TD or he pays continuously because your day-to-day work takes longer and longer.

                        I think the term pretty much nails it. Not only for finance people.

                        1. 1

                          First the developers. They have to do more work. Then the users. Code tends to be delivered to them later because TD adds to time.

                          Technical debt saves time! That was the argument Ward Cunningham gave as a benefit of technical debt. You save time now for complexity later. It’s a way to push out software faster in the beginning. It’s about cutting corners. You incur TD because it saves you time!

                          Your boss. Either he invests in giving you time to fix the TD or he pays continuously because your day-to-day work takes longer and longer.

                          What if there is no boss? What if you never pay technical debt? What if you just keep patching? What if your software reaches a point where it’s feature complete and there is no reason to tackle the technical debt? Is this a debt jubilee?

                          It makes no sense. Who do we borrow from? The universe? Ourselves? Is there interest? Is it quantifiable?

                          I prefer terms other than technical debt, like entropy, code rot, or incidental complexity. It’s about keeping a clean shop. After the first release it’s all maintenance. A better analogy might be a car mechanic keeping a clean shop.

                          1. 1

                            Is it quantifiable?

                            Yes, for companies producing software, yes it is. For Open Source projects, maybe.

                            It’s about keeping a clean shop.

                            Well, no. It can be all dirty but this might not add to the total cost of maintaining or producing the product. It’s not about work ethics or craftsmanship.

                            TD is used to assess the real cost of certain decisions. (I like your “cutting corners”.) Whether these costs are too high or not, or like you said, if it even saves you in the end, doesn’t matter. It is just a term for those costs.

                            1. 2

                              Yes, for companies producing software, yes it is.

                              Can you expand on that? Quantifying software development has, so far, been a very difficult challenge.

                              1. 1

                                Yes, sure, exactly quantifying the costs has been a very difficult challenge. But you can estimate, get a measure of, or roughly calculate the costs. And this is in fact what you do in every sprint planning, if you follow an agile methodology.

                                Example: When I use SVN as version control software, I can add up a certain amount of time, say 10 minutes, when I manually “rebase” my changes on the current trunk. If I do that 3 times a day, that’ll be 30 minutes. Or I can use git, where it’ll take me, let’s say, 1 minute, or 3 minutes a day.

                                Now when I want to switch the repo to git and want to preserve the commit history, I need to understand and use svn2git to transform the SVN repo. Then I need to test the import. Publish it. Train my fellow colleagues to use git. Let’s say I need two weeks for that.

                                Now is it worth it? When will the cost amortize?

                                I think this is pretty well quantifiable, if you just guesstimate.
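
                                Just to show the arithmetic, here’s that guesstimate as a few lines of Python (all figures are the made-up numbers from above):

                                svn_cost_per_day = 30  # minutes lost to manual "rebasing" on SVN
                                git_cost_per_day = 3   # minutes for the same workflow with git
                                saving_per_day = svn_cost_per_day - git_cost_per_day  # 27 minutes/day

                                migration_cost = 2 * 5 * 8 * 60  # two work weeks, in minutes

                                print(migration_cost / saving_per_day)  # ~178 working days to amortize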

                                1. 1

                                  Or you could just decide commit history isn’t all that important to you and do merges, no tool change needed! Your example leaves out a bunch of other work, though. For example, if your company has a bunch of tooling that makes assumptions about using SVN, then moving over to git could be extremely expensive, and that expense is usually a challenge to quantify, IME. Is it worth having the tools team abstract SVN away or move everything to git? What won’t we have in the time it takes for them to do that? And that’s what TD usually looks like IME. It’s not “if we do this, I’ll save X minutes every day, and doing this is free”. Almost always, it’s “we can do this, I’ll save X minutes every day, but doing this will cost us a month of doing other things”. And since quantifying the future is hard, that statement becomes much harder to quantify.

                                  1. 1

                                    I might be wrong, but it seems you mix up business decisions with engineering decisions. And of course, making decisions in a world where you do not know everything is difficult. That’s what makes it so interesting :)

                                    But I believe I see your point: TD is a limited concept that does not encompass the entire problem-space of software quality. True.

                                    1. 2

                                      I might be wrong, but it seems you mix up business decisions with engineering decisions.

                                      TD is inherently a business decision, otherwise not having it/solving it when it does show up would be trivial: just do refactor/rewrite and take as long as you want.

                      1. 16

                        The click-bait title is not really backed up in any way in the content. The conclusion doesn’t even bring up company death. All-in-all, a rehash of the existing knowledge and statements around technical debt.

                        1. 3

                          Moreover, the fact that almost all of the most successful, rich companies have a pile of technical debt… some maybe inescapable… refutes the title so thoroughly that it almost seems more natural to ask if technical debt is a positive sign of success.

                          I’m not saying it is, so much as that looking only at correlations between companies that last and the amount of technical debt would make it look positive by default.

                          1. 4

                            I tend to view that as “Stagger onwards despite the mountain of technical debt, because the prime drivers in our current economy are not efficiency or competence of engineering”.

                          2. 1

                            I’m sorry if the content of my post wasn’t explicit enough. The 4 areas of technical debt I analyse lay out my idea of how they lead to the corrosion of the engineering organization and the company:

                            • Lack of shared understanding on the functionality of the product (leading to waste of effort and disruption of service)
                            • Inability to scale and loss of agility in respect to the competing organizations
                            • Inability to react to failures and learn from how the product is used by your clients
                            • Inability to scale your engineering team

                            My bad if the above points haven’t been clear enough from my post. Thanks for your feedback, really appreciated!

                            1. 2

                              No, I got those points out of it, but you didn’t link them to company death in any way. I’ve not done a study, but at the successful companies I’ve worked at, tech debt is pervasive, depending on the area.

                              Also, and this point has more to do with me than you, so I don’t hold it against you, I’m sick of these articles that take the form:

                              Observation -> Logical Assumptions Based On Observation -> Conclusion

                              There is no empiricism in there at all. So does tech debt make it harder to scale your engineering team? Maybe! But you’ve just presented some nice-sounding arguments rather than any concrete evidence. It’s easy to say “here’s a bad thing and here are the bad things I would assume happen because of it” but that’s a long way from “here’s a bad thing and here is the concrete evidence of outcomes because of it”.

                          1. 3

                            How many of you track your technical debt explicitly?

                            That is, with an issue label or somesuch marker indicating “a decision that might bite us was made and this is proof of that hesitation”.

                            I’ve been doing this now for about 2 years on my projects and it’s helped a few times when regret has come in the form of unhappy management.

                            1. 2

                              I don’t do it that explicitly in code. But on projects where I’ve been TL, I do make very explicit decisions about which parts of the codebase are allowed to go into debt and which parts need to be high quality from day one. It doesn’t always work out like that, but making the decision and communicating it helps a lot.

                            1. 1

                              The article makes an important observation that’s often overlooked. It’s really important for code to express its intent clearly. The key to working with code effectively lies in your ability to understand its purpose. Static typing can have a big impact on the way we structure code, and in many cases it can actually obscure the bigger picture, as the article illustrates.

                              1. 20

                                I don’t see how the article illustrates that. The article’s argument is that converting between the serialization format and your business object requires maintaining some other kind of code, and that costs something.

                                In my experience, there are other costs, which I found more expensive, in using one’s serialization layer as a business object:

                                1. The expected fields and what they are tend not to be well documented. You can just convert the JSON and use it as a dict or whatever, but it’s hard for someone who didn’t write the code to know what should be there. With static types, even if one doesn’t document the values, they are there for me to see.
                                2. The semantics of my serialization layer may not match the semantics of my language and, more importantly, what I want to express in my business logic. For example, JSON’s lack of integers.
                                3. The serialization changes over versions of the software, but there is a conversion to the business object that still makes sense; I can do that at the border and not affect the rest of my code (see the sketch below).
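
                                To make point 3 concrete, here’s a minimal Python sketch of the kind of border conversion I mean (the field names are made up):

                                from dataclasses import dataclass
                                import json

                                @dataclass
                                class Order:
                                    # The business object: expected fields and their types are explicit here.
                                    order_id: str
                                    quantity: int  # JSON only has "numbers"; normalize to int at the border.

                                def parse_order(raw: str) -> Order:
                                    # Convert once, at the boundary; the rest of the code only sees Order.
                                    data = json.loads(raw)
                                    return Order(order_id=data["order_id"], quantity=int(data["quantity"]))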

                                The call out to clojure.spec I found was a bit odd as well, isn’t that just what any reasonable serialization framework is doing for you?

                                As a static type enthusiast, I do not feel that the conversion on the edges of my program distracts from some bigger picture. I do feel that after the boundary of my program, I do not want to care what the user gave me; I just want to know that it is correct. If one’s language supports working directly with some converted JSON, fine.

                                On another note, I don’t understand the Hickey quote in this article. What is true of all cars that have driven off a road with a rumble strip? They went over the rumble strip. But cars drive off of roads without rumble strips too, so what’s the strength of that observation? Does the lack of a rumble strip somehow make you a more alert driver? Do rumble strips have zero effect on accidents? I don’t know, but in terms of static types, nobody knows, because the studies are really hard to do. This sort of rhetoric is just distracting and worthless.

                                In my experience, the talk of static types and bugs is really not the strength. I find refactoring and maintenance to be the strength of types. I can enter a piece of code I do not recognize, which is poorly documented, and start asking questions about the type of something and get trustworthy answers. I also find that looking at the type of a function tends to tell me a lot about it. Not always, but often enough for me to find it valuable. That isn’t to say one shouldn’t document code, but I find dynamically typed code tends to be as poorly documented as any other code, so I value the types.
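
                                For example, even an undocumented function with a hypothetical signature like this answers “what goes in?” and “what comes out?” before I read the body:

                                from dataclasses import dataclass
                                from datetime import date
                                from decimal import Decimal

                                @dataclass
                                class Invoice:  # made-up type, purely for illustration
                                    customer_id: str
                                    due: date
                                    amount: Decimal

                                def outstanding_balance(invoices: list[Invoice], as_of: date) -> dict[str, Decimal]:
                                    ...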

                                If one doesn’t share that experience and/or those values, then you’ll disagree, and that’s fine. I’m not saying static types are objectively superior, just that I tend to find them superior. I have not found a program that I wanted to express that I preferred to express with a dynamic language. I say this as someone who spends a fair amount of time in both paradigms. The lack of studies showing strengths in one direction or another doesn’t mean dynamic types are superior to static or vice versa; it just means that one can’t say one way or the other, and most of these blog posts are just echoes of one’s experience and values. I believe this blog post is falling into the trap of trying to find some explanatory value in the author’s experience when in reality, it’s just the author’s experience. I wish these blog posts started with a big “In my experience, the following has been true….”

                                Disclaimer: while I think Java has some strengths in terms of typing, I really much prefer spending my time in OCaml, which has a type system I much prefer, and that is generally what I mean when I say “static types”. But I think in this comment, something like Java mostly applies as well.

                                1. 3

                                  The call out to clojure.spec I found was a bit odd as well, isn’t that just what any reasonable serialization framework is doing for you?

                                  My experience is that Clojure spec addresses the three points. However, Spec can live on the side as opposed to being mixed with the implementation, and it allows you to only specify/translate the types for the specific fields that you care about. I think this talk does a good job outlining the differences between the approaches.

                                  On another note, I don’t understand the Hickey quote in this article.

                                  What you ultimately care about is semantic correctness, but type systems can actually have a negative impact here. For example, here’s insertion sort implemented in Idris; it’s 260 lines of code. Personally, I have a much easier time understanding that the following Python version is correct:

                                  def insertionsort(a_list):
                                      # In-place insertion sort: grow a sorted prefix one element at a time.
                                      for i in range(1, len(a_list)):
                                          tmp = a_list[i]
                                          k = i
                                          # Shift larger elements right until tmp's slot is found.
                                          while k > 0 and tmp < a_list[k - 1]:
                                              a_list[k] = a_list[k - 1]
                                              k -= 1
                                          a_list[k] = tmp
                                  

                                  I can enter a piece of code I do not recognize, which is poorly documented, and start asking questions about the type of something and get trustworthy answers.

                                  I’ve worked with static languages for about a decade. I never found that the type system was a really big help in this regard. What I want to know first and foremost when looking at code I don’t recognize is the intent of the code. Anything that detracts from being able to tell that is a net negative. The above example contrasting Idris and Python is a perfect example of what I’m talking about.

                                  Likewise, I don’t think that either approach is superior to the other. Both appear to work effectively in practice, and seem to appeal to different mindsets. I think that alone makes both type disciplines valuable.

                                  It’s also entirely possible that the language doesn’t actually play a major role in software quality. Perhaps, process, developer skill, testing practices, and so on are the dominant factors. So, the right language inevitably becomes the one that the team enjoys working with.

                                  1. 7

                                    That’s not a simple sort in Idris, it’s a formal, machine checked proof that the implemented function always sorts. Formal verification is a separate field from static typing and we shouldn’t conflate them.

                                    For the record, the Python code fails at runtime if you pass it a list of incomparables, while the Idris code will catch that at compile time.
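
                                    Concretely, with the Python version above:

                                    insertionsort([3, "a", 1])
                                    # TypeError: '<' not supported between instances of 'str' and 'int'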

                                    I’m a fan of both dynamic typing and formal methods, but I don’t want to use misconceptions of the latter used to argue for the former.

                                    1. 1

                                      machine checked proof that the implemented function always sorts.

                                      And if comparing spec vs test sizes apples to apples, that means we need a test for every possible combination of every value that function can take to be sure it will work for all of them. On 64-bit systems, that’s maybe 18,446,744,073,709,551,615 values per variable, with a multiplying effect when they’re combined with potential program orderings or multiple variable inputs. It could take a fair amount of space to code all that in as tests, with execution probably requiring quantum computers, quark-based FPGAs, or something along those lines if tests must finish running in reasonable time. There’s not a supercomputer on Earth that could achieve with testing the assurance some verified compilers or runtimes got with formal verification of formal, static specifications.
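
                                      To put rough numbers on that, a quick sketch (assuming a generous billion tests per second):

                                      values_per_var = 2 ** 64               # distinct values of one 64-bit input
                                      pairs = values_per_var ** 2            # just two inputs: 2**128 combinations
                                      seconds = pairs / 1e9                  # at a billion tests per second
                                      print(seconds / (60 * 60 * 24 * 365))  # ~1.1e22 years of testing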

                                      Apples to apples, the formal specs with either runtime checks or verification are a lot smaller, faster, and cheaper for total correctness than tests.

                                    2. 3

                                      For example, here’s insertion sort implemented in Idris; it’s 260 lines of code. (…) Personally, I have a much easier time understanding that the following Python version is correct: (snippet)

                                      An actually honest comparison would include a formal proof of correctness of the Python snippet. Merely annotating your Python snippet with pre- and postcondition annotations (which, in the general case, is still a long way from actually producing a proof) would double its size. And this is ignoring the fact that many parts in your snippet have (partially) user-definable semantics, like the indexing operator and the len function. Properly accounting for these things can only make a proof of correctness longer.
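
                                      To give a sense of scale: even plain runtime postcondition checks, which are far short of a proof, already add noticeable bulk. A sketch, reusing the insertionsort from the snippet above:

                                      from collections import Counter

                                      def is_sorted(xs):
                                          return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

                                      def insertionsort_checked(a_list):
                                          original = Counter(a_list)  # snapshot for the permutation check
                                          insertionsort(a_list)
                                          # Postconditions: result is ordered and is a permutation of the input.
                                          assert is_sorted(a_list)
                                          assert Counter(a_list) == original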

                                      That being said, that a proof of correctness of insertion sort takes 260 lines of code doesn’t speak very well of the language the proof is written in.

                                      What I want to know first and foremost when looking at code I don’t recognize is the intent of the code.

                                      Wait, the “intent”, rather than what the code actually does?

                                      1. 1

                                        An actually honest comparison would include a formal proof of correctness of the Python snippet.

                                        That’s not a business requirement. The requirement is having a function that does what was intended. The bigger point you appear to have missed is that a complex formal specification is itself a program! What method do you use to verify that the specification is describing the intent accurately?

                                        Wait, the “intent”, rather than what the code actually does?

                                        Correct, these are two entirely different things. Verifying that the code does what was intended is often the difficult task when writing software. I don’t find static typing to provide a lot of assistance in that regard. In fact, I’d say that the Idris insertion sort implementation is actually working against this goal.

                                        1. 4

                                          The bigger point you appear to have missed is that a complex formal specification is itself a program!

                                          This is not true. The specification is a precondition-postcondition pair. The specification might not even be satisfiable!

                                          What method do you use to verify that the specification is describing the intent accurately?

                                          Asking questions. Normally users have trouble thinking abstractly, so when I identify a potential gap in a specification, I formulate a concrete test case where the specification might not match the user’s intention, and I ask them what the program’s intended behavior in this test case is.

                                          I don’t find static typing to provide a lot of assistance in that regard.

                                          I agree, but with reservations. Types don’t write that many of my proofs for me, but, under certain reasonable assumptions, proving things rigorously about typed programs is easier than proving the same things equally rigorously about untyped programs.

                                          1. 1

                                            This is not true. The specification is a precondition-postcondition pair. The specification might not even be satisfiable!

                                            A static type specification is a program plain and simple. In fact, lots of advanced type systems, such as the one found in Scala, are actually Turing complete. The more things you try to encode formally, the more complex this program becomes.

                                            Asking questions. Normally users have trouble thinking abstractly, so when I identify a potential gap in a specification, I formulate a concrete test case where the specification might not match the user’s intention, and I ask them what the program’s intended behavior in this test case is.

                                            So, how is this different from what people do when they’re writing specification tests?

                                            1. 2

                                              A static type specification is a program plain and simple.

                                              Not any more than a (JavaScript-free) HTML document is a program for your browser to run.

                                              In fact, lots of advanced type systems. such as one found in Scala, are actually Turing complete.

                                              I stick to Standard ML, whose type system is deliberately limited. Anything that can’t be verified by type-checking (that is, a lot), I prove by myself. Both the code and the proof of correctness end up simpler this way.

                                              So, how is this different from what people do when they’re writing specification tests?

                                              I am not testing any code. I am testing whether my specification captures what the user wants. Then, as a completely separate step, I write a program that provably meets the specification.

                                              1. 1

                                                I stick to Standard ML, whose type system is deliberately limited. Anything that can’t be verified by type-checking (that is, a lot), I prove by myself. Both the code and the proof of correctness end up simpler this way.

                                                At that point it’s really just degrees of comfort in how much stuff you want to prove statically at compile time. I find that runtime contracts like Clojure Spec are a perfectly fine alternative.

                                                I am testing whether my specification captures what the user wants.

                                                I’ve never seen that done effectively using static types myself, but perhaps you’re dealing with a very different domain from the ones I’ve worked in.

                                                1. 1

                                                  At that point it’s really just degrees of comfort in how much stuff you want to prove statically at compile time.

                                                  This is not a matter of “degree” or “comfort” or “taste”. Everything has to be proven statically, in the sense of “before the program runs”. However, not everything has to be proven or validated by a type checker. Sometimes directly using your brain is simpler and more effective.

                                                  I am testing whether my specification captures what the user wants.

                                                  I’ve never seen that done effectively using static types myself

                                                  Me neither. I just use the ability to see possibilities outside the “happy path”.

                                                  1. 1

                                                    However, not everything has to be proven or validated by a type checker.

                                                    I see that as a degree of comfort. You’re picking and choosing what aspects of the program you’re going to prove formally. The range is from having a total proof to having no proof at all.

                                                    Sometimes directly using your brain is simpler and more effective.

                                                    Right, and the part we disagree on is how much assistance we want from the language and in what form.

                                                    1. 1

                                                      You’re picking and choosing what aspects of the program you’re going to prove formally.

                                                      I’m not “picking” anything. I always prove rigorously that my programs meet their functional specifications. However, my proofs are meant for human rather than mechanical consumption, hence:

                                                      • Proofs cannot be too long.
                                                      • Proofs cannot demand simultaneous attention to more detail than I can handle.
                                                      • Abstractions are evaluated according to the extent to which they shorten proofs and compartmentalize details.

                                                      Right, and the part we disagree on is how much assistance we want from the language and in what form.

                                                      The best kind of “assistance” a general-purpose language can provide is having a clean semantics and getting out of the way when it can’t help. If you ever actually try to prove a program[0] correct, you will notice that:

                                                      • Proving that a control flow point is unreachable may require looking arbitrarily far back into the history of the computation. Hence, you want as few unreachable control flow points as possible, preferably none.

                                                      • Proving that a procedure call computes a result of interest requires making assumptions about the procedure’s precondition-postcondition pair. For second-class (statically dispatched) procedure calls, these assumptions can be discharged immediately—you know what procedure will be called. For first-class (dynamically dispatched) procedure calls, these assumptions may only be discharged in a very remote[1] part of your program. Hence, first-class procedures ought to be used sparingly.


                                                      [0] Actual programs, not just high-level algorithm descriptions that your programs allegedly implement.

                                                      [1] One way to alleviate the burden of communicating precondition-postcondition requirements between far away parts in a program is to systematically use so-called type class laws, but this is not a widely adopted solution.

                                                      1. 1

                                                        Right, and the other approach to this problem is to use runtime contracts such as Clojure Spec. My experience is that this approach makes it much easier to express meaningful specifications. I’m also able to use it where it makes the most sense, which tends to be at the API level. I find there are benefits and trade-offs in both approaches in practice.

                                                        1. 1

                                                          Right, and the other approach to this problem is to use runtime contracts such as Clojure Spec.

                                                          This is not a proof of correctness, so no.

                                                          I’ve already identified two things that don’t help: runtime checks and overusing first-class procedures. Runtime-checked contracts have the dubious honor of using the latter to implement the former in order to achieve nothing at all besides making my program marginally slower.

                                                          My experience is that this approach makes it much easier to express meaningful specifications.

                                                          I can already express meaningful specifications in many-sorted first-order logic. The entirety of mathematics is available to me—why would I want to confine myself to what can be said in a programming language?

                                                          I’m also able to use it where it makes the most sense, which tends to be at the API level.

                                                          It makes the most sense before you even begin to write your program.

                                                          1. 2

                                                            This is not a proof of correctness, so no.

                                                            I think this is the key disconnect we have here. My goal is to produce working software for people to use, and writing a proof of correctness is a tool for achieving that. There are other viable tools that each have their pros and cons. My experience tells me that writing proofs of correctness is not the most effective way to achieve the goal of delivering working software on time. Your experience clearly differs from mine, and that’s perfectly fine.

                                                            I can already express meaningful specifications in many-sorted first-order logic. The entirety of mathematics is available to me—why would I want to confine myself to what can be said in a programming language?

                                                            You clearly would not. However, there are plenty of reasons why other people prefer this. A few reasons off the top of my head are the following. It’s much easier for most developers to read runtime contracts. This means that it’s easier to onboard people and train them. The contracts tend to be much simpler and more expressive. This makes it easier to read and understand them. They allow you to trivially express things that are hard to express at compile time. Contracts can be used selectively in places where they make the most sense. Contracts can be open while types are closed.

                                                            It makes the most sense before you even begin to write your program.

                                                            Again, we have a very divergent experience here. I find that in most situations I don’t know the shape of the data up front, and I don’t know what the solution is going to be ultimately. So, I interactively solve problems using a REPL integrated editor. I might start with a particular approach, scrap it, try something else, and so on. Once I settle on a way I want to do things, I’ll add a spec for the API.

                                                            Just to be clear, I’m not arguing that my approach is somehow better, or trying to convince you to use it. I’m simply explaining that having tried both, I find it works much better for me. At the same time, I’ve seen exactly zero empirical evidence to suggest that your approach is more effective in practice. Given that, I don’t think we’re going to gain anything more from this conversation. We’re both strongly convinced by our experience to use different tools and workflows. It’s highly unlikely that we’ll be changing each other’s minds here.

                                                            Cheers

                                                            1. 2

                                                              The contracts tend to be much simpler and more expressive. (…) Contracts can be open while types are closed.

                                                              I said “first-order logic”, not “types”. Logic allows you to express things that are impossible in a programming language, like “what is the distribution of outputs of this program when fed a sample of a stochastic process?”—which your glorified test case generator cannot generate.

                                                              Just to be clear, I’m not arguing that my approach is somehow better, or trying to convince you to use it. I’m simply explaining that having tried both, I find it works much better for me.

                                                              I’m not trying to convince you of anything either, but I honestly don’t think you have tried using mathematical logic. You might have tried, say, Haskell or Scala, and decided it’s not your thing, and that’s totally fine. But don’t conflate logic (which is external to any programming language) with type systems (which are often the central component of a programming language’s design, and certainly the most difficult one to change). It is either ignorant or dishonest.

                                              2. 1

                                                You can do more with a formal spec than with a test. Formal specs can be used to generate equivalent tests, for one, like in EiffelStudio. Then, they can be used with formal methods tools, automated or full style, to prove they hold for all values. They can also be used to aid optimization by the compiler, as in the examples ranging from Common Lisp to Strongtalk that I gave you in another comment. There’s even been some work on natural-language systems for formal specs, which might be used to generate English descriptions one day.

                                                So, one gets more ROI out of specs than tests alone.

                                                1. 2

                                                  That’s why I find Clojure Spec to be a very useful tool. My experience is that runtime contracts are a better tool for creating specifications. Contracts focus on the usage semantics, while I find that types only have an indirect relationship with them.

                                                  At the same time, contracts are opt in, and can be created where they make the most sense. I find this happens to be at the boundaries between components. I typically want to focus on making sure that the API works as intended.

                                                  1. 2

                                                    Forgot to say thanks for the Clojure spec link. That was a nice write-up. Good that they added that to Clojure.

                                                2. 1

                                                  “A static type specification is a program, plain and simple.”

                                                  I don’t think so, but I could be wrong. My reading on formal specs showed they were usually precise descriptions of what is to be done. A program is almost always a description of how to do something in a series of steps. Those are different things. Most formal specs aren’t executable on their own either, since they’re too abstract.

                                                  There are formal specifications of how something is done in a concrete way that captures all its behaviors, as with seL4. You could call those programs. The other stuff sounds different, though, since it’s about the what, or is too abstract to produce the result the programmer wants. So, I default to “not a program”, with some exceptions.

                                                  1. 2

                                                    It’s a metaprogram that’s executed by the compiler, with your program as the input. Obviously, you can have very trivial specifications that don’t really qualify as programs. However, you can also have very complex specifications. There’s even a paper on implementing a type debugger for Scala. It’s hard to argue that specifications that need a debugger aren’t programs.

                                                    1. 1

                                                      I think this boils down to the definition of “program.” We may have different ones. Mine is an executable description of one or more steps that turn concrete inputs into concrete outputs, optionally with state. A metaprogram can do that: it does it on program text/symbols. A formal specification usually can’t do that, due to it being too abstract, non-executable, or having no concept of I/O. Specs are typically input to some program or embedded in one. I previously said some could qualify, especially in tooling like Isabelle or Prolog.

                                                      So, what is your definition of a program, so I can test whether formal specs are programs or metaprograms against that definition? Also out of curiosity.

                                                      1. 2

                                                        My definition is that a program is a computational process that accepts some input, and produces some output. In case of the type system, it accepts the source code as its input, and decides whether it matches the specified constraints.
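
                                                        On that definition a checker is literally a program, which is easy to exhibit; here is a toy sketch in Python (my illustration, not any real type system) that takes source code as input and decides whether it meets a constraint:

                                                        # A toy checker: source code in, verdict out. The constraint
                                                        # here is "every function must annotate its return type."
                                                        import ast

                                                        def check(source: str) -> bool:
                                                            tree = ast.parse(source)
                                                            return all(node.returns is not None
                                                                       for node in ast.walk(tree)
                                                                       if isinstance(node, ast.FunctionDef))

                                                        print(check("def f(x) -> int:\n    return x"))  # True
                                                        print(check("def g(x):\n    return x"))         # False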

                                                        1. 1

                                                          Well, I could see that. It’s an abstract equivalent to mine, it looks like. I’ll hold off debating that until I’m more certain of what definition I want to go with.

                                              3. 3

                                                That’s not a business requirement. The requirement is having a function that does what was intended

                                                You’re comparing two separate things, though! The Idris is a proven-correct function, while the Python is just a regular function. If all the business wants is a “probably correct” function, the Idris code would be just a few lines, too.

                                                1. 1

                                                  The point here is that formalism does not appear to help ensure the code is doing what’s intended. You have to be confident that you’re proving the right thing. The more complex your proof is, the harder it becomes to definitively say that it is correct. Using less formalism in Python or Idris results in code where it’s easier for the human reader to tell the intent.

                                                  1. 4

                                                    You can say your proof is correct because you have a machine check it for you. Empirically, we see that formally verified systems are less buggy than unverified systems.

                                                    1. 2

                                                      A machine can’t check that you’re proving what was intended. A human has to understand the proof and determine that it matches their intent.

                                                      1. 2

                                                        A human has to check the intent (validate) either way. A machine can check that the implementation is correct (verify).

                                                        As in the Idris example, I have to validate that the function is supposed to sort. Once I know that’s the intention, I can be assured that it does, in fact, sort, because I have a proof.

                                                        1. 1

                                                          What I’m saying is that you have to understand the proof, and that can be hard to do with complex proofs. Meanwhile, other methods, such as runtime contracts or even tests, are often easier to understand.

                                                          1. 1

                                                            That’s not how formal verification works, though. Let’s say my intention is “sort a list.” I write a function that I think does this. Then I write a formal specification, like “\A j, k \in 1..Len(sorted): j < k => sorted[j] <= sorted[k]”. Finally, I write the proof.

                                                            I need to validate that said specification is what I want. But the machine can verify the function matches the specification, because it can examine the proof.

                                                            1. 1

                                                              The fact that we’re talking past each other is a good illustration of the problem I’m trying to convey here. I’m talking about the human reader having to understand specifications like \A j, k \in 1..Len(sorted): j < k => sorted[j] <= sorted[k], only much longer. It’s easy to misread a symbol, and misinterpret what is being specified.

                                                              1. 2

                                                                You did say “have to understand the proof” (not specification) before. I strongly agree with the latter - the language we use for writing specs can easily get so complex that the specs are more error-prone than their subjects.

                                                                I once wrote a SymEx tool for PLCs and found that specs in my initial choice of property language (LTL) were much harder to get right than the PLC code itself. I then looked at the properties that would likely need to be expressed, and cut the spec language down to a few higher-level primitives. This actually helped a lot.

                                                                Even if restricting the property language isn’t an option, having a standard library (or package ecosystem) of properties would probably get us rather close - so instead of \A j, k \in ... we could write Sorted(s) and trust the stdlib / package definition of Sorted to do its name justice.
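
                                                                As a sketch of what such a library-defined property could look like (hypothetical, and in Python rather than a spec language), Sorted is just a direct executable reading of the quantified formula:

                                                                # Sorted(s) <=> \A j, k \in 1..Len(s): j < k => s[j] <= s[k]
                                                                def Sorted(s) -> bool:
                                                                    return all(s[j] <= s[k]
                                                                               for j in range(len(s))
                                                                               for k in range(j + 1, len(s)))

                                                                assert Sorted([1, 2, 2, 3])
                                                                assert not Sorted([2, 1])

                                                                A reader then audits the definition of Sorted once, instead of re-parsing the quantifiers at every use site.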

                                            2. 4

                                              For example, here’s insertion sort implemented in Idris; it’s 260 lines of code

                                              When this code sample has been brought up before (perhaps by you) it’s been pointed out that this is not expected to be a production implementation and more of an example of playing with the type system. There is plenty of Python golf code out there too that one could use as an example to make a point. But, if we are going to compare things, the actual implementation of sort in Python is what… several hundred lines of C code? So your Python insertion sort might be short and sweet, but no more the production code people use than the Idris one. But if the Idris one were the production implementation, I would rather spend time understanding it than the Python sort function.

                                              It’s also entirely possible that the language doesn’t actually play a major role in software quality. Perhaps process, developer skill, testing practices, and so on are the dominant factors.

                                              That is likely true, IMO. I think it’s interesting that one could replace “type system” in the Hickey quote with “testing” or “code review” and the statement would still be true, but people seem to zero in on types. No-one serious says that we shouldn’t have testing because we still have bugs in software.

                                              I never found that the type system was a really big help in this regard. What I want to know first and foremost when looking at code I don’t recognize is the intent of the code.

                                              My experience has definitely not been this. Right now I’m doing maintenance on some code and it has a call in it: state.monitors.contains(monitor), and I don’t have a good way to figure out what state or monitors is without grepping around in the code. In OCaml I’d just hit C-t and it’d tell me what it is. I find this to be a common pattern in my life, as I have tended to be part of the clean-up crew in projects lately. The intent of that code is pretty obvious, but that doesn’t help me much for the refactoring I’m doing. But experiences vary.

                                              1. 2

                                                When this code sample has been brought up before (perhaps by you) it’s been pointed out that this is not expected to be a production implementation and more of an example of playing with the type system.

                                                The point still stands, though: the more properties you try to encode formally, the more baroque the code gets. Sounds like you’re agreeing that it’s often preferable to avoid such formalisms in production code.

                                                But, if we are going to compare things, the actual implementation of sort in Python is what… several hundred lines of C code? So your Python insertion sort might be short and sweet, but no more the production code people use than the Idris one.

                                                The sort implementation in Python handles many different kinds of sorts. If you took the approach of describing all of those hundreds of lines of C with types in Idris, that would result in many thousands of lines of code. So, you still have the same problem there.

                                                No-one serious says that we shouldn’t have testing because we still have bugs in software.

                                                People argue about what kind of testing is necessary or useful all the time, though. Ultimately, the goal is to have a semantic specification, and to be able to tell that your code conforms to it. Testing is one of the few known effective methods for doing that. This is why some form of testing is needed whether you use static typing or not. To put it another way, testing simply isn’t optional for serious projects. Meanwhile, many large projects are developed in dynamic languages just fine.

                                                My experience has definitely not been this. Right now I’m doing maintenance on some code and it has a call in it: state.monitors.contains(monitor), and I don’t have a good way to figure out what state or monitors is without grepping around in the code.

                                                In Clojure, I’d just hit cmd+enter from the editor to run the code in the REPL and see what a monitor looks like. My team has been working with Clojure for over 8 years now, and I often end up working with code I’m not familiar with.

                                                1. 3

                                                  Sounds like you’re agreeing that it’s often preferable to avoid such formalisms in production code.

                                                  At the moment, yes. I am not a type theorist, but as far as I have seen, dependent types are not at the point where we know how to use them effectively in a production setting yet. But I do make pretty heavy use of types elsewhere in codebases I work on, and try to encode what invariants I can in them (which is pretty often).

                                                  If you took the approach of describing all of those hundreds of lines of C with types in Idris, that would result in many thousands of lines of code. So, you still have the same problem there.

                                                  Maybe! I don’t actually know. The types in the Idris implementation might be sufficient to get very performant code out of it (although I doubt it at this point).

                                                  In Clojure, I’d just hit cmd+enter from the editor to run the code in the REPL …

                                                  I don’t know anything about Clojure; in the case I’m working on, running the code is challenging, as the part I’m refactoring needs a bunch of dependencies and data, and constructs different things based on runtime parameters. Even if I could run it on my machine, I don’t know how much I’d trust it. The power of dynamic types at work.

                                                  1. 2

                                                    I don’t know anything about Clojure; in the case I’m working on, running the code is challenging, as the part I’m refactoring needs a bunch of dependencies and data, and constructs different things based on runtime parameters. Even if I could run it on my machine, I don’t know how much I’d trust it. The power of dynamic types at work.

                                                    There is a fundamental difference in workflows here. With Clojure, I always work against a live running system. The REPL runs within the actual application runtime, and it’s not restricted to my local machine. I can connect a REPL to an application in production, and inspect anything I want there. In fact, I have done just that on many occasions.

                                                    This is indeed the power of dynamic types at work. Everything is live, inspectable, and reloadable. The reality is that your application will need to interact with an outside world you have no control over. You simply can’t predict at compile time everything that could happen at runtime. Services go down, APIs change, and so on. When you have a system that can be manipulated at runtime, you can easily adapt to the changes without any downtime.
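
                                                    The underlying mechanism is easy to sketch. Here is a hypothetical Python analogue (far cruder than a Clojure REPL, but the same principle: names are late-bound, so a running process can accept new definitions):

                                                    # Rebinding a late-bound name changes behavior without a restart.
                                                    def handler(msg):
                                                        return "v1: " + msg

                                                    def serve(msg):
                                                        return handler(msg)   # looked up at call time, not fixed up front

                                                    print(serve("ping"))      # v1: ping

                                                    # A "patch" evaluated inside the live process, REPL-style:
                                                    exec("def handler(msg):\n    return 'v2: ' + msg", globals())

                                                    print(serve("ping"))      # v2: ping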

                                                    1. 1

                                                      That sounds like a good argument for manipulating something at runtime, but not for dynamic types. You can build statically-typed platforms that allow runtime inspection or modification; the changes will just be type-checked before being uploaded. The description of Strongtalk comes to mind.

                                                      1. 2

                                                        Static type systems are typically global, and this places a lot of restrictions on what can be modified at runtime. With a dynamic language you can change any aspect of the running application, while arbitrary eval is problematic for static type systems.

                                                      2. 1

                                                        When you have a system that can be manipulated at runtime, you can easily adapt to the changes without any downtime.

                                                        There are architectural choices that, in most situations, address this point better, IME. That is, a standard setup of load balancers in front of application servers, and something like CARP on the load balancers. For street cred: I’ve worked as an Erlang developer.

                                                        1. 1

                                                          Sure, you can work around that by adding a lot of complexity to your infrastructure. That doesn’t change the fact that it is a limitation.

                                                          1. 1

                                                            In my experience, if uptime is really important, the architecture I’m referring to is required anyway, to deal with all the forms of failure other than the code having a bug in it. So, again in my experience, while I agree it is a limitation, it is overall simpler. But this whole static-vs-dynamic thing is about people being willing to accept some limitations for other, perceived, benefits.

                                                            1. 1

                                                              My experience is that it very much depends. I’ve worked on many different projects; for some, such infrastructure was the right solution, and for others it was not. For example, consider the case of the NASA Deep Space 1 mission.

                                                              1. 2

                                                                I’m not sure how Deep Space 1 suits the point you’re making. Remote Agent on DS1 was mostly formally verified (using SPIN, I believe) and the bug was in the piece of code that was not formally verified.

                                                                1. 1

                                                                  The point is that it was possible to fix this bug at runtime, in a system that could not be load balanced or restarted. In practice, you don’t control the environment, and you simply cannot account for everything that can go wrong at compile time. Maybe your chip gets hit by a cosmic ray, maybe a remote sensor gets damaged, maybe a service you rely on goes down. Being able to change code at runtime is extremely valuable in many situations.

                                                                  1. 1

                                                                    The things you listed can be accounted for at build time. Certainly NASA doesn’t send chips that are not radiation hardened into space saying “we can just remote debug it”. Sensors getting damaged is expected, and expecting services one relies on to go down is table stakes for a distributed system. And while I find NASA examples really cool, I do not find them compelling. NASA does a lot of things that a vast majority of developers don’t and probably shouldn’t do. Remember, NASA also formally verifies some of their software components, but you aren’t advocating for that, which makes the NASA example confusing as to which lesson one is supposed to take from it. And those cosmic rays are just as likely to bring down one’s remote debugging facility as they are to break the system’s other components.

                                                                    1. 1

                                                                      I think you’re fixating too much on NASA here. The example is just an illustration of the power of having a reloadable system. There are plenty of situations where you’re not NASA and this is an extremely useful feature. If you can’t see the value in it, I really don’t know what else to say.

                                                                      1. 1

                                                                        I’m responding to the example you gave; if you have other examples that are more compelling, I would have expected you to post those.

                                                                        1. 1

                                                                          What’s compelling is in the eye of the beholder. It’s pretty clear that there’s nothing I can say that you will find convincing. Much like I’m not convinced by your position.

                                          1. 1

                                            I couldn’t really understand why the author wants to roll back at all. Just store the order linked to the failed payment. Maybe clean up failed orders older than 10 years periodically.

                                            1. 2

                                              Maybe another way to think about this is “Can I not do FP in my language?”. Yes for JavaScript and Scala and Rust - you can write procedural code to your heart’s content in these languages, even if JavaScript gives you the tools to use functional abstractions and Scala and Rust actively encourage them. No for Haskell and Elm - there’s no way to write code that looks imperative in these languages.

                                              1. 9

                                                No for Haskell and Elm - there’s no way to write code that looks imperative in these languages.

                                                main = do
                                                  putStrLn "What is your name?"
                                                  name <- getLine
                                                  putStrLn $ "Hello, " ++ name
                                                
                                                1. 5

                                                  No for Haskell and Elm - there’s no way to write code that looks imperative in these languages.

                                                  What do you mean by “looks imperative”? Doing everything inside the IO monad is not much different from writing a program in an imperative language.

                                                  1. 2

                                                    You mean StateT and IO. And then learning how to use both.

                                                  2. 3

                                                    Writing Haskell at my day job, I’ve seen my fair share of Fortran written in it. The language is expressive enough to host any design pathology you throw at it. No language will save you from yourself.

                                                  1. 6

                                                    I think the faulty assumption is that the happiness of users and developers is more important to the corporate bottom line than full control over the ecosystem.

                                                    Linux distributions have shown for a decade that providing a system for reliable software distribution while retaining full user control works very well.

                                                      Both Microsoft and Apple kept the first part, but dropped the second part. Allowing users to install software not sanctioned by them is a legacy feature that is being removed – slowly, so as not to cause too much uproar from users.

                                                    Compare it to the time when Windows started “phoning home” with XP … today it’s completely accepted that it happens. The same thing will happen with software distributed outside of Microsoft’s/Apple’s sanctioned channels. (It indeed has already happened on their mobile OSes.)

                                                    1. 8

                                                        As a long-time Linux user and believer in the four freedoms, I find it hard to accept that Linux distributions demonstrate that “providing a system for reliable software distribution while retaining full user control works very well”. Linux distros seem to work well for enthusiasts and places with dedicated support staff, but we are still at least a century away from the year of Linux on the desktop. Even many developers (who probably have some overlap with the enthusiast community) have chosen Macs, with unreliable software distribution like Homebrew and incomplete user control.

                                                      1. 2

                                                        I agree with you that Linux is still far away from the year of Linux on the desktop, but I think it is not related to the way Linux deals with software distribution.

                                                        There are other, bigger issues with Linux that need to be addressed.

                                                          In the end, the biggest impact on adoption would be some game studio releasing their AAA title as a Linux exclusive. That’s highly unlikely, but I think it illustrates well that Linux’s success on the desktop hinges on external factors which are outside of the control of users and contributors.

                                                        1. 2

                                                            All the devs I know who use a Mac run Linux in some virtualisation option instead of Homebrew for work. Obviously that’s not a scientific study by any means.

                                                          1. 8

                                                              I’ll be your counter-example. Homebrew is a great system; it’s not unreliable at all. I run everything on my Mac when I can, which is pretty much everything except commercial Linux-only vendor software. It all works just as well, and sometimes better, so why bother with the overhead and inconvenience of a VM? Seriously, why would you do that? It’s nonsense.

                                                            1. 4

                                                                Maybe a VM makes sense if you have very specific needs. But really, macOS is an excellent UNIX, and for most development you won’t notice much difference. Think Go, Java, Python, or Ruby work. Millions of developers probably write on macOS and deploy on Linux. I’ve been doing this for a long time, and ‘oh, this needs a Linux-specific exception’ is a rarity.

                                                              1. 4

                                                                you won’t notice much difference.

                                                                  Some time ago I was very surprised to find that HFS is not case sensitive (by default). Because of a bad letter-case in an import, my script would fail on Linux (production) but worked on my Mac. Took me about 30 minutes to figure this out :)
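
                                                                  The failure mode is trivially reproducible with a plain file open (imports behave analogously); assume a file named config.json on disk:

                                                                  # File on disk: config.json (note the miscased name below).
                                                                  data = open("Config.json").read()
                                                                  # Succeeds on default (case-insensitive) HFS+/APFS,
                                                                  # raises FileNotFoundError on a case-sensitive Linux filesystem.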

                                                                1. 3

                                                                    You can make a case-sensitive partition for code. And now with APFS, volumes are continuously variable in size, so you won’t have to deal with choosing how much space goes to code vs. system.

                                                                  1. 1

                                                                      A case-sensitive HFS+ slice on a disk image file is a good solution too.

                                                                  2. 2

                                                                    Have fun checking out a git repo that has Foo and foo in it :)

                                                                    1. 2

                                                                        It was bad when Microsoft did it in VB, and it’s bad when Apple does it in their filesystem lol.

                                                                  3. 2

                                                                    Yeah definitely. And I’ve found that accommodating two platforms where necessary makes my projects more robust and forces me to hard code less stuff. E.g. using pkg-config instead of yolocoding path literals into the build. When we switched Linux distros at work, all the packages that worked on MacOS and Linux worked great, and the Linux only ones all had to be fixed for the new distro. 🙄

                                                                  4. 2

                                                                      I did it for a while because I dislike the Mac UI a lot but needed to run it for some work things. Running in a full-screen VM wasn’t that bad. Running native is better, but virtualization is pretty first-class at this point. It was actually convenient in a few ways too. I had to hand my Mac in for repair at one point, so I just copied the VM to a new machine and was ready to run in minutes.

                                                                    1. 3

                                                                      I use an Apple computer as my home machine, and the native Mac app I use is Terminal. That’s it. All other apps are non-Apple and cross-platform.

                                                                        That said, macOS does a lot of nice things. For example, if you try to unmount a drive, it will tell you which application is still using it, so you can quit that application and unmount. Windows (10) still can’t do that; you have to look in the Event Viewer(!) to find the error message.

                                                                      1. 3

                                                                        In case it’s unclear, non-Native means webapps, not software that doesn’t come preinstalled on your Mac.

                                                                        1. 3

                                                                          It is actually pretty unclear what non-Native here really means. The original HN post is about sandboxed apps (distributed through the App Store) vs non-sandboxed apps distributed via a developer’s own website.

                                                                          Even Gruber doesn’t mention actual non-Native apps until the very last sentence. He just talks/quotes about sandboxing.

                                                                          1. 3

                                                                            The second sentence of the quoted paragraph says:

                                                                            Cocoa-based Mac apps are rapidly being eaten by web apps and Electron pseudo-desktop apps.

                                                                      2. 1

                                                                        full-screen VM high-five

                                                                      3. 1

                                                                          To have an environment closer to production, I guess (or maybe ease of installation; dunno, never used Homebrew). I don’t have to use a Mac anymore, so I run a pure distro, but everyone else I know uses virtualisation or containers on their Macs.

                                                                        1. 3

                                                                            Homebrew is really, really easy. I actually like it over a lot of Linux package managers because it has first-class support for building the software with different flags, and it has binaries for the default flag set for fast installs. Installing a package on Linux with alternate build flags sucks hard in anything except portage (Gentoo), and portage is way less usable than brew. It also supports having multiple versions of packages installed, kind of halfway to what nix does. And unlike Debian/CentOS, it doesn’t have opinions about what should be “in the distro”; it just has up-to-date packages for everything and lets you pick your own philosophy.

                                                                            The only thing that sucks is OpenSSL, ever since Apple removed it from macOS. Brew packages handle it just fine, but the Python package system is blatantly garbage and doesn’t handle it well at all. You sometimes have to pip install with CFLAGS set, or with a package-specific env var, because Python is trash and doesn’t standardize any of this.

                                                                          But even on Linux using python sucks ass, so it’s not a huge disadvantage.

                                                                          1. 1

                                                                            Installing a package on Linux with alternate build flags sucks hard in anything except portage

                                                                            You mention nix in the following sentence, but installing packages with different flags is also something nix does well!

                                                                            1. 1

                                                                                Yes, true, but I don’t want to use NixOS even a little bit. I’m thinking more in comparison to mainstream distro package managers.

                                                                            2. 1

                                                                                For all its ease, Homebrew only works properly if used by a single user who is also an administrator and who only ever installs software through Homebrew. And then “works properly” means “installs software in a global location as the current user”.

                                                                              1. 1

                                                                                by a single user who is also an administrator

                                                                                So like a laptop owner?

                                                                                1. 1

                                                                                  A laptop owner who hasn’t heard that it’s good practice to not have admin privileges on their regular account, maybe.

                                                                              2. 1

                                                                                But even on Linux using python sucks ass, so it’s not a huge disadvantage.

                                                                                    Can you elaborate on this? You create a virtualenv and go from there; everything works.

                                                                                1. 2

                                                                                      It used to be worse, when mainstream distros would have either 2.4 or 2.6/2.7 and there wasn’t a lot you could do about it. Now, if you’re on Python 2, pretty much everyone is on 2.6/2.7, because Python 2 isn’t being updated. Joy. Ruby has rvm and other tools to install different Ruby versions. Java has a tarball distribution that’s easy to run in place. But with Python you’re pretty much stuck with whatever your distro has.

                                                                                      And virtualenvs suck ass. Bundler, maven/gradle, etc. all install packages globally and let you exec against arbitrary environments directly (bundle exec, mvn exec, gradle run), without messing with activating and deactivating virtualenvs. Node installs all its modules locally to a directory by default, but at least it automatically picks those up. I know there are janky shell hacks to make virtualenvs automatically activate and deactivate with your current working directory, but come on. Janky shell hacks.

                                                                                      That, and pip just sucks. Whenever I have Python dependency issues, I just blow away my venv and rebuild it from scratch. The virtualenv melting pot of files that pip dumps into one directory just blatantly breaks a lot of the time. They’re basically write-once. Meanwhile, every gem version has its own directory, so you can cleanly add, update, and remove gems.

                                                                                      Basically, Ruby, Java, Node, etc. all have tooling actually designed to author and deploy real applications. Python never got there for some reason, and still has a ton of second-rate trash. The scientific community doesn’t even bother; they use distributions like Anaconda. And Linux distros that depend on Python packages handle the dependencies independently, in their native package formats. Ruby gets that too, but the native packages are just… gems. And again, since gems are version-binned, you can still install different versions of a gem for your own use without breaking anything. With Python, there is no way to avoid fucking up the system packages without using virtualenvs exclusively.

                                                                                  1. 1

                                                                                    But with python you’re stuck with whatever your distro has pretty much.

                                                                                        I’m afraid you are mistaken: not only do distros ship with 2.7 and 3.5 at the same time (and have for years now), it is usually trivial to install a newer version.

                                                                                    let you exec against arbitrary environments directly (bundle exec, mvn exec, gradle run), without messing with activating and deactivating virtualenvs

                                                                                    You can also execute from virtualenvs directly.

                                                                                    Whenever I have python dependency issues, I just blow away my venv and rebuild it from scratch.

                                                                                    I’m not sure how to comment on that :-)

                                                                                    1. 1

                                                                                      it is usually trivial to install newer version

                                                                                      Not my experience? How?

                                                                                      1. 1

                                                                                        Usually you have packages for all python versions available in some repository.

                                                                        2. 2

                                                                          Have they chosen Macs or have they been issued Macs? If I were setting up my development environment today I’d love to go back to Linux, but my employers keep giving me Macs.

                                                                          1. 3

                                                                            Ask for a Linux laptop. We provide both.

                                                                              I personally keep going Mac because I want things like wifi, decent power management, and not having to carefully construct a house-of-cards, special-snowflake desktop environment to get a usable workspace.

                                                                            If I used a desktop computer with statically affixed monitors and an Ethernet connection, I’d consider Linux. But Macs are still the premier Linux laptop.

                                                                            1. 1

                                                                                At my workplace every employee is given a Linux desktop, and they have to make a special request to get a Mac or Windows laptop (which would be in addition to their Linux desktop).

                                                                          2. 3

                                                                              Let’s be clear though: what this author is advocating is much, much worse from an individual-liberty perspective than what Microsoft does today.

                                                                            1. 4

                                                                              Do you remember when we all thought Microsoft were evil for bundling their browser and media player? Those were good times.

                                                                          1. 4

                                                                            As usual, David apparently fails or refuses to understand how and why PoW is useful and must attack it at every opportunity (using his favorite rhetorical technique of linking negatively connoted phrases to vaguely relevant websites).

                                                                            That said, the article reminds me of a fun story - I went to a talk from a blockchain lead at <big bank> a while back and she related that a primary component of her job was assuring executives that, in fact, they did not need a blockchain for <random task>. This had become such a regular occurrence that she had attached this image to her desk.

                                                                            1. 10

                                                                              What would you consider a useful situation for PoW? In the sense that no other alternative could make up for the advantages in some real life use-case?

                                                                               But otherwise - and maybe it’s just me, since I agree with his premise - I see @David_Gerard as taking the opposite role to popular blockchain (over-)advocates, who claim that the technology is the holy grail for far too many problems. Even if one doesn’t agree with his conclusions, I enjoy reading his articles and find them very informative, since he doesn’t just oppose blockchains from an opinion-based position; he also seems to have the credentials to do so.

                                                                              1. 1

                                                                                 Replying to @gerikson as well. I personally believe that decentralization and cryptographically anchored trust are extremely important (what David dismissively refers to as “conspiracy theory economics”). We know of two ways to achieve this: proof of work, and proof of stake. Proof of stake is interesting but has some issues and trade-offs. If you don’t believe that PoW mining is some sort of anti-environmental evil (I don’t), it seems to generally offer better properties than PoS (like superior surprise-fork resistance).

                                                                                1. 13

                                                                                  I personally believe that decentralization and cryptographically anchored trust are extremely important

                                                                                   I personally also prefer decentralised or federated systems when they have a practical advantage over a centralized alternative. But I don’t see this being the case with most applications of the blockchain. Bitcoin, as a prime example, is to my knowledge too slow, too inconvenient, too unstable, and too resource-hungry to have a practical application as a real substitute for money, either digital or virtual. One doesn’t have the time to wait 20m or more whenever one pays for lunch or buys some chewing gum at a corner shop, just because some other transactions got picked up first by a miner. It’s obviously different when you want to do something like micro-donations or buying illegal stuff, but I just claim that this isn’t the basis of a modern economy.

                                                                                   Cryptography is a substitute for authority, that is true, but I don’t believe that this is always wanted. Payments can’t be easily reversed, addresses mean nothing, clients might lose support because the core developers arbitrarily change stuff. (I for example am stuck with 0.49mBTC from an old Electrum client, and I can’t do anything with it, since the whole system is a mess, but that’s rather unrelated.) This isn’t really the dynamic basis on which capitalism has managed to survive for this long.

                                                                                   But even disregarding all of that, it simply is true that Bitcoin isn’t a proper decentralized network like BitTorrent. Since the role of the wallet and the miner is (understandably) split, these two parts of the network don’t scale equally. In China, gigantic mining farms are set up using specialized hardware to mine, mine, mine. I remember reading that one farm alone controlled at least 10% of the total mining power. All of this seems to run contrary to the proclaimed ideals. Proof of Work, well, “works” in the most abstract sense: it produces the intended results on one side, at the cost of disregarding everything that can be disregarded, irrespective of whether it should be. And ultimately I prioritise other things over an anti-authority fetish, as do most people – which reminds us that even if everything I said were false, Bitcoin just doesn’t have the adoption to be significant to anyone but crypto-hobbyists, looney libertarians, and some soon-to-fail startups in Silicon Valley.

                                                                                  1. 5

                                                                                    there was one farm that predominated over at least 10% of the total mining power

                                                                                    There was one pool that was at 42% of the total mining power! such decentralization very security

                                                                                      1. 5

                                                                                        To be fair, that was one pool consisting of multiple miners. What I was talking about was a single miner controlling 10% of the total hashing power.

                                                                                        1. 7

                                                                                          That’s definitely true.

                                                                                          On the other hand, if you look at incident reports like https://github.com/bitcoin/bips/blob/master/bip-0050.mediawiki — the pool policies set by the operators (often a single person has this power for a given pool) directly and significantly affect the consensus.

                                                                                          Ghash.io itself did have incentives to avoid giving reasons for accusations that would tank Bitcoin, but being close to 50% makes a pool a very attractive attack target: take over their transaction and parent-block choice, and you take over the entire network.

                                                                                      2. 0

                                                                                        But I don’t see this to be the case with most application of the blockchain.

                                                                                        Then I would advise researching it.

                                                                                        One doesn’t have the time to wait 20m or more whenever one pays for lunch or buys some chewing gum at a corner shop

                                                                                        Not trying to be rude, but it’s clear whenever anyone makes this argument that they don’t know at all how our existing financial infrastructure works. In fact, it takes months for a credit card transaction to clear to anything resembling the permanence of a mined bitcoin transaction. Same story with checks.

                                                                                        Low-risk merchants (digital goods, face-to-face sales, etc.) rarely require the average 10 minute (not sure where you got 20 from) wait for a confirmation.

                                                                                        If you do want permanence, Bitcoin is infinitely superior to any popular payment mechanism. Look into the payment limits set by high-value fungible goods dealers (like gold warehouses) for bitcoin vs. credit card or check.

                                                                                        Bitcoin just doesn’t have the adoption to be significant enough to anyone but Crypto-Hobbiests, Looney Libertarians and some soon-to-fail startups in Silicon Valley.

                                                                                        Very interesting theory - do you think these strawmen you’ve put up collectively hold hundreds of billions of dollars? As an effort barometer, are you familiar with the CBOE?

                                                                                        1. 10

                                                                                          Please try to keep a civil tone here.

                                                                                          Also, it’s hard to buy a cup of coffee or a Steam game or a pizza with bitcoin. Ditto stocks.

                                                                                          1. -4

                                                                                            It’s hard to be nice when the quality of discourse on this topic is, for some reason, abysmally low compared to most technical topics on this site. It feels like people aren’t putting in any effort at all.

                                                                                            For example, why did you respond with this list of complete non-sequiturs? It has nothing to do with what we’ve been discussing in this thread except insofar as it involves bitcoin. I feel like your comments are normally high-effort, so what’s going on? Does this topic sap people’s will to think carefully?

                                                                                            (Civility is also reciprocal, and I’ve seen a lot of childish name-calling from the people I’m arguing with in this thread, including the linked article and the GP.)

                                                                                            Beyond the fact that this list is not really relevant, it’s also not true; you could have just searched “bitcoin <any of those things>” and seen that you can buy any of those things pretty easily, perhaps with a layer of indirection (just as you need a layer of indirection to buy things in the US if you already have EUR). In that list you gave, perhaps the most interesting example in bitcoin’s disfavor is Steam; Steam stopped accepting bitcoin directly recently, presumably due to low interest. However, it’s still easy to buy games from other sources (like Humble) with BTC.

                                                                                            1. 6

                                                                                              IMO, your comments are not exactly inspiring on the quality front either. As someone who does not follow Bitcoin or the blockchain all that much, I have not felt like any of your comments addressed anyone else’s. Instead, I have perceived you as coming off as defensive, with an attitude of “if you don’t get it, you haven’t done enough research, because I’m right”, rather than trying to extol the virtues of the blockchain. Maybe you aren’t interested in correcting any of what you perceive as misinformation on here, and if so, that’s even worse.

                                                                                              For example, I do not know of any place I can buy pizza with bitcoin. But you say it is possible, perhaps with a layer of indirection. I have no idea what this layer of indirection is, and you have left it vague, which does not incline me to trust your response.

                                                                                              In one comment you are very dismissive of people’s Bitcoins getting hacked, but as a layperson, I see news stories on this all the time, with substantial losses and no FDIC, so someone like me considers this a major issue, yet you gloss over it.

                                                                                              Many of the comments I’ve read by you on this thread are similarly unhelpful, all the while claiming the person you’re responding to is some combination of lazy and acting dumb. Maybe that is the truth but, again, as an outsider, all I see is the person defending the idea coming off as kind of a jerk. Maybe for someone more educated on the matter you are spot on.

                                                                                              1. 5

                                                                                                There is a religious quality to belief in the blockchain, particularly Bitcoin. It needs to be perfect in order to meet expectations for it: it can’t be “just” a distributed database, it has to be better than that. Bitcoin can’t be “just” a payment system, it has to be “the future of currency.” Check out David’s book if you’re interested in more detail.

                                                                                          2. 8

                                                                                            In fact, it takes months for a credit card transaction to clear to anything resembling the permanence of a mined bitcoin transaction. Same story with checks.

                                                                                            But I don’t have to wait months for both parties to be content that the transaction is successful, only seconds, so this is really irrelevant to the point you are responding to, which is that if a Bitcoin transaction takes 10m to process, then I have to wait 10m for my transaction to be done, which people might not want to do.

                                                                                            1. -1

                                                                                              Again, as I said directly below the text you quoted, most merchants don’t require you to wait 10 minutes - only seconds.

                                                                                            2. 5

                                                                                              Then I would advise researching it.

                                                                                              It is exactly because I looked into the inner workings of Bitcoin and the blockchain - as a proponent, I have to mention - that I became more and more skeptical about it. And I still do support various decentralized and federated systems: BitTorrent, IPFS, (proper) HTTP, email, … but just because a structure offers the possibility of a decentralized network doesn’t mean that this potential is realized, or that it is necessarily superior.

                                                                                              Not trying to be rude, but it’s clear whenever anyone makes this argument that they don’t know at all how our existing financial infrastructure works. In fact, it takes months for a credit card transaction to clear to anything resembling the permanence of a mined bitcoin transaction. Same story with checks.

                                                                                              The crucial difference being that, let’s say, the cashier nearly instantaneously hears some beep and knows that it isn’t his responsibility, nor that of the shop, to make sure that the money is transferred. The bank, the credit card company, or whoever, has signed a binding contract laying out this technical part of the process as their responsibility, and if they don’t live up to it, they can be sued, since there is an absolute regulatory instance - the state - in the background. This mutual delegation of trust gives everyone a sense of security (regardless of how true or false it is) that makes people spend money instead of hoarding it, investing in projects instead of trading for more secure assets. Add Bitcoin’s aforementioned volatility, and no reasonable person would want to use it as their primary financial medium.

                                                                                              If you do want permanence, Bitcoin is infinitely superior to any popular payment mechanism.

                                                                                              I wouldn’t consider 3.3 to 7 transactions per second infinitely superior to, for example, Visa with an average of 1,700 t/s. Even if you think about it, there are far more than just 7 purchases being made per second around the whole world for this to be realistically feasible. But on the other side, as @friendlysock said, Bitcoin makes up for it by not having too many things you can actually buy with it: the region I live in has approximately a million or so inhabitants, but according to CoinMap, even by the most generous measures, only 5 shops (within a 30 km radius) accept it as a payment method. And most of those just offer it to promote themselves anyway.
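
                                                                                              For reference, a back-of-the-envelope sketch of where the 3.3-to-7 figure comes from, assuming the 1 MB block size limit of the time, a ~10 minute average block interval, and an average transaction somewhere between 250 and 500 bytes:

                                                                                                  # Rough Bitcoin throughput under a 1 MB block size limit and
                                                                                                  # one block roughly every 10 minutes.
                                                                                                  BLOCK_SIZE_BYTES = 1_000_000
                                                                                                  BLOCK_INTERVAL_S = 600

                                                                                                  for avg_tx_bytes in (500, 250):  # plausible range of average transaction sizes
                                                                                                      tps = BLOCK_SIZE_BYTES / avg_tx_bytes / BLOCK_INTERVAL_S
                                                                                                      print(f"avg tx of {avg_tx_bytes} bytes -> {tps:.1f} tx/s")
                                                                                                  # avg tx of 500 bytes -> 3.3 tx/s
                                                                                                  # avg tx of 250 bytes -> 6.7 tx/s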

                                                                                              Very interesting theory - do you think these strawmen you’ve put up collectively have hundreds of billions of dollars? As an effort barometer, are you familiar with the CBOE?

                                                                                              (I prefer to think of my phrasing as an exaggeration and a handful of other literary devices, rather than a fallacy, but never mind that.) I’ll give you this: it has been a while since I’ve properly engaged with Bitcoin, and I was always more interested in the technological than the economical side, since I have a bit of an aversion to libertarian politics. And it might be true that money is invested, but that still doesn’t change anything about all the other issues. It remains a bubble - a volatile, unstable, unpredictable bubble - and, as is typical for bubbles, people invest disproportionate sums into it, which in the end is what makes it a bubble.

                                                                                              1. 0

                                                                                                The crucial difference being that, let’s say, the cashier nearly instantaneously hears a beep and knows that it isn’t his responsibility, nor that of the shop, to make sure that the money is transferred.

                                                                                                Not quite. The shop doesn’t actually have the money. The customer can revoke that payment at any time in the next 90 or 180 days, depending. Credit card fraud (including fraudulent chargebacks) is a huge problem for businesses, especially online businesses. There are lots of good technical articles online about combatting this with machine learning which should give you an idea of the scope of the problem.

                                                                                                makes people spend money instead of hoarding it,

                                                                                                Basically any argument of this form (including arguments for inflation) doesn’t really make sense given the existence of arbitrage.

                                                                                                Add Bitcoin’s aforementioned volatility, and no reasonable person would want to use it as their primary financial medium.

                                                                                                So it sounds like it would make people… spend money instead of hoarding it, which you were just arguing for?

                                                                                                I wouldn’t consider 3.3 to 7 transactions per second infinitely superior to, for example, Visa with an average of 1,700 t/s.

                                                                                                https://lightning.network

                                                                                                as @friendlysock said, Bitcoin makes up for it by not having too many things you can actually buy with it

                                                                                                This is just patently wrong. The number of web stores that take Bitcoin directly is substantial (both in number and traffic volume), and even the number of physical stores (at least in the US) is impressive given that it’s going up against a national currency. How many stores in the US take even EUR directly?

                                                                                                Anything you can’t buy directly you can buy with some small indirection, like a BTC-USD forex card.

                                                                                                It remains a bubble, a volatile, unstable, unpredictable bubble

                                                                                                It’s certainly volatile, and it’s certainly unstable, but it may or may not be a bubble depending on your model for what Bitcoin’s role in global finance is going to become.

                                                                                                1. 5

                                                                                                  Not quite. The shop doesn’t actually have the money. The customer can revoke that payment at any time in the next 90 or 180 days, depending

                                                                                                  You’ve still missed my point - it isn’t important whether the money has actually been transferred, but that there is trust that a framework behind all of this will guarantee that the money will be there when it has to be, as well as a protocol specifying what has to be done if the payment is to be revoked, if a purchase is to be undone, etc.

                                                                                                  Credit card fraud (including fraudulent chargebacks) is a huge problem for businesses, especially online businesses.

                                                                                                  Part of the reason, I would suspect, is that the Internet was never made to be a platform for online businesses - but I’m not going to deny the problem. I’m certainly not a defender of banks and credit card companies - just an opponent of Bitcoin.

                                                                                                  Basically any argument of this form (including arguments for inflation) doesn’t really make sense given the existence of arbitrage.

                                                                                                  Could you elaborate? You have missed my point a few times already, so I’d rather we understand each other instead of having two monologues.

                                                                                                  So it sounds like it would make people… spend money instead of hoarding it, which you were just arguing for?

                                                                                                  No, if it’s volatile, people won’t buy into it in the first place. And if a currency is unstable, like Bitcoin inflating and deflating all the time, people wouldn’t even know what to do with it if it were their main asset (which, as I understand it, is what you are promoting, but nobody does that). I doubt it will ever happen, since if prices were unpredictable, the whole economy would suffer, because all the “usual” incentives would be distorted.

                                                                                                  https://lightning.network

                                                                                                  I hadn’t heard of this until you mentioned it, but it seems quite new, so time has yet to test this yet-another-Bitcoin-related project that has popped up. Even disregarding that it will first need to make a name for itself, then be accepted, then adopted, etc., from what I gather it’s not the ultimate solution (but I might be wrong), especially since it seems to encourage centralization, which I believe is what you are so afraid of.

                                                                                                  This is just patently wrong. The number of web stores that take Bitcoin directly is substantial (both in number and traffic volume),

                                                                                                  Sure, there might be a great quantity of shops (which, as I mentioned, use Bitcoin as a medium to promote themselves), but I, and from what I know most people, don’t really care about these small, frankly often dodgy online shops. Can I use it to pay directly on Amazon? Ebay? Sure, you can convert it back and forth, but all that means is that Bitcoin and other cryptocurrencies are just an extra step for lifestyle enthusiasts and hipsters, with no added benefit. And these shops don’t even accept Bitcoin directly; to my knowledge they always just convert it into their national currency - i.e. the one they actually use and the one Bitcoin’s value is always compared to. What is even Bitcoin without the USD, the currency it hates but can’t stop comparing itself to?

                                                                                                  and even the number of physical stores (at least in the US) is impressive given that it’s going up against a national currency.

                                                                                                  The same problems apply as I’ve already mentioned, but I wonder: have you actually ever used Bitcoin to pay in a shop? I’ve done it once and it was a hassle - in the end I just bought the item with regular money like a normal person, because it was frankly too embarrassing to have the cashier find the right QR code, me take out my phone, wait for me to get an internet connection, try and scan the code, wait, wait, wait…. And that is of course only if you want to spend money for the sake of spending it, and decide to make a trip to some place you’d usually never go, to buy something you don’t even need.

                                                                                                  OK when you’re buying drugs online or doing something with microdonations, but otherwise… meh.

                                                                                                  How many stores in the US take even EUR directly?

                                                                                                  Why should they? And even if they do, they convert it back to US dollars, because that’s the common currency - there isn’t really a point to a currency (one could even question whether it is one) when nobody you economically interact with uses it.

                                                                                                  Anything you can’t buy directly you can buy with some small indirection, like a BTC-USD forex card.

                                                                                                  So a roundabout payment through a centralized instance - this is the future? Seriously, this dishonesty of Bitcoin advocates (and libertarians in general) is why you guys are so unpopular. I am deeply disgusted that I ever advocated for this mess.

                                                                                                  It’s certainly volatile, and it’s certainly unstable, but it may or may not be a bubble depending on your model for what Bitcoin’s role in global finance is going to become.

                                                                                                  So you admit that it has none of the necessary preconditions to be a currency… but for some reason it will… do what exactly? If you respond to anything I mentioned here, at least tell me this: what is your “model” for what Bitcoin’s role is going to be?

                                                                                          3. 14

                                                                                            Why don’t you believe it is anti-environmental? It certainly seems to be pretty power hungry. In fact, its hunger for power is part of why it’s effective. All of the same arguments about using less power should apply.

                                                                                            1. -1

                                                                                              Trying to reduce energy consumption is counterproductive. Energy abundance is one of the primary driving forces of civilizational advancement. Much better is to generate more, cleaner energy. Expending a few terawatts on substantially improved economic infrastructure is a perfectly reasonable trade-off.

                                                                                              Blaming bitcoin for consuming energy is like blaming almond farmers for using water. If their use of a resource is a problem, you should either get more of it or fix your economic system so externalities are priced in. Rationing is not an effective solution.

                                                                                              1. 10

                                                                                                on substantially improved economic infrastructure

                                                                                                This claim definitely needs substantiation, given that in practice bitcoin does everything worse than the alternatives.

                                                                                                1. -1

                                                                                                  bitcoin does everything worse than the alternatives.

                                                                                                  Come on David, we’ve been over this before and discovered that you just have a crazy definition of “better” explicitly selected to rule out cryptocurrencies.

                                                                                                  Here’s a way Bitcoin is better than any of its traditional digital alternatives; bitcoin transactions can’t be busted. As you’ve stated before, you think going back on transactions at the whim of network operators is a good thing, and as I stated before I think that’s silly. This is getting tiring.

                                                                                                  A few more, for which you no doubt have some other excuse for why this is actually a bad thing; Bitcoin can’t be taken without the user’s permission (let me guess; “but people get hacked sometimes”, right?). Bitcoin doesn’t impose an inflationary loss on its users (“but what will the fed do?!”). Bitcoin isn’t vulnerable to economic censorship (don’t know if we’ve argued about this one; I’m guessing you’re going to claim that capital controls are critical for national security or something.). The list goes on, but I’m pretty sure we’ve gone over most of it before.

                                                                                                  I’ll admit that bitcoin isn’t a panacea, but “it does everything worse” is clearly a silly nonsensical claim.

                                                                                                2. 4

                                                                                                  Reducing total energy consumption may or may not be counterproductive. But almost every industry I can name has a vested interest in being more power efficient for its particular usage of energy. The purpose of a car isn’t to burn gasoline; it is to get people places. If it can do that with less gasoline, people are generally happier with it.

                                                                                                  PoW, however, tries to maximize power consumption via second-order effects, with the goal of making it expensive to try to subvert the chain. It’s clever because it leverages economics to keep it in everyone’s best interest not to fork, but it’s not the same as something like a car, where reducing energy consumption is part of the value add.

                                                                                                  I think that this makes PoW significantly different than just about any other use of energy that I can think of.

                                                                                                  1. 3

                                                                                                    Indeed. The underlying idea of Bitcoin is to simulate the mining of gold (or any other finite, valuable resource). By ensuring that an asset is always difficult to procure (a block reward every 10 minutes, block reward halving every 4 years), there’s a guard against some entity devaluing the currency (literally by fiat).

                                                                                                    This means of course that no matter how fast or efficient the hardware used to process transactions becomes, the difficulty will always rise to compensate for it. The energy per hash calculation has fallen precipitously, but the number of hash calculations required to find a block has risen to compensate.
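
                                                                                                    To make the mechanics concrete, here’s a simplified sketch (the subsidy and halving numbers are Bitcoin’s actual consensus parameters; the real retarget also clamps the adjustment to a factor of 4, which is omitted here):

                                                                                                        # Bitcoin's issuance schedule: the block subsidy starts at 50 BTC
                                                                                                        # and halves every 210,000 blocks (~4 years at one block per 10 minutes).
                                                                                                        HALVING_INTERVAL = 210_000

                                                                                                        def block_subsidy(height: int) -> float:
                                                                                                            return 50.0 / (2 ** (height // HALVING_INTERVAL))

                                                                                                        for height in (0, 210_000, 420_000, 630_000):
                                                                                                            years_in = height * 10 / (60 * 24 * 365)  # at exactly 10 minutes per block
                                                                                                            print(f"block {height:>7} (~{years_in:4.1f} years in): {block_subsidy(height):5.2f} BTC")

                                                                                                        # Difficulty is the other half: every 2016 blocks the network retargets
                                                                                                        # so blocks keep arriving every ~600 s on average, no matter how much
                                                                                                        # hash power (and hence energy) is thrown at it. Simplified:
                                                                                                        def retarget(difficulty: float, actual_secs: float) -> float:
                                                                                                            expected_secs = 2016 * 600
                                                                                                            return difficulty * expected_secs / actual_secs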

                                                                                              2. 6

                                                                                                We’ve been doing each a long time without proof of work. There’s lots of systems that are decentralized with parties that have to look out for each other a bit. The banking system is an example. They have protocols and lawyers to take care of most problems. Things work fine most of the time. There are also cryptographically-anchored trust systems like trusted timestamping and CA’s who do what they’re set up to do within their incentives. If we can do both in isolation without PoW, we can probably do both together without PoW using some combination of what’s already worked.

                                                                                                I also think we haven’t even begun to explore the possibilities of building more trustworthy charters, organizational incentives, contracts, and so on. The failings people speak of with centralized organizations are almost always about for-profit companies or strong-arming governments whose structure, incentives, and culture are prone to causing problems like that. So, maybe we eliminate the root cause instead of the tools the root cause uses to bring problems, since it will probably just bring new forms of problems anyway. Regulations, disruption, or bans of decentralized payment are what I predicted the response would be, with some reactions already happening. They just got quite lucky that big banks like Bank of America got interested in subverting it through the legal and financial system for their own gains. Those heavyweights are probably all that held the government dogs back. Ironically, they are the same ones that killed Wikileaks by cutting off its payments.

                                                                                            2. 8

                                                                                              In what context do you view proof-of-work as useful?

                                                                                              1. 11

                                                                                                You have addressed 0 of the actual content of the article.

                                                                                              1. -1

                                                                                                Eventually we will stop investing in chemical rocketry and do something really interesting in space travel. We need a paradigm shift in space travel and chemical rockets are a dead end.

                                                                                                1. 7

                                                                                                  I can’t see any non-scifi future in which we give up on chemical rocketry. Chemical rocketry is really the only means we have of putting anything from the Earth’s surface into Low Earth Orbit, because the absolute thrust to do that must be very high compared to what you’re presumably alluding to (electric propulsion, lasers, sails), which only works once in space, where you can do useful propulsion orthogonally to the local gravity gradient (or just in weak gravity). But getting to LEO is still among the hardest bits of any space mission, and getting to LEO gets you halfway to anywhere in the universe, as Heinlein said.
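
                                                                                                  To put rough numbers on why LEO is the hard part, a back-of-the-envelope sketch using the Tsiolkovsky rocket equation (the ~9.4 km/s figure for reaching LEO includes gravity and drag losses; 4,400 m/s is roughly the vacuum exhaust velocity of a good hydrogen/oxygen engine):

                                                                                                      import math

                                                                                                      def mass_ratio(delta_v: float, ve: float) -> float:
                                                                                                          """Tsiolkovsky: initial mass / final mass = exp(delta_v / ve)."""
                                                                                                          return math.exp(delta_v / ve)

                                                                                                      VE_CHEMICAL = 4400.0  # m/s, roughly hydrogen/oxygen in vacuum (Isp ~450 s)

                                                                                                      # ~9.4 km/s to reach LEO (including gravity and drag losses), versus
                                                                                                      # ~4 km/s from LEO onto a Mars transfer orbit: "halfway to anywhere".
                                                                                                      for leg, dv in (("surface -> LEO", 9400.0), ("LEO -> Mars transfer", 4000.0)):
                                                                                                          print(f"{leg}: mass ratio {mass_ratio(dv, VE_CHEMICAL):.1f}")
                                                                                                      # surface -> LEO: mass ratio 8.5
                                                                                                      # LEO -> Mars transfer: mass ratio 2.5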

                                                                                                  Beyond trying to reuse the first stage of a conventional rocket, as SpaceX is doing, there are some other very interesting chemical technologies that could greatly ease space access, such as the SABRE engine being developed for the Skylon spaceplane. The only other way I know of that’s not scifi (e.g. space elevators) is nuclear rockets, in which a working fluid (like hydrogen) is heated by a fissioning core and accelerated out of a nozzle. The performance is much higher than chemical propulsion, but the appetite to build and fly such machines is understandably very low, because of the risk of explosions on ascent, or breakup on reentry, spreading a great deal of radioactive material in the high atmosphere over a very large area.

                                                                                                  But in summary, I don’t really agree with - or, more charitably, don’t think I’ve understood - your point, and would be interested to hear what you actually meant.

                                                                                                  1. 3

                                                                                                    I remember being wowed by Project Orion as a kid.

                                                                                                    Maybe Sagan had a thing for it? The idea in that case was to re-use fissile material (after making it as “clean” as possible to detonate) for peaceful purposes instead of for military aggression.

                                                                                                    1. 2

                                                                                                      Atomic pulse propulsion (i.e. Orion) can theoretically reach 0.1c, so that’s the nearest star in roughly 40 years. If we can find a source of fissile material in the solar system (one that doesn’t have to be launched from Earth) and refine it, interstellar travel could really happen.
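
                                                                                                      The arithmetic behind that figure, ignoring the time spent accelerating and decelerating:

                                                                                                          # Proxima Centauri, the nearest star, is ~4.25 light-years away; distance
                                                                                                          # in light-years divided by cruise speed in units of c gives years.
                                                                                                          print(4.25 / 0.1)  # ~42.5 years at 0.1c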

                                                                                                      1. 1

                                                                                                        The moon is a candidate for fissile material: https://www.space.com/6904-uranium-moon.html

                                                                                                    2. 1

                                                                                                      The problem with relying on a private company funded by public money, like SpaceX, is that they won’t be risk takers; they will squeeze every last drop out of existing technology. We won’t know what reasonable alternatives could exist because we are not investing in researching them.

                                                                                                      1.  

                                                                                                        I don’t think it’s fair to say SpaceX won’t be risk takers, considering this is a company that has almost failed financially pursuing its visions, and has very ambitious goals for the next few years (which, I should mention, require tech development/innovation and are risky).

                                                                                                        Throwing money at research doesn’t magically create new tech; intelligent minds do. Most of our revolutionary advances in tech have been brainstormed without public or private funding: one or more people have had a bright idea and pursued it. This isn’t something people can just do on command. It’s also important to consider that people often fail to bring their ideas to fruition but still pave the path for future development by others.

                                                                                                        1. 1

                                                                                                          I would say that they will squeeze everything out of existing approaches; «existing technology» sounds a bit too narrow. And unfortunately, improving the technology by combining well-established approaches is a stage that cannot be too cheap, because they do need to build and break full-scale vehicles.

                                                                                                          I think that the alternative approaches for getting from inside atmosphere into orbit will include new things developed without any plans to use them in space.

                                                                                                      2. 2

                                                                                                        What physical effects would be used?

                                                                                                        I think that relying on some new physics, or on contiguous objects a few thousand kilometers in size more than 1 km above the ground, is not just a paradigm shift; anything like that would be nice, but its absence doesn’t make what there currently is a disappointment.

                                                                                                        The problem is that we want to go from «immobile inside the atmosphere» to «very fast above the atmosphere». By continuity, this needs to pass either through «quite fast in the rarefied upper atmosphere» or through «quite slow above the atmosphere».

                                                                                                        I am not sure there is a currently known effect that would allow hovering above the atmosphere without orbital speed.

                                                                                                        As for accelerating through the atmosphere — and I guess chemical air-breathing jet engines don’t count as a move away from chemical rockets — you either need to accelerate the gas around you, or need to carry reaction mass.

                                                                                                        In the first case, as you need to overcome the drag, you need some of the air you push back to fly backwards relative to Earth. So you need to accelerate some amount of gas to multiple kilometers per second; I am not sure there are any promising ideas for hypersonic propellers, especially for a rarefied atmosphere. I guess once you reach the ionosphere, something large and electromagnetic could work, but there is a gap between the height where anything aerodynamic has flown (actually, a JAXA aerostat, so maybe «aerodynamic» is the wrong term) and the height where ionisation starts rising. So it could be feasible or infeasible, and maybe a new idea would first have to be developed for some kind of in-atmosphere transportation.

                                                                                                        And if you carry your reaction mass with you, you then need to eject it fast. Presumably, you would want to make it gaseous and heat it up, and you want to have high throughput. I think that even if you assume you have a lot of electrical energy, splitting water into hydrogen and oxygen, liquefying these, then burning them in-flight is actually pretty efficient (a rough check follows below). But then the vehicle itself will be a chemical rocket anyway, and will use chemical rocket engineering as practiced today. Modern methods of isolating nuclear fission from the atmosphere via double heat exchange reduce throughput. Maybe some kind of nuclear fusion with electromagnetic redirection of the heated plasma could work, maybe it could even be more efficient than running a reactor on the ground to split water, but nobody knows yet what scale is required to run energy-positive nuclear fusion.
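
                                                                                                        A rough check on «burning them in-flight is actually pretty efficient», assuming hydrogen’s lower heating value of ~120 MJ/kg and a real-world hydrogen/oxygen exhaust velocity of about 4,400 m/s:

                                                                                                            import math

                                                                                                            # Lower heating value of hydrogen: ~120 MJ per kg of H2.
                                                                                                            # A stoichiometric mix is 1 kg H2 + 8 kg O2 = 9 kg of reaction mass.
                                                                                                            energy_per_kg_mix = 120e6 / 9  # ~13.3 MJ per kg of propellant

                                                                                                            # If all chemical energy became exhaust kinetic energy (E = v^2 / 2):
                                                                                                            ideal_ve = math.sqrt(2 * energy_per_kg_mix)  # ~5,160 m/s
                                                                                                            real_ve = 4400.0  # what good hydrogen/oxygen engines actually achieve

                                                                                                            print(f"ideal {ideal_ve:.0f} m/s, real {real_ve:.0f} m/s, "
                                                                                                                  f"energy efficiency ~{(real_ve / ideal_ve) ** 2:.0%}")
                                                                                                            # ideal 5164 m/s, real 4400 m/s, energy efficiency ~73%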

                                                                                                        All in all, I agree there are directions that could maybe become a better idea for starting from Earth than chemical rockets, but I think there are many scenarios where the current development path of chemical rockets will be more efficient to reuse and continue.

                                                                                                        1. 2

                                                                                                          What do you mean by “chemical rockets are a dead end”? In order to escape planetary orbits, there really aren’t many options. However, for interstellar travel, ion drives and solar sails have already been tested and deployed, and they have strengths and weaknesses. So there are multiple use cases here, depending on the option.

                                                                                                          1. 1

                                                                                                            Yeah right after we upload our consciousness to a planetary fungal neural network.

                                                                                                          1. 2

                                                                                                            @pushcx I’m not sure if Replies is working properly for me. When I click on Replies and go to All, I see two replies from 1 - 2 months ago [0][1]. They show up in All and Comments. I don’t see any other replies in any other section.

                                                                                                            [0] https://lobste.rs/s/hvjwd6/how_become_part_time_programmer#c_91muap

                                                                                                            [1] https://lobste.rs/s/z6dilb/initial_impressions_moving_from_git#c_5usy2s

                                                                                                            1. 2

                                                                                                              Mine seems broken. It said nothing when I connected earlier. Then it gave me two replies while the email gave me four. Maybe there’s delays in how the algorithm works.

                                                                                                              1. 2

                                                                                                                Yep, it was bugged and is fixed now. Thanks for reporting it.

                                                                                                                1. 1

                                                                                                                  Looks good now, thanks.

                                                                                                              1. 16

                                                                                                                Software has become easier…in certain ways.

                                                                                                                I think this is worth reflecting on a bit.

                                                                                                                In the late 90’s when I installed Linux for the first time (this was back when setting up X11 came with a big warning that if you got your monitor refresh rate wrong you could damage it), I managed to install the OS, build my own kernel, get the whole graphics stack running, and connect to the internet…without using the internet. I only had one computer and it was busy being barely functional while I was doing this. I then went on to write Hello World in C, which required putting a few lines into a .c file and using gcc to compile it. Pretty straightforward.

                                                                                                                I recently started trying to make a frontend project using TypeScript, React, and BlueprintJS. Discounting the hours I spent trying to figure out which frameworks to even use, I could not have accomplished my Hello World without around 1000 dependencies in NPM and stackoverflow. While I’m getting there, I’ve found the experience really challenging. Error messages are bad. Components make all sorts of silly assumptions. Even just how things fit together is really challenging to discover.

                                                                                                                I’m not saying installing Linux that first time was easy, but I don’t feel that things are easier now. The author tries to say that business logic grows very complex, but this doesn’t match my experience. Instead I find that the complexity is just getting things to work, because every framework is huge. Most Hadoop jobs I see are dead simple, but they carry 3,000 lines of junk just to get the ten frameworks one needs to count words to play well together. I’ve spent more hours just trying to get a Java dependency to work in my system than writing the code for the feature I’m using the dependency for.

                                                                                                                I think the author is right in that many developers today just do things where the reason isn’t backed by any kind of evidence or doesn’t even make sense after a few minutes of thought. Microservices are like this. I’ve seen a lot of places go hog-wild on microservices without appreciating that their system is simple enough that maybe they could get away with 2 or 3 big services, or even one if they really wanted, and they dramatically underappreciate the cost of microservices. Microservices have created an entire industry of solutions to fix a problem that many people impose on themselves for no particularly good reason.

                                                                                                                What I’m saying is that we need to head back in the direction of simplicity and start actually creating things in a simpler way, instead of just constantly talking about simplicity. Maybe we can lean on more integrated tech stacks to provide out of the box patterns and tools to allow software developers to create software more efficiently.

                                                                                                                I don’t think the author gives an operationally useful definition of simplicity. It sounds like they want a big platform that has all sorts of complicated stuff integrated into it, an idea that I don’t think is very simple. A more “integrated tech stack” usually just means you push all of the complexity into one place rather than reduce it. While a bit extreme for me, I think it’s unfortunate that Handmade hasn’t picked up more steam.

                                                                                                                I think it’s important to note, as well, that IME, a lot of software complexity is out of ignorance. Many developers just don’t know what simplicity is or how to achieve it. If you’re a JavaScript developer all of the examples you’ll run across are huge. Angular 2 installs 947 dependencies on a blank project. React installed 200 something. Maybe Go is the best example today of simplicity? Maybe all developers need to go through a six month course where they solve all problems with UNIX pipes.

                                                                                                                1. 3

                                                                                                                  I agree with your assessment; I think there are a couple of different factors at play.

                                                                                                                  To be fair, package systems, dependency management, and DLL hell have been a problem for a loooong time. One of the axes this difficulty splits on is the richness of the standard library. For example, C programs on Linux have libc linked in at runtime, which offers a fair bit of functionality but requires a lot of extra libs for anything else. Go programs compile a significant runtime library into the final exe that includes a whole lot of functionality; Go has a very rich library built in. JavaScript has a near-zero standard library and no real module system. Hence npm and dependency hell++.

                                                                                                                  One thing that stands out to me today is how prevalent network-backed package managers have become: maven, npm, heck, Debian and yum too. I also attempted to install Linux back in the day, but I failed, because without a network connection I couldn’t use anything or research anything not on the disks I had available. Today, with a network connection, you have access to all the dependencies and information you need. Most all languages and platforms have dependency managers and repositories available. In other words, (some) things are so much easier now.

                                                                                                                  Which all contributes to the eternal September we find ourselves in today. The barrier to entry is a lot lower, which is a good thing. Every year there are more and more fresh/green developers, and fewer and fewer old experienced developers. Those of us who fought to do anything in the old days are growing fewer and fewer. The new generation didn’t suffer like we did, and that’s OK. I think they suffer in different ways, as your lament about web development shows. The reality is, a lot of the cruft in JavaScript land is solving real problems, as much as I hate to admit it.

                                                                                                                  It all brings to mind this quote from Alan Kay (~2004):

                                                                                                                  Computing spread out much, much faster than educating unsophisticated people can happen. In the last 25 years or so, we actually got something like a pop culture, similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare. What television was able to do was to capture people as they were. So I think the lack of a real computer science today, and the lack of real software engineering today, is partly due to this pop culture.

                                                                                                                  1. 2

                                                                                                                    FWIW, I agree with the overall gist of your post. However…

                                                                                                                    Angular 2 installs 947 dependencies on a blank project. React installed 200 something.

                                                                                                                    The @angular/core library has one hard dependency on tslib. React has a total of 16 direct and indirect dependencies. If you use a starter kit or one of the cli tools, they will install many more, but that’s because they’re depending on entire compiler ecosystems.
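
                                                                                                                    If you want to check such counts yourself, here’s a rough sketch; it assumes you run it from a project root after npm install, and counts every package.json that sits directly in a package directory under some node_modules:

                                                                                                                        from pathlib import Path

                                                                                                                        def count_installed_packages(root: str = "node_modules") -> int:
                                                                                                                            # Each installed package has its own package.json, either directly
                                                                                                                            # under a node_modules directory or under a scoped @org directory.
                                                                                                                            return sum(
                                                                                                                                1
                                                                                                                                for p in Path(root).rglob("package.json")
                                                                                                                                if p.parent.parent.name == "node_modules"
                                                                                                                                or p.parent.parent.name.startswith("@")
                                                                                                                            )

                                                                                                                        print(count_installed_packages())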

                                                                                                                    1. 2

                                                                                                                      I’ve seen a lot of places go hog-wild on microservices without appreciating that their system is simple enough that maybe they could get away with 2 or 3 big services, or even one if they really wanted, and they dramatically underappreciate the cost of microservices. Microservices have created an entire industry of solutions to fix a problem that many people impose on themselves for no particularly good reason.

                                                                                                                      I can’t read this without hearing “I want to work on cool things” over and over. That’s what everyone says, right? I want a job where I can work on cool things. And every chance that’s presented to shoehorn in a cool solution to an otherwise simple problem is taken. Isn’t this exactly what we saw with “big data”? Where a company’s “big data” could fit onto a couple of expensive hard drives, but everyone wanted to use Hadoop or write their own distributed code? Hell, I fell into this trap all the time a few years ago.

                                                                                                                      Another aspect of this is that we as programmers are really, really bad at estimating how efficient an approach is and how demanding a problem is. It’s why wise performance people (alliteration^2!) always shout at you “if you haven’t profiled it then you don’t know how fast it is!” And they’re right. And of course they’re right. What we work with, as you’ve noted, is horrendously complicated. But it sure feels great to stand back and say “well, a microservices approach would be a great fit here, and help us tackle our volume/latency/whatever requirements (that haven’t been measured), because partitioning the problem is more efficient (an unestablished claim), and our service will be more reliable (if it’s engineered correctly, maybe, again unestablished)”.

                                                                                                                      Because acting like we know is just so much fun. I submit that this should be the definition of an “architecture astronaut”.

                                                                                                                      Maybe “mechanical sympathy” exists, but if it does, it is hard won intuition born out of experience and lots and lots and lots of experimentation and measurement.

                                                                                                                      Hell, I could set @peter off on a rant about this for—literally—hours.

                                                                                                                    1. 13

                                                                                                                      When did the definition of bit rot change? Bit rot is when your storage has bits flip and slowly corrupts, solved by filesystems like ZFS, which checksum the data and can heal/repair the damage automatically.

                                                                                                                      1. 8

                                                                                                                        No, that’s the original definition from pre-ESR Jargon File.

                                                                                                                        bit rot: n. Also {bit decay}. Hypothetical disease the existence of which has been deduced from the observation that unused programs or features will often stop working after sufficient time has passed, even if `nothing has changed’.

                                                                                                                        1. 2

                                                                                                                          I agree, bit rot is corrupt data on disk. I like to use the term software entropy for what this article is talking about.

                                                                                                                          1. 2

                                                                                                                            I agree, the phenomenon described in the linked article is more accurately denoted as “technical debt”.

                                                                                                                            1. 4

                                                                                                                              I don’t think tech debt is the right description. Even a very well constructed program needs maintenance to keep up with the changing APIs and systems its dependencies run on. This is just software maintenance.

                                                                                                                              1. 3

                                                                                                                                I agree with you. Technical debt is better applied to decisions during the design and implementation phase coming back to haunt you (in my opinion).

                                                                                                                                But “bit rot” is definitely incorrect in this context!

                                                                                                                          1. 8

                                                                                                                            Thank you for both the Oil project and this post. This is definitely the explanation I will point people to.

                                                                                                                            I haven’t adopted Oil yet myself, and probably won’t until at least 1.0. I’ve tried zsh, fish, and xonsh, and have nice things to say about them all… but so far I always keep setting my login shell back to bash on linux, because there are just too many other people’s scripts for me to deal with. The net semantic complexity of $NEAT_NEW_SHELL plus that of $CRANKY_OLD_SHELL is always greater than the latter alone, so I find myself stuck with bash despite its irritations. It’s apparently just another one of these insoluble collective action problems.

                                                                                                                            The embrace, extend, (eventually) extinguish approach that source translation enables is the only one I can endorse for having a hope of success in such an entrenched, messy, decentralized context as the Unix diaspora. There’s an important lesson here, and I hope similar projects take note.

                                                                                                                            1. 8

                                                                                                                              but so far I always keep setting my login shell back to bash on linux, because there are just too many other people’s scripts for me to deal with

                                                                                                                              What does this have to do with the shell that you run? I run fish and that is no obstacle to running programs written in any other language, including bash.

                                                                                                                              1. 6

                                                                                                                                It’s not just the shell I run, it’s the shell “all the things” expect. I can easily avoid editing C++ or ruby source (to pick a couple of random examples) but, in my job at least, I can’t avoid working with bash. I can’t replace it, and I need to actually understand how it works.

                                                                                                                                Of course, other people with other jobs, or those who have long since attained fluency in bash, may have better luck avoiding it in their personal environments. I can’t, because I have to learn it, ugly corners and all. I’d be happy to stick with fish; it’s just not a realistic option for me right now. My observation is that, for my current needs, two shells are worse than one.

                                                                                                                                1. 3

                                                                                                                                  I’ve used fish for years now. Whenever I need to run a bash script I just run bash script.sh. The smallest hurdle I have to deal with is the small mental effort I have to make translating bash commands to fish equivalents when copying bash one liners directly into the shell.

                                                                                                                                  1. 3

                                                                                                                                    I don’t understand what working with bash scripts has to do with the shell that you run, though. Just because you run Python programs doesn’t mean your shell has to be a Python REPL; these things are separate. In the case you’re referring to, it sounds like bash is just a programming language like Ruby or Python.

                                                                                                                                2. 2

                                                                                                                                  Thanks, yes I didn’t explicitly say “embrace and extend”, since that has a pretty negative Microsoft connotation :)

                                                                                                                                  But that’s the idea, and that’s how technology and software evolve. And that’s how bash itself “won”! It implemented the features of every shell, including all the bells and whistles of the most popular shell at the time – AT&T ksh.

                                                                                                                                  Software, and in particular programming languages, has heavy lock-in / network effects. I mean, look at C and C++. There’s STILL probably 100x more C and C++ written every single day than Go and Rust combined, not even counting the 4 decades of legacy!

                                                                                                                                  It does seem to me that a lot of programmers don’t understand this. I suppose that this was imprinted on my consciousness soon after I got my first job, by reading Joel Spolsky’s blog:

                                                                                                                                  https://www.joelonsoftware.com/2004/06/13/how-microsoft-lost-the-api-war/

                                                                                                                                  There are two opposing forces inside Microsoft, which I will refer to, somewhat tongue-in-cheek, as The Raymond Chen Camp and The MSDN Magazine Camp.

                                                                                                                                  The most impressive things to read on Raymond’s weblog are the stories of the incredible efforts the Windows team has made over the years to support backwards compatibility:

                                                                                                                                  This was not an unusual case. The Windows testing team is huge and one of their most important responsibilities is guaranteeing that everyone can safely upgrade their operating system, no matter what applications they have installed, and those applications will continue to run, even if those applications do bad things or use undocumented

                                                                                                                                  This is a good post, but there are others that talked about the importance of compatibility. Like the “never rewrite post” (although ironically I’m breaking that rule :) )

                                                                                                                                  https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/

                                                                                                                                  Another example of this that people may not understand is that Clang implemented GCC’s flags bug-for-bug! GCC has an enormous number of flags! The Linux kernel uses every corner of GCC, and I think even now Clang is still catching up.

                                                                                                                                  Building the kernel with Clang : https://lwn.net/Articles/734071/

                                                                                                                                1. 9

                                                                                                                                  She should be able to use dtrace to investigate deeper.

                                                                                                                                  1. 13

                                                                                                                                    Where would you start with using dtrace to investigate this? I already know the responsible system call – is the idea that I could use dtrace somehow to trace what resource is under contention inside the kernel?

                                                                                                                                    I’ve basically never used dtrace except running dtruss occasionally to look at system calls so it’s pretty unclear to me where to start.

                                                                                                                                    1. 12

                                                                                                                                      I’m not a dtrace master, or even a low-level amateur, but this might be a first start:

                                                                                                                                      sudo dtrace -n ':::/pid == $target/{@[stack()] = count();} tick-5s {exit(0);}' -p PID_OF_YOUR_STUCK_PROGRAM
                                                                                                                                      

That will grab the kernel stack of the process any time one of its probes fires and count the occurrences; after 5 seconds it’ll print the kernel stack traces in ascending order of count. You can also use -c instead of -p.
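For instance (hypothetical program name, just to show the shape):

    # -c runs the command under dtrace instead of attaching to a running pid
    sudo dtrace -n ':::/pid == $target/{@[stack()] = count();} tick-5s {exit(0);}' -c "./your_stuck_program"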

That’ll probably give you way too much stuff, though. You can restrict it to stack traces taken only when the process makes a syscall:

                                                                                                                                      sudo dtrace -n 'syscall:::/pid == $target/{@[stack()] = count();} tick-5s {exit(0);}' -p PID_OF_YOUR_STUCK_PROGRAM
                                                                                                                                      

Maybe what you could do is use dtruss to figure out what it’s stuck in (probably a syscall?) and then use dtrace to see what’s going on there. For example, to see which syscalls sleep 10 makes, I did (on FreeBSD):

                                                                                                                                      sudo dtrace -n 'syscall:::/pid == $target/{}' -c "sleep 10"
                                                                                                                                      

                                                                                                                                      And I got a bunch of output, where it clearly sat for 10 seconds on:

                                                                                                                                        2  80973                  nanosleep:entry
                                                                                                                                      

                                                                                                                                      Then, to see exactly what goes on in the kernel for this process between nanosleep:entry and nanosleep:return, I did:

sudo dtrace -n 'BEGIN {trc = 0} syscall::nanosleep:entry /pid == $target/ {trc = 1} syscall::nanosleep:return /pid == $target/ {trc = 0;} :::/pid == $target && trc == 1/{@[stack()] = count();}' -c "sleep 10"
                                                                                                                                      

It’s kind of hard to read, but if you pull the script out of the quotes you can see it uses a variable called trc: for this pid, trc is set to 1 when nanosleep is entered and back to 0 when nanosleep returns. Then, for any probe that fires for this pid while trc is 1, it records the kernel stack. I got a bunch of output from that.
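Unrolled into a commented D script it’s easier to follow (a sketch with the same behavior; hypothetical file name nanosleep.d, run as sudo dtrace -s nanosleep.d -c "sleep 10"):

    /* nanosleep.d: count kernel stacks only while the target
       process is inside the nanosleep syscall */
    BEGIN { trc = 0; }                                           /* not inside nanosleep yet */
    syscall::nanosleep:entry  /pid == $target/ { trc = 1; }      /* entering: start recording */
    syscall::nanosleep:return /pid == $target/ { trc = 0; }      /* returning: stop recording */
    ::: /pid == $target && trc == 1/ { @[stack()] = count(); }   /* any probe: count the kernel stack */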

Hopefully that is helpful. I’m nowhere near a dtrace wizard, so I’m sure there are much more clever things one can do, but that might be a start to digging.

You can see the probes available to you with dtrace -l.
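For instance, to narrow that (very long) list down to the nanosleep probes used above:

    sudo dtrace -l | grep nanosleep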

                                                                                                                                  1. 1

                                                                                                                                    “When people who can’t think logically design large systems, those systems become incomprehensible. And we start thinking of them as biological systems. And since biological systems are too complex to understand, it seems perfectly natural that computer programs should be too complex to understand.”

                                                                                                                                    Simultaneously a straw man and a false dichotomy. Not written by someone who understands logic?

                                                                                                                                    1. 2

                                                                                                                                      The author is Leslie Lamport, who won the 2013 Turing Award for his work on distributed algorithms.

                                                                                                                                      1. 1

                                                                                                                                        I’m aware of that. My question is rhetorical.

                                                                                                                                        1. 1

What he may have meant is that programmers using the biological approach, with things like information hiding, guard functions, and testing, built complex programs that usually work as intended, without knowing anything about formal logic or the mathematical aspects. Writers covering things like LISP used to compare it to biological approaches, arguing it was more adaptable, whereas the formalized stuff failed due to rigidity and slow evolution. Just reading Leslie’s remark, someone might assume all biologically-inspired approaches were barely comprehensible or outright failures while the formal or logical methods consistently outperformed them. In fact, most of the latter failed.

I still enjoyed reading it despite that inaccuracy. Leslie’s mind is interesting to watch in action, and his style is down-to-earth. This reminded me of a computer scientist who thought like a biologist to overcome the limitations CompSci folks were facing. That led him to do everything from inventing massively-parallel processing to using evolution to try to outperform human designers. He always claimed biology was better. A lot of the better write-ups are paywalled or disappearing with the Old Web, but I can try to dig some out this week if you’re interested.

                                                                                                                                          1. 2

Please do dig them up; I’m quite intrigued to see where their solutions worked well, and where they didn’t.

                                                                                                                                            1. 2

With the way ML/AI is going, it’s quite possible many future systems will be much closer to biology than to human design. AI system-design software will just do whatever works, as long as its optimization function says it’s good.

                                                                                                                                              1. 2

I am in no way questioning Lamport’s brilliance or his contributions in general. However, most people, brilliant or otherwise, have blind spots. I believe he’s revealed some of his here, and that in itself is interesting and worth reading.

                                                                                                                                          1. 2

                                                                                                                                            The kdb+ database is popular because of its wickedly fast performance, for which companies pay a premium, and Timescale is not trying to take on Kx Systems directly in this core market. In fact, company co-founders Ajay Kulkarni and Mike Freedman, who were roommates at MIT two decades ago before their paths diverged and then reconverged, tell The Next Platform that they were aiming the TimescaleDB database at machine-to-machine applications but have seen early adopters use it in a number of more traditional applications, where enterprises have added time series data to traditional databases like Oracle’s eponymous database or Microsoft SQL Server or are replacing scale out clusters running the open source Redis, Cassandra, or Riak key-value stores or their commercial variants.

                                                                                                                                            That is quite the sentence and I have no clue what it’s trying to say. “machine-to-machine”???

                                                                                                                                            They could use relational databases with SQL interfaces, which are easy to use but they don’t scale well. Or they could use NoSQL databases that scale well but were not as reliable and are harder to use.

I’m not sure what the author is talking about here. Many NoSQL options are quite reliable and much simpler than a SQL layer.

Good for the authors of this product, but the actual content of this post felt lacking to me.

                                                                                                                                            1. 2

Any security-minded people have thoughts on this?

                                                                                                                                              1. 13

Debian’s security record regarding CAs is atrocious. By this I mean the default configuration and things like the ca-certificates package.

Debian used to include non-standard junk CAs like CACert and also refused to consider CA removal a security update, so it’s hugely hypocritical of this page to complain about the many insecure CAs among the 400+ trusted ones.

Signing packages is a good idea: the signature is bound to the data rather than to the transport the way https is, so in principle I agree that using https for debian repositories doesn’t gain much extra security. However, these days the baseline expectation should be that everything defaults to https, as in no more unauthenticated http traffic on port 80.
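For illustration, the switch itself is a one-line change per sources.list entry (deb.debian.org does serve TLS; on older apt, e.g. Debian 9’s apt 1.4, the apt-transport-https package has to be installed first):

    # /etc/apt/sources.list -- https variant of a standard mirror entry
    deb https://deb.debian.org/debian stretch main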

Yes, moving over to https for debian repositories breaks local caching like apt-cacher (degrades it to a tcp proxy) and requires some engineering work to figure out how to structure a global mirror network, but this will have to be done sooner or later. I would also not neglect the privacy implications: with https, anyone deploying passive network snooping has to apply heuristics and put in more effort than simply reading plain http.

Consider the case where someone sitting passively on a network watches for downloads of a package that fixes a remotely exploitable vulnerability. That passive attacker can try to race the host and exploit the vulnerability before the update is installed.

Package signing in debian suffers from problems at the underlying gpg level; gpg is so 90s in that it’s really hard to use sustainably over the long term: key rotation and key strength are problem areas.

                                                                                                                                                1. 4

Package signing in debian suffers from problems at the underlying gpg level; gpg is so 90s in that it’s really hard to use sustainably over the long term: key rotation and key strength are problem areas.

                                                                                                                                                  What do you consider a better alternative to gpg?

                                                                                                                                                  1. 10

signify is a pretty amazing solution here: @tedu wrote it, along with a paper detailing how OpenBSD has implemented it.
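To give a sense of how much smaller its surface is than gpg’s, the whole workflow is roughly this (adapted from the signify(1) man page; the file names are made up):

    # generate a key pair
    signify -G -p newkey.pub -s newkey.sec
    # sign a file (writes message.txt.sig next to it)
    signify -S -s newkey.sec -m message.txt
    # verify the signature
    signify -V -p newkey.pub -m message.txt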

                                                                                                                                                  2. 4

                                                                                                                                                    non-standard junk CAs like CACert

imho CACert feels more trustworthy than 90% of the commercial CAs. i really would like to see cacert paired with the level of automation of letsencrypt. edit: and included in the ca packages.

                                                                                                                                                    1. 2

                                                                                                                                                      With the dawn of Let’s Encrypt, is there still really a use case for CACert?

                                                                                                                                                      1. 4

i think alternatives are always good. the only things where they really differ are that letsencrypt certificates are cross-signed by a ca already included in browsers, and that letsencrypt has automation tooling. the level of verification is about the same. i’d go as far as to say that cacert is more secure because of its web of trust, but that may be just subjective.