1. 5

    I know and like Trey, but I want to quibble about property.

    In, say, Java, programmers are taught early never to expose a public attribute; always make it private and write getter/setter methods from the start. And in Java this is good advice, because you can’t change your mind later without breaking all existing clients of the class.

    In Python this isn’t a problem, because property lets you change your mind later, and swap in getter/setter methods without breaking clients of the class. But too many programmers, on learning this, do one (or both) of the following:

    1. Decide that this means “oh, in Python instead of always writing a getter/setter from the start, you always make it a property from the start”.
    2. Decide never again to expose which members of a class are methods and which are attributes, by wrapping everything in property.

    The first is bad because it’s completely unnecessary code; the point of property is you don’t have to write the getter/setter or equivalent until the time comes when you can prove you actually need it, at which point you write the methods, wrap in property, and go about your business without breaking any existing clients.
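
    A minimal sketch of that workflow (the class and the validation rule here are made up for illustration): the attribute starts out plain, and only once you can prove you need validation do you wrap it in property, without changing the obj.price syntax clients already use.

    class Order:
        def __init__(self, price):
            self.price = price  # goes through the property setter below

        # Added later, once validation is actually needed; callers still
        # read and write order.price exactly as they did before.
        @property
        def price(self):
            return self._price

        @price.setter
        def price(self, value):
            if value < 0:
                raise ValueError('price cannot be negative')
            self._price = value

    order = Order(10)
    order.price = 12  # same attribute syntax as before the change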

    The second is bad because it hides what the code is doing. I want to know if something’s a method that potentially does extra calculation or other work (maybe even DB queries) to return a result. Hiding implementations behind property makes it harder for others to see what your code is really doing and gauge which parts of it may be more expensive/resource-intensive to use.

    So I generally push property much further away in the Python learning curve, if I even mention it at all.

    1. 3

      Strong +1 from a fellow long-time Pythonista. That said, I do think people need to know how @property works for two reasons: reading others’ code (especially framework code), and having a straightforward mental model for how decorators and the descriptor protocol work. I think this is covered well in “Fluent Python”.

      1. 5

        I think there are better examples for learning about decorators. Properties are definitely great for learning about descriptors, but that’s basically the most advanced pure Python topic in existence! (I say pure Python because there are things an order of magnitude more advanced and complex to learn about the language when you get into writing extensions, implementation-specific quirks, concurrency, etc. But from a pure language perspective, descriptors are right up there.)

        My most used decorator in practice is compose(list). Being able to write functions with yield x and yield from x instead of result.append(x) and result.extend(x) and having to make sure I return result in all the right places is so nice, and throwing a simple decorator on the function so that it’s still eager is surprisingly often useful. Sometimes they can be removed later to make the program more lazy without changing the behaviour, rather than having to rewrite functions to use laziness.

        I think the best decorator to use to demonstrate them is something like trace though.

        import functools

        def trace(f):
            @functools.wraps(f)
            def wrapper(*args):
                args_string = ', '.join(map(str, args))
                ret = f(*args)
                print(f'{f.__name__}({args_string}) -> {ret}')
                return ret
            return wrapper
        

        It’s just a super simple wrapper that does something that’s very obvious, so the only thing that you need to understand to understand it is how decorators themselves work. Once you’ve demonstrated it without the wraps decorator (explaining decorators using something that has a decorator in it is a bit silly) you can show how it makes debugging easier by explicitly marking the function as being wrapped.
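
        For instance (applying the trace above to a made-up function):

        @trace
        def add(a, b):
            return a + b

        add.__name__     # 'add', because functools.wraps copied it over;
                         # without wraps this would be 'wrapper'
        add.__wrapped__  # wraps also records the original, undecorated function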

        It’s also good because you can use it as an example of where decorator order really matters:

        @functools.lru_cache()
        @trace
        def fib(a):
            if a < 2: return 1
            return fib(a - 1) + fib(a - 2)
        

        vs. swapping the order of the caching and the tracing.
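
        Concretely, the swapped version is below; since decorators apply bottom-up and the recursive calls go through the module-level name fib, the order decides whether cache hits get traced.

        @trace
        @functools.lru_cache()
        def fib(a):
            if a < 2: return 1
            return fib(a - 1) + fib(a - 2)

        # With @trace on the outside (as here), every call to fib is printed,
        # including calls answered from the cache. With @functools.lru_cache()
        # on the outside (as above), cache hits never reach the traced inner
        # function, so each argument is printed at most once.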

        Personally I use something more like the below in practice, but parameterised decorators should come after decorators themselves are already understood!

        def trace(show_counter=False, show_types=False):
            def decorator(f):
                counter = 0
                @functools.wraps(f)
                def wrapper(*args, **kwds):
                    nonlocal counter
                    counter += 1
                    local_counter = counter
        
                    if show_types:
                        args_string = ', '.join(f'{arg}: {type(arg).__name__}'
                                                for arg in args)
                    else:
                        args_string = ', '.join(map(str, args))
        
                    ret = f(*args, **kwds)
        
                    if show_counter:
                        print(local_counter, end=' ')
                    print(f'{f.__name__}({args_string}) -> {ret}')
        
                    return ret
                return wrapper
            return decorator
        
        def compose(f):
            def decorator(g):
                @functools.wraps(g)
                def wrapper(*args, **kwds):
                    return f(g(*args, **kwds))
                return wrapper
            return decorator
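
        As a usage sketch of the compose(list) style described earlier (the function and data are made up):

        @compose(list)
        def evens_doubled(numbers):
            for n in numbers:
                if n % 2 == 0:
                    yield n * 2

        evens_doubled([1, 2, 3, 4])  # [4, 8] -- still eager, returns a list
        # Drop the decorator later and callers get a lazy generator instead,
        # without the function body having to change.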
        
        1. 3

          These are all very good points, and I agree!

      2. 2

        I really like to think of property as a kind of ‘oh crap’ button.

        Oh crap, I made this an attribute but now I want to [blah blah blah that’s simple and cheap] instead of having a plain value. Oh crap, I’m going to have to change 100s of spots in my code. Oh crap, property exists! Oh crap, I’m saved!

        I also see myself using it about 50% less now that functools.cached_property has been added in 3.8.
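
        Roughly the difference, for anyone who hasn’t seen it (made-up class; cached_property needs 3.8+): a plain property re-runs its getter on every access, while cached_property runs it once per instance and stores the result.

        import functools

        class Report:
            def __init__(self, rows):
                self.rows = rows

            @property
            def total(self):  # recomputed on every access
                return sum(self.rows)

            @functools.cached_property
            def sorted_rows(self):  # computed on first access, then stored
                return sorted(self.rows)  # on the instance (Python 3.8+)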

        1. 3

          I just changed a bunch of API calls to the equivalent of cached_property on Friday. We’re using 3.6 so it wouldn’t have worked for us, but it’s nice to know about for the future!

      1. 12

        And thus the commons become more tragic.

        I’ve wondered, since we rely so much on the whim of others, if, say, Google were just to completely vanish overnight (say, the CEOs think it was an incredible journey, but it’s time to shut off the company, beginning with google.com)… does society have any recourse? I mean, legally, is there any way we can say, no, you can’t shut down google.com because it’s now ingrained into the fabric of our society?

        Or does Google not owe us anything? What about Twitter, Facebook, or Youtube, which drive so much of the worldwide economy and politics? How different is this from one small unpaid maintainer of an obscure package closing up shop?

        1. 7

          Isn’t that logic not really applicable here, since Actix is open source? Someone can just fork it. You can’t fork Google’s entire business and infrastructure.

          1. 13

            My guess is no-one forks it and it dies. I don’t know what drives anyone to write open source code, I simply can’t fathom the mental health hit it would take to manage a community. People filing bugs on my code in my own company makes me anxious. People filing bugs in public? People shittalking me in public and filing angry bugs on me? Forget it. No way.

            The word I am looking for about open source maintainers isn’t hero exactly, but something like that. You have to have a certain stuff that most of us don’t. I would hazard that those in the peanut gallery who were throwing shells are not the same people who have that stuff.

            1. 3

              Sadder still, and confirming your point: we just had the article on Actix’s optimizations showing the maintainer was ahead of the curve.

              1. 2

                If not hero, how about martyr?

                1. 1

                  I used to maintain a large open source project with a friend. Our community is mostly very supportive and when some people aren’t we feel very comfortable telling them that we don’t owe them squat.

                2. 4

                  Also, what logic? I’m just wondering what happens when we rely on someone’s good will and that good will goes away. People were obviously relying on this guy for something, or they wouldn’t be upset that he’s no longer providing it.

                  1. 6

                    What you’re saying now, about someone’s good will, is totally disjoint from what you were saying in your other comment: Google, Twitter, Facebook owing us anything.

                    I’m just pointing out your statement is not really comparable to what has happened.

                    My opinion on all this is… the community killed the project. There are approaches to things, and harassment is not one of them.

                    Know what would’ve been super effective?

                    Fork the project, make the SAFE changes, show the performance impacts, and if they are marginal and people like it, then they will use the fork. This introduces a game theory approach: if more people use the fork, the original will fade unless it adopts the changes. And we know this model works - it’s been done many times.

                  2. 2

                    Being free software doesn’t mean that the original author is required to keep distributing it. If people had neglected to keep copies of the original (which isn’t the case here, granted) the original author has no obligation to make sure you can keep receiving those copies.

                    But sometimes, depending on the terms that you received something, free licenses can be revoked. The GPL has an irrevocability clause, but weaker licenses don’t.

                    1. 1

                      Also crates.io will maintain the current state, so your builds won’t suddenly start failing overnight*

                      *(Yes, there is still the DMCA, but that’s a completely different situation.)

                    2. 5

                      We could charge them for securities fraud. Everything Everywhere Is Securities Fraud!

                      1. 2

                        Oh, interesting. So, right, shareholders could sue. But can a judge order them to bring google.com back up or go to jail, or something like that?

                      2. 4

                        I’ve wondered, since we rely so much on the whim of others, if, say, Google were just to completely vanish overnight (say, the CEOs think it was an incredible journey, but it’s time to shut off the company, beginning with google.com)… does society have any recourse? I mean, legally, is there any way we can say, no, you can’t shut down google.com because it’s now ingrained into the fabric of our society?

                        i’d say we would end up with a positive outcome medium-term. short-term it would be “interesting” i guess.

                      1. 31

                        Modern cars work … at 98% of what’s physically possible with the current engine design.

                        Ignoring the fact that ICEs are thermodynamically constrained to something closer to 20% efficiency, “current engine design” is quite an escape-hatch. Computer software and hardware designs, similarly, are subject to “current designs”, and there’s no reason to think that SWEs are somehow less inclined to improve designs than mechanical engineers.

                        Only in software, it’s fine if a program runs at 1% or even 0.01% of the possible performance

                        There is no objective “possible performance” metric. There’s only “current implementation” vs “potential improvements gained by human attention”.

                        Everything is unbearably slow

                        No. kitty is fast. ripgrep (and old grep…) is fast. Mature projects like Vim and Emacs are getting faster. JITs and optimizing compilers produce faster code than ever. Codecs, DSP are faster than ever.

                        Yes, tons of new software is also being created, and power-law effects guarantee that most of it will be low-effort and unoptimized. The fact that you can scoop your hand into an infinite stream and find particulates means nothing.

                        Text editors! What can be simpler? On each keystroke, all you have to do is update a tiny rectangular region

                        If that’s all you expect from a text editor then your computer can do it very quickly. But you chose to use a text editor that does much more than that per keystroke.

                        Build systems are inherently unreliable and periodically require full clean, even though all info for invalidation is there

                        “All info” is not there. Most builds have implicit state created by shell scripts, filesystem assumptions, etc. That’s why conforming to bazel or NixOS is painful (but those projects are examples of people working to improve the situation).

                        Machine learning and “AI” moved software to guessing in the times when most computers are not even reliable enough in the first place.

                        :)

                        Spin another AWS instance. … Write a watchdog that will restart your broken app … That is not engineering. That’s just lazy programming

                        That’s how ECC RAM works; that’s how MOSFETs work. Failure is an inherent property of any physical system.

                        I want state-of-the-art in software engineering to improve, … I don’t want to reinvent the same stuff over and over

                        I agree with that. Clojure and urbit are efforts in that direction. Taking code-reuse and backwards-compatibility seriously allows us to build instead of repeat.

                        But dispense with the nostalgia for DOS and the good old days. CompSci artists ignore cost/benefit. Engineers consider economics (cost/benefit, not just “money”, all costs).

                        The bulk of new activity in any vigorous market will be mostly trash, but the margins yield fruit. High-quality software is being built at the margins. The disposable trash is harmless, and serves a purpose, and will be flushed out as users adjust their relative priorities.

                        1. 4

                          Mature projects like Vim and Emacs are getting faster.

                          Ignoring the fact that older software had to be optimized in order to run on older computers, a bit of this is survivorship bias. People don’t use the old bad programs anymore or they got fixed.

                          Older software crashed a lot and it corrupted your documents. Today, even if it crashes you probably won’t lose anything. To an extent, this is the result of consciously trading performance for correctness.

                          1. 1

                            Ignoring the fact that older software had to be optimized in order to run on older computers,

                            You’re making assumptions. Emacs and especially Vim have many unoptimized components. Vimscript, the core language of Vim, doesn’t even produce an AST; it re-parses every source line over and over (including while-loops/for-loops). Only recently has this gotten attention. Fixing it takes human time, and now the human time is being spent on it because relative priorities arrived there.

                            survivorship bias. People don’t use the old bad programs anymore or they got fixed.

                            The converse is that every program that touches a computer should be optimized before it reaches a user. That makes no sense.

                            1. 1

                              You’re making assumptions.

                              No! You! :)

                              I didn’t mean to imply that they are perfectly optimized. I meant that they had to perform more optimization to run on older computers than modern software might have to in order to run on newer computers.

                              The converse is that every program that touches a computer should be optimized before it reaches a user. That makes no sense.

                              I don’t think every program should be optimized. I don’t follow what you are saying here.

                              1. 1

                                I don’t think every program should be optimized.

                                Then it does not make sense to discount good software as mere “survivors”. It is a contradiction. Good software takes time, bad software gets filtered out over time. In the interval, there will be bad software, but that is because more time is needed for good software to take its place.

                                1. 1

                                  Good software takes time, bad software gets filtered out over time. In the interval, there will be bad software, but that is because more time is needed for good software to take its place.

                                  I agree. The comment about survivors is about how not every piece of software from an era is as good as the software from that era that we use today. I.e., the survivorship bias fallacy:

                                  https://en.m.wikipedia.org/wiki/Survivorship_bias

                          2. 2

                            Actually, measured by latency (which is almost certainly the benchmark you care about in a terminal emulator), kitty is moderately fast and extremely jittery, just like alacritty. Both Konsole and Qterminal perform substantially better every time I benchmark them, especially if you have a discrete GPU instead of integrated graphics.

                            1. 1

                              Fast software exists, that’s my point. You’ve provided more examples of fast software.

                            2. 1

                              So, I think you may be misinformed about the cars thing. ICEs and turbines can get between 37 and 50ish percent efficiency by theoretical maximum, and real-world engines get very close to that.

                              This is important when you look at the claimed efficiency of computers: a modern multi-GHz CPU should be capable of billions of operations a second, and for any given task it is pretty easy to make a back-of-the-envelope calculation about how close to that theoretical ideal we are - that’s one of the ways efficiency in traditional engineering fields is calculated.

                              We are seeing efficiencies tens of orders of magnitude smaller than seems reasonable. There are reasons for this, but the fact is inescapable that anybody saying “we really use computers inefficiently” is not wrong.

                              Also, on the restarting the app bit–it is one thing to use ECC to compensate for cosmic ray bit flips, or to mirror data across multiple hard drives in case one or two die, as a way of doing reliability engineering. It is something else entirely to, say, restart your Rails application every night (or hour…) because it leaks memory and you can’t be bothered to track down why.

                              1. 3

                                It is something else entirely…

                                Is that ‘a difference in scale becomes a difference in kind’?

                                They seem like the same kind of thing to me, just at very different points on the effort/value continuum.

                                1. 2

                                  Statistical process control is a tool we can use to answer that.

                                  If the problem is predictable enough to count as routine variation inherent to the system, we should try to fix the system so it happens more rarely. (And I’d argue the memory leaks that force you to restart every hour belong to that category.)

                                  If the problem is unpredictable and comes from special causes, we cannot generally adjust the system to get rid of them. Any adjustments to the system due to special causes only serve to calibrate the system to an incorrect reference and increase the problem. (This is where I’d argue cosmic radiation belongs.)

                                  Another way of looking at it is through the expression that “you cannot inspect quality into a product”, meaning that continually observing the system state and rejecting the system when it goes out of bounds is a way to ensure only systems within limits are running, but it is very expensive compared to ensuring the system stays within bounds to begin with. It is only acceptable for cosmic rays because we can’t work the cosmic rays out of the system, so we are regrettably forced to rely on inspection in that case.

                                  1. 2

                                    Memory errors are not unpredictable; for a given stick of RAM, the rate of bit flips is not that hard to figure out (it takes quite a while to get good numbers for ECC sticks).

                                    We adjust this by adding error correction codes.

                                    RE memory leaks: a memory leak isn’t worth chasing if it’s not causing trouble. I have inherited an app that uses the “restart every hundred requests” strategy and I cannot fathom that ever being on my top ten problems. My users don’t care, and it isn’t expensive. I dislike the untidiness, and would probably fix it if it was a side project.

                                    1. 1

                                      Indeed. Volatile RAM is a horrible hack (continually refresh capacitors, waste power) compared to NVM. The cost calculation is clear there, so few complain about it. But the cost calculation of “human attention” is less clear to puritans who think that lack of Discipline and Virtue is what prevents a utopia of uniformly better software.

                                  2. 1

                                    ICEs and turbines can get between 37 and 50ish percent efficiency by theoretical maximum, and real-world engines get very close to that.

                                    I said “closer to 20% [than 98%]”. I didn’t bother to look up the actual number. 50 is closer to 20 than 98.

                                    a modern multi GHz CPU should be capable of billions of operations a second, and for any given task it is pretty easy to to make a back-of-the-envelope calculation about how close to that theoretical ideal

                                    • Why do you assume that the current hardware design is the theoretical ideal?
                                    • CPU saturation as a performance metric assumes that the instructions are meaningful, not to mention TFA is concerned about an over-abundance of instructions in the first place.
                                    1. 1

                                      I said “closer to 20% [than 98%]”. I didn’t bother to look up the actual number. 50 is closer to 20 than 98.

                                      I am not sure that you are interpreting those numbers correctly. There are two numbers: the ~37-50% efficiency allowed by physics, and the 98% efficiency in achieving that theoretical efficiency. The former is a measure of how good an ICE can ever be at accomplishing the goal of turning combustion into usable mechanical energy; the latter is a measure of how well-engineered our engines are in attaining that ideal - and only the latter is something we have any control over.

                                      Why do you assume that the current hardware design is the theoretical ideal?

                                      There may well be a more efficient means of computation out there! In the meantime, it seems reasonable to look at the theoretical max performance of the real silicon we have on hand today.

                                      1. 0

                                        There are two numbers: the ~37-50% efficiency allowed by physics, and the 98% efficiency in achieving that theoretical efficiency.

                                        That’s why I said “Ignoring…”. Also mentioning “thermodynamic limit” is a pretty clear signal that I’m aware of the difference between physical limits and engineering tradeoffs. OTOH combustion itself is a design choice, and that is a hint that the distinction isn’t so obvious.

                                        You chose to comic-book-guy that part of the comment instead of focusing on the part that didn’t start with “Ignoring”.

                                1. 18

                                  In short, it is perfectly OK to have a life outside of work.

                                    If I believed a lot of LinkedIn posts, I’m supposed to have at least two different side hustles and make up for my lack of personal life with the greatness of my company’s work culture.

                                  People often feel peer pressure to code outside of hours, to stay competitive and to be the best

                                    The typical developer is being crushed by a thousand things during the workday which kill their productivity. Professional software development has become so ineffectual - thanks to dithering management who can’t make product decisions, ridiculous methodologies, knowledge loss from average employee turnover of 2.5 years, and myopic chasing of technological “flavors of the month” with poor tooling - that businesses no longer trust what people learn and can do during the work day.

                                  1. 8

                                    the crux is that workers are competing with each other for the approval of an ineffective management structure. a blog post certainly isn’t enough to overcome that. we need to develop solidarity between workers and organizations to create a hiring process that doesn’t demand unpaid labor, and to give workers a say in management.

                                    1. 4

                                      Management knows corporate software development is crumbling in general and so pushes recruiters to emphasize side project work, and workers are desperate to stay relevant and (older folks especially) employed. The real solution to improving software long term is minimizing hiring by reducing turnover, which requires companies to pay raises to keep institutional knowledge and training and mentoring developers in-house.

                                        Software is one of the most ageist industries - it constantly emphasizes hiring young, cheap, disposable, unmentored, and inexperienced 20-somethings to throw at complex problems, and then wonders why software quality is melting.

                                      1. 0

                                        The real solution to improving software long term is minimizing hiring by reducing turnover, which requires companies to pay raises to keep institutional knowledge and training and mentoring developers in-house.

                                        yeah that makes sense, and there is the question of how to make that happen, which would require some collective action by the workers IMO.

                                    2. 3

                                      The typical developer is being crushed by a thousand things during the workday which kill their productivity.

                                      I feel like that is the main reason I like to program outside of work: to get a sense of freedom and efficacy. It never seems like I end up working on the things that I want to during my work hours.

                                    1. 6

                                      Two words: Foam. Roller.

                                      1. 1

                                        Do you just roll out your lower back? Sides or hips maybe?

                                        1. 3

                                           I go very gently over my “upper” lower back; if it’s very bad pain then you need to focus on other areas. IT bands on the side of the thighs, upper back tension, and your ass are other areas to focus on!

                                           The foam roller always hurts like a bitch when you start using it, but it’s a godsend, and once you get into it daily it’s easy and feels really good.

                                          1. 1

                                            Oh, I always avoided them because they hurt… I guess I’ll try

                                            1. 2

                                               It does hurt like a bitch at first, but once you get through the pain it’s damned amazing! I don’t even need to do it daily, or even weekly anymore; a few times a month is all I need now.

                                        2. 1

                                          But how?

                                          1. 2

                                            This is pretty close to what my physical therapist taught me: https://www.youtube.com/watch?v=MnWWDAsEfXk

                                            1. 1

                                              There’s actually tons of videos on YouTube on how to use foam rollers and other devices for stretching and helping sore muscles. Make sure to check the source - you don’t want some random giving you advice.

                                              1. 2

                                                There’s actually tons of videos on YouTube…

                                                Make sure to check the source - you don’t want some random giving you advice.

                                                I find that to be a tricky proposition. How do you know which one is good?

                                                1. 3

                                                   My physiotherapist recommends the PreHab Guys for physical therapy ideas.

                                                   I actually sit on a gym ball when I’m not at my standing desk - and I’ve found that has been good for preventing back pain from sitting too long.

                                                2. 2

                                                  Make sure to check the source - you don’t want some random giving you advice.

                                                  I almost feel that’s what I’d be doing by following your 2 word advice…

                                                  1. 1

                                                    Haha! Well, there’s definitely a range of quality in YouTube. I guess it’s fairly hard to measure if you aren’t used to dealing with fitness types. My general rule of thumb is this: if the person in the video tells you to be gentle with your body, seems to represent a bigger brand than just themselves (say, a gym), and doesn’t try to sell you a particular brand of supplement, they are probably OKish.

                                                    If they are pushing supplements, promising you can do 100 pull-ups if you just workout 10 minutes a day or pretend that their six pack is unrelated to their diet, then you want to stay away.

                                            1. 23

                                              FTFY: “A plea to developers everywhere: Write Junior Code”

                                              Let’s get bogged down with how much simple code we write.

                                              God, I wish every developer would make an effort to write simple code.

                                              1. 7

                                                I don’t disagree with you at all, but Haskell does have a bit of a spiral problem with these sorts of things; often folks writing even simple Haskell programs end up using very exotic types that are abstruse to more junior devs (or even more senior devs who just haven’t looked at, say, lenses before). I have this tweet about a simple dialect of Haskell saved because I think about this often when interacting with Haskell code.

                                                1. 8

                                                  Those exotic types describe complexity that is present in other languages as well. However, in other languages, you do not need the type checker’s permission to introduce complexity. Instead, you discover this complexity after the fact by debugging your program.

                                                  It is questionable whether the Haskell approach is as wise as it is clever. At least to me, it does not seem very suitable for writing what the original post calls “junior code”. Consider some of Haskell’s main features:

                                                  • Purity and precise types:

                                                    • Benefit: You can use equational reasoning to understand the complexity in your code.
                                                    • Drawback: You cannot ignore the complexity in your code, even when it does not matter to you.
                                                  • Lazy evaluation:

                                                    • Benefit: It is easy to write programs that manipulate conceptually large data structures, but in the end only need to inspect a tiny part of them.
                                                    • Drawback: It is difficult to track the sequence of states resulting from running your program.
                                                  • Higher-kinded types:

                                                     • Benefit: It is possible to abstract not only over concrete types, such as Int or String, but also over “shapes of data types”, such as List or Tree (leaving the element type unspecified).
                                                    • Drawback: Oftentimes, type errors will be an unintelligible mess.

                                                  It is ultimately a subjective matter whether these are good tradeoffs.

                                                  1. 6

                                                    often folks writing even simple Haskell programs end up using very exotic types

                                                    … abstruse …

                                                    🤔

                                                  2. 1

                                                    Isn’t a large aspect of Java and C# that they force you to write simple code? Then they get called “blub” languages or whatever. The reality is that you should write for whoever your audience is. Explaining everything such that a six-year old can understand it requires an inordinate amount of effort and without picking a target audience this is what your suggestion devolves into.

                                                    1. 6

                                                      Isn’t a large aspect of Java and C# that they force you to write simple code?

                                                      No. C# has had type inference, covariant and contravariant generics, opt-in dynamic typing as distinct from type inference, lambdas, value variables, reference variables, checked and unchecked arithmetic, and G–d knows what else I’m forgetting since at least the late 2000s. Java’s missing some of that (although less and less recently), but adds to it things like implicit runtime code generation, autoboxing, and a bunch of other stuff. Neither language is intrinsically simple.

                                                      But that said, I don’t honestly know that they’re honestly much more complicated than most languages, either. They’re more complicated than Go, maybe, but I don’t even know for sure if they’re more complicated than Python. The thing is that Java projects—at least, the “enterprise” ones for which the language has become famous—go crazy with complexity, despite—and often at odds with—the underlying language. There’s nothing preventing Python from doing absolutely crazy things, for example, and people who remember pre-1.0 versions of Django might recall when it used metaclasses and what would now be importlib to make one hell of a lot of magic happen in model classes. But the community rejects that approach. The Java community, on the other hand, is happy to go crazy with XML, factories, and custom class loaders to roam way into the Necronomicon of software development. I tend to regard this as the ecosystem, rather than the language, going to the extreme.

                                                      Haskell in practice, to me, feels like what C# or Java code taken to the extreme would look like. And there’s even indeed libraries like language-ext for C# or Arrow (which is for Kotlin, but same difference), which do go there, with (IMVHO) disastrous results. (Disclaimer: I work heavily on an Arrow-based code base and am productive in it, albeit in my opinion despite that comment.) This is also an ecosystem decision, and one that I think this article is rightfully and correctly railing against.

                                                      1. 4

                                                        There’s nothing preventing Python from doing absolutely crazy things, for example, and people who remember pre-1.0 versions of Django might recall when it used metaclasses and what would now be importlib to make one hell of a lot of magic happen in model classes. But the community rejects that approach.

                                                        I don’t think that’s true at all. The difference is that Python has good abstractions, so if you want to do something complex under the hood, you can still expose a simple interface. In fact, Python programmers would much rather use something with a simple interface and complex internals than the other way around. That’s why they’re using Python!

                                                        1. 3

                                                          I’m not sure we’re disagreeing, except for I think you’re implying that Java and C# lack an ability to expose something with complex internals and a simple interface. I’m logging off tech for the weekend, but Javalin is a great example of a Java framework that’s on par with Flask in terms of both simplicity and power, and done with 100% vanilla Java. It’s just not popular. And the reason I cited early versions of Django for Python is specifically because the community felt that that tradeoff of a simple interface for complex internals went too far. (If you have not used way-pre-1.0 versions of Django, it did Rails-style implicit imports and implicit metaclasses. We are not talking about current, or even 1.0, Django here.)

                                                          In other words, I think you’re making my point that this is about culture and ecosystem, not language in the abstract. Which is also why this article is making a plea about how to write Haskell, and not about abandoning Haskell for e.g. OCaml.

                                                          1. 3

                                                            Ah right yes I see about the Django thing. I was thinking about how it uses them now. I wasn’t aware it did import magic before, that definitely sounds a bit much!

                                                    2. 1

                                                      I used to use juxt and comp and partial quite a bit in my Clojure code, but these days I try to avoid them. They’re clever, they’re fun, they’re succinct… but they can also make it harder for the next person who comes along if they’re not already a Clojure hotshot.

                                                      1. 5

                                                        That’s setting a pretty low bar, isn’t it? Partially applying functions isn’t exactly whizz-bang fancy-pants programming in a Lisp.

                                                        1. 2

                                                          And yet, there’s usually another way to write it that’s more clear to someone not as familiar with Lisps.

                                                          (I’m not saying “never use these”. There are definitely times when it’s more awkward to use something else.)

                                                          1. 3

                                                            Function composition is the most fundamental functional programming concept as far as modularity is concerned, and partial application is not far behind. They are not specific to Lisps. juxt is slightly more “clever,” but nonetheless provides a ton of utility, is a part of the core library, and should not be shied away from. Talking about avoiding these functions without explicit examples or clear criteria is pointless.

                                                            Do you disapprove of any macro usage in your Clojure code? Are transducers out? What about core.async? I’ve seen more “clever” and confusing code written using those features than with any of the functions you’ve listed. For that matter, the worst (all?) Clojure codebases tend to be agglomerations of layer after layer of “simple” map-processing functions which are impossible to grasp in the aggregate and incredibly frustrating to debug. This is evidence of a general lack of coherent system-level thinking, versus any specific features in Clojure being responsible for complex, unmaintainable code.

                                                            The guidelines for writing clean, simple, maintainable code are never so straightforward such that they can be stated pithily, to the chagrin of Rich Hickey true-believers everywhere. It’s a combination of figuring out what works for a given team, adopting conventions and architecture well-suited to the domain, and choosing an environment and libraries to integrate with so that you introduce as little friction as possible (and probably more that I’m forgetting, unrelated to the choice of language). But picking and choosing arbitrary functions to eschew will not get you very close to the goal of writing simple code.

                                                            1. 2

                                                              I think you’re taking this a lot farther than what I actually said.

                                                              1. 2

                                                                I’m sorry, I was trying to respond systematically to a comment I disagreed with. If you wouldn’t mind: how exactly did I take it too far?

                                                                1. 1

                                                                  Well, I didn’t say “don’t use these”, I said that I “try to avoid them”. I don’t always succeed in that, and I’m happy to use them where they make sense.

                                                                  There’s a continuum between “can’t avoid it” and “totally gratuitous” and I try to push my personal cutoff towards the left, there. When it would make the code harder to read, I don’t avoid them!

                                                                  1. 1

                                                                    Well, I didn’t say “don’t use these”, I said that I “try to avoid them”. I don’t always succeed in that, and I’m happy to use them where they make sense.

                                                                    Why do you try to avoid using them? When does it make sense to use them?

                                                    1. 19

                                                      One of the little phrases that really resonated early with me was “you should learn a lot about a little, and a little about a lot”. I’m having trouble finding the proper attribution for where I heard it.

                                                      You need depth to frame breadth. You need breadth to de-dogmatize and relieve the constrictions imposed by having a preferred hammer (depth).

                                                      1. 10

                                                        I’m having trouble finding the proper attribution for where I heard it.

                                                        Maybe try searching for “T Shaped” people? That’s how I’ve heard it phrased in some places.

                                                      1. 11

                                                        Rust, I think.

                                                        I’d also like to get my Spanish up to snuff.

                                                        (¿Donde estan las langostas que hablan Espanol?)

                                                        1. 3

                                                          Lo hablo (más o menos), pero casi nunca. Tengo familia en España y (del lado de mí madrastra) Argentina. Sería buena practicarlo más.

                                                          1. 2

                                                            Aquí hay una langosta, cuando gusten practicar :)

                                                          2. 3

                                                            Are you planning to pick up Rust for something specific or just for fun?

                                                            1. 1

                                                              Fun mostly, though if I end up liking it I have a good candidate for a large-ish project that I think would be a good fit

                                                            2. 2

                                                              Aquí estámos! Saludos! 👋🏽

                                                            1. 3

                                                                 This phenomenon is amusing the first time you encounter it. But after running into it over and over, in multiple languages, it stops being funny and becomes a major preoccupation. It is high time we investigated its fundamental causes. Why do programming language features that individually seem sensibly designed have such unexpected interactions when put together? Perhaps there is something wrong with the process by which language features are usually designed.

                                                              1. 2

                                                                So what are your thoughts there?

                                                                1. 3

                                                                     Doesn’t this behavior seem common to many/most software systems? Initially systems are conceptually simple and consistent, but over time they get ad hoc extensions that cause unexpected complexities. Those extensions cause unexpected behavior and possibly bugs.

                                                                     Programming languages seem to have stricter backwards compatibility requirements than many other software projects, so it makes sense that mistakes would accrue over time.

                                                                  1. 1

                                                                    Backwards compatibility is only a manifestation of a more fundamental problem. General-purpose programming languages are meant to be, well, general-purpose, i.e., address a very large space of use cases that you cannot possibly hope to enumerate. Designing language features based on ad-hoc use cases is a mistake.

                                                                  2. 1

                                                                    Design features of general-purpose programming languages based on general principles, not use cases. Make sure that your principles neither (0) contradict each other, nor (1) redundantly restate each other. This is more likely to lead to orthogonal language features.

                                                                    By definition, use cases are concrete and specific. They are useful as guidelines for designing software that addresses concrete and specific needs, and is unlikely to be used in situations that you cannot foresee in advance. If a user comes up with a gross hack to use your software for something else, and ends up burning themselves, you can rightfully tell them “Well, that is not my problem.”

                                                                    However, a general-purpose programming language does not fit the above description. By definition, the ways in which a general-purpose programming language can be used are meant to be limitless, but you can only imagine finitely many use cases. A general principle has the advantage that you can apply it to a situation that only arose after you stated the principle.

                                                                  3. 2

                                                                    One would imagine that the superior way would then be to make extending the language as easy as possible. In other words, Lisp. Every Lisp developer is a potential Lisp developer (wink). The extensions would compete against each other like regular libraries do, and the cream would rise to the top.

                                                                    But the actual effect (at least the way the Lisp community currently is) seems to be that since extending the language is so easy, everybody just extends it to their own liking and no (or very rare) centralized improvements that everyone adopts happen. Nobody codes Lisp, but Lisp+extension set #5415162.

                                                                    Or perhaps it just has too many parentheses. Pyret might show if that’s the problem.

                                                                    1. 2

                                                                      This just pushes the problem onto the user community. A programming language needs a vision, and a vision needs a visionary.

                                                                    2. 1

                                                                      The problem isn’t the features, it’s that people expect to use something as complex as a programming language without a single bit of reading.

                                                                      Nobody expects to be able to just waltz up to a bridge building project and play around without knowing anything about engineering. Yet people think that Python should just work exactly the way they imagine it to work in their head.

                                                                      1. 1

                                                                        it’s that people expect to use something as complex as a programming language without a single bit of reading.

                                                                        The problem you mention is very real too, but it is not fair to blame it only on the language users. Programming languages are designed in a way that makes it difficult to learn all their intricacies. Oftentimes, even the language designers are not aware of the consequences of their own designs. Features are conceptualized by their inventors exclusively in operational terms (i.e., “How do we desugar this into smaller imperative steps?”), and not enough thought is put into question “What does this actually mean?”

                                                                        Try picking arbitrary pairs (X,Y), where X is a programming language and Y is a feature in X not shared with many other languages. Enter X’s IRC channel and ask why Y was designed the way it is. Count how many times they actually give you a design rationale vs. how many times they reply as if you had asked how Y works. And, when they do give a design a rationale, count how many times it looks like a post hoc rationalization of the prior design.

                                                                        1. 3

                                                                          The problem is that people have a shallow, surface-level understanding of two features, then when they combine them they act in a way that you can only understand if you have a deeper understanding of the features. Then they throw up their hands and say ‘WTF?’

                                                                          ‘WTFs’ in programming languages, a ‘meme’ that really started with PHP in my opinion, made a lot more sense when it was the deeper design of individual features that was batshit crazy. Now people are just applying it to every language they don’t like. Two features interact in a way that doesn’t make sense from my perspective of shallow understanding? Must be the language that’s broken.

                                                                          If you actually understand the features in the context of their design - which yes, might very well be syntactic sugar over a series of small imperative steps, what’s wrong with that? - then you’ll understand why they work the way they do.

                                                                          1. 1

                                                                            If you actually understand the features in the context of their design - which yes, might very well be syntactic sugar over a series of small imperative steps, what’s wrong with that? - then you’ll understand why they work the way they do.

                                                                            Sure, you will understand the mechanics of how it works. But this will still give you zero insight on why the feature makes sense. It might turn out that the feature does not actually make the intended sense. Consider this answer by the ##c++ quote bot on Freenode:

                                                                            • unyu: !perfect
                                                                            • nolyc: The C++11 forwarding idiom (which uses a variadic template and std::forward) is not quite perfect, because the following cannot be forwarded transparently: initializer lists, 0 as a null pointer, addresses of function templates or overloaded functions, rvalue uses of definition-less static constants, and access control.

                                                                            In other words, the feature’s behavior deviates from what its own users consider reasonable.

                                                                            what’s wrong with that?

                                                                            The problem is that it is ad hoc. Memorizing lots of ad hoc rules does not scale.

                                                                            Programming is a purposeful activity. When you program, you usually want to achieve something other than just seeing what the computer might do if you give it this or that command. The meaning of a language feature is what allows you to decide whether using the feature contributes towards your actual goal.

                                                                            1. 4

                                                                              I’m not at all defending C++ here. It’s a perfect example of where there really is a problem. But I don’t think that the Python examples on the linked page are like this at all. They’re basic interactions of features that make perfect sense if you understand those features beyond the basic surface level.

                                                                              Some of them (e.g. ‘yielding None’) aren’t ‘WTFs’ they’re just bugs. Bugs that have been fixed! Some of them are basic features of Python, like default arguments being evaluated at function definition time. One of them is that you need to write global a to be able to modify a global variable called a within a function! That’s a good thing! That’s not a WTF. An entirely local statement like a = 1 suddenly modifying global state because you added a new global variable would be a WTF.
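
                                                                           For instance, the default-argument behaviour mentioned above (the classic illustration, names made up):

                                                                           def append_to(item, bucket=[]):  # the default list is created once, at def time
                                                                               bucket.append(item)
                                                                               return bucket

                                                                           append_to(1)  # [1]
                                                                           append_to(2)  # [1, 2] -- the same list object is reused across calls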

                                                                              1. 1

                                                                                Oh, okay. You have a point there.

                                                                    1. 3
                                                                      1. Advent of Code
                                                                      2. going to see the Star Wars movie at 8am on Saturday
                                                                      3. friends’ annual winter solstice party
                                                                      4. band practice
                                                                      5. neighbors are having a holiday party with tamales

                                                                      (yikes, that’s way too much stuff)

                                                                      1. 1

                                                                        Weird, I’m at a tamales party right now. (Or at least the lull at the end of the party.)

                                                                      1. 1
                                                                        1. Sites send cookies with the tracking pixel so that they can tell that the person who visited oldnavy.com is the same as the person who’s using Facebook on the same computer.

                                                                        How do they give you the correct tracking pixel as a third party? Can Facebook’s cookie from Old Navy’s site interact with its cookie from its own site? Or is it a combination of IP address and browser fingerprinting that identifies you?

                                                                        1. 6

                                                                          The answer to this is pretty complicated, so it’s not a surprise that this particular article doesn’t get into it. There’s a technology called cookie match or cookie sync. There are a lot of different flows it can use, but the basic trick is like this:

                                                                          1. The user navigates to a site they want to read, which we’ll call interestingnews.com.

                                                                          2. An advertising tag on the page causes the browser to load a pixel from, say, doubleclick.net. A previously-existing cookie A is sent with this request. The cookie value contains several fields, but the most important one is a simple integer which uniquely identifies this cookie jar. We’ll call the integer value A.id.

                                                                          3. The server hosting the first pixel, instead of an image file, replies with an HTTP redirect to one of its advertising partners, say retailer.com. The redirect URL points to a pixel owned by this second site. Crucially, the URL of the retailer.com pixel includes a query parameter with the value of A.id.

                                                                          4. The browser follows the redirect and sends retailer.com a previously existing cookie B, set on that domain. While the technology stack behind it may be different, this cookie too has a simple integer at the heart of it, which we’ll call B.id.

                                                                          5. The retailer.com server now knows the value of B.id, because it was in the cookie, as well as the value of A.id, because it was in the URL. They can now store the mapping between these cookies server-side, and use it to join whatever other server-side information they may have about this cookie jar. They may even sell the use of the linkage graph to third parties.

                                                                          I usually have to explain this with diagrams, but I did my best. I’m happy to answer questions.
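
                                                                          If it helps to see the flow as code, here’s a minimal sketch of steps 2–5, assuming Flask; the domain names, cookie names, and the store_mapping() helper are made up for illustration:

                                                                          from flask import Flask, redirect, request

                                                                          ad_network = Flask("ad_network")   # plays the doubleclick.net role (steps 2-3)
                                                                          retailer = Flask("retailer")       # plays the retailer.com role (steps 4-5)

                                                                          @ad_network.route("/pixel")
                                                                          def ad_pixel():
                                                                              a_id = request.cookies.get("A", "fresh-id")   # cookie A, set on this domain earlier
                                                                              # Instead of returning an image, redirect to the partner's pixel,
                                                                              # leaking A.id in the query string.
                                                                              return redirect(f"https://retailer.example/pixel?partner_id={a_id}")

                                                                          @retailer.route("/pixel")
                                                                          def retailer_pixel():
                                                                              b_id = request.cookies.get("B", "fresh-id")   # cookie B, set on this domain earlier
                                                                              a_id = request.args.get("partner_id")         # A.id, carried over from the redirect
                                                                              store_mapping(a_id, b_id)                     # hypothetical server-side join table
                                                                              return "", 204                                # a real server would return a 1x1 GIF

                                                                          def store_mapping(a_id, b_id):
                                                                              print(f"linked ad-network id {a_id} to retailer id {b_id}")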

                                                                          1. 1

                                                                            So I guess at that point it’s just all graph processing? Get enough sites, get enough people, take the mappings and build correlations between the parts.

                                                                            Edit: Thank you for taking the time to leave a detailed reply.

                                                                            1. 2

                                                                              Yes, basically, it’s graph processing with a lot of money changing hands over different pieces of the task.

                                                                        1. 2

                                                                          I need to keep better track of this stuff.

                                                                            1. 3

                                                                              eli.thegreenplace.net

                                                                              There was a chunk of time where I kept running into tricky problems, and when I would google around, the first result was always an extremely helpful article from Eli’s site. Definitely recommend it.

                                                                              1. 1

                                                                                +1 for nullprogram

                                                                                1. 1

                                                                                  Whoops. I should’ve checked the links.

                                                                                1. 20

                                                                                  I often feel like I’m the lone developer in the world screaming into the void that python is a flawed language and its massive adoption recently is a mistake that we haven’t begun to see the costs of yet.

                                                                                  The language itself, the grammar, operational semantics of the core language, documentation, and common library design idioms are all bathed in enormous complexity. The type system is a worst-of-both-worlds compromise that manages to cause frequent crashes with no real upfront checking (mypy is a very small step in the right direction but has a very long way to go and probably can’t ever become as good as it needs to be), yet also doesn’t offer enough dynamism to justify the absurd cost of not having compile-time checking.

                                                                                  I just don’t get it.

                                                                                  1. 10

                                                                                    I’ve been using Python a very long time, and I have to admit that when I read your comment looking for something objective rather than subjective, all I find is something that doesn’t match my own experience (“manages to cause frequent crashes”) and really seems to be the same “dynamically-typed languages are all bad and constantly break” stuff that’s not at all backed up by the available literature on defect rates for static versus dynamic typing.

                                                                                    1. 9

                                                                                      I’m not going to try to make an objective argument against any language because I just don’t think we’re at a point in our understanding of software engineering yet where we can make substantive objective claims about languages outside of some very narrow cases. I’m familiar with the literature you’re referring to and while I grant that the conclusions did not support static typing, I’m not convinced by the research for several reasons (methodological concerns as well as concerns about how broadly you could extend the conclusions given the pretty limited number of cases that were actually studied).

                                                                                      That said, I’m glad you’ve never had a problem. I have, and I’ve talked to quite a few other people who have as well (although they are generally dismissive of the problems). But it’s not strictly a matter of ahead of time type checking vs runtime checking as an absolute good/bad, it’s a matter of trade-offs.

                                                                                      Ahead-of-time type checking can eliminate categories of bugs, at the cost of not admitting some valid programs. More sophisticated type checking can admit more programs at the cost of sometimes requiring the programmer to think harder about types (dependently typed languages, for example, can express far more granular constraints but might be harder for a programmer to use than something like Haskell – omitting Dependent Haskell for the sake of this discussion – and there is a large range of languages along this curve).

                                                                                      Very dynamic languages, on the other hand, focus on flexibility of features at the cost of programs being less provable. A language like lisp, ruby, or erlang may have many programs that use this flexibility in ways that would make them very difficult to type check.

                                                                                      The problem as I see it is that python is the worst of both worlds. It offers no guarantees through type checking, requiring at best that the programmer manually implement their own verification through runtime type assertions, unit tests, or manual proofs outside of the program itself – or, at worst, programs crash while running. At the same time, the language’s own grammar and idioms do not seem to support the dynamism that other languages offer in exchange for giving up safety.

                                                                                      In short, the problem with python isn’t that it’s dynamically typed per se, but that it’s dynamically typed while being no more powerful or expressive than statically typed languages.
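
                                                                                      To make “manually implement their own verification” concrete, here’s a minimal sketch of that kind of runtime check; the parse_port function and its messages are made up for illustration:

                                                                                      def parse_port(value):
                                                                                          # A static checker would reject a bad call site before the program runs;
                                                                                          # here the check only fires at runtime, on the code path that hits it.
                                                                                          if not isinstance(value, int):
                                                                                              raise TypeError(f"port must be an int, got {type(value).__name__}")
                                                                                          if not 0 < value < 65536:
                                                                                              raise ValueError(f"port out of range: {value}")
                                                                                          return value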

                                                                                      1. 4

                                                                                        It offers no guarantees through type checking,

                                                                                        Well, it offers some.

                                                                                        >>> "3" > 3
                                                                                        Traceback (most recent call last):
                                                                                          File "<stdin>", line 1, in <module>
                                                                                        TypeError: '>' not supported between instances of 'str' and 'int'
                                                                                        

                                                                                        If that doesn’t seem like much, consider that a couple of the most popular programming languages today do not offer even this level of type checking.

                                                                                        1. 3

                                                                                          The problem is you still don’t find that out until runtime. That’s my fundamental point- to whatever degree there’s value in the sort of dynamism you’d get from having ‘3’ > 3, you give that up in python, but you don’t get anything in return. You just have to run it, and hope you exercise that code path early so it crashes while you’re in the office and not at 2am when you have to groggily debug stack traces under an SLA.

                                                                                          1. 1

                                                                                            The problem is you still don’t find that out until runtime.

                                                                                            Yeah, I get that, and have made the same arguments in mounting levels of anger when looking at logs with many versions of 'NoneType' object has no attribute 'foo' that nobody knows where they’re coming from. The unfortunate part is that the people who decide these things don’t really care that some engineers have to suffer from it.

                                                                                            but you don’t get anything in return

                                                                                            I’m not sure if that’s true, but I find proving that point difficult. One benefit of python is greatly enhanced support for exploratory programming that many stricter languages in practice do not give you – although some, like Haskell, are close. The language is stupidly simple (in a good way) in my opinion, and I think the dynamism is part of that somehow.

                                                                                            1. 2

                                                                                              I’m coming from Haskell as my primary language, but lately I’ve been forced into python at work. I find python far less capable for exploratory programming than Haskell, not just because of the lack of type safety but also the general lack of flexibility in the language. I’ll concede the situation might be different if the reference point for a typed language was Java or something.

                                                                                              1. 1

                                                                                                Perhaps things have changed since the last time I coded Haskell (which was ~10 years ago), but doesn’t ghci lose all local bindings when you reload your modules?

                                                                                                1. 2

                                                                                                  Yes, that’s still a pain. I’m not going to defend Haskell as flawless or anything, it has a bunch of painful and irritating deficiencies. On balance though I still find it a nicer experience than python for exploratory programming.

                                                                                        2. 1

                                                                                          I tend to think of Ruby as being very similar to Python (though I haven’t spent a lot of time with Ruby). What makes it more dynamic than Python? What are the marginal features that you think make the dynamic nature of Ruby more worthwhile?

                                                                                          1. 3

                                                                                            Ruby is certainly the least differentiated from python of the three I listed, but I think there are some important differences in idioms at least, if not fundamental capabilities. In particular, ruby’s metaprogramming feels much more dynamic than python’s decorators (which in practice feel much more like something you’d get out of a preprocessor in any given statically typed language). Writing an embedded DSL in ruby, it feels much more dynamic and mutable than trying to do the same thing in a “pythonic” style.

                                                                                            Ultimately, because of the metaprogramming capabilities and slightly more flexible syntax, idiomatic ruby “feels” more like a very dynamic message-passing OO system. Python, on the other hand, “feels” much more structured; like writing C or C++, except that instead of a type checker, the compiler people spent all their time anal-retentively trying to prevent you from using code formatting they don’t like.

                                                                                          2. 0

                                                                                            I’m really not trying to be dismissive, but again this feels like a rehash of the same “dynamic typing is bad, you have to write unit tests to assert literally everything in your program” argument that always comes up in static-versus-dynamic debates, and unfortunately that tends to be a highly subjective argument. Summaries of what research there is on the topic (this one is often cited) are very far from conclusive on anything, for example.

                                                                                            So while I respect that you don’t care for Python, I also respectfully ask that you keep in mind how many other people have had experiences with dynamically-typed languages which don’t line up with your own.

                                                                                            Also, it’s worth noting that if you want something with Python syntax and static type checking, Python 3 supports annotating with types and there are static checkers, like mypy, available. Though personally I dislike the fact that mypy went all-in originally on a nominal-typing approach when I find that real-world Python tends to want a structural approach, but that’s my individual preference.
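
                                                                                            For anyone unfamiliar with the nominal/structural distinction, here’s a minimal sketch of the structural style using typing.Protocol (a later standard-library addition); the Quacker and Duck names are made up for illustration:

                                                                                            from typing import Protocol

                                                                                            class Quacker(Protocol):
                                                                                                # Structural: anything with a matching quack() conforms.
                                                                                                def quack(self) -> str: ...

                                                                                            class Duck:
                                                                                                # Note: no explicit inheritance from Quacker.
                                                                                                def quack(self) -> str:
                                                                                                    return "quack"

                                                                                            def annoy(q: Quacker) -> str:
                                                                                                return q.quack()

                                                                                            annoy(Duck())  # accepted structurally; a purely nominal checker
                                                                                                           # would insist on `class Duck(Quacker)` instead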

                                                                                            1. 4

                                                                                              I get that people might have had different experiences, but at the end of the day my experiences are my experiences, and they’ve led me to the conclusions I’ve articulated. I don’t see any reason I shouldn’t share them on a discussion forum where they are on topic just because some people have had different experiences.

                                                                                              You like python? Never had a problem with it? Fantastic, go write it all you want. I’ve had no end of trouble with it, and it doesn’t further any discussion to hide the fact that I and others have come to the same conclusions about why, at least for some of us, it doesn’t work.

                                                                                              1. 1

                                                                                                Sharing opinions and experiences is fine!

                                                                                                My issue was with the way the original comment phrased things seemingly as assertions of fact; that style of argument too often dominates any discussion of dynamic versus static typing, and the reality is that nobody’s really in a position to make such assertions – the most anyone can do is share their experience and opinion, and accept that someone else might have different experiences and opinions.

                                                                                          3. 2

                                                                                            I at least somewhat agree with Rebecca. Python is a pleasure to write and a disaster to debug. My last bug was a missed comma in a statement:

                                                                                            if word in [ 'keyword1' 'keyword2' ]:   # missing comma: Python silently concatenates the adjacent string literals
                                                                                            

                                                                                            Hidden within a couple of hundred lines, it can be very hard to figure out what’s going on.
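
                                                                                            For anyone who hasn’t been bitten by this before, what Python actually builds from that list is:

                                                                                            >>> [ 'keyword1' 'keyword2' ]
                                                                                            ['keyword1keyword2']
                                                                                            >>> [ 'keyword1', 'keyword2' ]
                                                                                            ['keyword1', 'keyword2']

                                                                                            So the in test quietly compares word against ‘keyword1keyword2’ and never matches either keyword.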

                                                                                          4. 3

                                                                                            You’re not alone -

                                                                                            Python’s syntax makes me want to yank my hair out, while Ruby’s makes me gnash my teeth during the day, and grind them to a fine dust at night.

                                                                                            I agree with your points on Python - and have tried assembling a similar, grounded, reasonable list for Ruby, but that was before my descent into madness.

                                                                                            1. 2

                                                                                              Python’s syntax makes me want to yank my hair out, while Ruby’s makes me gnash my teeth during the day, and grind them to a fine dust at night.

                                                                                              Good example of how much peoples’ opinions on these matters differ. I find both Python and Ruby to be in the top 5 of most beautiful programming languages.

                                                                                              1. 2

                                                                                                And that’s how we can still all be civil friends: all it boils down to is beauty.

                                                                                            2. 3

                                                                                              I agree, and I would also say that I feel python has become a bit of a worse language in recent years. I think it peaked around python-2.7. I was hopeful for python-3, and it seemed pretty decent at first, but I was really disappointed that the “twisted style async” was the version that won out with asyncio, and I feel it has only gotten worse over the years (more complex, stranger/weirder syntax additions, I find the “type checking” bolt-on very awkward, etc).

                                                                                              1. 2

                                                                                                I agree, this has been my experience as well.

                                                                                              1. 4

                                                                                                I depleted my booze cabinet in the last few stressful months, so all I want for Xmas is a big bottle of Bulleit Rye. But since we don’t celebrate Xmas here, I’m probably going to end up buying it myself and drinking it alone. Cheers.

                                                                                                1. 1

                                                                                                  Make an automated microbrewery that tracks consumption and adjusts its output so that the supply of booze always matches the demand.

                                                                                                  1. 1

                                                                                                    Post your experience and get job offers from distilleries and power companies.

                                                                                                1. 2

                                                                                                   RAM cloning seems like it would provide a major boost to copy-on-write operations. Wonder what the performance improvements would be like for forking processes.

                                                                                                  1. 6

                                                                                                     He left out luck. You need to collect bus tickets AND get lucky that those bus tickets ended up mattering. There are tons of people who fiddled around with obscure mathematics and we’ve never heard of them simply because they never got lucky. They were brilliant, intelligent and obsessive. They just happened to pick a dead-end topic.

                                                                                                    1. 9

                                                                                                       I mean, Paul is a very smart guy, but he has his blind spots, and one of them, perhaps the greatest of them, is his complete unwillingness, even incapacity, to engage with the contingent nature of human endeavour. It’s not surprising, as he has led a blessed life, and has therefore been wildly successful. But his writing never admits the truly random nature of success.

                                                                                                      1. 2

                                                                                                         He mentioned that if Darwin had been born a hundred years earlier, he wouldn’t have been around for the period of high interest in natural history. He also mentioned that loads of brilliant, hard-working people just end up researching dead ends. I think it was implied.

                                                                                                      2. 2

                                                                                                         How did he leave that out? Most of the write-up is dedicated to that.

                                                                                                      1. 12

                                                                                                         I’ve just given up on the laptop market. At first, people at coffee shops around the city looked surprised when I arrived carrying my desktop computer on my shoulders, but they got used to it. I would probably give in and buy a laptop if I had to do it more often, but once or twice a week it’s fine.

                                                                                                        1. 5

                                                                                                          I love this post because I’m honestly not sure whether or not you’re joking.

                                                                                                          1. 4

                                                                                                            I get a real Atlas vibe from carrying a desktop on your shoulders.

                                                                                                            1. 1

                                                                                                              Color me interested as well!

                                                                                                              @Bherzet, can you show us your setup at a coffee shop? :)

                                                                                                            2. 2

                                                                                                              How long do you spend in any given coffee shop?

                                                                                                              1. 1

                                                                                                                 Also, how far is the coffee shop from home?

                                                                                                              2. 2

                                                                                                                Pic or it didn’t happen.

                                                                                                              1. 29

                                                                                                                Windows still clearly isn’t for me. And I wouldn’t recommend it to any of our developers at Basecamp. But I kinda do wish that more people actually do make the switch. Apple needs the competition.

                                                                                                                Oh how times have changed.

                                                                                                                I find that Windows, Mac, and Linux all frustrate me, but all for different reasons. My favourite OS is whichever one I used least recently.

                                                                                                                1. 26

                                                                                                                  When Mac stops working you throw money at the problem.
                                                                                                                  When Linux stops working you throw time at the problem.
                                                                                                                  When Windows stops working you throw the laptop at the wall.

                                                                                                                  1. 4

                                                                                                                    I thought you reinstall Windows?

                                                                                                                    1. 4

                                                                                                                       You reinstall Windows. Curse yourself for thinking that would fix the problem. Then you throw the laptop at the wall.

                                                                                                                  2. 2

                                                                                                                    Care to elaborate on what bothers you about each operating system?

                                                                                                                    1. 4

                                                                                                                      Not OP, but have an opinion:

                                                                                                                      Windows

                                                                                                                      • Ads
                                                                                                                      • Updates (last week two of my USB ports just stopped working, although they worked fine in EFI and Linux.)
                                                                                                                       • Licenses (buy a cheap license from shady sources or pay hundreds for a legit license from m$). If you’re lucky your laptop has a burned-in license, but that’s not available for DIY PCs (like mine).
                                                                                                                      • Lobbying against open source software (see the shit happening in Munich’s municipal IT)
                                                                                                                       • Drivers in general (my last installation had Intel wifi issues where the ping would irregularly but often rise to multiple seconds and I had to restart the interface; after reinstalling I haven’t had that issue again)

                                                                                                                      Linux (excerpt)

                                                                                                                      • Sometimes the desktop is unlocked when waking up from suspend
                                                                                                                       • Freezes on more modern desktops (gnome 3, kde plasma). This seems to be a bit better with MATE (my last experience with Ubuntu MATE was mostly rock solid)
                                                                                                                       • Inconsistent behaviour, like sometimes the space bar dismisses the gnome3 lock screen, but sometimes you have to use the mouse to drag it away. The spaces went straight into the password field instead
                                                                                                                      • Gaming not ideal, although Valve is pretty cool for working on this
                                                                                                                       • You (by default in all distros I know of) have to enter your password so many times. When installing apps through a software center, for example. No easy-to-use biometrics.

                                                                                                                      macOS

                                                                                                                      • Haven’t used this in a while, seems to be the holy grail for me right now (sarcasm). Well, their hardware is super expensive, so that’s a downside.

                                                                                                                      The list goes on and on and I tried to be brief.

                                                                                                                    2. 1

                                                                                                                      My favourite OS is whichever one I used least recently

                                                                                                                       Oh my, it is exactly the same for me, although I currently don’t want to buy a mac to go back to macOS again (which is the one I used least recently and thus the one that seems most tempting).

                                                                                                                      Doesn’t it bother you? I waste so much time reinstalling OSes… It drives me crazy, but I can’t help it. Maybe you have some advice for coping? I wish I could just stick with one and learn to live with the downsides.

                                                                                                                    1. 2

                                                                                                                      Just got back from two weeks in Spain (for my cousin’s wedding!) so I am looking forward to doing absolutely nothing before going back to work on Monday.