1. 2

    This is good, thanks!

    I’ve been making static sites for a long time now. I wrote my first one just as a fun game to see how far I could push the static site concept. Could I, for instance, write a site that does book reviews? Allows a user to make book recommendations? All without any back-end at all?

    I could, and that was cool. I did a couple of other fun project sites. My latest was a riff on using AirTable as a blogging backend. There would be some magic to get the AirTable entries to the same folder as the site. It might be a lambda or a cron job. That wasn’t important. The important thing was that there was no back-and-forth. You could run the site from a USB drive.

    And that’s where I stopped. I wanted to add commenting, but commenting, to me, seemed much more of a back-and-forth activity. You make a comment, you reply, you edit your comment, and so on. It had much more of a dynamic feel to it. This wasn’t something I could pack up on a USB drive and give to somebody.

    I still think there may be some document-driven, delayed, offline way of doing comments, maybe using LocalStorage; I just haven’t played around with it any more. I appreciate the chance to see how somebody else has solved it.

    1. 3

      Hopefully not a thing. I am on holiday.

      I will, undoubtedly, end up doing some technical work anyway, but it’s not in the plan.

      1. 2

        This topic is very real for me. I’ve worked on multiple legacy web applications with horrific technical debt. One thing I’d like to be able to do is “dead function elimination”. Finding all (JS) functions and their uses is non-trivial with grep and friends. Does anyone have experience or tips on this?
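        As an illustration of why this is hard: here’s a minimal, purely illustrative Python sketch (the name find_unreferenced and the regex approach are mine, not an established tool) that flags JS functions declared but never referenced across a set of files. Methods, arrow functions, late binding, and dynamic lookup all defeat it, which is exactly why grep and friends fall short:

```python
import re

# Matches classic declarations only: "function name(...)". Arrow functions,
# methods, and dynamically-built names slip right past this.
DEF_RE = re.compile(r'\bfunction\s+([A-Za-z_$][\w$]*)')

def find_unreferenced(sources):
    """sources: {filename: source text}. Returns names declared but never
    referenced anywhere outside their own declaration."""
    defined = set()
    for code in sources.values():
        defined.update(DEF_RE.findall(code))
    unused = []
    for name in defined:
        # count every whole-word occurrence of the name across all files
        uses = sum(len(re.findall(r'\b' + re.escape(name) + r'\b', code))
                   for code in sources.values())
        if uses <= 1:  # only the declaration itself
            unused.append(name)
    return sorted(unused)
```

        Even this toy version shows the core problem: “used” is a whole-program property, so anything short of a real parser plus call-graph analysis is only a first-pass filter for a human to review.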

        1. 1

          Generally that’s a linker/compiler-internals kind of thing: looking for symbol dependencies across source files. There are also the various forms of late binding, and public functions can be called from anywhere, so good luck.

          I’m interested in looking more into the Language Service Provider stuff that’s coming out. There might be some goodness there to check into. I don’t know.

          ADD: Bottom-up tech debt elimination on huge codebases is a bear. Your best bet is probably to write clean code for the work you do, shim up the work around what you change, and then start a refactoring effort from the outside in. Each of these by itself might not work, but all together they tend to eventually get the job done.

          1. 1

            A combination of file-name and file-content search, plus a human, is the only for-sure way to know what’s useful and what’s useless.

          1. 7

            I keep a small herd of yaks. Whenever I want to do something productive, I always end up out in the herd, shaving a bunch of them until I get tired. Then the feeling passes and I’m back to watching Love Boat.

            What I want to do is write a static code analysis tool. The language doesn’t matter, so I’ll probably pick F#. I have an idea for a “cognitive load” metric that should be a new and useful way of looking at applications.

            What I’m actually doing is writing an essay on learning: learning how to make stuff people want, learning how to be a better programmer, and learning as a generic process, whether it’s a startup, large corp, or machine learning.

            The static code analysis would be easier and much more fun. Dive down into the bits, maybe do some cool compiler object coding, end up with a number or smiley face that we can all argue about. Fun times. The essay, however, is more important, as it frames up the books I’m writing. Being a really good learner is the key part of learning to be a better programmer, and without that framework there’s really no criteria for judging whether a static code analysis tool is any good or not.

            So for this week, at least, I’ve left my normal herd of yaks and am working with Meta-Yak, the Mother of all Yaks. Leave it to an old OO guy to always be looking for the virtual base class.

            1. 2

              Hi guys, author here!

              I’ve just written a very short, hopefully good explanation of Merkle trees. It’s more of a layman’s explanation, but I would love to get feedback. If it does not meet the requirements (I read them and didn’t see any issues, but to err is human), please let me know and I will delete it.

              Thanks for your time and have a nice day!

              1. 2

                I liked this. Thanks. You have a nice voice. Keep up the good work!

                I was hoping to find out if this was how you made baby merkles, but it turned out okay anyway (grin)

                1. 1

                  Thank you so much for your kind words! I am a very firm believer in them being delivered by storks but who knows 😂

              1. 9

                This page renders poorly with JS enabled; it doesn’t render at all with JS disabled, and Reader Mode doesn’t work either. In the future, for long reads, try to ensure that they are compatible with Reader Mode and don’t require JS to load properly.

                The split between Russell and Wittgenstein is curiously portrayed. Wittgenstein was indeed a fan of “ordinary language” philosophy, and Russell was indeed committed to formalism, but several pieces of the story are missing, including Wittgenstein’s service in World War I; Russell’s peers, notably Frege; and, of course, the spirit of contemporary Continental politics around the turn of the 20th century.

                The author’s points about practical ontology don’t come with concrete recommendations for what to read and build. I’d like to recommend:

                There is a bizarre utterance near the end:

                Functional programming is a formal map of value delivery.

                What does this mean? How, formally, does functional programming outline such a map? Can you show us? It sounds like the “word salad” complained about in the opening; a formal map, or formal mapping, is unlikely to mesh nicely with business-oriented hollow phrases like “value delivery”.

                1. 2

                  This essay went to five thousand words and it still wasn’t enough.

                  Thank you for pointing out the many faults.

                  A big part of the problem with dealing with any philosopher or philosophy is that people dig too deep. I could never cover all the details of Wittgenstein, Russell, or Socrates, nor would I want to. As coders, what we do is meet people, and those people might be long-dead philosophers; we drag them somehow into our reality so we can write code to make something useful. I did that here, and you are absolutely correct that this was a disservice to the people involved.

                  This is the beginning. There’s a book on good coding that follows with the details. It is meant as a first chapter; a teaser.

                1. 3

                  I don’t have any idea what this is supposed to be. It starts strong, but then rambles for a bit, and ends like it’s the beginning of something larger that either doesn’t exist, or (seems likely) just doesn’t work.

                  And there’s no fucking scrollbar.

                  If I were a man of leisure, I’d spend my days tracking down programmers who mess with users’ scrollbars and whomping them on the nose with a rolled-up newspaper.

                  1. 2

                    Apologies. This is a tentative first chapter on being a better programmer. It’s why I posted it here.


                  1. 2

                    Bunch of server scut work for some friends who are moving domains. It’s not neural networks conquering the world, but it’s useful to somebody and it pays the bills.

                    1. 9

                      There have been several engineering “Holy cow!” moments in my career: structured programming, OO, databases, and so forth.

                      But I think the insights that had the longest-term impacts were those that involved my mental/emotional manipulation by way of media consumption. In the 90s, I stopped playing video games. I realized that the games were not “games” as I understood them. They weren’t challenges of skill. It was more like a movie where you could click. The goal was to keep you playing as long as possible.

                      There was a stretch of about 15 years when I didn’t have any more realizations. Instead it was all technical. Then, as I started helping more and more teams, I began to realize that developers were being sold to by Microsoft, Apple, Google, and so forth. Just like with the game situation, what it appeared to be and what it was were different things. It appeared to be new, useful tech to do things we couldn’t do before. In far too many cases, however, what it actually was? It was products designed to appeal to developers so they’d use them – and become part of some ecosystem that involved selling more stuff to them.

                      As I moved into helping larger organizations, I saw the same thing with process. There are a dozen easily-recognizable companies today where their main marketing thrust is telling other developers what a cool workplace/process they have. It appears that they’re sharing cool, positive, innovative new solutions with the community. What it actually is? It’s just another way to bring people together and keep them permanently engaged so that more things can be sold to them.

                      I don’t mind the bazaar concept. A cool place where you can see all sorts of things and browse looks like fun. What I mind is the subtle manipulation of expectations versus reality. There are tons of people massively overbuilding and overengineering systems today because they’re so caught up in these various communities that they don’t understand delivering solutions. There are poor people all over the world desperately building iPhone apps and the like because they think that’s a path to wealth. I mind it when we show people one thing, a goal they’d like to achieve, then we sell them something else: an ecosystem where it looks like they’re reaching that goal, but instead they’re actually doing some other stuff we want. That’s dark.

                      1. 2

                        There are poor people all over the world desperately building iPhone apps and the like because they think that’s a path to wealth. I mind it when we show people one thing, a goal they’d like to achieve, then we sell them something else: an ecosystem where it looks like they’re reaching that goal, but instead they’re actually doing some other stuff we want. That’s dark.

                        No, I think you are spot on.

                      1. 3

                        I used to love technical interviews and was quite good at them. I think out of 40 or 50, I flunked one time. But heck if I know what they’re good for. (I’ve conducted a bunch too.)

                        This article starts down the right road by acknowledging that we don’t take the interviewee into consideration enough. I think a deeper problem is that the selection process rarely mirrors what we actually want in a new hire.

                        Can they work through tough tech problems? OK, but do we really spend a lot of time wondering whether something is O(n) or O(log n)? More directly, is that something everybody in the building needs to know? Or is it something that only a few people need to know, as long as we work well together?

                        Analogies in tech suck, but the best I can come up with is that a good tech team/company is like a Jazz band, or an improv comedy troupe. Your job is to first master your skills, but just as importantly you have to pay attention to others, the bigger picture, and the people you’re trying to help. You work together fluidly and at the appropriate technical depth each task requires without letting the show sag.

                        That means three things: 1. This is extremely dependent on the personalities and situations of the various people involved, and 2. There is no “right” person for a large group of 20-2,000 projects. You can rock one situation and be the dead weight in another. And 3. It’s gotta be a mutual-selection, audition-style tryout. You can try various tricks to simulate an improv comedy sketch or Jazz set, but hell if I think you’re going to accomplish much of anything. The best I’ve seen are people who can reliably demonstrate that they’ve worked in a huge number of various environments and everybody liked them and things came out okay. But really, all that gets you to is “mediocre”.

                        Here’s a mental experiment to try out. Let’s say you find a person that is a complete tech idiot. They couldn’t work a calculator, much less code a system. But when that person is added to this team you’re hiring for, suddenly the team gets 3x as effective, the customers are loving what they see, and everybody loves their work. Would you hire them?

                        I’d hire them in a heartbeat. As long as they’ve got a good attitude and will work with the rest of the team, who cares about the tech interview? But wait, you say: if you did that consistently, wouldn’t you end up with teams full of people who couldn’t code?

                        Really? Then how could they be so productive and the customers love them? And more to the point, even if they couldn’t code – suppose they’ve got some special magic wand that lets them outsource everything – who cares again? Are we here to code, or to be happy, productive, and provide value?

                        1. 33

                          This POV is dangerous imo.

                          Which philosophy would you rather your team adhere to?

                          1. If it’s taking everyone else too much time to understand your code, you need to make it more clear.
                          2. Everyone here is a professional. If you can’t read someone’s code, it’s your fault. Try harder. Spend more time.

                          Which philosophy do you think will lead to a higher quality code base?

                          I think most people would choose 1., and if you are inclined to choose 2. it’s because taking the time to be clear and communicate well through your code is hard or annoying for you.

                          There are orders-of-magnitude difference in simplicity and readability in code bases, and I think pretending otherwise is bad code apologism.

                          discover what problem it actually intends to solve, as opposed to the problem I thought it solved

                          Why wasn’t that clear from the names and code structure? Why wasn’t that documented?

                          figure out some complexities involved in solving that problem and stop underestimating it

                          start understanding how the code addresses that complexity

                          What if it addresses it poorly? What if what is currently addressed by 300 lines of opaque code could have been addressed in 40 lines of clear code? This is not hyperbole… such differences are common.

                          start understanding how a lot of the seemingly unnecessary complexity deals with relevant use cases and edge cases

                          Much more common, ime, is that after gaining a full understanding you realize a much better and simpler solution exists.

                          understand that the structure of the code makes sense, solves the problem and cannot be improved in obvious ways

                          This is incredibly rare.

                          1. 10

                            Pretending otherwise is bad code apologism.

                            I don’t intend to pretend otherwise. I’m just focusing on my experience, in which quite a few codebases turned out to be reasonable, and in particular more reasonable than I initially thought.

                            Why wasn’t that clear from the names and code structure? Why wasn’t that documented?

                            It was. I just hadn’t understood yet. Like reading a book for the second time and discovering all those things you overlooked the first time.

                            Example: some time ago I adapted the scripts of the Debian logcheck package for our purposes. My initial reaction to viewing the code was the usual: “this seems overcomplicated and hard to follow”. I had originally read the man page, but of course I only remembered what was relevant for our purposes. Reading the code and the documentation again, it became clear that it addresses many more use cases than just ours. All the use cases it addresses are clear from the documentation, names and code structure, but only once you realize what they are. Also I had to get used to reading a shell script of that size again. Until I had gone back and forth a few times, I did not have a clear picture, which made the code look overcomplicated. It isn’t: it’s pretty nice, and was quite easy to adapt, requiring only precision surgery, once understood.

                            What if it addresses it poorly? What if what is currently addressed by 300 lines of opaque code could have been addressed in 40 lines of clear code? This is not hyperbole… such differences are common.

                            I’ve encountered those cases as well, but they haven’t been common for me. Perhaps I’m just lucky.

                            Much more common, ime, is that after gaining a full understanding you realize a much better and simpler solution exists.

                            Does the fact that a better and simpler solution exists mean that the original code was bad? I believe a lot of code is ‘reasonable’, even if it can be improved.

                            Example: I’m currently revisiting an 8-year-old codebase mostly written by myself, where I can reduce the code by quite a bit by using better (and fewer) abstractions and reusing libraries. However, an important part of why that is possible is that we have developed better abstractions, better libraries and a better understanding of the more important and less important aspects of the problem we are solving since then. The solution is structurally almost the same, but simpler. Even counting the library code, it’s less code and more robust. I would definitely call it ‘better’. Was the original code bad? I don’t think so. It was suboptimal, but on a scale of 0-10 it wasn’t below a 6. The possible changes weren’t obvious.

                            understand that the structure of the code makes sense, solves the problem and cannot be improved in obvious ways

                            This is incredibly rare.

                            Perhaps we have a different understanding of ‘obvious’. I’ve worked on codebases where, after understanding the code, thinking (and experimenting) for a couple of days resulted in important improvements (which then took several additional days of work to actually perform). I don’t count such improvements as “obvious”.

                            (Aside: really bad code can make finding the improvements harder, but I’m talking about ‘reasonable’ code where nevertheless significant improvements are possible, but they take quite a bit of time to figure out)

                            1. 4

                              Again, your point as stated above is much more reasonable and I agree with a lot of it. A few points I’d still push back on…

                              All the use cases it addresses are clear from the documentation, names and code structure, but only once you realize what they are.

                              I actually think not having the code itself reflect the use cases is a major problem. If you meant that there are exponentially many use cases that can be fulfilled by combinations of command-line flags, etc., even then the kinds of use cases being addressed should exist in the code, if only in comments. Leaving it as implicit information that other devs must puzzle out for themselves is unacceptable. If, after puzzling it out, you realize the code is a very elegant implementation, the code should still be faulted for making you puzzle it out yourself.

                              I would definitely call it ‘better’. Was the original code bad? I don’t think so. It was suboptimal, but on a scale of 0-10 it wasn’t below a 6.

                              I see no contradiction in saying simultaneously:

                              1. Writing the code this way was the correct decision at the time
                              2. This code is bad now

                              And 6 is pretty bad imo. Which doesn’t mean it must be fixed right now – that’s a decision with more inputs.

                              On this point we’re sort of arguing the semantics of “bad,” but I will say that I think, in general, better results come from having high standards for yourself and others than from “accepting that things are messy and imperfect.” I’m not saying that latter POV is never appropriate, but it is a very slippery slope, and I think it will more often lead to shoddy, lazy work than to good, balanced work that’s not overly burdened with perfectionism – which I think is the implicit claim of your argument.

                            2. 7

                              I think everybody’s conflating a lot of things:

                              1. What does this code do?
                              2. What problem is it supposed to solve?
                              3. What are all the intermediary pieces used to solve that?
                              4. What does it do that it shouldn’t? What should it do that it doesn’t?
                              5. How do I know any of this?

                              The difficulty in assembling the answers to these questions, especially coming in cold, might best be called “cognitive load”.

                              I think there are probably automated ways to measure cognitive load. There are also some obvious trade-offs. If I have a bash script with 20 lines of code that solves a problem and is valuable? So what if it takes 2 hours to understand? Isn’t that better than a 30k-line codebase that solves the same problem, where you spend two weeks thrashing around figuring out what it does and how to fix it?

                              Many coders would look at the bash example and balk. That’s a crazy amount of time to spend understanding just a few lines of code! But what they’re missing is that it’s not the line-of-code-count. That has nothing to do with anything. It’s the time it takes to understand everything you need to understand and provide value, no matter what the language or line count. Similarly, a huge codebase that reads like beautiful English might be a much worse situation. It could make you waste a lot of time reading and in the end give you the appearance of understanding these things without your actually understanding them.
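                              For what it’s worth, here’s a toy sketch of what an automated proxy could look like. This metric and its weights are invented purely for illustration (nothing here is an established measure): count the distinct identifiers a reader has to hold in mind, and charge extra for branch points:

```python
import re

# Keywords that open a branch the reader must mentally track. The set and
# the 2x weight below are arbitrary, chosen only for the illustration.
BRANCH_WORDS = {"if", "else", "elif", "for", "while",
                "try", "except", "case", "switch"}

def cognitive_load(code):
    """A made-up proxy metric: distinct identifiers + 2 * branch points."""
    tokens = re.findall(r"[A-Za-z_]\w*", code)
    identifiers = {t for t in tokens if t not in BRANCH_WORDS}
    branches = sum(1 for t in tokens if t in BRANCH_WORDS)
    return len(identifiers) + 2 * branches
```

                              A real version would need to capture non-local knowledge as well; the whole point of the bash example is that line count alone tells you nothing.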

                              1. 1

                                The title is also a bit… too simply put. Of course, when you “get” it, it’s simple. The whole problem is that some code is hard to “get”, and becomes a time sink.

                              1. 4

                                I don’t quite grasp what the DDD critique in the article is. It wanders from partially describing DDD to talking about MVC patterns. I suspect it confuses software design methodologies with software implementation methodologies.

                                Software design methodologies are interesting as a whole, and a critique of DDD would be interesting, but unfortunately this looks simply like clickbait.

                                1. 3

                                  It’s a good criticism: both yours and his.

                                  The way I looked at it was like this: so much of DDD is about how the software looks, what goes where. (Yes, there’s event storming and a few other methodology tricks, but the real thrust of it is program control by types.) This article is about process: how do you go about incrementally developing a large system over time?

                                  What this author groks about software development – which oddly enough may be illustrated by his essay itself – is that DDD is nice and pure. Real-world development is messy, with a bunch of stuff going on all over the place.

                                  There’s a lot of tension between the two theses for reasons I won’t go into here. I think the key thing to ask the DDD folks is “How does this incrementally grow over time?” and the key thing to ask authors like this one is “But what’s the best way to organize everything?”

                                1. 2

                                  Starting a project to cut down two rather large trees next to my house. It will involve lots of gear: line, anchors, straps, hand-winches, power saws, and so forth.

                                  I am completely unqualified for this. That’s why I’m doing it.

                                  1. 1

                                    I’m no fan of the amount of appealing to authority we do in our industry. People stake out areas and then folks think they own them. We constantly re-invent things already invented multiple times before, then brand them.

                                    So I agree with the author that Kay’s role can be misunderstood/misused.

                                    Who invented objects? I’d go with Plato, perhaps with a bit of Aristotle thrown in. These ideas have long, storied, and interesting histories. We do one another a disservice when we say “Well, X says that….”

                                    1. 2

                                      Then Dogen invented functions. He describes the process of immutable transformation in his Koan of firewood.

                                      Firewood becomes ash, and it does not become firewood again. Yet, do not suppose that the ash is future and the firewood past. You should understand that firewood abides in the phenomenal expression of firewood, which fully includes past and future and is independent of past and future. Ash abides in the phenomenal expression of ash, which fully includes future and past. Just as firewood does not become firewood again after it is ash, you do not return to birth after death.
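                                      The koan even reads as code: here’s a playful, purely illustrative Python sketch in which transformation is a pure function, so burn() never mutates its argument and the firewood value abides unchanged:

```python
def burn(firewood):
    """A pure function: returns a new ash value, never mutates firewood."""
    return {"state": "ash", "was": firewood["state"]}

firewood = {"state": "firewood"}
ash = burn(firewood)
# ash does not become firewood again; firewood itself is left untouched
```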

                                    1. 16

                                      “…The idea of strong opinions, loosely held is that you can make bombastic statements, and everyone should implicitly assume that you’ll happily change your mind in a heartbeat if new data suggests you are wrong…”

                                      No, it does not. It means that as a professional, you do not associate your passion for a technical topic with your self-worth. It’s okay to feel strongly about something without being a jerk about it. It’s permission to speak honestly and get feedback from others. If you do this, others may very well call you an asshole! They may vote you off the island. So you learn what’s important and what’s not. What’s worth having public passion over and what’s just being a jerk.

                                      You should have strong opinions about error handling, or customer service. You should speak up about your opinions so we might learn from you. You might have strong opinions about the pasta down at Luigi’s, but nobody cares. And we’ll tell you so. The requirement with strong opinions lightly held is that you have to agree to suck it up and do things you might not agree with. You can’t have one without the other.

                                      It’s just the opposite of what this author thinks it is. It’s the thing that over time prevents people from being jerks, not encourages them to.

                                      ADD: Does our industry have a problem with tech bros and people making bombastic statements? Most definitely. But I don’t think that’s related to SOLH. In fact, for every one of these bozos, I can show you five tech people that should be speaking up and are not. Those are the ones who cause real damage to an organization.

                                      1. 21

                                        You should have strong opinions about error handling, or customer service. You should speak up about your opinions so we might learn from you.

                                        There is a famous Bertrand Russell quote: “The whole problem with the world is that fools and fanatics are always so certain of themselves, and wise people so full of doubts.”

                                        It matches my experience exactly. The more I learn about complex topics like error handling or customer service, the more I realize that there is no “One True Way”™, and that it’s actually really complex and a series of trade-offs.

                                        Does our industry have a problem with tech bros and people making bombastic statements? Most definitely. But I don’t think that’s related to SOLH. In fact, for every one of these bozos, I can show you five tech people that should be speaking up and are not. Those are the ones who cause real damage to an organization.

                                        At least some people are not speaking up out of fear of, or exasperation at, being told that their viewpoints are “absurd” or “not sane”.

                                        I just tap out of discussions these days once someone starts acting like that, and I know that in a few instances it has caused real damage to the business. I’m not happy with that, but there is a limit to what I’m able to put up with before it starts affecting my mood to an unreasonable degree.

                                        1. 7

                                          Having strong opinions does not necessarily mean being obnoxious or loud. I don’t understand how we got to the point where people think that. I feel strongly about fishing in the rain. That doesn’t mean I ever yell about it. It’s an odd conflation of ideas, as if the demonstration were as important as the strength. I wouldn’t think that would be true at all.

                                          We have to self-correct. In order to self-correct, it is necessary to make a case for some path we recommend and then negotiate/argue/arm-wrestle as part of a decision-making process. Passion allows us to make our case. It does not have to involve yelling or being rude. We just have to care, to feel strongly. After all, we’re professionals. Why wouldn’t we feel strongly about various parts of our job? I’d argue we aren’t worth a bucket of warm spit if we don’t. I know I wouldn’t want to work with anybody who had no passion in our work.

                                          It’s such a weird confusion of ideas. Feeling strongly about something does not mean acting like a bozo. In fact, that’s pretty much acting childish.

                                          Yep, everything is complex and people who oversimplify and are too sure of themselves can be problems. It is also true that given incomplete and sometimes self-contradictory information, we are required to make choices. We should do what we can to make sure these are the best choices possible. Tapping out ain’t cutting it.

                                          1. 5

                                            What you’re describing doesn’t sound like a “strong opinion” to me, but rather just “an opinion”.

                                            Perhaps this is just a case of semantics, but the adjective “strong”, to me at least, means either “not likely to be convinced otherwise” or “obnoxious or loud”. These are indeed two very different things, but generally neither of them is very constructive, and neither really seems like what you’re describing.

                                            1. 1

                                              I thought “strong” was an adjective on your belief in the opinion. As in, you believe with all your heart that a particular option/action/way is the right one.

                                              1. 1

                                                If you’ve got extremely strongly-held priors, you’re liable to be hard to convince (and require a lot of evidence to budge). This is a problem because such priors are not typically based on reality. As a general guideline, if you want to be more right about things, you ought to be less sure about them.

                                          2. 2

                                            I agree. I am not going to have a shouting match with my coworkers. It’s not worth the aggravation.

                                            1. 1

                                              There is a famous Bertrand Russell quote: “The whole problem with the world is that fools and fanatics are always so certain of themselves, and wise people so full of doubts.”

                                              Well, I am personally full of doubts. Still, I would give some of these decision makers more credit. It is easy to criticize everything, since nobody knows anything of relevance with certainty (or you wouldn’t need to argue about it). In a tech context (and many others), it is often more effective to go with one reasonable choice, stick with it, and control doubt with overconfidence.

                                              1. 1

                                                I like the quote and agree with the spirit ( https://quoteinvestigator.com/2015/03/04/self-doubt/ ), but you are confusing strong opinions with overconfidence / inflated egos.

                                              2. 2

                                                There’s something valuable about SOLH, so long as the LH part is taken to mean both qualifying statements based on an honest and informed estimate of confidence & actually updating your priors. And, some folks who stand by SOLH (including, presumably, some of its popularizers) take it this way. I’ve seen too many folks who apply it in the way OP describes to believe that it isn’t being used as a justification for ultimately destructive behaviors, though, even if that application is based on a misunderstanding of what the original popularizers intended.

                                                OP’s suggestion of annotating statements with confidence levels (which is popular in the rationalist community for blog posts, & seems to have come from Robert Anton Wilson, who recommended it along with e-prime for avoiding common patterns of miscommunication) is a good one, because it rewards accurate estimates of confidence, providing a road for careful folks to gain social status over pundits & blowhards by raising a useful metric above being loud and contrarian (which, unless it’s paired with careful thought, introspection, and a rigorous and accurate estimate of one’s own confidence levels, usually ends up being equivalent to being annoying and wrong).

                                                Of course, this runs contrary to norms. We live in an environment where qualifiers are called ‘weasel words’ & no matter how much you signal your level of confidence, all those signals will be stripped away as you are judged on your conclusions as though your confidence were 100%. Furthermore, confidence is held in esteem before it can even be proven to be justified, so we cheer on demagogues for being bold as they lead us full speed ahead into obvious traps. In such an environment, people who can get away with avoiding being held to account are incentivized to sound very sure, and everybody else is incentivized to keep their mouths shut.

                                                1. 3

                                                  I think all of us are missing the point when it comes to passion, confidence, truth, and so on.

                                                  These are language games. Most of what teams do is language games. Put another way, everybody wants to do a “good job”. The rest of what we do is trying to come to common agreement on what the phrase “good job” means.

                                                  The certainty number isn’t awful. It just misses the point of what we’re trying to do. As arp242 pointed out, things are complex. What we’re looking for is the simplest question we can agree on that is important, testable, and whose answer we disagree on. That question might be something like “Switching to SONAR will result in 10% fewer bugs in production.” (I can’t give you a good example because it varies widely depending on the circumstances.)

                                                  To get to that pivot question, we have to take strong opinions about fuzzy things, then work our way towards being more reasonable about absolute things. This is the job of being a smart person who creates technology. A user comes in and says “I hate this! Make it stop sucking so badly!” and we work towards testable chunks of code.

                                                  It’s perfectly fine to respond with “What do you mean this sucks? This is awesome!” This is the beginning of that back-and-forth. Checking out is not an option. You could try to go the percentage route but then you’re not working towards better definitions of terms. Instead taking a strong position and then following it up with something like “Which parts are sucky?” takes the game forward a step.

                                                  “I like X!” vs. “I hate X!” are fine places to start. There’s passion there. Now add some technique and flexibility. If everybody is appropriately apathetic, you are in stasis. Not a good place.

                                                  1. 3

                                                    That makes sense when the origin of the decision-making process is passion, and when nobody comes to the situation with a nuanced understanding. In most professional situations, neither of these are the case: developers are working on things they don’t care about for users who see the application as a necessary evil, and one or two folks in the group have 30 years of professional experience to everybody else’s three months (along with complex nuanced and experience-backed takes that simply can’t be boiled down to a slogan). Three junior devs shouting irrelevant preferences doesn’t help in this situation, and because their nuance-free takes are low-information, they can be repeated over and over (and thus gain control over the thinking of everybody else). The person with the best chance of designing a usable system gets shut out of the discussion, because when takes are optimized for hotness nobody wants to read an essay.

                                                    This notional experienced dev has a greater justification for confidence in their general position, but will necessarily express lower confidence in any individual element, because they have experienced instances that provide doubt. Meanwhile, the junior devs will be more confident because they have never been contradicted. This confidence is not representative of justified confidence.

                                              1. 3

                                                Setting up a buildspec, some pipelines, and some sites for some friends I’m helping out.

                                                1. 1

                                                  Author here. This was a test of some new video gear. I think there’s some good content worth sharing, but you’re also welcome to ignore it if it is too bothersome. I am finding that my idea of how simple video publishing should be and the reality are two completely separate things.

                                                  1. 21

                                                    I’m with the author. I feel this pain – although in other areas.

                                                    It used to be that the kiss-of-death was to move from coding into management. It was fun at first, but pretty soon your skills got rusty. It didn’t matter that much because you didn’t use them.

                                                    Then one day they lay everybody off. Now the folks who can still code get jobs and the guys who only know management don’t.

                                                    What we’re seeing now is that it’s also happening with tech abstraction levels. I’ve been playing around with AWS for the last half-year or so and all their cool tools. It’s seductive as heck. I love it. When I first really understood what I could do using automation? I think my exact words were “I’m never going to set up or manage another server again!”

                                                    It just didn’t make sense.

                                                    Until it does. Now I’m setting some servers up because I have some needs that are not a good fit for AWS. (Long story)

                                                    I believe the key question is this: are you ever going to have to go back? Are you sure? Are you willing to bet your career on it?

                                                    I’ll never go back to driver-level C and complex C++ stuff. As a 50-ish coder, it requires way more patience and focus than I have right now. More to the point, I haven’t needed to go back in 15-20 years or more. That seems about right. Now HTML? Javascript? I read somebody the other day making fun of people who – gasp! – sometimes hand-code HTML or use jQuery.

                                                    My dudes. That’s messed up. I don’t care what kind of coolness you’re using to crank out web apps, I bet you 20 bucks that at some point you’re going to have to know what’s going on at the page level in the same way the browser does. That’s one you can’t get rid of yet.
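                                                    For anyone who has only seen jQuery from the outside, here is a minimal sketch of what a chainable wrapper does under the hood. This is not an endorsement of reimplementing jQuery; it just shows that the chainable API is sugar over plain loops and property writes. Plain objects stand in for DOM nodes (purely an illustration device) so it runs outside a browser:

```javascript
// A minimal sketch of a jQuery-style chainable wrapper.
// Real jQuery operates on live DOM nodes; here plain objects
// with className/style properties stand in for elements.
function $(elements) {
  return {
    addClass(name) {
      for (const el of elements) {
        const classes = el.className ? el.className.split(" ") : [];
        if (!classes.includes(name)) classes.push(name);
        el.className = classes.join(" ");
      }
      return this; // returning `this` is what makes chaining work
    },
    css(prop, value) {
      for (const el of elements) el.style[prop] = value;
      return this;
    },
  };
}

// Usage: each chained call is just a loop plus property writes.
const els = [{ className: "", style: {} }, { className: "old", style: {} }];
$(els).addClass("active").css("display", "none");
console.log(els[0].className); // "active"
console.log(els[1].className); // "old active"
```

Knowing that this is all that’s happening at the page level is exactly the kind of knowledge that survives framework churn.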

                                                    To me it gets a little iffy with things like desktop GUI apps. Would I write one again? Probably not, but I’m not completely sure. There are no easy answers to this problem. It’s a judgment call.

                                                    1. 8

                                                      Then one day they lay everybody off. Now the folks who can still code get jobs and the guys who only know management don’t.

                                                      I believe the key question is this: are you ever going to have to go back? Are you sure? Are you willing to bet your career on it?

                                                      This is exactly why I’m not willing to bet my career on tools from “Vendor X”.

                                                      I try to learn mostly (>80%) fundamentals, because one day, your tools will fail you and/or your platform will become obsolete and there will be no-one around to hold your hand and help you out. Most of the time, this happens because of things which are outside of your sphere of influence.

                                                      However: the knowledge I gather is completely within my sphere of influence, so I’m going to make sure that more than 80% of it is the kind of knowledge that will last for at least another 40 years.

                                                      1. 6


                                                        There was a bit of a golden age where the cool-kids stuff was also the next stuff. So you could hang with the cool kids, pull down some good money consulting, then move on to whatever the next thing was. Back in the 90s I made the call to stick with Microsoft for a while.

                                                        Those days are long gone. Now you have parasites that just do the cool-kids stuff – and usually leave messes for others as they transition on to the next toy/gig – and us luddites (kidding), trying to be careful and wise about where we invest our time.

                                                        I could list a dozen techs that were super cool and awesome. Each of them had tens of thousands of developers. All gone.


                                                        1. 2

                                                          The solid irony is that some of the oldest dinosaur languages around are still going strong. C/C++, Java, SQL, PHP and R live on, while many others came, went and died in the meantime.

                                                          The languages that are still being used and have survived the test of time are the ones that all had clear application domains.

                                                          • C/C++ are mostly systems languages, used because of their fine-grained control over hardware, and therefore over the performance of the applications built with them.
                                                          • Java’s purpose is to write a robust application that will “just work” on any system.
                                                          • SQL’s application domain consists of large structured databases, which often contain mission-critical information. Because of this, it cannot be easily replaced.
                                                          • PHP’s niche is to power just about any web application that does not require long-lived computations.
                                                          • R is the go-to language for scientists and data analysts, who might need all kinds of exotic algorithms based on all kinds of mathematical models.

                                                          However, the verdict is still out on Python for me. It appears to be the world’s number one “glue” language, used to tie all kinds of systems together in a way that was previously done by shell scripts. The problem with “glue languages” like Python is that they can also be easily replaced.

                                                          And frankly, I think that most of the Javascript frameworks and libraries in use nowadays will die in 10-20 years, because Javascript is missing a clear focus outside of the web-browser ecosystem and the performance is still horrible.

                                                          I still have a very vivid memory of what Microsoft did to XNA and what Adobe did to Flash, which will forever deter me away from learning and writing user-interfaces with anything besides the most basic tools.

                                                          1. 2

                                                            I really feel for my friends that went down the XNA path. That had to have hurt.

                                                            At some point, programming tools and languages stopped being about making you a better programmer and started being about selling programmers things they think are cool. Perhaps initially it was the same thing, but it drifted over time.

                                                            I was really surprised at Linux and text-based tools sticking around. Like many others, I thought as soon as we had graphical operating systems and tools we would just continue heading down this “looking cool” path, eventually ending up with something like HAL 9000.

                                                            As it turns out, there is one unambiguous way to do X using text and a certain toolset. You learn that, you’re able to do X 10, 20, 30 years from now. Using any sort of contextual interface: GUI or voice, for instance, there’s more of a UX-ish path you have to go down in order to establish context for the underlying tools. With each new version of the OS/toolset, that can change! And they’ll change it up just to make things look cool. So last week it was this seven click thing until you found a checkbox. This week you hold your left arm just like so, stomp your feet, and make a cat’s meow. You end up burning brain cycles learning to do crap you already knew how to do.

                                                            I think most everybody’s figured that out by now. I hope so.

                                                            1. 2

                                                              I think most everybody’s figured that out by now. I hope so.

                                                              Sadly this is not the case, but it is why I love the standard Unix toolset.

                                                    1. 2

                                                      Reposting here what I said on Tildes, because it’s relevant to the discussion.

                                                      This article has good historical context and presents an interesting case, but I have to say, the title and the conclusion are both representative of a very problematic assumption that underlies a lot of wrongheaded actions and opinions we see in society and even in government.

                                                      “Using a phone” is not a meaningful activity. The computer, handheld, laptop, or desktop, is a tool to do something. If that something is pressing the Skinner-box-like Facebook, Twitter, Instagram, etc. refresh button, sure, that’s probably not good, but I can’t see any parent or psychologist in good faith saying that the literal hundreds of hours I spent playing Kerbal Space Program in high school had “a damaging – and perhaps permanent – effect on [my] developing brain”. It taught me calculus, and some orbital mechanics and aerodynamics, which I’m currently having formalized in college.

                                                      Confusingly, the article makes this point and then retreats from it, with the flippant assertion that “World of Warcraft beats Wikipedia hands down.”

                                                      Really? Ever been stuck in a Wikipedia rabbit hole? The same thing used to happen to me with my parents’ 1980s Encyclopedia Britannica before I was ever allowed to use a computer. That stuff is just interesting.

                                                      The problem is not instantaneous mass communication. The problem is that large companies are harnessing instantaneous mass communication to fuck people over. Stop using corporate social media and the problem disappears.

                                                      1. 3

                                                        The problem only disappears if critical mass (adoption) is somehow reversed. I can quit social media, but I can’t avoid it when dealing with other people. The mass adoption and inextricable integration into daily life is the part that changes the equation from a matter of personal taste to a matter of ecology.

                                                        1. 1

                                                          I think we’re talking about different problems. If the problem is “technology addiction”, you absolutely can avoid addiction while using corporate social media when it’s absolutely necessary, just as I took Tramadol after surgery and never became addicted.

                                                          1. 3

                                                            If we parse this finely, the problem per se is not even tech addiction, but the observed negative results thereof. These negative results may only be observed in a specific formulation of tech addiction; they may not even be causally linked to tech addiction. That’s why I insist on pulling the context (read: bigger picture, including lateral factors such as other people’s behavior, which I can’t control) into any discussion of individual tech addiction. We don’t get addicted in a vacuum.

                                                            1. 1

                                                              We probably need a good working definition of “addiction”. I believe I heard somewhere that addiction is doing something repeatedly that you enjoy at the time but later feel was a bad thing. An identifying pattern is deciding not to do it again, or to cut back, then doing it again anyway. So it’s not just “wasting time playing games”. It’s more like “waking up at 30 and realizing that you tried over and over again to stop playing games so you could go to college, but you were never able to.”

                                                              Based on this definition, you could be plugged in 24/7 to tech your entire life and not be addicted. It’s not the tech. I don’t think it’s broken people – or people who make bad choices. Especially with AI getting involved, even if AI is way oversold, the system will optimize around keeping you plugged in. You begin to lose agency.

                                                              We don’t get addicted in a vacuum.

                                                              It’s informative to look at command centers in old sci-fi movies and early battleships. People did complex things (supposedly) involving lots of tech… but the audience and the other crew members could observe what’s going on from 30 feet away. It allowed for a social connection and some cross-checking.

                                                        2. 1

                                                          You are correct it is a tool. It’s a wonderful and amazing tool. We’re lucky to have it.

                                                          Stop using corporate social media and the problem disappears.

                                                          This sounds like over-generalized BS to me. That’s funny, since it seems to be the same point you were making about my article. (grin)

                                                          There’s a thought experiment to be done here which clears things up. Imagine a world with no social media. No BigCorp tech fucking people over. Everything online is something that’s good-for-you and edifying. (Also pretend that the phrase “good for you” has some meaningful and not vague meaning)

                                                          That’s a lot of pretending! But we can do that. It may be easier for me since I saw the entire thing evolve from nothing.

                                                          Gamification doesn’t go anywhere. Optimizing for site stickiness doesn’t go anywhere. The dopamine-click-reward response doesn’t go anywhere. Multi-function devices don’t go anywhere.

                                                          There was a reason I went back to Beethoven to start this – and it’s not that life was somehow the halcyon salad days of yore. I wanted to start in a place that had very, very little in the way of social manipulation. That’s it. If I could have gone back further reliably, I would have.

                                                          Robert Greenberg makes the case in several of his books that the act of signing a composer’s name to a piece of music is what made music evolve. The case is also made by others. I agree. As soon as anybody started creating anything and sticking their name on it, they started using that content to manipulate consumers. Doesn’t matter if it’s music, sculpture, or my stupid jokes on FB. We create things for various reasons we have and we look for feedback to hone how we create those things. Over long periods of time this becomes manipulative. If the selection criteria for artists creating any material is audience interest? We figure out how to get audiences interested and keep them.

                                                          Even if the net were full of orbital mechanics, you’d just end up with blogspam posts “Top Ten Reasons Uranus Looks Bad! You won’t believe #3!” We compete. Even science writers compete. You can change the venue of that competition, but it never goes away. Wikipedia might be a good example of how to create and maintain content without all of that manipulation. But I don’t want an internet that all looks like Wikipedia. Do you?

                                                          People have their own political drums they like banging on. Big government, BigCorps, Social Media, evil clowns. I’m happy with that. You guys carry on and have fun with it. Just don’t pick up the drum you like banging on and think it’ll solve this problem. This is a systemic problem. Systemic problems don’t have good or bad guys, and they don’t tend to react well to intricate manipulation. Screwing around with systemic problems without understanding the feedback mechanisms involved just makes things more complex while the actors all route around the complexity introduced. I’m happy with whatever political solution folks want – but this evolutionary process started centuries ago. Mark Zuckerberg or whoever really don’t have a lot to do with it. They’re just the lucky folks that stepped in at the right moment with the right product.

                                                          1. 2

                                                            There’s a thought experiment to be done here which clears things up. Imagine a world with no social media. No BigCorp tech fucking people over. Everything online is something that’s good-for-you and edifying. (Also pretend that the phrase “good for you” has some meaningful and not vague meaning)

                                                            I spend a lot of time trying to push myself, my personal world, my “filter bubble” if you will, in this direction. I use uMatrix, uBlock Origin, PawBlock, et cetera to close off large sections of the web for myself. You say none of the other problems go anywhere, but you’re wrong.

                                                            When you use Facebook only to communicate with people you don’t have other contact info for, and Reddit only to post and reply to support questions or disseminate and discuss blog posts and articles, the Web becomes a lot more like it was built to be: a document platform.

                                                            We compete. Even science writers compete. You can change the venue of that competition, but it never goes away.

                                                            I didn’t suggest that. Competition can be healthy when it’s not tied to survival or critical to self-esteem.

                                                            Wikipedia might be a good example of how to create and maintain content without all of that manipulation. But I don’t want an internet that all looks like Wikipedia. Do you?

                                                            That sounds dope. Wikipedia, arXiv, high-quality blogs, and PhpBB-style fora are the best part of the Web.

                                                            In a certain sense, this is a really good point, and you’re right: corporate social media is a useful proxy for capitalism that doesn’t scare people off when you talk about it. The real solution to the problems you very astutely identify as the underlying causes of this addictive nature (gamification, optimizing for stickiness, etc) is to build our software as far outside of the constraints of capitalism as we can.

                                                        1. 13

                                                          I’ve been reading Ivan Illich’s book Tools for Conviviality (from 1973). It’s about how the progress of technology, while appearing to empower us, actually sucks away our autonomy and makes us dependents of industrial processes, monopolistic supply chains, etc. He imagines a different kind of technological progress that would instead treat human autonomy as a primary value.

                                                          That summary might be problematic; it doesn’t feel exactly right. He thinks of our world of cars, highways, huge hospitals, credentialed experts, compulsory schooling, factories, etc, as one that steamrolls over the dignity and creativity of human beings and the fabric of community. This seems true even though we also benefit from industry. So he wants to imagine a future that reverses this, where people in communities have a more widespread real knowledge of how things work, where making and repairing are part of everyday life, and so on.

                                                          People need not only to obtain things, they need above all the freedom to make things among which they can live, to give shape to them according to their own tastes, and to put them to use in caring for and about others.

                                                          I choose the term “conviviality” to designate the opposite of industrial productivity. I intend it to mean autonomous and creative intercourse among persons, and the intercourse of persons with their environment; and this in contrast with the conditioned response of persons to the demands made upon them by others, and by a man-made environment

                                                          The book was influential in the development of the personal computer, especially through Lee Felsenstein. Alan Kay seems to have had a similar perspective, and you also see it in old school “bicycle for the mind” Apple. But something changed, I guess. Maybe mostly with the combination of 3G smartphones and mainstream social media.

                                                          In this light it seems to me like if our gadgets are like opiates, they are kind of like the “sigh of the oppressed creature, the heart of a heartless world, and the soul of soulless conditions … the opium of the people.” The world around us is boring—school is boring, work is boring, politics are boring, traffic is boring, apartments are boring, neighborhoods are boring, shopping is boring—so we scroll through funny weird novel stuff on the internet.

                                                          But internet devices aren’t one-dimensional nihilistic black holes like heroin. They do open up to a wide world of amazing stuff. We can’t pretend that everyone is just severely addicted with no escape. We do communicate, learn, and create.

                                                          I think these gadgets are going to be with us in the future, and I think industrial/postindustrial alienation is going to steam on through sheer power and scale—so we have to think about how to make our own spaces for conviviality, weave webs of learning and caring, find the good potentials within technology, and generally just work to strengthen ourselves, our friendships, and our communities…

                                                          Right now I’m using the web to learn about woodworking, restoration, and some other DIY stuff with the goal of making stuff for myself and my family as well as contribute to local venues and community spaces, especially by doing things in ways that are affordable, safe, and fun. Watching, say, Paul Sellers on making mortise and tenon joints is the opposite of doing heroin.

                                                          1. 7

                                                            Wow. What a great comment. Thank you.

                                                            People seem to get hung up on the heroin analogy, but opiates in general are a very good thing! Holy cow, think of the kinds of surgery we can do now that we couldn’t 200 years ago, or the people who use them to manage chronic pain.

                                                            The reason the heroin analogy works, in my opinion, is because it’s an external factor entering society rather recently that both empowers us to do a lot of things we couldn’t do before and has a tremendous downside potential that many would like to ignore. I’m a drug legalization guy, so I’m definitely not looking to go back to the dark ages. But you deal with these tremendously powerful new things by being honest. Then you educate. Then people start forming more complex relationships with these things. That’s what we want. That’s the goal: enough awareness and education that a new generation has a nuanced and healthy relationship with tech, probably through mores, religion, hero-worship, whatever works for each person.

                                                            I feel really bad that ten years have passed and I don’t have a lot of good answers, bad both as an author and as a fellow consumer and tech-lover. Working alone, tech is my gateway to everything, and I’m a coder, a writer, a game player. I feel the pull of tech quite strongly. I have been planning and saying I’m going to quit FB for a year now, and I find it impossible to do. Rather, I find it easy to say I’m quitting; I’ll even delete it from a bunch of places. Then somehow it creeps right back into my life. That sucks. I have multiple friends with grown children in their 20s who tell me they are quite afraid that their kids are not developing normally because they never physically get out and interact with the rest of the world. But why should they? As you point out, life is pretty boring compared to the universes we can make in tech.

                                                            Personally, I feel there’s a physicality piece that’s missing, whether you leave the basement or not. Multi-function devices with alerts on them are built, by construction, to accumulate various tech trinkets that keep our attention and stave off the dreaded boredom and ennui that come with intelligence, sentience, and the existential crisis. Perhaps single-function devices, or devices with only one app and with notifications turned off, could provide a tactile and physical feedback mechanism, both for ourselves and for others, as to where we’re spending our time. It might help us figure out which things work for us and which simply waste our time. When you see a loved one with their head in their phone all day long, you don’t know if they’re learning woodworking or clicking on cows. This kind of observation of ourselves and the people we care about seems to be what we’ve naturally used to provide enough feedback to self-correct. But I’m only guessing.

                                                            It is a difficult and thorny problem with lots of edge cases.

                                                            1. 3

                                                              I think the heroin analogy is pretty good actually, because tech, just like heroin and morphine can be an enormous enhancement to our lives if used properly.

                                                              However, what is and what isn’t proper use will probably have to be worked out by legislation, just like we have heavily legislated other new “technologies”: cars, aeroplanes, heroin, sex, gambling and pre-packaged food items.

                                                              One could argue that all of those are damaging to society, but all of those are also necessary to some extent.

                                                              Maybe it’s that duality which should define what it means to be a human being.

                                                              1. 4

                                                                I am generally against regulation. The feedback loop is too long, which matters especially in situations like this where conditions are changing so quickly. It tends to turn into a whack-a-mole game based on politics. Not an optimum solution. Then there’s regulatory capture and a bunch of other stuff that leads to bad outcomes.

                                                                Having said that, I think we are at a point where something’s gotta give somewhere. Regulation may be a blunt and wasteful tool, but any tool beats no tool at all.

                                                                In this kind of situation, what I look for is the minimum amount of change that can have the greatest impact while still allowing the system to evolve. Perhaps that’s something along the lines of “Tech providers are forbidden to collect and store personal information beyond anything the user has explicitly and physically agreed to.” The user has to see the information, acknowledge it, push a button, and confirm, much like the two-factor confirmation that many providers are already doing with email. Blanket acceptance of onerous TOS that give apps the ability to track you like a lab rat is only leading to worse and worse outcomes, and no matter how much you trust app A, once that data is recorded it’s going to end up everywhere. It’s just a matter of time.

                                                                I don’t know. I know the data collection we’re doing is enabling this kind of rapid-feedback adaptation, and I’m fine with the system evolving over time. But because of the tools we’re using, the system is now evolving far, far faster than we can track, much less come to terms with. We need some explicit inspection and feedback loop in there somewhere. This seems like the smallest change that could have the largest impact.

                                                                But like I said, I’m just guessing.

                                                                1. 1

                                                                  My reply was in the context of whether or not the analogy was sound to describe the problem (which I think it is) and I stated further that we have always solved these problems with regulation.

                                                                  But I’ll take the time to reply anyway: You argue that “something’s gotta give” and I totally agree with you on that one. In fact, I’ll even state that “one of these days a developer like one of us, is going to do something that will get thousands of people killed”.

                                                                  At the time of writing, Boeing seems to have made a good start on that one with the 737 MAX, but keep in mind that I actually attach a dual meaning to that statement. To understand that meaning, you’ll have to firmly grasp the fact that the road to ruin is paved with good intentions, that you can be held accountable for your actions, and that you can also be held accountable for inaction.

                                                                  Just like Boeing’s 737 MAX, heroin, cars, gambling, pre-packaged food and information technology were also invented and sold with mostly good intentions. In all those cases, the organizations involved acted in good faith, while closing their eyes to the consequences their actions might have further on down the line.

                                                                  You should also keep in mind that endless spying, data-harvesting and keeping people endlessly entertained through addiction to dopamine stimuli have all happened multiple times before (see East Germany, casinos and Dutch law regarding slot machines). Each and every time, it turned out that the organizations involved could not be expected to keep their activities within reasonable bounds on their own. They basically acted like paperclip maximizers, each and every time.

                                                                  If you ask me for my personal opinion, I would tell you that I think that “something has already given” and that we are now running an experiment to see how much further we are willing to let this go on. I also think that history has shown us that “maximum effect with minimal means” methodologies usually do not work. To me (and not just me, but many other Europeans as well), social media has become “just the next slot machine” and an average smartphone is “just the next listening device” that does not add anything of value.

                                                                  It would be a wholly different story if the technology actually added some value to our lives, but right now it is disproportionately taking value in the form of time, privacy, freedom and money. As a consequence, many people in Europe are developing an anti-American, and especially an anti-Silicon-Valley, attitude.

                                                                  If I have to summarize this into one sentence it would be: “Great that my device recommends me better music, TV-programs, games or other apps, but if that comes at the cost of never being able to have a guaranteed private conversation in a world where a single mistake can follow me for the rest of my life, I think I’ll pass on that.”

                                                                  Unfortunately it has become nearly impossible to “pass on that”, and therefore I think that legislation is not only unavoidable, but also the only viable option. The times when you could “move fast and break things” are slowly coming to an end, but it might take yet another decade before the natural progression puts a stop to things. Legislators are usually slow and use blunt tools, but I don’t think they are going to wait that long this time around before they use their blunt tools, especially because it is all stuff we’ve seen before, only now combined into a new package.

                                                                  1. 2

                                                                    Yep. I’m not aware of anything we disagree on – aside maybe from what legislation should look like. I’m willing to bet a large sum of money that the more complex that legislation looks, the more it will enable the big players to stay in the game and use it as an opportunity to prevent anybody else from coming along later. Usually, once you secure a monopoly, you’re the first person in line to ask for lots of controls to “help protect” people.

                                                                    1. 2

                                                                      Not necessarily. The big players will stay in business for a long time, but if someone discovers a better ranking algorithm for webpages, that, in combination with a few bad bets (like neural networks, TensorFlow and online advertising), will be it for Google’s main product.

                                                                      For example: Apple’s Spotlight feature on the Mac and iPhone is a much bigger threat to Google than they would like to admit. If another search provider has better results, Apple will quietly mix them in with Google’s search results without significant hesitation, and the users will never see Google’s front page again. They’ll probably throw in some privacy features too.

                                                                      As for examples of the bad bets: at this rate of progression, in a few years the web will be unusable because of all the advertisements constantly getting in your way. The ad-blocker guys are not going to stop. They’ve even built an entire browser based on Google’s own code, and even non-tech-savvy people are finding their way to methods of blocking ads. Even Mozilla has integrated a decent ad blocker into recent versions of Firefox! The heyday of online advertising is over, as people discover that they actually have to get stuff done from time to time.

                                                                      As for why TensorFlow and neural networks might be a bad bet, I quote one of my old professors: “If neural networks or some other AI-thingy works, that’s great, but it usually means that there is something else going on as well. Most of the time you can get the same results with domain knowledge, other metaheuristics (searches, MIPs, etc.) or a combination of both.”

                                                                      Another significant problem is that we usually can’t properly explain what those models do. This works for a while in a bubble-like episode where nobody asks questions. Right now money is plentiful, and machine learning and online advertising are hyped, just like e-commerce in the ’90s or blockchain right now. But when money is not flowing as richly as it is today, a lot of accountants will start to evaluate the cost/income flows of their advertising campaigns and wonder why their sales are declining. At that point they will demand an explanation for why their ads are being served to users who are “not in the mood to buy” or on irrelevant pages.

                                                                      It doesn’t matter how much regulation is pushed through in favour of the giants, because there is no amount of regulation that could shield them from the market forces I have described above… And I don’t see why any government other than the US would force their citizens to search the internet through just a few providers… But I do see governments keeping their citizens away from the few big ones they don’t like.

                                                              2. 2

                                                                Personally, I feel there’s a physicality piece that’s missing, whether you leave the basement or not.

                                                                That reminds me of another book, The Spell of the Sensuous by David Abram from 1996; here’s a quote that connects back to the concept of conviviality:

                                                                Caught up in a mass of abstractions, our attention hypnotized by a host of human-made technologies that only reflect us back to ourselves, it is all too easy for us to forget our carnal inherence in a more-than-human matrix of sensations and sensibilities. Our bodies have formed themselves in delicate reciprocity with the manifold textures, sounds, and shapes of an animate earth – our eyes have evolved in subtle interaction with other eyes, as our ears are attuned by their very structure to the howling of wolves and the honking of geese. To shut ourselves off from these other voices, to continue by our lifestyles to condemn these other sensibilities to the oblivion of extinction, is to rob our own senses of their integrity, and to rob our minds of their coherence. We are human only in contact, and conviviality, with what is not human.


                                                                Humans, like other animals, are shaped by the places they inhabit, both individually and collectively. Our bodily rhythms, our moods, cycles of creativity and stillness, even our thoughts are readily engaged and influenced by seasonal patterns in the land. Yet our organic attunement to the local earth is thwarted by our ever-increasing intercourse with our own signs. Transfixed by our technologies, we short-circuit the sensorial reciprocity between our breathing bodies and the bodily terrain. Human awareness folds in upon itself, and the senses – once the crucial site of our engagement with the wild and animate earth – become mere adjuncts of an isolate and abstract mind bent on overcoming an organic reality that now seems disturbingly aloof and arbitrary.

                                                                And as always I think about Christopher Alexander’s preface to Patterns of Software (Richard Gabriel’s book), where he asks us to consider whether a computer program can make a person feel helped “on the same level that they are helped by horses, and roses, and a crackling fire”—not to say it can’t, but to encourage that aspiration.