1.  

    Of the several Brainfuck implementations I have read, only one wasn’t purely interpreted. Some don’t use ASTs. Brainfuck is “popular enough” but you probably meant “popular enough and practical enough” :)

    1.  

      “You don’t need a blockchain” is the new “You need a block chain” article.

      1.  

        In the ~6 years since I started seeing Bitcoin as more than just a cryptocurrency, I’ve lost track of the number of organizations and people I’ve talked out of a blockchain solution.

        My entry-level question when someone tells me that they want to start their own cryptocurrency:

        Are you OK with people buying drugs with it?

        If the answer is No, then a cryptocurrency is not their solution. Not because people will buy drugs with it, but because if the creators can control who buys what with their currency, then it’s not a cryptocurrency.

      1. 12

        The Go project is absolutely fascinating to me.

        How they managed to leave many hard problems of a language, its tooling, and its production workflow unsolved, yet solve just the right set of them to win a huge amount of developer mindshare, is something I think we should get historians to look into.

        I used Go professionally for ~2+ years, and so much of it was frustrating to me, but large swaths of our team found it largely pleasant.

        1. 12

          I’d guess there is a factor depending on what you want from a language. Sure, it doesn’t have generics, and its versioning system leaves a lot to be wished for. But personally, if I have to write anything with networking and concurrency, my first choice is usually Go, because of its very nice standard library and a certain sense of being thought-through when it comes to concurrency/parallelism - at least that’s how it appears when comparing it to other imperative languages like Java, C or Python. Another popular point is how the language, compared to C-ish languages, doesn’t give you too much freedom when it comes to formatting – there isn’t a constant drive to use as few characters as possible (something I’m very prone to doing), or any debates like tabs vs. spaces, where to place the opening braces, etc. There’s really something relieving about this to me, that makes the language, as you put it, “pleasant” to use (even if you might not agree with it).

          And regarding the standard library, one thing I always find interesting is how far you can get by just using what’s already packaged in Go itself. Now, I haven’t really worked on anything with more than 1500 LOC (which really isn’t much for Go), and most of the external packages I used were for the sake of convenience. Maybe this totally changes when you work in big teams or on big projects, but it is something I can understand people liking. Especially considering that the Go team has its Go 1.x compatibility promise, so you don’t have to worry that much about versioning when it comes to the standard library packages.

          I guess the worst mistake one can make is wanting to treat it like Haskell or Python, forcing a different paradigm onto it. Just as one might miss macros when changing from C to Java, or currying when switching from Haskell to Python, but learns to accept these things and think differently, so, I believe, one should approach Go: using its strengths, which it has, instead of lamenting its weaknesses (which undoubtedly exist too).

          1. 7

            I think their driving philosophy is that if you’re uncertain of something, always make the simpler choice. You sometimes go down wrong paths following this, but I’d say that in general this is a winning strategy. Complexity can always be bolted on later, but removing it is much more difficult.

            The whole IT industry would be a happier place if it followed this, but seems to me that we usually do the exact opposite.

            1.  

              I think their driving philosophy is that if you’re uncertain of something, always make the simpler choice.

              Nah - versioning & dependency management are not some new thing they couldn’t possibly understand until they waited 8 years. Same with generics.

              Whereas for generics I can understand a complexity argument for sure, versioning and dependency management are complexities everyone needed to deal with either way.

              1.  

                If you understand the complexity argument for generics, then I think you could accept it for dependency management too. For example, Python, Ruby and JavaScript have a chaotic history in terms of the solutions they adopted for dependency management, and even nowadays the ecosystem is not fully stabilized. In the JavaScript community, for example, Facebook released yarn in October 2016 because the existing tooling was not adequate, and more and more developers have adopted it since then. I would not say that dependency management is a fully solved problem.

                1.  

                  I would not say that dependency management is a fully solved problem.

                  Yes it is, the answer is pinning all dependencies, including transitive dependencies. All this other stuff is just heuristics that end up failing later on and people end up pinning anyways.
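
                  As an illustration (using Python packaging as an example; the package names and version numbers here are invented for the sake of the sketch), fully pinning means the lockfile lists every package, direct and transitive, at an exact version, so a build has nothing left to resolve:

```
# requirements.txt - every dependency, direct and transitive, exactly pinned
requests==2.19.1        # direct dependency
urllib3==1.23           # transitive: pulled in by requests
idna==2.7               # transitive
certifi==2018.4.16      # transitive
```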

                  1.  

                    I agree about pinning. By the way, this is what vgo does. But what about the resolution algorithm used to add/upgrade/downgrade dependencies? Pinning doesn’t help with this. This is what makes Minimal Version Selection, the strategy adopted by vgo, original and interesting.

                    1.  

                      I’m not sure I understand what the selection algorithm is doing then. From my experience: you change the pin, run your tests, if it passes, you’re good, if not, you fix code or decide not to change the version. What is MVS doing for this process?

                      1.  

                        When you upgrade a dependency that has transitive dependencies, changing the pin of the upgraded dependency is not enough. Quite often, you also have to update the pins of the transitive dependencies, which can have an impact on the whole program. When your project is large, this can be difficult to do manually. The Minimal Version Selection algorithm offers a new solution to this problem: it selects the oldest allowed version of each module, which eliminates the redundancy of having two different files (manifest and lock) that both specify which module versions to use.
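
                        For the curious, here is a toy sketch of the idea (not vgo’s actual code; the module names and integer versions are invented, and real vgo uses semantic versions): each module states the minimum version it needs of each dependency, and MVS picks, for every dependency, the maximum of those stated minimums - the oldest version that satisfies everyone.

```python
# Toy sketch of Minimal Version Selection. Each (module, version) pair
# lists the *minimum* version it requires of each of its dependencies.
# MVS walks the requirement graph and, for every dependency, keeps the
# maximum of the stated minimums.

def mvs(requirements, root):
    selected = {}                     # dependency name -> chosen version
    stack = [root]                    # (module, version) pairs to visit
    while stack:
        mod = stack.pop()
        for dep, min_ver in requirements.get(mod, {}).items():
            if selected.get(dep, 0) < min_ver:
                selected[dep] = min_ver
                stack.append((dep, min_ver))
    return selected

reqs = {
    ("app", 1): {"net": 2, "log": 1},
    ("net", 2): {"log": 3},           # net v2 raises the minimum for log
}
print(mvs(reqs, ("app", 1)))          # {'net': 2, 'log': 3}
```

                        The result is deterministic from the requirements alone, which is why no separate lock file is needed.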

                        1.  

                          Unless it wasn’t clear in my original comment, when I say pin dependencies I am referring to pinning all dependencies, including transitive dependencies. So is MVS applied during build or is it a curation tool to help discover the correct pin?

                          1.  

                            I’m not sure I understand your question. MVS is an algorithm that selects a version for each dependency in a project, according to a given set of constraints. The vgo tool runs the MVS algorithm before a build, when a dependency has been added/upgraded/downgraded/removed. If you have the time, I suggest you read Russ Cox’s article, because it’s difficult to summarize in a comment ;-)

                            1.  

                              I am saying that with pinned dependencies, no algorithm is needed at build time, as there is nothing to compute: every dependency version is known a priori.

                              1.  

                                I agree with this.

            2. 4

              I had a similar experience with Elm. In my case, it seemed like some people weren’t in the habit of questioning the language or thinking critically about their experience. For example, debugging in Elm is very limited. Some people I worked with came to like the language less for this reason. Others simply discounted their need for better debugging. I guess this made the reality easier to accept. It seemed easiest for people whose identities were tied to the language, who identified as elm programmers or elm community members. Denying personal needs was an act of loyalty.

              1.  

                How they managed to leave many hard problems of a language, its tooling, and its production workflow unsolved, yet solve just the right set of them to win a huge amount of developer mindshare, is something I think we should get historians to look into.

                I think you’ll find they already have!

              1. 7

                This is a mess.

                • Much of the technical complexity of the web has been generated by web designers who refuse to understand and accept the constraints of the medium. Overhauling the design when the implementation becomes intolerably complex is only an option when you are the designer. This luxury is unavailable to many people who build websites.
                • Suggesting that CSS grid is somehow the reincarnation of table-based layout is astonishingly simple-minded. Yes, both enable grid-based design. CSS grid achieves this without corrupting the semantic quality of the document. They’re both solutions to the same problem. But there are obvious and significant differences between how they solve that problem. It’s hard to fathom how the author misses that point.
                • The fetishization of unminified code distribution is really bizarre. The notion that developers should ship uncompressed code so that other developers can read that code is bewildering. Developers should make technical choices that benefit the user. Code compression, by reducing the bandwidth and time required to load the webpage, is very easily understood as a choice for the user. The author seems to prioritize reliving a romanticized moment in his adolescence when he learned to build websites by reading the code of websites he visited. It’s hard not to feel contempt for someone who would prioritize nostalgia over the needs of someone trying to load a page from their phone over a poor connection so they can access essential information like a business address or phone number.
                • New information always appears more complex than old information when it requires updates to a mental model. This doesn’t mean that the updated model is objectively more complex. It might be more complex. It might not be. The author offers no data that quantifies an increased complexity. What he does offer is a description of the distress felt by people who resist updating their mental model in response to new information. Whether or not his conclusions are correct, I find here more bias than observation.
                1. 8

                  CSS grid achieves this without corrupting the semantic quality of the document.

                  When was the last time you saw a page that follows semantic guidelines? Pages are so full of crap and dynamically generated tags that hope was lost a long time ago. Developers seem to have internalized “don’t use tables” so thoroughly that they will put tabular data in floating divs. Are you kidding me?! Don’t even get me started about SPAs.

                  The fetishization of unminified code distribution is really bizarre.

                  The point is, I think, that the code should not require minifying and should only contain the bare minimum to provide the functionality required. The point is to have 1 kB of unminified JS instead of 800 kB of minified crap.

                  1. 4

                    New information always appears more complex than old information when it requires updates to a mental model.

                    I feel like you completely missed his point here. He isn’t just talking about how complex the new stuff is. He even said flexbox was significantly better and simpler to use than “float”. What he is resisting is the continual reinvention that goes on in webdev. A new build tool every week. A new flavor of framework every month. An entire book written about loading fonts on the web. Sometimes you legitimately need that new framework or a detailed font loading library for your site. But frankly, even if you are a large company, you probably don’t need most of the fad-of-the-week stuff that happens in web dev. Flexbox is probably still good enough for your needs. React is a genuine improvement to the state of SPA development. But 3-4 different build pipelines? No, you probably don’t need that.

                    And while we are on the subject

                    CSS grid achieves this without corrupting the semantic quality of the document.

                    Nobody cares about the semantic quality of the document. It doesn’t really help you with anything. HTML is about presentation and it always has been. CSS allows you to modify the presentation based on what is presenting it. But you still can’t get away from the fact that how you lay things out in the html has an effect on the css you write. The semantic web has gone nowhere and it will continue to go nowhere because it’s built on a foundation that fundamentally doesn’t care about it. If we wanted semantic content we would have gone with xhtml and xslt. We didn’t because at heart html is about designing and presenting web pages not a semantic document.

                    1. 3

                      Nobody cares about the semantic quality of the document.

                      Anybody who uses assistive technology cares about its semantic quality.

                      Anybody who chooses to use styles in Word documents understands why they’d want to write documents with good semantic quality.

                      You still can’t get away from the fact that how you lay things out in the html has an effect on the css you write.

                      That’s… the opposite of the point.

                      All of the cycles in web design – first using CSS at all (instead of tables in the HTML) and then making CSS progressively more powerful – have been about the opposite:

                      How you lay things out on the screen should not determine how the HTML is written.

                      Of course the CSS depends on the HTML, as you say. The presentation code depends on the content! But the content should not depend on the presentation code. That’s the direction CSS has been headed. And with CSS Grid, we’re very close to the point where content does not have to have a certain structure in order to permit a desired presentation.

                      And that’s my main issue with the essay: it presents this forward evolution in CSS as cyclical.

                      (The other issue is that the experience that compelled the author to write the article in the first place – the frenetic wheel reinvention that has taken hold of the Javascript world – is wholly separate from the phases of CSS. As far as that is concerned, I agree with him: a lot of that reinvention is cyclical and essentially fashion-driven, is optional for anyone who isn’t planning on pushing around megabytes of Javascript, and that anyone who is planning on doing that ought to pause and reconsider their plan.)

                      If we wanted semantic content we would have gone with xhtml and xslt.

                      Uh… what? XHTML is absolutely no different from HTML in terms of semantics and XSLT is completely orthogonal. XML is syntax, not semantics. It’s an implementation detail at most.

                      1. 3

                        If you are building websites, please do more research and reconsider your attitude about semantic markup. Semantic markup is important for accessibility technologies like screen readers. RSS readers and search indexes also benefit from it. In short, there are clear and easily understood necessities for the semantic web. People do care about it. All front-end developers I work with review the semantic quality of a document during code reviews, and the reason they care is that it has a real impact on the user.

                        1. 2

                          Having built and relied on a lot of semantic web (lowercase) tech, I can say this is just untrue. Yes, many devs don’t care to use even basic semantics (h1/section instead of div/div), but that doesn’t mean there isn’t enough good stuff out there to be useful, or that you can’t convince them to fix something for a purpose.

                          1. 1

                            I don’t know what you worked on, but I’m guessing it was niche - or, if not, that you spent a lot of time dealing with sites that most emphatically didn’t care about the semantic web. The fact is that a few sites caring doesn’t mean the industry cares. The majority don’t care. They just need the web page to look just so on both desktop and mobile. Everything else is secondary.

                      1. 57

                        Meaningful is…overrated, perhaps.

                        A survey of last four jobs (not counting contracting and consulting gigs, because I think the mindset is very different)

                        • Engineer at small CAD software startup, 50K/yr, working on AEC design and project management software comfortably 10 years ahead of whatever Autodesk and others were offering at the time. Was exciting and felt very important, turned out not to matter.
                        • Cofounder at productivity startup, no income, felt tremendously important and exciting. We bootstrapped and ran out of cash, and even though the problems were exciting they weren’t super important. Felt meaningful because it was our baby, and because we’d used shitty tools before. We imploded after running out of runway, very bad time in life, stress and burnout.
                        • Engineering lead at medical startup, 60K/yr, working on health tech comfortably 20 years ahead of the curve of Epic, Cerner, Allscripts, a bunch of other folks. Literally saving babies, saving lives. I found the work very interesting and meaningful, but the internal and external politics of the company and marketplace soured me and burned me out after two years.
                        • Senior engineer at a packaging company, 120K/yr, working on better packaging. The importance of our product is not large, but hey, everybody needs it. Probably the best job I’ve ever had after DJing in high school. Great team, fun tech, straightforward problem space.

                        The “meaningful” stuff that happened in the rest of life:

                        • 3 relationships with wonderful partners, lots of other dating with great folks
                        • rather broken family starting to knit together slowly, first of a new generation of socks has been brought into the world
                        • exciting and fun contracting gigs with friends
                        • two papers coauthored in robotics with some pals in academia on a whim
                        • some successful hackathons
                        • interesting reflections on online communities and myself
                        • weddings of close friends
                        • a lot of really rewarding personal technical growth through side projects
                        • a decent amount of teaching, mentoring, and community involvement in technology and entrepreneurship
                        • various other things

                        I’m a bit counter-culture in this, but I think that trying to do things “meaningful for humanity” is the wrong mindset. Look after your tribe, whatever and whoever they are, the more local the better. Help your family, help your friends, help the community in which you live.

                        Work–at least in our field!–is almost certainly not going to help humanity. The majority of devs are helping run arbitrage on efficiencies of scale (myself included). The work, though, can free up resources for you to go and do things locally to help. Meaningful things, like:

                        • Paying for a friend’s healthcare
                        • Buying extra tech gear and donating the balance to friends’ siblings or local teaching organizations
                        • Giving extra food or meals to local homeless
                        • Patronizing local shops and artisans to help them stay in business
                        • Supporting local artists by going to their shows or buying their art
                        • Paying taxes

                        Those are the things I find meaningful…my job is just a way of giving me fuckaround money while I pursue them.

                        1. 14

                          I’m a bit counter-culture in this, but I think that trying to do things “meaningful for humanity” is the wrong mindset. Look after your tribe, whatever and whoever they are, the more local the better. Help your family, help your friends, help the community in which you live.

                          Same (in the sense that I have the same mindset as you, but I’m not sure there is anything right or wrong about it). I sometimes think it is counter-culture to say this out loud. But as far as I can tell, despite what anyone says, most peoples’ actions seem to be consistent with this mindset.

                          There was an interesting House episode on this phenomenon. A patient seemingly believed and acted as if locality wasn’t significant. He valued his own child about the same as any other child (for example).

                          1. 9

                            I pretty much agree with this. Very few people have the privilege of making their living doing something “meaningful” because we live within a system where financial gains do not correspond to “meaningful” productivity. That’s not to say you shouldn’t seek out jobs that are more helpful to the world at large, but not having one of those rare jobs shouldn’t be too discouraging.

                            1. 4

                              Meaningful is…overrated, perhaps.

                              I think specifically the reason I asked is because I find it so thoroughly dissatisfying to be doing truly meaningless work. It would be nice to be in a situation where I wake up and don’t wonder if the work I spend 1/3rd of my life on is contributing to people’s well-being in the world or actively harming them.

                              Even ignoring “the world,” it would be nice to optimize for the kind of fulfillment I get out of automating the worst parts of my wife’s job, mentoring people in tech, or the foundational tech that @cflewis talks about here.

                              Work–at least in our field!–is almost certainly not going to help humanity. The majority of devs are helping run arbitrage on efficiencies of scale (myself included).

                              I think about this a lot.

                              1. 10

                                In general I find capitalism and being trapped inside of capitalism to generally be antithetical to meaningful work in the sense that you’ll rarely win at capitalism if you want to do good for the world, no matter what portion of the world you’re interested in helping.

                                A solution I found for this is to reach a point where, financially, I don’t have to work anymore to maintain my standard of living. It’s a project in the making, but essentially, passive income needs to surpass recurring costs and you’re pretty much good to go. To achieve that, you can increase the passive income, diminish the recurring costs, or both (which you probably want to be doing - which I want to be doing, anyway).

                                As your passive income increases, you (potentially) get to diminish your working hours until you don’t have to do it anymore (or you use all the extra money to make that happen faster). Freedom is far away. Between now and then, there won’t be a lot of “meaningful” work going on, at least, not software related.

                                [Edit: whoever marked me as incorrect, would you mind telling me where? I’m genuinely interested in this; I thought I was careful in exposing this in a very “this is an opinion” voice, but if my judgement is fundamentally flawed somehow, knowing how and why will help me correct it. Thanks.]

                                1. 8

                                  Agree re. ‘get out of capitalism any way you can’, but I don’t agree with passive income. One aspect of capitalism is maximum extraction for minimum effort, and this is what passive income is. If you plan to consciously bleed the old system dry while you do something which is better and compensates, passive income would be reasonable; if you want to create social structures that are as healthy as possible for as many people as possible, passive income is a hypocrisy.

                                  I prefer getting as much resource (social capital, extreme low cost of living) as fast as possible so you can exit capitalism as quickly as possible.

                                  1. 1

                                    Are you talking about the difference between, say, rental income (passive income) and owning equities (stockpile)? Or do you mean just having a lot of cash?

                                    1. 1

                                      Yes, if you want to live outside capitalism you need assets that are as conceptually distant from capitalism as possible, with the fewest dependencies on it, whilst supporting your wellbeing. Cash is good. Social capital, access to land and the resources to sustain yourself without needing cash would be lovely, but that’s pretty hard right now while the nation state and capitalism are hard to separate.

                                      1. 1

                                        Do you ever worry about 70’s (or worse) style inflation eroding the value of cash? In this day and age, you can’t even live off the land without money for property taxes.

                              2. 3

                                Work–at least in our field!–is almost certainly not going to help humanity. The majority of devs are helping run arbitrage on efficiencies of scale (myself included).

                                This 100%. A for-profit company can’t make decisions that benefit humanity as their entire goal is to take more than they give (AKA profit).

                                1. 2

                                  Sure they can. They just have to charge for a beneficial product at a rate higher than its cost. Food, utilities, housing, entertainment products, safety products… these come to mind.

                                  From there, a for-profit company selling a wasteful or damaging product might still invest profits into good products/services or just charity. So, they can be beneficial as well just more selectively.

                                2. 2

                                  I think you’re hitting at a similar truth that I was poking at in my response, but from perhaps a different angle. I would bet my bottom dollar that you found meaning in the jobs you cited you most enjoyed, but perhaps not “for humanity” as the OP indicated.

                                  1. 1

                                    What is the exact meaning of “run arbitrage on efficiencies of scale”? I like the phrase and want to make sure I understand it correctly.

                                    1. 5

                                      So, arbitrage is “taking advantage of the price difference in two or more markets”.

                                      As technologists, we’re in the business of efficiency, and more importantly, efficiency of scale. Given how annoying it is to write software, and how software is duplicated effortlessly (mostly, sorta, if your ansible scripts are good or if you can pay the Dread Pirate Bezos for AWS), we find that our talents yield the best result when applied to large-scale problems.

                                      That being the case, our work naturally tends towards creating things that are used to help create vast price differences by way of reducing the costs of operating at scale. The difference between, for example, having a loose federation of call centers and taxis versus having a phone app that contractors use. Or, the difference between having to place classified ads in multiple papers with a phone call and a mailed check versus having a site where people just put up ads in the appropriate section and email servers with autogenerated forwarding rules handle most of the rest.

                                      The systems we build, almost by definition, are required to:

                                      • remove as many humans from the equation as possible (along with their jobs)
                                      • encode specialist knowledge into expert systems and self-tuning intelligences, none of which are humans
                                      • reduce variety and special-cases in economic and creative transactions
                                      • recast human labor, where it still exists, into a simple unskilled transactional model with interchangeable parties (every laborer is interchangeable, every task is as simple as possible because the expertise is in the systems)
                                      • pass on the savings at scale to the people who pay us (not even the shareholding public, as companies are staying private longer)

                                      It is almost unthinkable that anything we do is going to benefit humanity as a whole on a long-enough timescale–at least, given the last requirement.

                                    2. 1

                                      Care about your tribe, but also care about other tribes. Don’t get so into this small scope thinking that you can’t see outside of it. Otherwise your tribe will lack the social connections to survive.

                                      Edit: it’s likely my mental frame is tainted by being angry at LibertarianLlama, so please take this comment as generously as possible :).

                                      1. 1

                                        Speaking of that, is there any democratic process that we could go through such that someone gets banned from the community? Also what are the limits of discussion in this community?

                                    1. 4

                                      How are Java exceptions monadic? The examples contrast two styles of exception handling. I don’t see a clear connection to the monadic laws in any of the examples. I don’t know Java, so I might be overlooking something. I do know JavaScript and I’m also confused by that section. A Promise is said to be a monad because Promise.resolve and Promise.prototype.then are equivalent to unit and bind in the very specific sense that these operators can be used to satisfy the monadic laws of identity and associativity. The monadic nature of a Promise is not related to control flow or asynchronous code execution, except perhaps incidentally. I hope I’m not being pedantic here. What the article says about Promises is much more essential for practitioners than the data structure’s theoretical context. But if the topic is Promise as monad, it seems like the relevant information is missing.
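
                                      To make the laws concrete, here is a toy Python sketch (my own names, not code from the article), using a Maybe-style monad where None stands for “no value”:

```python
# Toy illustration of the monad laws, with None modeling absence.

def unit(x):                 # analogous to Promise.resolve: inject a value
    return x

def bind(m, f):              # analogous to .then: apply f unless empty
    return None if m is None else f(m)

f = lambda x: x + 1
g = lambda x: x * 2

# Left identity:  bind(unit(a), f) == f(a)
assert bind(unit(5), f) == f(5)
# Right identity: bind(m, unit) == m
assert bind(5, unit) == 5
# Associativity:  bind(bind(m, f), g) == bind(m, lambda x: bind(f(x), g))
assert bind(bind(5, f), g) == bind(5, lambda x: bind(f(x), g))
# The empty value short-circuits, much like a rejected Promise skips .then
assert bind(None, f) is None
```

                                      The laws only constrain how unit and bind compose; control flow and asynchrony are, as you say, incidental.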

                                      1. 2

                                        I can see what the author’s getting at here - RemoteData doesn’t capture more complex use cases, but I don’t think it’s intended to. RemoteData models a single loading event. Dropping it completely for more complex use cases strikes me a bit as throwing the baby out with the bathwater. I suspect a cleaner option would be to compose RemoteData with additional state and data structures to build up to the use cases he describes.

                                        For existing data and a refresh request, that might look something like:

                                        type RefreshableData e a
                                             = Refreshable a (RemoteData e a)
                                        

                                        For a large number of requests, you could do something like:

                                        type alias RequestGroup e a = List (RemoteData e a)
                                        

                                        And it becomes pretty easy to derive the states from the list:

                                        {- This can be optimized, but if you have enough data
                                           on the page for it to be an issue, you probably have bigger UX problems -}
                                        
                                        isPartialLoading requestGroup =
                                            (List.any RemoteData.isLoading requestGroup)
                                                && not (List.all RemoteData.isLoading requestGroup)
                                        

                                        Of course, in these examples, the states aren’t represented as union types, so you lose some compiler checking that you’ve handled all states. That said, I’ve worked on some pretty complex interfaces, and I have never needed or wanted something that would validate that we had code to handle all of:

                                        • empty, general error, and request pending
                                        • empty, general error, and request pending for a subset of the data
                                        • empty, error for a subset of the data, and request pending
                                        • empty, error for a subset of the data, and request pending for a subset of the data
                                        • data cached and general error
                                        • data cached and error for a subset of the data
                                        • data cached and request pending
                                        • data cached and request pending for a subset of the data
                                        • data cached, general error, and request pending
                                        • data cached, general error, and request pending for a subset of the data
                                        • data cached, error for a subset of the data, and request pending
                                        • data cached, error for a subset of the data, and request pending for a subset of the data

                                        That said, if you really wanted it, you could put your composition of RemoteData with additional state into its own module, make it an opaque type, and enforce correct transitions between states by limiting the exposed API.

                                        I think all of this would be clearer with a specific use case in mind. The exercise in the article strikes me as a case of premature generalization. It seems like it’s trying to solve all possible problems rather than anything specific.

                                        I also have questions about what kind of cache is being referenced in the article, as I have some fairly strong opinions about caching data in client-side applications. (TL;DR: Don’t, the browser can already do this for you.)

                                        1. 3

                                          I also have questions about what kind of cache is being referenced in the article, as I have some fairly strong opinions about caching data in client-side applications. (TL;DR: Don’t, the browser can already do this for you.)

                                          I would love to hear more about this, because two problems have me stuck with client-side JS data caches in my apps. And I love deleting code.

                                          1. Embedded documents. Every app I’ve worked on denormalizes the data we fetch to cut down on HTTP requests. Is it cheap enough with HTTP/2 to send lots and lots of requests? Even then, it would mean a bunch of sequential round-trips for each child document that depends on its parent.

                                          2. Consistency. If two parts of the app load the same data at different times, we can get different responses, and have weirdness in the UI. With a JS cache, when we update a document, we can have every dependent piece of UI re-render and be consistent.

                                          1. 3

                                            You should take this all with a big grain of salt, because I have not had the bandwidth to implement most of these ideas in practice, due to the usual constraints around priorities and limited time. I’ve just been increasingly bothered by the complexity of implementing caches in the client, or alternatively the ugly behavior that results when they’re implemented naively, and on reflection, I think it’s mostly unnecessary. I have probably missed some corner cases and I suspect there are many apps where some amount of specific, targeted caching might still be useful for a subset of APIs or pages.

                                            With all of that in mind, I will say that #1 is probably the best reason I’ve seen for having a client side cache. I think in that case it’s worth looking at usage patterns to be sure it’s really providing benefit. If the individual requests your app is making in between big de-normalized requests don’t overlap much with the de-normalized data, the client side cache isn’t going to buy you much, although neither is the browser cache. Or if you’re always making large de-normalized requests, you’re still probably not getting a caching win, unless you have a way of structuring those requests to specify what you already have data on.

                                            I think there’s a lot of promise with HTTP/2. The single TCP connection is nice on its own, but there’s also the potential to do interesting things like make a request for a de-normalized structure that only contains the relationships between resources, and then have the server push the actual data from the resources individually. That way they’re cached individually, and the browser will actually cancel the push for resources it has already cached. Running some experiments with that is somewhere medium-high on my TODO list.

                                            #2 is tricky. If your data doesn’t change often, it’s not as big of a deal, or if you rarely or never show the same data twice on the same page. One thing you can do if it’s still a problem after taking both of those into consideration is to track ongoing requests at the API layer. If you’re using something like promises, that means when a request comes in while another is still outstanding, you should be able to return the promise from the first request to the second caller, and just share the API call. If the first request has completed already, the browser should have the data in its cache (assuming the data has a time-based cache rather than something like etags).
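                                            That request-sharing idea can be sketched in a few lines. All names here are hypothetical, and this assumes the browser’s global `fetch` plus time-based cache headers on the responses:

```javascript
// Hypothetical API layer: while a request for a URL is in flight,
// every caller gets the same Promise instead of a second network call.
const inflight = new Map();

function fetchShared(url) {
  if (inflight.has(url)) {
    return inflight.get(url); // share the outstanding request
  }
  const p = fetch(url)
    .then((res) => res.json())
    .finally(() => inflight.delete(url)); // afterwards, the browser cache takes over
  inflight.set(url, p);
  return p;
}
```

                                            Callers that race get one network round-trip between them; once the response lands, later calls fall through to `fetch` again and, given time-based caching, are served out of the browser cache.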

                                            1. 2

                                              This is awesome, thank you so much for taking the time to share your thoughts @mcheely!

                                              Solving the round-trip part of #1 with HTTP/2 server push seems like it could be so damn magical and cool. In my most common case of hard-to-cache embedding – “load a list of items” and then “load a detailed view of one item” – it seems like a drop-in solution.

                                              For #2, I actually hadn’t thought about races! I was thinking more about the case where data rendered on one part of the screen becomes stale, but there’s no way for the browser cache to tell that part of the UI to re-render, so it stays stale. I guess, since we hope it to be cached, maybe I just need to adjust my thinking to re-render more things more often. Cheap most of the time, since it’s cached, and expensive when it should be expensive anyway. Huh.

                                              (solving the races by having a client API layer managing promises across the whole app starts to feel like a dangerously tempting place to add new features like… caching :P )

                                              I think in that case it’s worth looking at usage patterns

                                              I think it all comes back to this, for me. Building an app usually feels like a process of discovery to me; top-down plans don’t survive long. The usage patterns can be pretty unstable, and it can get painful surprisingly quickly to be completely naïve about loading data.

                                              …It’s appealing to imagine that a client-side cache, made hopefully robust through explicit modeling of all the possible states of each piece of data, can provide a 90% solution in a general way. Coupled with things like graphQL or PostgREST, you just build stuff and it works reasonably well, for free-ish.

                                          2. 2

                                            Thanks for reading and thanks for the feedback.

                                            I think all of this would be clearer with a specific use case in mind. The exercise in the article strikes me as a case of premature generalization. It seems like it’s trying to solve all possible problems rather than anything specific.

                                            As I say in the post, “States, events, and transitions should reflect the needs of the application”. I listed those states as an example because the last four applications I’ve built have needed all of these states. I tried to use RemoteData for two of those applications and ran into the problems I describe in the post. These apps do not strike me as complex and three years of dealing with these states led me to assume they were common.

                                            One example is an app that lists financial transactions. The app periodically refreshes the list. A loading icon is shown next to the list header during each refresh. Error messages are shown above the list if a refresh fails. That’s half of the states in that list already. On top of that, the user can make inline edits to the transaction details. When the updates are committed, a loading icon displays next to the transaction title. Errors related to the update (e.g. network failure) are displayed under the transaction title. That’s all of the states in that list.

                                            But the point of the article is not to dictate states to the reader. Again, “States, events, and transitions should reflect the needs of the application”. The point is that you cannot oversimplify the problem just because the result of that oversimplification looks nice in a blog post or a tweet.

                                            RemoteData models a single loading event. Dropping it completely for more complex use cases strikes me a bit as throwing the baby out with the bathwater.

                                            I agree that these states map closely to the HTTP request/response lifecycle. As I said in the post, “RemoteData models a stateful cache of data in terms of a stateless transfer of data, REST.” The original RemoteData post clearly states that the pattern is intended to model the cache, not the request/response lifecycle. That is why that post starts by evaluating existing patterns for modeling cached data and then offers RemoteData as an alternative. Notice that these posts place RemoteData in the model and that the view functions consume RemoteData - cache state, not request state.

                                          1. 2

                                            As someone who is just starting to dive deep into operating systems, especially Unix, I’m grateful for all the writing you’ve done about the Oil project.

                                            Oil is taking shell seriously as a programming language, rather than treating it as a text-based UI that can be abused to write programs.

                                            One question in response to this statement is at what point does the shell language become just another programming language with an operating system interface. This question seems especially important when the Oil shell language targets users who are writing hundreds of lines of shell script. If someone is writing an entire program in shell script, what is the advantage of using shell script over a programming language? You seem to anticipate this question by comparing the Oil shell language to Ruby and Python:

                                            …Python and Ruby aren’t good shell replacements in general. Shell is a domain-specific language for dealing with concurrent processes and the file system. But Python and Ruby have too much abstraction over these concepts, sometimes in the name of portability (e.g. to Windows). They hide what’s really going on.

                                            So maybe these are good reasons (not sure if they are or aren’t) why Ruby and Python scripts aren’t clearly better than shell scripts. You also provide a mix of reasons why shell is better than Perl. For example: “Perl has been around for more than 30 years, and hasn’t replaced shell. It hasn’t replaced sed and awk either.”.

                                            But again, it doesn’t seem to clearly answer why the domain language for manually interacting with the operating system should be the same language used to write complex scripts that interact with the operating system. Making a language that is capable of both should provide a clear advantage to the user. But it’s not clear that there is an advantage. Why wouldn’t it be better to provide two languages: one that is optimized for simple use cases and another that is optimized for complex use cases? And why wouldn’t the language for complex use cases be C or Rust?

                                            1. 3

                                              My view is that the most important division between a shell language and a programming language is what each is optimized for in terms of syntax (and semantics). A shell language is optimized for running external programs, while a programming language is generally optimized for evaluating expressions. This leads directly to a number of things, like what unquoted words mean in the most straightforward context; in a fluid programming language, you want them to stand for variables, while in a shell language they’re string arguments to programs.

                                              With sufficient work you could probably come up with a language that made these decisions on a contextual basis (so that ‘a = …’ triggered expression context, while ‘a b c d’ triggered program context or something like that), but existing programming languages aren’t structured that way and there are still somewhat thorny issues (for example, how you handle if).

                                              Shell languages tend to wind up closely related to shells (if not the same) because shells are also obviously focused on running external programs over evaluating expressions. And IMHO shells grow language features partly because people wind up wanting to do more complex things both interactively and in their dotfiles.

                                              (In this model Perl is mostly a programming language, not a shell language.)

                                              1. 1

                                                Thanks, glad you like the blog.

                                                So maybe these are good reasons (not sure if they are or aren’t) why Ruby and Python scripts aren’t clearly better than shell scripts.

                                                Well, if you know Python, I would suggest reading the linked article about replacing shell with Python and see if you come to the same conclusion. I think among people who know both bash and Python (not just Python), the idea that bash is better for scripting the OS is universal. Consider that every Linux distro uses a ton of shell/bash, and not much Python (below a certain level of the package dependency graph).

                                                The main issue is that people don’t want to learn bash, which I don’t blame them for. I don’t want to learn (any more) Perl, because Python does everything that Perl does, and Perl looks ugly. However, Python doesn’t do everything that bash does.

                                                But again, it doesn’t seem to clearly answer why the domain language for manually interacting with the operating system should be the same language used to write complex scripts that interact with the operating system.

                                                There’s an easy answer to that: because bash is already both languages, and OSH / Oil aim to replace bash.

                                                Also, the idea of a REPL is old and not limited to shell. It’s nice to build your programs from snippets that you’ve already tested. Moving them to another language wouldn’t really make sense.

                                              1. 4

                                                It’s strange and disappointing that this post does not reflect the nuance and mix of experience documented on a related Lobsters thread created by the post’s author.

                                                1. 6

                                                  Seems like he’s been exploring that space quite a bit more these days:

                                                  I reached a dark night of the soul with regard to software and technology. There were moments when I looked around and realized that my total contribution to humanity, by working for an increasingly maleficent industry, might be negative. The 21st century’s American theatre has featured the dismantling of the middle class, and I can’t say I had nothing to do with it.

                                                    I’ve worked on many features and enhancements that have caused back office people to lose their jobs. One project was “online return” functionality for a medium-sized online retailer ($300M in its heyday). Five people worked returns, then zero. It’s not a good feeling.

                                                  1. 5

                                                    I’ve written software that made friends of mine redundant, and it’s definitely a terrible feeling.

                                                    I feel like there’s an important distinction between

                                                    • Creating productivity improvements (which can result in redundancies but could also increase the total amount of work available) - eg designing electric drills, vs
                                                    • Enabling impersonal mistreatment (of a kind that wouldn’t happen personally) - eg uber automatically routing work away from underperforming contractors instead of employing/training staff
                                                    1. 2

                                                      There’s a neat game where you play the villain– a paperclip maximizer– and it captures how I feel about working in the tech industry.

                                                      My experience has been invaluable. I’m building an AI to test some “final” design changes to a card game, Ambition; I couldn’t do it if I hadn’t been a programmer for 10 years. Programming is a great skill set to have, and it really disciplines the mind in a way that’s opposite to what happens to most people in their 20s and 30s (the imprecision of thought that is typical of Corporate America infects them, and they lose their sharpness). I also couldn’t write Farisa (a mage/witch heroine in a world with a complex magic system) without experience of other-than-neurotypicality, nor could I write my first-book villains (in a steampunk dystopia, the Pinkertons win and gradually evolve into Nazis) if I didn’t have painful experience with what people are when the stakes are high enough.

                                                      All of that said, I look at what I’ve accomplished to date, and I think I probably come up just barely on the right side of zero. It’s really unsettling. Like anyone else, I could die tomorrow. Corporate America only persists because people forget their own mortality– it would become a ghost town within hours if people fully realized that they, some day, will leave this world for the utterly unknowable– but also because they lose all sense of moral agency. Fuck everything about that.

                                                  1. 5

                                                    Would be nice to see a before and after…

                                                      1. 2

                                                        Interestingly, Preset omits all the Normalize fixes for MS Edge, which seems surprising to me.

                                                        1. 1

                                                          Ah, I thought the parent meant a before/after comparison of old resets versus this new reset. Not a before/after comparison of the reset applied to a page.

                                                        2. 1

                                                          Sure! Here’s a before/after: https://imgur.com/a/QjZl2

                                                          And they’re online at:

                                                        1. 3

                                                          There are a lot of great blog posts written by ‘recursers’. Those posts are usually spread out on different blogs though. Does anyone know if they are aggregated somewhere?

                                                          1. 6

                                                            There’s an internal tool that we (recursers) use that aggregates them, but unfortunately it’s not public. Perhaps someday someone will write a public view for it. (hint hint to current recursers)

                                                            1. 7

                                                              FYI, other projects call this their planet. See http://planet.mozilla.org/ or http://planet.debian.org/ or http://planet.ubuntu.com/

                                                              1. 1

                                                                Thanks, that’s a cool term I hadn’t heard before.

                                                              2. 1

                                                                I’d really like this.

                                                              3. 1

                                                                I was just thinking today that a huge benefit of doing Recurse Center was exposure to so many great blogs/bloggers I otherwise wouldn’t have encountered.

                                                              1. 25

                                                                I used to do the things listed in this article, but very recently I’ve changed my mind.

                                                                The answer to reviewing code you don’t understand is you say “I don’t understand this” and you send it back until the author makes you understand in the code.

                                                                I’ve experienced too much pain from essentially rubber-stamping with a “I don’t understand this. I guess you know what you’re doing.” And then again. And again. And then I have to go and maintain that code and, guess what, I don’t understand it. I can’t fix it. I either have to have the original author help me, or I have to throw it out. This is not how a software engineering team can work in the long-term.

                                                                More succinctly: any software engineering team is upper-bound architecturally by the single strongest team member (you only need one person to get the design right) and upper-bound code-wise by the single weakest/least experienced team member. If you can’t understand the code now, you can bet dollars to donuts that any new team member or new hire isn’t going to either (the whole team must be able to read the code because you don’t know what the team churn is going to be). And that’s poison to your development velocity. The big mistake people make in code review is to think the team is bound by the strongest team member code-wise too and defer to their experience, rather than digging in their heels and say “I don’t understand this.”

                                                                The solution to “I don’t understand this” is plain old code health. More functions with better names. More tests. Smaller diffs to review. Comments about the edge cases and gotchas that are being worked around but you wouldn’t know about. Not thinking that the code review is the place to convince the reviewer to accept the commit because no-one will ever go back to the review if they don’t understand the code as an artifact that stands by itself. If you don’t understand it as a reviewer in less than 5 minutes, you punt it back and say “You gotta do this better.” And that’s hard. It’s a hard thing to say. I’m beginning to come into conflict about it with other team members who are used to getting their ungrokkable code rubber stamped.

                                                                But code that isn’t understandable is a failure of the author, not the reviewer.

                                                                1. 7

                                                                  More succinctly: any software engineering team is upper-bound architecturally by the single strongest team member (you only need one person to get the design right) and upper-bound code-wise by the single weakest/least experienced team member.

                                                                  Well put – hearing you type that out loud makes it incredibly apparent.

                                                                  Anywhoo, I think your conclusion isn’t unreasonable (sometimes you gotta be the jerk) but the real problem is upstream. It’s a huge waste when bad code makes it all the way to review and then needs to be written again; much better would be to head it off at the pass. Pairing up the weaker / more junior software engineers with the more experienced works well, but is easier said than done.

                                                                  1. 4

                                                                    hmm, you make a good point and I don’t disagree. Do you think the mandate on the author to write understandable code becomes weaker when the confusing part is the domain, and not the code itself? (Although I do acknowledge that expressive, well-structured and well-commented code should strive to bring complicated aspects of the problem domain into the picture, and not leave it up to assumed understanding.)

                                                                    1. 3

                                                                      I think your point is very much applicable. Sometimes it takes a very long time to fully understand the domain, and until you do, the code will suffer. But you have competing interests. For example, at some point, you need to ship something.

                                                                      1. 2

                                                                        Do you think the mandate on the author to write understandable code becomes weaker when the confusing part is the domain, and not the code itself?

                                                                        That’s a good question.

                                                                        In the very day-to-day, I don’t personally find that code reviews have a problem from the domain level. Usually I would expect/hope that there’s a design doc, or package doc, or something, that explains things. I don’t think we should expect software engineers to know how a carburetor works in order to create models for a car company, the onus is on the car company to provide the means to find out how the carburetor works.

                                                                        I think it gets much trickier when the domain is actually computer science based, as we kind of just all resolved that there are people that know how networks work and they write networking code, and there’s people who know how kernels work and they write kernel code etc etc. We don’t take the time to do the training and assume if someone wants to know about it, they’ll learn themselves. But in that instance, I would hope the reviewer is also a domain expert, but on small teams that probably isn’t viable.

                                                                        And like @burntsushi said, you gotta ship sometimes and trust people. But I think the pressure eases as the company grows.

                                                                        1. 1

                                                                          That makes sense. I think you’ve surfaced an assumption baked into the article which I wasn’t aware of, having only worked at small companies with lots of surface area. But I see how it comes across as particularly troublesome advice outside of that context.

                                                                      2. 4

                                                                        I’m beginning to come into conflict about it with other team members

                                                                        How do you resolve those conflicts? In my experience, everyone who opens a PR review finds their code to be obvious and self-documenting. It’s not uncommon to meet developers lacking the self-awareness required to improve their code along the lines of your objections. For those developers, I usually focus on quantifiable metrics like “it doesn’t break anything”, “it’s performant”, and “it does what it’s meant to do”. Submitting feedback about code quality often seems to regress to a debate over first principles. The result is that you burn social capital with the entire team, especially when working on teams without a junior-senior hierarchy, where no one is a clear authority.

                                                                        1. 2

                                                                          Not well. I don’t have a good answer for you. If someone knows, tell me how. If I knew how to simply resolve the conflicts I would. My hope is that after a while the entire team begins to internalize writing for the lowest common denominator, and it just happens and/or the team backs up the reviewer when there is further conflict.

                                                                          But that’s a hope.

                                                                          1. 2

                                                                            It’s not uncommon to meet developers lacking the self-awareness required to improve their code along the lines of your objections. For those developers, I usually focus on quantifiable metrics like “it doesn’t break anything”, “it’s performant”, and “it does what it’s meant to do”. Submitting feedback about code quality often seems to regress to a debate over first principles.

                                                                            Require sign-off from at least one other developer before they can merge, and don’t budge on it – readability and understandability are the most important issues. In 5 years people will give precisely no shits that it ran fast 5 years ago, and 100% care that the code can be read and modified by usually completely different authors to meet changing business needs. It requires a culture shift. You may well need to remove intransigent developers to establish a healthier culture.

                                                                            The result is that you burn social capital with the entire team, especially when working on teams without a junior-senior hierarchy, where no one is a clear authority.

                                                                            This is a bit beyond the topic at hand, but I’ve never had a good experience in that kind of environment. If the buck doesn’t stop somewhere, you end up burning a lot of time arguing and the end result is often very muddled code. Even if it’s completely arbitrary, for a given project somebody should have a final say.

                                                                            1. 1

                                                                              The result is that you burn social capital with the entire team, especially when working on teams without a junior-senior hierarchy, where no one is a clear authority.

                                                                              This is a bit beyond the topic at hand, but I’ve never had a good experience in that kind of environment. If the buck doesn’t stop somewhere, you end up burning a lot of time arguing and the end result is often very muddled code. Even if it’s completely arbitrary, for a given project somebody should have a final say.

                                                                              I’m not sure.

At the very least, when no agreement is found, the authorities should document very carefully and clearly why they took a certain decision. When this happens, everything goes smoothly.

In a few cases, I saw a really seasoned authority change his mind while writing down this kind of document, and finally choose the most junior dev’s proposal. And I’ve also seen a younger authority faking a LARGE project just because he took every objection as a personal attack. When the doom came (with literally hundreds of thousands of euros wasted) he kindly left the company.

Also, I’ve seen a team of 5 people working very well together for a few years despite daily debates. All the debates were respectful and technically rooted. I was junior back then, but my opinions were treated on par with those of more senior colleagues. And we were always looking for syntheses, not compromises.

                                                                          2. 2

                                                                            I agree with the sentiment to an extent, but there’s something to be said for learning a language or domain’s idioms, and honestly some things just aren’t obvious at first sight.

There’s “ungrokkable” code as you put it (God knows I’ve written my share of that), but there’s also code you don’t understand because you have had less exposure to certain idioms, so at first glance it is ungrokkable, until it no longer is.

If the reviewer doesn’t know how to map over an array, no amount of them telling me they don’t understand will make me push to a new array inside a for-loop. I would rather spend the time sitting down with people and trying to level everyone up.

To give a concrete personal example, there are still plenty of usages of spreading and destructuring in JavaScript that trip me up when I read them quickly. But I’ll build up a tolerance to them, and soon they won’t.
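For instance, one idiom that reads as noise until you’ve been exposed to it a few times: destructuring inside a map callback. A throwaway sketch (the data and names here are made up):

```javascript
const users = [{ name: 'Ada', id: 1 }, { name: 'Grace', id: 2 }];

// Push-in-a-loop: explicit, but noisy.
const names = [];
for (let i = 0; i < users.length; i++) {
  names.push(users[i].name);
}

// The idiomatic equivalent: map with destructuring in the parameter list.
// `({ name }) => name` pulls the `name` property out of each element.
const names2 = users.map(({ name }) => name);

// Both produce ['Ada', 'Grace'].
```

Neither version is “obvious”; they’re obvious to different audiences, which is the whole point about exposure.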

                                                                          1. 4
                                                                            y = false, true; // returns true in console
                                                                            console.log(y); // false (left-most)
                                                                            

                                                                            Huh, that definitely tripped me up for a second. Is this because the comma is higher precedence than the assignment?

                                                                            1. 9

The assignment to y belongs wholly to the expression on the left side of the comma operator: y = false. The left and right sides of the comma operator don’t interact. The comma operator is just a way to squeeze in two or more expressions where only one expression is valid, e.g. the first clause of a for loop. The list of expressions is treated as a single expression that always evaluates to the result of the right-most expression. For that reason y = (false, true) has your expected result. Along the same lines, var x = 1, y = 2 expands to var x; var y; x = 1; y = 2; because of variable hoisting.
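A quick illustration of the grouping (same behavior in any JS console):

```javascript
// Assignment binds tighter than the comma operator, so this parses as
// (y = false), true — the whole expression evaluates to true, but the
// assignment only ever saw `false`.
let y;
y = false, true;
console.log(y); // false

// Parenthesizing changes the grouping: now the comma expression is
// evaluated first, yields its right-most operand, and THAT is assigned.
let z;
z = (false, true);
console.log(z); // true
```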

                                                                            1. 8

                                                                              I appreciate the move, but “we’re paying wages based on a place and we found out it’s kind of arbitrary, so we now pay wages based on another, even more arbitrary place” is a weird argument.

                                                                              1. 25

                                                                                Not more than “we want to pay you less because you currently live in a cheaper place” as if any company has any business dictating what my appropriate level of living standard should be.

I somewhat disagree with San Francisco being another arbitrary place. It is probably the most expensive city with a significant number of well-paid developers, which seems to be why they picked it.

                                                                                1. 7

                                                                                  They spend quite a bit of the blog post arguing that picking a place for a distributed company is a little arbitrary. Then they pick another place. They could have just placed themselves on the wage scale at the price where they want to be.

That’s independent of why San Francisco wages are high. It’s just as much a place as Chicago is.

                                                                                  To make it clear: the argument amuses me, nothing more, nothing less.

I’m fully on board with the whole “wage depends on where you live, not the value you bring” stuff being completely off. I think the freedom to choose a different place of living, also for financial reasons, is important. Everyone talking about how “my employees should think business” and then pulling stuff like this is not practicing what they preach.

                                                                                  1. 6

                                                                                    We were sold the line that in the Real World, one’s salary is reflective of the value they bring to the company.

                                                                                    Then remote work enters the picture, along with the opportunity for employees to take part in arbitrage, and the line suddenly changes to talk about standard of living and other nonsense. It struck me as odd how quickly the Real World changed once employees had the potential for an upside.

                                                                                  2. 9

                                                                                    I don’t think they’ve now chosen an arbitrary place. Remote work is steadily gaining in popularity. Bay area companies pay the most, and make their salaries increasingly available (or within 10%) to remote devs. Basecamp is not picking a city out of a hat, they’re putting themselves at the top of the American market they’re competing in. It used to be that the market rate for remote work included a location adjustment, but the market is moving. (Moving slowly and incompletely, of course, as wages are illiquid and sticky.)

                                                                                    1. 1

                                                                                      I would expect to see compensation regress towards the mean in a national or international labor market. If the supply of labor changes without a change in the demand, wages should decrease.

                                                                                      1. 2

                                                                                        There’s a bunch of factors and I tried not to nerd-snipe myself. I’d predict that on the balance that there’s enough increasing demand to pull up salaries outside of the bay area, but I didn’t run the numbers.

                                                                                        1. 1

                                                                                          Great list of factors in the third tweet.

                                                                                        2. 1

                                                                                          Sure - but this isn’t “the market”, it’s a founder-controlled company.

                                                                                          The decisions are informed by the market, but not controlled by it.

                                                                                          1. 1

                                                                                            I would expect to see a decrease in compensation not because the market controls market actors but because free-ish markets tend towards economic equilibrium. I wasn’t referring directly to the actions taken by Basecamp but instead to “…the market is moving” in the parent.

                                                                                      2. 4

                                                                                        I’m with you. It’s nonsense trading place for place. I’ll add they have the better method built right into this article. Let me put it simply:

                                                                                        Goal: Pay workers really well.

                                                                                        Problem: Industry pays based on location. Capitalists also try to push wages down.

                                                                                        Solution: Look at pay ranges in IT, identify a pay rate for various job positions that meets their goal for baseline, and set starting pay for those positions to those points.

                                                                                        Done! Didn’t even need anything about location to pick a nice pay rate for programmers. Just need the national (or global) statistics on pay. They already did this by picking a number in the high end. They can finish by dropping the location part. Result is still the same.

Personally, though, I find the approaches that combine cost of living with a base pay for a position to be interesting. Example here. They may be more fair, depending on how one looks at it, in terms of what people keep after basic bills in exchange for their work. I’m far from decided on that topic. For most businesses’ goals, getting talent in areas with lower cost of living will let them invest that savings back into their business. That can be a competitive advantage, with more people getting stuff done or better tools for those they have. If they don’t need more programmers, quite a bit of QA, deployment, and marketing goods can be bought with the savings from a few programmers in cheaper areas versus expensive ones.

                                                                                        1. 1

                                                                                          Goal: Pay workers really well.

I don’t think this is the real goal. The goal is more likely to boost reputation and attract the best workers.

                                                                                          Goal: Happy (productive) and skilled workers.

                                                                                          Actually, even then I don’t think it is right, if a company could operate effectively without staff it would.

                                                                                          1. 2

Their workers were already happy and skilled. Certainly a top priority for them. The author also writes as if they have core principles about business on top of that. Putting their beliefs into practice to set an example is also a goal.

                                                                                            I’m just using pay because it’s an objective value that can be measured. They wanted that value to go up. I proposed a different method to make it go up.

                                                                                        2. 3

If they don’t use SF as their template, they miss out on anyone living there as a potential employee, as they’ve priced themselves out.

                                                                                          1. 3

                                                                                            Honestly, Basecamp doesn’t feel like the company to me that would actually care that much about that. They’ve managed to be highly successful without.

                                                                                            1. 1

                                                                                              Really? Basecamp is all about making the best product possible. It’s not about SF per se; SF just happens to be the top of the market for developer pay. They explain in the article:

                                                                                              But in what other part of the business do we look at what we can merely get away with? Are we trying to make the bare minimum of a product we can get away selling to customers? Are we looking to do the bare minimum of a job marketing our business? No.

                                                                                              Do better than what you can get away with. Do more than the bare minimum. Don’t wait for the pressure to build. Don’t wait for the requests to mount. The best time to take a step forward is right now.

                                                                                              1. 2

                                                                                                I read the article. But if your point is “top of the market”, just say “top of the market” and be done with it.

                                                                                                IMHO, Basecamp is pretty good at giving their employees a fair share of their successes, and that’s fine. SF or not.

                                                                                          2. 2

                                                                                            I believe the logic here was “the place distinction is arbitrary, so we’ll take the most expensive place so that people can go anywhere with ease”

                                                                                          1. 1

                                                                                            Pay people to store things.

A cryptocurrency which mines not blockchain but content would encourage people to donate their disks to the cause – hoovering up all the world’s data, since anyone who wants it would have to pay a premium inverse to availability. Think of it as a tax-on-demand Library of Congress.

                                                                                            1. 3

                                                                                              This wouldn’t work by itself. Soon, most disks would be occupied by useless junk and someone will need to decide what to ditch and what to keep. Which is the other function of the Library of Congress.

                                                                                              Some library evaluation methods include the checklists method, circulation and interlibrary loan statistics, citation analysis, network usage analysis, vendor-supplied statistics and faculty opinion.

                                                                                              – Wikipedia on Collection development

                                                                                              1. 1

                                                                                                Soon, most disks would be occupied by useless junk and someone will need to decide what to ditch and what to keep.

                                                                                                I’m envisioning a recurring storage fee that would eventually run out unless topped-up. Somewhat like Ethereum distributed apps that stop running when they run out of ‘gas’.

                                                                                              2. 2

                                                                                                FileCoin aims to be that. Its initial token sale raised over $200M, showing that a lot of big players want in on that market opportunity. Right now it seems they are massively expanding their team, and it’s not clear yet when it will be available to the general public.

                                                                                                Considering P2P rewards, private torrent trackers have been doing this for a really long time, converting seed time into virtual community credits or something similar, enabling recognition and opportunities to contributing members. But like in other parts of the online world, spending time, money, and equipment for a cause rather than a product becomes less and less convenient for the average user. Many people lamented the downfall of what.cd, but it illustrates the two sides of the P2P coin pretty well: it can have huge potential if many people are willing to invest their resources, but it is still very much illegal for much of the shared content, and there is a powerful force behind the corporations and authorities to stop these things (namely, huge piles of money).

                                                                                                1. 1

                                                                                                  A reward system like you describe appeals to me. A market can be an efficient way to allocate a finite supply of resources. This would also enable things like bounties for data that exists out of band. I wonder if valuing the data inversely proportional to its availability would eventually bring about an equilibrium where most things were within the same range of availability. I also agree with the sibling that storage space is a complicating factor. In theory, the value of the data would rise and attract more hosts until the supply met the demand. So the effect would be a general pay-wall.

                                                                                                1. 7

                                                                                                  Have a peer-to-peer web, but keep the concept of servers in addition to it. It’s no better a solution than currently exists, but at least it offers potential redundancy (fueled by popularity, but perhaps some other metrics can be devised, as was hinted towards the end of the article) instead of the current lack of existence that information has a tendency to vanish into.

                                                                                                  Additionally, I’d like to point out that things such as paywalls or pizza delivery would still require servers that are not data-addressable, because they are services and not information that can be copied and distributed (at least, not without a clever reworking of how the service works). Servers are here to stay for the time being, and a peer-to-peer web most likely won’t change that dramatically.

                                                                                                  1. 5

BitTorrent has web seeds (download from HTTP servers in addition to peers). They’re quite popular: used by many OS distributions, Archive.org, media.ccc.de, and Amazon S3.

                                                                                                    1. 1

                                                                                                      By “keep the concept of servers”, do you mean that whoever publishes to the network should also self-host their data and treat the redundancy as a bonus? This way of looking at it has also occurred to me. You could describe this as a more robust form of client-side caching, where all static assets are saved locally and the validity of the cache is communicated through the network protocol. Thinking of the p2p web as a bandwidth optimization makes sense to me in exactly the way you say “it’s no better a solution than currently exists, but…” we get some nice additional properties at scale. But this point of view falls short of the ambitions the decentralized web movement has for these network architectures.

                                                                                                    1. 30

I agree that the page should be served over HTTPS. I’m not sure I agree that the lecture which followed was necessary. Yes, the person replying to his initial tweet didn’t get it. It’s clear that this individual does not understand the importance of HTTPS. But within 30 minutes, they offered to forward his concern to the technical staff. Within 24 hours, there were comments on this blog post that the page had been switched to HTTPS. This indicates that one of those technical people, in the space of a workday, corrected the mistake.

I’m not convinced that this incident is evidence (in and of itself) of anything more than a nontechnical person unwittingly deploying a customer relations strategy that is inappropriate for the situation. Yes, banks and other institutions handling sensitive data should not make these mistakes, etc. But, like Torvalds, I’m also getting tired of the sanctimonious, unwarranted lectures from the infosec crowd. Not everyone understands the importance of SSL. Making people feel bad about this only makes the problem worse. In an era where everyone needs the benefits of infosec expertise and technology, the infosec community has a real user experience and user relations problem.

                                                                                                      1. 18

                                                                                                        Wow, I didn’t realize this was fixed within a day. That changes a lot of things, this article can now be summed up as “I didn’t like the way the PR person for NatWest’s twitter account answered me”.

                                                                                                        1. 7

                                                                                                          It’s pretty insane that he expects some social media person to understand what he is on about and be able to reply instantly with the right info.

                                                                                                          1. 3

Truth be told, I do. Social media people are supposed to communicate between users and the company, so I would expect that whenever there is a chance a user is reporting something important, they forward it to an internal tech contact at the company. If your contact people don’t do that, what is their purpose? Pure marketing? Then don’t answer customers at all on these channels.

                                                                                                            (And I do think that deploying HTTPS within 24 hours is actually pretty good within a big organization, so they do seem to employ competent people)

                                                                                                            1. 1

                                                                                                              I’m sorry you feel this way. I can certainly pass on your concerns and feed this back to the tech team for you Troy? DC

                                                                                                              They passed the feedback to the correct team. In my opinion, the social media people did their job at this point. It isn’t their job to understand what he’s saying, it is their job to pass the information to the right people. Apparently it wasn’t phrased well enough for this guy though.

                                                                                                        2. 7

                                                                                                          Just to play devil’s advocate:

                                                                                                          • what about the 4 month old XSS vuln mentioned towards the bottom of the article, reported by @huykha10?
                                                                                                          • their rushed-out fix now has problems of its own with mixed protocols, bad certs, and still no upgrade-insecure-requests

                                                                                                          I side with you in that public shaming isn’t helping how people feel about properly implementing security, but it sure seems like it took someone making the news before they leapt into action.

                                                                                                          1. 2

                                                                                                            You’re right - it’s not as simple as they fixed it in a day. There is additional evidence of incompetence and not all of it seems to be simple ignorance. I’m not defending any of their sloppy security practices, including those in your list. At some point, publicly shaming them is a reasonable way to seek the desired outcome. I just don’t think it should be the first step. Tone is very hard to read on the web; to me, the author’s tone indicated that he assumed the worst from the start. It seems like many people now use accusative tweets to threaten public embarrassment as a way to quickly get what they want. The result of this routine, reflexive public shaming is a culture where people are afraid to admit what they don’t know or don’t understand. If the goal is to have a population that values security, I would guess (without any data) that it’s more effective to treat the uninitiated kindly and save the shaming for those who willfully and maliciously disregard well-established social norms (of information security or whatever else).

                                                                                                            1. 1

                                                                                                              Thanks for the response! I totally agree with you.

                                                                                                              I’m also guilty of being flip sometimes when frustrated by something that I can’t get traction on. It’s tough to continue being helpful and polite when faced with a lack of ownership for an issue that you yourself can’t fix.

                                                                                                        1. 1

                                                                                                          This is very related to things I need to do over the next couple weeks. Thanks!

                                                                                                          1. 2

                                                                                                            Good to hear! Thank you for reading.

                                                                                                          1. 4

                                                                                                            I’m unsure that 5 days is enough to get into the headspace of any language beyond, perhaps, brainfuck.

                                                                                                            1. 1

                                                                                                              I think 5 days is enough to write a brainfuck interpreter. Many more days are required to write brainfuck :P

                                                                                                            1. 8

Cancelation does not fit the Promise ontology. A Promise does not abstract a process. It abstracts a value. It makes sense to say “I’d like to cancel this process”, but it does not make sense to say “I’d like to cancel this value”. The point of the Promise abstraction is to enable the value to be transformed without having to think about when or if that value will exist. Yes, sometimes the value is produced by asynchronous code execution. But this is not the concern of the Promise. The concern of the Promise is simply to run the queued transformations when that value exists.

If you need to cancel a process, then you need to build a cancelation mechanism into an abstraction of the process itself. The Task monad is one example: http://folktale.origamitower.com/docs/v2.0.0/migrating/from-data.task/ A Task can resolve to a Promise just as an asynchronous process eventually produces a value. It can also resolve to a Future, which is just another way of representing the value. The point is that there are two things here: processes and values. Cancelability is a property of the process. Promise is a representation of the value.
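To make the process/value split concrete, here’s a minimal sketch of a Task-like wrapper — not Folktale’s actual API, just illustrative names — where cancelation acts on the execution, and the Promise is only the value’s representative:

```javascript
// A hypothetical minimal Task: the *process* is cancelable,
// the Promise it eventually yields is not.
function makeTask(work) {
  return {
    run() {
      let cancelled = false;
      const promise = new Promise((resolve, reject) => {
        work(
          value => { if (!cancelled) resolve(value); },
          error => { if (!cancelled) reject(error); }
        );
      });
      return {
        promise,                        // the value-to-be
        cancel() { cancelled = true; }  // acts on the process, not the value
      };
    }
  };
}

// Usage: cancel the execution; the Promise simply never settles.
const task = makeTask(resolve => setTimeout(() => resolve(42), 1000));
const execution = task.run();
execution.cancel();
```

Note that `cancel` lives on the execution handle, not on the Promise — consumers holding only the Promise can transform the value but have no say over the process.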

                                                                                                              1. 2

I agree, but I really wish people would use the term “break” for promises. When I think about it that way, I think “if I break a promise, I don’t know what will happen.” With that in mind, I feel like breaking a promise – a cancelation, in harsher terms – says, “I don’t care about this value anymore, and I don’t care about anything that was being done to calculate it”. A promise in that mindset can be broken and its state entirely discarded. A task might need to handle a cancellation request carefully, e.g. releasing resources, etc.

Put another way, canceling a promise issues a SIGKILL to the process (in abstract terms) that was calculating the value, while canceling a task issues a SIGTERM to a process that is doing something, whether or not that process was returning a value, and allows it to end gracefully.
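The platform has since grown a first-class handle for exactly that “SIGTERM” side: AbortController. A small sketch (the `delay` helper is made up, but AbortController/AbortSignal are real standard APIs):

```javascript
// The signal interrupts the work (the process) and lets it clean up;
// the Promise merely reports how the work ended (the value).
function delay(ms, signal) {
  return new Promise((resolve, reject) => {
    if (signal.aborted) return reject(new Error('aborted'));
    const timer = setTimeout(resolve, ms);
    signal.addEventListener('abort', () => {
      clearTimeout(timer);             // graceful: release the resource first
      reject(new Error('aborted'));
    });
  });
}

const controller = new AbortController();
const pending = delay(5000, controller.signal);
controller.abort();                                 // "SIGTERM" the process...
pending.catch(err => console.log(err.message));     // ...the value reports 'aborted'
```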