1. 2

    Not a bad idea to have a dedicated hardware module tailored towards the web; for distributed systems it would be particularly helpful. We’ve had problems before where pages in memory deadlocked everything because the OS took so long to defrag memory, taking down the whole app.

    1. 1

      I was thinking one could simultaneously improve performance and reduce the attack surface, too.

      1.  

        Does it reduce the attack surface because hardware is more easily verifiable?

        1.  

          There’s less functionality in there, since it includes just what you need. The hardware is FSMs converted to logic. Both support strong, automated verification. Finally, the hardware implementation might let you do things like input-check all headers simultaneously, since it’s inherently parallel. That might further let you do more checks or protections that would cause too much slowdown on a general-purpose CPU. Some approaches take a 50-70% performance hit in software but only 1-10% in hardware.
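          To make the parallelism point concrete, here’s a rough software analogue in Go. It’s a sketch only: the header names and checks are made up, and goroutines merely stand in for what in hardware would be independent circuits all evaluating in the same clock cycles:

          ```go
          package main

          import (
              "errors"
              "fmt"
              "sync"
          )

          // Hypothetical per-header checks. In hardware each of these would be
          // its own FSM/logic block, all evaluating at once.
          var checks = map[string]func(string) error{
              "Host": func(v string) error {
                  if v == "" {
                      return errors.New("missing Host header")
                  }
                  return nil
              },
              "Content-Length": func(v string) error {
                  if len(v) > 10 {
                      return errors.New("implausible Content-Length")
                  }
                  return nil
              },
          }

          func main() {
              headers := map[string]string{"Host": "example.com", "Content-Length": "42"}

              var wg sync.WaitGroup
              results := make(chan error, len(checks))
              for name, check := range checks {
                  wg.Add(1)
                  go func(name string, check func(string) error) {
                      defer wg.Done()
                      results <- check(headers[name]) // all checks run concurrently
                  }(name, check)
              }
              wg.Wait()
              close(results)

              for err := range results {
                  if err != nil {
                      fmt.Println("reject request:", err)
                  }
              }
          }
          ```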

    1. 6

      None of these tactics remove or prevent vulnerabilities, and would therefore be rejected by a “defense’s job is to make sure there are no vulnerabilities for the attackers to find” approach. However, these are all incredibly valuable activities for security teams, and they lower the expected value of trying to attack a system.

      I’m not convinced. “Perfect” is an overstatement for the sake of simplicity, but effective security measures need to be exponentially more costly to bypass than they are to implement, because attackers have much greater resources than defenders. IME all of the examples this page gives are too costly to be worth it, almost all of the time: privilege separation, asset depreciation, exploit mitigation and detection are all very costly to implement while representing only modest barriers to skilled attackers. Limited resources would be better expended on “perfect” security measures; someone who expended the same amount of effort while following the truism and focusing on eliminating vulnerabilities entirely would end up with a more secure system.

      1. 2

        You would end up with a system more secure against attackers with fewer resources. For example, you can make your system secure against all the common methods used by script kiddies, but what happens when a state-level actor is attacking your system? In this case, and as your threats get more advanced, I agree with the article: at higher levels of threat it becomes a problem of economics.

        1. 2

          You would end up with a system more secure against attackers with fewer resources. For example, you can make your system secure against all the common methods used by script kiddies, but what happens when a state-level actor is attacking your system?

          I think just the opposite actually. The attitude proposed by the article would lead you to implement things that defended against common methods used by script kiddies (i.e. cheap attacks) but did nothing against funded corporate rivals. Whereas following the truism would lead you to make changes that would protect against all attackers.

          1. 4

            The attitude proposed by the article would lead you to implement things that defended against common methods used by script kiddies (i.e. cheap attacks) but did nothing against funded corporate rivals.

            That’s not what I understand from this article. The attitude proposed by the article should, IMO, lead you to think of the threat model of the system you’re trying to protect.

            If it’s your friend’s blog, you (probably) shouldn’t have to consider state actors. If it’s a stock exchange, you should. If you’re Facebook or Amazon, not the same as lobsters or your sister’s bike repair shop. If you’re a politically exposed individual, exploiting your home automation raspberry pi might be worth more than exploiting the same system belonging to someone who is not a public figure at all.

            Besides that, I disagree that all examples are too costly to be worth it. Hashing passwords is always worth it, or at least I can’t think of a case where it wouldn’t be.
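            It’s also one of the cheaper measures on the list to adopt. A minimal sketch in Go using the golang.org/x/crypto/bcrypt package (any equivalent library in your stack works the same way):

            ```go
            package main

            import (
                "fmt"

                "golang.org/x/crypto/bcrypt"
            )

            func main() {
                password := []byte("correct horse battery staple")

                // bcrypt generates and embeds a random salt, and the cost factor
                // makes brute-forcing stolen hashes expensive.
                hash, err := bcrypt.GenerateFromPassword(password, bcrypt.DefaultCost)
                if err != nil {
                    panic(err)
                }
                fmt.Printf("stored hash: %s\n", hash)

                // On login, compare against the stored hash; never store the password.
                if bcrypt.CompareHashAndPassword(hash, password) == nil {
                    fmt.Println("password accepted")
                }
            }
            ```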

            To summarize with an analogy, I don’t take the exact same care of my bag when my laptop (or other valuables) is in it as when it only contains my water bottle, and Edward Snowden should care more about the software he uses than I need to care about mine.

            Overall I really like the way of thinking presented by the author!

            1. 2

              Whereas following the truism would lead you to make changes that would protect against all attackers.

              Or mess with your sense of priority such that all vulnerabilities look equally important, so “let’s just go for the easier mitigations”, rather than evaluating them based on the cost of the attack itself.

              1. 1

                If you’re thinking about “mitigations” you’re already in the wrong mentality, the one the truism exists to protect you against.

                1. 1

                  It’s important to acknowledge that it’s somewhat counterintuitive to think about the actual parties attempting to crack your defenses. It requires more mental work, in a world where people assume they can get all the info they need just by reading their codebase & judging it on its own merits. It requires methodical, needs-based analysis.

                  The present mentality is not a pernicious truism; it’s an attractive fallacy.

          2. 2

            IME all of the examples this page gives are too costly to be worth it, almost all of the time: privilege separation, asset depreciation, exploit mitigation and detection are all very costly to implement while representing only modest barriers to skilled attackers.

            How do you figure it’s too costly? If anything, all these things are getting much easier as they become primitives of deployment environments and frameworks. Additionally, there are services out there that scan dependency vulnerabilities if you give them a Gemfile, or access to your repo.

            Limited resources would be better expended on “perfect” security measures; someone who expended the same amount of effort while following the truism and focusing on eliminating vulnerabilities entirely would end up with a more secure system.

            Perfect it all you want. The weakest link is still an employee who is hung over and provides their credentials to a fake GMail login page. (Or the equivalent fuck up) If anything, what’s costly is keeping your employees from taking shortcuts, making sure they stay alert to missing access cards, rogue network devices in the office, and badge surfing, and seeing that they don’t leave their assets lying around.

            1. 1

              If anything, all these things are getting much easier as they become primitives of deployment environments and frameworks.

              I’d frame that as: deployment environments are increasingly set up so that everyone pays the costs.

              The weakest link is still an employee who is hung over and provides their credentials to a fake GMail login page. (Or the equivalent fuck up)

              So fix that, with a real security measure like hardware tokens. Thinking in terms of costs to attackers doesn’t make that any easier; indeed it would make you more likely to ignore this kind of attack on the grounds that fake domains are costly.

          1. 5

            Larger question: why are we proud of this? Do we want programming to just be wiring up components written by 25-year-olds from Facebook with an excess of free time?

            1. 4

              I know for certain I can knock up vast chunks of functionality by gluing together prewritten chunks. I know this, I do this, I know I deliver orders of magnitude more than I could ever write myself.

              I also know we haven’t learnt how to do it well, how to do it in a rock-solid, reliable, testable, repeatable way. That video on hexagonal architecture is an example of a way of doing the “glue” part in a rock-solid, testable way.

              We haven’t really learnt the best ways. Yet.

              The entire industry is learning on the job. And some of those lessons are going to be really really painful…. Especially for those who don’t recognize that they still need to be learning…

              1. 2

                I know that selfishly I don’t want to wire things up (it’s boring, it’s frustrating etc.), but what is the justification for starting every project by hand rolling your own language and compiler? Surely that wouldn’t benefit anyone but the developer. Even though wiring up components produces software that’s suboptimal in many respects, it’s nevertheless efficient on two very important metrics: cost and development time.

                I’m sure there are exceptions to this (enterprise software comes to mind) but in general, I struggle to make a case against reusing components.

                Looking at it from another angle, we generally want to solve ever more complex problems with software. Managing complexity requires abstraction. Components seem to be the only widely accepted mechanism for creating large scale abstractions. I know this isn’t necessarily the best way, but what is a practical alternative and how can all the existing components be replaced in a way that doesn’t create astronomical costs?

                1. 2

                  I’m not arguing for bootstrapping the universe just to make an apple pie. I’m actually a big fan of components. But the view that we’re “just” component wiring plumbers irks me to my core.

                  Somebody has to envision the data flow from the user to the app. Someone has to design the interface. Someone has to empathize with the user to discern an optimal workflow. Someone has to also be able to make the machine perform this task in an acceptable amount of time. And someone has to have the skill to design it in a way that doesn’t prevent future improvements.

                  I’d argue the utopian vision of software components is already here, where you can drop in various modules and glue them together. Add in an appropriately flexible language, such as JS, and there is very little friction involved overall.

                  Also note that the software problem hasn’t been solved: design skills are still needed, and people merely ship apps faster and in a buggier state.

                  So, I speak against “just component wiring” in the de-skilling sense if only to say the actual programming part is only a small part of what a programmer does.

                  1. 2

                    Just playing devil’s advocate, but how many of us actually have design skills in an engineering sense? To be more specific, how many of us actually design in terms of a definable process? Design is definitely what differentiates a senior from a junior, but is it something concrete, or something more like aesthetics? Another language you learn so you can talk to other developers.

                    It’s interesting because the whole Requirements-Design-Architecture formalized design process is not really used anywhere, and in the places where it is used, design is done by an architecture team and locked away, never to be seen by another human.

              1. 16

                I don’t disagree with the heading…. but I would strongly assert we are VERY BAD at doing “Just” plumbing.

                We have this terrible habit of concreting ourselves into the design suggested by whatever prebuilt components are presented to us.

                We then pat ourselves on the back saying we don’t need to unit test, the prebuilt component is well tested, and we don’t need to think about design, the prebuilt component has thought about that for us.

                And when we decide that component doesn’t fit we rip up everything we have done or insert layers upon layers of kludges to shim in something else.

                And when we want a different UI or backend…. we find we have welded these components into our very architecture.

                At severe risk of enraging DHH further…. I point at this talk as an example of someone who has thought about the issue and has an excellent proposal for resolving (some of it).

                We also forget that retrieving, versioning, licensing, patching, bug-fixing, configuring and deploying these pre-built components is hard. This is a large part of what bitbake and OpenEmbedded do for us… and partly what docker should be doing for us.

                Bitbake makes it nice and clear: the “recipe” for grabbing the correct version of the source, verifying it, backing up the original in case the source goes down, verifying the licence, patching, configuring, building, installing, deploying, post-installation configuration, ….., for this package AND all its dependencies…..

                …that recipe is our source code.

                That is the “crown jewels” of the “just plumbing” era. That is what goes into version control.

                And we do NOT have Good best practices for developing and designing and testing this stuff.

                I really really don’t think this is a solved problem yet.

                Yes, “tinker toy” development as I call it, where we plumb together very large chunks, allowing very rapid development of a huge amount of functionality, is the future.

                But we aren’t there yet.

                Not even close.

                Stepping stones and signposts on the journey there are….

                • guix and nixos
                • The reproducible builds project.
                • bitbake and openembedded.
                1. 2

                  I think your perspective is correct in a software-for-developers kind of way. But in production I would say that plumbing and rapid development through external modules is already happening. Any company right now will use npm and the many language-specific dependency/package managers. In fact, it’s heavily incentivized. Your perspective on best practices would typically apply to companies and organizations where time to market is not a significant factor. Which usually means large companies and OSS, but in those cases they would rather do it in-house anyway for other advantages.

                  The only portion of software where I see this being true is perhaps very specific embedded applications, such as drivers, industrial controls, FPGAs, etc.

                  1. 1

                    Oh, I know we’re all doing it now. Gluing together hundreds of packages and calling it a product.

                    I’m just saying we’re all doing it very badly.

                    The entire industry is at that painful stage the “handcraft it out of C” part of the industry was at in the 1970s…

                    ie. Groping for what best practice actually looks like, feeling bits of pain here and there, knowing that ten years down the line the pain is going to become excruciating…. but not having any really, umm, solid principles and tools and practices.

                    We’re like the C++ community was prior to STL, and way way prior to the isocpp Core Guidelines.

                    We’re at the stage where major players like IBM and Rational defined what they did as best practice and everybody copied that…. only to work out years later that just because you’re big doesn’t mean you’re good at it; it just means you can throw hundreds of bodies at it.

                    The “IBM” of today is Facebook.

                    Software in the Large was and is really hard to do Right.

                    The era of Software in the Gargantuan that we’re in now will be harder.

                1. 3

                  This is reminiscent of A Cloud-Scale Acceleration Architecture from Microsoft.

                  1. 1

                    I think as more providers offer FPGAs (AWS, GCP, etc.) we’ll start to see a lot of work in hardware acceleration for web apps.

                  1. 1

                    Is there a publish date for this that I’m not seeing anywhere? It talks about PHP 4.3.0+, which has been dead for a number of years (more than a decade, I think?). That being said, I love articles like this that explain in depth how a language works.

                    1. 2

                      This is basically a hieroglyph; it’s from 2005. I believe it’s part of an ongoing magazine called php architect though, so you could check that out.

                    1. 5

                      Very strong opinions here…

                      As far as I’m concerned, strong scrum practices would defeat these issues.

                      Bad tools are not scrum. Lack of ownership is not scrum.

                      People who try to use scrum as a way to wrap a process around bad ideas will never benefit from it.

                      Take the good ideas, apply scrum, and most importantly, adapt to what you learn.

                      1. 38

                        adapt to what you learn.

                        Umm. Points 5 and 6 of TFA?

                        I’ve learnt from seeing it in practice, both in my own experience and from speaking to many others… The article is pretty spot on.

                        Ok. Warning. Incoming Rant. Not aimed at you personally, you’re just an innocent bystander, Not for sensitive stomachs.

                        Yes, some teams do OK on Scrum (all such teams I have observed ignore largish chunks of it, ie. are not doing certified Scrum).

                        No team I have observed has done as well as it could have if it had used a lighter-weight process.

                        Many teams have done astonishingly Badly, while doing perfect certified Scrum, hitting every Toxic stereotype the software industry holds.

                        Sigh.

                        I remember the advent of “Agile” in the form of Extreme Programming.

                        Apart from the name, XP was nearly spot on in terms of a lightweight, highly productive process.

                        Then Kanban came.

                        And that was actually good.

                        Then Scrum came.

                        Oh my.

                        What a great leap backwards that was.

                        Scrum takes pretty much all the concepts that existed in XP…. ignores all the bits that made it work (refactoring, pair programming, test-driven development, …), and piles on stuff that slows everything down.

                        The thing that really pisses me off about Scrum, is the amount of Pseudo Planning that goes on in many teams.

                        Now planning is not magic. It’s simply a very data-intensive exercise in probabilistic modelling.

                        You can tell when someone is really serious about planning: they track leave schedules and team-size changes, have probability distributions for everything, know how to combine them, and update their predictions daily.

                        The output of a real plan is a regularly updated probability distribution, not a date.
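                        For a concrete taste of what that means, here’s a toy Monte Carlo schedule model in Go. The task estimates and the crude uniform sampling are made up for illustration; a real plan would fit distributions to the team’s historical data and re-run this daily as leave, team size, and estimates change:

                        ```go
                        package main

                        import (
                            "fmt"
                            "math/rand"
                            "sort"
                        )

                        func main() {
                            // Optimistic/pessimistic day estimates for the remaining tasks
                            // (made-up numbers; use your team's history in practice).
                            tasks := [][2]float64{{2, 6}, {1, 4}, {5, 15}, {3, 8}}

                            const trials = 100000
                            totals := make([]float64, trials)
                            for i := range totals {
                                var total float64
                                for _, t := range tasks {
                                    // Crude uniform sample between optimistic and pessimistic.
                                    total += t[0] + (t[1]-t[0])*rand.Float64()
                                }
                                totals[i] = total
                            }
                            sort.Float64s(totals)

                            // The deliverable is a distribution, not a single date.
                            for _, p := range []float64{0.5, 0.8, 0.95} {
                                fmt.Printf("P%.0f: %.1f days\n", p*100, totals[int(p*trials)])
                            }
                        }
                        ```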

                        You can tell a workplace bully by the fact that their plans never change, even when a team member goes off sick.

                        In some teams I have spoken to, Scrum planning is just plain unvarnished workplace bullying by power-tripping scrum managers, who coerce “heroes” into working massive amounts of unpaid overtime, creating warm steaming mounds of, err, “technical debt”, to meet sprint deadlines that were pure fantasy to start with.

                        Yes, if I sound angry I am.

                        I have seen Pure Scrum Certified and Blessed Scrum used to hurt people I care about.

                        I have seen Good ideas like Refactoring and clean code get strangled by fantasy deadlines.

                        The very name “sprint” is a clue as to what is wrong.

                        One of the core ideas of XP was “Sustainable Pace”…. which is exactly what a sprint isn’t.

                        Seriously, the one and only point of Agile really is the following.

                        If being able to change rapidly to meet new demands has high business value, then we need to adapt our processes, practices and designs to be able to change easily.

                        Somehow that driving motivation has been buried under meetings.

                        1. 8

                          I 100% agree with you actually.

                          I suppose my inexperience with “real certified scrum” is actually the issue.

                          I think it’s perfectly fine and possible to take plays out of every playbook you’ve mentioned and keep the good, toss the bad.

                          I also love the idea that every output of planning should be a probabilistic model.

                          Anyone who gets married to the process they pick is going to suffer.

                          Instead, use the definitions to create commonly shared language, and find the pieces that work. For some people, “sprint” works. For others, pair programming is a must have.

                          I think adhering to any single ideology 100% is less like productivity and more like cultish religion.

                          1. 5

                            fantasy deadlines

                              Haha. Deadlines suck, so let’s have ’em every 2 weeks!

                            1. 3

                              As they say in the XP world: if it hurts, do it more often.

                              1. 3

                                True. It’s a good idea. One-step build pipeline all the way to deployment. An excellent thing; all the pain is automated away.

                                If you planned it soundly, then a miss is feedback to improve your planning. As I say, planning is a data-intensive modelling exercise. If you don’t collect the data and don’t feed it back into your model… your plans will never improve.

                                If it was pseudo-planning and a fantasy deadline, and the only thing you do is bitch at your team for missing the deadline… it’s workplace bullying; doing it more will hurt more, and you get a learned-helplessness response.

                          2. 12

                            Warning: plain talk ahead, skip this if you’re a sensitive type. Scrum can actually work pretty well with mediocre teams and mediocre organizations. Hint: we’re mostly all mediocre. This article reeks of entitlement; I’m a special snowflake, let ME build the product with the features I want! Another hint: no one wants this. Outside of really great teams and great developers, which by definition most of us aren’t, you are not capable.

                            Because all product decision authority rests with the “Product Owner”, Scrum disallows engineers from making any product decisions and reduces them to grovelling to product management for any level of inclusion in product direction.

                            This is the best thing about scrum/agile imo. Getting someone higher in the food chain to gatekeep what comes into development and prioritize what is actually needed is a huge benefit to every developer, whether you realize it or not. If you’ve never worked in a shop where Sales, Marketing and Support all call their pet developers to work on 10 hair-on-fire bullshit tasks a day, then you’ve been fortunate.

                            1. 9

                              Scrum can actually work pretty well with mediocre teams and mediocre organizations. Hint: we’re mostly all mediocre.

                              The problem is: Scrum also keeps people mediocre.

                              Even brilliant people are mediocre, most of the time, when they start a new thing. Also, you don’t have to be a genius to excel at something. A work ethic and time do the trick.

                              That said, Scrum, because it assumes engineers are disorganized, talentless children, tends to be a self-fulfilling prophecy. There’s no mentorship in a Scrum shop, no allowance for self-improvement, and no exit strategy. It isn’t “This is what you’ll do until you earn your wings” but “You have to do this because you’re only a developer, and if you were good for anything, you’d be a manager by now.”

                              1. 3

                                That said, Scrum, because it assumes engineers are disorganized, talentless children, tends to be a self-fulfilling prophecy.

                                Inverting the cause and effect here gives an equally valid argument: that most developers in fact are disorganized, talentless children, as you say and as the sibling comment highlights. We are hijacking the “Engineer” prestige and legal status with none of the related responsibility or authority.

                                There’s no mentorship in a Scrum shop, no allowance for self-improvement, and no exit strategy.

                                Is there mentoring, and are there clear career paths, in non-Scrum shops? This isn’t a Scrum-related issue. But regardless, anyone who is counting on the Company for self-actualization is misguided. At the end of the day, no matter how much we would all like to think that our contributions matter, they really don’t. To the Company, we’re all just cogs in the machine. Better to make peace with that and find fulfillment elsewhere.

                                1. 3

                                  Scrum does not assume “engineers” at all. It assumes “developers”. Engineers are a highly trained group of legally and ethically responsible professionals. Agile takes the responsibility of being an engineer right out of our hands.

                                  1. 4

                                    Engineers are a highly trained group of legally and ethically responsible professionals.

                                    I love this definition. I have always said there’s no such thing as a software engineer; here’s a fantastic reason why. Computer programmers may think of themselves as engineers, but we have no legal responsibilities nor any ethical code that I am aware of. Anyone can claim to be a “software engineer” with no definition of what that means and no legal recourse against liars. It requires no experience, no formal education, and no certification.

                                    1. 1

                                      True, but why?

                                      IMHO, because our field is in its infancy.

                                      1. 2

                                        I dislike this reason constantly being thrown around. Software engineering has existed for half a century; name another discipline where unregulated work and constantly broken products are allowed to exist for that long. Imagine if nuclear engineering were like us. I think the real reason we do not get regulated is that the majority of our field does not need rigor, and companies would like a lower salary for engineers, not higher. John Doe the web dev does not need the equivalent of an engineering stamp each time he pushes to production, because his work is unlikely to be a critical system where lives are at stake.

                                        1. 1

                                          I’m pretty sure that most human disciplines date back thousands of years.

                                          Nuclear engineering (that is well rooted in chemistry and physics) is still in its infancy too, as both Chernobyl and Fukushima show pretty well.

                                          But I’m pretty sure that you will agree with me that good engineering takes a few generations, if you compare these buildings with this one.

                                          The total lack of historical perspective in modern “software engineers” is just another proof of the infancy of our discipline: we have to address our shortsighted arrogance as soon as possible.

                                          1. 1

                                            We’re talking about two different things. How mature a field is, is not a major factor in regulation. Yes, I agree with your general attitude that things get better over time and we may not be at that point. But we’re talking about government regulating the supply of software engineers. That decision has more to do with public interests than with how good software can be.

                                            1. 1

                                              That decision has more to do with public interests than with how good software can be.

                                              I’m not sure if I agree.

                                              In my own opinion, current mainstream software is so primitive that anybody could successfully disrupt it.

                                              So I agree that software engineers should feel much more politically responsible for their own work, but I’m not sure we can afford to disincentivize people from reinventing the wheel, because our current wheels are triangular.

                                              And… I’m completely self-taught.

                                2. 3

                                  This is the best thing about scrum/agile imo. Getting someone higher in the food chain to gatekeep what comes into development and prioritize what is actually needed is a huge benefit to every developer, whether you realize it or not.

                                  While I agree with the idea of this, you did point out that this works well with mediocre teams and, IME, this gatekeeping is destructive when you have a mediocre gatekeeper. I’ve been in multiple teams where priorities shift every week because whoever is supposed to have a vision has none, etc. I’m not saying scrum is bad (I am not a big fan of it) but just that if you’re explicitly targeting mediocre groups, partitioning of responsibility like this requires someone up top who is not mediocre. Again, IME.

                                  1. 2

                                    Absolutely, and the main benefit for development is the shift of blame and responsibility to that higher level; again, if done right. Ie. there has to be a “paper trail” to reflect the churn. This is where Jira (or whatever ticketing system) helps, showing/proving scope change to anyone who cares to look.

                                    Any organization that requires this level of CYA (cover your ass) is not worth contributing to. Leeching off of, sure :)

                                    1. 2

                                      So are you saying that scrum is good or that scrum is good in an organization that you want to leech off of?

                                      1. 1

                                        I was referring to the case the gp proposed, where the gatekeepers themselves are mediocre and/or incompetent; in case scapegoats are sought, the agile artifacts can be used to effectively shield development, IF they’re available. In this type of pathological organization, leeching may be the best tactic, IMO. Sorry that wasn’t clear.

                                  2. 3

                                    I’m in favour of having a product owner.

                                    XP had something one step better, the “Onsite Customer”: you could get up from your desk and go ask the guy with the gold what he’d pay more gold for, and how much.

                                    A product owner is a proxy for that (and prone to all the ills proxies are prone to).

                                    Where I note things go very wrong is when the product owner’s ego inflates to thinking he is perhaps project manager, and then team lead as well, and then technical lead, rolled into one god-like package…. Trouble is brewing.

                                    Where a product owner can be very useful is in deciding on trade offs.

                                    All engineering is about trade offs. I can always spec a larger machine, a more expensive part, invest in decades of algorithm research… make this bigger or that smaller…

                                    • But what will a real live gold paying customer pay for?
                                    • What will they pay more for? This or That? And why? And how much confidence do you have? Educated guess? Or hard figures? (ps: I don’t sneer at educated guesses, they can be the best one has available… but it gives a clue to the level of risk to know it’s one.)
                                    • What will create the most recurring revenue soonest?
                                    • What do the customers in the field care about?
                                    • How are they using this system?
                                    • What is the roadmap we’re on? Some trade offs I will make in delivering today, will block some forks in the road tomorrow.

                                    Then there is that sadly misguided notion: technical debt.

                                    If he is wearing a project manager hat, there is no tomorrow, there is only The Deadline, a project never has to pay back debt to be declared a success.

                                    If he is wearing a customers hat, there is no technical debt, if it works, ship it!

                                    Since he never looks at the code….. he never sees what monsters he is spawning.

                                    The other blind spot a Product Owner has is about what is possible. He can only see what the customers ask for, and what the competition has, or the odd gap in our current offering.

                                    He cannot know what is now technologically feasible. He cannot know what is technologically desirable. So engineers need wriggle room to show him what can or should be done.

                                    But given all that, a good product owner is probably worth his weight in gold. A Bad One will sink the project without any trace, beyond a slick of burnt-out and broken engineers.

                                1. 11

                                  It’s not true that no-one thought the early internet was rubbish. I did, and a lot of my peers did too. We just saw a slow and clunky technology filled with problems and didn’t have the imagination to see further. Strikes me that this is exactly like blockchain. Also, the title talks about Bitcoin, but then discusses blockchain, which is confusing. It’s like dismissing Yahoo, and then dismissing the internet because of it. Very odd.

                                  1. 12

                                    Email was already faster than physical post in 1992 when I got on the internet as a student.

                                    Mailing lists and Usenet presented an awesome opportunity for interaction with people all over the world.

                                    Bitcoin purports to be a better payment system. I can go online now, find a widget on Alibaba, pay for it with my credit card and get it delivered in a week or so. In what way does BTC improve on this scenario?

                                    1. 2

                                      Bitcoin purports to be a better payment system.

                                      Bitcoin is a technology. Many people have worked on it for various reasons, and it’s used by many people for various purposes. It doesn’t make sense to talk about its purport as if that were a single unified thing.

                                      At least for now it’s an alternate payment system, with its own pros and cons.

                                      Cryptocurrencies are still actively iterating different ideas. Many obscure ideas are never tried for lack of a network effect. Bitcoin and its brethren are young technology. I don’t think we can truly understand its potential until people have finished experimenting with it. That day hasn’t come.

                                      I think there is an innate human tendency to rush to judgment, to reduce the new to the seen before. When we do so, I think we miss out on the potential of what we judge. This is particularly true for young technology, where the potential is usually the most important aspect.

                                        Email was faster than physical post in 1992, but without popular usage it lacked general utility. In hindsight, it all seems so obvious, however.

                                      1. 9

                                        I wrote:

                                        Bitcoin purports to be a better payment system.

                                        You write:

                                        Bitcoin is a technology. Many people have worked on it for various reasons, and it’s used by many people for various purposes. It doesn’t make sense to talk about its purport as if that were a single unified thing.

                                        I’m going off the whitepaper here:

                                        Bitcoin: A Peer-to-Peer Electronic Cash System

                                        Abstract. A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution.

                                        I’ve been following the cryptocurrency space since I first installed a miner on my crappy laptop and got my first 0.001 BTC from a faucet, and the discussion has overwhelmingly been about Bitcoin as a payment system, or the value of the token itself, or how the increasing value of the token will enable societal change. Other applications, such as colored coins, or the beginnings of the Lightning Network, have never “hit it off” in the same way.

                                        1. 1

                                          Hmm, I’m not sure how that abstract is supposed to show how bitcoin purports to be a “better” payment system, just that it was originally envisioned as a payment system.

                                          Anyway, since then the technology presented in that paper has been put to other uses besides payments. Notarization, decentralized storage and decentralized computation are some examples. A technology is more than the intention of an original inventor.

                                          Other applications, such as colored coins, or the beginnings of the Lightning Network, have never “hit it off” in the same way.

                                          Evaluating the bitcoin technology, if that’s what we’re discussing, requires more than looking at just the bitcoin network historically. It requires looking at other cryptocurrencies, which run under similar principles. It also requires that we understand how the bitcoin network itself might improve in the future. It doesn’t make sense to write off bitcoin technology simply for slow transaction times when there remains a chance that the community will fix it in time, or when there are alternatives with faster transaction times.

                                          Besides that, there are the unthought-of uses that the technology may have in the future. And even ideas that people have had that have never been seriously tried. With all that in mind, the potential of bitcoin technology can’t really be said to be something we can grasp with much certainty. We will only understand it fully with experimentation and time.

                                          1. 4

                                            Notarization, decentralized storage

                                            There was quite a bit of tech predating Bitcoin that used hashchains with signatures or distributed checking. I just noted some here. So, people can build on tech like that, tech like whatever counts as a blockchain, tech like Bitcoin’s, and so on. Many options without jumping on the “blockchain” bandwagon.
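                                            For anyone unfamiliar with the pre-Bitcoin building block being referred to: a hash chain is just each record committing to everything before it. A minimal Go sketch (illustrative records; Bitcoin adds proof-of-work and a consensus protocol on top of this):

                                            ```go
                                            package main

                                            import (
                                                "crypto/sha256"
                                                "fmt"
                                            )

                                            func main() {
                                                entries := []string{"alice->bob: 5", "bob->carol: 2", "carol->alice: 1"}

                                                prev := make([]byte, sha256.Size) // genesis link: all zeroes
                                                for _, e := range entries {
                                                    h := sha256.New()
                                                    h.Write(prev)      // commit to the previous entry...
                                                    h.Write([]byte(e)) // ...and to the record itself
                                                    prev = h.Sum(nil)
                                                    // Tampering with any earlier record changes every later hash.
                                                    fmt.Printf("%x  %s\n", prev[:8], e)
                                                }
                                            }
                                            ```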

                                            1. 1

                                              Well the advantage of a cryptocurrency blockchain vs the ones you cite is that:

                                              • you have a shared, “trustless”, appendable database including an ability to resolve version conflicts
                                              • the people who provide this database are compensated for doing so as part of the protocol

                                              A cryptocurrency blockchain has drawbacks, sure, but it’s not like it doesn’t bring anything to the table.

                                            2. 3

                                              Unfortunately, what you said can be applied to every emerging tech out there. See VR and AR. The difference is that VR and AR have found enterprise-y niches like furniture preview in your home or gaming. Likewise, cryptocurrency has one main use case, which is to act as a speculative tool for investors. Now, cryptocurrency’s main use case is becoming threatened by regulation on a national level (see China, South Korea). Naturally, its practicality is being called into question. No one can predict the future and say with 100% certainty that X tech will become the next internet. But what we’re saying is that the tech did not live up to its hype and it’s pointless to continue speculating until blockchain has found real use cases.

                                              1. 1

                                                Unfortunately, what you said can be applied to every emerging tech out there.

                                                Yes, probably.

                                                The difference is that VR and AR have found enterprise-y niches like furniture preview in your home or gaming.

                                                Personally I’m skeptical that furniture preview and gaming truly explore the limits of what these technologies can do for us.

                                                Likewise, cryptocurrency has one main use case, which is to act as a speculative tool for investors.

                                                I mean, right now you can send money electronically with it.

                                                Now, cryptocurrency’s main use case is becoming threatened by regulation on a national level (see China, South Korea).

                                                You seem to be saying that regulation is going to happen everywhere. How could you know that?

                                                No one can predict the future and say with 100% certainty that X tech will become the next internet.

                                                I’m not talking about the difference between 99% certainty and 100% certainty. My argument is that we don’t understand the technology because we haven’t finished experimenting with it, and it’s through experimentation that we learn about it.

                                                But what we’re saying is that the tech did not live up to its hype

                                                The life of new technology isn’t in its hype; it’s in its potential, something which I think we haven’t uncovered. There are tons of crazy ideas out there that have never even seen a mature implementation: programs running off prediction markets, programmable organizations, and decentralized lambda, storage, and compute.

                                                it’s pointless to continue speculating until blockchain has found real use cases.

                                                Not sure what you mean by speculating - financially speculating? I’m not advocating for that. Perhaps you mean speculating in the sense of theorizing - in that case I think there is value in that since the “real use cases” that you are demanding only get discovered through experiment, which is driven by speculation.

                                        2. 1

                                          And what if we now shift the debate to blockchain as a whole, and not just Bitcoin?

                                      1. 7

                                        TLDR: Author used several bad IDEs, which crashed to the point of rebooting, so that restarting after a reboot also took a long while.

                                        Well, now I really want to know on what kind of potato the author was running which IDEs. JetBrains IDEA is not lightweight but I’d used it for 4 years on an x230 (ok, i7 with 16GB and SSD) without any problems. Multiple projects, multiple languages, multiple VMs running at the same time. Now I use QtCreator and it’s a joy.

                                        Not using IDEs is fine, but the concept of an IDE is absolutely not the problem here.

                                        1. 1

                                          Intellij crashes on me at times. Are you sure you haven’t had any problems?

                                          Edit: Don’t take my question as an endorsement of this article, which it is not. 🙂

                                          1. 1

                                            You never have no problems ;) But I’ve definitely had fewer problems than when using non-IDEs. (Sure, it crashed once in a while or got stuck at “indexing” - but it’s given me fewer problems than, say, my browser, IRC client, mail client or anything else. Maybe I was just lucky :P) But just opening many files and switching between them multiple times per minute, resplitting panes, etc. It’s doable in vim/tmux/etc. (for me) but there’s no “intuitive flow”, and forget about plain editors.

                                            1. 0

                                              wink wrote:

                                              … ;) …

                                          2. 1

                                            JetBrains has some quality IDEs. For me, PhpStorm has features that I haven’t found in other IDEs, let alone text editors. I’m guessing this is similar for other languages too. For example, IDEs that support and extend hot swapping for Java.

                                          1. 4

                                            We’re doing a lot of plumbing for a Magento 2 instance to work with our production facility whilst our UX guy works on our new designs.

                                            I also have to do some post on a comedy show I recorded a month ago. Probably going to finish my Fleetwood Mac remix and start some more tracks that won’t get finished.

                                            1. 2

                                              instance to work with our production facility whilst our UX guy works on our new designs.

                                              How’s Magento 2? I used to be a Magento 1 developer but moved on once they announced they would no longer support it.

                                              1. 3

                                                Mage 2 is a hefty beast. All of the core functionality of 1 is there, but written in much more modern ways. They introduce a lot of new stuff too. I’m in a team of two and we spend a lot of our time deep-diving the documentation (which is also a massive improvement over 1) just to figure out small things like how the front-end development works. I like the platform a lot but it’s massive. A lot to learn.

                                                1. 2

                                                  What are some differences/problems on the backend that you’ve faced when trying to get it running in production? We briefly worked on Mage 2 when it came out. I was surprised by the DI, interface, and factory structure of the modules. Also the fact that they replaced module registration from XML to PHP for a performance boost.

                                                  1. 2

                                                    We’re still months away from Mage2 in production, but our stack is almost identical to how we have Mage1 set up so I don’t fear that will be an issue. I look at training our marketing and customer service teams on mage2 as something that will be a problem to face when we get to that point.

                                                    They also didn’t replace module registration - you still have to declare your module and version in an XML file. It makes the registration.php file a bit confusing, but all that does is register your namespace with the autoloader.

                                                    The way Magento 2 has approached DI is incredible, I think. I don’t mind running a compile script given that I can just declare an interface for my injection and, boom, it takes care of finding the implementation when the time comes to instantiate the object.

                                                    All in all, the more work I do on Mage2 the more I appreciate the efforts by their team.

                                            1. 1

                                              I’m always confused by this fusion of technology and societal impact. A couple of things: technology is definitely a tool and thereby morally neutral in itself, although the morality changes with the people using it and the context. What does this mean? It’s common sense that people, NOT technology, should be at the forefront of solving social problems. As for ethical training for programmers: except for the few of us working on critical systems, no one wants ethical responsibility. Imagine if with every commit you presented a legally binding stamp certifying that you have verified the system to be free of failures, with any violation costing you a hefty fine or worse. Of course, there are benefits to this: as in most other professions, the supply of programmers would rapidly decrease and be controlled. In addition, code quality would likely go up.

                                              The reality is that in a huge majority of our field, if we make a mistake, no one bleeds. It’s a huge advantage of our line of work that if you make a mistake it’s usually not a big deal. On top of that, we’re as well paid as professions that don’t have the same benefit. I suspect this is also something most programmers enjoy.

                                              1. 10

                                                So much truth in that. That’s THE reason I always tell friends working for startups to stop the hype with GraphQL, React, Go and stuff, and advocate for a darn simple Laravel or Rails app.

                                                The message in the end is KISS: just take the first boring tech that will give you enough time to prove your business can be successful!

                                                1. 3

                                                  Funny aside: I wanted to contribute to a Laravel app, and damn is it annoying to have to set up a full LAMP stack to do PHP development.

                                                  It’s pretty interesting how PHP is probably the easiest language for getting production stuff running, but nowhere near as easy as Python/Ruby/JavaScript’s “run the included watch script and you’re good to go” for local development.

                                                  That said, if you know how to use the language, it’s definitely worth just using that. Use what you know, and you can always change things later.

                                                  1. 2

                                                    Laravel has made this process easier through Homestead; if you want to set up LAMP on your native OS it takes a while longer.

                                                    1. 1

                                                      It’s not particularly hard to setup a LAMP stack on your native OS using MAMP or XAMPP.

                                                    2. 1

                                                      Laravel Valet is a Mac-specific option that is more lightweight than the Homestead VM. Both make local Laravel development simpler.

                                                      1. 1

                                                        Yeah I was trying that out. Still felt pretty uncomfortable with needing that much software to have a dev server working, but the process was a lot better than doing old LAMP stack stuff in Windows.

                                                        I’m glad that PHP has improved so much over the years, and hope it can keep on improving.

                                                      2. 1

                                                        Isn’t php -S enough for development?

                                                      3. 4

                                                        Well, Go is also pretty simple, imo. In the “Go Programming Language” book, they already show you how to create a web server in the first introductory/overview chapter, since the ability is quite well integrated into the overall structure (and standard library) of the language. I, personally, would say, when one looks at all the real-life use cases of Go, that it has passed its hype phase.

                                                        1. 10

                                                          A web server is nice, but where’s the battle-tested ORM, routing engine, middleware system, view rendering engine, asset handling system, authentication layer, background job handler, countless other built-in parts, plus roughly a kajillion gems that can drop in to provide various functionality?

                                                            Go is interesting and has its place, but it’s got a long way to go before it’s competitive for getting a moderately complex site up and running fast.

                                                          1. 2

                                                             I’m gonna preface this by saying that I thoroughly dislike go for many reasons, but criticizing it for its “web stuff” capabilities seems really weird, since that’s like the one thing it’s good at. Of the things you mention:

                                                            ORM

                                                            Not really an ORM but equivalent functionality: https://golang.org/pkg/database/sql/

                                                            routing engine, middleware system

                                                            Provided in the standard library. https://golang.org/pkg/net/http/
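                                                             For illustration, roughly all you get out of the box (a minimal sketch, nothing framework-specific): ServeMux does path routing, and “middleware” is just wrapping one handler in another.

                                                             ```go
                                                             package main

                                                             import (
                                                                 "log"
                                                                 "net/http"
                                                             )

                                                             // Middleware is nothing more than a handler that wraps another handler.
                                                             func logged(next http.Handler) http.Handler {
                                                                 return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                                                                     log.Printf("%s %s", r.Method, r.URL.Path)
                                                                     next.ServeHTTP(w, r)
                                                                 })
                                                             }

                                                             func main() {
                                                                 mux := http.NewServeMux()
                                                                 mux.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
                                                                     w.Write([]byte("hello\n"))
                                                                 })
                                                                 log.Fatal(http.ListenAndServe(":8080", logged(mux)))
                                                             }
                                                             ```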

                                                            view rendering engine, asset handling system

                                                            I’ll admit Go’s handling of this is pretty bad.

                                                            authentication layer

                                                            Depends on what you need to do, but this is definitely covered by /x/crypto or /net/http.

                                                            background job handler

                                                            Goroutines are like the best part of go.

                                                            countless other built-in parts, plus roughly a kajillion gems that can drop in to provide various functionality?

                                                            Uh…I’m not totally clear on what else you think is missing for rendering a web app

                                                             Go is interesting and has its place

                                                             What do you think is go’s place? Cause I think go is awful for systems-level programming, and really its only niche is “web stuff” and command-line apps.

                                                            1. 4

                                                              Honestly, this seems to me like comparing an auto parts shop to a car dealership.

                                                               ActiveRecord has its warts, but the Go SQL built-ins are super bare-bones; they let you execute hand-written SQL queries and that’s about it. No comparison IMHO.

                                                              I will admit that I was not aware of Go’s ServeMux, which is a bit better than I thought. I only skimmed the docs for it, but they’re about a page long; the manual for the Rails router is at least 20 times longer. I sure don’t see a standardized middleware system, or where you would even put one, given how close to the bare metal the web APIs are.

                                                              And comparing /x/crypto to something like Devise… well, even the parts shop to car dealership analogy is woefully inadequate. There’s a huge number of ways to do web security wrong. Throwing Devise into a Rails app gets you automatic best practice implementations of almost everything. Pointing somebody at a pack of implementations of bare crypto algorithms and telling them to roll their own everything… that’s a security disaster waiting to happen.

                                                              And yeah, goroutines are nice. Until you want to implement a job worker on another computer, or have some record of what jobs are running and when, automatic retries with exponential backoff, failure logging, etc.

                                                              I could start writing about some of the many other things, but honestly, just read through the Rails guide for an example of the kinds of things a good web framework should do. May the gods of code spare me from having to rewrite all of that stuff from scratch in another language to build a webapp, or from having to maintain a webapp where some other developer rolled all of that stuff from scratch and I have to figure it out.

                                                              1. 3

                                                                I use rails and go for several things, and this is a grossly inaccurate comparison.

                                                                The sql packages are not even close to the query generation and object mapping in rails, and there’s nothing close to that level of functionality on GitHub.

                                                                The standard library routing has no support for parameter extraction, nor for separating HTTP methods. You can put together something useful by combining third-party stuff, but it’s not orthogonal to middleware, so you have to glue them together or pick middlewares designed for your router.
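
                                                                Roughly what that looks like with only the standard library (a sketch; the /users/ route is made up): method checks and parameter extraction happen by hand inside the handler:

                                                                    package main

                                                                    import (
                                                                    	"net/http"
                                                                    	"strings"
                                                                    )

                                                                    func userHandler(w http.ResponseWriter, r *http.Request) {
                                                                    	// The mux only matched the /users/ prefix; the rest is manual.
                                                                    	if r.Method != http.MethodGet {
                                                                    		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
                                                                    		return
                                                                    	}
                                                                    	// Extract the ID from a path like /users/42 ourselves.
                                                                    	id := strings.TrimPrefix(r.URL.Path, "/users/")
                                                                    	if id == "" || strings.Contains(id, "/") {
                                                                    		http.NotFound(w, r)
                                                                    		return
                                                                    	}
                                                                    	w.Write([]byte("user " + id))
                                                                    }

                                                                    func main() {
                                                                    	http.HandleFunc("/users/", userHandler)
                                                                    	http.ListenAndServe(":8080", nil)
                                                                    }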

                                                                For authentication, rails has straightforward ways to check auth before running an action; go has bits you can put together yourself and hope you got it right.

                                                                Goroutines are great, but they are not a background job handler. Failure handling, retries, logging job state, etc. are all solved easily in Rails; in Go you build those yourself.
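
                                                                Even basic retry-with-backoff is plumbing you end up writing yourself; a minimal sketch (a real job system would also need persistence, visibility, and distribution):

                                                                    package main

                                                                    import (
                                                                    	"errors"
                                                                    	"log"
                                                                    	"time"
                                                                    )

                                                                    // retry runs job up to attempts times, doubling the
                                                                    // delay between failures (exponential backoff).
                                                                    func retry(attempts int, base time.Duration, job func() error) error {
                                                                    	var err error
                                                                    	for i := 0; i < attempts; i++ {
                                                                    		if err = job(); err == nil {
                                                                    			return nil
                                                                    		}
                                                                    		log.Printf("attempt %d failed: %v", i+1, err)
                                                                    		time.Sleep(base << uint(i))
                                                                    	}
                                                                    	return err
                                                                    }

                                                                    func main() {
                                                                    	err := retry(3, 100*time.Millisecond, func() error {
                                                                    		return errors.New("flaky downstream")
                                                                    	})
                                                                    	log.Println("final result:", err)
                                                                    }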

                                                            2. 7

                                                              Okay, if you say so, but it’s far from providing all those bricks the author is writing about.

                                                              I feel you’re totally missing the point of my comment and simply reacting to me categorizing Go as potentially hype.

                                                              1. 1

                                                                Maybe, I can’t say for sure. I am no webdev, and my interest in startups is nonexistent, maybe even negative. But what I understood you, and the author, to be saying was to reject the use of newer technologies because they are new.

                                                                Take for example the most highlighted passage from the article:

                                                                Building a product is not about using the latest and greatest technology.

                                                                All I wanted to point out, in regard to what you said, is that this doesn’t automatically mean rejecting any newer, maybe even better-fitting technology (and that Go isn’t necessarily a complicating factor). I’d agree that one shouldn’t go overboard and use some one-week-old language for everything, but that’s the same extreme as insisting on COBOL at the other end. And after all: from what I was told, business is all about “risk”, so isn’t this a good example of that?

                                                                1. 3

                                                                  Not exactly because they are new, but because they lack the bricks that Rails, for example, took years to build.

                                                                  A common confusion, too: Go is a language, not a framework (although it has nice primitives for the web).

                                                                  But I get your point and I tend to agree to some extent.

                                                            3. 2

                                                              React can be treated as a proven library, I think. And it speeds up UI development considerably. Of course, that’s only if you require a JS UI at all and static forms are not enough.

                                                              The “jQuery way” becomes painful really quickly, even with very simple UIs. Server-side things like Rails or Django are usually good enough and don’t need to be replaced with something more “modern”, but JS-side hype tech is born from pain and frustration, so I would not dismiss React, Webpack (Rails has integration for it now), or even fringe technologies like PureScript.

                                                              1. 2

                                                                Go is not hype, BUT for making CRUD webapps, rather than something that requires an application server, I’d agree. Go is a good choice when you start needing to proxy data, handle long-running connections, write a chat server, etc.

                                                              1. 9

                                                                  I’m working on a cockpit-style station for developers: for example, toggle switches to flip between staging and development, and push-to-deploy buttons.

                                                                1. 1

                                                                  That sounds fun. Do post a write-up and pics if you ever finish it. Matter of fact, it might make a nice entry to the next battlestations thread.

                                                                1. 2

                                                                    As others have already stated, in a professional sense the responsibility will lie with management. Still, reading this story leaves me frustrated. I do not understand why she had to run this experiment, or what she was expecting, since I doubt she doesn’t know her co-workers’ work ethic. Why did she write this post?

                                                                  1. 6

                                                                      Unfortunately, the reality is that most websites are not designed for slow connections. The same goes for users with outdated browsers, and for those who disable JavaScript. When devs tell managers that X doesn’t work in browser Y, or doesn’t have full compatibility, managers will see that it will take ZZZ hours to fix an issue that affects 2% of users.

                                                                    1. 3

                                                                        I don’t have much pity left for outdated browsers… though there are always exceptions.

                                                                    1. 3

                                                                        This I find more palatable than the other thread. I’m definitely for college courses that focus on software engineering, keeping just those CompSci courses that are considered valuable. Students should learn proper abstraction, interface design, verification strategies, debugging third-party libraries, the balancing act that is safety/security, how compilers can improve/break source code, the basics of distributed systems, and the importance of software configuration management. These come to mind.

                                                                        Early material on basic programming should be in an Associate’s degree, so they can get jobs quickly. Then their work experience and Bachelor’s studies fuse into a larger, overall lesson.

                                                                      1. 3

                                                                          This is exactly what a software engineering degree in Canada offers. In Canada, Software Engineering is a discipline for which you can get a Professional Engineering accreditation. This accreditation puts you on the same level as any other engineer (e.g., you legally can, and have a duty to, tell managers to back off if a decision leads to safety concerns).

                                                                          Unfortunately, verification strategies, in my experience, aren’t taught unless you look for them. There is more emphasis on project management, legal responsibilities, and the economics of software, as well as OOP design/architecture.

                                                                      1. 17

                                                                          My current mantra at work is “eliminate dependencies”. I think a lot of developers think something like: “Oh, I need to do X. There is a library for X. It’s free. Surely I will be better off using that library and those library authors are experts at X and I can focus on delivering business value”. For some values of X that’s a pretty reasonable assumption. I use lots of libraries every day. I use an open-source Kafka client that I could write myself but would probably never want to.

                                                                        In fact, the core competency of my team is serving web requests and we rely hugely on go and net/http lib to provide us with keepalive, transparent h2 support, a scalable epoll based network handling stack, etc.

                                                                          But a dependency is never free. There’s no such thing. A dependency is something you have to understand and debug. A library likely tries to accommodate a very generic set of use cases for maximum reusability, and it’s quite possible you need only a very small subset of that functionality. So writing those one or two functions yourself may well be a much better choice than the ongoing work of importing a 3rd-party framework: staying up to date, managing the risk of the framework disappearing, and auditing it to ensure it’s safe to run on your production servers. That framework may well do its job in a very non-optimal way because it has to work in so many different environments and use cases (running on Windows, or supporting Oracle, etc.).

                                                                        As an extreme example, it’s unlikely that the left-pad library was worth the risk.
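
                                                                          For scale, the whole of left-pad is a few lines in any language; a Go rendering, for instance:

                                                                              package main

                                                                              import "fmt"

                                                                              // leftPad prepends pad to s until s is at least length runes long.
                                                                              // This is the entire footprint of the infamous dependency.
                                                                              func leftPad(s string, length int, pad rune) string {
                                                                              	for len([]rune(s)) < length {
                                                                              		s = string(pad) + s
                                                                              	}
                                                                              	return s
                                                                              }

                                                                              func main() {
                                                                              	fmt.Println(leftPad("42", 5, '0')) // 00042
                                                                              }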

                                                                          Finally, and this is specific to my experience, a lot of 3rd-party services are just bad. I really hate having to tell someone that their site is down but there’s nothing I can do; I don’t like to pass the buck or throw up my hands and say: “Sorry, your site is down until $provider X fixes their stuff”.

                                                                        1. 13

                                                                          Surely I will be better off using that library and those library authors are experts at X

                                                                          Maybe it’s because I do JS, but lately I’ve been questioning this more and more. Cynical, but a lot of library authors aren’t experts at X. I’ve switched to trusting a small list of reputable Node/JS developers (which is vague and in my head) and evaluating the need for a dependency for the rest. Maybe not fair, but it saves me trouble.

                                                                          1. 4

                                                                            I reached the same conclusion after evaluating my company’s dependency tree and finding problems with most of the third party libraries our code depended on. Some of them are described here:

                                                                            https://kev.inburke.com/kevin/dont-use-sails-or-waterline/

                                                                            I focus more on areas… I don’t want to rewrite a bcrypt hasher, or a postgres driver. But I’m definitely going to rewrite an API client for your API. In some cases I’ll steal only the parts of a third party library that I need, put them in my source tree and remove the rest.

                                                                              I also found most uses of lodash to be totally unnecessary, only increasing the complexity of the code.

                                                                            1. 1

                                                                              I wonder if you’d be willing to share that list? I’m outside of the JS modules community. However merely publishing the list could cause drama, which I wouldn’t want.

                                                                              1. 5

                                                                                Like I said, it’s really vague and all in my head. It’s probably not even a list, just people I recognize from my 3-4 years doing Node. Sindre Sorhus, TJ, Dominic Tarr, Max Ogden, Mafintosh, among many others come to mind right now.

                                                                                I also give more “points” when the project is in a company’s GitHub, because then I can a) have some confidence more than one person looked at the code and b) learn about the company—they could be a top Node agency or respected in their field.

                                                                                  If you want to summarize my method, it’s really just “don’t install too many dependencies with little activity and made by random people”. Another point is that most people (including me) don’t live on GitHub the way prolific authors sometimes do, and end up ignoring or forgetting about project issues and pull requests.

                                                                            2. 1

                                                                                Dependencies are definitely a risk as sources of bugs, but I do not believe we should eliminate them. Instead of eliminating dependencies, you can structure your code in a way that makes switching out libraries/dependencies easy. Dependency injection would allow you to modify or replace the object that’s causing the issue.
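
                                                                                A minimal sketch of that idea in Go (all names here are made up): callers depend on an interface, and the concrete dependency is injected, so it can be swapped without touching them:

                                                                                    package main

                                                                                    import "fmt"

                                                                                    // Mailer is the seam: callers depend on this interface,
                                                                                    // not on any particular third-party client.
                                                                                    type Mailer interface {
                                                                                    	Send(to, body string) error
                                                                                    }

                                                                                    // smtpMailer would wrap some real library; it can be replaced
                                                                                    // (e.g. by a fake in tests) without changing Signup.
                                                                                    type smtpMailer struct{}

                                                                                    func (smtpMailer) Send(to, body string) error {
                                                                                    	fmt.Println("sending to", to)
                                                                                    	return nil
                                                                                    }

                                                                                    // Signup receives its dependency instead of constructing it.
                                                                                    func Signup(m Mailer, email string) error {
                                                                                    	return m.Send(email, "welcome!")
                                                                                    }

                                                                                    func main() {
                                                                                    	Signup(smtpMailer{}, "a@example.com")
                                                                                    }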

                                                                            1. 3

                                                                              If you read both of these papers, you get something profound:

                                                                              Notes on Postmodern Programming : http://www.mcs.vuw.ac.nz/comp/Publications/CS-TR-02-9.abs.html

                                                                              Design Beyond Human Abilities (by Richard Gabriel, author of Worse is Better): https://scholar.google.com/scholar?cluster=5397162041763930663&hl=en&as_sdt=0,5&sciodt=0,5

                                                                              The theme here is to give up on thinking of software as a well-formed and engineered crystal, i.e. a “modern” structure. I notice this tendency among programmers to want to make a perfect world within a single model that you understand. Everything is in Java in my IDE; everything is modelled with strong types in Haskell, code and data in a single uniform Lisp syntax, single language operating systems, etc.

                                                                              Through tremendous effort, you can make your software a crystal. But that betrays your small point of view more than anything. All you have to do is zoom out to the larger system and you’ll see that it’s wildly inconsistent – made of software in different languages, and of different ages.

                                                                              For some reason I feel the need to defend the word “post-modern”. Certain types of programmers are allergic to this term.

                                                                              Here it means something very concrete:

                                                                              “Modern” – we have a model of things, we understand it, and we can change it.

                                                                              “Postmodern” – there are multiple models of things – local models that fit specific circumstances. We don’t understand everything, and we can’t change some things either.

                                                                              Even though humans created everything in computing, no one person understands it all. For example, even when you make a single language OS like Mirage in OCaml, you still have an entire world of Xen and most likely the Linux kernel below you.

                                                                              Another example: a type system is literally a model, and this point seems beyond obvious to me, but lots of people don’t recognize it: you need different type systems for different applications. People seem to be searching for the ultimate type system, but it not only doesn’t exist, it’s mathematically impossible.

                                                                              1. 2

                                                                                I find this a refreshing perspective on software engineering, as it makes interesting points. I agree with the sentiment for large software systems. After a certain size, a single individual can only really have a mental model of a small portion of the overall system.

                                                                                But for mid-size to small systems I think this is incorrect. You can definitely have a grand narrative on mid-sized systems. For example, using “post-modern” techniques like design patterns. Most payment processing systems at banks have an architecture team which will gather requirements and create high-level (architecture) diagrams and mid-level diagrams (object diagrams). Essentially, the Software Requirements Specification allows us to translate the system into a grand narrative that can be understood visually. Additionally, this gets easier as you get closer to bare metal.

                                                                                1. 2

                                                                                  Maybe a typo, but to clarify: design patterns are a modern technique, not post-modern. They are a model within OOP, which is itself a modern idea, if you think that “everything is an object”. In the post-modern perspective, OOP is an appropriate language and abstraction for some systems.

                                                                                  I haven’t worked on payment processing systems, but they seem like large and long-lived systems. I’m sure there is decades-old code in almost all current production systems. I thought there were a lot of banks with systems in COBOL?

                                                                                  The point is that if you look from a large enough perspective, they will be composed of heterogeneous parts, each with different models.

                                                                                  I briefly worked at an insurance company when I was young, and that was definitely true. You had a data warehouse, probably on a mainframe, and they were connecting to it with Microsoft Access. If you think that Microsoft Access “was” the system, then you are sorely mistaken. There are decades of legacy code behind the scenes, and it absolutely affects the way you do your work. If you try to “wall off” or “abstract” the old system, you end up with poorly working and inefficient systems.

                                                                                  Based on my customer experience, payment systems don’t appear to be gracefully adapting to adversaries.

                                                                                  That’s not to say that modern techniques are bad. They work; they’re just limited to a domain. If you read the first article, the authors explicitly make the point that post-modern techniques encompass modern ones. They just pick and choose whatever is useful, rather than claiming to have the one solution and grand narrative.

                                                                                2. 2

                                                                                  a type system is literally a model,

                                                                                  Eh? Yes, you may choose to use it to model things… but I think you’re on a pretty sticky wicket as soon as you try.

                                                                                  Type systems to me are purely axiomatic rules for a mathematical game.

                                                                                  Just symbols and declarations and definitions and references in a digraph with rules for what is allowed and what isn’t.

                                                                                  Model? Really?

                                                                                  Ye Olde 1980s-era Object-Oriented Analysis and Design texts were full of that…

                                                                                  But I rapidly gave up on that as totally disconnected from reality.

                                                                                  You work out what outputs you want from the inputs…. and then you play the type game to produce the simplest thing that will do what you want.

                                                                                  Model?

                                                                                  Nah.

                                                                                  Just a game with rules that you can pile up. But because the rules are consistent, you can make very big piles and know what they will do. (So long as you don’t cheat).

                                                                                  1. 2

                                                                                    When I say model, I mean it’s a map, which you can reason about at compile time, of what the program does at run time.

                                                                                    But the map is not the territory. I wrote this comment a few years ago and it still reflects my position: https://news.ycombinator.com/item?id=7913684

                                                                                    I don’t think the use of the word “model” is very controversial. On that page:

                                                                                    http://blog.metaobject.com/2014/06/the-safyness-of-static-typing.html

                                                                                    “I think it’s most helpful to consider a modern static type system (like those in ML or Haskell) to be a lightweight formal modelling tool integrated with the compiler.”

                                                                                    Models and maps are useful. But they can be applied inappropriately, and don’t apply in all situations. That’s all I’m saying, which I think should be relatively uncontroversial. And that’s the “postmodern” point of view, although I admit that this is perhaps a bad term because it inflames the debate with other connotations.

                                                                                    Wikipedia has a pretty good explanation:

                                                                                    The map–territory relation describes the relationship between an object and a representation of that object, as in the relation between a geographical territory and a map of it. Polish-American scientist and philosopher Alfred Korzybski remarked that “the map is not the territory” and that “the word is not the thing”, encapsulating his view that an abstraction derived from something, or a reaction to it, is not the thing itself. Korzybski held that many people do confuse maps with territories, that is, confuse models of reality with reality itself.

                                                                                    https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation

                                                                                    1. 2

                                                                                      Ye Olde Object Oriented Design folk were very keen on modelling reality by types…..

                                                                                      Aha! We have a Car in our problem domain, we’re going to have a Car type and a Wheel type and …. and these are going to be the symbols we draw on our maps to represent the territory.

                                                                                      Over the decades of dealing with large systems I have completely divorced myself from that mindset.

                                                                                      I’m only interested in invariants and state and constraints and control flow and dependencies.

                                                                                      Models of reality? I don’t see that in the lines of code. I just see the digraph of dependencies, encapsulated state, invariants enforced, and ultimately inputs and outputs.

                                                                                      Models of reality? If reality says X and class invariant says Y, I know all kinds of buggy things will happen if I violate the class invariant…

                                                                                      But if I violate reality…. It’s an enhancement request, not a bug.

                                                                                      Part of our difference is that you’re talking about static typing; I’m talking about typing.

                                                                                      Smalltalk / ruby / … are typed, just dynamically typed.

                                                                                      When you invoke a method on a variable, you can’t know until run time which implementation of that method will actually get invoked.

                                                                                      But you can know exactly the algorithm by which it decides which one will get invoked.

                                                                                      With a statically typed language like C, you can know which function at compile time, because you can evaluate its rules at compile time.

                                                                                      i.e. for both, it’s an axiomatic mathematical game with fixed rules.
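
                                                                                      To make the contrast concrete, a sketch in Go (neither of our example languages, just convenient), where both kinds of dispatch sit side by side:

                                                                                          package main

                                                                                          import "fmt"

                                                                                          type Animal interface{ Sound() string }

                                                                                          type Dog struct{}
                                                                                          type Cat struct{}

                                                                                          func (Dog) Sound() string { return "woof" }
                                                                                          func (Cat) Sound() string { return "meow" }

                                                                                          func main() {
                                                                                          	// Static: the target of this call is fixed at compile time.
                                                                                          	fmt.Println(Dog{}.Sound())

                                                                                          	// Dynamic: which Sound runs is decided at run time, but by a
                                                                                          	// fixed, knowable rule -- the concrete type in the interface.
                                                                                          	animals := []Animal{Dog{}, Cat{}}
                                                                                          	for _, a := range animals {
                                                                                          		fmt.Println(a.Sound())
                                                                                          	}
                                                                                          }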

                                                                                      If you show me a class “Car”, I ignore completely the similarity of the word “Car” to the name of the thing I drive.

                                                                                      I look at its state, its methods, its invariants, its dependencies.

                                                                                      I might after a while, as an afterthought, mutter that it’s badly named…

                                                                                      To me we only map to some semblance of reality at a feature level, not a type.

                                                                                      1. 2

                                                                                        Yeah, I know what you are saying – there was this naive way of thinking: take the nouns and verbs in your domain and try to “model” reality with objects. That is not how I think either, but it’s not what I meant by the word “model”. I meant model in a more mathematical sense, like “formal modelling”.

                                                                                        I think of that as naive top-down design. I tend to work from the bottom up: write the simplest thing that does what you need. Then, does it have the structure of an object? Factor it out. I don’t try to start with the classes; if you do, you get a structure that doesn’t match what your code actually does.

                                                                                1. 6

                                                                                    Web development is in a shabby state. I find it displeasing to work on many web applications. The reason for this, I think, is that frameworks like Rails set the barrier to entry/learning curve too low. What I mean specifically is that, in my experience, web dev projects do not require well-thought-out design decisions. Instead, all you need is a couple of individuals out of a bootcamp and maybe a senior installing gems/extensions with a couple of mods. This will allow most projects to fulfill their requirements, and in maybe 4-5 years a new website will be built to replace it.

                                                                                  1. 4

                                                                                      I agree, even though I am new to the industry (2-3 years) and work on frontend. The low barrier doesn’t just apply to the actual web programmers (fairly low but not as bad as it seems); almost every aspect of the project’s planning, resource management, and execution is done rushed and with little thought for maintenance (to be “agile”). It doesn’t matter, though, because 3-4 years later it’ll be rewritten and your old code will effectively cease to exist.

                                                                                    That said, I’d rather work on a Rails project with fellow newbies than a Node/React one which becomes an absolute mess because of the lack of convention.

                                                                                    1. 4

                                                                                      Web development is in a shabby state. I find it displeasing to work on many web applications

                                                                                      The entire platform seems smothered between lots of people changing careers (good on them!) and the alpha nerds of the platform parroting clichés about the “open web” ad nauseam. It’s like the hype of making money on the platform overrides all quality concerns.

                                                                                      Things are held together by duct tape, but we should be proud of this thing because we’ve worked so hard on making it somewhat performant on Haswell i7s. Sunk cost fallacy all the way down.

                                                                                      Also, I really resent the self-justification that the easiest platform for users is the one that is the most important. This cedes control to people who don’t know any better. We should be framing how users use technology, and I’ll be the first to admit we’ve never prized accessibility as much as we should have.

                                                                                      1. 2

                                                                                        I was discussing this with two people earlier in the week, in the context of visiting a local code school’s graduation showcase.

                                                                                        All three of us had “grown up” with the web. We remembered when CSS came to be, when DHTML was still a thing, and when JavaScript was only used to make the website snow during December. None of us could possibly imagine being in the shoes of someone learning the web now. All of us were trying to get a sampling of that from the various code school grads. Having a gradual history of the technology in our heads, we all felt it was easier to navigate new technologies as they come about, and to not let the new shiny distract from core software engineering principles.

                                                                                        On one hand, I definitely want more people to be able to code, to understand the digital world, and to grow intellectually or professionally. But when the most experienced fellow among us said “Anyone who can write 2 lines of JavaScript thinks they’re the god of the web,” I had a mental flash of agreement before opening my mouth to push back. The resulting conversation was around how and when someone matures out of that.

                                                                                        My current thinking revolves around the first time you realize you’ve added to a mess rather than having fixed it.

                                                                                        I meet a lot of ambitious junior developers who go into one of their early-career jobs with a platonic ideal of clean code in their heads. They behold the vast sprawl of legacy code around them and think “This is a swamp! A huge pile of mud! I guess I’ll be the one to build real structure and bring sanity to this place.”

                                                                                        Among those who are lucky and are given the chance to do that, the majority will fail, and the best among them will look back at what they built and see that all of their scaffolding was just heaping more mud onto the pile. Then the healing can begin. Then they have some perspective on how the mess comes to be in the first place: well-intentioned people just like them.

                                                                                        1. 1

                                                                                          all you need is a couple of individuals out of a bootcamp and maybe a senior installing gems/extensions with a couple of mods. This will allow most projects to fulfill their requirements, and in maybe 4-5 years a new website will be built to replace it.

                                                                                          We, who care about quality and medium- to long-term maintenance, might not like it, but this is positive if you are on the other side of the table. A junior team can crank out something that works in short order.

                                                                                          1. 1

                                                                                            I’m still sad every time I think about Opa failing to gain mindshare. It really should have been the next Rails, and it would have advanced the state of web development marvellously. Let’s see what Elm and Phoenix can do.

                                                                                          1. 9

                                                                                            The new design is so awesome and 2.0 it doesn’t even render on the Wayback Machine.

                                                                                            Sadly, the usual go-to in such cases, archive.is, makes a pretty jumble of it too.

                                                                                            In the browser though, the new layout looks pretty neat. Compared with the previous design it gives you a quicker idea of how many different things the platform is capable of and looks less like a generic landing page.

                                                                                            1. 3

                                                                                              I was impressed they have a layout that works with NoScript on. Then it fails hard in the Wayback Machine. Harder than most sites I see from CompSci people. Any web developers know why it does that?

                                                                                              1. 8

                                                                                                Looks like this is just a specific bug in the Wayback Machine’s implementation. Usually the crawler downloads the assets and CSS, and I believe the server then rewrites all links to refer to the new locations. In this case, that didn’t happen properly, so the CSS isn’t loaded.

                                                                                                archive.is is a bit more interesting. It looks like their implementation moves all the CSS into the HTML as inline styling and serves it as one static file. But when they translate it, it seems they translate it for old browsers like IE8. The new design doesn’t support old browsers at all, so it looks jumbled when the browser sees CSS3 properties like flex columns. Their previous design used a CSS framework, which usually handles boring tasks like browser compatibility for you.

                                                                                                1. 1

                                                                                                  Thanks for the explanation. At least I know not to blame the Racket site now.