1. 9

    So the only alternatives to cash are a no-privacy, government-controlled totalitarian system, or cryptocurrencies? It’s a totally reductionist point of view. There is a huge solution space in between.

    In New Zealand, for example, we hardly use cash, but guess what: it’s not a dystopian nightmare! We simply make payments using our bank debit cards, or credit cards, or even Apple Pay. Seems to work fine.

    1. 6

      Bank debit cards, credit cards, and Apple Pay are absolutely vulnerable to being cut off because “you said something someone with power doesn’t like”. Or just by virtue of overzealous (maybe selectively overzealous?) fraud detection measures, which have actually prevented me from making purchases with my own debit and credit cards.

      1. 9

        Given a sufficiently dystopian regime, neither cash nor cryptocurrencies would help you, because in the secret-police torture dungeon you won’t have access to them and there won’t be anything you’d want to buy anyway.

        It’s also weird to me that people only ever raise this in the context of Wikileaks, when it’s been happening to, say, adult content for years and years. I know someone who did tech stuff for a comic artist who does a fair bit of NSFW stuff, for example; lots of trouble there finding hosting services, payment processors (for merchandise), etc. – just for drawing comics that sometimes have naked people and sex jokes in them. And that’s without getting into how LGBT resources, abortion resources, and other similar things often get classified as “adult content” and hit with the same broad banhammer as porn does.

        1. 0

          Sounds like that comic artist was creating art that people with power don’t like (or at least art close enough that the financial system’s bureaucracy can’t distinguish it). It would be good if there were a way for people to pay that artist without using channels that can be blocked because a bureaucracy decided not to allow it!

          1. 13

            You seem to be stuck on this narrative that it’s very specific “people with power” who are responsible for this. It’s not. It’s a reflection of a society-wide willingness to marginalize certain topics and groups. It happens when conservatives are in power, it happens when liberals are in power, it happens in democracies, it happens in dictatorships.

            In general, the things that are truly fundamentally wrong in modern society are not due to decisions made in smoky back rooms by a sinister cabal of “people with power” twirling their mustaches and cackling. They’re due to decisions made every day by ordinary people. I’d suggest that if you want real, lasting change, a necessary first step is recognizing and accepting this.

            1. 1

              Uhm, no, not quite. In most cases, it’s unwillingness to get yourself in trouble. The Soviet Union used lethal force against people peacefully protesting food price increases. Even in the ’80s, saying the wrong thing publicly could get you fired and forever locked out of any non-minimum-wage job. A number of Soviet rock musicians worked as janitors etc. for this reason, not because they lacked skills or education. They often had decent jobs until they lost them when their involvement in the underground music scene was discovered. Most ordinary people thought the ban was ridiculous, and the music was wildly popular. The ordinary people, however, had no power to do anything about the sinister people in the back rooms.

              1. 4

                The Soviet Union only lasted as long as it did because of the complicity of vast numbers of people. Even people who would – if offered the opportunity – have told you that they hated the system and wished it would end. But every day, they still did things and made choices which propped up and reinforced that system.

                This isn’t necessarily saying that ordinary Soviet citizens were wrong to make the choices they did. One of the perverse things about these situations is that something which seems like, or even is, the best and most rationally self-interested choice at the level of a specific individual can be a terrible, irrational, society-wrecking choice in aggregate.

                But it absolutely is the case that the “oh, you did something the people in power don’t like” style of narrative I was replying to is not useful. The things I was bringing up are not going to be fixed by replacing a handful of “people in power”. As Terry Pratchett pointed out a few times in his books, one problem with returning power to “The People” is you find out unpleasant truths about The People.

                  1. 2

                    I keep telling you the all-whites-are-privileged narrative is BS, and that plenty of them exist in oppressive environments. In the service sector (esp. retail), many are even effectively slaves. The people they serve come from all kinds of groups. Most write-ups on the subject are rants by people deep in the shit with little potential for positive benefit. This is one of the best articles I’ve seen on it. His background and writing skills made it much more interesting.

                    Thanks for the article! I’m definitely going to put it to good use. I know some folks in service positions that might find some inspiration in it to get better jobs.

          2. 4

            The first scenario seems far-fetched. The second is possible, but you usually have recourse, like calling the bank and getting them to unfreeze the account, or I think my bank now allows me to do it via the mobile app. Besides, cash has its own failure modes! It’s vulnerable to being lost or stolen. I still see a large solution space with different tradeoffs for different solutions.

            1. 6

              The first scenario seems far-fetched.

              You should look up how Wikileaks went down. It wasn’t an FBI, CIA, or SOCOM operation that many thought might be coming. They just pissed off powerful people who, on the banking side, decided to shut off their donations before they leaked on a powerful bank (BofA suspected). Those blocking donations were Visa, Mastercard, and PayPal, as I recall. Then Wikileaks withered and died.

              No speculation required. It already happened. Under the Patriot Act, it could’ve happened repeatedly without you knowing why, since the order would be classified with an NDA. The U.S. is a quasi-police state. Suspect the worst until federal and secret government powers are reined in via laws with teeth. Hell, the GAO said Congressional oversight didn’t even read their reports on NSA abuse. Congress also retroactively immunized some violators with later legislation. Assume the worst, since there’s plenty of reason to at this point.

              1. 3

                …it could’ve happened repeatedly without you knowing why since the order would be classified with NDA.

                You probably mean NSL? National Security Letter?

                Much, much worse than any NDA.

                1. 1

                  Well, any government sealing of what’s going on. It could be NDA, NSL, court order (esp FISA), etc.

                2. 3

                  I knew this was going to come up :) But WikiLeaks is an exceptional example, and anyway the proximate cause of its issues was excess corporate power, so it’s tangential to what I was commenting on.

                  Regardless, there is a multitude of solutions that would prevent this scenario, and jumping straight to cryptocurrencies is not required.

                  1. 1

                    That last part is true. The others are a maybe. For instance, people periodically run into PayPal freezes. For normal banks, I think people’s money is probably safer in them due to both muggers and civil forfeiture. Folks irreversibly lose cash more often than digital money, with higher damages. Unless we’re talking ACH, but basic security mitigates that.

                  2. 1

                    Pretty scary that a handful of companies control commerce. If Mastercard, AMEX, PayPal, and Visa decide not to work with you, you’re effectively cut off from a large portion of the market.

                    1. 1

                      True. I’ll add that online, losing just PayPal will cause a big loss because folks trust its escrow. AMEX you can usually ditch since (a) many shops don’t take it and (b) most AMEX users have an MC/Visa backup for that reason. Whereas losing MC or Visa is throwing your wallet out the window. Many people don’t even carry cash.

                      I still encourage everyone to keep cash on them in case cards go down at a store. Saved my butt and helped others many times.

                    2. 1

                      Suspect the worst until federal and secret government powers are reined in via laws with teeth.

                      Given many recent political events, I’m becoming doubtful that any number or combination of laws can rein in the mess we have now. It almost seems like the government is just too darn big to ever really get under control.

                      Though I’m also concerned that if we cut it back too hard, we may just give even more power to corporations, which aren’t that much better.

              1. 16

                I started as a programmer in the early 90s. During that time, the only Windows-based computer I ever owned was a dedicated one that ran Cubase for recording. Despite that, about 3 weeks ago I switched off of MacOS after 12 years and got a Surface Book 2 that I’ve been using with WSL2. There are some weird and hinky things with Windows 10 that bother me, but all in all, WSL2 is a better development experience than MacOS was.

                When I got the Surface Book, it was because I also need some software that made Linux a difficult choice. I told myself that if I didn’t need that software, I would be using Linux if I could find a good Linux laptop with a trackpad I can stand (I’m rather picky on the trackpad front). Now though, I’m not so sure.

                The workflow of being able to spin up different Linux environments really quickly and then throw them away has been a wonderful development experience. Windows itself isn’t, to my mind, really worse than MacOS; it’s just bad in different ways.

                At this point, Windows with WSL2 might end up being my primary development environment for a long time. Which strikes me as the weirdest thing I could say, because Windows was basically a giant unknown foreign entity to me. I’ve been a *nix developer for over 25 years and the idea of using Windows still seems foreign.

                ¯_(ツ)_/¯

                1. 6

                  I think part of it might be how Apple seems to have thrown away their investment into a platform for creators/hackers in favor of going all in for consumer-oriented platforms. They should’ve done both. They had the money to make their Mac OS the platform that all the best creators would want. They literally could’ve bought most of the major companies creators used, ensured their products were most optimized for Mac OS, made the best hardware that was simultaneously optimized for those companies’ products, and kept the creators coming. They’d have the lead on consumer stuff, creator stuff, and still plenty of money to throw at shareholders.

                  Instead, the company whose co-founder was great at making billions off of consumers totally wasted its opportunities with the creators that made it what it was. They might still turn it around if they can somehow connect the dots.

                  1. 3

                    Perhaps.

                    At this point I’ve disliked all the Macbook Pro hardware since 2015 and starting with Sierra, I found MacOS to be a constant series of crashes for me. So, they have a lot of ground to make up in my mind. Not that they seem to care.

                    1. 2

                      I don’t even like Apple and I still cannot upvote this enough. The iPod was the beginning of the end of the Apple that was worth loving because it showed that there was far, far more money in making classy, expensive consumer goods than in making high quality tech tools.

                      1. 9

                        Sorry, but the iPod was fantastic. It did its job really very well, and it felt well-made and of high quality.

                        The reason the newer MacBooks suck so bad is because they are not fit for purpose for many of our peers. A laptop that ceases to function when a speck of dust wriggles its way into the keyboard is not fit for purpose.

                        A well-designed product needs to do very well the thing that it was designed to do. The iPod did that. The newer MacBooks do not.

                      2. 1

                        They literally could’ve bought most of the major companies creators used

                        Oh god please no. Why and how would this have been a good thing!?

                        1. 2

                          It was the creators platform for a while. High-end software for pictures, audio, and video was targeted to it. Them building on the momentum they had with that audience would’ve kept money coming in with fewer reasons for people to use Windows. Instead, they were pissing off their own customers with the Windows PC getting better for those same customers all the time.

                          It doesn’t look like it rebounded much from Microsoft doing the Metro disaster or turning Windows into a surveillance platform, either. Some folks would like an alternative to Windows with as much usability, high-quality apps, and hardware drivers designed for it.

                          1. 2

                            I thought you must’ve been playing devil’s advocate. You’re saying you want the purveyor of an ecosystem to buy up all the players in its ecosystem? I don’t think that’s ever been Apple’s forte, and if they messed up the platform they would’ve definitely messed up a play like this. I think there are other, significantly more prudent ways to invest in your ecosystem.

                            1. 2

                              I’m basically saying they should’ve invested in their creator-oriented platform in all the ways big companies do. That includes acquiring good brands in various segments, followed up with ensuring they work great on Mac and integrate well with other apps.

                              They can otherwise let them do their thing. Apple isn’t good at that. It would’ve helped them, though.

                            2. 1

                              Apple might have dominated the creators market, but maybe they know something the rest of us don’t. The reason they’re building $6,000 computers and consumer-oriented laptops might be that that’s where the money is right now. https://www.youtube.com/watch?v=KRBIH0CA7ZU

                        2. 2

                          For reference (since I was interested and looked it up), wikipedia says the SurfaceBook 2 has:

                          1. 1

                            https://shru.gg/r might be of service

                          1. 2

                            I work with distributed systems, although not at a scale like Amazon’s. Like all CS fields, there’s a big difference between practical distributed systems and the academic topics you read about. When working with distributed systems, what you’re really concerned about is bugs (how do we debug a distributed system?) and timing issues, as well as typical infrastructure concerns: high availability, security, reliability, etc.

                            My advice is to talk to people who work with these systems everyday and see if you like what it’s like.

                            1. 1

                              Great idea. I would expand on your suggestion and say that in all systems we’re concerned about bugs. How would I debug a system call, for example? I think that’s a really great question to ask in every area of computation, because the answer leads to more depth of knowledge. Thanks!

                            1. 20

                              My advice, which is worth every penny you pay for it:

                              Don’t maintain a test environment. Rather, write, maintain and use code to build an all-new copy of production, making specific changes. If you use puppet, chef, ansible or such to set up production, use the same, and if you have a database, restore the latest backup and perhaps delete records.

                              The specific changes may include deleting 99% of the users or other records, using the smallest possible VM instances if you’re on a public cloud, and should include removing the ability to send mail, but it ought to be a faithful copy by default. Including all the data has drawbacks, including only 1% has drawbacks, I’ve suffered both, pick your poison.

                              Don’t let them diverge. Recreate the copy every week, or maybe even every night, automatically.
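                              Sketched in Python, with a hypothetical backup path, database name, and table names; DRY_RUN collects the commands for review instead of executing them:

```python
import subprocess

PROD_BACKUP = "/backups/prod-latest.sql.gz"  # hypothetical path
TEST_DB = "app_test"                         # hypothetical database name
DRY_RUN = True

def run(cmd, log):
    # Record every command; only execute when DRY_RUN is off.
    log.append(cmd)
    if not DRY_RUN:
        subprocess.run(cmd, shell=True, check=True)

def rebuild():
    log = []
    # Recreate the database from the latest production backup.
    run(f"dropdb --if-exists {TEST_DB}", log)
    run(f"createdb {TEST_DB}", log)
    run(f"gunzip -c {PROD_BACKUP} | psql {TEST_DB}", log)
    # The specific changes: keep ~1% of users, make sure no mail goes out.
    run(f'psql {TEST_DB} -c "DELETE FROM users WHERE id % 100 <> 0"', log)
    run(f'psql {TEST_DB} -c "UPDATE app_config SET smtp_host = NULL"', log)
    return log
```

                              Run something like that from cron nightly and the copy can never drift far from production.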

                              1. 9

                                Seconding this.

                                One of the nice things about having “staging” being basically a hot-standby of production is that, in a pinch, you can cut over to serve from it if you need to. Additionally, the act of getting things organized to provision that system will usually help you spot issues with your existing production deployment–and if you can’t rebuild prod from a script automatically, you have a ticking timebomb on your hands.

                                As far as database stuff goes, use the database backups from prod (hopefully taken every night) and perhaps run them through an anonymizing ETL to do things like scramble sensitive customer data and names. You can’t beat the shape (and issues) of real data for testing purposes.
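                                The anonymizing step can be tiny. A sketch in Python, assuming rows come through the ETL as dicts with (hypothetical) name/email fields:

```python
import hashlib

def anonymize_row(row):
    """Scramble sensitive fields into stable but meaningless values,
    so uniqueness constraints and joins still behave like real data."""
    out = dict(row)
    if out.get("name"):
        out["name"] = "user-" + hashlib.sha256(out["name"].encode()).hexdigest()[:12]
    if out.get("email"):
        digest = hashlib.sha256(out["email"].encode()).hexdigest()[:12]
        out["email"] = digest + "@example.com"
    return out
```

                                Hashing (rather than random values) keeps the scrambling deterministic, so the same customer maps to the same fake identity across tables.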

                                1. 2

                                  Pardon a soapbox digression: Friendlysock is big improvement over your previous persona. Merci.

                                  1. 1

                                    It’s not a bad idea to make use of a secondary by having it be available to tests. Though I would argue instead for multiple availability zones and auto-scaling groups if you want production to be highly available. Having staging as a secondary makes it difficult for certain databases like Couchbase to have automatic failover, since the data is not in sync, and in both cases you’re gonna have to spin up new servers anyway.

                                  2. 8

                                    We basically do this. Our production DB (and other production datastores) is restored every hour, so when a developer/tester runs our code they can specify --db=hourly and it will talk to the hourly copy (actually we do this through ENV variables, but you can override that with a CLI option). We do the same for daily. We don’t have a weekly.

                                    Most of our development happens in daily. Our development rarely needs to live past a day, as our changes tend to be pretty small these days. If we have some long-lived branch that needs its own DB to play in (like a huge, long-lasting DB change or something), we spin out a copy of daily just for that purpose; we limit it to one, and it’s called dev.

                                    All of our debugging and user issue fixing happens in hourly. It’s very rare that a user bug gets to us in < 1hr that can’t be reproduced easily. When that happens we usually just wait for the next hour tick to happen, to make sure it’s still not reproducible before closing.

                                    It makes life very nice to do this. We get to debug and troubleshoot in what is essentially a live environment, with real data, without caring if we break it badly (since it’s just an at most 1 hour old copy of production, and will automatically get rebuilt every hour of every day).

                                    Plus this means all of our dev and test systems have the same security and access controls as production: if we are re-building them EVERY HOUR, they need to be identical to production.

                                    Also this is all automated, and is restored from our near-term backup(s). So we know our backups work every single hour of every day. This does mean keeping your near-term backups very close to production, since they’re tied so tightly to our development workflow. We do of course also do longer-term backups that are just bit-for-bit copies of the near-term frozen at a particular time (i.e. daily, weekly, monthly).

                                    Overall, definitely do this and make your development life lazy.
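                                    The ENV-variable default with a CLI override is a one-liner with argparse. A sketch, where "APP_DB" is a hypothetical variable name:

```python
import argparse
import os

def pick_db(argv=None):
    """Resolve which DB copy to talk to: the --db flag wins, then the
    APP_DB environment variable, then the 'daily' default."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--db", choices=["hourly", "daily", "dev"],
                        default=os.environ.get("APP_DB", "daily"))
    return parser.parse_args(argv).db
```

                                    That way the team-wide default lives in the environment and any one invocation can point elsewhere.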

                                    1. 1

                                      I’m sorry, what is the distinction you’re making that makes this not a test environment? The syncing databases?

                                      1. 2

                                        If I understand correctly, the point is that this entire environment, infrastructure included, is effectively ephemeral. It is not a persistent set of servers with a managed set of data, instead, it’s a stand by copy of production recreated every week, or day. Thus, it’s less of a classic environment and more like a temporary copy. (That is always available.)

                                        1. 4

                                          Yes, precisely.

                                          OP wants the test environment to be usable for testing, etc., all of which implies that for the unknown case that comes up next week, the test and production environments should be equivalent.

                                          One could say “well, we could just maintain both environments, and when we change one we’ll do the same change on the other”. I say that’s rubbish, doesn’t happen, sooner or later the test environment has unrealistic data and significant but unknown divergences. The way to get equivalence is to force the two to be the same, so that

                                          • quick hacks done during testing get wiped and replaced by a faithful copy of production every night or sunday
                                          • mistakes don’t live forever and slowly increase divergence
                                          • data is realistic by default and every difference is a conscious decision
                                          • people trust that the test environment is usable for testing

                                          Put differently, the distinction is not the noun (“environment”) but the verb (“maintain” vs “regenerate”).

                                          1. 2

                                            Ah, okay. That’s an interesting distinction you make – I take it for granted that the entire infrastructure is generated with automation and hence can be created / destroyed at will.

                                            1. 2

                                              LOLWTFsomething. Even clueful teams fail a little now and then.

                                              Getting the big important database right seems particularly difficult. Nowhere I’ve worked and nowhere I’ve heard details about was really able to tear down and set up the database without significant downtime.

                                    1. 3

                                      I’m curious to see why Rust is considering monads. Is the Rust community moving toward FP? The data abstraction FP offers does seem like a good fit for Rust, but on a conceptual basis, why would you need a monad in Rust when Rust is already imperative and can produce side effects?

                                      1. 7

                                        It doesn’t look like “Rust” is considering this. More like a few people who code in Rust wanting to expand it. Official response:

                                        “Please note that these are musings about possible language extensions that would let us express monads; this is not even at the stage of a proposal yet, and we haven’t all agreed that enabling this kind of code is actually useful. However, community sentiment is all over the map… reddit thread on this post, you can see a number of different opinions expressed there.

                                        I personally want to see examples of real Rust code that has a problem that this solves. I get monads, I like monads in Haskell, but I’m unconvinced that the additional complexity is worth it in Rust.” (Steve Klabnik)

                                        1. 4

                                          It doesn’t look like “Rust” is considering this.

                                          Exactly, it’s a personal musing of a member of the lang team. We want to have that, but it is definitely a personal mental exercise.

                                        2. 4

                                          If it remains a PLT musing, that’s fine, but I would be unhappy if this came to be implemented in Rust. It would add complexity to a language that’s already pushing its complexity budget. I don’t think that Rust needs to appeal to everyone: it’s okay if Haskell programmers don’t find their favorite abstractions (functors, monads, applicatives) in Rust, just as it’s okay if Java programmers don’t find theirs (classes, inheritance, etc.).

                                          1. 1

                                            Not to mention we can use different tools for the jobs they’re good at. At least one person here is mixing Haskell and Rust. Some others were doing that with Erlang. So, Rust steps in with its own style to handle anything that would force such people to drop down to C or jump through crazy hoops. They keep writing everything else in the preferred language.

                                            1. -1

                                              Maybe it would make sense to remove all that silly complexity and cut down on the constant feature additions, instead of blaming looming complexity on support for some of the most fundamental abstractions?

                                              Drop mandatory semicolons, replace generics’ <> with [], turn special indexing syntax into normal function calls, replace :: with ..

                                              There, removed loads of complexity that people experience daily.

                                              1. 5

                                                None of these even comes close to being a daily complexity for me: they’re just lexical syntactic concerns, not very high on my list of things that Rust needs to be wary of.

                                                1. 1

                                                  I could probably go on for hours about the semantics, but you have to start somewhere.

                                                  If even the basic syntax is needlessly complicated and you are fine with it, then it’s not a surprise that Rust’s complexity grows with every release. I think Rust’s development is too often focused on asking whether they can instead of asking whether they should.

                                          1. 1

                                            PHP requires that the application be initialized from scratch for each request, and this can really increase response latency. It is also a problem for JIT: how are generated code and tracing information preserved between requests? Usually everything starts from a blank state: source files are re-read for each request, and each function and class definition re-creates those functions and classes. How will this change with the addition of a JIT?

                                            1. 2

                                              source files are re-read for each request

                                              With an opcode cache like APC, you try to only re-read and re-parse the source files when they change (when their mtimes change, I presume).

                                              IME, if you turn on APC’s opcode cache, turn off debug mode, and change absolutely nothing else, WordPress running on Apache/mod_php used to become perceptibly quicker. (Note I haven’t checked this in at least 4 years.)

                                              1. 1

                                                Apache/mod_php is dog slow. I hope nobody is using it anymore… Better to use Apache MPM event + PHP-FPM; that is quite acceptable! :)

                                                1. 1

                                                  MPM event wasn’t officially considered stable yet back when I did this, and FPM wasn’t in the distro repos ;). The WordPress site was not the slow part of the whole product anyway. We had a much slower website alongside it, which the WP site was effectively a giant landing page for. Also, I had Varnish in front of the WP site, so it went fine.

                                                2. 1

                                                  As I remember, using these caches (there were plenty of them) was never easy or standardized, and now the documentation for APC says:

                                                  This extension is considered unmaintained and dead. However, the source code for this extension is still available within PECL GIT here: https://git.php.net/?p=pecl/caching/apc.git.

                                                  Alternatives to this extension are OPcache, APCu, Windows Cache for PHP and the Session Upload Progress API.

                                                  Adding JIT will probably require an intermediate-code/native-code cache that really works out of the box. A working bytecode cache was also the main selling point of the commercial Zend Platform/Zend Accelerator (renamed several times), and if mainline PHP ships a cache out of the box, it may reduce those sales.

                                                  1. 1

                                                    I remember APC only taking about an hour or so to set up, most of which was reading docs.

                                                    Fwiw this was somewhere around 4 to 6 years ago on a one off job. Never touched it since.

                                                3. 1

                                                  JIT will increase the bootstrap time for the first request, so it may actually hurt response latency there. It will, however, speed up long-running processes. As the author says, you’ll probably see no performance gains from JIT for web applications.

                                                  1. 1

                                                    But PHP’s current execution model does not have long-running processes. Only the interpreter/compiler is preserved as a long-running process; the application starts and terminates with each request, almost like CGI.

                                                1. 3

                                                  Many features that used to be enterprise-only are really starting to trickle down. Since CPU clock speed is basically plateauing, consumer computers are basically servers now. They brought up a good point: VFIO and IOMMU led to some cool new tech like Looking Glass, which allows people to use VMs without all the latency/virtualization issues. And in the future this trend will continue, where people will just skip the virtualization layer and go from userspace direct to hardware.

                                                  1. 1

                                                    It’s true. Yet there’s also a counter-trend where enterprise stuff adds extra risk or problems that people might want to dodge. The Management Engine was one. There’s a market for non-obvious backdoors in CPUs. All these complex features in CPUs are also leading to side channels and such. Some will go for less-complex CPUs or side-channel-free coprocessors.

                                                    So, mostly what you said, but I think there will steadily be a smaller flow away from enterprise-style features. Those people will want different tradeoffs.

                                                  1. 3

                                                    I am setting up a build environment for AWS CloudFormation/Lambda applications in VS Code. I’ve worked on all the pieces separately. I think I grok them enough to assemble it all together into a code-build-test-deploy project.

                                                    I’m starting off with a triple-language lambda – code that does the same thing in Python, C#, and F#. The cloud setup should call this code on a regular basis. It will poll a website for changes using a RESTful API. If it finds any, it’ll grab the new information and put it in a publicly-accessible S3 bucket.
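The change-detection core of that poller can be sketched independently of AWS. Everything here, names included, is hypothetical; the real Lambda would fetch the page with an HTTP client and write to S3 via boto3, persisting the digest between invocations (for example as S3 object metadata):

```python
import hashlib

def poll_once(body: bytes, last_digest):
    """Decide whether the polled payload changed since the last run.

    Returns (changed, digest); the Lambda would store `digest`
    somewhere durable between invocations.
    """
    digest = hashlib.sha256(body).hexdigest()
    return digest != last_digest, digest

# First poll: nothing stored yet, so it counts as a change.
changed, d1 = poll_once(b'{"version": 1}', None)

# Same payload on the next poll: no change, nothing to upload.
changed_again, _ = poll_once(b'{"version": 1}', d1)
```

Keeping the decision logic pure like this also makes it trivial to unit test outside the cloud, which helps with the “what does it mean to test a cloud architecture” problem.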

                                                    I was going to add Route 53 and CloudFront stuff to the build, but I’ve already developed some stuff using that for my first CF stack.

                                                    I want to say it’s tough, but it’s not. It is going quite slowly, though. I think the reason is that while I know a lot about the separate pieces, trying to assemble them into an AWS Cloud Architect-y kind of workflow is quite painful. There are a zillion little loosely coupled pieces all over the place. And aside from basic ATDD poking around, I’m not really sure what it means to test a cloud architecture.

                                                    There’s a lot of stuff to learn and I’m having fun. That’s the important part.

                                                    1. 1

                                                      I believe CloudFormation will let you have CI as part of a deployment for your Lambda functions if you change your workflow so the repo’s on AWS. Testing AWS offerings like Lambda is a sort of vendor lock-in, because there’s really no way to do it any other way.

                                                      1. 1

                                                        You are correct – and you may be anticipating where I’m going with this.

                                                        I’ve been writing what I consider to be “true” microservices for many years now: small, pure functional apps that run from the command line. By default they use streams, but any data sink should be fine for them. I don’t use a bus; instead I use various OS features for glue and overall testing.

                                                        In this environment, lambda really isn’t that big of a deal. I’m getting data via JSON instead of streams. There may be some event/interface work I need to do. That’s part of what I’m figuring out. I know AWS will keep an instance warm for a bit even if you’re not using it, so I need to think about setup vs. processing. Still, I think it’s going to be trivial.

                                                        I agree with the vendor lock-in concern. I think you can do it without that problem as long as you carefully pack your own picnic box. That is, functions with data streaming in and out that you programmatically join are one thing. Integrating your coding and CI/CD process with your cloud vendor is another thing entirely.

                                                        There’s the ever-present “video game” danger with AWS: lots of pretty screens with easy click-and-deploy cool stuff. I love the ease-of-use part, and I love the offerings. The danger is that you have to be aware of the linkages to Amazon you’re creating. Best to keep everything as code, everything in your own repo (GitHub for me), and view where the stuff runs as plug-and-play.

                                                        This, of course, means honking around with all the little details involved with coding, pipelines, and clouds. That sucks, but I’m okay with that. Great way to actually learn stuff instead of just poking around until it works.

                                                        1. 2

                                                          I’ve been working on a similar stack. To avoid any vendor lock-in, I have started using the LambdaProxy project, and now I have a ‘local’ binary which runs a plain HTTP server and is very useful for debugging.

                                                          I am also going to look into the CDK, because writing JSON/YAML for CloudFormation isn’t what I want to spend my time on.

                                                          1. 1

                                                            My belief is that a parameterized CF stack should work for a bunch of stuff. I already have one for static websites. I’m developing this lambda one. There are probably 3 or 4 others and then I’m done. The entire point is not to rely heavily on AWS, so at some point the more I’m doing CF/Lambda stuff the more I’m headed down the wrong road.

                                                            Once you’ve got deployment and execution in, the rest of it is just replicating continuation stuff, right? Sequencing, map-reduce, orchestration – maybe a couple of others. At that point I think you finish up the work with a bit of containerization and K8s scripting.

                                                            As an industry, we’re really not that far from true vendor/platform/OS/language-agnostic coding, where we only pay for storage and CPU cycles, ideally at market rates. We’re very, very close.

                                                    1. 3

                                                      I wish the author went more in-depth into how changing hardware will affect kernel space. A lot of our OS concepts are based on optimizing for slow, inexpensive drives. What about now, with NVM drives, and a future where servers could be just a bunch of NVDIMMs?

                                                      1. 1

                                                        No clue about NVDIMMs. I did submit an SSD-optimized filesystem here.

                                                      1. 7

                                                        I am beginning the hiring efforts for my reinsurance startup. I’m looking to hire a couple of remote Haskell developers within the next few months. Naturally this is time-consuming work, because it’s important to not just select for technical excellence, but also communication skills and professionalism. I’ve worked at too many companies where management are apathetic about infighting or the development of a “bro culture”, and that’s something I will be constantly working to avoid.

                                                        1. 2

                                                          Sounds cool. I am working on a model for an insurance company, and I constantly run into issues with state. Things like

                                                              if (settings == null) initializeSettings(withParametersOnlyValidForMyUsecase);

                                                          which means that running things in the wrong order means that the settings are wrong. Which sometimes matters, sometimes not. I figure working functionally would avoid many of these issues.
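A minimal Python sketch of the contrast (all names made up for illustration): the stateful version silently initializes defaults that are only valid for one use case, while the functional version takes settings as an explicit argument, so there is no hidden initialization order to get wrong:

```python
# Implicit global state: the result depends on what ran before.
settings = None

def initialize_settings(tax_rate):
    global settings
    settings = {"tax_rate": tax_rate}

def premium_stateful(base):
    # Hidden initialization, with defaults only valid for one use case.
    if settings is None:
        initialize_settings(0.05)
    return base * (1 + settings["tax_rate"])

# Functional style: settings travel with the call, so any call order
# produces the same answer for the same inputs.
def premium(base, settings):
    return base * (1 + settings["tax_rate"])
```

The functional variant is also the one that is trivially unit-testable, since every input is visible in the signature.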

                                                          1. 1

                                                            Would you be willing to train a developer in Haskell?

                                                            1. 1

                                                              Probably not at the earliest stages, but if you can drop me a note privately I’d be happy to have an informal chat with you. We may be able to work something out in future.

                                                          1. 3

                                                            Mathematicians love finding connections between different fields, but particularly between fields and linear algebra, because we know a lot about linear algebra.

                                                            I don’t know of too many instances where thinking of a matrix as a graph led to a surprising proof (but I am curious if you do), but studying graphs via linear algebra is a basic cornerstone of graph theory. Textbooks about this can be found by searching “algebraic graph theory”.

                                                            1. 1

                                                              I don’t know anything about any of it. :) I do lots of meta-research. In this case, there’s a lot of work that has gone, and is going, into accelerating matrix operations and (especially recently) graph operations. There are even graph databases these days. So anything connecting a concept getting a lot of attention to another concept in wide use might be worth posting, in case it gives people ideas.

                                                              1. 1

                                                                It’s usually the other way around. There is a field called spectral graph theory, which is almost entirely focused on the graph Laplacian (a matrix made from the adjacency matrix and degree matrix) and constructs proofs around its eigendecomposition.

                                                                One example would be spectral clustering, where the eigenvectors of the laplacian are used to embed the nodes in a two- or three-dimensional space. There is also a random walk laplacian, which models the transition probabilities between the different vertices in a graph. Those are just some examples where matrices are used to analyze graphs and there are many more possible applications.
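A small numpy sketch of that idea, using a toy graph of two triangles joined by a single edge; the sign pattern of the Fiedler vector (the eigenvector for the second-smallest Laplacian eigenvalue) recovers the two clusters:

```python
import numpy as np

# Adjacency matrix: two triangles (0-1-2 and 3-4-5) joined by edge 2-3.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # (unnormalized) graph Laplacian

# eigh returns eigenvalues in ascending order for symmetric matrices.
eigvals, eigvecs = np.linalg.eigh(L)

# The smallest eigenvalue of a connected graph's Laplacian is 0; the
# Fiedler vector embeds the nodes on a line, and its signs split the
# two triangles apart.
fiedler = eigvecs[:, 1]
```

Using more eigenvectors gives the two- or three-dimensional embedding mentioned above; clustering those coordinates (e.g. with k-means) is exactly spectral clustering.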

                                                                1. 1

                                                                  If you can find a mapping between different fields for your problem, you can use the other field’s proofs to help solve yours.

                                                                  1. 1

                                                                    Physics does something similar. If you can make something a harmonic oscillator (mass bouncing on a spring), you’ve hit the jackpot because it’s an incredibly simple and well-studied model system.

                                                                  1. 1

                                                                    I’m intrigued, but I wish they had titled and labelled the axes on their charts. I’m not sure what they’re measuring, or what the units are.

                                                                    1. 2

                                                                      Yeah, they should have labeled them, but for performance charts the y-axis is usually seconds (due to the large test data sets)

                                                                    1. 1

                                                                      Mkay, so, five or ten years ago, it was considered a fairly bad idea to keep a PHP interpreter hanging around for many, many requests, because none of the code in the ecosystem had really been extensively tested in that mode of operation, so all kinds of memory-leak bugs were left around.

                                                                      (IMO they were not really bugs, just absence of support for the “long lived process” use case.)

                                                                      I take it that that class of problem is largely fixed now in the modern PHP ecosystem?

                                                                      1. 1

                                                                        Yes, with the release of PHP 7 a lot of the internals were reworked.

                                                                      1. 8

                                                                        yet in many respects, it is the most modern database management system there is

                                                                        It’s not though. No disrespect to PostgreSQL, but it just isn’t. In the world of free and open source databases it’s quite advanced, but commercial databases blow it out of the water.

                                                                        PostgreSQL shines by providing high-quality implementations of relatively modest features, not highly advanced state-of-the-art database tech. And it really does have loads of useful features; the author has only touched on a small fraction of them. Almost all of those features exist in some other system, but not necessarily in one single, neatly integrated system.

                                                                        PostgreSQL isn’t great because it’s the most advanced database, it’s great because if you don’t need anything state of the art or extremely specialized, you can just use PostgreSQL for everything and it’ll do a solid job.

                                                                        1. 13

                                                                          but commercial databases blow it out of the water

                                                                          Can you provide some specific examples?

                                                                          1. 16

                                                                            Oracle has RAC, which is a basic install step for any Oracle DBA. Most Postgres users can’t implement something similar, and those that can appreciate that it’s a significant undertaking, one that will lock you into a specific workflow, so you’d better get it right.

                                                                            Oracle and MS-SQL also have clustered indexes. Not what Postgres has, but indexes where updates stay clustered as well. Getting Pg to perform sensibly in this situation is so painful that it’s worth spending a few grand to simply not worry about it.

                                                                            Ever run Postgres on a machine with over 100 cores? It’s not much faster than 2 cores without a lot of planning and partitioning, and even then it’s got nothing on Oracle and MS-SQL. “Open your checkbook and it’s faster” might sound like a loss, but programmers and sysadmins cost money too! Having them research how to get your “free” database to perform like a proper database isn’t cost-effective for a lot of people.

                                                                            How about big tables? Try to update just one column, and Postgres still copies the whole row. Madness. This turns what ought to be 100 GB of IO into tens of TBs of IO. Restructuring this into separate partitions would have been the smart thing to do, if you’d remembered to do it a few months ago; but this is a surprise coming from commercial databases, which haven’t had this problem for twenty years. Seriously! And don’t even try to VACUUM anything.
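A back-of-envelope calculation of that write amplification; the figures are purely illustrative (not measurements of Postgres), assuming one small column changes in a table of wide rows under copy-on-update MVCC:

```python
# Illustrative figures, not a benchmark: updating one 8-byte column
# across a billion ~1 KB rows, where every update copies the full row.
rows = 10**9          # a billion rows
row_bytes = 1_000     # width of each row
col_bytes = 8         # the one column actually changing

column_io = rows * col_bytes    # what you'd hope to write: 8 GB
row_copy_io = rows * row_bytes  # what full-row copies write: 1 TB
amplification = row_copy_io / column_io
```

At these (made-up) sizes, that is roughly a 125x blow-up in write IO, which is the order of magnitude the complaint above describes.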

                                                                            MS-SQL also has some really great tools. Visual Studio actually understands the database, and its role in development and release. You can point it at two tables and it can build ALTER statements for you and help script up migrations that you can package up. Your autocomplete can recognise what version you’re pointing at. And so on.

                                                                            …and so on, and so on…

                                                                            1. 3

                                                                              Thanks for the detailed response. Not everyone has money to throw at a “real” enterprise DB solution, but (having never worked with Oracle and having only administered small MSSQL setups) I did wonder what some of the specific benefits that make a DBA’s life easier were.

                                                                              Of course, lots of the open source tools used for web development and such these days seem to prefer Postgres (and sometimes MySQL), and developers like Postgres’ APIs. With postgres-compatible databases like EnterpriseDB and redshift out there, my guess is we’ll see a Postgres-compatible Oracle offering at some point.

                                                                              1. 7

                                                                                Not everyone has money to throw at a “real” enterprise DB solution

                                                                                I work for a commercial database company, so I expect I see a lot more companies’ databases than you and most other crustaceans: most companies have a strong preference to rely on an expert who will give them a fixed cost (even if it’s “money”) to implement their database, instead of trying to hire and build a team to do it with open source. Because it’s cheaper. Usually a lot cheaper.

                                                                                Part of the reason why: an expert can give them an SLA and has PI insurance, and the solution generally includes all costs. Building an engineering+sysadmin team is a big unknown for every company, and they usually need some kind of business analyst too (often a contractor anyway; more £££) to get the right schemas figured out.

                                                                                Professional opinion: Business logic may actually be some of the least logical stuff in the world.

                                                                                lots of the open source tools used for web development and such these days seem to prefer Postgres

                                                                                This is true, and if you’re building an application, I’d say Postgres wins big. Optimising dbmail’s Postgres queries was hands down much easier than on any other database (including commercial ones!).

                                                                                But databases are used for a lot more than just applications, and companies who use databases don’t always (or even often) build all (or even much) of the software that interacts with the database. This should not be surprising.

                                                                                With postgres-compatible databases like EnterpriseDB and redshift out there, my guess is we’ll see a Postgres-compatible Oracle offering at some point.

                                                                                I’m not sure I disagree, but I don’t think this is a good thing. EnterpriseDB isn’t Postgres. Neither is Redshift. Queries that work fine in a local Pg installation run like shit in Redshift, and queries that are built for EnterpriseDB won’t work at all if you ever try to leave. These kinds of “hybrid open source” offerings are anathema: often sold below a sustainable price (and for much less than what a proper expert would charge), leaving uncertainty in the SLA, and offering none of the benefits of owning your own stack that doing it on plain Postgres would give you. I just don’t see the point.

                                                                                1. 3

                                                                                  Professional opinion: Business logic may actually be some of the least logical stuff in the world.

                                                                                  No kidding. Nice summary also.

                                                                                  1. 0

                                                                                    Queries that work fine in a local Pg installation run like shit in redshift

                                                                                    Not necessarily true: when building your Redshift schema you optimize for certain queries (like your old Pg queries).

                                                                                2. 4

                                                                                  And yet the cost of putting your data into a proprietary database format is enough to make people find other solutions when limitations are reached.

                                                                                  Don’t forget great database conversion stories like the WI Circuit Courts system or Yandex, where the conversion to Postgres from proprietary databases saved millions of dollars and improved performance…

                                                                                  1. 2

                                                                                    Links to those stories?

                                                                                    1. 1

                                                                                      That Yandex can implement ClickHouse doesn’t mean everyone else can (or should). How many $100k developers do they employ to save a few $10k database cores?

                                                                                      1. 2

                                                                                        ClickHouse has nothing to do with Postgres, it’s a custom column oriented database for analytics. Yandex Mail actually migrated to Postgres. Just Postgres.

                                                                                    2. 2

                                                                                      You’re right about RAC, but over the last couple of major releases Postgres has gotten a lot better about using multiple cores and modifying big tables. Maybe not at the Oracle level yet, but it’s catching up quickly in my opinion.

                                                                                      1. 3

                                                                                        Not Oracle-related, but a friend of mine tried to replace a disk-based kdb+ setup with Postgres, and it was something like 1000x slower. This isn’t even a RAC situation: this is one kdb+ core versus a 32-core server running PostgreSQL (no failover even!).

                                                                                        Postgres is getting better. It may even be closing the gap. But gosh, what a gap…

                                                                                        1. 1

                                                                                          Not to be that guy, but when tossing around claims of 1000x, please back them up with actual data, a blog post, or something.

                                                                                          1. 6

                                                                                            You remember Mark’s benchmarks.

                                                                                            kdb doing 0.051sec what postgres was taking 152sec to complete.

                                                                                            1000x is nothing.

                                                                                            Nobody should be surprised by that. It just means you’re asking the computer to do the wrong thing.

                                                                                            Btw, starting a sentence with “not to be that guy” means you’re that guy. There’s a completely normal way to express curiosity in what my friend was doing (he’s also on lobsters), or to start a conversation about why it was so much easier to get right in kdb+. Both could be interesting, but I don’t owe you anything, and you owe me an apology.

                                                                                            1. 2

                                                                                              Thanks for sharing the source, that helps in understanding.

                                                                                              That’s a benchmark comparing a server-grade setup vs. essentially laptop-grade hardware (a quad-core i5), running the default configuration straight out of the sample file in the Git repo, with a query that reads a single small column out of a very wide dataset without using an index. I don’t doubt these numbers, but they aren’t terribly exciting/relevant to compare.

                                                                                              Also, there was no disrespect intended; not being a native English speaker, I may have come off as clumsy.

                                                                                              1. 1

                                                                                                kdb doing 0.051sec what postgres was taking 152sec to complete.

                                                                                                That benchmark summary points to https://tech.marksblogg.com/billion-nyc-taxi-rides-postgresql.html which was testing first a pre-9.6 master and then PG 9.5 with cstore_fdw. Neither seems fair to me. I’d like to do it myself, but I don’t have the resources.

                                                                                                1. 1

                                                                                                  If you think a substantially different disk layout of Pg, and/or substantially different queries would be more appropriate, I think I’d find that interesting.

                                                                                                  I wouldn’t like to see a tuning exercise including a post-query exercise looking for the best indexes to install for these queries though: The real world rarely has an opportunity to do that outside of applications (i.e. Enterprise).

                                                                                            2. 1

                                                                                              Isn’t kdb+ really good at stuff that postgres (and other RDBMS) is bad at? So not that surprising.

                                                                                              1. 1

                                                                                                Sort of? kdb+ isn’t a big program, and most of what it does is the sort of thing you’d do in C anyway (if you liked writing databases in C): got some tall, skinny table? Try mmapping as much as possible. That’s basically what kdb does.

                                                                                                What was surprising was just how difficult it was to get that in Pg. I think we expected that with more cores and more disks it’d be fast enough, but this was pretty demoralising! I think the fantasy was that by switching the application to Postgres it’d be possible to get access to the Pg tooling (which is much bigger than kdb’s!), and we massively underestimated how expensive Pg is/can be.
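A toy numpy illustration of the row versus column layout difference being described (this is not kdb+ itself, just the memory-layout idea): scanning one field is a contiguous read in the columnar layout but a strided one in the row layout:

```python
import numpy as np

n = 1_000_000

# Row-oriented layout (array of structs): each record's fields sit
# next to each other, so scanning one field strides across every
# 24-byte record.
rows = np.zeros(n, dtype=[("price", "f8"), ("qty", "i8"), ("ts", "i8")])
rows["price"] = 1.0

# Column-oriented layout (struct of arrays): one contiguous array per
# field, which is roughly what a columnar engine mmaps off disk.
price_col = np.ones(n)

# Both layouts compute the same aggregate; only the access pattern
# (and hence cache/IO behavior) differs.
total_row = rows["price"].sum()
total_col = price_col.sum()
```

On a tall, skinny table the columnar scan touches only the bytes of the column being aggregated, which is why single-column analytics queries favor that layout so heavily.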

                                                                                                1. 3

                                                                                                  Kdb+ isn’t a big program, and most of what it does is the sort of thing you’d do in C anyway (if you liked writing databases in C)

                                                                                                  Well, kdb+ is columnar, which is pretty different from how most people approach a naive database implementation. That makes it very good for some things, but really rough for others. Notably, columnar storage doesn’t deal with update statements very well at all (to the degree that some columnar DBs simply don’t allow them).

                                                                                                  Even on reads, though, I’ve definitely seen Postgres beat it on queries that work better on a row-based system.

                                                                                                  But, yes, if your primary use cases favor a columnar approach, kdb+ will outperform vanilla postgres (as will monetdb, clickhouse, and wrappers around parquet files).

                                                                                                  You can get decent chunks of both worlds by using either the cstore_fdw or imcs extensions to Postgres.

                                                                                                  1. 1

                                                                                                    which is pretty different from how most people approach a naive database implementation.

                                                                                                    I blame foolish CS professors emphasising linked lists and binary trees.

                                                                                                    If you simply count cycles, it’s exactly how you should approach database implementation.

                                                                                                    Notably, columnar storage doesn’t deal with update statements very well at all (to the degree that some columnar DBs simply don’t allow them).

                                                                                                    So I haven’t done that kind of UPDATE in any production work, but I also don’t need it: every customer always wants an audit trail, which means my database builds are INSERT plus some materialised views, and that’s exactly what kdb+ does. If you can build the view fast enough, you don’t need UPDATE.

                                                                                                    Even on reads, though, I’ve definitely seen postgres beat it on queries that work better on a row-based system.

                                                                                                    If I have data that I need horizontal grabs from, I arrange it that way in memory. I don’t make my life harder by putting it on the disk in the wrong shape, and if I do run into an application like that, I don’t think “gosh, using Postgres would really speed this part up.”

                                                                                        2. 3

                                                                                          Spanner provides globally consistent transactions even across multiple data centers.

                                                                                          Disclosure: I work for Google. I am speaking only for myself in this matter and my views do not represent the views of Google. I have tried my best to make this description factually accurate. It’s a short description because doing that is hard. The disclosure is long because disclaimers are easier to write than useful information is. ;)

                                                                                          1. 2

                                                                                            @geocar covered most of what I wanted to say. I also have worked for a commercial database company, and same as @geocar I expect I have seen a lot more database use cases deployed at various companies.

                                                                                            The opinions stated here are my own, not those of my former or current company.

                                                                                            To put it bluntly, if you’re building a Rails app, PostgreSQL is a solid choice. But if you’ve just bought a petabyte of PCIe SSDs for your 2000 core rack of servers, you might want to buy a commercial database that’s a bit more heavy duty.

                                                                                            I worked at MemSQL, and nearly every deployment I worked with would have murdered PostgreSQL on performance requirements alone. Compared to PostgreSQL, MemSQL has more advanced query planning, query execution, replication, data storage, and so on and so forth. It has state of the art features like Pipelines. It has crucial-at-scale features like Workload Profiling. MemSQL’s competitors obviously have their own distinguishing features and qualities that make them worth money. @geocar mentioned some.

                                                                                            PostgreSQL works great at smaller scale. It has loads of useful features for small scale application development. The original post talks about how Arcentry uses NOTIFY to great effect, facilitating their realtime collaboration functionality. This already tells us something about their scale: PostgreSQL uses a fairly heavyweight process-per-connection model, meaning they can’t have a huge number of concurrent connections participating in this notification layer. We can conclude Arcentry deployments using this strategy probably don’t have a massive number of concurrent users. Thus they probably don’t need a state of the art commercial database.

                                                                                            There are great counterexamples where specific applications need to scale in a very particular way, and some clever engineers made a free database work for them. One of my favorites is Expensify running 4 million queries per second on SQLite. SQLite can only perform nested loop joins using 1 index per table, making it a non-starter for applications that require any kind of sophisticated queries. But if you think about Expensify, its workload is mostly point lookups and simple joins on single indexes. Perfect for SQLite!
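                                                                                            A rough sketch of that kind of workload: single-index point lookups, which SQLite serves with one index search per table. The schema and names below are invented for illustration; any resemblance to Expensify’s actual tables is coincidental.

```python
import sqlite3

# Hypothetical "expenses" table with a secondary index for illustration.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE expenses (id INTEGER PRIMARY KEY, user_id INTEGER, amount_cents INTEGER)"
)
conn.execute("CREATE INDEX idx_expenses_user ON expenses (user_id)")
conn.executemany(
    "INSERT INTO expenses (user_id, amount_cents) VALUES (?, ?)",
    [(1, 500), (1, 1250), (2, 300)],
)

# A point lookup on the indexed column: the case SQLite is fast at.
rows = conn.execute(
    "SELECT amount_cents FROM expenses WHERE user_id = ?", (1,)
).fetchall()
print([r[0] for r in rows])  # [500, 1250]

# EXPLAIN QUERY PLAN shows the single-index search SQLite is limited to.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT amount_cents FROM expenses WHERE user_id = ?", (1,)
).fetchone()
print(plan[-1])  # detail string mentions idx_expenses_user
```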

                                                                                            1. 1

                                                                                              But MemSQL is a distributed in-memory database? Aren’t you comparing apples and oranges?

                                                                                              I also highly recommend reading the post about Expensify’s usage of SQLite: it’s a great example of thinking out of the box.

                                                                                              1. 1

                                                                                                No. The author claims “Postgres might just be the most advanced database yet.” MemSQL is a database. If you think they’re apples-and-oranges different, might that be because MemSQL is substantially more advanced? And I used MemSQL as one example of a commercial database. For a more apples-to-apples comparison, I also think MSSQL is more advanced than PostgreSQL, which geocar covered.

                                                                                                And MemSQL’s in-memory rowstore serves the same purpose as PostgreSQL’s native storage format. It stores rows. It’s persistent. It’s transactional. It’s indexed. It does all the same things PostgreSQL does.

                                                                                                And MemSQL isn’t only in-memory, it also has an advanced on-disk column store.

                                                                                        1. 1

                                                                                          Oh boy, this is the first thing I had to do when I started working at my company. I’m interested to see how these guys made the slow queue work (probably just a sleep command?). Further optimization could be batching requests in each worker with Guzzle. You could run even fewer workers that way.
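                                                                                          The original setup is presumably PHP (hence Guzzle), but the “probably just a sleep command” guess can be sketched in Python. The function name and delay are made up:

```python
import time
from collections import deque

def drain_slow_queue(jobs, handler, delay_seconds=0.05):
    """Process jobs one at a time, sleeping between them to throttle
    throughput -- the "just a sleep command" approach."""
    queue = deque(jobs)
    results = []
    while queue:
        results.append(handler(queue.popleft()))
        if queue:  # no need to sleep after the last job
            time.sleep(delay_seconds)
    return results

print(drain_slow_queue([1, 2, 3], lambda n: n * 2))  # [2, 4, 6]
```

Batching would replace `handler(queue.popleft())` with one call over a slice of the queue, which is what cuts the worker count.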

                                                                                          1. 2

                                                                                            We’ve been using terraform daily at $WORK for slightly more than 2 years now and have been very happy with it so far. What’s also interesting is that Terraform supports new AWS features before Cloudformation, which doesn’t make sense but is fun nevertheless.

                                                                                            1. 1

                                                                                              I would actually go to terraform’s docs to learn about AWS features instead of AWS’.

                                                                                              One hassle with terraform and AWS is that the AWS API does certain things in the background that terraform can’t control (certain default or dependent actions), which can be a real pain when trying to re-deploy stuff with terraform. Some manual cleanup might still be necessary.

                                                                                              1. 1

                                                                                                We’ve just started to use Terraform; any tips?

                                                                                                1. 1

                                                                                                  I wrote a blog post on this topic last year. Hope this is helpful!

                                                                                              1. 5

                                                                                                “Architecture is the decisions that you wish you could get right early in a project.” Ralph Johnson

                                                                                                It is easy to come up with rules of thumb. The author learned that he should test better and more. Maybe next time he will go overboard with testing. It is always about tradeoffs and balance. That is not helpful advice though.

                                                                                                My current philosophy is to have a mindset of probabilities. The optimal architecture would be obvious if you knew the future. Since you cannot know it, the second-best thing is to estimate the probability, impact, and cost of all future possibilities.

                                                                                                Just because something will probably not happen does not mean you don’t have to mitigate it if the impact is high and the cost is low (e.g. style guidelines). Just because something will probably happen does not mean you have to mitigate it if the impact is low and the cost is high (e.g. formal methods).
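                                                                                                That probability-times-impact-versus-cost rule can be written down directly. A minimal sketch; the numbers are invented for illustration:

```python
def worth_mitigating(probability, impact, cost):
    """Mitigate when the expected loss avoided exceeds the mitigation cost."""
    return probability * impact > cost

# Unlikely but high-impact and cheap to address (e.g. style guidelines): yes.
print(worth_mitigating(probability=0.05, impact=100_000, cost=1_000))  # True

# Likely but low-impact and expensive to address (e.g. formal methods): no.
print(worth_mitigating(probability=0.9, impact=1_000, cost=50_000))  # False
```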

                                                                                                1. 3

                                                                                                  In terms of probabilities, a lot of architectural decisions have kind of unknown-unknown benefits and costs unless you’re familiar with the tools. For our new project, we realized that configuration data in NoSQL in an AP configuration will tend to have that data corrupted for one reason or another. Unfortunately the configuration is embedded in relations and cannot be backed up independently. If, in the initial design of the database, we had separated the config into its own bucket, we would have avoided huge infrastructure ramifications.

                                                                                                  1. 1

                                                                                                    I always assume that I will get some things wrong the first time and need to fix them later but that I don’t know which things those will be. That makes me optimize for changing stuff later.

                                                                                                  1. 2

                                                                                                    Not a bad idea to have a dedicated hardware module tailored towards the web; for distributed systems in particular it would be helpful. We’ve had a problem before with pages in memory deadlocking everything because the OS was taking so long to defragment memory, taking down the whole app.

                                                                                                    1. 1

                                                                                                      I was thinking one could simultaneously improve performance and reduce attack surface, too.

                                                                                                      1. 1

                                                                                                        Does it reduce the attack surface because hardware is more easily verifiable?

                                                                                                        1. 1

                                                                                                          There’s less functionality in there since it includes just what you need. The hardware is FSMs converted to logic. Both support strong, automated verification. Finally, the hardware implementation might allow you to do things like input-check all headers simultaneously, since it’s inherently parallel. That might further let you do more checks or protections that would have too much slowdown on a general-purpose CPU. Some approaches take a 50-70% performance hit in software but only 1-10% in hardware.

                                                                                                    1. 6

                                                                                                      None of these tactics remove or prevent vulnerabilities, and would therefore be rejected by a “defense’s job is to make sure there are no vulnerabilities for the attackers to find” approach. However, these are all incredibly valuable activities for security teams, and lower the expected value of trying to attack a system.

                                                                                                      I’m not convinced. “Perfect” is an overstatement for the sake of simplicity, but effective security measures need to be exponentially more costly to bypass than they are to implement, because attackers have much greater resources than defenders. IME all of the examples this page gives are too costly to be worth it, almost all of the time: privilege separation, asset depreciation, exploit mitigation, and detection are all very costly to implement while representing only modest barriers to skilled attackers. Limited resources would be better expended on “perfect” security measures; someone who expended the same amount of effort while following the truism and focusing on eliminating vulnerabilities entirely would end up with a more secure system.

                                                                                                      1. 2

                                                                                                        You would end up with a system more secure against attackers with fewer resources. For example, you can make your system secure against all common methods used by script kiddies, but what happens when a state-level actor is attacking your system? As your threats get more advanced, I agree with the article: at higher threat levels it becomes a problem of economics.

                                                                                                        1. 2

                                                                                                          You would end up with a system more secure against attackers with fewer resources. For example, you can make your system secure against all common methods used by script kiddies, but what happens when a state-level actor is attacking your system?

                                                                                                          I think just the opposite actually. The attitude proposed by the article would lead you to implement things that defended against common methods used by script kiddies (i.e. cheap attacks) but did nothing against funded corporate rivals. Whereas following the truism would lead you to make changes that would protect against all attackers.

                                                                                                          1. 4

                                                                                                            The attitude proposed by the article would lead you to implement things that defended against common methods used by script kiddies (i.e. cheap attacks) but did nothing against funded corporate rivals.

                                                                                                            That’s not what I understand from this article. The attitude proposed by the article should, IMO, lead you to think of the threat model of the system you’re trying to protect.

                                                                                                            If it’s your friend’s blog, you (probably) shouldn’t have to consider state actors. If it’s a stock exchange, you should. If you’re Facebook or Amazon, not the same as lobsters or your sister’s bike repair shop. If you’re a politically exposed individual, exploiting your home automation raspberry pi might be worth more than exploiting the same system belonging to someone who is not a public figure at all.

                                                                                                            Besides that, I disagree that all examples are too costly to be worth it. Hashing passwords is always worth it, or at least I can’t think of a case where it wouldn’t be.

                                                                                                            To summarize with an analogy, I don’t take the exact same care of my bag when my laptop (or other valuables) are in it as when it only contains my water bottle, and Edward Snowden should care more about the software he uses than I need to about mine.

                                                                                                            Overall I really like the way of thinking presented by the author!
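                                                                                                            On the “hashing passwords is always worth it” point, a minimal salted-hash sketch using only the standard library. The iteration count is illustrative, not a recommendation; a vetted library (bcrypt, argon2) is preferable in practice.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Salted PBKDF2-SHA256; iteration count chosen for illustration."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("wrong", salt, digest))    # False
```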

                                                                                                            1. 2

                                                                                                              Whereas following the truism would lead you to make changes that would protect against all attackers.

                                                                                                              Or mess with your sense of priority such that all vulnerabilities are equally important so “let’s just go for the easier mitigations”, rather than evaluating based on the cost of the attack itself.

                                                                                                              1. 1

                                                                                                                If you’re thinking about “mitigations” you’re already in the wrong mentality, the one the truism exists to protect you against.

                                                                                                                1. 1

                                                                                                                  It’s important to acknowledge that it’s somewhat counterintuitive to think about the actual parties attempting to crack your defenses. It requires more mental work, in a world where people assume they can get all the info they need just by reading their codebase & judging it on its own merits. It requires methodical, needs-based analysis.

                                                                                                                  The present mentality is not a pernicious truism; it’s an attractive fallacy.

                                                                                                          2. 2

                                                                                                            IME all of the examples this page give are too costly to be worth it, almost all of the time: privilege separation, asset depreciation, exploit mitigation and detection are all very costly to implement while representing only modest barriers to skilled attackers.

                                                                                                            How do you figure it’s too costly? If anything, all these things are getting much easier as they become primitives of deployment environments and frameworks. Additionally, there are services out there that scan for dependency vulnerabilities if you give them a Gemfile, or access to your repo.

                                                                                                            Limited resources would be better expended on “perfect” security measures; someone who expended the same amount of effort while following the truism and focusing on eliminating vulnerabilities entirely would end up with a more secure system.

                                                                                                            Perfect it all you want. The weakest link is still an employee who is hung over and provides their credentials to a fake GMail login page. (Or the equivalent fuck up.) If anything, what’s costly is keeping on your employees to not take shortcuts, to stay alert to missing access cards, rogue network devices in the office, and badge surfing, and to not leave their assets lying around.

                                                                                                            1. 1

                                                                                                              If anything, all these things are getting much easier as they become primitives of deployment environments and frameworks.

                                                                                                              I’d frame that as: deployment environments are increasingly set up so that everyone pays the costs.

                                                                                                              The weakest link is still an employee who is hung over and provides their credentials to a fake GMail login page. (Or the equivalent fuck up)

                                                                                                              So fix that, with a real security measure like hardware tokens. Thinking in terms of costs to attackers doesn’t make that any easier; indeed it would make you more likely to ignore this kind of attack on the grounds that fake domains are costly.

                                                                                                          1. 5

                                                                                                            Larger question: why are we proud of this? Do we want programming to just be wiring up components written by 25 year-olds from Facebook with an excess of free time?

                                                                                                            1. 5

                                                                                                              I know for certain I can knock up vast chunks of functionality by gluing together prewritten chunks. I know this, I do this, I know I deliver orders of magnitude more than I could ever write myself.

                                                                                                              I also know we haven’t learnt how to do it well, how to do it in a rock-solid, reliable, testable, repeatable way. That video on hexagonal architecture is an example of a way of doing the “glue” part in a rock-solid, testable way.

                                                                                                              We haven’t really learnt the best ways. Yet.

                                                                                                              The entire industry is learning on the job. And some of those lessons are going to be really really painful…. Especially for those who don’t recognize that they still need to be learning…

                                                                                                              1. 2

                                                                                                                I know that selfishly I don’t want to wire things up (it’s boring, it’s frustrating etc.), but what is the justification for starting every project by hand rolling your own language and compiler? Surely that wouldn’t benefit anyone but the developer. Even though wiring up components produces software that’s suboptimal in many respects, it’s nevertheless efficient on two very important metrics: cost and development time.

                                                                                                                I’m sure there are exceptions to this (enterprise software comes to mind) but in general, I struggle to make a case against reusing components.

                                                                                                                Looking at it from another angle, we generally want to solve ever more complex problems with software. Managing complexity requires abstraction. Components seem to be the only widely accepted mechanism for creating large scale abstractions. I know this isn’t necessarily the best way, but what is a practical alternative and how can all the existing components be replaced in a way that doesn’t create astronomical costs?

                                                                                                                1. 2

                                                                                                                  I’m not arguing for bootstrapping the universe just to make an apple pie. I’m actually a big fan of components. But the view that we’re “just” component wiring plumbers irks me to my core.

                                                                                                                  Somebody has to envision the data flow from the user to the app. Someone has to design the interface. Someone has to empathize with the user to discern an optimal workflow. Someone has to also be able to make the machine perform this task in an acceptable amount of time. And someone has to have the skill to design it in a way that doesn’t prevent future improvements.

                                                                                                                  I’d argue the utopian vision of software components is already here, where you can drop in various modules and glue them together. Add in an appropriately flexible language, such as JS, and there is very little friction involved overall.

                                                                                                                  Also note that the software problem hasn’t been solved, design skills are still needed, and people merely ship apps faster and in a buggier state.

                                                                                                                  So, I speak against “just component wiring” in the de-skilling sense if only to say the actual programming part is only a small part of what a programmer does.

                                                                                                                  1. 2

                                                                                                                    Just playing devil’s advocate, but how many of us actually have design skills in an engineering sense? To be more specific, how many of us actually design in terms of a definable process? Design is definitely what differentiates a senior from a junior, but is it something concrete, or something more like aesthetics? Another language you learn to talk to other developers.

                                                                                                                    It’s interesting because the whole Requirements-Design-Architecture formalized design process is not really used anywhere, and in the places where it is used, design is done by an architecture team and locked away, never to be seen by another human.