I’m curious what the cause for the performance increase is - is it just that the on-prem hardware is that much better than the cloud hardware?
Like caius said, the hardware is probably better. But also: overhead and jitter are introduced by the various virtualization layers.
Here is a good read on the subject: https://www.brendangregg.com/blog/2017-11-29/aws-ec2-virtualization-2017.html
Probably the hardware is better specced, but also there’ll be less running on it. AWS has multiple customers on the same hardware, so they have to ensure nothing leaks cross-tenant. And there’s also the noisy-neighbour problem: when you’re the only customer on the box you can tune it much more easily, knowing there isn’t someone sat there rinsing CPUs next to you. Not sure what they’re doing for storage either, but that is likely local rather than network-attached too.
Turns out having dedicated hardware running one tenant is superior in performance. Reminds me of the (many) times we (re)discovered that during the days of VMs vs bare servers.
This + fewer abstraction layers with on-prem hardware. The closer to the metal you are, the more performance you’d get - always.
IIRC Ruby benefits from faster single-core speed, so moving to on-prem is going to give you some benefit. Jeff Atwood’s Why Ruby? is old, but covers a lot of points. I haven’t kept up with how Discourse are doing their hosting, but Jeff has mentioned core performance over the years on Twitter.
I see other comments about Ruby having a VM, but that’s often only a problem when you have limited awareness of your platform and of how to manage performance on it. In Bing’s move to .NET 7 with Principal Engineer Ben Watson you can hear a commentary on how awareness of generations in the .NET GC can help you optimize, along with the implications when the GC itself is modified. You can make similar comments about Python GIL conversations that never address the nature of the performance problem.
I’m not sure if they still sell them, but for a while Intel sold chips with 128 MiB of eDRAM as a last-level cache. We got some for FPGA builds (single-threaded place and route, CPU bound with fairly random memory access patterns) and they were a lot faster than anything else Intel sold for this workload (lower core counts and lower clock speed, but when you have a single-threaded workload that spends a big chunk of its time waiting on cache misses, this doesn’t matter so much). I don’t think any cloud providers offer them, but they’d probably be a huge win for a lot of these workloads.
AMD’s 3D V-Cache chips have large amounts of proper SRAM L3 cache. My Ryzen 7 5800X3D has 96MB, and the new Ryzen 9 7950X3D has 128MB. Most of the benchmarking on them has been for gaming. I’d be curious to see web backend benchmarks though.
Programming Pearls by Jon Bentley is a canonical example. I love it because the style is so much more pleasant to read than most technical books. It’s more than an algorithms book while being relatively short. The principles are timeless even if the examples may not be current. It’s interesting from the historical perspective if you don’t approach it like a 2022 algorithms book.
I don’t understand what they mean when they say database developer. To me that means you hack on databases. But from context it seems more like they mean database administrator?
that means you hack on databases.
When you say “He is a Ruby developer” I guess it’s clear that it doesn’t mean “somebody who is a contributor to the Ruby programming language implementation”.
Sorry I still don’t understand from context what either you or the redditor mean. Do they mean someone who uses a database in building backend or desktop apps? Or do they mean someone who administers the database servers? Or do they mean someone who builds pipelines between databases? These are all distinct job titles I have heard of: backend developer, db admin, and data engineer respectively. I haven’t heard of the title “database developer” in this context before where it does not mean someone who hacks on databases.
I haven’t heard of the title “database developer” in this context before where it does not mean someone who hacks on databases.
Maybe less common than it used to be, but have definitely heard it… often places that have gone the route of baking a lot of functionality and business logic into the database layer directly, and have devs who may be entirely devoted to writing stored procedure code and the like. (though titles and responsibilities are always fuzzy, and may also include some overlap with admin and ETL/data eng type functions, too)
I guess that it’s all of that and maybe more.
From my own exposure to the modern data industry, the amount of soul-searching is massive.
One thing seems certain: there are fewer and fewer people willing to call themselves “database administrator”; this seems to be a dying occupation.
The OP expressed their gratitude for my response, so I guess I’ve struck some chord there. I think that maybe this fuzziness is part of the “not being sure about your potential career” part, so I guess lack of rigour is understandable.
I see.
I think that maybe this fuzziness is part of the “not being sure about your potential career” part, so I guess lack of rigour is understandable.
Makes sense!
I don’t understand what they mean when they say database developer.
I’d call them a DBA too, but I’m old. I think the original context for the substack was to target developers (read: implied limitations with scope of RDBMS understanding) with perspectives from an experienced DBA. Creating a more umbrella term feels more inclusive to me.
Is it inclusive, or misleading? If someone told me they were a database developer, I would imagine they were a developer on ClickHouse or something. If someone told me they were a DBA I wouldn’t be able to assume they had programming experience.
I was a volunteer librarian in my compsci department’s library, and I fondly remember a day we did inventory and realized an entire shelf was dedicated to UML textbooks, all outdated/never checked out. I marked all of them as obsolete and moved them to the basement section.
Has UML or even software ontology in general provided more value than the tons of paper waste it has generated? I wondered.
It’s nice to be able to sketch DB schema on a white board using the standard arrow types with the rest of the team.
Apart from that, not so much.
I’ve found the sequence diagrams useful to visualize race conditions, as well. But I probably don’t really use the “official” exact notation.
Can you share the location of your CS department and timeframe? I saw the country on your GitHub profile, but wanted to confirm. If I extrapolate that data point, I do know that there were significant numbers of students required to learn/use/research UML plus adjacent topics across UK universities from the mid-90s until the late 2000s. This included courses from the huge contingent of Greek academics in the UK. It was basically a requirement from 2000-2004 at some point in many degrees because of the demands of the large employers combined with the response times for universities adjusting to industry requirements.
This is orthogonal to the usefulness of UML (or any other tech.) There are a lot of useless things that academia and industry pursue. When UML was hot, anyone close to real-world development was starting to see the value of agile as originally envisioned and the value of tools like Rails for web development which was a growing area. I realise there are many other domains, but the point was to illustrate that there are people following these boondoggles with different motivations from others that are more pragmatic. It’s harder to see this if you haven’t lived before, during, and after the rise of a thing.
It was basically a requirement from 2000-2004 at some point in many degrees because of the demands of the large employers combined with the response times for universities adjusting to industry requirements.
Not quite. It was a requirement for BCS accreditation. The BCS is a weird organisation. Industry doesn’t have much input into its requirements and neither does academia, so it ends up requiring things that neither industry nor universities want. Cambridge decided to drop BCS accreditation a few years ago after polling the companies that hired graduates and hearing that none of them cared about it (I’m not sure if the department is still accredited but the policy was that they would not change any course material to conform to the BCS’ weird view of what a computer science degree should contain).
As I recall, the biggest problem was that there were great tools for taking code and generating UML diagrams from it, far-less-good tools for going the other way (and the fastest way of creating a load of the diagrams was to write some stub code and then tell the tooling to extract UML). When the specification is describing, rather than defining, the implementation then it ceases to be a useful design tool.
You are right about the BCS being weird and accreditation being a driver. Cambridge, Oxford, Imperial, and a few others are relative outliers with the autonomy to operate in the way you describe. The bulk of students are attending relatively average universities which are forced to seek accreditation for various reasons, so this pattern just repeats.
I hadn’t thought about UML in a long time before this post. I recall some SystemC researchers and graduate students as having the most rounded out workflows I’ve seen, but it looks like what you described everywhere else.
You are right about the BCS being weird and accreditation being a driver. Cambridge, Oxford, Imperial, and a few others are relative outliers with the autonomy to operate in the way you describe
I suspect that most others are too. My undergrad and PhD were both from Swansea. The undergrad degree was BCS accredited and had all of this pointless nonsense in it. Since I left, no employer has ever asked if it was BCS accredited, most don’t even know who the BCS is, and none has considered any of the stuff that I learned in the BCS-required modules to be valuable (at least one has considered it to be actively harmful). I strongly suspect that if a second-tier university dropped BCS accreditation then there would be no consequences. I was hoping that Cambridge’s decision to drop it would make it easier for others to do so.
SizeUp: On macOS I moved to Rectangle, but used SizeUp for a long time. I bought an AquaSnap license a long time ago and it serves my needs. Windows now has more hotkeys and virtual desktop stuff I don’t really use since I have my setup the way I want it.
Dash: Zeal and Velocity were the top equivalents when I last checked. Compared to window management, this area had fewer options because most people tend to open a browser rather than use a specialist tool.
As @williballenthin stated, PowerToys and Windows Terminal are also essential. You want to be on Windows 11 for the best experience going forward. Windows 10 users have been frustrated about some changes, but things are gradually reappearing.
You want to be on Windows 11 for the best experience going forward
No, you don’t. They haven’t even managed to fix the taskbar yet. Wait for 12 and stick with 10 for now.
I would take a step back and ask why you are making the code open source in the first place. If you want just to share your work with others, then pick whatever you want. As long as you own all of the intellectual property related to the code, you can pick whatever license you desire.
However, if you want your project to be adopted within a corporate environment, you can’t expect things outside their standard set to get a lot of traction. That set was picked by lawyers to reduce the risk of the company having future issues with clean intellectual property rights to their product. Even if it was adopted by a company before they were big enough to have lawyers who cared, one day they will grow, get acquired, or IPO, and there will be a team of people running license checks for stuff outside of the approved set. That is especially true for relatively unknown licenses like the one in this case. At that point, they’re likely going to stop engineering work to replace the affected components, to, once again, reduce risk.
Here is a hypothetical. A company adopts this component with the license as it was; they get acquired by a large, multinational public company. There is not a lawyer who would read this license and agree to run down every aspect of it and ensure they’re complying with it. Some clauses are easy, but many are vague enough to be a pain. So instead, they tell engineering to yeet it from the product.
Given all of that, to answer your prompt: you don’t. Companies are not taking a risk on small open-source components. If you want to get the Hippocratic License added to the set of approved licenses, it is a Sisyphean effort. The only way I see it happening is if your project gets to the level of something like Kubernetes or Linux, which (in a catch-22) often doesn’t happen without corporate support.
why open-source it? clearly, to provide some benefit. it’s a useful library.
i personally don’t care a great deal about adoption; what i do care about is “good use”. i personally don’t want to support the military or fossil fuel companies, say. just like i wouldn’t work at those companies.
i’m curious to gauge people’s views about expressing such sentiments via licenses. it seems like the hippocratic license - https://firstdonoharm.dev/ - is a very clear approach to doing this; yet it seems to be met with quite some anxiety by people who think tech should somehow be “neutral”. it’s long been shown that neutrality only rewards the privileged; to make social change one needs to step out, somehow.
so my question is, as a tech community at large, do we just completely give up on licenses? (aside from the standard few?) or is there some room to innovate; some way to create social change for ourselves, our users, and the broader community? and if so, what is that mechanism?
I’ll ask it a different way. In an ideal world, would a company change its policies to adopt your open source software? If you want to change corporate governance, I don’t think you do it with progressive open source licenses. No engineering leader is going to go to a board and ask them to change broad policy so they can use an open source library.
A plurality of US states – Delaware (the important one for corporate governance!) included – allow corporations to incorporate or reincorporate as a public benefit corporation. It’s conceivable that a corporation could be subject to enough pressure by its employees and shareholders that it would reincorporate as a B corporation.
But while I think a niche could exist in B corporations for software licensed under the Hippocratic license & similar, it’s important to not mix cause & effect: your Hippocratic licensed software may be eligible for selection by a company because they chose to become a B corp, but it strikes me as exceptionally unlikely that a company will ever become a B corp to use your Hippocratic licensed software.
how are B-corp and the license even related?
i.e. we’re just talking about a simple license here, where the terms are of course only enforceable through some (hypothetical) lawsuit; i.e. the license really just expresses some notion of personal preferences enforceable only if i feel like suing random companies that use it.
maybe one thing i could point out is the difference between a code of conduct and a license. we all feel (somewhat?) comfortable with a code of conduct expressing behaviour wanted in our spaces; why not licenses for those same desires?
how are B-corp and the license even related?
only if i feel like suing random companies that use it.
maybe one thing i could point out is the difference between a code of conduct and a license
Corporate governance seems like the thing being discussed here. You hope to impact governance through clauses in a license. However, governance is not limited to the time when you decide to sue some companies. Companies are bound to various agreements which require them to make some attempt to mitigate risk so that they can achieve the outcomes that the owners desire. The result is that they pick and choose which risks they want to take on by limiting the number of licenses they support and the scope of these licenses.
Regular corporations (and, I suspect, B-corps too) are unlikely to want to increase the number of risks they are dealing with by using software with the Hippocratic license. We already know that many companies rule out GPL and derivative licenses entirely just to limit their risk. Some will pick and choose, but only when they have the resources to review it and fit it into their business.
Above I used terms like “various agreements” because I don’t have the time to write at the level of detail I’d like to. Agreements come in many forms, and we care most about the explicit ones, which are written like contracts. Some agreements are more implicit and, while still important, I’m ignoring these to simplify. Agreements include but aren’t limited to:
For your license to succeed, you need to navigate all of these agreements. A license like MIT is relatively compatible because it’s limited in scope.
i see
i mean, suppose you are a regular developer living your life, and you feel like sharing code. clearly, i don’t want to engage at the level you mention with anyone who uses the code.
licenses seem like a reasonable way, no? or no. would you suggest there is no way? we should just give up and MIT everything?
licenses seem like a reasonable way, no? or no. would you suggest there is no way? we should just give up and MIT everything?
There is no way to achieve what you desire to any great extent with your approach. The trade-offs are for you to decide.
I would posit that most people don’t want to have relationships based on the requirements of the license you put forth. If you want to define your relationships and engagement through that license for your code, or companies you run, then that’s 100% fine. Many types of small communities can be sustained with hard work.
When you go in that direction don’t expect other people to reciprocate in various ways that they can in the open source world through code use, testing, bug reporting, doc writing, etc. If you use MIT then you’ll open the door to a lot more collaboration and usage. For many people who have a livelihood depending on open source, this is the only approach. When your livelihood doesn’t depend on open source it’s easier to pick and choose licenses, but even then the decision can limit who will engage with you.
You’ve forgotten one more potential situation: you want other open source projects and people to be able to use it, but don’t care at all about corporate usage, or even want to discourage it.
In such situations, licenses like the unlicense, AGPL, Hippocratic license, etc can be useful.
Production tips for django apps:
switch to asgi and use daphne+channels to handle websocket connections (a minimal config sketch follows after this list).
makemigrations every time you change your choices lists.
use django-extensions. Instead of creating the database manually, create your user, then use django extensions to reset_db
use django-restframework
beware of the N+1 problem when using django-restframework serializers.
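For the first tip, this is roughly the asgi.py shape I mean - a minimal sketch following the standard Channels setup, where “myproject” and “chat” are placeholder names and chat.routing is assumed to define websocket_urlpatterns; check the Channels docs for your versions:

# myproject/asgi.py - minimal Channels routing; run with e.g. `daphne myproject.asgi:application`
import os

from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
from django.core.asgi import get_asgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

# Initialise Django's ASGI app first so the app registry is ready.
django_asgi_app = get_asgi_application()

import chat.routing  # placeholder app exposing websocket_urlpatterns

application = ProtocolTypeRouter(
    {
        # Normal HTTP requests keep going through Django as before.
        "http": django_asgi_app,
        # Websocket connections are handed off to Channels consumers.
        "websocket": AuthMiddlewareStack(
            URLRouter(chat.routing.websocket_urlpatterns)
        ),
    }
)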
Personally, I’d avoid django-restframework or any JSON API if at all possible. Server-side rendered HTML is the golden path for Django. Slapping a JSON serialization layer right in the middle of your architecture is a great way to kill your velocity and lose many of the benefits of Django.
When you need more on the frontend, sprinkle in some htmx or Alpine or something similar. If, and only if, you’ve discovered that these aren’t going to cut it for your UI needs, choose an appropriate SPA technology and use it where necessary - but I’d argue they make a very poor default. If I knew from the beginning that an app was going to be mostly JSON APIs on the backend, I’d seriously consider whether Django was the best choice at all - I’ve tried it, it’s pretty painful, and there must be better options these days.
Django works beautifully with htmx. I wrote a reasonably-sized app a few years ago with Django and htmx’s predecessor, intercooler, and I was able to re-use so many templates. The only thing I really felt I was missing that would have been useful was server-sent events.
I tend to agree. I’ve used DRF a couple of times and even if you try very hard, you’re sort of sucked into building castles of class inheritance. It gets too complex too quickly. If you just need a pure JSON API, it will probably be easier to use FastAPI or something because Django isn’t adding a ton of value. Django is great for HTML website plus instant backend.
How do you manage data access when you add a JSON API? Let’s say you have been building a Django site for a while and you need to add the API for partners. You’ve made a big investment in the Django ORM and now you are adding a separate Python stack into the mix. Do you reuse anything in the ORM layer, or duplicate it with something more appropriate for FastAPI?
I was assuming it’s greenfield. If you have an existing Django project, you should see if you can just return the JSON in a view, and then if it’s too complex for that, yeah, DRF.
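To be concrete about the “just return the JSON in a view” option: it can lean entirely on the ORM investment you already have. A rough sketch, with “Book” and “myapp” as placeholder names:

# A plain Django view returning JSON straight from the existing ORM - no DRF needed.
from django.http import JsonResponse

from myapp.models import Book  # placeholder app/model


def books_api(request):
    # values() with the double-underscore lookup does the join for us.
    books = list(Book.objects.values("id", "title", "author__name")[:100])
    return JsonResponse({"results": books})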
I would ignore the naysayers and just use rest-framework. See my other comment for why I think it’s fine.
I’m quite happy with DRF and using it to build backend APIs. But I think you and I would disagree on patterns and approaches to doing so (I think I like generic views a lot more than you do, and also I suspect I like the thin-controller style way more than most people do), and I think that’s probably the source of it – I fully agree that if you’re not doing things the way Django/DRF are pushing you to, it will feel like you’re fighting the framework and it’s getting in your way.
Thanks for posting more tips!
Mind if I add some of them to my original article?
Will check out django-extensions, have never heard of it before. I use django-allauth amongst other things though, and wouldn’t give that a skip.
The N+1 pattern is a real problem in any Django codebase. Thank you for mentioning it. There’s no silver bullet; sometimes tools like Django Debug Toolbar can help.
It’s been a problem with every ORM I’ve ever used. It’s no silver bullet, but most of them have some mechanism to let you prefetch columns you care about on related entities so you can work around it if you see it happening.
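In Django’s case, a minimal sketch of what that workaround looks like (hypothetical models; the point is the query count, and select_related / prefetch_related are the standard mechanisms):

# N+1 in the Django ORM and the usual fix; Author/Book are made-up models.
from django.db import models


class Author(models.Model):
    name = models.CharField(max_length=100)


class Book(models.Model):
    title = models.CharField(max_length=200)
    author = models.ForeignKey(Author, on_delete=models.CASCADE)


def titles_with_authors():
    # Naive: one query for the books, then one extra query per book for its author.
    naive = [(b.title, b.author.name) for b in Book.objects.all()]

    # Fixed: select_related joins the authors up front in a single query.
    # (Use prefetch_related for many-to-many or reverse relations.)
    fixed = [(b.title, b.author.name) for b in Book.objects.select_related("author")]
    return naive, fixed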
switch to asgi and use daphne+channels to handle websocket connections.
While no doubt you have good reasons to recommend this, I would provide a counterpoint. At my previous job we had nothing but trouble with daphne+channels, to the point where it took a concerted effort to get rid of them.
Anecdotally, Daphne also went for a year or so without a maintainer. Also, the original ASGI spec had several alternative backends, among which there was also a sysv IPC one, which to me looked a lot simpler to set up than the current default of using memcached. Turns out this was completely broken and never really worked properly in the first place. I wasted quite a bit of time on getting that to work.
Now, it may be that in the intervening years things have improved, especially since Channels is now an official part of Django (although even after it was initially adopted, we still had many issues with it). But I’d still tread very cautiously.
How do you expect this to play out in the long-term? Tools like these can start out well when you have a set of users that span a whole company. Eventually cost, or just a more trendy tool, cause folks to want to move. This is not a great experience for folks, but it seems to frustrate eng teams more than others. I like Notion’s user experience, so I’m more interested in whether you think they can be sticky, especially for eng teams.
Disclaimer: I work at Notion.
I’d like to build a two-way sync tool between Notion and source-controlled documentation so that it’s easier to author in Notion, but pipe that documentation goodness into your IDE, website, or public GitHub wiki; have runbooks cloned locally and greppable in-repo with code, etc. Let the Vim and Emacs fans write content that finds its way into the Notion wiki. That would lessen the lock-in anxiety, too.
I think the biggest problem with Notion as a documentation tool for code aside from that is search. Search has improved a lot over the last year in general, but Notion’s index is still quite simple. I don’t think we do any specific analysis or indexing for source code snippets or really understanding the semantic layout of your wiki. We have a long way to go there.
Where Notion is better than other similar tools is that UX, and wrangling projects - especially projects with unusual shape where you might want a more specialized workflow. We get a lot of positive feedback from teams who say “wow we came for the wiki but really like the flexibility to manage this or that process”, as well as teams who say “the editor UX in Notion is so much better than XYZ that we actually write (and organize) our docs”.
If that roundtripping was fully supported for third parties, I could see how it might light up scenarios beyond developers. It seems like the Notion API is fairly fleshed out. Is it complete enough to attempt something like this?
I think you could do a pretty good job with the API. The only limitation is that the API is kinda slow / rate limited.
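If anyone wants to experiment, the read side is roughly this - a sketch against the public REST API, where the token and block/page IDs are yours to supply, the field names reflect the API versions I’ve used, and you’d want retry/backoff on top because of the rate limits I mentioned:

# Pull all child blocks of a Notion page via the public REST API, following pagination.
import os

import requests

HEADERS = {
    "Authorization": f"Bearer {os.environ['NOTION_TOKEN']}",
    "Notion-Version": "2022-06-28",
}


def fetch_blocks(block_id: str) -> list[dict]:
    blocks, cursor = [], None
    while True:
        params = {"page_size": 100}
        if cursor:
            params["start_cursor"] = cursor
        resp = requests.get(
            f"https://api.notion.com/v1/blocks/{block_id}/children",
            headers=HEADERS,
            params=params,
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()
        blocks.extend(data["results"])
        if not data.get("has_more"):
            return blocks
        cursor = data["next_cursor"]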
I always like: talk to me about the last time you got paged into an issue. You can find out a lot about the uglier side of what you’re getting into with 15 minutes and that question as a lead.
“paged”? Last time I carried a pager was nearly 20 years ago. Do you want an answer from the distant past?
Paged is still frequently used in some circles and is tied in and reinforced with the branding of the common PagerDuty product. In my circles, people mostly just say they were called. Some companies have bespoke incident management tools, so they say they were XYZ’d where XYZ is some obscure tool. Paging may not translate as well internationally.
And the verb “paging” existed well before pagers were invented. The term is more generic than you seem to be suggesting.
The parallel between societies and software is a great find! The big thing that I disagree with though is:
and a fresh-faced team is brought in to, blessedly, design a new system from scratch. (…) you have to admit that this system works.
My experience is the opposite. No customer is willing to work with a reduced feature set, and the old software has accumulated a large undocumented set of specific features. The new-from-scratch version will have to somehow reproduce all of that, all the while having to keep up with patching done to the old system that is still running as the new system is under development. In other words, the new system will never be completed.
In short, we have no way to escape complexity at all. Once it’s there, it stays. The only thing we can do to keep ourselves from collapse as described in the article is avoid creating complexity in the first place. But as I think is stated correctly, that is not something most organisations are particularly good at.
No customer is willing to work with a reduced feature set…
Sure they are, because the price for the legacy system keeps going up. They eventually bite the bullet. That’s been my experience, anyway. The evidence is that products DO actually go away, in fact, we complain about Google doing it too much!
Yes, some things stay around basically forever, but those are things that are so valuable (to someone) that someone is willing to pay dearly to keep them running. Meanwhile, the rest of the world moves on to the new systems.
Absent vandals ransacking offices, perhaps this is what ‘collapse’ means in the context of software; the point where its added value can no longer fund its maintenance.
Cost is one way to look at it, but it’s much harder to make this argument in situations like SaaS. The cost imposed on the customer is much more indirect than when it’s software the customer directly operates. You need to have a deprecation process that can move customers onto the supported things in a reasonable fashion. When this is done well, there is continual evaluation to reduce the bleeding from new adoption of a feature that’s going away while migration paths are considered.
I think the best model for looking at this overall is the Jobs To Be Done (JTBD) framework. Like many management tools, it can actually be explained to a software engineer on a single page rather than requiring a book, but people like to opine.
You split out the jobs that customers need done, which are sometimes far removed from the original intent of a feature. These can then be mapped onto a solution, or the solution can be re-envisioned. Many people don’t get to the bottom of the actual job the customer is currently doing, and then they deprecate with alternatives that only partially suit the task.
My experience is the opposite. No customer is willing to work with a reduced feature set
Not from the same vendor. But if they’re lucky enough not to be completely locked in, once the first vendor’s system is sufficiently bloated and slow and buggy, they might be willing to consider going to the competition.
It’s still kind of a rewrite, but the difference this time is that one company might go under while another rises. (If the first company is big enough, they might also buy the competition…)
Failing to consider jQuery a “framework” seems arbitrary and wrong.
Yes, jQuery provided massive compatibility fixes that the very fad-driven “first frameworks” from the article lacked, but to then dismiss it as just a compatibility layer is nonsense. jQuery was so widely deployed that people pushed (well intentioned, but wrong :) ) for it to be included as part of the literal standards.
Beyond those core compatibility and convenience functions, jQuery featured an extensive plugin architecture, and provided UI components, reactive updates, and various form and other user-data supports. All of which sound not significantly different to the feature set of those “frameworks”.
This article then goes on to dismiss Ruby on Rails with a single sentence. Given Ruby on Rails pretty much created the entire concept of the integrated frontend and backend, with integrated databases, that seems bizarre?
Honestly, reading this post felt like it was written by someone who had encountered a few fad frameworks, added a few of the still-major ones, and then called that a history. Honestly, I don’t think this is worth spending time reading if your goal is to actually learn something about the history of web frameworks.
I disagree with (1), unless the post were updated to state that it is talking about a specific framework architecture, rather than “frameworks” in general.
As I think about it, I agree on (2), because now that I recall, people were still using separate libraries in client code. What I was thinking of when writing the above was the adoption of the concept of “application frameworks”, which Ruby on Rails is a major early driver of, but as you say it didn’t actually interact with JS directly; you were using frameworks like jQuery, etc. in the client, and Rails was just providing the application data and state.
I’ll take response (3) as a mea culpa :D
I think it’s a fair point on further reflection. By the time I was starting “application frameworks” were just the default, Ruby on Rails and Django had already been around and matured, modern JS frameworks were also trying to be entire application frameworks, etc. And in our modern context, when we refer to frameworks, we’re usually talking about application frameworks.
But that doesn’t mean UI widget frameworks are any less of a “framework”; it’s just that we were collectively thinking about software differently back then. I unfortunately don’t have that context; to me jQuery was always a “library” whereas Backbone was a “framework”, but I can totally see your perspective here. If I have time I’ll try to go back and work that in somehow in my discussion of the first era, thanks for reading and commenting!
I really think we have failed at having the required communication structure for an Internet forum :D
Maybe we’re all a bit wrong and right here? That would seem to be the theme of eternal September in JavaScript frameworks.
Put another way, I think it’s possible for someone to know current JavaScript frameworks quite deeply and still miss the history or the underlying terrain that’s shaped it. A few things I thought of reading this:
The Google Closure Library had its own component framework, goog.ui.Component, and also shipped with a template library. These things had been used in Gmail and Google Docs for some time - though I’m not sure whether they’re still used.
Totally! The author fully acknowledges a knowledge gap in the era you’re commenting on and invites people to give exactly the kind of info you’re responding with. :D
Web components… have indeed not really made a ton of progress. There’s more motion on some of the fundamental problems in that space in the past couple years but they were stuck for a very long time. My own take is that they are trying to do something quite different from what the component-oriented view layers and frameworks were trying to do: the APIs are basically “How would you implement a new built-in element?” rather than picking up the themes around reactivity etc. that the view-layer-frameworks tackled. We’ll see if and how they change going forward.
FWIW, I admitted in the “before times” section that I didn’t have a ton of knowledge of how everything worked prior to 2012 or so, when I started coding 😅 I definitely simplified and missed bits of history there for sure, but it’s hard to capture everything without writing a novel (or having been there).
Re: Google’s tooling, that’s amazing to hear about now, but I don’t believe these tools were really adopted by the community. At least, I’ve never heard of an app other than Google ones being built with them. I did point out that Google proved JS frameworks could work though, with Gmail being the first app most people seem to remember as being the moment when they realized how powerful JS had become.
Re: Web components, there has been a lot of progress here actually! In my work on decorators I’ve been collaborating closely with the folks who are pushing them forward, such as Lit and Fast; they are in fact a standard and part of the browser now. That said, they are severely limited compared to mainstream JS frameworks, I think in large part because the platform moves much more slowly than the ecosystem as a whole. But, if we step back, this is similar to the pattern we saw with View-Layer frameworks - letting patterns evolve on their own, and adopting the best ones. Some of the current patterns they’re working on include:
Given time, I think they still have a lot of potential, but I also think that they’re not really usable for larger-scale apps at the moment (I had a particularly painful experience with Stencil.js last year, would not go back). But for smaller components and UI widgets, they’re pretty great!
Re: Google’s tooling, that’s amazing to hear about now, but I don’t believe these tools were really adopted by the community. At least, I’ve never heard of an app other than Google ones being built with them. I did point out that Google proved JS frameworks could work though, with Gmail being the first app most people seem to remember as being the moment when they realized how powerful JS had become.
It’s probably worth elaborating on the reasons for this because they repeat with different technologies:
Re: Google’s tooling, that’s amazing to hear about now, but I don’t believe these tools were really adopted by the community. At least, I’ve never heard of an app other than Google ones being built with them. I did point out that Google proved JS frameworks could work though, with Gmail being the first app most people seem to remember as being the moment when they realized how powerful JS had become.
For what it’s worth, ClojureScript heavily depends on the Google Closure Compiler to perform optimization of generated JS code, and the official docs encourage people to use some of the features from the Google Closure Library.
The new MacBook Pro, which I have, is all this and more. A clean build of my work project (C++, Xcode) is about 50% faster than on my 2018 model, with no audible fan noise. In fact I don’t recall ever hearing the fan. I’m getting multiple days of battery life, too. (This is with the baseline M1 Pro CPU, not the Max.)
I’ve been through every Mac CPU change, 68000 to PowerPC to x86 to ARM, and each one gets more transparent. The only time I noticed this one was when I first launched an x86 app and the OS told me it was going to install the emulator. Everything I’ve recompiled and run has worked fine.
Also kudos to Apple for backing away from earlier design decisions — “butterfly” keyboard, touch-bar — that turned out to be bad ideas. The Apple I worked at probably wouldn’t have done that.
I liked the new MacBook Pros on paper, but when I touched them in the store I took my $3k and went home. I don’t care about ports that are useless to me; USB-C is all I want (I do software). For me the MacBook Air with M1 is much better in terms of form and weight. But this is my use case.
I don’t need an SD card reader, and I could do without the extra weight, but I do need at least a 15” display to do work. ¯\_(ツ)_/¯
Ditto to all of this. I spend a lot of time in and around WebGL graphics. One of our in-house apps makes my old MacBook and the laptops of all my colleagues sound like the decks of aircraft carriers. It’s completely silent on my new M1 MacBook Pro.
I was frankly a little nervous about getting this machine. I need it to run everything from Blender, a .NET server with a proxied webpack dev server on top, various node servers, to a headless WebGL engine. I was pleasantly surprised to find it does all but the last of those things without breaking a sweat. Things that run natively feel instantaneous. Things that run on Rosetta 2 still feel snappier than before. Industry adoption for Apple Silicon is moving apace. I’m pretty sure I’m just a dependency upgrade away from getting the last item on my list working and I can finally shed my old machine for good.
The most revolutionary thing about the experience of using the new MacBook Pro isn’t the features or build quality per se (although they’re both excellent). It’s that the performance bottleneck in my day-to-day work is squarely back on the network. I haven’t felt that way in a while.
I am really disappointed that Apple removed both the touchbar and the butterfly keyboard. The butterfly keyboard was the best feeling keyboard I have ever used, bar none, and the touchbar was very useful in some scenarios, while the physical F keys are absolutely useless to me.
The touch bar was an interesting idea, but without proper haptics (i.e. scan with your finger, more forceful click as you actually press), I don’t think most people would have bought into it.
I thought the butterfly keyboard was nice on the 12” MacBook (they should bring it back w/ M1, IMHO), but I wasn’t as impressed with it on the Pros… and the reliability issues sank that.
I forget if I’ve mentioned it on here before, but HapticKey is worth trying. It uses the magnets in the touchpad to emulate feedback on the Bar, and it works better than you’d expect.
Oh yeah, the version on the 12’’ MacBook was the best. The newer version on the Pros wasn’t quite as good, but to me it was still better than the current keyboard.
As for reliability, mechanical keyboards have atrocious reliability compared to both regular keyboards and, I suspect, the butterfly keyboards as well, but that’s not the reason why people use mechanical keyboards. They simply like the feel and accept the tradeoff. I would accept the same tradeoff for my laptop’s keyboard.
As for reliability, mechanical keyboards have atrocious reliability compared to both regular keyboards and, I suspect, the butterfly keyboards as well, but that’s not the reason why people use mechanical keyboards.
Do you have some evidence for this? I don’t have numbers, but I have numerous mechanical keyboards and the only one that has failed was stepped on, whereas I experienced multiple failures of my MacBook butterfly keyboard.
Unfortunately I am not aware of any real studies, so all I have is anecdotal evidence, albeit I have a lot of anecdotal evidence.
As for reliability, mechanical keyboards have atrocious reliability compared to both regular keyboards
Just to clarify, you are only referring to mechanical keyboards on laptops and not to external mechanical keyboards?
No, I am referring to external keyboards. I didn’t even know mechanical keyboards on laptops were a thing.
There were, though I’m going back to the 486/Pentium era (easily portable might be a better description than what we think of now). The current ones I know of with mechanical keyboards are from Razer and Asus.
My experience with mechanical keyboards differs; even cheap ones are longer-lasting than any laptop keyboard I’ve seen since some of the old IBM Thinkpads and Toshiba Porteges.
The touch bar was an interesting idea, but without proper haptics (i.e. scan with your finger, more forceful click as you actually press), I don’t think most people would have bought into it.
The most important feature of the touchbar was that it could run Doom. It’s still a mystery why touchbar-equipped MacBooks didn’t fly off the shelf after this became known. :(
I really wanted to like the touch bar, but my fingers rarely ended up using it, maybe because of the lack of tactile feedback. Also, I’ve long been accustomed to binding F1..F8 to switch between my most-used apps, so having other functions on the touchbar interfered with my muscle memory.
You’re honestly the first person I’ve ever heard praise the butterfly keyboard, or even say it felt any better than the old type. I kept making typing mistakes on it, and even bought a little KeyChron mechanical BT keyboard to use when I wanted to do a lot of typing away from my desk.
Well, add me to the likes for the feel. Besides that, it was an absolute disaster. I had a MacBook Pro with a butterfly keyboard from before they added the seals and keys would constantly get stuck or wouldn’t actuate correctly (I guess because a speck of dust got in).
Though most reviews don’t mention this, the new M1 Pros have a different scissor mechanism than the M1 Air and prior scissor MacBooks, and I love it. It is much more ‘clicky’ than the earlier scissor keyboards and feels somewhat closer to mechanical keyboards.
I am really disappointed that Apple removed both the touchbar and the butterfly keyboard.
The butterfly keyboard seems to have had a lot of reliability issues, even after they made some changes to it.
Why not bind the actions you wanted to the F keys? macOS has very good support for binding actions.
I’m the first to urge caution in upgrades, but without highlighting actual breaking changes this seems like FUD.
Some of us hang out in forums where people literally start posting minutes after a Python release that they don’t understand why NumPy isn’t installing on the new version.
Waiting at least a little bit for the ecosystem to catch up is sound advice.
I don’t understand why you say that when the article was very clearly a meta-discussion of how to approach Python version upgrades. It is not asking users to hold off indefinitely, but instead is reacting to the availability and how that plays out with updates throughout the ecosystem.
A “product manager” for Python could take a lot away from how clearly the pain points were laid out. As a platform, it’s advantageous for Python to tackle a lot of the issues pointed out, but it’s hard because of the number of stakeholders for things like packages. Getting a Docker image out more quickly seems like low-hanging fruit, but delaying a few days could perhaps be intentional.
For what it is worth, the Docker container, like many very popular containers on the official Docker registry, is in fact owned and maintained by the Docker community themselves. I am unsure if it is really their duty to do that.
Many of the things listed in the article are indeed painful to deal with, but for some of them I’m not sure the PSF is really the right entity to have had them fixed on launch day.
edit: clarified that it is the Docker community that maintains it, not Docker the corporate entity.
Also, as the author suggested it could be, it’s fixed already:
Digest: sha256:05ff1b50a28aaf96f696a1e6cdc2ed2c53e1d03e3a87af402cab23905c8a2df0
Status: Downloaded newer image for python:3.10
Python 3.10.0 (default, Oct 5 2021, 23:39:58) [GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
They had to hit publish pretty quickly to release that complaint while it was still true.
Some of the concerns seem reasonable, for example the tooling catching up with the new pattern matching syntax blocks (match, case). If you use the popular Black code formatter, for example, it doesn’t yet handle pattern matching (and it looks like it’s going to be a bit of a job to update that).
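For anyone who hasn’t looked at 3.10 yet, this is the new syntax those tools have to learn to parse - a trivial example:

# Python 3.10 structural pattern matching (the match/case blocks mentioned above).
def describe(command: str) -> str:
    match command.split():
        case ["go", direction]:
            return f"moving {direction}"
        case ["look"]:
            return "looking around"
        case _:
            return "unknown command"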
osquery reminds me a lot of WMI’s query language. Likewise, in the IBM i world, there’s been a big push to expose system APIs as SQL functions or tables, to make it easier for DBAs and Unix-side programs to access said APIs.
I think assuming relational DBs need SQL is kinda limiting. QUEL existed, and I think it’d be great if we could have a NoSQL RDBMS - that is, a relational DB (since relational data is good and fits many problem domains) without SQL (which is a bit of a crufty language, and could be replaced with a language learning from its mistakes or simply having none and relying on program-side query builders serializing to a “raw” query language).
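Python already gives a taste of the program-side query builder half of that idea - SQLAlchemy Core builds queries as expression objects and only serializes them to SQL at the last moment, so in principle the same programming model could target a less crufty wire language. A rough sketch (table and column names made up; SQLAlchemy 1.4+ style):

# Program-side query building: the query is an expression tree; SQL only
# appears at serialization/execution time.
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine, select

metadata = MetaData()
users = Table(
    "users",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(100)),
    Column("age", Integer),
)

engine = create_engine("sqlite:///:memory:")
metadata.create_all(engine)

# Build the query as data, then execute it.
query = select(users.c.name).where(users.c.age >= 18).order_by(users.c.name)
with engine.connect() as conn:
    adults = conn.execute(query).fetchall()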
There is also logparser which may be a better example of how it can be applied across many types of input/source.
I think assuming relational DBs need SQL is kinda limiting. QUEL existed, and I think it’d be great if we could have a NoSQL RDBMS - that is, a relational DB (since relational data is good and fits many problem domains) without SQL (which is a bit of a crufty language, and could be replaced with a language learning from its mistakes or simply having none and relying on program-side query builders serializing to a “raw” query language).
We kind of have that over in the Windows world via things like LINQPad, or just the LINQ APIs in .NET in general. You can use LINQ to query databases, WMI, arbitrary raw data, and so on, irrespective of source, and with relatively unsurprising performance. I could also make a case with a straight face that MongoDB is indeed, or at least has the potential to be, a NoSQL RDBMS.
But you’re right that we’re not really there on either front in a general sense.
I am intrigued by the framing of the Sturm und Drang about the state of the web as being driven, to some significant degree, by politics internal to Google.
As I stated earlier this week, promo packets are what’ll do in the web.
I think a lot of developers simply lack the interest or context needed to process the realpolitik that shapes and distorts the fabric of spacetime for our industry.
If you refuse to understand that Google’s whole business is threatened by adblockers, you’d probably be confused at some of the changes to the web extensions webRequest API that make that work harder. If you don’t understand the desire to make SEO, crawling, and walled gardens easier, AMP probably seemed like a great return to roots.
Other companies do this too, of course. If you didn’t know about OS/2 Warp some of the Windows APIs probably seemed weird. If you don’t know about Facebook trying to own everything you do then the lack of email signup for Oculus probably seems strange. If you invested heavily into XNA you probably got bit when internal shifts at Microsoft killed XNA off. If you don’t know about internal Canonical and RHEL shenanigans, systemd and other things probably are a surprise.
Developers need to pay as much attention to the business dependencies as the technical ones.
When you’re doing a performance review at Google, you can request a promotion. If you do this, you put together a ‘packet’ including the impactful work you’ve done. New work is rewarded heavily, maintenance less so. For senior engineers, shipping major projects with an industry wide impact is a path to promotion.
Which means Google rewards doing something new for the sake of doing something new. It’s tremendously difficult to get promoted by improving older systems. Crucially, you often need to demonstrate impact with metrics. The easiest way to do that is sunset an old system and show the number of users who have migrated to your new system, voluntarily or otherwise.
Is there any material evidence suggesting that someone’s promotion is the reason that chrome will remove alert? Obviously google will push the web in the direction that juices profit, but an individual promotion? Seems like a red herring.
It is often difficult to pick it apart as it’s rarely a single person or team. What happens in large organizations is that there is a high-level strategy and different tactics spring from that. Then, there are metrics scorecards, often based on a proxy, which support the tactics delivering the strategy. This blurs the picture from the outside and means that rarely one person is to blame, or has singular control over the successes.
I haven’t followed the alert situation very closely, but someone familiar with large organizations can get a good read from the feature blurb. There is a strong hint from the language that they are carrying a metric around security, and possibly one around user experience. This translates to an opportunity for a team to go and fix the issue directed by the metrics since it’s quantifiable. The easiest way to start might be to work back from what moves the metric, but this is a very narrow perspective.
Developers may know what the best things to work on are, having been developers in that area for 10 years, but their impact tracks towards those top-level strategies. Management can’t justify the promotion because someone else is very focused on metrics that drive the strategy.
In lots of places this is called alignment. Your boss may only support X amount of work on non-aligned projects if you do at least Y amount of work on aligned projects. A classic big-company alignment example is a talented person in a support department. If they can fix your biggest problem at the source, it’d be best to let them do this. However, the incentives push towards assigning them to solve N support cases per week and hit other metrics designed for lower-skilled individuals instead of working on fixing root causes. Eventually, they leave unless you have smart management taking calculated risks, managing the metrics at the team level so the team is not noticed working the way it wants, seeking paths for talented people to work on the product, etc.
Many of us understand how metrics and incentives at tech companies work. Was just pointing out that it’s a bold claim to assume that chrome is removing alert due to an individual seeking a promotion.
I think about this in terms of my time at Apple – like, people ascribed all kinds of causes to various seemingly peculiar Apple decisions that to those of us on the inside were obvious cases of internal politicking leaking out.
WHATWG is a consortium of multiple companies so I’m curious why everyone is pointing the finger at Google here, or is the assertion that Google has so much power over the WHATWG and Chrome at this point that there’s no ability for other companies to dissent? (And I mean we all know that the W3C lost and WHATWG won so a consortium of vendors is the web.)
The multiple companies are Apple, Google, Microsoft, and Mozilla (https://whatwg.org/sg-agreement#steering-group-member, section 3.1b). Of the three others, only Apple develops a browser engine that is not majority funded by Google.
I’m pretty sure Apple develops a browser engine that is majority funded by Google: https://www.theverge.com/2020/7/1/21310591/apple-google-search-engine-safari-iphone-deal-billions-regulation-antitrust
That’s some pretty weird logic.
The browser engine Apple creates is used for a whole bunch of stuff across their platforms, besides Safari:
Mail, iMessage, Media Store fronts, App Store fronts.. Those last two alone produce revenue about 4x what Google pays Apple to make it the default.
Do I wish they’d get more people using alternatives and pass on the google money? Sure. Is there any realistic chance their ability to fund Safari and/or Webkit would be harmed by not taking the google money? Seems pretty unlikely.
It’s true-ish. But I’m sure the most profitable company in the world probably doesn’t require that money and would be able to continue without.
Right I was just wondering if folks think the WHATWG is run solely by Google at this point. Thanks for the clarification.
The point is that many of those new APIs don’t happen in standards groups at all. Exactly because they’d require more than one implementation.
Yes, this. Google’s play here is less about controlling standards per se (ed: although they do plenty of that too) and more about getting everyone to treat Whatever Google Does as the standard.
WHATWG was run at inception by a Googler and was created to give Google even more power over the standards process than the hopelessly broken W3C already gave them. That they strong-armed Mozilla into adding their name, or that Apple (who was using the same browser engine at the time) wanted to be able to give feedback to the org, doesn’t change the Googlish nature of its existence, IMO
Like it or not, Google is the www. It is the driving force behind the standards, the implementations (other than Safari), and the traffic that reaches websites.
It would be odd if Google’s internal politics didn’t leak into the medium.
Right, it’s just … one of those things that is obvious in retrospect but that I would never be able to state.
A lot of people seem to think that standards work is a bit like being in a university - people do it for the love of it and are generally only interested in doing what’s best for all.
In reality it’s a bunch of wealthy stakeholders who realize that they need to work together for everyone’s best - they’re not a monopoly, yet - but in the meantime it behooves them to grab every advantage they can get.
As mentioned in the post, standards work is hard and time-consuming, and if an organisation can assign a dedicated team to work on standards, that work will get implemented.
You see the same with “fullstack developers”, who are usually either backend or frontend developers who can also do the other thing a bit.
I’ve actually done quite a bit of sysadmin stuff on production systems over the years, as well as a lot of JavaScript frontend stuff. But at the end of the day I’m mostly just a backend/systems programmer who can also do a bit of the other stuff.
IME most “fullstacks” are better at frontend than those who would cling to being “frontend-only” because they still care and are willing to learn things.
My big beef about the word “fullstack” is that it presumes webdev is the only dev. Most “fullstacks” are missing experience in many stacks, such as OS, embedded, rich client, or networks other than HTTP
I agree with your sentiments as I was thinking about how I should update my CV. This devolved into thoughts on how other people perceive a person’s skills and how I’ve tried to think about them when hiring.
The model from Football Manager (formerly Championship Manager) for player attributes came to mind: it summarizes a player’s technical, mental, and physical attributes on a common numeric scale.
This could be applied to any set of roles you are hiring and the summary visualization can be overlaid for comparison. You could also use it as a way to communicate the requirements for a preferred hire. It’s all food for thought, but I think it’s an interesting way to think about skillsets given how complex we’ve made the requirements.
I believe in mandatory reviews, but I usually communicate what I expect from my reviewer when I request a review. Likewise as a reviewer I ask what’s expected of me when I don’t already know. I don’t think mandatory reviews create a culture of carelessly tossing reviews over the wall, and if you stop doing mandatory reviews I don’t think people will magically try harder.
One policy doesn’t make or break a culture of developing quality code.
I worry that a standard process gets in the way of building a flexible culture for collaboration.
This gets to the heart of my problems with the piece, I think.
Lack of process does not scale: in an ideal world everyone would be disciplined in their approach but as the size of an organisation increases you need to agree on what “disciplined” looks like.
Even for small projects, I have on days and off days and I appreciate technology that can prompt me towards the better approach. The way I often phrase this in discussions about tooling is: it should be harder not to use it.
That comes with a corollary: you should still be able to not use it! No process is sacred or perfect for every situation and it’s important to be flexible. If you’re bypassing something regularly, maybe it’s time to think about why it’s not working. A good process is one that makes your life easier.
Maybe this is what the author is trying to get to, and I’m missing the point. I’m certainly not arguing that enforcing PR review is the only way to do things.
But I’m wary of arguing that something is all bad just because we might want to do something different sometimes.
That comes with a corollary: you should still be able to not use it!
This is fine in many situations. The problems crop up when developers do not fully understand their commitments. These should always be explicit, but a significant number of organizations work with customers and industries complicated enough that understanding the domain becomes as big a challenge as the project itself. You also have the human elements of being hired into a role that’s different from the one presented, organizational changes, etc.
The rules and processes in an effective organization, where “effective” means meeting customer requirements, need to have some level of enforcement. If it were purely a matter of size, then the option to skip them would be more acceptable, but in many situations it isn’t. I’m ignoring the large set of organizations that operate with risk by unknowingly violating regulations, or that have just hit the point where their legal department will finally force them to snap to a process.
Quality of life is an important consideration for developers and operators. In the current tech market, this group tends to be well positioned to dictate a lot about what a good process means from their perspective. But when the drive towards “process is optional” becomes shorthand for “developers will opt out of whichever processes they feel like”, it starts to challenge our level of professionalism as an industry.
This is very true! Some process should not be skipped - and we’re back to expecting people to use good discipline and judgement, which doesn’t always work.
Perhaps in such cases that should read: “you should be able to skip it if you can prove you have a damn good reason for doing so”?
Treating the skippage of very important steps as an incident and holding a plain post-mortem is a good start. It’s less about the skipper proving themselves and more about exploring the circumstances that led to the skippage.
Perhaps in such cases that should read: “you should be able to skip it if you can prove you have a damn good reason for doing so”?
If you act with good intent and break a regulatory requirement, you have still broken a regulatory requirement. Depending on circumstances this is still a serious issue and not infrequently investigation finds that it was avoidable. It is much better to pause and collaborate with others to overcome the impasse even when you are uncomfortable because the consequences are not always immediate, or direct.
I agree. Processes like mandatory code review are important insofar as they document your beliefs and knowledge about what was best at the time the decision was made. Beliefs and knowledge change with evidence and circumstances, and so will your processes.
The option to duck out of it – if the situation so demands – is also important to me. I think the only road to long-term success is to hire smart people and give them freedom under responsibility.
Of course, you could argue that doesn’t scale, and I think that’s not a problem with the principle, but simply a consequence of the fact that human creative work does not scale, period. You can’t construct a well-functioning 500 person product development organisation. In order for it to work at all, you’d have to extinguish many of the things that make creative teamwork powerful.
Of course, the alternative might be better: put the 500 people into small mini-organisations that are as independent as possible, and essentially run a little company inside the company (driving things from user need to sunset), but with goals determined by larger, coherent strategic initiatives.
Great article! I might try async requests sometime.
When using Requests, I throttled to avoid overwhelming the remote server. As in the BASIC days, I just used "sleep" to guarantee a minimal amount of time between requests. That has the advantage of being less likely to be buggy or behave unpredictably than custom throttling code of my own.
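For anyone curious, a minimal sketch of that approach (the URL list and interval here are made up for illustration, not taken from the article):

```python
import time
import requests

MIN_INTERVAL = 2.0  # illustrative: minimum seconds between requests

def fetch_all(urls):
    """Fetch each URL in turn, sleeping between requests so the remote
    server never sees more than one request per MIN_INTERVAL seconds."""
    responses = []
    for url in urls:
        responses.append(requests.get(url, timeout=10))
        time.sleep(MIN_INTERVAL)  # crude but predictable throttling
    return responses
```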
Adding sleep is good, especially if you’re trying to remain undetected.
However, people typically gravely underestimate what a server can handle. Even at home, servers can handle 10k connections/s if configured properly.
There’s something to be said about being nice, but in general I say that you can hit things as hard as you want and the server won’t stutter.
There’s something to be said about being nice, but in general I say that you can hit things as hard as you want and the server won’t stutter.
That works fine when there is simple rate-limiting and tracking on the server end. When you are dealing with larger APIs, or services that might be sensitive to request rates (e.g. LinkedIn), then you need to be aware of how they may take action later. Your client may appear to be working the first time around and then you get blocked later. It is worth understanding more about the service you are making requests to and taking a cautious approach because the response may be more sophisticated than you are prepared to deal with.
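One cautious pattern along those lines (the status codes and delays below are assumptions for illustration, not any particular service’s documented behaviour) is to watch for rate-limit responses and back off instead of retrying immediately:

```python
import time
import requests

def cautious_get(url, max_retries=5):
    """GET a URL, backing off when the server signals rate limiting.
    Treats 429/503 as "slow down"; real services may respond differently."""
    delay = 1.0
    for _ in range(max_retries):
        resp = requests.get(url, timeout=10)
        if resp.status_code not in (429, 503):
            return resp
        retry_after = resp.headers.get("Retry-After", "")
        # Honour a numeric Retry-After header if present, otherwise back off exponentially.
        wait = float(retry_after) if retry_after.isdigit() else delay
        time.sleep(wait)
        delay *= 2
    raise RuntimeError(f"still rate limited after {max_retries} attempts: {url}")
```

Even then, as the parent comment points out, a service may evaluate your client over time and only block you later, so backoff alone is no guarantee.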
you get blocked later
Yes, please always check this first; you don’t want to run into captcha requests (yt-dl..).
It seems very weird to even have a “CPU MHz” graph, as if the hertz mean anything at all - especially since it’s comparing AMD CPUs and Intel CPUs?
I have no idea what “Events per second” means. What kind of event? It doesn’t seem like that’s explained anywhere?
I don’t know what’s going on with the memory read and write measurements. Obviously the average and minimum time for a single memory read or write operation is going to show up as 0ms? Isn’t milliseconds way too coarse grained for that kind of measurement? What is counted as a “memory read operation” or “memory write operation” anyways? Does it measure a read from cache or does the benchmark make sure to actually read from main memory? Wouldn’t memory throughput and memory latency (with separate measurements for read and write) make more sense than “memory operations per second” and “milliseconds per memory operation”?
Same with “File I/O”; isn’t latency and throughput more interesting than just ops per second? Is the “operations” the same as what’s measured when we measure IOPS or is it something else? What is the “minimum/maximum/average”? Is the “minimum time for a read operation” just measuring the time it takes to read a page from the page cache (aka just a benchmark of the memory system) or does it make sure the files aren’t in the page cache? And again, clearly milliseconds is way too coarse grained for these measurements given that they’re all at 0?
Am I missing something or do most of these benchmarks seem underexplained and not that well thought through? I like the concept, seeing a wide variety of benchmarks on the various VPSes could be interesting, but I don’t really feel like I can conclude anything from the numbers here. Maybe running the Phoronix benchmark suite on the different $5/month VPSes could provide some more useful results.
I’ve flagged this as spam because it’s some vague hand waving to get you to click on the referrer links at the bottom of the page. It looks as if it’s really just there to get referrer kick-backs.
Am I missing something or do most of these benchmarks seem underexplained and not that well thought through?
Aren’t you talking about most benchmarks that do the rounds? Benchmark blogs never seem to learn from the earlier criticism; they keep making the same kinds of mistakes.
A better test of VPS usage, especially when it’s a single node, might be to see how many requests per second you can get out of a WordPress instance on it. It’s far from perfect, but that’s a big reason they exist. Ideally, you’d add in some problematic code and see how well that performs. That was actually an idea that Ian Bicking had suggested at PyCon long ago for Python performance comparisons because that’s what is happening when most people need to investigate performance.
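As a rough sketch of what such a test might look like (the URL and duration are placeholders, and a real benchmark would want concurrency, warm-up, and error handling):

```python
import time
import requests

def requests_per_second(url, duration=30.0):
    """Hammer a single page for `duration` seconds over one connection
    and report the achieved request rate; a very rough probe, not a
    substitute for a proper load-testing tool."""
    session = requests.Session()  # reuse the TCP connection between requests
    count = 0
    start = time.monotonic()
    while time.monotonic() - start < duration:
        session.get(url, timeout=10)
        count += 1
    return count / (time.monotonic() - start)

# e.g. requests_per_second("http://example-vps/wordpress/")  # placeholder URL
```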
Am I missing something or do most of these benchmarks seem underexplained and not that well thought through? I like the concept, seeing a wide variety of benchmarks on the various VPSes could be interesting, but I don’t really feel like I can conclude anything from the numbers here.
You’re not missing anything.
Another factor is that VPSs can have pretty variable performance, which is why he used three instances and “averaged the results where applicable”. A provider giving consistent performance vs. a provider with large differences seems like an interesting data point. Also n=3 seems pretty low to me.
And things like “Maximum (ms)” for “CPU” (maximum what? The time an “event” took?) could be a single event that’s an outlier, and the mean average for this kind of statistic isn’t necessarily all that useful. You really want a distribution of timings.
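For example, a small sketch of the kind of summary that’s more informative than a bare mean, assuming you’ve collected the raw per-event timings yourself (everything here is illustrative):

```python
import statistics

def summarise(timings_ms):
    """Summarise per-event timings with percentiles rather than a single
    mean, so a handful of outliers can't dominate the headline number."""
    qs = statistics.quantiles(timings_ms, n=100)  # 99 cut points: 1st..99th percentile
    return {
        "p50": qs[49],
        "p95": qs[94],
        "p99": qs[98],
        "max": max(timings_ms),
    }
```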
I did find the scripts he uses on his GitHub though; basically it just runs sysbench.
I agree something like this could be useful, but this is not it. Quite frankly I’d say it’s borderline blogspam.
It’s not surprising that a US-centric definition of esquire was selected, but when you dive into the protocols of the British usage, things get even more complicated.