Redis is an example of this paradigm. Today, most cloud providers offer Redis as a managed service over their infrastructure and enjoy huge income from software that was not developed by them.
This makes no sense. Redis was not developed by Redis Labs either; they only hired Antirez a few years ago. Before that, they were doing exactly what they criticize today. I would even say they weren’t the best actor in that space, e.g. OpenRedis was founded by people truly involved in the community…
I won’t say any more before I see a direct take from Salvatore on this issue.
In any case, commercially, I think this is a huge mistake. If they persist, Redis is going to be forked and the fork will eventually win, à la MariaDB.
EDIT: I had misread the announcement, this is only about modules which will be Apache 2 + Commons Clause, while the Redis core will remain BSD. I am fine with that even though I think AGPL would make more sense if you want those modules to become popular. Enforcing a monopoly on hosting is never a good idea.
From various comments I can read online it looks like I was not the only one to misunderstand. That Redis was unaffected should probably have been the first line of that post, in bold…
EDIT 2: Salvatore himself is tweeting about the issue right now. https://twitter.com/antirez/status/1032192722755571714
MariaDB isn’t the bastion of open source people make it out to be.
The original claims were that a) Oracle would close-source MySQL, and/or let it stagnate to force adoption of their commercial offerings, and thus b) MariaDB would fork MySQL and maintain feature parity, in an open source model. Neither has really held up.
Honestly, if your (the reader, not @catwell) app/company needs MySQL and isn’t paying for a commercial licence from Oracle, IMO you’d be stupid not to use either Percona Server or Percona Cluster. Actual feature parity (i.e. they constantly rebase from upstream) and a clear business model: pay for support/advice, the software is completely free.
clear business model: pay for support/advice, the software is completely free.
That business model is what Redis Labs is trying to address. There’s a serious problem for companies with that model if a large hosting provider like AWS, which more and more people are moving to, can come in and offer an “as a service” version that cuts off the support revenue stream. At that point, AWS can benefit from the work that said company is doing without contributing anything back.
Open source software in general and open source business models often assume that you won’t have parasitic players in the market who derive value from the work of others but contribute nothing in return. The current system is going to have to change eventually to account for that.
It might end up being that all open source software is produced by companies that aren’t “product” companies. For example, Google spinning out K8s without attempting to make money off the software, or LinkedIn getting an advantage out of opening up Kafka. In that world, eventually, there will be very few companies like Redis Labs, CockroachDB, or InfluxDB that are trying to be product companies. The large move “to the cloud” that is underway is a huge disruption to that previously-OSS business model. I think a model many will try is to take an open source product, provide closed-source additional functionality around those codebases (thereby sidestepping licenses like the AGPL), and do managed and as-a-service hosting within the big clouds like AWS, GCP, and Azure.
There’s a serious problem for companies with that model if a large hosting provider like AWS, which more and more people are moving to, can come in and offer an “as a service” version that cuts off the support revenue stream.
Percona provides support services for customers who use AWS’s various MySQL-flavoured “DB as a Service” offerings.
This is not that different IMO than what Rackspace did - they took their Ops/Arch experience, and offered it as a service, regardless of who hosts the underlying machines.
Re: AGPL
The 3 modules that were relicensed were previously AGPL. AGPL doesn’t provide the protections that Redis Labs is seeking. Most OSS companies have a business model that revolves around support. If a large hosting provider like Amazon comes in and provides an “as a service” version, that cuts off a primary revenue stream. And if said hosting provider doesn’t produce improvements to the codebase, then the AGPL doesn’t matter.
For my own projects I use cron exclusively.
At work we use cron for system-level tasks (e.g. backups) and Celery for application-level tasks (e.g. periodically poll inventory from warehouses), with RabbitMQ as its backend.
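For context, a minimal sketch of what that kind of Celery setup can look like; RabbitMQ on localhost, the task name, and the interval are all assumptions for illustration:

```python
# celeryconfig.py — a minimal sketch, assuming RabbitMQ on localhost;
# the task name and interval are hypothetical.
broker_url = "amqp://guest:guest@localhost:5672//"

beat_schedule = {
    "poll-warehouse-inventory": {
        "task": "inventory.tasks.poll_warehouses",
        "schedule": 900.0,  # seconds: poll every 15 minutes
    },
}
```

`celery beat` reads `beat_schedule` and enqueues the task on RabbitMQ at each interval; any available worker then executes it.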
Also, think about monitoring those tasks, especially backups. A lot of people don’t and it’s a recipe for disaster. I have started using https://cronhub.io/ recently but there are other similar services such as https://cronitor.io/, or you can roll your own like I used to do.
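As a rough sketch of the roll-your-own variant: run the job, and ping a per-job URL only on success, so the monitoring side can alert when pings stop arriving. The job command and ping URL below are hypothetical.

```python
# Minimal "heartbeat" wrapper for cron jobs: ping a monitoring URL
# only if the wrapped command succeeded.
import subprocess
import urllib.request

def run_monitored(cmd, ping_url=None):
    """Run a shell command; ping the monitoring URL only on success."""
    result = subprocess.run(cmd, shell=True)
    ok = result.returncode == 0
    if ok and ping_url:
        urllib.request.urlopen(ping_url, timeout=10)
    return ok

# In a crontab you would wrap the real job, e.g. (job id hypothetical):
#   run_monitored("pg_dump mydb > /backups/mydb.sql",
#                 "https://cronhub.io/ping/<job-id>")
print(run_monitored("echo backup-done"))  # True
```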
I would like to second this post.
The programming-language/framework-specific scheduling parts don’t matter all that much, but the message bus/backend parts do. RabbitMQ and other AMQP solutions are pretty good; try to avoid a simple key-value-store-based backend such as Redis.
Any specific reason for avoiding Redis/key-value stores? I’ve only had one such experience (resque-php) and the main downside seemed to be the need for polling, but honestly I don’t know if that’s because of Redis or because of resque-php’s implementation. I’d like to hear more about that!
It’s too simplistic. I mean, it works for very basic usage, but once you start caring about things like HA, backups, wider usage (so multiple vhosts, in RabbitMQ terminology), or logging/monitoring, it kind of shows how inadequate it is.
Redis clustering is not that nice. As for introspectability, it’s at the wrong level: you don’t generally care about the key/value parts, you care more about the message bus parts, and since Redis isn’t aware of those it can’t help you with them.
Relaxing with family after a week-long holiday. I will probably progress on my reading of the Haskell book on the train back home tomorrow.
I blog about various things including Lua and distributed systems. I have not been writing nearly enough recently.
No books, but here are a few links you may find interesting as a starting point:
This was from 2012. Arguably, we’re already there. Tons of popular computers run signed bootloaders and won’t run arbitrary code. Popular OS vendors already pluck apps from their walled garden on the whims of freedom-optional sovereignties.
The civil war came and went and barely anyone took up arms. :(
It’s not like there won’t always be some subset of developer- and hacker-friendly computers available to us. Sure, iPhones are locked down but there are plenty of cheap Android phones which can be rooted, flashed with new firmware, etc. Same for laptops, there are still plenty to choose from where the TPM can be disabled or controlled.
Further, open ARM dev boards are getting both very powerful and very cheap. Ironically, it might even be appropriate to thank China and its dirt-cheap manufacturing industry for this freedom since without it, relatively small runs of these tiny complicated computers wouldn’t even be possible.
This is actually the danger. There will always be a need for machines for developers to use, but the risk is that these machines and the machines for everyone else (who the market seems to think don’t “need” actual control over their computers) will diverge increasingly. “Developer” machines will become more expensive, rarer, harder to find, and not something people who aren’t professional developers (e.g. kids) own.
We’re already seeing this happen to some extent. There are a large number of people who previously owned PCs but who now own only locked down smartphones and tablets (moreover, even if these devices aren’t locked down, they’re fundamentally oriented towards consumption, as I touched on here).
Losing the GPC war doesn’t mean non-locked-down machines disappearing; it simply means the percentage of people owning them will decline to a tiny percentage, and thus social irrelevance. The challenge is winning the GPC war for the general public, not just for developers. Apathy makes it feel like we’ve already lost.
Arguably iPhones are dev-friendly in a limited way. If you’re willing to use Xcode, you can develop for your iPhone all you want at no charge.
Develop for, yes, within the bounds of what Apple deems permissible. But you can’t replace iOS and port Linux or Android to it because the hardware is very locked down. (Yes, you might be able to jailbreak the phone through some bug, until Apple patches it, anyway.)
Mind you, I’m not bemoaning the fact or chastising Apple or anything. They can do what they want. My original point was just that for every locked-down device that’s really a general-purpose computer inside, there are open alternatives and likely will be as long as there is a market for them and a way to cheaply manufacture them.
Absolutely! Even more impressive is that with Android, Google has made such a (mostly) open architecture into a mass market success.
However it’s interesting to note that on that very architecture, if you buy an average Android phone, it’s locked down with vendorware such that in order to install what you want you’ll likely have to wipe the entire ecosystem off the phone and substitute an OSS distribution.
I get that the point here is that you CAN, but again, most users don’t want the wild wild west. Because, fundamentally, they don’t care. They want devices (and computers) that work.
Google has made such a (mostly) open architecture into a mass market success.
Uh, I used to say that until I looked at the history and the present. I think it’s more accurate that they made a proprietary platform on an open core a huge success by tying it into their existing, huge market. They’ve been making it more proprietary over time, too. So, maybe that’s giving them too much credit. I’ll still credit them with their strategy doing more good for open-source or user-controlled phones than their major competitors. I think it’s just a side effect of GPL and them being too cheap to rewrite core at this point, though.
I like to think that companies providing OSes are a bit like states: they have to decide where to draw the line between liberty and safety, and that’s not an easy task.
This is not completely true. There are some features you can’t use without an Apple developer account which costs $100/yr. One of those features is NetworkExtension.
friendly in a limited way.
OK, so you can take issue with “all you want” but I clearly state at the outset that free development options are limited.
Over half a million people, or 2 out of every 100 Americans, died in the Civil War. There was little the innocent folks in the general public could do to prevent it or minimize the losses. Personally, I found his “civil war” to be less scary: the public can stamp these problems out if they merely care.
That they consistently are apathetic is what scares me.
Agreed 100%.
I have no idea what to do. The best solution I think is education. I’m a software engineer. Not the best one ever, but I try my best. I try to be a good computing citizen, using free software whenever possible. Only once did I meet a coworker who shared my values about free software and not putting so much trust in our computing devices - the other 99% of the time, my fellow devs think I’m crazy for giving a damn.
People without technical backgrounds care about this stuff even less. If citizens cared and demanded freedom in their software, society would be much better positioned to handle “software eating the world”.
The freedoms guaranteed by free software were always deeply abstruse and inaccessible for laypeople.
Your GNOME desktop can be 100% GPL and it will still be nearly impossible for you to even try to change anything about it; even locating the source code for any given feature is hard.
That’s not to say free software isn’t important or beneficial—it’s a crucial and historical movement. But it’s sad that it takes so much expertise to alter and recompile a typical program.
GNU started with an ambition to have a user desktop system that’s extensible and hackable via Lisp or Scheme. That didn’t really happen, outside of Emacs.
Your GNOME desktop can be 100% GPL and it will still be nearly impossible for you to even try to change anything about it; even locating the source code for any given feature is hard.
I tried to see how true that is with a random feature. I picked brightness setting in the system status area. Finding the source for this was not so hard, it took me a few minutes (turns out it is JavaScript). Of course it would have been better if there was something similar to browser developer tools somewhere.
Modifying it would probably be harder since I can’t find a file called brightness.js on my machine. I suppose they pack the JavaScript code somehow…
About 10 years ago (before it switched to ELF) I used Minix3 as my main OS for about a year. It was very hackable. We did something called “tracking current” (which apparently is still possible): the source code for the whole OS was on the disk and it was easy to modify and recompile everything. I wish more systems worked like this.
Remember when the One Laptop Per Child device was going to have a “view source” button on every activity?
At work, I use a full size Filco Majestouch 2 with MX Browns, with one of the QWERTY international variants. I started using a mechanical keyboard after using one of those terrible flat white Apple keyboards for years, which had started to take a toll on my wrists.
At home, I mostly use the keyboard of my XPS 13 laptop. If I had to work more from home, I would buy an external screen and a tenkeyless Majestouch 2 with MX Browns.
Don’t forget that performance enhancements, security enhancements, and increased hardware support all add to the size over what was done a long time ago with some UNIX or Linux. There’s cruft, and there are necessary additions that appeared over time. I’m actually curious what a minimalist OS would look like if it had all the necessary or useful stuff. I’m especially curious whether it would still fit on a floppy.
If not security or UNIX, my baseline for projects like this is MenuetOS. The UNIX alternative should try to match up in features, performance, and size.
Can you fit it with a desktop experience on a floppy like MenuetOS or QNX Demo Disc? If not, it’s not as minimal as we’re talking about. I am curious how minimal OpenBSD could get while still usable for various things, though.
Modern PC OS needs ACPI script interpreter, so it can’t be particularly small or simple. ACPI is a monstrosity.
Re: enhancements, I’m thinking Nanix would be more single-purpose, like muLinux, as a desktop OS that rarely (or never) runs untrusted code (incl. JS) and supports only hardware that would be useful for that purpose, just what’s needed for a CLI.
Given that Linux 2.0.36 (as used in muLinux), a very functional UNIX-like kernel, fit with plenty of room to spare on a floppy, I think it would be feasible to write a kernel with no focus on backwards hardware or software compatibility to take up the same amount of space.
Will your OS or native apps never load files that were on the Internet or on hackable systems at some point? Or is it purely personal use with only outgoing data? Otherwise, it could be hit with some attacks; many come through things like documents, media files, etc. I can imagine scenarios where that isn’t a concern. What are your use cases?
To be honest, my use cases are summed up in the following sentence:
it might be a nice learning exercise to get a minimal UNIX-like kernel going and a sliver of a userspace
But you’re right, there could be attacks. I just don’t see something like Nanix being in a place where security is of utmost importance, just a toy hobbyist OS.
It seems to work: I just booted the muLinux ISO (admittedly not the floppy; I don’t have what’s needed to make a virtual floppy image right now) in Hyper-V and it runs fine, even showing 0% CPU usage on idle according to Hyper-V.
Yet modern software, being complex, does fail from time to time. As do all those things engineers work on. The engineer’s solution to that is not trying to build complex things that never fail; it’s attacking the problem on all fronts. Decreasing defect rate is part of the solution, but so is actually measuring the frequency and impact of defects as well as thinking about failsafes.
Sometimes doctors give medicine to patients and it fails, because of a bad diagnosis or an unplanned adverse reaction, but mostly because biology is complex. That’s why patients are monitored in the hospital.
Sometimes accidents happen in nuclear power plants, because physics is complex and components can have defects. Most of them are not critical because engineers have planned for the unplanned.
Sometimes trains don’t start, drivers get sick or go on strike, trees fall on rails… Yet the whole country does not end up being paralyzed for it.
It’s funny that the author uses the automotive industry for comparison, when you think about it. Quick, what’s the first word that comes to your mind when you hear “breakdown”? (Maybe it doesn’t work as well in English because it could be something like “nervous”, but that’s not the case in my native language…) What’s the first cause of accidental death worldwide that isn’t health-related?
I had an education as an engineer in networks and electronics; we had courses in “resilience” that dealt with things like redundancy, MTBF/MTTR, monitoring… as well as the impact of component complexity on failure rates. A popular approach in those fields is to use cheap, relatively simple components, assume they will fail, and then make sure the failure of a component is 1) not critical, 2) easy to detect, and 3) quick and reliable to fix.
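As a back-of-the-envelope illustration of why the cheap-redundant-components approach works (the numbers are made up):

```python
# Availability from MTBF/MTTR, and how parallel redundancy compounds it.
def availability(mtbf_hours, mttr_hours):
    # fraction of time a single component is up
    return mtbf_hours / (mtbf_hours + mttr_hours)

def redundant(avail, n):
    # n independent components in parallel: the system is down only
    # when all n are down at once
    return 1 - (1 - avail) ** n

a = availability(1000, 10)
print(round(a, 4))               # 0.9901 — one cheap component
print(round(redundant(a, 2), 6)) # 0.999902 — two in parallel
```

Duplicating a ~99% component already gets you to roughly four nines, which is why fast detection and repair (low MTTR) matters as much as the components themselves.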
There are people who think this way in software, mostly in the Erlang community (see Error Kernels, Let it Crash…). Maybe other parts of the software world should listen to them more and take inspiration from them.
Option #4: Start a product business (the right way)
Did the author ever run a business?
The most realistic chance of working less than 35 hours is slacking off somewhere as a salaried employee.
This is where it gets philosophical. Sure, you can’t go hiking or parent during that time. However, people playing games, reading books, solving puzzles, and even building entire parallel careers during their nominal work time aren’t unheard of.
Exactly. When I ran my own startup, it was like working two jobs. There was always something to do, fix, research, discuss, plan, etc.
It depends on the business, though. I know people with a couple of moderately successful iOS apps where, yes, they do some support and bugfixing, but they can do it on their own schedule and the money comes in “on its own”.
Startups are a particular, high-pace kind of product company. But small product businesses can also exist.
He did found a startup in the early 2000s.
And it failed! Because we did it the wrong way. But Amy Hoy has done it the right way, and teaches how, which is why I linked to her stuff.
Also note that VC-backed startups are very definitely NOT the way to get decent working hours as a founder. It’s totally possible to work decent hours (<40) as an employee, as I’ve done at my last three jobs.
As someone who uses Arch on all my developer machines: Arch is a horrible developer OS, and I only use it because I know it better than other distros.
It was good 5-10 years ago (or I was just less sensitive back then), but now pacman -Syu is almost guaranteed to break or change something for the worse, so I never update, which means I can never install any new software because everything is dynamically linked against the newest library versions. And since the Arch way is to be bleeding edge all the time, asking things like “is there an easy way to roll back an update because it broke a bunch of stuff and brought no improvements” gets you laughed out the door.
I’m actually finding myself using windows more now, because I can easily update individual pieces of software without risking anything else breaking.
@Nix people: does NixOS solve this? I believe it does but I haven’t had a good look at it yet.
Yes, Nix solves the “rollback” problem, and it does it for your entire OS not just packages installed (config files and all).
With Nix you can also have different versions of tools installed at the same time, without the standard python3.6/python2.7 binary-name dance most places do: just drop into a new nix-shell and install the one you want, and in that shell that’s what you have. There is so much more. I use FreeBSD now because I just like it more in total, but I really miss Nix.
EDIT: Note, FreeBSD solves the rollback problem as well, just differently. In FreeBSD if you’re using ZFS, just create a boot environment before the upgrade and if the upgrade fails, rollback to the pre-upgrade boot environment.
Being a biased Arch developer, I rarely have Arch break when updating. Sometimes I have to recompile our own C++ stack due to soname bumps, but for the rest it’s stable for me.
For Arch there is indeed no rollback mechanism, although we do provide an archive repository with old versions of packages. Another option would be BTRFS/ZFS snapshots. I believe the general Arch opinion is that, instead of rolling back, fixing the actual issue at hand is more important.
I believe the general Arch opinion is that, instead of rolling back, fixing the actual issue at hand is more important.
I can see some people might value that perspective. For me, I like the ability to plan when I will solve a problem. For example, I upgraded to the latest CURRENT in FreeBSD the other day and it broke. But I was about to start my work day, so I just rolled back, and I’ll figure it out when I have time to address it. As with all things, it depends on one’s personality and what they prefer to do.
For me, I like the ability to plan when I will solve a problem.
But on stable distros you don’t even have that choice. Ubuntu 16.04 (and 18.04 as well, I believe) ships an ncurses version that only supports up to 3 mouse buttons, for ABI stability or something. So now, if I want to use the scroll wheel, I have to rebuild everything myself and maintain a makeshift local software repository.
And that’s not an isolated case: from a quick glance at my $dayjob workstation, I’ve had to build the following locally: cquery, gdb, ncurses, kakoune, ninja, git, clang, and various other utilities. Just because the packaged versions are ancient and missing useful features.
On the other hand, I’ve never had to do any of this on my Arch box because the packaged software is much closer to upstream. And if an update breaks things, I can also roll back from that update until I have time to fix things.
I don’t use Ubuntu and I try to avoid Linux, in general. I’m certainly not saying one should use Ubuntu.
And if an update breaks things, I can also roll back from that update until I have time to fix things.
Several people here said that Arch doesn’t really support rollback which is what I was responding to. If it supports rollback, great. That means you can choose when to solve a problem.
I don’t use Ubuntu and I try to avoid Linux, in general. I’m certainly not saying one should use Ubuntu.
Ok, but that’s a problem inherent to stable distros, and it gets worse the more stable they are.
Several people here said that Arch doesn’t really support rollback
It does, pacman keeps local copies of previous versions for each package installed. If things break, you can look at the log and just let pacman install the local package.
It does, pacman keeps local copies of previous versions for each package installed. If things break, you can look at the log and just let pacman install the local package.
Your description makes it sound like pacman doesn’t support roll backs, but you can get that behaviour if you have to and are clever enough. Those seem like very different things to me.
Also, what you said about stable distros doesn’t seem to match my experience with FreeBSD. FreeBSD is “stable”, yet ports packages tend to be fairly up to date (or at least I rarely run into outdated ones, except for a few).
I’m almost certain any kind of “rollback” functionality in pacman is going to be less powerful than what’s in Nix, but it is very simple to rollback packages. An example transcript:
$ sudo pacman -Syu
... some time passes, after a reboot perhaps, and PostgreSQL doesn't start
... oops, I didn't notice that PostgreSQL got a major version bump, I don't want to deal with that right now.
$ ls /var/cache/pacman/pkg | rg postgres
... ah, postgresql-x.(y-1) is sitting right there
$ sudo pacman -U /var/cache/pacman/pkg/postgresql-x.(y-1)-x86_64.pkg.tar.xz
$ sudo systemctl start postgresql
... it's alive!
This is all super standard, and it’s something you learn pretty quickly, and it’s documented in the wiki: https://wiki.archlinux.org/index.php/Downgrading_packages
My guess is that this is “just downgrading packages” where as “rollback” probably implies something more powerful. e.g., “rollback my system to exactly how it was before I ran the last pacman -Syu.” AFAIK, pacman does not support that, and it would be pretty tedious to actually do it if one wanted to, but it seems scriptable in limited circumstances. I’ve never wanted/needed to do that though.
(Take my claims with a grain of salt. I am a mere pacman user, not an expert.)
EDIT: Hah. That wiki page describes exactly how to do rollbacks based on date. Doesn’t seem too bad to me at all, but I didn’t know about it: https://wiki.archlinux.org/index.php/Arch_Linux_Archive#How_to_restore_all_packages_to_a_specific_date
now pacman -Syu is almost guaranteed to break or change something for the worse
I have the opposite experience. Arch user since 2006, and updates were a bit more tricky back then, they broke stuff from time to time. Now nothing ever breaks (I run Arch on three different desktop machines and two servers, plus a bunch of VMs).
I like the idea of NixOS and I have used Nix for specific software, but I have never made the jump because, well, Arch works. Also with Linux, package management has never been the worst problem, hardware support is, and the Arch guys have become pretty good at it.
I have the opposite experience
I wonder if the difference in experience comes from some behaviour you’ve picked up that others haven’t. For example, I’ve found that friends’ children end up breaking things in ways I never would, just because I know enough about computers to never even try.
I think it’s a matter of performing -Syu updates often (every few days or even daily) instead of once per month. Rare updates indeed sometimes break things but when done often, it’s pretty much update and that’s it.
I’ve been an Arch user for 6 years, and there were maybe 3 times during those years when something broke badly (I was unable to boot). Once it was my fault; the second and third were related to an nvidia driver and Xorg incompatibility.
Rare updates indeed sometimes break things but when done often, it’s pretty much update and that’s it.
It’s sometimes also a matter of bad timing. Now, every time before doing a pacman -Syu, I check /r/archlinux and the forums to see if someone is complaining. If so, I tend to wait a day or two for the devs to push out updates to the broken packages.
I have quite a contrary experience: I have pacman run automated in the background every 60 minutes, and all the breakage I suffer is from human-induced configuration errors (such as a misconfigured boot loader or fstab).
Would be nice, yeah, though I never really understood or got Nix. It’s a bit complicated and daunting to get started with, and I found the documentation to be lacking.
How often were you updating? Arch tends to work best when it’s updated often. I update daily and can’t remember the last time I had something break. If you’re using Windows, and coming back to Arch very occasionally and trying to do a huge update you may run into conflicts, but that’s just because Arch is meant to be kept rolling along.
I find Arch to be a fantastic developer system. It lets me have access to all the tools I need, and allows me to keep up the latest technology. It also has the bonus of helping me understand what my system is doing, since I have configured everything.
As for rollbacks, I use ZFS boot environments. I create one prior to every significant change, such as a kernel upgrade; that way, if something did go wrong and it wasn’t convenient to fix the problem right away, I know I can always move back into the last environment and everything will be working.
I wrote a boot environment manager zedenv. It functions similarly to beadm. You can install it from the AUR as zedenv or zedenv-git.
It integrates with a bootloader if it has a “plugin” to create boot entries, and keeps multiple kernels around at the same time. Right now there’s a plugin for systemd-boot, and one is in the works for GRUB; it just needs some testing.
Awesome! If you do, let me know if you need any help getting started, or if you have any feedback.
It can be used as is with any bootloader, it just means you’ll have to write the boot config by hand.
Tell them to wait until the next draw of a large lottery and use the results.
For instance, using EuroMillions, where each ball drawn is a number from 1 to 50, there are 50^3 = 125000 = 6250 * 20 possible draws for the first 3 balls. Order them somehow, create bins, done.
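A quick sketch of that mapping, reading the first 3 balls as an ordered base-50 number (treating the balls as independent, per the 50^3 count above):

```python
# Map a draw of 3 balls (each 1–50) to a number in [0, 125000),
# then split evenly into 6250 bins of 20 draws each.
def draw_to_bin(b1, b2, b3, n_bins=6250):
    index = (b1 - 1) * 50 * 50 + (b2 - 1) * 50 + (b3 - 1)  # 0..124999
    return index // (125000 // n_bins)

print(draw_to_bin(1, 1, 1))     # 0    — lowest possible draw
print(draw_to_bin(50, 50, 50))  # 6249 — highest possible draw
```

The "order them somehow" step is exactly this encoding; any fixed, agreed-upon ordering works as long as it's chosen before the draw.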
I used to self-host my email (with Postfix, Dovecot and Rainloop most recently) but I ended up giving up and switching to Fastmail.
Redis. E.g. https://redis.io/commands/sinterstore
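For readers unfamiliar with it: SINTERSTORE intersects several sets and stores the result under a destination key, returning the result’s cardinality. A sketch of those semantics with plain Python sets (the key names are made up):

```python
# Mimic Redis SINTERSTORE with a dict of Python sets: intersect the
# source sets, store the result under dest, return its cardinality.
def sinterstore(store, dest, *keys):
    result = set.intersection(*(store.get(k, set()) for k in keys))
    store[dest] = result
    return len(result)

store = {
    "page:a:visitors": {"u1", "u2", "u3"},
    "page:b:visitors": {"u2", "u3", "u4"},
}
print(sinterstore(store, "both", "page:a:visitors", "page:b:visitors"))  # 2
print(sorted(store["both"]))  # ['u2', 'u3']
```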
As long as it’s not available to its citizens, that is…
Bernard Cazeneuve was the PM from the former (François Hollande) government, and even then he was called out by various state organizations (article in French, sorry).
Emmanuel Macron himself said something like this at some point, but he had to retract it the next day. In general, in France as elsewhere, politicians don’t really understand that topic and listen to lobbies on both sides. Fortunately, in Macron’s case, the lobbies he listens to the most are pro-encryption.
From what I have heard, Signal has been considered but there is an issue with some metadata going to the US.
maybe they want to be able to customize their own clients or host their communications on their own servers
This post made me discover the Yue library (http://libyue.com), which in addition to JS also supports C++ and Lua. At a glance it looks like the best cross-platform GUI library for Lua so far!
There were 5 or so Opera users
Hello! (Both on Linux and on Android, probably.)
Someone is using Sailfish OS / Maemo
Sometimes I do, but it wasn’t me this time.
I’m not sure I’m sold on this. I get why people building infrastructure software like redis might want this. Yes, it helps them keep the “Foo as a Service” market as a captive income stream without competition from AWS, et al. At the same time it seems like for any service of much worth, it’s going to get cloned by the big providers anyway, and then you have a proliferation of similar but incompatible closed-source versions. I’m not convinced that is necessarily good for the community at large.
I think it’s just a protection to avoid a Redis-as-a-service being launched with plain Redis and a few bits here and there to make the offering work. Big players can obviously clone it and have their own, but at least most small to middle-size players are eliminated. (From what I understand.)
You can still start Redis as a Service companies. I was shocked at first because I thought this concerned Redis and their aim was to kill all of the Redis as a Service providers which already exist. But it turns out Redis Core is unaffected by this, only some modules are.
I don’t really know what they intend to achieve with this, except having people avoid using their modules…
Which doesn’t seem worthwhile, as the big players are the ones most likely to be able to market and monetise a service based on core Redis plus their own proprietary add-ons. It’s pretty difficult to compete with AWS on any front at this stage, given their massive resources and the “nobody ever gets fired for buying X” safety of big brands.
Boxing out only the small players doesn’t really feel like it’s going to preserve a whole bunch of market or mindshare for the Redis company.
I’m not into business very much, so I cannot evaluate if this operation is worth it or not, I would just assume that they were going for the long tail, which can be a sufficient number of clients to have decent revenue and continue to work on the Redis company.
In reality, I don’t have the feeling that a “long tail” actually exists for a lot of these types of services. I base this on the Firebase/Parse era, when there were loads of “backend as a service” companies around that have all withered away (my understanding at least), with only Google/Firebase remaining. I personally was surprised by this.