Imagine if Mastodon were not six processes written in three programming languages, but were instead a single binary written in golang using SQLite for storage. That’s honk.
It works quite well for me; I have a cheapo DigitalOcean server and I never have to do any maintenance, or worry about defederation drama making it hard to follow interesting weirdos.
Note that GoToSocial only provides the backend: you have to bring your own UI.
Edit: it doesn’t match the description better. (Just realized what you said, sorry.)
I assume donio meant to say GTS may be better since its API is Mastodon-compatible, so you can just use any available UI, like https://semaphore.social/ or any of the mobile clients directly.
Well, mostly. Because it’s not 100% Masto-compatible (IDs are strings, not ints; rate limit headers are epoch seconds, not ISO 8601, etc.[1]), a whole bunch of stuff doesn’t work (Ivory, Mammoth, Mastodon.py without tweaks, etc.)
[1] Which are compliant with the API spec, but it seems a whole bunch of clients have (stupidly) ignored the spec in favour of copying what Mastodon does.
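For what it’s worth, a client can tolerate both conventions cheaply. A minimal sketch (the header name is Mastodon’s `X-RateLimit-Reset`; the sample values are illustrative):

```python
# Tolerant parser for the X-RateLimit-Reset header: Mastodon sends an
# ISO 8601 timestamp, while some compatible servers (e.g. GoToSocial)
# send epoch seconds. A client that accepts both works against either.
from datetime import datetime, timezone

def parse_ratelimit_reset(value: str) -> datetime:
    try:
        # Epoch-seconds form, e.g. "1693526400"
        return datetime.fromtimestamp(float(value), tz=timezone.utc)
    except ValueError:
        # ISO 8601 form, e.g. "2023-09-01T00:00:00.000Z"
        return datetime.fromisoformat(value.replace("Z", "+00:00"))
```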
I’ll be honest, I’ve never been that excited about HashiCorp as a company anyway. Terraform changes from version to version, doesn’t keep up with what the cloud providers offer, and really does require a commercial license in order to use it properly. I have found the native cloud tools such as CloudFormation or Bicep a much better offering, as they are specific to the vendor. The notion that Terraform is somehow magically cloud-agnostic is completely false: you have to learn a separate dialect for each provider you want to target. Other than the commercial issue, my biggest gripe is that the language changes from version to version and backwards compatibility is often quite poor.
I would second that this is no longer an issue in more modern Alpine images.
I do something similar with https://github.com/smallstep/cli.
It’s a lot simpler with that tool.
step-ca is really the way to go! I even got it working with a true random number generator. If you put your root key on a YubiKey, it’s even more secure! They have beautiful tutorials.
I agree with this review. The hardest part of TF is state management, and I was disappointed to learn that the paid solution isn’t much better than the free version. Personally I just use the native cloud solutions (Bicep for Azure, CloudFormation or CDK for AWS), since they’re supported by the cloud vendor itself.
This seems great, and similar to Yggdrasil. I also wonder why they don’t talk much about the underlying transport. You don’t need NAT if it’s an IPv6 network. I do understand that they want to establish an essentially non-traceable overlay network, but the underlay is just as important.
“if it’s an IPv6 network” - well, sometimes it isn’t, so things like Tailscale or Veilid have to hole punch… because they want to be used by everyday devices :)
Checked it out. There aren’t any real guides or documentation with practical examples on their GitHub or webpage. Veilid, I see, promises the ability to read and write objects across the network using content IDs of some sort, so that’s a big difference I suppose.
Don’t forget about the non-Google-encumbered forks: Ungoogled Chromium, Chromite, etc. let you have Chrome without the privacy-busting features built in.
You’d still be contributing to the dominance of the Blink engine, pushing developers to develop for Chrome, making things worse.
If only they had a version with Topre switches. I’m not a big fan of Cherry-style keyboard switches.
Heh, it’s like déjà vu all over again :-)
https://lobste.rs/s/vozxgx/zsa_moonlander_next_gen_split_ergonomic#c_ncbjcy
Since I posted my comment in that thread, I found out about deskeys, another manufacturer of Topre-style EC keyboards and parts https://deskeys.io/
I actually think the first nail in the coffin was the enforcement of HTTPS everywhere. I have to use the CA infrastructure and its certificates instead of my own certificates. There were some real reasons for this, such as security, and people were abusing the freedom. At the same time, we see increasing use of decreased freedom in the name of more device security. For instance, trusted boot (or should I say untrusted boot) was supposed to fix the problem of evil people taking over my computer. The problem is that now the only people that have control of my computer are everyone but me. It’s one of the reasons why I moved away from Apple and Microsoft.

I use something called Ungoogled Chromium, and on my Android I use Chromite. So far, they seem to work nicely and mostly reduce a lot of the things this article talks about. It should also be noted that Google does allow the install of third-party operating systems on its phones. Alas, things like GrapheneOS are so secure that they’re almost not usable. I tried GrapheneOS for a very long time, and it was fine for about 6 months, and then all my apps broke. So I had to stop using it. It is definitely a tough choice: freedom without security, or security without freedom.
When I first came to the US, and stayed at some motel in Santa Clara, going online was quite the challenge because the local Wi-Fi network was injecting its own ads in unsecured pages, on every webpage. I found this practice appalling, not only because ads are annoying, but because it made me realise that the ISP can modify the contents of the webpages that you’re viewing.
Also, security. The general advice to “use a VPN when connecting to public networks” is only relevant for non-HTTPS connections. Having HTTPS everywhere drastically improved the security of most normal people going online.
You can still sign using your own certificates, and there are plenty of examples of that in the wild, such as your average corporation, or NextDNS showing a nice “block page”. You do have to install a custom root certificate, and that’s good actually, because otherwise MITM attacks can’t be prevented.
Also, back in the day, certificates were expensive. Nowadays, certificates are free and their renewal is easily automated via Let’s Encrypt, even with wildcards.
We may agree to disagree, but HTTPS-everywhere is one of the best things that happened to the web in recent years.
I think you are both right :)
There are pros & cons here. I knew people running servers back before HTTPS, and now they don’t bother ’cause they see it as just another hurdle, moving to some other centralized platform instead. But after the way Firesheep exploded, it was hard to deny that we needed something. Reliance on Certbot to a) have good will, b) remain free, c) not eventually be blocked by, say, Google Chrome™ in the name of trusted-er websites (since malicious sites can automate their certs too) are all things that leave some of the future looking unknown. A better way to handle self-signed certs would have been nice too, but that has its own issues, both with the security and with the UX of showing it to users (though I don’t think the old way of showing a massive warning should have been as scary as it was, leading most to not consider it).
Your GrapheneOS comment could use a disclaimer that it’s just your personal experience.
I’ve been using it for years on several phones and I’ve never had “all my apps break”. And it’s also not “not usable”. Most Android apps just work, including banking (and Google Maps). This is without even running Google services. (Again, this is also just my personal experience with it.)
I use Fossil as my SCM. I use Caddy for my web server. I use Radicale for calendaring. And I use Grav CMS for my web pages. I have used Gitea when I don’t use Fossil. I also use Jitsi Meet as an open-source Zoom alternative.
Oh yeah, Jitsi. I tried getting that working on Kubernetes once and it never happened. How’d you get it working at home?
I just used the Jitsi Docker images with Docker Compose; it just kind of works. There’s a little bit of work, but it’s not too hard.
Oh dang, ok I’ll revisit it then thanks!
May I ask, as Radicale does not support server-side meeting invitations, how do you send calendar invites on your phone?
I never send out invites to other people so I would not know about this feature.
It’s kind of funny, because I use ~/src as well. I’ve also taken to using Fossil for easy transport between computers when it’s just me working on a project.
This is quite a beautifully written article. I like the idea of applying this to all circumstances and not just to Go: the idea of being kind, compassionate, and thinking about other people as we code. We would have a much better IT landscape if all of us followed this advice.
What could possibly go wrong? Python and Excel?
Python is a very long way away from being my least favourite programming language, but the Calc language in Excel is far worse in just about every possible way. The nice thing about this project (which actually started with people a couple of doors down from my office several years ago) is not that it’s plugging Python into Excel, it’s that it’s decoupling the Excel data model and UI from the underlying programming language. This should make it easier to plug in other languages, including some future hypothetical designed-for-spreadsheets-but-not-awful programming language.
What could go wrong? Really this sounds amazing tbh…
More people using Excel for tasks for which they shouldn’t. Excel has limitations:

- You can’t version control Excel files
- You can’t test algorithms written in Excel
- You can’t separate the data from the algorithm
… or it is really hard or nobody does it. Excel is good for some cases and I use it for those, but Excel is probably the most overused software, because it is just there.
It’s also one of the most powerful interactive declarative information processing environments available to non-programmers.
I assume MSFT is aiming at ChatGPT code generation for Python to be used by non-programmers to take things further in Excel. Keep fire extinguishers within reach.
You version control Excel files in OneDrive and Dropbox. The algorithms are tested manually by inspecting the output, just like how many programmers do printf-driven testing. Is it best engineering practice? Of course not.
Is something better available to non-programmers short of grovelling in front of the IT dept managers?
Apple Automator, though I suspect it fills largely different use cases. Also only available on macOS, of course.
To “yes, and” this too: have you ever had to write an “if then” in Excel longer than one decision tree? My eyes bleed trying to figure out where to put the commas or parentheses.
I look forward to an IDE text box that does actual spacing and highlighting per conditionals. I only see this as a positive, and frankly a direct challenge to the Jupyter notebook ecosystem.
Kind of. You can version control an Excel file; you can’t version control Excel files. Excel includes version control that integrates with OneDrive / SharePoint, so you can go back to old versions easily. Unfortunately, Excel lets you reference data in other files’ sheets. This is why it doesn’t let you open two files with the same name at the same time: it would make cross-spreadsheet references non-unique. This means that you might actually need to version control multiple files simultaneously.
It’s worth noting that, if you have track-changes enabled, all of the Office tools can perform merging and git can delegate merging to external tools. I’ve never done this personally, but I’ve seen other people set it up so that git can merge MS Office documents automatically by invoking the merge functionality in Office whenever it needs to merge two versions of an Office doc. This does mean that you end up storing multiple versions of the history, but if you’re using Office then I’m assuming a few MiBs of wasted space is probably not important to you.
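For anyone curious, the git side of such a setup is just a custom merge driver. A rough sketch (the `office-merge-wrapper` command here is hypothetical; it stands in for whatever script invokes Office’s merge on the three versions and writes the result back to `%A`):

```ini
# .gitattributes — route Office files to a custom merge driver
*.docx merge=office

# .git/config (or ~/.gitconfig) — the driver itself; git passes the
# ancestor (%O), current (%A), and other (%B) versions as temp files
[merge "office"]
    name = MS Office merge
    driver = office-merge-wrapper %O %A %B
```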
What could go wrong indeed …?
I too regard Facebook as one of the four horsepeople of the data apocalypse; I’m not sure if they’re pestilence, death, famine, or war. I will admit that I do like the Zstandard compression algorithm, however. And some of their Open Compute stuff is cool. But that weighs very little against all of the other bad things that they’ve done in the world.
Zstandard was going to happen regardless of Facebook’s involvement as it is ‘just’ an evolution of LZ4 by the same creator/originally-hobbyist coder, Yann Collet. I’m just happy Yann got to get his bag while making it.
I’ve always wondered why HashiCorp products are so popular. Terraform promises to be everything to everyone, when each cloud provider has its own separate modules which are completely different from each other. So I always wondered why people used it, since the cloud providers have very nice alternatives where I don’t have to worry about a TF state file getting out of sync. Even Packer is kind of kludgey. Their commercial license for the HSM version of Vault is nearly $100,000. And I’ve never been happy with the way that they completely break compatibility with previous versions, even on a minor point release. With them going closed source, or Business Source License, the impetus for using them is even lower now.
Because it’s a meme that TF and vault are “best practices”. You want your practices to be the best, don’t you?
Every Terraform installation I’ve inherited was set up by a CTO. In some ways, I’m reminded a lot of the old slogan from Django; Terraform is infrastructure-as-code for perfectionists with a deadline. (And nearly every CTO has also passed me a Django application!) The CTO wants to get up and running with their experiment, and then hand it off to somebody who can shepherd it through the lifecycle. There’s a great presentation about this, Evolving your Infrastructure with Terraform.
That said, when I set up my own small business last decade, I used nixops and a pile of JSON instructions interpreted by a similar pile of jq scripts. When I went multi-cloud, I used bare YAML for k8s, along with Kustomize (which is now builtin to kubectl!) Terraform is good for communicating with others, but a terrible choice for a lone sysadmin.

As for the other products, though… Vault is weathered, and it shows. Packer is a fine tool, but Nix has such a large ecosystem, and is also reproducible.
AWS has CDK but I’m not aware of anything comparable for GCP or Azure.
HashiCorp is all papercuts and inconsistencies, from Vault to their SDKs to their Terraform providers, to updates between versions, to lacking features… to what they launch on their cloud, to their licensing.
Ctrl k k Ctrl k x
Why is multi processing so controversial in 2023?
Python, which predates Java, originally did not have threading, because threading didn’t become super popular until Java (which was originally designed to run on systems without true process-based multitasking) became popular.
So Python was forced to adopt threading in a bit of a rush, and faced the problem that many popular “Python” libraries in that era (multiple decades ago) were actually wrappers around C libraries, and were almost certainly not thread-safe. The compromise solution was that any thread which wants to execute Python bytecode, or call the Python interpreter’s C API, must obtain a lock to do so (the Global Interpreter Lock, or GIL). Thus, only one thread at a time can be executing bytecode or calling the interpreter API.
At the time this was a not-unreasonable compromise, since most people who wanted to do threading wanted it for things like network daemons which have most of their threads spend most of their time waiting on I/O (so you don’t feel the lock contention as much), and hardly anybody actually had multiprocessor or multicore hardware capable of truly executing multiple threads at the same time.
Now, of course, we all have multiprocessor/multicore hardware, and many people use Python (or Python wrappers around libraries in other languages) for CPU-bound number-crunching tasks. So there’s pressure to remove the GIL. But it’s always been understood that the cost of doing so would be a hit to single-threaded performance of the Python interpreter, since the extra work required to keep the interpreter thread-safe does not come for free.
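As a minimal sketch of the tradeoff (not a benchmark): the same CPU-bound job run through a thread pool and a process pool returns identical results, but under the GIL only the process pool can actually use multiple cores, at the cost of spawning extra interpreters.

```python
# CPU-bound work submitted to threads vs. processes. The threaded
# version is correct but serialized by the GIL; the process pool
# sidesteps the GIL by running separate interpreters.
import concurrent.futures

def count_primes(limit):
    # Deliberately naive CPU-bound work.
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def run(executor_cls, chunks):
    with executor_cls(max_workers=4) as ex:
        return sum(ex.map(count_primes, chunks))

if __name__ == "__main__":
    chunks = [2000] * 4
    threads = run(concurrent.futures.ThreadPoolExecutor, chunks)
    procs = run(concurrent.futures.ProcessPoolExecutor, chunks)
    # Same answer either way; only the wall-clock time differs.
    assert threads == procs
```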
And so you have two camps: one which wants the GIL gone because they believe the benefits (free threading) will outweigh the single-threaded performance costs; and another which is skeptical of that tradeoff because most Python programs are and almost certainly would continue to be single-threaded, and people who use those programs probably will not be happy at a 15-20% performance drop, especially if it comes at a time when Python was finally starting to make significant performance gains.
There was a question from Shannon about “what people think is an acceptable slowdown for single-threaded code”. To a large extent, that question went unanswered in the thread, but he had estimated an impact “in the 15-20% range, but it could be more, depending on the impact on PEP 659”.

15-20% is a lot. That’s definitely in the “noticeable even without benchmarking” category.
It is python, that is already 10x slower than most other mainstream languages. Is it really a significant difference at that point?
Yes, of course.
This feels like the sticking point for me personally. Given that I use Python mostly for scripts and the occasional webserver, it:

- doesn’t feel like there’s a lot of upside to be had for me by removing the GIL, and
- my code gets noticeably slower
So while it will (?) benefit the ecosystem and language as a whole, it makes my personal use of Python worse, so I’m not sure why I’d support the initiative.
Python isn’t a new interpreter starting with a blank slate. How would you propose making an interpreter that supports GIL and NoGIL code running in the same process, both Python code and C extensions? It’s not an easy problem to solve.
do you mean multi-threading?
I interpreted their message as them asking why multi threading would be necessary to add to python since it already features multi-processing.
But maybe you’re right and they just meant multi-threading 😅
Multiprocessing with Python quickly leads to memory issues, since each process takes a significant amount of memory. This is partly because it’s hard to share things like code objects across processes.
It’s a Python-specific problem. It was controversial over a decade ago, too.
If I remember correctly from articles and blog posts (e.g. on the PyPy blog) that I read during previous attempts, another non-trivial contributing factor is the particular way Python approached metaprogramming, which leads to a ridiculous number of places where it’s difficult to avoid data races and/or undefined behaviour in a multi-threaded environment without wrapping locking around runtime lookups within the interpreter.
I went to a very nice Datasette talk at FOSDEM 2021. Search for “FOSDEM Datasette”. It’s a nice introduction to what Datasette is.
Thanks! That was this talk here https://simonwillison.net/2021/Feb/7/video/
Has anyone tried using this with something like Btrfs or ZFS? Are those filesystems slower or faster? I was also wondering whether different backing filesystems, say ext4 versus XFS versus F2FS, would make a significant difference in performance.
I’m using Podman with ZFS on FreeBSD, but I haven’t benchmarked it in comparison to anything else. I’m using the ocijail container runtime.
I read through the website but I couldn’t quite understand what it’s supposed to do. Is it some kind of CMS?
Imagine if Mastodon were not six processes written in three programming languages, but were instead a single binary written in golang using SQLite for storage. That’s honk.
It works quite well for me; I have a cheapo DigitalOcean server and I never have to do any maintenance, or worry about defederation drama making it hard to follow interesting weirdos.
GoToSocial matches that description even better since it provides Mastodon-compatible client APIs.
which part of the description does it match better?
Single golang binary using SQLite for storage?
GoToSocial only provides the backend, though; you have to bring your own UI.
Edit: it doesn’t match the description better. (Just realized what you said, sorry.)
I assume donio meant to say GTS may be better since its API is Mastodon-compatible, so you can just use any available UI, like https://semaphore.social/ or any of the mobile clients directly.
Well, mostly. Because it’s not 100% masto-compatible (IDs are strings, not ints; rate limit headers are epoch seconds, not ISO8601, etc.[1]) a whole bunch of stuff doesn’t work (Ivory, Mammoth, Mastodon.py without tweaks, etc.)
[1] Which are compliant with the API spec, but it seems a whole bunch of clients have (stupidly) ignored the spec in favour of copying what Mastodon does.
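As an illustration of why that divergence trips clients up, a tolerant client could accept both header shapes. This helper is hypothetical, not taken from either project; the header name and formats are as described in the comments above:

```python
from datetime import datetime, timezone

def parse_ratelimit_reset(value: str) -> datetime:
    """Parse an X-RateLimit-Reset value that may be either epoch
    seconds (GoToSocial) or an ISO 8601 timestamp (Mastodon)."""
    try:
        # Epoch seconds, possibly fractional
        return datetime.fromtimestamp(float(value), tz=timezone.utc)
    except ValueError:
        # Fall back to ISO 8601; normalize "Z" so older Pythons'
        # fromisoformat can parse the UTC offset
        return datetime.fromisoformat(value.replace("Z", "+00:00"))

print(parse_ratelimit_reset("1700000000"))
print(parse_ratelimit_reset("2023-11-14T22:13:20Z"))
```

Both calls above parse to the same instant, which is the point: a spec-tolerant client doesn’t care which server sent the header.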
I think to @donio’s point, you’re missing a bit with this - it’s not just about the actual server infra being lightweight, it’s the interface as well.
No stars, no bells or whistles, just a spartan interface to read and post to your fedi feeds :)
I’ll be honest, I’ve never been that excited about HashiCorp as a company anyway. Terraform changes from version to version, doesn’t keep up with what the cloud providers offer, and really does require a commercial license to use properly. I have found the native cloud tools such as CloudFormation or Bicep a much better offering, since they are specific to the vendor. The notion that Terraform is somehow magically cloud-agnostic is completely false: you have to learn a separate dialect for every provider you want to target. Other than the licensing issue, my biggest gripe is that the language changes from version to version and backwards compatibility is often quite poor.
It’s good to see that this has been forked. I really like LXD and LXC, and I was very disappointed when Canonical took it in-house.
It’s nice to see the devs starting to pick up where Canonical left off.