Most of the static site generators don’t seem to generate “sites”. They instead generate “blogs”, with the concept of posts and pages very deep-rooted in the implementation.
I mention this because I recently came across statik, a static site generator written in Python which really lets you generate sites. You get to define the “models” you’d like your site to have (if you define Post and Page models, you have something like pelican). Imagine Django, but at compile time (for lack of a better analogy).
I might write a blog (heh) post on this later, but I would definitely suggest checking it out if you’re interested in static sites.
I maintain 3 websites, two with Jekyll (https://monix.io and https://typelevel.org/cats-effect/) and one with Middleman (https://alexn.org).
Both Jekyll and Middleman are perfectly capable of building general static websites. The blogging part is just a nice built-in feature.
I’ve been using Nikola (edit: for my landing page), because at the time it was the only one that had incremental builds. You have to follow their guide to reconfigure it for a non-blog setup: https://getnikola.com/creating-a-site-not-a-blog-with-nikola.html
VuePress has my interest now, especially once Netlify support is implemented.
Edit: I also have Sphinx instances: one as a public wiki and the other as a private notes repo.
The handful or so that I have worked with all support defining models: Sculpin and Tapestry (I’m the author) call them content types; Jigsaw calls them collections. All three can be used to generate a static site, but for convenience (and likely because the usual users are minimalist bloggers) they come “blog aware”, which simply means they have a blog collection configured out of the box.
I have used all three and a few others such as metalsmith (which also supports collections via plugin) to generate small static sites with a handful of pages for clients, as well as reasonably sized multi-author blogs.
TL;DR, yes some SSGs come shipped with blog content types (models) pre-configured but that doesn’t make them only good for generating blogs.
This is interesting. I wish it didn’t use YAML though.
For my own website, I ended up making a custom generator, and the focus on blogs in most generators was one of the biggest reasons, though not the only one.
Most of the static site generators don’t seem to generate “sites”. They instead generate “blogs”, with the concept of posts and pages very deep-rooted in the implementation.
Bravo for saying this. I’ve faced the same problem, with static generators forcing me to give an author / date setting for each page. This might make sense for blogs, but doesn’t for the simple site I want to build.
And most of them force you to do things a certain way; it’s so painful that it’s no wonder people just go back to WordPress.
Amy Hoy wrote a great article on this: https://stackingthebricks.com/how-blogs-broke-the-web/
Timely. I have Nim in Action here after skimming the tutorial (as much to support the book author as anything), and my reactions to most of the language have been “Oh, that’s convenient”. I’m drawn to the easy C/C++ FFI but overall my fiddling with the language has been pleasant.
Nim in Action looks interesting. I like that it dives directly into building things (at least it looks that way from Chapters 3 through 7). Most other books lose me at “here are 10 ways to declare an integer”.
This is precisely what I was going for when writing Nim in Action. Super glad to see it being appreciated :)
I strongly believe that learning by implementing (sometimes large) practical examples is the best way to learn a programming language. Learning every single detail of the language presented through small unrealistic examples gets boring quickly.
How up-to-date is the book? It was published 2 years ago, I think, but I imagine a lot has changed in that time?
Reasonably up to date for a dead-tree edition; it’s from 2017. I haven’t compared it to the complimentary ebook, but those are “live” and get updates. I’ve noticed a couple changes/errata (a compiler invocation option, the deprecation of the Aporia editor, a change to the ..< operator) but nothing significant.
Indeed, it’s barely a year old.
I’ve actually added all the projects in the book to the Nim compiler’s test suite so the book should remain compatible with new versions. I’ve also asked our BDFL to not break these, so far so good :)
Here is the slide deck.
Picking up Nim.
I worked through the official tutorial this weekend and so far I (mostly) like what I see. There are some weird quirks here and there, but I think I’ll publish a blog post about first impressions in a day or two.
If you write that blog post, could you post it here (and maybe in reply to this comment, too, so that I see it)? I’m just starting out with nim, coming from years of Python and JS. I’m finding it more complex than I expected, especially as I’m not used to fighting a compiler. I came into it expecting something like “Python with static types” but it’s only kinda that.
I just published it. Here’s a link: https://sgoel.org/posts/nim-first-impressions/
Here’s a link: https://sgoel.org
I write about topics including Python, Plain Text Accounting, DevOps, and occasionally about Vue.js. And every now and then there are a few “general” posts which don’t fit into one of those categories but I still find interesting enough to write a blog post on.
On the “not keeping your password list on a cloud” front, I use KeePassXC and Keepass2Android with SyncThing for automatic local syncing, and it’s been good. I keep a (password protected) key file which isn’t synced, and then sync the keepass database file via SyncThing.
There’s still the possibility of the database files getting out of sync if I don’t let my phone and my laptop sync for too long (I don’t do the “global” SyncThing syncing so they have to be on the same WiFi network). This has happened to me once or twice, but KeePass’s automatic merging feature has handled these cases fine.
I had been looking to improve my non-fiction writing. Initially I wanted to take an online course, but there are way too many of those (at least on Udemy) and it’s difficult to filter out the bad ones.
I think I read someone recommend this book on HN, so I picked it up. So far so good!
I recently switched to a self-hosted Bitwarden setup, and have been pretty happy with it so far.
The server is running on an old Raspberry Pi at home (so that my passwords don’t end up somewhere on the internet). I use bitwarden-ruby (thanks @jcs!) because the Pi likes it much more than the heavy-weight official Docker image.
The client apps (iOS and Linux desktop) do what they’re supposed to do in a neat and clean way.
This could be the unicorn you’re looking for. :)
IMHO this article could have done a much better job of saying “don’t blindly trust the framework but also think about how things work underneath”.
I do get the point that the author is trying to convey. But unfortunately, the language that’s used and the tone in which it’s conveyed make the advice come off wrong-ish.
It took me halfway through the article to realize the author is probably serious. The tone of the language lends itself very well to parody.
I agree. At first, when I read the title, I thought “ohh… we’re just going to disagree on this one…”. I’ve had to maintain countless Python applications just doing SQL/CRUD where the previous engineer either glued Flask and SQLAlchemy together in a bad way or, worse, invented their own framework based on WSGI. And every time, I just migrated it to Django with half to a tenth of the original lines of code.
Then, I read the article, and all it says is “please understand how frameworks work under the hood”. Which I totally agree with, and that’s what I always teach juniors when I train them with Django or Flask.
In a nutshell, the title is totally clickbait.
Complicated software breaks. Simple software is more easily understood and far less prone to breaking: there are fewer moving pieces, fewer lines of code to keep in your head, and fewer edge cases.
Sometimes code is complicated because the problem is complicated.
And sometimes the simple solution is wrong, even for something as basic as averaging two numbers.
But there’s a difference here: code is simple when it doesn’t introduce (too much of) its own accidental complexity. The inherent complexity of the domain is out of the equation; you can’t do anything about it. But the code must strive to express its intent as simply as possible. That’s not a contradictory goal.
No, it’s not the problem that’s complicated, it’s the underlying platform on which they chose to develop. You wouldn’t have that bug in Common Lisp, Ruby, or any environment with bignums.
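Worth noting that even bignum languages only dodge the integer version of the bug. A quick Python sketch of the classic averaging failure (the values are arbitrary, chosen only to trigger float overflow):

```python
# Naive midpoint is fine for Python's arbitrary-precision integers...
assert (2**400 + 2**401) // 2 == 3 * 2**399

# ...but floats still have a finite range, so (a + b) can overflow:
a, b = 1.7e308, 1.6e308
naive = (a + b) / 2   # a + b exceeds the float maximum and becomes inf
print(naive)          # inf

# The standard fix computes the midpoint without ever forming the sum:
safe = a + (b - a) / 2
assert b <= safe <= a  # a finite value between the two inputs
```

The same `a + (b - a) / 2` rewrite is the usual fix for the famous binary-search midpoint overflow in fixed-width-integer languages.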
Funny as he is, there were systems- and performance-oriented variants of LISP that either stayed low-level or produced C output that was low-level. They were really fast with many advantages of LISP-like languages. PreScheme was an early one. My current favorite, since they beat me to my idea of C-like Scheme, was ZL where authors did C/C++ in Scheme. With Scheme macros, one might pick the numeric representation that worked best from safest to fastest. Even change it over time if you had to start with speed on weak hardware but hardware got better.
These days, we even have type-safe and verified schemes for metaprogramming that might be mixed with such designs. So, you get clean-looking code that generates the messy, high-performance stuff maybe in verified or at least type-checked way. People are already doing similar things for SIMD. And you’re still using x86! And if you want, you can also use a VIA x86 CPU so you can say hardware itself was specified and verified in LISP-based ACL2. ;)
I’m not sure this really rebuts the claim. Is complicated code that solves complicated problems immune from breaking?
Also, I don’t think he recommended stopping at simple and ignoring correct.
Is complicated code that solves complicated problems immune from breaking?
It’s more that some problems cannot be solved with simple code, because you can’t capture the whole complexity of the problem without writing a lot of code.
Consider tax code. Accurately following tax law is going to be messy because tax law itself is messy.
Definitely not. If you have simple but wrong, it’s no good by definition. You can “not have” fast, but essentially, you still need “fast enough”. If you can accomplish the task, and you can do so simply and correctly, but can’t run it quickly enough for real-life workloads, then in those cases you might as well call it broken.
I am put in mind of the quote:
You can make simple software with obviously no defects, or complicated software with no obvious defects.
I don’t even think “correct” software is required: for many lucrative things, you have a human being interpreting the results, and oftentimes incorrect output can still be of commercial value.
That’s why we have customer support folks. :)
This is a misquote of C.A.R. Hoare:
I conclude that there are two ways of constructing a software design: one way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.
Nothing wrong with a misquote btw, as it means you internalized the statement rather than regurgitating the words :).
If I understand correctly (hah), his point is that if you aim for simplicity, it’s easier to ensure correctness.
Just git?
I was kind of hoping that if we’re going to break the github hegemony, we might also start to reconsider git at least a little. Mercurial has so many good ideas worth spreading, like templates (a DSL for formatting the output of every command), revsets (a DSL for querying commits), filesets (a DSL for querying files and paths) and changeset evolution (a meta-graph of commit rewriting).
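For anyone who hasn’t seen these, a few small illustrative examples of those DSLs (the queries are made up; run them in any Mercurial repo):

```shell
# templates: control the output format of any command
hg log --template "{node|short} {author|person}: {desc|firstline}\n"

# revsets: query the commit graph with set expressions
hg log -r "user(alice) and not merge()"

# filesets: select files by predicate, usable anywhere a file pattern goes
hg status "set:modified() and **.py"
```

All three share the same functional-expression style, which is a big part of why they compose so well.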
Don’t forget pijul!
Seriously, though, I don’t think there is any “github plus something” that is going to break the github hegemony. Github won because it offered easy forking while Sourceforge was locked in a centralized model. Sourceforge won because it was so much easier than hosting your own repo + mailing list.
The thing that will get people away from github has to have a new idea, a new use case that isn’t being met by github right now, and which hasn’t been proposed before. That means that adding hg won’t do it – not because hg is worse than git (honestly, git’s terrible, and hg is fine), but because hg’s already been an option and people aren’t using it.
Adding email commits won’t do it, because that use case has been available for a long time (as pointed out elsewhere in these comments) and people aren’t using it.
Until something new is brought to the table, it’s all “let’s enter a dominated market with a slight improvement over the dominant tech”, and that’s just not going to be enough.
So, one thing that I would use a new contender for is being able to put my work under my own domain.
The “new thing” here is “have your personal branding on your site” (which is clearly fairly popular given how common personal domain/sites are among developers).
If I could CNAME code.daniel.heath.cc to your host to get my own github, I’d do it today (as long as any issues/wiki/PR state/etc remained usefully portable).
That’s a really neat idea. I don’t think I can prioritize it right now but it’s definitely something I would consider implementing.
I actually think that GitHub’s lack of branding and customization is a big reason for its success. When I go take a look at a new project on GitHub, I don’t have to figure out how to navigate a new site’s design, and this makes the GitHub ecosystem as a whole easier to use.
I don’t mean corporate/design branding.
I want to use my own name (and be able to move providers without breaking links).
I want to use my own name (and be able to move providers without breaking links).
But that will happen anyway, unless your new provider uses the same software as the old one.
That makes sense actually. sr.ht supporting the ability to use your own domain name (presumably a subdomain of your personal domain name for personal projects?) would make it really easy to migrate away from sr.ht in the future if you felt it was more cost-effective to host your own. Although I don’t know what the pricing model is intended to be.
You can do that with Gitlab (or Gitea if you prefer something lightweight). Only thing is you need to take care of the hosting yourself. But I’m sure there are companies offering a one-click setup, to which you can later point your own domain.
If you host your own gitlab instance, can you fork and submit patches to a project that’s hosted on gitlab.com, as easily/seamlessly as if you were hosted there?
Centralization has benefits that self-hosting can’t always provide. If there were some federation which allowed self-hosting to integrate with central and other self-hosting sites, that seems like a new and interesting feature.
Git is already federated with email - it’s specific services like GitHub which are incompatible with git’s federation model (awfully conveniently, I might add). sr.ht is going to be designed to accommodate git’s email features, both for incoming and outgoing communication, so you’ll be able to communicate easily between sr.ht instances (or sr.ht and other services like patchworks or LKML).
As I mention earlier, though, federation by email has been available for a long time and hasn’t been used (by enough people to replace github). The (vast) majority of developers (and other repo watchers) prefer a web UI to an email UI.
The gitlab, gitea, and gogs developers are working on this but it’s still very much in the discussion stage at this point. https://github.com/git-federation/gitpub/
I don’t know exactly what he was looking for, but it seemed like one of:
The latter sounds to me like it would need federation.
It’s currently awkward to run multiple domains on most OSS servers which might otherwise be suitable.
hg isn’t really an option right now, though. There’s nowhere to host it. There’s bitbucket, and it’s kind of terrible, and they keep making it worse.
If you can’t even host it, people won’t even try it.
I’m afraid you’re not going to find a sympathetic ear in sr.ht. I am deeply fond of git and deeply critical of hg.
The GitHub hegemony has nothing to do with its basis on git. If git were the product of GitHub, I might agree, but it’s not. If you really want to break the GitHub hegemony you should know well enough to throw your lot in with the winning tool rather than try to disrupt two things at once.
Perhaps some day I’ll write a blog post going into detail. The short of it is that git is more Unixy, Mercurial does extensibility the wrong way, and C is a better choice than Python (or Rust, I hear they’re working on that).
because hg’s command-line interface was “designed”, whereas git’s command-line interface “evolved” from how it was being used.
The GitHub hegemony has nothing to do with its basis on git.
Exactly; it’s the other way around. Git got popular because of github.
Git was much worse before github made it popular. It’s bad and difficult to use now, but it was much worse before 2008. So if you just want to get away from Github, there’s no need to stay particularly enamoured with git either.
And whatever criticisms you may have about hg, you have to also consider that it has good ideas (those DSLs above are great). Those ideas are worth spreading, and git for a long time has tried to absorb some of them and hasn’t succeeded.
I was kind of hoping this wouldn’t get accepted. IMHO this seems to reduce readability.
I remember reading about an “as” syntax being proposed for if/while constructs, which I personally find much more readable.
So, instead of,
if (match := pattern.search(data)) is not None:
...
something like,
if pattern.search(data) as match is not None:
...
Although I guess, as with everything, we’ll get used to it. 🤷‍♂️
The EXPR as NAME syntax is mentioned in the proposal itself under the “Alternative spellings” section. Unfortunately, the except EXPR as NAME syntax already exists in Python, and it scopes NAME inside the except clause, unlike this proposal, so there is a grammar conflict.
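To make the scoping difference concrete, here’s a minimal sketch contrasting the existing `except … as …` binding with the accepted `:=` form (Python 3.8+; the regex and strings are just examples):

```python
import re

# Existing syntax: `as` names the exception instance, and that binding
# is deleted when the except block ends.
try:
    int("not a number")
except ValueError as err:
    message = str(err)
# `err` no longer exists at this point.

# Accepted PEP 572 syntax: `:=` binds in the enclosing scope.
if (match := re.search(r"\d+", "abc123")) is not None:
    digits = match.group()

print(digits)  # both `match` and `digits` survive past the if block
```

Reusing `as` for assignment expressions would have meant the same keyword creating two very different scopes, which is the conflict the proposal cites.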
I agree that readability seems to suffer here. I have certainly written code that looked like the examples but they never resulted in performance costs high enough to justify an alternative syntax. I’m curious to see where the named expressions get taken up.
Improving the docs and releasing molten 0.2.0. The main focus of this release is going to be Python 3.7 and OpenAPI support (the framework now bundles the Swagger UI and can automatically generate OpenAPI documents).
Molten looks neat. Request validation based on schema classes, in particular, looks like it would eliminate so much validation code. Nice work!
It’s also developer-friendly because of its excellent wiki.
I learned Linux doing everything by hand on a Slackware system, then moved to Ubuntu after ~8 years when I realized I’d stopped learning new things. Then a couple years ago I realized I didn’t understand how a bunch of things worked anymore (systemd, pulseaudio, Xorg, more). I looked at various distros and went with Arch because its wiki had helped me almost every time I’d had an issue.
Speaking of distros, I’m currently learning Nix and NixOS. It’s very nice so far. If I can learn to build packages I’ll probably replace lobsters-ansible with it (the recent issues/PRs/commits tell a tale of my escalating frustration at design limitations). Maybe also my personal laptop: I can experiment first with using nix to try xmonad (because it’s mostly configured by editing + recompiling) and to deal with python packaging, which has never worked for me, then move completely to NixOS if that goes well.
I switched from Mac to NixOS and couldn’t be happier. At work we use Nix for building Haskell projects as well.
The Arch wiki actually seems to be the only good documentation for using the advanced functionality of newer freedesktop components like pulseaudio, or much older software like Xorg.
But I’ve noticed its documentation for enterprise software like ZFS is usually hot garbage. Not surprising given the community. The recommendations are frequently hokey nonsense: imaginary micro-optimizations or blatantly incorrect feature descriptions.
What do you find better about nix for making packages than, say, making an rpm or deb? I’ve found those package systems valuable for large scale application deployment. Capistrano has also been nice for smaller scale, with its ability to deploy directly from a repo and roll back deployments with a simple symlink swap. And integration libraries are usually small enough that I’m comfortable just importing the source into my project and customizing them, which relieves so many minor tooling frustrations overall.
Of course in the end the best deployment system is the one you’ll actually use, so if you’re excited about packaging and deploying with nix, and will thus devote more time and energy to getting it just right, then that’s de facto the best option.
What do you find better about nix for making packages than, say, making an rpm or deb?
I don’t, yet. The “If I can learn to build packages” sentence links to an issue I’ve filed. I was unable to learn how to do so from the official documentation. I’ve almost exclusively been working in languages (PHP, Python, Ruby, JavaScript) that rpm/deb have not had good support for, prompting those languages to each implement their own package management systems that interface poorly or not at all with system packaging.
I’ve used Capistrano, Chef, Puppet, and currently use Ansible for deployment. Capistrano and Ansible at least try to be small and don’t have pretensions to being something other than an imperative scripting tool, but I’ve seen all of them break servers on deployment, let servers drift out of sync with the config, or fail to produce new deployments that match the existing one. Nix/NixOS/NixOps approach the problem from a different direction; it looks like they started from the idea of what system configuration is, instead of scripting the manual steps of maintaining one. Unfortunately nix replicates the misfeature of templating config files and providing its own config file on top of them instead of checking complete config files into a repo. Hopefully this won’t be too bad in practice, though it’s not a good sign that they implemented a programming language.
I appreciate your closing sentiment, but I’m not really trying to reach new heights of system configuration. I’m trying to avoid losing time to misconfiguration caused by services that fundamentally misunderstand the problem, leading to booby traps in common usage. I see almost all of my experience with packaging + deployment tools as a loss to be minimized in the hopes that they waste less time than hand-managing the global variables of public mutable state that is a running server.
Hmmm. I don’t think the problems you listed are 100% avoidable with any tool; some tools just make them easier to avoid than others.
I like Puppet and Capistrano well enough. But I also think packaging a Rails application as a pre-built system package is definitely the way to go, with all gems installed and assets compiled at build time. That at least makes the app deployment reproducible, though it does nothing for things like database migrations.
What do you find better about nix for making packages than, say, making an rpm or deb?
Let me show you a minimal nix package:
pkgs.writeScriptBin "greeter" "echo Hello $1!"
Et voila! You have a fine nix package of a utility called greeter that you can let other nix packages depend on, install to your environment as a user or make available in nix-shell. Here’s a function that returns a package:
greeting: pkgs.writeScriptBin "greeter" "echo ${greeting} $1!"
What you have here is a lambda expression that accepts something you can splice into a string and returns a package! Nix packages in nixpkgs are typically functions, and they offer a great amount of customizability without much effort (for both the author and the user).
At work, we build, package and deploy with nix (on the cloud and on premises), and we probably have ~1000 nix packages of our own. Nobody is counting though, since writing packages doesn’t feel like a thing you do with nix. Do you count the number of curly braces in your code, for instance? If you’re used to purely functional programming, nix is very natural and expressive. So much so that you could actually write your application in the language if its IO system were designed for it.
It also helps a lot that nix can seamlessly be installed on any Linux distro (and macOS) without getting in the way of its host.
If only ZFS from Oracle hadn’t had the licensing compatibility issues it currently has, it would probably have landed in the kernel by now. Consequently, the usage would have been higher and so would the quality of the community documentation.
If I can learn to build packages I’ll probably replace lobsters-ansible with it
Exactly. I don’t have much experience with Nix (none, actually). But in theory it seems like it can be a really nice OS-level replacement for tools like Ansible, SaltStack, etc.
This is exactly what NixOps does! See here.
Thanks for the video. I’ll watch it over the weekend!
Curious - are you also running NixOS on your personal machine(s)? I’ve been running Arch for a long time now but am considering switching to Nix just because it makes so much more sense. But the Arch documentation and the number of packages available (if you count in the AUR) are difficult to leave behind.
Yes, I’m using it on my personal machine :). I wouldn’t recommend switching to NixOS all at once; what worked for me was to install the Nix package manager, use it for package management and creating development environments, and then only switch once I was fully convinced that NixOS could do everything I wanted from my Ubuntu install. This took me about a year, even with me using it for everything at work. Another approach would be to get a separate laptop and put NixOS on that to see how you like it.
Even as a Ubuntu user, I’ve frequently found the detailed documentation on the Arch wiki really helpful.
I really want to use Nix, but I tried installing it last month and it doesn’t seem to have great support for Wayland yet, which is a deal breaker for me as I use multiple HiDPI screens and Wayland makes that experience much better. Has anyone managed to get Nix working with Wayland?
Arch’s wiki explaining how to do everything piecemeal really seems strange given that its philosophy assumes its users should be able to meaningfully help fix whatever problems cause their system to self-destruct on upgrade. It’s obviously appreciated, but still… confusing, given how many Arch users I’ve met who know nothing about their system except what the wiki’s told them.
I gave up on my nix experiment, too much of it is un- or under-documented. And I’m sorry I derailed this Arch discussion.
I’m happy to help if I can! I’m on the DevOps team at work, where we use it extensively, and I did a presentation demonstrating usage at linux.conf.au this year. All my Linux laptops run NixOS and I’m very happy with it as an operating system. My configuration lives here.
Ah, howdy again. I’m working my way through the “pills” documentation to figure out what’s missing from the nix manual. If you have a small, complete example of how to build a single package that’d probably be pretty useful to link from the github issue.
I made a small change to the example to get it to build, and I’ve added it as a comment to your issue.
Typically, if you want these kinds of requests to gain traction, you’re going to need to provide a list of Nim articles proving the need for a new tag.
A quick search shows quite a few links.
Makes sense. Sorry if that came off bad, I’m new around here and don’t really know how things are done.
Thanks for the info.
The only complaint I have with Arch is that pip install and npm install fuck the system up. The solution is that you have to use them inside virtualenv or nvm.
Which is not a distro problem, but a problem with the defaults of these third-party language package managers. Everyone assumes sudo pip install requests is a great idea, while you wouldn’t have these issues installing the requests lib from the distro repository.
I’ve configured my npm to install into ~ for global things, so it doesn’t mess up my system :)
The problem isn’t really an arch issue, it’s related to the defaults of pip and npm. I’ve never had an issue with either negatively affecting my system if I set it up as the arch wiki recommends.
For pip, use --user if you aren’t in a venv; for npm, set $npm_config_prefix to a directory in ~.
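Concretely, that setup looks something like this (the prefix path is just an example; pick whatever directory you like):

```shell
# pip: install into ~/.local instead of the system site-packages
pip install --user requests

# npm: point the global prefix at a directory in your home,
# then put its bin directory on your PATH
npm config set prefix "$HOME/.npm-packages"
export PATH="$HOME/.npm-packages/bin:$PATH"
```

With this in place, neither tool ever needs sudo, so neither can clobber files that pacman owns.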
You install pip or npm with the package manager
then you use them according to the man pages and it borks your system.
This is a fault
If special configuration is needed, that should be communicated when you install the tool with the distro package manager. That’s the whole point of a Linux distribution. If you’re supposed to use the virtual env you should at least get a warning about that when you install the tool. UX matters.
If you’re supposed to use the virtual env you should at least get a warning about that when you install the tool.
Actually, almost every popular Python package suggests using virtualenvs (or, more recently, pipenv). AFAIK, sudo pip install is considered bad practice in the Python community.
As someone who uses arch on all my developer machines, arch is a horrible developer OS, and I only use it because I know it better than other distros.
It was good 5-10 years ago (or I was just less sensitive back then), but now pacman -Syu is almost guaranteed to break or change something for the worse, so I never update, which means I can never install any new software because everything is dynamically linked against the newest library versions. And since the arch way is to be bleeding edge all the time, asking things like “is there an easy way to roll back an update because it broke a bunch of stuff and brought no improvements” gets you laughed out the door.
I’m actually finding myself using windows more now, because I can easily update individual pieces of software without risking anything else breaking.
@Nix people: does NixOS solve this? I believe it does but I haven’t had a good look at it yet.
Yes, Nix solves the “rollback” problem, and it does it for your entire OS not just packages installed (config files and all).
With Nix you can also have different versions of tools installed at the same time, without the standard python3.6/python2.7 binary-name thing most places do: just drop into a new nix-shell, install the one you want, and in that shell that’s what you have. There is so much more. I use FreeBSD now because I just like it more in total, but I really miss Nix.
EDIT: Note, FreeBSD solves the rollback problem as well, just differently. In FreeBSD, if you’re using ZFS, just create a boot environment before the upgrade, and if the upgrade fails, roll back to the pre-upgrade boot environment.
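For reference, the rollback mechanisms mentioned above look roughly like this (illustrative commands; the boot environment name is arbitrary):

```shell
# NixOS: switch the whole system back to the previous generation
nixos-rebuild switch --rollback

# plain Nix: roll back just the current user environment
nix-env --rollback

# FreeBSD + ZFS: snapshot a boot environment before upgrading...
bectl create pre-upgrade
# ...and if the upgrade goes badly, boot back into it
bectl activate pre-upgrade && shutdown -r now
```

The NixOS variant covers config files too, not just packages, which is the “entire OS” point made above.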
Being a biased Arch Developer, I rarely have Arch break when updating. Sometimes I have to recompile our own C++ stack due to soname bumps but for the rest it’s stable for me.
For Arch there is indeed no rollback mechanism, although we do provide an archive repository with old versions of packages. Another option would be BTRFS/ZFS snapshots. I believe the general Arch opinion is that, instead of rolling back, fixing the actual issue at hand is more important.
I believe the general Arch opinion is that, instead of rolling back, fixing the actual issue at hand is more important.
I can see some people might value that perspective. For me, I like the ability to plan when I will solve a problem. For example I upgraded to the latest CURRENT in FreeBSD the other day and it broke. But I was about to start my work day so I just rolled back and I’ll figure out when I have time to address it. As all things, depends on one’s personality what they prefer to do.
For me, I like the ability to plan when I will solve a problem.
But on stable distros you don’t even have that choice. Ubuntu 16.04 (and 18.04 as well, I believe) ships an ncurses version that only supports up to 3 mouse buttons, for ABI stability or something. So now if I want to use the scroll wheel, I have to rebuild everything myself and maintain some makeshift local software repository.
And that’s not an isolated case: from a quick glance at my $dayjob workstation, I’ve had to build the following locally: cquery, gdb, ncurses, kakoune, ninja, git, clang, and various other utilities, just because the packaged versions are ancient and missing useful features.
On the other hand, I’ve never had to do any of this on my Arch box, because the packaged software is much closer to upstream. And if an update breaks things, I can also roll back from that update until I have time to fix things.
I don’t use Ubuntu and I try to avoid Linux, in general. I’m certainly not saying one should use Ubuntu.
And if an update breaks things, I can also roll back from that update until I have time to fix things.
Several people here said that Arch doesn’t really support rollback, which is what I was responding to. If it supports rollback, great: that means you can choose when to solve a problem.
I don’t use Ubuntu and I try to avoid Linux, in general. I’m certainly not saying one should use Ubuntu.
Ok, but that’s a problem inherent to stable distros, and it gets worse the more stable they are.
Several people here said that Arch doesn’t really support rollback
It does, pacman keeps local copies of previous versions for each package installed. If things break, you can look at the log and just let pacman install the local package.
It does, pacman keeps local copies of previous versions for each package installed. If things break, you can look at the log and just let pacman install the local package.
Your description makes it sound like pacman doesn’t support rollbacks as such, but that you can get that behaviour if you have to and are clever enough. Those seem like very different things to me.
Also, what you said about stable distros doesn’t match my experience with FreeBSD. FreeBSD is ‘stable’, yet ports/packages tend to be fairly up to date (or at least I rarely run into outdated ones, with a few exceptions).
I’m almost certain any kind of “rollback” functionality in pacman is going to be less powerful than what’s in Nix, but it is very simple to rollback packages. An example transcript:
$ sudo pacman -Syu
... some time passes, after a reboot perhaps, and PostgreSQL doesn't start
... oops, I didn't notice that PostgreSQL got a major version bump, I don't want to deal with that right now.
$ ls /var/cache/pacman/pkg | rg postgres
... ah, postgresql-x.(y-1) is sitting right there
$ sudo pacman -U /var/cache/pacman/pkg/postgresql-x.(y-1)-x86_64.pkg.tar.xz
$ sudo systemctl start postgresql
... it's alive!
This is all super standard, and it’s something you learn pretty quickly, and it’s documented in the wiki: https://wiki.archlinux.org/index.php/Downgrading_packages
My guess is that this is “just downgrading packages,” whereas “rollback” probably implies something more powerful, e.g., “roll back my system to exactly how it was before I ran the last pacman -Syu.” AFAIK, pacman does not support that, and it would be pretty tedious to do by hand, but it seems scriptable in limited circumstances. I’ve never wanted/needed to do that, though.
(Take my claims with a grain of salt. I am a mere pacman user, not an expert.)
EDIT: Hah. That wiki page describes exactly how to do rollbacks based on date. Doesn’t seem too bad to me at all, but I didn’t know about it: https://wiki.archlinux.org/index.php/Arch_Linux_Archive#How_to_restore_all_packages_to_a_specific_date
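For reference, the date-based rollback on that page amounts to pointing pacman at the archive snapshot for a given day (the date below is illustrative) and force-downgrading everything:

```
# /etc/pacman.d/mirrorlist — comment out the regular mirrors and use only:
Server=https://archive.archlinux.org/repos/2018/08/01/$repo/os/$arch
```

Then `sudo pacman -Syyuu` syncs against that snapshot and downgrades every installed package to the versions from that date.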
now pacman -Syu is almost guaranteed to break or change something for the worse
I have the opposite experience. Arch user since 2006, and updates were a bit trickier back then; they broke stuff from time to time. Now nothing ever breaks (I run Arch on three different desktop machines and two servers, plus a bunch of VMs).
I like the idea of NixOS and I have used Nix for specific software, but I have never made the jump because, well, Arch works. Also, with Linux, package management has never been the worst problem; hardware support is, and the Arch guys have become pretty good at it.
I have the opposite experience
I wonder if the difference in experience comes down to some behaviour you’ve picked up that others haven’t. For example, I’ve found that friends’ children end up breaking things in ways that I never would, just because I know enough about computers to never even try it.
I think it’s a matter of performing a -Syu update often (every few days, or even daily) instead of once per month. Rare updates indeed sometimes break things but when done often, it’s pretty much update and that’s it.
I’ve been an Arch user for 6 years, and there were maybe 3 times during those 6 years when something broke badly (I was unable to boot). Once it was my fault; the second and third were related to an nvidia driver and Xorg incompatibility.
Rare updates indeed sometimes break things but when done often, it’s pretty much update and that’s it.
It’s sometimes also a matter of bad timing. Now, every time before doing a pacman -Syu, I check /r/archlinux and the forums to see if someone is complaining. If so, I tend to wait a day or two until the devs push out updates to the broken packages.
I have quite the contrary experience: I have pacman run automatically in the background every 60 minutes, and all the breakage I suffer is from human-induced configuration errors (such as a misconfigured boot loader or fstab).
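A setup like that might look something like the following systemd units (a hypothetical sketch, not necessarily this commenter’s actual configuration; --noconfirm makes pacman fully non-interactive, with all the risk that implies):

```ini
# /etc/systemd/system/pacman-update.service
[Unit]
Description=Unattended pacman update

[Service]
Type=oneshot
ExecStart=/usr/bin/pacman -Syu --noconfirm

# /etc/systemd/system/pacman-update.timer
[Unit]
Description=Run a pacman update every 60 minutes

[Timer]
OnBootSec=10min
OnUnitActiveSec=60min

[Install]
WantedBy=timers.target
```

Enabled with `systemctl enable --now pacman-update.timer`.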
Would be nice, yeah, though I never really understood or got into Nix. It’s a bit complicated and daunting to get started with, and I found the documentation to be lacking.
How often were you updating? Arch tends to work best when it’s updated often. I update daily and can’t remember the last time I had something break. If you’re using Windows, and coming back to Arch very occasionally and trying to do a huge update you may run into conflicts, but that’s just because Arch is meant to be kept rolling along.
I find Arch to be a fantastic developer system. It gives me access to all the tools I need, and lets me keep up with the latest technology. It also has the bonus of helping me understand what my system is doing, since I have configured everything myself.
As for rollbacks, I use ZFS boot environments. I create one prior to every significant change, such as a kernel upgrade. That way, if something did go wrong and it wasn’t convenient to fix the problem right away, I know I can always move back into the last environment and everything will be working.
I wrote a boot environment manager zedenv. It functions similarly to beadm. You can install it from the AUR as zedenv or zedenv-git.
It integrates with a bootloader if it has a “plugin” to create boot entries and keep multiple kernels at the same time. Right now there’s a plugin for systemd-boot, and one is in the works for GRUB; it just needs some testing.
Awesome! If you do, let me know if you need any help getting started, or if you have any feedback.
It can be used as is with any bootloader, it just means you’ll have to write the boot config by hand.
I’m excited about Nim. I spent some time last week playing around and really liked what I saw.
But somehow I found the import mechanism a bit confusing. I didn’t quite understand that importing a module makes all its members available in the current scope, but they’re also dot-accessible using the module name. And also, since func(a) and a.func() are equivalent (correct me if I’m wrong), there are like 3 ways to call a function defined in another module.
For instance, in the strutils example, if I start reading this code with no other context, I would assume that split() is defined for string types. But it’s actually a part of strutils module. And s.split(), strutils.split(s), and split(s) are all doing the same thing.
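Concretely, all three spellings resolve to the same strutils procedure (a minimal sketch):

```nim
import strutils

let s = "a,b,c"
# All three call the same proc from strutils:
echo s.split(",")            # method-call syntax (uniform function call syntax)
echo split(s, ",")           # plain call; split is in scope after the import
echo strutils.split(s, ",")  # fully qualified
```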
I guess you start picking these things up the more you work with a language. But I have a Python background, so code being very explicit is something I’m used to and have really come to appreciate.
Yeah, I hear you. Nim is a little different in this regard, but we do get a lot of the same sentiment. If you’re interested do give us your thoughts in this RFC on GitHub: https://github.com/nim-lang/Nim/issues/8013
Definitely, and I feel fortunate to be in a position where what I do for a living is also what I like doing in my spare time (among other things).
I don’t think it’s that simple. There’s a reason cache invalidation comes up every single time someone makes a “two hard problems in computer science” joke.
A better framing would be: “If you introduce caching, be very careful. It’s not a no-brainer.”
Completely agreed that cache invalidation is hard.
In the blog post above, I am working with a caching library that takes care of invalidation. Note that the Cachalot library is highly inefficient: it drops all QuerySets related to a model when a single object changes. If that’s a red flag for you, then you shouldn’t use it. It works very well for our use case.
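To make that trade-off concrete, here is a minimal pure-Python sketch of the same coarse invalidation strategy (hypothetical names, not Cachalot’s actual implementation): cache query results per model, and drop every cached result for a model whenever any object of that model changes.

```python
class CoarseQueryCache:
    """Caches query results per model; any write to a model
    invalidates *all* cached queries for that model."""

    def __init__(self):
        self._cache = {}  # model name -> {query key -> result}

    def get(self, model, query_key, compute):
        per_model = self._cache.setdefault(model, {})
        if query_key not in per_model:
            per_model[query_key] = compute()  # cache miss: run the query
        return per_model[query_key]

    def invalidate(self, model):
        # Called on every save/delete for `model`.
        self._cache.pop(model, None)


cache = CoarseQueryCache()
cache.get("Post", "all", lambda: ["post1", "post2"])
print(cache.get("Post", "all", lambda: ["never called"]))  # served from cache
cache.invalidate("Post")  # one object changed -> whole model's cache dropped
print(cache.get("Post", "all", lambda: ["recomputed"]))
```

Simple and always correct, but one write throws away every cached query for the model, which is exactly the inefficiency described above.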