1. -2

So…it sounds like the main thing here is package management. Why do I care about complex package management when I have docker?

As to keeping things under version control, I have configuration management.

1. 20

Docker is tremendously complex to run at any sort of scale and introduces a bunch of concerns for an application that may not be necessary. And its original purpose isn’t really to make package management easier; it’s to isolate processes securely. As such, it tends to be terrible for package management beyond anything trivial, which explains the glut of tools for provisioning containers (Packer et al.). I prefer using tools that are built to solve an issue specifically.

1. 12

Why do I care about complex package managers when I can put an entire operating system in a jail? Though then I’d still have to worry about complex package managers, just once removed.

1. 10

Docker is addressed in the OP, but essentially Docker container builds are not reproducible. Guix packages have all software dependencies hashed into the build, so they theoretically will never break (I say theoretically because the software is still in alpha), and the system can always roll back to a working version if an update goes south.
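To make the roll-back point concrete, here’s a sketch of the commands involved, assuming a working Guix install (these are standard Guix CLI subcommands; nothing here is specific to the article):

```shell
# Each guix operation builds a new profile generation instead of
# mutating the current one, so a bad upgrade is one command to undo:
guix package --upgrade           # creates a new generation
guix package --list-generations  # inspect the history
guix package --roll-back         # switch back to the previous one

# On Guix System the whole OS configuration works the same way:
sudo guix system roll-back
```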

1. 9

Docker is an attempt to solve the same problems, but it does so in an unprincipled way, and as a result it just doesn’t actually solve the problem in a way that’s consistent or reliable.

1. 4

It never struck me as much of a package manager— I have always thought the intent behind docker was to provide process isolation without the overhead of a VM. Especially since the Dockerfiles are written in an anemic shell script language.

1. 4

If you attempt to perform builds, then you need to manage packages, whether that’s your intent going into it or not.

2. 4

What I took away was that the main benefit, apart from the virtues of functional package management, is uniformity, as the entire system is managed with Guile. I haven’t used Docker, so I may be incorrect in stating this, but I do not think Docker applies to some of the use cases listed in the article, such as configuration of the Linux kernel.

1. 4

To me it’s the prospect of stability that doesn’t conflict with novelty. The classical dichotomy between the two can be seen between something like Debian Stable/CentOS vs. Debian Unstable/Arch, where the latter might update their packages quickly, at the risk of an increasingly wobbly experience, while the former insist on waiting until the experience and failures of others indicate safety. Being able to revert any change means that I don’t have to commit a leap of faith when updating, since it’s Ctrl-Z’able and can be fixed within the intentionally conceived framework (as compared to downloading .deb packages from the internet and manually installing them).

The new Libre Lounge podcast talked about a “security”/“ethical” issue in their second conversation, comparing it to other package management systems like NPM on the one hand (as a generally bad example, think of “left-pad”) and containerized approaches on the other (Docker, but also Flatpak or Snap, which (they claim) was intentionally developed to also serve the distribution of proprietary software).

Guix specifically (besides being very close to the FSF) has the interesting aspect of being written in (Guile) Scheme, with references and connections to Emacs, so the system is geared towards an open mentality and encourages you to use the rights that free software grants its users. You don’t only have the permission to download, read, edit and share changes; the system gives you the tools and the environment to easily hack for yourself – without having to manually run make or ./configure hundreds of times, installing each dependency one by one, hoping it’s the correct version.

I don’t know if this is that convincing, but I rather like this: https://www.gnu.org/software/guix/manual/en/html_node/Using-the-Configuration-System.html#Using-the-Configuration-System + https://www.gnu.org/software/guix/manual/en/html_node/Bootstrapping.html#Bootstrapping. When everything can be reproduced with minimal dependencies (i.e. a binary seed), aiming for the same systems over different platforms (x86 vs RISC-V, Linux vs Hurd, laptop(s) vs desktop(s), …), I’d be interested in using something like that, and I guess others would too.

1. 1

None of the advantages and features listed in the article interested you?

1. 0

Not as compared with existing tools that already solve these problems. It seems like the author of this article hasn’t used configuration management before and is blown away by it.

1. 12

Is there another distribution except NixOS that lets you roll back your whole system configuration? That does system upgrades atomically? That lets you say “install my same system configuration but using the packages as of August 15 last year?” That lets non-root users install packages? That can take your system configuration and automatically generate a QEMU VM or a USB live disk? That can generate Docker images in a way that’s fundamentally more composable and reproducible than docker build?
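For readers who haven’t seen NixOS, a hedged sketch of the commands behind some of these claims (classic non-flake interface; exact attribute names vary by channel):

```shell
sudo nixos-rebuild switch             # apply /etc/nixos/configuration.nix atomically
sudo nixos-rebuild switch --rollback  # revert the whole system to the previous generation
nixos-rebuild build-vm                # build the same configuration as a runnable QEMU VM
nix-env -iA nixpkgs.hello             # per-user package install, no root required
```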

1. 10

There’s a thing that Nix users like to do, which is to classify infrastructures into one of three categories: divergent, convergent and congruent.

Divergent infrastructure is what you get when you have no CM. You run arbitrary commands on your systems and hope you remember what you did; even when you do manage to stay fairly consistent, you probably have differences between boxes A and B because you ran apt-get update at different times.

Convergent infrastructure is what you get with Puppet or (I believe but have not verified) Chef or Ansible or similar: there’s some kind of description of the parts of the system that are “managed”, but it’s sufficiently broad-grained that there may be differences lurking “under the hood”: files that the CM system doesn’t know about may have been updated, rewritten or abandoned by any installed package or previously-installed-but-since-removed package, so who’s to say what they contain or whether they’re identical across your fleet? Hopefully they’re functionally identical, and 95% of the time they probably are, but… If you’ve ever had to run puppet twice to get the system into the state you wanted, or had two machines end up slightly different because the upstream package server did an update between puppet runs, you’ve experienced the 5%.

Congruent infrastructure is where, when you apply the configuration, you get the same answer every time. Nix does this; Guix (again, I assume – never used it myself) does this. If you’ve never used it before it could be confused with “traditional” CM, but I think it’s pretty next-level stuff.

1. 1

Though, I would assume that Traugott would be more a fan of Docker than Nix. Docker even resolves the control-loop problem by having a host. Sure you can argue that the host itself still has the problem, but still Dockerfiles are a very controlled and totally ordered way of setting up a system.

Update: After some more reading, I find that congruence is not what I want for my little network at home. I want my machines to install security updates from Ubuntu automatically and asap. I do see the value for full-time sysadmins though. They have the time to check and test each update.

1. 4

The choice to use alt as the mod key, and thereby overwrite commonly used keybindings, seems very counterproductive.

1. 3

Considering that most people who would even want to use this will compile it themselves, replacing Mod1Mask with something else shouldn’t be that difficult. These things are made to be hacked on, after all.

1. 4

XMonad does this too; I think they pick a bad default specifically in order to make people install the Haskell compiler and learn enough to change it.

1. 1

Didn’t know it was intentional in the case of XMonad. That’s probably also the reason I never got into it, since it’s not only Haskell but an entire DSL you’d have to learn.

1. 3

It’s only speculation, but it’s hard to find any other explanation for such a bad default.

2. 2

I’m probably in the minority, but changing my mod key to alt is the first thing I do when I configure my window manager. I just have an easier time hitting alt.

1. 1

This drives me insane.

1. 1

I use dwm which also uses alt keys and I don’t recall it ever being a problem. Programs I use seem to prefer ctrl keys.

1. 1

Most shells and GNU readline have alt+f and alt+b moving by words by default; alt+hjkl are commonly used too, but they seem to do different things in different shells.

Edit: and I forgot about the common pattern of GUI applications using alt to access the File, Edit, etc. menus by underlining the character that is bound: https://i.imgur.com/DyNr4kd.png

1. 14

Not that it’s hard to find, but for anyone whose curiosity was piqued by the opening paragraph of the README, here’s the commit where they switched from parody to serious.

1. 16
- ## Fork me, daddy ¯·.¸><(((º>
+ ## Contribtuions


There aren’t many commits like these.

1. 3

The original README screenshot was actually quite good.
Also smol and comfy.

1. 3

OwO

1. 4

There is also tinywm, at around 50 lines of code.

1. 3

This site is even more interesting, since it lists alternative implementations: http://incise.org/tinywm.html. Very helpful when trying to get into X11 programming, since it’s usually quite overwhelming to even understand all the functions used in dwm.

1. 1

“a window manager. It is only around 50 lines of C. “

I didn’t even know that was possible. Thanks for the link.

1. 0

I really disagree with having a ‘satire’ tag. It just seems wrong; it’s like tagging sarcasm with a ‘/s’ in your post, which defeats the whole purpose of satire.

1. 2

I didn’t add it, but I do get it. Some people aren’t interested in satirical articles and filter them instead (just like I filter games, finance, ios, cryptocurrencies, …). But other than that, I don’t think it totally defeats its intention either.

1. 2

You are absolutely right. /s

1. 3

A bit tongue-in-cheek.

1. 4

Pretty sure it was meant to be. I really liked its application here, too.

1. 4

maybe the “rant” or “satire” tags would be appropriate

1. 1

I thought about it, but since he twists everything in the last paragraph, I thought it was neither a proper rant nor a proper satirical piece, so I left it as it was.

1. 14

My feelings are kind of mixed so far. The lightweight UI and responsive site are a breath of fresh air. What’s a little jarring is how much of the service is centered around email. I’ve never been part of a mailing list, and emailing code to other people sounds like something from 20 years ago, but maybe I’m just a young whippersnapper that doesn’t know what he’s talking about. Git is already a complicated tool, and adding email to the mix just increases the cognitive load. I’ll still learn how to use it because it sounds kind of interesting, but my preference would still be some kind of browser interface.

1. 19

I think you should give email a chance. Git has built-in tools for working with email and you can do your entire workflow effectively without ever leaving your terminal. Sending a patch along for feedback is just git send-email -1. (-1 meaning one commit, -2 meaning the last 2 commits, etc). Here’s a guide, which is admittedly terse and slated to be replaced with a more accessible tutorial:

https://man.sr.ht/git.sr.ht/send-email.md
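For a concrete picture, the one-time setup boils down to a few `git config` entries (the server and user values below are placeholders for your own SMTP provider, not sr.ht-specific values):

```shell
# One-time setup: tell git send-email how to reach your SMTP provider.
# The server/user values are placeholders -- substitute your own.
git config --global sendemail.smtpserver mail.example.org
git config --global sendemail.smtpuser you@example.org
git config --global sendemail.smtpencryption tls

# After that, mailing the last commit for review is one command, e.g.:
# git send-email --to="~user/project-devel@lists.sr.ht" -1
```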

That being said, web tools are planned to seamlessly integrate with this workflow from a browser.

1. 11

That being said, web tools are planned to seamlessly integrate with this workflow from a browser.

I would use that.

1. 4

I like the email workflow, but I also have to be realistic - it is unlikely that my colleagues or drive-by contributors would adopt it. So, in practice it will mean fewer contributions and less cooperation.

The GitHub-like workflow is something that is ingrained now and has a relatively low barrier to entry. So, if something is going to take over, it’s something that is very similar, such as GitLab or Gitea.

Of course, there will always be projects that cater to an audience that feels at home with an email workflow.

It’s good to hear that there will be web tools as well.

1. 4

I like the email workflow, but I also have to be realistic - it is unlikely that my colleagues or drive-by contributors would adopt it.

I think as this workflow proliferates, this will become less and less true. It’s remarkably easy to make a drive-by contribution with email if you already have git send-email working, easier even than drive-by GitHub pull requests.

1. 4

git send-email working

git send-email needs a bunch of perl packages, which often means you need to set up perl packaging.

Depending on your distro/OS this can be tricky, especially because git send-email needs a bunch of network packages and they don’t always cleanly install and you have to figure out why (except you don’t know much about perl packaging, so you can’t).

There have been multiple cases on different OSes (I think OS X and some version of Ubuntu) where I gave up after half an hour of various cpan commands trying to get things to work. I’m not even going to try setting that up on Windows.

Furthermore, the UX of git-send-email is terrible. Sending followup patches is annoying, for one.

All this has forced me to try pasting patches directly into an email client. But this is broken, too. GMail, for one, converts tabs to spaces in plaintext emails, breaking patches. I could use a local client, but setting up a client well is a lot of work and confusing (I could also rant for a while about why this is the case, but I won’t), and I don’t really want to switch my workflow over to using a client.

Furthermore, half the patch mailing lists I’ve worked with have hard-to-figure-out moderation rules. They’ll outright reject some kinds of emails without telling you, and because many are human moderated it’s hard to know if your email setup worked especially if you’re using git-send-email (which you may not have invoked or set up correctly) because 90% of the time your patch won’t show up on the list and you have no idea which of the many possible reasons for that is the case.

Despite all this I’ve submitted quite a few patches to a patch mailing list (hell, I’ve been involved enough in mailing-list-based project to have commit), either by lucking out on the perl setup for send-email, by temporarily setting up a client that doesn’t sync, or by sending patches through gmail with “ignore the whitespace please, that’s gmail’s fault, I’ll fix it when I commit”. It’s a chore each time.

I’ve given email multiple chances. It doesn’t work. The activation energy of email for patch contributions is quite high.

The web UI thing sounds like a good idea, especially if it can handle replies. It’s basically what I’ve been suggesting projects on mailing lists do for ages. Use email all you want, just give me a way to interact with the project that doesn’t involve setting up email.

1. 2

Almost no one has to package git’s perl dependencies themselves. Doesn’t your OS have a package for it already? And as someone who has packaged git before, it wasn’t really that bad.

Also, the golden rule of emailing patches is never paste them into some other mail client.

1. 2

Also, the golden rule of emailing patches is never paste them into some other mail client.

Paste not, but maybe attach? FreeBSD doesn’t like it, but it’s OK for Postgres.

1. 2

I generally prefer that people don’t attach patches, either. IMO the best way to send patches is send-email.

1. 1

“IMO” and “the best” is perfectly fine. But I was under the impression that it was unconditionally the only way to submit patches when I wanted to improve sr.ht’s PG DB schemas.

1. 2

Each project on sr.ht can have its own policies about how to accept patches. sr.ht itself does require you to submit them with send-email (i.e. for patches to the open source sr.ht repositories).

1. 1

Can you elaborate on what you dislike about sending patches with a normal MUA? It’s certainly a lot easier for someone who has spent the time to configure their MUA to be able to re-use the config they’ve already got rather than configuring a new tool they’ve never used before.

1. 3

The main issue is that nearly all MUAs will mangle your email and break your patches, which is annoying for the people receiving them and will be more work for you in the long run. Also, most end-user MUAs encourage the use of HTML emails, which are strictly forbidden on sr.ht. Also, code review usually happens by quoting your patch, trimming the fat, and replying inline. This is more annoying if you attach your patch to the email.

Setting up git send-email is pretty easy and will work every time thereafter. It’s also extremely convenient and fits rather nicely into the git workflow in general.

1. 1

I see; so it has more to do with the fact that you can’t trust most popular MUAs not to screw up the patch rather than any inherent problem with that flow. For a well-behaved MUA it should be fine, but assuming a MUA is well-behaved (or even assuming that a user knows whether theirs is or not) isn’t a good bet.

Thanks.

2. 1

Almost no one has to package git’s perl dependencies themselves. Doesn’t your OS have a package for it already?

No, I don’t mean you have to package them, but you have to install them, and the installation isn’t always smooth. It’s been a while since I did this so I don’t remember the precise issues, but I think it has a lot to do with the TLS part of the net stack. Which kinda makes sense; openssl packaging/linking has issues pretty much everywhere (especially on OSX).

Also, again, Windows. A lot of devs use Windows. I got involved in open source on Windows, back when I didn’t have my own computer. I could use Git and Github, but I’m pretty sure I’d have been unable to set up git-send-email if I had to at the time. Probably can now, but I’m an experienced programmer now.

Also, the golden rule of emailing patches is never paste them into some other mail client.

I know, except:

• now handling replies is annoying
• now I need to set up git-send-email, which doesn’t always work

1. 1

Windows devs aren’t in the target audience. I heard from a macOS user that they were able to get send-email working without too much trouble recently, maybe the situation has improved.

now handling replies is annoying

Not really?

now I need to set up git-send-email, which doesn’t always work

git send-email will always work if your email provider supports SMTP, which pretty much all of them do.

1. 1

Windows devs aren’t in the target audience

If you’re wishing for email to be the future, you’re going to have to think about windows devs at some point.

(this choice is also even more hostile to new programmers, as if patch email workflows weren’t newbie-hostile enough already)

Not really?

You have to copy message ids and stuff to get lists to thread things properly

git send-email will always work

I just told you why it doesn’t always work :)

1. 3

I’m prepared to lose the Windows audience outright on sr.ht. Simple as that.

(edit)

Regarding message IDs, lists.sr.ht (and many other email archives) have a mailto: link that pre-populates the in-reply-to for you.

1. 1

I’m prepared to lose the Windows audience outright on sr.ht. Simple as that.

oh, sure, for your own tool it’s fine.

what I’m saying is that if you’re expecting this workflow to proliferate you will have to deal with this too.

Do whatever you want with your own tool: I’m just explaining why send-email proliferating is a tall order, and windows is a major draw here.

Regarding message IDs, lists.sr.ht (and many other email archives) have a mailto: link that pre-populates the in-reply-to for you.

ah, that’s nice. I may not have encountered lists with this (or been interacting only by email and not using the archive)

3. 1

git send-email needs a bunch of perl packages, which often means you need to set up perl packaging.

Personally I’ve never once seen a unix machine where the perl stack wasn’t already installed for unrelated system-level stuff.

1. 1

To be clear, perl is usually installed; it’s the relevant packages (specifically, the networking/TLS stuff) that usually aren’t.

This is particularly bad on OSX, which has its own openssl issues, so the Perl SSL packages refuse to compile.

4. 4

if you already have git send-email working

Sadly, I think this is extremely uncommon :’(

1. 3

Hence:

I think as this workflow proliferates

1. 5

I think I’d phrase this as if it proliferates, as if anything I think the number of people with sendmail (or equivalent) working on their computer is going down, not up. It’d be fun to see it rise again due to sr.ht, though I don’t know that I’m optimistic. But perhaps I’m just being overly pessimistic :)

I do worry about more casual developers though, who may not even really know how to use the command-line. I think an increasing number of developers interact with version control solely through their IDE, and only touch their command-line if they have to copy-paste some commands. It’d be interesting to see if that’s something this workflow can still cater to. Some simple web-based tooling may go a long way there!

1. 3

You don’t need to set up sendmail, you just need a mail server with SMTP - which nearly all of them support.

1. 3

Sorry, what I meant was more that you have to set up git for e-mail sending. I happen to have sendmail already set up, so all I needed was git config --global sendemail.smtpserver "/usr/bin/msmtp", but I think it’s very uncommon to already have it set up, or to even be comfortable following the instructions on https://man.sr.ht/git.sr.ht/send-email.md.

5. 2

I like the email workflow, but I also have to be realistic - it is unlikely that my colleagues or drive-by contributors would adopt it. So, in practice it will mean fewer contributions and less cooperation.

For https://fennel-lang.org we accept patches over the mailing list or from GitHub pull requests. Casual contributions tend to come from GitHub, while the core contributors send patches to the mailing list and discuss them there. Conveniently, casual contributions tend to require less back-and-forth review, (so GitHub’s poor UI for their review features is less frustrating) while the big meaty patches going to the mailing list benefit more from the nicer review flow.

6. 3

… that is, if you’ve managed to set it up in the first place, probably without an opportunity to test it – meaning you have to send your commit, not knowing what will come out, to test your setup, your configuration and the command you chose in the first place. That puts quite a lot of pressure on people, especially those who have little experience with projects, let alone email-based projects.

That being said, web tools are planned to seamlessly integrate with this workflow from a browser.

very nice.

1. 2

Nah, on sr.ht I have an open policy of “if you’re unsure about your setup, send the patch to me (sir@cmpwn.com) first and I’d be happy to make sure it’s correct”.

1. 2

I wonder if it would make sense to set up a “lint my prospective patch” email address you could send your patch to first which could point out common mistakes, assuming that kind of thing is easy to write code to detect.

1. 2

I plan on linting all incoming emails to lists.sr.ht to find common mistakes like this and reject the email with advice on how to fix it.

1. 1

If you can get this running well and cheaply, you could potentially do an end run around people’s send email setup related issues by hosting a “well formed, signed, patches-only” open email relay, and local git config instructions.

2. 1

Is there a way or a plan to have a patch-upload form for example? That might be helpful for beginners.

1. 3

Yes, I plan on having a web UI which acts as a frontend to git send-email.

2. 4

I like that it’s using e-mail so it’s “federated” and decentralized by default.

The e-mail workflow has two problems though:

• integrations: usually projects have a lot of checks that can be automated (“DCO present”, “builds correctly”), for e-mail workflow this kind of stuff needs to be built (check out how Postgres does it),
• client configuration: to correctly use this workflow, one needs to configure git send-email (setting up credentials, for example), the project configuration (the correct sendemail.to and format.subjectprefix) and the e-mail client to send plain-text, 72-character-wrapped messages. Apparently not everyone does that.

Mailing lists vs Github nicely summarizes the benefits of MLs over GitHub, but also highlights the number of things maintainers need to set up to run their projects on MLs that GitHub gives them “for free”.

From my point of view, sr.ht looks like a great way to validate whether it’s possible to bring easy project collaboration from GitHub to MLs.

1. 2

usually projects have a lot of checks that can be automated

This is planned on being addressed soon on sr.ht with dispatch.sr.ht, which is used today to allow GitHub users to run CI on builds.sr.ht. The same will be possible with patches that arrive on lists.sr.ht.

client configuration

There’s a guide for send-email:

https://man.sr.ht/git.sr.ht/send-email.md

As for other emails, I’m working on some more tools to detect incorrectly configured clients and reject emails with advice on how to fix it.

Thanks for the feedback!

1. 2

I’m really interested in how far can one push this model.

Would it build the patch and e-mail back build results? For example with a link to build results and a quick summary?

Are you also planning for some aggregation of patches? (Similar to what Postgres has). For example Gerrit uses Change-Id to correlate new patches that replace old ones. Would you for example use Message-Id and In-Reply-To with [Patch v2] to present a list on a web interface of patches that are new / accepted / rejected? This interface could be operated from e-mail too I think, e.g. mailing LGTM would switch a flag (with DKIM validation so that the vote is not spoofed).

By the way I really like how sr.ht is challenging status-quo of existing solutions that just want to mimic GitHub without thinking about basic principles.

Good luck!

1. 7

Would it build the patch and e-mail back build results? For example with a link to build results and a quick summary?

Yep, and a link to a full build log as well.

Would you for example use Message-Id and In-Reply-To with [Patch v2] to present a list on a web interface of patches that are new / accepted / rejected?

Yep!

This interface could be operated from e-mail too I think, e.g. mailing LGTM would switch a flag (with DKIM validation so that the vote is not spoofed).

Aye.

1. 3

Great!

By the way, I admire your pro-active approach of not only explaining the problem but also building beautiful software that solves the problem! 👍

2. 3

emailing code to other people sounds like something from 20 years ago

At least OpenBSD is still doing that on the regular on the tech@ mailing list. It definitely still works.

1. 2

And I love it. It’s so damn easy to just email a one-off diff and watch someone land it. No accounts, no registration, no forking repos and dealing with fancy weird web UIs…

2. 3

One day, the current generation of “Email is SO 5 minutes ago!” kids are going to wake up and realize that e-mail is an amazing tool.

Or so I’d like to think :)

1. 1

I could be convinced. What’s your argument in favor of email?

1. 3
• Inherently de-centralized
• Can be tuned for anything from near-real-time end-to-end response to low-bandwidth batch processing where network is at a premium
• Vendor neutral
• As rich or as minimal as you want it to be
• Arbitrary content types - you can send everything from 7-bit ASCII to arbitrarily complex HTML/CSS and varying payload types
• Readable with everything from /bin/cat to a nice client like Thunderbird and everything in between
• Rich capabilities for conversation threading
• Rich search capability client and server side
• Myriad archival and backup options

The list goes on.

For a more end user/business-centric version of this see In Defense of Email

1. 1

I’m confused by some of these, for example

Q: [“hip”,“hip”]

A: (hip hip array!)

I don’t even get why that’s a question.

1. 1

ok. I’ll fix it.

A: You call it: hip hip array!

1. 1

Should probably be “What do you call [“hip”, “hip”]?” or “Whaddu call [“hip”, “hip”]?” :p

1. 2

“whaddaya” sounds more how I speak, personally. I do not speak as I type. :)

2. 1

Oh, I was thinking of S-Expressions when I saw (hip hip array!) and I thought it was some array/linked-list joke.

3. 1

This joke works better as a written one-liner. It looks like Wes Bos tried to force it into a uniform format.

1. 1

correct

1. 1

The one thing I want from Syncthing is an easier way to set it up from Salt/Ansible. The current config file mechanism is tricky to get right.

1. 1

Honestly, I’ve never bothered configuring it directly, since the web GUI covers everything I need. It’s basically just

• installing syncthing,
• starting the service and
• connecting it to your other devices

which (at least for me) is manageable.

1. 1

The only problem is programmatically adding peers, such as in the states that bring up my webserver (static content is stored in Syncthing for easy use).

1. 1

That’s true, I hope a CLI client will be developed that’s a bit more lightweight and more programmable, for use-cases like these.
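For what it’s worth, recent Syncthing versions expose a REST API that can add peers programmatically, which may cover this use case; a hedged sketch (the endpoint is from Syncthing’s config API, and the API key, address and device ID below are placeholders):

```shell
# Add a peer device through the running Syncthing instance instead of
# editing config.xml by hand. The API key comes from the GUI settings;
# the device ID here is a made-up placeholder.
API_KEY="your-api-key"
DEVICE_ID="DEVICE-ID-OF-THE-PEER"

curl -s -X PUT \
  -H "X-API-Key: $API_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"deviceID\": \"$DEVICE_ID\", \"name\": \"webserver\"}" \
  "http://localhost:8384/rest/config/devices/$DEVICE_ID"
```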

1. 10

There is a new (disabled by default, undocumented) shell option to enable and disable sending history to syslog at runtime

this sounds abusable…

1. 3

I agree, totally. Now, on the other hand, why the need to add another feature like this? And why undocumented? I know it is not hard to document it in the man page (and, in the case of GNU, the info pages too); a few lines mentioning how to use it shouldn’t take more than a couple of minutes to put together. When I switch completely to OpenBSD (which I hope will happen anytime soon), I won’t miss GNU/Linux for things like this.

1. 3

“Undocumented” is how GNU tends to say “don’t use this” or “this is internal, don’t rely on this, we might change it or get rid of it.”

Not saying I agree, but that’s just how I understand they do things.

1. 15

I agree! My org-mode work file is massive! A related practice is to leave one unit test failing when you go home, makes it easy to pick up where you left off when you return.

1. 4

I’ve thought about starting something along these lines too, but more like one log.org file per project (interlinking shouldn’t be a problem). But I have the feeling that you have to have an idea how to properly structure it before you start, otherwise maintaining it would take too much time, and you’d forget about it after a while. Do you have any tips how to make it easier to use in your case?

1. 1

Multiple org files should be fine; I’m pretty sure you can use refile to move items among the files.

My structures have been redone as I go along. I use a single file with tags to split up accidental and essential difficulties.

I clock in and out of tasks and sub-items and use org’s reporting functions and orgstat to see how I spent my time. I also use org-clock-modify-effort-estimate (C-c C-x C-e) to make sure I don’t get stuck on a single item.

I have emacs show org-agenda-list at startup so I always see items where I’ve set scheduled / deadline values.

org-tempo lets me put source code inline and even run the code and see the results immediately. I use that to collect snippets and build quick prototypes. I also use it to keep interviewees’ whiteboard code inline with interview notes.

I still feel like an org-mode newbie, if you have more tips please reply!

A bonus non-org tip is flycheck-color-mode-line-mode :-)

1. 2

This is a big help, because it represents the set of standard utilities available on all Unix-like systems. Now, I wonder if there is something Unix-like that doesn’t support one of these.

1. 1

I think the problem is not so much the availability of the tools but the slight differences. The GNU version of some command is slightly different than the BSD one and Solaris is different again.

1. 1

I agree. Still, I’m excited to learn what the standard tools are and what flags should be supported at minimum.

2. 1

I guess Plan 9 would be a big example (notoriously lacks find), or am I misunderstanding you?

1. 2

I think I wasn’t clear enough with my previous comment, sorry if you misunderstood me. What I was trying to ask is whether anyone knows of a Unix-like (or wants-to-be-Unix) system that doesn’t support some of these standard tools. Plan 9 is different, because it is not Unix-like per se.

1. 1

When I worked with QNX (a real-time microkernel OS) it came with a userland that was very much like Unix. OS-9 (not the one from Apple but a different OS-9, originally for the 6809, with versions later available for the 68000 and 80386) was also a Unix-like operating system.

1. 7

I feel like it’s worth mentioning that some people feel like using Beamer is a bit of a curse. Nothing makes a presentation less engaging than piles of equations, tiny source code, and bullet points, but that’s precisely what Beamer makes easy to add.

I think some of the javascript libraries for presentations are a better fit as they make it easy to embed videos, animations and transitions that guide the eye to what matters. Unless you need to be able to send someone a pdf of the presentation, I’d hesitate to recommend using this library without large amounts of discipline.

1. 9

I think what’s going on here is that too many people have been sitting in university rooms listening to boring lecturers giving excruciating presentations made with Beamer and filled with hundreds of bullet points.

Not that I’m the biggest Beamer expert out there, but I use it for all my slides and I think the results are pretty good.

I think some of the javascript libraries for presentations are a better fit as they make it easy to embed videos, animations and transitions that guide the eye to what matters.

Animations, videos and transitions can be abused exactly like bullet points. In an effort to escape the boring-lecturer-effect, we should be careful not to err on the side of entertainment and produce presentations filled with animated gifs and almost zero content (I’ve seen many of those too, lately).

1. 2

I think some of the javascript libraries for presentations are a better fit as they make it easy to embed videos, animations and transitions that guide the eye to what matters.

Unless you want to print the slides..?

1. 1

Nothing makes a presentation less engaging than piles of equations, tiny source code, and bullet points, but that’s precisely what Beamer makes easy to add.

At university this has become quite popular. Instead of lecture notes we just get densely populated Beamer presentations, which are suited neither for following during a lecture nor for reading when studying.

I think it’s a pity that many of the more interactive features of Beamer beyond \pause are just forgotten, seemingly ignoring all principles of good presentation-making.

1. 1

I’m not sure even the advanced features really help. What matters is what a tool makes easy to do.

1. 1

This is why I despise Beamer. Also it is a pain to use compared to alternatives.

2. 1

I totally agree! I have used reveal.js with pleasure and success, though I used only a bare minimum of the features, as I find most stuff in presentation software distractions not attractions.

1. 1

What javascript libraries do you have in mind? I’m a heavy (disciplined) Beamer user and, like @ema, think I produce quality slides, but I am curious about other tools for programmatic presentation generation.

1. 1

Truthfully, these days I use reveal.js with Jupyter notebooks (https://github.com/damianavila/RISE)

I’ve used deck.js, reveal.js and eagle.js. Aside from needing to futz with npm these have all been perfectly adequate. Thanks to MathJax, I can still put in an equation if it’s needed. For some of them you can even use pandoc to generate the html directly from markdown https://pandoc.org/demos.html.

Like I said, if you are disciplined, Beamer can work really great. For me what counts is what the tool encourages you to do and not to do. From that standpoint, a lot of tools would have trouble outdoing sent.

2. 1

That’s interesting to know. I’m in the process of converting my workshop slides from PowerPoint to beamer. Most of the slides are either code, short definitions, or diagrams, and I wanted to be able to easily find/replace in my slides. They’re there to frame the live coding sections, so hopefully the plainness won’t be too much of a problem.

1. 5

Does anyone here use recutils? I think I hear about them every few years, but never use them for anything.

1. 1

The Guix distribution uses it, for package descriptions maybe? I forgot, sorry.

1. 3

No, they use S-Expressions (see package archive). Would be a shame if they used Lisp without making use of such a fundamental feature.

1. 3

They print the package descriptions on the CLI as recutil records.

1. 1

Oh, didn’t notice that. I guess I was confused by the term “package descriptions” in the context of a functional/declarative package manager.

1. 3

That’s on me, I was in a hurry when I wrote that. Sorry!

1. 6

The GNU Recutils is a textbook example of how to have a decent set of tools with ridiculous marketing that makes the project look like a joke.

The idiocy of the logo is only made more idiotic via the answers about it in the FAQ:

• Why is the logo depicting a pair of copulating turtles?
• What is the name of the turtles?
• Why those names?
1. 7

I looked at the logo, thought “why is the logo two turtles fucking?”, then reasoned “nah, it’s just a turtle and a shadow, and someone just didn’t think about it.”

I’m…not shocked as such to learn that my first interpretation was correct, but I’m definitely disappointed.

1. 5

Oh wow, new levels of awkward humour from the FSF….

1. 4

This is an old level.

2. 3

Yeah, I immediately had a WTF moment seeing the turtles.

1. 3

Heh, that’s wonderful. I’d like to see more software projects with the sense of playfulness that leads them to have their logo be a pair of gay turtles fucking.

1. 2

:: facepalm ::

The tools actually look really nice and they solve a real problem.

Now I am wondering how to actually get this changed so I can, e.g., share the project with others who might be a little straight-laced.

1. 1

1. 1

… no? I don’t just use software at work. I have a broad enough community outside of commerce that could use good tools.

2. 1

Maybe they would prefer SQLFairy if two gay turtles are too much?

Or not tell anyone the FAQ explicitly mentions Fred and George are homosexual.

3. 1

That’s weird, why is the logo on the Savannah page a microphone then?

1. 4

Because savannah has also perfected the art of tedious and confusing navigation.

1. 1

No, I saw that page, I was just confused why there seem to be two logos, one “acceptable” the other probably some inside-joke.

1. 3

Is there any Disqus alternative based on ActivityPub, for example? Something I could embed into my site, and preferably associate the comments with my Mastodon account?

I created a Mastodon account just a few days ago, and so far so good!

1. 1

Not that I know of, but it would certainly be possible to create.

1. 3

Sometimes I get the feeling that C has the habit of using the static keyword in places where it should have been the default behavior.

1. 1

care to elaborate? where is static not the default when it should’ve been and why?

1. 3

I’ve been observing this community over the last few months with great interest. http://tilde.club/ seemed very interesting the first time I found it (I wished it had existed when I was limited to using Windows), but it was closed (which of course has its benefits too, as this site shows). Nevertheless I consider http://tilde.town/ to be more successful in its message and idea.

It’s kind of a more general, and larger, community like the one around suckless/cat-v/nixers that I found very interesting around a year ago. They have common opinions that go against contemporary trends, forging a community that does stuff and creates stuff. But what makes it even more interesting is how it transcends just one site, while not losing its character. The affinity to the fediverse also delights me.

Sadly it seems to be kind of a 10%/90% affair when it comes to activity: just look at http://tilde.town/ and you’ll see most people still have little more than the default index.html (examples 1, 2). What I read into this is that although this exists, and people find it interesting, many don’t know what to do with it. Kind of sad… General lack of creativity maybe? But I haven’t seen it from the other side, so maybe they are more active in other ways.

1. 5

Not all people are super creative with their sites; at least for me most of the social activity occurs on the irc net and the mailing lists.

1. 3

I think it’s important to note that many of these pubnix servers are not oriented toward generating a lot of public web content, but rather to intra-system activities. IRC chat, bulletin boards, local gaming, “botany”, graffiti walls, and so on are extremely popular. tilde.town is a lush playground of activity of all sorts, just not a lot of it bleeds out through the web. But that’s also kind of the point. Everyone on a tilde knows how to toss a webpage out there in some form or another. They congregate for the community. The outputs are very different.

Now, there are other tildes, like my own https://cosmic.voyage (or gopher://cosmic.voyage) which ARE oriented toward a public channel (collaborative storytelling in our case). Our activity is still more robust in IRC with people talking and planning than the output suggests.

Finally, you touched on federation and that’s some new and exciting territory for the tildeverse. While we do have a round-robin of IRC servers that all federate, there’s also some novel experimentation going on. The circumlunar pubnix servers are rsyncing their local bulletin boards to one another. Cosmic, baud.baby, and circumlunar are experimenting with a low-fi social networking system built on top of fingerd. There’s a lot of playing around of this sort as people push limits and turn their hobby eye toward community building.

1. 2

maybe they are more active in other ways.

Yes. The community is super active on IRC, 24 hours a day. They also have a local intranet for more private things that users don’t want indexed by Google. There are also a number of CLI apps that don’t have a web presence, such as feels, bbj, botany, etc.

1. 6

The pessimistic me would say:

• An ever-increasing share of the web will be offered as a walled service, only using the web as a foundation for a more controlled network.
• Hardware and software will become more closed in practice (even if they are open source in name).
• Traditional desktops/laptops will lose ground, while more questionable portable devices will gain in popularity
• Energy-consumption by server-farms/consumer-devices will probably rise
• New social networks will arise and fall at greater speeds, not always transparently, and will be fueled by simple but addictive offers.

The optimistic me would say:

• With the rise of RISC-V a market and a consciousness for open-hardware will arise, lowering the barrier for pure free software systems
• Distributions like Nix and Guix that make it easier to build software reproducibly will become more popular
• Alternative networks around the Fediverse will gain more popularity, and (for some reason) will still be able to survive in an actually distributed fashion, avoiding addiction-tricks
• The split between users and developers will shrink, as the ability of “programming”/understanding of computational thinking becomes more widespread due to a higher focus on it in public schools (as a form of “literacy” and the ability to emancipate oneself).
• Energy-consumption by server-farms/consumer-devices will probably rise less than the pessimistic me believes
• Blockchain/AI-Hype will lose steam (connected to the previous point about literacy).

The idealistic me would say:

• The modern web will realize that many of its developments have been mistakes and there will be high-level attempts at moderating it to become more sane (web-engine wise, the centrality of browsers, overuse of HTTP, etc.)
1. 2

Hardware and software will become more closed in practice (even if they are open source in name).

Intel’s Software Guard Extensions and the Intel Attestation Service are super-scary in this respect. At least they seem to be opening up to third-party attestation services, now.

1. 2

Blockchain/AI-Hype will lose steam (connected to the previous point about literacy).

Possibly unpopular opinion, but I’m not too sure about blockchain losing steam. This video explains the reasoning much better than I can.