Fun thing: “calver” sounds like the French word “calvaire”, which means an ordeal: something hard to deal with because it is too complex or messy.
Jokes aside, my main issue with CalVer is that you can’t easily see the “milestones” of a project, e.g. when the API changes a lot, when it reaches a stable state, or when it gets rewritten. If you take the Ubuntu example, there is no distinction between LTS and standard releases. Same with youtube-dl: I was using a version from early 2017 for months, and then Google changed the YouTube API and youtube-dl had to accommodate it, making all previous releases useless. You can’t know which version addresses that other than by reading all the changelogs in between.
I prefer the way software like “pcc” is handled, with a SemVer mode plus daily (or weekly/monthly) builds, which let you get the build tree at a specific date if you so prefer. Best of both worlds!
I don’t see how sending magic codes is better UX. It requires the user to leave the context of my application, open up some kind of email client, click the link, and return to the application.
It proves you have control of the email account (Or at least can intercept it :) ). It’s like an “I am this person”. Some people use their email account as their password manager anyway, so this could be at least slightly more secure than that, you have to have current access to the email account, it can’t just be a copy of all their emails from 1 year ago.
[Comment removed by author]
Can’t you just use a hotkey? I call pass through dmenu and it’s a pretty fast process.
I’m concerned about how this works. I don’t have (or want) email on all my devices, but I do have my encrypted password files and my gpg key synced across my devices.
I doubt the average user would set up pass, dmenu and gpg-agent to make this workflow easy though. They’ll mostly keep their password manager wide open at all times, which means they still have to switch windows, search for the password, copy it, go back, paste, log in. On the other hand, receiving a magic link only requires switching windows and two clicks, and you’re in. Sounds simpler to me, especially on a phone, where inputting a viable master password is a HUGE pain in the neck, as much as switching windows is.
It requires the user to leave the context of my application, open up some kind of email client, click the link, and return to the application.
I think the idea is that if you have a rarely used app, the users will need to do that to either reset the password or look up the password anyway.
I don’t understand how the C implementation of true is non-portable. I mean, every C compiler and environment I have seen can declare main() as returning an integer, and return 0 to the shell. Can someone explain how the following C code is non-portable?
#include <stdio.h>
int main() {
    return 0;
}
His complaint is that the executable is not portable, not the source file. The literal file /usr/bin/true on a Linux machine can’t be dropped onto a machine that doesn’t use ELF binaries, for example.
It does make more sense, but what doesn’t (to me) is why Pike of all people complains about binary vs. source compatibility.
Can someone explain how the following C code is non-portable?
main must be declared either with no parameters (int main(void)) or with exactly two (int argc, char **argv). Anything else is implementation-defined territory.
I don’t see a lot of value in the changes to true that Rob complains about.
However, I also don’t see how having your shell scripts depend on an empty file with a particular name, so that you can run that command to get a 0 status code, counts as “good software”.
I don’t suppose there’s a practical problem to doing it that way[0], but imagine you have to explain true to an alien who knows a great deal about programming, but has no background with unix.
[0] I’m tempted to argue that every change in this series of tweets is the predictable consequence of true being a file. Insofar as you think these changes are bad, you should be bothered by the original decision.
Well, you would have to explain to that alien what unix is and how it works anyway, because true can only be “true” on unix systems.
You could also tell that alien: “executing a file on unix will return successfully, unless the program specifies otherwise. An empty file is an empty program and thus does nothing, so it returns successfully.” And they don’t even need their weird alien programming logic ;)
This is why we can’t have good software. This program could literally have been an empty file, a nothing at all, a name capturing the essence perfectly.
I’m not sure I could disagree more strongly. An empty file only has the true behavior because of a bunch of incredibly non-obvious specific Unix behaviors. It would be equally reasonable for execution of this file to fail (like false) since there’s no hashbang or distinguishable executable format to decide how to handle it. At a somewhat higher level of non-obviousness, it’s really weird that true need be a command at all (and indeed, in almost all shells, it’s not—true is a builtin nearly everywhere).
true being implementable in Unix as an empty file isn’t elegant—it’s coincidental and implicit.
I mean, it’s POSIX-specified behavior that any executed file that isn’t a loadable binary is passed to /bin/sh (“#!” as the first two bytes results in “implementation-defined” behavior), and it’s POSIX-specified behavior that, absent anything else, a shell script exits true.
It’s no more coincidental and implicit than “read(100)” advances the file pointer 100 bytes, or any other piece of standard behavior. Sure, it’s Unix(-like)-specific, but, well, it’s on a Unix(-like) operating system. :)
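For what it’s worth, this is easy to check from any POSIX shell. A minimal sketch (the file path comes from mktemp; nothing else is assumed):

```shell
# Create a zero-byte file and mark it executable.
tmp=$(mktemp)
chmod +x "$tmp"

# execve() rejects it (no binary format, no "#!"), so the shell
# falls back to running it as a script with sh. An empty script
# exits 0, i.e. "true".
"$tmp"
status=$?
echo "empty executable exited with status $status"
rm -f "$tmp"
```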
It’s precisely specified, yes, but it’s totally coincidental that the specification says what it does. A perfectly-reasonable and nearly-equivalent specification in an alternate universe where Thomson and Ritchie sneezed five seconds earlier while deciding how executables should be handled would have precisely the opposite behavior.
On the other hand, if read(100) did anything other than read 100 bytes, that would be extremely surprising and would not have come about from an errant sneeze.
Black Mirror Episode: The year is 2100 and the world is ravaged by global warming. The extra energy aggregated over decades because non executables went through /bin/sh caused the environment to enter the tipping point where the feedback loops turned on. A time machine is invented, where one brave soul goes back in time with a feather, finds Thomson and makes him sneeze, saving humanity from the brink of extinction. But then finds himself going back to 2100 with the world still ravaged. Learns that it was fruitless because of npm and left-pad.
it’s totally coincidental that the specification says what it does.
This is true of literally all software specifications, in my experience.
Surely we can agree that it is far more coincidental that an empty executable returns success immediately than that e.g. read(100) reads 100 bytes?
Why isn’t 100 an octal (or a hex or binary) constant? Why is it bytes instead of machine words? Why is read bound to a file descriptor instead of having a record size from an ioctl, and then reading in 100 records?
Just some examples. :)
Obviously, minor variations are possible. However, in no reasonable (or even moderately unreasonable) world would read(100) write 100 bytes.
The current (POSIX) specification is the product of historical evolution caused in part by /bin/true itself. You see, in V7 Unix, the kernel did not execute an empty file (or shell scripts); it executed only real binaries. It was up to the shell to run shell scripts, including empty ones. Through a series of generalizations (starting in 4BSD with the introduction of csh), this led to the creation of #! and kernel support for it, and then POSIX requiring that the empty file trick be broadly supported.
This historical evolution could have gone another way, but the current status is not the way it is because people rolled out of bed one day and made a decision; it is because a series of choices turned out to be useful enough to be widely supported, eventually in POSIX, and some choices to the contrary wound up being discarded.
(There was a time when kernel support for #! was a dividing line between BSD and System V Unix. The lack of it in the latter meant that, for example, you could not make a shell script be someone’s login shell; it had to be a real executable.)
The opposite isn’t reasonable though. That would mean every shell script would have to explicitly exit 0 or it will fail.
Every. Shell. Script.
And aside from annoying everyone, that wouldn’t even change anything. It would just make the implementation of true be exit 0, instead of the implementation of false be exit 1.
And read(100) does do something besides read 100 bytes. It reads up to 100 bytes, and isn’t guaranteed to read the full 100. You must check the return value and use only the number of bytes actually read.
It’s not obvious to me that an empty file should count as a valid shell script. It makes code generation marginally easier, I suppose. But I also find something intuitive to the idea that a program should be one or more statements/expressions (or functions if you need main), not zero or more.
So if you run an empty file with sh, you would prefer it exits failure. And when you run an empty file with python, ruby, perl, et al., also failures?
Why should a program have one or more statements / expressions? A function need not have one or more statements / expressions. Isn’t top level code in a script just a de facto main function?
It’s intuitive to me that a script, as a sequence of statements to run sequentially, could have zero length. A program with an entry point needs to have at least a main function, which can be empty. But a script is a program where the entry point is the top of the file. It “has a main function” if the file exists.
I think whatever the answer is, it makes equal sense for Perl, Python, Ruby, shell, any language that doesn’t require main().
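As a quick sanity check (assuming sh and python3 are installed), those interpreters do treat an empty file as a valid, successful program:

```shell
# A zero-byte file is a valid (empty) program for these interpreters.
empty=$(mktemp)
sh "$empty";      sh_status=$?   # empty shell script: exits 0
python3 "$empty"; py_status=$?   # empty Python program: also exits 0
echo "sh: $sh_status, python3: $py_status"
rm -f "$empty"
```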
In my opinion, your last argument begs the question. If an empty program is considered valid, then existing is equivalent to having an empty main. If not, then it isn’t.
In any case, I don’t mean to claim that it’s obvious or I’m certain that an empty program should be an error, just that it seems like a live option.
Exactly. It sounds like arbitrary hackery common in UNIX development. Just imagine writing a semi-formal spec that defines a program as “zero characters” which you pass onto peer review. They’d say it was an empty file, not a program.
I guess true shouldn’t be considered a program. It is definitely tied to the shell it runs in; you wouldn’t call execv("true", {"/bin/true", NULL}) to exit a program correctly, for example. true has no use outside of the shell, so it makes sense to have it use the shell’s features. That is why it now tends to be a builtin. But being a builtin is not specified by POSIX. Executing a file, on the other hand, is, and the spec says the default exit code is 0, or “true”. By executing an empty file, you’re asking the shell to do nothing, and then return true. So I guess it is perfectly fine for true to just be an empty file. Now I do agree that such a simple behavior has (like often with unix) way too many ways to be executed, and people are gonna fight about it for quite some time!
What about these?
alias true=(exit)
alias true='/bin/sh /dev/null'
alias true='sh -c "exit $(expr `false;echo $? - $?`)"'
The one true true !
It depends upon the system. There is IEFBR14, a program IBM produced to help make files in JCL, which is similar to /bin/true. So there could be uses for such a program.
It also has the distinction of being a program that was one instruction long and still had a bug in it.
“That is why now it tends to be a builtin.”
Makes sense. If tied to the shell and unusual, I’d probably put something like this into the interpreter of the shell as an extra condition or for error handling. Part of parsing would identify an empty program. Then, either drop or log it. This is how such things are almost always handled.
That would mean every shell script would have to explicitly exit 0 or it will fail.
I don’t see how that follows.
Once the file is actually passed to the shell, it is free to interpret it as it wishes. No reasonable shell language would force users to specify successful exit. But what the shell does is not in question here; it’s what the OS does with an empty or unroutable executable, for which I am contending there is not an obvious behavior. (In fact, I think the behavior of running it unconditionally with the shell is counterintuitive.)
And read(100) does do something besides read 100 bytes.
You’re being pedantic. Obviously, under some circumstances it will set error codes, as well. It very clearly reads some amount of data, subject to the limitations and exceptions of the system; zero knowledge of Unix is required to intuit that behavior.
I don’t see how that follows.
You claim the exact opposite behavior would have been equally reasonable. That is, the opposite of an empty shell script exiting true. The precise opposite would be an empty shell script—i.e. a script without an explicit exit—exiting false. This would affect all shell scripts.
Unless you meant the opposite of executing a file not loadable as an executable binary by passing it to /bin/sh, in which case I really would like to know what the “precise opposite” of passing a file to /bin/sh would be.
You’re being pedantic. Obviously, under some circumstances it will set error codes, as well. It very clearly reads some amount of data, subject to the limitations and exceptions of the system; zero knowledge of Unix is required to intuit that behavior.
No. Many people assume read will fill the buffer size they provide unless they are reading the trailing bytes of the file. However, read is allowed to return any number of bytes within the buffer size at any time.
It also has multiple result codes that are not errors. Many people assume when read returns -1 that means error. Did you omit that detail for brevity, or was it not obvious to you?
If a file is marked executable, I think it’s quite intuitive that the system attempt to execute. If it’s not a native executable, the next obvious alternative would be to interpret it, using the default system interpreter.
Saying the behavior is totally (or even partially) coincidental is a bit strong. You’re ignoring the fundamental design constraints around shell language and giving the original designers more credit than they deserve.
Consider this experiment: you pick 100 random people (who have no previous experience with computer languages) and ask them to design a shell language for POSIX. How would all of these languages compare?
If the design constraints I’m talking about didn’t exist, then it would indeed be random and one would expect only ~50% of the experimental shell languages to have a zero exit status for an empty program.
I strongly doubt that is what you would see. I think you would see the vast majority of those languages specifying that an empty program have zero exit status. In that case, it can’t be random, and there must be something intentional or fundamental driving that decision.
I don’t care about how the shell handles an empty file. (Returning success in that case is basically reasonable, but not in my opinion altogether obvious.) I’m stating that the operating system handling empty executables by passing them to the shell is essentially arbitrary.
The reason for the existence of human intelligence isn’t obvious either but that doesn’t make it random. A hostile environment naturally provides a strong incentive for an organism to evolve intelligence.
As far as the operating system executing non-binaries with “/bin/sh” being arbitrary, fair enough. Though I would argue that once the concepts of the shebang line and an interpreter exist, it’s not far off to imagine the concept of a “default interpreter.” Do you think the concept of a default is arbitrary?
It’s precisely specified, yes, but it’s totally coincidental that the specification says what it does.
laughs That’s really taking an axe to the sum basis of knowledge, isn’t it?
Yes, an empty file signifying true violates the principle of least astonishment. However, if there were a way to have metadata comments about the file describing what it does, how it works, and what version it is, without having any of that in the file, we’d have the best of both worlds.
true being implementable in Unix as an empty file isn’t elegant—it’s coincidental and implicit.
But isn’t this in some sense exactly living up to the “unix philosophy”?
To me, the issue is whether it is prone to error. If it is not, it is culture building because it is part of the lore.
As far as I can tell, this is a revival of dwm’s tags/views model.
While, apparently, many dwm users use tags as if they are workspaces, that is only a fraction of their potential; and, when used properly, they can offer a workflow identical (from my perspective) to the one mentioned in the post.
This is not to say that the author is wrong or is stealing and should give credit or anything like that. It is only to suggest that this paradigm has been known and is available in some window managers.
Now, here’s where my knowledge of the subject gets a little thin. Where dwm supports this paradigm, I am unsure as to whether or not its many derivatives do (e.g., awesomewm, xmonad, etc.). I’d be interested in hearing from those users if this type of configuration is possible (presumably, anything is possible in xmonad since it’s really just a WM library and you can have whatever logic you’re willing to program, but I more meant, well-supported and easy to achieve through minor configuration changes).
Awesome supported this, I believe. I remember having Win+[1-9] set to switch between tags, and when I would accidentally hold control when switching I would get windows from both tags!
I like the idea of this, but I struggle with the execution when bringing in another tag causes overlaps or forces my current workspace to rearrange. For example, if I had chat and a browser open together taking up most of the screen while dealing with some operational issue, adding the editor group/tag to the screen would either overlap (if things were floating) or rearrange/resize existing windows (if some kind of tiling).
To those that use the group/tagging feature in the way described in the post, how do you deal with the overlap or resizing issue?
I fix the areas, so I don’t say «also give me group X», I say «please put group X in this subarea (and, in the majority of cases, remove everything else from this subarea) without touching the rest of my screen».
I have pretty much the exact shortcuts you describe. I use tiling mode almost exclusively, so I expect it when I look at two at once. It seems totally normal. I also have super + J and K for moving windows up and down in the current order.
I’m not running xmonad at the moment, but it seems like xmonad-contrib has a XMonad.Actions.TagWindows module that does this.
dwm’s tags are indeed capable of handling the workflow I describe in this post, and even more, IIRC.
My first experience with groups came with cwm which lets you add windows to a group. Doing so would automatically remove the window from any other groups.
With dwm, a single window can have multiple tags, thus allowing finer control over your task set, and over which applications to bring back and forth. This might be a little more complex to manage though, as you are responsible for adding AND removing windows from tags. Automatic removal from groups is, to me, the best compromise between workspaces and tags.
Great post! I’ll definitely pick up some ideas here and there. I currently use tarsnap for backing up my servers’ configs, git repos and web content (~3G only). $5 was enough to hold my data for a year and a half though. For my personal data, I have two separate disks which I mirror with rsync, one of them a USB disk that I plug in occasionally. I’m definitely not satisfied with it, and I’m working on building a NAS out of a cubox-i + external USB RAID enclosure. I must first ensure the cubox-i can run OpenBSD properly though. I’d use a remote server of mine for external backup copies, and I’d like to find some people (once my setup is ready) to build some sort of community backup (storing copies of other people’s backups so they will store yours). All I need now is to force myself into it :)
This is a good article. I like the concept of containers, and manipulating them like you do is pretty refreshing to see! The bit about entering a container using a second one was particularly cool! You managed to demystify containers in a short post, so kudos for that! I wrote something similar a couple of years back, which you might appreciate (it’s rather long, I must admit), as it shows another way to create an “app container” from scratch.
The concerns about input on international clients, such as those with diacritic marks, are very real. I vividly remember my first time using a *NIX overseas and fighting to get many special characters to work.
Of course, one could argue that few users directly input URLs any more, but I would still avoid almost any even remotely special character. Over the decades and dozens of languages through which I have survived, I have seen countless URL [en|de]code implementations that choked on even common punctuation. If it isn’t a slash, it isn’t worth it. (only partly being sarcastic)
I vividly remember my first time using a *NIX overseas and fighting to get many special characters to work.
Oh boi, using a *NIX machine without being able to type ~ must be a pain…
As far as I’m concerned, the tilde has been part of unix for longer than my whole life, making it a somewhat important character. It can be found on roughly any keyboard (be it easy to reach or not) as well. Multiuser server directories couldn’t have found a better symbol to represent the “home” of a user than the actual symbol representing “home” for the OS of roughly 80% of the web. I guess the point is not to blame that choice on URLs, but rather on the OSes themselves. Banning a character because it’s hard to type on a keyboard isn’t good imo. But perhaps I’m mistaken and we should consider replacing the “:” with something simpler to express protocol strings ;)
using vim . . . IIRC, even the esc key was mapped oddly. One of the very early reasons I got into micro PCs / early tablets was because Internet cafes abroad were hopeless for shelling home.
I didn’t mention the ‘:’ character because of vi. This article is about characters used in URLs, and you use ‘:’ quite a lot (think protocol/port) in them.
I remember my first time using unix abroad, too. QWERTZ ick. Nowadays it’s simpler. I don’t need strange keyboards, I have my own laptop along and all I need is a WLAN password.
The example given by the author sounds hacky more than anything, as a way to “discover” source files.
I’ve been using makefiles of this kind for years now, and it works remarkably well. I do that so they’re easy to package, regardless of the distro. I must admit my projects are fairly simple, but I never missed any “feature” with this makefile. I’m probably missing something here though so feel free to tell me!
I use a very similar makefile for my own projects. It doesn’t support header dependencies, though, and separate (out‐of‐tree) builds would be nice. I’ve been considering adding a (non‐autotools) configure script to the mix for that reason.
The example I gave doesn’t have header files, but you can easily add them to the mix (example).
For out-of-tree builds, I don’t get the point. Is it only to keep the source tree clean?
I’ve seen some handmade configure scripts in the wild as well, pretty short ones (some only included the line “echo do not use autotools!”).
In my case, they would generate the config.mk file, which includes all the customizable bits. IMO, customization should be done at the environment level, with make -e. That is the reason why I like mk a lot, which does that by default (but that’s another topic!)
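To illustrate what I mean by environment-level customization, here is a minimal sketch (the file name and PREFIX variable are illustrative; .RECIPEPREFIX is used only to avoid literal tabs in the heredoc):

```shell
# The makefile sets a default; "make -e" lets the environment override it.
cat > /tmp/demo.mk <<'EOF'
.RECIPEPREFIX = >
PREFIX = /usr/local
all:
>@echo "installing to $(PREFIX)"
EOF

make -f /tmp/demo.mk                  # makefile default wins: /usr/local
PREFIX=/opt make -e -f /tmp/demo.mk   # with -e, the environment wins: /opt
```

Without -e, GNU make lets makefile assignments shadow the environment, which is why the first invocation ignores any exported PREFIX.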
Oh boy, Devuan. The gift that keeps on giving. The importance of it is much better summarised here.
There are many things one can criticize systemd for, but if you assert that a whole bunch of fragile shell scripts were any better, less error-prone, or easier to debug, you are so very wrong. There are so many reasons to bash systemd, but saying it is more fragile and harder to debug than the shell scripts is not one of them. If that’s the central part of your rant against systemd, I can’t take it seriously, I’m sorry.
If you think systemd disabling a service that restarts way too often is bad, you clearly haven’t seen runaway processes that keep crashing, with the constant restarts thrashing the system. I have, and I’m glad I have guards that make this easier to avoid.
I definitely prefer writing unit files to good ol’ sysv init scripts. OpenRC init scripts are slightly better, but still too complex. IMO, systemd took another approach (a declarative one) to writing init scripts. To me, it just adds a new bad solution to the mess init scripts are.
My issue with systemd is the lock-in it brings, be it on the tools you have to use, the dependencies of your system, or even the actions you can take.
Yes, it’s good to have a service disable itself if it fails to start too often (for example, when httpd fails to bind to a nonexistent IP). But there is nothing more frustrating than hopelessly trying to start it manually once you’ve fixed the issue.
Systemd aims to be the smartest init system out there, and instead of accommodating itself to different workflows (all of them valid!), it enforces its “standard” way.
For example, I’ve been experimenting with LXC recently. This tech has been around for quite some time now, and even shares the same roots (cgroups) as systemd.
After some testing, and leaving the containers running for a few days, I discovered that the CPU, memory and diskio cgroups for my containers had simply been removed, meaning my containers were now running without any limitation in terms of resources! I found out later that systemd “cleans” cgroups that were not created by itself, unless you start your tool from a service file with the “Delegate=yes” attribute (see ControlGroupInterface).
So the only way to avoid systemd is to use it. Awesome.
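For reference, the workaround looks something like this (a sketch; the unit and binary names are illustrative). With Delegate=yes, systemd treats the unit’s cgroup subtree as off-limits and stops “cleaning” it:

```ini
# /etc/systemd/system/lxc-autostart.service (illustrative)
[Unit]
Description=Start LXC containers with delegated cgroups

[Service]
Type=oneshot
RemainAfterExit=yes
Delegate=yes
ExecStart=/usr/bin/lxc-autostart

[Install]
WantedBy=multi-user.target
```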
There are many other issues with its model, and I personally think the number of problems it brings outweighs its benefits.
I find the “doord” analogy incorrect. It makes systemd look like it was based on a lousy idea from the start. Opening doors faster in a car is not as important as booting an OS. While I’m not a systemd fan, I find the comparison unfair, which weakens the argument against it. Systemd was based on an important fact: existing init systems were a mess to manage. Sadly, the implementation grew into something that’s even more complex and huge.
I was expecting more focus on what Devuan is doing for the open-source community, like supporting software that does not depend on an init system, or encouraging simple ideas instead of overengineered ones (looking at you, systemd-hostnamed…).
Instead, this article reads just like any other rant against systemd, with the same arguments everyone brings up, which all fall in the “bugs” category.
After all, systemd brings some kind of «stability», as its interface is consistent (even though it has bugs). For many people, the new shiny features of systemd are definitely not worth its complexity, and it is for these people that the work of the Devuan guys is important. By keeping the alternative to systemd alive, they keep the spirit of linux, which aims to keep every piece of software running on top of the kernel swappable, instead of relying on a rigid and complex API.
There’s a really good comment by someone who maintained Arch Linux’s init scripts pre-systemd about why they switched over. I’m as anti-systemd as the next person, but it’s important to understand why it became so successful.
Having a standard init system is incredibly valuable for package maintenance and having full process control does require having code in init to track children, grandchildren and even detached child processes. You can’t do that without being the init process.
All that being said, systemd is terrible from a usability standpoint. I honestly haven’t seen all the random/crashing bugs people complain about, but I do think systemctl is a terrible command; the bash completion is terribly slow; you can’t just edit a unit file, you have to reload the daemon process for those changes to take effect; you have to call status after a command to see the limited log output; binary logs; etc. etc. etc.
There have been so many attempts to take the one good thing (standardized init scripts) and make drop-in replacements (uselessd and others), and they all hit some pretty hard limits and are eventually abandoned. It’s sad that systemd is so integrated that replacements aren’t even remotely trivial.
Without systemd, you need one of the udev forks, consolekit and a few other things to make things work. Void Linux, Gentoo and Devuan are pretty critical in keeping this type of architecture viable. Maybe one day someone will come up with an awesome supervisor replacement and get other distributions on board to have a real alternative.
Having a standard init system is incredibly valuable for package maintenance
The problem here is that Systemd can never be a standard init system, because it’s Linux only.
Maybe one day someone will come up with an awesome supervisor replacement and get other distributions on-board to have a real alternative.
I’m working on it :) https://github.com/davmac314/dinit
This has been my pet project for some time, although I’m long overdue to write a blog post update on progress. (Not a lot of commits recently, I know; that’s because Dinit uses an event loop library, Dasynq, which I’ve been focusing on instead. That should be able to change now, as I’ve just released Dasynq 1.0.)
it was impossible to say when a certain piece of hardware would be available […] this was solved by first triggering uevents, then waiting for udev to “settle” […] Solution: An system that can perform actions based on events - this is one of the major features of systemd
udev is not a system that can perform actions based on events, like devd does on FreeBSD? What is it then?
we have daemons with far more complex dependencies
The question is… WHY?
Sounds like self-inflicted unnecessary complexity. I believe that services can and should start independently.
I run several webapps + postgres + nginx + dovecot + opensmtpd + rspamd + syncthing on my server… and they’re all started by runit at the same time, because none of them expect anything to be running before them. nginx doesn’t care if the webapps are up, it connects dynamically. webapps don’t care if postgres is up, they will retry connection as needed. etc. etc.
Why can’t Linux desktop developers design their programs in the same way?
Gopher being only a protocol, nothing prevents people from serving 10MB HTML/CSS/JS pages over gopher.
Imagine a world where gopher becomes cool again: you’ll soon see browsers scan (i) lines for tags specifying the styling of each directory listing; then these tags will point to other files served over gopher, forcing clients to perform multiple requests to render a single page. This will soon be too limiting, and gopher servers will provide the ability to generate content dynamically, à la CGI. Clever people will find a way to retrieve data from the user (it already exists, after all: veronica has you enter text in a box!), and use this to geolocate you, and to present you with hot girls in your area using the simplest, lowest-overhead protocol on the web :)
Gopher is pretty cool, don’t get me wrong! It won’t prevent the web from being the clusterfuck it is already though.
Perhaps we (tech people) can all agree on using gopher to build an alternate web that is cleaner, but we have to keep it secret, and boring to avoid turning it into the web 3.0
This looks pretty cool! How does it work under medium to heavy load? I’d like to set it up as an online service for a small community of ~100 users.
Thanks for sharing!
PS: I feel bad that every good post from @tedu just ends up in a discussion of his SSL cert choice..
I don’t know if I’d want to set this up for unsuspecting users. I can be a little heavy handed with my aesthetic choices. :) Generally though it’s pretty fast. I have nothing like benchmarks, though. Over the past week, I couldn’t tell when I was using the proxy or not except when visiting particular sites.
I’d look into proxy auto config, too, where you write a little javascript file that tells the browser when to use the proxy based on hostname.
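A PAC file is just a JavaScript function the browser calls for every request; a minimal sketch, where the host names and the proxy address are made up:

```javascript
// Proxy auto-config sketch: route only selected hosts through the
// stripping proxy, let everything else connect directly.
// All host names and the proxy address here are hypothetical.
function FindProxyForURL(url, host) {
  var stripped = ["news.example.org", "blog.example.com"];
  for (var i = 0; i < stripped.length; i++) {
    if (host === stripped[i] || host.endsWith("." + stripped[i])) {
      return "PROXY strip.your.domain:8090";
    }
  }
  return "DIRECT";
}
```

PAC engines also expose helpers such as `dnsDomainIs(host, ".example.org")`, which is the more traditional way to write the suffix check.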
I just tested it locally. It works rather well indeed! Navbars get in the way a lot though.. This is definitely not something you want to force onto your users. But it could be a service, eg, “use proxy strip.your.domain:8090 for cleaner content”, and then users are free to use it or not.
The code’s pretty easy to modify (eg, add input boxes, new domains/tld, …), so really, thanks for this!
It’s not much, but you’ll agree that “Can this software support ridiculous load?” would be a stupid question
I’m reasonably confident he was able to determine in advance that discussion of the resultant tedium would be an unavoidable result of this particular performance art.
In the end, you can lose 30% of your day searching for new plugins, new themes, new aliases to “improve” your productivity. I used to do that, until I realized that I was spending more time getting more efficient than doing any actual work.
Am on my phone (the way most people browse the internet nowadays, I heard), and turning on airplane mode didn’t change anything. Reloading brings me to the usual “check your device’s data or Wi-Fi connection” message. I don’t have any JS console here either. I decided to view it from my main computer but… no airplane mode there, so I turned the network interface down. Does this qualify as “airplane mode off”?
Because I still have no clue what this page is all about!
I’ve converted my mail flow to use mblaze. Along with fsf and a few helper scripts, it’s perfect for my use. Used in conjunction with offlineimap + msmtp, and $EDITOR.
Never looking back.
I think mblaze is this and I think fsf would be some sort of fuzzyfinder?
Maybe fzf? I could see that being a nice workflow.
Correct. I just hooked it up to my script and forgot about it. It does nicely with the --preview command. There’s probably more I could do with that, but for now it solves pretty much all of my mail-consumption-related issues.
Aside from a regular offlineimap setup and an also-regular msmtp setup, I have two scripts to help me with mblaze: one called “mymail” and one called “mshowwithawk” (which is a mouthful, but I never invoke it by hand so I don’t care).
mshowwithawk is:
#!/bin/bash
mshow "$(echo "$1" | awk '{print $2}')"
and mymail is:
#!/bin/bash
mlist -s ~/Mail/$1/$2 | msort -d | mseq -S | mscan 2> /dev/null | fzf --preview="mshowwithawk {}"
Usage is: mymail <any ~/Mail/ subdir that contains maildirs> <maildir>. The reason for this script is I have two accounts, and I often switch between my work and personal email, so I often call it like “mymail otremblay INBOX” or “mymail work INBOX”. Next improvements are gonna be defaulting to INBOX and allowing for the -s flag to be passed or not from the mymail script (because sometimes I need to see old mail too).
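Those improvements could look something like this sketch (untested against a real maildir; it keeps the same pipeline and assumes the -s semantics of the original script):

```shell
#!/bin/bash
# Hypothetical next version of mymail: default the maildir to INBOX, and
# let an optional third argument replace the default -s flag.
mymail() {
  account=${1:?usage: mymail <account> [maildir] [mlist-flags]}
  maildir=${2:-INBOX}
  listflags=${3--s}   # pass '' as the third argument to drop the -s filter
  mlist $listflags ~/Mail/"$account"/"$maildir" |
    msort -d | mseq -S | mscan 2>/dev/null |
    fzf --preview="mshowwithawk {}"
}
```

So `mymail work` would mean `mymail work INBOX`, and `mymail work INBOX ''` would list old mail too.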
The output is a list of selectable items, with a preview of the currently selected item on the right. Yes, right in the terminal. The list is populated by mail prefixed with an ID I can then use with mshow if I need better output (say, in case the email provided a worthless (but still present) text/plain part). I use elinks to dump out text/html messages (configurable in mblaze).
I use mblaze’s “minc” to pass messages from the maildir’s new to cur, and mflag -S to flag everything as read once I’m done.
I like the workflow because it is just a construction of a collection of small specialized programs working together. I mean, if needed, I can still just invoke mlist by itself and grep through email headers, if I so desire. Or pump the whole output elsewhere to any other unix-standard utility if I want to. Heck, it would be trivial to include spamassassin header parsing, or any other kind of header parsing. I’m also a sucker for CLI interfaces, mostly on account of it being the easiest way I know to compose software with one another out of small blocks. I feel like I should probably start a blog about my crap, but I’m afraid that said crap would be too trivial for people to enjoy.
mblaze is indeed pretty nice (similar to mu). I use it to automate some tasks in my email workflow (archiving, marking as done, digging up the full thread when a mail arrives, …) and it helps me a ton. But when it comes to actually reading and replying to mail, it doesn’t cut it, so I use mutt for that.
I’ve read many people say that dvorak was fine for the vim movement keys.
And as for the keycaps, I’m not sure I see the problem, why not just use a blank keyboard and switch at will?
Although I am in theory capable of typing without looking at the keys, in practice I do a lot of key stabbing, and a lot of one-handed typing too. I’ve practiced some in the dark, and it’s no fun. Definitely not interested in a blank keyboard.
Anyway, same experience as the author. Learned dvorak because there were people who didn’t know dvorak, used it for a while, then found I had trouble using a qwerty keyboard. Now I just use qwerty full time, but go back and practice dvorak for a week or so at a time to maintain the skill in case I ever have a compelling reason to switch.
I like dvorak for English, but find it substantially more annoying for code. And it’s a disaster for passwords. I usually set up hotkeys so I can quickly change on the fly depending on what and how much I’m typing.
For sure, think about where it’s now positioned. Typing …) {… is so easy when ) and { are side by side. And for code that doesn’t use egyptian braces, )<enter>{ is easier for me too. When I hit enter with my pinky, and follow up with { with my middle finger, that’s natural. But trying to squeeze my middle finger into the QWERTY location for { while my pinky is still on enter totally sucks.
Meanwhile -_=+ are all typed in line with other words (i.e. variable names). And - and _ are frequently part of filenames and variables, so it’s great that they’re closest to the letter keys.
I like dvorak for English, but find it substantially more annoying for code.
Exactly! If I were a novelist I would probably just continue using Dvorak.
in practice I do a lot of key stabbing as well
I recently bought a laptop with a Swiss(?) keyboard layout. (It really is a monstrosity with up to five characters on one key). I thought I wouldn’t need to look at the keys at all and could just use my preferred keymap, but I’ve been caught out a few times. I’m just about used to it now, though.
When I am typing commands into a production machine I feel like it is only responsible of me to use a properly labelled keyboard.
This is really important when you’re on your last ssh password/smartcard PIN attempt, because you can go slow and look at what you’re doing.
I got a blank keyboard, and I must admit that I still look at it from time to time, like for numbers, or b/v, u/i… I only do so when I start thinking “OMG this is a password, don’t get it wrong!”
Having a blank keyboard doesn’t stop you from looking at your hands. It only disappoints you when you do.
As a happy Dvorak user I’d have to say there are better fixes to that problem. Copy it from your password manager? (You use one, right?) Type it into somewhere else, and cut and paste? Or use the keyboard viewer? (Ok that one is macOS specific, perhaps.)
Specifically re: “typing commands into prod machines”, I don’t buy the argument. Commands generally don’t take effect until you hit Enter, and until then you’ve got all the time you need to review what you’ve typed. Some programs do prompt for yes/no without waiting for Enter, but the y and n keys don’t share a common location between Dvorak and Qwerty anyway, so I don’t really see that as an issue either.
Yes, the “production machines” argument is a strange one. I’d imagine it would only be an issue on a Windows system (if you’re logging in via ssh it’s immaterial) and then it would be fairly obvious quite quickly that the keyboard map is wrong. And if the keyboard map is wrong in the Dvorak vs QWERTY sense you’d quickly realise you’re typing gibberish. Or so I’d think?
Ignoring the whole issue of “you shouldn’t be logging in to a production machine to make changes”…
In this case, I find the homing keys, reorient myself, and type whatever I need to type. (Or just use a password manager & paste). Haven’t mistyped a password in years, and I’m using Dvorak with blanks.
Homing keys are there for a reason.
Labels are only necessary when you don’t touch type. If you do, they serve no useful purpose.
I’ve read many people say that dvorak was fine for the vim movement keys.
Dvorak is fine for Vim movement keys, but not nearly as nice as Qwerty.
And as for the keycaps, I’m not sure I see the problem, why not just use a blank keyboard and switch at will?
The problem is, when I’m entering a password or bash command sometimes I want to slow down and actually look at the keyboard while I’m typing. In sensitive production settings raw speed isn’t nearly as valuable as accuracy. A blank keyboard would not solve this problem :)
Dvorak is fine for Vim movement keys, but not nearly as nice as Qwerty.
They actually work better with Dvorak for me, because the grouping feels more logical than on qwerty to me.
Likewise: vertical and horizontal movement keys separated onto different hands rather than all on the one (and interspersed) works much better for me.
I hate vim movement in QWERTY. I think it’s because I’m left handed, and Dvorak puts up/down on my left pointer and middle finger. For me, it’s really hard to manipulate all four directions with my right hand quickly.
Would it make sense to use AOEU for motion then (or HTNS for right handed people)? I guess doing so may open a whole can of remapping worms though?
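For what it’s worth, the naive remap would clobber four defaults right away (a = append, o = open line, e = end of word, u = undo), which is where the worms start:

```vim
" Hypothetical AOEU motion mapping for Dvorak - each line shadows a
" built-in command (append, open-line, end-of-word, undo).
noremap a h
noremap o j
noremap e k
noremap u l
```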
That won’t help with apps that don’t support remapping but which support vi-style motion though (as they’ll expect you to hit HJKL)…
That is a huge amount of work! Good job putting this up! ed25519 is truly nice and modern; it’s heartwarming to see so many pieces of software using it!
Also I am totally stunned that my own tool (sick) is listed! I thought I was the only one using it!!