1. 10

    Another thing to note is that 1.16 brings support for OpenBSD/mips64! jsing@ has been going to town!

    1. 4

      Solene’s percent - Solene is an OpenBSD developer who dabbles in NixOS and often writes about her experiences in both!

      1. 12

        It’s the default on macOS:

        qbit@plq[0]:~% openssl version
        LibreSSL 2.8.3
        qbit@plq[0]:~% 
        
        1. 8

          I’ve been a happy customer of Feedbin since 2013. I use their web UI on desktop and the Reeder app (iOS) on my phone. Highly recommend both. Feedbin in particular has a lot of nice touches like being able to subscribe to Twitter accounts and email newsletters as well as RSS feeds, an API, custom sharing targets, a Feedbin notifier app, and it’s open-source.

          1. 3
            • Postgres 10
            • Redis > 2.8
            • Memcached
            • Elasticsearch 2.4

            That’s a crazy set of deps, especially given PostgreSQL can do key-value storage, pub/sub, and full-text search with insanely fast trigram search. Even if you wanted to keep a dedicated key-value store, Redis and memcached have huge overlap.

            1. 3

              It’s a pretty standard Rails stack for sites that get a decent amount of traffic/poll a lot of feeds, which I imagine Feedbin does.

            2. 2

              Likewise. Not sure when I first signed up, but it’s a bill I’m more than happy to pay each month.

            1. 1

              It really is the future!

              1. 8

                I had the same knee-jerk reaction :D - at the time I was on a “porting” roll, having just converted the git-prompt stuff to OpenBSD’s ksh.

                After further reflection, it became obvious that converting the build system (wrapper?) would potentially introduce more issues than it solves. Sorta an “if it ain’t broke” situation…

                If you are looking specifically for Go things to help with, this label has a lot of stuff that one can take a crack at!

                If you are looking for OpenBSD+Go things - There is a grip of that too! I have documented a few things here. IMO enabling PIE mode on OpenBSD would be a decent start - it gets ya into various bits in the Go runtime - and eventually into some OpenBSD areas (that I haven’t been able to track down the breakage on :D).
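
                If you want to poke at the PIE bit, a starting point is just asking the toolchain for it and seeing where it falls over (the package path here is only a placeholder):

                $ go build -buildmode=pie ./cmd/whatever
                # on OpenBSD this is roughly where the missing runtime/linker pieces show up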

                I also know that jsing@ is looking for some help switching things from using syscalls to using libc. That change would let OpenBSD remove the Go specific loosening in the kernel!

                1. 2

                  I had the same knee-jerk reaction :D

                  Well, the “knee-jerk” reaction is to the person who started that thread for not coming up with further details. I found the reaction of ianlancetaylor to my particular comment very helpful; at least it gives me the idea that if someone wants to step up and make this happen, there is a fair chance it will be included, with the caveat of how to prevent backsliding into bashisms, hence the discussion I started here on Lobste.rs.

                  After further reflection, it became obvious that converting the build system (wrapper?) would potentially introduce more issues than it solves. Sorta a “if it ain’t broke” situation..

                  Thanks for sharing that :) I’m a bit afraid / hesitant about that as well, as most people are, I guess.

                  Thanks for the other pointers as well! The whole reason I was building the runtime myself is that while pledging an SPF filter I found that only LookupHost and LookupAddr can be handled by libc (and call get{addr,name}info), but other lookups, e.g. LookupTXT, always go through native Go, hence I had to pledge “inet” instead of only “dns”. So another thing I’m thinking of is making sure that more of the name resolution is handled via libc using res_init(3), so that code that only needs DNS from the network only needs a “dns” pledge instead of the full “inet”.

                1. 2

                  As an OpenBSD observer but not-yet convert, the thing that I find most off-putting about the setup on a laptop is editing byzantine config files to connect to wifi like I’m on early-2000s Linux. Is there a “pull-down menu, discover visible networks, choose, enter key” GUI to make that more convenient?

                  1. 7
                    join WiFiHome wpakey secretSupersecret
                    join WiFiWork wpakey lesssecret
                    dhcp
                    

                    Seems pretty simple to me :P

                    It’s also all done via ifconfig. One single command to manage network interfaces.
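
                    For example, the same join directives work interactively too (the interface name iwm0 is just an example; yours may differ):

                    # ifconfig iwm0 scan                                  # list visible networks
                    # ifconfig iwm0 join WiFiHome wpakey secretSupersecret
                    # dhclient iwm0                                       # grab a lease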

                    On Linux there is (was?): ip, iw, iwconfig, ifconfig, iwctl, iwd… probably others I can’t remember.

                    That complexity didn’t vanish, it’s just been hidden by NetworkManager.

                    1. 3

                      Having done this on macOS, Linux, and OpenBSD, I like OpenBSD’s setup the best for anything network related. It is well documented, and consistently works the way it should.

                      I would greatly prefer to use OpenBSD’s wifi setup to the mess that is NetworkManager/netplan/etc. Since I switched to Ubuntu 20.04, I’ve had no end of trouble with getting networking to work reliably, where it all just worked on OpenBSD on the same hardware. Sadly I need Ubuntu to run certain proprietary packages, so I’m stuck with it for the time being.

                      I think this is a really enjoyable aspect of OpenBSD – there is no “secret sauce”. Usually the config files you are editing fully define the behavior of whatever they configure; there isn’t some magical daemon snarfing things up and changing the system state behind the scenes (looking at you, NetworkManager, netplan, systemd-resolved, etc.).

                      That said, because OpenBSD’s tools tend to be well documented, simple, and consistent, they tend to be easy to wrap. I did this for mixerctl.

                    1. 3

                      It would be interesting to see a similar test but with pg_trgm included in the postgres test.

                      1. 1

                        What does that do?

                        1. 2

                          It creates a trigram index, which helps with searches for fixed strings and some regular expressions.
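
                          Roughly like this (the database, table, and column names are made up):

                          $ psql feeds -c "CREATE EXTENSION IF NOT EXISTS pg_trgm"
                          $ psql feeds -c "CREATE INDEX entries_title_trgm ON entries USING gin (title gin_trgm_ops)"
                          $ psql feeds -c "SELECT title FROM entries WHERE title ILIKE '%openbsd%'"   # now uses the trigram index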

                      1. 3

                          There is The-Open-Book project, which might result in a decent alternative!

                        1. 1

                          This series is super neat! Thanks for sharing!

                          1. 19

                            I’m probably not the only one with the opinion that rewrites in Rust may generally be a good idea, but Rust’s compile times are unacceptable. I know there are efforts to improve that, but Rust’s compile times are so abysmally slow that it really affects me as a Gentoo user. Another point is that Rust is not standardized and is a one-implementation language, which is also what discourages me from looking deeper into Haskell and others. I’m not saying that I generally reject single-implementation languages, as this would disregard any new languages, but a language implementation should be possible without too much work (say within two man-months). Neither Haskell nor Rust satisfies this condition, and contraptions like Cargo make it even worse, because implementing Rust would also mean more or less implementing the entire Cargo ecosystem.

                            Contrary to that, C compiles really fast, is an industry standard, and has dozens of implementations. Another thing we should note is that the original C codebase is a mature one. While Rust’s great ownership and type system may save you from general memory-handling and type errors, it won’t save you from intrinsic logic errors. However, I don’t weigh that point that much, because this is an argument that could be made against any new codebase.

                            What really matters to me is the increase in the diversity of git-implementations, which is a really good thing.

                            1. 22

                              but a language implementation should be possible without too much work (say within two man-months)

                              Why is that a requirement? I don’t understand your position: should we not have complex, interesting, or experimental languages only because a person couldn’t write an implementation by themselves in two months? Should we discard all the advances Rust and Haskell provide because they require a complex compiler?

                              1. 5

                                I’m not saying that we should discard those advances, because there is no mutual exclusion. I’m pretty certain one could work up a pure functional programming language based on linear type theory that provides the same benefits and is possible to implement in a reasonable amount of time.

                                A good comparison is the web: 10-15 years ago, it was possible for a person to implement a basic web browser in a reasonable amount of time. Nowadays, it is impossible to follow all new web standards and you need an army of developers to keep up, which is why more and more groups give up on this endeavour (look at Opera and Microsoft as the most recent examples). We are now in a state where almost 90% of browsers are based on Webkit, which turns the web into a one-implementation-domain. I’m glad Mozilla is holding up there, but who knows for how long?

                                The thing is the following: If you make the choice of a language as a developer, you “invest” into the ecosystem and if the ecosystem for some reason breaks apart/dies/changes into a direction you don’t agree with, you are forced to put additional work into it.

                                This additional work can be a lot if you’re talking about proprietary ecosystems, meaning more or less that you are forced to rewrite your programs. Rust satisfies the necessary condition of a qualified ecosystem, because it’s open source, but open-source systems can also shut you out when the ABI/API isn’t stable, and this danger is especially present with the “loose” crate system, which may provide high flexibility but also means a lot of technical debt when you have to continually push your code to the newest specs to be able to use your dependencies. However, this is again a question of the ecosystem, and I’d prefer to only refer to the Rust compiler here.

                                Anyway, I think the Rust community needs to address this and work up a standard for the Rust language. For my part, I won’t be investing my time into this ecosystem until this is addressed in some way. Anything else is just building a castle on sand.

                                1. 5

                                  A good comparison is the web: 10-15 years ago, it was possible for a person to implement a basic web browser in a reasonable amount of time. Nowadays, it is impossible to follow all new web standards and you need an army of developers to keep up, which is why more and more groups give up on this endeavour (look at Opera and Microsoft as the most recent examples). We are now in a state where almost 90% of browsers are based on Webkit, which turns the web into a one-implementation-domain. I’m glad Mozilla is holding up there, but who knows for how long?

                                  There is a good argument by Drew DeVault that it is impossible to reimplement a web browser for the modern web.

                                  1. 4

                                    I know Blink was forked from WebKit, but all these years later don’t you think it’s a little reductive to treat them as the same? If I’m not mistaken, Blink sends nothing upstream to WebKit, and by now the codebases are fairly divergent.

                                2. 8

                                  I feel ya - on OpenBSD compile times are orders of magnitude slower than on Linux! For example ncspot takes ~2 minutes to build on Linux and 37 minutes on OpenBSD (with most features disabled)!!

                                  1. 5

                                    37 minutes on OpenBSD

                                    For reals? This is terrifying.

                                    1. 1

                                      Excuse my ignorance – mind pointing me to some kind of article/document explaining why this is the case?

                                      1. 7

                                        There isn’t one. People (semarie@ - who maintains the rust port on OpenBSD being one) have looked into it with things like the RUSTC_BOOTSTRAP=1 and RUSTFLAGS='-Ztime-passes -Ztime-llvm-passes' env vars. These point to most of the time being spent in LLVM. But no one has tracked down the issue fully AFAIK.
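
                                        For anyone who wants to reproduce that kind of measurement, the incantation looks roughly like:

                                        $ env RUSTC_BOOTSTRAP=1 \
                                              RUSTFLAGS='-Ztime-passes -Ztime-llvm-passes' \
                                              cargo build --release 2>&1 | tee rustc-timings.log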

                                    2. 6

                                      Another point is that Rust is not standardized and a one-implementation-language

                                      This is something that gives me pause when considering Rust. If the core Rust team does something that makes it impossible for me to continue using Rust (e.g. changes licenses to something incompatible with what I’m using it for), I don’t have anywhere to go and at best am stuck on an older version.

                                      One of the solutions to the above problem is a fork, but without a standard, the fork and the original can vary and no one is “right” and I lose the ability to write code portable between the two versions.

                                      Obviously, this isn’t a problem unique to Rust - most languages aren’t standardized and having a plethora of implementations can cause its own problems too - but the fact that there are large parts of Rust that are undefined and unstandardized (the ABI, the aliasing rules, etc) gives me pause from using it in mission-critical stuff.

                                      (I’m still learning Rust and I’m planning on using it for my next big thing if I get good enough at it in time, though given the time constraints it’s looking like I’ll be using C because my Rust won’t be good enough yet.)

                                      1. 2

                                        The fact that the trademark is still owned by the Mozilla foundation and not the to-be-created Rust Foundation is also likely chilling any attempts at independent reimplementation.

                                      2. 1

                                         As much as I understand your point about the slowness of compile times in Rust, I think it is a matter of time before they shrink.

                                         On the standards point, Haskell has a standard: Haskell 2010. GHC is the only implementation now, but it has a lot of compiler extensions that are not in the standard. The new Haskell 2020 standard is on its way. Implementing standard Haskell (without all the GHC add-ons) is doable, but the language will be way simpler and will have flaws.

                                        1. 2

                                           The thing is, as you said: you can’t compile a lot of code by implementing Haskell 2010 (or 2020, for that matter) if you don’t also ship the “proprietary” extensions.

                                          1. 1

                                             It is the same when you abuse GCC or Clang extensions in your codebase. The main difference with Haskell is that you almost only have GHC available, and the community has put its effort into it and created an ecosystem of extensions.

                                             As for C, you could write standard-compliant code that a hypothetical other compiler may compile. I am pretty sure that if we had had only one main compiler for C for as long as Haskell has had GHC, the situation would have been similar: lots of language extensions outside the standard, existing solely in that compiler.

                                            1. 3

                                               But this is exactly the case: there’s lots and lots of code out there that uses GNU extensions (from gcc). For a very long time, gcc was the only real compiler around, and it led to this problem. Some extensions are so persistent that clang had no choice but to implement them.

                                              1. 1

                                                 But did those extensions ever reach the standard? I ask candidly, as I do not know much about the evolution of C, its compilers, and the standard.

                                                1. 4

                                                   There’s a list by GNU that lists the extensions. I really hate that you can’t enable a warning flag (something like -Wextensions) that warns you about using GNU extensions.

                                                   Still, it is not as bad as bashisms (i.e. extensions in GNU bash over POSIX sh), because many scripts declare a /bin/sh shebang at the top but are full of bashisms, simply because they happen to have bash as the default shell. Most bashisms are just stupid, many people don’t know they are using them, and there’s no flag to warn about them. Another bad offender is the GNU extensions to the POSIX core utilities, especially GNU make, where 99% of all makefiles are actually GNU-only and don’t work with POSIX make.
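
                                                   A tiny sketch of the sort of script that bites people:

                                                   #!/bin/sh
                                                   # "[[" is a bashism: fine where /bin/sh happens to be bash,
                                                   # a syntax error under a strict POSIX sh (e.g. dash)
                                                   if [[ -n "$1" ]]; then
                                                           echo "got an argument"
                                                   fi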

                                                   In general, this is one major reason I dislike GNU: they see themselves as the one and only choice for software (demanding that people call Linux “GNU/Linux”) while introducing tons of extensions to chain their users to their ecosystem.

                                                  1. 2

                                                    Here are some of the GNU C extensions that ended up in the C standard.

                                                    • // comments
                                                    • inline functions
                                                    • Variable length arrays
                                                    • Hex floats
                                                    • Variadic macros
                                                    • alignof
                                                2. 1

                                                   If I remember correctly, 10 years ago Hugs was still working, and maybe even nhc :)

                                                  1. 1

                                                     Yep :) and yhc never landed after forking nhc. UHC and JHC seem dead. My main point is that the existence of a standard does not guarantee a multiplication of implementations or cross-compatibility between compilers/interpreters/JITs/etc. It is a simplification, and it really depends on the community around those languages. Look at Common Lisp, with a set-in-stone standard and a lot of compilers, where you can easily pinpoint what is going to work or not. Or Scheme, with a fairly simple standard, where you will quickly run out of the ability to swap between interpreters if you rely on some specific features.

                                                     After that, everyone has their own checklist of what a programming language must or must not provide for them to learn and use it.

                                          1. 4

                                            The problem with this scenario is that the user still has to trust the vendor to do the verification.

                                            No they don’t, end users can independently verify the binaries. Take OpenBSD ports and Go programs for example.

                                             More often than not, upstream vendors (gopass, restic, etc.) provide binaries. These binaries can be checked by end users against the version shipped in an OpenBSD package. (Currently OpenBSD makes no reproducible-binary guarantees, but it’s entirely possible now that we have Go module support in the ports tree.) They can even be checked without installing the package.
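
                                             If/when the builds are reproducible, the end-user check is as boring as comparing digests (the file names here are made up):

                                             $ sha256 /usr/local/bin/gopass ~/Downloads/gopass-openbsd-amd64
                                             # identical digests would mean the package and the upstream binary
                                             # came out of the same source and toolchain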

                                            1. 7

                                               “future-proof” would be more accurate if it were buildable on systems that don’t have nix / docker (OpenBSD, NetBSD, etc.). That said - it looks really nice!

                                              1. 20

                                                Original designer of the Atreus here; ask me anything.

                                                1. 2

                                                  Do you know if it’s possible to configure the Keyboardio Atreus firmware on OpenBSD? And if it’s possible to set up layers similar to the Planck keyboard?

                                                  1. 5

                                                    And if it’s possible to set up layers similar to the Planck keyboard?

                                                    Yep, definitely. You can create as many layers as you like, up to 64 or so I think? Layers can be momentary (only active while a key is held) or modal, where they stay until another key is pressed to deactivate it.

                                                    Do you know if it’s possible to configure the Keyboardio Atreus firmware on OpenBSD?

                                                    I’m not sure whether Chrysalis (the GUI frontend for the Keyboardio firmware config) will work on OpenBSD; it unfortunately depends on Electron which isn’t that portable. However, if you can run the Arduino toolchain on OpenBSD (I think this is pretty portable? but I haven’t looked into it) then you should be able to build the firmware from source, making your layout changes in your text editor of choice: https://github.com/keyboardio/Kaleidoscope/blob/master/examples/Devices/Keyboardio/Atreus/Atreus.ino#L61 (This is how I build it; I like to be able to keep my layouts in source control.)

                                                     If that doesn’t work you can configure it with QMK (a different yet compatible firmware codebase), which only depends on GCC and avrdude: https://qmk.fm/

                                                    1. 2

                                                       So, after a bit more digging - the out-of-the-box Kaleidoscope stuff is a bit tricky to build (I haven’t done so successfully yet). Here is what I found:

                                                       • Kaleidoscope needs arduino-builder, which is a Go project. Normally this is fine, but a few libs arduino-builder uses are at versions that don’t support OpenBSD, or flat out don’t support it at all (I addressed one of them here, but likely there are others).
                                                       • There are a few packages in arduino-builder that try to grab OS-specific things. Those things don’t exist for OpenBSD, so that bit needs more investigating.
                                                    2. 3

                                                         Sorry, just realized you said “configure”! I’ll look into it :D (I have an Atreus on the way at some point). As a side note, we recently imported Microscheme into the ports tree, and I know that can be used to configure the OG Atreus.

                                                      1. 4

                                                        That’s so cool to hear Microscheme is being packaged! I’m looking forward to digging back into that some time soon.

                                                        Edit: The Microscheme firmware right now only works with the Classic Atreus, but it would be like 10-20 minutes of work to update it to work with the new Keyboardio one.

                                                        1. 4

                                                          I have my own fork of said firmware that is tailored to be more OpenBSD friendly which I run on my classic Atreus. You can find it here: https://github.com/jturner/menelaus

                                                      2. 3

                                                        I regularly flash my ergodox from OpenBSD (teensy 2 - using devel/teensyloader) - avrdude is also available, and should be able to flash the ATmega32U4 just fine!
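
                                                        For boards with a serial bootloader on the ATmega32U4, the avrdude side would look roughly like this (the device path and firmware file name are guesses; check avrdude(1) for your board):

                                                        $ avrdude -p atmega32u4 -c avr109 -P /dev/cuaU0 -U flash:w:atreus-firmware.hex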

                                                      3. 1

                                                        Yet another new keyboard that doesn’t include the function keys.

                                                        I know you can really only speak for yourself, but why are so many new designers doing this?

                                                        1. 2

                                                          I know you can really only speak for yourself, but why are so many new designers doing this?

                                                          I don’t find them to be useful, and I guess others don’t either.

                                                          For decades you could only buy keyboards that had function keys, regardless of whether you found them useful. We’re only just now getting to the point where you have the choice to buy a design that actually fits the way you personally use your keyboard. For me it’s like a breath of fresh air.

                                                      1. 15

                                                         Ctrl-Z and fg are shell features and have nothing to do with vim. Also, “-” as a sign to read from stdin is a very common pattern in Unix, and many tools understand it. The “**” thing looks like fzf and also has nothing to do with vim itself. I know that I am splitting hairs a bit here, but I feel we should attribute things to the right tools and not conflate shell features with vim.
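
                                                         For instance, none of this needs vim at all; any program that reads stdin or can be suspended works the same way (the file names are just examples):

                                                         $ dmesg | vim -          # "-": read the buffer from stdin
                                                         $ vim main.c             # ...then hit Ctrl-Z: the *shell* suspends the job
                                                         $ fg                     # and fg brings the editor back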

                                                        1. 10

                                                          Also “nvim” is neovim - not Vim!

                                                          1. 2

                                                            I didn’t mean to mislead so much as impress VS-Code users / command-line-shy types with the might of what you can do with vim’s nimble CLI :)

                                                            1. 1

                                                             And here I thought I was a fool for installing fzf, but no - it looks like ** simply expands from the current directory.

                                                            1. 2

                                                              How much is this shifting the goal posts in attributing missing defences to WASM when the problems are more inherent to the unsafe languages that are compiled to WASM and how this is done? What promises that WASM is making are being broken?

                                                              1. 2

                                                                I don’t really see it as goal post shifting. IMO it’s more of a cautionary tale: Unsafe applications still unsafe in WASM.

                                                                What promises that WASM is making are being broken?

                                                                Front and center on the wasm site:

                                                                Safe

                                                                WebAssembly describes a memory-safe, sandboxed execution environment that may even be implemented inside existing JavaScript virtual machines. When embedded in the web, WebAssembly will enforce the same-origin and permissions security policies of the browser.

                                                                The FAQ on security makes it feel like your C/C++ apps will suddenly be more secure:

                                                                Memory Safety

                                                                Compared to traditional C/C++ programs, these semantics obviate certain classes of memory safety bugs in WebAssembly. Buffer overflows, which occur when data exceeds the boundaries of an object and accesses adjacent memory regions, cannot affect local or global variables stored in index space, they are fixed-size and addressed by index. Data stored in linear memory can overwrite adjacent objects, since bounds checking is performed at linear memory region granularity and is not context-sensitive. However, the presence of control-flow integrity and protected call stacks prevents direct code injection attacks. Thus, common mitigations such as data execution prevention (DEP) and stack smashing protection (SSP) are not needed by WebAssembly programs.

                                                                It does go on to say:

                                                                Nevertheless, other classes of bugs are not obviated by the semantics of WebAssembly.

                                                                My takeaway is basically: Are programs written in unsafe languages unsafe when running on WASM? Yes, but now you have an entirely new set of attack vectors (XSS for example) that you need to worry about. And you have to worry about them in spite of the security claims made by the WASM docs.

                                                                1. 2

                                                                  My takeaway is basically: Are programs written in unsafe languages unsafe when running on WASM? Yes, but now you have an entirely new set of attack vectors (XSS for example) that you need to worry about. And you have to worry about them in spite of the security claims made by the WASM docs.

                                                                  I agree with you that this shows that unsafe applications can still be unsafe in WASM. However, due to the design of WASM there are a number of vulnerabilities that no longer exist. My takeaway from this paper is that the design decisions in WASM are just a tradeoff, since certain vulnerabilities are opened up again which are “solved” with native applications.

                                                                  From my (potentially naive) point of view, it seems easy enough for future iterations of WASM to include segmented memory with protection bits. That would seem to eliminate a number of the now exposed vulnerabilities in current WASM. The stack overflow vulnerabilities also seem solvable in future iterations. Some of the other issues might be more inherent to having a linear memory space, so we might need alternative solutions.

                                                                  It is concerning that the security claims made by the WASM docs don’t include warning of these potential issues. In general, however, I do think WASM stands by its promise of a (fairly) safe, sandboxed environment. This shows that you do still have to be careful, and imo exposing something like eval to a WASM module is a serious mistake in the first place.

                                                              1. 2

                                                                 pkg_add is written in Perl. Having looked at it, I find it terrible. A rewrite is necessary. But to get that done, somebody would have to step up and put in the effort.

                                                                Thus perl needs to remain in base.

                                                                1. 4

                                                                   I always wished that they could adopt an xbps-like package manager, but that’s maybe because I have had more experience with XBPS than with OpenBSD (on workstations).

                                                                  1. 3

                                                                    I dream that some BSD-family system will actually make the package manager a first class citizen.

                                                                     This includes getting rid of the distribution tarballs (base.tgz, etc.) and replacing them with packages, ideally more fine-grained ones. Thus also replacing the untarring done in the install process with a tool similar to debootstrap or pacstrap.

                                                                    I believe it is one of the many problems holding the BSDs back.

                                                                    Tangentially related, I also dream that they’ll eventually drop CVS for Git or a competing DVCS.

                                                                     Between the two, I feel the latter is more likely, thanks to the existence of those pushing for it (ESR).

                                                                    1. 3

                                                                      Dream nearly granted: https://wiki.freebsd.org/PkgBase

                                                                      1. 1

                                                                        Nearly, but it might go the way of many such efforts in FreeBSD. Down the drain.

                                                                        Statistically speaking, the proponent will get tired and leave.

                                                                        Among the BSDs, I’m specifically not a fan of FreeBSD. It had great talent and energy at a time. They left. It’s now called Dragonfly.

                                                                        I still wish FreeBSD the best and hope the PkgBase effort does succeed. If anything, because it might motivate the other BSDs to do the same.

                                                                      2. 2

                                                                         Re. DVCS, there is got.

                                                                         I have a running bet that it will replace CVS for OpenBSD.

                                                                      3. 1

                                                                        nix was recently ported! Virtually nothing works, but it’s a start!

                                                                    1. 2
                                                                      Bugfixes
                                                                      --------
                                                                      
                                                                       * ssh(1): fix IdentitiesOnly=yes to also apply to keys loaded from
                                                                         a PKCS11Provider; bz#3141
                                                                      

                                                                      Well this one is good to see as that used to be pretty annoying, although I’ve now switched to yubikey-agent to not have to deal with the PKCS#11 implementation anymore.

                                                                      1. 2

                                                                        What does the yubikey-agent get you that isn’t native to OpenSSH >= 8.2?

                                                                        It seems like the yubikey-agent stuff was a fill-gap for older versions of OpenSSH that didn’t support FIDO out of the box, or maybe I am missing something?

                                                                        1. 4

                                                                          It’s absolutely a fill-gap, because FIDO support requires OpenSSH >= 8.2 on both sides of the connection. There’ll be a long tail of servers running older OpenSSH, and it’s nice to have a solution for people stuck connecting to them. For example, Ubuntu 18.04 is supported until April 2023 with extended support until April 2028, and uses OpenSSH 7.6.
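
                                                                          For reference, when both ends are on 8.2+ the native flow is just (the key path is only an example):

                                                                          $ ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk   # touch the key to enroll
                                                                          $ ssh -i ~/.ssh/id_ed25519_sk user@host              # touch again on each auth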

                                                                          1. 5

                                                                            Cool, I basically live on OpenBSD current, so I have had this (both ends) for some time now. Would be handy for github though!

                                                                            1. 3

                                                                              Right, exactly this. I have personal servers running sshd that ships with the OS that aren’t yet on 8.2+, and similar for work.

                                                                              My employer gives all employees a YubiKey but our servers run Debian and we don’t backport newer OpenSSH versions, so yubikey-agent allows me to have an easy way to use it without the complicated and slightly flaky PKCS#11 setup.

                                                                              Another advantage of yubikey-agent is it allows you to re-plug your YubiKey and it doesn’t break. The stock ssh-agent (combined with OpenSC) generally stops working if the YubiKey is unplugged and it’s fiddly to get it working again.

                                                                        1. 2

                                                                          That is some sick ascii art!

                                                                          1. 3

                                                                            Off topic: When was the OpenBSD Dev hat changed to Comic Sans?

                                                                            1. 3
                                                                              1. 2

                                                                                Huh, never noticed that. Good touch though.

                                                                          1. 10

                                                                            I do basically the same thing but in pure shell:

                                                                            k() {
                                                                            	${DEBUG}
                                                                            	if [ -z "$1" ]; then
                                                                            		# no argument: remember the current directory
                                                                            		echo "$PWD" >> ~/.k
                                                                            	else
                                                                            		K=~/.k
                                                                            		case $1 in
                                                                            		clean)	sort $K | uniq > ${K}.tmp && mv ${K}.tmp ${K};;	# drop duplicate entries
                                                                            		rm)	sed -i -E "\#^${PWD}\$#d" ${K};;	# forget the current directory
                                                                            		ls)	cat ${K};;	# list saved directories
                                                                            		*)	cd "$(grep -e "$1" ${K} | head -n 1)";;	# jump to the first match
                                                                            		esac
                                                                            	fi
                                                                            }
                                                                            
                                                                            1. 18

                                                                              But it’s not written in rust, so that’s a major drawback!