1. 2

    Personally, I think having both would be handy. I am not sure what format things currently are in, or what the overhead of doing both would be - but something like org-mode would make it much easier to have a single doc that defines both the slides and a single-page .. thing.

    1. 2

      Having both will be hard to maintain, but I will give it a try. Thank you very much!

      1. 2

        Don’t stretch yourself thin on my account :D

    1. 3

      Cargo/npm: I dread doing updates.

      In cargo I have had breakage (requiring refactoring of my code) with updates to rust AND with updates to dependencies (direct or indirect).

      In npm I have had breakage with updates to dependencies. Again requiring me to refactor code.

      With Go: no breakage.

      1. 6

        But what if you want to use the passwords outside of the browser?

        1. 2

          Fully agreed. It came up a couple of times in the thread that people use their password manager for more than simply websites. Nowadays people probably also want to use their password manager on their phone and everything should stay in sync without too much additional effort.

        1. 6

          If there was better support on public clouds, I’d be using NixOS all the time, this totally kills Ansible and other configuration management solutions.

          1. 6

            There’s actually a terraform module that you can use to load configuration into NixOS machines. It’s been on my list to write about it, but I want to use it in production a bit more before I commit to writing about it.

            1. 4

              Which platforms does it work (well) on? I’m using this script to bootstrap Hetzner servers, and I wouldn’t mind something a bit less manually involved.

              1. 5

                I can’t find the comment. A nice lobster user pointed me at nixos-infect and I use it with Terraform like this:

                resource "hcloud_server" "mon2" {
                  image       = "debian-10"
                  keep_disk   = true
                  name        = "mon2"
                  server_type = "cx21"
                  ssh_keys    = local.hcloud_keys
                  backups     = false
                
                  user_data = <<-EOF
                  #cloud-config
                  runcmd:
                    - curl https://raw.githubusercontent.com/zimbatm/nixos-infect/3e9d452fa6060552a458879b66d8eea1334d93d2/nixos-infect | NIX_CHANNEL=nixos-20.09 bash 2>&1 | tee /tmp/infect.log
                  EOF
                }
                
                1. 2

                  I’ve only tested it with AWS, but as far as I understand it should work fine with Google Cloud and just about anything else as long as you have a NixOS system on it.

              2. 5

                Vultr lets you upload an ISO and install from that.

                Edit: They even have an existing nixos ISO you can use!

                1. 5

                  I usually make my own NixOS ISO that will automatically install NixOS on the machine with something like this that I really need to write a blogpost on.

                  1. 4

                    I wonder how hard it would be to extend this solution to create a ZFS-based installation.

                    I’d love to have a way of automatically installing NixOS onto one of OVH’s US-based servers. I’m thinking the best way to do this would be either PXE or a variation on this kexec-based solution.

                    1. 3

                      Not very! You’d just mess with the part that configures disks and mounting. I don’t use ZFS in my VMs; however, it should be easy to do. I would also suggest messing with how it defines the disk in question. I’m going to set something up with ZFS zraid1 groups for when I do installation on my homelab once I get the rack ordered in July (depending on how the research for my homelab goes; I currently have a spreadsheet of hardware I’d want (a bunch of used 2U servers), but I really need to wait until I move to be sure that the new place has space for it).

                    2. 3

                      Nice! Though as a NixOS beginner it might be easier to start out with an existing ISO. :D

                      1. 4

                        Granted, but being able to assimilate a new system in about 3 minutes is a fun party trick :D

                        1. 3

                          Heck yes! I look forward to your post about it!

                          1. 1

                            And just like that, I had to build an ISO! Thanks for pointing me at your repo, it was very helpful! :D

                    3. 2

                      NixOS + packer gives you a decent story, and then you can set userdata to finish off your images on first boot. I wrote about it here: http://jackkelly.name/blog/archives/2020/08/30/building_and_importing_nixos_amis_on_ec2/

                    1. 2

                      Super cool! I have wanted to get into ST recently. At the moment the thing that most prevents me is not having a recent vm on OpenBSD.

                      I did pick up an M1 Mac recently though, maybe working under Rosetta will be fast enough.

                      1. 2

                        Looks like Cog can be built for OpenBSD: https://github.com/OpenSmalltalk/opensmalltalk-vm/issues/413

                        I haven’t tried it myself! But there’s a screenshot in that thread that shows Squeak running, so it could be a promising line of investigation.

                        Also, the aarch64 build of Cog works pretty well. (Not sure about M1 specifically.) The aarch64 build of Cog is what’s driving squeak-on-a-phone.

                        1. 1

                          Oh awesome! ty for digging this up! Last I knew it took a bunch of patches :D

                      1. 7

                        Author here! For people that are not on OpenBSD, you can still use this via the Portable OpenBSD KSH version!

                        There will be some incompatibilities if you go this route, as some of the commands currently used have OpenBSD-specific flags (or are OpenBSD-specific).

                        Feel free to send in patches!

                        1. 1

                          Does it work with mksh?

                          1. 2

                            I don’t believe it will. Completions are done in OpenBSD’s ksh by setting an env variable. Fairly sure that is unique among the ksh implementations.
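
                            As an illustration (the command and word list here are invented), a completion in OpenBSD’s ksh is declared by setting an array named after the command:

                            ```shell
                            # ~/.kshrc fragment, OpenBSD ksh only: offer these words when
                            # completing arguments to a hypothetical "got" command.
                            set -A complete_got -- checkout commit log diff
                            ```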

                            1. 1

                              Alright, thanks!

                        1. 7

                          I did this for a while.. It mostly worked well but never worked great. The pcscd / gpg-agent dance was flaky.. and most days I would have to restart one or the other.

                          Since OpenSSH added FIDO2 and it’s in OpenBSD by default, I have completely switched to using it.. and I have to say it’s painless!

                          I even did a writeup showing how to use two different keys (resident and non-resident) on the same device: https://deftly.net/posts/2020-06-04-openssh-fido2-resident-keys.html

                          1. 2

                            Since OpenSSH added FIDO2 and it’s in OpenBSD by default, I have completely switched to using it.. and I have to say it’s painless!

                            I want to use it. But as far as I understand, GitHub and others do not support it yet, right?

                            1. 2

                              Ya, last I tried it didn’t work on GitHub. They always lag behind pretty bad with regard to OpenSSH features.

                              1. 1

                                I’m confused, isn’t this a client-side OpenSSH feature? Shouldn’t GitHub be agnostic to whether the key lives on a FIDO2 device?

                                Is it a matter of GitHub not supporting the ed25519 key type?

                                1. 2

                                  The FIDO stuff is a new key type: ed25519-sk
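
                                  For readers wanting to try it, a hedged sketch (requires OpenSSH 8.2 or newer and a FIDO2 token plugged in; the file name is an example):

                                  ```shell
                                  # Generate a non-resident FIDO2-backed key; you will be asked
                                  # to touch the token during generation.
                                  ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk
                                  # The resulting .pub file starts with the new key type:
                                  #   sk-ssh-ed25519@openssh.com AAAA... comment
                                  ```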

                          1. 11

                            Another thing to note is that 1.16 brings support for OpenBSD/mips64! jsing@ has been going to town!

                            1. 4

                              Solene’s percent - Solene is an OpenBSD developer who dabbles in NixOS and often writes about her experiences in both!

                              1. 13

                                It’s the default on macOS:

                                qbit@plq[0]:~% openssl version
                                LibreSSL 2.8.3
                                qbit@plq[0]:~% 
                                
                                1. 8

                                  I’ve been a happy customer of Feedbin since 2013. I use their web UI on desktop and the Reeder app (iOS) on my phone. Highly recommend both. Feedbin in particular has a lot of nice touches, like being able to subscribe to Twitter accounts and email newsletters as well as RSS feeds, an API, custom sharing targets, a Feedbin notifier app, and it’s open-source.

                                  1. 3
                                    • Postgres 10
                                    • Redis > 2.8
                                    • Memcached
                                    • Elasticsearch 2.4

                                    That’s a crazy set of deps, especially given PostgreSQL can do key-value storage, pub/sub, and full-text search with insanely fast trigram search. Even if you wanted to keep a dedicated key-value store, Redis and Memcached have huge overlap.

                                    1. 3

                                      It’s a pretty standard Rails stack for sites that get a decent amount of traffic/poll a lot of feeds, which I imagine Feedbin does.

                                    2. 2

                                      Likewise. Not sure when I first signed up, but it’s a bill I’m more than happy to pay each month.

                                    1. 1
                                      1. 8

                                        I had the same knee-jerk reaction :D - at the time I was on a “porting” roll, having just converted the git-prompt stuff to OpenBSD’s ksh.

                                        After further reflection, it became obvious that converting the build system (wrapper?) would potentially introduce more issues than it solves. Sorta a “if it ain’t broke” situation..

                                        If you are looking specifically for Go things to help with, this label has a lot of stuff that one can take a crack at!

                                        If you are looking for OpenBSD+Go things - There is a grip of that too! I have documented a few things here. IMO enabling PIE mode on OpenBSD would be a decent start - it gets ya into various bits in the Go runtime - and eventually into some OpenBSD areas (that I haven’t been able to track down the breakage on :D).

                                        I also know that jsing@ is looking for some help switching things from using syscalls to using libc. That change would let OpenBSD remove the Go specific loosening in the kernel!

                                        1. 2

                                          I had the same knee-jerk reaction :D

                                          Well, the “knee-jerk” reaction is to the person who started that thread for not coming up with further details. I found the reaction of ianlancetaylor to my particular comment very helpful; at least it gives me the idea that if someone wants to step up and make this happen, there is a fair chance it will be included, with the caveat of how to prevent backsliding into bashisms, hence the discussion I started here on Lobste.rs.

                                          After further reflection, it became obvious that converting the build system (wrapper?) would potentially introduce more issues than it solves. Sorta a “if it ain’t broke” situation..

                                          Thanks for sharing that :) I’m a bit afraid / hesitant about that as well, as most people are, I guess.

                                          Thanks for the other pointers as well! The whole reason I was building the runtime myself is that while pledging an SPF filter I found that only LookupHost and LookupAddr can be handled by libc (and call get{addr,name}info), but other lookups, i.e. LookupTXT, always go through native Go, hence I had to pledge “inet” instead of only “dns”. So another thing I’m thinking of is making sure that more of the name resolution is handled via libc using res_init(3), so that code that only needs DNS from the network only needs a “dns” pledge instead of the full “inet”.

                                        1. 2

                                          As an OpenBSD observer but not-yet convert, the thing that I find most off-putting about the setup on a laptop is editing byzantine config files to connect to wifi like I’m on early-2000s Linux. Is there a “pull-down menu, discover visible networks, choose, enter key” GUI to make that more convenient?

                                          1. 7
                                            join WiFiHome wpakey secretSupersecret
                                            join WiFiWork wpakey lesssecret
                                            dhcp
                                            

                                            Seems pretty simple to me :P

                                            It’s also all done via ifconfig. One single command to manage network interfaces.

                                            On Linux there is (was?): ip, iw, iwconfig, ifconfig, iwctl, iwd.. probably others I can’t remember..

                                            That complexity didn’t vanish, it’s just been hidden by NetworkManager.
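
                                            For the curious, the same join can also be done live with ifconfig; a hedged sketch (the interface name iwm0 and the network details are assumptions, and the commands need root):

                                            ```shell
                                            ifconfig iwm0 scan                                    # list visible networks
                                            ifconfig iwm0 join WiFiHome wpakey secretSupersecret  # join one of them
                                            dhclient iwm0                                         # request a lease
                                            ```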

                                            1. 3

                                              Having done this on macOS, Linux, and OpenBSD, I like OpenBSD’s setup the best for anything network related. It is well documented, and consistently works the way it should.

                                              I would greatly prefer to use OpenBSD’s wifi setup to the mess that is NetworkManager/netplan/etc. Since I switched to Ubuntu 20.04, I’ve had no end of trouble with getting networking to work reliably, where it all just worked on OpenBSD on the same hardware. Sadly I need Ubuntu to run certain proprietary packages, so I’m stuck with it for the time being.

                                              I think this is a really enjoyable aspect of OpenBSD – there is no “secret sauce”. Usually the config files you are editing fully define the behavior of whatever they configure, there isn’t some magical daemon snarfing things up and changing the system state behind the scenes (looking at you, NetworkManager, netplan, systemd-resolved, etc.).

                                              That said, because OpenBSD’s tools tend to be well documented, simple, and consistent, they tend to be easy to wrap. I did this for mixerctl.

                                            1. 3

                                              It would be interesting to see a similar test but with pg_trgm included in the postgres test.

                                              1. 1

                                                What does that do?

                                                1. 2

                                                  Creates trigram index, which helps with search for fixed strings and some regular expressions.
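
                                                  A hedged sketch of setting that up (database, table, and column names are invented; needs a running PostgreSQL server):

                                                  ```shell
                                                  psql mydb <<'SQL'
                                                  CREATE EXTENSION IF NOT EXISTS pg_trgm;
                                                  -- A GIN trigram index speeds up LIKE '%substring%'
                                                  -- and similarity() queries on this column.
                                                  CREATE INDEX idx_posts_title_trgm
                                                      ON posts USING gin (title gin_trgm_ops);
                                                  SQL
                                                  ```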

                                              1. 3

                                                There is The-Open-Book project that might result in a decent alternative!

                                                1. 1

                                                  This series is super neat! Thanks for sharing!

                                                  1. 19

                                                    I’m probably not the only one with the opinion that rewrites in Rust may generally be a good idea, but Rust’s compile times are unacceptable. I know there are efforts to improve that, but Rust’s compile times are so abysmally slow that it really affects me as a Gentoo user. Another point is that Rust is not standardized and is a one-implementation language, which also discourages me from looking deeper into Haskell and others. I’m not saying that I generally reject single-implementation languages, as this would disregard any new language, but a language implementation should be possible without too much work (say, within two man-months). Neither Haskell nor Rust satisfies this condition, and contraptions like Cargo make it even worse, because implementing Rust would also mean more or less implementing the entire Cargo ecosystem.

                                                    Contrary to that, C compiles really fast, is an industry standard, and has dozens of implementations. Another thing we should note is that the original C codebase is a mature one. While Rust’s great ownership and type system may save you from general memory-handling and type errors, it won’t save you from intrinsic logic errors. However, I don’t weigh that point that much, because this is an argument that could be made against any new codebase.

                                                    What really matters to me is the increase in the diversity of git-implementations, which is a really good thing.

                                                    1. 22

                                                      but a language implementation should be possible without too much work (say within two man-months)

                                                      Why is that a requirement? I don’t understand your position, we shouldn’t have complex, interesting or experimental languages only because a person couldn’t write an implementation by himself in 2 months? We should discard all the advances rust and haskell provide because they require a complex compiler?

                                                      1. 5

                                                        I’m not saying that we should discard those advances, because there is no mutual exclusion. I’m pretty certain one could work up a pure functional programming language based on linear type theory that provides the same benefits and is possible to implement in a reasonable amount of time.

                                                        A good comparison is the web: 10-15 years ago, it was possible for a person to implement a basic web browser in a reasonable amount of time. Nowadays, it is impossible to follow all new web standards and you need an army of developers to keep up, which is why more and more groups give up on this endeavour (look at Opera and Microsoft as the most recent examples). We are now in a state where almost 90% of browsers are based on Webkit, which turns the web into a one-implementation-domain. I’m glad Mozilla is holding up there, but who knows for how long?

                                                        The thing is the following: If you make the choice of a language as a developer, you “invest” into the ecosystem and if the ecosystem for some reason breaks apart/dies/changes into a direction you don’t agree with, you are forced to put additional work into it.

                                                        This additional work can be a lot if you’re talking about proprietary ecosystems, meaning more or less that you are forced to rewrite your programs. Rust satisfies the necessary condition of a qualified ecosystem, because it’s open source, but open-source systems can also shut you out when the ABI/API isn’t stable, and the danger is especially present with the “loose” crate system, which may provide high flexibility but also means a lot of technical debt when you have to continually push your code to the newest specs to be able to use your dependencies. However, this is again a question of the ecosystem, and I’d prefer to only refer to the Rust compiler here.

                                                        Anyway, I think the Rust community needs to address this and work up a standard for the Rust language. On my behalf, I won’t be investing my time into this ecosystem until this is addressed in some way. Anything else is just building a castle on sand.

                                                        1. 5

                                                          A good comparison is the web: 10-15 years ago, it was possible for a person to implement a basic web browser in a reasonable amount of time. Nowadays, it is impossible to follow all new web standards and you need an army of developers to keep up, which is why more and more groups give up on this endeavour (look at Opera and Microsoft as the most recent examples). We are now in a state where almost 90% of browsers are based on Webkit, which turns the web into a one-implementation-domain. I’m glad Mozilla is holding up there, but who knows for how long?

                                                          There is a good argument by Drew DeVault that it is impossible to reimplement a web browser for the modern web.

                                                          1. 4

                                                            I know Blink was forked from webkit but all these years later don’t you think it’s a little reductive to treat them as the same? If I’m not mistaken Blink sends nothing upstream to webkit and by now the codebases are fairly divergent.

                                                        2. 8

                                                          I feel ya - on OpenBSD compile times are orders of magnitude slower than on Linux! For example ncspot takes ~2 minutes to build on Linux and 37 minutes on OpenBSD (with most features disabled)!!

                                                          1. 5

                                                            37 minutes on OpenBSD

                                                            For reals? This is terrifying.

                                                            1. 1

                                                              Excuse my ignorance – mind pointing me to some kind of article/document explaining why this is the case?

                                                              1. 7

                                                                There isn’t one. People (semarie@, who maintains the rust port on OpenBSD, being one) have looked into it with things like the RUSTC_BOOTSTRAP=1 and RUSTFLAGS='-Ztime-passes -Ztime-llvm-passes' env vars. These point to most of the time being spent in LLVM. But no one has tracked down the issue fully, AFAIK.

                                                            2. 6

                                                              Another point is that Rust is not standardized and a one-implementation-language

                                                              This is something that gives me pause when considering Rust. If the core Rust team does something that makes it impossible for me to continue using Rust (e.g. changes licenses to something incompatible with what I’m using it for), I don’t have anywhere to go and at best am stuck on an older version.

                                                              One of the solutions to the above problem is a fork, but without a standard, the fork and the original can vary and no one is “right” and I lose the ability to write code portable between the two versions.

                                                              Obviously, this isn’t a problem unique to Rust - most languages aren’t standardized and having a plethora of implementations can cause its own problems too - but the fact that there are large parts of Rust that are undefined and unstandardized (the ABI, the aliasing rules, etc) gives me pause from using it in mission-critical stuff.

                                                              (I’m still learning Rust and I’m planning on using it for my next big thing if I get good enough at it in time, though given the time constraints it’s looking like I’ll be using C because my Rust won’t be good enough yet.)

                                                              1. 2

                                                                The fact that the trademark is still owned by the Mozilla foundation and not the to-be-created Rust Foundation is also likely chilling any attempts at independent reimplementation.

                                                              2. 1

                                                                As much as I understand your point about the slowness of compile times in Rust, I think it is only a matter of time before we see them shrink.

                                                                On the standardization point, Haskell has a standard: Haskell 2010. GHC is the only implementation now, but it has a lot of compiler extensions that are not in the standard. The new Haskell 2020 standard is on its way. Implementing standard Haskell (without all the GHC add-ons) is doable, but the resulting language will be way simpler and have flaws.

                                                                1. 2

                                                                  The thing is, as you said: you can’t compile a lot of code by implementing Haskell 2010 (or 2020, for that matter) when you don’t also ship the “proprietary” extensions.

                                                                  1. 1

                                                                    It is the same when you abuse GCC or Clang extensions in your codebase. The main difference with Haskell is that you (almost) only have GHC available, and the community has put its efforts into it and created an ecosystem of extensions.

                                                                    As for C, you could write standard-compliant code that a hypothetical other compiler may compile. I am pretty sure that if we had had only one main compiler for C for as long as Haskell has had GHC, the situation would have been similar: lots of language extensions outside the standard, existing solely in that compiler.

                                                                    1. 3

                                                                      But this is exactly the case: There’s lots and lots of code out there that uses GNU extensions (from gcc). For a very long time, gcc was the only real compiler around and it lead to this problem. Some extensions are so persistent that clang had no other choice but to implement them.

                                                                      1. 1

                                                                        But did those extensions ever reach the standard? I ask candidly, as I do not know much about the evolution of C, its compilers, and the standard.

                                                                        1. 4

                                                                          There’s a list by GNU of the extensions. I really hate that you can’t enable a warning flag (like -Wextensions) that warns you about using GNU extensions.

                                                                          Still, it is not as bad as bashisms (i.e. extensions in GNU bash over POSIX sh), because many scripts declare a /bin/sh shebang at the top but are full of bashisms, since they incidentally have bash as the default shell. Most bashisms are just stupid, many people don’t know they are using them, and there’s no flag to warn about them. Other bad offenders are the GNU extensions of the POSIX core utilities, especially GNU make, where 99% of all makefiles are actually GNU-only and don’t work with POSIX make.
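
                                                                          To make the bashism point concrete, a small sketch: `[[ … ]]` pattern matching is bash-only, while a POSIX case statement does the same job under any /bin/sh:

                                                                          ```shell
                                                                          # Bash-only (fails under a strict POSIX /bin/sh):
                                                                          #   if [[ "$name" == y* ]]; then echo "match"; fi
                                                                          # POSIX-portable equivalent:
                                                                          name="yes"
                                                                          case "$name" in
                                                                            y*) echo "match" ;;
                                                                            *)  echo "no match" ;;
                                                                          esac
                                                                          # prints "match"
                                                                          ```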

                                                                          In general, this is one major reason I dislike GNU: They see themselves as the one and only choice for software (demanding people to call Linux “GNU/Linux”) while introducing tons of extensions to chain their users to their ecosystem.

                                                                          1. 2

                                                                            Here are some of the GNU C extensions that ended up in the C standard.

                                                                            • // comments
                                                                            • inline functions
                                                                            • Variable length arrays
                                                                            • Hex floats
                                                                            • Variadic macros
                                                                            • alignof
                                                                        2. 1

                                                                          If I remember correctly, 10 years ago hugs was still working, and maybe even nhc :)

                                                                          1. 1

                                                                            Yep :) and yhc never landed after forking nhc. UHC and JHC seem dead. My main point is that the existence of a standard does not assure a multiplication of implementations, or cross-compilation between compilers/interpreters/JITs/etc. It is a simplification, and it really depends on the community around those languages. Look at Common Lisp, with a set-in-stone standard and a lot of compilers, where you can easily pinpoint what is going to work or not. Or Scheme, with a fairly simple standard, where you will quickly run out of the possibility to swap between interpreters if you rely on some specific features.

                                                                            After that, everyone has their own checklist about what a programming language must or must not provide for them to learn and use it.

                                                                  1. 4

                                                                    The problem with this scenario is that the user still has to trust the vendor to do the verification.

                                                                    No, they don’t; end users can independently verify the binaries. Take OpenBSD ports and Go programs, for example.

                                                                    More often than not, upstream (gopass, restic, etc.) vendors provide binaries. These binaries can be checked by end users against the version shipped in an OpenBSD package. (Currently OpenBSD makes no reproducible-build guarantees, but it’s entirely possible now that we have Go module support in the ports tree.) They can even be checked without installing the package.
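
                                                                    As an illustration of the kind of check this enables, a minimal sketch using plain checksums (file names are invented, and the two files here are faked with identical content just to demonstrate the comparison; on OpenBSD the tool is sha256 rather than sha256sum):

                                                                    ```shell
                                                                    # Stand-ins for an upstream release binary and the one
                                                                    # built for the package; with reproducible builds the
                                                                    # two digests should be identical.
                                                                    printf 'restic-binary' > upstream-restic
                                                                    printf 'restic-binary' > package-restic
                                                                    a=$(sha256sum upstream-restic | cut -d' ' -f1)
                                                                    b=$(sha256sum package-restic | cut -d' ' -f1)
                                                                    [ "$a" = "$b" ] && echo "checksums match"
                                                                    # prints "checksums match"
                                                                    ```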

                                                                    1. 7

                                                                      “Future-proof” would be more accurate if it were buildable on systems that don’t have nix / docker (OpenBSD, NetBSD, etc.). That said - it looks really nice!