1. 34

    A self-hosted instance of Miniflux!

    1. 2

      My main problem with Miniflux is that it does not support the Google Reader API, so the new releases of Reeder do not work with it.

      1. 1

        I’m using Miniflux with Reeder 4 and it is working fine with the Fever API. The latest version of Reeder doesn’t support Fever anymore?

        1. 2

          Yes. Additionally, the Fever API is limited to reading only; there is no support for adding new subscriptions, which greatly reduces the usability of Reeder.

      2. 1

        I also recently switched from TTRSS to Miniflux. Not entirely happy with Microflux on Android.

      1. 11

        Rewriting my shell prompt in Zig, and learning Zig.

        Zig is awesome so far. The IRC channel has been helpful :)

        1. 5

          Looking into more Zig things is on my 2021 to-do list! I wish there was a website like “Go By Example” for Zig.

          1. 5

            I’ve been using Ziglearn (especially the “Standard Patterns” chapter) to learn about common language idioms and the standard library.

            1. 5

              In addition to the official docs there’s https://ziglearn.org/ btw, which is a book-in-progress.

              The language manual is pretty extensive but the stdlib docs are just auto-generated and fairly sparse at the moment.

              Edit: whoops, ziglearn already mentioned, took a while to submit this comment after I wrote it.

          1. 10

            Thanks for another great post! Just wanted to mention that most folks probably don’t want to use nix-env, since it is a form of imperative package management. It’s sometimes nice to quickly test out some package (if nix run or nix-shell don’t suffice), but in the spirit of NixOS’ declarative configuration, the better options are:

            • Define global packages in environment.systemPackages in configuration.nix.
            • Define user-specific packages in users.users.<name>.packages.
            • Use home-manager, which not only allows you to specify the packages in your user profile declaratively, but also the configuration of many programs (similar to configuration.nix). See the extensive list of configuration options, which ranges from Emacs to isync.
            • home-manager can be used separately from the system’s NixOS configuration, in which case you run home-manager switch to update the user profile. However, you can also integrate home-manager configurations into the NixOS configuration, so that running nixos-rebuild also rebuilds them.
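            For illustration, the first two options might look like this in configuration.nix (the user name and package choices here are just placeholders):

            ```nix
            { config, pkgs, ... }:

            {
              # System-wide packages, available to every user.
              environment.systemPackages = with pkgs; [ git htop ];

              # Packages installed only into one user's profile.
              users.users.alice.packages = with pkgs; [ ripgrep ];
            }
            ```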

            The really awesome thing about home-manager is that you can also use it on other distributions and macOS. So, you can even have your fully declarative home environment outside NixOS (e.g. I use it to have the same environment on a work server).

            1. 9

              I use nix-env in a declarative way:

              nix-env -rif packages.nix
              

              Which means “remove everything and install everything from this file”. Works great.
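              For reference, a minimal packages.nix for this workflow could look like the following sketch (package names are just examples):

              ```nix
              # The complete set of user packages; nix-env -rif replaces
              # the profile with exactly this list.
              with import <nixpkgs> {};

              [
                git
                ripgrep
                htop
              ]
              ```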

              1. 2

                Fair enough! But I was referring to the installation of separate packages using nix-env -i as described in the blog post, which is the closest to imperative package management. Once you start describing your environments in a Nix file and producing them from that, it becomes declarative.

                This thread is a nice discussion of various ways to manage user environments declaratively:

                https://discourse.nixos.org/t/declarative-package-management-for-normal-users/1823

              2. 8

                Christine, in case you are reading this… From the emacs configuration:

                pkgs.writeTextFile {
                  name = "cadey-emacs.desktop";
                  [...]
                };
                

                There is also a handy makeDesktopItem, which validates the file as well:

                https://github.com/NixOS/nixpkgs/blob/master/pkgs/build-support/make-desktopitem/default.nix

                E.g. from one of my derivations:

                makeDesktopItem {
                    name = "softmaker-office-planmaker";
                    desktopName = "SoftMaker Office PlanMaker";
                    icon = "softmaker-office-pml";
                    categories = "Office;";
                    exec = "softmaker-office-planmaker %F";
                    mimeType = "application/x-pmd;application/x-pmdx;application/x-pmv;application/excel;application/x-excel;application/x-ms-excel;application/x-msexcel;application/x-sylk;application/x-xls;application/xls;application/vnd.ms-excel;application/vnd.stardivision.calc;application/vnd.openxmlformats-officedocument.spreadsheetml.sheet;application/vnd.openxmlformats-officedocument.spreadsheetml.template;application/vnd.ms-excel.sheet.macroenabled.12;application/vnd.ms-excel.template.macroEnabled.12;application/x-dif;text/spreadsheet;text/csv;application/x-prn;application/vnd.ms-excel.sheet.binary.macroenabled.12;";
                    extraEntries = ''
                      TryExec=softmaker-office-planmaker
                      StartupWMClass=pm
                    '';
                }
                
                1. 4

                  TIL! I’ll probably change my stuff to use that, but my current dwm setup doesn’t use .desktop entries.

                2. 3

                  I’m new to NixOS and didn’t know about home-manager, thanks! Do you think there’s a benefit of migrating my current dotfiles setup (a Git repository and stow) to it?

                  1. 4

                    Besides what @chrispickard said, using home-manager to manage dotfiles has the nice benefit that you can use the full Nix language to create configurations. Besides being able to factor out boring and repetitive stuff using Nix functions (and home-manager already provides a lot of functionality to generate cruft for you), you can also directly reference packages or files in packages.

                    Just a random example from my e-mail configuration, where I use pass as the password command.

                    passwordCommand = "${pkgs.pass}/bin/pass Mail/magnolia";
                    

                    In the generated configuration, this will have the full path to the pass command. No need to rely on the command to be available in the user-global PATH or directly through my user profile.

                    1. 2

                      I hadn’t thought about that kind of use case. Thanks to you both!

                    2. 2

                      I migrated most of my own dotfiles to home-manager from stow and it’s honestly so much nicer. I always had issues with stow not wanting to overwrite files, and it was difficult to stow anything that needed to end up in .config.

                  1. 3

                    Adding cgit is also a very simple and nice addition to running your own git server.

                    1. 2

                      Also worth mentioning as part of the git ecosystem is gitweb, which is provided by default.

                      1. 1

                        gitweb has a pretty unintuitive and heavy UI, but its biggest selling point is that it’s bundled within git.

                        cgit has a cleaner design, but it’s still not to my taste (I’d prefer a “simpler” UI), and since it’s not natively provided, it’s a bit more painful to obtain.

                        I may work on some cgit-like daemon or CGI software to have a clean / simple UI to browse git repos, kinda like cgit.

                        Note: all the recommendations and ideas that have been shown here and on dev.to have motivated me to start working on a part 2, which will contain concepts such as annexes, hooks, the git daemon / gitweb daemon, and some tips and tricks I found useful.

                        1. 3

                          You should give fudge a try! :)

                          1. 2

                            I didn’t know about fudge, and after looking at the code, it looks like it needs a bit of rework (e.g. you are forced to use the YAML format, and the server only listens to localhost:8080 and cannot be configured), but it’s a really good start, thanks!

                            1. 2

                              I have some time to work on it over the holidays, so feel free to open issues! I picked YAML for the configuration format because I was familiar with it, though I wouldn’t mind adding support for another format.

                              1. 1

                                I guess I’ll fork it to have the first changes I’d like to have, then submit a PR so we can discuss what could be integrated right into fudge.

                    1. 12

                      I was working on an operating system project for school. The teachers provided us with boilerplate code to get started, and we had to implement basic features such as writing to VGA memory, handling interrupts, and writing a round-robin task scheduler. They gave us dlmalloc, a version of malloc written by Doug Lea, which allocated an initial chunk of memory on the heap like so:

                      extern char mem_heap[];
                      extern char mem_heap_end[];
                      static char *curptr = mem_heap;
                      

                      The memory addresses corresponding to the start and end of the heap were set using a linker script. While the project was compiling and running fine on the school’s computers, I was getting page fault exceptions on my laptop. Using GDB, I figured out curptr was initialised to 0, causing malloc to write to the first memory page and raise an exception as it was protected. I then used objdump to check if the value of curptr was correctly set during compilation:

                      Disassembly of section .data.rel:
                      
                      001156e0 <curptr>:
                        1156e0:       ec
                        1156e1:       e7 11
                      

                      It was! I had never seen a .data.rel ELF section before, but a quick Google search revealed it can appear in executables with position-independent code. The executables compiled on the school’s computers didn’t have this particularity. readelf then revealed that this section was located after the .data one:

                      Section Headers:
                        [Nr] Name
                        [ 0]
                        [ 1] .text
                          ...
                        [ 8] .data
                        [ 9] .got.plt
                        [10] .data.rel
                        [11] .bss
                      

                      I dug deeper into the boilerplate code, and found the following snippet in the kernel entry point:

                          /* Blank all uninitialized memory */
                          movl    $_edata,%edi
                          xorl    %eax,%eax
                      0:  movl    %eax,(%edi)
                          addl    $4,%edi
                          cmpl    $mem_heap_end,%edi
                          jb      0b
                      

                      _edata was the first address after the .data section, which means all memory between it and the end of the heap was zeroed. This included the .data.rel section and curptr!

                      The solution to this issue was to disable the generation of position-independent code using GCC’s -fno-pic flag, which put curptr back in the .data section.

                      1. 3

                        Really cool stuff! I’ve been looking at building something like this into go-gitdir, or having some sort of frontend that can display repos, but haven’t gotten around to it yet. I might have to play with this.

                        1. 2

                          Nice project! I wouldn’t be against adding support for the git-daemon-export-ok magic file or something similar if it could help you.

                          1. 2

                            Neat, I’ll give it a look. Thanks for your work on sway and sourcehut!

                          1. 10

                            Hi there! I’d been wanting to develop a lightweight Git WebUI for a while, and finally got around to it this past month. Fudge has a small feature set on purpose, which I probably won’t expand too much besides adding logging to a file or through syslog, and README rendering.

                            Fudge is running alongside a public Git server and advertises go-import meta tags, which allows me to use my own vanity URL when writing Go code.
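                            For reference, a go-import meta tag follows the “import-prefix vcs repo-root” format and looks something like this (the domain and repository path are placeholders):

                            ```html
                            <meta name="go-import" content="git.example.com/myproject git https://git.example.com/myproject">
                            ```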

                            If you want to give it a try, a live instance is running on my website, and the Ansible playbooks that I used to deploy it on an OpenBSD VPS (using acme-client, httpd, relayd, and slowcgi) are available in the infrastructure repository.

                            Feedback is welcome!

                            1. 2

                              Mikrotik routers are another alternative to Ubiquiti’s EdgeRouter line. The entry level ones are rather cheap and power efficient (10W at max load). They can be a bit of a pain to configure though.

                              1. 2

                                Excellent article! I entirely agree with the introduction: writing a basic Git implementation gave me a better understanding of its internals, along with the file formats and protocols in play. If you’d like to do the same, I recommend going through the Git book’s chapter on plumbing and porcelain commands first, and parts of the technical documentation (pack-format.txt and pack-protocol.txt were really helpful when I implemented communication with a remote).
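                                As a sketch of what the plumbing layer feels like, here is a commit built by hand in a throwaway repository (the file name, message, and identity are made up):

                                ```shell
                                # Build a commit using only plumbing commands.
                                git init -q demo && cd demo
                                echo 'hello' > hello.txt

                                # Hash the file into a blob object and stage it in the index.
                                git update-index --add hello.txt

                                # Snapshot the index as a tree, then wrap the tree in a commit.
                                tree=$(git write-tree)
                                commit=$(echo 'initial commit' |
                                  git -c user.name=Demo -c user.email=demo@example.com commit-tree "$tree")

                                # The porcelain "git commit" does all of the above in one step.
                                git cat-file -t "$commit"   # commit
                                git cat-file -p "$commit"
                                ```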

                                1. 14

                                  There is a third assumption not discussed in this article - that contributions to private repositories are always for employer repositories.

                                  There is no way of discovering whether a private contribution is for an employer, for an employee’s closed-source side project, or for a freelancing project for a third party. I think this is a serious limitation that significantly reduces the usefulness of this research method.

                                  1. 5

                                    It is actually mentioned:

                                    This is not a perfect process, since users can disable showing private repository contributions, or it’s possible the developer has personal private repositories. This is why you want to check as many profiles as possible.

                                    (Bold not in original)

                                    This is why I suggested checking multiple people’s profiles, not just one.

                                    Also, you can correlate weekend work dates across people to spot crunch time. (I added this bit as an additional suggestion to the article after it was published; it’ll show up when the CDN cache resets. But private personal repos were in the original post.)

                                    1. 3

                                      I also feel contributions to private repositories are not a strong indicator of work/life balance. Employees could be working long hours, often going through crunch time, or be expected to reply to emails and phone calls during weekends…

                                      1. 1

                                        It’s just an initial filter, to remove companies that are obviously not a good place to work. Even if a company passes this filter, you still need to e.g. ask about work/life balance during the interview. (I updated the post to note that.)

                                        1. 2

                                          So a company should have to prevent employees from working outside of what the query defines as “work hours”, in order to avoid this type of bad annotation?

                                          There are so many variables at play here: what counts as work hours; the assumption that companies use GitHub’s private repos rather than other or internal ones; the many reasons for pushing to a git remote (a personal wiki, dotfiles, personal projects). Just as hard to prove false would be: “how many employees are pushing to their private dotfiles during evenings?”

                                          Do you have any evidence supporting this claim, or is it just pure guesswork? You say “empirically”, but I question whether that phrasing is applicable. I’m using somewhat strong words here, sorry, but I think companies should not be dragged through the dirt without evidence.

                                    1. 11

                                      Thanks for the nice and complete write-up!

                                      I noticed a few minor issues with the server and client configuration files:

                                      • You might want to set up a CRL (certificate revocation list) and use the crl-verify directive in the server config file to revoke client certificates in case of a compromise.
                                      • OpenVPN 2.4.0 ships with the new compress option lz4-v2, which is undocumented. It seems to use less CPU, drain less power on mobile devices, and possibly has a higher throughput according to this ticket.
                                      • There’s no need to specify push "compress lz4" in the server config file if the client config file has its own compress directive.
                                      • There’s no need to specify a key-direction (0 or 1) after tls-crypt’s keyfile path according to the manpage: “In contrast to –tls-auth, –tls-crypt does not require the user to set –key-direction.”
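                                      For illustration, the CRL and compression points would translate to server-side directives roughly like these (the CRL path is a placeholder):

                                      ```
                                      # Reject clients whose certificates appear on the revocation list.
                                      crl-verify /etc/openvpn/crl.pem

                                      # Available since 2.4.0; each client sets its own matching compress option.
                                      compress lz4-v2
                                      ```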

                                      I usually bundle all the certificates and keys along with the client configuration in a .ovpn file, which I find easier to transfer around and use.
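                                      Such a bundled client.ovpn can inline the certificates and keys between tags, roughly like this (the hostname is a placeholder, certificate and key bodies elided):

                                      ```
                                      client
                                      dev tun
                                      proto udp
                                      remote vpn.example.com 1194

                                      <ca>
                                      -----BEGIN CERTIFICATE-----
                                      ...
                                      -----END CERTIFICATE-----
                                      </ca>
                                      <cert>
                                      -----BEGIN CERTIFICATE-----
                                      ...
                                      -----END CERTIFICATE-----
                                      </cert>
                                      <key>
                                      -----BEGIN PRIVATE KEY-----
                                      ...
                                      -----END PRIVATE KEY-----
                                      </key>
                                      ```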

                                      1. 2

                                        Thank you very much for your feedback! I will update the article when I have some time.

                                        About the compression part, I learned it is now considered insecure because of the VORACLE vulnerability. I will probably explain that briefly and disable compression in the offered configuration, since security is the first criterion.

                                        Thank you very much for your feedback again :)

                                        1. 0

                                          The article is now edited, I added a lot of things, feel free to read it again.

                                      1. 9

                                        I’m using this terraform script: https://github.com/dmathieu/byovpn When I need a VPN, I just apply the changes, and can destroy whenever I don’t need it anymore. That’s basically the same thing, but automated.

                                        1. 5

                                          That’s what I was thinking when reading the article. Even shorter when using sshuttle:

                                          $ brew install sshuttle terraform
                                          $ terraform apply
                                          $ sshuttle --dns -r [user@]sshserver 0.0.0.0/0
                                          
                                          1. 4

                                            Yeah, exactly. I did the same to learn Terraform, and that’s definitely the way to go. Note that you need SSH access, so if you’re on a public WiFi such as in a café, it may fail if port 22 is blocked. I usually spin up my VPN from an LTE network, and once credentials are ready, I configure my VPN and then use the public WiFi.

                                            It’s much more efficient than doing all of this manually 👍

                                            1. 1

                                              I wanted to rewrite my vpn setup to improve my terraform skills. Here is the project: https://github.com/GabLeRoux/terraform-aws-vpn

                                              Key features:

                                              • Runs in its own VPC
                                              • Only a few commands to get started
                                              • Has start, stop and status scripts
                                              • Supports different regions
                                              • It’s well documented
                                              • It’s MIT

                                              Have fun 🎉

                                            2. 4

                                              I’ve been working on a similar project using terraform and ansible: bovarysme/infrastructure. It’s usable even if still a bit rough around the edges (e.g. I have to manually update the ansible inventory after each deploy). Running an OpenVPN server on port 443 TCP has been helping me bypass most port blocking and firewall shenanigans I’ve encountered so far.
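                                              The port change itself is just two directives in the OpenVPN server configuration:

                                              ```
                                              # Listen where HTTPS traffic is usually allowed through.
                                              proto tcp
                                              port 443
                                              ```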

                                              1. 2

                                                Your script looks very interesting. I just wanted a simple approach that anyone could follow without installing packages etc which is why I used TurnKey.

                                              1. 2

                                                Thanks a lot for your project! It has been a really helpful reference since I started writing my own emulator in Go.

                                                The Ultimate Game Boy Talk is also a great introduction to the subject, which I recommend to anyone interested in emulator development or the inner workings of the console.

                                                1. 4

                                                  The compression algorithm described in the first part of the video closely resembles Haruhiko Okumura’s 1989 LZSS implementation (ZIP archive; you’re looking for the LZSS.C file), which has been the basis of many more compression algorithms than the one used in Commander Keen.

                                                  If the subject interests you, he wrote a history of data compression around that time.