Don’t stress too much about the server choice. You can switch later if you want to.
Personally, I just don’t care about the local timeline of my server. I only follow people and hashtags. That works fine, too.
Likewise. Both local and federated are a firehose of messages the likes of which I’d previously only seen in Sims games. It’s fun for a few seconds, but then you just find everyone you want to follow and stay on Home.
Just switched servers from mstdn.social to hachyderm.io and I can confirm switching is really easy. That said, it doesn’t automatically transfer the accounts you are following, but the data import and export process is a breeze (just download / upload some csv files) so it’s quite easy to reinstate your account elsewhere.
It does transfer your followers, actually. That’s the only thing it transfers by default. I’m not sure how it does this, but it takes a while, so I’m assuming it has to inform your followers’ instances of your new handle / instance.
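To illustrate the CSV part mentioned above: the exported following list is just a small CSV file whose first column is what matters for re-importing on the new server. Here is a minimal Go sketch; the column names are assumed from a typical Mastodon export and may vary by server version:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// accountAddresses extracts the "Account address" column from an
// exported following list. The column layout is an assumption based on
// a typical Mastodon export; the exact set of columns varies by version.
func accountAddresses(export string) ([]string, error) {
	r := csv.NewReader(strings.NewReader(export))
	records, err := r.ReadAll()
	if err != nil {
		return nil, err
	}
	var addrs []string
	for _, rec := range records[1:] { // skip the header row
		addrs = append(addrs, rec[0])
	}
	return addrs, nil
}

func main() {
	export := "Account address,Show boosts\nalice@mastodon.social,true\nbob@hachyderm.io,false\n"
	addrs, err := accountAddresses(export)
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs) // [alice@mastodon.social bob@hachyderm.io]
}
```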
A MOSFET is probably unnecessary, as ESP32 pins can be put in open-collector (open-drain) mode, which floats at high output and connects to GND at low output.
An acquaintance of mine made a similar device, also based on an ESP32 (or maybe ESP8266), to prevent his cat from turning off his computer by pressing the power button located on top of the computer case, where the cat likes to sit. One GPIO pin is connected to the power pin on the motherboard, the other is connected to the power button on the computer case, and the microcontroller passes the “button press” through only for a specific pattern of multiple presses of the power button.
Nice! Yeah, I wasn’t entirely sure about the exact requirements (you can find contradicting advice online) so I decided to err on the side of caution — I would like this solution to work for years to come :)
I had the same cat problem. Solved it by taping a flat piece of plastic over the button and offering the cat a different place to sit :-)
Similar cat problem, though it was more that he liked to rub against the corner of the computer case where the button was, and it was quite touch sensitive. Again, a piece of plastic and some tape!
That would be half the fun and double the price and totally not worth it if you have just one uplink anyways.
Yeah, the most likely scenarios for a residential house are:
power loss to the building (you might have a UPS, but you are unlikely to have an autostarting generator with an automatic transfer switch)
upstream ISP loss (fiber is pretty reliable, but a truck or a backhoe can happen to anyone)
power supply failure on a machine with only one power supply (buy more expensive hardware and probably lose some efficiency)
In the last 20 years, I have experienced all of these – mostly while I was at home to fix the things that were in my power.
I’m fascinated by these Stapelberg posts, but yes, not doing any of that tends to be the easier path.
Note that this is all in support of
For the guest WiFi at an event that eventually fell through, we wanted to tunnel all the traffic through my internet connection via my home router. Because the event is located in another country, many hours of travel away, (…)
… where one might also consider, say, not tunneling all guest WiFi traffic through a home router “hours of travel away”. Or having a fall-over scenario to some gateway at a suitable hosting location.
Oh, I also had a fail-over scenario prepared with another gateway on a dedicated server in Germany.
But, tunneling through a residential connection is preferable for residential use-cases like this one :)
Russ Cox published a series of videos on how to solve Advent Of Code 2021 using this language (ivy): https://www.youtube.com/user/rscgolang/videos
I haven’t personally done benchmarks, but I thought I read that rsync over the SSH protocol was a lot slower than the rsync protocol, which has no encryption overhead. Did you see differently?
Yeah, I can definitely believe that SSH can become the bottleneck, or pose a significant overhead, in many setups. But, in my tests, using unencrypted rsync daemon mode was even slower for some reason!
I ran some tests with my network storage PC, downloading a 50GB zero file from my workstation (both connected via a 10 Gbit/s link):
curl -v -o /dev/null reaches ≈1000 MB/s — maximum achievable on this 10 Gbit/s link
ssh midna.lan cat > /dev/null reaches ≈368 MB/s — SSH overhead
rsync (writing to tmpfs) via SSH reaches ≈321 MB/s
rsync (writing to tmpfs) unencrypted reaches ≈337 MB/s
But, once you write to disk, throughput drops even further:
scp (writing to disk) reaches ≈280 MB/s — SSH+disk overhead
rsync (writing to disk) via SSH reaches ≈213 MB/s — rsync overhead
rsync (writing to disk) unencrypted reaches ≈199 MB/s (!) not sure why this is slower
As you have a really fast network, I’m wondering if compression is to blame? IIRC, in both programs compression is disabled by default, but distributions might change it.
Very interesting! Process output latency has bothered me numerous times in Emacs.
Does your change also make M-x compile output faster?
Are you going to upstream your change?
Yes it does. Pretty much anything in Emacs that uses a PTY for the subprocess is improved by this (process-connection-type defaults to t, which makes a PTY the default type for a process unless the developers explicitly opt in to using a regular pipe). It might have some benefits when process-connection-type is nil, but I haven’t tested that extensively.
I’m planning on trying to upstream it once I extend it to work with the rest of the subprocess types for emacs. Currently, I’m not handling network and pipe subprocesses with the background thread.
Nice! If I might be so bold, I’d like to suggest that you could try varying which TLS cipher suite gets used. curl has a --ciphers
option for example. Configuring the client to only allow one cipher suite should do it. In theory I think the newer ones like AES_GCM might be the fastest?
curl is using “SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384”, so that should be fastest. I also tried generating an ECDSA key instead of an RSA key, but it made no performance difference.
I dare you to poke it into a slower mode to see. ;) I would also be super interested to hear about ChaCha+Poly modes (which are apparently new and shiny but not hardware accelerated), CBC (which sucks) and anything with CTR in the name (which I think should be slower than GCM because they have to use a separate MAC, but IDK for certain).
It does make sense that you don’t see throughput change when you switch between RSA and ECDSA. The asymmetric crypto is only used very briefly at the start of the connection. The two sides negotiate a new randomly generated key for symmetric encryption. Once that’s done, the bulk of the data is encrypted with the symmetric cryptosystem.
Everything in my tests is on HTTP 1.1, yes.
HTTP 2 interfered with KTLS sendfile in nginx for some reason.
Without TLS, both sides are presumably using a single sendfile syscall for the body of the request, so this is mostly a test of how fast the Linux kernel can transfer to/from the Ethernet interface. Although on the receiving end I guess it’s also writing to disk.
In trying to optimize networking code over the years I’ve often been frustrated how much slower it runs than the hardware maximum, like orders of magnitude slower. That’s because it’s doing more work on one or both ends — database queries, encoding/parsing data formats, compression/decompression and so on. In some cases it’s worth “wasting” resources keeping a copy of the data in an easy-to-stream format.
It’s also a cool use case for append-only logs — if you have a file structured that way, it can be served as-is over HTTP as a static file, and clients can use conditional range requests to sync with it extremely cheaply.
Caddy is actually not currently using sendfile (see https://github.com/caddyserver/caddy/issues/4731), and still manages to saturate the 25 Gbit/s without trouble :)
Yep, and it would not explain why the go client is slower. Maybe the net package in go std lib is also doing something suboptimal?
Although on the receiving end I guess it’s also writing to disk.
The tests write to /dev/null, so not really.
Is any server really going to send you data fast enough to justify a huge pipe like that? I have a measly 200mbps connection (1% of that!) and I rarely see my computer receiving anything close to its capacity. Maybe just when I download a new version of Xcode from Apple.
(Obligatory grandpa boast about how my first modem was 110bps — on a Teletype at my middle school — and I’ve experienced pretty much every generation of modem since, from 300 to 1200 to 2400 to… Of all those, the real game changer was going to an always-on DSL connection in the late 90s.)
It’s easy to fill a Gigabit line these days in my experience. With a faster uplink, now all devices at my home can fill at least a Gigabit line, at the same time :)
Filling 1 Gbps is trivial, but pumping 25 Gbps of data would be rather challenging if you fully utilize the 25 Gbps duplex with NAT: 25 Gbps in each direction means 100 Gbps of throughput for the router. That’s a huge load on the router, for both software and hardware. For benchmarks, you could rent an hourly-billed Hetzner VPS; they have 10 Gbps connections at a fairly cheap price. I’m also wondering what this ISP’s peering status is; the 25 Gbps doesn’t really mean anything unless you have huge pipes connected to other ASNs. Even with dual 100 Gbps, the network can only serve 8 customers at full speed, which is :(
init7 peers with hetzner directly, other customers report getting 5+ Gbit/s for their backups to hetzner servers :)
The hetzner server I rent only has a 1 Gbit/s port. Maybe I’ll rent an hourly-billed one just for the fun of doing speed tests at some point.
In the meantime, I found this product interesting when searching for the CCR2004, at an MSRP of $199:
https://mikrotik.com/product/ccr2004_1g_2xs_pcie
The 2C/3C low-end “cloud” servers have a full 10G connection, and they’re available across multiple regions.
What discourages me massively about this device is clunky integration like this:
This form-factor does come with certain limitations that you should keep in mind. The CCR NIC card needs some time to boot up compared to ASIC-based setups. If the host system is up before the CCR card, it will not appear among the available devices. You should add a PCIe device initialization delay after power-up in the BIOS. Or you will need to re-initialize the PCIe devices from the HOST system.
Also active cooling, which means the noise level is likely above the threshold for my living room :)
DigitalOcean directly peers with my ISP and I can frequently saturate my 1 Gbit FTTH. I use NNCP to batch Youtube downloads I might be interested in and grab them on demand from DO at 1 Gbit, which I have to say is awesome, cause I can download long 4/8K videos in seconds.
It’s pretty easy to saturate that symmetrically once you have multiple people & devices in the mix, e.g. streaming a 4K HDR10 movie in the living room while a couple of laptops are sending dozens of gigs to Backblaze and the kid is downloading a new game from Steam.
Not really. 4K streaming isn’t that scary; the highest bitrate I’ve ever seen is the Spider-Man release from Sony at 80 Mbps. A Backblaze backup over WiFi maybe uses 1 Gbps, and Steam downloads are also capped at 1 Gbps. So that only uses about 3 Gbps, far from saturated.
Yeah sorry, I meant it’s not hard to saturate GP’s 200Mbps connection. The appeal of 25Gbps is that you’re not going to saturate it no matter what everyone in the house is doing, for at least the next few years.
This doesn’t seem to have any of the pgtk stuff in, so I suppose I still wanna use patched versions if I want semi-proper Wayland support.
Pure GTK didn’t make it into 28.1 and I think they are aiming for 29.1 (so still a couple of years’ wait).
https://twitter.com/newplagiarist/status/1511484751219023872
My first reaction was “a couple of years? that must be an exaggeration!”, but Emacs really does seem to release a new major version only every 2 years per https://www.gnu.org/software/emacs/history.html :-/
I have tried a bunch of different Smart Home products over the last few years …
The author ended up with nearly two baskets full of functional but useless electronics, after just a few years. My “dumb” lightbulbs should live for at least 5 years before I have to replace them (hopefully longer than that). So, one thing I learned is that “smart home” is mostly good at producing electronic waste.
I think that’s not a problem of Smart Home products per se, but rather of me switching from system to system.
This is also one of the motivations to publish this article: so that others can avoid e-waste by jumping straight to the system they like! :)
Oh, I should probably also add that I didn’t throw the hardware away. I sold or gifted all old hardware, so it’s not (yet) waste.
I think ‘smart’ home products are ultimately wasteful per se because they have shorter lifespans and/or use more resources than simpler components. Additionally, many of them reduce one’s security and/or privacy.
While I do appreciate many of their features (it is cool to be able to turn one’s air conditioning on on the ride home from the æroport!), ultimately I believe that they are a mistake.
I have to agree. All this waste reminds me of the waste associated with switching from wired headphones (which require no electronics, just electrics) to Bluetooth.
One thing the author seemingly hasn’t tried is using a manufacturer-independent gateway for Zigbee like https://www.zigbee2mqtt.io. This approach uses a USB-to-Zigbee adapter and makes all Zigbee devices from all vendors talk to each other. Run on a Raspberry Pi together with HomeAssistant (general-purpose home automation) or Homebridge (to bridge to Apple HomeKit), this setup provides a great user experience (for me).
It also opens two common and documented APIs for interfacing with the devices: HomeAssistant (high-level) and MQTT (lower-level, still easy).
I personally run Zigbee2mqtt with HomeBridge and control everything from the Apple ecosystem. It works flawlessly for me and my family. From the nerd-side, I stream all MQTT messages as JSON to a Postgres database and use Grafana to plot various metrics from sensors and devices.
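As a sketch of that MQTT-to-database pipeline, decoding one zigbee2mqtt message is the first step. The payload shape below is hypothetical; the field names follow common temperature/humidity sensors, but the real set depends on the device:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// sensorReading is an assumed shape for a zigbee2mqtt sensor payload.
// The actual fields vary per device; zigbee2mqtt documents each
// device's "exposes" on its supported-devices pages.
type sensorReading struct {
	Temperature float64 `json:"temperature"`
	Humidity    float64 `json:"humidity"`
	Battery     int     `json:"battery"`
	LinkQuality int     `json:"linkquality"`
}

// parseReading decodes one MQTT message payload, as it would arrive on
// a zigbee2mqtt/<friendly_name> topic.
func parseReading(payload []byte) (sensorReading, error) {
	var r sensorReading
	err := json.Unmarshal(payload, &r)
	return r, err
}

func main() {
	r, err := parseReading([]byte(`{"temperature":21.5,"humidity":48.2,"battery":97,"linkquality":114}`))
	if err != nil {
		panic(err)
	}
	// From here, an INSERT into Postgres (and a Grafana panel on top)
	// is a small step.
	fmt.Printf("%.1f degC, %.1f%% rH, battery %d%%\n", r.Temperature, r.Humidity, r.Battery)
}
```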
Personally, I don’t like zigbee2mqtt. However, it is the best solution for a vendor-independent gateway. I’m looking at zigbee-lua, which uses Lua instead of JavaScript, and other alternatives.
I agree - the amount of code and complexity of the Javascript in zigbee2mqtt is staggering. Thanks for pointing me to zigbee-lua - looks great.
In an ideal world, there would be a standardised mqtt protocol definition with zigbee2mqtt and others implementing that protocol. From a short look it’s not entirely clear if zigbee-lua tries to be wire-compatible with zigbee2mqtt.
In an ideal world, there would be a standardised mqtt protocol definition with zigbee2mqtt and others implementing that protocol.
Absolutely agree.
If I understand it correctly, the main complexity of zigbee2mqtt is the quirks for different Zigbee devices that don’t conform to the Zigbee specification. For interaction with Zigbee devices, z2m uses zigbee-herdsman-converters to parse messages to and from devices. Adding support for a new Zigbee device to z2m is actually an implementation of a new converter that understands and processes messages from the new device.
There is an alternative to zigbee-herdsman-converters written in Python - zha-device-handlers. It uses zigpy for access to Zigbee messages, and it is used by the Zigbee plugin for Home Assistant. zha-device-handlers contains a huge number of quirks for Zigbee devices; see the subdirectories in the zhaquirks directory. zha-device-handlers has a great explanation of quirks for Zigbee devices and it is worth reading - https://github.com/zigpy/zha-device-handlers#what-the-heck-is-a-quirk
I’d never heard of zigbee-lua, but it looks pretty stale compared to zigbee2mqtt. I don’t find the implementation language anything more than an implementation detail, especially not when typically running in a container.
I’ve used z2m on one Raspberry Pi and Home Assistant on another. I moved to ZHA (the native Home Assistant implementation) only by accident and was too lazy to start over again. But I’ll go back to z2m after moving houses soon.
I have tried using a USB-to-zigbee adapter, but with custom software and not zigbee2mqtt.
Maybe the experience would have been better with zigbee2mqtt, but I generally like building my own stuff. From that perspective, the Zigbee stack is not great, and my USB-to-Zigbee adapter interfered with the IKEA Trådfri gateway pairing process.
I hope more modern smart home standards result in better ecosystems, but I’ll stick to the vendor gateways for now :)
The software mine came with was absolutely horrendous - which is usually the case for such systems. I updated the stick’s software and then uninstalled it :-)
The great thing is that Zigbee2mqtt makes building your own stuff incredibly nice - you just start at a different level: MQTT instead of the stick’s serial interface. Zigbee2mqtt is the gateway and provides an API (via MQTT) to control devices in your Zigbee network.
That it interferes with other gateways when both are open for devices to join is expected. Multiple separate Zigbee networks in the same house work fine; devices just get confused about which network to join when multiple gateways are in pairing mode.
In my use case, the only gateway is the USB stick used by zigbee2mqtt, which removes the need for all the other gateways, thus freeing you from having to use a different gateway for each vendor and putting you much more in control of the stack.
Well, there’s always Matter, which should come out sometime this year. I’ll keep on using Home Assistant until Matter makes sense to switch to - and then maybe. Home Assistant is pretty nice: not having to care what ecosystem my stuff is in and just interacting with all of it instead.
(not trolling) I expected this to be an AWS EC2 instance or something else cloud-based, due to the 2022 in the title.
I recently ran into an issue where I had a few hundred GB of compressed, encrypted data that was destroying my patience. In the time it took my daily driver to do part of the decompression, I was able to spin up a 32-core, 64GB RAM instance, pull the data down, unpack everything, and run my analytics across it before downloading the results and burning the EC2 instance to the ground. I’ve never been a fan of someone else’s computer being my compute cycles, but that experience definitely left me considering how and where I needed to own the hardware, instead of letting someone else invest in it and just borrowing it for a little bit of time.
I am certainly not upset it isn’t cloud-based, though. I’m always a fan of good cable management and those pictures are gorgeous!
Cloud computing can be very useful like that, I just find it rather inconvenient.
I have https://blog.nelhage.com/post/distributed-builds-for-everyone/ on my list of things to try, though, so perhaps that changes my take on the subject :)
Thanks for the link! I read his post about building LLVM in 90 seconds and that was partly what got me thinking about trying out AWS to skip steps when my local bandwidth and memory were the limitations.
Honestly, it also made me eyeball your benchmarks in the article and ponder if there was a way to cheat and beat those numbers with AWS. More specifically, how much would it cost to be able to beat those numbers. My suspicion is the overhead would take too long (since it only seems fair to include the amount of time to get the instance running) for that to be viable.
The cloud has a long way to go for ease of deployment but there is some indication that it’s getting there. There’s a gradual trend from VMs to containers[1] to FaaS. Each of these is making it easier to deploy small things. It’s still a lot harder to deploy a cloud ‘hello world’ than a local one and there’s also a lot of friction between cloud and local development. My bet for the next decade would be that we see a lot more unikernel-like development environments where the toolchains include cloud deployment as part of the build infrastructure, so you can develop code that runs locally on your OS, locally in a separate VM, or in your favourite cloud provider’s infrastructure.
[1] Container is an overloaded term. I mean it here as a unit of distribution / deployment, it is often also conflated with shared-kernel virtualisation. Cloud container deployments typically don’t use shared kernel (or, if they do, share only within a single tenant) for security reasons. The benefit of containers is that you can easily build a very simple image for your VM.
If you ever feel like trying another model, I’ve recently switched to this one: http://www.ergoguys.com/5bulglbalatr.html (no affiliation, just had nice pictures). Huge ball, scroll wheel, can be used left handed (most trackballs can not) and has two inputs for external switches. Also built like a tank.
I have tried huge-ball trackballs before and found out that I much prefer the thumb-ball trackballs. Thanks for the recommendation, though!
Do you know of any thumb trackballs for left-handed users? Maybe more specific, ones you can actually buy around the world? That’s where most of my searches end, as I use my left hand most often.
Surprised @stapelberg didn’t mention the lack of tiling window manager (he develops i3). That’s what keeps me off of Macs, manual window management drives me nuts.
I use this computer little enough that I’m usually in a browser in fullscreen, and at most a terminal or two side by side. If I wanted to use it for any real work, I’d definitely prefer Linux+i3, but as I wrote in the article, it’ll be a while before Linux will be a reality on this machine…
There are a variety of tiling window managers for macOS, e.g. https://github.com/ianyh/Amethyst
Yeah, I tried Amethyst but it’s a real band-aid solution. It’s about 70% of the way there, but not enough to make it a daily driver.
Not free, but I recently saw some people recommending https://hookshot.app
I’ve used Amethyst, SizeUp, Divvy, Spectacle, and now Moom — I’ve found Moom to be the best of the bunch.
I’ve been using yabai for a while, I’m probably far from the extremes of Xmonad usage but it works well for my purposes.
Speaking of ThinkPads, again, offtopic, but I believe useful, I compiled a list of what I consider usable ThinkPads. Usable for me in Linux, anyway:
List of usable ThinkPads as of December 1st, 2021
Criteria:
Good specs and mediocre specs. Screen real estate (SRE) is assumed for an integer scaling factor (2x or 3x).
ThinkPad P1 Gen 4 https://psref.lenovo.com/Product/ThinkPad/ThinkPad_P1_Gen_4
Intel, 16’’, 16:10, 600 nit, anti-reflection, Adobe RGB, 283ppi, 64GB RAM.
SRE: 1920x1200 (2x)
ACPI S3: maybe with new firmware
Requires custom SKU without Nvidia GPU that might be unobtainable.
ThinkPad X1 Extreme Gen 4 https://psref.lenovo.com/Product/ThinkPad/ThinkPad_X1_Extreme_Gen_4
Intel, 16’’, 16:10, 600 nit, anti-reflection, Adobe RGB, 283ppi, 64GB RAM.
SRE: 1920x1200 (2x)
ACPI S3: maybe with new firmware
Requires custom SKU without Nvidia GPU that’s reasonably easy to get.
ThinkPad X1 Carbon Gen 9 https://psref.lenovo.com/Product/ThinkPad/ThinkPad_X1_Carbon_Gen_9
Intel, 14’’, 16:10, 500 nit, glossy, DCI-P3, 323ppi, 32GB RAM
SRE: 1920x1200 (2x) or 1280x800 (3x)
ACPI S3: maybe with new firmware
ThinkPad X1 Yoga Gen 6 https://psref.lenovo.com/Product/ThinkPad/ThinkPad_X1_Yoga_Gen_6
Intel, 14’’, 16:10, 500 nit, anti-reflection, DCI-P3, 323ppi, 32GB RAM, tablet-convertible.
SRE: 1920x1200 (2x) or 1280x800 (3x)
ACPI S3: unknown
ThinkPad X13 Gen 2
AMD: https://psref.lenovo.com/Product/ThinkPad/ThinkPad_X13_Gen_2_AMD
Intel: https://psref.lenovo.com/Product/ThinkPad/ThinkPad_X13_Gen_2_Intel
13.3’’, 16:10, 400 nit, anti-glare, sRGB, 226ppi, 32GB RAM.
SRE: 1280x800 (2x)
ACPI S3: yes on AMD, probably not on Intel
ThinkPad X1 Titanium Yoga Gen 1 https://psref.lenovo.com/Product/ThinkPad/ThinkPad_X1_Titanium_Yoga_Gen_1
Intel, 13.5’’, 3:2, 450 nit, anti-reflection, <sRGB, 200ppi, 16GB RAM, tablet-convertible
SRE: 1128x752 at 2x, 1504x1002 at 1.5x
ACPI S3: unknown
ThinkPad X1 Nano Gen 1 https://psref.lenovo.com/Product/ThinkPad/ThinkPad_X1_Nano_Gen_1
Intel, 13.0’’, 16:10, 450 nit, anti-reflection, sRGB, 195ppi, 16GB RAM
SRE: 1080x675 at 2x, 1440x900 at 1.5x
ACPI S3: maybe with new firmware
As for the M1 Mac itself: yes, it’s great. Its main flaw is the limited amount of memory (16GB – fixed in the M1X), and the lack of support for more than one external display (again fixed in the M1X). I eagerly await my MacBook Pro with the M1X chip.
Somewhat offtopic, I have a couple of questions about your Dell UP3218K display, because I want to buy it myself:
Does it work with an Intel GPU?
I don’t know any Intel GPU that has 2 DisplayPort outputs.
I had hoped for the Intel DG1 to have 2 DisplayPorts, but it has 1 DisplayPort and 1 HDMI, and you can not buy it anywhere (OEM only).
Does it currently (2021) work with an AMD GPU with the open source driver?
I don’t know. The last time I tried an AMD GPU (in 2017), it just would not recognize the full resolution of the display.
Does it work at 8k resolution with any of your ThinkPads?
No. While you can physically connect it to the dock of the X1 Extreme, I haven’t been able to get the nVidia GPU in that machine to work well with the external output in general, and not with the full resolution of the display in particular: https://michael.stapelberg.ch/posts/2021-06-05-laptop-review-lenovo-thinkpad-x1-extreme-gen2/#gpu.
FWIW: The T14 AMD has ACPI S3. I had one for more than half a year, but in the end it didn’t bring a lot of joy. The battery drains after 1-1.5 days in S3 sleep. Resume is also hit and miss on Linux, often the trackpad wouldn’t come up correctly. Sometimes the screen wouldn’t turn on. The whole experience was just meh.
I found out about the touchpad bug on the P1 Gen 4 and X1E Gen 4, but apparently it exists on all ThinkPads that have re-enabled S3 sleep. Lenovo is aware of the bug, but said it’s very low priority since enabling S3 is not a supported configuration.
People have been trying to find various workarounds on the Linux side: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1791427.
The whole thing is a debacle.
Also apparently even though you can re-enable ACPI S3 on some (all?) models, there’s a bug where the touchpad won’t work after wakeup, making all these laptops absolutely useless.
Count me as another vote for Hugo and custom themes. I’ve been blogging with Hugo and my own theme for the past 5 years. It’s been great.
Yeah, my website/blog https://michael.stapelberg.ch/ is Hugo based, too, since 2018!
Hugo indeed is blazingly fast and has a bunch of nice features, including live reload. Definitely the nicest static site generator I currently know of! :)
(kinda off-topic, on the software side)
Hi Michael! Have you evaluated NixOS?
I’m especially interested in knowing your opinion since
1- you do embedded dev
2- you have researched on a package manager and maintained debian packages for some time
3- I remember reading one of your blog posts in which you explained your machines. Maybe nix tools may help keeping your various machines and file servers in sync? (I think you were using rsync)
4- (and we’re thankful for i3wm 😄)
I briefly used NixOS on one machine for about half a year or so, but then switched away because it was too tedious to properly package up each piece of software I needed to run (or even just run it in nix-shell).
These days, more software is already packaged, the community is larger, and there are a bunch more tools, so I should probably take another look at some point. I probably would also not use NixOS for anything interactive, but strictly for servers.
That said, I’m trying to use my own https://gokrazy.org/ for more of my serving workloads than before :)
I’m using rsync to transfer data, not config files :)
Edit: I should add that I found the nix command line experience confusing / not intuitive at all (e.g. “nix-env -iA”, ugh), and even more so their functional language for describing packages. I wish they had a more approachable format.
Thanks for answering
Nix folks are re-implementing the Nix tooling, and there is also an experimental feature called Flakes that can pin the versions of dependencies and address some Nix issues. For configuration, it seems Nickel could be an option in the future.
Another über-cliché question 😅: On your GitHub page, I see lots of your projects are written in Go, even low-level hardware stuff (like IoT, a router, RPi), where one might think Rust could be a better(?) option. Is it just a matter of preference, or do you have a stance on the Go-vs-Rust debate?
Looking forward to when the new nix tooling is the default!
You can find more details about why I like Go in https://michael.stapelberg.ch/posts/2017-08-19-golang_favorite/
Rust never clicked with me when I tried it, thus far.
25 Gbit/s is absolutely insane for a private internet connection. Don’t get me wrong, it’s super cool, and Fiber7’s guarantee to always offer you the fastest possible speed for the same $ (or rather CHF here in Switzerland) makes this a really cheap offering, but I’m curious about the uses. Aside from “because I can”, is there any reason for this? Fiber7 has business options (at a higher price) for, well, businesses, so I’m at a loss what I as a person would do with 25x the bandwidth I already have. Most downloads aren’t bottlenecked by my ISP anyway, and streaming takes a fraction of this incredible bandwidth.
Just checked: I’m “stuck” with 1Gbit where I am, so sadly no fun experiments for me.
It’s not like my Gigabit connection is constantly clogged up or anything, but I’m fascinated by high transfer speeds and want to make https://distr1.org/ Linux images and packages available via 10 Gbit/s for fun, and if I can connect my workstation to the internet with a separate 10 Gbit/s of capacity, I’ll gladly take that :D
So, yes, just because I can (hopefully) for fun.
For one thing, if you have decent upload speeds, it makes hosting low-ish-traffic websites from home a little less hostile - if you ignore intrusion attempts, which I guess is what the business offering is built for.
Desktop computer: https://michael.stapelberg.ch/posts/2022-01-15-high-end-linux-pc/
Desk setup: https://michael.stapelberg.ch/posts/2020-05-23-desk-setup/ with a few changes — I should probably write an updated version :)