This brings up a whole conversation around wikis and their usefulness. For a private knowledge base, I’ve always found TiddlyWiki to be an “ok” answer for jotting down notes. With the rise of Obsidian and bear.app (the one I use), TiddlyWiki isn’t something I’d choose anymore.
For the days of USB sticks and traveling around to different computers, great, but these days TiddlyWiki just isn’t my cup of tea.
For “official” documentation, I’ve always found it weird, if not off-putting. Maybe I’m in a small camp, but through the years I’ve just never gotten to a place where it’s the first thing I think of.
I think for me TiddlyWiki is less interesting as a private knowledge base, and more interesting in terms of its architecture and the ways in which it is opinionated about hypertext and information more generally. When I first came into contact with it, I was put off, and found it to be ugly and kind of painful to use, but understanding its point of view helps a bit
In terms of architecture, the bit that’s interesting to me is that it’s programmable. The expectation is that you’ll write lots of small Tiddlers (“pages” or “posts” or “notes” – a tiddler is the smallest unit of information within TiddlyWiki), and compose them via linking and transclusion to build larger ones. So instead of writing a single tiddler called “to-do list” that lists particular todos, you’d do better to create individual tiddlers for each todo, giving them an appropriate tag, and then using the built in query functionality to build a list.
You can define macros and write Tiddlers that query other tiddlers in myriad ways using a declarative, Prolog-ish query language, and essentially make TiddlyWiki whatever you want or need it to be, kind of like Emacs – for better or for worse. In that way it’s essentially a programming language – TiddlyWiki itself is implemented in TiddlyWiki.
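As a flavor of what that looks like in practice, the to-do example above can be a tagged-tiddler query along these lines (a rough sketch using the stock list widget and filter syntax; the tag names are just examples):

    <$list filter="[tag[todo]!tag[done]sort[created]]">
      <$link/><br/>
    </$list>

Each todo stays its own tiddler, and the list tiddler just links or transcludes whatever currently matches the filter.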
In terms of information, it embraces the ideas and idealism of the early pioneers of hypertext. This is evident in its terminology (the term transclusion was coined by Ted Nelson) and its support for backlinking, an oft-cited deficiency of the HTML implementation of hypertext, among other things.
Another TiddlyWiki opinion is that your TiddlyWiki should be self-contained – without dependencies on external programs or websites that might change or break your TiddlyWiki. When you link to a tiddler from within TiddlyWiki, you are linking to the object itself, so links and transclusions will follow the tiddler in the way you expect even if you rename it. This is also reflected in how TiddlyWiki lives in a single HTML file. This is in stark contrast to the brittle nature of HTML links, which don’t backlink and reference an address rather than the object itself.
But even with these “opinions” and best practices, TiddlyWiki is pretty dang flexible. The page I linked lists an ecommerce site, personal websites, information sites and documentation, and a game. It’s just HTML and JS under the hood.
This is an interesting talk from the creators of Erlang and TiddlyWiki that goes into some more detail: https://www.youtube.com/watch?v=Uv1UfLPK7_Q
I agree, it really is a very flexible architecture
back in the day, before Wikipedia convinced everyone to see wikis as primarily for reference material, I also found wikis to be a really neat medium for conversation. there are very good reasons it doesn’t happen that way anymore (spam was a problem, and it was hard for newcomers to understand) but it was cool while it lasted.
I do use it. Image support is lacking but that problem also exists with anything Markdown-based.
I thought I was the only person left in the world running smokeping. Cheers!
I also run smokeping. So that’s 3, and by extension infinity.
Also which makes 4 and therefore double infinity.
Make that 5.
Also, it’s really easy to set up on NixOS: services.smokeping. It’s not too heavy and it can be a lifesaver when things break, so there’s little reason not to run it on servers.
Obviously we need to start an IRC channel or mailing list where we can come together and show cool things we do with it!
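For anyone curious, the NixOS side really is tiny. A minimal sketch (untested; the module exposes more options for targets and probes that a real setup would fill in, so check the services.smokeping options on search.nixos.org):

    { config, pkgs, ... }:
    {
      services.smokeping = {
        enable = true;
        # target/probe options also live under services.smokeping;
        # the exact names are best confirmed against the module docs.
      };
    }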
Quick question, what’s the diff between smokeping and uptime kuma? What do you use them for? I have uptime kuma just for monitoring a few sites, I don’t need anything fancy, but I also brought it to work so we can also use the dashboard export thingy. What extras does smokeping have in comparison?
Smokeping is for network latency. I think of Uptime Kuma as more of a heartbeat that checks whether a site is up or down and notifies you when it’s down, whereas Smokeping does analysis of the packets it sends. But I could very much be wrong. I stood up both and they are very similar except for the UI.
Great list!
How do Cloudflare Tunnels compare to Tailscale Funnels? (Hope I got the feature names right)
The main downside(s) I’ve found with Funnels are the lack of custom domain names (you’re stuck with your tailnet name) and generally slower speeds when running through a relay. I ended up spinning up my own using split DNS and a free-tier VPS.
Wait, how are you self-hosting tailscale? Do you run headscale? https://github.com/juanfont/headscale
I misspoke, apologies. This list is more of a “homelab” list, especially with Cloudflare Tunnels and Tailscale. I’m not using Headscale; I’m using the non-self-hosted Tailscale.
I self host miniflux, and a dumb front end I made for Reddit (inspired by old.reddit.com but mobile friendly) because I absolutely refuse to use their official mobile app after they killed third party clients.
Any chance you can link that mobile friendly front end? I’d love to take a look at it.
Sure, https://github.com/Jackevansevo/jeddit. It’s very basic compared to lots of existing clients, but it was fun to hack together. Right now it’s only really good for browsing/reading content. I still need to add comments/votes.
Thanks so much!
I use Fossil as my SCM, Caddy for my web server, Radicale for calendaring, and Grav CMS for my web pages. I have used Gitea when I don’t use Fossil. I also use Jitsi Meet as an open-source Zoom alternative.
Oh yeah, Jitsi. I tried getting that working on Kubernetes once and it never happened. How’d you get it working at home?
I just used the Jitsi Docker setup with Docker Compose and it just kind of works. There’s a little bit of work but it’s not too hard.
Oh dang, ok I’ll revisit it then thanks!
May I ask, as Radicale does not support server-side meeting invitations, how do you send calendar invites on your phone?
I never send out invites to other people so I would not know about this feature.
All of these run off of an Odroid N2+.
miniflux would be ideal if only it supported sqlite. I don’t necessarily see the point of having a single static executable as a feature when you need a separately managed db process.
My thoughts exactly. That’s the reason I use yarr.
Thank you, yarr is exactly what I have been looking for!
soju looks great! ~emersion has a bunch of other neat stuff and it turns out I was using some of it already, fun to explore the rest.
What is libreddit?
A private front-end for reddit; quite peculiar, since I thought reddit imposed restrictions a few weeks ago on HTTP requests against their API.
IIRC the API is still available for low-request app keys, which likely works fine for most individual users’ needs, but the change effectively killed high-usage app keys like third-party mobile clients. When Baconreader went dark because of it, I stopped using reddit on my phone. I’m now 62 days into Dutch on Duolingo. Thanks, reddit!
Oh rock on, that makes sense now.
Which basically led me here.
… wait, does libreddit have apps yet?
The libreddit instance is only accessible from LAN and I only use it occasionally. It uses the anonymous API and I never hit the rate limit.
Vaultwarden for my passwords.
Yep, I hear great things about Vaultwarden. I’m a 1Password user though. :)
A 9front server for backups (via Venti) and a network-wide ad blocker (any proxy should do, Linuxers could just run AdGuard Home). On servers (not “at home”), the list is notably longer.
Ah yeah, I forgot to mention Pi-Hole, I use it and never think about it, I guess that’s a sign of a good DNS server?
I never tried Pi-Hole, but yes, a server which you don’t remember all the time is a painless server that just works.
I don’t think NixOS is for me, but I do love Nix for tool and home management.
I run Fedora Silverblue as my core stable base system, with only a few packages layered on, like Tailscale and some others that need to be system-level. I’ve removed a few packages, and I generally don’t touch it outside of system updates.
All my home/dev environments are controlled through Home Manager. To get Nix working on Silverblue, I used systemd to mount ~/.nix onto /nix, so my Nix store is actually in my home directory.
The only thing I have trouble with is Node.js, because of the way Nix and npm packages play together. In those cases I just install nvm and use that instead.
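For context, that bind mount can be an ordinary systemd mount unit along these lines (a sketch of the approach described, not the exact unit the commenter used; the home path is a placeholder, and getting the /nix mount point to exist on Silverblue’s immutable root is the fiddly part the guide mentioned below and the Determinate installer deal with):

    # /etc/systemd/system/nix.mount (systemd requires the unit name to match the mount point)
    [Unit]
    Description=Bind mount ~/.nix onto /nix
    After=local-fs.target

    [Mount]
    What=/home/alice/.nix
    Where=/nix
    Type=none
    Options=bind

    [Install]
    WantedBy=multi-user.target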
Being able to make small files of packages and mix and match them based on the machine I’m on, and the same with configuration, has been awesome. I use Home Manager as a flake, which makes updating easy too.
I really think there’s an ecosystem/tutorial out there for “nix+silverblue.” Any chance you have any notes or anything on how you got it up and running? I’d love to give it a shot myself, and maybe even build a daily driver out of it.
I originally used this blog post to help me get the systemd file set up, and then I followed the standard install for Nix:
https://julianhofer.eu/blog/01-silverblue-nix/
But it looks like the author went a step further and added proper OSTree support into the Determinate Nix Installer, which does that all for you! You can now just follow that guide.
If you wish to use Home Manager in the non-flakes method, just keep following that guide. I personally wanted to use the flake method (which I really like, and which lets me easily do different configs for different machines).
This guide can help with the flake-style home-manager: https://www.chrisportela.com/posts/home-manager-flake/ but be mindful that it’s also out of date (which is funny, as it links to an article that is also good, but slightly out of date). Just use it for ideas and reference.
In general you need to:
Install Nix
Configure Nix and enable flakes in ~/.config/nix/nix.conf
Set up a home-manager flake in ~/.config/home-manager
You might need to install home-manager globally once, outside of the flake, to use its binary to switch to the one in the flake. It’s confusing, but I did mine in a weird way. (A rough sketch of the flake follows below.)
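For the flake itself, the usual standalone Home Manager shape looks roughly like this (a sketch, not the commenter’s actual config; the user/host names are placeholders, and the nix.conf step above amounts to adding experimental-features = nix-command flakes):

    # ~/.config/home-manager/flake.nix (illustrative)
    {
      inputs = {
        nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
        home-manager = {
          url = "github:nix-community/home-manager";
          inputs.nixpkgs.follows = "nixpkgs";
        };
      };

      outputs = { nixpkgs, home-manager, ... }: {
        # one entry per machine; "alice@laptop" is a placeholder
        homeConfigurations."alice@laptop" = home-manager.lib.homeManagerConfiguration {
          pkgs = nixpkgs.legacyPackages.x86_64-linux;
          modules = [ ./home.nix ];   # small per-machine package/config modules
        };
      };
    }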
Hope that helps. I haven’t set up a fresh one since. When I redid my machine, I set up the bind mount, installed Nix, installed home-manager globally, then pulled my home-manager configs, ran nix flake update inside it, and then did home-manager switch and it worked.
Should be even easier with the Determinate Nix Installer, and that’s what I’ll use when I re-install Silverblue soon (I switched to Tumbleweed for a short stint, but I’m going back).
[Edit] Just a warning: using the flake method, if you want to update your packages, you don’t do nix channel --update or anything of the sort. You go into your home-manager flake directory, run nix flake update, and then do a home-manager switch.
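In other words, the flake-style update loop is roughly the following (assuming the flake lives in ~/.config/home-manager; some setups need home-manager switch --flake . instead):

    cd ~/.config/home-manager
    nix flake update        # bump the pinned inputs (nixpkgs, home-manager, ...)
    home-manager switch     # rebuild and activate the new generation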
As an experiment I wrote a little tool that builds images based on Gentoo.
I needed to build a few images with patches and/or custom build options. It turned out to be a non-trivial task on Debian. Then I thought that Gentoo is really good at that. So I put together a few ebuilds with my patches, and that part went really smoothly. The next problem to solve was how to make the images smaller. So I built a tool that uses Portage to look up package dependencies recursively, and added a few options to ignore packages and files I’m sure are not needed but that still end up in the depgraph. I use it in multi-stage Dockerfiles: the first stage has a complete system with build tooling and such, and it also packs all the files for the final image; the second stage just copies the files from the build stage.
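The two-stage shape is roughly the following (an illustrative sketch, not the commenter’s actual setup; the package atom, overlay path, and the dependency-collection step are placeholders for whatever their tool does):

    FROM gentoo/stage3 AS build
    COPY overlay/ /var/db/repos/local               # custom ebuilds and patches
    RUN emerge --quiet dev-lang/php                 # built with whatever USE flags make.conf sets
    RUN collect-deps --output /image dev-lang/php   # hypothetical: walk the depgraph, copy only needed files

    FROM scratch
    COPY --from=build /image /                      # the final image gets only the collected files
    CMD ["php-fpm"]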
It’s an experiment, but I still manage to get really good results. For example, my PHP image is only slightly larger than the official Alpine one. I’m sure there are plenty of opportunities to optimize the image further. One advantage is that it’s really easy to build images that contain a combination of packages (e.g. php-fpm with php-cli and zsh). It’s also easy to build custom configurations that are consistently applied to all packages (e.g. disable bzip for php but also for the libcurl that php links to).
The downside is that you have to compile everything but with Gentoo that is the way.
I don’t have a need to build docker images, but if I did I’d try out the nix options. It gives the benefit you chose Gentoo for, easy package customization, but without needing to build everything. And other benefits such as reproducibility. ;)
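For anyone curious, the Nix route usually goes through nixpkgs’ dockerTools. A minimal sketch (untested; the name and contents are illustrative, not a recommendation for this particular PHP case):

    { pkgs ? import <nixpkgs> {} }:
    pkgs.dockerTools.buildLayeredImage {
      name = "php-example";
      tag = "latest";
      contents = [ pkgs.php ];                       # or an overridden php with custom flags
      config.Cmd = [ "${pkgs.php}/bin/php" "-a" ];   # placeholder entrypoint
    }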
I’m glad to hear Nix works for people. Personally, I had… let’s call it a “suboptimal” experience with Nix. I tried three times and couldn’t set it up (on macOS, that is the only “reproducible” result I’ve got; maybe it’s different on Linux or NixOS, or whatever). Its learning curve is… non-trivial. With the amount of time I’ve spent on it and the results I’ve got, I decided to put it into the “maybe sometime later” bin.
Yeah the learning curve is quite steep. I also gave up a couple times before it finally clicked for me. There are efforts to improve documentation for new users, but they’re fledgling and at odds with the experimental standing of flakes. Flakes are an improvement in my opinion, but add even more to learn.
If you ever come back and are having trouble, please don’t hesitate to ask for help. There are a lot of great people willing to lend some assistance on the forum, Matrix, or the unofficial Discord.
I was waiting for someone to inevitably say Nix and Docker. I learned a ton from this video: https://youtu.be/5XY3K8DH55M?si=36BvLBJE4TUCNzcy
This reminds me to rewatch it.
This highlights the core problem of taking upstream containers at “face value.”
I’ve said it once and I’ll say it again: “you gotta do your homework” to figure out what you are running.
YOU have to figure out what you need to run your application in a safe, secure manner, and not just do FROM openjdk:9 and call it a day.
What could possibly go wrong? Python and Excel?
Python is a very long way away from being my least favourite programming language, but the Calc language in Excel is far worse in just about every possible way. The nice thing about this project (which actually started with people a couple of doors down from my office several years ago) is not that it’s plugging Python into Excel, it’s that it’s decoupling the Excel data model and UI from the underlying programming language. This should make it easier to plug in other languages, including some future hypothetical designed-for-spreadsheets-but-not-awful programming language.
What could go wrong? Really this sounds amazing tbh…
More people using Excel for tasks for which they shouldn’t. Excel has limitations:
You can’t version control Excel files
You can’t test algorithms written in Excel
You can’t separate the data from the algorithm
… or it is really hard or nobody does it. Excel is good for some cases and I use it for those, but Excel is probably the most overused software, because it is just there.
It’s also one of the most powerful interactive declarative information processing environments available to non-programmers.
I assume MSFT is aiming at ChatGPT code generation for Python to be used by non-programmers to take things further in Excel. Keep fire extinguishers within reach.
You version control Excel files in OneDrive and Dropbox. The algorithms are tested manually by inspecting the output, just like how many programmers do printf-driven testing. Is it best engineering practice? Of course not.
Is something better available to non-programmers short of grovelling in front of the IT dept managers?
Apple Automator, though I suspect it fills largely different use cases. Also only available on macOS, of course.
To “yes, and” this too: have you ever had to write an “if then” in Excel longer than one decision tree? My eyes bleed trying to figure out where to put the commas or parentheses.
I look forward to an IDE text box that does actual spacing and highlighting per conditionals. I only see this as a positive, and frankly a direct challenge to the jupyter notebook ecosystem.
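For anyone who hasn’t felt that pain, even a tame grading rule already reads like this as a single nested formula (an illustrative example, not one from the thread):

    =IF(A2>=90,"A",IF(A2>=80,"B",IF(A2>=70,"C",IF(A2>=60,"D","F"))))

Every extra branch adds another IF(, another trailing parenthesis, and another chance to misplace a comma.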
Kind of. You can version control an Excel file; you can’t version control Excel files. Excel files include version control that integrates with OneDrive / SharePoint, so you can go back to old versions easily. Unfortunately, Excel lets you reference data in other sheets. This is why it doesn’t let you open two files with the same name at the same time: it would make cross-spreadsheet references non-unique. This means that you might actually need to version control multiple spreadsheets simultaneously.
It’s worth noting that, if you have track-changes enabled, all of the Office tools can perform merging and git can delegate merging to external tools. I’ve never done this personally, but I’ve seen other people set it up so that git can merge MS Office documents automatically by invoking the merge functionality in Office whenever it needs to merge two versions of an Office doc. This does mean that you end up storing multiple versions of the history, but if you’re using Office then I’m assuming a few MiBs of wasted space is probably not important to you.
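I haven’t tried this either, but the mechanism being described is git’s custom merge drivers: tag the file types in .gitattributes and point the driver at whatever Office-aware merge command is available locally (the driver name and command here are placeholders):

    # .gitattributes
    *.docx merge=office
    *.xlsx merge=office

    # .git/config (or ~/.gitconfig)
    [merge "office"]
        name = merge Office documents with an external tool
        driver = office-merge %O %A %B   # %O = ancestor, %A = ours (receives the result), %B = theirs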
What could go wrong indeed …?
Well, that’s the end of my Terraform promotion (and upgrading) going forward. I’ll have to stick to 1.5.5 and look for the most productive way to migrate away. Any suggestions?
My personal view is that the ecosystem around “IaC” cloud tooling is so Terraform-heavy that it’s better to wait a few months and see what new projects come up in response to the license change.
I am also currently attempting to use Datalog to replace Terraform with varying degrees of success in convincing my coworkers that it is a good idea.
I’m working on a similar attempt at an application but with Prolog! Seems like the best way to derive something.
good to see this isn’t a new idea, was thinking about doing the same with https://nickel-lang.org/
They all suck. The terraform API approach might be the best way forward, and some other tool might be necessary to improve the handling.
the orange site has found a new appreciation for pulumi, which runs tf providers.
I thought Pulumi was proprietary software that you bought credits to run?
No? The code’s here and it’s Apache-2.0: https://github.com/pulumi/pulumi. Pulumi Cloud is proprietary, but HashiCorp’s hosted offerings always were too.
Thanks a ton, I didn’t realise this was an offering. Maybe it’s intentional but I’ve looked before and haven’t seen something like this.
Yeah, well, their marketing page is not clear about this separation, probably intentionally like you suggest. OTOH at the very top of the pricing page they show you how to use tf-style S3 backend for state files.
I wonder if with this move there will be forks.
Forks will happen, but as said somewhere else, forking Vault will cause some real unease in enterprise environments.
That makes sense.
But would enterprises really care about the license change in most situations? They’ll pay, likely good money, and as long as they do Hashicorp likely won’t end up suing you for anything. Of course this depends on the context, but it’s not like enterprises never use source-available products.
I’m slowly learning that “good money” normally takes months of negotiations, meetings, blood, sweat, and tears on both sides of the deal. If you are running some little “make my life easier” app inside your enterprise and all of a sudden the BSL comes down as “forbidden,” you’ll be spending a ton of time rewriting that app.
And don’t think that’s easy by any standard; so much engineering time is lost retrofitting situations like this.
I respect what the author is trying to do here, but I’ve always found these types of scripts to be “great the first and second time”; after that, the team comes up with their own path.
I had gone through this so many times that I eventually stole (yes, I didn’t even write it myself) this template and just asked people to fill it out in “15 min” chunks.
15 mins of thoughts and writing, 5 mins of going over each section, and boom, done.
Edit, not the poster, but the author :)
I keep promising myself to do this to the next Mac I get.
https://github.com/geerlingguy/mac-dev-playbook
Narrator voice: He still hasn’t.
But one day I will, then I’ll have a repeatable set up every couple of years.
That seems cool, but for now I find I get most of the way there with a two step process:
I’ve found IRC and Matrix to be quite lively still. I also have seen more Discourse-based forums popping up around tech topics in the past decade. All of these have provided me with more genuine discussion and more reliable information than twitter or reddit ever did.
One trend I find unfortunate is each open-source library standing up a Discord server around their single project. It really makes the discussions quite fractured. In theory they’re analogous to Discourse, but they aren’t easily indexed by web crawlers, so the discussions have no long-term value. The one plus I’ve found is that those servers usually have a handful of people who are very dedicated to the project and often eager to engage with me. The Bevy game engine comes to mind.
What IRC channels are you lurking in?
It varies a lot over time. #rust #networking #electronics #linux most recently over the past year.
Lobste.rs and Mastodon.
What are the “#” tags you search on? I’m starting to figure some out here and there, but it’s always nice to have suggestions.
Nothing really, I just have a very curated list of people I follow. :)
Today, mostly with Telegram. I still post some #TIL to Twitter, but I know it will die soon, and I think I should better organize the insights I’ve gotten offline, using Obsidian and other open-standards-based tools.
We are approaching dark times and will only have peace when a distributed platform gets adoption. The best one right now, content-wise, is SSB IMO, but it’s slow AF.
Telegram? What do you mean? Can you tell me where you look for computer-related stuff there?
Groups. There are a few Brazilian ones, like Linux Universe and NixOS Brasil, where I am more active, but there are some other groups that I lurk in and sometimes engage in discussions. I avoid groups where there is some kind of religion around a technology, because people there are more biased towards a side or a way of doing things.
I also follow the lobste.rs Telegram channel.
Yeah, it’s proprietary stuff, but it works, it’s fast, it solves the problem, it’s basically zero maintenance, and it’s not zucked up with non-chronological stuff.
Ah nice, makes sense. thanks :D
I never realized how much I used to use /r/homelab or /r/selfhosted to learn about new or pseudo-production-ready open source software.
I guess https://lemmy.world is trying to fill that gap, but the place I’d been going for 11 years is now gone. 😞🐼
I feel this comment. I’ve been mourning Reddit hard lately.
STH has an overlapping crowd. It’s absolutely not the same thing but I thought you might like the pointer: https://forums.servethehome.com
The communities on programming.dev (another Lemmy instance) are also worth checking out. Should be able to join these from any Lemmy instance.
I’m an IBMer and honestly, the mainframe in so many ways is still our bread and butter.
If you’ve ever been interested in learning about it, you should take a look at https://openmainframeproject.org and start poking around. There’s a push for the “next generation” of engineers to start looking at mainframes as a real career (most current mainframers are retiring or have retired), and there’s good money there.
This honestly makes me wonder why IBM’s cloud offerings are so mediocre. A modern cloud is a hybrid of mainframe and supercomputer concepts. IBM had been designing these for decades when the other players in the cloud space (Microsoft, Oracle, Google, Amazon, and so on) were startups. As a cloud user, I want to be able to write a program, run it on a big computer, and have it scale down to the cost of running a tiny program on a million-user time-sharing machine and scale up to the I/O throughput of a mainframe and the compute of a supercomputer. This is something IBM ought to be able to build and I don’t understand how your management has consistently failed to deliver so spectacularly given that half the things that I’ve seen at Azure and from other cloud providers are copying ideas from IBM 20-30 years ago (or, often, reinventing them, because the people working on the project are young and didn’t study history).
I work for another large software company and my guess is that the reason is something like this: the people building mainframes are not the ones building clouds. They are probably in different offices, maybe even on different continents. Their SVPs are fighting over something or other for recognition, and nobody wants to work with the other party. The thing they have in common is an @ibm.com email address. This would make total sense to me, because I see all of that where I work too.
This was my impression of Google Cloud back when I worked at Google. From the inside you could clearly see a divide in quality between things that were fairly straightforward reskins of core Google infra, vs things that were built from scratch in a short amount of time by Cloud new hires trying to stand up competitors to products from other clouds. The most obvious example being BigQuery, which is basically just Dremel.
I’ve worked for a few legacy companies that tried to pivot to cloud. It’s a very, very painful process.
Typically, the smartest, most experienced employees want to continue to work on the safe legacy products rather than risk their careers on new wildcard products. Then they try to hire new employees for the cloud product, but they can’t afford to compete with the giants, so they get very junior employees who get some experience and leave quickly for greener pastures. Finally, there are often critical, obvious architecture errors because the people remaining have limited experience with cloud engineering and don’t have time to learn.
It’s really tricky.
The companies I worked at did manage to succeed, but only with many years of work and some very serious expensive missteps.
Because all IBM does these days is sell fungible Java hours and mainframe legacy-support contracts, it seems to me.
Mainframes are designed for long-term ROI based on depreciation. In other words, a large up-front investment that is fully “paid off” (in terms of financial filings) before the machine stops running. A big part of the challenge there is making these beefy machines something a standard office building can support, in terms of cooling and backup power. The owners are interested in not replacing the machine. They are willing to put up with relatively poor performance later in the life of the machine in exchange.
The value proposition of the cloud is reduced opportunity costs. The cloud operators can only deliver on that through turnover of machines. They need a failed machine to be something that can be repaired or replaced in hours rather than days. They build smaller replaceable components that can be packed densely. They integrate them closely to reduce power consumption, which enables them to achieve even greater density. All of this is part of why there always seems to be an EC2 instance ready and waiting for you, despite huge demand and growth. The owners want to replace machines often enough to ensure they are able to offer their tenants performance at least on par with the on-premises alternatives, so the cloud doesn’t represent an opportunity cost.
I don’t think that’s the case at all, from my experience with Azure. Cloud hardware lasts a really long time. I suspect that this is part of the push for AI from cloud vendors: AI workloads need the latest GPUs or TPUs. In contrast, the best-selling Azure SKUs are a few generations old (to the extent that new hardware is run with the hypervisor intercepting CPUID to disable features and pretend that it’s older hardware).
I might be missing your point but I think this is my point :) The cloud operator wants newer physical machines even if the tenants don’t need the features, because keeping hardware too long prevents them from taking advantage of greater density potential.
Mainframes are built for an official service lifetime of 10+ years. Microsoft Azure started in 2010. If it had been built on mainframes (as an IBM cloud for mainframe customers would be) then they would just recently have started replacing their original hardware. In practice I’ve worked adjacent to mainframes that have been in service for as long as 22 years. Just incredibly different time scales that attract different types of business leaders.
I am not sure how much I can say here (confidentiality with the employer I am in the process of leaving), but it’s not nearly that clear-cut. The economics are really interesting, but I think they’re regarded as a trade secret.
I think they have too many competing horses.
The “PowerVM” platform is like VMware vSphere and probably could have been modularized into a more public/private cloud system (they did try an OpenStack thing, but OpenStack never seemed to graduate to a serious product).
OS/400 (i) can be somewhat lumped into the above category, since pHyp is based on their LPAR code, but it is yet another weird and wonderful host if you look higher up at the OS. You can imagine the interesting “serverless” approaches that have become a fad being very easy and very secure on this platform.
And there is VM, the granddaddy of hypervisors. IBM demonstrated some awesome scalability in the early 2000s with this platform and Linux. I think one thing holding it back is the “host”-centric computing mindset in and outside of IBM. I’ve seen people on the orange site discussing mainframes and discounting the hardware, but I’d disagree. Software keeps people “locked” into mainframes, but the hardware is quite a bit different from any contemporary platform. It’s designed in such a way that you can and do care about the physicality/locality of the machine as well as the architecture. Which turns out to be an impediment because…
IBM puts $billions into Linux, including features like KVM. It seems like the market has had a silent but strong demand for cloud to be synonymous with amd64 Linux, even to this day (although arm64 is making some inroads, it’s really a cost and supply-chain optimization for the establishment providers, not any technical win). So IBM’s main “cloud” is their acquisition of SoftLayer, which is and was an MSP running low-end hardware, not a full-stack organization, technically or in leadership.
If I had been IBM CTO five years ago, I would have tried to negotiate a good deal on an Arm architecture license by offering to contribute a lot to the RAS features that Arm was missing and on supercomputer features such as transactional memory (on POWER for ages, in the Arm architecture but no implementations) and vector units (SVE is pretty nice but IBM has been winning BLAS competitions for decades). I’d then have had the team that currently builds POWER and System Z chips build a high-reliability legacy-free Arm system adopting the features from LPAR and friends and had Red Hat support it as tier 1. All of the expertise is there for building some amazing infrastructure but it’s just not joined up at all.
I still want IBM i on CHERI. An actual capability OS, on a modern capability architecture.
Has anyone gotten Doom to run on one? ;) I guess they’d have to render to EBCDIC art on those 3270 displays.
I remember learning about the CS department’s IBM mainframe in college. Even back in the mid 80s it was a deep pile of archaeological strata. The equivalent of a Unix pipe was booting two virtual IBM 370s and hooking one’s virtual card punch to the other’s virtual card reader via JCL.
On the Linux side of the mainframe, probably! The key thing you want to know to answer this question is the architecture name, which is “s390x” for Linux. I did some poking around, and it turns out Ubuntu has a package that builds for s390x, but I haven’t tried it: https://packages.ubuntu.com/jammy/crispy-doom
There’s a no-charge IBM LinuxONE Community Cloud available for setting up a Linux VM on s390x. With X forwarding set up, you could give it a shot and report back! https://linuxone.cloud.marist.edu/
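If anyone tries it, the experiment is probably no more than something like this from an Ubuntu s390x guest (untested; the host name is a placeholder, and you still need an IWAD, e.g. from the freedoom package):

    ssh -X user@linuxone-guest              # X forwarding back to your desktop
    sudo apt install crispy-doom freedoom   # the s390x package linked above, plus free IWAD files
    crispy-doom                             # should find the Freedoom IWAD and draw over forwarded X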
How should people get into this and are any of the jobs remote? I’d happily do it if it’s remote and pays well.
There are a bunch of different paths depending on what your background is, from students to re-skilling apprenticeship programs. I’d suggest starting here to explore what may be a good fit: https://ibm.biz/ztalent
Some jobs are remote (especially since the pandemic) but I’d say it’s slightly less common for jobs to be remote than broader tech, since they’re used in a lot of financial and healthcare settings, where security is a high priority and they want folks on-site. Pay-wise, it can be quite lucrative, but it will depend a lot on your experience, area of expertise, and location, as with most things.
I mean this was only a matter of time right? ;)