I began this journey in December 2020, so it was over the last 14 months. Trying to think back about it, I’d say
+/- 25% for each, since I’m not sure how long it took me
Playing with BGP was probably between 1 and 5% of the overall time. BGP is quite simple and works very reliably, which is both a curse and a blessing :)
This is a great post, thanks for sharing it! I started my own AS a few years ago, but the “kick start” was receiving a v4 block from ARIN after being on their waiting list for 2 or 3 years… Once I had a block, I was able to announce to my upstream provider, but rather than a VM, it was from a dedicated server at a cheap colo DC. It has now grown: I host a friend’s business internet, and have also joined the local internet exchange. It has been an awesome learning experience so far. Cheers!
What’s painful is the following:
% ps ax | grep -c firefox
24
No way to tell which ones actually use memory and whether I might want to reduce the number of processes created.
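For a quick look at which of those processes actually hold memory, something like this works from a terminal (a sketch; `ps` column names vary slightly between implementations, and this assumes a Linux- or BSD-style `ps`):

```shell
# List Firefox processes sorted by resident set size (RSS, in KB), largest first.
# The [f] bracket trick keeps the awk process itself out of the match.
ps axo pid,rss,args | awk '/[f]irefox/' | sort -k2 -rn | head
```

about:processes shows the same information with friendlier labels, but this is handy when you’re already in another tool and have a PID in hand.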
My motivation for limiting memory use over other aspects is that I use a dedicated profile for work stuff that involves Outlook web and Teams web, plus Jira and Confluence. I absolutely don’t care if something crashes there, but I care that these memory hogs are somehow constrained. They could even be twice as slow if they used even 10% less memory. Right now, with FF 95, I’m completely at a loss regarding memory usage.
Try any of
Unfortunately, we need this high amount of processes to mitigate Spectre vulnerabilities. See https://hacks.mozilla.org/2021/05/introducing-firefox-new-site-isolation-security-architecture/ for more
Oh. I had already gone to about:processes but your comment made me spend more time in it and now I understand it better. TBH the UX could really be improved. At least the PID shouldn’t be only something at the end of the Name field because you might want to find by PID (if you’re looking at this because of something you’ve seen in another tool).
What I’d like to know is what the current model is. It’s not one process per tab plus one process for each domain per frame for each tab: I have a single process for two of my tabs (same domain), yet one process for each of my three lobste.rs tabs.
Also, is there a way to have more sharing or is that a thing of the past? In other words, is there any hope that my two awful outlook and teams tabs can share more?
Interesting: I noticed Ghostery being quite active… even after having paused it. Not sure what it was doing, but I’m having none of it any more. NoScript and uBlock are probably enough anyway.
The more script and content blockers you have, the more code will run whenever a script attempts to load. If performance is dear to you, set Firefox’s built-in tracking protection to strict mode. Doing fewer JS/C++ context switches only saves a little each time, but it adds up.
When Firefox is dealing with the 70-90% of undesired scripts, your add-ons will be less busy.
I’ve always been under the impression that FF’s tracking protection ran after add-ons such as ublock because nothing is ever blocked on my machines while ublock blocks stuff.
Normal mode doesn’t block at all. It loads stuff, but with a separate, isolated cookie jar. This has proven to be the best balance between privacy protection and web compatibility. Experience has shown that most of our users would blame the browser if some important iframe doesn’t work or show up.
Now with a loading tab the user has something to interact with.
So for power users the recommendation is to set it to “strict mode”, which doesn’t just isolate but actually blocks.
Good point, I’ve been wondering if FF blocks first, then addons, or vice-versa or even a mix of the two. Any thoughts?
This is an interesting project, but being perfectly happy with the OpenBSD init system right now, I see no reason to try systemd on OpenBSD. In fact, I see a lot of reasons not to, unless I had a very specific use case.
The article’s point is valid, if a little platitudinous (yes, code reuse is good). But:
And they utilize universal interfaces: TCP, HTTP, etc.
Isn’t this utterly wrong? TCP and HTTP are nowhere near universal interfaces. Large amounts of the internet go over UDP and QUIC, and there are many, many different RPC frameworks out there.
TCP and HTTP might be common for moderate-performance high-latency moderate-reliability microservices, which is a large number of services. But if we want to solve the problem, we need something more universal than just the 70% solution. I don’t know how to achieve that level of universality (though I have some ideas) but it’s the goal we need to achieve. Though we’ll probably hit it accidentally - there were many operating systems before and after Unix which failed, and they were all trying to win - merely trying isn’t enough, we need to get lucky too.
Hey, author here.
McIlroy’s quote specifically says:
Write programs to handle text streams, because that is a universal interface.
Emphasis on the “a” is mine.
There is no such thing as an absolute, truly universal interface. It’s a matter of degrees. Text streams certainly weren’t universal for the longest time! We had to standardize ASCII, and later unicode, etc. We’ve since standardized HTTP and it’s certainly more universal than something like a JVM method call.
Isn’t this utterly wrong? TCP and HTTP are nowhere near universal interfaces. Large amounts of the internet go over UDP and QUIC, and there are many, many different RPC frameworks out there.
“Universal” is obviously not precisely defined, but if you’re gonna pick winners for OSI layers 4 and 7, I think the answers are pretty clearly TCP and HTTP respectively. Very little of the internet goes over UDP and QUIC at the moment.
Large amounts of the internet go over UDP and QUIC, and there are many, many different RPC frameworks out there.
The quote is from someone’s post above. I don’t believe it to be true; TCP/HTTP are still the majority of the internet “protocols” used today.
I don’t agree it’s platitudinous because people nod their head “yes” when reading it, but when they sit down at their jobs, they code the area and not the perimeter :)
I see what’s meant by “universal”, but I also see that it’s a misleading/vague term. I would instead say that TCP and HTTP are the “lowest common denominator” or “narrow waist”, and that has important scaling properties.
A related line of research is “HTTP: An Evolvable Narrow Waist for a Future Internet” (2010).
QUIC and HTTP/2 seem to be pushing in that direction. Basically, TCP/IP was explicitly designed as the narrow waist of the Internet (I traced this to Kleinrock, but I’m still in the middle of the research), but the waist is moving toward HTTP.
As far as I understand, QUIC is more or less a fast, parallel transport built specifically for HTTP, while HTTP is now the thing that supports diverse applications. For example, e-mail, IRC, and NNTP are subsumed by either HTTP gateways or simply web apps. As far as I can see, most mobile apps speak HTTP these days as opposed to raw TCP sockets.
Other names: “hourglass model”, “thin waist”, and “distinguished layer”:
Basically this software architecture concept spans all of networking; compilers and languages; and (distributed) operating systems. But the terminology is somewhat scattered and not everyone is talking to each other.
But again, there’s something profound here that has big practical consequences; it’s not platitudinous at all.
Nice write up. I agree that Ubuntu is a reasonable choice for a desktop distro that is easy to setup and get going fast. It facilitates the installation of non-free drivers also, if you’re willing or have to use those. I just can’t help but think it’s the Windows-like version of Linux (bloat wise) and just can’t stand using it, especially for servers. Between Windows and Ubuntu I personally just stick to Windows 10.
I find Fedora a much better workstation distro. It doesn’t push the (IMHO, completely wrong) idea that a different desktop environment requires a derivative distro, and it’s easy to start minimal and add what you want.
no GNU components
The goal is definitely GNU-free, but yea, it still depends on gmake to build some packages. It’s the only GNU dependency, too. A gmake replacement would finish the job.
Seems that you would have to replace freetype as well.
Curious to read a little bit more about the rationale though. What’s so wrong about GNU software?
I think one advantage is that GNU has had something of a “monopoly” in a few areas, which hasn’t really improved the general state of things. The classic example of this is gcc; everyone had been complaining about its cryptic error messages for years and nothing was done. Clang enters the scene and lo and behold, suddenly it all could be improved.
Some more diversity isn’t a bad thing; generally speaking I don’t think most GNU projects are of especially high quality, just “good enough” to replace Unix anno 1984 for their “free operating system”. There is very little innovation or new stuff.
Personally I wouldn’t go so far as to make a “GNU-free Linux”, but in almost every case where a mature alternative to a GNU project exists, the alternative is clearly the better choice. Sometimes these better alternatives have existed for years or decades, yet for some reason there’s a lot of inertia against getting these GNU things replaced, and some effort to show “hey, X is actually a lot better than GNU X” isn’t a bad thing.
A LOT of people have soured on GNU/FSF as a result of the politics around RMS and the positions he holds.
A lot of people were soured on them long before that: the whole GPL3 debacle created a lot of bad blood, the entire Open Source movement arose pretty much because people had soured on Stallman and the FSF, the relentless pedantry on all sorts of issues, etc. Of course, even more people soured on them after this, but it was just the latest in a long line of souring incidents.
Was (re)watching some old episodes of The Thick of It yesterday; this classic Tucker quote pretty much sums up my feelings: “You are a fucking omnishambles, that’s what you are. You’re like that coffee machine, from bean to cup, you fuck up.”
For sure. Never seen The Thick Of It but I love Britcoms and it’s on my list :)
I’ve always leaned towards more permissive licenses. We techies love to act as if money isn’t a thing and that striving to make a living off our software is a filthy dirty thing that only uncool people do.
And, I mean, I get it! I would love NOTHING more than to reach a point in my life where I can forget about the almighty $ once and for all and hack on whatever I want whenever I want for as long as I want! :)
Yeah, when I hear “GNU” I think cruft. And this is from someone who uses emacs! (I guess you could argue it’s the exception that proves the rule, since the best thing about emacs is the third-party ecosystem).
And this is only about GNU as an organization, to be clear. I have no particular opinions on the GPL as a license.
Even Emacs is, unfortunately, being hampered by GNU and Stallman, like how Stallman flat-out refused to make gcc print more detailed AST info for use in Emacs “because it might be abused by evil capitalists”, and the repeated drama over the years surrounding MELPA over various very small issues (or sometimes: non-issues).
From the site:
- Improve portability of open source software
- Reduce requirements on GNU packages
- Prove the “It’s not Linux it’s GNU/Linux …” copypasta wrong
Yeah, “why not?” is a valid reason imvho. I would like to know which one is theirs in actuality. I often find that the rationale behind a project is a good way to learn things.
And fair enough, I assumed you were affiliated. FWIW, Freetype is not a GNU project, but it is indeed fetched from savannah in their repos, which I found slightly funny.
ETA: it also seems to be a big endeavor so the rationale becomes even more interesting to me.
My rationale was partially to learn things, partially for the memez and partially as an opportunity to do things the way I want (all these people arguing about init systems, iglunix barely has one and I don’t really need anything more). I wanted to do Linux from scratch to learn more about Linux but failed at that and somehow this ended up being easier for me. I think I definitely learnt more trying to work out what was needed for myself rather than blindly following LFS.
That’s correct! I downloaded https://download-mirror.savannah.gnu.org/releases/freetype/freetype-2.11.0.tar.xz just to double check, and here is the license:
FREETYPE LICENSES
-----------------

The FreeType 2 font engine is copyrighted work and cannot be used legally without a software license. In order to make this project usable to a vast majority of developers, we distribute it under two mutually exclusive open-source licenses.

This means that *you* must choose *one* of the two licenses described below, then obey all its terms and conditions when using FreeType 2 in any of your projects or products.

- The FreeType License, found in the file `docs/FTL.TXT`, which is similar to the original BSD license *with* an advertising clause that forces you to explicitly cite the FreeType project in your product's documentation. All details are in the license file. This license is suited to products which don't use the GNU General Public License. Note that this license is compatible to the GNU General Public License version 3, but not version 2.

- The GNU General Public License version 2, found in `docs/GPLv2.TXT` (any later version can be used also), for programs which already use the GPL. Note that the FTL is incompatible with GPLv2 due to its advertisement clause.

The contributed BDF and PCF drivers come with a license similar to that of the X Window System. It is compatible to the above two licenses (see files `src/bdf/README` and `src/pcf/README`).

The same holds for the source code files `src/base/fthash.c` and `include/freetype/internal/fthash.h`; they were part of the BDF driver in earlier FreeType versions.

The gzip module uses the zlib license (see `src/gzip/zlib.h`) which too is compatible to the above two licenses.

The MD5 checksum support (only used for debugging in development builds) is in the public domain.

--- end of LICENSE.TXT ---
Having it under a more permissive license is a very valid reason though. Guess why FreeBSD is writing their own git implementation…
If the only tool for a task is closed-source then there is a project trying to make an open-source one. If the only open-source tool for a task is under a copyleft license then there is a project trying to make a non-copyleft one. Once a project is BSD, MIT or public domain we can finally stop rewriting it.
If avoiding copyleft is the goal then the Linux kernel is a weird choice. And important parts of the FreeBSD kernel (zfs) are under a copyleft license too (CDDL).
I find OpenBSD to be one of the best choices as far as license goes. I’ve been slowly moving all my Debian machines to OpenBSD in the past year (not only because of the license, but because it’s an awesome OS).
I haven’t tried using OpenBSD in earnest since around 1998. I prefer a copyleft to a BSD-style license personally, but maybe I’ll take another look. And I hear that
tar xzf blah.tar.gz might even work these days.
It gets improved with every new major release, I’ve used it consistently for the past 3 or 4 releases and there’s always noticeable improvement in performance, user-land tools, drivers, arch support, etc. I’d definitely give it a try again!
This is fine reasoning but relativized. After all, I could just as easily say that if the only tool for a task is under a non-copyleft license, then there is a project trying to make a GNU/FSF version; once GNU has a version of a utility, we can stop rewriting it.
I used to do a fair bit of packaging on FreeBSD, and avoiding things like GNU make, autotools, libtool, bash, etc. will be hard and a lot of effort. You’ll essentially have to rewrite a lot of project’s build systems.
Also GTK is GNU, and that’ll just outright exclude whole swaths of software, although it’s really just “GNU in name only” as far as I know.
Depends on their goals. Some people don’t like GNU or GPL projects. If that’s the case then probably not.
zlib is derived from GNU code (gzip) so anything that includes zlib or libpng etc will “contain GNU code”. This includes for example the Linux kernel.
He didn’t say they’ve achieved their goal. It’s still a goal.
Why does it seem like you’re trying to “gotcha” on any detail you can refute?
It’s just someone’s project.
I’m trying to understand the goal. If the goal is avoiding software that originated from the GNU project that is probably futile. The GNU project has been a huge, positive influence on software in general.
You know the goal. They stated it. The parent comment to you stated it again.
It might be futile, but luckily we don’t control other peoples free time and hobbies, so they get to try if they want. You seem to be taking personal offense at the goal.
From the site:
Iglunix is a Linux distribution but, unlike almost all other Linux distributions, it has no GNU software¹
¹With the exception of GNU make for now
Yes, I still haven’t been able to replace a couple of projects. For self-hosting, GNU make is all that’s left; for Chromium, bison (and by extension GNU m4) and gperf are all that’s left.
New desk got delivered today, going to move my office back to the basement where it’s nice and cool for the summer. Sometime in the future going to have to figure out why 2nd floor of the house is a bit too warm even with AC running.
I have the same keyboard, just got it a few weeks ago. It’s a great entry level keyboard that you can swap switches at any time.
One of the best features of cron is its automatic email
I don’t run e-mail on any of my machines (attack surface) and Cron is the only program that would need it, so this hasn’t been Cron’s best feature for decades for me. I’d rather Cron log to the system facilities like any other program.
My e-mail client connects to an external IMAP server, and besides cron wanting to send e-mail, there’s nothing else on my machines that needs it.
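For what it’s worth, you can get cron output into the system log without running any mail software: pipe each job through logger(1). A sketch of a crontab entry (the script path is a made-up placeholder):

```
# Hypothetical crontab entry: job output goes to syslog instead of cron's mail,
# tagged "backup" so it's easy to filter later
*/15 * * * * /usr/local/bin/backup.sh 2>&1 | logger -t backup
```

The downside versus cron’s mail is that you get log lines even when nothing failed, unless you filter further.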
I wrote a tool to send my cron jobs to HealthChecks.io, so I get error notifications and late job notifications. https://github.com/spotlightpa/kristy
I’ve been using HealthChecks also, but I add the “&& curl …” manually. I will definitely check your repo out.
I use this on my Debian machines, but OpenBSD has the -n flag on crontab(5) which does the same.
You probably use the similarly named (and well-established) chronic, part of moreutils. This tool seems to be an unwitting reimplementation.
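For reference, the chronic approach looks like this (a sketch; the job path is a placeholder). chronic swallows a command’s output unless it exits nonzero, so cron only sends mail on failure:

```
# Only produces output (and therefore cron mail) when the job fails
0 2 * * * chronic /usr/local/bin/backup.sh
```

OpenBSD’s per-entry `-n` flag in crontab(5) achieves a similar “mail only on failure” effect without an external wrapper.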
There isn’t really that large a need for consumer desktops to have this type of gear, so that makes sense to me.
Watching Copa America final (Brazil vs Argentina) on Saturday with a buddy of mine, then wife and I are going to the John Deere Classic (PGA Tour) on Sunday which is the final day.
Huh, it looks like they have a presence in that market
Still, it’s a bit like if Zamboni was sponsoring a hockey league. I guess the CEO likes to golf…
When I first learned about it I thought the same. Apparently Deere bought the land many years ago and donated it to the PGA Tour. Edit: it’s great advertisement for their lawn mowers :)
Holiday weekend, hopefully won’t get called into work and won’t be in the PC at all during this long weekend.
I use LXC containers in my home lab, with automated backups (local and remote). Before any major changes I just create a new backup (takes a few seconds) and do whatever. When things blow up and I can’t find a quick fix, I just roll back and deal with it later when I have more time. To me this is a good mix of getting control of what I run (not a container image that I have to trust was done right) while still getting some of the benefits of containers. Obviously this is a home lab setting so it won’t scale too well.
Big fan of LXC here, too.
How do you back them up exactly? I found that creating & exporting snapshots of my containers takes much more than a few seconds.
I use Proxmox! It has a web UI and console utilities to manage LXC containers. You can run VMs also the same way. The backups take a few seconds for small containers, but can take a lot more depending on the container/vm disk size and the underlying storage (hdd vs flash, etc).
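If you’re scripting backups rather than clicking through the web UI, vzdump is the tool Proxmox uses under the hood. A sketch (the container ID and storage name are assumptions for illustration):

```
# Snapshot-mode backup of LXC container 101 to the storage named "local"
vzdump 101 --mode snapshot --storage local --compress zstd
```

Snapshot mode is what keeps small-container backups down to seconds, since the container doesn’t have to be stopped while data is copied.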
What I also find frustrating on macOS is the fact you need to download Xcode packages to get basic stuff such as Git. Even though I don’t use it, Xcode is bloating my drive on this machine.
We iOS developers are also not pleased with the size on disk of an Xcode installation. But you only need the total package if you are using Xcode itself.
A lighter option is to delete Xcode.app and its related components like
~/Library/Developer, then get its command line tools separately with
xcode-select --install. Git is included; iOS simulators are not.
I’m always surprised when I see people complain about how much space programs occupy on disk. It has been perhaps a decade since I even knew (off the top of my head) how big my hard drive was, let alone how much space any particular program required. Does it matter for some reason that I don’t understand?
Perhaps you don’t, but some of us do fill up our drives if we don’t stay on top of usage. And yes, Xcode is one of the worst offenders, especially if you need to keep more than one version around. (Current versions occupy 18-19GB when installed. It’s common to have at least the latest release and the latest beta around, I personally need to keep a larger back catalogue.)
Other common storage hogs are VM images and videos.
$ df -h / /data
Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme0n1p6  134G  121G  6.0G  96% /
/dev/sda1       110G   95G  9.9G  91% /data
I don’t know how large Xcode is; a quick internet search says it’s about 13GB, and someone else mentioned almost 20GB in another comment here. Neither would fit on my machine unless I delete some other stuff. I’d rather not do that just to install git.
The MacBook Pro comes with 256GB by default, so my 244GB spread out over two SSDs isn’t that unusually small. You can upgrade it to 512GB, 1TB, or 2TB, which will set you back $200, $400, or $800 so it’s not cheap. You can literally buy an entire laptop for that $400, and quite a nice laptop for that $800.
$800 for 2TB is ridiculous. If I had to use a laptop with soldered storage chips as my main machine, I’d rather deal with an external USB-NVMe adapter.
I was about to complain about this, but actually checked first (for a comment on the internet!) and holy heck, prices have come down since I last had to buy an SSD.
I guess disk usage can be a problem when you have to overpay for storage. On the desktop I built at home my Samsung 970 EVO Plus (2TB NVMe) cost me $250 and the 512GB NVMe for OS partition was $60. My two 2TB HDDs went into a small Synology NAS for bulk/slow storage.
It matters because a lot of people’s main machines are laptops, and even at 256 GB (base storage of a macbook pro) and not storing media or anything, you can easily fill that up.
When I started working I didn’t have that much disposable income; I bought an Air with 128GB, and later “upgraded” with a 128GB SD-card-slot thing. Having stuff like Xcode (but honestly even debug builds of certain kinds of Rust programs) would take up so much space. Docker images and such are also an issue, but at least I understand those. Lots of dev tools are ginormous and it’s painful.
“Just buy a bigger hard drive from the outset” is not really useful advice when you’re sitting there trying to do a thing and don’t want to spend, what, $1500 to resolve this problem
I don’t know. For laptops for Unix and Windows (gaming), size hasn’t really been an issue since 2010 or so? These days you can buy at least 512GB without making much of a dent in the price. Is Apple that much more expensive?
(I’ll probably buy a new one this year and would go with at least a 512GB SSD and 1TB HDD.)
Apple under-specs their entry level machines to make the base prices look good, and then criminally overcharges for things like memory and storage upgrades.
Not to be too dismissive but I literally just talked about what I experienced with my air (that I ended up using up until…2016 or so? But my replacement was still only 256GB that I used up until last year). And loads of people buy the minimum spec thing (I’m lucky enough now to be able to upgrade beyond my needs at this point tho)
I’m not lying to prove a point. Also not justifying my choices, just saying that people with small SSDs aren’t theoretical
I have a Raspberry Pi 4 in need of a project, so I am going to turn it into an access point (as per this post: https://willhaley.com/blog/raspberry-pi-wifi-ethernet-bridge/). I have a spot in the house that is going to be hard to run Ethernet cable to, so this is a good use for it, although overkill.
I did the opposite somewhat. I put my wifi router as the access point and my rpi as the internet router.
Should probably be folded into
r0fddp, ping @pushcx
My client was still connected to a server with old services, so I took the opportunity to DROP my account.
07:39 -- MSG(nickserv): info gustaf
07:39 -- NickServ (NickServ@services.): Information on gustaf (account gustaf):
07:39 -- NickServ (NickServ@services.): Registered : Jul 28 22:07:55 2004 (16y 46w 3d ago)
07:47 -- NickServ (NickServ@services.): The account gustaf has been dropped.
thanks for the reminder; just did the same. you can connect to
rinnegan.freenode.net directly to get to legacy-freenode.
Did the same, had my account there since 2014. Been on IRC since early 2000s, never seen this big of a fiasco before.
I’ve been really liking these webzines!