Some are still keeping Chaosnet alive.
I’d like to attach my personal experience of attempting to contribute to Debian here because it seems somewhat relevant to the author’s assertion that “Debian has long been experiencing a decline in the amount of people willing to participate in the project.”
I’ve tried to contribute on three different occasions; two were plain old bug reports, one was a patch. A common thread for each of these contributions was a demoralizing absence of feedback. One of the two bug reports has remained unanswered for over a year now, and my patch submission for about nine months.
Now, I sent the patch I wanted to contribute directly to the relevant maintainer mailing list. I wasn’t sure whether this was the correct way of submitting a patch for a downstream package, so I apologized preemptively in case this was not proper form and asked cordially to be pointed in the right direction if this wasn’t it.
No “thanks for your contribution,” no requests for additional information, no human response whatsoever.
What I did get was spam. Some of it was addressed to the alias I’d used for my bug report, and some of it came straight through the nginx package maintainer list I’d eventually unsubscribe from after giving up hope of receiving feedback and getting sick of receiving spam every couple of days.
Credit where it’s due, my other bug report was responded to and fixed, though also only after several months of radio silence.
I have no idea what to take away from this experience other than the impression that contributions are generally unwelcome, and that I should probably not bother trying to contribute in the future. Maybe this experience is not entirely uncommon, or maybe it’s just me. I really don’t know.
I have no idea what to take away from this experience other than the impression that contributions are generally unwelcome, and that I should probably not bother trying to contribute in the future.
What I would take away from this (and from my similar experiences) is that Debian maintainers are overworked volunteers with too little time and we need more of them. Of course, getting to be one involves somehow getting time from existing ones, so it’s a vicious cycle.
I was a Debian developer in the late 90s (I think from 1999-2003?) and the original application process was quite quick. I later went on hiatus for a period and tried to reactivate my access after about ten years away, but the whole process was so onerous that I’ve just not taken it further. A pity, as I’d love to contribute again.
Yeah the spam is definitely real. I get at least five messages a day (might need to adjust my filter as well, but oh well). I’m happy I used a semi-throwaway address for that list - I can always delete the address and be rid of that spam forever.
I wanted to become a Debian Developer around 2003-2005 and back then you could expect the whole process to take about a year. Spoiler alert: I never even officially started the process.
From what I heard this has improved immensely, but I never followed up and have been contributing to other projects instead. Personally, GitHub has actually been a huge boon here; the friction is just so much less. I’ve also had problems contributing to other, older projects before: at one time an important patch of mine hung for so long that someone else with commit rights fixed it first…
On the other hand I don’t think this can ever be fully solved, but the more open the project is (and yes, that means accepting PRs on GitHub or another public forge), the better the experience tends to be, often by 10x…
I’m still quite new to it, but Paul Hudson’s Hacking With Swift series looks good. He does have Hacking with macOS to cover macOS.
I think the date on the first page of the PDF is misleading - it may have been generated in 1997 but I believe the document is originally from Seventh Edition Unix, which dates from 1979.
On the specifics of the machine you’re looking at, I’d highly recommend Dell instead of HPE - Dell are a lot more friendly with things like BIOS updates (HPE require a service contract to access BIOS updates). In addition, certain HPE machines run the fans at full speed when you use “non-approved” PCIe adapters.
I’d look at something like a Dell R720 - it’s a generation newer than that DL360 G7, still takes cheap DDR3 RDIMMs and it’s fairly quiet. If you don’t mind Reddit, /r/homelab is a very useful resource. So are the ServeTheHome forums.
I’d say that it was a little excessive to say that the BMP file format “didn’t make it” when it was the only raster image format that the Windows OS natively supported for years, and is still supported by Firefox and Chrome out of the box. The WMF file format would have been a more interesting item to use there instead.
You’re lucky! I’m still amazed at the number of people who send screenshots in Word documents or even Excel workbooks.
It’s just a terrible headline.
The article lists formats that were (and are) in widespread use, where most people reading the article probably interacted with more than half. That’s fairly successful.
The real point is that we have faster CPUs than in the 80s, with more ability to compress; we have more bits per pixel and more pixels which increases the benefit of compression; and we moved to networks that are frequently bandwidth constrained. Hence, formats today are more compressed than formats in the late 80s/early 90s. That doesn’t mean they failed, it just means tradeoffs change.
Yeah. Three I had never heard of, two I knew by name, and five I have used anywhere from sometimes to often.
But if the definition is “it’s not PNG, JPEG, GIF, or SVG”, then yes, they didn’t make it.
Same goes for IFF ILBM on the Amiga; it was the only picture format for graphics of 256 colours or less, making it the universal format.
For that matter, TIFF was still the only way we handled photos when I worked in publishing; it can handle CMYK and its only real contender is Photoshop’s internal format.
Yeah but I also think it’s fair to say that IFF ILBM ‘didn’t make it’. Sure, it was the lingua franca for images at the time, but it only ever truly took off on the Amiga, and although those of us who were Of the Tribe may feel like The One True Platform was the most important thing EVER in the HISTORY OF COMPUTING, if we’re honest with ourselves - it wasn’t :)
#include <intuition.h> FOR-EVER! :)
RIFF, a little-endian variant of IFF, lives on in WAV, AVI, WEBP, and other formats.
Well, at least it was included, unlike XPM/PPM or Degas Elite, but I still think this was mostly a tendentious listicle.
Sure I mean the whole idea is an exercise in futility. There are always going to be unhappy NERRRDS grousing about how their Xerox INTERLISP 4 bit image format got left out :)
those of us who were Of the Tribe may feel like The One True Platform was the most important thing EVER in the HISTORY OF COMPUTING
…I’m in this picture and I don’t like it.
I get it, but everyone is young once, and sadly the concomitant dearth of life experience that helps you scope your opinions against commonly perceived reality means we all get a pass, and rightly deserve it :)
It’s the folks who NEVER grow out of this that are sadly crippled and deserve our pity, and maybe where appropriate our help.
3.x’s Datatypes were so awesome, though.
Although I remember setting a JPEG as my Workbench backdrop, and it would take several minutes to display after startup on my 1200, until I downloaded a 68020-optimized (maybe 030/FPU-optimized? I got an upgrade at some point) datatype from Aminet, after which it displayed in only a second or so.
I mean yes but Datatypes were so much cooler. Also I think I had a good reason at the time, but I don’t remember what.
I do remember downloading an MP3 (Verve Pipe’s “The Freshmen”) and trying to play it on my 1200. The machine simply was not fast enough to decode the MP3 in real time, so I converted it to uncompressed 8SVX…it played just fine but took up an enormous portion of my 40MB hard drive.
With accelerator boards, MP3 (and Vorbis, and Opus) playback is feasible. From a quick search, apparently a 68040 at 40MHz will handle MP3 at 320kbps.
And, back then, MP2 was still popular and used fewer resources.
I have a vague feeling that .AVI, which was for a while an extremely prevalent container for video content, is some derivation of IFF (though perhaps not of ILBM).
WMF is a walking security vulnerability. One of the opcodes is literally “execute the code at this offset in the file” and there were dozens of parsing vulns on top of that.
This brought back some good memories. I can remember using Sourcer (albeit an illegal copy) when I was at school in the early 90s and learning x86 assembly language. When I started using the internet Andrew Schulman was one of the first people I emailed (and he even replied!).
Had to smile when I saw jcs’ first two screenshots, from the late 90s/early 2000s. I’m pretty certain I have a directory of desktop screenshots I took in the mid-late 90s, complete with various incarnations of fvwm, AfterStep, WindowMaker and Bowman (the window manager that really kicked off the NeXT-lookalike craze).
I’m still using WindowMaker, to this day. At some point about 10 years ago I took a detour through wmii and ratpoison land but it never stuck, and afterwards I found very few window managers come even close to matching WindowMaker’s speed and ergonomics. I tend to have a lot of windows open (datasheets, reference manuals, code windows, debuggers etc.), so “modern” interfaces, with flat windows and fat titlebars, are pretty much impossible to manage. On the other hand, the “layouts” are too fluid (thanks to many of these docs being PDFs with various font sizes, margins etc.) to meaningfully manage with a tiling WM, I don’t have enough monitors to make that work :).
I tried it many times but I never understood how to really manage windows with it. Somehow things were always on top of each other and it never felt ergonomic. I guess I never read a good tutorial.
I think you did understand it; there’s not much about it to understand if you’ve used any “mainstream” UIs like Windows. It’s just one of those things that everyone has different preferences about :). Things end up (more or less) on top of each other by design, and you can sort of put some order in it using window icons and window shading (double-click on the titlebar and the window is “folded” underneath it). It’s obviously not as neat as tiling WMs, but the way I work usually isn’t neat, either. With tiling WMs, I had the opposite problem: I had so many things open that showing all of them at once was impractical, and trying to fit them into workspaces just meant I spent forever switching between workspaces.
There are a lot of things about WindowMaker that I would improve (the dock is one of them, I always wanted something closer to AmiDock, for example) but it works well enough that I never got around to actually writing any of that.
I see, that makes sense. I try to work differently, so I guess that is why it never worked for me.
I wish I had screenshots of my WindowMaker setup from the late 90s. There was a site that had these amazing gothic themes for WM, but I don’t have any of my files from those days.
After my post I tried to find my old screenshots but it’s proved to be a bit more difficult than I thought - my fileserver home directory has ~25 years of cruft, some of it not that well organised. I’ve not managed to find them yet :(
Yeah, I went and looked, and I have school work back to 1994(!), but nothing like a home directory with my WindowMaker themes.
It would be really cool if you could find it, as in, I’m pretty sure the folks on the WM mailing list would love to hear about it! One of the things the Window Maker community laments is the disappearance of Freshmeat’s theme repository (which was itself a superset of themes.org’s archive if memory serves me right). It hosted more than 3,000 WindowMaker themes, very few of which were mirrored elsewhere. If you still have some of these, you may be the only one who still has them!
Great to see Slackware is still around - if I’m not mistaken it’s the oldest surviving Linux distribution? I moved away from it a long time ago, but I can still remember downloading version 2.1 onto 5.25” disks back in 1994…
I discovered, read and enjoyed this paper recently, and found that it crystallised one of the fundamental design axioms for me into a way that stuck in my head, so wrote it up briefly here yesterday: Unix tooling - join, don’t extend.
Skimming quickly through this, it seems to me that it must be quite an old article, because the workstation RAM it mentions is in the range of 256MB to 1GB. However, I didn’t see the date of publication.
Yeah, was thinking the same. I can see the pkgsrc documentation was accessed in October 2004 so I guess the document must be from around that timeframe?
Update: Ah, found this related presentation by Jan Schaumann and it’s from EuroBSDCon 2004.
The real question is how well it all holds up two decades later. Is that system even still in use? I wonder.
No, I don’t believe so - the iPhone predates this activity by a few years (the iPhone was announced in January 2007, this paper is from 2012 although the work was done when Snow Leopard was still in development, so 2008/2009). It may well have assisted in the port of macOS to ARM though - the author later went to work for Apple after he graduated.
Glad to see work is progressing on enhancing ZFS support - for so long NetBSD support was stagnating a bit.
Thanks, enjoyed reading that. For those looking for a cheaper option, Mellanox is another option - second hand hardware can be found cheaply on eBay. Look for ConnectX-2 and ConnectX-3 cards, which support both 40Gbps InfiniBand and 10Gbps Ethernet.
I love Fantasque Sans Mono; it’s so damn cheerful and twee every time I look at a terminal or editor.
l and I
Perhaps I’m missing something, but if I type them in the code sample input box on Compute Cuter (selecting Fantasque Sans Mono) they look different to me?
I also see clearly identifiable glyphs for each when I try that. The I has top and bottom serifs, the l has a leftward head and rightward tail (don’t know what you call em), and only the | is just a line.
Honestly, when is that ever a real issue? You’ve got syntax highlighting, spellcheck, and reference checking; even a bad typist wouldn’t accidentally press the wrong key; you know to use mostly meaningful variable names; and you’d never use l as an index variable… So maybe if you’re copying base64 data manually, but why would you?
My friend whose name is Iurii started spelling his name with all-lowercase letters because people called him Lurii. Fonts that make those indistinguishable even in lowercase would strip him of his last-resort measure to get people to read his name correctly. (Of course, spelling it Yuriy solves the issue, but Iurii is how his name is written in his id documents, so it’s not always an option)
It could be, and it’s not just limited to I, l, and 1. That’s why in C, when I have a long integer literal, I postfix it with ‘L’: 1234L stands out more easily than 1234l. And if I have to write an unsigned long literal, I use a lowercase ‘u’: 5123123545uL, where the ‘u’ stands out, compared to 5123123545UL.
Since I need to run a lot of x86_64 VMs, the M1 isn’t for me yet…but if and when it becomes a viable thing for me, my main concern is going to be thermals. I have a 2019 16” MBP and it just runs hot when plugged into an external monitor. The area above the touchbar is painfully hot to the touch and the fans go full blast at the slightest provocation.
I’d like something that isn’t going to melt and doesn’t sound like a jet taking off when I provide a moderate workload…
It’s a bit of a painful one to work through, but this thread on the MacRumors forums has some hints on how to solve the excessive heat when using an external monitor.
Do you need to run the VMs locally? Since we’re all in working-from-home mode, I’ve got my work laptop as my primary machine, but any engineering work is done on a remote VM (either on my work desktop, which is a nice 10-core Xeon, or in the cloud). I basically need a web browser, Office, and a terminal on my laptop. Neither my MacBook Pro nor my Surface Book 2 have particularly great battery life running VMs, so I tend to shut them down when I’m mobile, meaning that the places I’d actually run them are even more restricted than the places where I can connect to a remote VM.
Unfortunately, yeah. The product we ship is itself sometimes shipped as a VM image, and being able to build and tinker locally is a huge timesaver. Maybe in the future, when we’re doing more business in the cloud, it will be different but until then, I’m pretty stuck on x86_64 hardware.
Genuine question, why does it need to be local? I use entr and rsync to keep remote systems in sync on file saves/changes, and set up emacs to just run stuff remotely for compile-command. Works a treat and the laptop never gets warm.
This lets you edit locally but “do crap” remotely, cloud or not. In my case the “cloud” is really just a bunch of servers I rsync to and then run stuff on via ssh. Yeah, you could use Syncthing etc… but damned if I’m going to sync my entire git checkout and .git tree when all I need is a few k of source that rsync can keep in sync easily if prodded.
I mean, it’s not a law of the Universe or anything, but it’s significantly easier. These images are fairly huge and while we do have a beefy shared system running test VMs, it’s loaded down enough that there’s not a lot of spare capacity to run 4-6 VMs per user so that system is used for testing only.
And then, finally, there’s the issue that I don’t want to have to have two machines when I can make do with one. :)
Gotcha, just curious. I try to keep VMs off my laptop in general if I can get away with it. Lets me just chill in coffee shops (back when that was a thing) longer on battery power alone.
My goal is generally: be kind to the battery. Plus, with tmux+mosh I can have the remote machine run stuff, then move locations and have things happen in the background. But if resource constraints are the issue, that makes more sense.
I was hoping for how he got the touchpad working properly. I have to use an external keyboard and mouse to dual boot properly. I installed Ubuntu 20.04 in June. If I can get past the hardware compatibility issues, I will switch completely. Highly recommended if you don’t mind the external hardware dependencies. Btw, it is so, so much faster than native macOS. It is insanely fast. Everything is instant. I didn’t realize all of the sluggishness on my 2019 16” MacBook Pro until I used Ubuntu for 2 seconds.
I’m pleasantly surprised to read that the 16” MBP is reasonably well supported by Ubuntu. According to this list, the touchpad should work, you just need to apply some patches.
Thanks, I will take a look at applying those patches, when I did it earlier this summer I don’t think a few of the patches worked at all. Worth trying again.
I wonder what they’re going to do for Mac Pro class of hardware.
Trying to fit more RAM, more CPU cores, and a beefier GPU all into one package doesn’t seem realistic, especially since a one-size-fits-all chip isn’t going to make sense for all kinds of pro uses.
It’s going to be interesting to see what they do (if anything at all) with a 250W TDP or so. Like, 64? 128? 196 cores? I’m also interested in seeing how they scale their GPU cores up.
Darwin does support the trashcan Mac Pros, right? They have two CPUs, and that’s a bona fide NUMA system.
Trashcan Mac Pros (MacPro6,1) are single CPU - it’s the earlier “cheesegrater” (MacPro1,1-5,1) that are dual CPU. I do believe they are NUMA - similar-era x86 servers certainly are.
Trying to fit more RAM, more CPU cores, and a beefier GPU all into one package doesn’t seem realistic
I have heard this repeatedly from various people – but I don’t have any idea why this would be the case. Is there an inherent limit on SoC package size?
I’d assume they’ll just support non-integrated RAM - as there will be space and cooling available.
Great news. Rather curiously I see there’s an SGI release - I thought the SGI port was discontinued after 6.7 (the port page still says as much)?
I just posted the somewhat related FreeBSD - a lesson in poor defaults, which is more technical.