“The Gang Builds a Mainframe”
Ha ha! I don’t think the mainframe is really a good analogue for what we’re doing (commodity silicon, all open source SW and open source FW, etc.) – but that nonetheless is really very funny.
It makes you wonder what makes a mainframe a mainframe. Is it architecture? Reliability? Single-image scale-up?
I had always assumed it was the extreme litigiousness of the manufacturer!
Channel-based IO with highly programmable controllers and an inability to understand that some lines have more than 80 characters.
I think the overwhelming focus of modern z/OS on “everyone just has a recursive hierarchy of VMs” would also be a really central concept, as would the ability to cleanly enforce/support that in hardware. (I know you can technically do that on modern ARM and amd64 CPUs, but the virtualization architecture isn’t quite set up the same way, IMVHO.)
I remember reading a story from back in the days when “Virtual Machine” specifically meant IBM VM. They wanted to see how deeply they could nest things, and so the system operator recursively IPL’d more and more machines and watched as the command prompt changed as it got deeper (the character used for the command prompt would indicate how deeply nested you were).
Then as they shut down the nested VMs, they accidentally shut down one machine too many…
This sounds like the plot of a sci-fi short story.
…and overhead, without any fuss, the stars were going out.
I’d go with reliability + scale-up. I’ve heard there’s support for like, fully redundant CPUs and RAM. That is very unique compared to our commodity/cloud world.
If you’re interested in that sort of thing, you might like to read up on HP’s (née Tandem’s) NonStop line. Basically at least two of everything.
Architecture. I’ve never actually touched a mainframe computer, so grain of salt here, but I once heard the difference described this way:
Nearly all modern computers from the $5 Raspberry Pi Zero on up to the beefiest x86 and ARM enterprise-grade servers you can buy today are classified as microcomputers. A microcomputer is built around one or more CPUs manufactured as an integrated circuit. This CPU has a static bus that connects the CPU to all other components.
A mainframe, however, is built around the bus. This not only allows the hardware itself to be somewhat configurable per job (pick your number of CPUs, amount of RAM, etc.), but mainframes were also built to handle batch data processing jobs and have always handily beaten mini- and microcomputers in terms of raw I/O speed and storage capability. A whole lot of the things we take for granted today were born on the mainframe: virtualization, timesharing, fully redundant hardware, and so on. The bus-oriented design also means they have always scaled well.
Yeah, but can it run Doom?
Doom needed four floppies. Doom 2 took 5.
Perhaps it could be loaded into memory from a DAT cassette.
Most of the Doom data is game assets and levels though, I think? You might be able to squeeze the game engine and a small custom level in 400k.
Yes it is. But the engine itself is about 700k. The smallest engine (from the earliest releases) was a little over 500k. You could probably build a smaller one with modern techniques and a focus on size, though.
Good news, ADoom for Amiga is only 428k! Bad news, Amigas only have double density FDDs so you only have 452k for the rest of the distro.
I’ve been using Linux on the desktop on a daily basis for over 20 years and fully agree with most of the author’s other points. Ever since GNOME 3 was first released, it feels to me like the devs have been aiming to make GNOME a tablet-like experience on the desktop, which is not something I ever wanted. Improvements to the experience seem to be entirely experimental and without direction or cohesive vision. Pleas for options and settings to make GNOME behave similarly to conventional (read: tried and true) desktop idioms fall on deaf ears.
I stuck with XFCE for a long time, but for the past couple of years tried seriously to make peace with GNOME. First it was Ubuntu’s take, which I found palatable with a handful of extensions. I then switched over to Pop OS but again had to plaster over the deficiencies with extensions. Having to wrangle extensions just to get a usable desktop isn’t my idea of a good time.
Earlier this week I decided to give KDE (or is it “Plasma” now?) another try and have so far been pretty happy with it. It seems like they have recently stripped back a lot of the fluff while being able to keep all of the customization that I require. I gave up on KDE in the past due to outright buggy behavior and crashes in common workflows but haven’t hit anything serious yet. Crossing my fingers that it stays that way for a while.
A lot of the criticism I see leveled against Gnome centers around the way they often make changes that impact the UX but don’t allow long-time users to opt-out of these new changes.
The biggest example for me: There was a change to the nautilus file manager where it used to be you could press a key with a file explorer window open, and it would jump to the 1st file in the currently open folder whose name starts with that letter or number. They changed it so it opens up a search instead when you start typing. The “select 1st file” behaviour is (was??) standard behavior in Mac OS / Windows for many many years, so it seemed a bit odd to me that they would change it. It seemed crazy to me that they would change it without making it a configurable option, and it seemed downright Dark Triad of them that they would make that change, not let users choose, and then lock / delete all the issue threads where people complained about it.
It got to the point where people who cared, myself included, started maintaining a fork of nautilus that had the old behavior patched in, and using that instead.
What’s stopping people who hate the new & seemingly “sadistic” features of gnome from simply forking it? Most of the “annoying” changes, at least from a non-developer desktop user’s perspective, are relatively surface level & easy to patch.
Wow, I thought I was the only one who thought that behavior was crazy. Since the early 90’s, my workflow for saving files was: “find the dir I want to save the file in,” then “type in the name.” In GNOME (or GTK?) the file dialog forces me to reverse that workflow, or punishes me with extra mouse clicks to focus the correct field.
I have never wanted to use a search field when trying to save a file.
Some fun discussion and critique on the osdev forums
Wow, that thread is cancer.
To me it just looks like 99% of all Internet threads where two or more people hold slightly differing positions and are just reading past each other in an effort to be right. At least there’s a good amount of technical discussion in there; as flame wars go, this is pretty mild.
I’m not very active in this space anymore but my impression is that most people moving on from the “make an LED blink” level of expertise end up on the Arduino VSCode extension (https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-arduino) or PlatformIO (https://platformio.org/).
This problem, and others, can be solved by using official distribution images instead of those provided by the Raspberry Pi project. I’m using the official Fedora 33 ARM64 (aarch64) image for example, works perfectly on my Raspberry Pi 3B+ and has the exact same packages (including kernel!) as the x86_64 version of Fedora.
Do the distro-originated images come with all the same raspberry-pi configuration, hardware drivers, and firmware gubbins as Raspbian? That’s the main reason I run Raspbian, aside from it having more or less guaranteed support when things break and I need to do some googlin’ to fix it.
Generally speaking? No.
Raspbian is the only distro that provides truly first class support for the pi’s hardware.
Graphics support is becoming more widespread at least, and there are bits and bobs of work happening in various distros.
But from what I’ve seen most distros are optimizing for a good desktop experience on the pi.
At least on Fedora you get a kernel very close to upstream Linux, also for the Pi, so no crazy stuff, and everything I use works out of the box (LAN, WiFi). That is also the reason why the Raspberry Pi 4, for example, still doesn’t work in Fedora; it requires more stuff to be properly upstreamed: https://github.com/lategoodbye/rpi-zero/issues/43
I hadn’t realized there was an 8GB Pi 4 – the announcement post notes the SoC could support up to 16GB, and the limit is how much they can get in a single package. An 8GB option for the keyboard-shaped Pi 400 (for a few more bucks, of course) would be interesting too.
I’ve had an 8 GB version since spring/summer at least and it’s a wonderful cheap low-power Linux box for a variety of development duties and experiments. Main downside is that it absolutely needs a decent heat sink if you’re doing anything CPU-intensive or else the CPU speed gets throttled.
Unrelated to the news, but LWN content is nearly impossible to read on a phone…very annoying
I think it’s mostly because it’s an email. Otherwise it’s usually OK (not great).
I wonder if they would accept help to fix that.
I think it’d be hard trying to intelligently format a 72-column fixed plain text email into something that isn’t a VT100. It’d probably be easier if it was rich text (or at least designed to reflow) in the first place.
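To illustrate why that’s hard, here is a minimal Python sketch of the naive approach (treat blank-line-separated chunks as paragraphs, join, and re-wrap). It handles plain prose, but would mangle exactly the things a 72-column technical email tends to contain:

```python
import textwrap

def reflow(text: str, width: int = 72) -> str:
    """Naively reflow fixed-width plain text for a narrow screen:
    treat blank-line-separated chunks as paragraphs, join each one
    into a single logical line, then re-wrap to the target width."""
    out = []
    for para in text.split("\n\n"):
        joined = " ".join(line.strip() for line in para.splitlines())
        out.append(textwrap.fill(joined, width=width))
    return "\n\n".join(out)

# This breaks on anything that is *not* prose: indented code, diffs,
# ASCII tables, and quoted replies all get smashed into one paragraph,
# which is why reflowing 72-column email well needs real heuristics.
```

That last comment is the whole problem: distinguishing prose from preformatted content in plain text is guesswork, whereas rich text (or reflow-friendly source) carries that information explicitly.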
I’m using wallabag to bookmark the content and read it on my phone, usually much later. I also think that LWN works OK with Firefox’s reader view.
Thanks for the suggestion. I will give it a try although I’m using Firefox less frequently these days
Not sure which phone you have but mine is able to display the original article just fine in horizontal mode. Or in either orientation with Firefox reader view.
You can always switch to the desktop version.
My experience with Bash is: “avoid it at all costs”. Unless you are writing very OS-specific stuff, you should always avoid writing Bash.
Bash efficiency is a fallacy; it never actually pays off. Bash is sticky: it will stay with you until it transforms into a big black hole of tech debt. It should never be used in a real software project.
After years of Bash dependency we realized that it was the biggest point of pain for old and new developers in the team. Right now Bash is not allowed and new patches introducing new lines of Bash need to delete more than what they introduce.
Never use Bash, never learn to write Bash. Keep away from it.
What do you use instead?
Python. Let me elaborate a little bit more.
We are a Docker/Kubernetes shop. We started building containers with the usual docker build/tag/push, plus a test in between. We had one image, and one shell script did the trick.
We added a new image, and the previous one gained a parameter that lived in a JSON file and was extracted using jq (first dependency added). Now we had a loop with two images being built, tested, and pushed.
We added one more stage: “release”. The Docker flow was now build, tag, push, test, tag, push (to release). And we added another image; the previous images gained more parameters; something was curled from the public internet, with the response piped into jq. A version docker build-arg was added to all of the images, and this version was some sort of git describe.
Two years later, the image building and testing process was a disaster. Impossible to maintain, all errors caught only after the images were released, and the logic to build the ~10 different image types was spread across multiple shell scripts, CI environment definitions, and docker build-args. The images required a very strict order of operations to build: first run script build, then run script x, then tag something… etc.
Worst of all, we had this environment almost completely replicated to be able to build images locally (when building something in your own workstation) and remotely in the CI environment.
Right before the collapse, I requested to management 5 weeks to fix this monstrosity.
I did this in Python 3.7 in just a few weeks. The most difficult part was migrating the old, tightly coupled shell-scripting-based solution to the new one. Once this migration was done, the difference was night and day.
Anyway, I’m pretty sure none of this could have been achieved with Bash.
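For what it’s worth, the shape of such a rewrite tends to look something like this minimal sketch. The params.json layout, image names, and stage list here are my own assumptions for illustration, not the actual tool described above:

```python
import json
import subprocess
from dataclasses import dataclass

@dataclass
class Image:
    name: str
    build_args: dict

def load_images(path: str = "params.json") -> list:
    # Replaces the jq-in-shell step: read every per-image parameter once,
    # in one place, instead of scattering jq calls across scripts.
    with open(path) as f:
        spec = json.load(f)
    return [Image(name, args) for name, args in spec["images"].items()]

def build_command(image: Image, version: str) -> list:
    # Building the argv as plain data makes it testable without Docker.
    args = [f"--build-arg={k}={v}" for k, v in sorted(image.build_args.items())]
    return ["docker", "build", f"--build-arg=VERSION={version}",
            *args, "-t", f"{image.name}:{version}", "."]

def pipeline(version: str) -> None:
    for image in load_images():
        # check=True fails the whole pipeline fast instead of letting a
        # broken image slip through to the release stage.
        subprocess.run(build_command(image, version), check=True)
        # test / tag / push stages would follow here as plain functions
```

The point isn’t that Python is magic; it’s that the image list, build arguments, and stage ordering become ordinary data and functions you can unit-test, instead of implicit state threaded through shell scripts and CI variables.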
It sounds to me like your image pipeline was garbage, not the tool used to build it.
I’ve been writing tools in bash for decades, and all of them still run just fine. Can’t say the same for all the Python code, now that version 2 is officially EOL.
bash 3 broke a load of bash 2 scripts. This was long enough ago that it’s been largely forgotten.
I agree with you, the image pipeline was garbage, and that was our responsibility of course. We can write the same garbage in Python no doubt.
Bash, however, definitely does not encourage proper software engineering, and it makes software impossible to maintain.
I can confirm this. Roughly two years ago I had to replace a whole build system written in Bash with CMake, and Bash still contaminates many places it should not be involved in, with zero tests.
Negatively: Drinking when I got stressed. Now I drink all the time, to the point where it’s an unreasonable portion of my outgoing expenditure and I’ll usually pour myself something to take the edge off before standup. If I could offer any advice to anyone reading; please only drink alcohol during fun, social occasions.
When I was a cable guy, the only outlet I had was drinking. 4 out of 5 mornings I had a hangover, was still buzzed, or even drunk. My (horrible, universally hated) boss reprimanded me for it multiple times a month. The only thing that stopped me was quitting that job in June.
With some help (a week in the hospital and a lung injury), I’ve also quit smoking cigarettes and avoid nicotine. I now have a very nice and infinitely more affordable green tea habit.
I still drink, but I avoid keeping liquor around, and I’ve ceased my habit of staying drunk or getting shitfaced regularly. Stress kills, folks.
Thanks for sharing. I think avoiding keeping liquor around is a good point I hadn’t really considered, by now it’s part of the furniture. Maybe I’ll give my liquor shelf to my parents.
A relative taught me these rules when I was a kid:
Works for me.
I’ve heard these rules a couple of times, and, to me, they always sound patronizing. It feels on par with telling an addict to “just stop”. How can the advice work when you want to drink on an empty stomach, alone, and for the effect, and it’s out of your control?
These aren’t guidelines for an alcoholic, they’re guidelines to prevent one from becoming an alcoholic.
Sorry, I realized my first comment was a little intense.
I understand this. I just don’t think they’re very good guidelines – they’re more of a description of “patterns of people who aren’t alcoholics”. I think what makes someone an alcoholic is a very complex, and often genetic, thing. For some, these rules are essentially impossible to follow from the get-go. Additionally, someone can choose to break all these rules all the time and still not become an alcoholic.
I get your point, but if it’s genetic, then a list of rules won’t make a difference one way or the other.
After the click-baity title I expected something slightly more interesting and more numerous.
It was about a single meaning, not a myriad of meanings:
A command named pwd (also having a shell builtin of the same name, for performance reasons).
I expected to learn of new things under the same name, to get useful knowledge of possible pitfalls where having a preconception of what PWD would represent would cause me trouble.
I agree. I don’t like to poo-poo other people’s work in general, but I’m not sure I can think of a reason it matters whether the p in pwd means “print” or “present”, or why one would want to lobby for one or the other, since they are, at a very practical level, exactly the same thing.
If I were going to lobby for one or the other, it would make more sense to default to “print”, since early Unix terminals were literally teletype machines. As a result, both C and Unix have long used the verb “print” to mean display, present, echo, show (or any other synonym you care to think of) data to the user or stdout/stderr.
But I guess the thing that disappointed me the most was the author’s attempt to discredit the Wikipedia source by quoting the man page. The article says “there are actually zero references to pwd being short for ‘print working directory’” and yet right there in the screenshot the man page literally says, “prints the pathname of the working (current) directory”. Yes, you have to remove an “of” and a couple of “the’s” but it says so right there!
(Author here) - Oh dear, that’s not good, sorry to have disappointed you. But while you aren’t sure you can think of a reason it matters enough to write a little post about it … I could think of one - my growing fascination with Unix history and how things grow, merge, and fork. And so I wrote it. Moreover, I wasn’t lobbying for one or the other, as you can see from the content of the post.
Regarding your last point, I guess it’s down to how literally one interprets the written word. For me, if one is “looking for” evidence that it means “print working directory”, one can find it indirectly in the man page. But I was looking for something concrete and explicit (hence the quotes), and it wasn’t there.
Anyway, I still think it’s an interesting topic, but I know that not everyone will agree, and that’s more than OK. Thanks!
(Author here) - I’m sorry you considered the title “click-baity”, that wasn’t my intention at all. Perhaps the use of “myriad” was a little extreme, but I thought having at least 4 potential meanings for such a “lowly” command as pwd warranted at least some adjective, and I decided to allow myself some breadth in expression.
I really want to run Fedora but 25 years of dpkg & apt are hard to get over. Maybe I’ll try it again when I next get a new laptop. But I’m just so comfortable on Debian…
I just switched this year, after a few years of debian. It’s probably not the same experience, but for basic things, dnf is practically equivalent to apt. My personal intuition is that it might not be worth it, unless you’re also interested in GNOME (strictly speaking, the Fedora spins aren’t real Fedora releases, and usually aren’t as polished).
the Fedora spins aren’t real Fedora releases, and usually aren’t as polished
I concur with this sentiment. I’ve been pretty steeped in the Red Hat universe for some time, so I really like Fedora on the systems I have to touch most. As an experiment, I tried the KDE spin for a year. It was OK, but it had lots of paper cuts that the standard workstation edition just doesn’t have. They’re generally very minor, like needing to use the command line for firmware updates instead of getting alerted to them by the system tooling. Since I was mostly in KDE for kwin-tiling and a few other things that are much less integrated than that, I switched back to the standard workstation edition once Pop Shell shipped and became easy to integrate with the standard Fedora GNOME installation.
My personal intuition is that it might not be worth it, unless you’re also interested in GNOME
To me, the most interesting subproject of Fedora, even though it may not be ready for wide use yet, is Silverblue. Having an immutable base system with atomic upgrades/rollbacks is really nice. This really sets it apart from other Linux distributions (outside NixOS/Guix). Sure, Ubuntu is trying to offer something similar by doing ZFS snapshots on APT operations, but that looks like another hack piled upon other hacks, rather than a proper re-engineering.
Then again, I haven’t heard good things about trying to use Silverblue with XFCE or other WMs.
I like Debian and I’ll run it on servers but for desktop use, I want things to Just Work out of the box. My experience with Debian on the desktop is that you have to know all the packages you need in order to get the same out-of-the-box experience as Ubuntu or Fedora. At least, that’s what it was like when I tried Debian with XFCE.
You might also be interested in PopOS and Linux Mint, both of which are based on Ubuntu but strip out most of the annoyances like snapd.
I’d like to add that the rpm command is similar to dpkg. For example:
dpkg -l → rpm -qa
A couple years ago I went through a distro jumping phase. Fedora worked fine but I didn’t find any particular advantages of running it over - say - running Ubuntu. The one thing setting it apart from other distros was Wayland as default.
I ended up on Manjaro, and it’s been a breath of fresh air: most software is a click away (thanks AUR!), things just work out of the box and in general their configuration of Plasma and Gnome feel snappier than Fedora and Ubuntu.
The one thing setting it apart from other distros was Wayland as default.
The one thing setting Fedora apart from other distros is that you often get bleeding-edge stuff as the default. Most of the time it works out super.
You are not wrong. What I meant was on the ‘experience’ front. Most of the time - if I’m lucky and the hardware obliges - I don’t bother remembering what version of the kernel, Mesa, etc. I am using, so being on the bleeding edge doesn’t introduce a lot of advantages.
BTW, the last time I tried Fedora was when Intel introduced their Iris driver and I wanted to see if it’d improve the sluggish performance I was experiencing on Gnome.
it occurs to me that for many of us infoslaves (i’m mostly kidding with that term), with the new normal of possibly being remote permanently, a desktop rig makes more sense right now for price/performance. unless you’re on a mac.
My workplace only issues laptops to employees, the main reason being that most of us don’t want to be chained to our desks all day long. Being able to bring your laptop into a meeting is a huge advantage, and until recent events, lots of us would spend at least half the day working from random places in the building.
I have a laptop at home that is docked most of the time but when I want to take it downstairs and work on the couch just to be in the same room as my wife, then I’m very happy to have it.
Desktops have always been cheaper price/performance wise. But they also take up more space, consume more power, and are generally louder. (This doesn’t hold for small form-factor boxes, but those tend to be priced similarly to laptops.)
unless you’re on a mac.
In which case we assume price was never much of a factor ;)
Yep - that’s my experience. My desktop has more ram and less thermal throttling than a similar price laptop. When I don’t need portability (and that has been the case for some months and will be so for some more), it’s a huge win.
This seems to be desktop vs laptop? I’ve never understood why anyone buys a laptop to put on a desk, plug into the wall, and leave there 98% of the time. It’s just a more expensive and less flexible desktop at that point.
I’ve had that setup a few times. Usually for minimalism purposes - if I don’t need the computational power of a desktop, a single laptop (+ charger) makes for less clutter. And it’s still good that 2% of the time you want to take it somewhere, you can do so easily (without owning a desktop + laptop).
I don’t think many people with a laptop have it plugged in or docked 98% of the time. For those that do, maybe it’s extremely valuable to have a laptop for that other 2% of the time.
My laptop is docked at my desk all day but when I go on a trip, I just grab the thing, I don’t have to worry about whether my work is copied over to it or pushed up into the cloud. I’m not a gamer or a bitcoin miner so I don’t need a ridiculous CPU or GPU, or water cooling, or colored case lights come to think of it.
My last “desktop” computer currently sits unplugged under my desk. I haven’t gotten rid of it because it makes an excellent foot rest.
I took my laptop home from the office when on call. Barring that work requirement, I’d happily live without a laptop these days.
Ergonomics is another reason to use desktop computers. If you actually care about looking at a monitor at the correct height and typing on an input device that won’t kill your wrists, desktops make a lot more sense. The laptops I use at work are just really crappy portable desktops, at least the way I use them.
Yeah, due to my history w/ RSI, using a laptop for any extended duration (> 2 hours or so) is really not viable. When you give up the goal of “mobile computing”, it really stops making sense to have a laptop. I have one that I bring with me on work trips and whatnot (granted, those won’t be happening for a while). My desktop was cheap to build, is incredibly powerful (which is great when working in compiled environments), and is upgradeable at actual consumer prices. As you mentioned, I also invested in building a desktop setup that is ergonomic and comfortable. The whole thing (desktop, peripherals, monitor, desk) was less than the price of a premium MacBook.
I think laptops are great and have an important place for a majority of users, but it’s worth raising that the alternatives are real and viable.
Most laptops have a way to plug in an external monitor, keyboard, and mouse. Then your desktop computer and your portable computer are the same thing.
In fact, despite being a computer nerd, I decided years ago that I would probably not buy another desktop computer. They take up too much space; they are loud, power-hungry space heaters and can’t be easily shoved into a backpack in one second. The only thing that would have kept me from moving in this direction is the expandability of the typical tower. But these days, practically all accessories are USB. And I’m not a gamer or bitcoin miner, so I don’t need a high-end CPU or GPU with liquid cooling.
I don’t understand why the author styled this site so that the text is ABSOLUTELY GINORMOUS. This makes it very hard to read. Thankfully browsers these days (for now) still allow users to adjust this relatively easily.
But aside from that, my impression is that the author doesn’t seem to understand what UEFI is. I mean, it has its problems (the main one being that it’s not much of a spec when pretty much all of it is optional), but it’s light-years better than the legacy IBM PC BIOS from the 8088 days, which is at least two decades past retirement. All modern OSes can deal with UEFI, Linux better than most. It doesn’t limit what you can do with your computer.
And if the author thinks UEFI isn’t already the norm outside the laptop form-factor, he’s in for a rude surprise…
My site’s at https://jacky.wtf - definitely curious.
Linux/Desktop (1920x1080, 16:9)/Chromium with uBlock
When I first open the site, I find it a bit hard to orient myself. Parts of the site fill the entire screen (“I stand in solidarity with …”), others are centered (the header, the second and third paragraphs), while the bottom is right-aligned. And the footer is a bit hard to identify. The varying font sizes are hard to follow.
Also, at least on my screen, the picture of you (I assume) is cut in the middle, but that’s unavoidable.
The Blog and Posts pages are easier to grasp, but appear a bit too narrow on my screen, using maybe only 1/4 to 1/5 of the horizontal screen space.
I dislike political stuff so I closed the tab right after the font loaded, which took about five seconds (4g, Netherlands).
Since you brought it up, this seems appropriate.
In the context of user-generated online content like comments sections, forums, or social media, “politics” is a euphemism for unproductive arguments about controversial subjects. In communities dedicated to a specific topic or purpose, political discussion tends to distract, disrupt, and divide the community. Which is why any healthy online community typically has rules against off-topic content.
I completely understand the desire to get one’s own beliefs across to a larger audience but no one has ever been swayed by a flame war, which is what 99% of all political discussions devolve into. Politics are usually a waste of everyone’s time–the participants, the readers, and the community moderators.
I would posit that those who seem to be the most passionate about their chosen political cause online are being the least effective at making whatever change they wish to see in society. Whatever time you spend proselytizing your beliefs online is time not being spent taking real-world action. (And no, “spreading awareness” is not taking real-world action. And neither is retweeting, while we’re at it.) This may include volunteering for (or starting) a non-profit, donating your money to organizations supporting your cause, writing a book with good science and well-reasoned points, or speaking directly with political leaders who have the power to change whatever it is you want to change.
It is fine to have strong personal beliefs. But habitually arguing against others is very bad for mental health in a variety of ways. The more time you spend espousing and defending your personal views, the less receptive you become to any potential evidence that your beliefs may not have as much merit as you thought. You become less in tune to the subtleties of reality–nearly all social problems are shades of gray, we only see them in black and white because it feels cleaner that way, even though it creates a useless mental model of the world. Even worse, you start to categorize others around you as either with you or against you and your relationships with friends and family who may (or even may not) agree with your beliefs will suffer. Ask me how I know.
Source: I have been a member of varied and numerous Internet communities since 1996.
That’s his site; it’s not like he’s got it as a kind of forum signature below each of his posts here. You’re completely overblowing the “aggressiveness” (for lack of a better word) with which Jacky presents his opinions, and at the same time inciting a flamewar yourself with your “there’s always two sides to the story”, as if seeing everything in one color (gray) is any saner a mental model than seeing it in two colors (black/white). And if you feel like correcting my words from “gray” to “shades of gray”, reconsider whether it really makes any difference at all for the things you do, and also whether you’re perhaps a bit too proud of your analysis paralysis. Not that I’d disagree that arguing on the internet is a waste of time, but you and I end up doing it anyway regardless of our ideological differences, and your argument goes way beyond that too.
I can think of plenty of valid reasons besides this to avoid politics, from it being a trigger to just unpleasant for other reasons, and assuming this of the commenter is just uncharitable.
That’s pretty much the same reason as called out by the tweet, it’s “just” unpleasant and not something that affects your life.
assuming this of the commenter is just uncharitable
It’s fine to not get involved in things (because it eats away at your attention span or distracts you from the good you could actually do over doomscrolling, something about mental/personal health, or a million other reasons) but what I usually found missing in people who invoke this “I don’t try to get involved” phrase is earnest reflection over the why (Is it protection or just laziness?) and the consequences of them not getting involved (what if the majority of people try to stay away from politics?)
The first blog-post of the person is “I enforced the AGPL on my code, here’s how it went”. So it’s more of a “I don’t care about your politics, but F/OSS politics is completely fine!”. I think it’s a completely charitable interpretation of the comment.
I like the simplicity of the design.
I like the call to action items at the bottom.
In Firefox on Android there is a left/right scroll which seems unintentional.
I like yours a lot. I think on mobile the blog posts page looks a little bit funny because of relative font sizing though.
Yeah, I definitely need to fix that - thank you!
I think your main page would benefit from some margins around the center. Activism seems to be a main theme (which is fine, but probably not what tech enthusiasts and prospective employers are interested in – it might be a problem for some but you probably realized this and decided that if it is a problem you probably don’t want them on your site anyway). The ‘posts’ link doesn’t work for me, it links to https://v2.jacky.wtf/stream. High contrast is a bit much.
I’m curious, do you usually/often/at all have browsers open in fullscreen?
I often do, and while lobste.rs has too much whitespace on the sides on this 27″ display, it’s fine. But when I open your website, it flows across the whole screen - and when the fold goes from black to white background, that’s a little stark. When I 50/50 split (which I often do), it’s completely fine.
I actually never put any Web browser window explicitly in fullscreen (a productivity hack for me). I can def make that color change!
I feel it depends a little on which machine I am on, interestingly - but I often have JIRA open in fullscreen and I happened to stumble upon this thread in my lunch break, so the browser was in work mode.
I don’t think it’s a huge thing because most people probably don’t run a website fullscreen on 2560x - but I sometimes do…
It’s a bit weird on mobile. Generally looks pretty nice though.
Bit of a style clash between the top half of the page and the bottom half though.
P.S. I just gotta say I appreciate the banner at the top. But it seemed a little odd it required me to enable JS for it to show up.
Yeah, I have to switch it up for mobile.
The blue on black links at the top (“Streaming Schedule – Blog – Posts – Library”) are rather hard to read for me; actually I missed them first time ‘round. Personally I’d make them a different colour and/or larger.
I have a useless horizontal scroll on Safari/iOS 13.
Scroll is a bit broken on iOS/Safari but a lot of websites are also affected by this minor issue. Users can deal with it.
(In fact the issue comes from the PGP key in the footer: it doesn’t break on narrow displays)
I swear I saw your site posted somewhere else within the last week.
That’s wild, lol.
Too many style shifts?
I opened your link and saw really big font. Then I clicked on “blog” which led to small font and grey background. Then pressed on one of the titles and saw a “medium” style article.
I think it might be more helpful if you explain to visitors who you are first and the things you support second. I think that ordering would also lend better support to your political views and endorsements.
I don’t find it incomprehensibly idiotic to allow a whole class of web applications to be possible. Any native app can “fuck with the OS clipboard” like this, so this theoretical attack can come from anywhere. Shells should (and, in some cases, seem to) be smart enough to not auto-execute on paste without the user’s confirmation or permission.
The web is a relatively secure runtime for a lot of software that covers a lot of real needs for a lot of people.
That seems a bit unnecessarily inflammatory. “Click here to copy this text” is a feature that tons of people use and enjoy.
I like bash and use it daily, even though I know there are other more fully-featured Bourne-ish shells out there.
However, when I have a trivial script to write, or something that I think will have to run on multiple operating systems, I try to write in /bin/sh in an effort to be kind to my future self and others who might have to maintain what I wrote.
Part of a cross-platform build system is exactly the right place to be using /bin/sh. If any of the build script would materially benefit from bashisms, it would imply to me that it is something best implemented in the “main” build system logic (make, or whatever Go uses).
I’m genuinely curious about this approach. Isn’t bash ported to and running on so many systems that it’s practically unavoidable? In addition, isn’t its package quite small? In other words, what is there to gain by writing something in /bin/sh, which is sometimes implemented differently on different OSes?
Bash is very portable, yes, but it isn’t installed by default on all unix-likes and derivatives, especially BSDs (like OpenBSD), commercial Unices (like AIX), or specialized Linux distributions that might only have busybox, for example. These days, /bin/sh is the lowest common denominator Unix shell. (Although if you go back far enough, that wasn’t always the case.)
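To make the “write for /bin/sh” idea concrete, here’s a small sketch (the snippets and variable names are my own made-up examples, not from anyone’s actual build scripts) contrasting two common bashisms with POSIX equivalents that run under dash, busybox sh, and friends:

```shell
#!/bin/sh
# bashism: [[ $name == foo* ]]  ->  POSIX: a case statement with a glob
name="foobar"
case "$name" in
  foo*) match=yes ;;
  *)    match=no ;;
esac
echo "$match"

# bashism: ${name^^} (uppercase expansion)  ->  POSIX: pipe through tr
upper=$(printf '%s' "$name" | tr '[:lower:]' '[:upper:]')
echo "$upper"
```

Neither replacement is longer or harder to read, which is typical: most everyday bashisms have a POSIX spelling that costs almost nothing.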
Bash is lightweight by today’s standards but is still an order of magnitude larger than, say, dash, even before you consider that bash pulls in a couple additional dynamically linked libraries as well:
$ ls -l /bin/bash
-rwxr-xr-x 1 root root 1,183,448 Jun 18 11:44 /bin/bash*
$ ls -l /bin/dash
-rwxr-xr-x 1 root root 129,816 Jul 18 2019 /bin/dash*
For trivial scripts, interactive use, and one-off things, the difference is usually completely negligible, but Debian and Ubuntu switched to dash as the default system shell more than a decade ago because doing so provided a relatively cheap speedup on boot and in other places where shell scripts are invoked frequently, such as cron jobs: https://lists.debian.org/debian-release/2007/07/msg00027.html
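A practical upshot: if /bin/sh might be dash on some target systems, it’s worth smoke-testing your scripts against a strict POSIX shell before shipping. One rough way (the script below is a throwaway example I made up; Debian’s devscripts package also ships a dedicated `checkbashisms` tool for this):

```shell
#!/bin/sh
# Write a trivial demo script, then syntax-check it with sh -n,
# which parses the script without executing it. Note the check is only
# as strict as whatever /bin/sh is on the machine running it.
cat > /tmp/demo.sh <<'EOF'
#!/bin/sh
greeting="hello"
printf '%s\n' "$greeting"
EOF
sh -n /tmp/demo.sh && echo "parses as POSIX sh"
```

Running this on a system where /bin/sh is dash will reject most bashisms at parse time; on a system where /bin/sh is bash it’s much more forgiving, which is exactly why `checkbashisms` exists.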