If there were a way to make sure everyone got the same build times (or to skip builds entirely somehow), an LFS/BLFS speedrun would be awesome!
But I also really like the ‘desert island’ concept. I think it may work even better with some really obscure Unix.
Also, make sure you don’t include bash on the machine; otherwise you can read and write /dev/tcp with a redirection and cat :))
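To illustrate the trick (a minimal self-contained sketch – the port number and the python3 listener are made up just so the demo has a peer; bash itself, not the filesystem, implements /dev/tcp):

```shell
# start a throwaway local TCP listener (python3 is only here to have something to talk to)
python3 -c '
import socket
s = socket.socket()
s.bind(("127.0.0.1", 45123))
s.listen(1)
c, _ = s.accept()
c.sendall(b"hello from the listener\n")
c.close()
' &
sleep 1
# bash interprets /dev/tcp/HOST/PORT inside redirections; no such file exists on disk
exec 3<>/dev/tcp/127.0.0.1/45123
cat <&3    # read the socket with plain cat via the inherited fd
exec 3<&-  # close the connection
```

Note this must run under bash itself – a plain POSIX sh has no /dev/tcp.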
I am writing a new blog in the style of inconsolation, reviewing command-line tools with examples, screenshots, and ideas about how you could put them to use. The first post is up (link below), covering exa, an ls alternative, and I have a backlog of a couple dozen tools that will be reviewed in a similar fashion. I hope the detailed posts about each tool inspire more people to dive into the command line and develop new utilities. I would appreciate feedback on the design, concept, and approach.
<shameless-plug> Just in case you might be interested, some people here and on HN seemed to like my up tool :) </shameless-plug>
Cheers and good luck! :)
That does look neat – a comment on your little demo screencast though: it wasn’t immediately obvious to me that the “you should run this program as super-user” warning was coming from lshw and not up itself; at first I thought it was coming from the latter, which I found rather alarming.
Right, I know what you mean. Would you, by any chance, have an idea for some other flow I could display? It doesn’t have to be ambitious, just easy to reproduce on a typical basic Linux box (so, no nginx logs). Dunno, something from syslog? The thing is, I’m deep enough in the project that I can’t really switch to lateral big-picture thinking about such stuff anymore, for the time being… :/
top or ps come to mind when hunting for a particular process. You can even use the same flow to cut out particular columns.
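For instance, a flow like the following could be built up interactively in up (the process name and column numbers here are just illustrative):

```shell
# hunt for a process by name, then cut out only the PID and command columns;
# the grep '[s]shd' trick keeps grep from matching its own process line
ps aux | grep '[s]shd' | awk '{print $2, $11}'

# the same column-cutting flow works on the full process table
ps aux | awk 'NR > 1 {print $2, $11}' | head -5
```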
Very nice! A colleague of mine suggested up last month and I loved it, nice to see you on here! Thank you for your work!
Thanks for your kind words! :) Really happy to hear it – this confirms it made sense to build the tool, and that I could make your lives a tiny bit easier and more fun. :)
Nice one, thanks!
I can recommend terminalsare.sexy for shell plugins and customization.
Void Linux also has a tradition called Advent of Void where every day from the first to the 24th of December a different tool is suggested. 99% of them are CLI. Thanks to it I found out about ncdu, gopass, and other great software.
I appreciate these links, there’s so much good content here! ncdu is definitely on my list, it’s such a good tool.
Very good work! Also makes my browsers stutter like crazy but that’s sort of expected.
And it looks like this was not made with img2css, I wonder if this was made by hand…?
I do get the impression it was made by hand. The CSS code consists of a whole bunch of box shadows that are distorted to produce a shape, translated to place that shape, and coloured and gradiented. Most parts have a comment saying what part of the picture they produce; you can comment them out, and that part of the picture disappears. The part labeled ‘dots’ is the three spots of light at different places on the rim of the glass; the fact that these were combined into one definition is what makes me think this was coded by hand.
We don’t see 3D objects, we see a flat plane of shapes of various shades. (Leaving our two-eyedness aside for a moment.) Our brains automatically interpret those as a 3D scene. I once read somewhere that one technique to paint or draw is to see through the 3D reconstruction your brain offers you, see past what you think is there, and instead see the shapes and shades that are really hitting your eye and paint those. I think that’s the principle at work here – no raytracing or lighting tricks, but a realistic painting with box shadows as the medium. It’s lovely, and very well done.
Can relate to this. And even if you’re not handling large files or ToC, there is a Vim plugin for everything, and they make everything easier and faster. (Assuming you know how to Vim of course)
Anyway, reading that he’s using Vim in WSL hurt my feelings. As much as WSL is a technical marvel, and maybe even a miracle, it’s slow as hell when translating system calls. The author could just use a native Vim build for Windows, or one in Cygwin, and save even more time…
Well, in case the author doesn’t reply… I did find a project that uses this library. So that’s someone else you could ask, maybe he still has it around.
Thank you, but it looks like they were using the assembled DLL file as opposed to the actual source code.
To be honest, the GPL requires them to provide source code, so for GPL dependencies your project should keep the sources archived… or distribute them along with the code.
This shows how absurd the requirements of the GPL are in real life. Even an honest actor can fall victim to bit-rot and violate the GPL inadvertently.
That’s… not how licenses work, nor how the GPL works. There are no binaries on that page, therefore no requirement to distribute their sources; and presumably that project is uploaded by its author, who as the copyright holder is not themself bound by the terms of their own license.
There is absurdity in what you said, but I’m afraid it isn’t the GPL’s. 🙂
Is there a list of rollover dates anywhere? Prior to reading this, I had known only of the 32-bit UNIX timestamp rollover in 2038.
This is the best I managed to find.
Thank you. I added this bug to that list.
I bet literally every firmware out there using 8- to 32-bit counters is prone to such bugs (I’ve heard of people using Z80-based appliances running since the late 1980s; they just ignore the timestamp field, which didn’t foresee leap years or changes in daylight saving time).
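The 32-bit Unix rollover is easy to see with GNU date (assumes GNU coreutils; on a platform with a signed 32-bit time_t the counter wraps right past the first value below):

```shell
# the largest second a signed 32-bit time_t can represent
date -u -d @2147483647    # Tue Jan 19 03:14:07 UTC 2038
# the minimum value shows where an overflow would land
date -u -d @-2147483648   # Fri Dec 13 20:45:52 UTC 1901
```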
That only happens because of the usual obsession with shipping a product ASAP, because “in a few years it will be either broken or out of warranty”.
Well, Y2K was one I guess.
I host my stuff at Scaleway; quite cheap, and their parent company (Online SAS) seems to be France-based.
I am currently with Scaleway, but trying to escape them.
They keep screwing basic things up, and I’m hearing more and more horror stories from friends who were with them.
Give Hetzner Cloud a try. I am liking them more and more.
I’m heavily leaning towards them so far.
Would you mind elaborating on what they screw up?
Downtime, corrupted data, unexpected reboots, hypervisor failures, network failures….
Customer service has been a disaster a few times as well.
They have locations in Paris and Amsterdam. Using the latter in one of my personal projects.
The actual downtimes go well beyond those advertised in the SLA. In the first six months of 2018 they were down for ~8 hours due to routing issues, plus a one-time outage exceeding 12 hours due to my instance’s hypervisor failing. In either case they do not announce the outages until you notice them yourself, which is really the worst part of it.
So cheap, easy to use, but try something else for mission critical stuff.
I see, but I wouldn’t host any mission critical stuff on a €10 VPS anyway. For my personal website and IRC bouncer, it’s fine.
They have different tiers, but it’s doubtful that the routing failure for their top tier customers was fixed any faster.
I’m on Scaleway too. Been looking for alternatives since I’ve had some issues but I can’t seem to find anyone else with unmetered bandwidth at these prices. I guess you can’t keep the cake and eat it too.
Hetzner Cloud is not unmetered, but you get 20 TB/month/instance. Works for my use case. Maybe give them a try.
Same here – as long as you don’t reboot your system, your data will be safe.
Actually… I haven’t noticed this sort of DLL hell being much of a problem on Linux for the last 5–10 years(?) or so. Does anyone know why? Steam, Discord, Skype, GOG, and various other things I use semi-regularly distribute binaries that work fine on many versions of Linux, and it’s been a long time since I’ve had issues with them. And while lots of these binaries are distributed with their own .so files, Windows-style, they all still have to talk to the same-ish version of glibc.
Is it just that everyone targets some version of Ubuntu and I use Debian so it’s always just Close Enough? Is it that the Debian maintainers put a lot of work into making stuff work? Is it that the silly glibc symbol versioning actually does what it’s intended to do and makes this not a problem? Or does glibc just not change terribly fast these days and so there are few breaking changes?
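One way to see that symbol versioning at work (a sketch assuming binutils is installed; the libc path is derived via ldd rather than hard-coded, since it varies by distro):

```shell
# find the libc this system's /bin/ls actually loads
libc=$(ldd /bin/ls | awk '/libc\.so/ {print $3}')
# every exported glibc symbol carries a version tag such as GLIBC_2.2.5;
# old binaries keep binding to the old versioned symbol, which is how new
# glibc releases can change behaviour without breaking them
objdump -T "$libc" | grep 'GLIBC_' | head -5
```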
Probably because you’re using Debian (stable or testing) which isn’t that different from Ubuntu.
I’ve been using a rolling distro (that has the latest version of everything except the kernel), and it has happened quite a few times that I’ve had to symlink a hundred .so files by hand to make something run, download and unpack stuff from other distros’ repos, or just give up.
I get why devs are using AppImage and users are using Flatpak/Snap.
I get it when trying to use binaries not from a native package manager, or not built by me on that exact system – e.g. moving compiled tools onto shared web hosts, or getting old compiled programs to work.
Because some other hard working people (package maintainers) / machines (build farms) rebuild everything for you.
Update: this afternoon I was bitten by a program crashing because the official Debian release of python-imagemagick tried to link against some symbol that the official Debian release of the imagemagick shared library no longer exported… So I suppose Eris has punished me for my hubris.
It’s really not practical to do a chromium rebuild for every small update.
Symbol versioning is annoying: Void Linux started making every package that is built against glibc depend on glibc>=buildversion, because partial updates are allowed but versioned symbols break all the shared-library checks.
In practice, package builds already do a chromium rebuild for every small update. Developers do incremental builds regardless of the method of linking.
Really, the reason to build Chrome with shared objects is that the linker will fall over when building it as a single binary with debug info – it’s already too big for the linker to handle easily. The last time I tried to build Chrome to debug an issue I was having, I didn’t know you had to do some magic to build it in smaller pieces, so the linker crunched on the objects for 45 minutes before falling flat on its face and aborting. I think it didn’t like 4-gigabyte debug info sections.
Also, keep in mind that this wiki entry is coming from a Plan 9 perspective. Plan 9 tends to have far smaller binaries than Chromium, and instead of large fat libraries, it tends to make things accessible via file servers. HTTP isn’t done via libcurl, for example, but instead via webfs.
That separation also means you can rebuild webfs to fix everything using it without rebuilding them, which is what shared libraries were supposed to help with.
Well, I feel like that’s the only way to handle it in Void really.
Anyway, I’d trade disk space for statically linked executables any day. Must be why I love Go so much. But I still understand why dynamic linking is used, for both historical and practical reasons. This post showcases the difference between a static and a dynamic cat, but I’m scared of what would happen with something heavy with lots of dependencies. For example, Qt built statically is about two-thirds of the size.
If the interface has not changed, you technically only need a relink.
If you have all the build artifacts lying around.
Should a distribution then relink X applications, pushing hundreds of megabytes of updates, or should they start shipping object files and linking them on the user’s system, where we would basically imitate shared libraries?
One data point: OpenBSD ships all the .o files for the kernel, which keeps updates small.
(I don’t think this is actually new. IIRC, old Unix systems used to do the same so you could relink a modified kernel without giving away source.)
That’s how SunOS worked, at least. The way to relink the kernel after an update also works if you have the source too; it’s the same build scaffolding.
The kernel, yes, but not every installed port or package.
It would be viable to improve both deterministic builds and binary diffs/delta patches for that. With deterministic builds you could make much better diffs (AFAICT), since the layout of the program would be more similar between patches.
Delta updates would be a nice improvement for the “traditional” package managers.
Chrome does this for its updates; instead of just binary diffs, it even disassembles the binary and reassembles it on the client (the Courgette algorithm).
What do you mean by delta updates? What should they do differently than what delta RPMs have been doing until now?
Yes, maybe this – I’m not sure how delta RPMs work specifically. Do they just diff the files in the RPMs, or are those deltas of each binary/file inside the RPM?
They ship new/changed files, and I think they also do binary diffs (at least based on what this page says).
Chrome already ships using binary diffs, so this is a solved problem.
where we would basically imitate shared libraries.
Except without the issue of needing to have just one version of that shared library.
Proper shared libraries are versioned and don’t have this issue.
my body’s REST API
wat?! Yes really!
Ok this is super cool and makes me wish my body had a REST API. I have a fitbit, but what other body status sensors exist for the average coder? Any suggestions?
As for the question in the post, what’s a better way to notify? I’d suggest a wrist mounted vibrating something, blood sugar sounds really important. For the other not so critical factors, I think a glanceable display is sufficient.
what other body status sensors exist for the average coder?
You can get heart rate, time of last heartbeat, hemoglobin concentration, blood pressure, and some other figures with ANT+ (see the device profiles) and the right device.
There are dongles and software to connect/sync these sensors with a PC and read the values, like antpm/gant on the Linux command line.
Do you know of the right devices that support this?
I’ve never looked for any, so I don’t know where to start.
To be honest, I was looking for heart rate only, to display it on a gaming stream for a fast-paced game. What I found was something like this belt, which supports heart rate and can be read with a dongle like this (or unofficial ones, too).
There are also watches from Garmin that measure heart rate and blood oxygenation, not sure if they support wireless sync with the dongle.
I don’t know of a single device that supports everything, and probably there isn’t one. But LMK if you manage to find something more interesting!
Mostly just helping a colleague with debugging one of our firmware updaters and reading through 700 text files looking for something that I can’t grep. :(
Pretty old story, to be honest.
But if you’re interested in this topic, I’d suggest this video by Internet Historian.