Very interesting read. I want to find examples of “Gumball” by Broderbund Software showing the copy protection, but it looks like I’m out of luck.
I’m having a read of some other articles on that site. This page seems to be the main index. Site organisation is a little complicated.
Presentation/technology-wise this site is a contrast to its era. It has none of the good bits of the retro net and many of the bad bits of the modern net:
Lazy-loaded resources really get me when I load up articles in the morning before hopping on a train, only to find most of the article hasn’t loaded. Even enabling JavaScript doesn’t fix this – I literally have to scroll down the entire page for every link I open.
For formatting/column issues my eternal salvation is Chris Pederick’s Web Developer extension, which provides a keyboard shortcut to disable all CSS (Shift+Alt+A). This lets me read the article with only a few scrolls, rather than having to scroll every four paragraphs. I’m sure this works on a touchscreen, but not everything is a touchscreen. HTML wants to solve this for you, let it flow free. Please :(
Privileged Command Execution History
A new command, admhist, was included in Solaris 11.4 to show successful system administration related commands which are likely to have modified the system state, in human readable form. This is similar to the shell builtin “history”.
That’s a curious one. It’s a shame the Solaris manpages don’t appear to be public. I wonder if they’re logging the use of system() or exec*() in their libc.
What’s worse is I immediately started thinking about how this workaround could be useful. Naughty Hales! Fix the shell configs on the servers instead. Stop imagining libc level hacks to see what coworkers have been up to.
I have no idea how it’s implemented, but I would note that Solaris and illumos systems have an auditing system with kernel involvement that can capture, amongst other things, all command execution. Perhaps this is a set of filters for displaying information gathered by the existing auditing mechanism.
No, unfortunately. I just tried extracting the images using pdfimages – it looks like the watermarks are baked into the JPEGs themselves.
It’s a shame. I have a feeling the NSA wouldn’t care either.
Oh My Zsh has some killer features but speed is definitely one of the reasons I gave up and switched back to bash. Another reason is standardization - so many shops I work at have fairly evolved bash configs you kinda HAVE to use if you want to be able to do the thing, so it’s not worth fighting upstream over.
I’m in a similar boat, but it was with the grml collection of zsh extensions/scripts. I tried it and loved it a few years back, sticking with it for at least a few months.
One day I was forced to use bash again. It felt oddly fast – my prompt would reappear after a command exited much earlier than usual. I did some comparison with my loaded zsh setup and discovered I could really “feel” the difference. What finally clinched it was waiting for zsh to load on a heavily burdened system with runaway disk IO. I felt like I wasn’t in control anymore.
I didn’t want to let go of the better tab-completion features (especially using cd when there’s only one folder and many files) and to this day I still get stunlocked occasionally by the lack of them. It’s amazing how quickly some expectations and habits of mine developed, and how long they’ve lasted. But I don’t think I can go back to slowsville.
Output should be simple to parse and compose
No JSON, please.
Yes, every tool should have a custom format that needs a badly cobbled-together parser (in awk or whatever) that will break once the format is changed slightly or the output accidentally contains a space. No, jq doesn’t exist, can’t be fitted into Unix pipelines, and we will be stuck with sed and awk until the end of time, occasionally trying to solve the worst failures with find -print0 and xargs -0.
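For what it’s worth, the whitespace failure mode mocked above is easy to demonstrate. A tiny sketch in Python (the file listing here is made up, not the output of any real tool):

```python
# Illustrative only: how whitespace-column parsing silently breaks.
line = "-rw-r--r-- 1 hales staff 4096 notes.txt"
size = line.split()[4]
print(size)  # 4096 -- the size column parses fine

# A single space in the filename shifts the later fields:
tricky = "-rw-r--r-- 1 hales staff 4096 my notes.txt"
print(tricky.split()[5])  # "my" -- only half the filename
```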
JSON replaces these problems with different ones. Different tools will use different constructs inside JSON (named lists, unnamed ones, different layouts and nesting strategies).
In a JSON shell tool world you will have to spend time parsing and re-arranging JSON data between tools; as well as constructing it manually as inputs. I think that would end up being just as hacky as the horrid stuff we do today (let’s not mention IFS and quoting abuse :D).
Sidestory: several months back a co-worker wanted me to write some code that parsed his data stream and did something with it (plotting-related, IIRC).
Me: “Could I have these numbers in one-record-per-row plaintext format please?”
Co: “Can I send them to you in JSON instead?”
Me: “Sure. What will be the format inside the JSON?”
Co: “…. it’ll just be JSON.”
Me: “But in what form? Will there be a list? What will the elements inside it be named?”
Co: “…”
Me: “Can you write me an example JSON message and send it to me, that might be easier.”
Co: “Why do you need that, it’ll be in JSON?”
Grrr :P
Anyway, JSON is a format, but you still need a format inside this format. Element names, overall structures. Using JSON does not make every tool use the same format, that’s strictly impossible. One tool’s stage1.input-file is different to another tool’s output-file.[5].filename; especially if those tools are for different tasks.
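To make the “format inside the format” point concrete, here’s a sketch with two hypothetical tools that both “speak JSON” but disagree on shape (both schemas are invented for illustration):

```python
import json

# Tool A emits this (invented) shape:
tool_a_output = json.loads('{"stage1": {"input-file": "a.txt"}}')

def to_tool_b_input(path):
    """Re-shape into the second tool's (equally invented) expected layout."""
    return {"output-file": [{"filename": path}]}

# The pipeline still needs glue code to translate between the two schemas:
glue = to_tool_b_input(tool_a_output["stage1"]["input-file"])
print(json.dumps(glue))  # {"output-file": [{"filename": "a.txt"}]}
```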
I think that would end up being just as hacky as the horrid stuff we do today (let’s not mention IFS and quoting abuse :D).
Except that standardized, popular formats like JSON bring along tool ecosystems that solve most of the problems they can cause. Auto-generators, transformers, and so on come with anything that is a common data format. We usually don’t get this when random people create formats for their own use: we have to fully custom-build the part handling the format rather than adapt an existing tool.
Still, even XML, which had the best tooling I have used so far for a general-purpose format (XSLT and XSD first and foremost), was unable to handle partial results.
The issue is probably due to their history, as a representation of a complete document / data structure.
Even s-expressions (the simplest format of the family) have the same issue.
Now we should also note that pipelines can be created on the fly, even from binary data manipulations. So a single dictated format would probably impose too many restrictions, if you want the system to actually enforce and validate it.
“Still, even XML”
XML and its ecosystem were extremely complex. I used s-expressions with partial results in the past. You just have to structure the data to make it easy to get a piece at a time. I can’t recall the details right now. Another format I used, trying to balance efficiency, flexibility, and complexity, was XDR. Too bad it didn’t get more attention.
“So a single dictated format would probably impose too many restrictions, if you want the system to actually enforce and validate it.”
The L4 family usually handles that by standardizing on an interface description language, with all of it auto-generated. Works well enough for them. CAmkES is an example.
XML and its ecosystem were extremely complex.
It is coherent, powerful and flexible.
One might argue that it’s too flexible or too powerful, so that you can solve any of the problems it solves with simpler custom languages. And I would agree to a large extent.
But, for example, XHTML was a perfect use case. Indeed, to do what I did back then with XSLT, people now use JavaScript, which is less coherent, way more powerful, and in no way simpler.
The L4 family usually handles that by standardizing on an interface description language, with all of it auto-generated.
Yes but they generate OS modules that are composed at build time.
Pipelines are integrated on the fly.
I really like strongly typed and standard formats but the tradeoff here is about composability.
UNIX turned every communication into byte streams.
Bytes byte at times, but they are standard, after all! Their interpretation is not, but that’s what provides the flexibility.
Indeed, to do what I did back then with XSLT, people now use JavaScript, which is less coherent, way more powerful, and in no way simpler.
While I am definitely not a proponent of JavaScript, computations in XSLT are incredibly verbose and convoluted, mainly because XSLT for some reason needs to be XML and XML is just a poor syntax for actual programming.
That, and the fact that my transformations worked fine with xsltproc but did nothing at all in browsers, with no decent way to debug the problem, made me put XSLT away as an esolang: lots of fun for an afternoon, not what I would use to actually get things done.
That said, I’d take XML output from Unix tools and some kind of jq-like processor any day over manually parsing text out of byte streams.
I loved it back when I was writing HTML and wanted something more flexible that machines could handle. XHTML was my use case as well. Once I was a better programmer, I realized it was probably an overkill standard that could’ve been something simpler, with a series of tools each doing their own little job. Maybe even different formats for different kinds of things. W3C ended up creating a bunch of those anyway.
“Pipelines are integrated on the fly.”
Maybe put it in the OS like a JIT. As far as byte streams go, that’s mostly what XDR did: minimally-structured byte streams. Just tie the data types, layouts, and so on to whatever language the OS or platform uses the most.
JSON replaces these problems with different ones. Different tools will use different constructs inside JSON (named lists, unnamed ones, different layouts and nesting strategies).
This is true, but it does not mean having some kind of common interchange format would not improve things. So yes, it does not tell you what the data will contain (but “custom text format, possibly tab-separated” is, again, no better). I know the problem, since I often work with JSON that contains or misses things. But the solution is not to avoid JSON but rather to have specifications. JSON has a number of possible schema formats, which puts it at a big advantage over most custom formats.
The other alternative is of course something like ProtoBuf, because it forces the use of proto files, which is at least some kind of specification. That throws away the human readability, which I didn’t want to suggest to a Unix crowd.
Thinking about it, an established binary interchange format with schemas and a transport is in some ways reminiscent of COM & CORBA in the nineties.
will break once the format is changed slightly
Doesn’t this happen with JSON too?
A slight change in the key names, or turning a string into a list of strings, and the recipient won’t be able to handle the input anyway.
the output accidentally contains a space.
Or the output accidentally contains a comma: depending on the parser, the behaviour will change.
No, jq doesn’t exist…
Jq is great, but I would not say JSON should be the default output when you want composable programs.
For example, the JSON root is always a single complete value, and this won’t work for streams that get produced slowly.
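That limitation is easy to see in any JSON parser; a quick Python sketch contrasting a truncated document with line-delimited records:

```python
import json

partial = '{"records": [{"n": 1}, {"n": 2}'   # stream cut off mid-document
try:
    json.loads(partial)
except json.JSONDecodeError:
    print("nothing usable until the document completes")

# Line-delimited records (NDJSON-style) can be consumed as each line arrives:
for line in ['{"n": 1}', '{"n": 2}']:
    print(json.loads(line)["n"])
```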
will break once the format is changed slightly
Doesn’t this happen with JSON too?
Using a whitespace-separated table such as suggested in the article is somewhat vulnerable to continuing to appear to work after the format has changed while actually misinterpreting the data (e.g. if a new column is inserted at the beginning, your pipeline could happily continue, since all it needs is at least two columns with numbers in them). JSON is more likely to either continue working correctly and ignore the new column, or fail with an error. Arguably it is the key-value aspect that’s helpful here, not specifically JSON. As you point out, there are other issues with using JSON in a pipeline.
On the other hand, most Unix tools use tabular format or key value format. I do agree though that the lack of guidelines makes it annoying to compose.
Hands up everybody that has to write parsers for zpool status and its load-bearing whitespaces to do ZFS health monitoring.
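Hand raised here. A toy version of the kind of parser being lamented, fishing the pool state out of `zpool status` text (the sample layout is reproduced from memory, so treat it as approximate):

```python
# Pull the "state:" field out of zpool-status-style text by string matching.
sample = """\
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME    STATE   READ WRITE CKSUM
        tank    ONLINE     0     0     0
"""
state = next(line.split(":", 1)[1].strip()
             for line in sample.splitlines()
             if line.strip().startswith("state:"))
print(state)  # ONLINE
```

The load-bearing whitespace shows up the moment you try to parse the per-device table further down.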
In my day-to-day work, there are times when I wish some tools would produce JSON and other times when I wish a JSON output was just textual (as recommended in the article). Ideally, tools should be able to produce different kinds of outputs, and I find libxo (mentioned by @apy) very interesting.
I spent very little time thinking about this after reading your comment, and wonder what, for example, the coreutils would look like if they accepted/returned JSON as well as plain text.
A priori we have this awful problem of making everyone understand everyone else’s input and output schemas, but that might not be necessary. For any tool that expects a file as input, we make it accept any JSON object that contains the key-value pair "file": "something". For tools that expect multiple files, have them take an array of such objects. Tools that return files, like ls for example, can then return whatever they want in their JSON objects, as long as those objects contain "file": "something". Then we should get to keep chaining pipes of stuff together without having to write ungodly amounts of jq between them.
I have no idea whether people have tried doing this or anything similar. Is there prior art?
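The convention described above could be sketched in a few lines; everything here (the `"file"` key contract and the imagined `ls` output shape) is invented for illustration:

```python
# Hypothetical convention: tools only agree that file-carrying objects
# contain a "file" key; every other key is tool-specific and passed over.
def files_from(objects):
    """Pull the agreed-upon "file" value out of each JSON object."""
    return [obj["file"] for obj in objects]

# What a JSON-speaking `ls` might emit (shape invented):
ls_output = [
    {"file": "a.txt", "size": 120},
    {"file": "b.txt", "size": 64},
]
print(files_from(ls_output))  # ['a.txt', 'b.txt']
```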
In FreeBSD we have libxo which a lot of the CLI programs are getting support for. This lets the program print its output and it can be translated to JSON, HTML, or other output forms automatically. So that would allow people to experiment with various formats (although it doesn’t handle reading in the output).
But as @Shamar points out, one problem with JSON is that you need to parse the whole thing before you can do much with it. One can hack around it but then they are kind of abusing JSON.
That looks like a fantastic tool, thanks for writing about it. Is there a concerted effort in FreeBSD (or other communities) to use libxo more?
FreeBSD definitely has a concerted effort to use it, I’m not sure about elsewhere. For a simple example, you can check out wc:
apy@bsdell ~> wc -l --libxo=dtrt dmesg.log
238 dmesg.log
apy@bsdell ~> wc -l --libxo=json dmesg.log
{"wc": {"file": [{"lines":238,"filename":"dmesg.log"}]}
}
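That JSON can then be consumed with ordinary tooling instead of column counting; a quick Python sketch, with the string copied from the wc output above:

```python
import json

# Parse the libxo JSON emitted by `wc -l --libxo=json dmesg.log` above.
out = '{"wc": {"file": [{"lines":238,"filename":"dmesg.log"}]}\n}'
data = json.loads(out)
for entry in data["wc"]["file"]:
    print(entry["filename"], entry["lines"])  # dmesg.log 238
```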
powershell uses objects for its pipelines, i think it even runs on linux nowadays.
i like json, but for shell pipelining it’s not ideal:
the unstructured nature of the classic output is a core feature. you can easily mangle it in ways the programs author never assumed, and that makes it powerful.
with line based records you can parse incomplete (as in the process is not finished) data more easily. you just have to split after a newline. with json, technically you can’t begin using the data until a (sub)object is completely parsed. using half-parsed objects seems not so wise.
if you output json, you probably have to keep the structure of the object tree you generated in memory, like “currently i’m in a list in an object in a list”. that’s not ideal sometimes (one doesn’t have to use real serialization all the time, but it’s nicer than just printing the correct tokens at the right places).
json is “javascript object notation”. not everything is ideally represented as an object. that’s why relational databases are still in use.
edit: be nicer ;)
I guess I should be thankful that my circuit breaker just cuts off electricity when there is too much load so it is like a forced reboot at least once a month.
The FBI recommends any owner of small office and home office routers reboot the devices to temporarily disrupt the malware and aid the potential identification of infected devices. Owners are advised to consider disabling remote management settings on devices and secure with strong passwords and encryption when enabled. Network devices should be upgraded to the latest available versions of firmware.
I would like to learn more about this. I am pretty sure Verizon has a backdoor to my WiFi router FiOS-G1100. Does anyone else have this router? What do you see when you go to http://myfiosgateway.com/#/monitoring ? I see
UI Version: v1.0.294 Firmware Version 02.00.01.08 Model Name: FiOS-G1100 Hardware Version: 1.03
Access to your router is likely not publicly routed. I can’t access that web page (connection failed).
Ah, I should have mentioned you need to be at home behind your FiOS-G1100 router, log in and click on System Monitoring in the top right corner.
Here’s the router/modem in question: https://www.verizon.com/home/accessories/fios-quantum-gateway/
They, along with other ISPs, took tens to hundreds of millions to backdoor their networks for the NSA. That was in the leaks. You should assume they might backdoor anything else.
Forbes article.
Maybe I used an incorrect technical word. I meant to say I think they can remotely access and configure the modem / router.
ISPs backdooring home routers isn’t unknown, where here I use ‘backdooring’ to mean “the ISP can log in and make changes even though most home users don’t know they can do this”. Some use it to push out router firmware updates (for their preferred models).
OverbiteNX appears to use some massive workarounds to get Gopher support into FF. It’s a shame that Firefox’s new playpen is so strict.
In Firefox, go to about:debugging and add a “Temporary Add-on.” Browse to where you put the repo, enter the ext directory, and select manifest.json. You will need to repeat this step every time Firefox starts up.
From a practical point of view: it might be easier to install and setup a gopher<->ht* gateway. You only have to do that once.
It seems, though, that this is not what the author wants, and it looks like he hopes the workaround won’t be needed in the future. I suspect he has a vision of a single addon working like it used to in the pre-WebExtensions days.
Mr Classilla, I hope it turns out well.
Onyx is written in C. Although C is not a safe language (please, Rusties, don’t send me E-mail, I don’t want to hear it), it is the most portable option right now and allows Onyx to be built with a minimal toolchain.
Hahaha. I’ve previously released some C code along with a copy of TCC (~2MB) and a batch file. This let Windows users rebuild all sources within seconds with a simple double-click; without having to install anything. It’s at that point I discovered the magic of C portability, and I don’t think I’ll be looking back for a long time.
Dang, this means the only twitter client to support wallrunning will break again. [free game, no nastyware]
Bear with me, this might sound dumb, but I find it super confusing when you have some object reference, which might be null itself, and it’s also got object values/references inside, which could also be null. So a value can be more-or-less null/unusable in multiple ways, but sometimes it will(!) be usable with almost nothing not-null, depending on context. Each time I step into the code I’ve got to re-establish which things are going to be present and why, depending on context. And add null-checks everywhere. I wish I knew the name for this pattern. (the errorless data structure, the bag of holding, &c) I’m totally down with make illegal state unrepresentable but it’s hard to refactor once the code is already written, inherited-from, corner-case’d, and passed around everywhere.
It’s the same with functions. I swear I saw a line of code today that was like below (paraphrasing). I mean sure, I can get used to anything, but it just looks to me like a failure mode.
return service.Generate(data, null, null, null, null, null, null);
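One escape hatch from that kind of call site is grouping the optional knobs so callers only mention what they actually set. A sketch in Python (all names here are invented; the original line looked like C#):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GenerateOptions:
    """Optional knobs bundled together instead of six positional nulls."""
    template: Optional[str] = None
    locale: Optional[str] = None

def generate(data, opts: Optional[GenerateOptions] = None):
    opts = opts or GenerateOptions()
    return (data, opts.template, opts.locale)

# Callers no longer spell out every absent argument:
print(generate("payload"))                               # ('payload', None, None)
print(generate("payload", GenerateOptions(locale="en")))
```

It doesn’t make illegal states unrepresentable by itself, but at least the nulls stop leaking into every call site.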
I wonder what would happen if, say, 64K of data was mapped to virtual address 0 and made read-only [1]. That way, NULL pointers wouldn’t crap out so much. A NULL pointer points to an address of NUL bytes, so it’s a valid C string. In IEEE-754, all zeros represents 0. All pointers lead right back to this area. If you use “size/pointer” type strings, then you get 0-byte (or 0-character) strings. It just seems to work out fine.
It’s probably a horrible idea, but it would be fun to try.
[1] It would be nice if this “read-only” memory acted like ROM: it could be written to, but nothing would actually change.
I’ve had some fun thoughts about this before :D
I was sketching out ideas of a microcontroller design that could potentially “not have registers” and also try to avoid lots of arbitrary hardcoded memory addresses. In practice it always ended up having registers in the form of a couple of internal busses and some flags, but it would look like it mostly didn’t have registers as far as programmers were concerned.
I wanted to make the “program counter” a value stored at memory address zero. This would also mean the ‘default’ value of memory address zero set in the ROM would be the entry point in the code, which I thought was pretty.
This also simplified a few things from the circuitry point of view:
Some thought later made me realise that using low memory addresses for critical things was a bad idea. When a program wigs out it can start writing to arbitrary random addresses, and address zero is a very common target in many bugs. Overwriting address zero would make the CPU jump to new code and potentially make things harder to debug.
In the end I thought it best to set up the first 64 bytes or so of memory as an intentional trap instead, ie any read or write to those bytes would immediately halt the processor. A lot less elegant, but a lot more practical.
Back to your idea.
Letting the first 64K of memory be usable would allow a lot of programs to keep running, a lot like the old “Abort/retry/ignore” allowed us to do in the DOS days. For some bugs this would be brilliant and let you try and gracefully recover (eg finish saving a document).
Alas there would also be a chance of data being damaged (eg files getting overwritten) if you continue into unknown territory; so I think it would still be worth bringing up an A/R/I style dialog. Even if only so we can blame the users if something goes wrong :P
Things I self-host now on the Interwebs (as opposed to at home):
Things I’m setting up on the Interwebs:
Over time I may move the Docker and KVM-based Linux boxes over to OpenBSD and VMM as it matures. I’m moving internal systems from Debian to Open or NetBSD because I’ve had enough of Systemd.
Out of curiosity, why migrate your entire OS to avoid SystemD rather than just switch init systems? Debian supports others just fine. I use OpenRC with no issues, and personally find that solution much more comfortable than learning an entirely new management interface.
To be fair, it’s not just systemd, but systemd was the beginning of the end for me.
I expect my servers to be stable and mostly static. I expect to understand what’s running on them, and to manage them accordingly. Over the years, Debian has continued to change, choosing things I just don’t support (systemd, removing ifconfig etc). I’ve moved most of my stack over to docker, which has made deployment easier at the cost of me just not being certain what code I’m running at any point in time. So in effect I’m not even really running Debian as such (my docker images are a mix of alpine and ubuntu images anyway).
I used to use NetBSD years back quite heavily, so moving back to it is fairly straightforward, and I like OpenBSD’s approach to code reduction and simplicity over feature chasing. I think it was always on the cards but the removal of ifconfig and the recent furore over the abort() function with RMS gave me the shove I needed to start moving.
For now I’m backing up my configs in git, data via rsync/ssh and will probably manage deployment via Ansible.
It’s not as easy as docker-compose, but not as scary as pulling images from public repos. Plus, I’ll actually know what code I’m running at a given point in time.
Have you looked at Capistrano for deployment? Its workflow for deployment and rollback centers around releasing a branch of a git repo.
I’m interested in what you think of the two strategies and why you’d use one or the other for your setup, if you have an opinion.
I don’t run ruby, given the choice. It’s not a dogmatic thing, it’s just that I’ve found that there are more important things for me to get round to than learning ruby properly, and that if I’m not prepared to learn it properly I’m not giving it a fair shout.
N.B. You can partially remove systemd, but not completely. Many binaries depend on libsystemd at runtime, even ones that don’t look like they would need it.
When I ran my own init system on Arch (systemd was giving me woes) I had to keep libsystemd.so installed for even simple tools like pgrep to work.
Some more info and discussion here. I didn’t want to switch away from Arch, but I also didn’t want remnants of systemd sticking around. Given the culture of systemd adding new features and acting like a sysadmin on my computer I thought it wise to try and keep my distance.
The author of the article regarding pgrep you linked used an ancient, outdated kernel, and complained that the newest versions of software wouldn’t work. He/She used all debug flags for the kernel, and complained about the verbosity. He/She used a custom, unsupported build of a bootloader, and complained about the interface. He/She installed a custom kernel package, and was surprised that it (requiring a different partition layout) wiped his/her partitions. He/She complains about color profiles, and says he/she “does not use color profiles” – which is hilarious, considering he/she definitely does use them, just unknowingly, and likely with the default sRGB set (which is horribly inaccurate anyway). He/She asks why pgrep has a systemd dependency – pgrep and ps both support displaying the systemd unit owning a process.
I’m the author of the article.
ancient, outdated kernel all debug flags for the kernel unsupported build of a bootloader
The kernel, kernel build options and bootloader were set by Arch Linux ARM project. They were not unsupported or unusual, they were what the team provided in their install instructions and their repos.
A newer mainstream kernel build did appear in the repos at some point, but it had several features broken (suspend/resume, etc). The only valid option for day to day use was the recommended old kernel.
complained that the newest versions of software wouldn’t work
I’m perfectly happy for software to break due to out of date dependencies. But an init system is a special case, because if it fails then the operating system becomes inoperable.
Core software should fail gracefully. A good piece of software behaves well in both normal and adverse conditions.
I was greatly surprised that systemd did not provide some form of rescue getty or anything else upon failure. It left me in a position that was very difficult to solve.
He/She installed a custom kernel package, and was surprised that it (requiring a different partition layout) wiped his/her partitions
This was not a custom kernel package, it was provided by the Arch Linux ARM team. It was a newer kernel package that described itself as supporting my model. As it turns out it was the new recommended/mandated kernel package in the Arch Linux ARM install instructions for my laptop.
Even if the kernel were custom, it is highly unusual for distribution packages to contain scripts that overwrite partitions.
He/She complains about color profiles, and says he/she “does not use color profiles” – which is hilarious, considering he/she definitely does use them, just unknowingly
There are multiple concepts under the words of ‘colour profiles’ that it looks like you have merged together here.
Colour profiles are indeed used by image and video codecs every day on our computers. Most of these formats do not store their data in the same format as our monitors expect (RGB888 gamma ~2.2, ie common sRGB) so they have to perform colour space conversions.
Whatever the systemd unit was providing in the form of ‘colour profiles’ was completely unnecessary for this process. All my applications worked before systemd did this. And they still do now without systemd doing it.
likely with the default sRGB set (which is horribly inaccurate anyway)
1:1 sRGB is good enough for most people, as it’s only possible to obtain benefits from colour profiles in very specific scenarios.
If you are using a new desktop monitor and you have a specific task you need or want to match for, then yes.
If you are using a laptop screen like I was: most change their colour curves dramatically when you change the screen viewing angle. Tweaking of colour profiles provides next to no benefit. Some laptop models have much nicer screens and avoid this, but at the cost of battery life (higher light emissions) and generally higher cost.
I use second-hand monitors for my desktop. They mostly do not have factory-provided colour profiles, and even then the (CCFL) backlights have aged and changed their responses. Without calibrated colour-profiling equipment there is not much I can do, and it is not worth the effort unless I have a very specific reason to do so.
He/She asks why pgrep has a systemd dependency – pgrep and ps both support displaying the systemd unit owning a process.
You can do this without making systemd libraries a hard runtime dependency.
I raised this issue because of a concept that seemed more pertinent to me: the extension of systemd’s influence. I don’t think it’s appropriate for basic tools to depend on any optional programs or libraries, whether they be an init system like systemd, a runtime like mono or a framework like docker.
Almost all of these issues are distro issues.
Systemd can work without the color profile daemon, and ps and pgrep can work without systemd. Same with the kernel.
But the policy of Arch is to always build all packages with all possible dependencies as hard dependencies.
e.g. for Quassel, which can make use of KDE integration, but doesn’t require it, they decide to build it so that it has a hard dependency on KDE (which means it pulls in 400M of packages for a package that would be fine without any of them).
I really wish the FreeBSD port of Docker was still maintained. It’s a few years behind at this point, but if FreeBSD was supported as a first class Docker operating system, I think we’d see a lot more people running it.
IME Docker abstracts the problem under a layer of magic rather than providing a sustainable solution.
Yes it makes things as easy as adding a line referencing a random github repo to deploy otherwise troublesome software. I’m not convinced this is a good thing.
As someone who needs to know exactly what gets deployed in production, and therefore cannot use any public registry, I can say with certainty that Docker is a lot less cool without the plethora of automagic images you can run.
Exactly, once you start running private registries it’s not the timesaver it may have first appeared as.
Personally, I’ll have to disagree with that. I’m letting GitLab automatically build the containers I need as a basis, plus my own. And the result is amazing, because scaling, development, reproducibility etc. become much easier.
I think Kubernetes has support for some alternative runtimes, including FreeBSD jails? That might make FreeBSD more popular in the long run.
Works fine for me(tm).
It seems fine both over mobile and laptop, and over 4G. I haven’t tried any large groups and I doubt I’ll use it much, but so far I’ve been impressed.
Is bookstack good? I’m on the never ending search for a good wiki system. I keep half writing my own and (thankfully) failing to complete it.
Cowyo is pretty straightforward (if sort of sparse).
Being Go and working with flat files, it’s pretty straightforward to run and back up.
Bookstack is by far one of the best wikis I’ve given to non-technical people to use. However I think it stores HTML internally, which is a bit icky in my view. I’d prefer it if they converted it to markdown. Still, it’s fairly low resource, pretty and works very, very well.
Love and appimages
The copy of Love in my repos was too old and I had never used an appimage before.
I was very confused. “There’s nothing for appimage in my repos. Where do I download the tools to mount this?”. Eventually I twigged:
$ file EXO_encounter-667-x86_64.AppImage
EXO_encounter-667-x86_64.AppImage: ELF 64-bit LSB executable, ...
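For anyone else who twigs slower than I did: an AppImage is a self-mounting executable, so no mounting tools are needed at all. The only snag (a sketch of what worked for me; filename taken from above) is that browsers strip the execute bit on download:

```shell
# An AppImage is a plain ELF executable that mounts itself at runtime.
# The only setup needed is restoring the execute bit.
app=EXO_encounter-667-x86_64.AppImage
if [ -f "$app" ]; then
    chmod +x "$app"   # downloads usually arrive without the execute bit
    "./$app"          # then just run it; no FUSE tools to invoke by hand
fi
```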
Love looks interesting, I might give it a go for my next little project. My self-written ASCII engine became a small nightmare when I ported it to Windows; the fact that Love takes care of the cross-platform bit is enticing.
Sidenote: 12 - 35MB for your game is a sign of the times. Have you ever played Goldeneye 64? It worked with something like 4MB of RAM total (including the framebuffer, IIRC). One of these days old crumpets like me will open enough holes in the atmosphere so that all games and programs over a certain size get statistically lost due to solar radiation. We’ll be the ones running around holding floppies and chanting.
Understanding the game
I was completely confused by the door graphics.
All I could see were blue squares with a gap next to them. One looked like a rover could fit through it, but this never worked.
It turns out these ‘gaps’ were piston bits holding up the blue ‘floor’ of the doors. It took me well until the end of the game to work this out. Even then things still looked confusing – I had to look for indirect clues like the location of the strange dark bit (above or below the blue bits) and the relative positions of other nearby doors (for multi-door segments).
I would have really appreciated a colour difference between ‘up’ and ‘down’ doors. It would have saved me using the Doom “UNGH” method to determine door states.
Fun with the game
Yes :)
For the first half of the game: I felt my every move was ‘correct’ because of the constant text prompts. I’m glad this ended.
For the second half everything I did felt like I was breaking the puzzles or perhaps doing things out of order. This felt good.
I didn’t use your son’s solution for the last puzzle, instead I drove a rover into the base from the SW corner and shot a laser through a gap in the wall.
Tone of the game
Held together very strongly by the font, music, story and graphics, as well as other bits of your presentation (eg camera animation). Let down by the gameplay seeming unrelated to all of this, but that’s a hard one to solve.
Resizing the window would also break the tone for me because the HUD text and camera were no longer centred.
End story was appreciated. Felt rewarding.
Misc notes
Bug I only noticed at the end: you can warp rovers through walls by ejecting them from the temple when you’re butted up against a wall on your north side.
Less fun: camera slower than the rovers. Turn it into a feature, weave it into the story, something about photonics being slower on this planet :D
I found that having my kids playtest continually as the game evolved meant that they didn’t see certain flaws; things that were clear to them weren’t obvious to first-time players. (Of course, as the author I expect to be blind to many flaws myself.)
Ooh yes, I know that one. Not just games too, all GUIs. I’ve been on both sides of this divide.
Thanks for your feedback! I added a note about needing to chmod the AppImage to the downloads page.
12 - 35MB for your game is a sign of the times.
I know! I realized 2 hours before the jam ended that the song I chose for the endgame scene was 10 MB on its own! I’m going to replace it but didn’t have time during the jam. I appreciated the fact that I could just eat the size problem since I had bigger problems to deal with at the time, but it was a bit embarrassing.
I was completely confused by the door graphics.
You’re not the first to say this. I’m going to try animating the opening of the doors as well as adding sound effects; we’ll see if that helps. But I might need to replace the sprites altogether or at least alter them. Maybe as you suggest changing the color would be enough; I’ll see.
I also definitely need to work on the scrolling logic for the post-game-jam edition, especially the logic for when it stops scrolling because you reached the edge of the map. I tried making some tweaks to this during the jam but all my “fixes” made things worse in other ways; it’s more subtle than it looks.
Glad you enjoyed it!
I know! I realized 2 hours before the jam ended that the song I chose for the endgame scene was 10 MB on its own!
Hahaha, that explains it. Glad to see that the rest of the love code and graphics are ~2MB then. I was worried that there were massive overheads.
I’m going to try animating the opening of the doors as well as adding sound effects
This will work if I’m near them and pay attention to them at the time, but I’ll most likely be somewhere else directing beams. A static descriptive component (eg colour) would still be as useful as a dynamic (sound/anim) one.
What’s vidconf like and is it really better than other (lower-bandwidth) comm methods?
I live in Australia and I have trouble with video conferencing software. I believe my upload speed (1 Mbit at best) is one of the primary causes. This seems to be the case for most Australians.
There are many places in the world with internet slower than Australia’s. Combined with the fact that live voice and non-live internet comm methods are so decent these days I’m not sure what place videoconferencing has.
Video still communicates a lot more than text. Possibly the best replacement is not emoji, but those animated GIFs you share in apps like Telegram. Truly a new form of communication!
I roll my own comments on my blog and I’ve just had a conversation with another person considering doing the same:
http://halestrom.net/darksleep/blog/030_comment_blog_systems/
https://rubenerd.com/feedback-on-static-comments/
Summary:
A comment about CGI in general: it’s absolutely beautiful.
When I wrote my own site backend a few years back I had no knowledge about the world of interfacing webservers with code. I discovered that there were many, many methods and protocols that each webserver only seemed to support a subset of. And many people telling me that CGI was old and bad and that I shouldn’t use it.
I had a nightmare getting non-CGI things to work. I had no experience or background here, so not much of it made sense. I had presumed ‘FCGI’ was a “fixed” version of CGI, but I didn’t succeed at getting it to work after following a few guides and trying a few different webservers. I gave up.
I decided I should do the opposite of modern advice. I tried CGI. And I was immediately hooked by its simplicity.
For those not in the know, a fully working CGI script is as simple as this:
#!/bin/sh
printf 'content-type: text/html\n\n'
printf '<strong>Greetings Traveller</strong>'
printf "<p>The date and time are $(date)</p>"
The webserver itself handles all the difficult bits of the HTTP headers. You just need to provide a content-type and then the page itself. Done.
If you want to provide more (cookies, etc) you can; it’s just one more ‘printf’ line and you’re done. No libraries, no functions, no complexity. You don’t even have to parse strange constructs. Just print.
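To illustrate that “one more printf” claim, here’s a sketch of a cookie-setting script (the cookie name and value are made up): any extra headers are just more lines before the blank line that ends the header block.

```shell
#!/bin/sh
# Extra response headers (here a made-up cookie) go before the blank
# line that separates headers from the page body.
printf 'content-type: text/html\n'
printf 'set-cookie: visited=yes; Max-Age=86400\n'
printf '\n'
printf '<p>Welcome back, traveller.</p>\n'
```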
If you want to look at URL strings (eg for GET) you just need to be able to access the environment variable QUERY_STRING. If you want to access body data (eg for POST) you just need to read input. Just as if someone was sitting there typing into stdin of your program.
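A sketch of both access methods (parameter names are made up; a real script would also need to URL-decode the values):

```shell
#!/bin/sh
printf 'content-type: text/plain\n\n'

# GET data: the webserver hands it over in an environment variable,
# e.g. QUERY_STRING='name=hales' for a request ending in ?name=hales
printf 'Query string: %s\n' "${QUERY_STRING:-}"

# POST data: read CONTENT_LENGTH bytes from stdin, just as if someone
# were sitting there typing into the program.
if [ -n "${CONTENT_LENGTH:-}" ]; then
    body=$(dd bs=1 count="$CONTENT_LENGTH" 2>/dev/null)
    printf 'Body: %s\n' "$body"
fi
```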
It does get ugly for complex or multipart POSTs. That’s where a library or program can help. But you only need to attack that once you get there.
Compare this to every other method of talking to a webserver out there:
A related story of teaching
A few months back I was helping some students with their website project. They were new to web development and had been recommended to use Flask, a Python library that acts as a webserver and webserver interface all in one. They were having extreme difficulty wrapping their heads around many concepts. Notably:
Many of their problems stemmed from not knowing how HTTP worked in the first place, so I was teaching them this. What made the process horrible was then also trying to work out, and explain, how Flask abstracts these concepts into its own processes and functions. I could understand how, to beginners like them, it seemed completely opaque.
They thought pages were unreadable objects generated by the templating code, and that the templates themselves were sent to the user’s browser along with the page. They thought cookies were handled and stored by the webserver as well as the client. The way flask’s functions worked and the examples they followed suggested this to them.
If I’m ever in the situation again of helping new people learn web technology then I’m going to steer them to CGI right off the bat. It’s easier to teach, easier to understand, easier to get working on most webservers and isn’t locked in to any particular language or framework.
The only downside of CGI that I know about is the fact it starts a new process to handle each user request. Yes that’s a problem in big sites handling hundreds or thousands of visitors per second. But by the time a new student gets to running a big site they will have already encountered many, many other scalability issues in their code and backend/storage. Let alone teaching them database and security concepts. There’s a reason we have quotes like “premature optimisation is the root of all evil”.
I don’t think students new to webdev should be started on anything other than CGI. They can use any language they want. They can actually understand what they’re doing. And they’re not hitting any artificial barriers or limits set by frameworks or libraries.
Final notes
The whole idea that “CGI should be dead” makes little sense from my context and point of view. I run my own site, help maintain a few others and try to assist others in learning and coping with webdev.
I think the “CGI should be dead” idea makes sense only in the context of very high workload sites. Whilst these handle a large percentage of the web’s total traffic, the percentage of people actually running such sites is small. These are different units: traffic of visitors vs people running sites. I think we confuse them.
It’s too easy to get caught up in “professional syndrome”, where you look up to the big players and trust in their opinions. But you also need to understand that their opinions are based on their current experiences, which are often a world away from what the rest of us should be worrying about.
If the captain of a battleship says that cannons are his biggest problem, you shouldn’t try to learn about and use cannons to build your first ship. You should realise that only a tiny fraction of ships need them, even among the really big ones.
FWIW this may be the single most thorough and thoughtful comment response I’ve seen on Lobsters. Thank you for that.
I agree that CGI’s simplicity is beautiful, and while you’re right that libraries aren’t required, if you look at something like CGI.pm it’s an incredibly thin wrapper around CGI. A little bit of convenience, some simple templating for generating HTML. This was a winning recipe for thousands of people designing interactive websites for literally decades!
Thanks feoh.
I’ll have a look at CGI.pm. One thing in particular that I found hard was decoding multipart form data; so I ended up writing a standalone C program that does nothing but split these up into separate files.
https://mobile.twitter.com/samphippen/status/987843354011586560
Going public on RT stuff was the wrong way to handle my concerns. I’m sorry. Members of the board and I are talking privately to come to a good outcome.
I’d love to know how many Seamonkey users there were, in the shallow hope of beating the Opera users.
Is @liwakura == nero?
Seamonkey 4, Opera 7.
Yes. I checked the box that I’m the author of the submitted story, so my nick should be light-blue.
Used to use Seamonkey, but the latest Firefox was just too damn fast so I switched. When Seamonkey gets the latest engine, maybe I’ll switch back.
I don’t know if that will ever happen. I’m not sure there is the man-power.
Seamonkey has always been “Firefox but more sane”. Whilst it’s slipping, I think there’s still a need for a project that does this (but uses the Quantum code).
I’d really like to use anything that isn’t Firefox, but addons seem to be a problem with Seamonkey - how do people get around that?
There’s an extension that adds an ‘addon history’ thingamabob to the addons site, so you can select older versions of addons:
https://github.com/lemon-juice/AMO-Browsing-for-SeaMonkey
It’s really imperfect and I have older addons breaking. My heart may soon follow.
When I first started using Void I’ll fully admit I found it via a website that listed distributions that don’t use SystemD for init.
That’s actually how I started using Void as well. It currently runs on my Home Theater PC. So far I’ve been pretty impressed by it. Although runit can be a little janky at times, it’s pretty elegant in its simplicity.
Based on Void’s implementation of runit (I have not used runit elsewhere), there are a few things that feel odd to me:
On the other hand there are some things that I really appreciate about runit that keep me here:
To elaborate on (1): the metaphor of “the service should stay in the foreground” is so much better than trying to handle forking services. Compared to the openrc implementation in Alpine, runit is a godsend. Openrc tries to track forking services (tradition?) and the service files have their own long list of keywords, syntax and quasi-hidden default scripts.
Previously I edited /etc/runit/core-services/03-filesystems.sh and commented out the feature:
if [ -x /bin/btrfs ]; then
    msg "Activating btrfs devices..."
    btrfs device scan || emergency_shell
fi
Unfortunately this file gets overwritten again when runit updates.
Alternatively I could uninstall btrfs-progs, but then I wouldn’t be able to work with other people’s (non-boot) btrfs filesystems without having to install it again.
Deactivating the btrfs scan may be easy, but keeping it deactivated permanently is harder.
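One untested idea for making it stick: the hook above only fires when /bin/btrfs is executable, so clearing the execute bit disables the scan without uninstalling btrfs-progs. (A later btrfs-progs update will restore the bit, and you need to chmod it back whenever you actually want the tool.)

```shell
# Untested sketch, needs root: the hook guards on '[ -x /bin/btrfs ]',
# so dropping the execute bit makes it skip the device scan entirely.
if [ -x /bin/btrfs ]; then
    chmod a-x /bin/btrfs
fi
# Restore when you need to work on someone else's filesystem:
#   chmod a+x /bin/btrfs
```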
Something worth checking during your wiki-hopping: table support. Whilst many backends support tables, some have non-simple syntax and others don’t support multi-line text inside table cells. I discovered this during a few wiki-hops.
Some random notes:
ikiwiki https://ikiwiki.info/
I gave up trying to install this on my first attempt after several hundred MB of perl deps.
The default page template (HTML/CSS structure) is much more complicated than it needs to be (ridiculous “IF HTML5” for every single tag, IIRC); I ended up writing my own much simpler one before even attempting to make some CSS.
pmwiki http://www.pmwiki.org/
Stores pages as plaintext by default, BUT in an ‘all edit history merged into one file’ format. In other words the files look like crazy nested diffs.
foswiki http://foswiki.org/
Looks pretty, comes with an instant-launch script. Uploading images is as simple as drag+drop onto the page whilst editing! (you need to enable the WYSIWYG editor for this).
Have not dabbled deeper than this. I’m surprised I have never read of this wiki anywhere on the web, it looks like a big project.
Tried a session without addons/extensions?
firefox --profilemanager
seamonkey --profilemanager
chromium --user-data-dir=/tmp/moo
EDIT: I feel really daft for not adding palemoon :D
N.B. I’m not having this issue (Seamonkey)
Disabling addons/extensions fixed things. My only addons are uBlock Origin and NoScript (with lobste.rs allowed). Perhaps the cleanest way to solve this would be an HTML-only interface to lobste.rs?
It’s also amusing that it suddenly stopped happening as I was typing this comment, haha.
That’s a very strange bug. I also use ublock origin and haven’t seen anything like that. Offhand, I don’t think anything in the comment reply code has changed in at least a few months. If you can nail down repro steps, please open an issue.
I recently discovered how horribly complicated traditional init scripts are whilst using Alpine Linux. OpenRC might be modern, but it’s still complicated.
Runit seems to be the nicest I’ve come across. It asks the question “why do we need to do all of this anyway? What’s the point?”
It rejects the idea of forking and instead requires everything to run in the foreground:
/etc/sv/nginx/run:
/etc/sv/smbd/run
/etc/sv/murmur/run
Waiting for other services to load first does not require special features in the init system itself. Instead you can write the dependency directly into the service file in the form of a “start this service” request:
/etc/sv/cron/run
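I can’t paste my exact files here, but a hypothetical /etc/sv/cron/run in that style looks something like this (service and daemon names are illustrative):

```shell
#!/bin/sh
# Hypothetical runit service file: the "dependency" is just a start request.
# If it fails, we exit nonzero and runsv retries us a second later.
sv start socklog-unix || exit 1   # want the logger running first
exec crond -f                     # -f keeps cron in the foreground for runsv
```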
Where my implementation of runit (Void Linux) seems to fall flat on its face is logging. I hoped it would do something nice like redirect stdout and stderr of these supervised processes by default. Instead you manually have to create a new file and folder for each service that explicitly runs its own copy of the logger. Annoying. I hope I’ve been missing something.
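For anyone else bumping into this, the manual recipe I mean is a per-service log/ companion directory; paths and names here are illustrative:

```shell
#!/bin/sh
# Hypothetical /etc/sv/myservice/log/run: runsv pipes the main service's
# stdout into this process, and svlogd writes and rotates the log files.
mkdir -p /var/log/myservice
exec svlogd -tt /var/log/myservice   # -tt: prefix lines with a UTC timestamp
```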
The only other feature I can think of is “reloading” a service, which Aker does in the article via this line:
I’d make the argument that in all circumstances where you need this you could probably run the command yourself. Thoughts?
The logging mechanism works like this to be stable: logs are only lost if runsv and the log service both die. Another thing about separate logging services is that stdout/stderr are not necessarily tagged; adding all this stuff to runsv would just bloat it.
There is definitely room for improvement, as logger(1) has been broken for some time in the way Void uses it at the moment (you can blame systemd for that). My idea to simplify logging services by centralizing how logging is done can be found here: https://github.com/voidlinux/void-runit/pull/65. For me, the ability to exec svlogd(8) from vlogger(8) to get a more lossless logging mechanism is more important than the main functionality of replacing logger(1).
Ooh thankyou, having a look :)
But that solves neither starting daemons in parallel, nor starting them at all if they are run in the ‘wrong’ order. Depending on the network being set up, for example, brings complexity to each of those shell scripts.
I’m of the opinion that a DSL of whitelisted items (systemd) is much nicer to handle than writing shell scripts, along with standardized commands instead of having to know which services accept ‘reload’ vs ‘restart’ or some other variation in commands; those kinds of niceties are gone when each shell script is its own individual interface.
The runit/daemontools philosophy is to just keep trying until something finally runs. So if the order is wrong, presumably the service dies if a dependent service is not running, in which case it’ll just get restarted. Eventually things progress towards a functioning state. IMO, given that a service needs to handle the services it depends on crashing at any time anyway to ensure correct behaviour, I don’t feel there is significant value in encoding this in an init system. A dependency could also be moved to another machine, in which case init-level ordering wouldn’t help at all.
It’s the same philosophy as network-level dependencies. A web app that depends on a mail service for some operations is not going to shutdown or wait to boot if the mail service is down. Each dependency should have a tunable retry logic, usually with an exponential backoff.
That was my initial thought, but it turns out the opposite is true. The services are retried until they work. Things are definitely parallelised – there is no “exit” in these scripts, so there is no way of running them in a linear (non-parallel) fashion.
Ignoring the theory: void’s runit provides the second fastest init boot I’ve ever had. The only thing that beats it is a custom init I wrote, but that was very hardware (ARM Chromebook) and user specific.
Dependency resolving on daemon manager level is very important so that it will kill/restart dependent services.
runit and s6 also don’t support cgroups, which can be very useful.
Why? The runit/daemontools philosophy is just to try to keep something running forever, so if something dies, just restart it. If one restarts a service, then those that depend on it will either die, or they will handle it fine and continue with their life.
If they die, and are configured to restart, they will keep bouncing up and down while the dependency is down? I think having dependency resolution is definitely better than that. Restart the dependency, then the dependent.
Yes they will. But what’s wrong with that?
Wasted cycles, wasted time, not nearly as clean?
It’s a computer, it’s meant to do dumb things over and over again. And presumably that faulty component will be fixed pretty quickly anyways, right?
I would rather have my computer do less dumb things over and over personally.
Maybe; it depends on what went wrong precisely, how easy it is to fix, etc. We’re not necessarily just talking about standard daemons - plenty of places run their own custom services (web apps, microservices, whatever). The dependency tree can be complicated. Ideally once something is fixed everything that depends on it can restart immediately, rather than waiting for the next automatic attempt which could (with the exponential backoff that proponents typically propose) take quite a while. And personally I’d rather have my logs show only a single failure rather than several for one incident.
But, there are merits to having a super-simple system too, I can see that. It depends on your needs and preferences. I think both ways of handling things are valid; I prefer dependency management, but I’m not a fan of Systemd.
Why, though? What’s the technical argument? daemontools (and I assume runit) sleep 1 second between retries, which for a computer is basically equivalent to being entirely idle. It seems to me that a lot of people just get a bad feeling about running something that will immediately crash.
What’s the distinction here? Also, with microservices the dependency graph in the init system almost certainly doesn’t represent the dependency graph of the microservice as it’s likely talking to services on other machines.
Yeah, I cannot provide an objective argument as to why one should prefer one to the other. I do think this is a nice little example of the slow creep of complexity in systems. Adding a pinch of dependency management here because it feels right, and a teaspoon of plugin system there because we want things to be extensible, and a deciliter of proxies everywhere because of microservices. I think it’s worth taking a moment every now and again to step back and consider where we want to spend our complexity budget. I, personally, don’t want to spend it on the init system, so I like the simple approach here (especially since with microservices the init dependency graph doesn’t reflect the reality of the service anymore). But as you point out, positions may vary.
Unnecessary wakeup, power use (especially for a laptop), noise in the logs from restarts that were always bound to fail, unnecessary delay before restart when restart actually does become possible. None of these arguments are particularly strong, but they’re not completely invalid either.
I was trying to point out that we shouldn’t make too many generalisations about how services might behave when they have a dependency missing, nor assume that it is always ok just to let them fail (edit:) or that they will be easy to fix. There could be exceptions.
Perhaps wandering off topic, but this is a good way to trigger even worse cascade failures.
eg, an RSS reader that falls back to polling every second if it gets something other than 200. I retire a URL, and now a million clients start pounding my server with a flood of traffic.
There are a number of local services (time, dns) which probably make some noise upon startup. It may not annoy you to have one computer misbehave, but the recipient of that noise may disagree.
In short, dumb systems are irresponsible.
But what is someone supposed to do? I cannot force a million people using my RSS tool not to retry every second on failure. This is just the reality of running services. Not to mention all the other issues that come up with not being in a controlled environment and running something loose on the internet such as being DDoS’d.
I think you are responsible if you are the one who puts the dumb loop in your code. If end users do something dumb, then that’s on them, but especially, especially, for failure cases where the user may not know or observe what happens until it’s too late, do not ship dangerous defaults. Most users will not change them.
In this case we’re talking about init systems like daemontools and runit. I’m having trouble connecting what you’re saying to that.
If those thing bother you, why run Linux at all? :P
N.B. bouncing up and down ~= polling. Polling always intrinsically seems inferior to event based systems, but in practice much of your computer runs on polling perfectly fine and doesn’t eat your CPU. Example: USB keyboards and mice.
USB keyboard/mouse polling doesn’t eat CPU because it isn’t done by the CPU. IIUC the USB controller generates an interrupt when data is received. I feel like this analogy isn’t a good one (regardless). Checking a USB device for a few bytes of data is nothing like (for example) starting a Java VM to host a web service which takes some time to read its config and load its caches only to then fall over because some dependency isn’t running.
Sleep 1 and restart is the default. It is possible to have another behavior by adding a ./finish script alongside the ./run script.
I really like runit on Void. I do like the simplicity of systemd target files from a package manager perspective, but I don’t like how systemd tries to do everything (consolekit/logind, mounting, xinetd, etc.)
I wish it just did services and dependencies. Then it’d be easier to write other systemd implementations, with better tooling (I’m not a fan of systemctl or journalctl’s interfaces).
You might like my own dinit (https://github.com/davmac314/dinit). It somewhat aims for that - handle services and dependencies, leave everything else to the pre-existing toolchain. It’s not quite finished but it’s becoming quite usable and I’ve been booting my system with it for some time now.
It’s nice to be able to reload a well-written service without having to look up what mechanism it offers, if any.
Runit’s sv(8) has a reload command, which sends SIGHUP by default. The default behavior (for each control command) can be changed in runit by creating a small script under $service_name/control/$control_code.
https://man.voidlinux.eu/runsv#CUSTOMIZE_CONTROL
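As a sketch of that mechanism (service name and commands are illustrative): a control/h script runs in place of the default SIGHUP when ‘sv hup’ or ‘sv reload’ is issued, so you could, say, validate the config before reloading:

```shell
#!/bin/sh
# Hypothetical /etc/sv/nginx/control/h: runsv runs this instead of
# sending the default SIGHUP on 'sv hup nginx' / 'sv reload nginx'.
nginx -t || exit 1                    # refuse to reload a broken config
kill -HUP "$(cat /run/nginx.pid)"     # then do the reload ourselves
```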
I was thinking of the difference between ‘restart’ and ‘reload’.
Reload is only useful when:
I have not been in environments where this is necessary, restart has always done me well. I assume that the primary use cases are high-uptime webservers and databases.
My thoughts were along the lines of: if you’re running a high-uptime service, you probably don’t mind the extra effort of writing ‘killall -HUP nginx’ instead of ‘systemctl reload nginx’. In fact I’d prefer to do that than take the risk of the init system re-interpreting a reload to be something else, like reloading other services too, and bringing down my uptime.
I used to use something like logexec for that, to “wrap” the program inside the runit script, and send output to syslog. I agree it would be nice if it were builtin.
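The wrapping idea, sketched with plain logger(1) instead (daemon name and flag are made up; one caveat is that runsv ends up supervising the pipeline rather than the daemon itself, which is part of why a proper log/ service is nicer):

```shell
#!/bin/sh
# Hypothetical run script: funnel a foreground daemon's output to syslog.
exec 2>&1                                  # merge stderr into stdout
mydaemon --no-detach | logger -t mydaemon  # tag lines with the daemon name
```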