Puppy (enjoyed the fact that it was usable on a flash drive and it could be used on “public” computers)
Gentoo
Suse
Ubuntu
Arch
Gentoo
I’m sticking with Gentoo. USE flags are just too handy and I’ve had machines that have had the same install (but are kept up to date) for over a decade now. Gentoo is what I want. No more and no less.
UI and UX have nothing to do with usability anymore; all they are is fashion. They chase trends and fads.
That, plus:
Some believe that desktops should copy all UI conventions from mobile devices, because supposedly we can achieve some unicorn hybrid system which is fit for both touch and traditional desktop use. (Maybe such an optimum exists, but no one is remotely close.)
Web apps taking over. Electron et al. are cheap for whoever provides the app (you only have to develop one app instead of one per platform + you can hire web developers with generally lower salaries), but they externalize the cost to users, both in terms of resources and by throwing away all platform look & feel. Heck, even macOS is not immune these days: Arq, which used to be an excellent native Mac app, had its interface rewritten in Electron.
Wrong incentives within companies: continuously shipping new versions + earning promotions by leaving a visible mark on an application. How can you get promoted by hammering out issues in an application with a familiar, boring UI?
It’s really sad how these trends have utterly destroyed the Windows and GNOME UIs and are even slowly chipping away at macOS (which, as long as you are running native Cocoa apps, at least still largely follows conventions: they have somewhat uniform title bars, still have a menu bar, etc.).
Though it may also have something to do with age. In general, it seems that younger folks who have grown up with mobile UIs do not mind Electron apps etc. as much as those who have used computers in the Mac OS classic, Windows 95, GNOME 1.x-2.x times. In fact, they often appreciate that applications look the same between platforms.
Completely disagree. set -e is more trouble than it is worth. Most of the behavior is unintuitive and completely breaks on anything that doesn’t exit 0. Plenty of things exit non-zero without it being an error. The bash you end up writing to contort your logic to meet set -e’s whims ends up being far less readable than straightforward scripts. Besides, if you’re really concerned about errors, handle them; perhaps you can recover and not just bail the entire script.
Besides, if you’re really concerned about errors, handle them; perhaps you can recover and not just bail the entire script.
I am not the only one writing the scripts. Others are too. And if they don’t write error handling, set -e is a cheap way of preventing that. Again, till now set -e has worked well in my case. But I added a more detailed link to the blogpost around caveats.
I am not the only one writing the scripts. Others are too. And if they don’t write error handling
One of the many fantastic reasons to have code reviews.
Once again, not all exit codes that are non-zero mean failure, and even if they do mean failure, they could be different failures that could indicate a need to retry, or a way to possibly recover or provide a more helpful error message than just dying. Sure you can mitigate the last one with a trap, but even then, it’s still more general than writing a real error message for the issue encountered.
Take ls’s man page for instance (not that I advocate for using ls in scripts, typically you’d want find).
Exit status:
0 if OK,
1 if minor problems (e.g., cannot access subdirectory),
2 if serious trouble (e.g., cannot access command-line argument).
and it’s certainly not the only command to have different exit statuses for different situations. In my experience, set -e is more trouble than it’s worth, but as always, ymmv.
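To make the disagreement concrete, here is a minimal bash sketch of the failure mode described above (grep and a hypothetical input.txt stand in for any command whose non-zero exit is not an error):

#!/usr/bin/env bash
set -e

# Under set -e, this line would abort the whole script whenever grep
# simply finds no match, because grep exits 1 in that case:
#   matches=$(grep 'pattern' input.txt)

# Handling the status explicitly keeps "no match" distinct from a real
# failure (grep exits 2 on errors such as a missing file):
set +e
matches=$(grep 'pattern' input.txt)
status=$?
set -e

case "$status" in
  0) echo "found: $matches" ;;
  1) echo "no match, carrying on" ;;
  *) echo "grep failed ($status)" >&2; exit "$status" ;;
esac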
I wonder if sort’s version sort (e.g. sort -V) would be a better fix than the normalization of gcc versions in gcc-config. Also haven’t tried it, so no idea if it would work better.
This is the kind of prosaic scut work that never gets the glory, but is vital to keep things up to date and working smoothly. Excellent.
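For what it’s worth, a quick illustration of the sort -V suggestion above (GNU coreutils; the version strings are made up):

$ printf '%s\n' 10.1.0 9.3.0 10.2.0 | sort -V
9.3.0
10.1.0
10.2.0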
Typical Google using 90s Microsoft style strategies.
I’ll bet they’re blocked because they’re not allowing some of the more dastardly ReCAPTCHA v3 nonsense that is a massive privacy risk.
Embrace, Extend, Lock-in. The SaaS equivalent of EEE.
Bash. I never have to guess if it’ll be there. I work with lots of systems that I don’t have direct control over, so it makes sense to use the default. At least on Linux.
I’ll suffer through tcsh on FreeBSD for root, but my users will use bash.
Other chat protocols are handled by bitlbee (including twitter). My work and home setups are mostly the same, but at work I use weechat and wee-slack since we use slack (ugh) and I have chromium around to deal with hangouts for work reasons.
After that just a smattering of the usual suspects like various interpreted languages (mostly perl), compilers, ssh, mpv, and other things that I touch less frequently.
This seems like a massive boondoggle and a potential security issue.
That’s systemd for you.
Today’s new word :)
From The Collaborative International Dictionary of English v.0.48 [gcide]:
boondoggle \boon"dog*gle\ v.
[...]
2. a useless, wasteful, or impractical project; -- especially
one authorized by a government agency as a favor to
partisans, to employ unemployed people, or in return for
corrupt payments.
[PJC]
I’m curious what Perl (either version) is used for these days.
In the past I saw it used as a glue language for things like ad-hoc build/test pipelines, or a shell replacement when shell scripts became unwieldy. Nowadays CI tools or Python usually fill those roles.
Personally I have no interest in working with Perl6. They doubled down on everything I disliked in Perl5, and I think the language tries to be too clever. Too much “magic” and too many ways to do things for my liking. I’ve got better things to do than memorize a hundred special variables.
Booking.com and DuckDuckGo run on Perl. (5, which is the only Perl.) I worked at Booking for two years, used Perl in anger, and grew to like it. It’s still, and has always been, a perfectly good Python/PHP/Ruby alternative.
I wouldn’t write anything new in Perl, but more because of the difficulty of finding people who could work on it than any fault of the language. Fashion is a cruel top.
I ended up in charge of a large Perl5 codebase by forking an abandoned project, and we still consider Perl5 its original sin. I don’t think it’s an adequate alternative to anything but AWK one-liners (I still use it in that role and am not going to give up, but that’s about it).
Even with strict and warnings, so many things just pass silently. Sure, you can unit test it, but other languages that aren’t untyped can just detect it on their own, and produce an informative exception trace. The difference is especially noticeable in glue code that is hard to unit test. Its garbage collector still can leak memory in situations everyone else’s could handle a decade ago. The context thing (with default context almost never being documented) is still a minefield.
The community part is important too. A lot of people had been telling us they would be happy to contribute, if it wasn’t for Perl. We’ve been steadily replacing it with Python, and it’s been an improvement all around. Code is easier to read, problems are detected earlier, and contributor activity is much higher.
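To make the “passes silently” complaint above concrete, a minimal Perl sketch (the %config hash is hypothetical):

#!/usr/bin/env perl
use strict;
use warnings;

my %config = ( timeout => 30 );

# Typo: "timout". strict checks variable names, not hash keys, so this
# compiles and runs fine; $t is silently undef.
my $t = $config{timout};

# No exception, no warning, nothing printed:
print "timeout is $t\n" if defined $t;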
on the other hand the old farts who know perl might be more competent than the young hipster python programmers. that’s a heuristic i often use when evaluating projects: if the community is older they are less likely to do dumb shit.
I use it for personal projects, mostly because I’ve invested the time to learn it well.
I don’t think much new stuff is being written in Perl, but there’s plenty of maintenance.
Currently gainfully employed, and writing perl is part of my job. Yes, some of it is maintenance, but I also write new things in it as well. I also write go, python (grudgingly), shell, and some C++ here and there too.
I, personally, would be very happy to see this change. Perl6 has some neat ideas that I’d love to flex some day, but Perl5 needs to move on. No reason they both can’t co-exist.
The notion that Perl is dead dead dead dead is a tiresome one at this point.
I do it for a living in webdev (backend) and deployment automation.
I’ve never used perl5, but I discovered perl6 recently and am in love. Good for: desktop applications (assuming they don’t get too big), scripts, web applications. It essentially obviates metaprogramming because anything you could possibly want to metaprogram is already in the language (including metaprogramming, in case you want that for some reason). That means that you have less to memorize than with any other language, because once you know it, you know it. There are no codebase-specific bespoke constructs you have to learn; it’s pretty much all straight perl6 because straight perl6 is already good enough.
For desktop, there are various bindings to GTK and SDL; for web, cro is the current state of the art.
The end of controlling what you see on the Web is coming.
Not if you switch to Firefox :)
I really hope Google is shooting themselves (and Chrome’s market share) in the foot with this move… but somehow I doubt it.
Firefox development is mostly funded by Google. I can’t imagine them doing much to piss Google off.
Firefox is going to migrate to Manifest v3. Their comments aren’t really reassuring.
This actually sounds reassuring:
Regardless of what happens with Chrome’s manifest v3 proposals, we want to ensure that ad-blockers and other similarly powerful extensions that contribute to user safety and privacy remain part of Mozilla’s add-ons ecosystem while also making sure that users are not being exposed to extreme risks via malicious use of powerful APIs.
This part is scary.
Yeah, but …
We have those APIs now, don’t we? And the world isn’t collapsing.
The scary part is that Firefox thinks it’s their job to decide how users use their own computers.
It’s kind of impossible not to if you’re creating consumer facing software, isn’t it?
It’s one thing to provide safe defaults, and another thing entirely to ensure that those defaults can’t be overridden.
If it’s about the signed extension thing, please read about the history of that feature. It is not based on threat models and predictions. It was done this way to get rid of adware that was auto-installing itself and making real-world people’s lives worse. It has to be hard-coded into the EXE, because it’s only the EXE that Windows performs signature checks on and that Mozilla can sue adware developers for impersonating.
Alright. If it doesn’t affect people building from source, I guess it doesn’t matter.
So… block it on Windows?
It’s one thing to provide safe defaults, and another thing entirely to ensure that those defaults can’t be overridden.
I never understand this sort of rhetoric.
I maintain quite a few open-source projects, and contribute to others. They all make choices about what they support and what they don’t. Is it sinister of them to do so? Many of them don’t provide any sort of toggle to make them support things the developers have chosen not to support, which is what you seem to object to. Is that really controlling behavior, or just developers disagreeing about what should be supported?
My issue is that it’s user-hostile to prevent users from doing what they want with their computers. Firefox runs on my computer; I as an end user — and my grandparents as end-users — should be free to determine which extensions I run within Firefox. It’s not Mozilla’s computer to control. The ability to choose how to use one’s computer shouldn’t be reserved to developers: it should be available to everyone.
Mozilla is free to develop the software they want to develop. You’re free to not use it.
You don’t have the right to force them to develop something they don’t want to, but you seem to be trying to assert such a right.
Or, rely on blocklists: https://firebog.net/ I’ve got a little side project to automate it: https://gitlab.com/dacav/myofb
If you want something more complex, more popular, more user-friendly: pi-hole.
Until they fully control DNS as well with something like DoH.
Ah, this cat-and-mouse thing! :) Let’s try. You play adversary :)
My next move is to use the blacklist to place a filter at firewall level instead of using it at dns level.
Your move
Or use /etc/hosts
That’s actually one of the options of my scripts: populating /etc/hosts. :)
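For anyone unfamiliar, the /etc/hosts approach just null-routes known ad and tracker hostnames; a minimal sketch (the hostnames are illustrative, normally generated from blocklists like the ones linked above):

# /etc/hosts
0.0.0.0 ads.example.com
0.0.0.0 tracker.example.net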
Proxying ads through the website you want to see, so the ad urls are
http://destination.com/doubleclick/ad/1234
Definitely. But the website takes a performance penalty, I think.
Plus, I’m wondering: will tracking be as effective when there’s a proxy server between the tracker and the tracked browser? (Maybe, maybe not.)
I place Ads and DoH on the same IP address as the CDN that millions of websites use.
Wait what? I don’t get this one. How many millions of websites are passing through the same IP address? Can you elaborate?
Many of the ones that sit behind CloudFlare and Fastly.
Not every technical document begins with multiple quotations; I don’t know if that’s a good thing.
I once impressed a coworker by telling them how to rm a file beginning with a hyphen (I had just read about it 2 weeks previously).
That was an interview question we used for a while. One guy said his preferred method was to cd up and nuke the whole directory.
(This is a rewritten version of my original reply, as I can no longer edit it.)
One guy said his preferred method was to cd up and nuke the whole directory.
Wow.
The method I learned, and used when helping my coworker, was the -- option to rm.
Nowadays rm seems to detect this and gives a help text:
$ rm -testfile
rm: invalid option -- 't'
Try 'rm ./-testfile' to remove the file '-testfile'.
Try 'rm --help' for more information.
This option is included in this article https://kb.iu.edu/d/abao, which is high up in the search results for “linux remove file starting with dash”.
Then there’s this “creative” solution:
There are some characters that you cannot remove using any of the above methods, such as forward slashes, interpreted by Unix as directory separators. To remove a file with such meta-characters, you may have to FTP into the account containing the file from a separate account and enter the command:
mdel
You will be asked if you really wish to delete each file in the directory. Be sure to answer n (for no) for each file except the file containing the difficult character that you wish to delete.
That knowledge base article makes me a sad panda.
There are some characters that you cannot remove using any of the above methods, such as forward slashes, interpreted by Unix as directory separators.
Yes, slashes are directory separators at the filesystem level. If a filename contains one, you have a corrupted filesystem, and no system call can help you.
you may have to FTP into the account containing the file from a separate account
Why a separate account? Users probably only have a single account. This suggestion may be informed by a discomfort with recursion (which crops up other places, too).
You will be asked if you really wish to delete each file in the directory. Be sure to answer n (for no) for each file except the file containing the difficult character that you wish to delete.
Using this tip, you have (n - 1) opportunities to irrevocably delete a file you meant to keep. May as well cut FTP out of the loop and just run rm -i *. Better yet, run rm -i *foobar* to target just the one file.
There is a much easier way to handle filenames starting with dashes: just use a double dash (--) as the last parameter to the command, before the filename. This double dash signals ‘end of options’ and makes it possible to handle filenames starting with dashes. Here’s an example:
frank@yetunde:~/testdir$ ls
frank@yetunde:~/testdir$ # notice the directory is empty
frank@yetunde:~/testdir$ touch -testfile # this will not work...
touch: invalid date format ‘estfile’
frank@yetunde:~/testdir$ touch -- -testfile # this DOES work
frank@yetunde:~/testdir$ ls
-testfile
frank@yetunde:~/testdir$ # notice the directory now contains a file named -testfile
frank@yetunde:~/testdir$ rm -testfile # this, again, will not work
rm: invalid option -- 't'
Try 'rm ./-testfile' to remove the file '-testfile'.
Try 'rm --help' for more information.
frank@yetunde:~/testdir$ rm -- -testfile # ...while this DOES work
frank@yetunde:~/testdir$ ls
frank@yetunde:~/testdir$ # and the directory is empty again...
Thanks! I actually mentioned the double-dash option in my comment. Perhaps it wasn’t clear.
It wasn’t to me because I read over it…
I have rewritten my comment and deleted the original one you replied to. Thanks again for expanding on this.
What magic system call is ftpd using to remove files that rm cannot? mdel just deletes all files, with a prompt. Answer “no” to the files you want to keep…
My approach to deleting files with control characters and glob characters has always been to use ls -i and find -inum $inode_goes_here -delete or -exec rm {} + or | xargs rm. But most often -delete.
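For the curious, a hypothetical session using the inode approach described above (the inode number is made up; the prompt style follows the earlier example):

frank@yetunde:~/testdir$ ls -i
1835023 -testfile
frank@yetunde:~/testdir$ find . -inum 1835023 -delete
frank@yetunde:~/testdir$ ls
frank@yetunde:~/testdir$ # gone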
you’ll only be disappointed by this if you trusted slack in the first place
For many of us I assume this isn’t so much of a choice because it’s mandated by our employer.
unionize!
then the union is on slack too
yours is?
Slack is often desired by users, sometimes even set up as clandestine shadow IT uncontrolled by corporate sysadmins. If organized labour is made up of these people, it seems logical to assume they would want Slack there too.
it’s logical that they would want some form of communication, if that’s what you mean. but i don’t see where you get to slack being the obvious choice.
What makes you think that a union wouldn’t choose Slack?
union members are likely to care about freedom
Yup.
:(
Regrettably, in the US, most software professionals are opposed to unionization, and outspokenly supporting unions is hazardous to one’s employment. Furthermore, unionization presents a path to removing Slack from the workplace, but certainly does not guarantee it.
it’s illegal to fire someone for organizing a union
It’s illegal but it happens all the time.
They are?
If your admin won’t enable the gateway, wee-slack allows you to connect from weechat, an ncurses-based client: https://github.com/wee-slack/wee-slack
Thank you! I didn’t know about WeeChat itself.
You can even use WeeChat’s relay functionality to connect from Emacs: https://github.com/the-kenny/weechat.el
I’ve been able to “survive” with these gateways. Even though you lose some features, they are good enough. The main issue is with threaded discussions, which get mixed in with the normal content.
The IRC gateway is the only way I’ll use Slack. Beyond not wanting to devote gigabytes of ram to chat, I also have no desire to see the flurry of gifs, emojis, and reactions that the more “modern” view provides.
Does it even matter at this point? That brand is incredibly tainted after the recent actions of sourceforge in particular.
It matters to us in Octave. I know everyone is just screaming GET OFF SOURCEFORGE ALREADY, but honestly, it’s a lot of work and Sourceforge has not done anything evil to us. And there has been a lot of misreporting about what it actually has done.
Yet. Sourceforge has not done anything evil to you… yet.
Bundling malware with installers is pretty much indefensible in my eyes…
Really well-written article that highlights some realities about the over-application of Docker and the neglect, in this case, of older containerization technologies such as LXC.