This is actually quite interesting, and I didn’t know “distroless” was even a thing. I prefer Debian for my base images, but Redhat certainly has the muscle to get some steam behind this idea, and at the end of the day the focus is more on the application than on the OS, so it should theoretically be distro-agnostic anyhow.
I don’t think this is particularly RedHat-specific, you could probably implement the same thing with Debian, you just need the ability to install packages into a specific root directory? Which dpkg at least does.
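For illustration, a rough sketch of that idea on a Debian host (the package set and image name are just examples, and a serious distroless build would need proper dependency handling):
$ ROOT=/tmp/approot && mkdir -p "$ROOT"
# fetch the .debs without installing them on the host
$ apt-get download libc6 ca-certificates
# unpack their contents into the alternate root (dpkg proper can also install
# into another root via --root=, but then it needs its database seeded there)
$ for deb in ./*.deb; do dpkg-deb -x "$deb" "$ROOT"; done
$ tar -C "$ROOT" -c . | docker import - my-distroless-base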
“Distroless” is an oxymoron. It might not be based on an existing well-known distribution but it’s still a distribution. You still rely on them to maintain the tooling that generates the image, receive security updates, and so forth.
Easier way to eavesdrop on Signal users: ask Google to send them a modified apk and update it silently (which Android can do for Google Play apps). Or update Signal from Signal’s own update mechanism.
Those huge weak points only exist because of signal’s insistence on not allowing open source builds to be distributed.
It’s worse yet than “if Google uploaded a poison APK”.
Google’s keyboard “GBoard” communicates with the internet for various reasons. You don’t need to poison a fake APK - you can spy on the keyboard directly.
And it has rudimentary ML capability, since it can and will correct words based on what you’ve typed previously.
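That’s why I use simple keyboard.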
Absolutely true that you can install different keyboards - however the defaults always retain a significant amount of power.
And Signal’s messaging is poor here as well. They never mention anything about GBoard and its ability to spy on every character you type and substitute. Using Signal with defaults can get you taken for a black van ride, if you’re not careful.
Isn’t the point of signal to make mass surveillance too expensive via rock solid E2E for the masses? You don’t get a van ride if someone isn’t already seriously invested in your messages since mass interception is too difficult using the above methods. Your phone could just as easily be attacked by any other software attack surface. E2E encryption doesn’t solve device/OS level security problems.
Really. A black van ride. I mean, you’re not wrong, but this escalated pretty quickly to “Signal is responsible for my abduction by not communicating that the dangerous GBoard is Google’s default in smart phones.”
It’s certainly the outlier, but it has happened. And it primarily happens with whistleblowers and similar leak-to-news-agencies cases.
And Naomi Wu (RealSexyCyborg) in China has reported similar with dissident friends who were black-bagged after talking about sensitive stuff on Signal.
Doing the usual sensitive stuff like sexting or getting passwords isn’t going to have any real ramifications. But if you involve reporters or dissidents, your phone won’t protect you.
Easier way to eavesdrop on Signal users: ask Google to send them a modified apk and update it silently (which Android can do for Google Play apps).
I think Google can’t do that, if Signal signs its app itself. (See android.com: Manage your own signing key) In this case Google could only give you a different app with the same name if it’s the initial installation of the app. But for updates this wouldn’t work. Also this would sooner or later be detected by the public since the signature can be compared manually after the fact.
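For example, one way to do that comparison manually (apksigner ships with the Android SDK build-tools; the APK filenames here are just placeholders):
$ apksigner verify --print-certs Signal-from-website.apk
$ apksigner verify --print-certs Signal-from-play.apk
# the certificate digests printed for the two copies should match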
Also you can download the APK directly from Signal.org. This way you still have to trust Signal and its TLS CA. The APK updates itself from then on (as far as I know). While the underlying OS is still Android from Google or iOS from Apple, IMO it gets silly to focus on Signal in that regard.
I’m happy that Signal exists and that it has the potential to appeal to the masses while providing the technical means to keep (legally acting) businesses from exploiting the chat data. Of course any improvement is welcome nonetheless.
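Who knows with Google, these days? They are known to force-install apps without notification or consent: https://news.ycombinator.com/item?id=27558500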
Who checks the signature of the app you’re running? It seems like it’d be pretty easy to have Android not check the signature on startup, if you’re considering Google acting against the user.
If we go full paranoia: theoretically there is a possibility that the Google Play app installer service could secretly circumvent the whole “updates-with-same-certificate” model by e.g. replacing the cert in the package manager’s database, right? (Assuming Play has parts running as root which I think it did?)
Even on rooted devices, a changed certificate will cause the device to first uninstall (and remove all associated data) the existing app.
If we assume Google is going to be complicit in a surveillance measure in the future, they would have had to add a covert option for the OS to skip that check at some point in the past.
But if we assume Google to be complicit, all bets are off anyway and you should probably side-load Signal to begin with. And replace the phone OS with one you built yourself after auditing all of its millions of lines of code.
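That would require you to have Signal’s signing keys, which I hope live in some HSM and which require manual physical interaction to make use of.
This would work, but only for fresh installs. The system refuses to update an installed apk with one that’s signed by different keys.
You can force the install, but it will first uninstall the existing application with all of its data.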
Those huge weak points only exist because of signal’s insistence on not allowing open source builds to be distributed.
Ironically, this insistence means that on Linux the only installation is via their third party apt repository rather than from an official distribution package source. It’s the exact opposite on Android, where the only installation is from Google, the “official” Android distribution package source. This is exactly the wrong way round to how I’d like it because in both cases more trusted sources are available.
I am suspicious of all CLAs at this point. They can theoretically be used for good, e.g. updating to better FOSS licenses, but as long as they can be used to release the software under a non open source license, as Audacity is planning to for the Apple store, it is difficult to trust that they will not be used to make the software entirely proprietary at a future date.
I think that a Developer Certificate of Origin covers most of the legitimate uses of a CLA, and the risks of CLAs do not justify any remaining benefits.
As for the Apple store, I’m becoming increasingly convinced that the only reasonable approach is to take a stand against it. Why can’t the Apple store’s policies be written in such a way that GPL software can be published on it without changing the license? It is a user-hostile choice by Apple, it’s not the first such choice, and it won’t be the last. The community should protest Apple and demand that it allow GPL software, and employ alternate installation methods until Apple agrees (hahaha). Realistically, what I expect is that Apple will gradually lock down macOS the way it has iOS, and open source developers will have to make more and more damning compromises in order to remain relevant on the platform. Why not make a stand now, while alternate installation methods are still possible, if difficult and inconvenient? Do you think it will be easier to make a stand in the future?
…it is difficult to trust that they will not be used to make the software entirely proprietary at a future date.
Does this mean that you avoid contributing to any project licensed under MIT or other BSD-style licenses, and contribute only to projects that use the GPL without a CLA?
Because any BSD-style licensed project, with or without a CLA, can be made “entirely proprietary” at any time.
That’s a fair point, and the quick answer is no. I do prefer copyleft software, precisely because of this concern, but not to the point I won’t use or contribute to permissively licensed software.
To me, it seems a CLA for a license like the GPL makes it misleading. The main reason I’m interested in copyleft is to prevent the software from being made proprietary, taking the community’s work and claiming all of the benefits for one entity, and a CLA undermines that protection. I’m more upset if GPL-licensed software becomes proprietary than if the same thing happens to MIT/BSD licensed software, because I always expected that to happen to permissively licensed software, I use it / contribute to it with that risk in mind.
Is there a point to a GPL license if it’s undermined by a CLA?
Is there a point to a GPL license if it’s undermined by a CLA?
Is it any worse than a company that refuses to accept outside contributions, but still releases their codebase as GPL? I think in that case the company would be more likely to receive praise than criticism for making the code available at all.
This is also a fair point. When id Software released Doom and Quake under the GPL, that was pretty great, even though the community was certainly not involved in the original development process.
We do need to be wary of making the perfect the enemy of the good. Software released under the GPL with a CLA is certainly better than not making it open source at all. Is it better than a permissive license? I’m not sure, I think it’s about the same. But you raise a good point that a CLA is more worrisome for a community effort with significant external contributions, since it gives them the ability to “privatize” community contributions to some degree, than for a largely internally developed program, where it would just mean the company made its own product proprietary again.
In the long term, I would be less willing to rely on GPL software with a CLA for anything important than software with stronger copyleft.
I wanted a hardware mute button, and then I realised that I already have a remote button - my presentation remote. That has a button that sends a Tab. So I wrote a ten line app that converts Tab presses into system microphone mute toggles.
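On a Linux desktop you can get something similar without writing an app at all, e.g. with xbindkeys and ALSA (assuming X11; the keysym and mixer control are examples and may differ on your setup):
$ cat >> ~/.xbindkeysrc <<'EOF'
"amixer set Capture toggle"
    Tab
EOF
$ xbindkeys   # note this binds Tab globally, so a spare or remote-only key is friendlier for daily typing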
Wow, this blog post is so lacking in empathy for users that I’m surprised it made it onto a reputable distro’s blog. Instead of spilling 1000 words on why “static linking is bad”, maybe spend a little time thinking about why people (like me) and platforms (like Go/Rust et al) choose it. The reason people like it is that it actually works, and it won’t suddenly stop working when you change the version of openssl in three months. It doesn’t even introduce security risks! The only difference is you have to rebuild everything on that new version, which seems like a small price to pay for software that works, not to mention that rebuilding everything will also re-run the tests on that new version. I can build a Go program on NixOS, ship it to any of my coworkers, and it actually just works. We are on a random mix of recent Ubuntu, CentOS 7 and CentOS 8 and it all just works together. That is absolutely not possible with dynamic linking.
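As a small illustration of that portability claim (assuming the Go toolchain; the binary and host names are placeholders):
$ CGO_ENABLED=0 go build -o myprog .
$ ldd myprog
not a dynamic executable
$ scp myprog old-centos7-box:   # the same binary runs there unchanged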
It works well if all you care about is deploying your application. As a distro maintainer, I’m keeping track of 500 programs, and having to care about vendored/bundled versions and statically linked-in dependencies multiplies the work I have to do.
But … that’s a choice you make for yourself? No application author is asking you to do that, and many application authors actively dislike that you’re doing that, and to be honest I think most users don’t care all that much either.
I’ve done plenty of packaging of FreeBSD ports back in the day, and I appreciate it can be kind of boring, thankless, “invisible” gruntwork and, at times, be frustrating. I really don’t want to devalue your work or sound thankless, but to be honest I feel that a lot of packagers are making their own lives much harder than they need to be by sticking to a model that a large swath of the software development community has, after due consideration and weighing all the involved trade-offs, rejected and moved away from.
Both Go and Rust – two communities with pretty different approaches to software development – independently decided to prefer static linking. There are reasons for that.
Could there be some improvements in tooling? Absolutely! But static linking and version pinning aren’t going away. If all the time and effort spent on packagers splitting things up would be spent on improving the tooling, then we’d be in a much better situation now.
…but to be honest I feel that a lot of packagers are making their own lives much harder than they need to be by sticking to a model that a large swath of the software development community has, after due consideration and weighing all the involved trade-offs, rejected and moved away from.
I think this is a common view but it results from sampling bias. If you’re the author of a particular piece of software, you care deeply about it, and the users you directly interact with also care deeply about it. So you will tend to see benefits that apply to people for whom your software is of particular importance in their stack. You will tend to be blind to the users for whom your software is “part of the furniture”. From the other side, that’s the majority of the software you use.
Users who benefit from the traditional distribution packaging model for most of their software also find that same model to be painful for some “key” software. The problem is that what software is key is different for different classes of user.
A big reason people ship binaries statically linked is so it’s easier to use without frills, benefiting especially users who aren’t deeply invested in the software.
For me personally as an end user, if a program is available in apt-get then I will install it from apt-get first, every time. I don’t want to be responsible for tracking updates to that program manually!
I believe static PIE can only randomize a single base address for a statically linked executable, unlike a dynamically linked PIE executable, where all loaded PIC objects receive a randomized base address.
Did a bit of rummaging in the exe header; I’m not 100% sure what I’m looking for to confirm that, but it had a relocation section and all symbols in it were relative to the start of the file as far as I could tell.
Edit: Okay, it appears the brute-force way works. I love C sometimes.
aslr.c:
#include <stdio.h>
int main() {
    int (*p)() = main;
    printf("main is %p\n", p);
    return 0;
}
Testing:
$ gcc aslr.c
$ ldd a.out
linux-vdso.so.1 (0x00007ffe47d2f000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2de631b000)
/lib64/ld-linux-x86-64.so.2 (0x00007f2de6512000)
$ ./a.out; ./a.out; ./a.out
main is 0x564d9cf42135
main is 0x561c0b882135
main is 0x55f84a94f135
$ gcc -static aslr.c
$ ldd a.out
not a dynamic executable
$ ./a.out; ./a.out; ./a.out
main is 0x401c2d
main is 0x401c2d
main is 0x401c2d
$ gcc -static-pie aslr.c
$ ldd a.out
statically linked
$ ./a.out; ./a.out; ./a.out
main is 0x7f4549c07db5
main is 0x7f5c6bce5db5
main is 0x7fd0a2aaedb5
Note ldd distinguishing between “not a dynamic executable” and “statically linked”.
I don’t know much about security, and this is an honest question. My understanding is that ASLR mostly protects you when the executable is compromised through a stack overflow or similar attack, right? Aren’t these problems mostly a thing of the past for the two languages that favor static linking, like Go and Rust?
There are a lot of code-injection vulnerabilities that can occur as a result of LD_LIBRARY_PATH shenanigans. If you’re not building everything relro, dynamic linking has a bunch of things like the PLT GOT that contain function pointers that the program will blindly jump through, making exploiting memory-safety vulnerabilities easier.
As an author of open source malware specifically targeting the PLT/GOT for FreeBSD processes, I’m familiar with PLT/GOT. :)
The good news is that the llvm toolchain (clang, lld) enables RELRO by default, but not BIND_NOW. HardenedBSD enables BIND_NOW by default. Though, on systems that don’t disable unprivileged process debugging, using BIND_NOW can open a new can of worms: making PLT/GOT redirection attacks easier over the ptrace boundary.
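For anyone who wants to check a given binary, something like this shows whether full RELRO ended up enabled (a Linux/glibc sketch, reusing aslr.c from above):
$ gcc -Wl,-z,relro,-z,now aslr.c -o a.out
$ readelf -l a.out | grep GNU_RELRO          # the read-only-after-relocation segment
$ readelf -d a.out | grep -E 'BIND_NOW|NOW'  # eager binding, i.e. no lazy PLT resolution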
My personal opinion is that support for ARMv6+VFPv2 should be maintained in distributions like Debian and Fedora.
My personal opinion is exactly the opposite. Raspberry Pi users are imposing an unreasonable burden on ARM distribution maintainers. For better or worse, the entire ecosystem standardized on ARMv7 except Raspberry Pi. The correct answer is to stop buying Raspberry Pi.
It could be worse. At least the Pi is ARMv7TDMI. Most AArch32 software defaults to Thumb-2 now and the Pi is just new enough to support it. I maintain some Arm assembly code that has two variations, ARMv6T2 and newer, everything else. I can probably throw away the older ones now, they were added because an ultra low-budget handset maker shipped ARMv5 Android devices and got a huge market share in India or China about 10 years ago and a user of my library really, really cared about those users.
No idea, sorry. I never saw them, I just got the bug reports. Apparently they’re all gone (broken / unsupported) now. It was always a configuration that Google said was unsupported, but one handset manufacturer had a custom AOSP port that broke the rules (I think they also had their own app store).
I also agree with you that the correct answer is to stop buying Raspberry Pi, especially their ARMv6 products. But for most beginners in electronics, it seems like “Raspberry Pi” equals “single board computer”. They aren’t going to stop buying them.
I don’t love MIPS64 or i686 either, but the reality is that the hardware exists and continues to be used. Maintainers should deal with that, IMHO.
The long term marginal cost of software is zero. So Free Software always catches up in the end.
However, there’s room for proprietary software until that happens. When something is new and hasn’t been done before, the proprietary model provides the funds up front to pay for it, with the promise of making it back in the short term.
The other exception is niche software that fulfills some very specific use case. It takes so long for viable replacement Free Software to appear, it might as well be never.
Automatic updates are baked into snap’s DNA. If you didn’t want this, simply don’t install the snap.
Choosing this medium to install your dev environment and then getting cranky when it does precisely what it was meant to do seems a bit unintentionally disingenuous to me.
I’m not necessarily arguing for the correctness of Canonical’s choices around snaps here, because I have mixed feelings about them myself, but they make it pretty clear at every available opportunity that this is how they work.
On a more helpful note, have you tried setting the refresh.hold system wide configurable which you could set to an arbitrary date years in the future?
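For reference, that looks something like this (the date is an example; whether holds this far out actually stick is debated further down in the thread):
$ sudo snap set system refresh.hold="2030-01-01T00:00:00Z"
$ snap refresh --time   # shows the hold and the next scheduled refresh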
There’s another part to the disconnect: Ubuntu is trying to replace the deb parts of (large parts of) its ecosystem with snap equivalents. Putting these two things together makes the picture pretty interesting. How are LTS releases supposed to work with this kind of setup? I’ve had bugfix releases of software introduce bugs that blocked my work. Not often, but it does happen.
Operationally, I wish updates were always automatically-update-on-a-schedule-I-define, a la Patch Tuesday or something similar. That seems like the only possible sane option, but it seems tragically rare.
That’s interesting; I’d only heard of them doing this with Chrome. What else are you referring to, if you don’t mind my asking?
For sure there are going to be problems as this is absolutely a radical departure from traditional UNIX vendor app update strategies.
Their assertion is that the auto-update feature was a key requirement for bringing on board third-party vendors who otherwise wouldn’t even consider supporting the Ubuntu platform.
I think there’s some value in that, but as you say there are definitely kinks to be worked out here. On the up side the Ubuntu folks are pretty good about engaging with the community. I’d suggest pinging Snapcraft or Alan Pope on Twitter. It’s not his job but he’s often good about taking the time to help people past snap difficulties.
Now that I think of it, I think part of the disconnect is that vendors think of their software as an end-product that nothing else depends on, so Slack or Spotify or whatever can update their own little worlds without affecting anything else. In the real world however, people script things to download stuff from Slack or ask Spotify to do things, and so those applications changing can break others’ functionality. Even non-techies do this, mentally; how many people have complained to tech support when a silent, undesired application update “moved that damn button” and broke their workflow?
I would urge you to be sure to get both sides of the story on the Mint snap amputation.
I don’t have any issue with the decision itself per se but the way they handled it was IMO very unfortunate and unprofessional.
Specifically, the Ubuntu folks said they were totally willing to work with Mint to accomplish the removal in a clean way that makes sense technically, but that they were never approached.
IMO when you choose to base your Linux distribution on an upstream distro, you agree to abide by the technical decisions that upstream makes, or at least to work with them in evolving your remix/flavor in a way that makes sense. None of that happened here.
Yes. The differences between Docker and LXD are interesting. Docker provides a convenient packaging format and allows for layering of components to create application stacks.
LXD seems to want you to start out with a single image and seems to have better / more / deeper integration with system services like networking.
I’d love to see someone do an in-depth technical comparison of the two technologies.
Ubuntu is trying to replace the deb parts of (large parts of) its ecosystem with snap equivalents.
I’ve only seen evidence of this in the packages that are particularly challenging to maintain as debs due to the way particular upstreams operate. Browsers are an example where security backports aren’t feasible so the debs that have been shipped by most distributions have for years been “fat” anyway. Snaps are mainly solving the problems of IoT device updates, third party (both Free and proprietary) software distribution directly to users and the areas where debs are a poor fit, such as shipping wholesale featureful updates to users such as with web browsers. The traditional deb parts are not affected.
Operationally, I wish updates were always automatically-update-on-a-schedule-I-define, a la Patch Tuesday or something similar.
Disclosure: I’m a Canonical employee and Ubuntu developer (and a Debian developer, if we’re going there). I’m not involved in snap architecture at Canonical (though I do maintain some snaps as I do debs in Debian!).
Choosing this medium to install your dev environment and then getting cranky when it does precisely what it was meant to do seems a bit unintentionally disingenuous to me.
Agree. So what I say below doesn’t apply to the situation in the article.
but they make it pretty clear at every available opportunity that this is how they work.
They may make it clear how they work, but they don’t make it clear when you are installing a snap. Starting with Ubuntu 20.04, they’ve started installing snaps when you install a deb.
The reason the snap is installed is because the chromium browser moved to a snap so we could iterate faster and consume fewer developer resources packaging a non-default web browser for multiple releases. If we didn’t have the deb to snap migration in 19.10 (and 20.04) then users upgrading from 19.04 to 19.10 or from 18.04 to 20.04 would lose their browser.
We decided at the time not to punch people in the face during the upgrade and make them choose “Do you want to keep your browser, by migrating to the snap?” because those kind of mid-upgrade dialogs are pretty awful for end users. Generally the vast majority of the audience don’t actually care what packaging scheme is used. They just want to click the browser icon and it open.
Yes, we could have communicated it better. We’re working on improving it for the 20.04.1 update which is due next week. We certainly do listen to the feedback we get.
We decided at the time not to punch people in the face during the upgrade and make them choose “Do you want to keep your browser, by migrating to the snap?”
You could use the motd announcements to give people ample notice.
Please see my answer above to icefox’s assertion along the same lines. This is only for Chromium, and there are good reasons behind that choice (They were spending vast amounts of engineering time JUST on chromium).
As I said in that response, IMO Linux Mint is very much in the wrong here, and has handled the situation abysmally. They could have enacted the removal of snaps/snapd in a much cleaner way without all the drama, and they chose not to do that.
Please see my answer above to icefox’s assertion along the same lines.
Yeah, I saw it after I had posted this comment.
This is only for Chromium, and there are good reasons behind that choice
You are conflating two separate things. There may be good reasons™ behind packaging Chromium as a snap only. There is no good reason to do so behind the users’ backs. If one uses apt to install something, one most certainly doesn’t expect to be installing a snap.
What you chose to call ‘drama’ is a political stance. The issue was not “to accomplish the removal in a clean way”. It was “This breaks one of the major worries many people had when Snap was announced and a promise from its developers that it would never replace APT.”. You may not like or agree with that position but there is nothing wrong with that.
Also it is disingenuous to think this will stop at Chromium. Chromium is just for testing the waters.
# What version do I have installed?
$ snap info caprine | tail -n 1
installed: 2.47.0 (37) 66MB -
# Is there a pending update? (yes)
$ snap refresh --list | grep caprine
caprine 2.48.0 38 sindresorhus -
# Refresh blocked while app is running (experimental setting, should land "soon")
$ snap refresh caprine
error: cannot refresh "caprine": snap "caprine" has running apps (caprine)
# Let's kill the app
$ sudo killall caprine
# Try refreshing again
$ snap refresh caprine
caprine 2.48.0 from Sindre Sorhus (sindresorhus) refreshed
# Let's revert it because the update was terrible
$ snap revert caprine
caprine reverted to 2.47.0
I appreciate you’re “burned” by this and will likely switch to Mint, so this information may be academic, but for others stumbling on this thread it may be handy. Hope Linux Mint works out for you.
I’d not expect the Alan Pope to respond, you only exist in my podcasts :). How cool!
Now on point, also responding to the revert comment earlier. I did find that, but my point is about not being able to disable updates, not about not being able to rollback. Rolling back would not have been necessary if I could have done the update manually in two weeks, after which the plugins of the IDE would probably have been updated as well. I do however appreciate the option to rollback.
You also state that there are ways to disable updates and give the example of sideloading an application, after which it never gets updated. That also is not my main gripe. I do want my cake and eat it, but not automatically. As in, I want to update when I’m ready and expect it, not changing the engine while the car is driving on the highway.
Is there an official way to disable automatic updates? However hard to setup, with big red flashing warnings, promising my firstborn to whatever deity? Setting a proxy or hosts file might stop working, an “official” way would still give me all the other benefits of snaps.
As said, I do like updates and ease of use (not manually sideloading every snap, just apt upgrade all the things), with just a bit more control.
There’s currently no way to fully disable completely all updates. I appreciate that’s hard news to take. However, I have been having many conversations at Canonical internally about the biggest pain points, and this is clearly one. Perhaps we could add a big red button marked “Give Alan all my cats, and also disable updates forever”. Perhaps we could really forcefully nag you with irritating popups once you get beyond the 60 days limit that’s currently set, I don’t know. But I agree, something needs to be done here, especially for desktop users who are super sticky on particular releases of their favorite tools.
It’s Friday night, and I’m enjoying family time, but rest assured I’ll bring it up again in our meetings next week.
What about a big button to disable automatic updates, and then give initially gentle, gradually escalating nags once updates are available?
Maybe in the handy “Software Manager” popup UI we’ve already got built. Checkboxes to select which ones you want to update. Make it really easy to “accept all”, after my workday is over.
Did you use Windows 10? Did you read the comments when Microsoft did something similar? They were heavily criticised; there was a vocal cohort of users annoyed even by that. When I was near a deadline and Windows told me I couldn’t postpone updates anymore and broke my flow, I happened to understand them.
This is a hard UX problem, despite how simple it might initially seem.
It is a problem, but not that hard. A simple matter of control. Do I update or not? .hold? Then not. It’s perfectly feasible to add a .hold.forever. Your workflow is not broken and 99% of average Joes have the latest everything.
Now, this is just my opinion as an old-schooler used to absolute control of my computing. I gladly let everything be automated until I don’t.
It is hard if you wish to please a wide range of users, ranging from novices to power users, and wish to provide a UI serving the different needs of these different cohorts. I mean it is hard if you want to do it well. :) Microsoft tried hard, and did not get it, IMHO. Other vendors fail even more miserably.
Automatic updates are baked into snap’s DNA. If you didn’t want this, simply don’t install the snap.
Choosing this medium to install your dev environment and then getting cranky when it does precisely what it was meant to do seems a bit unintentionally disingenuous to me.
I’ve seen snaps come up on lobsters before but had no idea that they automatically update with no option to disable the updates. I thought they were basically like Canonical-flavored docker containers.
Thankfully I saw this post before I update my laptop next week. I’ll be avoiding Ubuntu and their snaps for now.
I’ve seen snaps come up on lobsters before but had no idea that they automatically update with no option to disable the updates. I thought they were basically like Canonical-flavored docker containers.
Couple of important differences between Docker and snaps.
Docker containers leverage Linux’s namespace and cgroup features to offer each container its own pretty much complete Linux userland.
So that means that you can, as a for instance, have 3 containers all running on a Fedora system, one Alpine Linux, another Ubuntu, and a third something else.
Snaps, on the other hand, are application containers. They just bundle the application itself and any libraries that are necessary to run it.
Thankfully I saw this post before I update my laptop next week. I’ll be avoiding Ubuntu and their snaps for now.
And that’s the beauty of the Linux ecosystem right? :) I’d guess more people on here are Debian or maybe Arch users :)
I am amazed at how Canonical always tries to make up their own technology instead of embracing existing open-source projects… and it NEVER works, and they keep trying it anyway. Let’s look at the list:
Upstart vs. systemd
Mir vs. Wayland
Snap vs. Flatpak
Unity vs. GNOME
Am I missing any? I feel like there’s more. Does anyone know why the hell they do this? Is it them and Red Hat having a technological pissing match that Red Hat usually wins (systemd and Flatpak come out of Red Hat after all)? Or do they just dream of making a de-facto standard that gives them lots of power, which this article seems to imply?
Either way, good on Mint for pushing back against this nonsense.
IMHO, what happens is that Canonical validates the existence of alternatives by beginning work, causing alternative efforts to start up or for existing alternative efforts to gain momentum. Then a certain vocal faction publicly trash Canonical’s efforts and try to swing the community towards one of the alternatives.
None of this is intended to diminish the value of the alternatives. Alternatives are good. They’ve always existed in our ecosystem (eg. sendmail, qmail, postfix, exim; apache, nginx; etc). But in the case of a Canonical-led effort, a specific vocal crowd makes it political.
An exception is Unity vs. GNOME. That happened after GNOME didn’t want to follow Canonical’s design opinions on how the desktop should work (even though they represented the majority of GNOME desktop users!), and refused patches. But then, as above, politics happened anyway.
Author’s note: I use RedHat as a stand-in for the entire RedHat/CentOS/Fedora ecosystem a lot in this.
tl;dr Redhat is trying to address pain points for existing users, Ubuntu is going after new markets. Both are laudable goals, but Ubuntu’s strategy is riskier.
I think a lot of this comes down to market demand. With both the “Mir v Wayland” and “Unity v GNOME” RedHat and Canonical were both trying to address a market need.
With Wayland and GNOME, Redhat wanted a more modern display server and desktop environment so that its existing customers didn’t have to deal with the big ol’ security hole that is X. (Don’t get me wrong, I love X11 and still think it’s valuable, but I think RedHat’s market disagrees).
With Mir and Unity, Ubuntu wanted a display server and DE that would scale from a phone to a multi-monitor workstation. This is a laudable goal, and it did see a market need to address.
The difference is, Ubuntu was trying to address a market that it wanted while Redhat was trying to address the needs of a market that it actually had. Redhat has tons of customers actively using Wayland and GNOME for their intended purpose, and that gives a project momentum. Ubuntu also had loads of customers using Mir and Unity, but for only one of the multiple purposes that they were intended to be used for. Engineering always has trade-offs; designing a display server and DE for such a wide array of purposes is bound to have rough edges for any single one of those purposes. Ubuntu was asking its primary market, desktops and laptops, to suffer those rough edges for the greater Canonical purpose.
Even with snap v flatpak, again Ubuntu’s goals are much wider with snap than Redhat’s are with Flatpak, judging from what I’ve seen. Flatpak is a way for Redhat to distribute software to Linux/systemd in a way that’s more robust than the current RPM method, and Fedora is actively using flatpaks as a base to their Silverblue variant. Whereas with snap, I think that Ubuntu wants to be the one-stop shop for distributing software on Linux. Again: engineering, trade-offs, rough edges, etc.
The Redhat method of integrating the new package format seems to be coming up with an entirely different distribution to leverage flatpak functionality to its fullest while the kinks are worked out. Canonical’s method seems to be: “Let’s shove it into our flagship product, and work out the kinks there”. This comes with a lot of inherent risks.
One quote I think is quite relevant to the current discussion:
“Some people claimed Bazaar did not have many community contributions, and was entirely developed inside of Canonical’s walled garden. The irony of that was that while it is true that a large part of Bazaar was written by Canonical employees, that was mostly because Canonical had been hiring people who were contributing to Bazaar - most of which would then ended up working on other code inside of Canonical.”
Note: Apparently Canonical’s solutions often are the forerunners and other people the copycats… I didn’t actually know that, thanks for the corrections. Frankly it makes the fact that their solutions tend to come out on the losing side even more interesting…
Browsers are pretty much already “bundled” and exist outside the traditional distribution model. Pretty much all stable distributions have to take upstream changes wholesale (including features, security fixes and bug fixes) and no longer cherry-pick just security fixes. The packaging of browsers as snaps are merely admitting that truth.
The chromium-browser deb is a transitional package so that users who are upgrading don’t get a removed Chromium. It is done this way for this engineering reason - not a political one. The only (part) political choices here are to ship Chromium as a snap and no longer spend the effort in maintaining packaging of Chromium as a deb. Background on that decision is here: https://discourse.ubuntu.com/t/intent-to-provide-chromium-as-a-snap-only/5987
Ubuntu continues to use the traditional apt/deb model for nearly everything in Ubuntu. Snaps are intended to replace the use case that PPAs and third party apt repositories are used for, and anything else that is already shipped “bundled”. For regular packages that don’t have any special difficulties packaging with the traditional model, I’m not aware of any efforts to move them to snaps. If you want to never use snaps, then you can configure apt to never install snapd and it won’t.
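For completeness, one way to do that is an ordinary apt pin, roughly (the filename is arbitrary):
$ cat /etc/apt/preferences.d/no-snapd
Package: snapd
Pin: release a=*
Pin-Priority: -10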
Free Software that is published to the Snap Store is typically done with a git repository available so it is entirely possible for others to rebuild with modifications if they wish. This isn’t the case for proprietary software in the Snap Store, of course. The two are distinguished by licensing metadata provided (proprietary software is clearly marked as “Proprietary”). This is exactly the same as how third party apt repositories work - source packages might be provided by the third party, or they might not.
Anyone can publish anything to the Snap Store, including a fork of an existing package using a different name. There’s no censorship gate, though misleading or illegal content can be expected to be removed, of course. Normally new publications to the Snap Store are fully automated.
The generally cited reason for the Snap Store server-end not being open is that it is extensively integrated in deployment with Launchpad and other deployed server-end components, that opening it would be considerable work, that Canonical spent that effort when the same criticism was made of Launchpad, but that effort was wasted because GitHub (proprietary) took over as a Free Software hosting space instead, and nobody stood up a separate Launchpad instance anyway even after it was opened, so Canonical will not waste that effort again.
The generally cited reason for the design of snapd supporting only one store is that store fragmentation is bad.
I hope that sheds some clarity on what is going on. I tried to stick to the facts and avoided loading the above with opinion.
Opinion: Ubuntu has always been about making a bunch of default choices. One choice Ubuntu has made in 20.04 is that snaps are better for users than third party apt repositories (because the former run in a sandbox and can be removed cleanly; the latter is giving third parties root on your system and typically break the system such that future release upgrades fail). Some critics complain that users aren’t being asked before the Chromium snap is installed. But that would be a political choice. Ubuntu is aimed at users who don’t care about packaging implementation details and just want the system to do something reasonable. Ubuntu’s position is that snaps are reasonable. So it follows that Chromium packaging should be adjusted to what Ubuntu considers the best choice, and that’s what it’s doing.
Disclosure: I work for Canonical, but not in the areas related to Mint’s grievances and my opinions presented here are my own and not of my employer.
The chromium-browser deb is a transitional package
I can’t speak about Mint, but in Ubuntu the chromium-browser deb installs Chromium as a snap behind the scenes.
The generally cited reason for the Snap Store server-end not being open is that it is extensively integrated in deployment with Launchpad and other deployed server-end components, that opening it would be considerable work, that Canonical spent that effort when the same criticism was made of Launchpad, but that effort was wasted because GitHub (proprietary) took over as a Free Software hosting space instead, and nobody stood up a separate Launchpad instance anyway even after it was opened, so Canonical will not waste that effort again.
So, unless you’ll own the market with the product it’s not worth open sourcing? IMO releasing a product open source is never “wasted effort” because it may prove useful in some capacity whether you as the original author know it or not. It may spawn other ideas, provide useful components, be used in learning, the list goes on and on.
IMO releasing a product open source is never “wasted effort”
It’s very convenient to have this opinion when it’s not you making the effort. People seem to care a lot about “providing choice” but it somehow almost always translates into “someone has to provide choice for me”.
It’s very convenient to have this opinion when it’s not you making the effort.
True. I should have worded that better. I was talking about the case of simply making source available, not all the added effort to create a community, and make a “product”, etc. I still don’t believe companies like Canonical have much of a leg to stand on when arguing that certain products shouldn’t be open source when open source is kinda their entire thing and something they speak pretty heavily on.
Yep. Just to be clear, open-sourcing code isn’t free. At an absolutely bare minimum, you need to make sure you don’t have anything hardcoded about your infra, but you’ll actually get massive flak if you don’t also have documentation on how to run it, proper installation and operation manuals for major platforms, appropriate configuration knobs for things people might reasonably want to configure, probably want development to happen fully in the open (which in practice usually means GitHub), etc.—even if you yourself don’t need or want any of these things outside your native use case. I’ve twice been at a company that did source dumps and got screamed at because that “wasn’t really open-source.” Not that I really disagree, but if that wasn’t, then releasing things open-source is not trivial and can indeed very much be wasted effort.
That’s true, but that cost is vastly reduced when you’re building a new product from scratch. Making sure you’re not hardcoding anything, for example, is much easier because you can have that goal in mind as you’re writing the software as opposed to the case where you’re retroactively auditing your codebase. Plus, things like documentation can only help your internal team. (I understand that when you’re trying to get an MVP out the door docs aren’t a priority, but we’re well past the MVP stage at this point.)
If the Snap Store was older I would totally understand this reasoning. But Canonical, a company built on free and open source software, really should’ve known that people were going to want the source code from the start, especially because of their experience with Launchpad. I think they could have found a middle ground and said look, here’s the installation and operation manuals we use on our own infra. We’d be happy to set up a place in our docs that adds instructions for other providers if community members figure that out, and if there’s a configuration knob missing that you need, we will carry those patches upstream. Then it would have been clear that Canonical is mostly interested in their own needs for the codebase, but they’re still willing to be reasonable and work with the community where it makes sense.
Opinion: Ubuntu has always been about making a bunch of default choices. One choice Ubuntu has made in 20.04 is that snaps are better for users than third party apt repositories (because the former run in a sandbox and can be removed cleanly; the latter is giving third parties root on your system and typically break the system such that future release upgrades fail).
I think this is a fine opinion but it seems contradicted by the fact that some packages are offered by both the off-the-shelf repos and snap.
I did say “better than third party apt repositories”. The distribution has no control over those, so what is or isn’t available in them does not affect my opinion. I’m just saying that Ubuntu has taken the position that snaps (when available) are preferable over packages from third party apt repositories (when available). And what is available through the distribution’s own apt repository is out of scope of my opinion statement.
Ubuntu has always been about making a bunch of default choices.
What is the default choice when I type jq in bash?
Command ‘jq’ not found, but can be installed with:
sudo snap install jq # version 1.5+dfsg-1
sudo apt install jq # version 1.6-1
It’s a fine and well-opinionated choice for Ubuntu to prefer it for third-party things. I feel like a lot of first-party supported utilities are not as well opinionated, and I’m left thinking about trade-offs when I go with one over the other.
I was expecting more “personal” experience with particular cases rather than a copy-paste list of arguments, which I could simply duckduckgo out.
On the other side, I’ve noticed most of these responses, especially those related to politics or the so-called “state”, are thrown out there from US people’s PoVs, which don’t apply overseas :)
edit: Some of these also mention “conspiracy”, “criminal acts” (as in, what attacks privacy), “total control” and so on. While some of these might be true, I usually prefer to avoid wording like that so as not to get dismissed out of hand with the “tinfoil hat” label. Which sometimes also happens if you even start talking about anything that differs a bit from the mainstream attitude to “privacy”, but these edge cases shouldn’t be taken seriously.
I’m not from the US but the UK, and I’d say it’s equally as relevant. If you’re from a five, nine, or fourteen eyes country you should consider them relevant.
What3Words are pretty weird, I’d prefer them not to succeed in the marketplace.
That said, I’m speculating they have a solid DMCA case here. I have not seen the code in question, but I figure W3W does geohashing by translating lat/long into 3 unique word tuples using a lookup table. This table has been chosen to be clear and unambiguous, and that’s the relevant IP (the hashing algo too, I imagine).
If WhatFreeWords had substituted their own lookup table, I doubt this would have been any problem.
I’d rather a competing standard was developed and hosted by OpenStreetMap, instead of trying to reverse-engineer some company that’s legally trigger-happy.
Thanks for the link. But it doesn’t really detract from my (admittedly wild-ass guess) legal theory.
Whether the words are actually unambiguous is not relevant - the point is that W3W can argue that they have made a significant effort in choosing the words and that the lookup table is proprietary information, covered by DMCA protections.
(By the way, the article has been corrected to note that the address for Stade de France is the smoker variant)
They don’t really argue that, though. As kyrias points out in their link the words are chosen at random. This seems like incredibly thin ice for copyright purposes, almost straying into conceptual art territory.
The W3W promise is three words to define a location. The hedge is three words, plus a general idea of where the location is so you know if there’s an error. The reality is three words plus an unknown probability of an error and no way to correct it. Real addresses feature hierarchy and redundancy for this reason.
Since my comment I’ve been reading up a bit. Here’s a purported DMCA takedown notice (not necessarily the exact one served for WhatFreeWords) that makes the same argument I surmised:
A key feature of W3W’s activities is its creation of unique three-word addresses (“3WAs”) for three-metre by three-metre “squares” of the Earth’s surface. […] 3WAs are generated through the use of software code and approximately 57,000,000,000 word combinations made from a wordlist of over 40,000 words compiled by and proprietary to W3W.
I’d rather a competing standard was developed and hosted by OpenStreetMap, instead of trying to reverse-engineer some company that’s legally trigger-happy.
Plus codes come close I think; no need for yet another standard. Granted they’re not pronounceable words, but What3Words also needs homonym disambiguation so I’m not sure much is lost.
I’m not convinced that English pronounceability is really such a killer feature for something that’s supposed to be usable worldwide. But that’s neither here nor there. Thanks for the links!
It’s interesting that the only way for this to proceed any differently would be for someone not anonymous to stand behind it. As long as nobody is prepared to do that, a takedown will by definition always be accepted by some named person or organisation via whom the anonymous publisher is connected.
I find this interesting because it has consequences for anonymous publication in general.
Excepting anonymous publication platforms like Tor or, as Nesh says, IPFS, that is.
The team behind WhatFreeWords can form a corporation to speak for their legal interests. The corporation’s stakeholders can be shielded in some jurisdictions, and they will be represented by a lawyer hired by the corporation.
I’m interested to see what others say on this, but please could commenters distinguish between application containers and system containers in their replies? Application containers are like Docker: no init process, massive paradigm shift in workflow and deployment, bundle everything into a “golden image” in a build step, and so on. System containers are like lxd: pretty much the same workflow as deploying to a VM except that memory and disk space doesn’t need to get partitioned up, though you might use differing deployment techniques and tooling just as you might with non-container deployments.
I think a lot of people are unaware that system containers exist thanks to the hype and popularity of docker. Having worked with both, I personally prefer the system container paradigm and specifically the LXD ecosystem. Although that’s probably because I’m already comfortable working with VMs and standard Linux servers and there’s less new stuff to learn.
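For anyone who hasn’t tried it, the LXD workflow really is close to a small VM (the image alias and container name here are just examples):
$ lxc launch ubuntu:20.04 web01   # full userland with its own init, not a single process
$ lxc exec web01 -- bash          # get a shell, then use ssh/ansible/etc. as usual
$ lxc stop web01 && lxc delete web01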
I wish I had gone more down the LXD system container route. I feel like I could have taken my existing Ansible workflow and just applied it to containers instead of KVM virtual machines (via libvirt). I think I started going down that route for a bit, but then just ended up rewriting a lot of the services I use to run in regular Docker containers. I ended up writing my own provisioning tool that communicated directly with the Docker API for creating, rebuilding and deploying containers (as well as setting up Vultr VMs or DigitalOcean droplets with Docker and making them accessible over VPNs).
In general I tend to like containers. They let you separate out your dependencies and make it easy to try out applications if project maintainers supply Dockerfiles or official images. But there are also tons of custom made images for things that may not get regular updates, and you can have tons of security issues with all those different base layers floating around on your system. I also hate how it’s so Linux specific, as the FreeBSD docker port is super old and unmaintained. A Docker engine implementation in FreeBSD that uses jails (instead of cgroups) and ZFS under the hood would be pretty awesome (FreeBSD can already run Linux ELF binaries and has most of the system calls mapped, except for a few weird newer things like inotify et. al, which is what the old/dead Docker used).
I have huge issues with most docker orchestration systems. They’re super complex and difficult to setup. For companies they’re fine, but for individuals, you’re wasting a lot of resources on just having master nodes. DC/OS’s web interface is awful and can eat up several MBs of bandwidth in JSON data per second!
I’ve written a post on container orchestration systems and docker in general:
What do you think about the experimental distroless Python3 image by Google?
Might want to take a look at this fun technique from RedHat for making your own distroless images (instead of relying on Google to do it, seeing as they haven’t updated Python 3 for years): http://crunchtools.com/devconf-2021-building-smaller-container-images/
I think that image still uses Python 3.7. Otherwise, it’s a good lightweight option.
Who checks the signature of the app you’re running? It seems like it’d be pretty easy to have Android not check the signature on startup, if you’re considering Google acting against the user.
If we go full paranoia: theoretically there is a possibility that the Google Play app installer service could secretly circumvent the whole “updates-with-same-certificate” model by e.g. replacing the cert in the package manager’s database, right? (Assuming Play has parts running as root which I think it did?)
Even on rooted devices, a changed certificate will cause the device to first uninstall the existing app (and remove all associated data).
If we assume Google is going to be complicit in a surveillance measure in the future, they will have had to add a covert option for the OS to not do that in the past.
But if we assume Google to be complicit, all bets are off anyways and you should probably side-load Signal to begin with. And replace the phone OS with one you built yourself after auditing all of its millions of lines of code.
That would require you to have Signal’s signing keys which I hope live in some HSM and which require manual physical interaction to make use of
This would work, but only for fresh installs. The system refuses to update an installed apk with one that’s signed by different keys.
You can force the install, but it will first uninstall the existing application with all of its data
Ironically, this insistence means that on Linux the only installation is via their third party apt repository rather than from an official distribution package source. It’s the exact opposite on Android, where the only installation is from Google, the “official” Android distribution package source. This is exactly the wrong way round to how I’d like it because in both cases more trusted sources are available.
I am suspicious of all CLAs at this point. They can theoretically be used for good, e.g. updating to better FOSS licenses, but as long as they can be used to release the software under a non open source license, as Audacity is planning to for the Apple store, it is difficult to trust that they will not be used to make the software entirely proprietary at a future date.
I think that a Developer Certificate of Origin covers most of the legitimate uses of a CLA, and the risks of CLAs do not justify any remaining benefits.
As for the Apple store, I’m becoming increasingly convinced that the only reasonable approach is to take a stand against it. Why can’t the Apple store’s policies be written in such a way that GPL software can be published on it without changing the license? It is a user-hostile choice by Apple, it’s not the first such choice, and it won’t be the last. The community should protest Apple and demand that it allow GPL software, and employ alternate installation methods until Apple agrees (hahaha). Realistically, what I expect is that Apple will gradually lock down macOS the way it has iOS, and open source developers will have to make more and more damning compromises in order to remain relevant on the platform. Why not make a stand now, while alternate installation methods are still possible, if difficult and inconvenient? Do you think it will be easier to make a stand in the future?
Does this mean that you avoid contribution to any project licensed under MIT or other BSD-style licenses, and contribute only to projects that use the GPL and without a CLA?
Because any BSD-style licensed project, with or without a CLA, can be made “entirely proprietary” at any time.
That’s a fair point, and the quick answer is no. I do prefer copyleft software, precisely because of this concern, but not to the point I won’t use or contribute to permissively licensed software.
To me, it seems a CLA for a license like the GPL makes it misleading. The main reason I’m interested in copyleft is to prevent the software from being made proprietary, taking the community’s work and claiming all of the benefits for one entity, and a CLA undermines that protection. I’m more upset if GPL-licensed software becomes proprietary than if the same thing happens to MIT/BSD licensed software, because I always expected that to happen to permissively licensed software, I use it / contribute to it with that risk in mind.
Is there a point to a GPL license if it’s undermined by a CLA?
Is it any worse than a company that refuses to accept outside contributions, but still releases their codebase as GPL? I think in that case the company would be more likely to receive praise than criticism for making the code available at all.
This is also a fair point. When id Software released Doom and Quake under the GPL, that was pretty great, even though the community was certainly not involved in the original development process.
We do need to be wary of making the perfect the enemy of the good. Software released under the GPL with a CLA is certainly better than not making it open source at all. Is it better than a permissive license? I’m not sure, I think it’s about the same. But you raise a good point that a CLA is more worrisome if it is a community effort with significant external contributions, thereby giving them the ability to “privatize” community contributions to some degree, rather than a largely internally developed program that would just mean they made their own product proprietary again.
In the long term, I would be less willing to rely on GPL software with a CLA for anything important than software with stronger copyleft.
I wanted a hardware mute button, and then I realised that I already have a remote button - my presentation remote. That has a button that sends a Tab. So I wrote a ten line app that converts Tab presses into system microphone mute toggles.
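The app itself isn’t shown and its platform isn’t stated, but as a rough sketch of the idea - assuming a Linux desktop with PulseAudio and the third-party pynput package - it could look something like this:

```python
# Sketch only: listen globally for Tab and toggle mute on the default
# PulseAudio capture source. Assumes Linux, PulseAudio and the pynput package;
# the original poster's actual implementation may differ entirely.
import subprocess

from pynput import keyboard

def toggle_mic_mute():
    # Flip the mute state of the default microphone source.
    subprocess.run(["pactl", "set-source-mute", "@DEFAULT_SOURCE@", "toggle"], check=False)

def on_press(key):
    if key == keyboard.Key.tab:
        toggle_mic_mute()

# Run until interrupted; every Tab press (from the remote or the keyboard) toggles the mic.
with keyboard.Listener(on_press=on_press) as listener:
    listener.join()
```

With a presentation remote that sends Tab, the remote button becomes a hardware mute toggle; the obvious trade-off, raised below, is that every ordinary Tab press toggles the mic too.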
Writing Makefiles must be a sonic adventure.
What do you do if you’re writing text while on a call and need to use Tab?
Normal people don’t type tab in a chat.
Wow, this blog post is so lacking in empathy for users that I’m surprised it made it on a reputable distro’s blog. Instead of spilling 1000 words on why “static linking is bad”, maybe spend a little time thinking about why people (like me) and platforms (like go/rust et al) choose it. The reason people like it is that it actually works and it won’t suddenly stop working when you change the version of openssl in three months. It doesn’t even introduce security risks! The only difference is you have to rebuild everything on that new version, which seems like a small price to have software that works, not to mention that rebuilding everything will also re-run the tests on that new version. I can build a go program on nixos, ship it to any of my coworkers and it actually just works. We are on a random mix of recent ubuntu, centos 7 and centos 8 and it all just works together. That is absolutely not possible with dynamic linking
It works well if all you care about is deploying your application. As a distro maintainer, I’m keeping track of 500 programs, and having to care about vendored/bundled versions and statically linked-in dependencies multiplies the work I have to do.
But … that’s a choice you make for yourself? No application author is asking you to do that, and many application authors actively dislike that you’re doing that, and to be honest I think most users don’t care all that much either.
I’ve done plenty of packaging of FreeBSD ports back in the day, and I appreciate it can be kind of boring thankless “invisible” gruntwork and, at times, be frustrating. I really don’t want to devalue your work or sound thankless, but to be honest I feel that a lot of packagers are making their own lives much harder than they need to be by sticking to a model that a large swath of the software development community has, after due consideration and weighing all the involved trade-offs, rejected and moved away from.
Both Go and Rust – two communities with pretty different approaches to software development – independently decided to prefer static linking. There are reasons for that.
Could there be some improvements in tooling? Absolutely! But static linking and version pinning aren’t going away. If all the time and effort spent on packagers splitting things up would be spent on improving the tooling, then we’d be in a much better situation now.
I think this is a common view but it results from sampling bias. If you’re the author of a particular piece of software, you care deeply about it, and the users you directly interact with also care deeply about it. So you will tend to see benefits that apply to people for whom your software is of particular importance in their stack. You will tend to be blind to the users for whom your software is “part of the furniture”. From the other side, that’s the majority of the software you use.
Users who benefit from the traditional distribution packaging model for most of their software also find that same model to be painful for some “key” software. The problem is that what software is key is different for different classes of user.
A big reason people ship binaries statically linked is so it’s easier to use without frills, benefiting especially users who aren’t deeply invested in the software.
For me personally as an end user, if a program is available in apt-get then I will install it from apt-get first, every time. I don’t want to be responsible for tracking updates to that program manually!
I do that as well, but I think “apt-get or manual installs” is a bit of a false dilemma: you can have both.
Static linking does introduce a security risk: it makes ASLR ineffective. Static linking creates a deterministic memory layout, thus making ASLR moot.
Untrue, look up static-PIE executables. Looks like OpenBSD did it first, of course.
I believe static PIE can only randomize a single base address for a statically linked executable, unlike a dynamically linked PIE executable, where all loaded PIC objects receive a randomized base address.
I’m very familiar with static PIE. I’m unsure of any OS besides OpenBSD that supports it.
Rustc + musl supports it on Linux; since gcc has a flag for it, I imagine it’s possible to use it for C code too, but I don’t know how.
It was added to GNU libc in 2.27 from Feb 2018. I think it should work on Linux?
Looks like it works on Linux with gcc 10.
Did a bit of rummaging in the exe header but I’m not 100% sure what I’m looking for to confirm there, but it had a relocation section and all symbols in it were relative to the start of the file as far as I could tell.
Edit: Okay, it appears the brute-force way works. I love C sometimes.
aslr.c:
Testing:
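For the “rummaging in the exe header” part: a static PIE shows up as an ELF file of type ET_DYN that has no PT_INTERP program header (it requests no dynamic loader), whereas a classic static binary is ET_EXEC. A rough Python sketch of that check (separate from the aslr.c test above):

```python
# Sketch: classify an ELF binary by its header type and the presence of PT_INTERP.
# Static PIE = ET_DYN without PT_INTERP; classic static = ET_EXEC without PT_INTERP.
import struct
import sys

ET_EXEC, ET_DYN, PT_INTERP = 2, 3, 3

def classify(path):
    with open(path, "rb") as f:
        ident = f.read(16)
        assert ident[:4] == b"\x7fELF", "not an ELF file"
        endian = "<" if ident[5] == 1 else ">"                              # EI_DATA: 1 = little-endian
        fmt = endian + ("HHIQQQIHHH" if ident[4] == 2 else "HHIIIIIHHH")    # EI_CLASS: 2 = 64-bit
        fields = struct.unpack(fmt, f.read(struct.calcsize(fmt)))
        e_type, e_phoff, e_phentsize, e_phnum = fields[0], fields[4], fields[8], fields[9]
        has_interp = False
        for i in range(e_phnum):
            f.seek(e_phoff + i * e_phentsize)
            if struct.unpack(endian + "I", f.read(4))[0] == PT_INTERP:
                has_interp = True
        if e_type == ET_DYN:
            return "static PIE" if not has_interp else "dynamic PIE (or shared object)"
        return "static, non-PIE" if not has_interp else "dynamic, non-PIE"

if __name__ == "__main__":
    print(classify(sys.argv[1]))
```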
Note ldd distinguishing between “not a dynamic executable” and “statically linked”.
Doesn’t it stop other kinds of attacks, on the other hand?
What attacks would static linking mitigate?
I don’t know much about security, and this is an honest question. My understanding is that ASLR is mostly meant to protect when the executable is compromised via stack overflows and similar attacks, right? Aren’t these problems mostly a thing of the past for the two languages that favor static linking, like Go and Rust?
I really have no idea, that was an honest question!
I mean, the obvious example is tricking binaries into loading the wrong dynamic library, e.g.
https://us-cert.cisa.gov/ncas/current-activity/2010/11/09/Insecure-Loading-Dynamic-Link-Libraries-Windows-Applications
and
https://www.contextis.com/en/blog/linux-privilege-escalation-via-dynamically-linked-shared-object-library
Those would be examples of local attacks. ASLR does not protect against local attacks. So we need to talk about the same threat vectors. :)
There are a lot of code-injection vulnerabilities that can occur as a result of LD_LIBRARY_PATH shenanigans. If you’re not building everything RELRO, dynamic linking has a bunch of things like the PLT/GOT that contain function pointers that the program will blindly jump through, making exploiting memory-safety vulnerabilities easier.
As an author of open source malware specifically targeting the PLT/GOT for FreeBSD processes, I’m familiar with PLT/GOT. :)
The good news is that the llvm toolchain (clang, lld) enables RELRO by default, but not BIND_NOW. HardenedBSD enables BIND_NOW by default. Though, on systems that don’t disable unprivileged process debugging, using BIND_NOW can open a new can of worms: making PLT/GOT redirection attacks easier over the ptrace boundary.
My personal opinion is exactly the opposite. Raspberry Pi users are imposing an unreasonable burden on ARM distribution maintainers. For better or worse, the entire ecosystem standardized on ARMv7 except Raspberry Pi. The correct answer is to stop buying Raspberry Pi.
I faced this issue directly, being as I was a distribution developer working on ARM at the time. I feel your pain.
However, they made the choices they made for cost reasons, and the market has spoken. I can’t argue with that.
It could be worse. At least the Pi is ARMv7TDMI. Most AArch32 software defaults to Thumb-2 now and the Pi is just new enough to support it. I maintain some Arm assembly code that has two variations, ARMv6T2 and newer, everything else. I can probably throw away the older ones now, they were added because an ultra low-budget handset maker shipped ARMv5 Android devices and got a huge market share in India or China about 10 years ago and a user of my library really, really cared about those users.
Interesting, do you know which phone models? The oldest Android phones I could find are ARMv6.
No idea, sorry. I never saw them, I just got the bug reports. Apparently they’re all gone (broken / unsupported) now. It was always a configuration that Google said was unsupported, but one handset manufacturer had a custom AOSP port that broke the rules (I think they also had their own app store).
I also agree with you that the correct answer is to stop buying Raspberry Pi, especially their ARMv6 products. But for most beginners in electronics, it seems like “Raspberry Pi” equals “single board computer”. They aren’t going to stop buying them.
I don’t love MIPS64 or i686 either, but the reality is that the hardware exists and continues to be used. Maintainers should deal with that, IMHO.
I am just tired of getting issues like https://github.com/rust-embedded/cross/issues/426. This is just a tiny corner. What a horror this is being replicated 100x for every toolchain out there.
systemd upstream strongly recommend “not to pull in this target too liberally”: https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/
That page describes why network-online.target is problematic.
Summary: “when the network is ready” is a poorly defined statement.
The long term marginal cost of software is zero. So Free Software always catches up in the end.
However, there’s room for proprietary software until that happens. When something is new and hasn’t been done before, the proprietary model provides the funds up front to pay for it, with the promise of making it back in the short term.
The other exception is niche software that fulfills some very specific use case. It takes so long for viable replacement Free Software to appear, it might as well be never.
I see a fundamental disconnect here.
Automatic updates are baked into snap’s DNA. If you don’t want this, simply don’t install the snap.
Choosing this medium to install your dev environment and then getting cranky when it does precisely what it was meant to do seems a bit unintentionally disingenuous to me.
I’m not necessarily arguing for the correctness of Canonical’s choices around snaps here, because I have mixed feelings about them myself, but they make it pretty clear at every available opportunity that this is how they work.
On a more helpful note, have you tried setting the refresh.hold system-wide configurable to an arbitrary date years in the future?
There’s another part to the disconnect: Ubuntu is trying to replace the deb parts of (large parts of) its ecosystem with snap equivalents. Putting these two things together makes the picture pretty interesting. How are LTS releases supposed to work with this kind of setup? I’ve had bugfix releases of software introduce bugs that blocked my work. Not often, but it does happen.
Operationally, I wish updates were always automatically-update-on-a-schedule-I-define, a la Patch Tuesday or something similar. That seems like the only possible sane option, but it seems tragically rare.
That’s interesting, I’d only heard of them doing this with Chrome. What else are you referring to, if you don’t mind my asking?
For sure there are going to be problems as this is absolutely a radical departure from traditional UNIX vendor app update strategies.
Their assertion is that the auto-update feature was key to getting third party vendors - who otherwise wouldn’t even consider supporting the Ubuntu platform - to do so.
I think there’s some value in that, but as you say there are definitely kinks to be worked out here. On the up side the Ubuntu folks are pretty good about engaging with the community. I’d suggest pinging Snapcraft or Alan Pope on Twitter. It’s not his job but he’s often good about taking the time to help people past snap difficulties.
My main source is https://lobste.rs/s/9q0kta/linux_mint_drops_ubuntu_snap_packages, which links to https://lwn.net/Articles/825005/. Chromium seems to be the main program currently, but the LWN article mentions work ongoing to apply it to other things like gnome-software. Seems safe to assume that the trend is going to continue.
Now that I think of it, I think part of the disconnect is that vendors think of their software as an end-product that nothing else depends on, so Slack or Spotify or whatever can update their own little worlds without affecting anything else. In the real world however, people script things to download stuff from Slack or ask Spotify to do things, and so those applications changing can break others’ functionality. Even non-techies do this, mentally; how many people have complained to tech support when a silent, undesired application update “moved that damn button” and broke their workflow?
I would urge you to be sure to get both sides of the story on the Mint snap amputation.
I don’t have any issue with the decision itself per se but the way they handled it was IMO very unfortunate and unprofessional.
Specifically, the Ubuntu folks said they were totally willing to work with Mint to accomplish the removal in a clean way that makes sense technically, but that they were never approached.
IMO when you choose to base your Linux distribution on an upstream distro, you agree to abide by the technical decisions that upstream makes, or at least to work with them in evolving your remix/flavor in a way that makes sense. None of that happened here.
LXD is another example
Yes. The differences between Docker and LXD are interesting. Docker provides a convenient packaging format and allows for layering of components to create application stacks.
LXD seems to want you to start out with a single image and seems to have better / more / deeper integration with system services like networking.
I’d love to see someone do an in-depth technical comparison of the two technologies.
I’ve only seen evidence of this in the packages that are particularly challenging to maintain as debs due to the way particular upstreams operate. Browsers are an example where security backports aren’t feasible so the debs that have been shipped by most distributions have for years been “fat” anyway. Snaps are mainly solving the problems of IoT device updates, third party (both Free and proprietary) software distribution directly to users and the areas where debs are a poor fit, such as shipping wholesale featureful updates to users such as with web browsers. The traditional deb parts are not affected.
You can configure snaps to do that: https://snapcraft.io/docs/keeping-snaps-up-to-date#heading--controlling-updates
Disclosure: I’m a Canonical employee and Ubuntu developer (and a Debian developer, if we’re going there). I’m not involved in snap architecture at Canonical (though I do maintain some snaps as I do debs in Debian!).
Agree. So what I say below doesn’t apply to the situation in the article.
They may make it clear how they work, but they don’t make it very clear when you are installing a snap. Starting with Ubuntu 20.04, they’ve started installing snaps when you install a deb
https://blog.linuxmint.com/?p=3906
The reason the snap is installed is because the chromium browser moved to a snap so we could iterate faster and consume fewer developer resources packaging a non-default web browser for multiple releases. If we didn’t have the deb to snap migration in 19.10 (and 20.04) then users upgrading from 19.04 to 19.10 or from 18.04 to 20.04 would lose their browser.
I wrote a blog post 6 months ago about why this was done. https://ubuntu.com/blog/chromium-in-ubuntu-deb-to-snap-transition
We decided at the time not to punch people in the face during the upgrade and make them choose “Do you want to keep your browser, by migrating to the snap?” because those kinds of mid-upgrade dialogs are pretty awful for end users. Generally the vast majority of the audience don’t actually care what packaging scheme is used. They just want to click the browser icon and have it open.
Yes, we could have communicated it better. We’re working on improving it for the 20.04.1 update which is due next week. We certainly do listen to the feedback we get.
You could use the motd announcements to give people ample notice.
Good idea, but most normals don’t ever see motd.
normals? Normies perhaps?
Please see my answer above to icefox’s assertion along the same lines. This is only for Chromium, and there are good reasons behind that choice (They were spending vast amounts of engineering time JUST on chromium).
As I said in that response, IMO Linux Mint is very much in the wrong here, and has handled the situation abysmally. They could have enacted the removal of snaps/snapd in a much cleaner way without all the drama, and they chose not to do that.
Unfortunate.
Yeah, I saw it after I had posted this comment.
You are conflating two separate things. There may be good reasons™ behind packaging Chromium as a snap only. There is no good reason to do so behind the user’s back. If one uses apt to install something one most certainly doesn’t expect to be installing a snap.
What you chose to call ‘drama’ is a political stance. The issue was not “to accomplish the removal in a clean way”. It was “This breaks one of the major worries many people had when Snap was announced and a promise from its developers that it would never replace APT.”. You may not like or agree with that position but there is nothing wrong with that.
Also it is disingenuous to think this will stop at Chromium. Chromium is just for testing the waters.
As stated in the article, refresh.hold does not honor dates further than 60 days in the future.
Did you try snap revert? e.g.
I appreciate you’re “burned” by this and will likely switch to Mint, so this information may be academic, but for others stumbling on this thread it may be handy. Hope Linux Mint works out for you.
I’d not expect the Alan Pope to respond, you only exist in my podcasts :). How cool!
Now on point, also responding to the revert comment earlier. I did find that, but my point is about not being able to disable updates, not about not being able to rollback. Rolling back would not have been necessary if I could have done the update manually in two weeks, after which the plugins of the IDE would probably have been updated as well. I do however appreciate the option to rollback.
You also state that there are ways to disable updates and give the example of sideloading an application, after which it never gets updated. That also is not my main gripe. I do want my cake and eat it, but not automatically. As in, I want to update when I’m ready and expect it, not changing the engine while the car is driving on the highway.
Is there an official way to disable automatic updates? However hard to setup, with big red flashing warnings, promising my firstborn to whatever deity? Setting a proxy or hosts file might stop working, an “official” way would still give me all the other benefits of snaps.
As said, I do like updates and ease of use (not manually sideloading every snap, just apt upgrade all the things) with just a bit more control.
Haha! I’m just some guy, you know.
There’s currently no way to fully disable completely all updates. I appreciate that’s hard news to take. However, I have been having many conversations at Canonical internally about the biggest pain points, and this is clearly one. Perhaps we could add a big red button marked “Give Alan all my cats, and also disable updates forever”. Perhaps we could really forcefully nag you with irritating popups once you get beyond the 60 days limit that’s currently set, I don’t know. But I agree, something needs to be done here, especially for desktop users who are super sticky on particular releases of their favorite tools.
It’s Friday night, and I’m enjoying family time, but rest assured I’ll bring it up again in our meetings next week.
Have a great weekend.
What about a big button to disable automatic updates, and then give initially gentle, gradually escalating nags once updates are available?
Maybe in the handy “Software Manager” popup UI we’ve already got built. Checkboxes to select which ones you want to update. Make it really easy to “accept all”, after my workday is over.
This sounds okay in a comment.
Did you use Windows 10? Did you read the comments when Microsoft did something similar? They were heavily criticised; there was a vocal cohort of users annoyed even by that. When I was near a deadline and my Windows told me I could not postpone updates anymore and broke my flow, I happened to understand them.
This is a hard UX problem, despite how simple it might initially seem.
It is a problem but not that hard. A simple matter of control. Do I update or not? .hold? Then not. It’s perfectly feasible to add a .hold.forever. Your workflow is not broken and 99% of average Joes have the latest everything.
Now, this is just my opinion as an old-schooler used to absolute control of my computing. I gladly let everything be automated until I don’t.
It is hard if you wish to please a wide range of users, ranging from novices to power users, and wish to provide a UI serving the different needs of these different cohorts. I mean it is hard if you want to do it well. :) Microsoft tried hard, and did not get it right IMHO. Other vendors fail even more miserably.
Thank you for the time you take in the replies here. Enjoy your weekend!
You’re quite right. I apologize for having missed this in my initial read.
I’ve seen snaps come up on lobsters before but had no idea that they automatically update with no option to disable the updates. I thought they were basically like Canonical-flavored docker containers.
Thankfully I saw this post before I update my laptop next week. I’ll be avoiding Ubuntu and their snaps for now.
Couple of important differences between Docker and snaps.
Docker containers leverage Linux’s namespace and cgroups features to offer each container its own pretty much complete Linux userland.
So that means that you can, as a for instance, have 3 containers all running on a Fedora system, one Alpine Linux, another Ubuntu, and a third something else.
Snaps, on the other hand, are application containers. They just bundle the application itself and any libraries that are necessary to run it.
And that’s the beauty of the Linux ecosystem right? :) I’d guess more people on here are Debian or maybe Arch users :)
I think a large part of the criticism is that it gets installed automatically; and if you remove it it keeps getting reinstalled anyway.
Have you actually tried reading the article? It’s literally mentioned in there.
I don’t see anything in the article suggesting a snap gets automatically re-installed after removal? Can you please elaborate?
I did read the article but missed that detail. Thanks for pointing that out.
I am amazed at how Canonical always tries to make up their own technology instead of embracing existing open-source projects… and it NEVER works, and they keep trying it anyway. Let’s look at the list:
Am I missing any? I feel like there’s more. Does anyone know why the hell they do this? Is it them and Red Hat having a technological pissing match that Red Hat usually wins (systemd and Flatpak come out of Red Hat after all)? Or do they just dream of making a de-facto standard that gives them lots of power, which this article seems to imply?
Either way, good on Mint for pushing back against this nonsense.
Note that Upstart considerably predates systemd.
Snaps predate Flatpak.
IMHO, what happens is that Canonical validates the existence of alternatives by beginning work, causing alternative efforts to start up or for existing alternative efforts to gain momentum. Then a certain vocal faction publicly trash Canonical’s efforts and try to swing the community towards one of the alternatives.
None of this is intended to diminish the value of the alternatives. Alternatives are good. They’ve always existed in our ecosystem (eg. sendmail, qmail, postfix, exim; apache, nginx; etc). But in the case of a Canonical-led effort, a specific vocal crowd makes it political.
An exception is Unity vs. GNOME. That happened after GNOME didn’t want to follow Canonical’s design opinions on how the desktop should work (even though they represented the majority of GNOME desktop users!), and refused patches. But then, as above, politics happened anyway.
Author’s note: I use RedHat as a stand-in for the entire RedHat/CentOS/Fedora ecosystem a lot in this.
tl;dr Redhat is trying to address pain points for existing users, Ubuntu is going after new markets. Both are laudable goals, but Ubuntu’s strategy is riskier.
I think a lot of this comes down to market demand. With both the “Mir v Wayland” and “Unity v GNOME” RedHat and Canonical were both trying to address a market need.
With Wayland and GNOME, Redhat wanted a more modern display server and desktop environment so that its existing customers didn’t have to deal with the big ol’ security hole that is X. (Don’t get me wrong, I love X11 and still think it’s valuable, but I think RedHat’s market disagrees).
With Mir and Unity, Ubuntu wanted a display server and DE that would scale from a phone to a multi-monitor workstation. This is a laudable goal, and it did see a market need to address.
The difference is, Ubuntu was trying to address a market that it wanted while Redhat was trying to address the needs of a market that it actually had. Redhat has tons of customers actively using Wayland and GNOME for their intended purpose, and that gives a project momentum. Ubuntu also had loads of customers using Mir and Unity, but for only one of the multiple purposes that it was intended to be used for. Engineering always has trade-offs; designing a display server and DE for such a wide array of purposes is bound to have rough edges for any single one of those purposes. Ubuntu was asking its primary market, desktops and laptops, to suffer those rough edges for the greater Canonical purpose.
Even with snap v flatpak, again Ubuntu’s goals are much wider with snap than Redhat’s are with Flatpak, judging from what I’ve seen. Flatpak is a way for Redhat to distribute software to Linux/systemd in a way that’s more robust than the current RPM method, and Fedora is actively using flatpaks as a base to their Silverblue variant. whereas with snap, I think that Ubuntu wants to be the one stop shop for distributing software on Linux. Again: engineering, trade-offs, rough edges, etc.
The Redhat method of integrating the new package format seems to be coming up with an entirely different distribution to leverage flatpak functionality to its fullest while kinks are worked out. Canonical’s method seems to be: “Let’s shove it into our flagship product, and work out the kinks there”. This comes with a lot of inherent risks.
You could also mention bzr vs git/hg, and like the other software you mention, Bazaar(-NG) is essentially dead.
On Bazaar, there’s a very interesting retrospective here: https://www.jelmer.uk/pages/bzr-a-retrospective.html
One quote I think is quite relevant to the current discussion:
“Some people claimed Bazaar did not have many community contributions, and was entirely developed inside of Canonical’s walled garden. The irony of that was that while it is true that a large part of Bazaar was written by Canonical employees, that was mostly because Canonical had been hiring people who were contributing to Bazaar - most of which would then ended up working on other code inside of Canonical.”
Upstart predates systemd and was pretty successful; even Redhat adopted it for RHEL.
Note: Apparently Canonical’s solutions often are the forerunners and other people the copycats… I didn’t actually know that, thanks for the corrections. Frankly it makes the fact that their solutions tend to come out on the losing side even more interesting…
Note that:
Browsers are pretty much already “bundled” and exist outside the traditional distribution model. Pretty much all stable distributions have to take upstream changes wholesale (including features, security fixes and bug fixes) and no longer cherry-pick just security fixes. The packaging of browsers as snaps is merely admitting that truth.
The chromium-browser deb is a transitional package so that users who are upgrading don’t get a removed Chromium. It is done this way for this engineering reason - not a political one. The only (partly) political choices here are to ship Chromium as a snap and no longer spend the effort in maintaining packaging of Chromium as a deb. Background on that decision is here: https://discourse.ubuntu.com/t/intent-to-provide-chromium-as-a-snap-only/5987
Ubuntu continues to use the traditional apt/deb model for nearly everything in Ubuntu. Snaps are intended to replace the use case that PPAs and third party apt repositories are used for, and anything else that is already shipped “bundled”. For regular packages that don’t have any special difficulties packaging with the traditional model, I’m not aware of any efforts to move them to snaps. If you want to never use snaps, then you can configure apt to never install snapd and it won’t.
Free Software that is published to the Snap Store is typically done with a git repository available so it is entirely possible for others to rebuild with modifications if they wish. This isn’t the case for proprietary software in the Snap Store, of course. The two are distinguished by licensing metadata provided (proprietary software is clearly marked as “Proprietary”). This is exactly the same as how third party apt repositories work - source packages might be provided by the third party, or they might not.
Anyone can publish anything to the Snap Store, including a fork of an existing package using a different name. There’s no censorship gate, though misleading or illegal content can be expected to be removed, of course. Normally new publications to the Snap Store are fully automated.
The generally cited reason for the Snap Store server-end not being open is that it is extensively integrated in deployment with Launchpad and other deployed server-end components, that opening it would be considerable work, that Canonical spent that effort when the same criticism was made of Launchpad, but that effort was wasted because GitHub (proprietary) took over as a Free Software hosting space instead, and nobody stood up a separate Launchpad instance anyway even after it was opened, so Canonical will not waste that effort again.
The generally cited reason for the design of snapd supporting only one store is that store fragmentation is bad.
I hope that sheds some light on what is going on. I tried to stick to the facts and avoided loading the above with opinion.
Disclosure: I work for Canonical, but not in the areas related to Mint’s grievances and my opinions presented here are my own and not of my employer.
Thanks a lot. While I don’t agree about the opinion at all, background explanation is much appreciated.
I can’t speak about Mint, but in Ubuntu the chromium-browser deb installs Chromium as a snap behind the scenes.
So, unless you’ll own the market with the product it’s not worth open sourcing? IMO releasing a product open source is never “wasted effort” because it may prove useful in some capacity whether you as the original author know it or not. It may spawn other ideas, provide useful components, be used in learning, the list goes on and on.
It’s very convenient to have this opinion when it’s not you making the effort. People seem to care a lot about “providing choice” but it somehow almost always translates into “someone has to provide choice for me”.
True. I should have worded that better. I was talking about the case of simply making source available, not all the added effort to create a community, and make a “product”, etc. I still don’t believe companies like Canonical have much of a leg to stand on when arguing that certain products shouldn’t be open source when open source is kinda their entire thing and something they speak pretty heavily on.
Yep. Just to be clear, open-sourcing code isn’t free. At an absolutely bare minimum, you need to make sure you don’t have anything hardcoded about your infra, but you’ll actually get massive flak if you don’t also have documentation on how to run it, proper installation and operation manuals for major platforms, appropriate configuration knobs for things people might reasonably want to configure, probably want development to happen fully in the open (which in practice usually means GitHub), etc.—even if you yourself don’t need or want any of these things outside your native use case. I’ve twice been at a company that did source dumps and got screamed at because that “wasn’t really open-source.” Not that I really disagree, but if that wasn’t, then releasing things open-source is not trivial and can indeed very much be wasted effort.
That’s true, but that cost is vastly reduced when you’re building a new product from scratch. Making sure you’re not hardcoding anything, for example, is much easier because you can have that goal in mind as you’re writing the software as opposed to the case where you’re retroactively auditing your codebase. Plus, things like documentation can only help your internal team. (I understand that when you’re trying to get an MVP out the door docs aren’t a priority, but we’re well past the MVP stage at this point.)
If the Snap Store was older I would totally understand this reasoning. But Canonical, a company built on free and open source software, really should’ve known that people were going to want the source code from the start, especially because of their experience with Launchpad. I think they could have found a middle ground and said look, here’s the installation and operation manuals we use on our own infra. We’d be happy to set up a place in our docs that adds instructions for other providers if community members figure that out, and if there’s a configuration knob missing that you need, we will carry those patches upstream. Then it would have been clear that Canonical is mostly interested in their own needs for the codebase, but they’re still willing to be reasonable and work with the community where it makes sense.
I think this is a fine opinion but it seems contradicted by the fact that some packages are offered by both the off-the-shelf repos and snap.
I don’t see a contradiction. Can you elaborate?
I did say “better than third party apt repositories”. The distribution has no control over those, so what is or isn’t available in them does not affect my opinion. I’m just saying that Ubuntu has taken the position that snaps (when available) are preferable over packages from third party apt repositories (when available). And what is available through the distribution’s own apt repository is out of scope of my opinion statement.
What is the default choice when I type jq in bash?
It’s a fine and well-opinionated choice that Ubuntu prefers it for third-party things. I feel like a lot of first-party supported utilities are not well opinionated and I’m left thinking about trade-offs when I go with one over the other.
Here are a couple of related articles:
I was expecting more “personal” experience with particular cases rather than a copy-paste list of arguments, which I could simply duckduckgo out.
On the other side, I’ve noticed most of these responses, especially those related to politics or the so-called “state”, are written from US people’s PoVs, which don’t apply overseas :)
edit: Some of these also mention “conspiracy”, “criminal acts” (as in what attacks privacy), “total control” and so on. While some of these might be true, I usually prefer to avoid wording like that so as not to get rejected in advance with the “tinfoil hat” label. Which sometimes also happens if you even start talking about anything that differs a bit from the mainstream attitude to “privacy”, but these edge cases shouldn’t be taken seriously.
I’m not from the US but the UK, and I’d say it’s equally as relevant. If you’re from a five, nine, or fourteen eyes country you should consider them relevant.
What3Words are pretty weird, I’d prefer them not to succeed in the marketplace.
That said, I’m speculating they have a solid DMCA case here. I have not seen the code in question, but I figure W3W does geohashing by translating lat/long into 3 unique word tuples using a lookup table. This table has been chosen to be clear and unambiguous, and that’s the relevant IP (the hashing algo too, I imagine).
If WhatFreeWords had substituted their own lookup table, I doubt this would have been any problem.
I’d rather a competing standard was developed and hosted by OpenStreetMap, instead of trying to reverse-engineer some company that’s legally trigger-happy.
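To make the speculation concrete (and it is only speculation, not W3W’s actual algorithm, word list or grid): number the small grid squares of the world and write that number in base N using a fixed list of N words, so three words are enough once N³ exceeds the number of squares (W3W reportedly uses about 40,000 words against roughly 57 trillion 3 m squares). A toy Python sketch of that idea:

```python
# Toy word-triple geocoder: illustrates the lookup-table idea only. This is not
# What3Words' real algorithm, word list or cell size, and with just 8 words it
# collides constantly; a real system needs len(WORDS)**3 >= the number of cells.
WORDS = ["apple", "basket", "candle", "dolphin", "ember", "forest", "granite", "harbor"]
CELLS_PER_DEGREE = 10_000                  # ~11 m of latitude per cell; purely illustrative
COLS = 360 * CELLS_PER_DEGREE              # grid columns around the globe

def encode(lat, lon):
    # Map the coordinate onto a flat cell index of a regular lat/lon grid.
    row = int((lat + 90) * CELLS_PER_DEGREE)
    col = int((lon + 180) * CELLS_PER_DEGREE)
    n = row * COLS + col
    # Write the cell index in base len(WORDS): three "digits" become three words.
    w = len(WORDS)
    return ".".join([WORDS[(n // (w * w)) % w], WORDS[(n // w) % w], WORDS[n % w]])

print(encode(48.9245, 2.3602))   # roughly the Stade de France
```

On that reading the arithmetic is trivial; the curated word list and its assignment to cells would be the part W3W treats as proprietary.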
You would think so, but it’s not the case. An article about What3Words listed reporter.smoked.received as the address of the Stade de France. But if you search reporter.smoked.received on What3Words, you will see it’s in Laredo, Missouri. Reporter.smoker.received, just one letter’s difference, is the Stade. W3W is not just a badly-licensed land grab, it’s also a terrible product!
That’s intentional, the idea being that you know roughly where the identifier is meant to point to, so you would realize that you have the wrong one if it’s on a different continent. (See https://support.what3words.com/en/articles/2212868-why-are-the-words-randomly-assigned-wouldn-t-it-be-better-if-there-was-some-logic-hierarchy-structure-to-the-naming-structure)
Thanks for the link. But it doesn’t really detract from my (admittedly wild-ass guess) legal theory.
Whether the words are actually unambiguous is not relevant - the point is that W3W can argue that they have made a significant effort in choosing the words and that the lookup table is proprietary information, covered by DMCA protections.
(By the way, the article has been corrected to note that the address for Stade de France is the smoker variant.)
They don’t really argue that, though. As kyrias points out in their link, the words are chosen at random. This seems like incredibly thin ice for copyright purposes, almost straying into conceptual art territory.
The W3W promise is three words to define a location. The hedge is three words, plus a general idea of where the location is so you know if there’s an error. The reality is three words plus an unknown probability of an error and no way to correct it. Real addresses feature hierarchy and redundancy for this reason.
Since my comment I’ve been reading up a bit. Here’s a purported DMCA takedown notice (not necessarily the exact one served for WhatFreeWords) that makes the same argument I surmised:
Plus codes come close I think; no need for yet another standard. Granted they’re not pronounceable words, but What3Words also needs homonym disambiguation so I’m not sure much is lost.
See also: Evaluation of Location Encoding Systems.
I had plus codes in the back of my mind when I wrote my comment, but believed they weren’t fully open. Thanks for informing me.
Edit - the HN discussion on this submission is quite interesting. Among others, yet another competing geo-phrase service offers this critique of OLC/Plus Codes: https://www.qalocate.com/whats-wrong-with-open-location-code/
I’m not convinced that English pronounceability is really such a killer feature for something that’s supposed to be usable worldwide. But that’s neither here nor there. Thanks for the links!
It’s interesting that the only way for this to proceed any differently would be for someone not anonymous to stand behind it. As long as nobody is prepared to do that, a takedown will by definition always be accepted by some named person or organisation via whom the anonymous publisher is connected.
I find this interesting because it has consequences for anonymous publication in general.
Excepting anonymous publication platforms like Tor or, as Nesh says, IPFS, that is.
The team behind WhatFreeWords can form a corporation to speak for their legal interests. The corporation’s stakeholders can be shielded in some jurisdictions, and they will be represented by a lawyer hired by the corporation.
I’m interested to see what others say on this, but please could commenters distinguish between application containers and system containers in their replies? Application containers are like Docker: no init process, massive paradigm shift in workflow and deployment, bundle everything into a “golden image” in a build step, and so on. System containers are like lxd: pretty much the same workflow as deploying to a VM except that memory and disk space doesn’t need to get partitioned up, though you might use differing deployment techniques and tooling just as you might with non-container deployments.
I think a lot of people are unaware that system containers exist thanks to the hype and popularity of docker. Having worked with both, I personally prefer the system container paradigm and specifically the LXD ecosystem. Although that’s probably because I’m already comfortable working with VMs and standard Linux servers and there’s less new stuff to learn.
I wish I had gone more down the lxd system container route. I feel like I could have taken my existing ansible workflow and just applied it to containers instead of KVM virtual machines (via libvirt). I think I started going down that route for a bit, but then just ended up rewriting a lot of the services I use to be in regular Docker containers. I ended up writing my own provisioning tool that communicated directly with the Docker API for creating, rebuilding and deploying containers (as well as setting up Vultr VMs or DigitalOcean droplets with Docker and making them accessible over VPNs):
https://github.com/sumdog/bee2
In general I tend to like containers. They let you separate out your dependencies and make it easy to try out applications if project maintainers supply Dockerfiles or official images. But there are also tons of custom-made images for things that may not get regular updates, and you can have tons of security issues with all those different base layers floating around on your system. I also hate how it’s so Linux specific, as the FreeBSD docker port is super old and unmaintained. A Docker engine implementation in FreeBSD that uses jails (instead of cgroups) and ZFS under the hood would be pretty awesome (FreeBSD can already run Linux ELF binaries and has most of the system calls mapped, except for a few weird newer things like inotify et al., which is what the old/dead Docker port used).
I have huge issues with most docker orchestration systems. They’re super complex and difficult to setup. For companies they’re fine, but for individuals, you’re wasting a lot of resources on just having master nodes. DC/OS’s web interface is awful and can eat up several MBs of bandwidth in JSON data per second!
I’ve written a post on container orchestration systems and docker in general:
https://penguindreams.org/blog/my-love-hate-relationship-with-docker-and-container-orchestration-systems/
I don’t use containers and don’t know the difference between docker and lxd any more than what you just described.
It’s not clear to me what application containers provide that per-application-user-ids don’t.
System containers sound like something that may interest me. I build a lot of VMs.