By May 25, most corporates had just amended their Privacy Policy volumes, and annoyed consumers were forced to click through to accept them without reading.
I don’t know why people find this so hard to understand, but the entire point of the GDPR is that you cannot comply with it simply by adding more terms to your Terms of Service for people to sign away their rights without reading. That’s not how it works.
In my opinion, preoccupation with nominal personal data actually displaces real privacy. Who cares about the privacy of their name and family name, or the office they hold? Except to hide shady politicking and worse, the majority of us are happy to consciously publicize them as much as possible. It’s wrong, impractical and disrespectful to assume the contrary.
There are dozens of situations where it’s actually socially undesirable to keep this private, yet it is zealously protected under the GDPR in exactly the same way as your shopping history or your family photos.
I do care about the privacy of my name and family name. Is my name public on the internet? Yes. If I wanted to make it not public, would I want to be able to do so? Yes. Simple as that, really.
Equally questionable are formal and bureaucratic prescriptions for better data protection — more documentation, privacy impact audits, formal training, etc.
Does anyone honestly believe that more paperwork will lead to more privacy? More security risks in the handling of our data (say, thousands of hand-signed consents) are somewhat more likely, I’m afraid.
Why would formal training around data protection, auditing of privacy protection and documentation of efforts to comply with the GDPR lead to anything other than more privacy?
Apart from the right to complain under the new rules and a few marginal rights — which are primarily of interest to the corrupt and the criminal, like the right to be forgotten — the average data subject barely gained any new privacy through the GDPR.
Yeah okay, nothing interesting to read here. The right to be forgotten is certainly not ‘primarily of interest to the corrupt and the criminal’. What a great load of ‘if you have nothing to fear you have nothing to hide’ twaddle.
By May 25, most corporates had just amended their Privacy Policy volumes, and annoyed consumers were forced to click through to accept them without reading.
I don’t know why people find this so hard to understand, but the entire point of the GDPR is that you cannot comply with it simply by adding more terms to your Terms of Service for people to sign away their rights without reading. That’s not how it works.
Excuse me if I misunderstand, but isn’t it still the case that they can add terms to their privacy policy, then tell users to either check all the boxes or leave?
That’s exactly what you can’t do — you can’t refuse service if a user says “no” to tracking (unless you can prove in court that the tracking is strictly required for the functioning of the service).
An example of a site that doesn’t follow the rules you state at all:
If you do not agree with our new privacy policy (that haven’t really changed much) we absolutely respect that. Feel free to go to your user settings page and delete your account. Optionally, you can change your settings and/or user profile if that helps. If you miss any settings feel free to let us know. If you just miss-clicked you can always go back and agree to the policy. If you have more questions feel free to send an e-mail to support@{{domainName}} and we will do our very best help you out.
They’re relatively small though, so I hope they’re not representative of too many other companies.
Then their privacy policy is invalid, and they’re committing a crime with every bit of data they collect.
To be allowed to collect user data, you need consent, and under the GDPR consent is only valid if it has been given freely, without any advantage or disadvantage resulting from giving or not giving it (except for functionality that directly requires the consent).
Oh. I guess I’ve been doing privacy policy change dialogs wrong then 😅 I could’ve sworn lots of them wouldn’t let you continue until you accepted though.
I don’t know why people find this so hard to understand, but the entire point of the GDPR is that you cannot comply with it simply by adding more terms to your Terms of Service for people to sign away their rights without reading. That’s not how it works.
Have the various aspects of GDPR been applied/tested in court yet?
European civil law originates from Roman civil law, and is quite different from common law systems that originate from British law. Generally the law is quite specific and the intent is that the law will be applied as written rather than interpreted in the social and political context of the day in light of precedent, as is done in common law systems.
I don’t know if that’s the case with the GDPR to the extent that it’s true of say, German law or French law, but if it is, it doesn’t need to be ‘tested’ in court, it is what it is.
There are a few things which GDPR leaves open to interpretation, such as:
I recently discovered how horribly complicated traditional init scripts are whilst using Alpine Linux. OpenRC might be modern, but it’s still complicated.
Runit seems to be the nicest I’ve come across. It asks the question “why do we need to do all of this anyway? What’s the point?”
It rejects the idea of forking and instead requires everything to run in the foreground:
/etc/sv/nginx/run:
#!/bin/sh
exec nginx -g 'daemon off;'
/etc/sv/smbd/run:
#!/bin/sh
mkdir -p /run/samba
exec smbd -F -S
/etc/sv/murmur/run:
#!/bin/sh
exec murmurd -ini /etc/murmur.ini -fg 2>&1
Waiting for other services to load first does not require special features in the init system itself. Instead you can write the dependency directly into the service file in the form of a “start this service” request:
/etc/sv/cron/run:
#!/bin/sh
sv start socklog-unix || exit 1
exec cron -f
Where my implementation of runit (Void Linux) seems to fall flat on its face is logging. I hoped it would do something nice like redirect stdout and stderr of these supervised processes by default. Instead you manually have to create a new file and folder for each service that explicitly runs its own copy of the logger. Annoying. I hope I’ve been missing something.
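For reference, the per-service logger on a runit system is just a second supervised script: if a `log` subdirectory with its own `run` script exists next to the service, runsv pipes the service’s stdout into it. A minimal sketch (the nginx example and the log directory are illustrative):

```shell
#!/bin/sh
# /etc/sv/nginx/log/run -- runsv connects the service's stdout to this script
# svlogd(8) writes rotated logs into the given directory (which must exist)
exec svlogd -tt /var/log/nginx
```

The `-tt` flag prefixes each line with a human-readable UTC timestamp; runsv supervises and restarts the logger just like any other service.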
The only other feature I can think of is “reloading” a service, which Aker does in the article via this line:
ExecReload=kill -HUP $MAINPID
I’d make the argument that in all circumstances where you need this you could probably run the command yourself. Thoughts?
Where my implementation of runit (Void Linux) seems to fall flat on its face is logging. I hoped it would do something nice like redirect stdout and stderr of these supervised processes by default. Instead you manually have to create a new file and folder for each service that explicitly runs its own copy of the logger. Annoying. I hope I’ve been missing something.
The logging mechanism works like this to be stable: logs are only lost if both runsv and the log service die.
Another thing about separate logging services is that stdout/stderr are not necessarily tagged; adding all this stuff to runsv would just bloat it.
There is definitely room for improvement, as logger(1) has been broken for some time in the way Void currently uses it (you can blame systemd for that).
My idea for simplifying logging services and centralizing how logging is done can be found here: https://github.com/voidlinux/void-runit/pull/65.
For me the ability to exec svlogd(8) from vlogger(8) to have a more lossless logging mechanism is more important than the main functionality of replacing logger(1).
Instead you can write the dependency directly into the service file in the form of a “start this service” request
But that solves neither starting daemons in parallel, nor starting them at all if they are run in the ‘wrong’ order. Depending on the network being set up, for example, adds complexity to each of those shell scripts.
I’m of the opinion that a DSL of whitelisted items (systemd) is much nicer to handle than writing shell scripts, along with standardized commands instead of having to know which services accept ‘reload’ vs ‘restart’ or some other variation in commands - those kinds of niceties are gone when each shell script is its own interface.
The runit/daemontools philosophy is to just keep trying until something finally runs. So if the order is wrong, presumably the service dies because a service it depends on is not running, in which case it’ll just get restarted. Eventually things progress towards a functioning state. IMO, given that a service needs to handle the services it depends on crashing at any time anyway to ensure correct behaviour, I don’t feel there is significant value in encoding this in an init system. Nor would init-level dependencies help if a dependency were moved to another machine.
It’s the same philosophy as network-level dependencies. A web app that depends on a mail service for some operations is not going to shutdown or wait to boot if the mail service is down. Each dependency should have a tunable retry logic, usually with an exponential backoff.
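That retry logic needs no init-system support either; here is a sketch in plain shell, assuming a doubling delay capped at 60 seconds (the function name and the numbers are arbitrary choices):

```shell
#!/bin/sh
# retry CMD [ARGS...]: run CMD until it succeeds, with exponential backoff
retry() {
    delay=1
    max_delay=60
    until "$@"; do
        echo "retrying in ${delay}s" >&2
        sleep "$delay"
        delay=$((delay * 2))
        [ "$delay" -gt "$max_delay" ] && delay=$max_delay
    done
}

# example: wait for a dependency before exec'ing the real daemon
# retry sv check socklog-unix
```

A run script could call something like this at the top and then `exec` the daemon, keeping the backoff policy per-service rather than baked into the supervisor.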
But that neither solves starting daemons in parallel, or even at all, if they are run in the ‘wrong’ order.
That was my initial thought, but it turns out the opposite is true. The services are retried until they work. Things are definitely parallelized – there is no “exit” in these scripts, so there is no physical way of running them in a linear (non-parallel) fashion.
Ignoring the theory: void’s runit provides the second fastest init boot I’ve ever had. The only thing that beats it is a custom init I wrote, but that was very hardware (ARM Chromebook) and user specific.
Dependency resolving on daemon manager level is very important so that it will kill/restart dependent services.
runit and s6 also don’t support cgroups, which can be very useful.
Dependency resolving on daemon manager level is very important so that it will kill/restart dependent services
Why? The runit/daemontools philosophy is just to try to keep something running forever, so if something dies, just restart it. If one restarts a service, then either those that depend on it will die or they will handle it fine and continue with their lives.
either those that depend on it will die or they will handle it fine
If they die, and are configured to restart, they will keep bouncing up and down while the dependency is down? I think having dependency resolution is definitely better than that. Restart the dependency, then the dependent.
It’s a computer, it’s meant to do dumb things over and over again. And presumably that faulty component will be fixed pretty quickly anyways, right?
It’s a computer, it’s meant to do dumb things over and over again
I would rather have my computer do less dumb things over and over personally.
And presumably that faulty component will be fixed pretty quickly anyways, right?
Maybe; it depends on what went wrong precisely, how easy it is to fix, etc. We’re not necessarily just talking about standard daemons - plenty of places run their own custom services (web apps, microservices, whatever). The dependency tree can be complicated. Ideally once something is fixed everything that depends on it can restart immediately, rather than waiting for the next automatic attempt which could (with the exponential backoff that proponents typically propose) take quite a while. And personally I’d rather have my logs show only a single failure rather than several for one incident.
But, there are merits to having a super-simple system too, I can see that. It depends on your needs and preferences. I think both ways of handling things are valid; I prefer dependency management, but I’m not a fan of Systemd.
I would rather have my computer do less dumb things over and over personally.
Why, though? What’s the technical argument? daemontools (and I assume runit) sleep for 1 second between retries, which for a computer is basically equivalent to being entirely idle. It seems to me that a lot of people just get a bad feeling about running something that will immediately crash.
Maybe; it depends on what went wrong precisely, how easy it is to fix, etc. We’re not necessarily just talking about standard daemons - plenty of places run their own custom services (web apps, microservices, whatever).
What’s the distinction here? Also, with microservices the dependency graph in the init system almost certainly doesn’t represent the dependency graph of the microservice as it’s likely talking to services on other machines.
I think both ways of handling things are valid
Yeah, I cannot provide an objective argument as to why one should prefer one to the other. I do think this is a nice little example of the slow creep of complexity in systems. Adding a pinch of dependency management here because it feels right, and a teaspoon of plugin system there because we want things to be extensible, and a deciliter of proxies everywhere because of microservices. I think it’s worth taking a moment every now and again and stepping back and considering where we want to spend our complexity budget. I, personally, don’t want to spend it on the init system so I like the simple approach here (especially since with microservices the init dependency graph doesn’t reflect the reality of the service anymore). But as you point out, positions may vary.
Why, though? What’s the technical argument?
Unnecessary wakeup, power use (especially for a laptop), noise in the logs from restarts that were always bound to fail, unnecessary delay before restart when restart actually does become possible. None of these arguments are particularly strong, but they’re not completely invalid either.
We’re not necessarily just talking about standard daemons …
What’s the distinction here?
I was trying to point out that we shouldn’t make too many generalisations about how services might behave when they have a dependency missing, nor assume that it is always ok just to let them fail (edit:) or that they will be easy to fix. There could be exceptions.
Perhaps wandering off topic, but this is a good way to trigger even worse cascade failures.
e.g., an RSS reader that falls back to polling every second if it gets something other than a 200. I retire a URL, and now a million clients start pounding my server with a flood of traffic.
There are a number of local services (time, dns) which probably make some noise upon startup. It may not annoy you to have one computer misbehave, but the recipient of that noise may disagree.
In short, dumb systems are irresponsible.
But what is someone supposed to do? I cannot force a million people using my RSS tool not to retry every second on failure. This is just the reality of running services. Not to mention all the other issues that come up with not being in a controlled environment and running something loose on the internet such as being DDoS’d.
I think you are responsible if you are the one who puts the dumb loop in your code. If end users do something dumb, then that’s on them, but especially, especially, for failure cases where the user may not know or observe what happens until it’s too late, do not ship dangerous defaults. Most users will not change them.
In this case we’re talking about init systems like daemontools and runit. I’m having trouble connecting what you’re saying to that.
N.B. bouncing up and down ~= polling. Polling always intrinsically seems inferior to event based systems, but in practice much of your computer runs on polling perfectly fine and doesn’t eat your CPU. Example: USB keyboards and mice.
USB keyboard/mouse polling doesn’t eat CPU because it isn’t done by the CPU. IIUC the USB controller generates an interrupt when data is received. I feel like this analogy isn’t a good one (regardless). Checking a USB device for a few bytes of data is nothing like (for example) starting a Java VM to host a web service which takes some time to read its config and load its caches only to then fall over because some dependency isn’t running.
Sleep 1 second and restart is the default. It is possible to get different behavior by adding a ./finish script alongside the ./run script.
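For the record, runsv invokes ./finish with the exit code after ./run exits, so the restart loop can be slowed down on failure. A minimal sketch (the service name and the 5-second pause are illustrative):

```shell
#!/bin/sh
# /etc/sv/myservice/finish -- run by runsv after ./run exits
# $1 is ./run's exit code (-1 if it was killed by a signal);
# pause before the next restart attempt when the run failed
[ "$1" = 0 ] || sleep 5
```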
I really like runit on void. I do like the simplicity of SystemD target files from a package manager perspective, but I don’t like how systemd tries to do everything (consolekit/logind, mounting, xinet, etc.)
I wish it just did services and dependencies. Then it’d be easier to write other systemd implementations, with better tooling (I’m not a fan of systemctl or journalctl’s interfaces).
You might like my own dinit (https://github.com/davmac314/dinit). It somewhat aims for that - handle services and dependencies, leave everything else to the pre-existing toolchain. It’s not quite finished but it’s becoming quite usable and I’ve been booting my system with it for some time now.
I’d make the argument that in all circumstances where you need this you could probably run the command yourself. Thoughts?
It’s nice to be able to reload a well-written service without having to look up what mechanism it offers, if any.
Runits sv(8) has the reload command which sends SIGHUP by default.
The default behavior (for each control command) can be changed in runit by creating a small script under $service_name/control/$control_code.
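As a concrete sketch of that: for `sv reload` (control code `h`), runsv first looks for ./control/h; if that script exists and exits 0, the default SIGHUP is not sent. The reload helper named here is hypothetical:

```shell
#!/bin/sh
# /etc/sv/myservice/control/h -- overrides the default SIGHUP for "sv hup"
# exiting 0 tells runsv the control request was handled, so no signal is sent
myservice-reload-config   # hypothetical reload command for this service
```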
I was thinking of the difference between ‘restart’ and ‘reload’.
Reload is only useful when:
I have not been in environments where this is necessary, restart has always done me well. I assume that the primary use cases are high-uptime webservers and databases.
My thoughts were along the lines of: if you’re running a high-uptime service, you probably don’t mind the extra effort of typing ‘killall -HUP nginx’ instead of ‘systemctl reload nginx’. In fact I’d prefer that over the risk of the init system re-interpreting a reload as something else, like reloading other services too, and bringing down my uptime.
I hoped it would do something nice like redirect stdout and stderr of these supervised processes by default. Instead you manually have to create a new file and folder for each service that explicitly runs its own copy of the logger. Annoying. I hope I’ve been missing something.
I used to use something like logexec for that, to “wrap” the program inside the runit script, and send output to syslog. I agree it would be nice if it were builtin.
AFAIK no valid licence means “all rights reserved” all over the world.
This means that if the GPL is not valid in a country, no one can use that software in that country.
Not always. e.g. the creators of the EUPL argue that under EU law, GPL, LGPL, MPL, etc are all equivalent and compatible in any direction (because you can’t enforce copyleft on anything above file level)
Can you share some legal sources to back this statement?
I’ve never heard such an argument, but I have heard an opposite one, stating that software copyright is based on literary copyright, so that you can translate a C GPL program into Python only under the GPL, as a derived work (with a caveat I do not remember). That’s different from common wisdom, as such protections are usually left to patents, not copyright.
So I’m very curious about your sources.
And BTW, file-level copyright doesn’t mean that you can use that file without a license…
And BTW, file-level copyright doesn’t mean that you can use that file without a license…
Correct, but it means that e.g. the viral part of the GPL is entirely gone, as it only applies to a single source file, not to any binary artifacts, intermediate compilation products, or other sources.
See https://joinup.ec.europa.eu/collection/eupl/eupl-compatible-open-source-licences#section-3 on why clauses on linking, in the eyes of the authors of the EUPL, have no legal validity.
Thanks this has been a great read for several reasons. I’ll double check with an European lawyer.
If this interpretation is confirmed, I will probably write a new, stronger copyleft licence for my code instead of using the AGPLv3.
We’ve had this discussion before, and I don’t think that your interpretation of the interpretation of the EUPL authors is correct.
https://lobste.rs/s/oroz5k/google_loses_android_battle_could_owe#c_wxup7r
In the discussion you admitted that you interpret it the same way, as them saying that based on written law, this would be the case, but there is no case law.
Generally, under civil law (which is what’s used in the relevant EU countries), decisions from courts rarely rely on previous court rulings, only on the written law. That’s why the US-american fixation on case law is a bit weird (and not exactly helpful).
GPLv3 was written with consideration to EU law. It is why several of the legal terms in it were changed, to fit a more global perspective than GPLv2. I think Eben and his team are fairly competent at international law. I am aware of some difference between the legal traditions around the world, and I still think it’s uncertain that your interpretation of their interpretation is accurate.
I don’t see consensus that copyleft is invalid in Europe. It remains, as far as I can tell, a minority opinion, and despite the different usage of jurisprudence in Europe, successful legal challenges against copyleft still would give some credence to the interpretation that strong copyleft is currently null in Europe (with the patronising insinuation that copyleft is just some silly colonial idea).
After all, the VMWare case was brought up in Germany and it wasn’t immediately thrown out because copyleft is considered null. The judge seemed to consider copyleft to be worthy of consideration, if Hellwig could demonstrate sufficient copyright ownership over his kernel code.
(with the patronising insinuation that copyleft is just some silly colonial idea).
What do you mean?
Btw, the EUPL authors do not say that the GPL is void. Just that the reciprocity would be invalidated (what some people call “virality”).
That’s why the US-american fixation on case law is a bit weird (and not exactly helpful).
This statement sounds a bit like a European doesn’t like that a foreign legal document with its silly foreign ideas is being discussed in Europe.
One aspect of the hereditary nature of the GPL is exactly what was being considered in the VMware case in Germany. The judge there didn’t seem to think that strong copyleft was unworthy of consideration.
The point of Markdown is keeping it readable as plain text (and I frequently use that feature). This template – which uses HTML syntax everywhere – makes that very painful.
I prefer HTML because I use it often. Also, I needed HTML to center some elements. But I can easily make a Markdown alternative, so people who prefer Markdown can use it.
The big issue for me is just the plaintext readability – often I am e.g. compiling a project and just doing cat README.md. So a README.md template should at least aim to keep most of the text readable as plaintext, IMHO.
Wait, are you telling me fractional scaling actually works in Gnome on Fedora?!
(It doesn’t on Ubuntu, and it’s been keeping me in a state of stunned amazement that they’ve been shipping a desktop unusable on mainstream hardware for two consecutive releases now, and none of the reviewers have given it as much as a sideline mention. I guess I’m the only person in the world trying to run Ubuntu on an exceedingly rare Thinkpad X1 Carbon.)
It doesn’t really. It just renders everything at one size larger than you need, and then uses in-GPU scaling.
The same approach that iOS and macOS took, and the complete opposite of the Windows, Qt, Android, and HTML 5 approach.
Well, as long as it works, I’m fine :-)
I don’t know how Unity does it (which is what I’m using now), but I suspect it’s essentially the same, and it does look crisp at any scale factor.
The problem with riot isn’t typography or that it’s ‘too busy’, the problem is that it’s really slow and heavy, and synapse is as well.
Being slow is an issue, but the old/current design is ultra ugly, so this is one more issue off the list. Speed is being worked on, with the server currently being rewritten again.
https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/
They should have never used Python in the first place. That was their big mistake. I honestly believe their rewrite will be a failure.
I’ve been reading their code for quite a while now, and from what I can see, this is all just a new UI, but the problematic parts aren’t getting fixed at all. They admitted that they just don’t have the manpower for that.
Matrix has so much potential, but with this, they won’t get anywhere.
It’s not ideal but one benefit is they can run over the spec again and see if it’s possible to implement just from the docs and iron out any issues.
Is Spolsky right, though? He only lists examples that confirm his point. For all we know, for every project that failed a complete rewrite there are ten that succeeded in one.
Case in point: the Netscape rewrite eventually lead to Mozilla, which lead to Firefox.
But why does everyone blame npm and “micro-libraries” as the main problem in JS? Don’t all other languages (except C/C++) have the same way of dealing with dependencies? Even in conservative Java, installing hundreds of packages from Maven is the norm.
Something to consider is that JavaScript has an extreme audience: it’s used by people who barely consider themselves programmers because they mostly do design, and by people just making tiny modifications. Nearly everyone building a web application, in any kind of language or framework, uses it.
I think the reason there is so much bad stuff in JavaScript is not rooted only in language design. JavaScript isn’t so much worse than other popular bad languages; it just has a larger user base with even more horrible programmers, and a lot of them also build some form of framework.
Don’t get me wrong, JavaScript is not a great language by any stretch, but a language that certainly has at least a few bright minds designing and implementing it (people working at/with Google, Mozilla and Joyent, for example) should not end up with an ecosystem so much more unstable than others.
Of course this doesn’t mean that it’s not about the language at all either. It’s just that I have yet to see a language where there isn’t a group writing micro-libraries, building bad infrastructure, following worst practices, finding ways to work around protections meant to keep you from shooting yourself in the foot, etc. Yes, even in Python, Rust, Go, Haskell and LISP that exists.
Maybe it’s just that JavaScript has been around for ages; many learned it to make some animated text and wrote up how they did it, so there is a ton of bad resources, people who never really learned the language, and a lot of users/developers who don’t care enough, since after all it’s just the front-end. Validation happens on the server, and one just wants to send off a form, load something with a button, and update some semi-global state anyway.
JavaScript is used by everyone from people programming services and systems with it (Joyent, et al.) to hobby web designers. I think these different approaches also lead to very different views on what is right and what isn’t. Looking at how it started, the standards committee having to react to the language moving into backend, application and even systems programming is probably a hard task, and it’s probably a great example of how things get (even) worse when something tries to be the perfect tool for everything, resulting in the worst.
On a related note: I think the community, if you can even call it that (there are more communities around frameworks than around the language itself, which is different from many other scripting languages), doesn’t seem to look at its own history much, so mistakes get repeated, often “fixing” one thing by breaking another, sometimes even in different framework layers. For example, some things that people learned were bad in plain JavaScript and HTML get repeated and are later found to be bad in some framework too. So one starts over and builds a new framework working around exactly that problem, overlooking others, or intentionally leaving them out because they weren’t part of the use case.
there are more communities around frameworks rather than the language itself, which is different from many other scripting languages
In general I tend to agree, but at least some time ago I am pretty sure the Rails community was larger than the Ruby community. The Django community in Python also seems to be quite vocal, but probably not larger than the language community, given that the Python community is overall far more diversified and less focused on one particular use of the language.
A lot of Java frameworks predate maven - e.g. Spring was distributed as a single enormous jar up until version 3 or so, partly because they didn’t expect everyone to be using maven. I think there’s still a cultural hangover from that today, with Java libraries ending up much bigger than those in newer languages that have had good package management from early on (e.g. Rust).
Even including all transitive libraries, my (quite large) Android app Quasseldroid has 21 real dependencies. That’s for a ~65kLOC project.
In JS land, even my smallest projects have over 300 transitive dependencies.
It’s absolutely not the same.
In technical terms, npm does not differ much from how Python does package management. Culturally, however, there is a big difference in how package development is approached. JavaScript has the famous left-pad package (story). It provided a single function to left-pad a string with spaces or zeroes. Lots of JavaScript libraries are like it, covering a single use case.
Python packages on the other hand usually cover a range of use cases or a technical area - HTTP requests, cryptography or, in the case of left-pad, string manipulation in general. Python also has PEP8 and other community standards that mean code is (likely to be) more homogeneous. I am using Python here as it is what I know best.
The author seems to conclude that Javascript as a ubiquitous network might be a grand accomplishment, despite spending the first half of the article calling out other winning networks (stack overflow, hackernews) for being hostile.
I’d be deeply saddened to see Javascript “win,” not because of its (lack of) qualifications, but just because there’s great value in having a diverse set of languages and language communities with varying goals and technology.
If I wanted a winner, it would be something like .Net’s CLR that’s designed to efficiently support many languages working together. They’ve put every paradigm on it with decent performance. Just that with any improvements we can do on simplicity, security, and performance.
My ideal would be Graal – it can run JVM languages, Ruby, Python, managed C, and many more at high performance. In the future even a C# frontend may be possible, all interacting with each other almost natively.
But even the CLR or classical JVM would be great. But JS or WASM? God f*cking no.
For a cross-language platform you want an ABI definition that allows different languages to provide compatible functions, so that e.g. C can call C++ can call Ruby can call Java can call Haskell.
WASM doesn’t provide any of that, in fact it’s even more low level than the C FFI we already have today.
Things I self-host now on the Interwebs (as opposed to at home):
Things I’m setting up on the Interwebs:
Over time I may move the Docker and KVM-based Linux boxes over to OpenBSD and VMM as it matures. I’m moving internal systems from Debian to Open or NetBSD because I’ve had enough of Systemd.
Out of curiosity, why migrate your entire OS to avoid SystemD rather than just switch init systems? Debian supports others just fine. I use OpenRC with no issues, and personally find that solution much more comfortable than learning an entirely new management interface.
To be fair, it’s not just systemd, but systemd was the beginning of the end for me.
I expect my servers to be stable and mostly static. I expect to understand what’s running on them, and to manage them accordingly. Over the years, Debian has continued to change, choosing things I just don’t support (systemd, removing ifconfig etc). I’ve moved most of my stack over to docker, which has made deployment easier at the cost of me just not being certain what code I’m running at any point in time. So in effect I’m not even really running Debian as such (my docker images are a mix of alpine and ubuntu images anyway).
I used to use NetBSD years back quite heavily, so moving back to it is fairly straightforward, and I like OpenBSD’s approach to code reduction and simplicity over feature chasing. I think it was always on the cards but the removal of ifconfig and the recent furore over the abort() function with RMS gave me the shove I needed to start moving.
For now I’m backing up my configs in git, data via rsync/ssh and will probably manage deployment via Ansible.
It’s not as easy as docker-compose, but not as scary as pulling images from public repos. Plus, I’ll actually know what code I’m running at a given point in time.
Have you looked at Capistrano for deployment? Its workflow for deployment and rollback centers around releasing a branch of a git repo.
I’m interested in what you think of the two strategies and why you’d use one or the other for your setup, if you have an opinion.
I don’t run ruby, given the choice. It’s not a dogmatic thing, it’s just that I’ve found that there are more important things for me to get round to than learning ruby properly, and that if I’m not prepared to learn it properly I’m not giving it a fair shout.
N.B. You can partially remove systemd, but not completely remove it. Many binaries depend on libsystemd at runtime even if it doesn’t look like they would need it.
When I ran my own init system on Arch (systemd was giving me woes) I had to keep libsystemd.so installed for even simple tools like pgrep to work.
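A quick way to check this yourself; a small sketch that shells out to ldd (adjust the binary name as needed, and note it returns an empty list when the binary or ldd is unavailable):

```python
import shutil
import subprocess

def linked_libraries(binary):
    """List the shared libraries a binary links against, via ldd.

    Returns an empty list if the binary or ldd isn't available.
    A libsystemd entry in the result confirms the runtime
    dependency described above.
    """
    path = shutil.which(binary)
    if path is None:
        return []
    try:
        out = subprocess.run(["ldd", path], capture_output=True, text=True)
    except FileNotFoundError:  # no ldd on this system
        return []
    return [line.split()[0] for line in out.stdout.splitlines() if line.strip()]

# e.g.: any(lib.startswith("libsystemd") for lib in linked_libraries("pgrep"))
```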
Some more info and discussion here. I didn’t want to switch away from Arch, but I also didn’t want remnants of systemd sticking around. Given the culture of systemd adding new features and acting like a sysadmin on my computer I thought it wise to try and keep my distance.
The author of the article regarding pgrep you linked used an ancient, outdated kernel, and complained that the newest versions of software wouldn’t work.
He/She used all debug flags for the kernel, and complained about the verbosity.
He/She used a custom, unsupported build of a bootloader, and complained about the interface.
He/She installed a custom kernel package, and was surprised that it (requiring a different partition layout) wiped his/her partitions.
He/She complains about color profiles, and says he/she “does not use color profiles” – which is hilarious, considering he/she definitely does use them, just unknowingly, and likely with the default sRGB set (which is horribly inaccurate anyway).
He/She asks why pgrep has a systemd dependency – pgrep and ps both support displaying the systemd unit owning a process.
I’m the author of the article.
ancient, outdated kernel
all debug flags for the kernel
unsupported build of a bootloader
The kernel, kernel build options and bootloader were set by Arch Linux ARM project. They were not unsupported or unusual, they were what the team provided in their install instructions and their repos.
A newer mainstream kernel build did appear in the repos at some point, but it had several features broken (suspend/resume, etc). The only valid option for day to day use was the recommended old kernel.
complained that the newest versions of software wouldn’t work
I’m perfectly happy for software to break due to out of date dependencies. But an init system is a special case, because if it fails then the operating system becomes inoperable.
Core software should fail gracefully. A good piece of software behaves well in both normal and adverse conditions.
I was greatly surprised that systemd did not provide some form of rescue getty or anything else upon failure. It left me in a position that was very difficult to solve.
He/She installed a custom kernel package, and was surprised that it (requiring a different partition layout) wiped his/her partitions
This was not a custom kernel package, it was provided by the Arch Linux ARM team. It was a newer kernel package that described itself as supporting my model. As it turns out it was the new recommended/mandated kernel package in the Arch Linux ARM install instructions for my laptop.
Even if the kernel were custom, it is highly unusual for distribution packages to contain scripts that overwrite partitions.
He/She complains about color profiles, and says he/she “does not use color profiles” – which is hilarious, considering he/she definitely does use them, just unknowingly
There are multiple concepts lumped together under the words ‘colour profiles’ here.
Colour profiles are indeed used by image and video codecs every day on our computers. Most of these formats do not store their data in the same format as our monitors expect (RGB888 gamma ~2.2, ie common sRGB) so they have to perform colour space conversions.
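For the curious, the “gamma ~2.2” mentioned here is, more precisely, the piecewise sRGB transfer function; a minimal Python sketch of the per-channel conversion (values normalized to [0, 1]):

```python
def srgb_to_linear(c):
    # sRGB decoding: a linear segment near black, then a power
    # curve; the combination approximates a plain gamma of ~2.2.
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # The inverse (encoding) direction, applied per channel when
    # converting linear light back to what the monitor expects.
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * c ** (1 / 2.4) - 0.055
```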
Whatever the systemd unit was providing in the form of ‘colour profiles’ was completely unnecessary for this process. All my applications worked before systemd did this. And they still do now without systemd doing it.
likely with the default sRGB set (which is horribly inaccurate anyway)
1:1 sRGB is good enough for most people, as it’s only possible to obtain benefits from colour profiles in very specific scenarios.
If you are using a new desktop monitor and you have a specific task you need or want to match for, then yes.
If you are using a laptop screen like I was: most change their colour curves dramatically when you change the screen viewing angle. Tweaking of colour profiles provides next to no benefit. Some laptop models have much nicer screens and avoid this, but at the cost of battery life (higher light emissions) and generally higher cost.
I use second hand monitors for my desktop. They mostly do not have factory provided colour profiles, and even then the (CCFL) backlights have aged and changed their responses. Without calibrated colour profiling equipment there is not much I can do, and it is not worth the effort unless I have a very specific reason to do so.
He/She asks why pgrep has a systemd dependency – pgrep and ps both support displaying the systemd unit owning a process.
You can do this without making systemd libraries a hard runtime dependency.
I raised this issue because of a concept that seemed more pertinent to me: the extension of systemd’s influence. I don’t think it’s appropriate for basic tools to depend on any optional programs or libraries, whether they be an init system like systemd, a runtime like mono or a framework like docker.
Almost all of these issues are distro issues.
Systemd can work without the color profile daemon, and ps and pgrep can work without systemd. Same with the kernel.
But the policy of Arch is to always build all packages with all possible dependencies as hard dependencies.
e.g. for Quassel, which can make use of KDE integration but doesn’t require it, they decided to build it so that it has a hard dependency on KDE (which means it pulls in 400M of packages for a package that would be fine without any of them).
I really wish the FreeBSD port of Docker was still maintained. It’s a few years behind at this point, but if FreeBSD was supported as a first class Docker operating system, I think we’d see a lot more people running it.
IME Docker abstracts the problem under a layer of magic rather than providing a sustainable solution.
Yes it makes things as easy as adding a line referencing a random github repo to deploy otherwise troublesome software. I’m not convinced this is a good thing.
As someone who needs to know exactly what gets deployed in production, and therefore cannot use any public registry, I can say with certainty that Docker is a lot less cool without the plethora of automagic images you can run.
Exactly, once you start running private registries it’s not the timesaver it may have first appeared as.
Personally, I’ll have to disagree with that. I’m letting GitLab automatically build the containers I need as a basis, plus my own. And the result is amazing, because scaling, development, reproducibility, etc. become much easier.
I think Kubernetes has support for some alternative runtimes, including FreeBSD jails? That might make FreeBSD more popular in the long run.
Works fine for me(tm).
It seems fine both over mobile and laptop, and over 4G. I haven’t tried any large groups and I doubt I’ll use it much, but so far I’ve been impressed.
Is bookstack good? I’m on the never ending search for a good wiki system. I keep half writing my own and (thankfully) failing to complete it.
Cowyo is pretty straightforward (if sort of sparse).
Being go and working with flat files, it’s pretty straightforward to run & backup.
Bookstack is one of the best wikis I’ve given to non-technical people to use. However, I think it stores HTML internally, which is a bit icky in my view; I’d prefer it if they converted it to markdown. Still, it’s fairly low resource, pretty, and works very, very well.
It’s nice people are working on discovering these things. I wish they didn’t have to make a big publicity stunt out of it every time though.
I’m sure there do exist some GPG users who have images set to download automatically, but the idea that every GPG user “must take action now” is absurd.
If I understand this correctly: if someone who has a history of exchanges with you doesn’t act, they might leak some information that you exchanged together. That’s why, if everybody stops using it temporarily, it might help everybody.
Proper use of PGP assumes trust in every participant, in the hardware/software involved, and in the OPSEC skill of every correspondent. This “vulnerability” changes nothing. If you used PGP with a client that automatically downloaded images (extremely unlikely), you had been doing it wrong already.
That’s not necessarily required – Thunderbird preloads content, but does not display it. So you’d be vulnerable there, too.
OK, then Thunderbird is broken and badly needs to be fixed. This is true regardless of whether GPG is used or not.
Thunderbird does not automatically download remote content: https://support.mozilla.org/en-US/kb/remote-content-in-messages. It does download entire messages, including attachments; but I would be very surprised if this were not common MUA behavior.
I’m running
Planned in the long-term future are a custom password manager and a custom Firefox Sync Server (with better history sync + web fulltext history search). For the short-term a clone of Google Keep is planned.
A while ago due to a bug Google wiped my calendar and contacts. Shortly after that I lost access to one of my other Google accounts and only managed to get it back because I had a lot of luck (and help from the new owner of my old phone number). These two events, combined with the Snowden papers, have over the years been a major motivation for me trying to self-host everything.
I’ve been using Matrix as a glorified IRC bouncer for over a year, it’s pretty good, but Synapse still occasionally chokes on “forward extremities” and becomes completely unresponsive so you have to run a SQL query to clean up and wait for a while for it to become responsive again :(
worst offenders seem to be IRC-bridged rooms with a high join/part turnover. Such as #mozilla_#rust:matrix.org, #mozilla_#rust-offtopic:matrix.org, and #haskell:matrix.org
Riot-web has been fast enough for me, but I prefer Fractal, because GTK :)
Bridges are also choking (and getting out of sync) in low/moderate-traffic 200-user channels where 90% don’t rejoin because of bouncers. I still haven’t really seen an advantage.
It’s one of the big issues where no alternative for IRC really exists yet.
Riot also starts choking once rooms grow past a few thousand members that join and part constantly — while even the simplest IRC clients handle it fine.
It’ll be interesting to see how this develops in the next few years, but for now it looks like Matrix isn’t quite ready to replace IRC yet.
From the client/user point of view, Riot is certainly as optimal as it is suboptimal. It is fairly usable and nice, but also incredibly resource hungry and slow at times. I would like to see more native clients (in particular console clients), but this would certainly increase friction in terms of client support for features and changes.
This also extends to the operational point of view: it’s not just that Matrix/Synapse is simply slow at times, it’s that the design is by default far more resource intensive than IRC. An ircd requires basically nothing in terms of resources to serve quite a sizable number of users. Synapse, on the other hand, requires quite a lot of CPU power in addition to a metric ton of space in its database (especially if your users join large rooms). Joining the main Matrix channel is almost certain to cause hours of full CPU usage and increase the db size by a few hundred MB.
Of course matrix and irc provide different featuresets, but right now I feel that matrix may never be ideal for large group chats simply by design. I can’t quite see how rooms like the matrix main channel will ever be “ok” for a matrix server.
All this being said, matrix works nicely for one-on-one and small group chats, which is what most of my users do.
The actual design of the Matrix spec doesn’t have any issues that I have seen, but the current software seems more like a prototype in production. Hopefully Dendrite and some updates to Riot can speed everything up, because that’s one of the main issues I see with it now.
Yeah, that’s what I’ve seen so far, too. The spec is great, but the implementation is rather meh. Which means that at least it should be easy to fix later on.
The spec does require a lot more resources than IRC, though, specifically in the form of maintaining logs and allowing searching of them. I wouldn’t be surprised if there are other implementations/settings that come out to auto-kill logs after a month or something (I don’t think that necessarily violates the spec and is pretty handy for GDPR)
We also do log storage and fulltext search in the Quassel bouncer (and its ecosystem), and yet we don’t have nearly as many performance issues as Matrix does.
This is mostly an implementation problem, I’m sure it can be fixed over the years.
I have been using Fractal as well. I like the GUI, but it does seem to use a lot of CPU. It also doesn’t support end-to-end crypto yet.
Just tried Fractal on Mac OS. Amazing (and a bit horrible) that it looks exactly like Gnome. Perhaps somebody (me?!) will make a decent version in the future, though.
I find it a little ironic that, after using the open-web browser, I am not able to inspect the sessionstore-backups/recovery.jsonlz4 file after a crash to recover some textfield data, as Mozilla Firefox is using a non-standard compression format which cannot be examined with lzcat, nor even with lz4cat from ports.
The bug report about this lack of open formats was filed 3 years ago, and suggests lz4 was actually standardised long ago, yet this is still unfixed in Mozilla.
Sad state of affairs, TBH. The whole choice of a non-standard format for users’ data is troubling; the lack of progress on this bug, after several years no less, is even more so.
https://bugzilla.mozilla.org/show_bug.cgi?id=1209390#c10 states that when Mozilla adopted using LZ4 compression there wasn’t a standard to begin with. Yeah, no one has migrated the format to the standard variant, which sucks, but it isn’t like they went out of their way in order to hide things from the user.
It was probably unwise for Mozilla to shift to using that compression algorithm when it wasn’t fully baked, though I trust that the benefits outweighed the risks back then.
This will sound disappointing to you, but your case is as edge-caseish as it gets.
It’s hard to prioritize those things over things that affect more users. Note that other browser makers have security teams larger than all of Mozilla’s staff. Mozilla has to make those hard decisions.
These jsonlz4 data structures are meant to be internal (but you’re still welcome to use the open source implementation within Firefox to mess with them).
I got downvoted twice for “incorrect” though I tried my best to be neutral and objective. Please let me know, what I should change to make these statements more correct and why. I’m happy to have this conversation.
Priorities can be criticized.
Mozilla obviously has more than enough money that they could pay devs to fix this — just sell Mozilla’s investment in the CliqZ GmbH and there would be enough to do so.
But no, Mozilla sets its priorities as limiting what users can do, adding more analytics and tracking, and more cross promotions.
Third party cookie isolation still isn’t fully done, while at the same time money is spent on adding more analytics to AMO, on CliqZ, on the Mr Robot addon, and even on Pocket. Which still isn’t open source.
Mozilla has betrayed every single value of its manifesto, and has set priorities opposite of what it once stood for.
That can be criticized.
Wow, that escalated quickly :) It sounds to me that you’re already arguing in bad faith, but I think I’ll be able to respond to each of your points individually in a meaningful and polite way. Maybe we can uplift this conversation a tiny bit? However, I’ll do this with my Mozilla hat off, as this is purely based on public information and I don’t work on Cliqz or Pocket or any of those things you mention. Here we go:
As someone who also got into arguments 1-3 against Firefox, I guess you’ll always have to deal with criticism that is nit-picking, because you’ve written “OSS, privacy respecting, open web” on your chest. Still, it is obvious you won’t implement an lz4 file upgrade mechanism (oh boy is that funny when it’s only some tiny app and its sqlite tables), because there are much more important things than two users not being able to use their default tools to inspect the internals of Firefox.
Sure, but it’s obvious that somehow Mozilla has enough money to buy shares in one of the largest Advertisement and Tracking companies’ subsidiaries (Burda, the company most known for shitty ads and its Tabloids, owns CliqZ), where Burda retains majority control.
And yet, there’s not enough left to actually fix the rest.
And no, I’m not talking about Telemetry — I’m talking about the fact that about:addons and addons.mozilla.org use proprietary analytics from Google, and send all page interactions to Google. If I wanted Google to know what I do, I’d use Chrome.
Yet somehow Mozilla also had enough money to convert all its tracking from the old, self-hosted Piwik instance to this.
None of your arguments fix the problem that Mozilla somehow sees it as higher priority to track its users and invest in tracking companies than to fix its bugs or promote open standards. None of your arguments even address that.
about:addons code using Google analytics has been fixed and is now using telemetry APIs, adhering to the global control toggle. Will update with the link, when I’m not on a phone.
Either way, Google Analytics uses a mozilla-customized privacy policy that prevents Google from using the data.
If your tinfoil hat is still unimpressed, you’ll have to block those addresses via /etc/hosts (no offense.. I do too).
I won’t comment on the rest of your comment, but this is really a pretty tiny issue. If you really want to read your sessionstore as a JSON file, it’s as easy as git clone https://github.com/Thrilleratplay/node-jsonlz4-decompress && cd node-jsonlz4-decompress && npm install && node index.js /path/to/your/sessionstore.jsonlz4. (that package isn’t in the NPM repos for some reason, even though the readme claims it is, but looking at the source code it seems pretty legit)
Sure, this isn’t perfect, but dude, it’s just an internal datastructure which uses a format which is slightly non-standard, but which still has open-source tools to easily read it - and looking at the source code, the format is only slightly different from regular lz4.
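The wrapper really is thin. A hedged Python sketch of the header parsing, based on my understanding of the layout (the returned payload then still needs an LZ4 block decoder, e.g. the third-party python-lz4 package’s lz4.block.decompress, with `size` as the expected uncompressed size):

```python
import struct

MOZLZ4_MAGIC = b"mozLz40\0"  # 8-byte magic at the start of .jsonlz4 files

def split_mozlz4(data):
    # Layout: 8-byte magic, then the decompressed size as a
    # little-endian uint32, then a raw LZ4 *block* (not the
    # standard LZ4 *frame* format, which is why lz4cat/lzcat
    # refuse these files).
    if not data.startswith(MOZLZ4_MAGIC):
        raise ValueError("not a mozlz4 file")
    (size,) = struct.unpack_from("<I", data, len(MOZLZ4_MAGIC))
    return size, data[len(MOZLZ4_MAGIC) + 4:]
```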
Weechat for Android - remote client for the weechat IRC client. Best Android IRC experience I’ve found to date.
If you want even better Android IRC experience, you could try the new Quasseldroid beta https://quasseldroid.info/ (as soon as the new version is released it’ll also replace the current F-Droid entry) with the new quassel 0.13 (try it by compiling from git).
The result looks something like https://i.k8r.eu/63U1pA and works quite nicely. The repo is at https://git.kuschku.de/justJanne/QuasselDroid-ng/ and beta builds are available on the Play Store or at https://s3.kuschku.de/releases/quasseldroid-ng/Quasseldroid-latest.apk (sadly I haven’t found a nice way to get beta builds onto F-Droid yet, but thanks to this thread I’m right now trying out a way to do so)
EDIT: An F-Droid binary repo of the latest beta releases is now available, its direct link (not usable in the browser) is https://repo.kuschku.de/repo?fingerprint=A0CBC2C29E38ED9542F86A1188412A60C5A756FC4D7A31C4C622242D7AD021F2
Disclaimer: I’m the dev.
I would, but my quassel experience wasn’t that great ~1 year ago. Got really unstable and took several minutes to start the desktop client as I approached 400 buffers. Granted, this is a lot, and people have told me that using postgres for the backend can improve it, but I don’t really want to use postgres for my IRC bouncer and I’m accustomed to weechat and its own problems now :’)
That said, I used the old quasseldroid for a long time, and it was actually a life-saver when the desktop client kept crashing and I needed to read some old messages. Thanks for many years of IRC on the go!
Yeah, that’s a real issue with SQLite: it really wasn’t designed for many separate threads writing and reading at the same time, which causes timeouts. That said, with the recent versions that issue should go away, as loading backlog is now only required for the buffers you actually open – which massively reduces the amount of data that has to be loaded on connection.
Still great that you liked it :)
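For what it’s worth, SQLite’s WAL journal mode plus a busy timeout is the usual mitigation for exactly this kind of reader/writer contention. A generic Python sketch (not Quassel’s actual code; the table name is made up):

```python
import os
import sqlite3
import tempfile

# Use a throwaway file: WAL mode requires an on-disk database.
path = os.path.join(tempfile.mkdtemp(), "backlog.db")

conn = sqlite3.connect(path, timeout=5.0)  # wait up to 5s on a locked db
# WAL lets readers keep reading while a single writer appends,
# instead of immediately failing with "database is locked".
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA busy_timeout=5000")
conn.execute("CREATE TABLE IF NOT EXISTS backlog (id INTEGER PRIMARY KEY, msg TEXT)")
conn.execute("INSERT INTO backlog (msg) VALUES (?)", ("hello",))
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM backlog").fetchone()[0]
print(count)  # 1
conn.close()
```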
What’s the memory usage like with postgres? I might try it again at some point, seeing as weechat keeps eating all the RAM I can feed it (not much).
I’m still a bit unsure about the internals of quassel… for example recently I found out that it inexplicably sends NAMES for every channel it’s in quite regularly. When you’re in a few hundred that ends up being quite a lot of traffic, which is really unnecessary.
Edit: I just noticed that you edited your post to clarify about sqlite, all of this is sounding pretty good so far, maybe even good enough to tempt me back to the dark side — but certainly good enough to stop me warning people against using quassel nowadays :-)
A lot of people are using it successfully on systems as small as a raspberry pi. Personally I have been hosting a core for 5 people, some of which with 400+ channels (not buffers) on this: https://www.online.net/en/server-dedicated/start-2-s-sata
So performance is actually quite great, especially with more recent versions of the core.
for example recently I found out that it inexplicably sends NAMES for every channel it’s in quite regularly. When you’re in a few hundred that ends up being quite a lot of traffic, which is really unnecessary.
That’s done to update the away status, the modes, etc all correctly on servers that don’t yet support the IRCv3 extensions for doing this automatically. The 0.13 beta uses IRCv3 away-notify and IRCv3 account-notify for this, if available.
That’s done to update the away status, the modes, etc all correctly on servers that don’t yet support the IRCv3 extensions for doing this automatically. The 0.13 beta uses IRCv3 away-notify and IRCv3 account-notify for this, if available.
Oh fair enough then. Is there a way to turn this off?
Of course! https://i.k8r.eu/57bw_A
Silence - (badly) encrypted SMS messages, I just use it because I like the UI
Afaik Silence is a fork of TextSecure and uses the Signal Protocol, just over SMS/MMS. So the encryption, authentication, and integrity properties it provides should be very good – not badly encrypted, as you state. If you read the Signal blog post on why they stopped using SMS/MMS, they list user experience, metadata leakage, and development overhead as the main problems.
Oh, I was under the impression they were using some different crypto because the messages produced by Signal would be too large, or something. I retract my statement :-)
Honestly I don’t see how this is a big deal.
As a user, it’s a decent feature because I get to view, say, news coverage of some current event from multiple sources; and as a publisher, why do I care about where else my users go to consume media?
Why would commercial site operators care that a browser vendor is presenting unsolicited links to their competitors, to visitors?
Let’s shift industries: what if Chrome started showing links to Chrysler on Ford’s website? Why would Ford care?
Okay the links are clearly not on the page itself. You can see at the top that the UI is clearly distinct from the website itself and is more like a feature of the browser than anything else. As far as I can see, it’s just a convenient way of displaying search results for the headline without the user having to manually search it themselves.
And well, that’s really a strawman argument since that is not at all what Google is doing here. Equating a news site and the website of a corporation that sells actual products isn’t really meaningful.
Really, it’s like berating Spotify for having their radio feature because it can show me similar songs to the one I’m listening to. It’s a feature that doesn’t really hurt anyone, and on the contrary, benefits the majority of people. I’m sure Tyler’s fine with Spotify playing some Frank Ocean on his radio.
users don’t understand browser vs site differences.
You say that comparing two rival news businesses with two rival car businesses is a straw man and then bring up a music streaming service which pays each of the artists for the songs played.
Google isn’t running a news service, and paying site owners for displaying their articles.
This would be like if, while using the Spotify app, Siri chirped out with “hey, we have songs on Apple Music”.
Have anything to back up that claim of user ignorance? Also, if we suppose that the claim is correct, does it make a difference?
The straw man is in promoting a product on a rival’s site. Nothing even remotely like that is happening here. The Spotify example was mentioned because music suggestions are vaguely analogous to information media, but I take your point. Even still, my point stands.
And I disagree that it’s like that. In that case both Spotify and Apple Music would be trying to get users listening to the same thing on different platforms. Articles on the other hand are some person’s unique view of some topic, and my browser giving me suggestions for other people’s viewpoints on the same topic is nothing to beef about.
Apple literally changed the way javascript alert()/etc look in Safari because users got confused that it was the webpage showing a dialog, not the OS/browser.
In this very statement you also acknowledge that it is possible for a UI to be unambiguous, as Apple did change it so as to make it unambiguous.
Hence your claim about the UI of the suggested pages is meaningless, since you’d need to show that this particular UI falls into one of the two categories.
Apple changed the elements in question from native-styled (i.e. they look and behave exactly like a native macOS/iOS element) elements that are modal above the whole browser (i.e. they blocked all interaction with Safari while open) into in-tab plain white elements that look like a plain-jane javascript html “modal” window.
The chrome UI in question is literally a white bar at the bottom of the page - how could anyone determine whether it’s chrome’s chrome, or in-page content?
I don’t think you’re looking at the right picture: https://pbs.twimg.com/media/Db1OtiSWkAEKPag.jpg
But we’re digressing. What actual point are you trying to make here, because I don’t see it.
look at the first picture, which is what the user sees, seemingly as part of the site.
This whole thing is in response to this claim:
You can see at the top that the UI is clearly distinct from the website itself
The next sentence was:
Also, if we suppose that the claim is correct, does it make a difference?
We could argue about whether or not a user will think it’s part of the site all day, but it doesn’t really matter.
And I made many points in the post that you quoted, not just that one.
The rest of your ‘points’ are arguing that the news sites in question wouldn’t be concerned by this move.
Did you happen to notice who wrote the tweet that’s linked to? The Executive Editor of The Verge. He seems none too pleased about this change, for what I think are pretty obvious reasons.
If you want to put Google on some nerd pedestal and believe nothing they do can be faulted, that’s your choice, but don’t expect other people to follow your logic.
If you want to put Google on some nerd pedestal and believe nothing they do can be faulted, that’s your choice, but don’t expect other people to follow your logic.
You’re taking some large leaps here. I do think that Google have fucked up with AMP overall, and I don’t think that Google can do no wrong: they’re a terrible company for user privacy and they’ve shit all over their “Don’t Be Evil” slogan recently.
However, we’re talking about a very specific feature of one of their services, and as a user I welcome a little tab that gives me related articles on X topic, regardless of where The Verge want me to consume information.
I didn’t realise that I had to outline my position on Google as a whole to be able to have an opinion on something that they do.
as a user I welcome
As a user you’re entitled to want what you want, but you seem to have forgotten the part where you said:
as a publisher, why do I care about where else my users go to consume media
You claim to acknowledge Google’s faults, but you seem unable to comprehend how this change could affect online news companies, either now or in any future incarnations of this ‘feature’.
The two aren’t mutually exclusive.
You claim to acknowledge Google’s faults, but you seem unable to comprehend how this change could affect online news companies, either now or in any future incarnations of this ‘feature’.
And you’ve failed to demonstrate how.
You’re fucking kidding me.
You don’t see how driving traffic away from a site to its competitors could affect them?
You’re being deliberately obtuse.
It’s the “driving traffic away” part I don’t agree with, but since you’ve dissolved this discussion into pure ad hominem attacks, I won’t be continuing with the conversation.
Have a nice day dude!
No, they changed it to confuse fewer people.
As someone who’s been doing UI design for more than a decade and has followed this part of the industry even longer, I can assure you there are no meaningful interfaces that wouldn’t confuse at least some people. What we all try to do, every day, is reduce the number of confused people, which you can see in the way UI widgets and patterns evolve over time.
The big issue is that every time Google has provided any kind of listing anywhere, ever, they’ve allowed companies through AdWords to get to the top.
And that becomes super shady.
My favourite falsehood about Unicode — toUpper/toLower does not change the length of a string, at least when measured in graphemes, for Latin-1 — is sadly coming to an end nowadays.
Previously, the uppercase equivalent of “ß” was “SS”; it compared equal to “SZ” and “SS”, and the lowercase equivalent of “SS” was “ss”, but “ss” and “ß” were not equal.
Now that ẞ exists as the official uppercase form of ß and is to be used as of 2017, the next version of Unicode is going to standardize it.
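A quick Rust check of the traditional mapping (Rust applies Unicode’s full case mappings, so this is easy to observe):

```rust
fn main() {
    // Uppercasing "ß" yields "SS": one character becomes two,
    // so case mapping can change the length of a string.
    let s = "ß";
    let upper = s.to_uppercase();
    assert_eq!(upper, "SS");
    assert_eq!(s.chars().count(), 1);
    assert_eq!(upper.chars().count(), 2);
    // The round trip is lossy: lowercasing gives "ss", not "ß".
    assert_eq!(upper.to_lowercase(), "ss");
}
```

Whether standard libraries ever switch the default mapping to ẞ is up to Unicode and the implementations; as of today they still map ß to “SS”.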
The capital sharp s existed long before Unicode:
Historical typefaces offering a capitalized eszett mostly date to the time between 1905 and 1930. The first known typesets to include capital eszett were produced by the Schelter & Giesecke foundry in Leipzig, in 1905/6. Schelter & Giesecke at the time widely advocated the use of this type, but its use remained very limited.
… and it’s been in Unicode since 2008:
Capital ß (ẞ) was introduced as part of the Latin Extended Additional block in Unicode version 5.1 in 2008 (U+1E9E ẞ LATIN CAPITAL LETTER SHARP S).
The only thing that changed in 2017 was the opinion of the Council for German Orthography.
Yes, I realize that – but it’s expected that the next version of Unicode is going to standardize the new capitalization rules, which they haven’t yet.
It’s funny how for many years people hated on checked exceptions as the worst mistake ever – and now they’re back as Result types.
I just wish this realization had come sooner, as now too many languages don’t have support for either.
I think checked-exceptions as implemented in Java had a number of flaws that Rust’s corrects:
- Checked exceptions coexist with unchecked ones like NullPointerException, contributing to a feeling that they don’t add a lot of value.
- They force you to handle errors that can’t happen, like UnsupportedEncodingException on "utf-8". The Java spec says UTF-8 must be available, but you have to write the handful of lines of code to catch UnsupportedEncodingException anyway! In Rust the equivalent situation is handled with .unwrap() or .expect("..."), which is much less verbose.
- Rust’s ? operator takes care of extracting the value from a Result and wrapping it into the correct one. In Java the convention seems to be declaring that every function raises three different exception types, adding verbosity at every call definition.

I agree. It just saddens me that Kotlin makes all exceptions unchecked, even those coming from Java, instead of automatically wrapping the Java code in Result<T, E>.
There’s a lot of things Rust does right that no JVM language currently does well.
There’s a lot of things Rust does right that no JVM language currently does well.
Such as? Scala is very Rust-like; it doesn’t do linear typing but that wouldn’t help you much on the JVM anyway.
At least Rust has unwrap, for when you know that errors should not happen if the code is correct, or for initial rough code. Java’s checked exceptions are frustrating precisely because there’s no short syntax for re-raising as an unchecked exception while preserving the stack trace (some IDEs even generate handlers that just print the stack trace to stderr in such unwrap-like situations).
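A minimal Rust sketch of the contrast (the parse_sum helper is hypothetical, just for illustration): the ? operator propagates the error as an ordinary Result value, while expect() asserts an error can’t happen, replacing the catch boilerplate Java would require.

```rust
use std::num::ParseIntError;

// `?` re-raises the error by returning early; the Result in the
// signature documents it, much like a checked exception would.
fn parse_sum(a: &str, b: &str) -> Result<i32, ParseIntError> {
    Ok(a.parse::<i32>()? + b.parse::<i32>()?)
}

fn main() {
    assert_eq!(parse_sum("2", "3"), Ok(5));
    assert!(parse_sum("2", "x").is_err());

    // For "can't happen" errors, expect() is the one-line equivalent
    // of catching an impossible exception in Java.
    let n: i32 = "42".parse().expect("literal is a valid integer");
    assert_eq!(n, 42);
}
```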
I hated checked exceptions right up until I tried to write some software that had to be more reliable than a http worker that got restarted every request.
Turns out that when I write to a file I really want to know exactly what can go wrong.
My only experience with checked exceptions was Java, and that sucked… But inferred and merged checked exceptions could be cool. Do any languages have that?
Yes, Ocaml has it with Polymorphic variants + result monad. The one current downside is the error messages can be less than ideal.
A few blog posts describing it:
http://functional-orbitz.blogspot.se/2013/01/introduction-to-resultt-vs-exceptions.html
http://functional-orbitz.blogspot.se/2013/01/experiences-using-resultt-vs-exceptions.html
The difference is that results are plain old values that fit in the normal type system. You can call a higher-order function with a function that returns a result and it will just work. Checked exceptions were indeed a terrible mistake, not because they force you to handle errors, but because they were a secondary type system that didn’t interoperate properly with the primary type system.
(People who are proposing effect systems should take note)
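A small Rust illustration of results as plain values: a fallible function composes with ordinary higher-order functions, and collect() can even turn an iterator of Results into a Result of a collection, something a secondary exception system can’t express.

```rust
fn main() {
    // map() takes the fallible function like any other closure;
    // no parallel exception type system is involved.
    let ok: Result<Vec<i32>, _> =
        ["1", "2", "3"].iter().map(|s| s.parse::<i32>()).collect();
    assert_eq!(ok, Ok(vec![1, 2, 3]));

    // The first parse failure short-circuits into Err.
    let bad: Result<Vec<i32>, _> =
        ["1", "x", "3"].iter().map(|s| s.parse::<i32>()).collect();
    assert!(bad.is_err());
}
```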
Sure, but the solution would have been to wrap checked exceptions in a Result type for interop, not, like kotlin does today, to just swallow all of them.
the solution would have been to wrap checked exceptions in a Result type for interop
There are a couple of problems with that - performing a JVM catch at every interop boundary is inherently inefficient, and exceptions don’t quite have the nice monadic composition you’d expect from results.
It’s not worth it. I gave Matrix/Riot 2 years to become usable: fix performance, fix resource usage, behave like modern tech they are claiming to replace. It was not worth the effort.
10 years of IRC logs from irssi: 500MB of disk space.
2 years of moderate Matrix/Riot usage (with IRC bridges which I ran myself): 12GB Postgres database.
Insane. This tech is dead on arrival in my opinion.
At least when XMPP works, it works well; provided you aren’t getting screwed over by server/client inconsistency in support. When Matrix works, it’s slow as a dog, client and server. (Not to mention New Vector seems a bit…. fucky when it comes to competition in homeservers.)
Yeah, XMPP’s weakness is the XEPs and their inconsistent implementation. It should have all been one consolidated protocol, but then it might not have had adoption due to complexity. sigh
I’ll be honest, I looked into contributing to Dendrite (the Golang successor) but found the codebase a mess (and it uses gb, which is not the way the community as a whole has been moving for years, but that’s more of a personal preference I guess). Maybe they’ll get their act together but for now I’m going to pass.
That’s a very odd thing to have an issue with. 12GB is fairly minor in today’s terms. If you take a look at the source for a message in Matrix you will see they each contain a whole lot more info than an IRC message, such as the origin server, message type, event ID, room ID and a whole lot more. Also, Riot supports inline media, which on its own would take up 12GB with some moderate usage.
Matrix doesn’t aim to be a 1:1 copy of IRC, It supports a whole lot more features that users expect of modern software and that necessarily means more resource usage.
The media is not stored in the Postgres database.
The software is slow. It should never have been written in Python, because it’s affected by the GIL. The database is poorly optimized and has lots of issues that require manual intervention like this: https://github.com/matrix-org/synapse/issues/1760
The best summary I can provide is this quote, “[The problem with Matrix ] has everything to do with synapse, bad signature design (canonicalized json vs http sigs) and an overall terrible agent model.”
12GB Postgres database means poor performance unless you have good hardware. Try running it on an Rpi or a Scaleway C1. You’re not going to have a usable experience. Even a Digital Ocean $5/mo droplet won’t be usable.
Not everyone has a Dual Xeon with 64GB of RAM colocated. I do. It was even awful on that.
I previously ran every application I made on crappy hardware to make sure it wasn’t overbloated. If it worked there, it would probably be great on newer boxes. Seeing the $5 droplets mentioned all the time makes me think they might be a nice new baseline. What do you think, since you mentioned them?
Quassel manages to store all the same data, also in a PostgreSQL database, in much less than 12GB. If you add fulltext search, it still won’t be even close.
The problem is that Matrix as a project just has a lot of things left to fix, my current favorite is their “database” backend on Android
Matrix could be great, if they actually drop HTTP Longpolling, actually finish a native socket implementation, actually finish their Rust server implementation, replace their serialization format with a more efficient one, and so on, and so on.
In a few years Matrix may become great – for today, it isn’t there yet.
Disclaimer: I’m personally involved with IRC, and develop Quasseldroid, a client for the Quassel bouncer.
finish their Rust server implementation
You mean in go.
I am backing the project on Patreon. Right now, I have completely replaced both XMPP and Messenger and I surely hope that it will improve over time.
Oh, it ended up being go? Last I heard about it, someone was rewriting the server in Rust. Was that abandoned?
Thanks for your feedback. I am yet to use it extensively so I cannot comment on the performance issues as of now.
TLDR;
It’s worth noticing that both these groups have their center in the USA, but their decisions affect the whole world.
So we could further summarize that we have two groups, one controlled by USA lobbies and the other controlled by the most powerful companies in the world, fighting for the control of the most important infrastructure of the planet.
Under Trump’s Presidency.
Take this, science fiction! :-D
This is somewhat disingenuous. A web browser’s HTML parser needs to be compatible with the existing web, but the W3C’s HTML4 specification couldn’t be used to build a web-compatible HTML parser, so reverse engineering was required for an independent implementation. With WHATWG’s HTML5 specification, for the first time in history, web-compatible HTML parsing was specified, adoption agency algorithm and all. This was a great achievement in standard writing.
Servo is a beneficiary of this work. Servo’s HTML parser was written directly from the specification without any reverse engineering, and it worked! Contrary to your implication, the WHATWG lowered the barrier to entry for independent implementations of the web. Servo is struggling with CSS because CSS is still ill-specified in the manner of HTML4. For example, the only reasonable specification of table layout is an unofficial draft: https://dbaron.org/css/intrinsic/ For a laugh, count the number of times “does not specify” appears in CSS2’s table chapter.
You say backwards compatibility is necessary, and yet Google managed to get all major sites to adopt AMP in a matter of months. AMP has even stricter validation rules than XHTML.
XHTML could have easily been successful, if it hadn’t been torpedoed by the WHATWG.
That has nothing to do with the AMP technology, but with Google providing the CDN and preloading (i.e., IMHO abusing their market position).
abusing their market position
Who? Google? The web AI champion?
No… they do no evil… they just want to protect their web!
Disingenuous? Me? Really? :-D
Who was in the working group that wrote CSS2 specification?
I bet a coffee that each of those “does not specify” was the outcome of a political compromise.
But again, beyond the technical stuffs, don’t you see a huge geopolitical issue?
This is an interesting interpretation, but I’d call it incorrect.
I’m not speaking on behalf of my function in the w3c working group I’m in, nor for Mozilla. But those positions provided me with the understanding and background information to post this comment.
XHTML had little traction, because of developers
I remember that in the early 2000s everyone started to write <br/> instead of <br>, and it was considered cool and modern. There were 80x15 badges everywhere saying the website was in XHTML. My Motorola C380 phone supported WAP and some XHTML websites, but not regular HTML, in its built-in browser. So I had the impression that XHTML was very popular.
XHTML made testing much easier. For me it changed many tests from using regexps (qr#<title>foo</title>#) to using any old XML parser and XPath.
Agreed. Worth noting that, after the HTML5 parsing algorithm was fully specified and libraries like html5lib became available, it became possible to apply exactly the same approach, with HTML5 parsers outputting a DOM structure and then querying it with XPath expressions.
This is an interesting interpretation, but I’d call it incorrect.
You are welcome. But given your arguments, I still stand by my political interpretation.
the reason to create whatwg wasn’t about control
I was 24 back then, and my reaction was “What? Why?”.
My boss commented: “wrong question. You should ask: who?”
XHTML had little traction, because of developers
Are you sure?
I wrote several web sites back then using XML, XSLT and XInclude server-side to produce XHTML and CSS.
It was a great technological stack for distributing content over the web.
w3c didn’t “start working on a new Dom”. They copy/backport changes from whatwg hoping to provide stable releases for living standards
Well, had I written a technical document about an alternative DOM for the whole planet, without anyone asking me to, I would be glad if the W3C had taken my work into account!
In what other way could they NOT waste WHATWG’s hard work?
Well, except by saying: “guys, from now on do whatever Google, Apple, Microsoft and a few other companies from Silicon Valley tell you to do”.
But I do not want to take the W3C’s side: to me, they lost their technical authority with EME (a different group, but the same organisation).
The technical point is that we need stable, well-thought-out standards. What you call living standards are… working drafts?
The political point is that no oligopoly should be in condition to dictate the architecture of the web to the world.
And you know, in a state where strong cryptography is qualified as munitions and is subject to export restrictions.
I’m not speaking on behalf of my function in the w3c working group I’m in, nor for Mozilla. But those positions provided me with the understanding and background information to post this comment.
I have no doubt about your good faith.
But probably your idealism is fooling you.
As you try to see these facts from a wider perspective, you will see the problem I describe.
XHTML was fairly clearly a mistake and unworkable in the real world, as shown by how many nominally XHTML sites weren’t, and didn’t validate as XHTML if you forced them to be treated as such. In an ideal world where everyone used tools that always created 100% correct XHTML, maybe it would have worked out, but in this one it didn’t; there are too many people generating too much content in too many sloppy ways for draconian error handling to work well. The whole situation was not helped by the content-type issue, where if you served your ‘XHTML’ as anything other than application/xhtml+xml it wasn’t interpreted as XHTML by browsers (instead it was HTML tag soup). One result was that you could have non-validating ‘XHTML’ that still displayed in browsers because they weren’t interpreting it as XHTML and thus weren’t using strict error handling.
(This fact is vividly illustrated through syndication feeds and syndication feed handlers. In theory all syndication feed formats are strict and one of them is strongly XML based, so all syndication feeds should validate and you should be able to consume them with a strictly validating parser. In practice plenty of syndication feeds do not validate and anyone who wants to write a widely usable syndication feed parser that people will like cannot insist on strict error handling.)
there are too many people generating too much content in too many sloppy ways for draconian error handling to work well.
I do remember this argument being pretty popular back then, but I have never understood why.
I had no issue generating XHTML Strict pages from user content. This real-world company had a couple hundred customers with pretty varied needs (from e-commerce to online magazines and institutional websites) and thousands of daily visitors.
We used XHTML and CSS to distribute highly accessible content, and we had pretty good results with a prototype based on XSL-FO.
To me, back then, the appeal to real-world issues seemed specious. We literally had no issues. The issues I remember were all from IE.
You are right that much mediocre software was unable to produce proper XHTML. But is this an argument?
Do not fix the software, let’s break the specifications!
It seems a little childish!
XHTML was not perfect, but it was the right direction.
Look at what we have now instead: unparsable content, hundreds of incompatible JavaScript frameworks, subtle bugs, Bootstrap everywhere (aka much less creativity) and so on.
Who gains most from this unstructured complexity?
The same who now propose the final solution lock-in: web assembly.
Seeing linux running inside the browser is not funny anymore.
Going after incompetent developers was not democratization of the web, it was technological populism.
What is possible does not matter; what matters is what actually happens in the real world. With XHTML, the answer is clear. Quite a lot of people spent years pushing XHTML as the way of the future on the web, enough people listened to them to generate a fair amount of ‘XHTML’, and almost none of it was valid and most of it was not being served as XHTML (which conveniently hid this invalidity).
Pragmatically, you can still write XHTML today. What you can’t do is force other people to write XHTML. The collective browser world has decided that one of the ways that people can’t force XHTML is by freezing the development of all other HTML standards, so XHTML is the only way forward and desirable new features appear only in XHTML. The philosophical reason for this decision is pretty clear; browsers ultimately serve users, and in the real world users are clearly not well served by a focus on fully valid XHTML only.
(Users don’t care about validation, they care about seeing web pages, because seeing web pages is their goal. Preventing them from seeing web pages is not serving them well, and draconian XHTML error handling was thus always an unstable situation.)
That the W3C has stopped developing XHTML and related standards is simply acknowledging this reality. There always have been and always will be a great deal of tag soup web pages and far fewer pages that validate, especially reliably (in XHTML or anything else). Handling these tag soup web pages is the reality of the web.
(HTML5 is a step forward for handling tag soup because for the first time it standardizes how to handle errors, so that browsers will theoretically be consistent in the face of them. XHTML could never be this step forward because its entire premise was that invalid web pages wouldn’t exist and if they did exist, browsers would refuse to show them.)
Users don’t care about validation, they care about seeing web pages, because seeing web pages is their goal.
Users do not care about the quality of concrete because having a home is their goal.
There will always be incompetent architects, thus let them work their way so that people get what they want.
Users do not care about car safety because what they want is to move from point A to point B.
There will always be incompetent manufacturers, thus let them work their way so that people get what they want.
That’s not how engineering (should) work.
Was XHTML flawless? No.
Was it properly understood by the average web developers that most companies like to hire? No.
Was it possible to improve it? Yes. Was it better than the current JavaScript-driven mess? Yes!
The collective browser world has decided…
Collective browser world? ROTFL!
There’s a huge number of browser implementors that nobody consulted.
Among others, in 2004, the most widely used browser, IE, did not join WHATWG.
Why did the WHATWG not use the IE design, if the goal was to liberate developers from the burden of well-designed tools?
Why did we face years of incompatibilities between browsers?
WHATWG was turned into one of the weapons in a commercial war for the control of the web.
Microsoft lost such war.
As always, the winners write the history that everybody knows and celebrates.
But those old enough to remember the facts can see the hypocrisy of these manoeuvres pretty well.
There was no technical reason to throw away XHTML. The reasons were political and economical.
How can you sell ads if a tool can easily remove them from the XHTML code? How can you sell API access to data if a program can easily consume the same XHTML that users consume? How can you lock in users if they can consume the web without a browser? Or with a custom one?
The WHATWG did not serve users’ interests, whatever Mozilla’s intentions were in 2004.
They served some businesses at the expense of the users and of all the high-quality web companies that didn’t have many issues with XHTML.
Back then it was possible to disable JavaScript without losing access to the web’s functionality.
Try it now.
Back then people were exploring the concept of the semantic web with the passion people now reserve for the latest JS framework.
I remember experiments with web readers for blind people that could never work with the modern JS-polluted web.
You are right, W3C abandoned its leadership in the engineering of the web back then.
But you can’t sell to a web developer bullshit about HTML5.
Beyond a few new elements and a slightly more structured page (which could have been done in XHTML too), all its exciting innovations were… more JavaScript.
Users did not gain anything good from this, just less control over contents, more ads, and a huge security hole worldwide.
Because, you know, when you run JavaScript in Spain that was served to you from a server in the USA, who is responsible for that JavaScript running on your computer? Under which law?
Do you really think that such legal issues were not taken into account by the browser vendors that fueled this involution of the web?
I cannot believe they were so incompetent.
They knew what they were doing, and did it on purpose.
Not to serve their users. To use those who trusted them.
I think it’s more about all of this sounding like a science fiction plot than just taking a jab at the Trump presidency; just a few years ago nobody would have predicted that would have happened. So, no, not pure trolling.
Fair enough. I’m sorry for the accusation.
Since the author is critical of Apple/Google/Mozilla here, I took it as a sort of guilt by association attack on them (I don’t mind jabs at Trump), but I see that it probably wasn’t that.
No problem.
I didn’t see that possible interpretation, or I wouldn’t have written that line. Sorry.
After 20 years of Berlusconi, and with our current impasse with the Government, no Italian could ever troll an American about his current President.
It was not my intention in any way.
As @olivier said, I was pointing to this surreal situation from an international perspective.
The USA controls most of the internet: most root DNS servers, the most powerful web companies, the standards of the web and so on.
Whatever effect Cambridge Analytica had on the election of Trump, it has shown the world that the internet is a common infrastructure that we have to control and protect together. Just like we should control the production of oxygen and global warming.
If Cambridge Analytica was able to manipulate the USA’s elections (by manipulating Americans), what could Facebook itself do in Italy? Or in Germany?
Or what could Google do in France?
The Internet was a DARPA project. We can see it is a military success beyond any expectation.
I tried to summarize the dispute between the W3C and WHATWG with a bit of irony because, in itself, it shows a pretty scary aspect of this infrastructure.
The fact that a group of companies dares to challenge the W3C (which, at least in theory, is an international organisation) is evidence that they do not feel the need to pretend they are working for everybody.
They have too much power to care.
The last point is the crux of the issue: are technologists willing to do the leg work of decentralizing power?
Because regular people won’t do this. They don’t care. Thus, they should have less say in the issue, though still some, as they are deeply affected by it too.
No. Most won’t.
Technologists are a wide category that etymologically includes everyone who feels entitled to speak about how to do things.
So we have technologists that mislead people to invest in the “blockchain revolution”, technologists that mislead politicians to allow barely tested AI to kill people on the roads, technologists teaching in the Universities that neural networks computations cannot be explained and thus must be trusted as superhuman oracles… and technologists that classify as troll any criticism of mainstream wisdom.
My hope is in hackers: all over the world they have a better understanding of their political role.
If anyone wonders about Berlusconi, Cracked has a great article on him that had me calling Trump a pale imitation of Berlusconi and his exploits. Well, until Trump won the US Presidency, which is a bigger achievement than Berlusconi’s. He did that somewhat by accident, though, and can’t last 20 years either. I still think Berlusconi has him beat as the biggest scumbag of that type.
Yeah, the article is funny, but Berlusconi was not. Not for Italians.
His problems with women did not impress us much, until it became clear most of them were underage.
But the damage he did to our laws and (worse) to our public ethics will last for decades.
He didn’t just change the law to help himself: he destroyed most of the legal tools for fighting organized crime, bribery and corruption.
Worse, he emboldened a whole generation of younger people like him to take pride in their cleverness at working around the law.
I pray for the US and the whole world that Trump is not like him.
This blogpost is a good example of fragmented, hobbyist security maximalism (sprinkled with some personal grudges based on the tone).
Expecting Signal to protect anyone specifically targeted by a nation-state is a huge misunderstanding of the threat models involved.
Talking about threat models, it’s important to start from them and that explains most of the misconceptions in the post.
Were tradeoffs made? Yes. Have they been carefully considered? Yes. Signal isn’t perfect, but it’s usable, high-level security for a lot of people. I don’t say I fully trust Signal, but I trust everything else less. Turns out things are complicated when it’s about real systems and not fantasy escapism and wishes.
In this article, resistance to governments constantly comes up as a theme of his work. He also pushed for his tech to be used to help resist police states like with the Arab Spring example. Although he mainly increased the baseline, the tool has been pushed for resisting governments and articles like that could increase perception that it was secure against governments.
This nation-state angle didn’t come out of thin air from paranoid security people: it’s the kind of thing Moxie talks about. In one talk, he even started with a picture of two activist friends jailed in Iran, in part to show the evils that motivate him. Stuff like that only made the things Drew complains about (centralization, control, and dependence on cooperating with a surveillance organization) stand out even more, due to the inconsistency. I’d have thought he’d make signed packages for things like F-Droid sooner if he’s so worried about that stuff.
A problem with the “nation-state” rhetoric that might be useful to dispel is the idea that it is somehow a God-tier where suddenly all other rules become defunct. The Five Eyes are indeed “nation state” and have capabilities that are profound; like the DJB talk speculating about how many RSA-1024 keys they’d likely be able to factor in a year given such-and-such developments, and what you can do with that capability. That’s scary stuff. On the other hand, this is not the “nation state” that is Iceland or Syria. Just looking at the leaks from the “Hacking Team” affair, there are a lot of “nation states” forced to rely on some really low-quality stuff.
I think Greg Conti in his “On Cyber” setup depicts it rather well (sorry, don’t have a copy of the section in question) and that a more reasonable threat model of capable actors you do need to care about is that of Organized Crime Syndicates - which seems more approachable. Nation State is something you are afraid of if you are political actor or in conflict with your government, where the “we can also waterboard you to compliance” factors into your threat model, Organized Crime hits much more broadly. That’s Ivan with his botnet from internet facing XBMC^H Kodi installations.
I’d say the “Hobbyist, Fragmented Maximalist” line is pretty spot on, with a dash of “Confused”. Consider the ‘threats’ of the Google Play Store (test it: write some malware and see how long it survives; they are doing things there…); the odds of any other app store (F-Droid, or the ones from Samsung, HTC, Sony et al.) being completely owned by much less capable actors are way, way higher. Signal does a good enough job (perhaps a signal-to-threat ratio?) of making reasonable threat actors much less potent. Perhaps not worthy of “trust”, but worthy of day-to-day business.
And yet, Signal is advertising with the face of Snowden and Laura Poitras, and quotes from them recommending it.
What kind of impression of the threat models involved do you think does this create?
Who should be the faces recommending signal that people will recognize and listen to?
Whichever ones are normally in the media for information security saying the least amount of bullshit. We can start with Schneier, given he already does a lot of interviews and writes books laypeople buy.
What does Schneier say about signal?
He encourages use of stuff like that to raise the baseline, but not for stopping nation states. He also constantly blogged about the attacks and legal methods they used to bypass technical measures. So, his reporting was mostly accurate.
We counterpoint him here or there, but his incentives and reputation are tied to delivering accurate info. Moxie’s incentives would, if he’s selfish, lead to lock-in to questionable platforms.
I’m sorry, but this is plain incorrect. There have been many extensions to IRC, including the most recent effort, IRCv3: a collection of extensions to IRC to add notifications, etc. Not to mention the killer point: “All of the IRCv3 extensions are backwards-compatible with older IRC clients, and older IRC servers.”
If you actually look at the protocols, Slack is a clear case of Not Invented Here syndrome. Slack’s interface is not only slower, but does some downright crazy things (such as transliterating a subset of emojis to plain text, which results in batshit crazy edge cases).
If you have a free month, try writing a slack client. Enlightenment will follow :P
Per IRCv3 people I’ve talked to, IRCv3 blew up massively on the runway, and will never take off due to infighting.
And yet everyone is using Slack.
There are swathes of people still using Windows XP.
The primary complaint of people who use Electron-based programs is that they take up half a gigabyte of RAM to idle, and yet they are in common usage.
The fact that people are using something tells you nothing about how Good that thing is.
At the end of the day, if you slap a pretty interface on something, of course it’s going to sell. Then you add in that sweet, sweet Enterprise Support, and the Hip and Cool factors of using Something New, and most people will be fooled into using it.
At the end of the day, Slack works just well enough Not To Suck, is Hip and Cool, and has persistent history (Something that the IRCv3 group are working on: https://ircv3.net/specs/extensions/batch/chathistory-3.3.html)
The time for the IRC group to be working on a solution to persistent history was a decade ago. It strikes me as willful ignorance to disregard the success of Slack et al over open alternatives as mere fashion in the face of many meaningful functionality differences. For business use-cases, Slack is a better product than IRC full-stop. That’s not to say it’s perfect or that I think it’s better than IRC on all axes.
To the extent that Slack did succeed because it was hip and cool, why is that a negative? Why can’t IRC be hip and cool? But imagine being a UX designer and wanting to help make some native open-source IRC client fun and easy to use for a novice. “Sisyphean” is the word that comes to mind.
If we want open solutions to succeed we have to start thinking of them as products for non-savvy end users and start being honest about the cases where closed products have superior usability.
IRC isn’t hip and cool because people can’t make money off of it. Technologies don’t get investment because they are good, they get good because of investment. The reason that Slack is hip/cool and popular and not IRC is because the investment class decided that.
It also shows that our industry is just a pop culture and doesn’t give a shit about good tech.
There were companies making money off chat and IRC; they just didn’t create something like Slack. We can’t just blame the investors when they were backing chat companies whose management stuck with approaches that didn’t work long-term or for a huge audience.
IRC happened before the privatization of the internet, so the standard didn’t lend itself well to companies making good money off of it. Things like Slack are designed for investor optimization, whereas things like IRC were designed for use and openness.
My point was there were companies selling chat software, including IRC clients. None pulled off what Slack did. Even those doing IRC with funding, or making money off it, didn’t accomplish what Slack did, for some reason. It would help to understand why that happened. Then the IRC-based alternative can try to address it, from features to business model. I don’t see anything like that when most people who like FOSS talk Slack alternatives. They’re not Slack alternatives if they lack what Slack customers demand.
Thanks for clarifying. My point can be restated as: there is no business model for federated and decentralized software (until recently; see cryptocurrencies). Note that most open and decentralized tech of the past was government funded and therefore didn’t face business pressures. This freed designers to optimise for other concerns instead of business ones the way Slack does.
The argument being made is that the vast majority of Slack’s appeal is the “hip-and-cool” factor, not any meaningful additions to functionality.
Right, as I said I think it’s important for proponents of open tech to look at successful products like Slack and try to understand why they succeeded. If you really think there is no meaningful difference then I think you’re totally disconnected from the needs/context of the average organization or computer user.
That’s all well and good, I just don’t see why we can’t build those systems on top of existing open protocols like IRC. I mean: of course I understand, it’s about the money. My opinion is that it doesn’t make much sense to insist that opaque, closed ecosystems are the way to go. We can have the “hip-and-cool” factor, and all the amenities provided by services like Slack, without abandoning the important precedent we’ve set for ourselves with protocols like IRC and XMPP. I’m just disappointed that everyone’s seeing this as an “either-or” situation.
I definitely don’t see it as an either-or situation, I just think that the open source community typically has the wrong mindset for competing with closed products and that most projects are unapproachable by UX or design-minded people.
Open, standard chat tech has had persistent history and much more for decades in the form of XMPP. Comparing to the older IRC on features isn’t really fair.
I have to disagree here. It shows that it is good enough to solve a problem for them.
I don’t see how Good and “good enough to solve a problem” are related here. The first is a metric of quality, the second is the literal bare minimum of that metric.
I’d dispute that. People who become interested in Signal seem much more likely to be using F-Droid than, say, WhatsApp users. Signal tries to be an app accessible to the common person, but few people really use it or see the need… and often those who do are free software enthusiasts or people who are fed up with Google and surveillance.
More likely, sure, but that doesn’t mean many of them actually reach that threshold of effort.
IRC isn’t decentralised… it’s not even federated
Sure it is, it’s just that there are multiple federations.