Wait, what?! Where is uBlock Origin?
Nice to see them promote privacy addons. At least it puts some emphasis on the topic.
The problem is also noticing the attack. It is pretty easy to notice an attack on a plane or elevator, not so much when “counting” votes.
This is really interesting to get an idea of how people are taking advantage of BSD! I now have a much nicer idea of why people are going to it (and am a bit tempted myself). That feeling of having to go through ports and simply not having 1st-class support for some software seems… rough for desktop usage though
Define “1st class support”.
https://people.canonical.com/~ubuntu-security/cve/universe.html
I mean “someone talks to me about an application and I’m interested in trying it out on my system”?
I feel like the link to the CVE database is a bit of an unwarranted snipe here. I’m not talking too much about security updates, just “someone released some software and didn’t bother to confirm BSD support so now I’m going to need to figure out which ways this software will not work”.
To be honest I don’t really think that having all userland software come in via OS-maintained package managers is a great idea in the first place (do I really need OS maintainers looking after anki?). I’m fine downloading binaries off the net. Just nicer if they have out of the box support for stuff. I’m not blaming the BSDs for this (it’s more the software writer’s fault), just that it’s my impression that this becomes a bit of an issue if you try out a lot of less used software.
As an engineer that uses and works on a minority share operating system, I don’t really think it’s reasonable to expect chiefly volunteer projects to ship binaries for my platform in a way that fits well with the OS itself. It would be great if they were willing to test on our platform, even just occasionally, but I understand why they don’t.
Given this, it seems more likely to expect a good experience from binaries provided by somebody with a vested interest in quality on the OS in question – which is why we end up with a distribution model.
Yep, this makes a lot of sense.
I’m getting more and more partial to software relying on its host language’s package manager recently. It’s pretty nice for a Python program to basically always work so long as you’ve got pip running properly on your system, plus you get all the nice advantages of virtual environments and the like, letting you set things up more easily. The biggest drawback is the trust problems in those ecosystems.
Considering a lot of communities (not just OSes) are getting more and more involved in distribution questions, we might be getting closer to getting things to work out of the box for non-tricky cases.
software relying on their host language’s package manager
In general I’m not a fan. They all have problems. Many (most?) of them lack a notion of disconnected operation when they cannot reach their central Internet-connected registry. There is often no complete tracking of all files installed, which makes it difficult to completely remove a package later. Some of the language runtimes make it difficult to use packages installed in non-default directory trees, which is one way you might have hoped to work around the difficulty of subsequent removal. These systems also generally conflate the build machine with the target machine (i.e., the host on which the software will run) which tends to mean you’re not just installing a binary package but needing to build the software in-situ every time you install it.
In practice, I do end up using these tools because there is often no alternative – but they do not bring me joy.
Operating system package managers (dpkg/apt, rpm/yum, pkg_add/pkgin, IPS, etc) also have their problems. In contrast, though, these package managers tend to at least have some tools to manage the set of files that were installed for a particular package and to remove (or even just verify) them later. They also generally offer some first class way to install a set of a packages from archive files obtained via means other than direct access to a central repository.
For development I use the “central Internet-connected registry”; for production I use DEB/RPM packages in a repository.
There are probably more benefits that escape me at the moment :)
That feeling of having to go through ports and simply not having 1st-class support for some software seems… rough for desktop usage though
What kind of desktop software do you install from these non-OS sources?
I remember screwing around with Flathub on the command line in Fedora 27, but right now on Fedora 28, if you enable Flatpak in the Gnome Software Center thingy, it’s actually pretty seamless - type “Signal” in the application browser, and a Flatpak install link shows up.
With this sort of UX improvement, I’m optimistic. I feel like Fedora is just going to get easier and easier to use.
That is a very reductionist view of what people use the web for. And I am saying this as someone whose personal site pretty much matches everything prescribed except comments (which I still have).
Btw, Medium, given as a positive example, is not in any way minimal and certainly not by metrics given in this article.
Btw, Medium, given as a positive example, is not in any way minimal and certainly not by metrics given in this article.
Chickenshit minimalism: https://medium.com/@mceglowski/chickenshit-minimalism-846fc1412524
I wouldn’t say Medium even gives the illusion of simplicity (for example, on the page you linked, try counting the visual elements that aren’t the blog post). Medium seems to take a rather contrary approach to blogs, including all the random cruft you never even imagined existed, while leaving out simple essentials like RSS feeds. I honestly have no idea how the author of the article came to suggest Medium as an example of minimalism.
I agree with your overall point, but Medium does provide RSS feeds. They are linked in the <head> and always have the same URL structure. Any medium.com/@user has an RSS feed at medium.com/feed/@user. For Medium blogs hosted at custom URLs, the feed is available at /feed.
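The URL rule described above is mechanical enough to sketch in a few lines. This is just an illustration of the rule as stated (the function name is made up, and the custom-domain case assumes the feed always lives at `/feed`):

```javascript
// Sketch of the Medium feed-URL rule:
//   medium.com/@user        → medium.com/feed/@user
//   custom-domain blog      → <domain>/feed
function mediumFeedUrl(blogUrl) {
  const u = new URL(blogUrl);
  if (u.hostname === "medium.com" && u.pathname.startsWith("/@")) {
    return `https://medium.com/feed${u.pathname}`;
  }
  return `${u.origin}/feed`;
}

console.log(mediumFeedUrl("https://medium.com/@user"));
// → https://medium.com/feed/@user
console.log(mediumFeedUrl("https://example.com/some-post"));
// → https://example.com/feed
```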
I’m not affiliated with Medium. I have a lot of experience bugging webmasters of minimal websites to add feeds: https://github.com/issues?q=is:issue+author:tfausak+feed.
That is a very reductionist view of what people use the web for.
I wonder what Youtube, Google docs, Slack, and stuff would be in a minimal web.
YouTube, while not as good as it could be, is pretty minimalist if you disable all the advertising.
I find google apps to be amazingly minimal, especially compared to Microsoft Office and LibreOffice.
Minimalist Slack has been around for decades, it’s called IRC.
It is still super slow then! At some point I was able to disable JS, install the Firefox “html5-video-everywhere” extension, and watch videos that way. That was awesomely fast and minimal. I tried it again a few days ago, but it didn’t seem to work anymore.
Edit: now I just “youtube-dl -f43 ” directly without going to YouTube and start watching immediately with VLC.
The youtube interface might look minimalist, but under the hood, it is everything but. Besides, I shouldn’t have to go to great lengths to disable all the useless stuff on it. It shouldn’t be the consumer’s job to strip away all the crap.
In a minimal web, locally-running applications in browser sandboxes would be locally-running applications in non-browser sandboxes. There’s no particular reason any of these applications is in a browser at all, other than myopia.
Distribution is dead easy for websites. In theory, you could have non-browser-sandboxed apps with equally easy distribution, but then what’s the point?
Non-web-based locally-running client applications are also usually made downloadable via HTTP these days.
The point is that when an application is made with the appropriate tools for the job it’s doing, there’s less of a cognitive load on developers and less of a resource load on users. When you use a UI toolkit instead of creating a self-modifying rich text document, you have a lighter-weight, more reliable, more maintainable application.
The power of “here’s a URL, you now have an app running without going through installation or whatnot” cannot be overstated. I can give someone a copy of pseudo-Excel to edit a document we’re working on together, all through the magic of Google Sheets’ share links. Instantly.
Granted, this is less of an advantage if you’re using something all the time, but without the web it would be harder to allow for multiple tools to co-exist in the same space. And am I supposed to have people download the Doodle application just to figure out when our group of 15 can go bowling?
They are, in fact, downloading an application and running it locally.
That application can still be javascript; I just don’t see the point in making it perform DOM manipulation.
As one who knows JavaScript pretty well, I don’t see the point of writing it in JavaScript, however.
A lot of newer devs have a (probably unfounded) fear of picking up a new language, and a lot of those devs have only been trained in a handful (including JS). But even setting aside that moving away from JS isn’t actually a big deal, JS itself (as distinct from the browser ecosystem, to which it isn’t really tied) is not fundamentally that much worse than any other scripting language: you can do whatever you do in JS in Python or Lua or Perl or Ruby and it’ll come out looking almost the same, unless you go out of your way to use particular facilities.
The thing that makes JS code look weird is all the markup manipulation, which looks strange in any language.
JS (as distinct from the browser ecosystem, to which it isn’t really totally tied) is not fundamentally that much worse than any other scripting language
(a == b) !== (a === b)
but only some times…
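Concretely, here are a few standard cases where the two operators disagree, plus one showing that loose equality isn’t even transitive:

```javascript
// Pairs where loose (==) and strict (===) equality give different answers.
console.log("0" == 0, "0" === 0);                   // true false (string coerced to number)
console.log(null == undefined, null === undefined); // true false (special-cased in ==)
console.log(false == "", false === "");             // true false (both coerce to 0)

// And loose equality isn't transitive:
console.log("" == 0);   // true
console.log(0 == "0");  // true
console.log("" == "0"); // false (two strings, compared as strings)
```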
Javascript has gotchas, just like any other organically grown scripting language. It’s less consistent than Python and Lua, but probably has fewer of these than Perl or PHP.
(And, just take a look at c++ if you want a faceful of gotchas & inconsistencies!)
Not to say that, from a language design perspective, we shouldn’t prize consistency. Just to say that javascript is well within the normal range of goofiness for popular languages, and probably above average if you weigh by popularity and include C, C++, FORTRAN, and COBOL (all of which see a lot of underreported development).
Web applications are expected to load progressively. And, because they are sandboxed, they are allowed to start instantly without asking you for permissions.
The same could be true of sandboxed desktop applications that you could stream from a website straight into some sort of sandboxed local VM that isn’t the web. Click a link, and the application immediately starts running on your desktop.
I can’t argue with using the right tool for the job. People use Electron because there isn’t a flexible, good-looking, easy-to-use cross-platform UI kit. Damn the 500 MB of RAM usage for a chat app.
There are several good-looking, flexible, easy-to-use cross-platform UI kits. GTK, wxWidgets, and Qt come to mind.
If you remove the “good-looking” constraint, then you also get Tk, which is substantially easier to use for certain problem sets, substantially smaller, and substantially more cross-platform (in that it will run on fringe or legacy platforms that are no longer, or were never, supported by GTK or Qt).
All of these have well-maintained bindings to all popular scripting languages.
Qt apps can look reasonably good. I think webapps can look better, but I haven’t done extensive Qt customization.
The bigger issue is 1) hiring - easier to get JS devs than QT devs 2) there’s little financial incentive to reduce memory usage. Using other people’s RAM is “free” for a company, so they do it. If their customers are in US/EU/Japan, they can expect reasonably new machines so they don’t see it as an issue. They aren’t chasing the market in Nigeria, however large in population.
Webapps are sort of the equivalent of doing something in Qt using nothing but the canvas widget (except a little more awkward, because you also don’t have pixel positioning). Whatever can be done in a webapp can be done in a UI toolkit, but the most extreme experimental stuff involves not using actual widgets (just like doing it as a webapp would).
Using Qt doesn’t prevent you from writing in javascript: just use npm Qt bindings. It means not using the DOM, but that’s a net win: it is faster to learn how to do something with a UI toolkit than to figure out how to do it through DOM manipulation, unless the thing you’re doing is (at a fundamental level) literally displaying HTML.
I don’t think memory use is really going to be the main factor in convincing corporations to leave Electron. It’s not something that’s limited to the third world: most people in the first world (even folks who are in the top half of income) don’t have computers that can run Electron apps very well – but for a lot of folks, there’s the sense that computers just run slow & there’s nothing that can be done about it.
Instead, I think the main thing that’ll drive corporations toward more sustainable solutions is maintenance costs. It’s one thing to hire cheap web developers & have them build something, but over time keeping a hairball running is simply more difficult than keeping something that’s more modular running – particularly as the behavior of browsers with respect to the corner cases that web apps depend upon to continue acting like apps is prone to sudden (and difficult to model) change. Building on the back of HTML rendering means a red queen’s race against 3 major browsers, all of whom are changing their behaviors ahead of standards bodies; on the other hand, building on a UI library means you can specify a particular version as a dependency & also expect reasonable backwards-compatibility and gradual deprecation.
(But, I don’t actually have a lot of confidence that corporations will be convinced to do the thing that, in the long run, will save them money. They need to be seen to have saved money in the much shorter term, & saying that you need to rearchitect something so that it costs less in maintenance over the course of the next six years isn’t very convincing to non-technical folks – or to technical folks who haven’t had the experience of trying to change the behavior of a hairball written and designed by somebody who left the company years ago.)
I understand that these tools are maintained in a certain sense. But from an outsider’s perspective, they are absolutely not appealing compared to what you see in their competitors.
I want to be extremely nice, because I think the work done by these teams and projects is very laudable. But compare the wxPython docs with the Bootstrap documentation. I also spent a lot of time trying to figure out how to use Tk, and almost all the resources felt outdated and incompatible with whatever toolset I had available.
I think Qt is really good at this stuff, though you do have to marry its toolset for a lot of it (perhaps this has gotten better).
The elephant in the room is that no native UI toolset (save maybe Apple’s stack?) is anywhere near as good as the diversity of options and breadth of tooling available in DOM-based solutions. Chrome dev tools are amazing, and even simple stuff like CSS animations gives you options that would be a pain in most UI toolkits. Out of the box it has so much functionality, even if you’re working purely vanilla/“no library”. Though things might have changed on this point, jQuery is basically the optimal low-level UI library, and I haven’t encountered native stuff that gives me the same sort of productivity.
I dunno. How much of that is just familiarity? I find the bootstrap documentation so incomprehensible that I roll my own DOM manipulations rather than using it.
Tk is easy to use, but the documentation is Tcl-centric and pretty unclear. Qt is a bad example because it’s quite heavyweight and slow (and you generally have to use Qt’s versions of built-in types and do all sorts of similar stuff). I’m not trying to claim that existing cross-platform UI toolkits are great: I actually have a lot of complaints about all of them; it’s just that, in terms of ease of use, performance, and consistency of behavior, they’re all far ahead of web tech.
When it comes down to it, web tech means simulating a UI toolkit inside a complicated document rendering system inside a UI toolkit, with no pass-throughs, and even web tech toolkits intended for making UIs are really about manipulating markup and not actually oriented around placing widgets or orienting shapes in 2d space. Because determining how a piece of markup will look when rendered is complex and subject to a lot of variables not under the programmer’s control, any markup-manipulation-oriented system will make creating UIs intractably awkward and fragile – and while Google & others have thrown a great deal of code and effort at this problem (by exhaustively checking for corner cases, performing polyfills, and so on) and hidden most of that code from developers (who would have had to do all of that themselves ten years ago), it’s a battle that can’t be won.
It annoys me greatly, because it feels like nobody really cares about the conceptual damage incurred by simulating a UI toolkit inside a document renderer inside a UI toolkit, instead preferring to chant “open web!” And then this broken conceptual basis propagates to other media (VR) simply because it’s familiar. I’d also argue the web as a medium is primarily intended for commerce and consumption, rather than creation.
It feels like people care less about the intrinsic quality of what they’re doing and more about following whatever fad is around, especially if it involves tools pushed by megacorporations.
Everything (down to the transistor level) is layers of crap hiding other layers of different crap, but web tech is up there with autotools in terms of having abstraction layers that are full of important holes that developers must be mindful of – to the point that, in my mind, rolling your own thing is almost always less work than learning and using the ‘correct’ tool.
If consumer-grade CPUs were still doubling their clock speeds and cache sizes every 18 months at a stable price point and these toolkits properly hid the markup then it’d be a matter of whether or not you consider waste to be wrong on principle or if you’re balancing it with other domains, but neither of those things are true & so choosing web tech means you lose across the board in the short term and lose big across the board in the long term.
Youtube would be a website where you click on a video and it plays. But it wouldn’t have ads and comments and thumbs up and share buttons and view counts and subscription buttons and notification buttons and autoplay and add-to-playlist.
Google docs would be a desktop program.
Slack would be IRC.
What you’re describing is the HTML5 video tag, not a video sharing platform. Minimalism is good, I do agree, but don’t confuse it with having no features at all.
Google docs would be a desktop program.
This is a different debate, about whether we should be using the web for these kinds of tasks at all, not about whether the result is minimalist.
This was from 2012. Arguably, we’re already there. Tons of popular computers run signed bootloaders and won’t run arbitrary code. Popular OS vendors already pluck apps from their walled garden on the whims of freedom-optional sovereignties.
The civil war came and went and barely anyone took up arms. :(
It’s not like there won’t always be some subset of developer- and hacker-friendly computers available to us. Sure, iPhones are locked down but there are plenty of cheap Android phones which can be rooted, flashed with new firmware, etc. Same for laptops, there are still plenty to choose from where the TPM can be disabled or controlled.
Further, open ARM dev boards are getting both very powerful and very cheap. Ironically, it might even be appropriate to thank China and its dirt-cheap manufacturing industry for this freedom since without it, relatively small runs of these tiny complicated computers wouldn’t even be possible.
This is actually the danger. There will always be a need for machines for developers to use, but the risk is that these machines and the machines for everyone else (who the market seems to think don’t “need” actual control over their computers) will diverge increasingly. “Developer” machines will become more expensive, rarer, harder to find, and not something people who aren’t professional developers (e.g. kids) own.
We’re already seeing this happen to some extent. There are a large number of people who previously owned PCs but who now own only locked down smartphones and tablets (moreover, even if these devices aren’t locked down, they’re fundamentally oriented towards consumption, as I touched on here).
Losing the GPC war doesn’t mean non-locked-down machines disappearing; it simply means the percentage of people owning them will decline to a tiny percentage, and thus social irrelevance. The challenge is winning the GPC war for the general public, not just for developers. Apathy makes it feel like we’ve already lost.
Arguably iPhones are dev-friendly in a limited way. If you’re willing to use Xcode, you can develop for your iPhone all you want at no charge.
Develop for, yes, within the bounds of what Apple deems permissible. But you can’t replace iOS and port Linux or Android to it because the hardware is very locked down. (Yes, you might be able to jailbreak the phone through some bug, until Apple patches it, anyway.)
Mind you, I’m not bemoaning the fact or chastising Apple or anything. They can do what they want. My original point was just that for every locked-down device that’s really a general-purpose computer inside, there are open alternatives and likely will be as long as there is a market for them and a way to cheaply manufacture them.
Absolutely! Even more impressive is that with Android, Google has made such a (mostly) open architecture into a mass market success.
However it’s interesting to note that on that very architecture, if you buy an average Android phone, it’s locked down with vendorware such that in order to install what you want you’ll likely have to wipe the entire ecosystem off the phone and substitute an OSS distribution.
I get that the point here is that you CAN, but again, most users don’t want the wild wild west. Because, fundamentally, they don’t care. They want devices (and computers) that work.
Google has made such a (mostly) open architecture into a mass market success.
Uh, I used to say that until I looked at the history and the present. I think it’s more accurate that they made a proprietary platform on an open core a huge success by tying it into their existing, huge market. They’ve been making it more proprietary over time, too. So, maybe that’s giving them too much credit. I’ll still credit them with their strategy doing more good for open-source or user-controlled phones than their major competitors. I think it’s just a side effect of GPL and them being too cheap to rewrite core at this point, though.
I like to think that companies providing OSes are a bit like states. They have to find a boundary over how much liberty over safety they should set, and that’s not an easy task.
This is not completely true. There are some features you can’t use without an Apple developer account which costs $100/yr. One of those features is NetworkExtension.
friendly in a limited way.
OK, so you can take issue with “all you want” but I clearly state at the outset that free development options are limited.
Over half a million people, 2 out of every 100 Americans, died in the Civil War. There was little the innocent folks in the general public could do to prevent it or minimize the losses. Personally, I find this “civil war” to be less scary. The public can stamp these problems out if they merely care.
That they consistently are apathetic is what scares me.
Agreed 100%.
I have no idea what to do. The best solution I think is education. I’m a software engineer. Not the best one ever, but I try my best. I try to be a good computing citizen, using free software whenever possible. Only once did I meet a coworker who shared my values about free software and not putting so much trust in our computing devices - the other 99% of the time, my fellow devs think I’m crazy for giving a damn.
Never mind whether people without technical backgrounds give a damn about this stuff. If citizens cared about and demanded freedom in their software, society would be positioned much better to handle “software eating the world”.
The freedoms guaranteed by free software were always deeply abstruse and inaccessible for laypeople.
Your GNOME desktop can be 100% GPL and it will still be nearly impossible for you to even try to change anything about it; even locating the source code for any given feature is hard.
That’s not to say free software isn’t important or beneficial—it’s a crucial and historical movement. But it’s sad that it takes so much expertise to alter and recompile a typical program.
GNU started with an ambition to have a user desktop system that’s extensible and hackable via Lisp or Scheme. That didn’t really happen, outside of Emacs.
Your GNOME desktop can be 100% GPL and it will still be nearly impossible for you to even try to change anything about it; even locating the source code for any given feature is hard.
I tried to see how true that is with a random feature. I picked brightness setting in the system status area. Finding the source for this was not so hard, it took me a few minutes (turns out it is JavaScript). Of course it would have been better if there was something similar to browser developer tools somewhere.
Modifying it would probably be harder, since I can’t find a file called brightness.js on my machine. I suppose they pack the JavaScript code somehow (if I remember right, GNOME Shell bundles its JavaScript into a GResource archive, which would explain why the individual .js files don’t appear on disk).
About 10 years ago (before it switched to ELF) I used Minix3 as my main OS for about a year. It was very hackable. We did something called “tracking current” (which apparently is still possible): the source code for the whole OS was on the disk and it was easy to modify and recompile everything. I wish more systems worked like this.
Remember when the One Laptop Per Child device was going to have a “view source” button on every activity?
Part of that means that there’s absolutely NOTHING on your computer that isn’t planned.
2018: Install security patches, also get Candy Crush
I was going to write the same :) I’m pretty sure it is still true for the MS engineers, as they most likely have a version of Windows (Enterprise?) that has none of that crap, so they never see it and it doesn’t affect them.
It affects me too, but these decisions are all made at the management level. I’ve just formed a habit of uninstalling/disabling misfeatures as they appear.
The biggest benefit of the Enterprise edition is that you’re allowed to disable things. But they’re usually enabled by default regardless.
I still don’t get why HTTPS and not just TLS. Because of the server coalescing? I don’t like the sound of that much; in practice maybe lots of sites do get served from a few CDNs, but is that the centralising, monopoly-operation-normalising kind of thing we want to be enshrining in open source browsers? Oh, Cloudflare are helping to push it? Hmmmm
DNS over TLS is also a thing that’s been spec’d. The problem is that so many pieces of networking hardware have ossified over the years that there are real challenges to introducing new protocols on the internet. Using an existing protocol is a solution to that.
Ah right, that does make some sense. Even though the server coalescing etc. is HTTP/2, which ossified hardware is hardly going to support anyway. But even still, HTTPS seems like a complex and possibly heavyweight protocol to use as a carrier for comparatively simple payloads, no?
Port 853 (DNS over TLS) is easy to block (in collateral freedom sense). Port 443 (HTTPS) can’t be blocked.
If “block any and all DNS” is a viable approach for censorship, it’s pretty easy to change the port. There’s no reason to use a nearly unimplementably complex protocol stack to serve DNS.
That’s the best argument I’ve heard for it, by far. I wonder if there’d be some way to smart-multiplex protocols over 443 though. Mongrel2 used to do it I seem to recall.
Years ago I used a reverse proxy to do exactly that. Unfortunately I cannot remember the tools I used.
Probably stunnel and iptables were used on the server, but I cannot really remember the tricks. I probably also had to do some tricks on the client.
But it’s not a new protocol – it’s TLS. If a middlebox can tell what is going over TLS in order to treat it differently, we refer to the situation as an “attack”.
There are plenty of situations in which TLS interception is consented to – corporate MITM boxes are the popular example – and they absolutely cause problems with deployment of new protocols (TLS 1.3 is filled with examples).
(I should note that TLS MITM boxes in my experience are all hot garbage and people shouldn’t use them, but there’s nothing wrong with them from a TLS threat modeling perspective.)
There are plenty of situations in which TLS interception is consented to – corporate MITM boxes are the popular example
Yes, but at that point, changes to DNS don’t help – you have a social problem, not a technical one. The group that is putting in the MITM boxes has the ability to force you to reveal your traffic regardless of what technology you put in. You’ve lost by default.
You don’t have to be trying to defeat that person; the goal would simply be to make sure it doesn’t break when deployed.
You might be onto something, scarily enough. They are acting like Cloudflare is a reputable middleman.
You mean, like 1.1.1.1, which is used to serve DNS over HTTPS?
This isn’t a problem that throwing HTTP into the mix solves.
This isn’t a problem that throwing HTTP into the mix solves.
You really don’t need to convince me.
My first thought when I read about this was: where is the hypertext? I can’t imagine explaining to my grandchildren 20 years from now why we decided to use something designed to distribute HTML for DNS responses.
The problem is that so many pieces of networking hardware have ossified over the years that there are real challenges to introducing new protocols on the internet.
While I understand your argument, I always think of what ancient Egyptians would think of our “real challenges”.
Compared to people from 5000 years ago, we are all sissies.
The Egyptians never tried to coordinate hundreds of vendors, tens of thousands of deployments, and a billion users to update their network protocols.
I’m sure we could do better, but there are legitimate challenging technical problems, combined with messy incentive problems (no individual browser vendor wants to cause a perceived breakage, since the browser is generally blamed, and that would give an advantage to their competitors, or cause people to not upgrade, which for a modern browser would be catastrophic to security).
The Egyptians never tried to coordinate hundreds of vendors, tens of thousands of deployments, and a billion users to update their network protocols.
You should really visit Giza.
None of your arguments is false. But they are peanuts compared to building a Pyramid with the tools available 5000 years ago.
We should really compare ourselves to such human endeavours before celebrating our technical successes and before calling an issue a “real challenge”.
Additional details on how to set this up: https://powerdns.org/tproxydoc/tproxy.md.html
For whatever it’s worth, I installed Fedora today and can’t replicate the problem. So it doesn’t seem to be a universal problem.
Fedora 27 here with all updates installed on Lenovo X230 (Intel stuff), seeing the issue of increased memory usage here…
I was surprised to hear they were advertising there in the first place…
Would be nice if they would take it one step further and block all things FB by default unless you explicitly browse to facebook.com yourself.
Yeah, all of that was true when I was using Slackware and Gentoo. The good old days… All my problems were solved when I switched to Red Hat 7.1 (released in 2001) and installed it as the only OS on my laptop. Even WiFi worked out of the box. I never looked back.
Is anyone still adopting CMake since Meson exists?
CMake is still in wide use in the field and I can personally say I’ve seen it adopted on a bunch of new projects.
I had never heard of Meson, but in checking it out, it seems to have existed since 2011. Might I suggest a marketing campaign? What advantages does Meson have over CMake? Why are there two commands, meson and ninja? Is meson opinionated (it seems to be) or flexible?
This is a mess.
CSS grid achieves this without corrupting the semantic quality of the document.
When was the last time you saw a page that follows semantic guidelines? Pages are so full of crap and dynamically generated tags that hope was lost a long time ago. It’s crazy: developers heard “don’t use tables” and now they put tabular data in floating divs. Are you kidding me?! Don’t even get me started about SPAs.
The fetishization of unminified code distribution is really bizarre.
The point, I think, is that the code should not require minifying in the first place and should contain only the bare minimum needed for the required functionality: 1 kB of unminified JS instead of 800 kB of minified crap.
New information always appears more complex than old information when it requires updates to a mental model.
I feel like you completely missed his point here. He isn’t just talking about how complex the new stuff is. He even said flexbox was significantly better and simpler to use than “float”. What he is resisting is the continual reinvention that goes on in webdev. A new build tool every week. A new flavor of framework every month. An entire book written about loading fonts on the web. Sometimes you legitimately need that new framework or a detailed font loading library for your site. But frankly, even if you are a large company, you probably don’t need most of the fad-of-the-week stuff that happens in web dev. Flexbox is probably still good enough for your needs. React is a genuine improvement for the state of SPA development. But 3-4 different build pipelines? No, you probably don’t need that.
And while we are on the subject
CSS grid achieves this without corrupting the semantic quality of the document.
Nobody cares about the semantic quality of the document. It doesn’t really help you with anything. HTML is about presentation and it always has been. CSS allows you to modify the presentation based on what is presenting it. But you still can’t get away from the fact that how you lay things out in the html has an effect on the css you write. The semantic web has gone nowhere and it will continue to go nowhere because it’s built on a foundation that fundamentally doesn’t care about it. If we wanted semantic content we would have gone with xhtml and xslt. We didn’t because at heart html is about designing and presenting web pages not a semantic document.
Nobody cares about the semantic quality of the document.
Anybody who uses assistive technology cares about its semantic quality.
Anybody who chooses to use styles in Word documents understands why they’d want to write documents with good semantic quality.
You still can’t get away from the fact that how you lay things out in the html has an effect on the css you write.
That’s… the opposite of the point.
All of the cycles in web design – first using CSS at all (instead of tables in the HTML) and then making CSS progressively more powerful – have been about the opposite:
How you lay things out on the screen should not determine how the HTML is written.
Of course the CSS depends on the HTML, as you say. The presentation code depends on the content! But the content should not depend on the presentation code. That’s the direction CSS has been headed. And with CSS Grid, we’re very close to the point where content does not have to have a certain structure in order to permit a desired presentation.
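To make that concrete, here is a minimal sketch (class names and the three-column layout are made up for illustration): the markup stays in logical, content-first order, and Grid alone decides where each piece lands on screen.

```html
<!-- Markup in content order: the article comes first in the source,
     regardless of where it ends up visually. -->
<main class="layout">
  <article>Main content, first in the source…</article>
  <nav>Navigation…</nav>
  <aside>Sidebar…</aside>
</main>

<style>
  .layout {
    display: grid;
    grid-template-columns: 12rem 1fr 12rem;
    /* The visual order (nav, article, aside) is defined here,
       not by the order of elements in the HTML. */
    grid-template-areas: "nav article aside";
  }
  nav     { grid-area: nav; }
  article { grid-area: article; }
  aside   { grid-area: aside; }
</style>
```

Swapping the strings in `grid-template-areas` rearranges the page without touching the HTML at all, which is exactly the decoupling described above.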
And that’s my main issue with the essay: it presents this forward evolution in CSS as cyclical.
(The other issue is that the experience that compelled the author to write the article in the first place – the frenetic wheel reinvention that has taken hold of the Javascript world – is wholly separate from the phases of CSS. As far as that is concerned, I agree with him: a lot of that reinvention is cyclical and essentially fashion-driven, is optional for anyone who isn’t planning on pushing around megabytes of Javascript, and that anyone who is planning on doing that ought to pause and reconsider their plan.)
If we wanted semantic content we would have gone with xhtml and xslt.
Uh… what? XHTML is absolutely no different from HTML in terms of semantics and XSLT is completely orthogonal. XML is syntax, not semantics. It’s an implementation detail at most.
If you are building websites, please do more research and reconsider your attitude toward semantic markup. Semantic markup is important for accessibility technologies like screen readers. RSS readers and search indexes also benefit from semantic markup. In short, there are clear and easily understood necessities for the semantic web. People do care about it. All front end developers I work with review the semantic quality of a document during code reviews, and the reason they care is that it has a real impact on the user.
Having built and relied on a lot of semantic web (lowercase) tech, this is just untrue. Yes, many devs don’t care to use even basic semantics (h1/section instead of div/div), but that doesn’t mean there isn’t enough good stuff out there to be useful, or that you can’t convince them to fix something for a purpose.
I don’t know what you worked on, but I’m guessing it was niche, or that you spent a lot of time dealing with sites that most emphatically didn’t care about the semantic web. The fact is that a few sites caring doesn’t mean the industry cares. The majority don’t care; they just need the web page to look just so on both desktop and mobile. Everything else is secondary.
Man I would much prefer a separation between the doc web and the app web. I’d be interested to see a secure browser that can be composed with small protocol downloaders and documents viewers.
But where do you draw that line? There will always be things that are a mix of both. For example, a public read-only Google Doc, or an article on Medium. What about a YouTube video? You could argue that the video’s actually a document because it’s mostly static, but what about the comments?
Even search engines are a mix; clearly they’re not documents but they’re a critical part of the “documents web”. In fact, they don’t really work so well with the app web.
I think this is a strength of the web, not a detriment.
But where do you draw that line?
One obvious line could be JavaScript. If it works without JS it is part of the “documents web”.
Gmail runs without JavaScript. And a lot of people sure spend a lot of time complaining about apps that don’t gracefully degrade in the same way.
Meaningful URLs also seems like a prerequisite.
I could imagine a web mail client that used URLs correctly and presented itself in terms of each message or thread as its own document; it would be a big improvement over any web mail client I’ve actually used.
I don’t think one should try to conceive of such a separation while expecting that everything would stay the same and no tradeoffs would be paid. Ultimately, the “app web” would be just what we have today, so if you were to insist on using GDocs, Medium, or YouTube, all products of the current way of things, they could still exist. The doc web could just as well be Gopher+Markdown or something. Taking this example, there shouldn’t be any issue with search engines either: the “app web” (or maybe a third system, so that we have a holy trinity of the web) would host a search engine with a few HTTP links and a few Gopher links. Since URIs exist, this really shouldn’t be too much of an issue.
I guess I still don’t understand what the purported benefit is here. A “secure” browser as OP mentioned? Would Firefox with NoScript meet that criteria? Or is the argument that HTML in general is too complicated to parse and as such represents a security risk?
If you ask me, simplicity is preferable to complexity. When something is simple, it’s easier to implement (hence multiple implementations can compete), easier to maintain, and, as you mention, it has a higher chance of being secure. Nowadays a browser nearly fulfills the function of a virtual machine of sorts. Its implementation is, partly for historical reasons, so difficult and unwieldy that for practical purposes 3 or 4 web engines predominate, and none of them are satisfactory: memory leaks, security holes, incompatibility with standards, slowness, etc. And it’s not like someone could just fix the issue by implementing a new engine; the problem is the situation itself, which necessitates such browsers.
Splitting the task up into separate frameworks appropriate to each job could help remedy these problems, and no, this isn’t solvable by installing a plugin.
I guess I was talking more about the app part mostly abandoning HTML documents. You could keep using HTML (and other formats) for the document part, while apps would be written entirely in JavaScript (or other languages). The secure browser’s role would then mostly be about making this seamless and safe to use.
For instance, a blog post could be a Markdown document, and its comments could be a separate app that just has the doc’s URL, an auth system, and comment editing/managing features. Just use a window manager to have them both side by side.
But I’m mostly spitballing here
Edit: This is already a bit similar to how I use Tor for static websites and Firefox for web apps, but I’d like more flexibility in terms of protocols, file formats, and VMs.
This also triggered a discussion on Twitter regarding creating a standard/competition for secure alternatives to JWT…
I’d recommend looking at LEDE instead of OpenWrt, it seems that’s where all development is going on.
I played around with Meson a bit, it is really nice!
I guess the next best thing after dropping Cloudflare and similar services completely…
If you can’t provide an alternative then I can’t take this seriously, and - until then - I hope that nobody else can either.
What do you need Cloudflare for? I’ve never seen a single use case for it (other than their DNS service, which is well done compared to many other providers). People who claim “DDoS protection” seem to have either picked a crappy web host, don’t know how to use caching, or are running Apache.
An alternative to surrendering your visitors to surveillance capitalism and forcing them to train Google’s AI that will enslave them?
I guess there are some use cases for services like CF, but most of the time it’s just incompetence, something forced on developers by their managers, or a fascination with bloat. A page without a spinner is just not the modern web!
See: http://idlewords.com/talks/website_obesity.htm
Does configuring rate limiting and doing load testing before production deployment not count as an alternative? It’s not like we weren’t running websites and dealing with the problems cloudflare tries to address before that service existed.
No. Cloudflare isn’t a “rate limiting” service. Your load testing isn’t going to compare to real traffic. It’s a nice thing to do, but should never be considered representative of real traffic.
A lot of the problems that Cloudflare addresses have become worse due to multiple reasons.
Firstly, time has passed and technology has improved, which means attacks have become stronger.
Secondly, services like Cloudflare didn’t exist back then, and attackers now have to find ways to get past services like Cloudflare. This means that doing it yourself is substantially harder now, since you probably can’t compete with them in terms of DDoS protection. I doubt you ever saw anyone performing the largest DDoS in the world by hacking into people’s IoT cameras back then, either; comparing reality 10 years ago to now isn’t the best approach to solving these problems.
How are you going to implement DDoS protection? Rate limiting isn’t doing that for you, it’s just rejecting requests that are excessive. That’s what Cloudflare is trying to do here.
It’s not trying to rate limit, that makes little-to-no sense.
EDIT: Also, if you’re the one that marked my response as “incorrect” then I don’t think that you know what “incorrect” means. It is absolutely correct to say not to consider a non-alternative as an alternative. Downvotes shouldn’t be an “I don’t agree” button.