You need to build a SPA if you want your app to work offline. So if you want to compete with desktop-class software, you need a SPA. I don’t think that’s a mistake.
There are legitimate reasons for building a SPA, some of them mentioned in the article. I would rather argue that defaulting to a SPA without further considering the options is a mistake I’ve often seen.
I agree - it shouldn’t be the default - but the article is on the glib side with “basically just a media player” as the only valid use-case.
As https://changelog.com/ proves, this is even a bad example! The real use-case is offline support, but even then I’m not sure it’s the deciding factor. Nothing’s stopping us from caching pre-rendered pages for offline support for (news/information sharing) consumer apps, as opposed to interactive/client-driven apps, e.g. editors or games.
As a Linux user, I can appreciate this. Heck, I wish Lobsters had a PWA so I could get a better experience on Android.
With the new design of the Rails site, David Heinemeier Hansson’s face appears near the top in their intro video, and Basecamp and Hey are the first two listed users of Rails. I don’t remember either of those things being true of the old site, and it certainly seems like a reassertion of ownership by DHH over Rails.
That’s hyperbole! Next to those 2 we have GitHub, Shopify, Twitch, Dribble, Hulu, Zendesk, AirBnB, Square, Kickstarter, Heroku, CoinBase, SoundCloud, CookPad. That’s 2 out of 15. The linked article also mentions Shopify and GitHub as 2 prime examples of businesses that are even bigger than Basecamp/Hey.
“are even bigger” sounds disingenuous. Shopify and GH should be magnitudes bigger, by users and maybe even by developers.
“are even bigger” sounds disingenuous […] bigger, by users and maybe even by developers
disingenuous dĭs″ĭn-jĕn′yoo͞-əs adjective
Not straightforward or candid; insincere or calculating.
Pretending to be unaware or unsophisticated; faux-naïf.
Unaware or uninformed; naive.
Not noble; unbecoming true honor or dignity; mean; unworthy.
Not ingenuous; wanting in noble candor or frankness; not frank or open; uncandid; unworthily or meanly artful.
Not noble; unbecoming true honor or dignity; mean; unworthy; fake or deceptive.
Not ingenuous; not frank or open; uncandid; unworthily or meanly artful.
Assuming a pose of naivete to make a point or for deception.
Your point being? Many of the others will be at least an order of magnitude bigger in users, developers or even revenue. If you’re saying what I think you’re saying, i.e. only one or neither should be in that list because the others are so much larger on some metric, well, let’s agree to disagree. There are plenty of reasons why these 2 names should be on that list given they’ve invented Rails and continue sponsoring its development one way or another.
The last two weeks I have had a few occasions to discuss Ruby 3 with friends, and wonder about when libraries will begin to ship features that lean into more parallel or asynchronous workflows. The exact example one of them raised was “Even just allowing independent database queries to run concurrently would be amazing” and lo, Relation#load_async.
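As a rough sketch of what Relation#load_async looks like in use (model and column names invented, and assuming Rails 7 with an async query executor configured):

```ruby
# Hypothetical controller action; each relation schedules its query right
# away on a background thread from the async query pool.
def dashboard
  @recent_posts = Post.where(published: true).order(created_at: :desc).limit(10).load_async
  @popular_tags = Tag.order(taggings_count: :desc).limit(20).load_async

  # Touching either relation (e.g. in the view) blocks only until its own
  # query finishes, so the two round-trips overlap instead of running serially.
end
```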
As exciting to me is the promise of removing some of the gotchas and edge cases of the Rails autoloader. I really should dig into that Zeitwerk upgrade guide, because I do worry about how many projects I have with autoloader workarounds that I will need to seek out and undo.
I’ve been doing async in Ruby for over a decade. While the Ruby 3 features are useful, they don’t add anything fundamentally new.
Started out with callbacks and deferrables, worked with one of the inventors of fiber-based “invisible async”, and most recently promise-based.
This is pretty disingenuous. You could do async Ruby before Ruby 3, but to really do it you needed an external library like eventmachine. The language itself prevents true async because of a GIL. Ruby 3’s features make async much more practical to achieve with the base language itself.
IME the GIL makes no difference, since anything that benefits from async is IO bound and doesn’t use even one full core of CPU anyway…
I think I agreed with every single one of these except the pro-mobbing take.
Sophisticated DSLs with special syntax are probably a dead-end. Ruby and Scala both leaned hard into this and neither got it to catch on.
As someone who’s worked with ruby more than 10 years, couldn’t agree more here. rspec is the conspicuous remnant of this delirium and is an unequivocal mistake.
Anecdata: Every time I see something that completely abuses Javascript in a way that breaks catastrophically when you drift outside of “blog engine demo” territory, it’s always somehow descended from rspec and/or cucumber.
As someone who’s worked with ruby more than 10 years, couldn’t agree more here. rspec is the conspicuous remnant of this delirium and is an unequivocal mistake.
Could you elaborate on this?
I haven’t used rspec, but I’ve used mocha in JS, which I think was inspired by rspec. In Mocha we write a lot of statements like expect(someValue).to.have.keys(['a', 'b']). I don’t love the Mocha syntax, but it does produce quite nice, human-readable error output. I guess it could be easily reduced to expect(someValue).hasKeys(['a', 'b']).
Happy RSpec user here and definitely going to continue using it in future. Not sure why some people keep repeating that DSLs haven’t caught on, especially in Ruby. It’s the least convincing argument as to why it’s worse than something else. ActiveRecord, the ORM in Rails, is nothing but a DSL to model relationships and 1000s of companies use it to build successful businesses. The proof is in the pudding. Is it perfect, certainly not. Does it require reading (and potentially re-reading) the docs, sure does. Is there a learning curve to become proficient and does it require experience to know when to use what or stay away from it, most definitely like with all things high level.
RSpec is most likely the most successful DSL, at least judging by the download/deployment numbers, see https://rubygems.org/gems/rspec (vs. https://rubygems.org/gems/minitest or https://rubygems.org/gems/activerecord for instance).
The problem I’ve encountered in most of these DSLs (I played a bit with Mocha many years ago, but have the most experience with SBT/Scala, Chef, and bizarro DSLs atop Tcl invented in hardware land) is a combination of:
Poor documentation for the non-happy paths: The happy path is easy, but the moment you need to do something off the beaten path, you’ll find sharp edges in the DSL and lacking documentation. This also makes teaching other engineers about a DSL difficult. In my experience we taught SBT mostly by having experienced engineers pair with newer engineers to teach them about the DSL. This adds a learning overhead to DSLs that just isn’t there for general purpose programming languages.
Bad error messages: Again, most DSLs are optimized for the happy path. Many of these DSLs don’t really chain errors together very well. When you give these DSLs something they don’t expect, they rarely output any sensible error output to work with.
Few escape hatches: DSL authors (looking at you SBT) really like to, understandably, constrain what you can do in the DSL. That’s great until it’s not. Most DSLs don’t offer you a good way to break out of their assumptions and don’t give you a good way to interact with their cloistered world.
I could write about this at length but I’ll try to be brief.
Rspec re-invents fundamental ruby language concepts for sharing and organizing code, with no benefit other than making your tests “read more like English,” which can seem cool to beginners (and did to me at one point) but is purely cosmetic and superfluous. Examples of this language re-invention include shared_examples/it_behaves_like, let statements, and proliferating “helper” files.
To use rspec well, you need to learn a whole new language and set of best practices. And every new member of your team does too. I mean, there are whole books on it. Testing frameworks should be simple, not book-worthy. And there is nothing special about testing that warrants this. If you invest time becoming a ruby expert, you should be able to use your ruby expertise to write good tests. You should be able to use normal language constructs to share code in tests.
This is an old debate, and DHH was complaining about it years ago.
With that said, I don’t mind the expect DSL for assertions, and I like “it” blocks as a way of defining tests. But both of these, while technically DSLs, are small, focused, simple constructs that can be learned in five minutes and probably grokked without even reading the docs. Minitest is essentially just these parts of rspec, with the expect assertions optional, and that’s what I’d recommend for testing in ruby.
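To make that concrete, a rough sketch of the Minitest spec style being recommended here (the Calculator class is invented for the example):

```ruby
require "minitest/autorun"

class Calculator
  def add(a, b)
    a + b
  end
end

# Minitest's spec mode gives you describe/it blocks and expectation-style
# assertions without the rest of RSpec's DSL (no let, shared_examples, etc.).
describe Calculator do
  it "adds two numbers" do
    _(Calculator.new.add(2, 3)).must_equal 5
  end
end
```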
Since “read like English” is the thing RSpec optimized for, it is incoherent to dismiss it. Programming languages should be easy to read and easy to write, but readability and writability are in tension. RSpec is the result of optimizing readability to the extreme, such that you don’t need to know RSpec to read it. Compare: you do need to know Ruby to read it.
When is “read like English” useful and not cosmetic? When it needs to be read by someone who doesn’t know how to program. If everyone reading your RSpec test knows how to program, sure, you probably shouldn’t use RSpec: as you said, it duplicates Ruby for not much benefit. The key idea is that learning RSpec when you already know Ruby is many times easier than learning Ruby when you don’t know how to program.
Since “read like English” is the thing RSpec optimized for, it is incoherent to dismiss it.
It’s not incoherent in the least. It is, and always has been, a silly and misguided goal. I say this as someone who has read the rspec book, who knows it and the philosophy behind it well, and at one time believed the hype.
Programming languages should be easy to read and easy to write, but readability and writability are in tension. RSpec is the result of optimizing readability to the extreme, such that you don’t need to know RSpec to read it. Compare: you do need to know Ruby to read it.
I’m sorry, but this is pure nonsense. Rspec is not more readable than ruby, and the idea that non-programming “stakeholders” will be more likely to read and participate in the design process if you use rspec or cucumber is a pipe dream. I’ve never seen it happen, not once. And in the rare instance that one of these stakeholders was going to do this, they would find it no more difficult to understand minitest or test/unit. The difference in friction would be negligible, and the skill and clarity of the programmer writing the tests would be overwhelmingly more important than the test framework.
The idea that non-programming “stakeholders” will be more likely to read and participate in the design process if you use rspec or cucumber is a pipe dream. I’ve never seen it happen, not once.
I have seen it happen, but if you haven’t, I 100% agree RSpec has been entirely useless for you.
https://www.stevenrbaker.com/tech/history-of-rspec.html gives a better description of the original goal.
Is there anything like this available as a Rails engine? I’ve wanted to get off GA for a long time but don’t really want to host anything outside of what I’ve got running on the base-level paid Heroku plan.
Haven’t used it myself yet but looks good: https://github.com/ankane/ahoy
This is great, probably almost exactly what I want based on a quick review of the README. Thank you so much!
They’re missing the most important (to me) one! Plugins are nerfed in terms of what keys they’re allowed to bind. For “security reasons” it’s impossible for a plugin in Chrome’s extension system (which Firefox tragically copied) to bind to ctrl-n, ctrl-t, or ctrl-p, all critical Emacs shortcuts. So plugins are more or less completely useless for building an Emacs-like browser.
True, plugins in the major browsers have been neutered.
Both Chromium and Firefox can be patched to get rid of the keybinding restrictions. Firefox can even be hot patched from the binary so you don’t have to rebuild anything to free up the reserved key bindings.
But yes, it’s sad that there isn’t a better way.
Holy smokes, that hot patch is amazing.
If I had found that four years ago I would have saved myself a lot of pain. (In the end I instead switched to a window manager which can remap all keystrokes from Emacs keys to “conventional keys” before the browser even sees them so I don’t have a need for that any more.) But I admire it all the same.
Ooh, I’d love to hear more about this. I’m 100% Linux these days but if there’s something that I do miss from macOS it’s that the default readline/Emacs keybindings for cursor navigation work in every text-like GUI element.
Oh man, EXWM’s simulation keys changed my life. I can’t sing their praises highly enough. I went from “once every twenty minutes I want to throw my laptop out the window” to “hey this is great” instantly: https://technomancy.us/184
I use it mostly to make Firefox bearable again but it works in any X program.
This is a great fix, but the change in Firefox extensions architecture got rid of lots of other interesting possibilities that made Vimperator / Pentadactyl a really immersive user experience.
And I say this as an Emacs fanboy who learned vi just to enjoy Vimperator. It was really good, but sadly it’s gone.
Every time someone links this WRT Firefox and WebExtensions, I have to link this post because it does a good job at explaining why.
I don’t begrudge them removing XUL; it needed to die.
I just wished they replaced it with something actually good.
I agree that XUL had to die I just wish they hadn’t used that as an opportunity to slam the door on full-access extensions. I also think WebExtensions are a good idea but I don’t think full-access extensions should have been fully killed.
My preferred approach would be having two types of extensions:
Basically the API for full-access would just be “you may run arbitrary JS in the main process”. Following API changes and not breaking things is the problem of the extension author.
I think this provides enough reason to prefer WebExtensions where they work and limits the maintenance burden for Firefox devs while still allowing for truly powerful extensions.
Honestly, without the old style of extensions I see very little meaningful difference between Firefox and Chrome. The only reason I stick with FF is to resist the Google web monopoly.
When they say “built on the same technologies as other popular chat apps,” what do they mean? The web page mentions “privacy” a lot but doesn’t say anything specific about it. Neither do the GitHub repos’ READMEs, unless I missed something.
To me, “privacy” means my messages aren’t going to be stored on the admin’s server, given that a lot of the time I’d probably not have any particular trust in said admin or their competence at setting up e.g. a MongoDB server without public access open to the world.
Looks like it’s not really private at all. There’s no encryption. It’s just self-hosted.
So the privacy they’re talking about is the de facto gain of hosting it yourself, or having it hosted by a person you trust rather than a big company like Discord or Guilded that sells your data.
If a malicious actor was watching your traffic somehow, like over coffeehouse WiFi or whatever, then your messages being sent aren’t any less secret or safe. That’s no better than Discord is today though.
If you want encrypted rooms I guess you gotta stick with Matrix, but what appeals to me about Revolt is that my friends might actually try it because it feels like Discord, unlike Matrix.
That’s no better than discord is today though.
If I run discord from the browser it’s a https site, so plaintext should not be accessible from the wire.
I can’t imagine the “native” app works any differently.
There are degrees of privacy. A self-hoster will probably not sell traffic/usage data to advertisers, but there are absolutely no guarantees they won’t snoop on private messages/images. This can happen in a big org like Discord but there’s a higher chance that such behavior is monitored and caught - if nothing else for the massive reputational risk of it being exposed.
It’s also a practical thing. Matrix by default also doesn’t e2e encrypt everything, depending on some circumstances, and the same goes for IRC. And that’s fine, it allows for very performant full text search and sync (which is also certainly eco friendlier). Telegram (the chat program) didn’t win because of its encryption, it won because of its features, usability and speed. And if I look at day to day usage of Discord I’m certain you don’t need e2e here; you may tell people upfront that for e2e-worthy content they should choose something else than a glorified gaming Slack with video + audio and nice icons, but that’s it.
I’m currently adding some online sync to my own F-Droid app; it also won’t have any e2e features. TLS it is, and if you’re distrusting me, you can host your own instance as easily as running some binary on your own VPS.
Matrix by default also doesn’t e2e encrypt anything
That’s wrong, Matrix (more specifically Element, the reference client implementation, previously called Riot) has been encrypting private chats by default for over a year now.
Ok, then my work instance simply doesn’t use E2E in its rooms. But we’ve been using it longer than Matrix has had e2e, and we’ve not adopted it for that.
Looks like it’s not really private at all. There’s no encryption. It’s just self-hosted.
The roadmap at least does mention E2EE as a possibility in the future, see https://revolt.chat/roadmap#features-4, potentially scroll down to see:
(draft) 0.6.0: E2EE Chat
This is the drafted version for e2ee.
I asked in the beta instance (devs are in there) and it looks like that e2ee roadmap item is for DMs and small group chats, not the discord-like ““servers”” within your instance
To each their own but IMHO this can be okay depending on the circumstances.
Example: I hang out a lot in a Slack where there’s ≈2000 persons in #general. So it’s basically as private as Twitter. To me that’s fine – but I would perhaps not tell all the secrets of my heart in that Slack. But having a discussion about Go or Python or whatever is cool with me, I don’t consider those topics private.
Even if that chat room was end-to-end-encrypted I still would not consider it private. Anyone of those members could copy my messages or take screenshots and spread it to God-knows-where.
Mostly WebSockets. Which is pretty much a given for any browser-based chat product. But I suppose with Rustaceans that might not be the case, since in theory you could create your own socket protocol and communicate over a Wasm client… :)
That’s cool and all, but knowing Mozilla, an about:config option called “legacyUserProfileCustomizations” is gonna disappear.
Related: In Firefox 90, they removed the “Compact” density option, which I kind of rely on to be comfortable with Firefox. They added an about:config option, “browser.compactmode.show”, to add it back. In Firefox 91, if you have the option enabled, they renamed the option from “Compact” to “Compact (not supported)”. I know it’s only a matter of time before they remove it entirely. And I’m kind of panicking about that, because I really, really don’t want more vertical space to be wasted by padding in the tab bar on small laptop screens. If anything, I find the “Compact” option too big. I wish Mozilla would stop changing things for the worse, and I wish Mozilla would stop taking away configuration options.
Every major release, they’ve taken away functionality that I depend on.
I feel the same. I even often joke that Mozilla is spying on me with the sole goal of knowing what features I rely on and yanking them away from me :).
What is wrong with them?
Nothing is wrong with them. We just aren’t part of their main demographic target.
I absolutely dread every single firefox update for the same reason - something I rely on or have burned into hand and muscle memory gets altered or removed.
It feels completely hopeless to me as well because I can’t see another acceptable choice. I can’t support Google’s browser engine monopoly and every other browser I research has some other issue that makes me reject it in comparison.
It feels like abuse by endless paper cuts, unwanted and unnecessary changes forced on me with no realistic choice to opt out. These changes seem to be accelerating too and meanwhile firefox market share declines further and further.
That’s cool and all, but knowing Mozilla, an about:config option called “legacyUserProfileCustomizations” is gonna disappear.
The reason it was made an option is to slightly improve startup time since it won’t have to check for this file on the disk. Comparatively few people use it, which is hardly surprising since it’s always been a very hidden feature, so it kind of makes sense: if you’re going to manually create CSS files then toggling an about:config option is little trouble.
Apparently, the name “legacy” is in there “to avoid giving the impression that with this new preference we are adding a new customization feature”. I would be surprised if it was removed in the foreseeable future, because it’s actually not a lot of code/effort to support this.
That being said, instead of relegating userChrome to a “hidden feature”, it would seem to me that properly integrating support for these kind of things in Firefox would give a lot more benefits. In many ways, almost everyone is a “power user” because a lot of folks – including non-technical people – spend entire days in the browser. For example, I disable the “close tab” button because I press it by accident somewhat frequently which is quite annoying. While loads of people don’t have this problem, I suspect I’m not the only person with this small annoyance considering some other browsers just offer this as a setting, but I am one of the comparatively small group of people who has the know-how to actually fix it in Firefox.
The technical architecture to do a lot of these things is right there and actually works quite well; it just doesn’t fit in the “there is one UX to satisfy everyone, and if you don’t like it then there’s something wrong with you”-mentality 🤷
All preference changes through about:config are officially unsupported and come with zero guarantees. This includes creating custom chrome folders.
There actually is a maintenance and complexity cost for keeping these things alive. We’ve done a lot of hardening lately that was cumbersome to pull off in light of those customizations. In essence, we are disabling some protections when we detect profile hackery. I want to repeat: despite our team working around and “acknowledging” the existence of those hacks, they are still unsupported and we can’t promise to always work around custom profile quirks.
The best way to be heard and change things in an open source project is to show up and help drive things. I know this isn’t easy for big project like ours…
This used to be a longer comment but I edited it and now it only shows this note 😳. I was needlessly nasty. Sorry, man, rough day.
I, too, prefer the “Compact” theme. Is there still anything that can be done to keep it going forward?
I’ve heard this startup time justification before, but surely the additional hassle of implementing, testing, and documenting a new configuration parameter isn’t worth saving a single stat call on startup? It’s hard to imagine that even shows up in the profile.
If everything else has already been tightly optimized, a stat call performed on a spinning rust drive could show up as a major bottleneck when profiling startup performance.
When I rebuild LLVM, ninja does a stat system call for every C++ source and header file on the disk, about 10,000 of them in total. If I have no changes, then it completes in so little time that it appears instantaneous.
If the cost of a stat system call is making a user-noticeable difference to start times, then you’re in an incredibly rare and happy place.
What’s even dumber is when you see all these websites that check that the browser is Chrome/Chromium to enable a feature, whether or not the browser can actually support it. How did this happen? There is everything needed in CSS and JS to check if a feature is supported nowadays.
There are ways to check whether or not JS supports a feature and the same is true for CSS (see @supports), but the problem is that some browsers lie, with (Mobile) Safari being a prominent example unfortunately.
I was curious to see if this was built on top of GTK, but it seems like desktop Linux/BSD is not a target at all.
https://docs.microsoft.com/en-us/dotnet/maui/supported-platforms mentions that Linux support is provided by the community. Maybe somebody else can comment on the quality. Honestly, it feels like a lost opportunity/oversight if I’m generous and if not, it just goes on to highlight that not much has actually changed about M$ w.r.t. Linux, i.e. do the bare minimum to make money off/with it but not more.
It’s difficult to support ‘Linux’ because there’s no ‘Linux’ GUI toolkit. It’s possible to support Android. It’s possible to support GTK. It’s possible to support Qt. If you support GTK, the KDE folks will hate you. If you support Qt, the GNOME folks will hate you. In both cases, you’re taking dependencies on LGPL’d things and everyone shipping the code needs to understand the legal obligations that this comes with (the cross-platform bits are all MIT licensed so the legal obligations are trivial to comply with).
If the community maintains GTK or Qt (or both) integrations then anyone wanting to use them is picking up an extra dependency (the community-maintained version) and needs to do the same due diligence on licensing that they’d need to for any other dependency.
Personally, I’d love to see something based on Skia added for a completely portable MIT-licensed GUI toolkit but it’s not something I have time to work on.
see my comment above, it seems that they are moving in that direction. Microsoft.Maui.Graphics supports Linux via GTK and states that it implements SkiaSharp APIs.
Although, since you work there, you probably have more of a preview than just my searches.
see my comment above, it seems that they are moving in that direction. Microsoft.Maui.Graphics supports Linux via GTK and states that it implements SkiaSharp APIs
The links you found make it look as if they’re moving towards a pure .NET widget set. SWING did that and it was awful for a couple of reasons. The first is that Java was painfully slow 20 years ago. The CLR is a lot better than a 20-year-old JVM and computers are a lot faster, so that’s not a problem. The second is common with things like Qt that do the same: your apps don’t feel native unless you put in a lot of work. For example, on macOS it took 15 years for Qt to use the same keyboard shortcuts for navigation in a text field as every other application on the system. NSTextView automatically integrates with text services so I can, for example, hit a shortcut key in any NSTextView with rich text enabled and have a LaTeX math expression replaced with a PDF rendering of it (which embeds the source in the PDF metadata so I can hit another key combination to get back the original). There’s a huge amount of plumbing required to get that kind of thing right and you have to redo a lot of it if the system APIs change.
For X11/Wayland, the UIs are so inconsistent that it probably doesn’t matter. For iOS / macOS, it’s really noticeable when things bring their own widget set (or even don’t properly port their UI; for example, command-shift-z is redo on every Mac application, except on Office where it’s command-y). For Windows / Android it’s mildly annoying but there’s already quite a bit of inconsistency.
Although, since you work there, you probably have more of a preview than just my searches
I try to meet up with some of the .NET folks when I’m in Redmond to get their perspective on Verona, but I haven’t managed to visit for two years because of the pandemic. I’m not working on anything in the .NET space so I don’t know anything that isn’t public.
Personally, I’d love to see something based on Skia added for a completely portable MIT-licensed GUI toolkit but it’s not something I have time to work on.
I haven’t looked but I’d assume that’s what Flutter uses across the board, incl. the recently announced Linux implementation.
Flutter, unfortunately, is tightly coupled with Dart. This is great if you want to use Dart, but it means that you don’t have a language-agnostic toolkit and so doesn’t look like something that MAUI could use.
There is https://github.com/dotnet/Microsoft.Maui.Graphics:
“… Microsoft.Maui.Graphics is a cross-platform graphics library for iOS, Android, Windows, macOS, Tizen and Linux completely in C#. With this library you can use a common API to target multiple abstractions allowing you to share your drawing code between platforms, or mix and match graphics implementations within a singular application. …”
Even now (de-prioritized ?) Tizen is there.
Linux seems to be supported via GTK.
This is a graphics-only library (meaning Canvas + fonts + PDF), not a library of GUI controls. The graphics library implements mono’s SkiaSharp APIs. Notable that mono’s SkiaSharp itself does not appear to support Linux (at least from its readme). So MAUI’s support for Linux’s graphics primitives is moving in the right direction, compared to the previous SkiaSharp.
There is an experimental project on top of Microsoft.Maui.Graphics, that aims to build Controls for all the operating systems:
https://github.com/dotnet/Microsoft.Maui.Graphics.Controls
However, this project does not have anything in its https://github.com/dotnet/Microsoft.Maui.Graphics.Controls/tree/main/src/GraphicsControls/Platform for Linux, yet.
I certainly would hope that the .NET ecosystem would support Linux and the 3 BSDs as first-class citizens. MAUI’s summary emphasizes support for various sensors, and I think Linux and the couple of BSDs have a reasonably noticeable presence in the device + sensors space.
Also I wish MS would have a consolidated, easy to consume strategy for all of their cross platform UI efforts (graphics, controls, etc) – right now it is really difficult to grok.
I see a lot of posts on Firefox vs Chrome (or in this case Chromium) and it always seems to be people lobbying for others to use Firefox for any number of moral or security reasons. The problem that I see with a lot of this is that Firefox just isn’t as good of a user experience as Chromium-based browsers. Maybe Mozilla has the best intentions as a company, but if their product is subjectively worse, there’s nothing you can really do.
I’ve personally tried going back to Firefox multiple times and it doesn’t fulfill what I need, so I inevitably switch back to Vivaldi.
This is really subjective. I tried using ungoogled-chromium but switched back to Firefox. I used Vivaldi for a while but switched to Firefox as well. Before that I was using the fork of Firefox called Pale Moon but I got concerned with the lack of updates (due to how small the team is).
Sure it is, but almost 80% of the world is using a chromium browser right now and Firefox is stagnant at best, slowly losing ground. Firefox even benefits from being around longer, having a ton of good will, and some name recognition, and it still can’t gain market share.
It also didn’t get advertised every time you visit Google from another browser. It also isn’t installed by default on every Android phone.
Firefox already had its brand established for years before that happened. It’s also worth noting that Windows ships with Microsoft’s browser (which is now a Chromium variant, but wasn’t until recently) and doesn’t even use Google as the search engine, so the vast majority of new users don’t start with a browser that’s going directly to Google to even see that message.
And yet they start with a browser, and why replace something if what you have already works, discounting those pesky moral reasons as if those are not worth anything?
Among technical users who understand browsers, sure, you might choose a browser on subjective grounds like the UX you prefer. (Disclaimer: I prefer the UX of Firefox, and happily use it just fine.)
Most people do not know what a browser even is. They search for things on Google and install the “website opener” from Google (Chrome) because that’s what Google tells you to do at every opportunity if you are using any other browser.
When some players have a soap box to scream about their option every minute and others do not, it will never matter how good the UX of Firefox is. There’s no way to compete with endless free marketing to people who largely don’t know the difference.
If that were the case, people would switch back to Edge and Safari because both Windows and MacOS ask you to switch back, try it out again, etc every so often.
The UX of firefox is ok (they keep ripping off the UI of Opera/Vivaldi though fwiw and have been doing so forever), but it functionally does not work in many cases where it should. Or it behaves oddly. Also, from a pure developer perspective, their dev tools are inferior to what has come out of the chromium project. They used to have the lead in that with Firebug, too, but they got outpaced.
Yeah, I switched to Firefox recently and my computer has been idling high ever since. Any remotely complicated site being left as the foreground tab seems to be the culprit.
Except that, as far as I can tell, Firefox isn’t produced by a malicious actor with a history of all sorts of shenanigans, including a blatantly illegal conspiracy with other tech companies to suppress tech wages.
Sure, if your personal threat model includes nation states and police departments, it may be worthwhile switching to Chromium for that bit of extra hardening.
But for the vast majority of people, Firefox is a better choice.
I don’t think we can meaningfully say that there is a “better” choice; web browsers are a depressing technical situation where every decision has significant downsides. Google is obviously nefarious, but they have an undeniable steering position. Mozilla is more interested in privacy, but depends on Google; nor can they decide to break the systems that are created to track and control their users, because most non-technical users perceive the lack of DRM to mean something is broken (“Why won’t Netflix load”). Apple and Microsoft are suspicious for other reasons. Everything else doesn’t have the manpower to keep up with Google and/or the security situation.
When I’m cynical, I like to imagine that Google will lead us into a web “middle age”, that might clean the web up. When I’m optimistic, I like to imagine that a web “renaissance” would manage to break off Google’s part in this redesign and result in a better web.
Mozilla also has a history of doing shady things and deliberately designed a compromised sync system because it is more convenient for the user.
Not to mention, a few years ago I clicked on a Google search result link and immediately had a malicious EXE running on my PC. At first I thought it was a popup, but no, it was a drive-by attack with me doing nothing other than opening a website. My computer was owned, only a clean wipe and reinstallation helped.
I’m still a Firefox fan for freedom reasons but unfortunately, the post has a point.
a few years ago I clicked on a […] link and immediately had a malicious EXE
I find this comment disingenuous due to the fact that every browser on every OS had or still has issues with a similar blast radius. Some prominent examples include hacking game consoles or closed operating systems via the browser all of which ship some version of the Webkit engine. Sure, the hack was used to “open up” the system but it could have been (and usually is) abused in exactly the same way you described here.
Also, I’m personally frustrated by people holding Mozilla to a higher standard than Google when it really should be the absolute opposite due to how much Google knows about each individual compared to Mozilla. Yes, it would be best if some of the linked issues could be resolved such that Mozilla can’t intercept your bookmark sync but I gotta ask: really, is that a service people should really be worried about? Meanwhile, Google boasts left, right and center how your data is secure with them and we all know what that means. Priorities, people! The parent comment is absolutely right: Firefox is a better choice for the vast majority of people because Mozilla as a company is much more concerned about all of our privacy than Google. Google’s goal always was and always will be to turn you into data points and make a buck off that.
your bookmark sync
It’s not just bookmark sync. Firefox sync synchronizes:
If you are using these features and your account is compromised, that’s a big deal. If we just look at information security, I trust Google more than Mozilla with keeping this data safe. Of course Google has access to the data and harvests it, but the likelihood that my Google data leaks to hackers is probably lower than the likelihood that my Firefox data leaks to hackers. If I have to choose between leaking my data to the government or to hackers, I’d still choose the government.
If I have to choose between leaking my data to the government or to hackers, I’d still choose the government.
That narrows down where you live, a lot.
Secondly, I’d assume that any data leaked to hackers is also available to Governments. I mean, if I had spooks with black budgets, I’d be encouraging them to buy black market datasets on target populations.
I’d assume that any data leaked to hackers is also available to Governments.
Exactly. My point is that governments occasionally make an effort not to be malicious actors, whereas hackers who exploit systems usually don’t.
I clicked on a Google search result link
Yeah, FF is to blame for that, but also lol’d at the fact that Google presented that crap to you as a result.
Which nicely sums up the qualitative difference between Firefox and Google. One has design issues and bugs; the other invades your privacy to sell the channel to serve up .EXEs to your children.
Whose browser would you rather use?
Mozilla also has a history of doing shady things and deliberately designed a compromised sync system because it is more convenient for the user.
Sure, but I’d argue that’s a very different thing, qualitatively, from what Google has done and is doing.
I’d sum it up as “a few shady things” versus “a business model founded upon privacy violation, a track record of illegal industry-wide collusion, and outright hostility towards open standards”.
There is no perfect web browser vendor. But the perfect is the enemy of the good; Mozilla is a lot closer to perfect than Google, and deserves our support on that basis.
These mitigations are not aimed at nation-state attackers, they are aimed at people buying ads that contain malicious data that can compromise your system. The lack of site isolation in Firefox means that, for example, someone who buys an ad on a random site that you happen to have open in one tab while another is looking at your Internet banking page can use Spectre attacks from JavaScript in the ad to extract all of the information (account numbers, addresses, last transaction) that is displayed in the other tab. This is typically all that’s needed for telephone banking to do a password reset if you phone that bank and say you’ve lost your credentials. These attacks are not possible in any other mainstream browser (and are prevented by WebKit2 for any obscure ones that use that, because Apple implemented the sandboxing at the WebKit layer, whereas Google hacked it into Chrome).
Hmmmm. Perhaps I’m missing something, but I thought Spectre was well mitigated these days. Or is it that the next Spectre, whatever it is, is the concern here?
There are no good Spectre mitigations. There’s speculative load hardening, but that comes with around a 50% performance drop so no one uses it in production. There are mitigations on array access in JavaScript that are fairly fast (Chakra deployed these first, but I believe everyone else has caught up), but that’s just closing one exploit technique, not fixing the bug and there are a bunch of confused deputy operations you can do via DOM invocations to do the same thing. The Chrome team has basically given up and said that it is not possible to keep anything in a process secret from other parts of a process on current hardware and so have pushed more process-based isolation.
A reminder that we shouldn’t be leaving our private communications at the mercy of proprietary software.
Open source should be a legal requirement in many scenarios.
Of course the issue is that most governments are not interested in private communications to begin with, often quite the opposite.
Yeah, that was the point. Netflix happily uses FreeBSD but couldn’t care less about FreeBSD users.
Of course not. Why would a for profit media company waste (expensive) resources to support an OS that basically nobody uses on the desktop?
I know it sounds harsh, but FreeBSD desktop use is irrelevant to any company.
Gaming on Linux was mostly irrelevant until Steam found a reason to support/foster it (apply pressure on Microsoft + Apple and their app stores). Given that the PS4 (and presumably PS5) uses FreeBSD for its OS and Netflix supports that platform, there’s probably some incentive there to upstream certain things. Though I presume Sony is happy to keep the status quo for the moment.
I imagine a lot of the PS4 graphics code they write is under NDA with AMD since they’re not just using off-the-shelf components, but I could be wrong. Has Sony given anything back?
Has Sony given anything back?
Not that I know of but then I’m totally the wrong person to answer that question.
Hey, at least they’re in the second largest donor class this year. I’d think FreeBSD Development would deserve more all things considered.
“hours of rollouts and draining and reconnection storms with state losses.”
I work with a platform that’s mostly built from containers running services (no k8s here though, if that’s important), but the above isn’t familiar to me.
State doesn’t get lost: load balancers drain connections and new tasks are spun up and requests go to the new tasks.
When there’s a failure: Retries happen.
When we absolutely have to have something work (eventually) or know everything about the failure: Persistent queues.
The author doesn’t specify what’s behind the time necessary for rollouts. I’ve seen some problematic services, but mostly rollout takes minutes - and a whole code/test/security scan/deploy to preprod/test/deploy to prod/test/… cycle can be done in under an hour, with the longest part being the build and security scanning.
The author also talks about required - and scheduled - downtime. Again I don’t know why the platform(s) being described would necessarily force such a requirement.
Here’s one example: the service may require gigabytes of state to be downloaded to work with acceptable latency on local decisions, and that state is replicated in a way that is constantly updated. These could include ML models, large routing tables, or anything of the kind. Connections could be expected to be active for many minutes at a time (not everything is a web server serving short HTTP requests that are easy to resume), and so on.
Rolling the instance means having to re-transfer all of the data and re-establish all of that state, on all of the nodes. If you have a fleet of 300 instances requiring 10 minutes to shut down from connection drains, which then take 10 minutes to come back up, re-sync their state, and come back to top performance, rolling them in batches of 10 (because you want it somewhat gradual and to leave time for things to rebalance) will take roughly 10 hours, longer than a working day.
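Spelling out the arithmetic (same numbers as above, nothing measured):

```ruby
instances      = 300
batch_size     = 10
drain_minutes  = 10  # connection draining before shutdown
warmup_minutes = 10  # re-sync state and return to top performance

batches       = instances / batch_size                    # => 30
total_minutes = batches * (drain_minutes + warmup_minutes)
puts "#{total_minutes} minutes (~#{total_minutes / 60} hours)"  # => "600 minutes (~10 hours)"
```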
I do have some services which work in a similar way, come to think of it - loading some mathematical models and warming up before they’re performing adequately - along similar timescales.
I think we’ve been lucky enough not to be working at the number of instances you’ve described there, or to have fast enough iteration on the models for us to need to release them as often as daily.
For Erlang to be taken out of the picture where it was working nicely doing what (OTP) does best does sound painful.
We have something similar, at much larger numbers than described. We cut over new traffic to the new service versions, and keep the old service versions around for 2-4 weeks as the long tail of work drains.
It sucks. I really wish we had live reloading.
On the other hand, the article mentions something like “log in to the repl and shoot the upgrade”, which seems like manual work. I would think that 1 hour of manual work versus 10 hours of automated rollout has different tradeoffs.
As for the fleet shutdown calculation, you can also deal with that differently. You can at the very least halve the time by first bringing the new instances up, then shutting down the old ones, so your batch doesn’t take 20 minutes, but 10. If you want to “leave time for things to rebalance”, you still have to do that in the system that you described in the article.
Now, I’m not saying that I didn’t agree a lot with what you wrote there. But I did get a vibe that seems to be talking down on containers or k8s, and not comparing the tradeoffs. But mostly I do agree with a lot of what you’ve said.
You can at the very least halve the time by first bringing the new instances up, then shutting down the old ones, so your batch doesn’t take 20 minutes, but 10.
That doesn’t sound right. It takes 10 minutes to bring up new instances and 10 minutes to drain the old ones, at least that’s my understanding. Changing around the order of these steps has the advantage of over-provisioning such that availability can be guaranteed, but the trade-off is (slightly‽) higher short-term cost (10h in that example). Doing these 2 steps in parallel is of course an option and probably what you suggest.
Unless you need i18n, then never manipulate case with css, or at least make sure you only ever do it when .en is present on the body or something.
Care to elaborate a bit more? ferd gives a few examples supporting the OP. Same arguments would apply to languages like German and Russian.
Taken from MDN:
The text-transform property is not reliable for some locales; for example, text-transform: uppercase won’t work properly with languages such as Irish/Gaelic. For example, App Size in English may be capitalized via text-transform: uppercase to APP SIZE but in Gaelic this would change Meud na h-aplacaid to MEUD NA H-APLACAID which violates the locales orthographic rules, as it ought to be MEUD NA hAPLACAID. In general, localizers should make the decision about capitalization. If you want to display WARNING, add a string with that capitalization, and explain it in the localization note.
The examples they give here are only for Gaelic, but I would imagine there is more than 1 language where no font is going to encapsulate the orthographic complexities of planet Earth.
Not to mention how many custom fonts it might take to handle all of these (which are probably already present as best as possible on the end user’s computer) results in more web page bloat.
Thank you for following up, TIL. I’m still unconvinced about “never manipulate case with css” but the rest of your remark about I18n, and to be more precise—only use text-transform for certain lang—makes absolute sense. Given that so many web sites/apps don’t even support I18n to begin with, IMO the benefits of using it (see ferd’s comment) outweigh the potential negatives. Once you decide to go all in on I18n and support as many languages as possible you’ll usually run into many many other cases where most I18n implementations will fail you one way or another.
I wrote a cron job that fetches RSS feeds and pipes new items into a folder in my emails.
Advantages:
Disadvantages:
I use Newsboat as a backend for fetching RSS items.
I wrote Newsboat-Sendmail which taps into the Newsboat cache to send emails to a dedicated email address.
To make sure the IDs of the emails’ subjects are kept whenever the server asks me to wait before sending more emails, I wrote Sendmail-TryQueue. It saves emails that could not be sent on disk (readable EML format, plus shell script for the exact sendmail command that was used).
Finally I use Alot to manage the notifications/items.
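Not the tools described above, but for the curious, a bare-bones sketch of the same feed-to-mailbox idea in Ruby (feed URL and address are placeholders):

```ruby
require "rss"
require "open-uri"

FEED = "https://example.com/feed.xml"   # placeholder feed
SEEN = File.expand_path("~/.rss-seen")  # one already-mailed link per line

seen = File.exist?(SEEN) ? File.readlines(SEEN, chomp: true) : []

URI.open(FEED) do |raw|
  RSS::Parser.parse(raw).items.each do |item|
    next if seen.include?(item.link)

    mail = <<~MAIL
      To: me+rss@example.com
      Subject: #{item.title}

      #{item.link}
    MAIL
    # Hand the message to the local MTA; -t reads the recipient from the headers.
    IO.popen(["sendmail", "-t"], "w") { |io| io.write(mail) }
    File.write(SEEN, "#{item.link}\n", mode: "a")
  end
end
```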
Thunderbird is one client.
I can also use it via the fastmail web ui, or my phone.
Lastly, the chromedriver integration means I get full articles with images, instead of snippets.
Ah, I think I misunderstood its features and your workflow. And now I’m curious. How does the non-RSS bit work? Do you customize & redeploy when adding new sources? In other words, how easy or hard is it to generalize extracting the useful bits, especially in today’s world of “CSS-in-JS” where sane as in human-friendly class names go away?
So, the current incarnation has several builtins, each wrapping a simpler primitive:
My planned-future changes:
I use rss2email which basically does the same thing.
I wrote an RSS reader which is meant for cron jobs, and which is btw the reader I use.
https://gitlab.com/dacav/crossbow
Version 0.9.0 is usable. I plan to release version 1.0.0 soon.
There’s also Artichoke (GitHub: https://github.com/artichoke/artichoke).
I can’t wait to see what Artichoke could do for video games in terms of rapid prototyping, configurability, and new content generation.
It’s probably worth pointing out that there’s also https://crystal-lang.org/ which—if you haven’t heard about it yet—is basically “if Ruby and Go had a child”. The biggest trade-off is that it’s a compiled language. The other trade-off might or might not be that it’s a typed language, but there’s type inference.
I agree that infinite ranges are best!
However, I would order the other two differently, or perhaps label them equals.
Contributors to Rails have repeatedly stated that Arel is considered an internal, private API, and it is not recommended for use in application code. To my knowledge, they also do not explicitly call out changes to Arel in release notes. I realize their pleading gets little attention. They also know their pleading gets little attention. That does not make it a good idea to ignore those pleas.
In the case of raw SQL for a comparison operator, the two proposed drawbacks are less impactful (in my opinion) than requests from the Rails team.
Yes, raw SQL is not preferable in a general sense. It also technically has a higher risk of injection, in general cases. However, when used with keyword interpolation, the values will ultimately run through ActiveRecord::ConnectionAdapters::Quoting#quote. If your Ruby Date object (or an ActiveSupport::TimeWithZone object, or any other comparable database type with a Ruby equivalent) would cause an issue in that code, we’ve all got much bigger problems than just less-than and greater-than operators.
With regards to “database adapter compatibility”, I question whether less-than and greater-than are, in reality, not portable across different SQL databases? I am ignorant where this might be so, and would be happy to learn of those cases.
But if so, is that transition between two database engines (with such wildly different comparison operators, and therefore presumably other differences?) more likely than changes to a private/internal API, or less likely? It’s a bet on one risk or another, and I think either one can be said to be a crappy bet in a general sense.
In the case of these comparison operators (rather than “in general”), it feels like an incredibly minor difference, but one that leans toward the raw SQL. They are both changes which could bring pain. One of the changes you are possibly in control of: are you likely to change databases to one which does not support the > and < operators? The other change you do not control: does the Rails core team change something internal to Arel?
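For anyone following along, the two styles under discussion look roughly like this (Post and published_at are placeholder names):

```ruby
# Raw SQL fragment with keyword interpolation; the value is quoted by
# ActiveRecord (ultimately via Quoting#quote) before hitting the database.
Post.where("published_at > :cutoff", cutoff: 1.week.ago)

# The same comparison expressed through Arel's predicate builder.
Post.where(Post.arel_table[:published_at].gt(1.week.ago))
```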
I really really wish queries in ActiveRecord could be built like in Sequel. It’s so much nicer than Arel, which like you said you really shouldn’t be using in production anyway. Honestly, the only way to do anything relatively complex with the database in ActiveRecord involves string interpolation and sanitization. It’s the biggest complaint I have with the entire stack.
I’ve had some success using interpolation with to_sql (which sanitizes for you). It’s still a bit yuck but it’s the least bad alternative I’ve found in rails.
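Something like this, I’d guess (models invented for the example); the inner relation is built with normal ActiveRecord, so its values are already quoted by the time #to_sql is interpolated:

```ruby
# Build the subquery with regular ActiveRecord, then splice its SQL into a
# larger hand-written fragment.
paid = Order.where(status: "paid").select(:customer_id)
Customer.where("customers.id IN (#{paid.to_sql})")
```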
I have only used Sequel on one side project. I really, really enjoyed it, and wish I had the opportunity to use it at work. Alas, decisions made years ago about this-or-that ORM are not worth the literal business cost to undo at the expense of more impactful, revenue-driving features.
One of the ideas of ActiveRecord in its early days, as stated by DHH himself, is not that SQL is bad and we should avoid writing it at all costs for some ideological reason. Instead his idea was that the simplest of SQL queries (e.g. a bunch of WHERE clauses with a LIMIT or JOIN thrown in) should be able to be expressed in very simple, application-native code. Not exactly his words, but something like that, as well as some comment about how ActiveRecord works very purposefully to let you write raw SQL when you feel you need to. If I could find the right Rails book I purchased once-upon-a-2010 I would find the exact quote, but I think the idea remains.
Sequel is great, but I have not used it “in anger” to know where the warts are. ActiveRecord has warts, and I know where they are. Despite those, it is good enough in many cases, and in the cases where it is not, was explicitly built to allow the programmer an “out”, and to write SQL when they really need to.
I have listened to the The Bike Shed podcast for many years running. During the era when Sean Griffin was a host, he was both paid to contribute to ActiveRecord full-time (I think?) and was building a new, separate ORM in Rust. Some of the discussions provided a very interesting lens into some of the tradeoffs in ActiveRecord: which were inherent, and which were just incidental choices or entrenched legacies that need not remain in an ideal world.
EDIT: Followup thought. You really do need a mental model for ActiveRecord::Relation when using “ActiveRecord”. Something I contributed at work (and which I hope to open source somehow in 2020) was an extension (patch?) to the awesome_print gem that previews ActiveRecord::Relation more intelligently. After building it, I realized that both junior and mid-level engineers on my team did not completely grok ActiveRecord::Relation, and that just being able to see bits of it splayed out, in chunks more discrete than just calling #to_sql, helped them feel more confident that what they were building was the right thing.
The problem with interpolating to_sql or using any form of SQL strings is that ActiveRecord scopes can no longer be composed for any mildly more complicated/useful queries, especially if ActiveRecord tries to alias a sub-query one way or another, as strings are exempt from aliasing. ActiveRecord doesn’t parse any SQL strings. This is a problem as you don’t know who or what will consume/compose queries with those scopes using SQL strings later. Changing a scope which is used in many contexts to use literal SQL becomes a very dangerous undertaking as it might break many of its consumers due to the above. So I’m with @colonelpanic on this one. IMO, the Rails Core team should either embrace Arel and its direct use or maybe replace it with something better.
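A rough illustration of that aliasing problem (hypothetical model):

```ruby
class Post < ApplicationRecord
  # Structural condition: ActiveRecord knows the table and column, so it can
  # re-alias them if this scope ends up inside a join or aliased sub-query.
  scope :recent_arel, -> { where(arel_table[:created_at].gt(1.week.ago)) }

  # Literal SQL: the string is passed through untouched. If the relation gets
  # aliased (say to "recent_posts"), the fragment still says "posts.created_at"
  # and the composed query breaks.
  scope :recent_sql, -> { where("posts.created_at > ?", 1.week.ago) }
end
```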
The other thing I’ve had success with in rails: PostgreSQL supports updatable views.
Turning a monster query into a view is a big, ugly undertaking - but so far I’ve only needed it after a project has become a success (at which point I don’t mind too much), and it tends to happen to the least-churned tables (I’ve only had to modify these kinds of views once or twice).
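For context, a sketch of how that can look in a Rails migration (view and table names invented); a simple single-table view like this is automatically updatable in PostgreSQL, so writes through the model keep working:

```ruby
class CreateActiveCustomersView < ActiveRecord::Migration[6.1]
  def up
    execute <<~SQL
      CREATE VIEW active_customers AS
        SELECT * FROM customers WHERE archived_at IS NULL;
    SQL
  end

  def down
    execute "DROP VIEW active_customers;"
  end
end

# An ordinary model can then sit on top of the view.
class ActiveCustomer < ApplicationRecord
  self.table_name = "active_customers"
end
```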
Contributors to Rails have repeatedly stated that Arel is considered an internal, private API, and it is not recommended for use in application code.
I have very little sympathy for this position because the official query interface is simply not adequate for even mildly complicated use-cases.
I’ve been using Arel directly, and even patching in new features, for ten years. Can’t think of a time it’s ever been an issue.
I will continue to use Arel until a better alternative presents itself. String interpolation is not a serious alternative.
In Rails’ code, the core example of utilizing #arel_table is exactly greater_than: https://github.com/rails/rails/blob/c56d49c26636421afa4f088dc6d6e3c9445aa891/activerecord/lib/active_record/core.rb#L266
The bigger concern with SQL injection is future developers adding unsafe code into the string, so avoiding raw strings is preferable.
As far as database compatibility, there are plenty of non-SQL database adapters available, and sticking to some form of Arel or built-in syntax, rather than SQL, keeps it more likely to translate to many different databases. It’s not “a must”, but it’s pretty sweet to swap out adapters on an app and have everything “Just work”
As far as database compatibility, there are plenty of non-SQL database adapters available, and sticking to some form of Arel or built-in syntax, rather than SQL, keeps it more likely to translate to many different databases.
I am highly skeptical that there are actually that many databases in use which wouldn’t be just as happy with the simpler greater-than/less-than formulations.
With regards to “database adapter compatibility”, I question whether less-than and greater-than are, in reality, not portable across different SQL databases? I am ignorant where this might be so, and would be happy to learn of those cases.
FWIW here are the ORM-to-SQL operator mappings for Django’s four built-in database backends:
So if there’s a database where > and < aren’t the greater-than/less-than operators, it isn’t one of those four.
The headline is exciting but the link goes to a page that tells you that there’s an optional module (not built by default), which has been the case for a while. The only change since I last read the module docs is that it no longer requires a patched version of OpenSSL.
That’s correct, but the vast majority of NGINX features are behind feature flags that are not compiled by default. Apologies that the headline is a bit sensationalist, but this is the first time that HTTP/3 has been in NGINX mainline as far as I am aware.
As an example, --with-http_ssl_module is a flag that most distros have to put in to allow for any HTTPS support in NGINX, same with HTTP/2. NGINX by “default” does not have SSL or HTTP/2 enabled or even compiled out of the box; both are locked behind compile flags, but obviously almost every distribution enables these flags.
IMO, this is arguing semantics and in general I’d even disagree that this is correct. According to http://nginx.org/en/linux_packages.html#dynmodules by default we have:
However, in practice, I’m not even sure what this means and for whom. For instance, assuming “Main” refers to mainline: if you install mainline Nginx from their provided package repositories, e.g. http://nginx.org/en/linux_packages.html#Ubuntu, SSL and HTTP/2 are definitely enabled; they have been for years. So if those flags are only relevant if you build from source, then that point will be moot for the majority of people installing it from either the package repos of their distribution of choice or using Nginx-provided repos.
And it gets even more weird. From the OP we get
but http://nginx.org/en/linux_packages.html#distributions also has
which would somewhat imply that all other distributions will have it enabled if installed from their package repos presumably.