1. 5

    I see a lot of posts on Firefox vs Chrome (or in this case Chromium), and it always seems to be people lobbying for others to use Firefox for any number of moral or security reasons. The problem I see with a lot of this is that Firefox just isn’t as good a user experience as Chromium-based browsers. Maybe Mozilla has the best intentions as a company, but if their product is subjectively worse, there’s nothing you can really do.

    I’ve personally tried going back to Firefox multiple times and it doesn’t fulfill what I need, so I inevitably switch back to Vivaldi.

    1. 10

      This is really subjective. I tried using ungoogled-chromium but switched back to Firefox. I used Vivaldi for a while but switched to Firefox as well. Before that I was using the Firefox fork Pale Moon, but I got concerned about the lack of updates (due to how small the team is).

      1. 2

        Sure it is, but almost 80% of the world is using a Chromium browser right now and Firefox is stagnant at best, slowly losing ground. Firefox even benefits from being around longer, having a ton of goodwill, and some name recognition, and it still can’t gain market share.

        1. 8

          It also doesn’t get advertised every time you visit Google from another browser. It also isn’t installed by default on every Android phone.

          1. 8

            Firefox also isn’t installed by default by a bunch of PC vendors.

            1. 1

              Firefox already had its brand established for years before that happened. It’s also worth noting that Microsoft ships with its browser (which is now a Chromium variant, but wasn’t until recently) and doesn’t even use Google as the search engine, so the vast majority of new users don’t start with a browser that’s going directly to google to even see that message.

              1. 2

                And yet they start with a browser, so why replace something that already works? Discounting those pesky moral reasons, of course, as if those are not worth anything.

            2. 4

              Among technical users who understand browsers, sure, you might choose a browser on subjective grounds like the UX you prefer. (Disclaimer: I prefer the UX of Firefox, and happily use it just fine.)

              Most people do not know what a browser even is. They search for things on Google and install the “website opener” from Google (Chrome) because that’s what Google tells you to do at every opportunity if you are using any other browser.

              When some players have a soap box to scream about their option every minute and others do not, it will never matter how good the UX of Firefox is. There’s no way to compete with endless free marketing to people who largely don’t know the difference.

              1. 1

                If that were the case, people would switch back to Edge and Safari, because both Windows and macOS ask you to switch back, try it out again, etc. every so often.

                The UX of Firefox is OK (they keep ripping off the UI of Opera/Vivaldi though, fwiw, and have been doing so forever), but it functionally does not work in many cases where it should, or it behaves oddly. Also, from a pure developer perspective, their dev tools are inferior to what has come out of the Chromium project. They used to have the lead there with Firebug, too, but they got outpaced.

          2. 1

            Yeah, I switched to Firefox recently and my computer has been idling high ever since. Any remotely complicated site being left as the foreground tab seems to be the culprit.

          1. 64

            Except that, as far as I can tell, Firefox isn’t produced by a malicious actor with a history of all sorts of shenanigans, including a blatantly illegal conspiracy with other tech companies to suppress tech wages.

            Sure, if your personal threat model includes nation states and police departments, it may be worthwhile switching to Chromium for that bit of extra hardening.

            But for the vast majority of people, Firefox is a better choice.

            1. 13

              I don’t think we can meaningfully say that there is a “better” choice; web browsers are in a depressing technical situation where every decision has significant downsides. Google is obviously nefarious, but they have an undeniable steering position. Mozilla is more interested in privacy, but depends on Google, and they can’t decide to break the systems that were created to track and control their users, because most non-technical users perceive the lack of DRM as something being broken (“Why won’t Netflix load?”). Apple and Microsoft are suspicious for other reasons. Everything else doesn’t have the manpower to keep up with Google and/or the security situation.

              When I’m cynical, I like to imagine that Google will lead us into a web “middle age”, that might clean the web up. When I’m optimistic, I like to imagine that a web “renaissance” would manage to break off Google’s part in this redesign and result in a better web.

              1. 19

                Mozilla also has a history of doing shady things and deliberately designed a compromised sync system because it is more convenient for the user.

                Not to mention, a few years ago I clicked on a Google search result link and immediately had a malicious EXE running on my PC. At first I thought it was a popup, but no, it was a drive-by attack with me doing nothing other than opening a website. My computer was owned, only a clean wipe and reinstallation helped.

                I’m still a Firefox fan for freedom reasons but unfortunately, the post has a point.

                1. 11

                  a few years ago I clicked on a […] link and immediately had a malicious EXE

                  I find this comment disingenuous because every browser on every OS has had, or still has, issues with a similar blast radius. Prominent examples include hacking game consoles or closed operating systems via the browser, all of which ship some version of the WebKit engine. Sure, those hacks were used to “open up” the system, but they could have been (and usually are) abused in exactly the way you described here.

                  Also, I’m personally frustrated by people holding Mozilla to a higher standard than Google, when it really should be the absolute opposite given how much Google knows about each individual compared to Mozilla. Yes, it would be best if some of the linked issues could be resolved, such that Mozilla can’t intercept your bookmark sync, but I gotta ask: is that a service people should really be worried about? Meanwhile, Google boasts left, right, and center about how your data is secure with them, and we all know what that means. Priorities, people! The parent comment is absolutely right: Firefox is a better choice for the vast majority of people, because Mozilla as a company is much more concerned about all of our privacy than Google. Google’s goal always was, and always will be, to turn you into data points and make a buck off that.

                  1. 1

                    your bookmark sync

                    It’s not just bookmark sync. Firefox sync synchronizes:

                    • Bookmarks
                    • Browsing history
                    • Open tabs
                    • Logins and passwords
                    • Addresses
                    • Add-ons
                    • Firefox options

                    If you are using these features and your account is compromised, that’s a big deal. If we just look at information security, I trust Google more than Mozilla with keeping this data safe. Of course Google has access to the data and harvests it, but the likelihood that my Google data leaks to hackers is probably lower than the likelihood that my Firefox data leaks to hackers. If I have to choose between leaking my data to the government or to hackers, I’d still choose the government.

                    1. 1

                      If I have to choose between leaking my data to the government or to hackers, I’d still choose the government.

                      That narrows down where you live, a lot.

                      Secondly, I’d assume that any data leaked to hackers is also available to Governments. I mean, if I had spooks with black budgets, I’d be encouraging them to buy black market datasets on target populations.

                      1. 1

                        I’d assume that any data leaked to hackers is also available to Governments.

                        Exactly. My point is that governments occasionally make an effort not to be malicious actors, whereas hackers who exploit systems usually don’t.

                  2. 6

                    I clicked on a Google search result link

                    Yeah, FF is to blame for that, but also lol’d at the fact that Google presented that crap to you as a result.

                    1. 3

                      Which nicely sums up the qualitative difference between Firefox and Google. One has design issues and bugs; the other invades your privacy to sell the channel to serve up .EXEs to your children.

                      Whose browser would you rather use?

                    2. 3

                      Mozilla also has a history of doing shady things and deliberately designed a compromised sync system because it is more convenient for the user.

                      Sure, but I’d argue that’s a very different thing, qualitatively, from what Google has done and is doing.

                      I’d sum it up as “a few shady things” versus “a business model founded upon privacy violation, a track record of illegal industry-wide collusion, and outright hostility towards open standards”.

                      There is no perfect web browser vendor. But the perfect is the enemy of the good; Mozilla is a lot closer to perfect than Google, and deserves our support on that basis.

                    3. 8

                      These mitigations are not aimed at nation-state attackers; they are aimed at people buying ads that contain malicious data that can compromise your system. The lack of site isolation in Firefox means that, for example, someone who buys an ad on a random site that you happen to have open in one tab, while another tab is showing your Internet banking page, can use Spectre attacks from JavaScript in the ad to extract all of the information (account numbers, addresses, last transaction) displayed in the other tab. This is typically all that’s needed for telephone banking to do a password reset if you phone the bank and say you’ve lost your credentials. These attacks are not possible in any other mainstream browser (and are prevented by WebKit2 for any obscure ones that use that, because Apple implemented the sandboxing at the WebKit layer, whereas Google hacked it into Chrome).

                      1. 2

                        Hmmmm. Perhaps I’m missing something, but I thought Spectre was well mitigated these days. Or is it that the next Spectre, whatever it is, is the concern here?

                        1. 11

                          There are no good Spectre mitigations. There’s speculative load hardening, but that comes with around a 50% performance drop, so no one uses it in production. There are mitigations on array access in JavaScript that are fairly fast (Chakra deployed these first, but I believe everyone else has caught up), but that’s just closing one exploit technique, not fixing the bug, and there are a bunch of confused-deputy operations you can do via DOM invocations to achieve the same thing. The Chrome team has basically given up, said that it is not possible to keep anything in a process secret from other parts of that process on current hardware, and so has pushed for more process-based isolation.

                    1. 10

                      A reminder that we shouldn’t be leaving our private communications at the mercy of proprietary software.

                      Open source should be a legal requirement in many scenarios.

                      1. 6

                        Of course the issue is that most governments are not interested in private communications to begin with, often quite the opposite.

                      1. 4

                        So can we watch Netflix on FreeBSD now?

                        1. 3

                          I suppose it’s more about the use of FreeBSD inside the Netflix infrastructure.

                          1. 5

                            Yeah, that was the point. Netflix happily uses FreeBSD but couldn’t care less about FreeBSD users.

                            1. 15

                              Of course not. Why would a for-profit media company waste (expensive) resources to support an OS that basically nobody uses on the desktop?

                              I know it sounds harsh, but FreeBSD desktop use is irrelevant to any company.

                              1. 1

                                Gaming on Linux was mostly irrelevant until Steam found a reason to support/foster it (applying pressure on Microsoft + Apple and their app stores). Given that the PS4 (and presumably PS5) uses FreeBSD for its OS, and Netflix supports that platform, there’s probably some incentive there to upstream certain things. Though I presume Sony is happy to keep the status quo for the moment.

                                1. 2

                                  I imagine a lot of the PS4 graphics code they write is under NDA with AMD since they’re not just using off-the-shelf components, but I could be wrong. Has Sony given anything back?

                                  1. 1

                                    Has Sony given anything back?

                                    Not that I know of but then I’m totally the wrong person to answer that question.

                              2. 7

                                Hey, at least they’re in the second-largest donor class this year. I’d think FreeBSD development deserves more, all things considered.

                            2. 3

                              Sure you can, in a Linux/Windows/Android VM under bhyve :p

                            1. 4

                              “hours of rollouts and draining and reconnection storms with state losses.”

                              I work with a platform that’s mostly built from containers running services (no k8s here though, if that’s important), but the above isn’t familiar to me.

                              State doesn’t get lost: load balancers drain connections and new tasks are spun up and requests go to the new tasks.

                              When there’s a failure: Retries happen.

                              When we absolutely have to have something work (eventually) or know everything about the failure: Persistent queues.

                              The author doesn’t specify what’s behind the time necessary for rollouts. I’ve seen some problematic services, but mostly rollout takes minutes - and a whole code/test/security scan/deploy to preprod/test/deploy to prod/test/… cycle can be done in under an hour, with the longest part being the build and security scanning.

                              The author also talks about required - and scheduled - downtime. Again I don’t know why the platform(s) being described would necessarily force such a requirement.

                              1. 13

                                Here’s one example: the service may require gigabytes of state to be downloaded to work with acceptable latency on local decisions, and that state is replicated in a way that is constantly updated. These could include ML models, large routing tables, or anything of the kind. Connections could be expected to be active for many minutes at a time (not everything is a web server serving short HTTP requests that are easy to resume), and so on.

                                Rolling the instances means having to re-transfer all of that data and re-establish all of that state, on all of the nodes. If you have a fleet of 300 instances that require 10 minutes to shut down from connection drains, and they take 10 minutes to come back up, re-sync their state, and return to top performance, then rolling them in batches of 10 (because you want it somewhat gradual, and to leave time for things to rebalance) will take roughly 10 hours, longer than a working day.
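
                                The batch arithmetic above can be sketched as a quick back-of-the-envelope calculation (a hypothetical helper, using the numbers from this comment):

                                ```ruby
                                # Rollout duration for a serial drain-then-warm-up strategy:
                                # each batch pays the full drain time plus the full warm-up time.
                                def rollout_minutes(instances:, batch_size:, drain_min:, warmup_min:)
                                  batches = (instances.to_f / batch_size).ceil
                                  batches * (drain_min + warmup_min)
                                end

                                # 300 instances, batches of 10, 10 min drain + 10 min state re-sync:
                                minutes = rollout_minutes(instances: 300, batch_size: 10,
                                                          drain_min: 10, warmup_min: 10)
                                puts "#{minutes / 60.0} hours"  # => 10.0 hours
                                ```

                                Overlapping drain and warm-up across batches, or bringing new instances up before draining old ones, only changes the per-batch term in this formula.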

                                1. 3

                                  I do have some services which work in a similar way, come to think of it - loading some mathematical models and warming up before they’re performing adequately - along similar timescales.

                                  I think we’ve been lucky enough not to be working at the number of instances you’ve described there, or to have fast enough iteration on the models for us to need to release them as often as daily.

                                  For Erlang to be taken out of the picture where it was working nicely doing what (OTP) does best does sound painful.

                                  1. 7

                                    We have something similar, at much larger numbers than described. We cut over new traffic to the new service versions, and keep the old service versions around for 2-4 weeks as the long tail of work drains.

                                    It sucks. I really wish we had live reloading.

                                  2. 3

                                    On the other hand, the article mentions something like “log in to the REPL and shoot the upgrade”, which sounds like manual work. I would think that 1 hour of manual work versus 10 hours of automated rollout have different tradeoffs.

                                    As for the fleet shutdown calculation, you can also deal with that differently. You can at the very least halve the time by first bringing the new instances up, then shutting down the old ones, so your batch doesn’t take 20 minutes, but 10. If you want to “leave time for things to rebalance”, you still have to do that in the system you described in the article.

                                    Now, I’m not saying that I don’t agree with a lot of what you wrote there. But I did get a vibe of talking down on containers or k8s, rather than comparing the tradeoffs. Mostly, though, I do agree with what you’ve said.

                                    1. 1

                                      You can at the very least halve the time by first bringing the new instances up, then shutting down the old ones, so your batch doesn’t take 20 minutes, but 10.

                                      That doesn’t sound right. It takes 10 minutes to bring up new instances and 10 minutes to drain the old ones, at least that’s my understanding. Changing the order of these steps has the advantage of over-provisioning, so that availability can be guaranteed, but the trade-off is (slightly?) higher short-term cost (10h in that example). Doing the two steps in parallel is of course an option, and probably what you suggest.

                                1. 3

                                  Unless you need i18n; in that case, never manipulate case with CSS, or at least make sure you only ever do it when .en is present on the body or something.

                                  1. 1

                                    Care to elaborate a bit more? ferd gives a few examples supporting the OP. Same arguments would apply to languages like German and Russian.

                                    1. 2

                                      Taken from MDN:

                                      The text-transform property is not reliable for some locales; for example, text-transform: uppercase won’t work properly with languages such as Irish/Gaelic. For example, App Size in English may be capitalized via text-transform: uppercase to APP SIZE but in Gaelic this would change Meud na h-aplacaid to MEUD NA H-APLACAID which violates the locale’s orthographic rules, as it ought to be MEUD NA hAPLACAID. In general, localizers should make the decision about capitalization. If you want to display WARNING, add a string with that capitalization, and explain it in the localization note.

                                      The examples they give here are only for Gaelic, but I would imagine there is more than one language like this; no font is going to encapsulate the orthographic complexities of planet Earth.

                                      Not to mention that shipping however many custom fonts it might take to handle all of these (which are probably already present, as best as possible, on the end user’s computer) results in more web page bloat.

                                      1. 1

                                        Thank you for following up, TIL. I’m still unconvinced about “never manipulate case with CSS”, but the rest of your remark about i18n (more precisely: only use text-transform for certain lang values) makes absolute sense. Given that so many web sites/apps don’t even support i18n to begin with, IMO the benefits of using it (see ferd’s comment) outweigh the potential negatives. Once you decide to go all in on i18n and support as many languages as possible, you’ll usually run into many other cases where most i18n implementations will fail you one way or another.

                                  1. 3

                                    I wrote a cron job that fetches RSS feeds and pipes new items into a folder in my emails.

                                    Advantages:

                                    • Most mail clients (well, the ones I use) support basic styling, HTML & images
                                    • Search is already implemented (by the mail host)
                                    • Read / unread tracking is already implemented, and syncs across devices
                                    • Clients can be configured to prefetch attachments, so you can read offline and sync up the read state afterwards.
                                    • The fetch script can work on things that aren’t RSS via chromedriver

                                    Disadvantages:

                                    • Getting attachments to display inline on a variety of clients took too much work.
                                    • It’s kind of a hack
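
                                    A minimal sketch of what such a cron job can look like (not the parent’s actual script; the addresses are placeholders, and a real version needs error handling plus a state file of already-seen GUIDs):

                                    ```ruby
                                    require "rss"

                                    # Return feed items whose GUID (or link, as a fallback ID) we haven't seen yet.
                                    def new_items(feed_xml, seen_guids)
                                      feed = RSS::Parser.parse(feed_xml, false)
                                      feed.items.reject { |item| seen_guids.include?(item.guid&.content || item.link) }
                                    end

                                    # Render one item as a plain RFC 5322 message, ready for Net::SMTP or a
                                    # local sendmail pipe. The addresses here are placeholders.
                                    def item_to_mail(item, to: "feeds@example.com")
                                      <<~MAIL
                                        From: rss-bot@example.com
                                        To: #{to}
                                        Subject: #{item.title}
                                        Content-Type: text/html; charset=UTF-8

                                        #{item.description}
                                        <p><a href="#{item.link}">Original article</a></p>
                                      MAIL
                                    end
                                    ```

                                    The cron entry would then fetch each feed URL, call new_items against the stored GUIDs, and hand each message to Net::SMTP or the local sendmail; read/unread tracking and search come for free from the mail host, as listed above.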
                                    1. 3

                                      I use Newsboat as a backend for fetching RSS items.

                                      I wrote Newsboat-Sendmail which taps into the Newsboat cache to send emails to a dedicated email address.

                                      To make sure the emails’ IDs and subjects are kept whenever the server asks me to wait before sending more emails, I wrote Sendmail-TryQueue. It saves emails that could not be sent to disk (a readable EML format, plus a shell script containing the exact sendmail command that was used).

                                      Finally I use Alot to manage the notifications/items.

                                      1. 2

                                        …so basically Thunderbird.

                                        1. 1

                                          Thunderbird is one client.

                                          I can also use it via the fastmail web ui, or my phone.

                                          Lastly, the chromedriver integration means I get full articles with images, instead of snippets.

                                          1. 1

                                            Ah, I think I misunderstood its features and your workflow. And now I’m curious. How does the non-RSS bit work? Do you customize & redeploy when adding new sources? In other words, how easy or hard is it to generalize extracting the useful bits, especially in today’s world of “CSS-in-JS” where sane as in human-friendly class names go away?

                                            1. 1

                                              So, the current incarnation has several builtins, each wrapping a simpler primitive:

                                              • The simplest is just ‘specify a feed url and it’ll grab the content from the feed and mail it to you’.
                                              • The next simplest-but-useful is ‘specify a feed url and it’ll grab the link from the feed, fetch the link, parse it as html, extract all content matching a css selector, inline any images, and mail it to you’. This works well for eg webcomics.
                                              • The third level replaces ‘fetch the link’ with ‘fire up chrome to fetch the link’ but is otherwise similar.

                                              My planned-future changes:

                                              • Use chromedriver but specify the window size and content coordinates; this should work around css-in-js issues by looking for boxes of approximately the right size / position in the document. I’m not currently following any feeds that need this, though.
                                              • Store values and look for changes. I plan to use this to (eg) monitor price changes on shopping sites.
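
                                              The second level above amounts to “fetch, select, inline, mail”. A rough sketch of the selection step (a hypothetical helper; the real tool presumably uses a proper HTML parser with CSS selectors such as Nokogiri, while this uses Ruby’s bundled REXML with a hand-translated XPath, which only copes with well-formed markup):

                                              ```ruby
                                              require "rexml/document"

                                              # Extract the serialized markup of all elements matching tag + class,
                                              # roughly what a CSS selector like "div.comic" selects. REXML only
                                              # handles well-formed (XHTML-ish) input; a production version would
                                              # use Nokogiri and real CSS selectors instead.
                                              def extract_by_class(html, tag, klass)
                                                doc = REXML::Document.new(html)
                                                REXML::XPath.match(doc, "//#{tag}[@class='#{klass}']").map(&:to_s)
                                              end
                                              ```

                                              Inlining images and mailing the result then reuses the same machinery as the plain-feed case; the chromedriver level only swaps out how the page source is obtained.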
                                        2. 2

                                          haha, I like this one. You’ve turned RSS into newsletters!

                                          1. 1

                                            mailchimp sells this as a feature.

                                          2. 1

                                            I use rss2email which basically does the same thing.

                                            1. 1

                                              I wrote an RSS reader meant for cron jobs, which is btw the reader I use.

                                              https://gitlab.com/dacav/crossbow

                                              Version 0.9.0 is usable. I plan to release version 1.0.0 soon.

                                            1. 12
                                              1. 2

                                                I can’t wait to see what Artichoke could do for video games in terms of rapid prototyping, configurability, and new content generation.

                                                1. 1

                                                  It’s probably worth pointing out that there’s also https://crystal-lang.org/ which, if you haven’t heard of it yet, is basically “if Ruby and Go had a child”. The biggest trade-off is that it’s a compiled language. The other trade-off, which might or might not matter to you, is that it’s statically typed, though with type inference.

                                                  1. 1

                                                    That does look pretty cool!

                                              1. 2

                                                I agree that infinite ranges are best!
                                                However, I would order the other two differently, or perhaps label them equals.

                                                Contributors to Rails have repeatedly stated that Arel is considered an internal, private API, and it is not recommended for use in application code. To my knowledge, they also do not explicitly call out changes to Arel in release notes. I realize their pleading gets little attention. They also know their pleading gets little attention. That does not make it a good idea to ignore those pleas.

                                                In the case of raw SQL for a comparison operator, the two proposed drawbacks are less impactful (in my opinion) than requests from the Rails team.

                                                Yes, raw SQL is not preferable in a general sense. It also technically has a higher risk of injection, in general cases. However, when used with keyword interpolation, the values will ultimately run through ActiveRecord::ConnectionAdapters::Quoting#quote. If your Ruby Date object (or an ActiveSupport::TimeWithZone object, or any other comparable database type with a Ruby equivalent) would cause an issue in that code, we’ve all got much bigger problems than just less-than and greater-than operators.

                                                With regard to “database adapter compatibility”, I question whether less-than and greater-than are really not portable across different SQL databases. I am ignorant of cases where this might be so, and would be happy to learn of them.

                                                But if so, is a transition between two database engines (with such wildly different comparison operators, and therefore presumably other differences) more likely than changes to a private/internal API, or less likely? It’s a bet on one risk or another; I think either one can be said to be a crappy bet in a general sense.

                                                In the case of these comparison operators (rather than “in general”), it feels like an incredibly minor difference, but one that leans toward the raw SQL. They are both changes that could bring pain. One of the changes you are possibly in control of: are you likely to change databases to one that does not support the > and < operators? The other change you do not control: does the Rails core team change something internal to Arel?
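
                                                For concreteness, the three styles under discussion look roughly like this (a sketch against a hypothetical Post model; it needs a Rails app with ActiveRecord loaded to actually run, so only a method is defined here):

                                                ```ruby
                                                # Hypothetical Post model with a created_at column.
                                                def posts_after(date)
                                                  # 1. Beginless/endless ranges (Ruby 2.6+; note this compiles
                                                  #    to >=, not a strict >):
                                                  Post.where(created_at: (date..))

                                                  # 2. Raw SQL with keyword interpolation; the value passes through
                                                  #    the adapter's quoting
                                                  #    (ActiveRecord::ConnectionAdapters::Quoting#quote):
                                                  # Post.where("created_at > :date", date: date)

                                                  # 3. Arel, which the Rails team considers a private, internal API:
                                                  # Post.where(Post.arel_table[:created_at].gt(date))
                                                end
                                                ```

                                                Note that the range form compiles to >= rather than >; if the strict comparison matters, that nudges further toward options 2 or 3.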

                                                1. 2

                                                  I really really wish queries in ActiveRecord could be built like in Sequel. It’s so much nicer than Arel, which like you said you really shouldn’t be using in production anyway. Honestly, the only way to do anything relatively complex with the database in ActiveRecord involves string interpolation and sanitization. It’s the biggest complaint I have with the entire stack.

                                                  1. 1

                                                    I’ve had some success using interpolation with to_sql (which sanitizes for you).

                                                    It’s still a bit yuck but it’s the least bad alternative I’ve found in rails.

                                                    1. 1

                                                      I have only used Sequel on one side project. I really, really enjoyed it, and wish I had the opportunity to use it at work. Alas, decisions made years ago about this-or-that ORM are not worth the literal business cost to undo at the expense of more impactful, revenue-driving features.

                                                      One of the ideas of ActiveRecord in its early days, as stated by DHH himself, is not that SQL is bad and we should avoid writing it at all costs for some ideological reason. Instead his idea was that the simplest of SQL queries (e.g. a bunch of WHERE clauses with a LIMIT or JOIN thrown in) should be able to be expressed in very simple, application-native code. Not exactly his words, but something like that, as well as some comment about how ActiveRecord works very purposefully to let you write raw SQL when you feel you need to. If I could find the right Rails book I purchased once-upon-a-2010 I would find the exact quote, but I think the idea remains.

                                                      Sequel is great, but I have not used it “in anger” to know where the warts are. ActiveRecord has warts, and I know where they are. Despite those, it is good enough in many cases, and in the cases where it is not, was explicitly built to allow the programmer an “out”, and to write SQL when they really need to.

                                                      I have listened to the The Bike Shed podcast for many years running. During the era when Sean Griffin was a host, he was both paid to contribute to ActiveRecord full-time (I think?) and was building a new, separate ORM in Rust. Some of the discussions provided a very interesting lens into some of the tradeoffs in ActiveRecord: which were inherent, and which were just incidental choices or entrenched legacies that need not remain in an ideal world.

                                                      EDIT: Followup thought. You really do need a mental model for ActiveRecord::Relation when using “ActiveRecord”. Something I contributed at work (and which I hope to open source somehow in 2020) was an extension (patch?) to the awesome_print gem that previews ActiveRecord::Relation more intelligently. After building it, I realized that both junior and mid-level engineers on my team did not completely grok ActiveRecord::Relation, and how just being able to see bits of it splayed out, in chunks more discrete than just calling #to_sql, helped them feel more confident that what they were building was the right thing.

                                                      1. 1

                                                        The other thing I’ve had success with in rails: PostgreSQL supports updatable views.

                                                        Turning a monster query into a view is a big, ugly undertaking, but so far I’ve only needed it after a project has become a success (at which point I don’t mind too much), and it tends to happen to the least-churned tables (I’ve only had to modify these kinds of views once or twice).
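                                                        A minimal sketch of the pattern, assuming a hypothetical accounts table (PostgreSQL treats simple single-table views like this as automatically updatable, so the application can write through them):

                                                        ```sql
                                                        -- A simple view over a single table: PostgreSQL considers it
                                                        -- automatically updatable, so INSERT/UPDATE/DELETE work against it.
                                                        CREATE VIEW active_accounts AS
                                                          SELECT id, email, created_at
                                                          FROM accounts
                                                          WHERE state = 'active';

                                                        -- Writes through the view hit the underlying accounts table.
                                                        UPDATE active_accounts SET email = 'new@example.com' WHERE id = 42;
                                                        ```

                                                        From Rails’ side the view can then be pointed at with a plain model (e.g. `self.table_name = "active_accounts"`), keeping the monster query out of application code.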

                                                        1. 1

                                                          The problem with interpolating to_sql or using any form of SQL strings is that ActiveRecord scopes can no longer be composed for any mildly complicated/useful queries, especially when ActiveRecord tries to alias a sub-query one way or another, because strings are exempt from aliasing: ActiveRecord doesn’t parse SQL strings. This is a problem because you don’t know who or what will later consume or compose queries with scopes built from SQL strings. Changing a scope that is used in many contexts to use literal SQL becomes a very dangerous undertaking, as it might break many of its consumers for the reasons above. So I’m with @colonelpanic on this one. IMO, the Rails core team should either embrace Arel and its direct use, or replace it with something better.

                                                      2. 2

                                                        Contributors to Rails have repeatedly stated that Arel is considered an internal, private API, and it is not recommended for use in application code.

                                                        I have very little sympathy for this position because the official query interface is simply not adequate for even mildly complicated use-cases.

                                                        I’ve been using Arel directly, and even patching in new features, for ten years. Can’t think of a time it’s ever been an issue.

                                                        I will continue to use Arel until a better alternative presents itself. String interpolation is not a serious alternative.

                                                        1. 1

                                                          In Rails’ code, the core example of utilizing #arel_table is exactly greater_than: https://github.com/rails/rails/blob/c56d49c26636421afa4f088dc6d6e3c9445aa891/activerecord/lib/active_record/core.rb#L266

                                                          The bigger concern with SQL injection is future developers adding unsafe code into the string, so avoiding strings is preferable.

                                                          As far as database compatibility, there are plenty of non-SQL database adapters available, and sticking to some form of Arel or built-in syntax, rather than SQL, keeps it more likely to translate to many different databases. It’s not a must, but it’s pretty sweet to swap out adapters on an app and have everything just work.

                                                          1. 1

                                                            As far as database compatibility, there are plenty of non-SQL database adapters available, and sticking to some form of Arel or built-in syntax, rather than SQL, keeps it more likely to translate to many different databases.

                                                            I am highly skeptical that there are actually that many databases in use which wouldn’t be just as happy with the simpler greater-than/less-than formulations.

                                                          2. 1

                                                            With regards to “database adapter compatibility”, I question whether less-than and greater-than are actually not portable across different SQL databases. I don’t know of a case where this is so, and would be happy to learn of one.

                                                            FWIW here are the ORM-to-SQL operator mappings for Django’s four built-in database backends:

                                                            So if there’s a database where > and < aren’t the greater-than/less-than operators, it isn’t one of those four.

                                                          1. 4

                                                            Revealing intentions. Enum, a module, a namespace, something. In Ruby, I’ve abused modules for this and it’s not super great but I’d still do it because it reads really nice.

                                                            class User
                                                              module Enabled; end
                                                              module Disabled; end
                                                            
                                                              attr_reader :account_state
                                                            
                                                              def initialize
                                                                @account_state = Enabled
                                                              end
                                                            
                                                              def enable!
                                                                @account_state = Enabled
                                                              end
                                                            
                                                              def disable!
                                                                @account_state = Disabled
                                                              end
                                                            end
                                                            
                                                            # main - but write tests in real world  ;)
                                                            user = User.new
                                                            if user.account_state == User::Enabled
                                                              puts "User is active."
                                                              user.disable!
                                                            end
                                                            
                                                            puts user.account_state
                                                            

                                                            When you run it:

                                                            User is active.
                                                            User::Disabled
                                                            

                                                            You can use enums or namespaces in other languages to do the same thing. Like I said, I don’t super love this and don’t do it as a rule. But instead of using true/false when I have states, sometimes I do it like this (in non-Ruby too). And since it’s dynamically typed, you need to test it, which is neither a pro nor a con. I just don’t like the namespace ;end bit. Note that these aren’t constants, because there’s no value being stored. It’s purely namespacing and intention-revealing. I think that’s kind of neat.

                                                            1. 2

                                                              Rather than module Enabled; end you could also use Enabled = Module.new.

                                                              1. 2

                                                                Wouldn’t :enabled and :disabled be more idiomatic Ruby?

                                                                1. 2

                                                                  I’d say so. You could still expose the User::Enabled and User::Disabled constants with those as the values.

                                                                  The use of “sentinel modules” for values is an odd choice and I can’t see any real benefits (comparisons might even be marginally more expensive with this implementation?).

                                                                  Also worth noting that you should always consider looking at things like ActsAsStateMachine rather than hand-rolling your own, of course.
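                                                                  A sketch of that suggestion (hypothetical, mirroring the earlier User class): keep the User::Enabled spelling for discoverability, but back the constants with plain symbols, so comparisons are cheap symbol equality.

                                                                  ```ruby
                                                                  class User
                                                                    # Named constants whose values are ordinary symbols: you keep
                                                                    # the User::Enabled namespace, but store/compare plain symbols.
                                                                    Enabled  = :enabled
                                                                    Disabled = :disabled

                                                                    attr_reader :account_state

                                                                    def initialize
                                                                      @account_state = Enabled
                                                                    end

                                                                    def enable!
                                                                      @account_state = Enabled
                                                                    end

                                                                    def disable!
                                                                      @account_state = Disabled
                                                                    end
                                                                  end

                                                                  user = User.new
                                                                  puts user.account_state == User::Enabled  # => true
                                                                  user.disable!
                                                                  puts user.account_state                   # => disabled
                                                                  ```

                                                                  This also plays nicely with code that passes bare :enabled/:disabled symbols around, since they compare equal to the constants.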

                                                                  1. 1

                                                                    Yes, that’s more common. But there’s no safety with symbols scattered everywhere (of course there’s no safety with modules scattered either). With symbols you’d have to go find the possible/expected values, whereas I think having the namespace has a chance of getting editor autocomplete to work outside the file. There’s really no guarantee either way, which is why testing is so valued/polished. I don’t always do this module trick. I think it just reveals intention, because :disabled becomes a namespaced module like User::Disabled.

                                                                    Other languages have this stuff more formalized and checked. Pattern matching with enums is pretty great (to me).

                                                                1. 9

                                                                  illumos, and more specifically Triton/SmartOS from Joyent, is excellent.

                                                                  Although I’ve done a recent rebuild of my home infrastructure and have moved on, I spent years running Joyent’s cloud platform: Triton, on a cluster of Intel NUCs. It’s great that they offer it open source, and I highly recommend people check it out, if they’re unfamiliar. Although we’re living in an ephemeral container-centric world, with lots of cool constructs and patterns evolving, the notion of having a container that acted just like a HVM was always a pleasurable and exciting one (illumos Zones, check them out!).

                                                                  And of course, Joyent really pushed their engineering with a great Docker API solution, too! So, I had a bunch of services running in zones, and a fair few containers too. All wrapped up with Terraform, Packer, and Ansible for provisioning. A lone KVM instance running OpenBSD for my OpenIKED VPN. I’m just rambling now, but I’m sure people can tell I loved that stack, and it demonstrates how flexible it is for something you can set up at home/in a private DC.

                                                                  TL;DR - if you’re not familiar with illumos, SmartOS, Triton (and Joyent in general), definitely check out their stuff. It’s all open source, and is really cool!

                                                                  1. 4

                                                                    Illumos is one of only two companies I would trust to run a docker container in production, the other being Google. I trust them because

                                                                    1. They have really solid systems engineers.

                                                                    2. Neither of them actually run the docker engine in production.

                                                                    1. 2

                                                                      You also gain dtrace and the best ZFS implementation. I’ve never had to run Docker in prod but this has been my planned solution since this became possible.

                                                                      1. 2

                                                                        Absolutely! There are some fantastic technologies that you get at your fingertips. I also forgot to mention in my post how exciting the Linux syscall translation was when it hit. OS level virtualization (containers) of the Linux kernel… on an illumos host. Mindblowing stuff. There are some excellent talks out there from @bcantrill (that are always very entertaining) on many of the things I’ve noted. I’d urge anyone reading, who’s curious about any of this, go watch some of them :)

                                                                        1. 2

                                                                          I was jealous for a long time because Zones were a bit more “complete” than FreeBSD jails and then their Linux syscall translation was also more complete than FreeBSD’s…

                                                                          Things are better now in FreeBSD land but Illumos still has a more polished solution…. and a damn fine network stack… and a damn fine CPU scheduler… and a damn fine memory management…

                                                                          If Solaris had been open-sourced sooner, I don’t know what the world would look like

                                                                        2. 1

                                                                          I’m a big fan of both of those technologies. So it only sweetens the deal for me.

                                                                        3. 1

                                                                          Doesn’t Google use Docker in production? That was surprising to me.

                                                                          1. 4

                                                                            Nope, they use their own container technology, which predates Docker by over a decade. They just wrap it in a Docker API facade to make it easier for you to interact with it.

                                                                        4. 1

                                                                          What are the reasons for moving?

                                                                          1. 2

                                                                            Good question! To be honest, although I loved the stack, it had gathered dust for a while, certainly in the sense of the methods I was using to define my infrastructure. The landscape changed pretty drastically in a short period of time in the Ops world. I was doing all this stuff with kubernetes and GitOps at work, and still deploying with Terraform and Ansible at home.

                                                                            A large part of why I have my home setup is to learn things, try things, develop things. I felt I wanted a stack that closer represented the things I was currently enjoying.

                                                                            I could have tried out running k8s on top of Triton, but to be honest, the implementation Joyent have blogged about looks a little hefty for my liking (and my resources). It leverages KVM instances to run various k8s components.

                                                                            I’ve been thoroughly enjoying Nix (and NixOS) for quite some time, so I decided I’d redesign my home cluster:

                                                                            • NixOS on the metal
                                                                            • All system expressions deployed to the servers via NixOps
                                                                            • Declarative setup of k8s and some accompanying ‘core’ services
                                                                            • k8s services defined with YAML/kustomize, slurped in and deployed via GitOps with ArgoCD

                                                                            I’ve been a massive nerd about it all and captured everything in a GitHub project, with a roadmap and issues for everything I plan to implement.

                                                                            Whilst I’m excited about it, it’s largely blocked at the moment by the state of k8s deployments on NixOS. The modules provided to bootstrap a k8s cluster are a bit wonky in their current state. I believe ‘offline hacker’ is doing a complete rework of it all in the background. So I’m very much looking forward to his work.

                                                                            1. 1

                                                                              Out of curiosity, is that GitHub project public?

                                                                        1. 2

                                                                          I wish there were another platform where I could publish articles as easily as I can on Medium. I hate their layout for non-logged-in users (and a lot more). But it’s easy to see how well an article is doing, and to be able to write on the go.

                                                                          1. 8

                                                                            Maybe https://write.as/ or https://dev.to could work. As a reader, I certainly prefer both over Medium.

                                                                          1. 1

                                                                            Is there any feature that doesn’t exist out-of-the-box on Linux?

                                                                            1. 4

                                                                              GPU drivers

                                                                              1. 1

                                                                                Most likely, I’ll end up buying a Lenovo A485 to replace my MBP 2012. To my knowledge, the AMD and Intel GPU drivers work out of the box these days so it’s only NVIDIA, right?!

                                                                              2. 2

                                                                                macOS support.

                                                                                1. 1

                                                                                  Nothing like that exists out of the box. Linux is but a kernel.

                                                                                  1. 1

                                                                                    In @soc’s defense, he wrote “[…] on Linux”.

                                                                                  2. 1

                                                                                    You don’t have to use xorg/wayland.

                                                                                  1. 2

                                                                                    Here’s a question for the ages: are there any actually-existing good hosted CI providers out there?

                                                                                    1. 7

                                                                                      Not if you need speed: http://bitemyapp.com/posts/2016-03-28-speeding-up-builds.html

                                                                                      I would honestly pay good money for reliable, tested deployment automation that stood things like CI up.

                                                                                      1. 1

                                                                                        Who’d you end up going with for the dedicated server / what are the specs on that machine like?

                                                                                        1. 2

                                                                                          Approximately this with NVMe RAID: https://www.ovh.com/us/dedicated-servers/infra/173eg1.xml

                                                                                          tbqh, most of the time we saved on compilation was lost to the GHCJS build later on. I was very sad.

                                                                                      2. 5

                                                                                        We use buildkite at my company. One nice aspect is that we get an agent to run on /our/ “hardware” (we just use large vm instances). It works pretty well.

                                                                                        1. 3

                                                                                          Another vote for buildkite here - their security posture is markedly better and you have much more control over performance.

                                                                                          1. 2

                                                                                            It’s probably worth mentioning here that GitLab offers similar functionality with their GitLab CI offering. You can use their infrastructure or install runners (their equivalent of agents) on as many machines as you like. Disclaimer: I haven’t used either yet but attended a meetup event where somebody praised them highly and ditched their Atlassian stack for that single reason.

                                                                                            1. 1

                                                                                              Their website looks intriguing. Could you elaborate on their security posture? Is it just an artifact of the on-premise build agent, or is there more to it than that?

                                                                                          2. 5

                                                                                            If you happen to run on Heroku, Heroku-CI works quite well. You don’t wait in a queue—we just launch a new dyno for every CI run, which happens while you blink. It’s definitely not as full-featured as Circle, or even Travis, but it’s typically good enough.

                                                                                            1. 1

                                                                                              At $WORK we run some things on Heroku but we can’t or don’t want to for most things — it’s either too expensive or the workload isn’t really well-suited for it.

                                                                                            2. 4

                                                                                              What do you need? I like Travis, they also get vastly better when you actually use the paid offering and they offer on-premise should you actually need it.

                                                                                              1. 2

                                                                                                I need builds to not take 25-30 minutes.

                                                                                                Bloodhound averages 25 minutes right now on TravisCI and that’s after I did a lot of aggressive caching: https://travis-ci.org/bitemyapp/bloodhound/builds/286053172?utm_source=github_status&utm_medium=notification

                                                                                                Gross.

                                                                                                1. 2

                                                                                                  I was asking cmhamill.

                                                                                                  But, just to be clear: your builds take 8-14 minutes. What takes time for you is the low concurrency settings on travis public/free infrastructure. It’s a shared resource, you only get so many parallel builds. That’s precisely why I referred to their paid offering: travis is a vastly different beast when using the commercial infrastructure.

                                                                                                  I also recommend not running the full matrix for every pull request, but just the stuff that frequently catches errors.

                                                                                                  1. 3

                                                                                                    I was asking cmhamill.

                                                                                                    You were asking in a public forum. I didn’t ask you to rebut or debate my experiences with TravisCI. https://github.com/cmhamill their email is on their GitHub profile if you’d like to speak with them without anyone one else chiming in. I’m relating an objection that is tied to real time lost on my part and that of other maintainers. It is a persistent complaint of other people I work with in OSS. I’m glad TravisCI’s free offering exists but I am not under the illusion that the value they’re providing was brought into existence ex nihilo with zero value derived from OSS.

                                                                                                    It’s a shared resource, you only get so many parallel builds. That’s precisely why I referred to their paid offering: travis is a vastly different beast when using the commercial infrastructure.

                                                                                                    We use commercial TravisCI at work. It’s better than CircleCI or Travis’ public offering but still not close to running a CI service on a dedis (singular or plural).

                                                                                                    I had to aggressively cache (multiple gigabytes) the build for Bloodhound before it stopped timing out. I’m glad their caching layer can tolerate something that fat but I wish it wasn’t necessary just to keep my builds working period.

                                                                                                    That combined with how unresponsive TravisCI has been in general leaves a sour taste. If there was a better open source CI option than something like DroneCI I’d probably have rented a dedi for the projects I work on already.

                                                                                                    1. 5

                                                                                                      You were asking in a public forum. I didn’t ask you to rebut or debate my experiences with TravisCI.

                                                                                                      You posted in a public forum and received some valid feedback based on the little context of your post ;)

                                                                                                  2. 1

                                                                                                    How long does it take on your local machine as a point of comparison?

                                                                                                    1. 2

                                                                                                      https://mail.haskell.org/pipermail/ghc-devs/2017-May/014200.html

                                                                                                      That’s just build, doesn’t include test suite, but the tests are a couple more minutes.

                                                                                                      1. 1

                                                                                                        Hm, that’s roughly the time your travis needs, too?

                                                                                                        https://travis-ci.org/bitemyapp/bloodhound/jobs/286053181#L539 -> 120.87s seconds

                                                                                                        1. 0

                                                                                                          Nope, the mailing list numbers do not include --fast and that makes a huge difference.

                                                                                                          You are off your rocker if you think the EC2 machines Travis uses are going to get close to what my workstation can do.

                                                                                                          1. 2

                                                                                                            Would you rather pay for a licensed software distribution that you drop in a fast dedicated computer you’ve bought and it turns that computer into a node in a CI cluster that can be used like Travis?

                                                                                                            Would you rather pay for a service just like Travis but more expensive and running on latest-and-greatest CPUs and such?

                                                                                                            1. 3

                                                                                                              Would you rather pay for a licensed software distribution that you drop in a fast dedicated computer you’ve bought and it turns that computer into a node in a CI cluster that can be used like Travis?

                                                                                                              If it actually worked well and I could test it before committing to a purchase, probably yes I would prefer that to losing control of my hardware or committing to a SAAS treadmill but businesses loooooooove recurring revenue and I can’t blame them.

                                                                                                              Would you rather pay for a service just like Travis but more expensive and running on latest-and-greatest CPUs and such?

                                                                                                              That seems like a more likely stop-gap as nobody seems to want to sell software OTS anymore. Note: it’s not really just CPUs, it’s tenancy. I’d rather pay SAAS service premium + actual-cost-of-leasing-hardware and get fast builds than the “maybe pay us extra, maybe get faster builds” games that most CI services play. Tell me what hardware I’m actually running on and with what tenancy so I don’t waste my time.

                                                                                                  3. 1

                                                                                                    Has anyone done the kind of dependency scan on Travis that this guy did on CircleCI? I suspect you’d see much the same.

                                                                                                    Travis does have one clear advantage here: it’s OSS, so you can SEE its dependencies and make your own decisions. See my note above about CircleCI needing to communicate better.

                                                                                                    1. 3

                                                                                                      Well… “scan”. They posted a screenshot of their network debugger tab :).

                                                                                                      Travis (.org) uses Pusher, but not Pusher’s tracking scripts. It integrates Google Analytics and, as such, communicates with it: ga.js is loaded from Google.

                                                                                                      The page connects to:

                                                                                                      • api.travis-ci.org
                                                                                                      • cdn.travis-ci.org (which ends up being fast.ly)
                                                                                                      • gravatar.com (loading avatar images)
                                                                                                      • statuspage.io (loading some status information as JSON)
                                                                                                      • fonts.googleapis.com (loading the used fonts)
                                                                                                      • ws.pusherapp.com

                                                                                                      All in all, it is considerably less messy than CircleCI’s frontend.

                                                                                                      Also, Travis does not have your tokens or code in its web frontend. The code lives on GitHub, and tokens should be encrypted as encrypted environment variables: https://docs.travis-ci.com/user/environment-variables#Defining-encrypted-variables-in-.travis.yml
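
                                                                                                      As a hedged illustration (the variable name and the secure blob below are made-up placeholders, not real values), an encrypted variable in .travis.yml looks roughly like this:

                                                                                                      ```yaml
                                                                                                      # Fragment of a hypothetical .travis.yml; the blob is a placeholder.
                                                                                                      env:
                                                                                                        global:
                                                                                                          # Generated with: travis encrypt MY_API_TOKEN=<value> --add
                                                                                                          # Only Travis's build infrastructure holds the key to decrypt it,
                                                                                                          # so the plain-text token never appears in the repo or the frontend.
                                                                                                          - secure: "bGFjZWhvbGRlcl9ibG9iXw...=="
                                                                                                      ```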

                                                                                                      1. 2

                                                                                                        You have proven my point perfectly.

                                                                                                        CircleCI’s only sin here is a lack of communication. There is nothing actually wrong with any of the callouts the article mentions; they just need to be VERY sure their users are aware of exactly who is seeing the source code they upload. This should be an object lesson for anyone running a SaaS company, ESPECIALLY if said SaaS company caters to developers.

                                                                                                        1. 4

                                                                                                          This is not an apples-to-apples comparison. In my post I cited JavaScript only (which can make AJAX requests and exfiltrate source code); @skade cites that Travis loads fonts, images, and CSS from third-party domains, which don’t have those properties. A compromised CSS file might change the appearance of a page, but generally can’t leak your source code or API tokens to a third party.

                                                                                                          As far as I can tell, the only external JavaScript run by Travis CI is Pusher. So no, it has not proven your point perfectly; in fact, it demonstrates the opposite.

                                                                                                1. 3

                                                                                                  I read this expecting some basics, but some of it was new to me, such as pushd/popd. Good writeup.
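
                                                                                                  For anyone else who hadn’t met them, a quick sketch of what pushd/popd do (bash builtins; the directories here are just examples):

                                                                                                  ```shell
                                                                                                  # pushd cd's into a directory and pushes the old one onto a stack;
                                                                                                  # popd pops the stack and returns there.
                                                                                                  cd /tmp
                                                                                                  pushd /usr > /dev/null    # now in /usr; /tmp is saved on the stack
                                                                                                  echo "$PWD"               # /usr
                                                                                                  popd > /dev/null          # back where we started
                                                                                                  echo "$PWD"               # /tmp
                                                                                                  ```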

                                                                                                  1. 3

                                                                                                    I remembered those from your comment, and was kind of expecting more. I recommend checking it out; it’s a fast read for anyone who knows this stuff, and useful to everyone else.

                                                                                                    disown was completely new to me, wonder if zsh has it ;)
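
                                                                                                    For the curious, a minimal sketch of what disown does in bash (zsh has the same builtin, for the record; the sleep job is just a stand-in for any long-running command):

                                                                                                    ```shell
                                                                                                    # disown removes a background job from the shell's job table, so the
                                                                                                    # shell won't send it SIGHUP on exit and `jobs` stops listing it.
                                                                                                    sleep 300 &               # start a background job
                                                                                                    jobs                      # lists [1] Running  sleep 300 &
                                                                                                    disown %1                 # forget it; it keeps running after logout
                                                                                                    jobs                      # lists nothing now
                                                                                                    ```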