1. 3

    In the demo repo linked from this post, I was able to extract the secret using just strings(1). No decompilation required!
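
For intuition: strings(1) does little more than scan a file for runs of printable bytes, which is why a compiled-in secret falls right out. A rough Python equivalent (the API_KEY blob below is made up for illustration, not taken from the demo repo):

```python
import re

# strings(1), roughly: find runs of at least min_len printable ASCII bytes.
def strings(data: bytes, min_len: int = 4):
    return [m.group().decode() for m in re.finditer(rb"[ -~]{%d,}" % min_len, data)]

# A fake "binary" with a secret baked in as a string literal.
blob = b"\x7fELF\x01\x02" + b"API_KEY=hunter2" + b"\x00\xff"
print(strings(blob))  # ['API_KEY=hunter2']
```

The surrounding non-printable bytes don't matter; any literal long enough to clear the length threshold survives compilation verbatim.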

    1. 1

      I don’t agree with the viewpoint I’m about to describe, but it’s a counterpoint I’ve heard to using git this way. I used to work with some engineers who were against this method because it changes the meaning of a commit. In their eyes, a “commit” was to be interpreted quite literally: it should leave the code in a reasonably presentable state, and definitely a compilable one. By using git add -p, you are committing a set of code that has never been tested.

      1. 5

        I am the kind of person who prefers commits to be atomic and ideally in a compilable state (not that I’m fanatical about it, but that’s what I’m striving for). I think add -p combined with interactive rebase only helps with that – if you discover a typo or need some amendment, you can surgically squash it into one of your previous commits, instead of having a ‘broken’ commit and a useless separate commit with the amendment.
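
The surgical-squash workflow described above can be sketched non-interactively (repo, file names, and messages here are all made up):

```shell
# Hypothetical repo: record a later typo fix as a "fixup" of the
# original commit, then fold it in with autosquash.
cd "$(mktemp -d)"
export GIT_AUTHOR_NAME=me GIT_AUTHOR_EMAIL=me@example.com
export GIT_COMMITTER_NAME=me GIT_COMMITTER_EMAIL=me@example.com
git init -q
echo "helo" > readme.txt
git add readme.txt
git commit -q -m "add readme"
echo "hello" > readme.txt             # the typo fix
git add readme.txt
git commit -q --fixup ":/add readme"  # commits as "fixup! add readme"
# Accept the generated todo list as-is instead of opening an editor:
GIT_SEQUENCE_EDITOR=true git rebase -q -i --autosquash --root
git log --oneline                     # a single "add readme" commit remains
```

The `:/add readme` syntax locates the target commit by message, so no SHA juggling is needed.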

        1. 3

          You’re right in theory, but I always use git add -p, and the only things I usually leave out are single debug log lines, or some CSS changes when I work on the app itself, etc. – not huge swaths of code. The number of times this has bitten me can be counted on one hand. So yes, technically it’s “never tested”, but if you at any point push before running tests locally… I don’t think it matters most of the time; that’s what CI is for.

          1. 2

            This counterpoint (which I understand isn’t yours) isn’t even one: always work in tiny increments, commit anytime the tests are green.

            Your tests aren’t green for long stretches of time? Fix that.

            All green tests don’t mean that code can be committed? Fix that.

          1. 11

            That was far more interesting than I’d hoped, especially because it was more about operating this at scale. For my non-petabyte-scale stuff I’ve always felt like MySQL is easier to use as a developer. The permissions system, for example, is confusing. But I was also bitten by things like having to use utf8mb4 instead of utf8 in MySQL. (And I always recommend MariaDB.)

            1. 9

              I’m a little stunned to hear anyone say they prefer MySQL over PostgreSQL as a developer. A couple things I’ve really disliked about MySQL over the years:

              • silent coercion of values (where PG, in contrast, would noisily raise a fatal error to help you as a developer) – it makes it a lot harder to debug things when what you thought you stored in a column is not what you get out of it
              • the MySQL CLI had significantly poorer UX and feature set than psql. My favourite pet peeve (of MySQL): Pressing Ctrl-C completely exited the CLI (by default), whereas, in psql, it just cancels the current command or query.
              1. 4

                After spending three years trying to make MySQL 5.6.32 on Galera Cluster work well, being bitten by A5A SQL anomalies, data-type coercion silliness, et al., I’ve found Postgres to be a breath of fresh air, and I never want to go back to the insane world of MySQL.

                Postgres has its warts, incredible warts, but when they’re fixed, they’re usually fixed comprehensively. I’m interested in logical replication for zero-downtime database upgrades, but being the only girl on the team who manages the backend and the database mostly by herself, I’m less than inclined to hunt that white whale.

                1. 2

                  the MySQL CLI had significantly poorer UX and feature set than psql

                  Hmm, I’ve always felt the opposite way. The psql client has lots of single-letter backslash commands to remember to inspect your database. What’s the difference between the various combinations of \d, \dS, \dS+, \da, \daS, \dC+, and \ds? It’s all very confusing, and for the same reason we don’t use single-letter variables. I find MySQL’s SHOW TABLES, SHOW DATABASES, and DESCRIBE x a lot easier to use.

                  1. 1

                    Yeah, this is also bugging me. Sure, “show databases” is long and something like “db” would be nice, but I know it, and (for once at least) it’s consistent with “show tables” etc.

                    1. 1

                      I grant you that, but \? and \h are 3 keystrokes away, and the ones I use most frequently I’ve memorized by now. But I just couldn’t stand the ^C behaviour, because I use that in pretty much every other shell interface of any kind without it blowing up on me. MySQL was the one, glaring exception.

                  2. 1

                    Totally agree, this is almost exactly my situation too. I had always used MySQL and knew it pretty well, but got burned a few times trying to deal with utf8, then got hit with a few huge table-structure changes (something I think has improved since). Ended up moving to Postgres for most new stuff and have been pretty happy, but I do miss MySQL once in a while.

                  1. 4

                    I’m curious about how this is implemented. It seems to delegate the heavy lifting to some sort of policy engine. The logic seems to be a DSL embedded within the Go file: https://github.com/mhausenblas/cidrchk/blob/master/main.go#L20

                    1. 1

                      It seems to use the evaluation module from https://github.com/open-policy-agent/opa

                      1. 2

                        Seems like an odd dependency. Go’s stdlib has IPNet.Contains.

                    1. 2

                      Do any platforms enforce that binary signatures match the package’s signature? This feels like a common security feature that would cause problems with this technique.

                      1. 10

                        I wonder how much security review maintainers actually do. Reviews are difficult and incredibly boring. I have reviewed a bunch of crates for cargo-crev, and it was very tempting to gloss over the code and conclude “LGTM”.

                        Especially in traditional distros that bundle tons of C code, a backdoor doesn’t have to look like if (user == "hax0r") RUN_BACKDOOR(). It could be something super subtle like an integer overflow when calculating lengths of buffers. I don’t expect a volunteer maintainer looking at a diff to spot something like that.

                        1. 4

                          a backdoor doesn’t have to look like if (user == “hax0r”) RUN_BACKDOOR()

                          Reminded me of this attempted backdoor: https://freedom-to-tinker.com/2013/10/09/the-linux-backdoor-attempt-of-2003/

                          1. 3

                            I assume that as long as we keep insisting on designing complex software in languages that can introduce vulnerabilities, then such software will contain such vulnerabilities (introduced intentionally or unintentionally).

                            I think integer overflow is a great example. In C & C++ unsigned integers wrap on overflow and signed integers exhibit UB on overflow. Rust could have fixed this, but overflow is unchecked by default in release builds for both signed and unsigned integers! Zig doubles down on the UB by making signed and unsigned integer overflow both undefined unless you use the overflow-specific operators! How is it that none of these languages handle overflow safely by default?! (edit: My information on Zig was inaccurate and false, it actually does have checks built into the type system (error system?) of the language!)

                            Are the performance gains really worth letting every addition operation be a potential source of uncaught bugs and vulnerabilities? I certainly don’t think so.

                            1. 6

                              Rust did fix it. In release builds, overflow is not UB. The current situation is that overflow panics in debug mode but wraps in release mode. However, it is possible for implementations to panic on overflow in release mode. See: https://github.com/rust-lang/rfcs/blob/master/text/0560-integer-overflow.md

                              Obviously, there is a reason for this nuanced stance: overflow checks presumably inhibit performance or other optimizations in the surrounding code.

                              1. 2

                                To clarify what I was saying: I consider overflow and UB to both be unacceptable behavior by my standard of safety. Saying that it is an error to have integer overflow or underflow, and then not enforcing that error by default for all projects, feels similar to how C has an assert statement that is only enabled for debug builds (well, really just when NDEBUG isn’t defined). So far the only language that I have seen which can perform these error checks at compile time (without using something like Result) is ATS, but that language is a bit beyond my abilities.

                                If I remember correctly, some measurements were taken and it was found that always enabling arithmetic checks in the Rust compiler led to something like a 5% performance decrease overall. The Rust team decided that this hit was not worth it, especially since Rust is aiming for performance parity with C++. I respect the team’s decision, but it does not align with the ideals that I would strive for in a safety-first language (although Rust’s primary goal is memory safety, not everything-under-the-sun safety).

                                1. 4

                                  That’s fine to consider it unacceptable, but it sounded like you thought overflow was UB in Rust based on your comment. And even then, you might consider it unacceptable, but surely the lack of UB is an improvement. Of course, you can opt into checked arithmetic, but it’s quite a bit more verbose.

                                  But yes, it seems like you understand the issue and the trade off here. I guess I’m just perplexed by your original comment where you act surprised at how Rust arrived at its current state. From what I recall, the intent was always to turn overflow checks on in release mode once technology erased the performance difference. (Or rather, at least that was the hope as I remember it at the time the RFC was written.)

                                  1. 2

                                    Oh no I actually am glad that at least Rust has it defined. It is more like Ada in that way, where it is something that you can build static analysis tools around instead of the UB-optimization-wild-west situation like in C/C++.

                                    From what I recall, the intent was always to turn overflow checks on in release mode once technology erased the performance difference.

                                    Yes, I believe that was in the original RFC, or at least it was discussed a bunch in the GitHub comments, but “once technology erased the performance difference” is different from “right now in shipping code after the language’s 1.0 release”. I would say it is less of a surprise that I feel and more a grumpy frustration - one of the reasons I picked up (or at least tried to pick up, RIP) Rust in the first place was because I wanted a more fault-tolerant systems programming language as my daily driver (I am primarily a C developer). But I remember being severely disappointed when I learned that overflow checks were disabled by default in release builds, because that ends up being only marginally better than everyone using -fwrapv in GCC/Clang. I like having the option to enable the checks myself, but I just wish it was universal, because that would eliminate a whole class of errors from the mental model of a program (by default).

                                    :(

                              2. 3

                                With the cost, it depends. Even though overflow checks are cheap themselves, they have a cost of preventing autovectorization and other optimizations that combine or reorder operations, because an abort/exception is an observable side effect. Branches of the checks are trivially predictable, but they crowd other branches out of branch predictors, reducing their overall efficiency.

                                OTOH overflows aren’t that dangerous in Rust. Rust doesn’t deal with bare malloc + memcpy. Array access is checked, so you may get a panic or a higher-level domain-specific problem, but not the usual buffer overflow. Rust doesn’t implicitly cast to a 32-bit int anywhere, so most of the time you have to overflow 64-bit values.

                                1. 3

                                  How is it that none of these languages handle overflow safely by default?!

                                  The fact that both signed and unsigned integer overflow is (detectable) UB in Zig actually makes Zig’s --release-safe build mode safer than Rust’s --release with regards to integer overflow. (Note that Rust is much safer, however, with regards to memory aliasing.)

                                  1. 1

                                    I stand corrected. I just tested this out and it appears that Zig forces me to check an error!

                                    I’ll cross out the inaccuracy above…and I think I’m going to give Zig a more rigorous look… I’m quite stunned and impressed actually. Sorry for getting grumpy about something I was wrong about.

                                  2. 2

                                    I don’t think so either. I did a benchmark of overflow detection in expressions and it wasn’t that much of a time overhead, but it would bloat the code a bit, as you not only have to check after each instruction, but also have to restrict yourself to instructions that set the overflow bit.

                                1. 0

                                  I come from the Windows world (where IIS is the only web server that matters), so the idea of having to use a (reverse) proxy to (easily) support HTTPS is ludicrous to me.

                                  1. 4

                                    IIS fills the same place as nginx in this design.

                                    1. 2

                                      You don’t have to, but it is convenient.

                                      1. 2

                                        It’s really easy. Here’s a trivial nginx conf to enable that:

                                        server {
                                                server_name api.myhost.com;
                                                listen 80;

                                                location / {
                                                        include proxy_params;
                                                        proxy_pass http://localhost:5000/;
                                                }

                                                # Certbot will put in SSL for you
                                        }
                                        

                                        And then you can easily get SNI-based multiple hosts on the same ‘node’ if you’d like. This lets you easily handle SSL stuff and still have whatever web-server you want bound to the local host:port to actually do the action. You can also do all the fun URL rewrite stuff up there if you’d like.

                                      1. 2

                                        A whole lot! On my kubernetes cluster:

                                        Elsewhere at home, mostly in VMs:

                                        • Supporting services for Kubernetes like storage, mysql, postgres
                                        • Plex (self-hosted Netflix)
                                        • Syncthing (self-hosted Dropbox)
                                        • Minio (Self-hosted S3)
                                        • Subsonic (a self-hosted music streaming server)
                                        • Pysonic (a custom drop-in replacement for Subsonic’s XML API since I found the stock server to be problematic)
                                        • A custom (and unnamed!) podcast generator (records, on schedule, segments from live streams and exports them as a podcast feed)
                                        • Gitea, of course
                                        • Various provisioning/monitoring/alerting tools to help maintain everything else (puppet, influxdb, grafana, custom monitoring, etc)

                                        As you can tell, I very much like writing tools and apps for myself. Some of these are new (and I would call good) and others haven’t been updated in years.

                                        1. 1

                                          The gist I’m getting from this is that both implementations have some pretty serious flaws and that someone should create a 3rd standard that mixes the best of both.

                                          1. 3

                                            Why put the commands in a separate file, copy it to the container, and run it? In the Dockerfile you can write:

                                            RUN set -eu && \
                                                export DEBIAN_FRONTEND=noninteractive && \
                                                apt-get update && \
                                                apt-get -y upgrade && \
                                                apt-get -y install --no-install-recommends syslog-ng && \
                                                apt-get clean && \
                                                rm -rf /var/lib/apt/lists/*
                                            

                                            Which saves you the extra layer that the additional copy step would add. (Note that `set -o pipefail` is left out above: the default `RUN` shell on Debian-based images is dash, which doesn’t support it.)

                                            I’ve been under the impression that base images are respun often enough that there is really no point to running package upgrades on build. Is this not the case?

                                            Also, if you’re REALLY concerned about overhead from layers, you can squash your image into a single layer.

                                            1. 3

                                              If you have set -e, your && becomes redundant and can be replaced with ;.
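
A quick check of the claim, with echo/false standing in for the apt-get steps:

```shell
# Under `set -e` the shell aborts at the first failing command, so a
# plain `;` behaves like `&&` for everything after the failure:
out=$(sh -c 'set -e; echo before; false; echo after' 2>&1; echo "exit=$?")
echo "$out"
# Prints "before" and "exit=1"; "after" is never reached.
```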

                                              1. 2

                                                I’d say it’s best to stick with the && idiom, lest someone come along and copy the ; without the set. Shell tends to propagate via clipboard, particularly in Dockerfiles.

                                                1. 1

                                                  I strongly disagree. If the premise of your code is that people are going to copy/paste/break it anyway, you have a lot of other struggles too.

                                                  1. 1

                                                    Well yeah, we’re starting from a baseline of using shell. Of course there are problems! My perspective is based on supporting Docker in a large organization. I think that it is better to set folks up for (partial) success based on what they will likely do in practice, rather than pretending that domain specialists are going to learn shell properly.

                                              2. 2

                                                The centos:8 image hasn’t been updated for two months. To be fair, none of the updates appear to be security fixes, but I would not rely on the base image being up to date.

                                                1. 2

                                                  Wow, that’s very strange. Every centos-based image out there having a slightly different layer containing the same updates seems like a missed opportunity for space and bandwidth savings because of layer re-use.

                                              1. 1

                                                A local date and time plus a UTC offset can be converted to a UTC date and time, but not the other way around.

                                                This seems incorrect. Why can’t it? UTC itself is local time if you’re in the right place. It’s no more difficult than converting times between different timezones anyway.

                                                1. 3

                                                  Things that share the same UTC offset can have different political boundaries, and that in turn means you may have different DST rules (and hence different UTC offsets over time). Merely having an offset loses that information.
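
The point is easy to demonstrate with Python's zoneinfo (the dates and the America/New_York zone are chosen arbitrarily): the same political zone carries different UTC offsets across the year, so a stored offset alone can't recover wall-clock time.

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

ny = ZoneInfo("America/New_York")
winter = datetime(2021, 1, 15, 12, 0, tzinfo=ny)  # EST, UTC-5
summer = datetime(2021, 7, 15, 12, 0, tzinfo=ny)  # EDT, UTC-4

assert winter.utcoffset() == timedelta(hours=-5)
assert summer.utcoffset() == timedelta(hours=-4)

# Reconstructing local time from UTC plus the offset recorded in
# January is an hour off once DST starts:
stale = summer.astimezone(timezone(timedelta(hours=-5)))
print(stale.hour)  # 11, not the 12 shown on a New York clock
```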

                                                  1. 1

                                                    That’s true for a naked timestamp, but a timestamp with no related information isn’t really useful. I don’t think the timezone field is a good place to store the location of whatever the date belongs to.

                                                    1. 2

                                                      Oh no I totally agree–that’s why I specified that.

                                                1. 2

                                                  Scratching an itch, and it’s great, but one should be cautious with Tampermonkey.

                                                  1. 4

                                                    This seems clickbaity considering the article is about licensing rather than any technical danger, like a vulnerability or something.

                                                  1. 18

                                                    The entire debate about systemd fascinates me as an expression of FLOSS culture. The way I see it, systemd is a boon for two classes of users - the few who manage huge amounts of “machines” (i.e. “the cloud”), and the many who use Linux as a workstation or on a laptop, where they seldom have any reason to bother about how a service starts, or is scheduled.

                                                    The class of users who don’t like systemd seem to be the Mittelstand - the sysadmin of a few dozen machines, the user of a VPS who manages a lot of their own services to participate in the open web. They’re reliant on the “folk wisdom” of their own experience, and the substrate of blog and forum posts explaining how to set stuff up. Having to learn a new way of dealing with this stuff feels unnecessary - it worked fine before! The issues that systemd is set up to deal with have never impacted them.

                                                    1. 16

                                                      Disagree. I am exactly a user of all 3 groups and

                                                      • had a few problems with systemd in the cloud that wouldn’t have happened with other init systems
                                                      • I don’t care about startup time anywhere, but I also don’t use autoscaling
                                                      • my laptop shuts down slower now
                                                      • if you have bare metal, the 20s saved with systemd don’t compare to the 3min RAM check…

                                                      Overall systemd solved all the problems I never had. Not as a laptop user, not as a small-scale system admin, but ok - I didn’t work with cloud stuff before it happened.

                                                      1. 4

                                                        You shut your laptop down? I haven’t turned a laptop off (until I sell it) in a decade. They crash from time to time, and restart for updates, but I can’t think of a reason to turn one off unless you’re selling it or putting it into storage.

                                                        1. 2

                                                          work laptop: true, I usually don’t shut it down

                                                          my private laptops: I use them once or twice a month, so why would I keep all 3 of them (one is very old and only used every few months) running all the time?

                                                        2. 1

                                                          Who runs a RAM check in the modern world? Seems like a complete waste of time to me.

                                                          (Feel free to convince me otherwise - I’m always open to persuasive arguments!)

                                                          1. 10

                                                            Even without a RAM check, a typical HP server still takes a good minute or two to POST and then initialize all of the firmware for things like HBAs, NICs, sensors, RAID arrays, etc., before you even get as far as the operating system bootloader. I imagine the same to be true of most “server-grade” tin.

                                                        3. 13

                                                          I am in “the user of a VPS” category myself and I use Debian. I had some experience of writing init scripts and I am glad I don’t have to do this anymore.

                                                          I also happen to SRE a huge Linux fleet at work, which uses a custom task supervisor written before systemd.

                                                          I am all for more diversity and experimentation in init systems (e.g. one written in Rust would be great to experiment with, given a well-defined format for service files).

                                                          1. 3

                                                            I also happen to SRE a huge Linux fleet at work, which uses a custom task supervisor written before systemd.

                                                            I see this once in a while and always wonder why. Enterprise distros since RHEL 6 (released in 2010!) ship with upstart as an init system, which is a great process manager. I don’t understand the need for installing yet another process manager.

                                                        1. 2

                                                          I haven’t been paying attention to Calibre ever since the author asserted the project will stay on Python 2, and that he’ll personally maintain python 2. Is that still the case?

                                                          1. 2

                                                            This wasn’t even really true as of the link you posted (did you read past the first comment?), and it certainly isn’t the case now. Calibre runs under Python 3 and has for quite a few versions now.

                                                            1. 1
                                                              1. 1

                                                                Sorry, are you saying somewhere in my link the author changed his mind? I must have missed that. But I don’t see any other comments by the project’s author so I’m a little confused what you’re referring to?

                                                            1. 11

                                                              More people than you realize will take a URL, go to their favorite search engine, and type the URL into the search engine’s search field, never realizing they can actually edit the contents of the address bar above, [snip]

                                                              I paint this bleak picture primarily for the benefit of Internet veterans [snip]

                                                              If my description of “normal” users above surprised, shocked or disappointed you, you’re the target audience.

                                                              Hmm. I don’t see this as bleak at all. I see this as a great advancement in tech, as now even non-technical users have no problem navigating to any site they wish. What would they have done before search engines? I suspect they would have been locked out because it was too hard to use at the time.

                                                              1. 12

                                                                now even non-technical users have no problem navigating to any site they wish.

                                                                 That doesn’t sound like the situation described. The text you quoted says that non-technical users are only capable of visiting sites their search engine allows them to visit. It’s adding an additional layer of tracking and another opportunity for censorship, but only for non-technical users.

                                                                1. 4

                                                                  says that non-technical users are only capable of visiting sites their search engine allows them to visit.

                                                                  Sure. What would those users have done before search engines?

                                                                  1. 7

                                                                     In the described case, and many I’ve seen over others’ shoulders, they are typing the URL they want to visit into a search engine. Without a search engine, they would do the same thing I do when I don’t have a bookmark to a site I want to visit: type it in the URL bar or browser start page. They might get a character wrong, in which case they are no more susceptible to phishing and other problems than if they were typing the same thing into a search engine.

                                                                    I don’t think this in itself is much worse, but it does teach non-technical users to ignore that search engines are web sites, and instead they think it’s their browser. But it seems inevitable.

                                                                    1. 2

                                                                      I don’t think they’re typing the URL into the search engine. They’re typing the name of what they want into the search engine, like “facebook”. Google trends comparison. Edit: another common variation is “[service_name] login”.

                                                                      they would … type it in the URL bar or browser start page.

                                                                      Hmm, that’s exactly what the article’s author is complaining about - people not doing this because they don’t realize it’s a feature. I think the root cause of this is that users don’t understand what URLs are.

                                                                      And why should they? They can just type “facebook” into whatever input field is focused when the browser opens and eventually end up where they want. Had the user typed 4 more characters (”.com”) and into the browser’s URL bar instead, they’d end up at the same place and save the step of clicking a search result. Even an average person understands saving time and effort. So why don’t they do it?

                                                                      I think we’re assuming the average user is way more savvy than reality.

                                                                      1. 1

                                                                         I think users will do the least that they have to. If we aren’t going to require that they know the difference between a website and their browser, then the two shouldn’t be confusable!

                                                                        1. 1

                                                                          The article is saying they are going to a search page and entering the URL into the search box. While a start page or address bar may also fall back to a search page, the browser will first attempt to interpret the text as a URL! (And in my case, neither is enabled to search - I have a separate search bar for that.)

                                                                  2. 4

                                                                    Maybe I’m being dense but I don’t see any difference between

                                                                    “type the exact digits into the phone application on your mobile”

                                                                    “type the exact website address into the browser address bar”

                                                                    from a point of usability. Imagine browsers never had added the omnibar.

                                                                    1. 3

                                                                      I would guess it’s something between the learning curve and level of standardization. Phones give you clear feedback that you did something wrong but provide zero help when you dial incorrectly, besides telling you that you did so. Omnibars provide suggestions that, with today’s very smart search engines, are almost always what you wanted. Browsers come in a variety of shapes and sizes, but phone dial pads are always the same. Phone numbers (at least, when dialing domestically) are always the same length and format. Web addresses have much more variation.

                                                                    2. 4

                                                                      What would they have done before search engines?

                                                                      They would have learnt, as my parents and my grandparents eventually had to.

                                                                      That being said as an internet “veteran” of 25 years I still find it more convenient to type the name of a company into the nav bar and have my search engine of choice display a series of links of which the first one is usually what I am after rather than type in the whole url.

                                                                      All I can say is that my teenage self would have been very disappointed to see how I navigate the internet today. The lowest common denominator won out; in technology you either adapt or you die.

                                                                    1. 3

                                                                      It probably makes sense if the “correct” baseline orientation is considered to be landscape, as in a legacy digital camera.

                                                                      I tried to verify this with my iPhone (8 plus) but the “Orientation” parameter is not visible in images taken with it and processed in Adobe Lightroom.

                                                                        Edit: there’s no Orientation tag in the images from my Nikon D700 either (and it knows the orientation, because it always presents vertical images sideways on the rear screen, yet shows them correctly vertical after import). Is this a new addition to the Exif spec?

                                                                        More edit: this exact behavior is described in this article, recently linked here.

                                                                      1. 2

                                                                        Worst part is that when you upload an image to a website that uses the CSS

                                                                        image-orientation: from-image;

                                                                        it rotates the vertical photo sideways!

                                                                        1. 5

                                                                            I’d argue that images shouldn’t be uploaded unmodified from a camera onto the public web like this. This problem aside, the GPS data in photos seems like a privacy issue, and there may be other EXIF fields one would want to keep secret too.

                                                                          1. 2

                                                                            There is no GPS data in my photos:

                                                                            Settings > Privacy > Location Services > disable for Camera

                                                                              Furthermore, Apple now strips GPS and other EXIF data when you send images from an iPhone through various channels, e.g. iMessage and AirDrop*. I haven’t confirmed whether they do it when Safari accesses the Photos app, but they should.

                                                                              Even when I downsize the photos, the EXIF rotation bug remains. Right now it’s a super pain in the butt to get photos from my camera onto the internet with the correct orientation.

                                                                            My usual procedure for transferring images is to use the viewExif iOS app to strip all EXIF, save a copy, then upload that one. The image selection modal on iOS has blue text at the bottom of the screen telling you the image size, and you can touch it to open a dialog to choose a different image size.

                                                                              It turns out that even when I strip EXIF, something seems to be doing the equivalent of ImageMagick’s -auto-orient, which rotates the image to match the EXIF orientation hint, so it keeps screwing up.

                                                                              *There’s a special option for AirDrop to send the original unmodified.
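                                                                              A minimal Pillow sketch of that approach (the function name is mine, not from any tool mentioned above): first bake the Orientation tag into the pixels, then save without EXIF, so downstream strippers and auto-rotators have nothing left to act on.

```python
from PIL import Image, ImageOps

def bake_orientation_and_strip_exif(src_path, dst_path):
    """Rotate/flip the pixels to match the EXIF Orientation tag,
    then save a copy with no EXIF block (GPS, orientation, etc.)."""
    with Image.open(src_path) as img:
        # exif_transpose returns an upright copy and removes the
        # Orientation tag, so viewers won't rotate it a second time
        upright = ImageOps.exif_transpose(img)
        # Saving without an `exif=` argument writes no EXIF metadata
        upright.save(dst_path)
```

                                                                              The upload then shows the same way everywhere, whether or not the receiving site honors EXIF orientation.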

                                                                            1. 1

                                                                              Adobe Lightroom allows one to restrict the metadata included in a JPG on publishing. I’m sure other image editing programs do too.

                                                                              Also many popular image hosting sites (like Imgur) strip EXIF down to the bare minimum.

                                                                              1. 1

                                                                                Also I want to point out that lots of people have phones but no real computers so how else will they get the images online if they don’t have the skills or tools to do image editing / manual stripping of EXIF?

                                                                                It’s a sad situation.

                                                                          1. 2

                                                                            Do they need to be real? The DS9K is pretty amusing.

                                                                            1. 3

                                                                              This kind of CSS usage:

                                                                                <div class="red">Some text</div>
                                                                                <div class="left red">Some text</div>
                                                                              

                                                                              And similarly:

                                                                              .red {
                                                                                  color: #F00;
                                                                              }
                                                                              .left {
                                                                                  float: left;
                                                                              }
                                                                              

                                                                              IMO, is actually the least scalable way to use CSS. To the author’s point, sure, you instantly know what’s happening when looking at the HTML code alone in the above examples. And you, given a green field, can certainly build a product faster.

                                                                              But what is scalability? I think an important part of scalability is the ability to make widespread changes in a predictable, controlled, and repeatable manner.

                                                                              Let’s look at this example again:

                                                                                <div class="red">Some text</div>
                                                                              

                                                                              Why is the text red? Is it an error? Is it a call-out of important information? Does it simply look nice given the background color where this div appears? You can’t answer any of these questions by looking at the code, and that makes changing this code extremely difficult.

                                                                              Let’s say you’re tasked with changing the appearance of something that happens to use this red class. Now you have to track down everywhere “red” is used and figure out if the grep result you’re looking at renders into the type of element you’re supposed to be changing. You cannot do this any other way than manually walking through the source code and rendered pages. You MUST view every page in the app, even the odd edge case pages, to be sure you “got them all”. A newbie dev may even be tempted to change .red’s color property to the new version. Now you have a class called ‘red’ that makes text blue! Ugh!

                                                                              Now consider the alternative:

                                                                                <div class="call-to-action">Some text</div>
                                                                                <div class="error-message">Some text</div>
                                                                              

                                                                              This differentiates elements that, while they may appear identical when rendered, exist for drastically different purposes. Also note the lack of .left; the separation of layout and theme is a whole other discussion. Furthermore, you can be confident that by changing .call-to-action, you’ve correctly updated the elements you wanted to, left the elements you didn’t want to change unaffected, and not missed any instances. I’m not saying you should always use only a single class on elements - combining classes is good, but it also needs to be done in a scalable way. One could use classes like .button-call-to-action with .button-cta-big or .button-cta-small.

                                                                              Also, creating named classes - those like .call-to-action - does NOT make the CSS harder to maintain. Perhaps that’s true if you’re using plain CSS, but that is generally not the case anymore. CSS preprocessors allow you to define your core appearance using variables and reusable rules, which you then map to these named classes.
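                                                                              For instance, the mapping might look like this in SCSS (the variable and class names here are illustrative, not from the article):

```scss
// Core palette, defined once
$color-error: #f00;
$color-accent: #06c;

// Semantic classes describe intent and reuse the palette
.error-message {
  color: $color-error;
}
.call-to-action {
  color: $color-accent;
  font-weight: bold;
}
```

                                                                              Restyling every call to action is then a one-line change to $color-accent, and nothing else is affected.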

                                                                              With this approach it’s extremely easy to add, remove, and update CSS and I think that better fits the idea of scalability. This article is from 2016 and from my perspective this “.red” type of CSS usage has become even more prevalent. I truly don’t understand why.

                                                                              To clarify: when I say element I don’t always mean a single html tag, it can be a visual element like a call to action or contact form. Many designers use a “UI Library” with common elements (as in widgets) throughout the website.

                                                                              1. 1

                                                                                This is a better approach, but even better are build tools that stop you messing this stuff up.

                                                                                For instance, I hacked CSS modules into my rails app with ~30 lines of code. If you need to modify something that uses that approach, you know statically which templates use which CSS.

                                                                                I haven’t done the same for JS yet, though I probably should. So far, I use data-script-hook="name of functionality".

                                                                              1. 42

                                                                                Microsoft ♥ Linux – we say that a lot, and we mean it!

                                                                                  I’m calling bullshit on this. Microsoft ‘loves Linux’ so much that they’ve ignored requests to support Linux with Outlook/Word/Powerpoint/Teams/etc. Microsoft ‘loves Linux’ so much that they effectively killed Linux support in Skype. Microsoft ‘loves Linux’ so much that they prevent Skype from even working over the web interface on (arguably) the most popular browser used by folks on Linux (if you visit web.skype.com with Firefox you get redirected to this page: https://www.skype.com/en/unsupported-browser). Or do they only ‘love Linux’ when it suits their financial and PR interests?

                                                                                1. 19

                                                                                  I’d like to add the lack of official linux drivers for their Microsoft-branded laptops to this list.

                                                                                  1. 24

                                                                                      do they only ‘love Linux’ when it suits their financial and PR interests?

                                                                                    Well, obviously. Expecting any large corporation to “love” anything that’s not purely out of self-interest strikes me as rather naïve.

                                                                                    Either way, I much prefer the current Microsoft over the “Linux is cancer” and “get the facts” Microsoft of 15 years ago.

                                                                                    1. 10

                                                                                        You can’t “love” something and then actively ignore critical parts of it. A better slogan for what they are doing is “Microsoft tolerates Linux.” I take issue with the fact that they are heavily implying they are doing more than tolerating it now (when clearly they are not).

                                                                                      1. 4

                                                                                        Microsoft is making money off of Linux. They “love” it the only way a big profit-driven company can; they found a way to monetize people who actually like it.

                                                                                        1. 4

                                                                                          You can run Microsoft SQL Server on Linux, which seems like a lot more than “tolerating” it. Office has been ported to iOS and Android — I don’t see why they wouldn’t be porting it to Linux too, if there were sufficient demand. (The 2019 numbers I could find showed <5% market share for Linux, measured by web browser.)

                                                                                          1. 8

                                                                                              That still seems like toleration. I suspect that if Linux hadn’t stuck around and expanded beyond Microsoft’s wildest dreams, they would still consider it a cancer. They may support Linux in a small subset of all the software they pump out, but they ignore it in the vast majority. Can we at least agree that the ‘Microsoft loves Linux’ slogan is pure marketing bullshit and not reflective of their actual behavior?

                                                                                      2. 8

                                                                                        if you visit web.skype.com with Firefox you get redirected to this page: https://www.skype.com/en/unsupported-browser

                                                                                        Wow, you actually do. What the fuck Microsoft?

                                                                                        1. 12

                                                                                          Or do they only ‘love Linux’ when it suits their financial and PR interests?

                                                                                          Like any company, yes. They love Linux on Azure.

                                                                                          1. 4

                                                                                            I recently had to battle and debug some EWS/Azure/Exchange crap just to get evolution-ews working with Microsoft 2FA. Microsoft has supported Exchange+Evolution exactly 0%. It’s all gnome devs and other random volunteers figuring out how their broken OAuth2/Azure/Office365 rubbish works.

                                                                                            1. 3

                                                                                              Microsoft ‘loves Linux’ so much that they effectively killed Linux support on Skype.

                                                                                              The Skype client for Linux works fine. Sure, it’s Electron and ugly, but so is the Mac version. But it does the job.

                                                                                              (Sure, there are better and open solutions, but the outside world uses Skype.)

                                                                                              1. 0

                                                                                                Microsoft ‘loves Linux’ so much that they’ve ignored requests to support Linux with Outlook/Word/Powerpoint/Teams/etc.

                                                                                                You can’t use the O365 versions on browsers on Linux?

                                                                                              1. 42

                                                                                                To quote a friend: To stop offering Mercurial hosting is bad. To delete the repositories is evil.

                                                                                                1. 12

                                                                                                  I’m not sure about evil, but yeah this sounds bad. Neither of these two options seems like it would require a huge amount of extra investment:

                                                                                                  • automatically convert hg repos to git repos
                                                                                                  • archive hg repos but keep serving a read-only mirror

                                                                                                  I wonder, does archive.org have the means to mirror the public bitbucket hg repositories?

                                                                                                  1. 14

                                                                                                    I do a lot of work around reproducible builds, and find the deletion of public source code to be quite severe. A lot of important projects don’t get regular maintenance, and it takes quite a lot of work to archive source. Converting to git means the inputs to the build process have changed. This might not be a huge deal for people today, but if you’re trying to rebuild something from a decade ago this is a serious problem.

                                                                                                    1. 6

                                                                                                      I do a lot of work around reproducible builds

                                                                                                      Don’t most big shops (Linux distros, Mozilla, Google) vendor the universe anyway? Specifically to avoid vanishing source code, or even minor network flakiness during build?

                                                                                                      1. 6

                                                                                                        I’m not sure what you mean by “vendor the universe”, but what I have seen is creation of private forks of public open source projects even if there is no intent to modify the code. This has two benefits.

                                                                                                        1. If the author or the hosting provider (e.g. bitbucket) deletes the repository, you still have access to it
                                                                                                        2. Performing a build only requires that one hosting provider be up rather than N
                                                                                                        1. 7

                                                                                                          Vendoring the universe means that, in your builds and deployments, you pull all dependencies from sources you control.

                                                                                                          1. 2

                                                                                                            “Vendoring” is the act of taking source code from your dependencies and including it in your own source tree. Doing that for the whole universe is what you do if you want to be sure that nobody else can break your build.
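                                                                                                            Go, for one, ships this as a built-in workflow. A sketch of the commands (it assumes you are inside a Go module):

```
# Copy every dependency of the current module into ./vendor
go mod vendor

# Build against the vendored copies instead of the network
# (Go 1.14+ uses ./vendor automatically when present; the flag
# just makes the intent explicit)
go build -mod=vendor ./...
```

                                                                                                            Commit ./vendor and the build no longer depends on any hosting provider staying up.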

                                                                                                          2. 1

                                                                                                            Yes, Google has a third_party/ directory where mirrors of OSS code are stored. There is a team that works on the tools to keep things in sync.

                                                                                                        2. 4

                                                                                                          I think back to Gitorious and how they went down and everything they hosted is gone as well. That’s slightly different as the entire company folded, but there are still some things on there which probably didn’t exist anywhere else, which are now gone.

                                                                                                          I remember looking through my Creative Commons music once, finding a song I liked, and trying to look up the artist to see if they had other stuff. Not only could I not find the artist, I couldn’t find the track! After some digging I found their old ccMixter account, from which they had deleted all their tracks. The CC song I had literally didn’t exist anywhere I could find (at least under that name) that was indexed by Google/DDG or Bing.

                                                                                                          We look at how much new stuff is created each day. I wonder how much stuff is deleted forever.

                                                                                                          1. 1

                                                                                                            I think back to Gitorious and how they went down and everything they hosted is gone as well. That’s slightly different as the entire company folded, but there are still some things on there which probably didn’t exist anywhere else, which are now gone.

                                                                                                            Interesting in this context is the work by Guix and Software Heritage to archive all source tarballs and repositories used in Guix.