1. 6

    I can only imagine the sequence of events that led to this discovery. Imagine, and sympathize.

    Reminds me of a time a coworker discovered a problem with a motor controller by drumming on the table.

    1. 8

      There’s a “Making of” video that’s quite interesting.

    1. 8

      One of the easiest arguments about this is: “YOU don’t get to decide that you have nothing to hide when you give up your privacy.”

      1. 4

        I also like to use the formulation: “You don’t have anything to hide at this moment, from this government.”

        1. 2

          Came here to say exactly this - it’s more that you don’t have anything to hide right now, but what is considered legal or illegal can change at pretty much any time.

        2. 4

          Another alternative: you may not care about people collecting your private data, but you probably do care about people using your data to exploit you.

          I think targeted ads for commercial products are fine. But hyper-targeted political ads designed to manipulate and misinform? Hard no.

          1. 2

            Another alternative: you may not care about people collecting your private data, but you probably do care about people using your data to exploit you.

            These arguments never work.

            That’s the price you pay for using the Internet. You can’t get away from it. Otherwise, how can anyone fund their Web site?

            1. 2

              Historically, no. Since Cambridge Analytica, yes they definitely do.

            2. -1

              I think targeted ads for commercial products are fine. But hyper-targeted political ads designed to manipulate and misinform? Hard no.

              I think people are getting more and more sensitive regarding targeted ads as well. I’ve heard non-technical people complain that companies like Facebook serve them ads related to a product they talked about with somebody or browsed on the internet. They feel like they are being manipulated into buying those products.

              As for political ads, it might be different elsewhere, but I’ve never seen somebody react well to political ads. However, I feel that those hyper-targeted political ads are somewhat of a myth.

          1. 1

            Recently I’ve been working with people coming from other languages to Ruby and I was surprised how useful they found having RuboCop, and also Reek, enabled on a project.

            The way I use it is somewhat similar to the relaxed style already mentioned here, meaning that I define a set of rules in .rubocop.yml and we have no .rubocop_todo.yml. I’ve found it very helpful for new developers coming into a project to see an offence (thanks to the IDE/editor they are using) and get their first commit in by writing a small refactor.
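
            To give a flavour of what such a first commit can look like, here is an entirely made-up example: with the default rules, RuboCop flags both the negated condition and the method body wrapped in a single if, and the small refactor that clears the offences makes an easy first commit.

            def publish(post)
              if !post.nil?   # flagged: Style/NegatedIf, Style/GuardClause
                post.update(published: true)
              end
            end

            # After the refactor: a guard clause, no negated condition.
            def publish(post)
              return if post.nil?
              post.update(published: true)
            end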

            1. 16

              One benefit of using a CMS or site generator is an automatic RSS feed. Hint: this blog currently has no RSS feed ;)

              1. 5

                I was really considering writing a blog that actually uses an RSS feed as the source of truth, then uses XSLT to generate the HTML…

                1. 6

                  Pro tip: use an Atom feed. RSS 2.0 is pretty loose, spec wise.
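
                  If you end up generating the feed yourself in Ruby, the standard rss library can emit Atom; a minimal sketch (every URL, name and title below is a placeholder):

                  require "rss"

                  # Build a minimal Atom feed; all values are placeholders.
                  feed = RSS::Maker.make("atom") do |maker|
                    maker.channel.author  = "Jane Doe"
                    maker.channel.about   = "https://example.com/feed.xml" # becomes the feed id
                    maker.channel.title   = "Example Blog"
                    maker.channel.updated = Time.now.to_s

                    maker.items.new_item do |item|
                      item.link    = "https://example.com/posts/hello-world"
                      item.title   = "Hello, world"
                      item.updated = Time.now.to_s
                    end
                  end

                  puts feed # prints Atom 1.0 XML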

                  1. 1

                    This brings back old memories. Back in 2006, I think, I used a CMS called Symphony that would generate XML and then you’d style your blog using XSLT. Since I was a junior developer at the time, it was quite handy to learn XML and XSLT, which were still all the rage back then :-)

                  2. 2

                    That is very true, but I’m working on a shell script to generate a feed from HTML.

                    Edit: Here’s a preliminary version of the script (results).

                    1. 3

                      Be careful your shell script doesn’t turn into a static site generator!

                      1. 1

                        Indeed! I saw it coming too, so now I’ve switched to a PHP script (feed.php) that generates it on the server from my index.html, so that I don’t have to worry about generating it :-)

                  1. 1

                    I really enjoyed the article because it made me think about my journey as a software developer, a journey which I started back in 2005 when I was hired as a junior C#/ASP.NET developer.

                    Almost a year later I discovered Ruby and I now realise that what I really enjoyed about Ruby was its ability to write beautiful code, at least when compared to the C# code I wrote before. The idea of an Enumerable mix-in that would magically allow you to iterate through your custom objects was pure poetry when you’d compare it to the utilitarian foreach found in C# at that time.
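
                    To illustrate (with a toy class invented here): mix Enumerable into anything that defines each, and the whole iteration vocabulary comes along for free.

                    # Enumerable only needs #each to unlock map, select, sort, and friends.
                    class Playlist
                      include Enumerable

                      def initialize(*songs)
                        @songs = songs
                      end

                      def each(&block)
                        @songs.each(&block)
                      end
                    end

                    playlist = Playlist.new("Peg", "Aja", "Deacon Blues")
                    playlist.map(&:upcase) # => ["PEG", "AJA", "DEACON BLUES"]
                    playlist.sort.first    # => "Aja"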

                    What’s quite interesting is that I’ve been feeling the desire to move from Ruby to another language and I couldn’t understand why. Since I have a friend who’s also thinking of moving away from Ruby, I thought it was the community not evolving at the same pace as I was. But now I think it’s that I’m exposed to other programming languages and ideas through communities like Lobsters, and Ruby doesn’t feel as “poetic” as it once felt, not when compared with languages like Clojure or Haskell. As a fun exercise, I’ve shared the link with my friend; I’m pretty sure he’ll consider himself in the second “tribe” (although I’m more inclined to see it as a “school of thought”).

                    1. 7

                      The minitest example only verifies that the Foo class has three properties and that those are not nil, while the RSpec example verifies that the model Foo has three associations defined and that all the associated models can be nil. So it’s not exactly an apples-to-apples comparison.
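
                      Reconstructing the contrast (the model and association names here are invented, and the RSpec version assumes shoulda-matchers):

                      # minitest style: only asserts the attributes are present and non-nil.
                      def test_foo_parts_are_present
                        foo = foos(:one)
                        refute_nil foo.author
                        refute_nil foo.category
                        refute_nil foo.publisher
                      end

                      # RSpec + shoulda-matchers: asserts the associations are defined
                      # and that the associated models may be nil.
                      RSpec.describe Foo do
                        it { is_expected.to belong_to(:author).optional }
                        it { is_expected.to belong_to(:category).optional }
                        it { is_expected.to belong_to(:publisher).optional }
                      end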

                      Personally, I do like the spec-style syntax as it allows me to describe behaviour based on contexts, which is slightly more cumbersome to do with Test::Unit-style syntax. That being said, I did notice that people tend to go to town with RSpec but thankfully there’s the Better Specs site which can provide some general guidance.

                      1. 9

                        A post about all the positives without any of the frustration and negatives. What about security updates within the container? A new dev isn’t going to just be able to get up and running on day 1 with Docker Compose. It will help, but getting on-boarded still takes time. I know the author specifically says this is about development, so it’s fine that there is no reference to the trouble with orchestration systems or larger configurations.

                        I dunno. It doesn’t feel like there is a lot of really in-depth stuff here. I wrote a post about Docker a while back, and it might be the opposite; too long by comparison.

                        1. 2

                          Incidentally, we use Docker a lot, but just in development. Deployment is done through a pipeline like most deployments (you commit code to master and the microservice is automatically built and deployed to QA). There are some discussions about deploying containers, but not everybody is sold on the idea, and we have it easy because our hosting provider has support for containers.

                          Basically, right now we’re simply using Docker as a sort of common ground for everybody, regardless of the OS used. I think we’re in the same use case as the author, so let me answer some of your questions/comments.

                          What about security updates within the container?

                          If you’re just developing with Docker, and you don’t push the actual container, you don’t really care about security updates (except for those within your code, of course).

                          A new dev isn’t going to just be able to get up and running on day 1 with Docker Compose. It will help, but getting on-boarded still takes time.

                          Personally, I don’t find a lot of value in Docker (which is why it took me so long to consider it more than a fad), but that’s because I’m comfortable managing Linux and FreeBSD systems and I have configured servers with Puppet and Chef. I also run on my laptop the same Ubuntu version as we have on the servers, just to make sure I have a very close match to production.

                          However, not every developer is like me. We have a .NET developer on our team who recently migrated to Ruby (all our microservices are written in a flavour of Ruby, either MRI or JRuby) and it’s been very simple for him to install Docker for Windows, copy a docker-compose.override.yml, run docker-compose build and then start working. Previously we had a wiki page that listed some steps; whoever reinstalled their machine had to tinker with it because it was slightly out of date and, in some cases, people would get stuck and ask for help.

                          it’s fine that there is no reference to the trouble with orchestration systems or larger configurations

                          Actually, this is one of the issues with Docker, especially if you’re working with microservices, because you’ll sometimes want to have multiple microservices running at the same time (one of them being authentication). Normally, you’d open multiple tabs and start the service in each tab, or maybe use something like Foreman to start them all (but you need to make sure the services pick up the right environment variables). With Docker you kind of need to have a docker-compose.yml file, because container A can’t contact container B unless it’s defined in the same docker-compose.yml, so it’s not as straightforward.

                          Personally, I found two major issues with Docker from a developing point of view:

                          • On OSes like Windows, Docker is definitely not lightweight: you basically have to run a Linux VM which will host the containers. This will likely change in the future as Docker is working on Linux Containers on Windows but it’s still not there yet.
                          • I’ve seen cases where people with the same Dockerfile, docker-compose.yml, docker-compose.override.yml and .env files run the same command and get different results (tests would fail). Only after completely removing Docker and reinstalling it were the problems fixed. Having a tool that works most of the time is sometimes worse than having a tool that fails reliably, because it’s harder to find a long-term solution :-)
                        1. 3

                          This seems a bit far-fetched, and would be applicable to almost any third-party repo. A real-life example of this sort of attack would have been more interesting.

                          1. 2

                            I think the main point here is that Skype adds a /etc/apt/sources.list.d/skype-stable.list file without explicitly saying this (although I might be wrong and I might have clicked a “get updates” button).

                            On the other hand, anyone can browse the repo and see what packages are available at any time so the conclusion that Microsoft “can easily inject malicious packages via regular update and replace distro packages w/ their own manipulated ones” does seem a bit far-fetched.

                            1. 4

                              On the other hand, anyone can browse the repo and see what packages are available at any time so the conclusion that Microsoft “can easily inject malicious packages via regular update and replace distro packages w/ their own manipulated ones” does seem a bit far-fetched.

                              It’s not far-fetched, it’s a fact they can do it. And they could easily avoid getting caught by adding additional date/time, IP address and user-agent header filters to ensure only the target will get the updates via apt-get, for example via automated updates. Anyone else browsing the repo, or the target browsing the repo at another time or using a web browser would not see the replacement packages. To be sure, this requires malicious intent, which one might argue is “far fetched”, but the NSA has been known to pull such shenanigans as a matter of course.

                              Updates would show up in the apt logs, of course, but once installed, a malicious application could scrub local logs easily, as the post-install scripts run as root. This would be pretty hard to detect, let alone prove.

                              1. 3

                                Any package can add to /etc/apt/sources.list.d, though.

                                It would be nice if they said they were doing it, but it is possible to check before installing with “apt-file list <package>”.

                            1. 3

                              Would it have been that hard to use infer.fb.com?

                              1. 10

                                You’d think a tech company could get that right. What really ticks me off is how the banking and financial industry seems to find subdomains so intolerable. It seems like every bank expects you to just implicitly trust any domain with their name in it.

                                1. 3

                                  Or at least get the certificate right.

                              1. 1

                                Can anyone explain in layman’s terms how Google can take the GPL-licensed Linux kernel, build some stuff on top of it, and have that stuff be proprietary? How does this work? How can the stuff built on top be proprietary?

                                1. 14

                                  It’s the same thing as building proprietary software that runs on Linux. The GPL means that Google has to release any changes it makes to the Linux kernel under the GPL as well. Software that just runs on top of Linux/Android can be proprietary.

                                  1. 2

                                    It’s basically due to using the “Android” name, which is trademarked, and having the Google apps (Maps, Play Store, etc.) pre-installed. Forking Android and calling it something else is just fine.

                                  1. 1

                                    Windows 64-bit link is broken.

                                    1. 1

                                      I couldn’t find any mentions of the 64-bit version so I had to look on their download site to grab one.

                                    1. 2

                                      It sounds to me like they are deprecating all server services and probably preparing to merge macOS Server into macOS so they’ll have just one computer OS. Am I missing anything?

                                      1. 5

                                        Ever since Lion there hasn’t been a separate macOS Server version. That’s when the Server app first appeared, installing what had previously been part of the server OS.

                                        Over time they removed more and more features from that though, so now all that’s left is OpenDirectory (their LDAP/Kerberos AD equivalent) and their MDM solution.

                                        1. 2

                                          It now makes sense why this seems like such a dramatic change to me, as the last time I worked with macOS Server was back in the Tiger days. Thank you for clearing things up!

                                      1. 8

                                        After reading this post I went back to my almost-dead blog, confident that it will be quite lightweight. The Firefox network tab showed (with cache disabled):

                                        • 13 requests
                                        • 360KB transferred (1.00 MB after expanding the gzipped files)
                                        • Finish: 2.21 seconds.
                                        • DOMContentLoaded: 829ms
                                        • load: 1.76s

                                        I was quite puzzled, as this is a static website based on a rather simple Jekyll template. It turns out that the “Like” button I added a while back generated 5 requests and 204 KB (or 795 KB after expanding) of the load. I was quite surprised by that and I’ll probably resurrect my blog just to remove that like button.

                                        1. 0

                                          Is it me, or does a 2x test/code ratio feel like a hilarious waste of time, or not nearly enough?

                                          1. 2

                                            I thought we had a normal ratio for a Ruby/Rails application that has thorough test coverage, but I’d love to hear from others on what their ratio looks like. We didn’t aim to have a certain ratio or anything. We write tests as we’re writing code, and that’s what we ended up with. Lines of code isn’t a consistent metric across different projects since style and conventions come into play, but I thought it was something I could share to convey the size of our application.

                                            1. 2

                                              The more consistent our architecture became, the more we were able to leverage integration tests, i.e. end to end assertions of the public side effects resulting from a single use case, and have confidence in that use case’s correctness. The average use case simply had far less “exciting” code, relied more on libraries, and far more that could be taken for granted. Unit tests are emphasized in our library code (e.g. internal gems) or in anything “interesting”, which is taken to mean any logic hairy enough that it doesn’t just fall out of air with our bog-standard architecture.

                                              Certain policy and permission classes, many of which boil down to a comparison operator or two, are also unit tested by default in pursuit of “tests as documentation” rather than assertion of correctness.

                                              I haven’t run the stats in a while but if you ignore view code (any .erb files, CSS and our JavaScript, whose tests are nonexistent or in shambles), we are sitting at 1:1 or a bit lower in our main monolith. In my conversations with others, I would not say that your 2:1 is out of the ordinary, though anything higher might make me raise an eyebrow.

                                              At some point, if you’re being so thorough that you have in excess of a 2:1 test:domain code ratio, you either have a very hairy, naturally complex business domain (and my eyebrow lowers), or you should look into property testing/generative testing.

                                            2. 1

                                              I think it depends tremendously on how you write tests.

                                              EG: If your tests are mostly high-level integration tests it seems quite high; if they’re mostly low-level unit tests it seems low.

                                              Similarly, if you’re using a terse style (e.g. via DSL / metaprogramming) it seems high; if you’re using a verbose style (e.g. the recommended rspec approach) I’d say you’ve mostly tested the ‘happy path’.

                                              1. 1

                                                Right, that’s my point. I was also speaking in general, not specifically about a Rails application. I don’t do Rails any longer, but when I did, the codebase was more like 5-8x tests to code. And of course, in Scala, the type system – for all of my reservations about complexity – allows a much leaner ratio.

                                              2. 1

                                                If you’re really serious about testing, especially more end-to-end stuff and interacting with the front-end, it’s pretty easy to end up needing 10 lines in a test to check a feature that only required 2 or 3 to implement.

                                                There’s some cross-checking too, of course, but if writing 3x as much code means basically no regressions, that’s a decent deal.

                                                1. 1

                                                  It depends a lot on the code. For example, in my current codebase I have something like:

                                                  Maybe(resource).fmap(&:author).fmap { |n| n[:full_name] }.or(Some('Somebody'))
                                                  

                                                  This is part of a method that gets a hash that, ideally, looks like this:

                                                  {
                                                    author: {
                                                      full_name: 'John Doe'
                                                    }
                                                  }
                                                  

                                                  However, there might be some cases where the hash is nil or the hash does not contain the expected keys. So for a line (actually two, since the line is too long for my taste) of code you easily end up with four test cases (hash is nil, hash does not have the :author key, the :author key returns a hash without the :full_name key and the ideal scenario where all the data is present).
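
                                                  A self-contained sketch of those four cases using dry-monads (author_name is an invented wrapper, and it uses hash access for both hops instead of &:author):

                                                  require 'dry/monads'
                                                  include Dry::Monads[:maybe]

                                                  # Maybe#fmap coerces a nil result to None, so the chain
                                                  # falls through to the default whenever a key is missing.
                                                  def author_name(resource)
                                                    Maybe(resource)
                                                      .fmap { |r| r[:author] }
                                                      .fmap { |a| a[:full_name] }
                                                      .or(Some('Somebody'))
                                                      .value!
                                                  end

                                                  author_name(nil)                               # => "Somebody"
                                                  author_name({})                                # => "Somebody"
                                                  author_name(author: {})                        # => "Somebody"
                                                  author_name(author: { full_name: 'John Doe' }) # => "John Doe"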

                                                  Then again, you most likely have a bunch of lines of code that are just doing simple things, like check if a property is set or something like that, where the test cases are a lot simpler so you might not end up with a 2x test/code ratio.

                                                  1. 2

                                                    I have no idea what your domain/code-culture is, but if you just want something short, maybe plain Ruby is enough? :

                                                    Hash(resource).dig(:author, :full_name) || 'Somebody'
                                                    

                                                    Your short code example looks like a mix of Ruby with Rust, or Haskell monads. Yet, I wonder what happens when resource is an Array. Does that Maybe function swallow the exception? It’s hard to bolt on types where there were none before! :)

                                                    1. 2

                                                      The library used is dry-monads, and if you pass an array you get a NoMethodError (undefined method ‘author’).

                                                      I agree that the dig method is more appropriate for a Ruby codebase, and in some places it was used instead of the Maybe monad. The reason we’re using the monad (and I was the one pushing it as team lead) is that those constructs are closer to constructs in other languages. One of my side goals is to enable people to explore other languages as much as possible, and my feeling is that this kind of code helps.

                                                1. 7

                                                  I’m not convinced that the current trend to put authentication info in local storage is entirely driven by the thought of being able to bypass the EU cookie banner thing. I think it’s more related to the fact that a lot of people are jumping on the JWT bandwagon and that you need to send that JWT over an Authorization header rather than the cookie header.

                                                  Also, often, the domain serving the API isn’t the domain the user connects to (nor even a single service in many cases), so you might not even have access to a cookie to send to the API.

                                                  However, I totally agree with the article that storing security-sensitive things in local storage is a very bad idea and that HttpOnly cookies would be a better idea. But current architecture best practice (stateless JWT tokens, microservices across domains) makes them impractical.

                                                  1. 4

                                                    Hey! You are correct in that this isn’t the main reason people are doing this – but I’ve spoken to numerous people who are doing this as a workaround because of the legislation, which is why I wrote the article =/

                                                    I think one way of solving the issue you mention (cross-domain style stuff) is to use redirect based cookie auth. I’ve recently put together a talk which covers this in more details, but have yet to write up a proper article about it. It’s on my todo list: https://speakerdeck.com/rdegges/jwts-suck-and-are-stupid

                                                    1. 2

                                                      Ha! I absolutely agree with that slide deck of yours. It’s very hard to convince people though.

                                                      One more for your list: having JWTs valid for a relatively short amount of time while also providing a way to refresh them (like what you’d do with an OAuth refresh token) is tricky, and it practically requires a blacklist on the server, reintroducing state and defeating the one single advantage of JWTs (their statelessness, though of course you can have that with cookies too).
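
                                                      A sketch of that problem with the ruby-jwt gem (all names here are invented, and a real denylist would live in Redis or a database, i.e. server-side state all over again):

                                                      require 'jwt' # ruby-jwt gem
                                                      require 'set'
                                                      require 'securerandom'

                                                      SECRET   = 'dev-only-secret' # placeholder
                                                      DENYLIST = Set.new           # stand-in for Redis/DB: state is back

                                                      def issue_token(user_id)
                                                        payload = { sub: user_id, jti: SecureRandom.uuid,
                                                                    exp: Time.now.to_i + 15 * 60 } # valid for 15 minutes
                                                        JWT.encode(payload, SECRET, 'HS256')
                                                      end

                                                      def refresh(token)
                                                        payload, _header = JWT.decode(token, SECRET, true, algorithm: 'HS256')
                                                        raise 'token revoked' if DENYLIST.include?(payload['jti'])
                                                        DENYLIST << payload['jti'] # revoke the old token on rotation
                                                        issue_token(payload['sub'])
                                                      end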

                                                      JWTs to me feel like an overarchitectured solution to an already solved problem.

                                                      1. 1

                                                        There’s a third use case: services that sit behind an authentication gateway (like Kong), where, whenever a user makes an authenticated request, the JWT is injected by the gateway into the request headers and passed on to the corresponding service.

                                                        But yes, a lot of people are using $TECHNOLOGY just because it’s the latest trend and discard “older” approaches just because they are no longer new. Which is quite interesting, because today we’re seeing a resurgence of functional languages, which are quite old, but I digress.

                                                      2. 2

                                                        you need to send that JWT over an Authorization header rather than the cookie header.

                                                        Well, you don’t need to, but many systems require you to. It’s completely possible, although it breaks certain HTTP expectations, to use cookies for auth; it is after all quite an old technique.

                                                        1. 1

                                                          This is true – you could definitely store it in a cookie – but there’s basically no incentive to do so. EG: Instead just use a cryptographically signed session ID and get the same benefits with less overhead.

                                                          The other issue w/ storing JWTs in cookies is that cookies are limited to 4 KB of data, and JWTs often exceed that by their stateless nature (trying to shove as much data into the token as possible to remove state).

                                                        2. 1

                                                          Could you point me to some sort of explanation of why using localStorage is bad for security? Last time I looked at it, it seemed that there was no clear advantage to cookie based storage: http://blog.portswigger.net/2016/05/web-storage-lesser-evil-for-session.html

                                                          1. 2

                                                            Just as the article says: if you mark the session cookie as HttpOnly, then an XSS vulnerability will not allow the token to be exfiltrated by injected script code.
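
                                                            For reference, a sketch of what that looks like in a Rack app (the cookie name and value are placeholders):

                                                            require 'rack'

                                                            # HttpOnly keeps the cookie out of document.cookie, so injected
                                                            # scripts can't read it; Secure keeps it off plain HTTP.
                                                            response = Rack::Response.new
                                                            response.set_cookie('session_id',
                                                                                value:     'opaque-random-id', # placeholder
                                                                                httponly:  true,
                                                                                secure:    true,
                                                                                same_site: :lax)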

                                                            1. 1

                                                              Are we reading the same article? What I see is:

                                                              • “The HttpOnly flag is an almost useless XSS mitigation.”
                                                              • “[Web storage] conveys a huge security benefit, because it means the session tokens don’t act as an ambient authority”
                                                              • “This post is intended to argue that Web Storage is often a viable and secure alternative to cookies”

                                                              Anyway, I was just wondering if you have another source with a different conclusion, but if not, it’s OK.

                                                              1. 3

                                                                I disagree with the author of that article linked above. I’m currently typing out a full article to explain in more depth – far too long for comments.

                                                                The gist of it is: HttpOnly works fine at preventing token theft via XSS. The risk of storing session data in a cookie is far less than storing it in local storage, where the attack surface is greater. There are a number of smaller reasons as well.

                                                                1. 1

                                                                  Great, I would appreciate a link (or a Lobsters submission) when you’ve written it.

                                                        1. 2

                                                          How exactly does the EU think it can make people not sell to EU citizens if they have no local presence?

                                                          1. 4

                                                            I’m curious to see how POSIX-compliant the Windows terminal is; when I tried ssh with the Linux subsystem, the Windows terminal couldn’t display Mutt correctly, I think because it was struggling with curses’ heavy usage of escape sequences, but I’m not sure. I ended up having to use MobaXTerm, which is perhaps my least favorite piece of software I still end up using every day. Getting to switch to something lighter and with fewer attention-grabby bits all around my workspace would be excellent.

                                                            On that note, if anyone here knows any good, simple terminals for Windows (my dream is alacritty or st, but for Windows), I would love to hear about them!

                                                            1. 1

                                                                I’ve been using ConEmu for the last six months. While it’s not a POSIX-compliant terminal, I’ve found that it handles most terminal applications fairly well. I haven’t used it with Mutt, though, but Vim works just fine.

                                                            1. 3

                                                              I find it odd that the author does not say that he’s a Google employee, probably paid by Google to work on NetBSD. I say that because I can’t see many outsiders pushing code into Google’s repos, which kind of makes the NetBSD support semi-official.

                                                              1. 1

                                                                  Well, he does say that this is his own activity, paid for with his own money.

                                                                  Where he works is not relevant in that case, though he could have mentioned it.

                                                              1. 3

                                                                  I think Travis CI is rather expensive for a side project that’s not open source, as the cheapest paid plan is $69. On Heroku you can use the Nano plan for SemaphoreCI, which is free, and if you push more than 100 times per month you can upgrade to the Starter plan, which seems very similar to the Bootstrap plan on Travis.