1. 23

    Put me squarely in the don’t-understand-the-webcam-stickers camp. What’s on my screen is 99% more likely to be interesting than what’s in front of it. Like, why try to extort me with a video of me picking my nose when you can just remote-drive my browser and empty my bank account? And then there’s the whole microphone thing. It’s hard to imagine a threat model where webcam stickers are relevant.

    1. 48

      I was squarely in the same camp… until WebEx started my video on a call when I didn’t want it to, and a nice view of me (and my wife!) in bed wearing pyjamas (I was dialling in from 6 timezones ahead to listen to a town hall meeting) was projected on the wall for everyone to enjoy.

      I’m not worried about evil malware, I’m worried about WebEx ;-)

      1. 19

        I had that happen with google hangouts while I was listening to a call on the toilet. That was a bad moment. With “continuous deployment” this is bound to happen unpredictably.

        1. 9

          Yup, IMO badly-written conference calling software is a much more realistic and everyday threat than teh evil hackerz. WebEx and Hangouts and other systems seem to be constantly changing their UI and behavior, yet always seem to really want to broadcast video. And then sometimes pop up some other modal dialog blocking the buttons to stop it. It’s worth it IMO to definitely never ever send out video unless I’ve explicitly okayed it first, no matter what some marketing manager thinks would help them increase their engagement by 1%.

          1. 6

            I was fortunate enough to be dressed when it happened to me. Conference software has the worst defaults.

          2. 42

            My threat model isn’t malicious attackers as much as incompetence. I use webcam covers in case (1) a program that I trust has some mindblowing lapse in competence and turns on my webcam unexpectedly or (2) I fat-finger a video call button without noticing.

            1. 4

              Ah, I hadn’t thought about that too much since I rarely use such software. Also, I think this thread is the first time I’ve seen someone mention that. It’s always the evil hackers that get blamed instead.

              1. 2

                Do you use a webcam cover on your smartphone too?

                1. 5

                  I removed the front camera in my phone. It was useless for me, and I didn’t like the idea of never knowing if some app was using it.

                2. 1

                  But what’s the big difference compared to disabling the webcam in your BIOS settings?

                  1. 18

                    With a piece of electrical tape over the camera I can “re-enable” it in seconds without rebooting for the times when I do actually need it. Disabling it in the BIOS is a good option if you know you’ll really never need it though.

                    1. 1

                      Fair enough, but for someone who never needs it, this doesn’t really change a lot…

                    2. 14

                      Stickers/covers are simple in every aspect of their operation.

                      1. 4

                        Exactly! Most people’s understanding of stickers/covers allows them a fairly high degree of confidence that it’s working. You can hold the sticker up to the light to confirm that it’s opaque to visible light and you can see that it covers the lens. You can also run a camera application to see what it can see. By comparison, it is incredibly difficult to confirm that a BIOS setting does what it says it does.

                  2. 14

                    It’s hard to imagine a threat model where webcam stickers are relevant.

                    Porn and whacking off to it. I believe one Black Mirror episode was centered on that. I think blackmail on such footage is a credible threat even if you’re not into kinky/illegal stuff. And even if not anything as sleazy as that, there’s something quite disturbing in a random person essentially being inside your house looking around with you having no clue about it.

                    1. 4

                      The real threat seems to be people worried about the threat, given all the “I caught you visiting a naughty site, you know which one, pay me bitcoins” spam I get.

                      1. 4

                        You do understand that there’s a pretty big difference between the two situations, right? Someone leaking that you visited a naughty site isn’t really comparable to someone leaking pictures or video of you.

                        1. 0

                          The scam threat obviously includes “I hacked your webcam” blah blah. Sorry for not posting the entire spam here.

                          1. 1

                            Right, that makes sense. I’ve never actually read such a spam e-mail; if I get any, they just end up caught in the spam filter.

                            You would presumably take the threat more seriously if someone contacted you with some actual proof, such as showing an actual image of you naked taken from your webcam?

                            1. 1

                              I’ve had this email a few times, and they spoof the sender address to make it look like it came from your own email address. This at least gives the illusion of them having hacked you specifically.

                        2. 2

                          In much of the country, getting caught viewing porn can hurt your career or your ability to run for office. It’s hypocritical, given that lots of people in those same areas watch porn. It’s a reality, though. The same goes for lots of other habits or alternate lifestyles a camera might reveal.

                          1. 4

                            In some countries, any consumption of anything deemed immoral can have even more devastating consequences. I know a guy from a small Persian Gulf country — a son of a late imam too — who was scammed for a few thousand euros recently by a con-artist he found on Grindr.

                            Losing a few thousand euros is not the harsh consequence in this scenario.

                      2. 10

                        I mostly agree, but I don’t think you should need to choose. I’d prefer hardware switches for the microphone, webcam and wireless, plus allowing only whitelisted HID device instances to be active.

                        As I see it, Microsoft and (even more so) Apple have started to realize that there is user demand for more privacy. The next Windows update will notify you when there is an active microphone recording going on, for example. I think this is not a bad direction, but it’s too little, too late for my taste.

                        Also, I think it is a design flaw that in current Windows versions it is still so simple to globally register every keystroke, and that Windows UWP and Android bundle so many capabilities into groups, yet you still have to grant an app a capability in advance, and either for now or forever, rather than per use…

                        I don’t have much experience with Apple products.

                        Edit: regarding webcams:

                        You need to take into account that the line between digital and physical life is getting thinner and blurrier. I often leave my machine running when I leave home, as it is energy efficient, and I might need to log in remotely, or a download may be running in the background. A malicious actor could get information about my physical whereabouts, or about an opportunity for home invasion, should they deem it profitable.

                        1. 1

                          started to realize

                          This is hardly new. Apple’s 2003 external webcam model, the iSight, included a manual iris shutter/switch that rotated to both disable the device and physically obstruct the camera. Fashions change.

                        2. 4

                          There’s a mix of bad things people are doing right now and some things they could do with it that they’ll figure out eventually. I’m not writing about the latter since I prefer them to be delayed.

                          For now, I’m for being able to totally disable inputs, specific wireless, etc for a simple reason: no access by default until it needs it (POLA). No power by default until it needs it, if available. I can try to guess every bad thing that can happen with risky peripherals. Or I can just shut them down when not using them. Covering my webcam is an easy way to shut down its vision. My old laptop had a wireless switch, too. My old speakers didn’t act up when I had to turn something down quickly, since the knobs actually worked. Killed power with the last turn.

                          On a related note, I also buy old, dumb appliances without smart anything. They also last longer, are cheaper, and have no smart anything for people to hack. If there’s a risk from hackers, just eliminate it where it’s easy. Then, don’t think about it again.

                          1. 3

                            Funny, I had never even thought of tape over the webcam as a security measure.

                            For me it’s entirely there to make sure I’m not on camera when I join meetings unless I explicitly want to be.

                            1. 1

                               If you buy a new laptop, there is no choice between with or without a webcam. I don’t need it and never use it, ergo I put a sticker on the camera: a simple and pragmatic solution.

                              1. 1

                                 Well, someone can take over your bank account and take your photo.

                              1. 5

                                While this article looks at safety by analysing outcomes in a medical context, I think a lot of the thinking in there could be ported over to running the kind of software systems that many of us here are responsible for.

                                  The core idea resonated really strongly with me. We stand to learn a lot from systems that are quietly successful, rather than focusing mostly on how we fixed the ones that loudly failed.

                                It also spoke to an idea that I agree with strongly: approaches that think everything can be solved simply by adding another process or rule for people to follow doom us to the same sub-par outcomes we know today. Or as phrased more eloquently in the article:

                                you cannot inspect safety or quality into a process: the people who do the process create safety

                                1. 2

                                  It also suggests that policies don’t create our successes, which is probably not what most people want to hear.

                                1. 37

                                      What about dependencies? If you use Python or Ruby, you’re going to have to install them on the server.

                                  How much of the appeal of containerization can be boiled directly down to Python/Ruby being catastrophically bad at handling deploying an application and all its dependencies together?

                                  1. 6

                                    I feel like this is an underrated point: compiling something down to a static binary and just plopping it on a server seems pretty straightforward. The arguments about upgrades and security and whatnot fail for source-based packages anyway (looking at you, npm).

                                    1. 10

                                      It doesn’t really need to be a static binary; if you have a self-contained tarball the extra step of tar xzf really isn’t so bad. It just needs to not be the mess of bundler/virtualenv/whatever.

                                      1. 1

                                        mess of bundler/virtualenv/whatever

                                        virtualenv though is all about producing a self-contained directory that you can make a tarball of??

                                        1. 4

                                          Kind of. It has to be untarred to a directory with precisely the same name or it won’t work. And hilariously enough, the --relocatable flag just plain doesn’t work.

                                          1. 2

                                            The thing that trips me up is that it requires a shell to work. I end up fighting with systemd to “activate” the VirtualEnv because I can’t make source bin/activate work inside a bash -c invocation, or I can’t figure out if it’s in the right working directory, or something seemingly mundane like that.

                                            And god forbid I should ever forget to activate it and Pip spews stuff all over my system. Then I have no idea what I can clean up and what’s depended on by something else/managed by dpkg/etc.

                                            1. 4

                                              No, you don’t need to activate the environment; this is a misconception I also had before. Instead, you can simply call venv/bin/python script.py or venv/bin/pip install foo, which is what I’m doing now.

                                            2. 1

                                              This is only half of the story because you still need a recent/compatible python interpreter on the target server.

                                          2. 8

                                            This is 90% of what I like about working with golang.
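
                                                To make that concrete, here’s a minimal sketch of the deployment story (the binary name and server path are made up):

                                                ```go
                                                // The whole application compiles to a single self-contained binary,
                                                // so "deployment" is one build and one copy, e.g.:
                                                //
                                                //   CGO_ENABLED=0 GOOS=linux go build -o myapp .
                                                //   scp myapp server:/usr/local/bin/myapp
                                                package main

                                                import "fmt"

                                                // banner is the startup message, split out so it's trivially testable.
                                                func banner() string {
                                                    return "no interpreter, no bundler, no virtualenv needed"
                                                }

                                                func main() {
                                                    fmt.Println(banner())
                                                }
                                                ```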

                                            1. 1

                                              Sorry, I’m a little lost on what you’re saying about source-based packages. Can you expand?

                                              1. 2

                                                The arguments I’ve seen against static linking are things like: you’ll get security updates through shared dynamic libs, or the size will be gigantic because you’re including all your dependencies in the binary. But with node_modules or bundler etc you end up with the exact same thing anyway.

                                                Not digging on that mode, just that it has the same downsides of static linking, without the ease of deployment upsides.

                                                EDIT: full disclosure I’m a devops newb, and would much prefer software never left my development machine :D

                                                1. 3

                                                  and would much prefer software never left my development machine

                                                  Oh god that would be great.

                                            2. 2

                                              It was most of the reason we started using containers at work a couple of years back.

                                              1. 2

                                                    Working with large C++ services (for example in image processing with OpenCV/FFmpeg/…) is also a pain in the ass because of dynamic library dependencies. You start to fight with package versions, and each time you want to upgrade anything you’re in a constant struggle.

                                                1. 1

                                                  FFmpeg

                                                  And if you’re unlucky and your distro is affected by the libav fiasco, good luck.

                                                2. 2

                                                      Yeah, dependency locking wasn’t a (popular) thing in the Python world until pipenv, but honestly I’ve never had any problems with… any language package manager.

                                                  I guess some of the appeal can be boiled down to depending on system-level libraries like imagemagick and whatnot.

                                                  1. 3

                                                    Dependency locking really isn’t a sufficient solution. Firstly, you almost certainly don’t want your production machines all going out and grabbing their dependencies from the internet. And second, as soon as you use e.g. a python module with a C extension you need to pull in all sorts of development tooling that can’t even be expressed in the pipfile or whatever it is.

                                                  2. 1

                                                        You can add Node.js to that list.

                                                    1. 1

                                                      A Node.js app, including node_modules, can be tarred up locally, transferred to a server, and untarred, and it will generally work fine no matter where you put it (assuming the Node version on the server is close enough to what you’re using locally). Node/npm does what VirtualEnv does, but by default. (Note if you have native modules you’ll need to npm rebuild but that’s pretty easy too… usually.)

                                                          I will freely admit that npm has other problems, but I think this aspect is actually a strength. Personally I just npm install -g my deployments, which is also pretty nice: everything is self-contained except for a symlink in /usr/bin. I can certainly understand not wanting to do that in a more formal production environment, but for just my personal server it usually works great.

                                                    2. 1

                                                      Absolutely but it’s not just Ruby/Python. Custom RPM/DEB packages are ridiculously obtuse and difficult to build and distribute. fpm is the only tool that makes it possible. Dockerfiles and images are a breeze by comparison.

                                                    1. 2

                                                      Relatedly: the 8 year outstanding bug to make Net::HTTP handle character encodings at all.

                                                      1. 3

                                                        One of the GoCardless SREs here.

                                                        Happy to discuss anything and answer questions, though I’m going to bed in the next hour!

                                                        1. 2
                                                          1. What’s one lesson you learned from this incident that would be useful to share with developers who are not SREs?
                                                          2. What’s one assumption you had challenged?
                                                          1. 5
                                                            1. Cold code is broken code. Code that only exists to handle failure is most susceptible to this. A more common example is an infrequently run (e.g. monthly) cron job. In many cases I’d prefer it to run daily, even if it has to no-op at the last second, so that more of the code is exercised more often. Better still, in some cases it could do its work incrementally! Either way is better than having the job fail on the day it really has to run.
                                                            2. Our ability to manually take actions that have been handled by automation for a long time. Turns out that’s not so good, and prolonged the incident even after we’d decided to bring Postgres up by hand.
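
                                                                  To make the cron point concrete, here’s roughly the shape I mean (all names here are hypothetical):

                                                                  ```go
                                                                  package main

                                                                  import (
                                                                      "fmt"
                                                                      "time"
                                                                  )

                                                                  // runJob runs every day, exercising all the shared logic, and only
                                                                  // commits its real work on the first of the month. A bug in the
                                                                  // shared path then surfaces within a day instead of on the one day
                                                                  // that matters.
                                                                  func runJob(now time.Time) string {
                                                                      report := buildReport(now) // exercised daily
                                                                      if now.Day() != 1 {
                                                                          return "no-op: " + report // last-second no-op on non-run days
                                                                      }
                                                                      return "sent: " + report
                                                                  }

                                                                  func buildReport(now time.Time) string {
                                                                      return "report for " + now.Format("2006-01")
                                                                  }

                                                                  func main() {
                                                                      fmt.Println(runJob(time.Date(2017, 10, 1, 0, 0, 0, 0, time.UTC)))
                                                                      fmt.Println(runJob(time.Date(2017, 10, 2, 0, 0, 0, 0, time.UTC)))
                                                                  }
                                                                  ```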
                                                        1. 2

                                                          The article’s claim seems bold. Could you not apply this quote to Prolog?

                                                          Instead, we specify some constraints on the behavior of a desirable program (e.g., a dataset of input output pairs of examples) and use the computational resources at our disposal to search the program space for a program that satisfies the constraints.

                                                          1. 4

                                                                      Yeah, one of the more common criticisms of this article floating around is that it seems unaware that large parts of the idea are not new. That’s a bit odd, since Karpathy is a smart and well-read guy, so maybe he just left out the “related work” section for punchiness and rhetorical effect. But since the whole claim is that this is a totally new way of looking at software, it makes for a weird read.

                                                            Prolog itself doesn’t do exactly that; with standard logic programming, you encode the logic directly by writing clauses, rather than giving input/output examples. But inductive logic programming is a version that does; you give it input/output examples and it induces the program’s clauses. There’s also genetic programming as a somewhat better-known set of techniques. As well as program synthesis, a more formal-methods take on it.

                                                            The real new part is the implicit claim that, essentially, “it works now”. GP is notoriously difficult to get anything useful out of, and clearly Karpathy thinks NN-based program induction won’t suffer the same fate. But that to a large extent remains to be seen…

                                                            1. 1

                                                              Gonna blame tiredness for that one. The bit in parens in my quote definitely doesn’t fit Prolog! The part I’m contesting is the idea that specifying programs in terms of constraints and relying on computers to explore a program space is new.

                                                          1. 1

                                                            Projects have their own goals, and I don’t see why those should be dictated by distros.

                                                            I’m very much in favour of projects setting out their approach to support in a way that works for them. Ultimately, if $distro wants to maintain an ancient version of your work indefinitely, then good luck to them.

                                                            One project I’m involved in has a take on this which boils down to “we work on all versions of Ruby and Rails still in security support by upstream”. It felt like a reasonable trade-off to make, considering the finite amount of time we have to work on it.

                                                            1. 4

                                                              The hard part is gonna be balancing my time between the two!

                                                              1. 1

                                                                I think with the newer Macs you can use Touch ID to protect keychain entries. Combine the two and you’re getting closer to the security level of the separate hardware key!

                                                                1. 5

                                                                  The behaviour is really surprising when you’ve not run into it before (and hard to reason about even when you have).

                                                                  The title is super clickbaity though. I can’t think of a single mainstream relational database that defaults to serialisable transactions.

                                                                  1. 1

                                                                    It may slip under the radar for many, but I have so much <3 for this commit mentioned in the article.

                                                                    I have a pretty strong preference for handling failover at Layer 7 rather than Layer 2/3 (i.e. with virtual IPs). This change makes that way easier!

                                                                    1. 6

                                                                      I’m starting to find it odd when a service with 2FA doesn’t offer TOTP as the main option.

                                                                      It’s widely supported. You don’t need a bunch of different physical tokens/separate apps to authenticate. It’s more secure than SMS.

                                                                      1. 3

                                                                        Most embarrassing is the fact that PayPal still only offers SMS. Their 2FA messages are often delayed or dropped, too.

                                                                      1. 1

                                                                        I think deadlines passed with every I/O (including lock acquisition) are the only way out of this.

                                                                        https://golang.org/pkg/context/ is the only time I’ve seen it supported at a language level.
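
                                                                                        For anyone who hasn’t used it, here’s a minimal sketch of what that buys you (slowOp stands in for any I/O call):

                                                                                        ```go
                                                                                        package main

                                                                                        import (
                                                                                            "context"
                                                                                            "fmt"
                                                                                            "time"
                                                                                        )

                                                                                        // slowOp simulates an I/O call that honours a context deadline: it
                                                                                        // returns early with the context's error once the deadline passes.
                                                                                        func slowOp(ctx context.Context, d time.Duration) error {
                                                                                            select {
                                                                                            case <-time.After(d):
                                                                                                return nil // finished in time
                                                                                            case <-ctx.Done():
                                                                                                return ctx.Err() // deadline exceeded or cancelled
                                                                                            }
                                                                                        }

                                                                                        func main() {
                                                                                            // Every caller can bound how long it is willing to wait.
                                                                                            ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
                                                                                            defer cancel()

                                                                                            fmt.Println(slowOp(ctx, time.Second)) // prints "context deadline exceeded"
                                                                                        }
                                                                                        ```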

                                                                        1. 2

                                                                          Not really clear to me what the author means. Should everyone just use Spanner? There isn’t anything else out there like Spanner (although CockroachDB is trying).

                                                                          1. 2

                                                                            I don’t think there’s a production-ready equivalent that you can run yourself (closed or open source).

                                                                            FoundationDB had a bunch of the guarantees, minus the SQL interface. Then Apple bought it and shut it down right away (side note: how terrifying is the idea of your database software no longer being available?).

                                                                            CockroachDB doesn’t seem quite there yet. I really want it to be.

                                                                          1. 1

                                                                            Reminder that nothing is tradeoff-free.

                                                                            Reminder that you’ll have to structure your data a certain way not to run into a throughput wall (true of many databases).

                                                                            Reminder to read the Spanner paper to find these things out.

                                                                            That said, Spanner seems dope.

                                                                            1. 2

                                                                                I am considering using PostgreSQL for a project, and the only thing that concerns me about it is the upgrade story. As someone who comes from using distributed DBs where zero-downtime upgrades are the norm, several months of effort to do a PostgreSQL upgrade seems unacceptable.

                                                                              Does anyone know if there are any plans to make this better?

                                                                              1. 2

                                                                                Random finding in my twitter feed after reading your comment: http://www.slideshare.net/dataloop/zero-downtime-postgres-upgrades

                                                                                1. 1

                                                                                  Interesting. Unfortunately it still seems quite a bit more complicated.

                                                                                  1. 3

                                                                                    Author of the talk here, it is. I think Postgres has a long way to go on upgrades and clustering.

                                                                                    Since it’s not linked from that SlideShare page (and that page is controlled by the meetup hosts), here’s the video.

                                                                              1. 4

                                                                                The clickbait version of my opinion.

                                                                                The nuanced version of my opinion.

                                                                                I’d like to specifically call out the thing I talk about right near the end of the post. If you already have good automation around building VM images and blue-green deploys, containers probably don’t give you anything worthwhile (caveats: ease of using the same setup for development, machine utilisation).

                                                                                1. 4

                                                                                  I didn’t realize they had to practically build an emulator for the NES/6502 just to read a sound file. Wow. At the beginning, the author says the exploit activates without even playing the file, and offers to explain that later. I must be overlooking the explanation. Why does it execute without opening the file?

                                                                                  1. 7

                                                                                    It was briefly touched upon in one of the bullet points about attack vectors (it was seemingly unrelated, so you may have skimmed and missed it):

                                                                                    When the Downloads folder is later viewed in a file manager such as nautilus, an attempt is made to auto thumbnail files with known suffixes (so again, call the NSF exploit something.mp3). The exploit works against the thumbnailer.

                                                                                    1. 4

                                                                                      Appreciate it! Makes me smile, as I disabled thumbnails on most systems, worried a parsing attack would happen at some point. I think such attacks already happened on Windows, but I can’t recall for sure. A general principle of mine is that I want to control when something dangerous happens. Specifically: safe by default, with me consciously making the decision to do something risky and being aware of it.

                                                                                      1. 3

                                                                                        Absolutely. This is a close relative to the “autorun” exploit on older Windows versions where it would execute whatever was defined in a removable disk/drive’s root “autorun.inf” file.

                                                                                        Ubuntu core maintainers should be aware of this type of attack against thumbnailers, as there’s a ticket open for sandboxing thumbnailers (“gnome thumbnailers should have an apparmor profile”): https://bugs.launchpad.net/ubuntu/+source/totem/+bug/715874

                                                                                        But no meaningful progress has been made to address the ticket apart from a PoC from 2011.

                                                                                        1. 3

                                                                                          Makes me smile as I disabled thumbnails on most systems worried a parsing attack would happen at some point.

                                                                                          http://anti-reversing.com/Downloads/HES_2011_Presentations/USB%20Autorun%20attacks%20against%20Linux%20-%20Jon%20Larimer.pdf

                                                                                          1. 1

                                                                                            Nice write-up. Yeah, all kinds of issues apparently.

                                                                                          2. 2

                                                                                            The whole “parsing is one of the riskiest things we do” thing only hit home for me recently, when I read the qmail paper (PDF).

                                                                                            In this case, the huge number of different parsers a file browser may decide to invoke is pretty damn scary!

                                                                                            1. 4

                                                                                              Indeed. And if you think about the number of frameworks and applications that make use of file(1), either directly or indirectly, to determine file types, you’d never sleep at night… OpenBSD’s implementation has been privilege separated since 5.8.

                                                                                              1. 2

                                                                                                That was a great paper. The people publishing the most on parser and protocol issues at language level are LANGSEC:

                                                                                                http://www.langsec.org/

                                                                                        1. 3

                                                                                          The problem is in, however, how those images get produced. Take https://github.com/CentOS/CentOS-Do... for example, from the official CentOS Dockerfile repository. What’s wrong with this? IT’S DOWNLOADING ARBITRARY CODE OVER HTTP!!!

                                                                                          What’s wrong with auditing the Dockerfile? Seems to me Docker is a lot more transparent than other methods. Thoughts?

                                                                                          1. 5

                                                                                            It’s nice that you can audit them, but they’re all written like this. Docker claims it can be used for reproducible builds, but the first lines in every single Dockerfile are apt-get install a-whole-bunch-of-crap and npm/pip/gem install oh-my-god-thats-a-lot-of-packages. Nobody is actually trying to manage their dependencies or develop self contained codebases, just crossing their fingers and hoping upstream doesn’t break anything.

                                                                                            1. 1

                                                                                              How is this different from build systems that don’t use Docker? Sure, you might be using Jenkins to build stuff (and have to manage those hosts for the OS-level packages), but for the npm/pip/gem/jar dependencies there’s no difference: you still have to manage them. In my experience, Docker helps with the OS-level packages (previously we had multiple Jenkins hosts with versions of things specific to projects – god help you if you accidentally built your project on the wrong host).

                                                                                              1. 4

                                                                                                I use maven, where the release plugin enforces that releases only depend on releases, and releases are immutable, which together means that builds are reproducible (unless someone used version ranges, but the culture is to not do that). You can also specify the GPG keys to check signatures against for each dependency. It’s not the default configuration and there’s a bootstrapping problem (you’d better make sure the version of the gpg plugin cached on your Jenkins machine is one that actually checks), but it’s doable.

                                                                                                1. 1

                                                                                                  On personal projects and at work I’ve been putting all the dependencies I use in the source repository. Usually we include the source code, for build tools (premake, clang-format) we add binaries to the repo instead.

                                                                                                  There are never any surprises from upstream, and you can build the code on any machine that has git and a C++ compiler.

                                                                                                  There’s some friction adding a new library but I don’t think that’s a bad thing. If a dependency is really too difficult to integrate with our build system then the code is probably going to be difficult too. If we need to do something easy people will write it themselves.

                                                                                              2. 1

                                                                                                At the risk of stating the obvious: if you audit the Dockerfile and it says “hey we downloaded this thing over HTTP and never checked the signature” there’s no way to tell if you got MITMed.

                                                                                                1. 3

                                                                                                  Okay, so then you use another Dockerfile (or write your own). This is a very strange tack to take; you may as well say that Rust is an insecure programming language because with a few lines of code you can create a trivial RCE vulnerability (open a listener socket, accept a connection, read a line, spawn a shell command).

                                                                                                  For what it’s worth, almost every Dockerfile I’ve used installs its dependencies using something like apt or yum/rpm – and signatures are checked! And when installing via apt isn’t an option, Docker doesn’t keep you from doing the right thing (download over https, check signatures). You’re just running shell commands, after all.
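                                                                                                  To make that concrete, here is a minimal sketch (the helper name and URL are hypothetical, not from any real Dockerfile) of what “doing the right thing” in a RUN step can look like: fetch over HTTPS and refuse to proceed unless the artifact matches a digest pinned next to the instruction.

                                                                                                  ```shell
                                                                                                  # Hypothetical helper for a Dockerfile RUN step: download an artifact
                                                                                                  # and verify it against a sha256 digest pinned in the Dockerfile itself.
                                                                                                  fetch_verified() {
                                                                                                    url="$1"; sha256="$2"; out="$3"
                                                                                                    curl -fsSLo "$out" "$url" || return 1
                                                                                                    # sha256sum -c expects "<digest>  <path>" (two spaces); it exits
                                                                                                    # non-zero on mismatch, which aborts the image build.
                                                                                                    echo "${sha256}  ${out}" | sha256sum -c - || { rm -f "$out"; return 1; }
                                                                                                  }

                                                                                                  # In a Dockerfile this would be something like:
                                                                                                  #   RUN fetch_verified https://example.com/tool-1.2.tar.gz "<pinned digest>" /tmp/tool.tar.gz \
                                                                                                  #    && tar -xzf /tmp/tool.tar.gz -C /usr/local
                                                                                                  ```

                                                                                                  A tampered or truncated download then fails the build instead of silently shipping.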

                                                                                                  1. 1

                                                                                                    My point exactly. There’s nothing wrong with taking an existing Dockerfile that you find to be suspect, beefing it up by correcting some obvious security issues, and resubmitting it as a patch.

                                                                                                    I fail to see what the author of the article thinks is a better alternative. I’m open to being convinced otherwise, but calling Docker actively harmful seems overstated.

                                                                                                    1. 1

                                                                                                      For what it’s worth, almost every Dockerfile I’ve used installs its dependencies using something like apt or yum/rpm – and signatures are checked!

                                                                                                      OK, so the signatures are checked. You still don’t know what version you got.

                                                                                                      1. 4

                                                                                                        Then pin the damned versions (apt-get install <pkg>=<version>), point at snapshot repos, and upgrade deliberately. This problem is totally orthogonal to Docker; all the typical package repos suffer from it. Nix is the only one I know of that doesn’t.
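                                                                                                        As a sketch of what that looks like in a Dockerfile (the snapshot mirror URL and every version string below are placeholders, not real pins):

                                                                                                        ```dockerfile
                                                                                                        # Illustrative only: package versions and the snapshot mirror are made up.
                                                                                                        FROM ubuntu:18.04

                                                                                                        # Point apt at a frozen snapshot so "apt-get update" sees the same
                                                                                                        # package index on every build (hypothetical mirror URL).
                                                                                                        RUN echo "deb https://snapshot.example.com/ubuntu/20180601 bionic main" \
                                                                                                              > /etc/apt/sources.list \
                                                                                                         && apt-get update \
                                                                                                         && apt-get install -y --no-install-recommends \
                                                                                                              nginx=1.14.0-0ubuntu1 \
                                                                                                              curl=7.58.0-2ubuntu3
                                                                                                        ```

                                                                                                        Upgrades then happen by editing the pins and rebuilding, i.e. deliberately.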

                                                                                                    2. 1

                                                                                                      This is why companies that care host their own registry for Docker images, just like they’ve done for Java, Python, Ruby, etc., for years. It is unfortunate that Docker didn’t design the registry system to be easily proxied, but current registry tools (Artifactory, for one) work around that.

                                                                                                  1. 9

                                                                                                    I find this whole issue really interesting, and this post is really acutely timed for me, thanks for putting it up.

                                                                                                    Early trials of Docker put me right off, but I’ve dug into the workstation client recently and I’ve been really pleasantly surprised. Seems a nice, simple way of running jail-like envs with nice isolation, which could most likely replace Vagrant in my workflow - if the deployment story is straight. But looking into that I find a bunch of stories like this, and this one is kind of the icing on the cake.

                                                                                                    Is there anyone here on lobste.rs who’s using Docker really successfully in deployment systems and can give an insight into this? What’s the deal, are you getting more or less downtime and hassle? Are you having to hack round things to get things running smoothly like the guy in this post suggests? Do the benefits it brings compensate sufficiently? How comparable is the amount of work you’ve had to do to get a stable Docker workflow in place with what you’d have had to do using another system?

                                                                                                    1. 6

                                                                                                      We’re using Docker in production at work, and not looking to back away from that decision.

                                                                                                      I’m not gonna sit here and say the original post is wrong - a lot of stuff in it is right. Yes, you need to write a script to clean out images (and it’ll be janky). Yes, something breaks in every release (the last two changed the output format of their syslog adapter, which was frustrating).

                                                                                                      Honestly though? It comes down to approach. If Docker doesn’t give you (or a group of people in your organisation) some clear benefits, don’t use it. That’s a cultural issue, not a technical one. If you do decide it’s worth it, then remember this quote from Julia Evans:

                                                                                                      You don’t just set up new software and expect it to magically work and solve all your problems—using new software is a process.

                                                                                                      Oh, side note: we don’t run our databases (or anything stateful) in containers, but never say never. Docker may not be the container system most suited to it, but I don’t think putting cgroups and namespaces up around a database process is an inherently bad idea.

                                                                                                      1. 5

                                                                                                        jail-like envs

                                                                                                        So, honest, honest question (please don’t tell me it’s just because duuuuuh, Linux users are stupid, hahahha, stupid Linux users)… why are we using Docker instead of BSD jails? I don’t really know much about either, but if jails are what people seem to think we should have used, why didn’t they become the popular option? The top Google hit I can find for this question is that Docker is not at all like BSD jails, without further explanation. So, someone out there thinks that Docker does something people need which BSD jails don’t do. What is that?

                                                                                                        And I doubt it is “runs on Linux”, because seeing how the kernel seems kind of incidental (you need a VM anyway to run Docker on Windows and macOS), there must be a deeper reason. Can someone who understands both jails and Docker well enough explain?

                                                                                                        1. 5

                                                                                                          Docker provides a lot of management mechanics over top of raw containerization (where by my understanding—having actually used neither—e.g. LXC is much closer to jails in terms of raw functionality). I’ve personally found the Docker features I’ve used to be handy, though I can’t speak to how robust, well-designed, or generally applicable any of them are. And I think “runs on Linux”, or more precisely, “runs Linux binaries”, is actually a killer feature: there’s a surprisingly large amount of proprietary server software for Linux exclusively out there, for which jails provide zero help. Once you’re using it to run your Linux binaries on your Linux servers, the ridiculous contortions to also run it on non-Linux systems almost make sense, from the perspective of maintaining a consistent interface.

                                                                                                          Also, Docker has a marketing department, which unfortunately almost always becomes the “killer feature” in a corporate environment.

                                                                                                        2. 1

                                                                                                          I’m late to the party here, but figured someone might still get value out of this: We use docker containers to send between 100 and 150 million emails a day, and to keep a few legacy applications together on some old hardware.

                                                                                                          It’s a solution that more or less works, but the ‘Docker’ bit is the least reliable part of the whole architecture (CentOS, Docker, postfix, custom scripts). Basic commands often fail and require cleanup (e.g. docker attach), and the docker daemon is a single point of failure.

                                                                                                          Networking and logging are more complicated and limited than I feel is necessary, and we don’t do anything with storage except for mounting postfix queue directories into the containers.

                                                                                                          Would we use it again? Maybe. Our devs say they like Docker, but I think they like the idea of containerization more than they like Docker itself. I don’t see any huge advantages over something like LXC or rkt. I actually came to Docker from LXC, expecting something significantly different or better, and was baffled by the hype and popularity.

                                                                                                          Although they’re architecturally different, I really like FreeBSD jails, especially with ZFS, nullfs, and other goodies that don’t exist on Linux. It seems like a much more solid base to build infrastructure on top of. See projects like cbsd (https://www.bsdstore.ru/en/about.html) if you want to see some crazy-cool ideas.