1. 4

    It’s a logic I’m starting to follow more and more (you can already see it on my blog and such, as I pretty much only have basic CSS).

    In fact, I’m rebuilding my CV from scratch with pretty much two CSS rules (for background and foreground colours), and I’ve never found an easier-to-maintain CV.
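    Such a stylesheet is almost small enough to quote in full; a sketch (the colour values here are invented for illustration, not taken from the CV):

    ```css
    /* Pretty much the whole stylesheet: set the two colours and
       let the browser's default styles handle everything else. */
    body {
      background: #fdf6e3;
      color: #222;
    }
    ```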

    1. 2

      For castling.club, I tried to not even set colors. I wonder how that works, in practice, for browsers that apply a theme? All it has is a translucent blue background for links, which I was hoping would work for both light and dark themes.

      In practice, the page is just black text on white for macOS. Maybe the dark style in 10.14 will affect it? I remember Epiphany on Linux followed the GTK theme back in the day, which actually broke a lot of sites that only set one of text or background color.

      1. 3

        A problem I found with not setting colours anywhere was the contrast. Just reducing the contrast by putting a text colour of #222 and a background colour of #ddd can help quite a bit with reading.

        I don’t know how one can handle “system theme support” or disable it in CSS though, if that’s even possible.
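        (For what it’s worth, CSS has since grown a hook for exactly this: the prefers-color-scheme media query lets a page follow the system theme, while pages that set explicit colours and omit it are generally left alone. A sketch, with illustrative colour values:)

        ```css
        /* Reduced-contrast defaults, as described above. */
        body {
          background: #ddd;
          color: #222;
        }

        /* Follow the system theme: swap the colours when the user
           prefers dark. Leaving this block out keeps the light
           colours regardless of the system setting. */
        @media (prefers-color-scheme: dark) {
          body {
            background: #222;
            color: #ddd;
          }
        }
        ```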

    1. 7

      A funny thing I realized a while back is that, for me, there’s very little difference between Linux and the BSDs, and to some extent even OSX.

      Most of the software I use on a daily basis (Emacs, StumpWM, SBCL, Chromium, rxvt, zsh, etc.) is virtually identical between systems. There are some nuances, like GNU vs BSD userland tools, but for the most part it doesn’t affect me much.

      OSX has a different UI, but even there most of my time is spent in Emacs, Chromium, or the terminal (with zsh), so it ends up being nearly the same, too.

      1. 5

        I feel that’s only true for the time spent coding? As soon as you have to deploy or manage a service/system, things get interesting. Though I guess it’s less of a problem for any kind of code that only runs locally?

        Lobsters has me all hyped for OpenBSD, and there’s openbsd.amsterdam now offering VMs, which is really interesting. But at the same time, I don’t want to spend a lot of time on the maintenance of private little side-projects. I currently take a single VM from a generic provider, run Debian on it, and set it to update and reboot automatically. If I can get to that point with OpenBSD, I’d be even more interested in trying. (But I’ve only spent a little time researching so far.)

        1. 1

          I feel that’s only true for the time spent coding? As soon as you have to deploy or manage a service/system, things get interesting. Though I guess it’s less of a problem for any kind of code that only runs locally?

          Yeah, I suppose that’s true, but almost everything I write lately is just loaded into a Lisp image and launched from the REPL, so it’s largely the same everywhere.

          I think administration is definitely where the biggest user-noticeable differences between all the systems are.

        2. 2

          Unix is Unix. I no longer really draw distinctions, because they are largely meaningless for the level at which I interact with systems.

        1. 2

          Suing abandonware archives is just mean. Personally, I find Nintendo franchises like all these Marios and Zeldas as disgusting as Hollywood stuff. They have done lots of aggressive marketing on social networks recently to ensure “geek culture” is associated with their silly characters targeted at 5-year-old kids. I hope that if all these ROMs are removed from the internet, it will lower the popularity of Nintendo brands.

          1. 12

            It’s not abandonware when they’re maintaining their titles for virtual console on recent platforms. It’s not targeted at just 5-year-olds, it’s family entertainment that plenty of adults enjoy. Your comparison with Hollywood is far-fetched, and the adjectives you use are very trollish.

            1. 3

              when they’re maintaining their titles for virtual console

              Except they’re not? On the Switch, the only VC Mario title is an arcade one. There’s no Zelda except BOTW (the latest one). The DS Zelda titles are only available second-hand as cartridges.

              1. 4

                They’re available on the 3DS VC. I’ve been playing through them all. And I’m in my thirties, FWIW. :)

                1. 4

                  This was not the case at one point if memory serves. It is also no guarantee going forward.

                  1. 2

                    I think the point that people have been making is that Nintendo had no interest in re-releasing these games until they discovered how popular they were in the ROM scene and second hand markets.

            1. 5

              The OOM killer is, IMNSHO, broken as designed. Track how much memory is available, return NULL, let the application deal with it then, when it can still be dealt with, instead of killing a random (I know, not really random) process later. I disable the OOM killer whenever feasible.
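              (On Linux, “disabling the OOM killer” in this sense mostly means turning off overcommit, so allocations fail up front instead of processes being killed later. A sketch of the relevant sysctls:)

              ```
              # /etc/sysctl.d/99-strict-alloc.conf (sketch)
              # Mode 2 = strict accounting: the commit limit is swap plus
              # overcommit_ratio% of RAM, and allocations beyond it fail
              # (malloc returns NULL) instead of succeeding now and being
              # OOM-killed later.
              vm.overcommit_memory = 2
              vm.overcommit_ratio = 80
              ```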

              1. 2

                In practice though, C++ throws, Rust panics, I think only well-written C code would have a chance of behaving ‘correctly’ in this case? And that’s the kind of low-level process that’s unlikely to be selected by the OOM killer.

                So effectively, letting the application deal with it equals letting the application crash. The application that runs into this situation can be whatever application happens to need an allocation at some point. That seems more random than what the OOM killer targets?
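                (For reference, this is the kind of handling well-written C could do: a sketch of graceful degradation on allocation failure, with invented names and sizes:)

                ```c
                #include <stdio.h>
                #include <stdlib.h>

                /* Sketch: on allocation failure, keep halving the request until
                 * something fits; only give up below a usable minimum. A cache
                 * or I/O buffer can often run with less memory than it wanted. */
                void *alloc_with_fallback(size_t want, size_t min, size_t *got)
                {
                    while (want >= min) {
                        void *p = malloc(want);
                        if (p != NULL) {
                            *got = want;
                            return p;
                        }
                        want /= 2; /* degrade gracefully instead of dying */
                    }
                    *got = 0;
                    return NULL; /* truly out of memory: the caller decides */
                }

                int main(void)
                {
                    size_t got = 0;
                    void *buf = alloc_with_fallback((size_t)1 << 20, 4096, &got);
                    if (buf == NULL) {
                        fprintf(stderr, "out of memory even after degrading\n");
                        return 1;
                    }
                    printf("got a %zu-byte buffer\n", got);
                    free(buf);
                    return 0;
                }
                ```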

                1. 4

                  That’s not the OS’s decision to make, though. With the OOM killer enabled, C/C++ doesn’t have the option to handle it differently. If Rust or Go ever wants to change how they handle allocation failure in the future, they can’t if the OOM killer is enabled. It’s too strong of a policy decision for such low-level features as allocation and process lifetime.

                  (Of course, I haven’t written a kernel used by billions, so it’s easy for me to judge.)

                  1. 3

                    Sounds to me like a good opportunity for an opt-in flag asserting that a particular binary handles allocation failures gracefully, so return NULLs to them when appropriate; else deal with it via the OOM killer.
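                    (Linux does have a coarse version of this knob already: a process can be exempted from the OOM killer via oom_score_adj. A sketch, run as root; the PID is made up:)

                    ```
                    # -1000 means "never OOM-kill this process"; the process
                    # is expected to handle allocation failure itself.
                    echo -1000 > /proc/1234/oom_score_adj
                    ```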

                  2. 2

                    If capacity planning were done and limits set on processes or process groups, the ones violating their own capacity would be the ones degraded.
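                    (With cgroups, this kind of capacity plan is expressible today, e.g. as systemd resource limits; a sketch, with an invented service name:)

                    ```
                    # /etc/systemd/system/foo.service.d/limits.conf (sketch)
                    [Service]
                    MemoryHigh=384M   # soft cap: reclaim/throttle past this point
                    MemoryMax=512M    # hard cap: this group is degraded, not the system
                    ```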

                    1. 1

                      OpenVMS used process limits for that reason, plus accounting purposes, like the link says. Then, they had both virtualized kernels and clustering to mitigate that level of failure.

                1. 9

                  They could’ve just built an OpenGL compatibility library on top of Metal, right? Apple keeps playing the silo card…

                  1. 3

                    I don’t have a source handy, but someone on Twitter had some clues that the iOS implementation already is layered on top of Metal. So I’m hoping they do a code dump and/or find a new steward.

                  1. 4

                    Lots of C bashing going on here.

                    I’ll only comment that C is used today mainly in the embedded domain, a place where it is strong and growing (in terms of jobs etc).

                    1. 1

                      Perhaps WebAssembly will bring it out to the frontend!

                      1. 3

                        Certainly, WebAssembly is bringing a lot of good existing C/C++ code to the frontend. In a personal project, I’m using libogg, libopus and libspeexdsp. I find it really cool to be able to use these from the web! (I guess these particular libs lend themselves well, because they have little interaction with the OS, and are very portable.)

                        And then there’re also the big names in game development porting their engines, of course.

                    1. 33

                      While I think a website like this would make sense in a few years, right now I think GDPR is complicated, confusing, and scary enough to a lot of companies that they are going to make mistakes. I’d rather help them do it better than mock them.

                      1. 15

                        As one of the thousands of engineers who had to figure out how to shoehorn a six-month compliance project into a lean team’s packed roadmap, I concur. This wasn’t easy, not even at a company that respects user data to begin with. Lots of the jokes I’ve seen about GDPR right now just lessen my opinion of the teller.

                        1. 23

                          On the other hand, we’ve all had literally more than two years to work on said six-month compliance project, and the fact that so many companies put off starting until the very end is the actual problem here IMO.

                          1. 4

                            Not from my point of view – who cares if companies just woke up to GDPR two weeks ago, if I don’t use them for data processing? None of my actual pain came from that. But I definitely spent a lot of time working on GDPR when I’d rather have been building product, other deadlines slipped, things moved from To-Do to Backlog to Icebox because of this. We’re ready for GDPR, but that stung.

                            1. 3

                              I was essentially trying to say: “People like you don’t get to complain about it being hard to fit something into a certain time period when they had literally four times that amount of time to do it.” ^__^

                              1. 3

                                Well, if people like you (who didn’t even do the work) get to complain, then so do I! If someone tells me they’re gonna punch me in the face, then they punch me in the face, I still got punched in the face.

                                1. 4

                                  I did our GDPR planning and work, and I’m so glad to see it in effect. The industry is finally gaining some standards. If you complain about having to give up a “rather have been building product” attitude, sometimes it’s time to own up that you care more about your own bottom line than about doing the right thing.

                                  1. 1

                                    Sometimes if you don’t build a product, GDPR compliance becomes irrelevant because you never get a company off the ground. As a one-person platform team until last September, I don’t regret how I prioritized it.

                                  2. 6

                                    Well, if people like you (who didn’t even do the work) get to complain, then so do I!

                                    I actually did do the work. But either way, complaining about it being a pain overall is just fine, because it is. On the other hand, explicitly complaining that because you had to do it in 6 months you had issues fitting it in, had other deadlines slip, and had to essentially kill other to-dos is a very different thing. If you’d used the extra 18 months, I bet you’d have had far fewer issues with other deadlines.

                                    If someone tells me they’re gonna punch me in the face, then they punch me in the face, I still got punched in the face.

                                    This analogy doesn’t even make sense in context…

                                    1. 6

                                      If you’d used the extra 18 months, I bet you’d have had far fewer issues with other deadlines.

                                      I’ll totally remember this for next time.

                          2. 25

                            Well, I agree in general, but this article specifically highlights some cases of just plain being mean to your users. I’m okay with mocking those.

                            1. 7

                              I disagree. GDPR is expensive to get wrong so the companies aren’t sure what to expect. They are likely being conservative to protect themselves.

                              1. 7

                                They were not conservative in tracking users, and spending for tracking and spying on users was not expensive?

                                As a user I don’t care about the woes of companies. They forced the lawmakers to create these laws, as they were operating surveillance capitalism. They deserve the pain, the costs, and the fear.

                                1. 1

                                  and spending for tracking and spying on users was not expensive?

                                  Tracking users is very cheap, that’s why everyone can and does do it. It’s just bits.

                                  As a user I don’t care about the woes of companies.

                                  Feel free not to use them, then. What I am saying is that GDPR is a new, large, and expansive law with a lot of unknowns. Even the regulators don’t really know what the ramifications will be. I’m not saying to let companies not adhere to the law, I’m just saying that on the first day the world would probably benefit more from helping the companies comply rather than mocking them.

                                  EDIT:

                                  To be specific, I think companies like FB, Google, Amazon, etc should be expected to entirely comply with the law on day one. It’s smaller companies that are living on thinner margins that can’t necessarily afford the legal help those can that I’d want to support rather than mock.

                            2. 10

                              It’s not like the GDPR was announced yesterday. It goes live tomorrow after a two year onboarding period.

                              If they haven’t got their act in order after two years, it’s reasonable to name and shame.

                            1. 21

                              Gosh, I couldn’t make it very far into this article without skimming. It goes on and on asking the same ‘why’ but mentally answering it in the opposite direction of the quoted comments.

                              Docker is easy, standard isolation. If it falls, something will replace it. We’re not going in the opposite direction.

                              The article doesn’t explain to me what other ways I have of running 9 instances of an app without making a big mess of listening ports and configuration.

                              Or running many different PHP apps without creating a big mess of PHP installs and PHP-FPM configs. (We still deal with hosting setups that share the same install for all apps, then want to upgrade PHP.)

                              Or how to make your production setup easy to replicate (roughly) for developers who actually work on the codebase. (Perhaps on macOS or Windows, while you deploy on Linux.)

                              We’re not even doing the orchestration dance yet, these are individual servers that run Docker with a bunch of shell scripts to provision the machine and manage containers.

                              But even if we only use 1% of the functionality in Docker, I don’t know how to do that stuff without it. Nevermind that I’d probably have to create a Vagrantbox or something to get anyone to use it in dev. (I’ve come to dislike Vagrant, sorry to say.)

                              Besides work, I privately manage a little cloud server and my own Raspberry Pi, and sure they don’t run Docker, but they don’t have these requirements. It’s fine to not use Docker in some instances. And even then, Docker can be useful as a build environment, to document / eliminate any awkward dependencies on the environment. Makes your project that much easier to pick up when you return to it months later.

                              Finally, I’m sorry to say that my experiences with Ansible, Chef and Puppet have only ever been bad. It seems to me that the most fragile aspect of these tools is all the checking of what’s what in the current environment before acting on it. I’m super interested in trying NixOS sometime, because from what I gather, the model is somewhat similar to what Docker does: simply layering stuff, like we’ve always done with software.

                              1. 1

                                For the PHP part it’s not that complex. Install the required versions (Debian and Ubuntu both have 5.6 through 7.2 “major” releases available side by side, thanks to Ondrej Sury’s repo). Then just set up a pool per app (which you should do anyway) and point to the app’s specific Unix domain socket for php-fpm in the vhost’s proxy_fcgi config line.
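                                The per-app pool described might look something like this (paths and names invented):

                                ```
                                ; /etc/php/7.2/fpm/pool.d/myapp.conf (sketch)
                                [myapp]
                                user = myapp
                                group = myapp
                                listen = /run/php/myapp.sock    ; one Unix socket per app
                                listen.owner = www-data
                                listen.group = www-data
                                pm = dynamic
                                pm.max_children = 10
                                ```

                                The vhost then routes PHP requests to that socket with a proxy_fcgi handler line such as SetHandler "proxy:unix:/run/php/myapp.sock|fcgi://localhost".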

                                I’ve used this same setup to bring an app from php5.4 (using mod_php) up through the versions as it was tested/fixed too.

                                Is there some config/system setup required? You betcha. Ops/sysadmin work is part of running a site that requires more than shared hosting.

                                What are you gonna do with docker, have each developer just randomly writing whatever the fuck seems like a good idea and pushing their monolithic images to prod with no ops supporting it?

                                1. 12

                                  What are you gonna do with docker, have each developer just randomly writing whatever the fuck seems like a good idea and pushing their monolithic images to prod with no ops supporting it?

                                  Yes. The whole point of “DevOps”/Docker is to deploy software certified by the “Works on My Machine” certification program. This eliminates coordination time with a separate Ops team.

                                  1. 2

                                    Is this sarcasm, or are you actually in favour of the definition “DevOps = Developers [trying to] do Ops” ?

                                    1. 7

                                      Descriptively, that’s what DevOps is. I am prescriptively against such DevOps, but describing what’s currently happening with docker is unrelated to whether I am in favor of it.

                                      1. 3

                                        I don’t disagree that it’s a definition used by a lot of places (whether they call it devops or not). But I believe a lot of people who wax poetic about “DevOps” don’t share this same view - they view it as Operations using ‘development’ practices: i.e. writing scripts/declarative state files/etc to have reproducible infrastructure, rather than a “bible” of manual steps to go through to set up an environment.

                                        I’m in favour of the approach those people like, but I’m against the term simply because it’s misleading - like “the cloud” or “server less”.

                                  2. 2

                                    I don’t understand your last point, that’s exactly what developers do all day.

                                    In Docker, the PHP version the app depends on is set in code. It doesn’t even take any configuration changes when the app switches to a new PHP version.
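                                    That is, the pin lives in the Dockerfile; a sketch (image tag and paths invented for illustration):

                                    ```
                                    # Bumping PHP is a one-line change plus a rebuild;
                                    # no server reconfiguration involved.
                                    FROM php:7.2-fpm
                                    COPY . /var/www/app
                                    ```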

                                    But if there’s one gripe I have with the Docker way of things (baking everything into an image), it’s security. There are no shared libraries in any way; upgrading a dependency minor version requires baking a new image.

                                    I kinda wish we had a middle road, somewhere between Debian packages and Docker images.

                                    1. 3

                                      the PHP version the app depends on is set in code

                                      And of course we all know Docker is the only way to define dependencies for software packages.

                                      1. 4

                                        Did anyone say it was? Docker is just one of the easiest ways to define the state of the whole running environment and have it defined in a text file which you can easily review to see what has been done.

                                      2. 1

                                        You can share libraries with Docker by making services share the same Docker image. You can actually replicate Debian level of sharing by having a single Docker image.

                                        1. 2

                                          Well, I guess this is just sharing in terms of memory usage? But what I meant with security is that I’d like if it were possible to have, for example, a single layer in the image with just OpenSSL, that you can then swap out with a newer version (with, say, a security fix.)

                                          Right now, an OpenSSL upgrade means rebuilding the app. The current advantage of managing your app ‘traditionally’, without Docker, is that a sysadmin can do this upgrade for you. (Same with PHP patch versions, in the earlier example.)

                                          1. 4

                                            And this is exactly why I don’t buy into the whole “single-use” container shit show.

                                            Want to use LXC/LXD for lightweight “VM’s”? Sure, I’m all for it. So long as ops can manage the infra, it’s all good.

                                            Want to have developers having the last say on every detail of how an app actually runs in production? Not so much.

                                            What you want is a simpler way to deploy your php app to a server and define that it needs a given version of PHP, an Apache/Nginx config, etc.

                                            You could literally do all of that by just having your app packaged as a .deb, have it define dependencies on php-{fpm,moduleX,moduleY,moduleZ} and include a vhost.conf and pool.conf file. A minimal (i.e. non-debian repo quality but works for private installs) package means you’ll need maybe half a dozen files extra.
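                                            The debian/control stanza for such a package might look like this (all names invented):

                                            ```
                                            Package: acme-foo-app
                                            Architecture: all
                                            Depends: php-fpm, php-mysql, php-curl
                                            Description: Foo app, bundled with its vhost and pool config
                                            ```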

                                            And then your ops/sysadmin team can upgrade openssl, or php, or apache, or redis or whatever other thing you use.

                                            1. 2

                                              I actually do think this is a really good idea. But what’s currently there requires a lot more polish for it to be accessible to devs and small teams.

                                              Debian packaging is quite a pain (though you could probably skip a lot of standards). RPM is somewhat easier. But in both cases, the packages typically bundle default app configuration and systemd unit files, which is a model that sort of assumes things only have 1 instance.

                                              You could then go the LXC route, and have an admin manage each instance in a Debian container. That’s great, but we don’t have the resources to set up and manage all of this, and I expect that is the case for quite a lot of small teams out there.

                                              Maybe it’s less complicated than I think it is? If so, Docker marketing got something very right, and it’d help if there was a start-to-finish guide that explains things the other way.

                                              Also remember that Docker for Mac/Windows makes stuff really accessible for devs that are not on Linux natively. Not having to actually manage your VM is a blessing, because that’s exactly my gripe with Vagrant. At some point things inside the VM get hairy, because of organic growth.

                                              1. 3

                                                But in both cases, the packages typically bundle default app configuration and systemd unit files, which is a model that sort of assumes things only have 1 instance.

                                                In the case of the context - it is one instance. Either you build your packages with different names for different stages (e.g. acme-corp-foo-app-test, acme-corp-foo-app-staging, acme-corp-foo-app-prod) or use separate environments for test/stage/prod - either via VMs, LXC/LXD, whatever.

                                                Nothing is a silver bullet, Docker included. It’s just that Docker has a marketing team with a vested interest in glossing over its deficiencies.

                                                If you want to talk about how to use the above concept for an actual project, I’m happy to talk outside the thread.

                                                1. 2

                                                  Also remember that Docker for Mac/Windows makes stuff really accessible for devs that are not on Linux natively. Not having to actually manage your VM is a blessing, because that’s exactly my gripe with Vagrant. At some point things inside the VM get hairy, because of organic growth.

                                                  This is exactly why at work we started to use Docker (and got rid of Vagrant).

                                                  1. 1

                                                    At some point things inside the VM get hairy, because of organic growth.

                                                    Can you define “hairy”?

                                                    1. 2

                                                      The VM becomes a second workstation, because you often SSH in to run some commands (test migrations and the like). So people install things in the VM, and change system configuration in the VM. And then people revive months old VMs, because it’s easier than vagrant up, which can take a good 20 minutes. There’s no reasoning about the state of Vagrant VMs in practice.

                                                      1. 3

                                                        So people install things in the VM, and change system configuration in the VM

                                                        So your problem isn’t vagrant then, but people. Either the same people are doing the same thing with Docker, or not all things are equal?

                                                        because it’s easier than vagrant up, which can take a good 20 minutes

                                                        What. 20 MINUTES? What on earth are you doing that causes it to take 20 minutes to bring up a VM and provision it?

                                                        There’s no reasoning about the state of Vagrant VMs in practice.

                                                        You know the version of the box that it’s based on, what provisioning steps are configured to run, and whether they’ve run or not.

                                                        Based on everything you’ve said, this sounds like blaming the guy who built a concrete wall, when your hammer and nails won’t go into it.

                                                        1. 1

                                                          I suppose the main difference is that we don’t build images for Vagrant, but instead provision the machine from a stock Ubuntu image using Ansible. It takes a good 3 minutes just to get the VirtualBox VM up, more if you have to download the Ubuntu image. From there, it’s mostly adding repos, installing deps, creating configuration. Ansible itself is rather sluggish too.

                                                          Compare that to a 15 second run to get a dev environment up in Docker, provided you have the base images available.

                                                          A people problem is a real problem. It doesn’t sound like you’ve used Docker for Mac/Windows, but the tool doesn’t give you a shell in the VM. And you don’t normally shell into containers.

                                                          1. 1

                                                            That’s interesting that it takes you 20 minutes to get to something usable. I never had that experience back when I used VMware and VirtualBox. I can’t remember having it, anyway. I decided to see what getting Ubuntu up on my box takes with the new version, for comparison to your experience. I did this experiment on my backup laptop: a 1.5GHz Celeron with plenty of RAM and an older HD. It’s garbage as far as performance goes. Running Ubuntu 16 or 17 (one of them…), VirtualBox, and Ubuntu 18.04 as the guest in a 1GB VM. That is, the LiveCD of Ubuntu 18.04 that it’s booting from.

                                                            1. From power on to first Ubuntu screen: 5.7 seconds.

                                                            2. To get to the Try or Install screen: 1 min 47 seconds.

                                                            3. Usable desktop: 4 min 26 seconds.

                                                            So, it’s up in under 5 minutes with the slowest-loading method (LiveCD) on some of the slowest hardware (a Celeron) you can get. That tells me you could probably get even better startup time than me if you install and provision your stuff into a VirtualBox VM that becomes a base image. You use it as read-only, snapshot it, whatever the feature was. I rarely use VirtualBox these days so can’t remember. I know fully-loaded Ubuntu boots up in about a minute on this same box, with VirtualBox adding 5.7s to get to that bootloader. Your setup should take just 1-2 minutes to boot if done right.

                                                            1. 0

                                                              It takes a good 3 minutes just to get the VirtualBox VM up

                                                              What? Seriously? Are your physical machines running on spinning rust or with only 1 or 2 GB of RAM or something? That is an inordinate amount of time to boot a VM, even in the POS that is Virtualbox.

                                                              but the tool doesn’t give you a shell in the VM.

                                                              What, so docker attach and docker exec -it <container> /bin/bash are just figments of my imagination?

                                                              you don’t normally shell into containers

                                                              You don’t normally just change system settings willy nilly in a pre-configured environment if you don’t know what you’re doing, but apparently you work with some people who don’t do what’s “normal”.

                                                              1. 2

                                                                Physical machines are whatever workstation the developer uses. Typically a MacBook Pro in our case. As long as Vagrant gives you SSH access to the machine, I’m not holding my breath.

                                                                You’re confusing shell access to the VM with shell access to containers. The Docker commands you reference are for container access.

                                                                People do regularly make changes to vhost configuration, or installed packages in VMs when testing new features, instead of changing the provisioning configuration. Again, because it takes way longer to iterate on these things with VMs. And because people do these things from a shell inside the VM, spending time there, they start customizing as well.

                                                                And people do these things in Docker too, and that’s fine. But we’re way more comfortable throwing away containers than VMs, because of the difference in time. In turn, it’s become much easier to iterate on provisioning config changes.

                                                                1. 2

                                                                   If time were a problem, it sounds like the Docker developers should’ve just made VMs faster in existing stacks. The L4Linux VMs in Dresden’s demo loaded at about one per second on old hardware. Recently, LightVM got boot time down to 2.3 milliseconds on a Xen variant. Doing stuff like that also gives the fault-isolation and security assurances that only come from simple implementations, which Docker-based platforms probably won’t have.

                                                                  Docker seems like it went backwards on those properties vs just improving speed or usability of virtualization platforms.

                                                                  1. 1

                                                                    You’re confusing shell access to the VM with shell access to containers. The Docker commands you reference are for container access.

                                                                    No. Your complaint is that people change configuration inside the provisioned environment. The provisioned environment with Docker isn’t a VM - that’s only there because it requires a Linux kernel to work. The provisioned environment is the container, which you’ve just said people are still fucking around with.

                                                                    So your complaint still boils down to “virtualbox is slow”, and I still cannot imagine what you are doing to take twenty fucking minutes to provision a machine.

                                                                    That’s closer to the time to build a base box from nothing than the time to bring up an instance and provision it.

                                                                    1. 2

                                                                      Look, this is getting silly. You can keep belittling every experience I’ve had, as if we’ve made these choices based on a couple of tiny bad aspects in the entire system, but that’s just not the case, and that’s not a productive discussion.

                                                                       I did acknowledge that in practice Docker images bake in a lot more up front, which accounts for much of the slowness of provisioning in the Vagrant case for us. There’s just a lot more that provisioning has to do compared to Docker.

                                                                      And while we could’ve gone another route, I doubt we would’ve been as happy, considering where we all are now as an industry. Docker gets a lot of support, and has a healthy ecosystem.

                                                                       I see plenty of issues with Docker, and I can grumble about it all day. The IPv6 support is terrible, the process management is limited, the Docker for Mac/Windows filesystem integrations leave a lot to be desired, and there’s the security issue I mentioned in this very thread. But it has still given us a lot more positives than negatives, in terms of developer productivity and managing our servers.

                                                                      1. 1

                                                                         You can keep belittling every experience I’ve had

                                                                         Every ‘issue’ you raised boils down to ‘vagrant+virtualbox took too long to bring up/reprovision’. At 20 minutes, that’s not normal operation; it’s a sign of a problem. Instead of fixing that, you just threw the whole lot out.

                                                                        This is like saying “I can’t work out why apache keeps crashing under load on Debian. Fuck it, I’m moving everything to Windows Server”.

                                                                         But it still has given us a lot more positives than negatives

                                                                         The linked article seems to debunk this myth.

                                                                      2. 2

                                                                        I have the same experience as @stephank with VirtualBox. Every time I want to restart with a clean environment, I restart with a standard Debian base box and I run my Ansible playbooks on it. This is slow because my playbooks have to reinstall everything (I try to keep a cache of the downloaded packages in a volume on the host, shared with the guest). Docker makes this a lot easier and quicker thanks to the layer mechanism. What do you suggest to keep using Vagrant and avoid the slow installation (building a custom image I guess)?
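
                                                                         For what it’s worth, the setup described here roughly corresponds to a Vagrantfile like this (the paths and playbook name are made up):

                                                                         ```ruby
                                                                         Vagrant.configure("2") do |config|
                                                                           config.vm.box = "debian/jessie64"

                                                                           # Share a host directory as the apt cache, so re-provisioning
                                                                           # doesn't re-download every package.
                                                                           config.vm.synced_folder ".apt-cache", "/var/cache/apt/archives"

                                                                           config.vm.provision "ansible" do |ansible|
                                                                             ansible.playbook = "playbook.yml"
                                                                           end
                                                                         end
                                                                         ```

                                                                         Even with the cache, Ansible still has to run every task against a fresh box, which is where most of the time goes.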

                                                                        1. 2

                                                                          Please tell me “the same experience” isn’t 20 minutes for a machine to come up from nothing?

                                                                          I’d first be looking to see how old the base box you’re using is. I’m guessing part of the process is an apt-get update && apt-get upgrade - some base boxes are woefully out of date, and are often hard-coded to use e.g. a US based mirror, which will hurt your update times if you’re elsewhere in the world.

                                                                          If you have a lot of stuff to install, then yes I’d recommend making your own base-box.

                                                                          What base-box are you using, out of interest? Can you share your playbooks?

                                                                          1. 2

                                                                            Creating a new VM with Vagrant just takes a few seconds, provided that the base box image is already available locally.

                                                                            Provisioning (using Ansible in my case) is what takes time (installing all the services and dependencies required by my app). To be clear, in my case, it’s just a few minutes instead of 20 minutes, but it’s slow enough to be inconvenient.

                                                                            I refresh the base box regularly, I use mirrors close to me, and I’ve already checked that apt-get update/upgrade terminates quickly.

                                                                            My base box is debian/jessie64.

                                                                            I install the usual stuff (nginx, Python, Go, Node, MySQL, Redis, certbot, some utils, etc.).

                                                                            1. 2

                                                                             Reading all your comments, you seem deeply interested in convincing people that VMs solve all the problems people think Docker is solving. Instead of debating endlessly in comments here, I’d be (truly) interested to read about your workflow as an ops person and as a dev. I finished my studies using Docker and never had to use VMs that much on my machines, so I’m not an expert, and I would really be interested in a good article/post/… that I could learn from on how VMs would be better than Docker.

                                                    2. 1

                                                       I think the point is to use something like Ansible: you put some Ansible config in a git repo, then you pull the repo, build the Docker image, install apps, apply the config, and run, all via Ansible.

                                                    3. 2

                                                       How do you easily manage 3 different versions of PHP with 3 different versions of MariaDB? I mean, this is something that Docker solves VERY easily.
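
                                                       To make that concrete, with Compose the multi-version case is little more than (service names and port mappings here are arbitrary):

                                                       ```yaml
                                                       services:
                                                         db-10.1:
                                                           image: mariadb:10.1
                                                           ports: ["3311:3306"]
                                                         db-10.2:
                                                           image: mariadb:10.2
                                                           ports: ["3312:3306"]
                                                         db-10.3:
                                                           image: mariadb:10.3
                                                           ports: ["3313:3306"]
                                                       ```

                                                       Each app then talks to its own port, and PHP works the same way with per-app `php:<version>-fpm` images.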

                                                      1. 4

                                                        Maybe if your team requires 3 versions of a database and language runtime they’ve goofed…

                                                        1. 8

                                                           It’s always amusing to see answers pointing at legacy and saying “it shouldn’t exist”. I mean, yes, it’s weird and annoying, but it exists now and will still exist later.

                                                          1. 6

                                                             it exists now and will still exist later.

                                                            It doesn’t have to exist at all–like, literally, the cycles spent wrapping the mudballs in containers could be spent just…you know…cleaning up the mudballs.

                                                             There are cases (usually involving icky third-party integrations) where maintaining multiple versions of runtimes is necessary, but outside of those it’s just plain sloppy engineering not to try to clean up and standardize things.

                                                            (And no, having the same container interface for a dozen different snowflakes is not standardization.)

                                                            1. 2

                                                               I see it more like this: the application runs fine, but the team that was working on it doesn’t exist anymore. Instead of spending time upgrading it (because I’m no Java 6 developer), and since I still want to benefit from bin packing, re-scheduling, … (and not only for this app, but for ALL the apps in the enterprise), I just spend time putting it in a container, and voilà. I can still deploy it to several different clouds and orchestrators without asking a team to spend time on a project that already does the job correctly.

                                                               To be honest, I understand that containers are not the solution to everything, but I keep wondering why people don’t accept that they have some utility.

                                                            2. 2

                                                              I think the point is that there is often little cost/benefit analysis done. Is moving one’s entire infrastructure to Docker/Kubernetes less work than getting all one’s code to run against the same version of a database? I’m sure sometimes it is, but my experience is that these questions are rarely asked. There is a status-quo bias toward solutions that allow existing complexity to be maintained, even when the solutions cost more than reducing that complexity.

                                                              1. 4

                                                                 Totally agreed, but I’m also skeptical of the reflex of always blaming containers for adding complexity. From my point of view, many things that I do with containers are way easier than if I had to do them another way (I also agree that some things would be easier without them too).

                                                          2. 2

                                                            Debian solves three different versions of php with Ondrej’s packages (or ppa on Ubuntu).

                                                             In anything but dev or the tiniest of sites you’ll have your database server on a separate machine anyway - what possible reason is there to have three different versions of a database server on the same host in a production environment?

                                                            If you need it for testing, use lx{c,d} or vms.

                                                            1. 3

                                                               MySQL especially has broken apps in the past, going from 5.5 -> 5.6, or 5.6 -> 5.7. Having a single database server means having to upgrade all the apps that run on top of it in sync. So in practice, we’ve been running a separate database server per version.

                                                              Can’t speak for other systems, though.

                                                              1. 1

                                                                 As you said, testing is a good example of such a use case. Then why use VMs when I can bin-pack containers onto one (or many) machines, using fewer resources?

                                                                1. 1

                                                                  That still isn’t a reason to use it in prod, and it isn’t that different from using LXC/LXD style containers.

                                                                  1. 1

                                                                     Do you have rational arguments against Docker, which uses LXC? For now I don’t see any good reason not to. It’s like saying that you don’t want to use a solution because you could use the technologies underneath it directly.

                                                                    1. 6

                                                                      It’s like saying that you don’t want to use a solution because you can use the technologies it uses underneath.

                                                                       That’s a reasonable position, though. There are people who have good reasons to prefer the git CLI to GitHub Desktop, the MySQL console to phpMyAdmin, and so forth. Abstractions aren’t free.

                                                                      1. 1

                                                                         Exactly! But I don’t see such hatred for people using GitHub Desktop or phpMyAdmin. It’s not because you don’t want to use something that it doesn’t fit someone’s use case.

                                                                        1. 1

                                                                           As someone who usually ends up having to ‘clean up’ or ‘fix’ things after someone has used something like a GUI git client or phpMyAdmin, I wouldn’t use the word hatred, but I’m not particularly happy if someone I work with is using them.

                                                                          1. 1

                                                                            I can do interactive staging on the CLI, but I really prefer a GUI (and if I find a good one, would probably also use a GUI for rebasing before sending a pull request).

                                                                      2. 2

                                                                         If I want a lightweight machine, LXC provides that. Docker is inherently designed to run literally a single process. How many people use it that way? No, they install supervisord or whatever - at which point, what’s the fucking point?

                                                                        You’re creating your own ‘mini distribution’ of bullshit so you can call yourself devops. Sorry, I don’t drink the koolaid.

                                                                        1. 1

                                                                           Your argument is flawed. You justify Docker’s supposed uselessness by generalizing from what a (narrow) subset of users is doing. Like I said, I’m ready to hear rational arguments.

                                                                          1. 2

                                                                            generalizing what a (narrow) subset of users is doing

                                                                            I found you 34K examples in about 30 seconds: https://github.com/search?l=&q=supervisord+language%3ADockerfile&type=Code

                                                                            1. 1

                                                                               Hummm, okay, you got me on this one! Still, I really think there is some real utility in such a solution, even if, yes, it can be done in many other ways.

                                                          1. 1

                                                            I hadn’t heard of this client before! Very cool (and permissively licensed).

                                                             There are quite a few BitTorrent implementations around, maybe a handful actively used / maintained? Lots of clients derive from the original or use libtorrent. That’s understandable, imho, because the protocol, the extensions, and details like those mentioned in this blog are rather complex.

                                                            1. 5

                                                              This poses an interesting problem for anti-cheat systems like VAC. It’s not impossible to detect this kind of hack, but could it then be used to trick VAC into banning legitimate players?

                                                              I’m not aware of any stories about VAC false positives. Trust in VAC seems almost absolute. So anything like the above happening could turn hairy quick.

                                                              1. 3

                                                                 Honestly, games seem like a security issue waiting to happen. Almost every part of them is designed without security in mind (except on consoles, and even there only to an extent) in exchange for performance. Now with Vulkan, they have much lower-level access to the GPU than they did before, allowing for greater risks involving GPU drivers. Their network protocols are likely highly exploitable, as this article shows.

                                                                VAC has mostly dealt with script kiddies. Once the cheating world develops far more advanced methods, then I think Valve et al will have a hell of a time.

                                                                1. 4

                                                                  VAC has mostly dealt with script kiddies. Once the cheating world develops far more advanced methods, then I think Valve et al will have a hell of a time.

                                                                   Steam itself has millions of active users, most of them with a credit card on file - games are a big target not only for cheating, they’re a lucrative target for criminals, and I’m surprised wide-scale exploitation of them is not yet a thing.

                                                              1. 3

                                                                https://noisy.fun/ is getting updates!

                                                                 I’ve been tinkering with lots of things behind the scenes, updating dependencies, improving the build process, etc. A lot of the work going into it is me just trying things out, even if they don’t add functionality. For example, just this weekend I switched it to FontAwesome 5.

                                                                Now I am taking some steps to allow actually adding new features (no spoilers). I need space for more controls, so I’m working on adding a menu and moving some controls there. Should also be a big step towards a better UI on mobile.

                                                                1. 13

                                                                  Some of the ‘alternatives’ are a bit more iffy than others. For any service that you don’t have the source to or can’t self-host (telegram, protonmail, duckduckgo, mega, macOS, siri to name a few), you’re essentially trusting them to uphold their privacy policy and to respect your data (now, but also hopefully in the future).

                                                                  And in some cases it seems to me that it’s little more than fancy marketing capitalizing on privacy-conscious users.

                                                                  1. 18

                                                                    Telegram group messages aren’t even e2e encrypted, Telegram has access to full message content. The only thing Telegram is good at is marketing, because they’ve somehow convinced people they’re a secure messenger.

                                                                    1. 6

                                                                      To be fair, they at least had the following going for them:

                                                                      • no need to use a phone client, as compared to WhatsApp which deletes your account if you access it with an unofficial client. You can just buy a pay-as-you-go SIM card and receive your PIN with a normal cell-phone
                                                                      • they had an option for e2e encrypted chats, with self deleting messages (there was this whole fuss with the creator offering a million dollars (?) if anyone could find a loophole)
                                                                      • their clients were open source, and anyone could implement their API

                                                                       Maybe there was more, but these were the arguments I could think of on the spot. I agree that it isn’t enough, but it’s not like their claim was unsubstantiated. It just so happened that other services started adopting some of Telegram’s features, making them lose their edge over the competition.

                                                                      1. 4

                                                                        Also the client UX is pretty solid imho. Bells and whistles are not too intrusive, and stuff works as you’d expect.

                                                                        Regarding its security: It is discussed in the FAQ what security models they offer in which chat mode.

                                                                      2. 6

                                                                        I’m much less worried about the source code than I am the incentives of the organization behind the software. YMMV, of course.

                                                                        1. 2

                                                                          Even if you have source code, it’s difficult to verify a service or piece of software (binary) matches that source code.

                                                                          1. 2

                                                                            Yes, but then if anything feels wrong, it gets possible to find an alternative provider for the same software.

                                                                            Still… Hard to beat the privacy of a hard drive at home accessed through SFTP.

                                                                          2. 2

                                                                             I was checking email SaaS providers last weekend, as the privacy policy changes at my current provider urge me not to renew my subscription when it ends. I found mostly the same offers everywhere, and to be honest, none of them seemed convincing to me.

                                                                             For example, the Tutanota offer seemed questionable: they keep me so secure that the email account can only be accessed through their email client; no free/open protocol is available. Only their mail client can be used, and they use a proprietary encryption scheme for my own benefit… OK, it is open-sourced, but come on… I cannot export my data in a meaningful way to change providers. And what kind of encryption scheme is it? It is RSA-2048+AES, not using the GPG/PGP “standards”, and it is hosted in Germany, pretty much a surveillance state… This makes their claims questionable at least.

                                                                          1. 4

                                                                            So Power is switching to little endian by default?

                                                                             Only slightly related, but I’ve been looking for somewhere to test a piece of code on big endian, and that seems to be rather difficult as a private person. I think the only option is to find some physical hardware on the cheap?

                                                                            I have a Pi3, and that’s supposed to be bi-endian, but I’m not sure how to go about installing a big endian Linux on it. Same goes for a Scaleway ARM virtual machine, I guess.
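
                                                                             For serialization code specifically, one stopgap that needs no special hardware is to force the byte order explicitly and exercise both variants; in Python, for example (just a sketch, and no substitute for running on a real big-endian machine):

                                                                             ```python
                                                                             import struct
                                                                             import sys

                                                                             # Byte order of the host this runs on: "little" or "big".
                                                                             print(sys.byteorder)

                                                                             value = 0x01020304

                                                                             # struct lets you pick the byte order independently of the host,
                                                                             # so (de)serialization code can be tested both ways anywhere.
                                                                             big = struct.pack(">I", value)     # big-endian / network order
                                                                             little = struct.pack("<I", value)  # little-endian
                                                                             native = struct.pack("=I", value)  # host order

                                                                             assert big == bytes([0x01, 0x02, 0x03, 0x04])
                                                                             assert little == bytes([0x04, 0x03, 0x02, 0x01])
                                                                             assert native in (big, little)
                                                                             ```

                                                                             It only covers explicit packing/unpacking, though; code that casts raw memory around is exactly what still needs a real big-endian system (or QEMU) to catch.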

                                                                            1. 7

                                                                              Shell accounts at Polarhome are free for developers of open source projects (and cheap otherwise). Their Debian/PPC and Solaris/SPARC are big-endian IIRC.

                                                                              You can also run QEMU, here’s a random repo with instructions.

                                                                              1. 3

                                                                                You should be able to virtualize, Debian for example supports some Big Endian architectures. I don’t reckon it matters much though, Big Endian is definitely on the way out.

                                                                                If you do want to go physical, you can get an Octeon-based system, they’re Big Endian mips64. Mostly used in networking equipment. Cavium has an incomplete list of products using Octeon processors, stuff under the consumer tab is probably your best bet for cheap stuff.

                                                                                I have a Ubiquiti UniFi Security Gateway running on Octeon. It’s running some kind of Debian derivative, or so I assume since dpkg and the Debian package keys are present.

                                                                                $ lscpu
                                                                                Architecture:          mips64
                                                                                Byte Order:            Big Endian
                                                                                [...]
                                                                                
                                                                                $ uname -a
                                                                                Linux ubnt 3.10.20-UBNT #1 SMP Fri Nov 3 15:45:37 MDT 2017 mips64 GNU/Linux
                                                                                

                                                                                This seems consistent with the development kit information on the Cavium Octeon web page:

                                                                                OS: Linux 2.6 (SDK 2.x) for OCTEON II or Linux 3.10 (SDK 3.1.x) 64-bit SMP OS for OCTEON II & III

                                                                                My other UniFi hardware runs Little Endian ARMv7 though. Looks like processors made by either MediaTek, or Qualcomm for the wireless gizmos.

                                                                                1. 2

                                                                                  Yeah, Ubiquiti’s Octeon stuff (specifically EdgeRouter) is quite well known, it’s supported by FreeBSD and OpenBSD for example. But consumer router grade CPUs are uhhhh rather weak :(

                                                                                2. 3

                                                                                  Or just get actual POWER box. Talos II (mentioned in the article) is relatively cheap for the specs.

                                                                                  1. 3

                                                                                     It’s still prohibitively expensive unless you’re very dedicated to having a POWER box. I have access to off-lease POWER6 boxes acquired for cheap on eBay, but those are large, loud, pour out heat, suck up electricity, and are generally only desirable if you really want a POWER box but lack funds. (Not to mention the firmware bugs that IBM refused to patch for it, so newer distros don’t support POWER6.)

                                                                                    Really, the best way to play with PPC still is to buy an old Power Mac, which is kinda sad.

                                                                                    edit: interesting thread on this topic of high-end RISC systems being hard to acquire for devs, which reduces their viability on the market

                                                                                    1. 2

                                                                                      I guess I am dedicated :D

                                                                                      But I’m going to get it because it’s all FOSS, no blobs, that’s the main reason. It’s also not that expensive, considering specs. And it’s just as power hungry as similar Intel boxes. Sure, older POWER generations were much more power hungry, but things changed with POWER9.

                                                                                  2. 2

                                                                                     The IBM PDP program gives access to POWER-based systems; they’ve just added POWER9 support, but previously had POWER7- and POWER8-based systems running AIX & SUSE.

                                                                                  1. 18

                                                                                    Slightly off topic: I see people complaining a lot about Electron, with Slack being a prime example.

                                                                                    I think the success of Visual Studio Code means that it is possible to build excellent apps using Electron. And that the performance of Slack app is not necessarily representative of all Electron based apps.

                                                                                    1. 26

                                                                                      I think it’s possible. But VSC is literally the only Electron app that doesn’t blatantly suck performance-wise. Is that because Microsoft just actually put in the effort to make something good? Or is it because Microsoft has built best-in-class IDEs that scale radically better than any alternative for a long long long time?

                                                                                      Now no one get me wrong, I’m a UNIX guy through and through, but anyone who claims there’s anything better than Visual Studio for large scale C++ development has no clue what they’re talking about. C++ as a language is complete bullshit, the most hostile language you can write an IDE for. Building an IDE for any other language is child’s play in comparison, and Microsoft is proving it with VSC.

                                                                                      I don’t think it’s currently possible for anyone besides Microsoft to make an excellent Electron app. They took a bunch of internal skill for building huge GUI applications that scale, and built their own language to translate that skill to a cross platform environment. I think they could have chosen whatever platform they felt like, and only chose to target Javascript because the web / cloud is good for business. We’ll start seeing good Electron apps when Typescript and the Microsoft way become the de facto standard for building Electron apps.

                                                                                      But I also haven’t slept in 24 hours so maybe I’m crazy. I reckon I’ll go to bed now.

                                                                                      1. 7

                                                                                        but anyone who claims there’s anything better than Visual Studio for large scale C++ development has no clue what they’re talking about.

                                                                                        JetBrains CLion might actually be a bit better – but they originally built add-ons to improve development in Visual Studio (e.g. the amazing ReSharper), and only later expanded to build their own IDEs.

                                                                                        I fully agree on all other points.

                                                                                        1. 5

                                                                                          CLion definitely has a great feature set, but I’ve found a lot of it to be unusably slow, at least on our large codebase. Lots of us use Qt Creator even though it’s objectively worse and has some sketchy bugs, because it’s at least fast for the stuff it does do. I look forward to the day I can comfortably switch to CLion.

                                                                                          1. 3

                                                                                            CLion is fantastic, I came to it after a lot of use of PyCharm.

                                                                                        2. [Comment removed by author]

                                                                                          1. 5

                                                                                            I don’t think I can agree on the hype thingy here.

                                                                                            Background: I hate developing on Windows, been using Linux for god knows how many years, but I do have a Windows work(play)station at home where I sometimes do development and I don’t always want to ssh into some box to develop (or in the case of creating Windows applications, I can’t)

                                                                                            I’ve been using Eclipse for years (for the right combination of languages and available plugins, of course) and had been searching for a decent “general-purpose” replacement (i.e. supports a lot of languages in a decent way, is configurable enough so you can work, has more features than, say, an editor with only syntax highlighting). So OK, I never used Sublime Text (tried it out, didn’t like it for some reason), and VS Code was the first thing in like 10 years where it was just a joy having a nice, functioning and free IDE/text editor that didn’t look like it was written in the 90s (like, can’t configure the font, horrible Office 94-like MDI), doesn’t take 2 minutes to load (like Eclipse with certain plugins), etc.

                                                                                            It’s about frictionless onboarding, and yes, maybe I sound really nitpicky here - but it’s from the standpoint of a totally hobbyist programmer, with little overlap with work projects or serious open source work (where I usually have the tooling set up like at work, as it’s long-running and worth the investment). That’s also why I focus on free. I’m absolutely willing to pay for a good IDE (e.g. IntelliJ IDEA), but not if I’m firing it up once per month.

                                                                                          2. 3

                                                                                            Is there a chance that the ill reputation of Electron apps is that Electron itself offers ample opportunity for prime footgunmanship?

                                                                                            I’d argue that yes, it’s quite possible to build a nice simple (moderately) lightweight thing in Electron; it’s just pretty hard in comparison to building, say, a nice simple (definitely) lightweight CLI. Or even a desktop app using a regular graphical toolkit?

                                                                                            1. 10

                                                                                              Visual Studio Code, Slack, Discord, WhatsApp, Spotify all are unfortunately not simple. And while they could be reduced to simpler apps, I kinda feel like we’re all using them exactly because they have all these advanced features. These features are not useless, and a simpler app would disappoint.

                                                                                              It also seems like GUI and CLI toolkits are lagging behind the Web by maybe a decade, no joke. I’d love to see a native framework that implements the React+Redux flow. Doesn’t even have to be portable or JavaScript.
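For what it’s worth, the React+Redux flow the parent is wishing for natively is structurally quite small: a single state store, pure reducer functions, and subscribers that are notified on every dispatch. A minimal sketch in Python (all names here are made up for illustration, not any real framework’s API):

```python
from typing import Any, Callable

class Store:
    """Minimal Redux-style store: state changes only via dispatched actions."""

    def __init__(self, reducer: Callable[[Any, dict], Any], initial: Any):
        self._reducer = reducer
        self.state = initial
        self._subscribers: list = []

    def dispatch(self, action: dict) -> None:
        # The reducer is a pure function: (old state, action) -> new state.
        self.state = self._reducer(self.state, action)
        for fn in self._subscribers:  # notify views so they can re-render
            fn(self.state)

    def subscribe(self, fn: Callable[[Any], None]) -> None:
        self._subscribers.append(fn)

def counter(state: int, action: dict) -> int:
    if action["type"] == "increment":
        return state + 1
    return state

store = Store(counter, 0)
store.dispatch({"type": "increment"})
# store.state is now 1
```

The point of the pattern is that the UI becomes a function of one piece of state, which is what makes it attractive outside JavaScript too.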

                                                                                              1. 4

                                                                                                I’m a huge fan of CLI software that eats text and outputs text. It’s easier to integrate into my flow, and there’s a plethora of tools already available to manipulate the inputs and outputs.

                                                                                                An example: I’ve written a CLI client for JIRA that I have plugged into the Acme editor. I just tweaked my output templates a bit to include commands I’d want to run related to a given ticket as part of my regular output, and added a very simple plumber rule that fetches a ticket’s information if I right-click anything that looks like a JIRA ticket (TASK-1234, for example). It’s served me well as a means to avoid the JIRA UI, which I find bloated and unintuitive, and it allows me to remain in the context of my work while updating a ticket, fetching info about one, listing tickets, or pretty much anything really. It’s far from perfect, but it covers most, if not all, of my day-to-day interaction with JIRA, and it’s all just an integration of different programs that know how to deal with text.

                                                                                                [edit: It’s far from perfect, but I find it better than the alternative]

                                                                                                1. 1

                                                                                                  Is either part of that open-source by chance? I’ve been trying acme as my editor and use JIRA at work. I have a hunch you’re largely describing four lines of plumb rules and a short shell script, but I’m still having trouble wrapping my head around the right way to do these things.

                                                                                                  1. 3

                                                                                                    Full disclosure, the JIRA thing has bugs that have not stopped me from using it in any meaningful way. https://github.com/otremblay/jkl

                                                                                                    The acme plumbing rule is as follows:

                                                                                                    type	is	text
                                                                                                    data	matches	'([A-Za-z]+)-([0-9]+)'    
                                                                                                    plumb	start	rc -c 'jkl '$1'-'$2' >[2=1] | nobs | plumb -i -d edit -a ''action=showdata filename=/jkl/'$1'-'$2''''
                                                                                                    

                                                                                                    It checks for a file called “.jklrc” in $HOME. Its shape is as follows:

                                                                                                    JIRA_ROOT=https://your.jira.server/   
                                                                                                    JIRA_USER=yourusername
                                                                                                    JIRA_PASSWORD=yourpassword
                                                                                                    JIRA_PROJECT=PROJECTKEY
                                                                                                    #JKLNOCOLOR=true
                                                                                                    RED_ISSUE_STATUSES=Open
                                                                                                    BLUE_ISSUE_STATUSES=Ready for QA,In QA,Ready for Deploy
                                                                                                    YELLOW_ISSUE_STATUSES=default
                                                                                                    GREEN_ISSUE_STATUSES=Done,Closed
                                                                                                    # The following is the template for a given issue. You don't need this, but mine contains commands that jkl can run using middleclick.
                                                                                                    JKL_ISSUE_TMPL="{{$key := .Key}}{{$key}}	{{if .Fields.IssueType}}[{{.Fields.IssueType.Name}}]{{end}}	{{.Fields.Summary}}\n\nURL: {{.URL}}\n\n{{if .Fields.Status}}Status:	 {{.Fields.Status.Name}}\n{{end}}Transitions: {{range .Transitions}}\n	{{.Name}}	| jkl {{$key}} '{{.Name}}'{{end}}\n\n{{if .Fields.Assignee}}Assignee:	{{.Fields.Assignee.Name}}\n{{end}}jkl assign {{$key}} otremblay\n\nTime Remaining/Original Estimate:	{{.Fields.PrettyRemaining}} / {{.Fields.PrettyOriginalEstimate}}\n\n{{.PrintExtraFields}}\n\nDescription:   {{.Fields.Description}} \n\nIssue Links: \n{{range .Fields.IssueLinks}}	{{.}}\n{{end}}\n\nComments: jkl comment {{$key}}\n\n{{if .Fields.Comment }}{{$k := $key}}{{range .Fields.Comment.Comments}}{{.Author.DisplayName}} [~{{.Author.Name}}] (jkl edit {{$k}}~{{.Id}}):\n-----------------\n{{.Body}}\n-----------------\n\n{{end}}{{end}}"
                                                                                                    
                                                                                                    1. 1

                                                                                                      Thank you so much! I’ll take a look shortly. It really helps to see real-world examples like this.

                                                                                                      1. 2

                                                                                                        If “jkl” blows up in your face, I totally accept PRs. If you decide to go down that path, I’m sorry about the state of the code. :P

                                                                                                2. 1

                                                                                                  It also seems like GUI and CLI toolkits are lagging behind the Web by maybe a decade, no joke. I’d love to see a native framework that implements the React+Redux flow. Doesn’t even have to be portable or JavaScript.

                                                                                                  I couldn’t disagree more. Sure, maybe in “developer ergonomics” Web is ahead, but GUI trounces Web in terms of performance and consistency.

                                                                                                  1. 1

                                                                                                    I’d love to see a native framework that implements the React+Redux flow.

                                                                                                    Maybe Flutter?

                                                                                                    1. 1

                                                                                                      Flutter is such a native framework, although only for mobile (i.e. Android & iOS).

                                                                                                    2. 2

                                                                                                      I believe one of the things that gave Electron apps a bad reputation (aside from the obvious technological issues) was things like “new” web browsers, built with Electron, offering nothing practically new that most people would actually want, such as lower memory consumption.

                                                                                                      1. 2

                                                                                                        Building UIs is hard in general - it seems like Electron trades off the ease of making UIs performant for the ease of building them.

                                                                                                        That being said, it seems like it’s not prohibitively difficult to build a fast UI in Electron: https://keminglabs.com/blog/building-a-fast-electron-app-with-rust/

                                                                                                        It seems like most people building Electron apps just don’t think about performance until much later in the development process.

                                                                                                      2. 2

                                                                                                        I think one of the main selling points of Electron was accessibility: anybody with solid knowledge of HTML, CSS and JS could find their way around and build an app that runs on multiple platforms. But it wasn’t performant, and it turned out to be quite a resource hog. Now why is this not the case with Visual Studio Code? Because it is written by really good developers, working for Microsoft, who also created TypeScript, the language Visual Studio Code is written in on top of Electron. That gives you a sense of why Visual Studio Code is a different case than the rest of the Electron apps: the people behind it are the reason. And the whole story defeats the point of Electron. If Electron as a platform could produce results half as good as VSC in terms of performance and resource efficiency, then maybe it would be a more viable option; as it is right now, I can see the pendulum swinging back to some other native way of implementing applications.

                                                                                                        1. 2

                                                                                                          I mean, I hate the web stack like few others, but I think the point that ultimately the people are more determinative than the technology stands.

                                                                                                          I just really hate the web.

                                                                                                        2. 1

                                                                                                          I completely agree. I think that a lot of the frustrations with the quality of Electron apps is misplaced.

                                                                                                        1. 23

                                                                                                          I released a Rust crate to simulate a RISC-V CPU: https://github.com/stephank/rvsim

                                                                                                          This has been a spare-time project of several months. It was fun and challenging to build.

                                                                                                          The most recent work was the floating-point parts, which was maybe the first time I actually took a good look at how floating point really works.

                                                                                                          Connecting the C parts of Berkeley SoftFloat was also fun, though I may try to translate the pieces I’m using to Rust, as another fun challenge.

                                                                                                          Eventually, I want to try and build little simulations of fictional machines with it. Kinda like Pico8 or ye olde RoboWar.
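To give a flavor of what the core of such a simulator looks like (this is an illustrative toy in Python, not rvsim’s actual API), here is a decode-and-execute step for two real RV32I encodings, ADDI and ADD:

```python
def step(regs, inst):
    """Execute one RV32I instruction (toy subset: ADDI and ADD only)."""
    opcode = inst & 0x7F
    rd  = (inst >> 7)  & 0x1F
    rs1 = (inst >> 15) & 0x1F
    rs2 = (inst >> 20) & 0x1F
    if opcode == 0x13:                   # OP-IMM (funct3 checks omitted): ADDI
        imm = inst >> 20
        if imm & 0x800:                  # sign-extend the 12-bit immediate
            imm -= 0x1000
        regs[rd] = (regs[rs1] + imm) & 0xFFFFFFFF
    elif opcode == 0x33:                 # OP (funct3/funct7 checks omitted): ADD
        regs[rd] = (regs[rs1] + regs[rs2]) & 0xFFFFFFFF
    regs[0] = 0                          # x0 is hard-wired to zero

regs = [0] * 32
# addi x1, x0, 5 / addi x2, x0, 7 / add x3, x1, x2
for inst in (0x00500093, 0x00700113, 0x002081B3):
    step(regs, inst)
# regs[3] is now 12
```

A real simulator adds a program counter, memory, traps, and the full opcode map, but the decode-fields-then-dispatch shape stays the same.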

                                                                                                          1. 7
                                                                                                            1. 3

                                                                                                              It is a great logo!

                                                                                                              1. 1

                                                                                                                I feel like there’s a reference I’m missing. :-)

                                                                                                                1. 1

                                                                                                                  Simon’s Cat I’m guessing?

                                                                                                              1. 6

                                                                                                                I used to think the best way was to check for anything, then @, then anything, then ., then anything. And then I was informed it is possible to have an email address with no dot, if the domain side is an IPv6 address…
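Taken to its conclusion, that check collapses to just “something, @, something” with no dot requirement at all, with real validation done by sending a confirmation mail. A sketch in Python (the regex is deliberately loose; the bracketed IPv6 form is RFC 5321’s address-literal syntax):

```python
import re

# Deliberately permissive: one "@" with a non-empty local part and domain.
# Address literals like user@[IPv6:2001:db8::1] pass too, since no dot is
# required. Actual validation happens by sending a confirmation email.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+$")

def looks_like_email(addr: str) -> bool:
    return bool(EMAIL_RE.match(addr))
```

This rejects the obviously broken inputs (no @, embedded whitespace, multiple @s in a row) without pretending to implement the full grammar.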

                                                                                                                1. 2

                                                                                                                  That sounds like it wouldn’t work with SPF or DKIM anyway?

                                                                                                                  1. 3

                                                                                                                    Do SPF or DKIM matter in this context? The email receiver validates these things for the sender. You don’t need it if you only plan to receive emails, do you?

                                                                                                                1. 12

                                                                                                                  Docker has not been good software for my team at all. We’ve managed to trigger non-stop kernel semaphore leak bugs as well as LVM filesystem bugs, some of them going through multiple different attempts at a fix. And any attempt to figure it out yourself by reading their code is stymied by the weird Moby/Docker disconnect that seems to be there.

                                                                                                                  If you are thinking about running Docker by yourself and not in someone else’s managed Docker solution, then beware. It’s very sensitive to the kernel you are running and the filesystem drivers you are using it with. As far as I can tell, if you aren’t running in Amazon’s or Google’s hosted Docker solutions, you are in for a bad time. And only Amazon is actually running Docker; Google sidestepped the whole issue by using their own container technology under the hood.

                                                                                                                  The whole experience has soured me on Docker as a deployment solution. It’s wonderful for the developer but it’s a nightmare for whoever has to manage the docker hosts.

                                                                                                                  1. 11

                                                                                                                    A few things that bit me:

                                                                                                                    • containers don’t report real memory limits. Running top will report all 32GB of system memory even if the container is limited to 2GB. Scala/Java or other JVM apps aren’t aware of this limit, so you have to wrap the Java process with -Xmx memory limit flags, otherwise your container will get killed (you don’t even get an OutOfMemoryError) and marathon/k8s/whatever scheduler will start a new one. Eventually most runtimes (Python, Ruby, the JVM, etc.) will have built-in support to check cgroup memory limits, but for now it’s a pain.
                                                                                                                    • Not enough tooling in the container. I don’t want to have to apt-get nc each time I rebuild a container to see if my network connections work. I’ve heard good things about sysdig bridging this gap though.
                                                                                                                    • Tons of specific Kernel flags (really only matters if you use Gentoo or you compile your own kernel).
                                                                                                                    • Weird network establishment issues. If you expose a port on the host, it will be available before it’s available to a linked container. So if you want to do a check to see if something like a database is ready, you have to do it in a container.
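On the first bullet: until runtimes grew cgroup awareness, one common workaround was to read the limit straight out of the cgroup filesystem and size the heap from it. A sketch in Python (the two paths below are the usual cgroup v2 and v1 locations; details vary by kernel and distro):

```python
import os
from typing import Optional

# Usual locations of the memory limit for cgroup v2 and v1 respectively;
# which one exists depends on the host's kernel configuration.
CGROUP_PATHS = [
    "/sys/fs/cgroup/memory.max",                   # cgroup v2
    "/sys/fs/cgroup/memory/memory.limit_in_bytes", # cgroup v1
]

def container_memory_limit() -> Optional[int]:
    """Memory limit in bytes, or None if unlimited or not in a cgroup."""
    for path in CGROUP_PATHS:
        if os.path.exists(path):
            raw = open(path).read().strip()
            if raw == "max":      # cgroup v2 spells "no limit" as "max"
                return None
            limit = int(raw)
            # cgroup v1 reports a huge sentinel value when no limit is set
            return None if limit >= 1 << 60 else limit
    return None
```

A launcher script could then pass, say, 80% of this value as the JVM’s -Xmx before the container gets OOM-killed without warning.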

                                                                                                                    I’m sure there are more. Overall I actually do like Docker, despite some of the weirdness. However, I hate how we have k8s/marathon/nomad/swarm: there’s no one scheduler or scheduler format, and if you switch from one to the other, you’re redoing a lot of tooling, labels and config to get all your services to connect together. Consul makes me want to stab myself. DC/OS uses up 2GB–4GB of RAM just for the fucking scheduler on each node! k8s is a nightmare to configure without a team of at least three, and really ten. None of these solutions scale up from one node to a ton easily (minikube is a hack).

                                                                                                                    Containers are nice. The scheduling systems around them can go die in a fire.

                                                                                                                    1. 4
                                                                                                                      containers don’t report real memory limits

                                                                                                                      [X] we’ve been bitten by this. It also has implications for monitoring, so you get double the fun.

                                                                                                                      Not enough tooling in the container.

                                                                                                                      [X] we’ve established our own baseline container images.

                                                                                                                      Weird network establishment issues.

                                                                                                                      [X] container and k8s networking was, at least until a few months ago, a mess.

                                                                                                                      Consul makes me want to stab myself.

                                                                                                                      [X] we hacked our own

                                                                                                                      without a team of at least three and really ten.

                                                                                                                      [X] confirmed, we’re throwing money and people at it.

                                                                                                                      None of these solutions scale up from one node to a ton easily (minikube is a hack).

                                                                                                                      [X] I’ve thrown up my hands on having a working developer environment without running it on a cloud provider. We can’t trust minikube to behave sufficiently similarly as staging and production.

                                                                                                                      Containers are nice. The scheduling systems around them can go die in a fire.

                                                                                                                      I’m not even sure containers are that nice; the idea of containers is nice, but the execution is still half-baked.

                                                                                                                      1. 2

                                                                                                                        Why do you need so many people to operate kubernetes well? And what is it enabling, to make that kind of expenditure worth it?

                                                                                                                        1. 2

                                                                                                                          We’re developing a commercial turn-key, provider-independent platform based on it. Dog-fooding our own stuff has exposed many sharp bits and rough edges.

                                                                                                                          1. 1

                                                                                                                            Thanks.

                                                                                                                    2. 7

                                                                                                                      I’ve had a positive experience with Triton. It doesn’t support all of Docker’s features, since, like Google, they opted for emulating Docker and apparently decided some things weren’t worth having; but for the features Triton does support, it Just Works.

                                                                                                                      Of course, that means getting used to administering a different ecosystem.

                                                                                                                      1. 1

                                                                                                                        I love the idea of Triton, but having rolled it out at a past position, I can honestly say that I would not recommend it. There is no high availability for many of the internal services by default (you need to roll your own replicas, etc.), and there is no routing across networks (static routes and additional interfaces in every instance is not a good solution). I love Joyent as a company, and their products have a great hypothetical appeal to me as a technologist, but there are just too many “buts” to justify spending the kind of money they charge for the solution they offer.

                                                                                                                        1. 2

                                                                                                                          I’m just curious how old the version of Triton was, because it has had software-defined networking for ~3 years or so. Was there a limitation with it?

                                                                                                                      2. 2

                                                                                                                        That stinks, but sounds more like a critique of the Linux kernel? Are you running anything custom?

                                                                                                                        Newer Docker defaults to overlayfs (no more aufs), and runs fine for us on stock Debian 9 kernels (without the extra modules package, or any dkms modules). This is both on bare metal and the AMIs Debian provides. Though we run on plain ext4, without LVM.

                                                                                                                        1. 4

                                                                                                                          My experience is purely anecdotal so shouldn’t be taken as more than that.

                                                                                                                          However, we aren’t on anything custom. We’re running the latest CentOS kernels for everything and we keep them patched. The bugs aren’t in the Linux kernel; it’s the way Docker does things when it sets up the cgroups and manages them. My early experimentation with other container runtimes seems to indicate that they don’t have the same problems.

                                                                                                                          Just searching for the word “hang” in the Moby project shows 171 open bugs and 521 closed. Most of them, from a cursory examination, look very similar to our issues. For us they tend to manifest as a deadlock in the Docker engine, which then causes the managed containers to go unhealthy and start a reboot loop. We’ve had to have cron jobs run and kill the Docker daemons periodically in the past to keep things up and running.

                                                                                                                          1. 2

                                                                                                                            Maybe there are bugs in the way Docker sets up cgroups too, but you mentioned kernel semaphore leaks and LVM bugs, which seem to be squarely in the kernel? That tracks for me - when systemd started exercising all this Linux-kernel-specific machinery, it was the first really big consumer, so it also exposed lots of kernel bugs.

                                                                                                                      1. 10

                                                                                                                        Congrats on getting your server up and written about!

                                                                                                                        The thing that strikes me as kinda odd–maybe I’m just showing my age–is that you seem to have not one, not two, but many webservers:

                                                                                                                        • Caddy on the host machine
                                                                                                                        • Apache in the Next Cloud (I think?)
                                                                                                                        • Nginx in Gitlab (if you’re using the community edition image, which doesn’t look like it’s been updated in two years?)
                                                                                                                        • Go server for Hugo (which, as a static site generator, should just be a directory of files to serve directly, no?)
                                                                                                                        • Apache for Piwik
                                                                                                                        • Node for Bookstack (unavoidable)

                                                                                                                        Like, I can’t help but wonder if this is really an efficient use of resources. This sort of thing is why I view container-based solutions for ops with tremendous skepticism.

                                                                                                                        1. 3

                                                                                                                          Congrats on getting your server up and written about!

                                                                                                                          Thank you very much!

                                                                                                                          Apache in the Next Cloud (I think?)

                                                                                                                          There is a version (tag) of the nextcloud image that does not use a webserver and only exposes the php-fpm port. I’m using that image.
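                                                                                                                          Wiring that php-fpm-only image to Caddy looks roughly like this (Caddy v1 syntax; the hostname, path, and container name are placeholders, and the Nextcloud files are assumed to be shared with Caddy via a volume):

                                                                                                                          ```
                                                                                                                          cloud.example.com {
                                                                                                                              # root must point at the same Nextcloud files the container sees
                                                                                                                              root /var/www/html
                                                                                                                              fastcgi / nextcloud:9000 php
                                                                                                                          }
                                                                                                                          ```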

                                                                                                                          Thank you for reading my post so thoroughly - you’re mostly right. Except for the Nextcloud service, every other container has a dedicated web server built in. This takes some additional memory, but I don’t think it’s relevant CPU-wise (perceived, not measured).

                                                                                                                          Nevertheless: yes, it comes with some overhead, and this surely is not a solution for everyone. But in my case I’m very happy to be able to isolate all the services with containerization; the pleasure of easy updates and clean isolation far outweighs the (IMO) slight computational overhead. Although, with some extra effort, I’d be able to remove the web servers from most of the containers.

                                                                                                                          1. 1

                                                                                                                            Although: With some extra effort I’d be able to remove the web-servers in most of the containers.

                                                                                                                            I’m curious - how would you do this and still keep gitlab isolated? You’d still be running it with thin/puma/unicorn rather than spawning with passenger, right?

                                                                                                                            1. 2

                                                                                                                              I’d configure GitLab not to use its integrated nginx server, and configure Caddy to serve GitLab accordingly :). I haven’t figured out the required Caddy settings yet, but that’s on my agenda.

                                                                                                                              All the other daemons you mentioned that are required for running GitLab would still run inside the Docker container. Thus GitLab would stay isolated, just without the nginx server.
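                                                                                                                              With the Omnibus image, the GitLab side of that would look roughly like this in gitlab.rb (a sketch, not a tested config; the port is arbitrary):

                                                                                                                              ```
                                                                                                                              # Disable the bundled nginx and expose gitlab-workhorse over TCP
                                                                                                                              # so an external proxy can reach it.
                                                                                                                              nginx['enable'] = false
                                                                                                                              gitlab_workhorse['listen_network'] = "tcp"
                                                                                                                              gitlab_workhorse['listen_addr'] = "0.0.0.0:8181"
                                                                                                                              ```

                                                                                                                              Caddy could then forward to it with something like `proxy / gitlab:8181 { transparent }` (again Caddy v1 syntax, untested).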

                                                                                                                          2. 3

                                                                                                                            This is unfortunate, but also rather necessary for isolation. Often, apps depend on specific webserver settings (especially in the PHP world). If you’re going to pull those settings outside the container, that means you have to be aware of any changes during an upgrade.

                                                                                                                            For what it’s worth, in our case at least, there are only two webservers in the request path, and the apps that need their own webserver always use nginx. Those nginx instances have all caching and buffering disabled, and sit at the very bottom of the system’s memory usage.

                                                                                                                            I’m not sure of the per-request processing overhead of an extra webserver, but we haven’t hit any issues so far. The idea is to cache as much as possible at the front proxy; whatever still goes through the stack is heavier anyway. My gut says the overhead is probably small compared to the rest of the app logic in PHP.
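                                                                                                                            Concretely, the buffering-disabled part of such a per-app nginx config might look like this (a sketch assuming PHP-FPM over a unix socket; the port and paths are invented):

                                                                                                                            ```nginx
                                                                                                                            server {
                                                                                                                                listen 8080;
                                                                                                                                root /var/www/app/public;

                                                                                                                                location ~ \.php$ {
                                                                                                                                    # Stream responses straight through to the front proxy.
                                                                                                                                    fastcgi_buffering off;
                                                                                                                                    include fastcgi_params;
                                                                                                                                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                                                                                                                                    fastcgi_pass unix:/run/php/php-fpm.sock;
                                                                                                                                }
                                                                                                                            }
                                                                                                                            ```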

                                                                                                                          1. 2

                                                                                                                            The absolute worst part is that this code is generated boilerplate present everywhere in the language. I find the same thing in PHP. At least in ObjC there’s sugar in the language for it.

                                                                                                                            And maybe higher-level languages are following the wrong OO model, because I feel there’s very little to gain from distinguishing between properties and methods there. Message-passing OO is much simpler.