1. 8

    The string of security issues unveiled over the last few months in Intel CPUs and similar architectures is an industrial nightmare. It’s difficult to accept that a whole industry could have been built on such a fragile basis…

    1. 25

      I mean, have you seen the software the world runs on?

      1. 6

        It’s difficult to accept that a whole industry could have been built on such a fragile basis…

        See also car software.

        1. 5

          For me, it was easy to accept after seeing how much better the older stuff they ignored for money was. Intel did try to make their own better stuff, which failed repeatedly. They tried three times with the i432, i960, and Itanium. Each had good and bad points (the i960 is my favorite), but Intel was punished hard for all of them. Customers did buy billions in x86’s based on performance (most important), cost, and watts. Predictably, Intel decided to keep doing what people paid billions for instead of what cost them billions. I blame the buyers as much as Intel, given I’ve tried to sell lots of them on secure, usable products that were free or cost almost nothing. Usually unsuccessfully, due to some aspect of human nature.

          Like in most other tech products. It was surprising to me that Intel’s products weren’t worse than they are. Well, maybe they were, as the bugs keep coming in just as those assessing them predicted. They weren’t designed for strong security, since the market doesn’t pay for that. So, they have a lot of security problems. The hackers ignored them way longer than I thought they would, though.

          1. 4

            What shocks me most is how long we have been using these techniques without widespread awareness of these issues or the potential for this class of issues.

            Some people predicted these problems, sure, but their concerns were mostly dismissed. So over the course of decades, we’ve seen other chip makers and architectures adopt the same techniques and thus enable these same bug classes.

            For a young initiative like RISC-V, this is a great opportunity. They have not sunk years and years of development into features which may never be entirely safe to implement (speculative execution, hyperthreading, …) and are now able to take these new threats into account quite early in their development. This could be a boon for industrial adoption, especially while many competitors are forced to rethink so many man-years of performance improvements.

          1. 1

            My wife and I have noticed this. She lusts over the pockets that are just smattered all over my cargos. Even my regular pants generally have pockets. Many of her pants and skirts have stitched-up pockets or stitching made to look like pockets. What the heck!

            1. 2

              The ‘stitched up’ pockets might be referred to as ‘fake pockets’ elsewhere in this thread.

              There are some sewn-shut pockets that can be cut open and made functional without damaging the garment, but I’ve only observed this in men’s formal wear.

            1. 3

              Pockets, unlike purses, are hidden, private spaces.

              This seems backwards. One’s bag does a much better job of concealing the outline and even the presence of objects.

              1. 1

                Yes, but then you need to carry a bag, which most men don’t need to. Many men may carry a bag to work, but going to dinner or the game on the weekend, most men will not be carrying anything outside of their pockets.

              1. 14

                I don’t buy it because the real protocol is what you read and write from the file, not that you can read and write files. And if the “file” is a directory, what do the filenames you read and write from/to it mean?

                So is there really any difference between read(open("/net/clone")) and net_clone();? The author seems to say the former is more loosely coupled than the latter because the only methods are open and read on the noun that is the file… but really, you are stating exactly the same thing as the “verb” approach (if anything, I’d argue it is more loosely typed than loosely coupled). If a new version wants to add a new operation, what’s the difference between making it a new file that returns some random data you must write code to interpret, and a new method that returns some data you must write code to use?

                1. 24

                  So is there really any difference between read(open("/net/clone")) and net_clone();?

                  Yes: The fact that you can write tools that know nothing about the /net protocol, and still do useful things. And the fact that these files live in a uniform, customizable namespace. You can use “/net/tcp/clone”, but you can also use “/net.home/tcp/clone”, which may very well be a completely different machine’s network stack. You can bind your own virtual network stack over /net, and have your tests run against it without sending any real network traffic. Or you can write your own network stack that handles roaming and reconnecting transparently, mount it over /net, and leave your programs none the wiser. This can be done without any special support in the kernel, because it’s all just files behind a file server.
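                  A minimal sketch of what that looks like in rc, the Plan 9 shell (testhost is a hypothetical machine name; import and bind are the standard namespace commands):

                      import testhost /net /net.alt   # testhost's network stack appears at /net.alt
                      bind -b /net.alt /net           # union it over /net; unchanged programs now dial through testhost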

                  The difference is that there are a huge number of tools you can write that do useful things with /net/clone that know nothing about what gets written to the /net/tcp/* files. And tools that weren’t intended to manipulate /net can still be used with it.

                  The way that rcpu (essentially, the Plan 9 equivalent of VNC/remote desktop/ssh) works is built around this. It is implemented as a 90-line shell script. It exports devices from your local machine, mounts them remotely, juggles around the namespace a bit, and suddenly, all the programs that speak the devdraw protocol are drawing to your local screen instead of to the remote machine’s devices.

                  1. 5

                    You argue better than I can, but I’ll add that the shell is a human-interactive environment; C APIs are not. Having a layer that is human-interactive is neat for debugging and system inspection. Though this is a somewhat weaker argument once you get Python bindings or some equivalent.

                    1. 1

                      I was reminded of this equivalent.

                    2. 1

                      But in OOP you can provide a “FileReader” or “DataProvider”, or just a FilePath that abstracts where the file is or what you are reading from. The simplest would be the net_clone function above just taking a char* file_path, but in an OOP language the char*, or how we read from whatever the char* points to, can be abstracted too.

                      1. 2

                        Yes, but how do you swap it out from outside your code? The file system interface allows you to effectively do (to use some OOP jargon) dependency injection from outside of your program, without teaching any of your tools about what you’re injecting or how you need to wire it up. It’s all just names in a namespace.
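                        For instance, a hedged sketch in the same Plan 9 style (fakenet and myprogram are made-up names):

                            bind /usr/me/fakenet /net   # a directory served by your fake network stack
                            myprogram                   # still opens /net/tcp/clone; no flags, nothing recompiled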

                        1. 0

                          without teaching any of your tools about what you’re injecting or how you need to wire it up

                          LD_PRELOAD, JVM ClassPath…

                    3. 6

                      So is there really any difference between read(open("/net/clone")) and net_clone();?

                      Yes, there is. "/net/clone" is data, while net_clone() is code.

                      1. 4

                        I don’t buy it because the real protocol is what you read and write from the file, not that you can read and write files

                        Yes - but the read()/write() layer allows you to do useful things without understanding that higher-level protocol.

                        It’s a similar situation to text-versus-binary file formats. Take some golang code for example. A file ‘foo.go’ has meaning at different levels of abstraction:

                        1. golang code requiring 1.10 compiler or higher (uses shifted index expression https://golang.org/doc/go1.10#language)
                        2. golang code
                        3. utf-8 encoded file
                        4. file

                        You can interact with ‘foo.go’ at any of these levels of abstraction. To compile it, you need to understand (1). To syntax-highlight it you only need (2). To do unicode-aware search and replace, you need only (3). To count the bytes, or move/delete/rename the file you only need (4).

                        The simpler interfaces don’t allow you to do all the things that the richer interfaces do, but having them there is really useful. A user doesn’t need to learn a new tool to rename the file, for example; see the sketch below.
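                        To make that concrete, a rough sketch of working at each level with stock tools (assuming a Go toolchain and ordinary Unix utilities):

                            go build foo.go         # level 1: needs a Go 1.10+ compiler
                            gofmt foo.go            # level 2: needs a Go parser, not a particular compiler version
                            grep -c 'func' foo.go   # level 3: any UTF-8-aware text tool will do
                            wc -c foo.go            # level 4: just bytes in a file
                            mv foo.go bar.go        # level 4: just a file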

                        If you compare that to an IDE, it could perhaps store all the code in a database and expose operations on the code as high-level operations in the UI. This would allow various clever optimisations (e.g. all caller/callee relationships could be maintained and refactoring could be enhanced).

                        However, if the IDE developer failed to support regular expressions in the search and replace, you’re sunk. And if the IDE developer didn’t like command line tools, you’re sunk.

                        (Edit: this isn’t just one example. Similar affordances exist elsewhere. Text-based internet protocols can be debugged with ‘nc’ or ‘telnet’ in a pinch. HTTP proxies can assume that GET is idempotent and that various caching headers have their standard meanings, without understanding your JSON or XML payload at all.)

                      1. 0

                        Feels like a weak argument. I wonder if the author will agree to eat their hat if Waymo gets cars driving the general public around in Phoenix by the end of the year…

                        1. 2

                          Bold claims require bold hats being eaten.

                          1. 1

                            Italic hats taste better

                        1. 5

                          Seriously? Where is emacs?

                          1. 1

                            It’s still recovering from Kyle Machulis’s loving.

                            https://www.youtube.com/watch?v=D1sXuHnf_lo

                            1. 1

                              Hold up! This is a gallery of IDEs, not OSs :)

                            1. 21

                              Gosh, I couldn’t make it very far into this article without skimming. It goes on and on asking the same ‘why’ but mentally answering it in the opposite direction of the quoted comments.

                              Docker is easy, standard isolation. If it falls, something will replace it. We’re not going in the opposite direction.

                              The article doesn’t explain to me what other ways I have of running 9 instances of an app without making a big mess of listening ports and configuration.

                              Or running many different PHP apps without creating a big mess of PHP installs and PHP-FPM configs. (We still deal with hosting setups that share the same install for all apps, then want to upgrade PHP.)

                              Or how to make your production setup easy to replicate (roughly) for developers who actually work on the codebase. (Perhaps on macOS or Windows, while you deploy on Linux.)

                              We’re not even doing the orchestration dance yet, these are individual servers that run Docker with a bunch of shell scripts to provision the machine and manage containers.

                              But even if we only use 1% of the functionality in Docker, I don’t know how to do that stuff without it. Never mind that I’d probably have to create a Vagrant box or something to get anyone to use it in dev. (I’ve come to dislike Vagrant, sorry to say.)

                              Besides work, I privately manage a little cloud server and my own Raspberry Pi, and sure they don’t run Docker, but they don’t have these requirements. It’s fine to not use Docker in some instances. And even then, Docker can be useful as a build environment, to document / eliminate any awkward dependencies on the environment. Makes your project that much easier to pick up when you return to it months later.

                              Finally, I’m sorry to say that my experiences with Ansible, Chef, and Puppet have only ever been bad. It seems to me like the most fragile aspect of these tools is all the checking of what’s what in the current environment before acting on it. I’m super interested in trying NixOS sometime, because from what I gather, the model is somewhat similar to what Docker does: simply layering stuff, like we’ve always done with software.

                              1. 1

                                For the PHP part, it’s not that complex. Install the required versions (Debian and Ubuntu both have 5.6 through 7.2 “major” releases available side by side, thanks to Ondrej Sury’s repo). Then just set up a pool per app (which you should do anyway) and point to the app’s specific Unix domain socket for php-fpm in the vhost’s proxy_fcgi config line.
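                                A rough sketch of the shape of this (myapp is a hypothetical app name; exact package names and paths vary by distro and PHP version):

                                    apt-get install php5.6-fpm php7.2-fpm   # side by side, from the repo above

                                    # /etc/php/7.2/fpm/pool.d/myapp.conf
                                    #   [myapp]
                                    #   user = myapp
                                    #   listen = /run/php/myapp.sock
                                    #
                                    # and in the vhost:
                                    #   <FilesMatch "\.php$">
                                    #     SetHandler "proxy:unix:/run/php/myapp.sock|fcgi://localhost/"
                                    #   </FilesMatch>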

                                I’ve used this same setup to bring an app from php5.4 (using mod_php) up through the versions as it was tested/fixed too.

                                Is there some config/system setup required? You betcha. Ops/sysadmin work is part of running a site that requires more than shared hosting.

                                What are you gonna do with docker, have each developer just randomly writing whatever the fuck seems like a good idea and pushing their monolithic images to prod with no ops supporting it?

                                1. 13

                                  What are you gonna do with docker, have each developer just randomly writing whatever the fuck seems like a good idea and pushing their monolithic images to prod with no ops supporting it?

                                    Yes. The whole point of “DevOps”/Docker is to deploy software certified by the “Works on My Machine” certification program. This eliminates coordination time with a separate Ops team.

                                  1. 2

                                    Is this sarcasm, or are you actually in favour of the definition “DevOps = Developers [trying to] do Ops” ?

                                    1. 8

                                      Descriptively, that’s what DevOps is. I am prescriptively against such DevOps, but describing what’s currently happening with docker is unrelated to whether I am in favor of it.

                                      1. 3

                                          I don’t disagree that it’s a definition used by a lot of places (whether they call it devops or not). But I believe a lot of people who wax poetic about “DevOps” don’t share this same view - they view it as Operations using ‘development’ practices: i.e. writing scripts/declarative state files/etc to have reproducible infrastructure, rather than a “bible” of manual steps to go through to set up an environment.

                                          I’m in favour of the approach those people like, but I’m against the term simply because it’s misleading - like “the cloud” or “serverless”.

                                  2. 2

                                    I don’t understand your last point, that’s exactly what developers do all day.

                                    In Docker, the PHP version the app depends on is set in code. It doesn’t even take any configuration changes when the app switches to a new PHP version.

                                      But if there’s one gripe I have with the Docker way of things, baking everything into an image, it’s security. There are no shared libraries in any sense; upgrading even a minor version of a dependency requires baking a new image.

                                    I kinda wish we had a middle road, somewhere between Debian packages and Docker images.

                                    1. 3

                                      the PHP version the app depends on is set in code

                                      And of course we all know Docker is the only way to define dependencies for software packages.

                                      1. 4

                                        Did anyone say it was? Docker is just one of the easiest ways to define the state of the whole running environment and have it defined in a text file which you can easily review to see what has been done.

                                      2. 1

                                        You can share libraries with Docker by making services share the same Docker image. You can actually replicate Debian level of sharing by having a single Docker image.

                                        1. 2

                                          Well, I guess this is just sharing in terms of memory usage? But what I meant with security is that I’d like if it were possible to have, for example, a single layer in the image with just OpenSSL, that you can then swap out with a newer version (with, say, a security fix.)

                                            Right now, an OpenSSL upgrade means rebuilding the app. The current advantage of managing your app ‘traditionally’, without Docker, is that a sysadmin can do this upgrade for you. (Same with PHP patch versions, in the earlier example.)

                                          1. 5

                                            And this is exactly why I don’t buy into the whole “single-use” container shit show.

                                            Want to use LXC/LXD for lightweight “VM’s”? Sure, I’m all for it. So long as ops can manage the infra, it’s all good.

                                            Want to have developers having the last say on every detail of how an app actually runs in production? Not so much.

                                            What you want is a simpler way to deploy your php app to a server and define that it needs a given version of PHP, an Apache/Nginx config, etc.

                                              You could literally do all of that by just having your app packaged as a .deb, have it define dependencies on php-{fpm,moduleX,moduleY,moduleZ}, and include a vhost.conf and pool.conf file. A minimal package (i.e. not Debian-repo quality, but fine for private installs) means you’ll need maybe half a dozen extra files.
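                                              A hedged sketch of that minimal package (myapp is a hypothetical name, trimmed to the fields dpkg requires):

                                                  mkdir -p myapp/DEBIAN
                                                  cat > myapp/DEBIAN/control <<'EOF'
                                                  Package: myapp
                                                  Version: 1.0.0
                                                  Architecture: all
                                                  Maintainer: ops@example.com
                                                  Depends: php7.2-fpm, php7.2-mysql, php7.2-curl
                                                  Description: myapp plus its vhost.conf and php-fpm pool.conf
                                                  EOF
                                                  # copy the app, vhost.conf and pool.conf into myapp/<target paths>, then:
                                                  dpkg-deb --build myapp   # produces myapp.deb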

                                            And then your ops/sysadmin team can upgrade openssl, or php, or apache, or redis or whatever other thing you use.

                                            1. 2

                                              I actually do think this is a really good idea. But what’s currently there requires a lot more polish for it to be accessible to devs and small teams.

                                                Debian packaging is quite a pain (though you could probably skip a lot of the standards). RPM is somewhat easier. But in both cases, the packages typically bundle default app configuration and systemd unit files, which is a model that sort of assumes things only have one instance.

                                              You could then go the LXC route, and have an admin manage each instance in a Debian container. That’s great, but we don’t have the resources to set up and manage all of this, and I expect that is the case for quite a lot of small teams out there.

                                              Maybe it’s less complicated than I think it is? If so, Docker marketing got something very right, and it’d help if there was a start-to-finish guide that explains things the other way.

                                              Also remember that Docker for Mac/Windows makes stuff really accessible for devs that are not on Linux natively. Not having to actually manage your VM is a blessing, because that’s exactly my gripe with Vagrant. At some point things inside the VM get hairy, because of organic growth.

                                              1. 3

                                                But in both cases, the packages typically bundle default app configuration and systemd unit files, which is a model that sort of assumes things only have 1 instance.

                                                  In the case in question, it is one instance. Either you build your packages with different names for different stages (e.g. acme-corp-foo-app-test, acme-corp-foo-app-staging, acme-corp-foo-app-prod) or use separate environments for test/stage/prod - either via VMs, LXC/LXD, whatever.

                                                  Nothing is a silver bullet, Docker included. It’s just that Docker has a marketing team with a vested interest in glossing over its deficiencies.

                                                If you want to talk about how to use the above concept for an actual project, I’m happy to talk outside the thread.

                                                1. 2

                                                  Also remember that Docker for Mac/Windows makes stuff really accessible for devs that are not on Linux natively. Not having to actually manage your VM is a blessing, because that’s exactly my gripe with Vagrant. At some point things inside the VM get hairy, because of organic growth.

                                                  This is exactly why at work we started to use Docker (and got rid of Vagrant).

                                                  1. 1

                                                    At some point things inside the VM get hairy, because of organic growth.

                                                    Can you define “hairy”?

                                                    1. 2

                                                        The VM becomes a second workstation, because you often SSH in to run some commands (test migrations and the like). So people install things in the VM, and change system configuration in the VM. And then people revive months-old VMs, because it’s easier than vagrant up, which can take a good 20 minutes. There’s no reasoning about the state of Vagrant VMs in practice.

                                                      1. 3

                                                        So people install things in the VM, and change system configuration in the VM

                                                        So your problem isn’t vagrant then, but people. Either the same people are doing the same thing with Docker, or not all things are equal?

                                                        because it’s easier than vagrant up, which can take a good 20 minutes

                                                        What. 20 MINUTES? What on earth are you doing that causes it to take 20 minutes to bring up a VM and provision it?

                                                        There’s no reasoning about the state of Vagrant VMs in practice.

                                                        You know the version of the box that it’s based on, what provisioning steps are configured to run, and whether they’ve run or not.

                                                        Based on everything you’ve said, this sounds like blaming the guy who built a concrete wall, when your hammer and nails won’t go into it.

                                                        1. 1

                                                          I suppose the main difference is that we don’t build images for Vagrant, but instead provision the machine from a stock Ubuntu image using Ansible. It takes a good 3 minutes just to get the VirtualBox VM up, more if you have to download the Ubuntu image. From there, it’s mostly adding repos, installing deps, creating configuration. Ansible itself is rather sluggish too.

                                                          Compare that to a 15 second run to get a dev environment up in Docker, provided you have the base images available.

                                                          A people problem is a real problem. It doesn’t sound like you’ve used Docker for Mac/Windows, but the tool doesn’t give you a shell in the VM. And you don’t normally shell into containers.

                                                          1. 2

                                                              That’s interesting that it takes you 20 minutes to get to something usable. I never had that experience back when I used VMware and VirtualBox. I can’t remember having it, anyway. I decided to see what getting Ubuntu up on my box takes with the new version, for comparison to your experience. I did this experiment on my backup laptop: a 1.5GHz Celeron with plenty of RAM and an older HD. It’s garbage as far as performance goes. Running Ubuntu 16 or 17 (one of them…), VirtualBox, and Ubuntu 18.04 as the guest in a 1GB VM. That is, the LiveCD of Ubuntu 18.04 that it’s booting from.

                                                            1. From power on to first Ubuntu screen: 5.7 seconds.

                                                            2. To get to the Try or Install screen: 1 min 47 seconds.

                                                            3. Usable desktop: 4 min 26 seconds.

                                                              So, it’s up in under 5 minutes with the slowest-loading method (LiveCD) on some of the slowest hardware (a Celeron) you can get. That tells me you could probably get even better startup time than me if you install and provision your stuff into a VirtualBox VM that becomes a base image. You use it as read-only, snapshot it, whatever the feature was. I rarely use VirtualBox these days, so I can’t remember. I know a fully-loaded Ubuntu boots up in about a minute on this same box, with VirtualBox adding 5.7s to get to the bootloader. Your setup should take just 1-2 minutes to boot if done right.

                                                            1. 0

                                                              It takes a good 3 minutes just to get the VirtualBox VM up

                                                              What? Seriously? Are your physical machines running on spinning rust or with only 1 or 2 GB of RAM or something? That is an inordinate amount of time to boot a VM, even in the POS that is Virtualbox.

                                                              but the tool doesn’t give you a shell in the VM.

                                                              What, so docker attach or docker exec /bin/bash are just figments of my imagination?

                                                              you don’t normally shell into containers

                                                              You don’t normally just change system settings willy nilly in a pre-configured environment if you don’t know what you’re doing, but apparently you work with some people who don’t do what’s “normal”.

                                                              1. 2

                                                                Physical machines are whatever workstation the developer uses. Typically a Macbook Pro in our case. Up until Vagrant has SSH access to the machine, I’m not holding my breath.

                                                                You’re confusing shell access to the VM with shell access to containers. The Docker commands you reference are for container access.

                                                                People do regularly make changes to vhost configuration, or installed packages in VMs when testing new features, instead of changing the provisioning configuration. Again, because it takes way longer to iterate on these things with VMs. And because people do these things from a shell inside the VM, spending time there, they start customizing as well.

                                                                And people do these things in Docker too, and that’s fine. But we’re way more comfortable throwing away containers than VMs, because of the difference in time. In turn, it’s become much easier to iterate on provisioning config changes.

                                                                1. 3

                                                                  If time was a problem, it sounds like the Docker developers should’ve just made VMs faster in existing stacks. The L4Linux VMs in Dresden’s demo loaded up at about one per second on old hardware. Recently, LightVM got it down to 2.3 milliseconds on a Xen variant. Doing stuff like that also gives the fault-isolation and security assurances that only come with simple implementations, which Docker-based platforms probably won’t have.

                                                                  Docker seems like it went backwards on those properties vs just improving speed or usability of virtualization platforms.

                                                                  1. 1

                                                                    You’re confusing shell access to the VM with shell access to containers. The Docker commands you reference are for container access.

                                                                    No. Your complaint is that people change configuration inside the provisioned environment. The provisioned environment with Docker isn’t a VM - that’s only there because it requires a Linux kernel to work. The provisioned environment is the container, which you’ve just said people are still fucking around with.

                                                                    So your complaint still boils down to “virtualbox is slow”, and I still cannot imagine what you are doing to take twenty fucking minutes to provision a machine.

                                                                    That’s closer to the time to build a base box from nothing than the time to bring up an instance and provision it.

                                                                    1. 2

                                                                      Look, this is getting silly. You can keep belittling every experience I’ve had, as if we’ve made these choices based on a couple of tiny bad aspects in the entire system, but that’s just not the case, and that’s not a productive discussion.

                                                                    I did acknowledge that in practice Docker bakes a lot more into its images, which factors into a lot of the slowness of provisioning in the Vagrant case for us. There’s just a lot more that provisioning has to do compared to Docker.

                                                                      And while we could’ve gone another route, I doubt we would’ve been as happy, considering where we all are now as an industry. Docker gets a lot of support, and has a healthy ecosystem.

                                                                      I see plenty of issues with Docker, and I can grumble about it all day. The IPv6 support is terrible, the process management is limited, the Docker for Mac/Windows filesystem integrations leave a lot to be desired, the security issue I mentioned in this very thread. But it still has given us a lot more positives than negatives, in terms of developer productiveness and managing our servers.

                                                                      1. 1

                                                                      You can keep belittling every experience I’ve had

                                                                      Every ‘issue’ you raised boils down to ‘vagrant+virtualbox took too long to bring up/reprovision’. At 20 minutes, that’s not normal operation, it’s a sign of a problem. Instead of fixing that, you just threw the whole lot out.

                                                                        This is like saying “I can’t work out why apache keeps crashing under load on Debian. Fuck it, I’m moving everything to Windows Server”.

                                                                      But it still has given us a lot more positives than negatives

                                                                      The linked article seems to debunk this myth.

                                                                      2. 2

                                                                        I have the same experience as @stephank with VirtualBox. Every time I want to restart with a clean environment, I restart with a standard Debian base box and I run my Ansible playbooks on it. This is slow because my playbooks have to reinstall everything (I try to keep a cache of the downloaded packages in a volume on the host, shared with the guest). Docker makes this a lot easier and quicker thanks to the layer mechanism. What do you suggest to keep using Vagrant and avoid the slow installation (building a custom image I guess)?

                                                                        1. 2

                                                                          Please tell me “the same experience” isn’t 20 minutes for a machine to come up from nothing?

                                                                          I’d first be looking to see how old the base box you’re using is. I’m guessing part of the process is an apt-get update && apt-get upgrade - some base boxes are woefully out of date, and are often hard-coded to use e.g. a US based mirror, which will hurt your update times if you’re elsewhere in the world.

                                                                          If you have a lot of stuff to install, then yes I’d recommend making your own base-box.
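                                                                        Roughly, as a sketch (the box name is made up): provision once, freeze the result, and the day-to-day vagrant up becomes a boot instead of a rebuild:

                                                                            vagrant up                                # run the slow provisioning one last time
                                                                            vagrant package --output myapp-base.box   # freeze the provisioned VM
                                                                            vagrant box add myapp-base myapp-base.box
                                                                            # then in the Vagrantfile: config.vm.box = "myapp-base"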

                                                                          What base-box are you using, out of interest? Can you share your playbooks?

                                                                          1. 2

                                                                            Creating a new VM with Vagrant just takes a few seconds, provided that the base box image is already available locally.

                                                                            Provisioning (using Ansible in my case) is what takes time (installing all the services and dependencies required by my app). To be clear, in my case, it’s just a few minutes instead of 20 minutes, but it’s slow enough to be inconvenient.

                                                                            I refresh the base box regularly, I use mirrors close to me, and I’ve already checked that apt-get update/upgrade terminates quickly.

                                                                            My base box is debian/jessie64.

                                                                            I install the usual stuff (nginx, Python, Go, Node, MySQL, Redis, certbot, some utils, etc.).

                                                                            1. 2

                                                                            Reading all your comments, you seem deeply interested in convincing people that VMs solve all the problems people think Docker is solving. Instead of debating endlessly in comments here, I’d be (truly) interested to read about your workflow as an ops person and as a dev. I finished my studies using Docker and never had to use VMs that much on my machines, so I’m not an expert and would be really interested in a good article/post/… that I could learn from on the subject of how VMs would be better than Docker.

                                                    2. 1

                                                      I think the point is to use something like Ansible, so you put some Ansible config in a git repo, then you pull the repo, build the Docker image, install apps, apply the config, and run, all via Ansible.

                                                    3. 2

                                                      How do you easily manage 3 different versions of PHP with 3 different versions of MariaDB? I mean, this is something that Docker solves VERY easily.
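                                                      For illustration, a sketch of what that ease looks like (tags and passwords are placeholders):

                                                          docker run -d --name db-55  -e MYSQL_ROOT_PASSWORD=secret mariadb:5.5
                                                          docker run -d --name db-102 -e MYSQL_ROOT_PASSWORD=secret mariadb:10.2
                                                          docker run -d --name app-56 php:5.6-fpm   # each app pins its own PHP version
                                                          docker run -d --name app-72 php:7.2-fpm   # no conflicts on the host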

                                                      1. 4

                                                        Maybe if your team requires 3 versions of a database and language runtime they’ve goofed…

                                                        1. 8

                                                          It’s always amusing to see answers pointing at legacy and saying “it shouldn’t exist”. I mean, yes, it’s weird and annoying, but it exists now and will exist later.

                                                          1. 6

                                                            it exists now and will exist later.

                                                            It doesn’t have to exist at all–like, literally, the cycles spent wrapping the mudballs in containers could be spent just…you know…cleaning up the mudballs.

                                                            There are cases (usually involving icky third-party integrations) where maintaining multiple versions of runtimes is necessary, but outside of those it’s just plan sloppy engineering not to try and cleanup and standardize things.

                                                            (And no, having the same container interface for a dozen different snowflakes is not standardization.)

                                                            1. 2

                                                              I see it more like: the application runs fine, but the team that was working on it doesn’t exist anymore. Instead of spending time upgrading it (because I’m no Java 6 developer), and since I still want to benefit from bin packing, re-scheduling, … (and not only for this app, but for ALL the apps in the enterprise), I just spend time putting it in a container, and voilà. I can still deploy it to several different clouds and orchestrators without asking a team to spend time on a project that already does the job correctly.

                                                              To be honest, I understand that containers are not the solution to everything, but I keep wondering why people don’t accept that it has some utility.

                                                            2. 2

                                                              I think the point is that there is often little cost/benefit analysis done. Is moving one’s entire infrastructure to Docker/Kubernetes less work than getting all one’s code to run against the same version of a database? I’m sure sometimes it is, but my experience is that these questions are rarely asked. There is a status-quo bias toward solutions that allow existing complexity to be maintained, even when the solutions cost more than reducing that complexity.

                                                              1. 4

                                                                Totally agreed, but I’m also skeptical of the reaction of always blaming containers for adding complexity. From my point of view, many things that I do with containers are way easier than if I had to do them another way (I also agree that some things would be easier without them too).

                                                          2. 2

                                                            Debian solves three different versions of php with Ondrej’s packages (or ppa on Ubuntu).

                                                            In anything but dev or the tiniest of sites you’ll have your database server on a separate machine anyway - what possible reason is there to have three different versions of a database server on the same host for a production environment?

                                                            If you need it for testing, use lx{c,d} or vms.

                                                            1. 3

                                                              MySQL especially has broken apps in the past, going from 5.5 -> 5.6, or 5.6 -> 5.7. Having a single database server means having to upgrade all the apps that run on top of it in sync. So in practice, we’ve been running a separate database server per version.

                                                              Can’t speak for other systems, though.

                                                              1. 1

                                                                As you said, testing is a good example of such a use case. Then why use VMs when I can bin-pack containers on one (or many) machines, using fewer resources?

                                                                1. 1

                                                                  That still isn’t a reason to use it in prod, and it isn’t that different from using LXC/LXD style containers.

                                                                  1. 1

                                                                    Do you have rational arguments for being against Docker, which uses LXC? For now I don’t see any good reason not to. It’s like saying that you don’t want to use a solution because you can use the technologies it uses underneath.

                                                                    1. 6

                                                                      It’s like saying that you don’t want to use a solution because you can use the technologies it uses underneath.

                                                                      That’s a reasonable position though. There are people who have good reasons to prefer git CLI to Github Desktop, MySql console to PHPMyAdmin, and so forth. Abstractions aren’t free.

                                                                      1. 1

                                                                        Exactly! But I don’t see such hatred for people using GitHub Desktop or PHPMyAdmin. Just because you don’t want to use it doesn’t mean it doesn’t fit someone’s use case.

                                                                        1. 1

                                                                          As someone who usually ends up having to ‘cleanup’ or ‘fix’ things after someone has used something like a GUI git client or PHPMyAdmin, I wouldn’t use the word hatred, but I’m not particularly happy if someone I work with is using them.

                                                                          1. 1

                                                                            I can do interactive staging on the CLI, but I really prefer a GUI (and if I find a good one, would probably also use a GUI for rebasing before sending a pull request).

                                                                      2. 2

                                                                        If I want a lightweight machine, LXC provides that. Docker inherently is designed to run literally a single process. How many people use it that way? No, they install supervisord or whatever - at which point, what’s the fucking point?

                                                                        You’re creating your own ‘mini distribution’ of bullshit so you can call yourself devops. Sorry, I don’t drink the koolaid.

                                                                        1. 1

                                                                          Your argument is flawed. You justify the uselessness of Docker by generalizing from what a (narrow) subset of users is doing. Like I said, I’m ready to hear rational arguments.

                                                                          1. 2

                                                                            generalizing what a (narrow) subset of users is doing

                                                                            I found you 34K examples in about 30 seconds: https://github.com/search?l=&q=supervisord+language%3ADockerfile&type=Code

                                                                            1. 1

                                                                              Hummm okay you got me on this one! Still, I really think there is some real utility for such a solution, even if yes it can be done in many other ways.

                                                          1. 2

                                                              Ah, yes it is. If it’s on GitHub, for example, anyone can use it, modify it, and contribute to it. They can even add some of the stuff the author talks about: readmes, documentation, and comments. They can submit bugs and suggestions, and they can work on features or fixes. “Just” putting it up is often good enough.

                                                            1. 3

                                                              Agreed, I regularly send patches against README and docs for things that I use, as I’m learning how to use them. It’s just good manners to take your newly acquired knowledge and help others with it. Doubly so if you find something in the documentation that doesn’t exist in the executable.

                                                            1. 4

                                                              In terms of desktop adblocking and tracking blocking solutions, I use uBlock Origin, Privacy Badger, and this hosts file.

                                                              1. 4

                                                                I like the technique in general but often have local stuff listening on various ports. I wish there was a well-known ‘/dev/null’ IP address which these could be routed to … a tiny daemon could then return a protocol-appropriate NAK immediately and log the attempt.
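                                                                  One way to approximate that today, as a sketch for Linux (127.0.0.2 is a spare loopback address; the iptables rule stands in for the NAK daemon):

                                                                      # /etc/hosts: point blocked hosts at the spare loopback address
                                                                      127.0.0.2 ads.example.com
                                                                      # reply with an immediate TCP reset instead of letting connections hang
                                                                      iptables -A INPUT -d 127.0.0.2 -p tcp -j REJECT --reject-with tcp-reset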

                                                                1. 1

                                                                  Rather than 127.0.0.1, could you just route to something on your LAN that doesn’t exist?

                                                                2. 3

                                                                  I use uBlock Origin as well, works like a charm! Just have to disable font blocking on a few pages for them to be readable.

                                                                1. 6

                                                                  TLDR: The laptop was not tampered with.

                                                                  Still a good read though :-)

                                                                  1. 16

                                                                    That he knows of.

                                                                    1. 5

                                                                      It’s impossible to prove… :)

                                                                      1. 5

                                                                        For sure haha. One can do better than he did, though.

                                                                            For one, he can block evil-maid-style attacks very cheaply. I’ve done plenty of tamper-evident schemes for that stuff. You can at least know if they opened the case. From there, one can use analog/RF profiling of the devices to detect chip substitutions. It requires specialist, time-consuming skills, or the occasional help of a specialist to give you a black-box method plus steps to follow for a device they already profiled.

                                                                        The typical recommendation I gave, though, was to buy a new laptop in-country and clear/sell it before you leave. This avoids risks at border crossings where they can legally search or might sabotage devices. Your actual data is retrievable over a VPN after you put Linux/BSD on that sucker. Alternatively, you use it as a thin client for a real system but latencies could be too much for that.

                                                                        So, there’s a few ideas for folks looking into solving this problem.

                                                                        1. 3

                                                                              This (and the original article) are techno solutions to a techno problem that doesn’t really exist.

                                                                          If you’re a journo doing this, they will look at your visa and say, you claim to be a journalist, but you have no laptop, we don’t believe you, entry denied.

                                                                              I’m pretty sure even a very open country like NZ will do this to you. (And if you claim not to be a journalist and start behaving as one, you’re again violating your visa conditions (i.e. working, not visiting), and out you go.)

                                                                          As to spying on what you have on an encrypted drive….. rubber hose code breaking sorts that out pretty quick.

                                                                          I grew up in the Very Bad Old days and tend to have a very dim view of the technical abilities, patience and human kindness of the average spook.

                                                                          1. 2

                                                                                I got the idea from people doing it. They weren’t journalists, though. The other thing people did which might address that problem is take boring laptops with them. They have either nothing interesting or some misinformation on them. Nothing secret happens on them during the trip. They might even use one for non-critical stuff like YouTube, just so it’s different when it gets scanned on return.

                                                                    2. 5

                                                                      TLDR: The laptop was not tampered with in a way he’s foreseen.

                                                                      To just say the laptop was not tampered with is missing his point completely.

                                                                    1. 1

                                                                          Well, yes, a lot of their issues are caused by having APIs that are too open. To be fair, back in those days, the tech ecosystem was definitely pushing for this openness. It was considered a good thing. Now, not so much…

                                                                      1. 1

                                                                        In our buzzwords-driven field?

                                                                        Probably people considered API access “a good thing” just because “Facebook/Google is doing this too!”

                                                                        But the problem was not the technology back then, just like AI is not the solution right now.

                                                                        It’s the business model.

                                                                            I remember a younger Zuckerberg explaining to the world how privacy had no value for modern people.

                                                                        He meant it!

                                                                        1. 1

                                                                          back in those days, the tech ecosystem was definitely pushing for this openness.

                                                                          I would hardly call 2015 “those days”.

                                                                          1. 1

                                                                            Back in my day…

                                                                        1. 4

                                                                          To me the only question that hasn’t been asked and answered is how data that is not intentionally shared by users (cookie correlation, GPS coordinates, …) is being stored (and kept even after deleting your account) and sold to consulting companies.

                                                                          Each time a question landed near this, he smartly shifted his answer to the « two categories » of data and how the user has control over both categories. Nonetheless, we still have no answer about it.

                                                                          1. 2

                                                                            Storing user data is a liability, as you are “on the hook” if it gets out to hackers. Unfortunately, “on the hook” currently just means your reputation is damaged slightly until Uber does a bigger stuff-up later that week, stealing your limelight. I’d like governments to get to the point where the law puts the fear of God into companies, and their lawyers forbid them to hold onto creepy tracking data, at least. Even required user data such as name, age, and address is still a liability. Apparently companies don’t think like me, though.

                                                                          1. 11

                                                                            Zuck: People just submitted it.
                                                                            Zuck: I don’t know why.
                                                                            Zuck: They “trust me”
                                                                            Zuck: Dumb fucks.

                                                                            1. 8

                                                                              And if anyone thinks he changed, just watch him buy lots of houses around his for privacy as he convinces everyone else to give up theirs. ;)

                                                                              1. 2

                                                                                Rich people are different from regular people. I like talking to my neighbours and having BBQs every so often with people in my street. It’s quite dark and Black Mirror-y to want to ignore everyone around you and just interact digitally with a select few friends that are probably just like you. Goodbye, diversity.

                                                                            1. 7

                                                                              I keep reading this, but I’ve only seen it once, thankfully - and I’ve been in my share of workplaces, having spent about half my career as a contractor. Am I a lucky freak, or is it just not actually that common?

                                                                              1. 7

                                                                                Well, for another anecdotal data point… In my 30 year career I’ve seen many “necessary nice people”, and several executives who were jerks, but I can’t think of any “necessary jerk” individual contributors. There were certainly some jerks, but they didn’t seem necessary.

                                                                                They do enable good dramatic situations, though, so I can understand why they’re popular in literature. And for obvious reasons they’re overrepresented in real life stories of office harassment.

                                                                                1. 1

                                                                                  What do you mean by “necessary nice person”?

                                                                                  Is it “A is the only person who knows how to do X?”, ie https://en.m.wikipedia.org/wiki/Bus_factor

                                                                                  1. 2

                                                                                    Sometimes, but more often it’s “A can do X twice as fast as anybody else” or “A knows who knows how to do X for any value of X”.

                                                                                2. 5

What I’ve seen is people whose lack of social finesse has been papered over because “well, programmers, haha”.

(I’ve had this advantage as well.)

I have never seen another job position where being bad at humans is considered acceptable. I am, of course, all for giving people opportunities to improve, but the bar is set so much lower than in basically any other job.

                                                                                  1. 2

                                                                                    “Well I guess they don’t have to talk to the customers directly….let’s hire them!”

                                                                                  2. 3

I think the people who truly embody this personality type (or the combination of traits that makes for this type of personality) typically find their way toward the top… if they’re good at what they do. Those who truly lack empathy, who sit further along the sociopathic/psychopathic scales, tend to vie for positions at the top. They take large risks and, if they’re good at it, they jump in to fill positions the moment they can.

I agree; I have encountered few of these Necessary Jerks on my teams myself in my 15+ years in tech. There were one or two, but none who were really that bad or with whom I couldn’t find some common ground and get along. There were more people who were incompetent, which is annoying, but so long as they’re nice and trying… eh, everyone needs a job. Then there are people who are incompetent, refuse to learn, and are shitheads about it, and you wonder why the hell they still have a job; you just gotta be as nice as you can (they are a lesson in patience).

From what I’ve heard, teams with “necessary jerks” typically just have really shitty management. I think this post that was up a few months ago really captures that type of work environment:

                                                                                    https://startupsventurecapital.com/you-fired-your-top-talent-i-hope-youre-happy-cf57c41183dd

For the type of person I mentioned at the beginning of this comment, I recommend the book The Dictator’s Handbook. It’s pretty eye-opening as to what it really takes to grab and hold onto a position of power, like being a CEO. Spoiler alert: knowing anything useful about your business or technology, or even caring about your employees/staff, has very little to do with it.

                                                                                  1. 2

                                                                                    Just fire up gedit as a scratch pad that you have to paste to first, then copy from gedit to the terminal.

                                                                                    1. 2

                                                                                      That’s what I do. One can also sanity check the command against the man pages.

                                                                                    1. 4

                                                                                      Fine. As long as Ubuntu still runs on the new hardware.

                                                                                      1. 3

                                                                                        Linux has been really cranking up the ARM support, so lots of stuff will probably work. On the other hand, the ARM ecosystem is full of vendor-specific peripheral chips that don’t have a universal interface, and there just isn’t the same ecosystem for supporting that kind of stuff as there is on x86. It’s not generic yet and I can’t imagine Apple spearheading an effort to build the same generic hardware interfaces that x86 has, especially when they profit so much on having their highly custom vertical integrations.

                                                                                        On the other hand, Google has done a lot of work here with Chromebooks. If MacBooks become the premier ARM laptop, the community will have a decent starting point towards supporting them. I’d guess that Linux will work passably within a year unless Apple’s ARM laptop looks radically different from anything we’ve seen.

                                                                                      1. 40

Whenever I read tech articles about reducing keystrokes I tend to roll my eyes. cd’ing between directories already takes up a very small portion of my time; optimizing it will never be worth it. Now, if you can tell me how to make roadmap estimates that don’t put my team in peril, that would actually help me not waste my time!

                                                                                        Edit: It’s a cool tool, just maybe the article is touting it as more of a life saver than it actually is.

                                                                                        1. 12

I mean, I do too, but people do actually take this kind of thing seriously. I’ve had several people say they wouldn’t use ripgrep because the command was too long to type, but who, upon hearing that the actual command was rg, were much more satisfied. Maybe I missed their facetiousness, but they didn’t appear to be joking…

                                                                                          1. 5

Could they not have just aliased the command if it was “too long”?
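(Hypothetically, had the binary actually shipped under the long name, a single line in ~/.bashrc or ~/.zshrc would have fixed it:)

    # hypothetical shortening, supposing the binary were named "ripgrep"
    # (in reality the binary is already called rg)
    alias rg='ripgrep'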

                                                                                            1. 4

                                                                                              The people in question don’t sound clever enough for that.

                                                                                              1. 1

                                                                                                Are you asking me? Or them? ;-)

                                                                                              2. 4

                                                                                                I wonder if these are different people than the ones who complain about short unix command names and C function names…

                                                                                              3. 9

                                                                                                For those of us with RSI, these little savings add up, and can make for a pretty big difference in comfort while typing.

                                                                                                1. 8

Oh please. If you’re really worried about a couple of words and saving keystrokes, you’d set up directories and make aliases that take you specifically where you want to go. Assuming it was even a GUI you were using with a mouse, you’d still have to click through all the folders.

Overall, paying close attention to your workspace setup and ergonomics will go a lot further in improving your RSI situation than this little jumper ever will.
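For example, a couple of lines in your shell config cover the hot paths (the directory names here are invented):

    # jump straight to the places you actually go
    alias proj='cd ~/src/myproject'
    alias dots='cd ~/dotfiles'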

                                                                                                2. 4

My thoughts exactly. I have often wasted time trying to optimize something which took so little time to begin with that, even if I reduced it to nothing, it would have no significant impact overall. And the less obvious trap is that optimizations like this add complexity, which leads to more time spent down the road.

                                                                                                  1. 9

                                                                                                    All right, buddy. Cool.

Did I say it was a “life saver”? Nope. Did I say it could save you a lot of time? Yup. If cd'ing into directories doesn’t waste your time, cool. Move along, read the next blog post on the list.

                                                                                                    I’m sorry about your roadmap estimations. Sounds like you’ve got a lot on your chest there.

                                                                                                    1. 31

                                                                                                      Let me just take a step back and apologize—nobody likes negative comments on their work and I chose my words poorly and was insensitive. I’m rather burnt out and, in turn, that makes me appear more gruff online. I’m positive that someone will find this useful, especially if they’re managing multiple projects or similar use cases.

                                                                                                      1. 23

I really appreciate you saying that. The whole point of this piece was to share something that literally makes me whistle to myself with joy every time I use it. I hope you find some time to take care of your burnout. It’s no joke, and I’ve suffered from it quite a bit in the past three years myself. <3

                                                                                                        I know it’s easy to look at everything as “this is just like X but not quite the way I like it” and I don’t blame you for having that reaction (like many here). AutoJump is to me the epitome of simple, delightful software that does something very simple in a humble way. I wish I had spent more time extolling the virtues of the simple weighted list of directories AutoJump stores in a text file and that ridiculously simple Bash implementation.

The focus on characters saved was a last-minute addition to quantify the claim in the title, which I still think will be beneficial to anyone who has even remote frustrations with using cd often and suspects there may be a better way.

                                                                                                      2. 6

                                                                                                        If only there was a way to optimize crank posting. So many keystrokes to complain!

                                                                                                      3. 2

The parent tool is probably overkill, but a simple zsh function to jump to marked projects with tab completion is pretty awesome to have.

                                                                                                        alias j="jump "
                                                                                                        export MARKPATH=$HOME/.marks
                                                                                                        function jump {
                                                                                                        cd -P "$MARKPATH/$1" 2>/dev/null || echo "No such mark: $1"
                                                                                                        }
                                                                                                        
                                                                                                        function mark {
                                                                                                        echo "mark name_of_mark"
                                                                                                        mkdir -p "$MARKPATH"; ln -s "$(pwd)" "$MARKPATH/$1"
                                                                                                        }
                                                                                                        
                                                                                                        function unmark {
                                                                                                        rm -i "$MARKPATH/$1"
                                                                                                        }
                                                                                                        
                                                                                                        #if you need it on another os.
                                                                                                        #function marks {
                                                                                                        #ls -l "$MARKPATH" | sed 's/  / /g' | cut -d' ' -f9- | sed 's/ -/\t-/g' && echo
                                                                                                        #}
                                                                                                        
                                                                                                        # fix for the above function for osx.
                                                                                                        function marks {
                                                                                                        \ls -l "$MARKPATH" | tail -n +2 | sed 's/  / /g' | cut -d' ' -f9- | awk -F ' -> ' '{printf "%-10s -> %s\n", $1, $2}'
                                                                                                        }
                                                                                                        
                                                                                                        function _completemarks {
                                                                                                        reply=($(ls $MARKPATH))
                                                                                                        }
                                                                                                        
                                                                                                        compctl -K _completemarks jump
                                                                                                        compctl -K _completemarks unmark
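For anyone wondering what that looks like in practice, a quick session (the paths and mark name are made up):

    $ cd ~/src/myproject
    $ mark proj        # creates ~/.marks/proj -> ~/src/myproject
    $ cd /tmp
    $ j proj           # and you're back in ~/src/myproject
    $ marks
    proj       -> /home/you/src/myproject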
                                                                                                        
                                                                                                        1. 1

I’ve tried this, but I keep making shortcuts and then forgetting about them, because I never train myself well enough for them to become muscle memory.

I think I’ll just stick to ‘cd’ and extensive use of ctrl-r (preferably with fzf).
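For anyone curious, wiring up fzf’s ctrl-r is one line in your shell config; the exact path depends on how you installed fzf (with the stock git install the bindings live under ~/.fzf):

    # ~/.zshrc: enable fzf's ctrl-r history search (path varies by install method)
    source ~/.fzf/shell/key-bindings.zsh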

                                                                                                          1. 1

And then you go to a workmate’s computer, or su/sudo/ssh somewhere, and it’s unusable :)

                                                                                                            1. 1

Well, this is one of the most useful shortcuts in my arsenal. Type j <tab> or jump <tab> and it completes all the marked directories. If you get past the initial forgetting-to-use-it curve, it’s amazing and simple (just a folder in your home dir with a bunch of symlinks, and a few helpers to create them).

                                                                                                        1. 4

                                                                                                          Good article. I really like how even handed it is. Sometimes folks who write bits like this tend to be super polarized in their opinion and represent things as black and white, whereas the world is almost always painted in shades of gray.

                                                                                                          1. 2

                                                                                                            Plus the author is hilarious. Pretty much the whole article needs to be wrapped in a sarcasm tag.

                                                                                                          1. 5

                                                                                                            There really needs to be a federated github.

                                                                                                            1. 46

Like… git?

                                                                                                              1. 21

                                                                                                                So github but without the hub. May be on to something.

                                                                                                                1. 7

                                                                                                                  Github is one of my favorite stories when I talk about how decentralized systems centralize.

                                                                                                                  1. 7

But did GitHub really centralize something decentralized? Git, as a VCS, is still decentralized; nearly everyone who seriously uses it has a git client on their computer and a local repository for their projects. That part is still massively decentralized.

GitHub, as a code-sharing platform that allows issues to be raised and discussed, patches/pull requests to be submitted, etc., didn’t previously exist in a decentralized manner. There seems to have always been some central point of reference, be it a website or just a mailing list. It’s not as if whole projects were just based around cc’ing email to one another all the time. How would new people have gotten involved if that were the case?

The only thing I could see as centralising is the relative number of projects hosted on GitHub, but that isn’t really a system which can properly be described as “decentralized” or “centralized”…

                                                                                                                    1. 4

It’s the degree to which people are dependent on the value-adds that GitHub provides beyond git. It’s like a store having a POS that relies on communication with a central server. Sure, they can keep records on paper and do sales, but it’s not their normal course, so they don’t. This comment on HN sums it up: https://news.ycombinator.com/item?id=16124575

                                                                                                                    2. 1

                                                                                                                      Got any other examples?

                                                                                                                      1. 3

Email would be a prominent one. Most people (and I can’t say I am innocent) use Gmail, Hotmail, Yahoo Mail, etc. I believe there is some general law that describes this trend in systems, which can then be applied to the analysis of different topics, for example matter gathering around other matter in physics, or money accumulating around organizations that already have more money.

On the other side you have decentralized systems which never really centralized significantly, for whatever reason, such as IRC, but which saw a decrease in users over time, which I also find to be an interesting trend.

                                                                                                                        1. 4

Many businesses run their own email server, and I don’t have to sign up to Gmail to send a Gmail user an email, but I do have to sign up to GitHub.

                                                                                                                          1. 1

A tendency towards centralisation doesn’t mean that no smaller email servers exist; I’m sorry if you misunderstood me there. On the other hand, I have heard of quite a few examples where businesses just use Gmail with a custom domain, so there’s that.

And it’s true that you don’t have to be on Gmail to send an email to a Hotmail server, for example. But most of the time, if a normal person sets up their own mail server, all the major mail providers automatically view this new host as suspicious and potentially harmful, and are thus more likely to route normal messages to spam. This wouldn’t be so common if the percentage distribution of mail servers weren’t so centralised.

                                                                                                                        2. 1

                                                                                                                          Did a talk using them. This cuts to the chase: https://www.youtube.com/watch?v=MgbmGQVa4wc#t=11m35s

                                                                                                                    3. 1

                                                                                                                      Git has a web interface?

                                                                                                                      1. 7

… federation is about data/communication between servers, but seeing as you asked, yes it does: https://manpages.debian.org/stretch/git-man/gitweb.1.en.html

                                                                                                                        1. 10

                                                                                                                          To be fair, whjms did say “a federated github”. The main feature of GitHub is its web interface.

                                                                                                                          1. 2

                                                                                                                            Right, and there are literally dozens of git web interfaces. You can “federate” git and use whichever web ui you prefer.

                                                                                                                            1. 12

                                                                                                                              But you then miss out on issue tracking, PR tracking, stats, etc. I agree that Git itself provides a decentralized version control system. That’s the whole point. But a federated software development platform is not the same thing. I would personally be very interested to see a federated or otherwise decentralized issue tracking, PR tracking, etc platform.

EDIT: I should point out that any existing system on par with Gitea, Gogs, GitLab, etc. could add ActivityPub support and instantly solve this problem.
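To sketch what that might look like (the host, actor, and object shapes here are all hypothetical; it’s just the ActivityPub client-to-server flow applied to issues), a federating forge could accept a new issue as a Create activity POSTed to an outbox, roughly:

    # hypothetical endpoint and payload; auth omitted, a real server would demand it
    curl -X POST https://forge.example/users/alice/outbox \
      -H 'Content-Type: application/activity+json' \
      -d '{
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": "https://forge.example/users/alice",
        "object": {
          "type": "Note",
          "content": "Issue: build fails on ARM",
          "context": "https://forge.example/repos/project/issues"
        }
      }'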

                                                                                                                              1. 4

                                                                                                                                Doesn’t give you access to all the issues, PRs and comments though.

                                                                                                                                1. 4

                                                                                                                                  git-appraise exists. Still waiting for the equivalent for issues to come along.

                                                                                                                                  https://github.com/google/git-appraise
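For the curious, the workflow (going from its README as best I recall, so double-check the exact flags) keeps everything in the repo, since reviews are stored in git-notes:

    git appraise request                  # ask for a review of the current branch
    git appraise comment -m "Nice work"   # comment on the review
    git appraise list                     # show open reviews
    git appraise push                     # notes aren't pushed by default; sync them explicitly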

                                                                                                                                  1. 4

Huh, git-appraise is pretty cool.

I was going to suggest some kind of ActivityPub/OStatus system for comments, a bit like PeerTube does to manage comments. But a comment-and-issue system contained within the history of the project would be really interesting. Though it would make git repos take up a lot more space for certain projects, no?

                                                                                                                                    1. 3

I’d assume that those could potentially be compressed, but yes, it’s definitely not ideal. https://www.fossil-scm.org/index.html/doc/tip/www/index.wiki

^^^^ Unless I’m mistaken, Fossil also tracks that kind of stuff internally. I really like the idea that issues, PRs, and documentation could live in the same place, mostly on account of being able to “go back in time” and see, for a given version, what issues were open. Sounds useful.

                                                                                                                                  2. 3

                                                                                                                                    BugsEverywhere (https://gitlab.com/bugseverywhere/bugseverywhere), git-issues (https://github.com/duplys/git-issues), sit (https://github.com/sit-it/sit) all embed issues directly in the git repo.

                                                                                                                                    Don’t blame the tool because you chose a service that relies on vendor lock-in.
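The trick these tools share is just “issues are files under version control”; a toy version needs nothing beyond git itself (the directory layout below is my own made-up convention):

    # track an issue as a versioned file inside the repo
    mkdir -p .issues
    printf 'status: open\n\nCrash when the input file is empty.\n' \
      > .issues/0001-empty-input-crash.md
    git add .issues
    git commit -m "issue 0001: crash on empty input"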

                                                                                                                                    1. 4

                                                                                                                                      If I recall correctly the problem here is that to create an issue you need write access to the git repo.

Having issues separated out of the repositories can make things easier, and if the web interface can federate between services, that’s even better. Similar to Mastodon.

                                                                                                                                      1. 1

There’s nothing to say that a web interface couldn’t provide the ability for others to submit issues.

                                                                                                                                  3. 3

                                                                                                                                    Right, and there are literally dozens of git web interfaces.

Literally dozens of git web interfaces that the majority of developers either don’t know about or don’t care about. Developers do use GitHub, for various reasons. When voronoipotato and LeoLamda say “a federated Github”, they mean the alternative needs to look like, or work with, GitHub well enough that those who use GitHub (while ignoring the other stuff you mentioned) will switch over to it. I’m not sure what that would take, or whether it’s even legal as far as copying the appearance goes. It does sound like a more practical goal than telling those web developers that there are piles of git web interfaces out there.

                                                                                                                                    1. 1

I’m going to respond to two points in reverse order, deliberately:

                                                                                                                                      or care about.

                                                                                                                                      Well, clearly the person I replied to does care about a git web interface that isn’t reliant on GitHub.com. Otherwise, why would they have replied?

                                                                                                                                      Literally dozens of git web interfaces the majority of developers either don’t know [about]

Given the above: the official git project’s wiki has a whole page dedicated to tools that work with git, including web interfaces. That wiki page is result 5 on Google and result 3 on DuckDuckGo when searching for “git web interface”. If a developer wants a git web interface and can’t find that information for themselves, nothing you, or I, or a magic genie does will help them.

                                                                                                                              2. 5

                                                                                                                                It’s not built-in, but Gogs and Gitea are both pretty nice.

                                                                                                                                1. 2

                                                                                                                                  Hard agree. I run a personal Gogs site and it’s awesome.

                                                                                                                            2. 7

                                                                                                                              It would be enough if people stopped putting all their stuff on github.

                                                                                                                              1. 8

It won’t happen for a while, due to network effects. They made it easy to get the benefits of a DVCS without directly dealing with one. Being a web app, it can be used on any device. Being free, it naturally pulls people in. There are also lots of write-ups on using it or solving problems that are a Google search away, thanks to its popularity. Any of these can be copied and improved on. The remaining problem is the huge amount of code already there.

                                                                                                                                The next solution won’t be able to copy that since it’s a rare event in general. Like SourceForge and Github did, it will have to create a compelling reason for massive amounts of people to move their code into it while intentionally sacrificing the benefits of their code being on Github specifically. I can’t begin to guess what that would take. I think those wanting no dependency on Github or alternatives will be targeting a niche market. It can still be a good one, though.

                                                                                                                                1. 2

                                                                                                                                  I hear the ‘network effects’ story every time, but we are not mindless automatons who have to use github because other people are doing it. I’m hosting the code for my open source projects on a self-hosted gitlab server and i’m getting contributions from other people without problems. Maybe it would be more if the code was on github, but being popular isn’t the most important thing for everyone.

                                                                                                                                  1. 1

Just look at SourceForge: if everyone had had to set up their own CVS/SVN server back in the day, do you think all those projects would have made it onto the internet?

Now we have a similar situation with git: if GitHub/Bitbucket/etc. didn’t exist, I’m sure most people would have stuck with SourceForge (or not bothered, if they had to self-host).

You can also look at Google Code to see the problem with not reaching critical mass (IMHO). There were some high-profile projects there, but then I’m sure execs asked: why are we bothering to host 1% (a guess) of what is on GitHub?

                                                                                                                                    1. 1

‘Network effects’ doesn’t mean you’re mindless automatons. It means people are likely to jump on bandwagons. It also means that making it easy to connect people, especially by removing friction, makes more of them do stuff together. The massive success of GitHub vs other interfaces argues my point for me.

                                                                                                                                      “Maybe it would be more if the code was on github”

That’s what I was telling you, rephrased. Also, expand that to the average project: some will get contributions, some won’t, etc.

                                                                                                                                  2. 4

                                                                                                                                    Heck even I won’t move off of it until there is a superior alternative, sorry.

                                                                                                                                  3. 3

I thought about a project along these lines a while ago. Something along the lines of cgit, which could offer a more or less clean and consistent UI and an easy-to-set-up backend, making federation viable in the first place. Ideally, it wouldn’t even need accounts; instead Email+GPG could be used, for example by including an external mailing list in the repo, with a few additional markup features such as internal linking and code highlighting. This “web app” would then effectively serve only as an aggregator of external information onto one site, making it even easier to federate the entire structure, since the data wouldn’t necessarily be bound to one server! If one were to be really evil, one could even use GitHub as a backend…

I thought about all of this for a while, but the big downsides from my perspective were the dependence on servers staying up (which is sadly something we have come to accept with tools such as NPM and Go’s packaging), the fact that asynchronous updates could mess things up unless there were a central reference repo per project, and that the social element in social coding could be hard to achieve. Think of stars, followings, likes, fork overviews, etc.; these are all features that help projects and devs display their reputation, for better or for worse.

Personally, I’m a bit sceptical that something along these lines would manage to be truly attractive, at least for now.

                                                                                                                                    1. 3

                                                                                                                                      Lacks a web interface, but there are efforts to use ipfs for a storage backend.

                                                                                                                                      https://github.com/cryptix/git-remote-ipfs

                                                                                                                                      1. 3

I think there have been proposals for GitLab and Gitea/Gogs to implement federated pull requests. I would certainly love it, since I put most of my projects in my personal Gitea instance anyway; GitHub is merely a code mirror where people happen to be able to file issues.

                                                                                                                                        1. 3

I think this would honestly get the job done: federated pull requests, federated issue discussion.

                                                                                                                                          1. 1

I’m personally a bit torn on whether a federated GitHub-alike should handle it like a fork, i.e. if somebody opens an issue they do it on their instance, you get a small notification, and you can follow the issue in your own repo.

Or whether it should merely allow people to use my instance to file issues directly there, like with OAuth or OpenID Connect. Probably something we’ll have to figure out in the process.

                                                                                                                                            1. 2

Just make it work like GNU social/Mastodon: username@server.com posted an issue on your repo. You can block servers, have a whitelist, or let anyone in; the world is your oyster.

                                                                                                                                          2. 1

                                                                                                                                            Would be nice if I could use my gitlab.com account to make MRs on other gitlab servers.

                                                                                                                                          3. 1

I always thought it would be neat to try to implement this via Upspin, since it already provides identity, permissions, and a global (secure) namespace. Basically, my handwavy thoughts are: design what your “federated GitHub” repo looks like in terms of files. This becomes the API, or contract, for federation. Maybe certain files are not really files but essentially RPCs, implemented by a custom Upspin server. You have an issues directory, your actual git directory, and whatever else you feel is important for managing a software project on git, all represented in a file tree. Then create a local stateless web interface that anyone can fire up (assuming you have an Upspin user), and you can browse the global Upspin filesystem, interact with repos, make pull requests, and file issues.

I was thinking that centralized versions of this could exist, like GitHub, for usability for most users. In that case users’ private keys would be managed by the GitHub-like service itself, as a base case to achieve equal usability for the masses. The main difference is that the GitHub-like service would export all the important information via Upspin for others to interact with through their own clients.

                                                                                                                                          1. 33

                                                                                                                                            Seeing the WTFPL used in practice really annoys me, and is almost insulting to people who need to care about this stuff.

Some people do this stuff as a statement against licenses, which is… laudable? A political statement, anyway. But a lot of people choose this kind of license because it’s “clever”, and all you’ve done is make it harder for the software to be used in environments where people are more serious.

The “Do No Evil” clause for JSLint was a combination of ineffective political speech (what does this even mean?) and bad licensing (basically unknowable). If you’re serious about making a political statement, here’s a list of banned activities you can add to your license, if you’re fine with throwing away the OSI seal of approval:

                                                                                                                                            This software cannot be used in support of the following activities:

                                                                                                                                            • weapons research and manufacturing
                                                                                                                                            • advertising
• facial recognition (note: authoritarian regimes + advertising are basically the main uses of this one)

This is a trite example, but it’s an actual position that goes beyond just saying “don’t be X”.

                                                                                                                                            1. 18

                                                                                                                                              The WTFPL can also expose the developer to liability. See Dan Berlin’s comment about the WTFPL on HN. He is a lawyer.

                                                                                                                                              1. 4

WTFPL is bad and really nobody should use it. As for the list: I would think it’s very, very hard to enforce what someone uses your software for; there might literally be zero evidence. Also, what if it’s used by a small militia group against an evil authoritarian government? Should they be restricted from weapons research and manufacturing? I would be very careful about adding that first rule, as it might only bind those with a conscience.

                                                                                                                                                1. 4

                                                                                                                                                  WTFPL is bad […] it’s very very hard to enforce what someone uses your software for

                                                                                                                                                  Huh? WTFPL is all about not enforcing anything. Like, it’s literally:

0. You just DO WHAT THE FUCK YOU WANT TO.

                                                                                                                                                  The

                                                                                                                                                  “Do No Evil” clause for jslint

                                                                                                                                                  that rtpg mentioned is not part of WTFPL.

                                                                                                                                                  1. 1

The first part was about how WTFPL is bad and nobody should use it; the second was a response to the list of banned activities you could add to your license, which I thought was also bad. WTFPL is bad because it is purposely broad, to the point where, in our legal system, you might be better off with no license at all. But I’m not a lawyer and this should not be construed as legal advice.

                                                                                                                                                  2. 1

                                                                                                                                                    Thanks for the feedback. This was more meant as an example of a concrete political stance that one could take.

I think it’s hard to overstate the power of legal threats, at least in the US. If the GPL had included a weapons-manufacturing ban, you can bet that Lockheed Martin and company would not be using any of that stuff. Imagine simply not having access to some great (well, mostly great) software because of an ethical clause.

Personally I might be more on the pacifist side of things, but it’s hard to deny certain safety benefits of a technologically advanced military. Still, I bet many of us would not want any part in something like biological weapons development.

                                                                                                                                                    1. 1

Lockheed Martin could just use a high-performance separation kernel to isolate the GPL code. Many companies advertise this, usually with the defense sector in their solutions list. Here’s a writeup with details from one.

                                                                                                                                                  3. 1

And user tracking. It might be easier to just say: not for use by Fortune 500 companies, any company based in Ireland, Delaware, Singapore, Hong Kong, or the Bahamas, or any government of a country with “People’s” or “Democratic” in the title.