1.  

    This is a great post. It posits an idea “Let’s treat Bash development as software engineering, with all the discipline of any other programming language”. It presents the meat of this idea, and that’s all.

    1.  

      Definitely. I hope the Oilshell guy sees this and thinks about this as part of his improvements to shell.

      1.  

        I knew about (and regularly use) shellcheck.net. It is a wonderful tool to learn about bash and fix potential bugs. Oilshell integrating (at least some form of) shellcheck’s linting capability and shfmt (automatic formatter) would be great UX wins.

        1.  

          FWIW I wrote a little bit about ShellCheck in this comment: https://www.reddit.com/r/oilshell/comments/7fjl5t/any_idea_on_the_completeness_of_shellchecks_parser/

          Skip down to where I say Oil was “negatively inspired” by ShellCheck. In summary, I’ve used ShellCheck, but I don’t like how many false positives it gives. Oil’s approach is to instead design a language where the obvious thing is not wrong, and where a nicely written program doesn’t produce lint errors on every other line.

          Also, Oil should be more statically analyzable, so diagnostics should be more accurate. For example, statically resolvable imports would make a lot of errors more accurate.

          Although keep in mind this is the goal, not something I’ve done yet. I still need to reach feature/performance parity with the bash clone before really tackling the Oil language. As for shfmt, I definitely want to have an Oil language formatter, but that’s also future work.

          1.  

            “But we want your cake now!!” :) Seriously though. I really appreciate your work with OilShell. I hope your experiment is very successful, and once it stabilizes a bit more I hope to use it as a replacement for Bash. I’m excited about your work in fixing up our vegetables, so we can have nicer cake. And I’ve probably run that analogy into the ground now, so I’ll shut up :)

    1. 2

      to this day I’m surprised that Postgres cannot be upgraded without downtime. I guess there’s maintenance windows, but it feels like so many DBs out there have uptime requirements

      EDIT: don’t want to be too whiny about this, Postgres is cool and has a lot of stuff. I guess it’s mostly the webdev in me thinking “well yeah of course I need 100% uptime” that made me expect DBs to handle this case. But I guess the project predates these sorts of expectations

      1. 1

        I don’t disagree… but just to be clear:

        Minor versions (i.e. bug fixes, e.g. 9.4.6 -> 9.4.7) don’t really need any downtime: you just replace the binaries and restart.

        Major versions (9.4 -> 9.5) do need a dump/restore of the database, which is annoying. You can avoid this almost completely now with logical replication, which is included with PG 10 (before that version it was available as a module back to PG 9.4, I think).
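
        To make the two cases concrete, here’s a rough sketch of each upgrade path (Debian-flavoured commands; the package names, versions and port are illustrative assumptions, not from the comment above):

            # Minor upgrade (e.g. 9.4.6 -> 9.4.7): swap the binaries, restart, done.
            apt-get install --only-upgrade postgresql-9.4
            systemctl restart postgresql

            # Major upgrade (e.g. 9.4 -> 9.5) the dump/restore way:
            pg_dumpall -U postgres > all.sql      # dump every database from the old cluster
            # ...install 9.5, initdb a fresh cluster on another port...
            psql -U postgres -p 5433 -f all.sql   # restore into the new cluster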

        1. 2

          Ah thanks for the information, super helpful! Previously, when reading up on upgrading PG I got the impression I couldn’t do this on major versions.

          1. 1

            see: https://www.2ndquadrant.com/en/resources/pglogical/ it’s one of the use-cases.

          2. 2

            Major versions (9.4 -> 9.5) do need a dump/restore of the database

            pg_upgrade has been available and part of the official codebase since 9.0 (7ish years). It’s still not perfect, but it’s been irreplaceable for me when migrating large (45+TB) databases.
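
            For anyone who hasn’t used it, this is roughly the shape of a pg_upgrade run (directories and versions here are made up for illustration; both clusters need to be stopped first):

                # in-place major version upgrade: old 9.4 cluster -> freshly initdb'd 9.5 cluster
                pg_upgrade \
                  -b /usr/lib/postgresql/9.4/bin  -B /usr/lib/postgresql/9.5/bin \
                  -d /var/lib/postgresql/9.4/main -D /var/lib/postgresql/9.5/main \
                  --link   # hard-link data files instead of copying; much faster on huge clusters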

            1. 1

              True, I had forgotten. I’ve been using PG since the 8.x days. pg_upgrade didn’t work for me from 9.0 -> 9.1 (or thereabouts, def. at the beginning of pg_upgrade’s existence) and I haven’t ever tried it again. I should probably try it again, see if it works better for us!

            2.  

              There have also been numerous logical replication tools (Slony for example) that allowed upgrades without downtime since at least around 8.0, but probably earlier.

          1. 6

            Woo! Excited to see some Nix stuff being mentioned.

            I had been running NixOS on my work Macbook for a while, but due to issues with a mix of HiDPI & non HiDPI displays on Linux - I went back to macOS.

            With a newfound love though, I was delighted to find out about nix-darwin - LnL has always been really friendly and helpful on #nixos when I had questions about achieving something with it.

            As such, here’s my collection of expressions that declare the system configuration I use across my macOS machines: https://github.com/cmacrae/.nixpkgs

            I’d certainly consider myself an absolute novice, but as you can see - even with little experience you can cobble something fairly comprehensive together.

            My next plans are formed around my home infrastructure. Currently, I have a little rack - with one shelf occupied by a little Joyent Triton cluster made up of 3 intel NUCs. On top of Triton I run a number of home media services which are in lx-branded OS containers. Right now, I’ve formed a workflow around Packer, Ansible, and Terraform for creating images and deploying services.

            I’m planning to introduce NixOS as a base lx-branded OS image, which you could then “inject” Nix system expressions into for declarative, reproducible images for varying deployments and services.

            1. 2

              I find nix-darwin and your nixpkgs for macOS really interesting. I want to try it out, but am pretty much non-nix smart.

              I’ve just ordered a new MacBook, and will need to move everything over.
              Is there a way to take your existing configuration and put it into nix-darwin?

              Is there an idiot’s guide to getting started and making this all work somewhere?

              How do I know the name of the variables I can set?

              Keyboard

              system.keyboard = { enableKeyMapping = true; remapCapsLockToControl = true; };

              this is awesome, but how could I have figured it out except seeing it in your config?

              I’m not exactly an idiot, but around nix, I definitely am :)

              1.  

                I’m afraid there’s no good answer to that at the moment; I should probably look into how NixOS builds the configuration.nix manpage. Currently you’ll need to use the darwin-option command or look at the sources.
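
                For the question above, that would look something like this (assuming darwin-option behaves like NixOS’s nixos-option; the option path is the one quoted a few comments up):

                    # query a single option: prints its current value, default and description,
                    # which is how you discover names like system.keyboard.remapCapsLockToControl
                    darwin-option system.keyboard.remapCapsLockToControl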

            1. 31

              In the Hacker News thread about the new Go package manager people were angry about go, since the npm package manager was obviously superior. I can see the quality of that now.

              There’s another Lobster thread right now about how distributions like Debian are obsolete. The idea being that people use stuff like npm now, instead of apt, because apt can’t keep up with modern software development.

              Kubernetes’ official installer is some curl | sudo bash thing instead of providing any kind of package.

              In the meantime I will keep using only FreeBSD/OpenBSD/RHEL packages and avoid all these nightmares. Sometimes the old ways are the right ways.

              1. 7

                “In the Hacker News thread about the new Go package manager people were angry about go, since the npm package manager was obviously superior. I can see the quality of that now.”

                I think this misses the point. The relevant claim was that npm has a good general approach to packaging, not that npm is perfectly written. You can be solving the right problem, but writing terribly buggy code, and you can write bulletproof code that solves the wrong problem.

                1. 5

                  npm has a good general approach to packaging

                  The thing is, their general approach isn’t good.

                  They only relatively recently decided locking down versions is the Correct Thing to Do. They then screwed this up more than once.

                  They only relatively recently decided that having a flattened module structure was a good idea (because presumably they never tested in production settings on Windows!).

                  They decided that letting people do weird things with their package registry is the Correct Thing to Do.

                  They took on VC funding without actually having a clear business plan (which is probably going to end in tears later, for the whole node community).

                  On and on and on…

                  1. 2

                    Go and the soon-to-be-official dep dependency management tool manage dependencies just fine.

                    The Go language has several compilers available. Traditional Linux distro packages together with gcc-go is also an acceptable solution.

                    1. 4

                      It seems the soon-to-be-official dep tool is going to be replaced by another approach (currently named vgo).

                    2. 1

                      I believe there’s a high correlation between the quality of the software and the quality of the solution. Others might disagree, but that’s been pretty accurate in my experience. I can’t say why, but I suspect it has to do with the same level of care put into both the implementation and in understanding the problem in the first place. I cannot prove any of this, this is just my heuristic.

                      1. 8

                        You’re not even responding to their argument.

                        1. 2

                          There’s npm registry/ecosystem and then there’s the npm cli tool. The npm registry/ecosystem can be used with other clients than the npm cli client and when discussing npm in general people usually refer to the ecosystem rather than the specific implementation of the npm cli client.

                          I think npm is good but I’m also skeptical about the npm cli tool. One doesn’t exclude the other. Good thing there’s yarn.

                          1. 1

                            I think you’re probably right that there is a correlation. But it would have to be an extremely strong correlation to justify what you’re saying.

                            In addition, NPM isn’t the only package manager built on similar principles. Cargo takes heavy inspiration from NPM, and I haven’t heard about it having a history of show-stopping bugs. Perhaps I’ve missed the news.

                        2. 8

                          The thing to keep in mind is that all of these were (hopefully) done with best intentions. Pretty much all of these had a specific use case… there’s outrage, sure… but they all seem to have a reason for their trade offs.

                          • People are angry about a proposed go package manager because it throws out a ton of the work that’s been done by the community over the past year… even though it’s fairly well thought out and aims to solve a lot of problems. It’s no secret that package management in go is lacking at best.
                          • Distributions like Debian are outdated, at least for software dev, but their advantage is that they generally provide a rock solid base to build off of. I don’t want to have to use a version of a python library from years ago because it’s the only version provided by the operating system.
                          • While I don’t trust curl | sh it is convenient… and it’s hard to argue that point. Providing packages should be better, but then you have to deal with bug reports where people didn’t install the package repositories correctly… and differences in builds between distros… and… and…

                          It’s easy to look at the entire ecosystem and say “everything is terrible” but when you sit back, we’re still at a pretty good place… there are plenty of good, solid options for development and we’re moving (however slowly) towards safer, more efficient build/dev environments.

                          But maybe I’m just telling myself all this so I don’t go crazy… jury’s still out on that.

                          1. 4

                            Distributions like Debian are outdated, at least for software dev,

                            That is the sentiment that seems to drive the programming-language-specific package managers. I think what is driving this is that software often has way too many unnecessary dependencies, which makes setting up the environment to build it hard or time-consuming.

                            I don’t want to have to use a version of a python library from years ago because it’s the only version provided by the operating system.

                            Often it is possible to install libraries at another location and redirect your software to use that though.

                            It’s easy to look at the entire ecosystem and say “everything is terrible” but when you sit back, we’re still at a pretty good place…

                            I’m not so sure. I foresee an environment where actually building software is a lost art. Where people directly edit interpreted files in place inside a virtual machine image/flatpak/whatever because they no longer know how to build the software and set up the environment it needs. And then some language specific package manager for distributing these images.

                            I’m growing more disillusioned the more I read Hacker News and lobste.rs… Help me be happy. :)

                            1. 1

                              So like Squeak/Smalltalk images then? What’s old is new again, I suppose.

                              http://squeak.org

                              1. 1

                                I’m not so sure. I foresee an environment where actually building software is a lost art. Where people directly edit interpreted files in place inside a virtual machine image/flatpak/whatever because they no longer know how to build the software and set up the environment it needs. And then some language specific package manager for distributing these images.

                                You could say the same thing about Docker. I think package managers and tools like Docker are a net win for the community. They make it faster for experienced practitioners to set up environments and they make it easier for inexperienced ones as well. Sure, there is a lot you’ve gotta learn to use either responsibly. But I remember having to build Redis every time I needed it because it wasn’t in Ubuntu’s official repositories when I started using it. And while I certainly appreciate that experience, I love that I can just install it with apt now.

                              2. 2

                                I don’t want to have to use a version of a python library from years ago because it’s the only version provided by the operating system.

                                Speaking of Python specifically, it’s not a big problem there because everyone is expected to work within virtual environments and nobody runs pip install with sudo. And when libraries require building something binary, people do rely on system-provided stable toolchains (compilers and -dev packages for C libraries). And it all kinda works :-)
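
                                For readers who haven’t seen that workflow, a minimal sketch of what “work within virtual environments” means in practice (the package name is an arbitrary example):

                                    python3 -m venv .venv            # create a project-local environment
                                    . .venv/bin/activate             # subsequent python/pip calls use this env
                                    pip install requests             # installs into .venv, no sudo involved
                                    pip freeze > requirements.txt    # pin what you got, for reproducible installs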

                                1. 4

                                  I think virtual environments are a best practice that unfortunately isn’t followed everywhere. You definitely shouldn’t run pip install with sudo, but I know of a number of companies where part of their deployment is to build a VM image and sudo pip install the dependencies. However it’s the same thing with npm. In theory you should just run as a normal user and have everything installed to node_modules but this clearly isn’t the case, as shown by this issue.

                                  1. 5

                                    nobody runs pip install with sudo

                                    I’m pretty sure there are quite a few devs doing just that.

                                    1. 2

                                      Sure, I didn’t count :-) The important point is they have a viable option not to.

                                    2. 2

                                      npm works locally by default, without even doing anything to make a virtual environment. Bundler, Cargo, Stack etc. are similar.

                                      People just do sudo because Reasons™ :(
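
                                      Concretely, the local-by-default behaviour being described (the package name is just an example):

                                          npm install lodash            # lands in ./node_modules of the project, no sudo needed
                                          sudo npm install -g lodash    # the global install people reach for "because Reasons"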

                                  2. 4

                                    It’s worth noting that many of the “curl | bash” installers actually add a package repository and then install the software package. They contain some glue code like automatic OS/distribution detection.
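
                                    For example, stripped to its essentials, such an installer often amounts to something like this (the vendor URL and package name are placeholders, not taken from any real installer):

                                        . /etc/os-release                                     # glue: detect distro and release
                                        curl -fsSL https://pkgs.example.com/key.gpg | apt-key add -
                                        echo "deb https://pkgs.example.com/apt ${VERSION_CODENAME} main" \
                                            > /etc/apt/sources.list.d/example.list            # add the vendor's repository
                                        apt-get update && apt-get install -y example-package  # then it's a normal package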

                                    1. 2

                                      I’d never known true pain in software development until I tried to make my own .debs and .rpms. Consider that some of these newer packaging systems might have been built because Linux packaging is an ongoing tirefire.

                                      1. 3

                                        With fpm (https://github.com/jordansissel/fpm) it’s not that hard. But yes, using the Debian- or Red Hat-blessed way to package stuff and getting it into the official repos is def. painful.

                                        1. 1

                                          I used the gradle plugins with success in the past, but yeah, writing spec files by hand is something else. I am surprised nobody has invented a more user friendly DSL for that yet.

                                          1. 1

                                            A lot of difficulties when doing Debian packages come from policy. For your own packages (not targeted to be uploaded to Debian), it’s far easier to build packages if you don’t follow the rules. I won’t pretend this is as easy as with fpm, but you get some bonus from it (building in a clean chroot, automatic dependencies, service management like the other packages). I describe this in more detail here: https://vincent.bernat.im/en/blog/2016-pragmatic-debian-packaging

                                          2. 2

                                            It sucks that you come away from this thinking that all of these alternatives don’t provide benefits.

                                            I know there’s a huge part of the community that just wants things to work. You don’t write npm for fun, you end up writing stuff like it because you can’t get current tools to work with your workflow.

                                            I totally agree that there’s a lot of messiness in this newer stuff that people in older structures handle well. So…. we can knowledge share and actually make tools on both ends of the spectrum better! Nothing about Kubernetes requires a curl’d installer, after all.

                                          1. 3

                                            This makes me wonder what would happen if you married something like TDD in with your SCM and ticketing system.

                                            You could for instance have a ticket with a test case (i.e. code) attached, and you would then write code and check in against that ticket to close it. Suddenly your commit messages can almost be written for you, and the amount of integration you could (eventually) get around your SCM and your codebase could lead to some great optimizations around all the daily drudgery you have to do with commits, bug tracking, etc.
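
                                            As a rough sketch of the workflow being imagined (the ticket number, patch file name and the “closes #…” convention are purely illustrative):

                                                git checkout -b ticket-123
                                                git am 123-failing-test.patch     # apply the test case attached to the ticket
                                                # ...write code until that test passes...
                                                git commit -am "Handle empty input in parser (closes #123)"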

                                            1. 2

                                              Maybe a dumb question, but in semver what is the point of the third digit? A change is either backwards compatible, or it is not. To me that means only the first two digits do anything useful? What am I missing?

                                              It seems like the openbsd libc is versioned as major.minor for the same reason.

                                              1. 9

                                                Minor version is backwards compatible. Patch level is both forwards and backwards compatible.

                                                1. 2

                                                  Thanks! I somehow didn’t know this for years until I wrote a blog post airing my ignorance.

                                                2. 1

                                                    “PATCH version when you make backwards-compatible bug fixes.” See: https://semver.org

                                                  1. 1

                                                    I still don’t understand what the purpose of the PATCH version is? If minor versions are backwards compatible, what is the point of adding a third version number?

                                                    1. 3

                                                      They want a difference between new functionality (that doesn’t break anything) and a bug fix.

                                                      I.e. if it was only X.Y, then when you add a new function but don’t break anything… do you change Y or do you change X? If you change X, then you are saying “I broke stuff”, so clearly changing X for a new feature is a bad idea. So you change Y, but if you look at just the Y change, you don’t know if it was a bug fix or some new function/feature they added. You have to go read the changelog/release notes, etc. to find out.

                                                      with the 3 levels, you know if a new feature was added or if it was only a bug fix.

                                                      Clearly just X.Y is enough. But the semver people clearly wanted that differentiation: they wanted to be able to tell, by looking only at the version #, whether a new feature was added or not.
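
                                                      Put as a tiny example (invented version numbers), the distinction they wanted to be readable from the number alone:

                                                          1.4.2 -> 1.4.3   # bug fix only: bump PATCH
                                                          1.4.3 -> 1.5.0   # new backwards-compatible feature: bump MINOR, reset PATCH
                                                          1.5.0 -> 2.0.0   # breaking change: bump MAJOR, reset MINOR and PATCH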

                                                      1. 1

                                                        To show that there was any change at all.

                                                        Imagine you don’t use SHA-1s or git; this would show that there was a new release.

                                                        1. 1

                                                          But why can’t you just increment the minor version in that case? a bug fix is also backwards compatible.

                                                          1. 5

                                                            Imagine you have authored a library, and have released two versions of it, 1.2.0 and 1.3.0. You find out there’s a security vulnerability. What do you do?

                                                            You could release 1.4.0 to fix it. But, maybe you haven’t finished what you planned to be in 1.4.0 yet. Maybe that’s acceptable, maybe not.

                                                            Some users using 1.2.0 may want the security fix, but also do not want to upgrade to 1.3.0 yet for various reasons. Maybe they only upgrade so often. Maybe they have another library that requires 1.2.0 explicitly, through poor constraints or for some other reason.

                                                            In this scenario, releasing a 1.2.1 and a 1.3.1, containing the fixes for each release, is an option.
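
                                                            In release-engineering terms that usually means keeping a maintenance branch per minor release and back-porting the fix; a hedged git sketch (branch names, tags and the placeholder commit are invented):

                                                                git checkout -b 1.2.x v1.2.0    # maintenance branch for the old minor release
                                                                git cherry-pick <fix-commit>    # back-port the security fix
                                                                git tag v1.2.1                  # patch release for users staying on 1.2
                                                                git checkout -b 1.3.x v1.3.0
                                                                git cherry-pick <fix-commit>
                                                                git tag v1.3.1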

                                                            1. 2

                                                              It sort of makes sense, but if minor versions were truly backwards compatible I can’t see a reason why you would ever want to hold back. Minor and patch seem to me to be the same concept, just one has a higher risk level.

                                                              1. 4

                                                                Perhaps a better definition is that library minor version changes may expose functionality to end users that you, as the application author, did not intend.

                                                                1. 2

                                                                  I think it’s exactly a risk management decision. More change means more risk, even if it was intended to be benign.

                                                                  1. 2

                                                                    Without the patch version it makes it much harder to plan future versions and the features included in those versions. For example, if I define a milestone saying that 1.4.0 will have new feature X, but I have to put a bug fix release out for 1.3.0, it makes more sense that the bug fix is 1.3.1 rather than 1.4.0 so I can continue to refer to the planned version as 1.4.0 and don’t have to change everything which refers to that version.

                                                          2. 1

                                                            I remember seeing a talk by Rich Hickey where he criticized the use of semantic versioning as fundamentally flawed. I don’t remember his exact arguments, but have semver proponents grappled effectively with them? Should the Go team be wary of adopting semver? Have they considered alternatives?

                                                            1. 3

                                                              I didn’t watch the talk yet, but my understanding of his argument was “never break backwards compatibility.” This is basically the same as new major versions, but instead requiring you to give a new name for a new major version. I don’t inherently disagree, but it doesn’t really seem like some grand deathblow to the idea of semver to me.

                                                              1. 1

                                                                IME, semver itself is fundamentally flawed because humans are the deciders of the new version number and we are bad at it. I don’t know how many times I’ve gotten into a discussion with someone where they didn’t want to increase the major because they thought high majors looked bad. Maybe at some point it can be automated, but I’ve had plenty of minor version updates that were not backwards compatible, same for patch versions. Or, what’s happened to me in Rust multiple times: the minor version of a package is incremented, but the new feature depends on a newer version of the compiler, so it is backwards-breaking in terms of compiling. I like the idea of a versioning scheme that lets you tell the chronology of versions but I’ve found semver to work right up until it doesn’t and it’s always a pain. I advocate pinning all deps in a project.

                                                                1. 2

                                                                  It’s impossible for computers to automate. For one, semver doesn’t define what “breaking” means. For two, the only way that a computer could fully understand if something is breaking or not would be to encode all behavior in the type system. Most languages aren’t equipped to do that.

                                                                  Elm has tools to do at least a minimal kind of check here. Rust has one too, though not as widely used.

                                                                  I advocate pinning all deps in a project.

                                                                  That’s what lockfiles give you, without the downsides of doing it manually.
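
                                                                  For instance, with Cargo (a sketch; it assumes a cargo new enough to support --locked, which makes CI fail loudly instead of silently re-resolving):

                                                                      cargo build            # first build resolves deps and writes Cargo.lock
                                                                      git add Cargo.lock     # commit it: later builds reuse these exact versions and checksums
                                                                      cargo build --locked   # in CI: error out if the lockfile no longer matches Cargo.toml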

                                                        1. 5

                                                          Based on this writing, it seems that we are yet again separating dev from prod. Use ubuntu/debian base for dev, but build special for production.

                                                          I thought one of the main points of Docker was being able to run the same container in production. Seems that’s still not going to happen with Docker either. Dev just has to run long enough to make the next commit, and needs gobs of debug built-in. Prod has to run forever and be secure.

                                                          Seems the only upside you really get with the Docker workflow is similar tooling between dev and production.

                                                          1. 2

                                                            From my experience, the difference from dev/prod is not the biggest issue, as long as you have the same images for testing/staging and production.

                                                            Some teams do not even use Docker images for development and that’s not a big issue as long as you have good CI (at least it’s been a very long time since we’ve had a “works in testing but not in production” problem).

                                                            1. 1

                                                              Use ubuntu/debian base for dev, but build special for production.

                                                              You can use the same images for development/testing, though? You might install a few extra packages into your dev environment (gdb, …) with the same base Dockerfile.
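
                                                              One way to do that, sketched with multi-stage builds (the stage and image names are made up; it assumes a Dockerfile whose dev stage only layers debug tools such as gdb on top of the prod stage):

                                                                  docker build --target prod -t myapp:prod .   # what actually ships
                                                                  docker build --target dev  -t myapp:dev  .   # same base layers, plus gdb/strace/etc.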

                                                              1. 1

                                                                If testing becomes production, I think the goal would be having production and testing IDENTICAL, or as identical as you can make them. Otherwise what’s the point?

                                                              2. 1

                                                                You’re right, you should strive to keep containers immutable. Having two Docker images for the same code defeats the benefit of having a CI pipeline with promotion across environments. The article doesn’t shed much light on what’s best practice when it comes to packaging applications for dev/prod. But the author seems to suggest that there’re better ways to debug containers than attaching to it. I suspect he’s referring to health checks for readiness & liveness and a proper logging library to record logs. Also, it’s generally slower and more tedious developing an application within a Docker container. Usually, it’s much easier to work locally on the code and then let CI package the immutable container. The Docker image is akin to a jar or a deb file. You don’t build those differently for dev or prod.

                                                                1. 1

                                                                  I would think monitoring, metrics and logging would be the way to debug production in most cases. In general you just want the starting inputs and the output errors, so you can replicate the issue in dev to fix. If you can’t replicate it, then you have to break out dtrace and friends and get serious, which is super annoying.

                                                                  Well, you might build your jar or deb file differently - stripping out debugging symbols in production, for instance, is pretty common actually.

                                                                  I agree developing INSIDE a docker container is way annoying. I think the dev answer for Docker is to run all the extra crap your code depends on in development. I.e. my code needs Redis, a PG instance, etc. to work right, so I’d run Redis and PG in Docker for dev, but still do the main code locally, if possible. Harder to do if you are writing *nix apps on Windows for instance, but :)

                                                              1. 7

                                                                Neat idea! One question though: How do you handle renewals? In my experience, postgresql (9.x at least) can only re-read the certificate upon a server restart, not upon mere reloads. Therefore, all connections are interrupted when the certificate is changed. With letsencrypt, this will happen more frequently - did you find a way around this?

                                                                1. 5

                                                                  If you put nginx in front as a reverse TCP proxy, Postgres won’t need to know about TLS at all and nginx already has fancy reload capability.

                                                                  1. 3

                                                                    I was thinking about that too - and it made me also wonder whether using OpenResty along with a judicious combination of stream-lua-nginx-module and lua-resty-letsencrypt might let you do the whole thing in nginx, including automatic AOT cert updates as well as fancy reloads, without postgres needing to know anything about it at all (even if some tweaking of resty-letsencrypt might be needed).

                                                                    1. 1

                                                                        That’s funny, I was just talking to someone who was having problems with “reload” not picking up certificates in nginx. Can you confirm nginx doesn’t require a restart?

                                                                      1. 1

                                                                        Hmm, I wonder if they’re not sending the SIGHUP to the right process. It does work when configured correctly.
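
                                                                          For reference, the reload has to reach the nginx master process; either of these should do it (the pid file path varies by distro):

                                                                              nginx -s reload                          # wrapper that signals the master for you
                                                                              kill -HUP "$(cat /var/run/nginx.pid)"    # or send SIGHUP to the master pid directly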

                                                                    2. 2

                                                                      I’ve run into this issue as well with PostgreSQL deployments using an internal CA that did short lived certs.

                                                                      Does anyone know if the upstream PostgreSQL devs are aware of the issue?

                                                                      1. 19

                                                                        This is fixed in PG 10. “This allows SSL to be reconfigured without a server restart, by using pg_ctl reload, SELECT pg_reload_conf(), or sending a SIGHUP signal. However, reloading the SSL configuration does not work if the server’s SSL key requires a passphrase, as there is no way to re-prompt for the passphrase. The original configuration will apply for the life of the postmaster in that case.” from https://www.postgresql.org/docs/current/static/release-10.html
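
                                                                          So on PG 10 a cert rotation boils down to any one of these (the data directory path is just an example):

                                                                              pg_ctl reload -D /var/lib/postgresql/10/main    # from the shell, as the postgres user
                                                                              psql -c 'SELECT pg_reload_conf();'              # from SQL
                                                                              kill -HUP "$(head -1 /var/lib/postgresql/10/main/postmaster.pid)"   # plain SIGHUP to the postmaster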

                                                                    1. 2

                                                                      I’ve heard a great deal of buzz and praise for this editor. I’ve got a couple decades’ experience with my current editor – is it good enough to warrant considering a switch?

                                                                      1. 3

                                                                        What do you love about your current editor?

                                                                        What do you dislike about it?

                                                                        What are the things your editor needs to provide that you aren’t willing to compromise on?

                                                                        1. 2

                                                                          It probably isn’t, but it’s maybe worth playing around with, just to see how it compares. It’s definitely the best-behaved Electron app I’ve ever seen. It doesn’t compete with the Emacs operating-system configurations, but it does compete with things like TextMate, Sublime, and the other smaller code editors. It has vi bindings (via a plugin) that are actually pretty good (and can use neovim under the hood!). I still don’t understand Microsoft’s motivation for writing this thing, but it’s nice that they dedicate a talented team to it.

                                                                          It’s very much still a work in progress, but it’s definitely usable.

                                                                          1. 3

                                                                            Here’s the story of how it was created[1]. It’s a nice, technical interview. However, the most important thing about this editor is that it marked an interesting shift in Microsoft’s culture. It appears to be the single most widely used open source product originating from MS.

                                                                            https://changelog.com/podcast/277

                                                                            1. 1

                                                                              Thanks for linking that show up.

                                                                          2. 2

                                                                            It’s worth a try. It’s pretty good. I went from vim to vscode mostly due to windows support issues. I often switch between operating systems, so having a portable editor matters.

                                                                            1. 1

                                                                              It’s a pretty decent editor to try out. I’ve personally given up because it’s just too slow :| The only scenario in which I tolerate slowness is a heavy-weight IDE (e.g., the IntelliJ family). For simple editing I’d rather check out Sublime (it’s not gratis, but it’s pretty fast).

                                                                              1. 1

                                                                                It doesn’t have to be a hard switch; I, for example, switch between vim and VS Code depending on the language and task. And if there is some Java or Kotlin to code then I will use IntelliJ IDEA, simply because it feels like the best tool for the job. I see the text editors I use more like tools in my toolbelt - you wouldn’t drive in a screw with a hammer, would you?

                                                                                1. 1

                                                                                  I do a similar thing. I’ve found emacs unbearable for java (the best solution I’ve seen is eclim which literally runs eclipse in the background), so I use intellij for that.

                                                                                  For python, emacs isn’t quite as bad as it is with java, but I’ve found pycharm to be much better.

                                                                                  Emacs really wins out with pretty much anything else, especially C/++ and lisps.

                                                                                  1. 1

                                                                                    VS Code has a very nice Python extension (i.e. good autocomplete and debugger), the author of which has been hired by MS to work on it full time. Not quite PyCharm-level yet, but worth checking out if you’re using Code for other stuff.

                                                                              1. 1

                                                                                I can’t speak for others which have this issue, but I’m waiting for letsencrypt to add support for wildcard certificates, and then I’ll change over. No big reason to update the cert now, and then again in a month, when I still have all of March to make the change.

                                                                                1. 32

                                                                                  I wasn’t implying. I was stating a fact.

                                                                                  And he’s wrong about that.

                                                                                  https://github.com/uutils/coreutils is a rewrite of a large chunk of coreutils in Rust. POSIX-compatible.

                                                                                  1. 12

                                                                                     So on OpenBSD amd64 (the only arch rust runs on… there are at least 9 others, 8, 7 or 6 of which rust doesn’t even support!)… this fails to build:

                                                                                    error: aborting due to 19 previous errors
                                                                                    
                                                                                    error: Could not compile `nix`.
                                                                                    warning: build failed, waiting for other jobs to finish...
                                                                                    error: build failed
                                                                                    
                                                                                    1. 8

                                                                                      Yep. The nix crate only supports FreeBSD currently.

                                                                                      https://github.com/nix-rust/nix#supported-platforms

                                                                                    2. 8

                                                                                      The openbsd guys are stubborn of course, though they might have a point. tbh somebody could just fork a BSD OS to make this happen. rutsybsd or whatever you want to call it.

                                                                                      edit: just tried to build what you linked, does cargo pin versions and verify the downloads? fetching so many dependencies at build time makes me super nervous. Are all those dependencies BSD licensed? It didn’t even compile on my machine, maybe the nixos version of rust is too old - i don’t know if the rust ecosystem is stable enough to base an OS on yet without constantly fixing broken builds.

                                                                                      1. 10

                                                                                        just tried to build what you linked, does cargo pin versions and verify the downloads?

                                                                                        Cargo pins versions in Cargo.lock, and coreutils has one https://github.com/uutils/coreutils/blob/master/Cargo.lock.

                                                                                        Cargo checks download integrity against the registry.

                                                                                        For offline builds, you can vendor the dependencies: https://github.com/alexcrichton/cargo-vendor, downloading them all and working from them.
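
                                                                                          Roughly, the vendoring flow looks like this (a sketch; cargo-vendor was a separate plugin at the time, and the commented snippet is the source-replacement stanza it tells you to put in .cargo/config):

                                                                                              cargo install cargo-vendor
                                                                                              cargo vendor vendor/    # download every crate pinned in Cargo.lock into ./vendor
                                                                                              # then point cargo at the local copies via .cargo/config:
                                                                                              #   [source.crates-io]
                                                                                              #   replace-with = "vendored-sources"
                                                                                              #   [source.vendored-sources]
                                                                                              #   directory = "vendor"
                                                                                              cargo build             # now builds from ./vendor without hitting the network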

                                                                                        Are all those dependencies BSD licensed?

                                                                                        Yes. Using: https://github.com/onur/cargo-license

                                                                                        Apache-2.0/MIT (50): bit-set, bit-vec, bitflags, bitflags, block-buffer, byte-tools, cc, cfg-if, chrono, cmake, digest, either, fake-simd, filetime, fnv, getopts, glob, half, itertools, lazy_static, libc, md5, nodrop, num, num-integer, num-iter, num-traits, num_cpus, pkg-config, quick-error, rand, regex, regex-syntax, remove_dir_all, semver, semver-parser, sha2, sha3, tempdir, tempfile, thread_local, time, typenum, unicode-width, unindent, unix_socket, unreachable, vec_map, walker, xattr

                                                                                        BSD-3-Clause (3): fuchsia-zircon, fuchsia-zircon-sys, sha1

                                                                                        MIT (21): advapi32-sys, ansi_term, atty, clap, data-encoding, generic-array, kernel32-sys, nix, onig, onig_sys, pretty-bytes, redox_syscall, redox_termios, strsim, term_grid, termion, termsize, textwrap, void, winapi, winapi-build

                                                                                        MIT OR Apache-2.0 (2): hex, ioctl-sys

                                                                                        MIT/Unlicense (7): aho-corasick, byteorder, memchr, same-file, utf8-ranges, walkdir, walkdir

                                                                                        It didn’t even compile on my machine, maybe the nixos version of rust is too old - i don’t know if the rust ecosystem is stable enough to base an OS on yet without constantly fixing broken builds.

                                                                                        This is one of my frequent outstanding annoyances with Rust currently: I don’t have a problem with people using the newest version of the language as long as their software is not being shipped on something with constraints, but at least they should document and test the minimum version of rustc they use.

                                                                                        coreutils just checks against “stable”, which moves every 6 weeks: https://github.com/uutils/coreutils/blob/master/.travis.yml

                                                                                        Can you give me rustc --version?

                                                                                          Still, “commitment to stability” is a function of adoption. If, say, Ubuntu starts shipping a Rust version in an LTS release, more and more people will try to stay backward compatible with that.

                                                                                        1. 2

                                                                                          rustc 1.17.0 cargo 0.18.0

                                                                                          1. 11

                                                                                            You’re probably hitting https://github.com/uutils/coreutils/issues/1064 then.

                                                                                              Also, looking at it, it is indeed that they use combinator functionality that became available in Rust 1.19.0. std::cmp::Reverse can be easily dropped and replaced by other code if 1.17.0 support were needed.

                                                                                            Thanks, I filed https://github.com/uutils/coreutils/issues/1100, asking for better docs.

                                                                                            1. 1

                                                                                              thanks for doing that, great community outreach :P

                                                                                        2. 5

                                                                                          Rust is “stable” in the sense that it is backwards compatible. However it is evolving rapidly so new crates or updates to crates may require the latest compiler. This won’t mean you’ll have to constantly fix broken builds; just that pulling in new crates may require you to update to the latest compiler.

                                                                                          1. 4

                                                                                            Yes, Cargo writes a Cargo.lock file with versions and hashes. Application developers are encouraged to commit it into version control.

                                                                                            Dependencies are mostly MIT/Apache in the Rust world. You can use cargo-license to quickly look at the licenses of all dependencies.

                                                                                            Redox OS is fully based on Rust :)

                                                                                          2. 4

                                                                                            Although you’re right to point out that project, one of Theo’s arguments had to do with compilation speeds:

                                                                                            By the way, this is how long it takes to compile our grep:

                                                                                            0m00.62s real 0m00.63s user 0m00.53s system

                                                                                            … which is currently quite undoable for any Rust project, I believe. Cannot say if he’s exaggerating how important this is, though.

                                                                                            1. 10

                                                                                            Now, at least for GNU coreutils, just running ./configure takes a good chunk of the time the Rust coreutils needs to compile at all (2 mins for a full Rust release build vs. 1m20.399s just for GNU’s configure). Also, GNU’s actual build is faster (coreutils takes a minute).

                                                                                            Sure, this is comparing apples and oranges a little. Different software, different development states, different support. The rust compiler uses 4 cores during all that (especially due to cargo running parallel builds), while GNU coreutils doesn’t do that by default (with -j4 it only takes 17s). On the other hand: all the crates that cargo builds can be shared. That means, on a build farm, you have nice small pieces that you know you can cache - obviously just once per rustc/crate pairing.

                                                                                              Also, obviously, build farms will pull all kinds of stunts to accelerate things and the Rust community still has to grow a lot of that tooling, but I don’t perceive the problem as fundamental.

                                                                                              EDIT: heh, forgot --release. And that for me. Adjusted the wording and the times.

                                                                                              1. 5

                                                                                                OpenBSD doesn’t use GNU coreutils, either; they have their own implementation of the base utils in their tree (here’s the implementation of ls, for example). As I understand it, there’s lots of reasons they don’t use GNU coreutils, but complexity (of the code, the tooling, and the utils themselves) is near the top of the list.

                                                                                                1. 6

                                                                                                Probably because most (all?) of the OpenBSD versions of the coreutils existed before GNU did, let alone GNU coreutils. OpenBSD is a direct descendant of Berkeley’s BSD. Not to mention the licensing problem. GNU is all about the GPL. OpenBSD is all about the BSD (and its friends) license. Not that your reason isn’t also probably true.

                                                                                                2. 2

                                                                                                  That means, on a build farm, you have nice small pieces that you know you can cache - obviously just once per rustc/crate pairing.

                                                                                                  FWIW sccache does this I think

                                                                                                3. 7

                                                                                                  I think it would be more fair to look at how long it takes the average developer to knock out code-level safety issues + compiles on a modern machine. I think Rust might be faster per module of code. From there, incremental builds and caching will help a lot. This is another strawman excuse, though, since the Wirth-like languages could’ve been easily modified to output C, input C, turn safety off when needed, and so on. They compile faster than C on about any CPU. They’re safe-by-default. The runtime code is acceptable with it improving even better if outputting C to leverage their compilers.

                                                                                                Many defenses of not using safe languages are that easy to discount. And OpenBSD is special because someone will point out that porting a Wirth-like compiler is a bit of work. It’s not even a fraction of the work and expertise required for their C-based mitigations. Even those might have been easier to do in a less-messy language. They’re motivated more by their culture and preferences than any technical argument about a language.

                                                                                                  1. 3

                                                                                                    It’s a show stopper.

                                                                                                    Slow compile times are a massive problem for C++, honestly I would say it’s one of the biggest problems with the language, and rustc is 1-2 orders of magnitude slower still.

                                                                                                    1. 12

                                                                                                      It’s a show stopper.

                                                                                                    Hm, yet, last time I checked, C++ was relatively popular, Java (also not the fastest in compilation) is doing fine, and scalac is still around. There are people working on alternatives, but show stopper?

                                                                                                      Sure, it’s an huge annoyance for “build-the-world”-approaches, but well…

                                                                                                      Slow compile times are a massive problem for C++, honestly I would say it’s one of the biggest problems with the language, and rustc is 1-2 orders of magnitude slower still.

                                                                                                      This heavily depends on the workload. rustc is quite fast when talking about rather non-generic code. The advantage of Rust over C++ is that coding in mostly non-generic Rust is a viable C alternative (and the language is built with that in mind), while a lot of C++ just isn’t very useful over C if you don’t rely on templates very much.

                                                                                                    Also, rustc stable is a little over 2 years old, whereas C/C++ compilers have had ample head start there.

                                                                                                      I’m not saying the problem isn’t there, it has to be seen in context.

                                                                                                      1. 9

                                                                                                        C++ was relatively popular, Java (also not the fastest in compilation) is doing fine and scalac is still around.

                                                                                                      Indeed, outside of gamedev most people place zero value on fast iteration times. (which unfortunately also implies they place zero value on product quality)

                                                                                                        rustc is quite fast when talking about rather non-generic code.

                                                                                                        That’s not even remotely true.

I don't have specific benchmarks because I haven't used Rust in years, but see this post from 6 months ago that says it takes 15 seconds to build 8k lines of code. The sqlite amalgamated build is 200k lines of code and has to compile on a single core because it's one compilation unit, yet it still only takes a few seconds. My C++ game engine is something like 80k lines if you include all the libraries, and it builds in about 4 seconds with almost no effort spent making it compile fast.
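For anyone who wants to sanity-check the single-translation-unit point, a rough (and admittedly unscientific) way to time it yourself, assuming you've downloaded the sqlite amalgamation into the current directory:

    # single-core, optimized build of the one-file sqlite amalgamation
    time cc -O2 -c sqlite3.c -o sqlite3.o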

edit: from your coreutils example above, rustc takes 2 minutes to build 43k LOC while gcc takes 17 seconds to build 270k, which works out to roughly 2.8 ms per line vs. 0.06 ms per line, making rustc about 44x slower…

                                                                                                        The last company I worked at had C++ builds that took many hours and to my knowledge that’s pretty standard. Even if you (very) conservatively say rustc is only 10x slower, they would be looking at compile times measured in days.

                                                                                                        while a lot of C++ just isn’t very useful over C if you don’t rely on templates very much.

That's also not true at all. Only small parts of a C++ codebase need templates, and you can easily make those templates simple enough that they have little to no effect on compile times.

Also, stable rustc is a little over two years old, whereas C/C++ compilers have had an ample head start.

                                                                                                        gcc has gotten slower over the years…

                                                                                                        1. 6

                                                                                                          Even if you (very) conservatively say rustc is only 10x slower,

Rustc isn't slower at compiling than C++ compilers are. It depends on the amount of generics you use, but the same argument goes for C++ and templates. Rust does lend itself to heavier use of generics, which leads to more compact but slower-compiling code; that does mean your time-per-LOC is higher for Rust, but time-per-LOC isn't a very useful metric. Dividing by LOC is not going to get you a useful measure of how fast the compiler is. I say this as someone who has worked on both a huge Rust and a huge C++ codebase and knows what the compile times are like. Perhaps slightly worse for Rust, but not like a 2x+ factor.

                                                                                                          The main compilation speed problem of Rust vs C++ is that it’s harder to parallelize Rust compilations (large compilation units) which kind of leads to bottleneck crates. Incremental compilation helps here, and codegen-units already works.

                                                                                                          Rust vs C is a whole other ball game though. The same ball game as C++ vs C.

                                                                                                          1. 2

                                                                                                            That post, this post, my experience, lines, seconds… very scientific :) Hardware can be wildly different, lines of code can be wildly different (especially in the amount of generics used), and the amount of lines necessary to do something can be a lot smaller in Rust, especially vs. plain C.

                                                                                                            To add another unscientific comparison :) Servo release build from scratch on my machine (Ryzen 7 1700 @ 3.9GHz, SATA SSD) takes about 30 minutes. Firefox release build takes a bit more. Chromium… even more, closer to an hour. These are all different codebases, but they all implement a web browser, and the compile times are all in the same ballpark. So rustc is certainly not that much slower than clang++.

                                                                                                            Only small parts of a C++ codebase need templates

                                                                                                            Maybe you write templates rarely, but typical modern C++ uses them all over the place. As in, every STL container/smart pointer/algorithm/whatever is a template.

                                                                                                            1. 2

                                                                                                              To add another unscientific comparison :) Servo release build from scratch on my machine (Ryzen 7 1700 @ 3.9GHz, SATA SSD) takes about 30 minutes. Firefox release build takes a bit more. Chromium… even more, closer to an hour. These are all different codebases, but they all implement a web browser, and the compile times are all in the same ballpark. So rustc is certainly not that much slower than clang++.

                                                                                                              • Firefox 35.9M lines of code
                                                                                                              • Chromium 18.1M lines of code
                                                                                                              • Servo 2.25M lines of code

You're saying that compiling 2.25M lines of code for a not-feature-complete browser in 30 minutes is comparable to compiling 18-35M lines of code in 'a bit more'?

                                                                                                              1. 4

                                                                                                                Line counters like this one are entirely wrong.

This thing only counted https://github.com/servo/servo. Servo code is actually split among many, many repositories.

                                                                                                                HTML parser, CSS parser, URL parser, WebRender, animation, font sanitizer, IPC, sandbox, SpiderMonkey JS engine (C++), Firefox’s media playback (C++), Firefox’s canvas thingy with Skia (C++), HarfBuzz text shaping (C++) and more other stuff — all of this is included in the 30 minutes!

                                                                                                                plus,

                                                                                                                the amount of lines necessary to do something can be a lot smaller in Rust

                                                                                                                1. 2

Agreed, it grossly underestimates how much code Chromium contains. Are you aware of the horrible depot_tools and the amount of stuff they pull in?

My point was that you are comparing a feature-incomplete browser whose code base is smaller by at least an order of magnitude, yet takes 30 minutes, against Chromium's "closer to an hour". I think your argument doesn't hold - you are free to provide data to prove me wrong.

                                                                                                                2. 3

                                                                                                                  Servo’s not a monolithic codebase. Firefox is monolithic. It’s a bad comparison.

                                                                                                                  Chromium is also mostly monolithic IIRC.

                                                                                                        2. 2

Free- and OpenBSD can compile their userland from source, so decent compile times are of the essence, especially if you are targeting multiple architectures.

                                                                                                        3. 6

                                                                                                          Well, ls is listed as only semi done, so he’s only semi wrong. :)

                                                                                                          1. 11

The magic words are "There has been no attempt". With that, especially by saying "attempt", he's completely wrong. There have been attempts at everything he lists. (He lists more here: https://www.youtube.com/watch?v=fYgG0ds2_UQ&feature=youtu.be&t=2112 - all of what Theo mentions has been written in Rust; some items even have multiple projects, and very serious ones at that.)

                                                                                                            For a more direct approach at BSD utils, there’s the redox core utils, which are BSD-util based. https://github.com/redox-os/coreutils

                                                                                                            1. 2

Other magic words are "POSIX compatible". Neither redox-os nor the uutils linked by @Manishearth seems to care particularly about this. I haven't looked all that closely, but picking some random utils shows that none of them is fully compliant. It's not even close, so surely they can't be considered valid replacements for the C originals.

                                                                                                              For example (assuming that I read the source code correctly) both implementations of cat lack the only POSIX-required option -u and the implementations of pwd lack both -L and -P. These are very simple tools and are considered done at least by uutils…
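For context, here's what those flags mean per POSIX (quick illustrations only; the paths are arbitrary examples):

    # cat -u: write bytes to the output without delay (unbuffered)
    tail -f /var/log/messages | cat -u
    # pwd -L: print the logical cwd (may contain symlinks, taken from $PWD)
    pwd -L
    # pwd -P: print the physical cwd with all symlinks resolved
    pwd -P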

                                                                                                              So, Theo may be wrong by saying that no attempts have been made, but I believe a whole lot of rather hard work still needs to be done before he will acknowledge serious efforts.

                                                                                                              1. 5

                                                                                                                This rapidly will devolve into a no true scotsman argument.

                                                                                                                https://github.com/uutils/coreutils#run-busybox-tests

uutils is running the busybox tests, which admittedly test for something other than POSIX compliance, but neither the GNU nor the BSD coreutils are POSIX-compliant anyway.

uutils is based on the GNU coreutils and redox's are based on the BSD ones, which is certainly a step in the right direction and can be counted as an attempt.

                                                                                                                For example (assuming that I read the source code correctly) both implementations of cat lack the only POSIX-required option -u and the implementations of pwd lack both -L and -P.

                                                                                                                Nobody said they were complete.

                                                                                                                All we’re talking about is Theo’s rather strong point that “there has been no attempt”. There has.

                                                                                                          2. 1

I'm curious about this statement from TdR in the linked email:

                                                                                                            For instance, rust cannot even compile itself on i386 at present time because it exhausts the address space.

                                                                                                            Is this true?

                                                                                                            1. 15

As always with these complaints, I can't find any reference to exact issues. What's true is that LLVM uses quite a bit of memory to compile, and rustc builds tend not to be the smallest themselves. But not that big. Also, recent improvements have definitely helped here.

I regularly build the full chain on an Acer C720P running FreeBSD, which has a Celeron and 2 GB of RAM. I have to shut down the X server and everything else first, but it works.

As usual, this is probably an issue of the kind "please report actual problems, and we'll work on fixing them". "We want to provide a build environment for OpenBSD and X, Y, Z are missing" is something we'd be happy to support; some fuzzy notion of "this doesn't fulfill our (somewhat fuzzy) criteria" isn't actionable.

                                                                                                              Rust for Haiku does ship Rust with i386 binaries and bootstrapping compilers (stage0): http://rust-on-haiku.com/downloads

                                                                                                              1. 10

                                                                                                                As always with these complaints, I can’t find any reference to exact issues.

                                                                                                                Only because it’s a thread on the OpenBSD mailing lists, people reading that list have the full context of the recent issues with Firefox and Rust.

I'll assume you just don't follow the list, so here is the relevant thread: lang/rust: update to 1.22.1

                                                                                                                • For this release, I had lot of problem for updating i386 to 1.22.1 (too much memory pressure when compiling 1.22 with 1.21 version). So the bootstrap was initially regenerated by crosscompiling it from amd64, and next I regenerate a proper 1.22 bootstrap from i386. Build 1.22 with 1.22 seems to fit in memory.

                                                                                                                As I do all this work with a dedicated host, it is possible that ENOMEM will come back in bulk.

                                                                                                                And if the required memory still grows, rustc will be marked BROKEN on i386 (and firefox will not be available anymore on i386)

                                                                                                                1. 7

                                                                                                                  Only because it’s a thread on the OpenBSD mailing lists, people reading that list have the full context of the recent issues with Firefox and Rust.

                                                                                                                  Sure, but has this:

                                                                                                                  And if the required memory still grows, rustc will be marked BROKEN on i386 (and firefox will not be available anymore on i386).

                                                                                                                  Reached the Rust maintainers? (thread on the internals mailing list, issue on rust-lang/rust?)

                                                                                                                  I’m happy to be corrected.

                                                                                                                  1. 7

                                                                                                                    Reached the Rust maintainers? (thread on the internals mailing list, issue on rust-lang/rust?)

I don't know. I don't follow Rust development; however, the author of that email is a Rust contributor, like I mentioned to you in the past, so I assume it's known to people working on the project. Perhaps you should check on that internals mailing list; I checked rust-lang/rust on GitHub but didn't find anything relevant :)

                                                                                                                    1. 7

I checked IRLO (https://internals.rust-lang.org/) and also found nothing. ("Internals", by the way, refers to the compiler internals; we have no closed mailing list.) The problem on projects of that scale seems to be that information travel is a huge issue, and that leads to aggravation. The reason I'm asking is not that I want to disprove you; I just want to ensure I don't open a discussion that's already happening somewhere just because something is going through social media at the moment.

                                                                                                                      Thanks for pointing that out, I will ensure there’s some discussion.

Reading the linked post, it seems to mostly be a regression in the jump from 1.21 to 1.22, so that should probably be a thing to keep an eye out for.

                                                                                                                    2. 2

                                                                                                                      Here’s a current Rust bug that makes life hard for people trying to work on newer platforms.

                                                                                                                2. 2

                                                                                                                  I’m skeptical; this has certainly worked for me in the past.

                                                                                                                  I used 32 bit lab machines as a place to delegate builds to back when I was a student.

                                                                                                                  1. 4

                                                                                                                    Note that different operating systems will have different address space layout policies and limits. Your effective space can vary from possibly more than 3GB to possibly less than 2GB.

                                                                                                              1. 25

Mercurial, made by another (in my opinion) much more well-spoken kernel hacker, is what really introduced me to the concept that you do not break the interface for downstream users, no matter how wrongly you may think they are using it.

                                                                                                                It’s an attitude that is difficult to convey because software developers always want to have the freedom to “improve” their own software, even at the possible cost of breaking something for some users (and worse, even telling users that they shouldn’t have been doing that in the first place).

                                                                                                                I keep going back to this blog post which I wish more people agreed with (Steve Losh is another person influenced by Mercurial):

                                                                                                                http://stevelosh.com/blog/2012/04/volatile-software/

                                                                                                                1. 4

Good write-up. And, yet, backward compatibility is the reason for most woes of IBM/COBOL and Wintel. Both improved their stacks a lot by creating incompatible additions. On IBM's side, they added stuff from the UNIX ecosystem. On Microsoft's side, they broke the driver model and permission model after switching to managed code for lots of apps. The author you linked to could've written a similar piece on Vista, as almost everyone did. Although they did botch its execution, the key, painful changes that were polished up by Windows 7 were great for the long term in reliability, security, and (.NET) maintainability by getting off C++. The driver architecture and verifier alone eliminated most blue screens.

Note that what I described doesn't mean changing things randomly and unnecessarily, which causes a lot of what the author describes. Companies doing high-availability software often create deltas in between that support both the old functionality/configurations and the new ones, optionally with tools to convert between them manually or automatically. Then the upgrade doesn't have unplanned downtime or headaches, or minimal ones at least. We don't see most proprietary or FOSS software doing that. Instead, it's "Surprise! Your stuff is now broken!"

The other thing to address is that the author writes as if developers owe the users something, as if there were some moral imperative. There are FOSS developers out there who are fairly selfless in that they're all about the experience of their users. Many aren't, though. They might be working for corporations such as Red Hat or IBM contributing to Linux. They might be building something mainly for themselves or a small group of contributors and sharing it with the world. They might even be building a product with a somewhat-neglected FOSS version with fewer features. In any case, most of the users will be freeloaders whom the developers are not working for or value very little. If those people are having problems, the developers with such motivations should ignore them.

So, I'd ask whether the Skype developers were trying to create a fantastic experience for Steve Losh on his new box or were doing what their managers wanted for the product, for whatever reasons the business had. I'm leaning toward the latter, which reframes his gripe about them. Maybe it's similar with the others. They may also be well-intentioned but sloppy, as he says. Who knows. It's just not as simple as all developers having a moral imperative to create a specific experience for specific or all users.

                                                                                                                  1. 3

                                                                                                                    “And, yet, backward compatibility is the reason for most woes of IBM/COBOL and Wintel.”

                                                                                                                    I’m no kernel expert, but I think what Linus means about no regressions/don’t change the interface is only referring to minor versions, major versions of the kernel are allowed to change the API?

                                                                                                                    1. 7

                                                                                                                      I’m no kernel expert, but I think what Linus means about no regressions/don’t change the interface is only referring to minor versions, major versions of the kernel are allowed to change the API?

                                                                                                                      No. The kernel’s public API is supposed to be backwards compatible even between major versions. From https://github.com/torvalds/linux/tree/master/Documentation/ABI :

                                                                                                                      Most interfaces (like syscalls) are expected to never change and always be available.

                                                                                                                      1. 1

                                                                                                                        How did you get “Active user with invites disabled” on your profile?

                                                                                                                        1. 1

                                                                                                                          How did you get “Active user with invites disabled” on your profile?

                                                                                                                          By inviting each and every user that asked for an invite through the website form. This must have upset the gatekeepers who decided that since some of those were spammers (probably the self-promotion type) I need to have my inviting rights revoked.

                                                                                                                      2. 4

Linus has made it clear that version numbers of the kernel mean nothing and are entirely arbitrary. In his last interview (https://www.youtube.com/watch?v=NLQZzEvavGs&feature=share) he makes it VERY plain that this is the case. He basically said that after the minor numbers get into the double digits he starts to lose track and bumps the major number. So we should hit 5.0 around next summer, but he made zero promises about this.

                                                                                                                        1. 4

                                                                                                                          See this other thread. You need to click “Show 89 previous comments” because Google+ seems to be unable to link to specific comments.

                                                                                                                          Alan Cox (a kernel hacker) writes “my 3.6rc kernel will still run a Rogue binary built in 1992. X is back compatible to apps far older than Linux.” and I would assume it still runs today.

                                                                                                                          And below Alan is Linus ranting about Gnome breaking stuff all the time.

                                                                                                                          1. 4

To add to what @stefantalpalaru said, version numbers in Linux actually don't mean much. They're just there to distinguish between versions, and the major version number changes every time Linus thinks the minor version numbers are getting too big.

                                                                                                                            1. 1

                                                                                                                              I was replying to the linked article.

                                                                                                                        1. 2

                                                                                                                          Minus the upgrade story, this looks pretty neat for places that want an appliance. Kudos for making it all open-source and working on merging upstream into Debian!

                                                                                                                          1. 1

I think for microservices and small programs, fat binaries are arguably better (or at least no worse) than using a container with shared libraries, extra files, etc. But like @jclulow said, once you move past small programs/microservices, having it all shoved into a single binary will only cause pain, especially if you have/need multiple programs for some reason, say syslog, linkerd, a watcher process to restart things, or some other helper app. Then suddenly containers start to be arguably better than fat binaries. Getting multiple programs into a binary is doable, things like busybox do it, but it definitely complicates things unnecessarily when there is little to no need. And if your program has lots of data files, a database, or other static data, shoving that into a binary starts to seem… less than wise. We have a perfectly good OS and filesystem that works reliably.

Is Docker all that and a box of chocolates? Definitely not, but it has its use cases. Shoving Go (or another fat binary) into Docker makes little sense; on that I agree.

                                                                                                                            1. 1

How specifically "does it cause pain"? How specifically are "containers arguably better"? Just because someone says it's so? What happens when they say something different/conflicting?

                                                                                                                              Feels vs reals?

                                                                                                                              1. 3

Using Python as a specific example here, but other VM-based languages tend to have similar pains in my experience.

Pain: things like Python tend not to do well when shoved into fat binaries, and other VM-based languages are the same. PyInstaller (the tool that shoves Python apps into fat binaries), for example, still doesn't support Python 3.6. Plus, I think my other examples are fairly specific; what part would you like more explanation on?

Containers are arguably better because they are a lot easier to reason about and they get all the dependencies together. Yes, things like virtualenv exist, but building C libs into virtualenvs is not the easiest. It's much easier to use system package managers/libraries for C libraries and venvs for Python code. Or just use a container and shove all the various dependencies into that, so you get to keep everything you need with the app while also being lazy and using system libraries, Python libs, etc. When using a container you don't need to worry about the venv/C-library build headaches; you can shove that responsibility off to your system package manager and still isolate one application from another. The alternative is something like Nix/Guix, but last I played with them they were not really ready for prime time.
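To make the tradeoff concrete, a rough sketch of the two routes (the package, path, and image names are just illustrative):

    # venv route: pure-Python deps are easy, C deps lean on the system package manager
    sudo apt-get install -y libpq-dev python3-dev   # C headers/libs from the OS
    python3 -m venv /opt/venvs/myapp
    /opt/venvs/myapp/bin/pip install -r requirements.txt

    # container route: bake the system libs and the Python deps into one image
    docker build -t myapp:latest .   # the Dockerfile runs the same steps, isolated
    docker run --rm myapp:latest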

                                                                                                                                I started my entire thing with ‘I think’, so clearly it’s opinion, not fact. If you decided to take it as fact, I worry about your reading comprehension. I’d also worry if you decided to take the linked article as fact.

                                                                                                                                As for different/conflicting views, I welcome them! It helps me learn and re-think my attitudes and opinions. How about you?

                                                                                                                                How specifically are containers arguably worse (which I assume is the position you are taking)?

                                                                                                                                1. 0

Excellent response, thank you. (Python is also important to me, and I understand the difficulties of doing a "pip install" into a static binary without needing PyInstaller - I already have a nested filesystem. Consider this not to be a problem for this discussion.) Please explain further any of your other examples you decide need greater scope than what I've described here, as I'd like to hear them.

(I'm beginning to think that it's just poor support for doing useful things with static binaries that might be at the heart of creating new containers - one adds to the entropy because it's easier to do a "clean sheet" that way, without regard for messing with people's dependence on the past.)

I've used virtual environments to encompass multiple development environments with limited effect. You're right, C's too messy to fit that model, although for pure Python development it's good enough. Package managers always seem to be a "work in progress", where things mostly work, but then you trip across something undone, underdone, or flat-out wrong, so you spend too much time having to debug someone else's poorly documented code. Yes, I didn't care much for Nix either. I guess the problem with all of these is that you have to rely on others to maintain what you'll depend on, and so if it isn't closely related to your own tool base / shell utils, it's just too much pain for too little gain. Is that about right?

Opinions aren't bad; they just help more if there's some collateral to justify them. I realize that takes effort, but I do appreciate it when you make the effort. (Also, when it doesn't appear to bruise egos, as my remarks seem to do sometimes - that's not what I'm after in contributing to this community by challenging opinions.)

I haven't taken anything as fact from the linked article. Like many articles, it's a bit conclusory and absurd, but it does "edge onto" an interesting area. (BTW, I am no fan of Go, and I think Rob Pike should have his head examined.) My "agenda" is more compact, less fragile, more obvious, deployable Python distributed applications where N > 10,000 and I can change the OS/kernel to do this the best way without any involvement from anyone else. I like my stuff.

Thank you for your inclusive mindset; I share that aim, and I'd like to encourage your trust in your genuine expression. If what I'm speaking to doesn't work for you, I'd just like to understand it better, because I'm sure what you're after is what I'm after too, once I understand it. I don't want to waste anyone's time with noise.

I'm not taking the position that containers are arguably worse. Just pushing back with "wait, is this really doing what I want, what baggage is it bringing along, and why do I beat my head on this thing when I didn't before?". So just some casual skepticism, where I'm willing to explore other approaches industriously to check out a hypothesis. (Like building a filesystem into a static executable container just to see that I can do a pip install inside it.)

So when I make Docker containers, I find that they are difficult to bound with the content required by various packages. You'll end up with things that mostly work, but the exceptions/omissions are often hard to find. If one builds a regression framework to prove a container's scope of use/function/capability, one seems to spend as much time maintaining the regression framework as one does the container itself. (With the static binary, the issue becomes more about the scope of path names, for which you can chroot/jail, catch the exception, and do a fixup.)

Docker containers thus get rebuilt a lot, which is overly complex compared to a static executable. Also, it's easier to trace/profile a static executable to get a map of where time/memory is used in a fine-grained way - with containers it's much more hit and miss, and most of the containers I've inherited from others seem to contain large unused portions as well as obscure additions that are left in "just because they seem to be needed somehow". These may be insignificant, … but how do you then know the scope of what the container will do?

Then there are the sudden spikes in memory/storage usage that exceed the container's size/resources. For a static binary, one can more easily backtrack memory allocations to find the deterministic "why?".

Finally, when you want to change code to dynamically shift relocation addresses to foil injection attacks, it's really simple to do so by relinking the static binary as a single, atomic operation. Doing this with Docker is fraught with surprises, as you sometimes discover dependencies within the libraries that are partly set off by artifacts of how the libraries are dynamically linked, i.e. ordering/assignment. Not to mention the debugging needed to find these surprises.

                                                                                                                                  Hope this isn’t TL;DR.

                                                                                                                                  1. 2

Containers (i.e. Docker) are very well defined, so I don't see these issues you speak of. Perhaps they come hither and yonder when doing funky things like requiring external disks, etc.

                                                                                                                                    Why try to push a static filesystem into a static binary when chroot and/or containers do that for you? That’s sort of the whole point.

As for the memory usage of Docker, Docker does have a horrible case of no resource limits by default, but you can definitely force them on. HashiCorp Nomad, for example, does this by default with Docker containers. If Kubernetes/Marathon/etc. don't do this, that's kind of sad.
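For reference, forcing limits on a single container looks something like this (the values and image name are arbitrary examples):

    # cap memory and CPU for one container instead of the unlimited defaults
    docker run --rm --memory=512m --cpus=1.5 myapp:latest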

There are of course surprises when dependency management hits you on the head, but if you limit yourself as much as possible to OS-level dependencies (i.e. dpkg/yum/rpm), especially for C-level code, then you shoot yourself in the foot a lot less when dealing with these problems.

We haven't really covered security here. Docker IN THEORY gives you better security, but it's sort of laughable now to claim that it actually does, especially with Docker set to default values. Jails and Zones definitely give you better security, and I'd like to think Docker will get there… eventually, but I'm not holding my breath. It's hard to bolt security on after the fact.

                                                                                                                              2. 1

Shoving Go (or another fat binary) into Docker makes little sense; on that I agree.

                                                                                                                                There are use cases for this. There is a base Go docker image that you can pull into your CI for building/distributing your application. If you use some type of scheduling system (DC/OS with Marathon or Kubernetes), you can then easily cluster that Go app.

                                                                                                                                There are lots of different use cases for containers and they can be used to solve a lot of problems … and introduce new ones, like not having security update checks for libraries within the containers.

                                                                                                                                1. 2

Using Docker to build Go apps makes some sense: you want your build environments to be well defined, and that's something containers give you. If you are of the mindset to deploy everything via Marathon/Kubernetes, I could see a use case for hiding it in a Docker image just to make deployment easier. I'd argue that's one of the good parts of HashiCorp Nomad: it supports exec as well as docker, so you can run anything, not only Docker containers.

                                                                                                                              1. 12

                                                                                                                                Ok, I’ll ask a stupid question. What does a great deployment pipeline look like?

                                                                                                                                1. 10

                                                                                                                                  It depends on what you’re trying to deploy and what constraints you have; there isn’t one magic bullet. One pipeline I was especially proud of for a Python app I wrote at Fog Creek worked like this:

                                                                                                                                  1. Create a pristine dump of the target version of the source code. Say the revision is 1a2b3c4d. We used Mercurial, not Git, so the command was hg archive -t tbz2 -R /path/to/bare/repo -r 1a2b3c4d, but you can do the same in Git.
                                                                                                                                  2. Upload this to each server that’ll run the app, into /srv/apps/myapp/1a2b3c4d
                                                                                                                                  3. Based on the SHA1 of requirements.txt, make a new virtualenv if necessary in /srv/virtualenvs/<sha1 of requirements.txt> on each server hosting the app.
                                                                                                                                  4. Copy general configuration into /srv/config/myapp/1a2b3c4d. Configs are generally stored outside the actual app repo for security reasons, even in a company that otherwise uses monolithic repositories, so the SHA here matches the version of the app designed to consume this config info, not the SHA of the config info itself. (Which should also make sense intuitively, since you may need to reconfigure a running app without deploying a new version.)
                                                                                                                                  5. Introduce a new virtual host, 1a2b3c4d.myapp.server.internal, that serves myapp at revision 1a2b3c4d.
                                                                                                                                  6. Run integration tests against this to make sure everything passes.
                                                                                                                                  7. Switch default.myapp.server.internal to point to 1a2b3c4d.myapp.server.internal and rerun tests.
                                                                                                                                  8. If anything goes wrong, just switch symlinks of default.myapp.server.internal back to the old version.

Now, that's great for an app that's theoretically aiming for five-nines uptime and full replaceability. But the deploy process for my blog is ultimately really just rsync -avz --delete. It really just comes down to what you're trying to do and what your constraints are.
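Roughly, steps 3 and 7-8 look like this in shell (simplified and schematic; the hash handling and symlink layout here are a sketch of the idea, not the exact scripts):

    # step 3: one virtualenv per unique requirements.txt, keyed by its SHA1
    REQ_SHA=$(sha1sum requirements.txt | cut -d' ' -f1)
    [ -d "/srv/virtualenvs/$REQ_SHA" ] || virtualenv "/srv/virtualenvs/$REQ_SHA"
    "/srv/virtualenvs/$REQ_SHA/bin/pip" install -r requirements.txt

    # steps 7-8: promote by flipping a symlink; rollback is the same flip in reverse
    ln -sfn /srv/apps/myapp/1a2b3c4d /srv/apps/myapp/default

(And the Git equivalent of the hg archive in step 1 would be something like git archive -o 1a2b3c4d.tar.gz 1a2b3c4d.)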

                                                                                                                                  1. 7

                                                                                                                                    I doubt you’ll find consistent views, which makes that the opposite of a stupid question.

                                                                                                                                    My ideal deployment pipeline looks something like the following:

                                                                                                                                    • Deployable artifacts are built directly from source control by an automated system (e.g. Jenkins).
• Ideally, some sort of gate is in place to ensure code review has occurred before a deployable artifact is built (e.g. Gerrit, though that project is very dogmatic and while I don't disagree with it, I don't strongly stand behind it either).
                                                                                                                                      • CI builds off unreviewed commits are fine, but I would consider them a developer nicety, not a part of the deployment pipeline.
                                                                                                                                    • Deployable artifacts are stored somewhere. Only the build tool should be able to write to it, but anyone should be able to read from it. (I don’t care what this looks like. Personally, I’d probably just use a file server.)
                                                                                                                                    • Deployment into a staging environment is one-click, or possibly automatic, from the artifact store.
                                                                                                                                    • Deployment into a production environment is one-click from the staging environment. The application must have successfully deployed into staging to be deployed into prod. Ideally, the application must go through some QA in staging to be deployed to prod, but that’s a process concern more than a technical one.
                                                                                                                                      • Operational personnel need to be able to bypass QA (in fact, bypass almost all of this pipeline) in outage situations.

                                                                                                                                    Note that I’m coming at this from the server-side perspective; “deploy into” means something different for desktop/client software, but I think the overall flow should still work (though I’ve never professionally developed client software, so I don’t know for sure).
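As a very rough sketch of the happy path glued together with shell (build.sh, deploy.sh, and the artifact-store path are placeholders, not a real system):

    # build a reviewed commit into a versioned artifact and publish it
    git checkout "$REVIEWED_SHA" && ./build.sh
    cp "myapp-$REVIEWED_SHA.tar.gz" /mnt/artifact-store/myapp/

    # one-command promotion, each stage gated on the previous one
    ./deploy.sh staging "/mnt/artifact-store/myapp/myapp-$REVIEWED_SHA.tar.gz"
    ./deploy.sh prod    "/mnt/artifact-store/myapp/myapp-$REVIEWED_SHA.tar.gz"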

                                                                                                                                    1. 3

                                                                                                                                      We do:

1. Tests are run, GPG signatures on commits are checked, and we are ready to deploy!
                                                                                                                                      2. Create a clean export of the revision we are deploying: hg archive -r 1a2b3c4d build/
3. Dump the revision # and other build info into a version file in build/; this is in JSON format as a {}.
                                                                                                                                      4. Shove this into a docker image: docker build -t $(JOB_NAME):$(BUILD_NUMBER) . and push it to internal docker registry.
                                                                                                                                      5. Update nomad(hashicorp product) config file to point to the new $(BUILD_NUMBER) via sed: sed -e "s/@@BUILD_NUMBER@@/$(BUILD_NUMBER)/g" $(JOB_NAME).nomad.sed >$(JOB_NAME).nomad.
                                                                                                                                      6. Do the same as previous step but for the environment we will be running in (dev, test, prod) if required.
                                                                                                                                      7. nomad run $(JOB_NAME).nomad

Nomad handles dumping Vault secrets, config information, etc. from the template directive in the config file. So configuration happens outside of the repo and lives in Vault and Consul.

You can tell by the env variables that we use Jenkins :) Different CI/CD systems will probably have different variables. If you're unfamiliar with Jenkins, BUILD_NUMBER is just an integer count of how many builds Jenkins has done for that job, and JOB_NAME is just the name you gave the job inside of Jenkins.
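Strung together as a single Jenkins shell step, the above is roughly the following (a simplified sketch, not the verbatim job; registry.internal and version.json are placeholder names):

    # steps 2-4: clean export, version file, image build and push
    hg archive -r 1a2b3c4d build/
    echo "{\"revision\": \"1a2b3c4d\", \"build\": \"$BUILD_NUMBER\"}" > build/version.json
    docker build -t "registry.internal/$JOB_NAME:$BUILD_NUMBER" .
    docker push "registry.internal/$JOB_NAME:$BUILD_NUMBER"

    # steps 5-7: template the nomad job file and submit it
    sed -e "s/@@BUILD_NUMBER@@/$BUILD_NUMBER/g" -e "s/@@MODE@@/$MODE/g" \
        "$JOB_NAME.nomad.sed" > "$JOB_NAME.nomad"
    nomad run "$JOB_NAME.nomad"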

                                                                                                                                      1. 2

                                                                                                                                        This is way off topic, but I’d love to hear why you went with Nomad and how it’s been working for you. It seems to fill the same niche as Kubernetes, but I hear practically nothing about it—even at shops using Packer, Terraform, and other Hashicorp products.

                                                                                                                                        1. 4

                                                                                                                                          We started with Nomad before Kubernetes was a huge thing, i.e. we heard about Nomad first. But I wouldn’t change that decision now, looking back. Kubernetes is complicated. Operationally it’s a giant pain. I mean it’s awesome, but it’s a maintenance burden. Nomad is operationally simple.

Also, Nomad runs things outside of Docker just fine, so we can effectively replace supervisor, runit, systemd, etc. with Nomad. Not that I remotely suggest actually replacing systemd/PID 1 with Nomad, but all the daemons and services you normally run on top of your box can be put under Nomad, so you have one way of deploying regardless of how something runs. E.g. Postgres tends to work better on bare hardware, since it's very resource intensive, but with the Nomad exec driver it runs on bare hardware under Nomad perfectly fine, and that gives us one place to handle logs, service discovery, process management, etc. I think maybe the newer versions of Kubernetes can sort of do that now, but I don't think it's remotely easy; then again, I don't really keep up.

                                                                                                                                          But mostly it's the maintenance burden. I've never heard anyone say Kubernetes is easy to set up or babysit. Nomad is ridiculously easy to babysit. It's the same reason Go is popular: it's a fairly boring, simple language complexity-wise. That is its main feature.

                                                                                                                                          1. 2

                                                                                                                                            Thanks for the write up! Definitely makes me want to take another look at it.

                                                                                                                                        2. 1

                                                                                                                                          Do the same as previous step but for the environment we will be running in (dev, test, prod) if required.

                                                                                                                                          Could you elaborate on this step? This is the one that confuses me the most all the time…

                                                                                                                                          1. 2

                                                                                                                                            Inside of Jenkins job config we have an ENV variable called MODE and it is an enum, one of: dev, test, prod

                                                                                                                                            Maybe you can derive it from the job-name, but the point is you need 1 place to define if it will run in dev/test/prod mode.

                                                                                                                                            So if I NEED to build differently for dev, test or prod (say for new dependencies coming in or something), I can.

                                                                                                                                            That same MODE env variable is pushed into the nomad config: env { MODE = "dev" }. It's put there by sed, identically to how I put in the $(BUILD_NUMBER).

                                                                                                                                            And if there are config changes needed to the nomad config file based on environment, say the template needs to pull from the 'dev' config store instead of the 'prod' one, or it gets a development Vault policy instead of a production one, I also do those with sed. You could use consul-template or some other templating tool if you wanted. Why sed? Because it's always there and very reliable; it's had 40 years of battle testing.
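
                                                                                                                                            To make that concrete, here's roughly what the template and the sed call look like; the @@MODE@@ marker and the file names are just how I happen to spell it, so treat this as illustrative:

                                                                                                                                                # Fragment of the .nomad.sed template checked into the repo:
                                                                                                                                                #
                                                                                                                                                #   env {
                                                                                                                                                #     MODE = "@@MODE@@"
                                                                                                                                                #   }

                                                                                                                                                # At deploy time Jenkins renders it:
                                                                                                                                                sed -e "s/@@MODE@@/${MODE}/g" "${JOB_NAME}.nomad.sed" > "${JOB_NAME}.nomad"

                                                                                                                                                # leaving env { MODE = "dev" } (or test/prod) in the rendered job file.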

                                                                                                                                            That way, when the nomad job starts, MODE will be in the process's environment. The program can then, if needed, act based on the mode in which it's running, like say turning on feature flags under testing.

                                                                                                                                            Obviously all of these mode-specific changes should be done sparingly; you want dev, test, and prod to behave as identically as possible, but there are always gotchas here and there.

                                                                                                                                            Let me know if you have further questions!

                                                                                                                                            1. 2

                                                                                                                                              Thank you very much! Helps a lot!

                                                                                                                                        3. 2

                                                                                                                                          What does a great deployment pipeline look like?

                                                                                                                                          I do a “git push” from the development box into a test repo on the server. There, a post-update hook checks out the files and does any other required operations, after which it runs some quick tests. If those tests pass, the hook pushes to the production repo, where another post-update hook does the needful, including a true graceful reload of the application servers.
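
                                                                                                                                          A stripped-down version of that first post-update hook might look something like this; the paths, the test script, and the 'production' remote name are all made up for the example:

                                                                                                                                              set -e

                                                                                                                                              REPO=$(pwd)                 # the bare test repo this hook runs in
                                                                                                                                              WORKTREE=/srv/app/test      # test checkout we deploy into (example path)

                                                                                                                                              # Check out the pushed code into the test working tree
                                                                                                                                              git --git-dir="$REPO" --work-tree="$WORKTREE" checkout -f master

                                                                                                                                              # Run the quick tests from the working tree; a failure aborts the deploy here
                                                                                                                                              unset GIT_DIR
                                                                                                                                              cd "$WORKTREE"
                                                                                                                                              ./run-quick-tests.sh        # hypothetical test runner

                                                                                                                                              # Tests passed: push on to the production repo, whose own hook does the reload
                                                                                                                                              git --git-dir="$REPO" push production master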

                                                                                                                                          If those tests fail, I get an email and the buggy code doesn’t get into production. The fact that no other developer can push their code into production while the codebase is buggy is considered a feature.

                                                                                                                                          Since I expect continuous integration to look like my setup, I don’t see the point of out-of-band testing that tells you that the code that reached production a few minutes ago is broken.

                                                                                                                                          1. 1

                                                                                                                                            The setup we use is not even advanced, but simply resilient against all the annoyances we’ve encountered over time running in production.

                                                                                                                                            I don't really understand the description underneath “the right pattern” in the article. It seems weird to have a deploy tree you reuse every time?

                                                                                                                                            Make a clean checkout every time. You can still use a local git mirror to save on data fetched. Jenkins does this right, as long as you add the cleanup step in the checkout behaviour.
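
                                                                                                                                            For what it's worth, the mirror trick is just this (hypothetical paths and URL; keeping the mirror fetched up to date is left out):

                                                                                                                                                # One-time: keep a bare mirror of the repo on the build host
                                                                                                                                                git clone --mirror https://example.com/project.git /srv/git-mirror/project.git

                                                                                                                                                # Per build: a fresh checkout that borrows objects from the local mirror
                                                                                                                                                git clone --reference /srv/git-mirror/project.git \
                                                                                                                                                    https://example.com/project.git build-workspace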

                                                                                                                                            From there, build a package, and describe the environment it runs in as best as possible. Or just make fewer assumptions about the environment.

                                                                                                                                            This is where we use a lot of Docker. The learning curve is steep, it's not always easy, and there are trade-offs. But it forces you to think about your environment, and the versions of your code are already nicely contained in the image.

                                                                                                                                            (Another common path is unpacking in subdirs of a ‘versions’ dir, then having a ‘current’ symlink you can swap. I believe this is what Capistrano does, mentioned in the article. Expect trouble if you’re deploying PHP.)

                                                                                                                                            I’ll also agree with the article that you should be able to identify what you deploy. Stick something produced by git describe in a ‘version’ file at your package root.
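
                                                                                                                                            Both of those together are only a couple of lines; the directory names and the build number below are just examples:

                                                                                                                                                # At build time: record exactly what's in the package
                                                                                                                                                git describe --tags --always --dirty > version

                                                                                                                                                # At deploy time: unpack next to the older releases, then swap the symlink
                                                                                                                                                mkdir -p /srv/app/versions/1234
                                                                                                                                                tar -xzf app-1234.tar.gz -C /srv/app/versions/1234
                                                                                                                                                ln -sfn /srv/app/versions/1234 /srv/app/current   # repoint 'current' in one step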

                                                                                                                                            Maybe I’m missing a lot here, but I consider it project specific details you just have to wrestle with, in order to find what works. I’ve yet to find a reason to look into more fancy stuff like Kubernetes and whatnot.

                                                                                                                                            1. 1

                                                                                                                                              I think the point kaiju's thread wants to make is that you shouldn't be deploying from your local machine, since every developer's environment will differ slightly and those artifacts might cause a bad build when sent to production. I believe the normal way is to have the shared repo server build, test, and deploy on a push hook, so that the environment is the same each time.

                                                                                                                                            1. 3

                                                                                                                                              I agree this is cool, but these are not things that should be used daily; changing history is stupid. Pretty much the only reason I can see for using a tool like this is to immediately back out a password (or other secret) committed by mistake. But really the right answer is to just change the password/secret, make a new commit removing it with a note that it is no longer valid, let the history stand as people making mistakes, and move along with life.

                                                                                                                                              1. 13

                                                                                                                                                I disagree. One area where this is insanely useful on a daily workflow is code review, because it gives you the best of both worlds: the mainstream, published history can be the versions of the patches that were finally accepted, with all fixes and any squashing or splitting that needed to happen, but if I need to later ask questions like, “why was this particular solution done here, rather than the alternatives?”, then I stand a much higher chance of being able to trivially answer that within the SCM by tracking obsolete versions of that changeset. This is a huge improvement, in practice, from trying to discover what GitHub PRs existed for a given commit and track those histories entirely through the GitHub UI.

                                                                                                                                                1. 1

                                                                                                                                                  Hrm. For me, code review belongs in the other person's branch: if Tootie wants me to merge her stuff, then I'll look at her code on her branch/repo and then merge it into the main branch/repo, i.e. hg incoming -vp <Tootie's repo>

                                                                                                                                                  But if you were doing code-review ala github or other methods, then I can see where this could be useful.

                                                                                                                                                  1. 1

                                                                                                                                                    Does that mean you’re not doing CI? You might want to consider setting up a CI server - it’s very useful.

                                                                                                                                                    1. 1

                                                                                                                                                      I’m not sure where you got no CI from what I said… but yes, I’m all for CI/CD. On push to the central VCS server (regardless of repo/branch), Jenkins will go forth and run some tests, etc. If it happens to be the main stable(or dev/test/etc) branch, it will also deploy. I definitely agree it’s useful.

                                                                                                                                                      The nice thing about my method is that it's decentralized, and anyone can use whatever tools they want to do code review, much like the Linux kernel. If Tootie wants 500 feature branches or repos, then she can have them, organized however she wants. When it's time to merge into the main branch, the main server repo(s) require a GPG signature from one other developer (via the commitsigs extension); it's just a test that's run as part of the CI run, which effectively makes our code reviews GPG-signed.

                                                                                                                                                      1. 1

                                                                                                                                                        Ah, I see, your central server has all the in-development branches. The nice thing about the evolve workflow is that you can have a publishing server that has a canonical linear history where all the tests pass on every commit. It takes some tooling and discipline to get there, but the benefit is that bisect is always useful and it’s easier to follow how the codebase evolved over periods when many developers were working simultaneously.

                                                                                                                                                        1. 1

                                                                                                                                                          For us, every push, regardless of branch/repo gets tested by Jenkins, a commit hook on our central repo calls into Jenkins to run. Can you explain more about your workflow and how it works? I think maybe I’m missing something. How does every commit get tested?

                                                                                                                                                          1. 1

                                                                                                                                                            After passing code review from one or two other developers, and after the test bot certifies that the tests pass on every commit in the series, a bot rebases the series onto the current default branch and then pushes to the publishing server. In principle one could also squash the series into a single commit but that means that each commit is no longer as easy to read. We like to keep commits atomic and manageable to review by a person without getting overwhelmed by a huge diff.
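
                                                                                                                                                            In rough terms, and assuming the rebase extension is enabled, the bot runs something like the following; the revision placeholders and server names here are made up, not our real ones:

                                                                                                                                                                # Pull the reviewed series from the development server
                                                                                                                                                                hg pull dev-server -r <tip-of-series>

                                                                                                                                                                # Rebase the series onto the current tip of the publishing branch
                                                                                                                                                                hg rebase -s <first-commit-of-series> -d default

                                                                                                                                                                # After the tests pass on every rebased commit, publish it
                                                                                                                                                                hg push publishing-server -r default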

                                                                                                                                                            1. 1

                                                                                                                                                              Assuming I understand you correctly, this is how it works:

                                                                                                                                                              So developer A makes a small change, commits it, pushes it to Server A. Server A then runs tests, assuming pass - dev A begs 1-2 other devs to review their commit. After code review is signed off (via what mechanism?) a bot rebases it onto some new repo, and pushes to Server B. Server B then does the standard CI/CD stuff and deploys?

                                                                                                                                                2. 7

                                                                                                                                                  changing history is stupid

                                                                                                                                                  Mercurial keeps track of whether a changeset is public (because you pushed it/someone pulled it) or draft. By default, Evolve-related commands (rebase, prune, fold, evolve, and others) will only change unpublished history, that is, the changesets that are still in draft mode: this prevents you from accidentally editing history others may already depend on. As Gecko says, it's a pleasure to be able to edit history without destroying the old history.

                                                                                                                                                  You can also share a history that you mutate, e.g. with a close colleague or with yourself on another device. Create a so-called 'non-publishing' repo: changesets pushed to this repository remain in draft mode. Evolve now supports sharing your history mutations. It's a rare use case, but it's supported; and Evolve's notion of 'obsolescence' and 'successor changesets' is what makes it possible to support it at all.
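
                                                                                                                                                  Concretely, a quick sketch (assuming the Evolve extension is enabled; <rev> is a placeholder):

                                                                                                                                                      # See which phase the current changeset is in (public, draft, or secret)
                                                                                                                                                      hg phase -r .

                                                                                                                                                      # Make a repo 'non-publishing' so pushes to it keep changesets in draft:
                                                                                                                                                      # add this to the receiving repo's .hg/hgrc
                                                                                                                                                      #   [phases]
                                                                                                                                                      #   publish = false

                                                                                                                                                      # With drafts you can then rewrite history safely, e.g.
                                                                                                                                                      hg amend            # rewrite the current commit, leaving an obsolescence marker
                                                                                                                                                      hg prune -r <rev>   # mark a changeset obsolete without stripping it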

                                                                                                                                                  1. 3

                                                                                                                                                    Yes, Mercurial is awesome, I'm not disagreeing; it's what I use. Like I said, this is very cool, but I don't think it's a good idea to use as a daily driver. That it doesn't destroy old history like Git does is indeed a breath of fresh air. I'm not at all disagreeing with how it's implemented/what it does. It's great that the tool exists for when you need it, I just don't think one should be using it all the time.

                                                                                                                                                  2. 4

                                                                                                                                                    In addition to code review, another reason to modify history on a daily basis is to avoid broken commits. It’s generally good practice to avoid landing any commits that lead to a broken state (in build or tests). Otherwise other contributors who happen to rebase onto the busted commit will be left wondering whether or not their changes are responsible for the failures.

                                                                                                                                                    Rather than committing a “bustage” fix on top, much better to change history and make sure the broken commit never makes it into upstream in the first place.

                                                                                                                                                    1. 1

                                                                                                                                                      For large teams, or where you are not able to communicate in basically real time, I can see this being useful. For smaller teams, and/or where real-time communication is available (i.e. a chat channel, etc.), I don't see it as a big deal. But I can agree there are times where this would be useful; then again, if you are breaking builds on a daily basis… maybe you are doing something wrong? :P

                                                                                                                                                  1. 1

                                                                                                                                                    Love the post. As a Rails dev, it makes me painfully aware about how little I know about databases. One thing I noticed:

                                                                                                                                                    Time is within business hours. CHECK ('8:00 am'::time <= VALUE AND VALUE <= '5:00 pm'::time)

                                                                                                                                                    Time assumptions sound dangerous here.

                                                                                                                                                    1. 3

                                                                                                                                                      Time is the worst. I work hard to only involve myself in applications that can safely assume everyone lives in London.

                                                                                                                                                      1. 1

                                                                                                                                                        Time is definitely awful, but I’ve found if you offload all time processing into PG, and use the PG date/time tools it’s not abysmal.

                                                                                                                                                    1. 2

                                                                                                                                                      If so much time is added by key travel, I'd really like to know what the key travel was on the keyboards of those machines from the 70s.

                                                                                                                                                      1. 5

                                                                                                                                                        For the typical 8-bit computer from the 80s, the CPU itself would do the scanning. In the PC world (starting with the original IBM in 1981) the keyboard itself had a CPU that scanned the keyboard matrix, then sent the keycode down a serial connection to the main PC. I don’t recall the baud rate, but I’m sure there’s some delay right there.

                                                                                                                                                        1. 5

                                                                                                                                                          That answers where pretty much all the latency except the key travel time comes in. ;)

                                                                                                                                                          1. 2

                                                                                                                                                            The old keyboards tend to have a lot of key travel in my experience. The really old teletypes were actual typewriters with some digital interfaces attached. The key travel on those would measure an inch or more, I bet (just from memory; I don't happen to have any lying around anymore).

                                                                                                                                                      1. 5

                                                                                                                                                        The second slide bothers me:

                                                                                                                                                        WHY USE OPENBSD

                                                                                                                                                        • UNIX-like
                                                                                                                                                        • Get the latest version of OpenSSH, OpenSMTPD, OpenNTPD, OpenIKED, OpenBGPD, LibreSSL, mandoc
                                                                                                                                                        • Get the latest PF (Packet Filter) features
                                                                                                                                                        • Get carp(4), httpd(8), relayd(8)
                                                                                                                                                        • Security focused Operating System
                                                                                                                                                        • Thorough documentation
                                                                                                                                                        • Cryptography

                                                                                                                                                        These aren't reasons to use OpenBSD. These are features of the OS, with the exception of “thorough documentation”.

                                                                                                                                                        What are reasons derived from these features? Maybe these:

                                                                                                                                                        • Security first
                                                                                                                                                          • Consistent updates to remote access, mail transit, time synchronization
                                                                                                                                                          • Tight integration with modern cryptography library with the least number of CVEs in the industry
                                                                                                                                                          • Industry-leading performance of built-in firewall with extensive, easily managed packet filtering features
                                                                                                                                                        • Built-in, highly performant web server with fewer than X vulnerabilities in last Y years
                                                                                                                                                        • Lightweight default installation completed within five minutes
                                                                                                                                                          • Small footprint encourages addition of only the software necessary for intended purpose of system
                                                                                                                                                          • Large ecosystem available
                                                                                                                                                        • Thorough, centralized documentation for every step of setup and use

                                                                                                                                                        This gives me business reasons to continue paying attention.

                                                                                                                                                        1. 1

                                                                                                                                                          +1 Do you think I need to rename this slide to “Features” and add your content on a new slide, “Why use OpenBSD”?

                                                                                                                                                          If you have further suggestions… you're welcome to share them! :) Thanks!

                                                                                                                                                          1. 3

                                                                                                                                                            You want to catch people's attention by asserting that the thing you are supporting is better than the thing they're using or the thing they are considering for task T. Don't let advantages be self-evident: explain them! This is an introductory presentation.

                                                                                                                                                            I'd call it “notable packages” or “core software” and drop the ones that aren't software.

                                                                                                                                                            Some quick notes off the top of my head; n.b. that I am not an OpenBSD person, and I know just enough to understand that I probably should be, and probably would be if I had more time to devote to it.

                                                                                                                                                            Maybe some slides like these:

                                                                                                                                                            Why use OpenBSD?

                                                                                                                                                            Security first.

                                                                                                                                                            • Consistent updates to remote access, mail transit, time synchronization
                                                                                                                                                            • Tight integration with modern cryptography library with the least number of CVEs in the industry
                                                                                                                                                            • Industry-leading performance of built-in web server, load balancer, and firewall with extensive, easily managed packet filtering features

                                                                                                                                                            Other reasons to use OpenBSD

                                                                                                                                                            • Built-in, highly performant web server with fewer than X vulnerabilities in last Y years
                                                                                                                                                            • Lightweight default installation completed within five minutes
                                                                                                                                                              • Small footprint encourages addition of only the software necessary for intended purpose of system
                                                                                                                                                              • Large ecosystem available
                                                                                                                                                            • Thorough, centralized documentation for every step of setup and use

                                                                                                                                                            Notable software packages

                                                                                                                                                            • OpenSSH remote access
                                                                                                                                                            • OpenSMTPD mail server
                                                                                                                                                            • OpenNTPD time server
                                                                                                                                                            • OpenIKED keyserver
                                                                                                                                                            • OpenBGPD routing server
                                                                                                                                                            • LibreSSL for modern cryptography

                                                                                                                                                            All of these are maintained as separate packages but are core components of the OS.

                                                                                                                                                            Notable programs

                                                                                                                                                            • carp(4) - IP address sharing on the same network
                                                                                                                                                            • httpd(8) - web server optimized for the OS, top performance compared to other OS server packages
                                                                                                                                                            • relayd(8) - highly performant load balancer for IP traffic
                                                                                                                                                            • pf(4) - enterprise-quality packet filtering firewall
                                                                                                                                                            • mandoc(1) - extensive system-wide documentation in a variety of formats

                                                                                                                                                            Notable technology

                                                                                                                                                            • pledge(2) - whitelists required system calls at startup, limiting attack surface by restricting what a program can do to what it is intended to do
                                                                                                                                                            • zfs(8) - enterprise-grade expandable, recoverable, and snapshottable filesystem

                                                                                                                                                            Pick some other stuff from https://www.openbsd.org/innovations.html for it, too.

                                                                                                                                                            Quite frankly, I find the inclusion of the manual page section in the name to be confusing. I’d omit it if you don’t explain it at least non-exhaustively.

                                                                                                                                                            1. 1

                                                                                                                                                              Uh, OpenBSD has ZFS? Since when? I mean https://www.tedunangst.com/flak/post/ZFS-on-OpenBSD suggests it's sort of there, but I don't think anyone suggests you actually USE it on OpenBSD. Regardless, it's not notable technology from OpenBSD; they clearly don't care for it, though they like some of the features it has…

                                                                                                                                                              Otherwise I like this approach for “why OpenBSD” better than what is on the slides now.

                                                                                                                                                              1. 3

                                                                                                                                                                Sort of there? Where exactly? Have you checked the date of that commit? ;^)

                                                                                                                                                                1. 1

                                                                                                                                                                  LOL, exactly!

                                                                                                                                                                2. 1

                                                                                                                                                                  Sorry,

                                                                                                                                                                  n.b. that I am not an OpenBSD person and I know just enough to understand that I probably should be

                                                                                                                                                                  This was in my browser history: https://man.openbsd.org/FreeBSD-11.0/zfs.8 but I see now that it’s from the FreeBSD section. That’s confusing.

                                                                                                                                                                3. 1

                                                                                                                                                                  On OpenBSD, packages are pre-compiled binaries of 3rd-party software so I wouldn’t use that word as it may cause confusion. The above are certainly not packages in that sense.

                                                                                                                                                            1. 2

                                                                                                                                                              I do pretty much the same thing as @je, but I do it by ‘topic’, like I have work/servicename.txt or vacation/location.txt etc. New stuff goes at the top of the file.

                                                                                                                                                              This and https://github.com/BurntSushi/ripgrep gets me everything every note editor has with all the future proofing one could hope for.
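
                                                                                                                                                              For example (made-up paths and search terms):

                                                                                                                                                                  # Everything I've ever written down about a service, across all topics
                                                                                                                                                                  rg -i 'servicename' ~/notes

                                                                                                                                                                  # Or scoped, with a couple of lines of context
                                                                                                                                                                  rg -i -C 2 'postgres' ~/notes/work/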

                                                                                                                                                              1. 1

                                                                                                                                                                I even do that too! Especially the vacations one. In each of my side-project folders, I have an 'idea.md' file based on a template that I fill in per-project. The 'log' section is the equivalent of my general daily log file, just specific to the project.

                                                                                                                                                                # Initial thought
                                                                                                                                                                
                                                                                                                                                                Date:
                                                                                                                                                                Location:
                                                                                                                                                                
                                                                                                                                                                Inspiration:
                                                                                                                                                                Solution:
                                                                                                                                                                
                                                                                                                                                                Why is it worth the effort?
                                                                                                                                                                Why is it not worth the effort?
                                                                                                                                                                
                                                                                                                                                                What already exists?
                                                                                                                                                                Difficulty:
                                                                                                                                                                
                                                                                                                                                                Tags:
                                                                                                                                                                
                                                                                                                                                                # Log