1. 21

Lately at work there’s been a discussion about our release process and how much of the testing QA should do versus developers. I was curious to see how it’s done elsewhere.

Describe the release process at work and/or on your personal projects: how does code go from development to production (QA? Build pipeline? Who automates what?), and what happens after production (how does your monitoring work? Rollbacks? Etc.)?

  2. 9

    Current job (small team):

    • Commit and push changes
    • Concourse CI picks those changes up and
      1. if they’re not tagged, it runs the unit tests and stops
      2. if they’re tagged, it runs the unit tests and continues
    • It builds docker images and pushes them to ECR
    • It deploys those images to ECS in the staging environment
    • We monitor the changes in staging (metrics are scraped by Prometheus w/ graphing from Grafana)
    • If everything looks good in staging we push a button in Concourse to send the images to ECS in the prod environment

    The Concourse build pipeline definition, tasks, and scripts are defined in the repo, and infra is managed with Terraform (which Concourse runs). It took me about three days to set everything up, and it has been running smoothly ever since (~6 months).

    To roll back, we just re-run older deployment jobs.

    We don’t have a dedicated QA team. Everyone tests their own changes or asks someone else on the team for help and we have an extensive unit test suite.

    Side projects (just me):

    • ./scripts/deploy runs tests and deploys either to GAE or Heroku and that’s it.
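
    Roughly, the script is just “run the tests, then push”; something along these lines for the Heroku case (a sketch, not the actual script: the real one also handles the GAE path, and the test command is only an example):

      #!/usr/bin/env python3
      # scripts/deploy (sketch): run the test suite, then deploy if it passes.
      import subprocess
      import sys

      def run(cmd):
          print("+", " ".join(cmd))
          return subprocess.run(cmd).returncode

      def main():
          if run(["python", "-m", "pytest"]) != 0:        # assumes a pytest suite
              sys.exit("tests failed, not deploying")
          # A Heroku deploy is just a git push to the heroku remote.
          if run(["git", "push", "heroku", "master"]) != 0:
              sys.exit("deploy failed")
          print("deployed")

      if __name__ == "__main__":
          main()
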
    1. 1

      Same, but Jenkins and Datadog. It just works.

      1. 2

        Oh, using the tags to decide whether to deploy or not is a niiiice idea, I’ll steal it.

    2. 6

      I’ve worked at several different places with widely varying processes.

      At my current place, we do code changes in GitHub PRs with CI and deploy to multiple levels of testing environments for our testing department and our customers’ testing departments to try out. At various points, we designate a release version, do testing and documentation on that, and release it. We have an in-house system for deploying onto all of our environments, so by the time something gets released to production, it’s been deployed using the same systems many times. We have a variety of monitoring systems, including a custom application that regularly hits a test route on our services and reports failures, logging services that watch for keywords in our logs, and AWS CloudWatch alarms that send emails.

      My personal projects, most of them have few to no users other than myself. I like to deploy with git pushes and run a script on the server to update anything that needs updating and restart the server process.

      One of the more interesting ones was at one of my previous jobs. We tried having a testing environment for our application, but we were never able to find bugs effectively in it. The application’s purpose was to do complex calculations on how chemical processes run, and setting up those calculations was very time-consuming for the chemical engineers doing it. Nobody had much enthusiasm for setting up tests in non-production environments that were thorough enough to actually expose bugs in the calculations. After several episodes of catching bugs in production instead of in testing, we decided to switch to deploying straight to production for one of the smaller projects, and to update the other, bigger projects to newer versions after they had been used in the first project for a few weeks or so.

      1. 6

        I have found the following flow to work relating to devs, QA, and deployment:

        1. A bug fix or feature request comes into the pipeline and is approved (or not) by the PM.
        2. Write up a clear outline of what will be covered for this specific issue. The PM (or reporter), dev, and QA are present so everyone knows what’s going on.
        3. Dev writes the code and creates a PR.
        4. QA checks that PR (in the dev environment, not locally).
        5. QA says things look good/bad, gives details on the steps taken, and automates tests.
        6. The PM or reporter looks at it and gives the go-ahead, or sends it back if an issue is found.
        7. If things are good, the dev merges and deploys. If things are bad, go back to step 3 and move forward.
        8. Code is in production. The dev checks logs for the next few hours and adds log checking to whatever service is being used. If things look bad or there is an alert, revert. If they’re good, then awesome.

        I do not agree that QA should perform deployments.

        The project is the baby of the developer. Unless QA has a very deep knowledge of the project, the framework, the environment, and what signs indicate an issue, they should focus on doing their job. Their job should be to test (functionality, edge cases, whatever is needed), to write automation around those tests, and to include that automation in CI.

        Developers know what environment their code works under. They know what to look for in the logs that may indicate an issue. They usually have a good idea if some new issue that cropped up after deploying is due to their latest commit.

        Without this knowledge, or a transfer of this knowledge to QA, developers are handing their baby to a friend to babysit without telling them about any allergies or foods that make them gassy.

        1. 5

          My two cents regarding my personal projects…

          When I was working on MATHC 1.x.x, I was using SemVer, which has the concept of not breaking backwards compatibility within a major version, adding features in each minor version (x.MINOR.x), and reserving the patch version (x.x.PATCH) for fixes.

          The SemVer process is very inorganic, and development was often constrained by SemVer rules. I noticed the same thing when contributing to other people’s OSS projects. Found a critical design flaw and just made a patch that fixes it? You have to wait until the next major version, because it breaks backwards compatibility.

          When I released MATHC 2, I switched to CalVer, and the scheme was YYYY.MM.DD.MICRO. The version was actually MATHC 2018.08.02.0. It’s a pretty straightforward scheme, as the version is just the release date. I added the MICRO, reserved for when I have to publish two versions on the same day, e.g. if I notice a bug right after the previous release. I always try to keep backwards compatibility, but the development of my project is not affected by it. If I have to break backwards compatibility, then I will break it, and I will mention it in the release notes. The versioning scheme doesn’t rule over the development process; it’s just a reflection of it. Very simple.
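
          Cutting a version string under this scheme is trivial; roughly this (a sketch, not my actual release tooling):

            from datetime import date

            def calver(micro: int = 0) -> str:
                """Build a YYYY.MM.DD.MICRO version string for today, e.g. 2018.08.02.0."""
                today = date.today()
                return f"{today.year:04d}.{today.month:02d}.{today.day:02d}.{micro}"

            print(calver())    # first release of the day, e.g. 2018.08.02.0
            print(calver(1))   # emergency follow-up release on the same day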

          Regarding testing, I had unit tests during the first major version, but they were a lot of work for only one person to take care of, and they were hardly ever useful. I removed the tests in the newer versions and rely on feedback from users to identify hidden bugs. The good thing about personal OSS projects is that you can easily make changes to prioritize your mental health over the project :)

          1. 2

            We use calver at work as well, and if for nothing else, just not having discussions of what is a major, a minor or a patch release is SUCH a good thing.

            1. 2

              I know, right? I wish other projects followed this scheme. After some time using CalVer, I’m starting to believe that SemVer is (to some extent) holding back the quality of software.

              1. 2

                I mean, IF we could automatically determine whether a release is major, minor or patch by checking the public APIs, then that would be great, but there’s no way to do it reliably in most languages. Without this kind of tooling, it might make sense to use semver for libraries, but for systems that are used by humans, not really.
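
                For Python you can get a crude approximation by diffing a module’s public names between two versions, but it only notices additions and removals, not changed signatures or behaviour, which is exactly why it isn’t reliable. A rough sketch (the library name is made up):

                  import importlib

                  def public_names(module_name: str) -> set:
                      """A very crude notion of 'public API': the module's non-underscore names."""
                      module = importlib.import_module(module_name)
                      return {name for name in dir(module) if not name.startswith("_")}

                  def suggest_bump(old_names: set, new_names: set) -> str:
                      """Suggest a SemVer bump from two snapshots of the public names."""
                      if old_names - new_names:    # something was removed: breaking
                          return "major"
                      if new_names - old_names:    # something was added: new functionality
                          return "minor"
                      return "patch"               # same surface: assume fixes only

                  # Usage: snapshot the old version's names, upgrade, then compare.
                  # old = public_names("mylib")     # hypothetical library
                  # ...upgrade mylib...
                  # print(suggest_bump(old, public_names("mylib")))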

          2. 5

            Roughly thousands of man-hours of manual testing all over the world. It requires a few months and millions of dollars. Details vary depending on the customer.

            For the developers, there is continuous integration testing for each pull request, done via x86 simulation and on the target device. Code reviews for each pull request. Sometimes manual full-system testing by developers, but mostly by a special integration team.

            I’m working for an automotive supplier so our processes are probably not applicable to you. The contrast might be interesting though. ;)

            1. 1

              I work with an Android integrator, and it sounds remarkably similar. I develop tools to help people test the actual Android devices, so I don’t work with that specific workflow myself, but for the people who actually work on Android it’s pretty similar.

            2. 4

              At $job:

              1. Announce the rollout in the common chat channel. We have about 50-ish things that can be rolled out, and concurrent rollouts are the norm.
              2. Roll out like 5 minutes later tops.
              3. Monitor graphs, the common error tracing, and the chat while in vanguard.
              4. Declare success 15 minutes later and deploy to the rest of the hosts.

              Abort and revert if anyone signals a problem during that process.

              The first action taken when a rollout is suspect is to revert, if it’s been live for less than a day. That’s not a hostile action, and since the rollout system is uniform across the company, anyone knows how to revert or roll out any system.

              It sounds janky, but is remarkably efficient in practice.

              1. 4

                I am working on CI based releases.

                • Dev work goes in feature branches
                • Test manually in a QA environment, using a faked login process with provisioned auth users.
                • If everything looks good, manually release to the prod infrastructure.

                … I haven’t had to roll back any releases yet … So yay! (In 5+ years)

                Prod has Grafana graphs and internal audit reports of the live system.

                1. 4

                  My build system currently contains 14 Java EE web applications and 49 Java library projects. Back when it was only 6 projects and one web application, I performed builds and dependency management manually. About a dozen projects was the tipping point for me where I subsequently added Ant for builds, Ivy for dependency resolution, and finally closed the loop about 5 years ago by moving builds entirely to Jenkins and Artifactory. Our core projects have 100% branch coverage, so QA/QC is essentially guaranteed via the build process. This saved me so much time; it is like having a whole team handling my builds and testing for me. I remember the bad old days where I would spend 30 minutes coding and 2 hours building and deploying. Now Jenkins handles the full build cascade, which on a single server instance takes right around an hour if the core projects need to be built and tested. Most of the time however, I’m just committing changes to the web applications, which take only a minute or so to build and deploy.

                  Here is my current process:

                  1. Commit code to private Subversion repository.
                  2. Subversion automatically triggers a Jenkins build when the commit completes.
                  3. Jenkins build (fully automated).
                    • Jenkins updates its local repository.
                    • Jenkins runs Ant build.
                      • Ant resolves Ivy dependencies via private Artifactory instance.
                      • Ant performs build and javadoc.
                      • Ant runs unit tests (JUnit) with code coverage reports (Cobertura).
                    • Jenkins publishes artifacts to private Artifactory instance if build was successful.
                    • Jenkins deploys web application(s) to development server (Tomcat) if build was successful.
                    • Jenkins triggers step 3 for any downstream projects.
                  4. Local development environment (Eclipse with IvyDE) resolves the newly deployed artifacts from private Artifactory instance.
                  5. Deployments of web applications to production server occur at most once per week and are handled manually via Tomcat manager. This is trivial but I want a better way to manage deployments to production.
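
                  For that last step, Tomcat’s manager application also has a text API that can be scripted, which would at least remove the clicking around; a sketch of the idea (host, credentials, and paths are placeholders, and the account needs the manager-script role):

                    import requests  # pip install requests

                    TOMCAT = "http://prod-server:8080/manager/text"   # placeholder host
                    AUTH = ("deployer", "secret")                     # placeholder credentials

                    def deploy_war(war_path: str, context_path: str):
                        """PUT the WAR to the manager's text API, replacing the running app."""
                        with open(war_path, "rb") as war:
                            response = requests.put(
                                f"{TOMCAT}/deploy",
                                params={"path": context_path, "update": "true"},
                                data=war,
                                auth=AUTH,
                            )
                        response.raise_for_status()
                        print(response.text)   # the manager replies with OK/FAIL plus a message

                    # deploy_war("myapp.war", "/myapp")   # placeholder names
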
                  1. 3

                    Probably pretty vanilla but it works:

                    • every PR gets tested by Jenkins (integration tests and code style)
                    • after a merge, the PR is packaged into Linux packages and pushed to Pulp
                    • we manually trigger a job that copies the package to our dev repository
                    • our infrastructure orchestration tries to update packages every couple of minutes
                    • we keep track of errors on a Grafana dashboard; if everything is OK we trigger a deploy to acc
                    • etc… until drp is online

                    We don’t have a massive volume of activity so the manual actions are something we can live with. We’re working on using containers for the next version, but deploying to dev, acc, … probably will always be a manual action.

                    1. 3
                      • commit and push changes on a personal branch, tell someone it’s ready for merge.
                      • any other team member reviews, merges into stable, and pushes to the stable repo.
                      • CI runs make deploy, which runs tests, builds, and then deploys.

                      Personal Projects:

                      • make deploy does the right thing.

                      We use a Makefile as our entry point into our projects: make is everywhere, and it’s mature, well-tested software whose warts are well known. make test will just work, regardless of the language or tools actually used to run the tests. E.g. for Rust code we use cargo to run tests, and for Python code we use pytest w/ hypothesis, but you don’t have to remember those details; you just remember make test.

                      1. 2

                        I always had a prejudice against Make, but I wrote a makefile the other day and it changed my mind. I still find the syntax and documentation messy, but it’s good for what it’s intended for. I plan on spreading its use at work.

                        1. 2

                          Good luck! I agree it’s not perfect, it definitely has warts, but it’s very mature software that’s not going away anytime soon. It will die about the time the C language dies, which won’t be in my lifetime.

                          With other build software, it’s anyone’s guess how long it will be maintained.

                      2. 3

                        Current job (~12 devs, legacy Elixir monolith)

                        1. Read ticket/bug report (possibly after filing it)
                        2. Fork tip of master to feature branch, open PR
                        3. Do work, keep pushing upstream
                        4. If extended QA (read: !engineering review) is required, deploy to a staging server
                        5. Rebase branch against master
                        6. Announce desire for code review
                        7. Update PR as needed to address feedback and appease CI gods
                        8. Merge PR
                        9. Announce impending changes to rest of company, spin up Datadog and Rollbar in background to watch for problems
                        10. Run deploy script (which internally does a bunch of Elixir/Erlang fuckery)
                        11. (in script) Update CHANGELOG with version notes
                        12. (in script) Launch deployment (build code, copy code, hot-upgrade) and update our production server
                        13. Announce in the changelog channel the blurb the script gives
                        14. Goto 1.

                        Comments: This is waaaaay better than following the manual runbook we had written up last year; the CTO used a lot of the documentation I’d gathered to help improve the tooling scripts. Deploying monoliths is usually not complicated, but in Elixir/Phoenix’s case, a previous generation of engineers had opted for shiny and left us with a slow and sometimes unstable deploy situation. I can’t wait for the day we just treat it like a normal damned app on Heroku and can just use MIX_ENV=prod mix phx.server. There’s some rumbling about K8s (mostly from the devs who haven’t had to debug fucked deploys…), but I’m kinda hoping we can hold off on that until other, more fundamental annoyances are handled.

                        Releasing software on the web is still less annoying than cutting gold in desktop land. :)

                        1. 3

                          I can’t speak for the rest of the teams at work, only for my own team, which is somewhat unique to the company because our code actually interfaces with our customers, the various Monopolistic Phone Companies, in the call path of a phone call. With that out of the way …

                          Developers (me and two others) write the code. Bug fix, feature enhancement, new feature, what have you. Once we are happy, the ticket (Jira) is assigned to our QA engineer, who does testing in the DEV environment (programs are generated by our build system, Jenkins; ops can automatically push stuff from Jenkins to the various environments mentioned). Once that passes, it moves on to the QA environment and another round of testing. Once that passes, it moves on to STAGING for another round of testing. Once that passes, we then submit a request to our customer, the Monopolistic Phone Company, stating our intent to upgrade our stuff. They have 10 business days to accept or reject the proposal. If they reject, we wait and try again later. If they accept, on the agreed-upon day (actually, night), ops will push to PRODUCTION. During the push, the QA engineer and developer(s) in question will also be there (on the phone and via chat) to test and make sure things went okay in the deployment.

                          There have been two times when, during the push to PRODUCTION, things weren’t working correctly and I (the developer) called for a rollback, which is easy in our environment (both times dealt with parsing telephone numbers [1]).

                          The testing of our stuff is about half automated. The “happy path” is automated, but the “less-so-happy paths” aren’t, and they’re quite complex to set up: our business logic component makes queries to two different services, and we need to handle the case when both time out, or one but not the other. That’s four test cases, and only one of them (both reply in time) is easy to test (there’s a difference between a service being down and its reply coming in too late). As for writing the automated tests we do have, that has been my job (I started out as the QA engineer for the team in question, even before it was a team [2]). The QA engineer does write some code, but will come and ask me about the finer points of testing some of the scenarios.
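
                          For what it’s worth, if this were Python the four-case matrix itself is easy to write down with a parametrized test; the hard part is faking the “reply came in too late” condition. A toy sketch (handle_call here is a stand-in, not our real code):

                            import itertools
                            import pytest

                            TIMELY, LATE = "timely", "late"   # a reply arriving after the deadline is
                                                              # not the same as the service being down

                            def handle_call(reply_a, reply_b):
                                """Stand-in for the real business logic: degrade gracefully unless
                                both upstream services replied in time."""
                                if reply_a == TIMELY and reply_b == TIMELY:
                                    return "full-answer"
                                return "fallback"

                            # itertools.product enumerates the four combinations in one line.
                            @pytest.mark.parametrize("reply_a,reply_b",
                                                     itertools.product([TIMELY, LATE], repeat=2))
                            def test_timeout_matrix(reply_a, reply_b):
                                expected = "full-answer" if (reply_a, reply_b) == (TIMELY, TIMELY) else "fallback"
                                assert handle_call(reply_a, reply_b) == expected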

                          Sadly, STAGING and PRODUCTION should match, but for our team they don’t. And some stuff can’t be tested until it hits PRODUCTION, because how do you test an actual cell phone in a lab environment? Especially when your company can’t afford a lab environment like the Monopolistic Phone Company has?

                          [1] If you can avoid doing so, avoid it. I cannot. I need to parse numbers for the North American Numbering Plan, and just that is difficult enough. I’ve seen numbers that make me seriously question the competency of the Monopolistic Phone Company to manage their own phone networks.

                          [2] Long story.

                          1. 1

                            Clients I work with use a thing called a femtocell to set up cell networks from one country inside another, in order to run tests. It’s quite expensive, though (or so I hear), and they’re moving on to other alternatives as much as possible.

                            1. 2

                              Everything dealing with the Monopolistic Phone company is expensive, even the equivalent of DNS queries [1].

                              [1] Take a phone number and look up the name. Funnily enough, this is done over DNS! (NAPTR records.) Imagine having to pay for every DNS lookup. It’s insane.
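
                              For the curious, mechanically it’s an ENUM-style query: reverse the number’s digits under a DNS suffix and ask for NAPTR records. A rough sketch with Python’s dnspython, using the public e164.arpa tree for illustration; the real lookups go against private carrier trees, which is where the per-query charges come in:

                                import dns.resolver  # pip install dnspython

                                def enum_domain(e164_number: str) -> str:
                                    """Turn +15551234567 into 7.6.5.4.3.2.1.5.5.5.1.e164.arpa."""
                                    digits = e164_number.lstrip("+")
                                    return ".".join(reversed(digits)) + ".e164.arpa"

                                def naptr_lookup(e164_number: str):
                                    """Query NAPTR records for a phone number and print what comes back."""
                                    name = enum_domain(e164_number)
                                    try:
                                        answers = dns.resolver.resolve(name, "NAPTR")
                                    except dns.resolver.NXDOMAIN:
                                        print("no NAPTR records for", name)
                                        return
                                    for record in answers:
                                        # order/preference select a record, service says what it maps to
                                        # (e.g. E2U+sip), regexp rewrites the number into a URI.
                                        print(record.order, record.preference, record.service, record.regexp)

                                # naptr_lookup("+15551234567")   # made-up number; most won't resolve publicly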

                          2. 3

                            Hopefully, I can provide something unique here. I am a co-founder of Merit, which is a decentralized digital currency. Releasing decentralized software is interesting and unlike any other project that I’ve worked on before. Yes, I actually write a ton of code and most of the protocol-level changes. I also do the release process personally (we are a small team). We had a huge protocol-level change recently, and the release process went something like this.

                            1. Get community buy-in on any proposed protocol changes. Big miners are critical here.
                            2. Private tests on the regression test chain.
                            3. People review PR on Github and do private testing.
                            4. Announce to the community that the software is ready to go to the test network.
                            5. Deploy changes to the test network, which has dozens of machines run privately and many more run publicly.
                            6. The software is released on the test network, but the new feature isn’t turned on until a future date.
                            7. Monitor the test network and start testing it. This process can take a month or more.
                            8. Announce to the community that the new software is ready to be released on the main network and that the feature will be turned on at date X.
                            9. Merge to master.
                            10. Release binaries to the community and insist that everyone update ASAP.
                            11. Watch as the new feature turns on and monitor for issues. Issue patch releases for any problems.

                            That said, all of the above can only be done with a relatively small community (compared to, say, Bitcoin). The important members, who control a lot of the mining power on the network, must agree on the changes.

                            Other approaches we will take in the future would use a signaling mechanism: when it looks like a sufficient majority are signaling support for a feature, it gets turned on.
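
                            Mechanically that would look a lot like Bitcoin’s version-bits (BIP9) approach: each block sets a bit if its miner supports the feature, and the feature locks in once enough blocks in a window signal it. A toy sketch of the counting rule (the window size and threshold below are made-up numbers, not Merit’s actual parameters):

                              WINDOW = 2016       # blocks per signaling window (made-up value)
                              THRESHOLD = 0.95    # fraction of blocks that must signal (made-up value)

                              def feature_activates(window_blocks, feature_bit):
                                  """window_blocks: per-block version integers for one full window.
                                  The feature locks in if enough blocks set the given bit."""
                                  assert len(window_blocks) == WINDOW
                                  signaling = sum(1 for version in window_blocks if version & (1 << feature_bit))
                                  return signaling / WINDOW >= THRESHOLD

                              # 1935 of 2016 blocks (~96%) signal bit 3, so the feature locks in.
                              blocks = [1 << 3] * 1935 + [0] * 81
                              print(feature_activates(blocks, feature_bit=3))   # True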

                            This is the most difficult kind of deployment I have ever done. Decentralization makes everything harder here.

                            1. 3

                              for my tiny eslint plugin, I do this:

                              • Check everything locally:
                                • npm run test
                                • npm run lint
                              • Bump version in package.json
                              • git add
                              • git commit
                              • git push
                              • wait for Travis results (to see how the other supported Node.js versions do)
                              • npm publish
                              1. 3

                                For codebases with a concrete version, e.g. libraries or binaries shipped to customers: git tag -a #.#.# && git push --tags, then CI picks it up and runs the normal checks and tests followed by deployment to our artifact hosting system or our content delivery system.

                                For codebases that are continuously delivered, a new release is built on every master merge. After successful checks and tests, CI builds a Docker image and pushes it to our internal artifact hosting system as well as Amazon ECR. Right now, it’s a manual step to go log in to ECS and stop the running task for the service that was just released. We’re looking into ways to automate it (and I’d love some suggestions).
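
                                One direction we’re looking at is just calling the ECS API instead of clicking around: if the task definition points at a moving image tag, forcing a new deployment makes the service pull the fresh image and drain the old tasks. A sketch with boto3 (cluster and service names are placeholders):

                                  import boto3  # assumes AWS credentials are configured in the environment

                                  ecs = boto3.client("ecs")

                                  def roll_service(cluster: str, service: str):
                                      """Ask ECS to start new tasks for the service and drain the old ones."""
                                      ecs.update_service(
                                          cluster=cluster,
                                          service=service,
                                          forceNewDeployment=True,
                                      )

                                  # roll_service("prod-cluster", "my-service")   # placeholder names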

                                1. 5

                                  Painful.

                                  1. 4

                                    So, I work on two projects at work. One is an agent, part of a distributed system, that runs on several servers and provides access to some hardware connected to those servers. We maintain some of those servers; others are run by users. The other project is a library, used by the first project.

                                    The agent system is released once every sprint, and our sprints take 2 weeks. On the Tuesday of the second week, we ‘close’ the release: nothing merged after this will get into the release, unless it’s a bugfix. Then, for the next 3 days, we run manual tests on the whole system (of which the agent is a part). If errors are found, we merge the fixes. We use a sort of trunk-based development, so if we find a bug, we’ll create a release branch and cherry-pick the fix there. Once we’ve run all the tests, we manually generate a zip package (which involves creating a fresh clone, copying a bunch of files around, and zipping the resulting folder) and a) manually deploy it to the servers we maintain, and b) upload it to an internal Confluence page.

                                    It’s very manual and very error-prone. We’re finally automating some parts of it, but it’s not done yet. The system is written in Python, so it could be a simple pip package, but for permission reasons (not everyone is allowed to run it), we’re not there yet.

                                    The library is kinda worse. First, it has a binary dependency that lives in separate repositories, so we have to make sure we keep the versions synced. It’s also a Python project, but also not packaged as a pip package. The way it’s released is: we compile the binary dependency and copy it over to the development repo. We then commit and merge the release in the development branch. Then we copy the changes, MANUALLY, to a “release” repository, commit them there, and merge. The users get updates from the release repository, and we work in the development repository. I tried to sync the repositories’ histories, to simplify this freaking mess, but there are manually applied changes in the release repository that make it impossible (or nearly) to just squash and push the changes from the development repo to the release repo.

                                    I recently wrote a small makefile to automate some of this terrible process, and my plan is to bring all this crap together in a single repo, split the library into two parts, and package both of them as proper pip-installable packages, but we’re a fairly long way from that, yet.
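
                                    The “proper pip-installable package” part isn’t much code in itself; a minimal sketch of the kind of setup.py I have in mind (the project name and paths are placeholders):

                                      # setup.py (sketch): placeholder name, version, and package data.
                                      from setuptools import setup, find_packages

                                      setup(
                                          name="phone-comm-lib",                  # placeholder name
                                          version="1.0.0",
                                          packages=find_packages(exclude=["tests*"]),
                                          python_requires=">=3.6",
                                          # The binary dependency would ship as package data instead of
                                          # being copied between repos by hand:
                                          package_data={"phone_comm_lib": ["bin/*"]},
                                          install_requires=[],                    # fill in real runtime deps
                                      )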

                                    Tell me that ain’t painful.

                                    1. 3

                                      Oh, also important to note: the agent system has some automated tests that are supposed to run for every pushed commit and give it a -1 code review if they fail. That’s currently broken, and because our PO is a , we haven’t had time to fix it yet. When it used to run, it was a good thing, but we don’t have a lot of coverage, and the main server (which controls all the agent servers) has almost no automated tests, so there’s that.

                                      The library has some unit tests, but it’s a library to communicate with smartphones, so there’s only so much testing you can do without actually connecting to a real phone. We do have a separate repo with integration tests that use real hardware, but it’s completely broken and I haven’t had time to fix it yet. So right now the reality is: I run the unit tests before a release, run a basic connection test, and hope for the best =/ Our release cycle is pretty tiny, one week, and we have released a worrying number of bugs into production in the last few months. I raised that with our manager and got the usual “Yeah, we should worry about that” response that will likely turn into no actual action.

                                      1. 2

                                        Thanks for elaborating. I think that managers can be motivated by costs sometimes. If you add up the time it takes to do all the manual steps, say, over a month, then the effort required for automation might look more attractive. Maybe you could show that it will pay off within 6 months or a year.

                                    2. 7

                                      Your comment doesn’t answer the OP’s questions and doesn’t contribute anything to the discussion. Please write something useful instead. I’m disappointed that several people thought this one-word comment should be upvoted.

                                      1. 2

                                        It’s just a joke…

                                        1. 4

                                          It would be better if lobste.rs didn’t devolve into obvious one-word jokes. There’s already Reddit.

                                          1. 1

                                            And as with all jokes, it has some reality to it =/

                                        2. 3

                                          Please don’t post snarky, unsubstantial comments like this. They’re the candy of forums: delightful for a sweet moment and ruinous in quantity.

                                          1. 3

                                            I don’t want to sound arrogant, but just stating ‘Painful.’ doesn’t seem to be helpful imho… Care to explain why it’s painful and what actions you have taken/will take to make it less painful?