1. 6

    They’ve blogged about this a bit in the past, and the answer isn’t all that straightforward. On the configuration side, they sync the configuration to the host from their control plane (the source) so that their control plane servers don’t need to be involved in each request. The host side is much more complicated and OS-dependent. On Linux it depends heavily on how the individual distro configures DNS resolution. On Windows, macOS, and Linux with systemd-resolved, the resolver supports a routing table for DNS queries and can send them to different servers based on a few different criteria.
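
    As a concrete sketch of that routing behavior on Linux, systemd-resolved lets you pin a domain to a per-interface resolver. This is a hypothetical example (the interface name, domain, and resolver address are all placeholders), not Tailscale’s actual configuration:

    ```ini
    # /etc/systemd/network/50-vpn0.network (hypothetical)
    [Match]
    Name=vpn0

    [Network]
    # Queries for *.corp.example are routed to this resolver;
    # every other domain keeps using the system's default DNS servers.
    DNS=10.0.0.53
    Domains=~corp.example
    ```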

    1. 3

      While it’s really good to see Tailscale investing in maintaining technical accuracy and readability in their blog, they’ve unfortunately fallen into the same trap as their predecessors.

      Of course, we think we’re more right than others, but the others think the same about themselves, and Debian resolvconf refuses to pick a winner.

      and later

      However, as Tailscale we actually want this behavior, so we use it to set DNS configuration when we can:

      No, you don’t want this behavior. There is no reason for Tailscale to be the authority of DNS on machines where Tailscale is deployed and it should not be handling the forwarding of non-Tailscale queries. A user of dnsmasq or systemd-resolved or similarly capable local DNS resolver should be able to specify which subdomains they want to resolve using Tailscale’s DNS. Should the UI for this tooling be improved? Absolutely. Should the Tailscale stack be where it happens? Certainly not. Multiple VPNs or other overlay networks could exist on the same machine and Tailscale shouldn’t be the one owning edge DNS routing.

      At this point, one wonders why none of the giants have tried to fix the real issue here:

      /etc/resolv.conf does not have support for routing DNS based on the domain name

      1. 3

        Even if you solved it you’d have to wait years for your code to end up in Debian or Red Hat and there’d be a good chance your fix was never accepted widely enough to rely on.

        1. 3

          Is resolv a good place for this though? If we can achieve subdomain DNS server routing with a single line in dnsmasq, is it worth trying to update an old, underspecified config file?
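
          To illustrate, that single dnsmasq line is the server=/domain/address form (the domain and resolver address here are placeholders):

          ```ini
          # dnsmasq.conf: send queries for *.corp.example (and only those)
          # to 10.0.0.53; everything else goes to the normal upstreams.
          server=/corp.example/10.0.0.53
          ```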

          1. 1

            Not sure. I think reasonable people could disagree on that, especially when it comes to embedded/containers. While I’m personally in the dnsmasq-everywhere camp, we could do better than both options.

            1. 5

              I fear that at this point, if someone made a PR to glibc and other such libraries/OSes with such an improvement, along with a distribution-agnostic specification for it, you might be able to avoid the XKCD Standards problem; but I fear the pushback is going to be along the lines of “just use dnsmasq/systemd-resolved/libfoobang” or whatever. If you want to champion such a thing then I’d be more than happy to use it, but I wouldn’t want to do it myself. The current state of the world is kinda painful, yes, but at least it works well enough to bootstrap more elaborate mechanisms.

      1. 6

        What do they mean by “microVM”? The marketing docs are pretty short of details about this and googling returns links about programming language VMs.

        1. 27

          A microVM is similar to a VM but does not boot firmware (UEFI or BIOS) and uses a much smaller device model (the set of virtualized devices available to the VM). Otherwise a microVM provides all the same containment features as a VM. In the case of Firecracker there are only 4 emulated devices: virtio-net, virtio-block, a serial console, and a 1-button keyboard controller used only to stop the microVM. Also, the kernel is started by executing Linux as an ELF64 executable, not bootstrapped by firmware.
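
          To make that concrete, here’s a minimal sketch of a Firecracker config file (the kernel and rootfs paths are placeholders). Note that the kernel is an uncompressed vmlinux ELF rather than a bootable disk image:

          ```json
          {
            "boot-source": {
              "kernel_image_path": "vmlinux",
              "boot_args": "console=ttyS0 reboot=k panic=1"
            },
            "drives": [
              {
                "drive_id": "rootfs",
                "path_on_host": "rootfs.ext4",
                "is_root_device": true,
                "is_read_only": false
              }
            ],
            "machine-config": {
              "vcpu_count": 1,
              "mem_size_mib": 128
            }
          }
          ```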

          1. 2

            Thank you! That helps clear things up.

        1. 9

          We are making Firecracker open source because it provides a meaningfully different approach to security for running containers.

          Why would I run containers inside Firecracker micro VMs, as opposed to just deploying my software directly into the VM? Is the assumption that I’m using containers already for (eg) local development and testing?

          1. 16

            Firecracker is solving the problem of multi-tenant container density while maintaining the security boundary of a VM. If you’re entirely running first-party, trusted workloads and are satisfied with them all sharing a single kernel and using Linux security features like cgroups, SELinux, and seccomp, then Firecracker may not be the best answer. If you’re running workloads from customers (as Lambda does), desire stronger isolation than those technologies provide, or want defense in depth, then Firecracker makes a lot of sense. It can also make sense if you need to run a mix of different Linux kernel versions for your containers and don’t want to spend a whole bare-metal host on each one.

            1. 2

              Thanks. I was thinking about this in the context of the node / npm vulnerabilities that were also being discussed yesterday. I was imagining using these microVMs to (eg) contain node applications for security, without having to package the application up into a container.

              1. 2

                (disclaimer: I work for Amazon and specifically work on the integration between the Firecracker VMM and container software)

                Multi-tenant is a big use-case, but so is any workload where there is at least some untrusted code running. Firecracker helps to enable workloads where some third-party, untrusted code is expected to cooperate in a larger system.

                In case that’s too abstract, think of a situation where a third-party component handles some aspect of data processing, but should not have access to the rest of the resources that are present in your application. Firecracker helps you establish a hypervisor-based boundary (including a separate kernel) between the third-party component and your code.

              2. 4

                As far as I can tell “container” is about supporting a specific packaging format, OCI (Open Container Initiative). You can just deploy your software directly. In fact, I think there is no “container” support at the moment. To quote:

                We are working to make Firecracker integrate naturally with the container ecosystem, with the goal to provide seamless integration in the future.

                1. 10

                  (disclaimer: I work for Amazon and specifically work on the integration between the Firecracker VMM and container software)

                  “Container” is about the ecosystem of container-related software, including OCI images, CNI plugins for networking, and so forth. We’ve open-sourced a prototype that integrates the Firecracker VMM with containerd here, and plan to continue to develop that prototype into something a bit more functional than it is today.

              1. 2

                Stow is a really neat tool and I use it for managing my ~/.local tree which contains locally built software packages. However, for dotfiles I just check my whole home directory into a git repository and add ignore rules for files and directories that don’t need to be versioned. It’s pretty low complexity and has the added benefit that you can spot config drift and new dotfiles (which you may or may not want to version) very quickly. I’d very much recommend that over using stow to manage dotfiles.
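
                A sketch of what those ignore rules can look like (the paths are just examples of things you’d typically not version):

                ```gitignore
                # ~/.gitignore for a home-directory repo: hide bulky or
                # machine-specific paths; anything not listed shows up in
                # `git status`, which is how drift and new dotfiles surface.
                .cache/
                .local/share/
                Downloads/
                *.log
                ```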

                1. 5

                  Title is slightly wrong. You can boot it but you can’t install it because the OS is blocked from seeing the internal storage.

                  1. 15

                    I don’t think “blocked from seeing the internal storage” is quite the correct characterization. The T2 chip is acting as an SSD controller; I bet that if somebody takes the time to write a T2 driver for Linux, everything will work just fine. The difficulty there will likely be that there is no datasheet available for the chip, so the driver will have to be reverse-engineered from macOS, which is certainly not trivial.

                    1. 5

                      This has shades of the “Lenovo is blocking Linux support” “incident” where Lenovo just forced the storage controller into a RAID mode Linux didn’t have a driver for.

                      1. 2

                        At least from what the system report tool says, the drive appears as an NVMe SSD and just an iteration on the one from previous generations (AP0512J vs AP0512M in the 2018 Air). So it might just work with the Linux NVMe drivers once there’s a working UEFI shim that’s trusted. At that point this tutorial seems plausible.

                        1. 3

                          Trust is not an issue because secure boot can be completely disabled.

                          As the article mentions, people who tried live USBs found that the internal storage is not recognized. So it looks like the T2 is indeed acting as an SSD controller. (And of course macOS would report the actual underlying SSD even if there is no direct connection to it; the T2 could be reporting that info to the OS.)

                      2. 8

                        The difficulty there will likely be that there is no datasheet available for the chip

                        Unless they completely and utterly butchered the initialization, no amount of datasheets will save you. From the T2 documentation:

                        By default, Mac computers supporting secure boot only trust content signed by Apple. However, in order to improve the security of Boot Camp installations, support for secure booting Windows is also provided. The UEFI firmware includes a copy of the Microsoft Windows Production CA 2011 certificate used to authenticate Microsoft bootloaders.

                        NOTE: There is currently no trust provided for the Microsoft Corporation UEFI CA 2011, which would allow verification of code signed by Microsoft partners. This UEFI CA is commonly used to verify the authenticity of bootloaders for other operating systems such as Linux variants.

                        To bypass the check of the cryptographic signature, you’d probably have to find some kind of exploitable vulnerability in the verification code (or even earlier in the boot process so that you get code execution in the bootloader before the actual check).

                        1. 8

                          As the article says, you can disable the T2 Secure Boot so the code signature verification is not the problem at that point. The problem then is that the T2 acts as the SSD controller, and nobody has taught Linux yet how to talk to a T2 chip. The article incorrectly conflates the two issues.

                          1. 5

                            Doesn’t look like it’s conflating them. You might have to scroll down further :) but there’s a screenshot of the Startup Security Utility and this text:

                            However, reports have come in that even with it disabled, users are still unable to boot a Linux OS as the hardware won’t recognize the internal storage device. Using the External Boot option (pictured above), you may be able to run Linux from a Live USB, but that certainly defeats the purpose of having an expensive machine with bleeding-edge hardware.

                          2. 2

                            Secure boot can be disabled. Then the machine will boot anything you tell it to boot, bringing the security in line with machines predating the T2.

                            Source: I tried it out on my iMac pro which is a T2 machine.

                            1. 1

                              edit: mis-read that. Yeah, until they add partner support you’re probably pretty stuck. Although somebody like Red Hat or Canonical, who have relationships with Microsoft, might be able to get them to cross-sign their shim to support booting on the new Air. Either that or we’re stuck waiting for Apple to support the UEFI CA.

                        1. 69

                          Fastmail. They are trustworthy, quick to respond to service requests, and rock solid. I can count the number of outages in the past ~10 years on one hand.

                          1. 18

                            +1 for Fastmail. I’ve been using them for several years now and they’re very reliable, have a really solid web UI, and, from what I can tell, a solid position on security. They also contribute to moving the state of internet mail forward by improving Cyrus and contributing to RFCs. All in all I’d highly recommend them.

                            1. 13

                              They also contribute to moving the state of internet mail forward by improving Cyrus and contributing to RFCs.

                              That’s another good point: they are by all accounts a solid technical citizen, contributing back and moving the state of the art forward. I like to reward good behaviour when I spend my money, and it’s nice to be able to do that and get top of the line service, to boot.

                            2. 14

                              I also switched from Gmail to Fastmail.

                              The funny thing is that for the amount of press that Gmail received/receives for being “fast”, once you switch to Fastmail, you realize that Gmail is actually very slow. The amount of bloat and feature-creep they’ve introduced is fascinating.

                              1. 3

                                You’re talking about the web interface or the speed at which the mail is sent?

                                1. 1

                                  The web interface.

                                  1. 2

                                    I just use thunderbird (and k9 on mobile). I don’t see why you’d ever use a web interface for email when a standalone client is so much nicer to use.

                                    1. 1

                                      I’m on a desktop client too (Evolution). Just pointing out the advantage of Fastmail over Gmail. :)

                              2. 9

                                Love Fastmail. I only wish more tools had first class CalDAV/CardDAV support. When I switched over, I was genuinely surprised how pervasive it’s become to slap on Google account sync and call it a day, even in FOSS. Aside from the built-in macOS/iOS apps, most solutions involve fussing with URLs and 3rd party plugins, if it’s supported at all.

                                1. 1

                                  Fastmail has a link generator for CalDAV, so it’s super easy to get the right URLs. I do agree about 3rd-party plugins; it’s annoying to have to install add-ons for standard, open-source protocols…

                                2. 7

                                  It was the best one I found too, overall. I don’t know about trustworthy, though, given they’re in a Five Eyes country that expands police and spy powers every year.

                                  Maybe trustworthy from threats other than them, though. I liked them for that.

                                  1. 7

                                    Yeah, I’m not concerned about state level actors, or more properly, I don’t lose sleep over them because for me and my threat model, there’s simply nothing to be done.

                                    1. 4

                                      I’m not worried about the state spying on me, I’m worried about the apparatus the state builds to spy on me being misused by service provider employees and random hackers.

                                      1. 1

                                        If those are your concerns, I’d probably recommend using PGP.

                                      2. 3

                                        That will be most folks, too. Which makes it a really niche concern.

                                        1. 2

                                          Maybe it oughtn’t be niche, but it’s pretty far down my list of practical concerns.

                                    2. 5

                                      I use Fastmail as well, and became a customer by way of pobox.com acquisition.

                                      I’ll have to add, this was about the only time I can ever recall that a service I use was acquired by another company and I was actually fine with it, if not a bit pleased.

                                      My thinking was along the lines of “well, the upstream has purchased one of the biggest users of their tools, can’t be bad.”

                                      I’ve not had any noticeable difference in the level of service provided, technically or socially, except the time difference to Australia is something to keep in mind.

                                      I do hope that no one here in the US lost their jobs because of the acquisition, however.

                                      1. 3

                                        I do hope that no one here in the US lost their jobs because of the acquisition, however.

                                        Nope! We’ve hired a bunch more people in both offices, and the previous Pobox management are now C-level execs. We’re pretty sure the acquisition has been a win for just about everyone involved :)

                                      2. 5

                                        I can also recommend it, especially due to their adherence to web standards. After 10+ years of GMail, the only functioning client had been Thunderbird, which too often felt too heavyweight. Since switching to Fastmail, I’ve been having a far better experience with 3rd-party clients, and a better mail experience in general (probably also because I left a lot of spam behind me).

                                        1. 4

                                          I second that. I was searching for a serious e-mail provider for a catch-all email, calendar and contacts.

                                          I had trouble setting up my CardDAV autodiscovery DNS configuration and they helped me without treating me like a “dumb” client. Serious, clear, and direct. The most efficient support I’ve encountered, by far.

                                          It’s paid, and I’m on the second plan ($5/month), and I think that’s perfectly fair, considering that, firstly, e-mail infrastructure is costly, and secondly, their service is just plain awesome.

                                          1. 5

                                            They’ve recently added the ability to automatically set up iOS devices with all of their services when you create a new OTP. I didn’t know that I needed this, but it’s a wonderful little bonus. It’s stuff like that that keeps me happily sending them money, and will as long as they keep doing such a good job.

                                            1. 1

                                              I did not know about such a thing, since I’m not an iOS user, but it sure sounds nice!

                                          2. 4

                                            Do you know if they store the emails in plaintext server-side?

                                            1. 2

                                              It’s a good question. I don’t know, and would like to. I’ll shoot them a mail.

                                              1. 1

                                                Their help page on the matter isn’t clear, although it does describe a lot of things that seem pretty good. Now you’ve got me wondering. (Happy Fastmail user here, and I even convinced my wife to move to it from GMail!)

                                                edit: It does sound like it’s plain text but you could read it a couple of ways.

                                                All your data is stored on encrypted disk volumes, including backups. We believe this level of protection strikes the correct balance between confidentiality and availability.

                                                1. 4

                                                  Encrypted at rest (encrypted block devices), but cleartext available to the application because we need it for a bunch of things, mostly search, also previews and other bits and pieces. Of course, the applications that hit the on-disk files have their own protections.

                                                  1. 1

                                                    I’d imagine their disks are encrypted as a whole - but not using per-mailbox encryption based on keys derived from individual user passwords.

                                                    However, even if such claims are made you can’t verify them, and you shouldn’t trust a company’s word on it. I’d recommend PGP if that is a concern.

                                                    1. 1

                                                      using per-mailbox encryption based on keys derived from individual user passwords.

                                                      If this is a feature you’re looking for in a hosted solution, Protonmail is probably your best option.

                                                      However, even if such claims are made you can’t verify that.

                                                      Up to a point you can, Protonmail has released their webmail client as open source. Of course, with today’s JavaScript ecosystem it’ll be very hard to verify that the JavaScript code you are running actually corresponds to that code. Also, you can’t verify they’re not secretly storing a plaintext copy of inbound mails before encryption. But down that path lies madness, or self-hosting.

                                                      1. 1

                                                        But down that path lies madness, or self-hosting.

                                                        And the desperate hope that your correspondent also is sufficiently paranoid.

                                                2. 3

                                                  +1 for Fastmail. Switched recently after self-hosting (well, the last several years at a friend’s) since the dial-up days and I’m satisfied.

                                                  1. 3

                                                    Another Fastmail user here. I switched from GMail and my only regret is that I didn’t switch sooner.

                                                    I don’t think there are any workflow advantages, but I appreciate that they don’t track me, and I trust them more than Google.

                                                    I have the $30 per year subscription.

                                                    1. 3

                                                      One of the other things I want to highlight is reliability/availability. Making sure I don’t miss important emails is even more important than privacy to me. Newer, smaller, privacy-focused sites might not have as much experience keeping a site up or getting all your mail in reliably.

                                                      Fastmail has been around for quite a while with positive feedback from everyone I’ve seen. So they might perform better than others at not missing/losing email and staying available. Just speculating here based on what their customers say.

                                                      1. 3

                                                        SMTP actually tolerates outages pretty well… I’ve had my self-hosted server down for a couple of days, and everyone’s servers resent me everything when I fixed it.

                                                        1. 1

                                                          Haha. Good to know.

                                                      2. 1

                                                        What service do you use for Calendars and such?

                                                        1. 4

                                                          I use FastMail for calendars and contacts. I actually don’t use it for e-mail much since my ISP is pretty ok.

                                                          For Android I switched from CalDAV-Sync + CardDAV-Sync to DAVdroid. Both work but the latter is easier to configure (by way of having less config options).

                                                          I tried self-hosting Radicale for a while, but for the time I had to put into it I’d rather pay FastMail $30 per year.

                                                          1. 1

                                                            Fastmail! We have a family email account and shared calendars and reminders and suchlike, and I have a personal account as well.

                                                        1. 7

                                                          I’d also recommend pass. It’s just a bash script that manages GPG-encrypted files in a Git repository, so it should work anywhere bash, git, and gpg do. It uses git to sync, so you can use SSH or any other transport git supports. There’s also a pretty decent open-source iOS app.

                                                          1. 4

                                                            I’d highly recommend MoinMoin. I’ve been running Moin for over a decade as my personal wiki and it works spectacularly well.

                                                            When I was looking for a wiki, the three primary considerations were cost of maintenance (must not require constant maintenance, be easy to set up, and be easy to back up), ease of use (from the UI), and extensibility (so that I could work my way around any deficiencies in the software). Moin did well on all of these criteria. It’s a single standalone app that stores everything as files on disk, no db server. The file layout is logical and easy to browse with ls if you’re so inclined, there are no proprietary formats, and it’s easy to back up with a simple rsync or tar command. It’s got a decent web UI and a lot of macros that make information organization easy. It’s also got an API that works well, and for which I’ve built integration with Vim. It’s also written in Python, which makes it easy to install in a virtualenv (no root access necessary) for isolation, and it’s designed to be extended (though this can range from very easy to kinda painful depending on what you’re extending). It can also use the Python Xapian extension for fast full-text indexing.
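
                                                            Because everything is plain files on disk, the backup really can be a one-liner. A sketch (the wiki path here is a stand-in created just for the demo):

                                                            ```shell
                                                            # Stand-in for a real Moin data directory:
                                                            mkdir -p /tmp/demo-wiki/data/pages/FrontPage
                                                            echo "wiki content" > /tmp/demo-wiki/data/pages/FrontPage/current

                                                            # The actual backup step: one tar command over the wiki tree.
                                                            tar czf /tmp/moin-backup.tar.gz -C /tmp demo-wiki
                                                            ```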

                                                            It’s also used by quite a few open source projects like Debian, Python, and GNOME to name a few.

                                                            1. 4

                                                              AWS CodeBuild allows you to specify your buildspec either within the console or using CloudFormation, so you can store all of that configuration completely outside of the repository. It’s not technically a free product, but if you use 100 build minutes or less a month you’ll fall into the free tier; past that it’s still pretty cheap at $0.30 per build-hour. One other nice feature is that, while they provide build images, you can bring your own if you have very specific toolchain or version needs.
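
                                                              For context, a minimal buildspec looks roughly like this (the version and phase keys follow the buildspec schema; the commands and artifact glob are placeholders):

                                                              ```yaml
                                                              version: 0.2

                                                              phases:
                                                                install:
                                                                  commands:
                                                                    - echo "install toolchain here"   # placeholder
                                                                build:
                                                                  commands:
                                                                    - echo "run the real build here"  # placeholder

                                                              artifacts:
                                                                files:
                                                                  - output/**/*                       # placeholder artifact glob
                                                              ```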

                                                              1. 2

                                                                It requires connecting to the SCM on the following terms:


                                                                Repositories: Public and private

                                                                This application will be able to read and write all public and private repository data.

                                                                Organizations and teams: Read-only access

                                                                This application will be able to read your organization and team membership.


                                                                AWS CodeBuild … is requesting access to the following:

                                                                • Read your account information
                                                                • Read your repositories

                                                                so it’s much saner here.

                                                                But there is also a warning:

                                                                AWS CodePipeline does not support Bitbucket.

                                                                AWS CodePipeline cannot build source code stored in Bitbucket. If you want AWS CodePipeline to use this build project, choose a different source provider.

                                                                Not sure yet what exactly that means, i.e. how CodeBuild and CodePipeline are related.

                                                                1. 2

                                                                  By default only Ubuntu 14.04 is available:

                                                                  [Container] 2017/12/10 00:53:59 Running command uname -a
                                                                  Linux 9d4937a7a169 4.9.58-18.55.amzn1.x86_64 #1 SMP Thu Nov 2 04:38:47 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
                                                                  [Container] 2017/12/10 00:53:59 Running command lsb_release -drc
                                                                  Description:	Ubuntu 14.04.5 LTS
                                                                  Release:	14.04
                                                                  Codename:	trusty
                                                                  [Container] 2017/12/10 00:54:00 Running command gcc -v
                                                                  Using built-in specs.
                                                                  Target: x86_64-linux-gnu
                                                                  Configured with: ../src/configure -v --with-pkgversion='Ubuntu 4.8.4-2ubuntu1~14.04.3' --with-bugurl=file:///usr/share/doc/gcc-4.8/README.Bugs --enable-languages=c,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.8 --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.8 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --disable-libmudflap --enable-plugin --with-system-zlib --disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-4.8-amd64/jre --enable-java-home --with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-4.8-amd64 --with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-4.8-amd64 --with-arch-directory=amd64 --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --enable-objc-gc --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
                                                                  Thread model: posix
                                                                  gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3)

                                                                  The whole “build” (running these commands) took around 7 seconds, but most of the time was spent before these commands were called:

                                                                  [Container] 2017/12/10 00:53:53 Waiting for agent ping
                                                                  [Container] 2017/12/10 00:53:54 Waiting for DOWNLOAD_SOURCE
                                                                  [Container] 2017/12/10 00:53:58 Phase is DOWNLOAD_SOURCE
                                                                  [Container] 2017/12/10 00:53:59 CODEBUILD_SRC_DIR=/codebuild/output/src906734987/src/...
                                                                  [Container] 2017/12/10 00:53:59 YAML location is /codebuild/readonly/buildspec.yml
                                                                  [Container] 2017/12/10 00:53:59 Processing environment variables
                                                                  [Container] 2017/12/10 00:53:59 Moving to directory /codebuild/output/src906734987/src/...
                                                                  [Container] 2017/12/10 00:53:59 Registering with agent
                                                                  [Container] 2017/12/10 00:53:59 Phases found in YAML: 1

                                                                  Phase details:

                                                                  • SUBMITTED Succeeded
                                                                  • PROVISIONING Succeeded 24 secs
                                                                  • DOWNLOAD_SOURCE Succeeded 4 secs
                                                                  • INSTALL Succeeded
                                                                  • PRE_BUILD Succeeded
                                                                  • BUILD Succeeded 1 sec
                                                                  • POST_BUILD Succeeded
                                                                  • UPLOAD_ARTIFACTS Succeeded
                                                                  • FINALIZING Succeeded 3 secs

                                                                  But other Docker images can indeed be provided too. I’ll check that later.
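                                                                  For reference, the log line “Phases found in YAML: 1” suggests the build above was driven by a single-phase buildspec. Something like this minimal sketch (not the exact file from that build) would produce similar output:

                                                                  ```yaml
                                                                  # Minimal buildspec sketch -- a guess at the file behind the log above,
                                                                  # not the actual buildspec. One "build" phase matches "Phases found in YAML: 1".
                                                                  version: 0.2
                                                                  phases:
                                                                    build:
                                                                      commands:
                                                                        - lsb_release -a
                                                                        - gcc -v
                                                                  ```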

                                                                  1. 2

                                                                    All of the Dockerfiles for their provided containers are open-sourced on GitHub. I’ve used them as a jumping-off point for building custom images with good luck in the past.

                                                                    They’ve also announced that Windows support is forthcoming and that you can sign up for early access. I haven’t played with it, though, since I don’t have any projects that require Windows.

                                                                    1. 1

                                                                      Thanks for the links. It’s good to know exactly how their Docker images are created. Windows support is also a useful feature for cross-platform software.

                                                                  2. 2

                                                                    They also support zip files in S3, so if you had a use case that didn’t work with either GitHub or BitBucket you could just zip up your source and build it that way. The bit about CodePipeline only applies if you’re using CodePipeline to orchestrate a build and deployment workflow; you can also use CodeBuild standalone, which removes those limitations.
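                                                                    The S3 flow is basically a zip upload plus a build trigger. A sketch with the standard AWS CLI; the bucket and project names here are placeholders, not anything from this thread:

                                                                    ```shell
                                                                    # Package the working tree, skipping Git metadata.
                                                                    # "my-source-bucket" and "my-codebuild-project" are placeholder names.
                                                                    zip -r source.zip . -x '.git/*'

                                                                    # Upload the archive to the bucket the CodeBuild project reads from.
                                                                    aws s3 cp source.zip s3://my-source-bucket/source.zip

                                                                    # Kick off a build of the project configured with that S3 source.
                                                                    aws codebuild start-build --project-name my-codebuild-project
                                                                    ```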

                                                                    1. 1

                                                                      They do support S3, but it’s Amazon’s own service, so I couldn’t expect less from them. Jokes aside, I’m not an S3 user, but it may be a useful feature for some workflows out there in the wild. CodePipeline only allows one active pipeline per month in the free tier, which I presume could be a problem if you want to use it in more than one project.

                                                                      1. 2

                                                                        I’ve only used the S3 support a few times, but it comes in handy for really odd build workflows that are tough to model in a repository alone or that require some form of external input, such as updating Docker images based on changes to the dependent software within that container (following Linux package updates, etc.). I consider those flows to be a bit of a hack, though.

                                                                        Where I’ve found that CodePipeline really shines is when you need to deal with builds and complex promotions to testing environments and then eventually to production. That being said, I really only use it for a few of my side projects; the basic build, test, publish flow is easy enough to model in CodeBuild alone.

                                                                  3. 1

                                                                    Interesting! I didn’t know that AWS provided something like this. I can’t comment on CodeBuild pricing, as I’ve never compared prices for build services before, but 100 minutes should be enough for smaller projects without thousands of commits per month.