Threads for ricardbejarano

    1. 2

      This is super cool, thanks for sharing!

    2. 1

      What’s wrong with the built-in iCloud Keychain that I need to replace it with third-party software?

      What exactly does Secure Keyboard Entry do in the Terminal? Its Apple Support page doesn’t really explain it either.

      1. 3

        What exactly does Secure Keyboard Entry do in the Terminal?

        Same thing it does in xterm: prevents other applications from grabbing the keyboard and listening to input.

      2. 2

        Are you referring to “14. Use a password manager”? I interpreted this as, “This is what I use; use what you like.” I prefer a 3rd party password manager to iCloud keychain just because I use my credentials outside the Apple ecosystem.

        1. 1

          Author here, you are correct.

    3. 1

      I’m curious why Terragrunt wasn’t good enough. It also has templating capabilities and has been around for a long time now.

      1. 2

        Mostly because of the code injection capabilities.

        We inject >1200 AWS providers (accounts × regions) by default. Stacks also declares variables for you and injects the state backend for all stacks’ layers… It gives us a level of flexibility no other tool in the space does.
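
        As a rough illustration of what “injecting providers” can look like (a hypothetical sketch, not Stacks’ actual implementation; the account IDs, role name and file name are made up), here is one aliased AWS provider per (account, region) pair emitted as Terraform JSON:

          import json
          from itertools import product

          accounts = ["111111111111", "222222222222"]   # hypothetical account IDs
          regions = ["eu-west-1", "us-east-1"]          # hypothetical regions

          providers = [
              {
                  "alias": f"{account}_{region}",
                  "region": region,
                  "assume_role": {"role_arn": f"arn:aws:iam::{account}:role/deploy"},
              }
              for account, region in product(accounts, regions)
          ]

          # Terraform's JSON syntax accepts a list of provider blocks of the same type.
          with open("providers.tf.json", "w") as f:
              json.dump({"provider": {"aws": providers}}, f, indent=2)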

        Also, we already had tons of Terraform deployments without Terragrunt, so moving to it would have been a huge effort we didn’t want to make. Instead we wrote Stacks, which is backwards compatible (from Terraform’s POV, nothing changed).

        I mention Terragrunt at 8:00.

    4. 2

      I don’t see a future in defining infrastructure in configuration languages with templating bolted on top of them. CDK for Terraform has had solutions for almost all of the problems described in the related talk (besides the existing-code problem) for over a year now, and honestly, in my experience it is a huge improvement for managing infrastructure. Stacks have been in the tool for over two years now, and code reusability is amazing: writing a wrapper to get a nicer interface for a single resource is very viable, and usually great, with CDKTF, while the Terraform-native solution of modules is annoying enough to deal with that you don’t use it for fewer than a dozen resources. And you don’t have to deal with any templating annoyances, since you have a proper programming language at your disposal.

      Having a programming language to help you with abstraction is great, since there have been literal decades of work put into improving the abstraction capabilities of languages. Configuration languages have received no such work, since configuration usually needs to be very explicit. But configuring infrastructure is now essentially programming, and I think the language for such work should be a programming language, not a configuration one.

      1. 1

        Hi, author here, we used to use CDKTF actually!

        But we figured all it does is just generate JSON and dump it into a cdk.tf.json file, so we chose to get rid of it to improve performance. CDK uses jsii to transpile your Python code to JS, then spins up Node to run it, synths your code, and then comes back. This took ~10s vs. the few ms Stacks takes, so we stopped using it.
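
        For context, a minimal CDKTF Python program looks roughly like this (a sketch, not our old code); every run pays that jsii/Node round trip before the Terraform JSON shows up in the output directory:

          from constructs import Construct
          from cdktf import App, TerraformStack

          class ExampleStack(TerraformStack):
              def __init__(self, scope: Construct, id: str):
                  super().__init__(scope, id)
                  # resources would be declared here

          app = App()
          ExampleStack(app, "example")
          app.synth()  # emits the stack as Terraform JSON (cdk.tf.json) in the output dir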

        You can see Stacks as an opinionated CDKTF that, when used with the proper file structure, leads to an amazing dev experience (we only push resources to git; everything else is taken care of).

        1. 1

          Quick correction - CDK uses jsii to call JS code from whatever other language, instead of trying to convert it to JS, which would be way harder. You don’t even need to use their CLI to generate the JSON - just running the file with a few special environment variables works.

          While I do agree that it is slower, I think the benefits you gain from having a full-blown programming language are well worth it. You gain a fair bit of productivity, because you need fewer “devops” people when more regular developers can understand and work on the code, and you get the additional safeguards of a proper type system (e.g. passing a whole VPC object instead of its ID; AWS CDK has this built in, but Terraform doesn’t, because the underlying Terraform type system doesn’t carry that information, though you can build it yourself). You also gain a lot of extra capabilities, since you have a programming language under you: you can create new stacks programmatically, create truly re-usable templates, etc.

          One of the nicest things I’ve done with CDKTF was, in essence, a project infrastructure skeleton: for each project, a bunch of infrastructure managed with CDKTF would be spun up, managed out of decentralized repositories, but with all of the truly important code sitting in the same project-generator repo. Since it’s just a library you can install, each project would import it and use the main skeleton construct, plus other helper constructs, to build out its own infrastructure. This had previously been tried by templating plain Terraform files with Jinja2, but that was very brittle and hard to update in comparison.
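
          A minimal sketch of that skeleton idea (ProjectSkeleton and the environment names below are hypothetical, not the actual library): each project imports the shared construct and stamps out one stack per environment.

            from constructs import Construct
            from cdktf import App, TerraformStack

            class ProjectSkeleton(TerraformStack):
                """Hypothetical reusable skeleton shipped as a library from the generator repo."""
                def __init__(self, scope: Construct, id: str, environment: str):
                    super().__init__(scope, id)
                    self.environment = environment
                    # shared baseline resources (networking, IAM, backend, ...) would go here

            app = App()
            for env in ["dev", "staging", "prod"]:   # stacks created programmatically
                ProjectSkeleton(app, f"myproject-{env}", environment=env)
            app.synth()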

          1. 1

            But we do have a full-blown programming language: Stacks is written in Python.

            If you mean we could’ve used CDK for all our infrastructure code, that’s definitely possible if starting from scratch, but we had >5100 files to migrate from HCL to CDK.

            With Stacks we maintained the same “contract” with both our users and Terraform, so we didn’t have to change a thing. Our devs don’t have to learn a thing and Terraform state doesn’t freak out. We just made our code cleaner and more robust, DRY and free of drift.

            1. 1

              Yeah, that’s why I said “almost all problems”. I do understand the difficulty of migrating from HCL to CDK; I’ve done it myself a couple of times with relatively small projects, and it’s definitely a fair bit of work every time. I’ve seen that the last update did quite a lot of work on that, so it might be quicker now, but it’s still not ideal. Still, if you’re working on a new project, or you don’t have that much infrastructure already (I’d say fewer than ~100 resources), I think going with CDK is a much better choice than Stacks.

              1. 1

                Might be. Terragrunt is also a good choice if starting from scratch.

                All these tools are very good if you’re starting from zero; the problem with Terraform is when you want to migrate from one to another. This is mostly because of Terraform’s state.

                1. 1

                  Migrating state is definitely possible, even if quite manual and somewhat annoying in CDKTF. IMO there are opportunities to mostly automate the state migration, and I hope tools for that will soon be built into CDKTF.

    5. 5

      Please don’t copy the terrible overlay setup from Kubernetes YAML for infrastructure. I don’t see what this offers over standard Terraform commands/functionality.

      1. 1

        Hi, author here, I suggest you check out my talk at SREcon where I explain the problems it solves.

        1. 1

          Cool, I will check it out.

    6. 6

      I’d like to see a “requestor pays” container registry. The fee could cover the registry costs, and you could add a markup to fund/donate to the developer.

    7. 2

      Company: ThousandEyes (part of Cisco)

      Company site: https://www.thousandeyes.com/

      Position(s): pretty much everything, but I’m looking for SREs.

      Location: US, UK, Portugal (remote or on-site)

      Description: ThousandEyes is a digital experience monitoring tool used by many to monitor their online services’ performance. We’re looking for SREs on all our teams. This is by far the best place I’ve ever worked in. The culture is just awesome.

      Tech stack: AWS, Kubernetes, Terraform, Puppet, Prometheus, Grafana, Kibana…

      Compensation: pretty competitive I’d say.

      Contact: https://boards.greenhouse.io/thousandeyes/ or my email.

    8. 4

      I like TablePlus (paid). Not affiliated, just a user.

    9. 13

      I have a used Intel NUC that’s basically better than the RPi in every way except maybe power consumption.

      1. 6

        And price.

        I have one for Kodi, but HomeAssistant runs from a Pi. Wouldn’t mind an M.2 connector on the next generation, though.

        1. 2

          Price, yes, but not value.

      2. 2

        I’ve wanted to use NUCs, but it’s very hard to find any second-hand where I am (Ireland). Even eBay rarely has them as buy-it-now, and the ones that are tend to be at near-new prices. I’ve tried getting them through auctions, but have always missed out thanks to last-minute bids.

        1. 5

          I suggest you check for Chromebooks with a broken screen. You can replace the bootloader via Mr. Chromebox and then install Linux. Some Chromebooks have 4GB of memory and a USB 3.0 port, which may be enough for your storage needs if you are OK with an external drive. Make sure it comes with the external power adapter, though; many of those listed for sale do not.

      3. 1

        I hope the new Wall Street Canyon models are OK for consumers, because there’s no consumer line this generation. Like, I hope they’re priced OK, etc.

      4. 1

        Oh damn, are these anything like the HP EliteDesk…? It’s what I’m using as my main driver, but I didn’t know they were considered on par with the RPi…! Honestly, it’s run games great.

      5. 0

        Are they energy-efficient enough? I know that’s a very vague question, but to me the RPis seem to be more energy-efficient.

    10. 3

      Thanks for this! Great post.

    11. 21

      I appreciate that you’re trying to help here, but open source projects don’t just get maintainers by looking for random people on the internet. A new maintainer needs to be invested in the project, have an existing history of high-quality contributions, and be trusted to maintain the project’s vision, i.e. any potential new maintainers will already be known to the existing maintainers.

      1. 6

        You’re correct, but I don’t think it defeats OP’s purpose.

        There are many projects that I use (but have never contributed to) that I’d be willing to find the time to inherit if no one else would. I’m not saying the outgoing maintainer should hand it off to me straight away, but this serves as a plaza to put outgoing and incoming maintainers in touch with each other.

        1. 5

          There’s also https://adoptoposs.org for this specific purpose

      2. 1

        Right. I find it hard to work on or understand things I’m not interested in (or motivated to be), so I’m not sure how well this drive-by maintainer search could work.

        1. 4

          The idea for this list came from a Mastodon thread. Someone I follow was feeling burnt out about his project and was looking for someone to help him review submissions for the 512kb.club, to which I offered my help. I’m now reviewing multiple PRs a day for the project.

          seeking-maintainers.net is an experiment to see if there is demand for such a platform in other parts of the internet.

    12. 3

      CentOS 6? That’s EOL already…

      1. 2

        At the bottom, it says the roadmap was derived from a Reddit comment which was posted… 7 years ago

        1. 1

          I thought this looked familiar; I recall seeing it on Reddit many years ago.

          1. 1

            Yes, on r/sysadmin and r/homelab!

    13. 10

      This is a great article, with very valid points and well-researched decisions. That said:

      Cloud Agnostic

      This is cheating. Just because you switched from hosted Kubernetes (GKE) to self-managed Nomad doesn’t mean you can’t have self-managed K8s.

      Everything else is fine, I liked the article.

    14. 6

      It seems to me that if one is going to go that far off the beaten path (i.e. not just running “docker build”), then it would also be worth looking into Buildah, a flexible image build tool from the same group as Podman. Have you looked into Buildah yet? I haven’t yet used it in anger, but it looks interesting.

      1. 6

        +1000 for Buildah.

        No more dind crap in your CI.

        It lets you export your image in OCI format for, among other useful purposes, security scanning before pushing.

        Overall much better than Docker’s build. Highly recommend you try it.

        1. 3

          Added looking into it to my to-do list. Thanks for the suggestion, @mwcampbell and @ricardbejarano.

        2. 2

          I’m intrigued: what do you use for security scanning of the image?

          1. 4

            My (GitLab) CI for building container images is as follows:

            • Stage 1: lint Dockerfile with Hadolint.
            • Stage 2: perform static Dockerfile analysis with Trivy (in config mode) and TerraScan.
            • Stage 3: build with Buildah, export to a directory in the OCI format (buildah push myimage oci:./build; last time I checked, you can’t do this with the Docker CLI), and pass that as an artifact to the following stages.
            • Stage 4a: look for known vulns within the contents of the image using Trivy (this time in image mode) and Grype.
            • Stage 4b: I also use Syft to generate the list of software in the image, along with their version numbers. This has been useful more times than I can remember, for filing bug reports, comparing a working and a broken image, etc.
            • Stage 5: if all the above passed, grab the image back into Buildah (buildah pull oci:./build, can’t do this with Docker’s CLI either) and push it to a couple of registries.

            The tools in stage 2 pick up most of the “security bad practices”. The tools in stage 4 give me the list of known vulnerabilities in the image’s contents, along with their CVE, severity and whether there’s a fix in a newer release or not.

            Having two tools in both stages is useful because it increases coverage, as some tools pick up vulns that others don’t.

            Scanning before pushing lets me decide whether I want the new, surely-vulnerable image over the old one (which may or may not be vulnerable as well). I only perform this manual intervention for high and critical severities, though.
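
            A rough sketch of those stages as one script (written in Python rather than .gitlab-ci.yml for brevity; the image name, registry URL and exact tool flags are assumptions and may differ between tool versions):

              import subprocess

              def run(cmd):
                  print("+", " ".join(cmd))
                  subprocess.run(cmd, check=True)  # abort the pipeline on any failure

              # Stage 1: lint the Dockerfile.
              run(["hadolint", "Dockerfile"])

              # Stage 2: static analysis of the Dockerfile / IaC.
              run(["trivy", "config", "."])
              run(["terrascan", "scan"])

              # Stage 3: build and export to an OCI layout directory (./build).
              run(["buildah", "bud", "-t", "myimage", "."])
              run(["buildah", "push", "myimage", "oci:./build"])

              # Stage 4: scan the exported image and generate an SBOM
              # (Trivy's image mode would also run here; invocation omitted).
              run(["grype", "oci-dir:./build"])
              run(["syft", "oci-dir:./build"])

              # Stage 5: pull the OCI layout back into Buildah and push to the registries.
              image_id = subprocess.run(["buildah", "pull", "oci:./build"], check=True,
                                        capture_output=True, text=True).stdout.strip()
              run(["buildah", "push", image_id, "docker://registry.example.com/myimage:latest"])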

            1. 1

              Thanks for the response. What are your thoughts on https://github.com/quay/clair, which seems to replace both Grype and Trivy?

              1. 1

                I haven’t used it, can’t judge.

                Thanks for showing it to me.

        3. 1

          I’ve never used dind; I’ve only used Jenkins and GitHub Actions. Is that a common thing?

          1. 1

            IIRC GitHub Actions already has a Docker daemon accessible from within the CI container, so you’re already using Docker-in-Whatever on your builds.

            There are many problems with running the Docker daemon within the build container, and IMO it’s not “correct”.

            A container image is just a filesystem bundle. There’s no reason you need a daemon for building one.

      2. 4

        I have not looked at it, but my understanding is that Podman’s podman build is a wrapper around Buildah. So as a first pass I assume podman build has similar features. It does actually have at least one feature that docker build doesn’t, namely volume mounts during builds.

        1. 2

          If I remember correctly, the Buildah docs specify that while podman build is basically a wrapper around Buildah, it doesn’t expose Buildah’s full functionality, aiming instead to be a simple wrapper for people coming from Docker. I can’t recall which specific functionality was hidden from the user, but it was listed in the docs.

    15. 1

      That’s useful, well done. Thanks!

    16. 23

      I just bought a domain and use it. It also allows me to set up TLS via Let’s Encrypt without needing to add a root cert everywhere. IMHO it’s the perfect solution, and not that expensive or troublesome. I also have a 100% guarantee that there will be no conflicts.

      1. 3

        The only drawback some people may raise is the risk of domain name enumeration, where a would-be attacker could enumerate all devices and services on your network just by looking at public DNS.

        That said, I don’t think that’s really a problem.

        1. 12

          The only drawback some people may raise is the risk of domain name enumeration, where a would-be attacker could enumerate all devices and services on your network just by looking at public DNS.

          How? Just do local DNS resolution on the network using that domain. For example, you might have a public DNS entry for foobar.com, but have DNS for me.foobar.com, bazz.foobar.com, etc. only on your local network. Requests for those on your local network are serviced by your local network, and there’s no mention of them in public DNS. Am I missing something?

          1. 3

            That requires you to have a split-horizon DNS configuration. It’s pretty easy if you’re running your own DNS resolver, but most ISP-provided consumer routers don’t support it, so you’ll also need to be running your own DHCP server. You might be able to put in an SOA record that points to a LAN IP, but that will only work for devices running their own caching resolver.

            1. 2

              I have to have that anyway, because my modem/router does not support connecting to the WAN IP from the LAN. I can specify the DNS server I want to use through the modem, which I have avoided up to now because I’ve had trouble with dnsmasq (and/or the Wi-Fi drivers for the EEE PC laptop-server it’s running from), especially from the iPhone, but sporadically from the rest of the network too. I’ve actively intended to fix that soon for about a year now.

            2. 1

              I use a combination of split-horizon and hidden-primary DNS. No need for private IP ranges to be public.

          2. 2

            The context here is Let’s Encrypt TLS. If you don’t resolve the name externally, how do you pass ACME validation? Plus there’s the certificate transparency log.

            1. 1

              You can do ACME validation via DNS as well, so you get the ease of using externally valid SSL certs but can still restrict internal domains with split-horizon DNS.

              https://letsencrypt.org/docs/challenge-types/

              1. 2

                But that just moves the enumeration from foo.bar to _acme-challenge.foo.bar, right? Or am I missing something?

                1. 1

                  No, thinking about it more, I think you’re correct: you’d be subject to DNS enumeration, either from your DNS provider or from the certificate transparency logs, at least for the existence of the domains themselves. The information about which IPs point to which domain would remain within the internal network, though.

                  The exception here could be to use a wildcard certificate, which Let’s Encrypt just started supporting last year.

    17. 5

      Welcome to Lobsters! A couple of community etiquette notes:

      • You don’t need to tell us you’re the author, it says “authored by” under the link 🙂
      • New users (accounts under 90 days) can’t use the ask tag. This case is a little fuzzy because you’re also submitting a story, but that story is basically an ask, so I think it falls slightly into the “wait until you’re past 90 days” bin.
      1. 4

        As long as people respond in the comments here (instead of via private email only the OP can see), I think this can spark an insightful conversation. So I vote to keep it.

      2. 2

        I’m new here too. Is there anywhere I can see a full list of things like “you can use the ask tag after 90 days”?

        1. 1

          The only way I know of is to look through the source code.

          1. 1

            Thanks, found the relevant code: looks like I need 50 karma to invite other people https://github.com/lobsters/lobsters/blob/6faa5d37d2fdf8e4d1accbdcd4ffbe28c1db7088/app/models/user.rb#L137

      3. 1

        You don’t need to tell us you’re the author, it says “authored by” under the link 🙂

        Oh, I wasn’t aware of the meaning of “authored by” vs. “via”. I’m pretty new here and wasn’t paying attention to it before.

        so I think it falls slightly under the “wait until you’re past 90 days before doing” bin.

        Oh, OK, fair enough. Do I delete it? Or does it just get removed?

    18. 3

      This is an excellent article. Thanks for sharing!

    19. 4

      Just to offer an alternative, I use “Dark Reader” [1] for Chrome, which tries to automatically apply a dark theme to websites. It’s not great for most websites (so I keep it as an opt-in per site), but it does a really good job with simple sites like Lobsters.

      [1] https://chrome.google.com/webstore/detail/dark-reader/eimadpbcbfnmbkopoojfekhnkhdbieeh?hl=en-US

      1. 2

        Just be aware that these kinds of extensions get full access to everything you see and do in your browser, because they need it in order to function.

        Is dark mode a reasonable tradeoff? That’s for you to decide.

        1. 3

          For this specific extension, Dark Reader is recommended by Mozilla on AMO. This means it has passed an additional level of security / privacy review beyond what a typical extension receives.

          Of course your point is still valid. But if you are a Firefox user who trusts Mozilla more than the Dark Reader dev(s), this may sway your decision.

        2. 2

          A workable (IMO) middle ground is to just grab (and ideally audit) the source and then load the unpacked extension on individual devices. This dodges the “I made an extension with justifiably broad permissions and am selling it to a party that will do Bad Things with those permissions for a shitload of money” threat.

          1. 2

            Yup, but not many people do that.

            I know how to do it, but I didn’t. I used to use 2-3 extensions with this kind of access. Now I no longer use them, and simply accept that the web is not as comfortable as I’d like it to be.

      2. 1

        Dark Reader also lets you apply custom styling, so you can take the CSS in this post and copy it into the Dev Tools panel in Dark Reader to use it.

    20. 2

      Thanks, OP, for posting. I’m interviewing people at work these weeks, and this is a great way of getting insight into what people expect, like, and feel uncomfortable with…

      Personally I’ve only had/given around 20 interviews, and I don’t remember anyone in particular, so I guess I haven’t had a “wow” one yet.