1. 5

    On my wishlist: A way to block all the bloody “Subscribe to my spiffy mailing list” popups that have infested the web.

    1. 2

      Big same. I was working on a browser plugin to turn position:fixed/etc elements into display:none, but it ran into a wall of

      1. literally the first wild website I tested it on hit an infinite loop
      2. javascript permission errors when trying to introspect style sheets

      I suspect dealing with it robustly would require hacking up the browser renderer itself.
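
      The easy 20% can be sketched as a userstyle; the hard 80% is that most popups set position:fixed from a stylesheet or a class, which forces you back into getComputedStyle() and the permission errors above (selectors here are illustrative):

      ```css
      /* Hypothetical userstyle: catches only overlays that declare
         position:fixed inline; class-based or stylesheet-based fixed
         elements need script access to computed styles. */
      [style*="position:fixed"],
      [style*="position: fixed"] {
        display: none !important;
      }
      ```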

      1. 2

        The No, Thanks extension gets rid of some of them. Enough that I’m willing to pay its subscription fee because those stupid things make my blood boil, but it still misses a bunch.

        1. 1

          Thanks, I’ll give it a spin.

        2. 1

          The unfortunate reality is that they work. I remember reading, I think, Andrew Chen (A16Z), who mentioned that he feels bad about these popups but has to keep them on his blog since they work.

          1. 3

            Andrew Chen doesn’t have to have these annoying popups on his blog, he could perfectly well choose to have a button or a link. Truth is that he chose the annoying popups because he values the number of subscriptions more than the wellbeing of his audience.

            1. 1

              Do you have the source / data for that? I’m not even sure how you’d measure how well they work. I assume you’d have to do some A/B testing, but while you can measure the number of people who sign up for your newsletter, and possibly even track whether the emails cause them to come back to your blog, you can’t measure the people who are unimpressed or annoyed and don’t come back or recommend your blog to others.

          1. 5

            To be that person, I have no trust in a project that explicitly calls out the “nonsense that is the urbit project” without at all mentioning the nonsense that is its ideological foundation in far right reactionary thought.

            1. 3

              far right reactionary thought

              Wouldn’t reactionary thought eschew solutions using technology at their core? I think they identify as neoreactionary for this reason…

              1. 2

                Exactly. And the first two paragraphs don’t even tell me what exactly this project is about.

              1. 2

                This explains why I am getting a deluge of small low-quality PRs on one of my open source repos. The repo isn’t even code, just a list of things.

                1. 3

                  No one likes YAML, but it survives. I am definitely amused by such cryptic languages, which thrive despite being kludgy.

                  1. 9

                    No one likes YAML, but it survives. I am definitely amused by such cryptic languages, which thrive despite being kludgy.

                    “Good things come to an end, bad things have to be stopped.” ― Kim Newman

                  1. 1

                    A commit to turn a flag on?

                    1. 2

                      You are storing flag values somewhere. If you are not using a git commit to turn them on, then you must have a separate full-fledged system to record when the flag was modified, by whom, and who approved it.
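
                      To make that concrete, here’s a minimal sketch (the file name and flag format are made up) of the audit trail git gives you for free:

                      ```shell
                      # Hypothetical flags.yaml: the flag flip is an ordinary commit,
                      # so "when, by whom, and who approved it" falls out of git log
                      # and your normal code-review flow.
                      rm -rf /tmp/flagdemo && mkdir /tmp/flagdemo && cd /tmp/flagdemo
                      git init -q
                      git config user.email "dev@example.com" && git config user.name "Dev"
                      echo "new_checkout: false" > flags.yaml
                      git add flags.yaml && git commit -qm "Add feature flags"
                      sed -i 's/false/true/' flags.yaml
                      git add flags.yaml && git commit -qm "Turn on new_checkout"
                      # Who changed the flag and when:
                      git log --format='%an: %s' -- flags.yaml
                      ```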

                    1. 3

                      Please make sure it’s a three-step approach: you need to go back and remove the old branches eventually. As long as the other branches exist, they complicate future changes.

                      1. 1

                        Good point. You are right: in some cases it is three steps. While writing this I implicitly assumed the deletion of old code.

                      1. 0

                        There was only one GNU/Linux distro that was remotely comparable to the Mac OS user experience. Alas. No more.

                        1. 1

                          This scripting seems to replicate features of a build system. I wondered about this before: Why does nobody treat test reports as build artifacts? Let Make (or whatever) figure out the dependencies and incremental creation.
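
                          A toy sketch of the idea with Make (all file names invented): the report is a target whose prerequisites are the test script and the sources, so unchanged inputs mean the tests don’t rerun.

                          ```shell
                          rm -rf /tmp/reportdemo && mkdir /tmp/reportdemo && cd /tmp/reportdemo
                          # .RECIPEPREFIX avoids literal tabs; requires GNU make >= 3.82.
                          printf '.RECIPEPREFIX = >\nreport.txt: test.sh src.txt\n>sh test.sh > $@\n' > Makefile
                          printf 'echo PASS\n' > test.sh
                          echo v1 > src.txt
                          make -s report.txt      # runs the tests and writes report.txt
                          make -s report.txt      # nothing changed: the report is reused
                          cat report.txt          # PASS
                          ```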

                          1. 2

                            Sometimes you have multiple build systems. For example, say I have a repo with two independent dirs: one containing JavaScript (npm builds) and one containing Android (Gradle builds). Both build incrementally fine on my machine, but on CI, if I am only modifying the Android code, then it is a waste to build and test the JavaScript dir. Incremental creation does not work since the past artifacts are missing, and they are intentionally missing to ensure that the builds are clean and reusable.

                            I have actually seen a case where keeping the past artifacts created a subtle bug, which went away after we removed persistent caching from CircleCI.

                            1. 2

                              Some build systems, e.g. Bazel, do it (it’s called “caching”, the same as saving build artifacts). Bazel is designed especially for monorepos. Probably Buck, a build system with similar origins, does this too.

                              However, writing tests for this behavior can be tricky, as it requires “hermeticity”: tests can only read data from their direct dependencies. Otherwise, a “green” build may get cached and stay green in subsequent runs, even though it would turn red if the cache were cleared.

                              Sadly, it’s quite hard to use Bazel for JS/Ruby/Python and the like: it does not have builtin rules for the ecosystems of these languages, and for a general shell-based rule you have to know what files your shell command will output before it runs (directories can’t be outputs of rules).
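
                              For example, a shell step has to be wrapped in something like a genrule with every output file declared up front (a made-up BUILD fragment):

                              ```
                              # Hypothetical: outs must list concrete files; a directory is
                              # not a legal output, which is the limitation mentioned above.
                              genrule(
                                  name = "test_report",
                                  srcs = ["run_tests.sh"],
                                  outs = ["report.txt"],
                                  cmd = "sh $(location run_tests.sh) > $@",
                              )
                              ```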

                              1. 2

                                My inspiration in some form came from both Bazel (which I used inside Google) and Buck (which I used at Facebook). Both are great tools. Setting them up and training the whole team to use them, however, is a time-consuming effort.

                                1. 2

                                  it requires “hermeticity”: tests can only read data from their direct dependencies.

                                  Nix is able to guarantee this since it heavily sandboxes builds, only allowing access to declared dependencies and the build directory. I haven’t seen anyone exploiting this for CI yet but it might be worth playing with.
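
                                  A toy sketch of what that could look like (runCommand and the paths are illustrative, not from any real project):

                                  ```
                                  # Hypothetical default.nix: the build sees only its declared
                                  # input (./tests); reading undeclared paths or the network
                                  # fails in the sandbox, which is what enforces hermeticity.
                                  with import <nixpkgs> {};
                                  runCommand "test-report" { src = ./tests; } ''
                                    sh $src/run.sh > $out
                                  ''
                                  ```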

                                  1. 1

                                    How long does it take to set up Nix?

                                    1. 1

                                      I’m not really sure how to answer that. Learning Nix definitely takes a while and the documentation isn’t great. Writing a hello-world build script takes seconds. Setting up some really complicated system probably takes longer ¯\_(ツ)_/¯

                                      I guess I can at least point at some examples:

                                      1. 1

                                        Thanks. After reading through the links, I am happy with my setup which returns 99% of the benefit without making every developer learn to write a new build system.

                              1. 1

                                Go is great for concurrency, IMHO. Rust might be better but it is still somewhat unstable.

                                1. 1

                                  On all of my GitHub actions development, I’ve lived and breathed through https://github.com/nektos/act.

                                  It gives me a real feedback loop, and I can at least see things going on locally before pushing anything to GitHub. It’s a start towards what you’re looking for.

                                  1. 1

                                    Thanks a lot. I will start using it as well.

                                  1. 2

                                    It’s 2020 and we still don’t have a good Linux laptop that competes with MacBooks.

                                    1. 6

                                      What about Dell XPS Developer Edition?

                                      1. 1

                                        13” is a bit small for the developer version.

                                        1. 3

                                          They also have the Precision with Ubuntu pre-installed. It’s the XPS15 with better hardware under the hood IIRC.

                                      2. 3

                                        Thinkpads compete just fine. In fact, they beat Macbooks outright. I cannot find a single flaw with the T495s I’m typing this on. Incredible battery life, sharp screen, good keyboard and trackpad, a decent number of ports, good cooling, lightweight and portable. Personally I like the aesthetics of the Thinkpad more than the Macbook’s too, but that’s subjective.

                                        1. 3

                                          Thinkpads compete just fine. In fact, they beat Macbooks outright.

                                          As someone who happily chooses to use a ThinkPad T480 after many years of using Apple laptops, I disagree vehemently. I bought mine when my MacBook Pro died and the only new Apple replacements were the terrible-keyboard, touchbar-endowed models that maxed out at 16GB RAM. That didn’t work for me, so I went T480.

                                          The screen is a downgrade. The keyboard is an upgrade. The touchpad is a cruel joke. Fortunately, I can just turn the touchpad off and use the trackpoint. Battery life is better. The cooling is worse. CPU throttles regularly. I may open it up and re-paste it; I hear that helps.

                                          Getting Linux to work well on it was bumpy. I use Fedora. Setting up disk encryption so that it worked across two drives was a royal PITA. I still have to hold my jaw just right when I plug or unplug my thunderbolt 3 docking station. Most of the time I choose to shutdown first. Resolution scaling doesn’t work half as well as it did on Mac. Jetbrains tools can lock up the entire gui. The wired ethernet adapter on the Lenovo thunderbolt dock is hideously slow; it’s actually faster to use wifi. Multiple displays still suck compared to Mac.

                                          Make no mistake. I like this machine, and am happier overall with it than I was with my macbook setup. It wins for me, as a software developer, on balance. Especially when I consider that, when I bought it, this $2100 rig would’ve cost $3500 for something from Apple with half the RAM but a faster CPU and SSD.

                                          But there’s no way I’d say it wins outright. Even if you gave me a week to tweak Linux the best I could, I could not hand it to any of my Macbook-toting friends (who are not software developers) and expect them to have a better experience with my hand tweaked thinkpad than they have out of the box on their Macbook.

                                          1. 1

                                            The screen is a downgrade

                                            I strongly prefer the matte screen on the Thinkpads. I also got the 400-nit low-power screen and its colour range is incredible. I have used MacBooks briefly before and they definitely have good screens (especially 5-6 years ago, when they had the highest-res screens in laptops), but my T495s’ screen is equally good, if not better, thanks to it being matte.

                                            The touchpad is a cruel joke

                                            Touchpads are the one thing that Macbooks have an edge in and I’ll admit that. But the T495s’ touchpad is nowhere near that and I like it a lot. I also use the trackpoint a lot; took a while to get used to, but it’s quite powerful.

                                            1. 1

                                              I strongly prefer the matte screen on the Thinkpads.

                                              While I think, based on looking over the shoulders of colleagues, that Apple has gotten the anti-glare coating on their glossy screens good enough that I could happily use them, I was comparing my T480’s screen to my (2011 or 2012?) MBP17’s matte 1920x1200 screen that it replaced. That MBP17 was by quite some distance my favorite laptop screen ever. If I could get that keyboard/battery/trackpad/screen with a modern motherboard, I’d happily do so.

                                              Based on your description of the 495, it sounds like they improved the matte screen between the T480 and the T495. I’d rate the 480’s as passable but not great.

                                              They may also have improved the touchpad; the T480’s touchpad makes me understand why so many Thinkpad users hate touchpads. (Or maybe Apple ruined me for those.) I like the trackpoint a great deal, though, so I’m happy as long as I can disable the touchpad. And I actually don’t run it fully disabled these days. I have all of its “click” functionality turned off, set its scrolling to two finger only mode, and use it like a big scroll wheel so that my trackpoint middle button functions like a traditional middle mouse button. I’m pretty happy with that.

                                              I really love the giant external battery on the T480. I routinely get 12 hours of heavy VMware usage or 22+ hours of browsing/editing usage with the 72Wh. I’m disappointed and annoyed that they seem to have discontinued this feature on the 490 series, and really hope they bring it back.

                                          2. 1

                                            Same question: what’s the battery life on Linux?

                                            1. 1

                                              I consistently get 9-10 hours and I haven’t even bothered to optimise it.

                                              1. 1

                                                To add another datapoint for you, on my T480 with the big 72Wh rear battery, I see 12-ish hours of heavy compiling/VM testing usage. 22+ hours of browsing and text editing. I’m running Fedora 31 with powertop and tlp packages to manage power, but no manual customization on those.

                                          1. 2

                                            What are your goals? Are your goals to bend your mind? Improve your skills? Change your first language? Switch careers / jobs? It’s hard to give you real advice here without more context. Racket, Erlang, Haskell, OCaml, Scala, Clojure – all of these would be good to explore and learn, but which one depends completely on your end goal.

                                            1. 2

                                              Purely learning the functional programming mindset. I have heard this phrase quite a few times. Being eventually able to build something production-quality would be great.

                                              1. 8

                                                OK.

                                                1. If you want to stretch your mind the most, I’d say Haskell. Haskell is lazily evaluated, and gives no fucks about the way you’ve programmed before. You can do “stateful” things only inside a specific box, and you’re forced to deal with foreign concepts almost immediately.

                                                2. Clojure would force lots of the “functional” bits on you around referential transparency, and provides lots of higher order thinking opportunities. It’s practical to boot as it gives you access to the entirety of the Java / JVM ecosystem, in a dynamically typed “candy shell”.

                                                3. Racket, on the other hand, has amazing learning materials, opt-in gradual typing (via Typed Racket), contracts, and an extremely powerful language construction kit with its hygienic macro system and first-class language support. There are many languages, both high quality and toy quality, built in Racket that can demonstrate functional concepts. Great examples: Hackett, which implements a Haskell-like language in Racket; Pie, the language from The Little Typer; and even a Datalog.

                                                1. 2

                                                  If you are the kind of person who learns by inflicting pain on themselves (like the “if you want to learn vim, disable your arrow keys” people), then start with something like Haskell, which might annoy you to no end.

                                                  If you are the kind of person who likes to transition into things easily and learn step by step (or is easily frustrated if the thing you want to do takes 5 minutes in your language and you give up after 3 hours in the new one…), then I’d suggest something like Clojure, where you can easily reach for the Java interop and, if worse comes to worst, just write one or two Java classes that you embed, then come back and rewrite them later.

                                                  It’s probably not hard to discern which kind of person I am :) When learning a new language I’m totally fine with writing unidiomatic code for a while and just getting a feel, then improving by rewriting.

                                              1. 0

                                                I know it’s not what you’re asking, but still: Do you know any assembler? If not, I’d say to consider learning one.

                                                1. 2

                                                  Thankfully, I know assembly for both x86 and Atmel microcontrollers.

                                                1. 15

                                                  Cloud Run sounds cool I guess, and I might try it sometime. But honestly, I don’t see a problem with just getting a conventional server. I have a $5/month Digital Ocean server, and I run like 10 things on it. That’s the nice thing about a plain old Linux server, as long as none of your individual things takes up a ton of resources or gets too much traffic, you can fit quite a few of them on one cheap server.

                                                  1. 2

                                                    Do you manage SSH certs for those 10 yourself? What happens when the services go down? What about logging?

                                                    1. 4

                                                      It’s all running on 1 server, so there’s only one SSH key to manage. Well, one for every device I connect to it from, but that’s not that many, and there really isn’t anything to manage.

                                                      Everything is set up through systemd services. I wrote control files for the services that didn’t already have them (Nginx, Postgres, etc.). It’s perfectly capable of restarting things and bringing them up if the server reboots. Everything that has logs is set up with logrotate and transports to SumoLogic. I did set up a few alerts through there for services that I care about keeping running and that have been troublesome in the past. I also have some automatic database backups to S3. These are all one-off toy projects used pretty much only by me, and this level of management has proved sufficient and low-maintenance enough to keep them up to my satisfaction.
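
                                                      For reference, the control files can be as small as this (names are made up):

                                                      ```
                                                      # Hypothetical /etc/systemd/system/toy-service.service
                                                      [Unit]
                                                      Description=Toy side-project service
                                                      After=network.target

                                                      [Service]
                                                      ExecStart=/usr/local/bin/toy-service
                                                      Restart=on-failure
                                                      User=toyapp

                                                      [Install]
                                                      WantedBy=multi-user.target
                                                      ```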

                                                      Of course, I would re-evaluate things and probably set up something dedicated and more repeatable if any of those services ever got a significant number of users, generated revenue, or otherwise merited it. There’s plenty of options for exactly how, and which one to use would depend on the details.

                                                      1. 3

                                                        They said a single server, so yes, a single SSH key I’d imagine. Every major init system on Linux has service crash detection and restart, and syslog (and, if you are feeling brave, GoAccess).

                                                        1. 1

                                                          Assuming you meant SSH and mistyped cert instead of key: it’s one machine, so one key.

                                                          Assuming you meant SSL instead of SSH: I run everything in Docker Compose. I use this awesome community-maintained nginx image[1] that sets it up as a reverse proxy and automates getting Let’s Encrypt certificates for each domain I need, with just a little config in the compose file.

                                                          From there I write a block in the nginx configuration for each service, add the service to my compose file and voila it is done.

                                                          [1]https://docs.linuxserver.io/images/docker-letsencrypt

                                                          1. 1

                                                            Good point, could have meant SSL Certs. I use the Let’s Encrypt automated package. It’s quite good these days - can set up your nginx config for you mostly-correctly right off the bat, and renews in place automatically. I just set up a cron job to run it once a week, pipe the logs to Sumologic, and then forget about it. Worked fine automatically when I was serving multiple domains from the same nginx instance too, though I’m not doing that right now.
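
                                                            The whole cron entry is a one-liner (the schedule and the reload hook are arbitrary; --deploy-hook only fires when a certificate was actually replaced):

                                                            ```
                                                            # Hypothetical /etc/cron.d/certbot-renew
                                                            0 3 * * 1 root certbot renew --quiet --deploy-hook "systemctl reload nginx"
                                                            ```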

                                                            1. 1

                                                              Sorry, I did mean SSL certs. You are right about automating it and that’s what I would do for professional work. For a side-project, however, I prefer eliminating it completely and letting Google do it.

                                                              From there I write a block in the nginx configuration for each service, add the service to my compose file and voila it is done

                                                              Can you share more details of your setup here?

                                                          2. 1

                                                            I used this too but then my provider sunset the hardware I was on and migration was a nightmare because it’s easy to fall into bad patterns with this mode.

                                                            Admittedly it was over 10 years of cruft but still.

                                                            1. 2

                                                              That did honestly kind of happen to me too. I had a server like that running with I think Ubuntu 14.04 LTS for quite a while. Eventually I decided it needed upgrading to a new server with 18.04 - security patches, old instance, etc. It was a bit of a pain figuring out the right way to do the same things on a much newer version. It only really took about a full day or so to get everything moved over and running though, and a good opportunity to upgrade a few other things that probably needed it and shut off things that weren’t worth the trouble.

                                                              I’d say it’s a pretty low price overall considering the number of things running, the flexibility to handle them any way I feel like, and the overall simplicity of 1 hosting service and 1 server instead of the dozen different hosting systems I’d probably be using if I didn’t have that flexibility.

                                                          1. 2

                                                            Nice writeup.

                                                            I started using Cloud Run after they announced it in alpha, for a couple of toy services that were previously in App Engine.

                                                            I updated them to use Cloud Build too, so you can avoid that manual gcloud deploy step: https://github.com/jamesog/whatthemac/blob/master/cloudbuild.yaml

                                                            1. 1

                                                              Nice. I have used Cloud Build before and I think it’s a great idea if the builds are going to take a lot of resources. Personally, I still manually test via make docker_run before deploying an image, so building locally works. I am sure, though, that at some point I will migrate to Cloud Build as well.

                                                            1. 3

                                                              As there are no VMs, I can’t SSH into the machine and make changes, which is excellent from a security perspective since there is no chance of someone compromising and running services on it.

                                                            What’s wrong with SSH? Extending this logic, if someone compromised your Google account, you’re toast. Just use passwordless login, an off-disk key with something like a YubiKey (password protected, of course), and disable root login.

                                                              1. 4

                                                                SSH can be plenty secure but no SSH is even more secure.

                                                                1. 7

                                                                  Until you invent a less-secure workaround for not having access to ssh.

                                                                  1. 2

                                                                    They’re using the appliance model here. They build the appliance with no ability to log into it. It’s uploaded to run on Google’s service. When time to fix or upgrade, a new one is built, the other is thrown away, and new one put in its place. It’s more secure than SSH if Google’s side of it is more secure than SSH.

                                                                    Now, that part may or may not be true. I do expect it’s true in most cases since random developers or admins are more likely to screw up remote security than Google’s people.

                                                                    1. 2

                                                                    Uploading Docker images that can’t be SSHed into is, IMHO, much more secure.

                                                                  2. 3

                                                                  If someone accesses my Google account, they can access my GCP account anyway. The advantage here is that my Google account is better protected, not just with 2-factor but because Google is always on the watch. For example, if I am logging in from the USA and suddenly there is a login from Russia, Google is more likely to block that or warn me about it. That’s not going to happen with a VM I am running in GCP.

                                                                    Just use passwordless login, a off-disk key with something like a Yubikey (password protected, of course),

                                                                  None of that protects against vulnerabilities in the software, though. For example, my WordPress installation was compromised and someone served malware through it. That attack vector goes away with Docker-container-based websites. (Attack vectors like SQL injection do remain, though, since the database is persistent.)

                                                                    1. 8

                                                                    I am a pentester by trade, and one of the things I like to do is keep non-scientific statistics and notes about each of my engagements, because I think they can help me point out some common misconceptions that are hard for people to compare in the real world (granted, these are generally large corporate entities, not little side projects).

                                                                    Of that data, only about 4 times have I actually gotten to sensitive data or internal network access via SSH, and that was because they were configured for LDAP authentication and I conducted password sprays. On the other side of the coin, mismanagement of cloud keys that has led to the compromise of the entire cloud environment has occurred 15 times. The most common vectors are much more subtle, like Server Side Request Forgery that allows me to access the instance metadata service and gain access to deployment keys, developers accidentally publishing cloud keys to DockerHub or a public CI or in source code history, or logging headers containing transient keys. Key management in the cloud will never be able to have 2FA, and I think that’s the real weakness, not someone logging into your Google account.

                                                                    Also, in my experience, actual log analysis from cloud environments does not get done (again, just my experience). The number of phone calls from angry sysadmins asking if I was the one who just logged into production SSH during an assessment, versus entire account takeovers in the cloud met with pure silence, is pretty jarring.

                                                                    I often get the sense that just IP whitelisting SSH, or having a bastion server with only that service exposed plus some careful network design, could go a long way.

                                                                      1. 1

                                                                      The most common vectors are much more subtle, like Server Side Request Forgery that allows me to access the instance metadata service and gain access to deployment keys, developers accidentally publishing cloud keys to DockerHub or a public CI or in source code history, or logging headers containing transient keys. Key management in the cloud will never be able to have 2FA and I think that’s the real weakness, not someone logging into your Google account

                                                                        Thanks for sharing this.

                                                                      1. SSRF or SQL injection will remain a concern as long as it’s a web service, irrespective of Docker or VM
                                                                      2. Logging headers containing transient keys: this again is a logging hygiene issue which holds for both Docker and VM
                                                                      3. I agree that key management in the cloud is hard. But I think you will have to deal with that on both Docker and VM

                                                                      I often get the sense that just IP whitelisting SSH, or having a bastion server with only that service exposed plus some careful network design, could go a long way

                                                                      This won’t eliminate most issues like SQL injection or SSRF. And IP whitelisting doesn’t work in the new world, especially when you are traveling and could be logging in from random IPs (unless you always log in through a VPN first).

                                                                        1. 4

                                                                          You seem to be kind of missing my point, I’m not arguing between Docker vs VMs or even application security. The original comment was about SSH specifically and I am making an argument that the corner cases for catastrophic failures with SSH tend to be around weak credentials or leaked keys which are all decently well understood. Whereas in the cloud world, the things that can lead to catastrophic failure (sometimes not even of your own mistakes) are much much more unknown, subtle, and platform specific. The default assumption of SSH being worse than cloud native management is not one I agree with, especially for personal projects.

                                                                          IP whitelisting doesn’t work in the new world especially when you are traveling and could be logging in from random IPs

                                                                          For some reason I hear this a lot, and I seriously wonder: do you not think that’s how it’s always been? There’s a reason that some of the earliest RFCs for IPv6 address the fact that mobility is an issue. I’m not necessarily advocating this in personal-project territory, but this is the whole point of designing your network with bastion hosts. That way you can authenticate to that one location with very strict rules, logging, and security policies, and then also not have SSH exposed on your other services.
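
                                                                          For concreteness, the bastion pattern is mostly client-side plumbing these days; OpenSSH’s `ProxyJump` makes the hop transparent. Host names and paths below are made up:

                                                                          ```
                                                                          # ~/.ssh/config — all internal hosts reached through one bastion
                                                                          Host bastion
                                                                              HostName bastion.example.com   # the only machine with SSH exposed
                                                                              User admin
                                                                              IdentityFile ~/.ssh/bastion_ed25519

                                                                          Host internal-*
                                                                              ProxyJump bastion              # tunnel via the bastion automatically
                                                                              User deploy
                                                                          ```

                                                                          With that, `ssh internal-db` hops through the bastion, so the strict source-IP rules and audit logging only need to live in one place.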

                                                                          1. 2

                                                                            All fair points.

                                                                  1. 1

                                                                    Cloudflare Workers are another thing that look fairly interesting for this kind of purpose - effectively serverside service workers, WASM and all, with 100,000 free hits a day.

                                                                    1. 1

                                                                      Yeah, I considered that as well. I fear a lock-in similar to AWS Lambda here. Google Cloud Run gives full portability since I can move the Docker container elsewhere (say to K8s) as well.

                                                                      1. 1

                                                                        Understandable, although it feels like the compute portion is not really the concerning bit (Docker is “portable”, true service workers are “portable”). What usually isn’t portable are the storage interfaces if you care about persistence at all.

                                                                        1. 1

                                                                          That’s true. You are right that they are only partially portable. In principle, I can move the code over but keep using the storage API from Google, though that can be expensive.

                                                                          However, if I am staying inside the Google (or AWS) services, then Google Cloud Run lets me deploy something as a side project and then upgrade it to a full-fledged K8s- or VM-based setup in the future if I want to.

                                                                    1. 4

                                                                      I’m curious how you use GCS for persistence. Are you using it as a blob store, or something more structured?

                                                                      I’ve been toying around with Cloud Run for a bit, but “free-tier persistence” is a problem I don’t have a great solution for yet.

                                                                      1. 4

                                                                        I’m a heavy user of Cloud Run for side projects, and I constantly find myself wishing that Google Cloud would offer a free-tier for Google Cloud SQL. Something in the < 500MB range, along with some light CPU restrictions. I’m currently paying $10/month for a Cloud SQL Postgres instance that’s only using 128MB of storage, ~300MB of RAM, and 2% CPU.

                                                                        1. 2

                                                                          I would love a free-tier of Google Cloud SQL as well :)

                                                                          1. 1

                                                                            Maybe deploy a SQL server on a free Google VM (f1-micro) or an Oracle Cloud VM. But you’ll have to manage it yourself a bit.

                                                                            1. 1

                                                                              That’s the exact opposite of what I want to do.

                                                                            2. 1
                                                                              1. 2

                                                                                Yup, and I’ve done this in the past, but Heroku doesn’t guarantee your connection details unless you fetch them from the Heroku CLI. That’s so they can move your DB around as they see fit. You also don’t get notified when the connection string changes, so it’s not an ideal solution for an app running off the Heroku platform.

                                                                            3. 3

                                                                              I’m curious how you use GCS for persistence. Are you using it as a blob store, or something more structured?

                                                                              I write JSON blobs. The latency is acceptable as long as you are accessing one blob per HTTP request. You can’t build a latency-sensitive system like a chat server on top of it for sure.
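
                                                                              In case it helps anyone, the pattern is roughly one JSON document per object key; with the `google-cloud-storage` client that maps to `bucket.blob(key).upload_from_string(...)` and `download_as_text()`. A sketch of the shape (names are mine, not GP’s code), with the backend swapped for a dict so it runs locally:

                                                                              ```python
                                                                              import json

                                                                              class JsonBlobStore:
                                                                                  """One JSON document per key. In production the backend
                                                                                  wraps a GCS bucket; here any dict-like object works."""

                                                                                  def __init__(self, backend):
                                                                                      self.backend = backend  # maps key -> JSON string

                                                                                  def put(self, key, obj):
                                                                                      self.backend[key] = json.dumps(obj)

                                                                                  def get(self, key, default=None):
                                                                                      raw = self.backend.get(key)
                                                                                      return default if raw is None else json.loads(raw)

                                                                              store = JsonBlobStore({})
                                                                              store.put("user/42", {"name": "Ada", "visits": 3})
                                                                              ```

                                                                              The one-blob-per-request discipline matters because each `get`/`put` is a full HTTP round trip to GCS; there are no partial reads or transactions.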

                                                                            1. 2

                                                                              I used to deploy everything to Heroku or, sometimes, Now.sh for static-page apps. Heroku has a great out-of-the-box experience and even the ability to deploy a Docker container, but they don’t offer SSL for free dynos.

                                                                              I recently moved all my projects to a k8s cluster; both DO and Linode have great prices for a hobby-tier k8s cluster ($10/mo for a single-node cluster: 1 CPU core, 2GB RAM, and 50GB storage). I get the same ability to deploy Docker images, plus free, managed, automatic SSL.

                                                                              It was a great experience.

                                                                              1. 2

                                                                                I know basic k8s. Nothing against it, but I felt it a bit overwhelming. Maybe once I get a better hold of it, I will jump onto it.

                                                                                1. 1

                                                                                  Yeah, it seems overwhelming, especially with the load of documentation, but the way I use it is really limited to creating a deployment, exposing the service, and sometimes restarting it, just that :D The good point I found is that we can actually control how many resources we want to allocate to each application running in k8s.
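
                                                                                  That day-to-day loop really is just a handful of kubectl commands (the image name here is hypothetical):

                                                                                  ```
                                                                                  # create a deployment from an image, expose it, restart it
                                                                                  kubectl create deployment myapp --image=registry.example.com/myapp:v1
                                                                                  kubectl expose deployment myapp --port=80 --target-port=8080
                                                                                  kubectl rollout restart deployment/myapp

                                                                                  # cap what the app may consume (the resource-control upside)
                                                                                  kubectl set resources deployment myapp --limits=cpu=500m,memory=256Mi
                                                                                  ```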

                                                                                  1. 2

                                                                                    You can control the resource allocation in Google Cloud Run as well.
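
                                                                                    e.g., per-revision limits at deploy time (service and image names made up):

                                                                                    ```
                                                                                    gcloud run deploy myapp \
                                                                                        --image gcr.io/my-project/myapp:v1 \
                                                                                        --memory 256Mi \
                                                                                        --cpu 1 \
                                                                                        --concurrency 80 \
                                                                                        --max-instances 3
                                                                                    ```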

                                                                              1. 1

                                                                                Interesting setup. How can they deploy a Docker container as “serverless”? Will they need to keep the container on standby in case someone uses it? If so, wouldn’t that affect load times?

                                                                                1. 2

                                                                                  The container can be booted fairly quickly, but certainly not as quickly as an always-running service. For example, I just hit one of my Cloud Run endpoints, which I assume was asleep, and it took 200ms to respond to the initial request. Subsequent requests were served in about 80ms.
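
                                                                                  That cold/warm split is just lazy initialization at the container level; the same shape is easy to see in miniature (the 50ms sleep stands in for booting the container and importing the app):

                                                                                  ```python
                                                                                  import time

                                                                                  _app = None  # module state survives between requests in a warm container

                                                                                  def handler(request):
                                                                                      """First call pays the init ('cold start') cost; later calls reuse _app."""
                                                                                      global _app
                                                                                      if _app is None:
                                                                                          time.sleep(0.05)  # stand-in for container boot time
                                                                                          _app = {"ready": True}
                                                                                      return "ok"

                                                                                  t0 = time.perf_counter(); handler(None); cold_ms = (time.perf_counter() - t0) * 1000
                                                                                  t0 = time.perf_counter(); handler(None); warm_ms = (time.perf_counter() - t0) * 1000
                                                                                  ```

                                                                                  Cloud Run keeps warm instances around for a while after traffic stops, which is why only the occasional request eats the cold-start penalty.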

                                                                                  1. 2

                                                                                    Exactly. I think the latency could be a problem if you are trying to run a full-fledged money-making project but latency isn’t an issue for side-projects.

                                                                                  2. 1

                                                                                    If so, wouldn’t that affect load times?

                                                                                    And I guess that’s why a small container is even more important.