Log messages should definitely have a format that’s easily grokkable; I think backend engineers often overlook this and want to write sentences instead. Great read.
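To make the contrast concrete, here is a small sketch in plain Node — the event and field names are made up for illustration; the point is the machine-parseable shape versus the prose sentence:

```javascript
// Two ways to log the same event. The field names are hypothetical;
// the point is the difference in shape.

// Prose style: easy to write, hard to grep or aggregate reliably.
function logProse(user, ms) {
  return `User ${user} finished the export job after ${ms} milliseconds.`;
}

// Structured style: every record has the same fields, so tools
// (grep, jq, a log pipeline) can filter and aggregate on them.
function logStructured(user, ms) {
  return JSON.stringify({ event: "export_finished", user, duration_ms: ms });
}

console.log(logProse("alice", 1240));
console.log(logStructured("alice", 1240));
```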
The best interview I had was also at my current workplace.
All in all, I really liked how it was done. There are only a few minor things I would have changed, but we are constantly improving and refining the process to try to remove bias and make it better for all parties involved.
A small take-home task, combined with open-ended questions based on how the candidate designed and implemented the solution, is, in my opinion, the best interview format.
It is an opportunity for both the interviewer and interviewee to learn and discuss the pros and cons of a given design. If done right, both parties come out feeling that the time spent was valuable, even if it does not lead to an offer.
My favourite is similar - a small, job-relevant take-home task, followed by in-person refactoring of that task (‘the requirements changed’), then some open-ended questions. It actually doesn’t take all that long compared to what some companies do, but I think it gets you to the knowledge you need quickly.
You have to sell people on the role though, or they tend not to be prepared to do a take-home task.
Does anybody use Koa instead of Express for Node.js projects? I tried Koa a few months ago for a medium-sized project, and although the API is nice, the community support is almost non-existent. Did I miss something?
OP here. I don’t have a wealth of experience with Koa, but in the past few days I have been using it to build a Shopify application (they provide utility libraries for Koa, which is the reason I tried it in the first place) and I have been enjoying it. The documentation of the core project and the modules I use is thorough, and the community, whilst smaller than Express’s, feels supportive.
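For anyone who hasn’t tried it, Koa’s main draw is its async “onion” middleware model, where each middleware can run code both before and after everything downstream. A minimal sketch of the composition idea in plain Node, without the actual koa package (the real implementation is koa-compose; this is a simplification):

```javascript
// Simplified Koa-style middleware composition ("the onion").
// Each middleware receives a shared ctx and a next() that invokes the
// rest of the chain. Not the real koa-compose, just the core idea.
function compose(middleware) {
  return function (ctx) {
    function dispatch(i) {
      const fn = middleware[i];
      if (!fn) return Promise.resolve();
      return Promise.resolve(fn(ctx, () => dispatch(i + 1)));
    }
    return dispatch(0);
  };
}

// Usage: a "timing" middleware wrapped around a handler.
const app = compose([
  async (ctx, next) => {
    ctx.trace.push("timer:start");
    await next(); // runs everything downstream
    ctx.trace.push("timer:end");
  },
  async (ctx) => {
    ctx.trace.push("handler");
    ctx.body = "Hello";
  },
]);

const ctx = { trace: [] };
app(ctx).then(() => console.log(ctx.trace.join(" -> "), ctx.body));
```

In real Koa you would write `app.use(...)` for each middleware and `ctx` would carry the request/response, but the control flow is the same.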
As a company, we are looking to move from being tightly coupled to AWS to a more agnostic approach where we can deploy our platform to different cloud providers (this is not a technical requirement at first, but it is needed by the business).
The obvious approach for achieving such an outcome is to go with Kubernetes; for the past two weeks, I have been diving into the documentation of various tools, including Kubernetes (+ Kustomize), Helm, ArgoCD, and ingresses (Istio, Nginx). I have found the amount of information overwhelming. We are pretty happy with our current pipeline, which deploys to three separate environments (Staging/QA/Production) on Amazon ECS; the move to Kubernetes and GitOps already sounds like a big endeavour, with a lot of decisions to be made on tooling and pipelines, and that’s frankly frightening.
My company uses Kubernetes and has a similar business requirement to be cloud agnostic. We use all of the hosted clusters, but there is still a crazy amount of complexity involved. Despite a dedicated team and some deep experience, we run into issues fairly often, especially when trying to spin up new services. Once a service is set up it’s fairly robust, but getting new things deployed is a massive pain.
All of this is to say: unless you really need it, I would try to avoid the complexity. I primarily work on the backend, so I don’t interact with the devops work super often, but every time I do it’s just layers upon layers of abstractions. Even the experts at our company have trouble.
You can be cloud agnostic without k8s and co., and there are alternatives like Nomad that I have heard good things about. But yeah, there is a crazy amount to learn, and even once you have things running there is a crazy amount to debug; troubleshooting becomes twice as hard.
Thanks for your comment. It confirms my concerns about the complexity of a solution like Kubernetes for a small company. My main concern at this stage is how to get started, since even the most basic setup seems to involve many different tools, and supporting multiple environments like we do today involves adding even more complexity.
I have also heard very good feedback on Nomad, but we need to think of future hiring. There is no doubt that Kubernetes has won the container orchestration war, and the pool of potential knowledgeable/expert candidates would be significantly larger with Kubernetes than with Nomad (even if the latter is more suitable for our needs).
You’re right, there are numerous tools. I think for getting started you can forgo things like Helm and Flux, and stick with raw k8s manifests. Helm is a pretty atrocious templating solution in my opinion, and we have run into a number of bugs in what should be a really simple program, so I’d argue you don’t ever need it. Even with just k8s manifests there is a lot to learn, but at least it’s just one tool rather than five or six.
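To make “just raw manifests” concrete: a single Deployment file like the sketch below (the names, labels, and image are placeholders) plus `kubectl apply -f` already covers a lot of ground before you need any templating tool.

```yaml
# Minimal Deployment manifest — name, labels, and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: api
          image: registry.example.com/example-api:1.0.0
          ports:
            - containerPort: 8080
```

Per-environment differences (replica counts, image tags) can then be layered on with Kustomize overlays rather than templating.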
You will have to do what is best for your situation, so definitely take everything with a grain of salt. One argument I would have for recruitments is that usually the popular technology has a bigger pool of talent, but the average quality of that talent is worse off. Personally I think startups should use niche but powerful tech rather than popular tech, since the applicant pool will self filter. Hiring takes a long time and a bad hire is 2x worse than missing out on a good hire at a small size.
Just food for thought! Wish you all the best in your endeavors.
I agree with your comment on niche technologies unlocking a pool of experts; the counterpoint is that these people may cost a lot of money to acquire and retain, since they will be in demand. Having a large pool of candidates does mean you will have more junior candidates, but it’s also an opportunity for people to grow in your company and for building a diverse team that can grow with your organisation.
That being said, I will definitely have a look and build a small POC with it.
Author here. I wrote this other piece about this specific choice/challenge: https://zwischenzugs.com/2019/03/25/aws-vs-k8s-is-the-new-windows-vs-linux/
Interesting read, thank you very much.
The infographic at the end describes my feeling as a newcomer to the Kubernetes world; it feels like best practices are not yet fully established, so the ecosystem is super diverse and full of products of varying quality.
PS: I am one of those people who were playing with Linux in its early days! I remember (not very fondly) the kernel panics that followed plugging in a USB device (especially DSL modems, Linux loved those!)
Disclaimer: I work for Google on what I would call a k8s “adjacent” product where we are heavily invested in the k8s ecosystem, but not part of it.
I think the k8s ecosystem is pretty Wild West: there is so much out there that it’s impossible to figure out which tool is best in class. I think this is a common situation for “new” technologies. k8s is basically a low-level cloud operating system at this point, and there need to be layers on top. Some good abstractions for some use cases do exist now, e.g. GCP Cloud Run, but if you’re determined to be cloud agnostic, it’s going to be a hard road until each cloud has comparable products. I don’t spend time in AWS/Azure land as I have my own job to do, but I do not think they have a Cloud Run-esque solution yet.
Do you have to be cloud agnostic? If it’s for super-high 99.999% reliability, then yeah, that’s your only realistic option. If it’s for having an escape ramp in case you want to switch to a different provider for some reason, then I think you could get away with just building your Docker images and having scaffolding around the single provider you’re invested in. Retooling to a new provider wouldn’t be simple, but it would be a matter of months, not years, in my estimation.
But I’ve never done this so don’t take my word for it.
The new website feels incredibly snappy and responsive. To be honest, I don’t remember the previous version, so I will not attempt a comparison. A great website really is the best way to make a great first impression, and this one does the job very well!
There are some layout issues in the developer section, but nothing dramatic.
I have been reading the Baby Whisperer to try to prepare for the arrival of my first child, a baby boy due on the 2nd of November.
I am conscious that nothing can prepare you for such a big change but I want to smooth the transition to this new life as much as possible.
I have watched the video on the website and the idea looks promising, especially for users who don’t multitask too much. I have never been a fan of Material Design; it does look good, but flat elements don’t convey whether or not they are interactive except when hovered.
I am currently thinking of investing in an ultra-wide monitor for my work-from-home office: a completely different direction from what is described in this article. I basically want more space at the expense of pixel density.
As I get older, I enjoy a clean layout and a clean desk a lot more. It is most certainly subjective, but minimalism brings me joy.
As such, I am excited by the new LG lineup; their 34” and 38” are now good compromises for gamers and software developers. I happen to be both!
I’ve been using a 34” ultra-wide display with 3440x1440 pixels for… (checks) my goodness! Almost 5 years now. I’ve tried various multi-monitor setups but using this one display seems to be the sweet spot for most of what I do. The 38” 3840x1600 displays seem to have a similar pixel size (110 dpi vs 109?) so would probably be even better, though they weren’t as readily available at the time I bought this one. I believe these days you can even get 5120x1440 monsters?
For testing purposes, I’ve also got an LG 27” UHD 4K display (~163 dpi). I can’t get on with this as a primary display with macOS. At the native retina resolution (“looks like 1920x1080”) everything seems huge and I’m constantly fighting the lack of screen real estate. And as the article says, at native 1:1 resolution everything is too tiny, and the scaled modes are blurry. So I’m going to dissent from the advice of going for a 27” 4K. The ultra-wide 5120x2160 displays have the same pixel size, so I’d imagine I’d have similar problems with those, though the bit of extra real estate probably would help.
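The pixel-density figures in this thread are easy to sanity-check: dpi is just the diagonal resolution in pixels divided by the diagonal size in inches. A quick sketch:

```javascript
// Pixel density = diagonal pixel count / diagonal inches.
function dpi(width, height, inches) {
  return Math.hypot(width, height) / inches;
}

console.log(dpi(3440, 1440, 34).toFixed(1)); // 34" ultrawide -> 109.7
console.log(dpi(3840, 1600, 38).toFixed(1)); // 38" ultrawide -> 109.5
console.log(dpi(3840, 2160, 27).toFixed(1)); // 27" 4K        -> 163.2
```

Which matches the ~110 vs ~109 and ~163 dpi figures mentioned above.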
Don’t get me wrong, I like high resolutions. But I think for this to work with raster-based UI toolkits such as Apple’s, you basically have to go for around 200 dpi or higher. And there’s very little available on the market in that area right now:
I can find a few 24” 4K displays which come in at around 185dpi. That wouldn’t solve the real estate issue, but perhaps a multi-monitor setup would work. But then you’ve got to deal with the bezel gap etc. again, and each display only showing 1080pt in the narrow dimension still seems like it might be a bit tight even when you can move windows to the other display.
Above 200dpi, there are:
That seems to be it? I unfortunately didn’t seize the opportunity a few years ago when Dell, HP, and Iiyama offered 5K 27” displays.
Now, perhaps 27” 4K displays work better in other windowing systems. I’ve briefly tried mine in Windows 10 and wasn’t super impressed. (I really only use that OS for games though) It didn’t look great in KDE a few years ago, but to be fair I didn’t attempt any fine tweaking of toolkit settings. So for now I’m sticking with ~110dpi; I’ve got a 27” 2560x1440 display for gaming, the aforementioned 3440x1440 for work, and the 4K 27” for testing and occasional photo editing.
I’m sure 27” at 4K is also great for people with slightly poorer vision than mine. Offloading my 27” 4K onto my dad when he next upgrades his computer would give me a good excuse to replace it with something that suits me better. Maybe my next side project actually starts making some money and I can give that 8K monitor a try and report back.
Another thing to be careful with: high-refresh displays don’t necessarily perform well at their advertised rate. At its advertised 144Hz, my Samsung HDR gaming display shows terrible ghosting. At 100Hz it’s OK, though I still would not recommend this specific display.
(Now, don’t get me started on display OSDs; as far as I can tell, they’re all awful. If I were more of a hardware hacker I’d probably try to hack my displays’ firmware to fix their universally horrible OSD UX. Of course Apple cops out of this by not having important features like input switching in their displays and letting users control only brightness, which can be done from the OS via DDC.)
I switched from a single LG 27” 4k monitor to two LG 24” 4k monitors for around $300/each. I’m happy with the change. Looking forward to the introduction of a 4k ultrawide to eliminate the bezel gap in the middle; currently all such ultrawides are 1440p.
The 34WK95U-W is a high-DPI ultrawide with good colour reproduction. It has temporary image-retention (burn-in-like) problems, but I’ve been using two (stacked) for a year and overall I’m happy with them.
They aren’t high refresh though (60Hz).
I wanted to reinstall Linux on my desktop yesterday: I usually work on my MacBook Pro 15” 2017, but lately I have found it slowing me down more often than not (I essentially do full-stack development, which is a lot for the i7 CPU).
Anyhow: I downloaded the latest version of Fedora, burned it to a USB key and booted it into what I thought would be a very simple installation. Grub menu, select installation, black screen, no signal… well, that’s not a good start. Adding nomodeset to the kernel options makes it boot normally. It appears that DisplayPort is not supported properly, and one has to boot using nomodeset until the proprietary drivers are installed… At this point I started to wonder whether I really wanted to spend the time, or whether I was happy with my ageing MBP; I chose the latter…
I am very used to working with Linux, and even though I now favour convenience over tinkering, I am very familiar with Linux administration. There was a time when I loved playing with my system, configuring a super-tailored Arch Linux setup, addressing compatibility issues with my hardware, and so on… That time is gone; I just want to boot my computer and get to work without bad surprises…
I know that this is a very particular situation, and that hardware issues are rare nowadays on Linux, but I don’t think any distribution has the level of polish of macOS (even if macOS is not getting better these days).
I dislike the snap situation, but Ubuntu is very good at “it just works”. I have a laptop that needed careful configuration to work properly on Fedora, but worked without any changes on Ubuntu 18.04 LTS.
My development environment is what I would call traditional:
Mine is described in a GitHub repo for macOS & iOS. I got a new Mac a month ago, so it is further simplified; I will update the repos shortly.
As for recent changes, I moved most project management from Trello to Notion, and started using VS Code Insiders with minimal extensions.
I want to rewrite and/or start using https://github.com/Keats/kickstart to bootstrap new projects.
Thanks for sharing your setup and your workflow with us. Your Git repositories and GitBook are a goldmine of fascinating information and tips.
This is great work! It is simple to set up yet flexible.
I have a great use case in my company for this kind of technology: we build web editors to abstract the complexity of creating configurations for some of our products (they are basically very fancy forms). While these web editors cover 95% of our features, we (the Engineering team) sometimes have to edit some definition files manually for complex scenarios (such as defining original and tricky CSS animations). We could add an advanced mode on some screens that would display a CodeJar micro editor to allow developers and advanced users to achieve this directly in the web application.
Great. Nice to hear that.
My company has always been highly dependent on the Amazon ecosystem. When we made the move to a microservice architecture three years ago, we opted for Amazon ECS as it was the simplest way to achieve container orchestration.
With new business constraints, we have to migrate from Amazon to a different cloud provider, breaking away from the Amazon ecosystem, including ECS. We are still a relatively small company and cannot afford to spend months on the infrastructure instead of focusing on delivering on the business front.
Articles such as this one are a good reminder that Kubernetes is still not an easy solution to implement, and some of the comments confirm my assumption that upgrading and maintaining a Kubernetes cluster can be challenging. It’s a tough call, since it’s also the most popular technology around, which helps with recruitment. The absolute certainty is that we no longer want to be tied to a cloud provider, and we will choose a technology that allows us to move more freely between providers (zero coupling is nothing but a dream).
I’d recommend checking out Hashicorp Nomad. It’s operationally simple and, for the most part, easy to get your head around. A past issue of the FreeBSD Journal had an article on it.
I love Hashicorp. I’ve yet to encounter a product of theirs that didn’t spark joy.
I’ve talked to more teams switching away from Nomad to Kubernetes than I have talked to people using Nomad or considering Nomad. A common Nomad complaint I hear is that support and maintenance is very limited.
I’m interested to hear any experience reports on the product that suggest otherwise. My team runs on Beanstalk right now, but I miss the flexibility of a more dynamic environment.
I assume that if you are willing to pay the $$$ for the Enterprise version, the support is fabulous; if it’s not, one is definitely getting ripped off. I have no experience with the enterprise versions of any of Hashicorp’s products; we can’t afford them. We also don’t need them. We went in knowing we would never buy the enterprise version.
We haven’t had any issues getting stuff merged upstream that makes sense, and/or getting actual issues fixed, but we’ve been running nomad now for years, and I don’t even remember the last time I had to open an issue or PR, so it’s possible things have changed in that regard.
It is indeed one of the alternatives we have been looking at. My only concerns about Nomad (and Hashicorp products in general) are the additional cost once you use the enterprise features, the smaller candidate pool (everyone is excited about Kubernetes these days), and the absence of built-in load balancing.
Both Traefik and Fabio work as an ingress load balancer on the (Nomad) cluster.
We use the Spring Cloud Gateway as our ingress.
For service-to-service communication you can run Consul (we do this).
Seconding Fabio as an incredibly simple automatic ingress, and Consul service discovery (-> Connect + Envoy) for service-to-service. There’s a nice guide as well: https://learn.hashicorp.com/nomad/load-balancing/fabio . I’d consider Nomad incomplete without Consul and Vault, but I’d also say the same for k8s (Vault in particular is irreplaceable).
As for hiring – hire the k8s candidates. Both are Omega-style schedulers, so the core scheduling concepts and job constructs (pod <-> alloc, etc.) translate well, and I’d wager that for some k8s veterans, not having to deal with a balkanized ecosystem and scheduler “WordPress plugins” (CRDs/operators) would be seen as a feature.
Docker itself is by far the weakest link for us, but Nomad also offers flexibility there.
My suggestion is just to make sure you don’t need the enterprise features :) You can get very far without the enterprise $$$$ expense; that’s what we do. But agreed, the Enterprise version of any of the Hashicorp products is very, very expensive.
Load balancing is easily solved as others have said. We use HAProxy and it lives outside of the nomad cluster.
Agreed, everyone is excited by k8s, because everything that comes out of Google (even if only the design) must be perfect for your non-Google-sized business. Let’s face it, the chances of any of us growing to the size of Google are basically trending towards zero, so why optimize prematurely for that?
The upside of the candidate pool being “smaller” is that you can read through the Nomad docs in an hour or so, come away with a pretty good idea of how everything works and ties together, and be productive, even in a sysadmin-type role, pretty quickly. IME one can’t begin to understand how k8s works in an hour.