1. 3

    Company: Jobvite

    Company Site: https://www.jobvite.com/ and https://talent.jobvite.com/

    Positions: https://talent.jobvite.com/search/engineering/jobs

    Location: Remote US, Remote Canada, Kitchener ON, Indianapolis IN, Richmond BC, Bangalore, India

    Description: (from the website): ‘At Jobvite, our mission is to provide our customers with the tools to attract, engage, hire, and retain the talent that drives success.’ I am a team lead there and work on the Rails side of things.

    Tech stack: Java (in the ATS); Ruby/Rails with some (growing amounts of) React, and a tiny but critical bit of Go (enterprise recruitment marketing suite)

    Contact: Apply through the careers site or PM me with any questions and I’ll do my best to answer them (if I can / am allowed to).

    1. 10

      I’ve only used it for a sum total of 30 minutes, but https://k6.io/ was pretty easy to get going.

      1. 4

        Never even heard of k6 before, it looks wicked. Nice one.

        1. 3

          I’m also a fan of k6, since it’s not more complex than most one-line CLI tools to get going, but you have full-on JS scripting if you need to script complex behavior.
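
          For anyone who hasn’t seen it, a script can be as small as this (a minimal sketch; the URL and numbers are placeholders, and k6 scripts are plain JS):

              import http from 'k6/http';
              import { sleep } from 'k6';

              // run with: k6 run --vus 10 --duration 30s script.js
              export default function () {
                http.get('https://test.example.com/'); // placeholder endpoint
                sleep(1); // think time between iterations
              }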

          1. 3

            I’ve also had success using k6. Solid tool.

          1. 6

            Other than MacBooks, are there any UltraBook (or relatively slim) laptops using a 16:10 aspect ratio? I can work with 16:9 on a 13” display and love my PBP and ThinkPads, but I prefer a bit more vertical space (hell, I prefer my iPad Pro’s 4:3 at that size).

            1. 5

              The latest (gen 9) ThinkPad X1 Carbon switches to 16:10.

              1. 4

                I use an old IBM ThinkPad with a 4:3 1024x768 display. It’s pretty great. (And I can look at it without squinting!) But it’s twenty years old, so I can’t watch videos on it or use modern web browsers.

                That said, I’m happy that vendors are finally exploring aspect ratios other than 16:9, which is arguably the worst one, at least for laptops.

                1. 1

                  And I can look at it without squinting

                  I thought that ThinkPads from the 4:3 era predated LED backlights; isn’t it extremely dim? I’ve honestly been tempted to pick up an older 1600x1200 model, but the idea of going back to a CCFL just seems like a step too far.

                  1. 2

                    In a sunny room, it’s pretty dim, but workable. I use light themes predominantly. Not sure what kind of backlight it has exactly. Definitely worse than my X1 Carbon 3rd gen.

                    But I’d personally take a dim screen over a high-DPI screen. The X1 sees little to no use because GUI scaling never works well and everything is too small without it.

                    1600x1200 might not be too bad, though, depending on the size.

                    1. 2

                      You can run a HiDPI display at a lower resolution, and it generally looks amazing since the pixels are so small you see none of them (whereas that’s all you see when running a 1024x768 ~13” native display).

                      1. 1

                        Well, you can only run it at half resolution, right? Doesn’t work out too well unless you have really high dpi. 1920x1080/2 is 960x540, which is a very low resolution for 13".

                        But I don’t know what you mean about pixels. I don’t “see” the pixels on any of my laptops, regardless of resolution. The only screen I’ve ever been able to visually see the pixels on was the pre-Retina iPhone.

                        1. 1

                          Well, you can only run it at half resolution, right? Doesn’t work out too well unless you have really high dpi. 1920x1080/2 is 960x540, which is a very low resolution for 13”.

                          HiDPI is not a resolution, it’s pixel density. I don’t think you’re limited to /2 scaling. I’ve certainly done that (e.g. a 4k display at 1080p), but I have also run a 4k display at 1440p and a 1080p display at 1280x720.

                          But I don’t know what you mean about pixels. I don’t “see” the pixels on any of my laptops, regardless of resolution.

                          Strange, I see them on my partner’s 1366x768 IPS thinkpad x230 display. Maybe it’s one of those things that once you see, you can’t unsee it.

                          1. 2

                            HiDPI is not a resolution, it’s pixel density.

                            Yes, I know, that’s why I specified the size of the screen as well as the resolution.

                            I’ve certainly done that (e.g. 4k display at 1080p), but also have run a 4k display at 1440p or a 1080p display at 1280x720.

                            Hm. A 1920x1080 display should not be able to – properly – run at 1280x720 unless it is a CRT. Because each pixel has an exact physical representation, it won’t align correctly (and the screen will thus be blurry) unless the resolution is exactly half of the native resolution (or a quarter, sixteenth etc.).
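
                            To put numbers on that (assuming a 1920x1080 panel):

                                1920 / 1280 = 1.5  -> each logical pixel spans one and a half physical pixels, so edges land mid-pixel and must be interpolated (blur)
                                1920 /  960 = 2.0  -> each logical pixel maps to an exact 2x2 block and stays sharp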

                            Strange, I see them on my partner’s 1366x768 IPS thinkpad x230 display. Maybe it’s one of those things that once you see, you can’t unsee it.

                            Yeah, strange! As I said, I saw them on the iPhone <4, so I sort of know what you’re talking about, but I’ve never seen them elsewhere.

                            Perhaps it really depends on some other factor and has little to do with dpi after all?

                      2. 2

                        My home desktop has a lovely 1600x1200 20” monitor that we pulled off the curb for free. It’s actually such a pleasure to use; too bad so much modern software is designed specifically for widescreen.

                  2. 4

                    The frame.work laptop is 3:2 (And the pricing seems not too bad either - looks close to double the performance of my ‘12 Retina MBP and nicely configured for ~1300 US$, but my MBP is still running fine for what I’m using it for)

                    https://frame.work/products/laptop-diy-edition

                    1. 2

                      The XPS 13 has a 16:10 display now and even has an OLED option. Developer Edition (i.e. the same machine with Ubuntu preinstalled): https://www.dell.com/en-us/work/shop/dell-laptops-and-notebooks/new-xps-13-developer-edition/spd/xps-13-9310-laptop/ctox139w10p2c3000u

                      I’ve been eyeing it up for a while now myself.

                      1. 3

                        Note that, IIRC, OLED laptop displays are kind of weird on Linux because the traditional model of ‘just dim the backlight’ doesn’t work. I don’t know what the current state of the world is, but I definitely remember a friend complaining about it a year-ish ago. I personally wouldn’t go for it unless I could confirm that there was something working well with that exact model.

                        1. 4

                          Thanks for the heads up. I’m not seeing anything that definitively says it’s fixed now, but it does sound like there’s at least a workaround: https://github.com/udifuchs/icc-brightness

                          Hopefully by the time I actually decide to get one there will be proper support.

                          1. 1

                            Huh, I thought the display panels would translate DPCD brightness control into correct dimming. Looks like I might be right: e.g. for the ThinkPad X1 Extreme’s AMOLED panel there is now a quirk forcing DPCD usage.

                        2. 1

                          Pretty much everything in the Microsoft Surface line is 3:2.

                          1. 1

                            All the 3:2 displays I’ve seen have been glossy; do any exist in matte?

                          2. 1

                            X1 nano

                          1. 3

                            I can’t keep up with all the AmigaOS drama, between all the forks with similar version numbers (i.e. I don’t know which 3.x is most authoritative, ignoring the PowerPC “Amiga” clusterfuck) and feuding companies (Hyperion and Cloanto).

                            (I tend to think the Amiga is extremely overrated due to the mythology that’s spawned around it, but I wouldn’t say no to a 1200 or something if one fell in my lap - since the going price for them nowadays is criminal. That’s a side concern to “which Amiga company is the good one?”)

                            1. 3

                              On the orange site, I’ve seen this explanation for it:


                              Kickstart 1.0 - 3.1: By Commodore. Actually 3.0 was “officially” the last, but 3.1 was ongoing work that got wrapped up well enough. I don’t really remember if Commodore officially released 3.1 or if it was picked up from their corpse by someone.

                              HAAGE & PARTNER BRANCH:

                              AmigaOS 3.5-3.9: First post-3.1 versions, from 1999-2000 (for Motorola 68020 and up rather than 68000 and up), by Haage & Partner. Main features: a TCP/IP stack, a new GUI, a new GUI toolkit called ReAction, an MPEG movie player, an MP3 player, and >4 GB disk partitioning support.

                              HYPERION POWER PC BRANCH:

                              AmigaOS 4.0-4.1: First PowerPC-only version. Main features: memory virtualization, a new GUI, integrated third-party graphics driver support, etc.

                              HYPERION “CLASSIC” BRANCH:

                              Now they returned to 3.1, BUT with the 3.9 source code still in their hands. Trying to advance Kickstart from a new angle that allows support for all Amigas, even the 68000 (Amiga 500). This is NOT for PowerPC; AmigaOS 4 is for those systems, but since that’s basically a dead end in 2021, this is a more pragmatic move. I also find it less “careless” and more conservative than 3.5+, focusing on kernel improvements rather than bolting on big third-party tools and libraries. Basically closer to how I’d expect actual Commodore releases to look.

                              AmigaOS 3.1.4: Backports numerous features and lessons learnt from 3.9, now available for all Amigas, including the MC68000. An important update for classic Amigas since it brings support that makes interacting with modern hardware easier, in particular larger hard drives, and I think it added MC68060 support too, for accelerators and whatnot.

                              AmigaOS 3.2: A continuation of the 3.1.4 branch and now probably surpassing 3.9 in many areas.

                              AmigaOS 3.x…?


                              I always had a soft spot for Hyperion, but I am not close enough to the issue to have a really informed opinion. Anyway, I posted this here because those with 68k Amigas will benefit from all the goodies in this update.

                              1. 2

                                At some point the (integer) library version numbers from the 3.2 branch are going to go above the versions used in the 3.9 or 4.0 branches and things are gonna get really confusing.

                              2. 3

                                (I tend to think the Amiga is extremely overrated due to the mythology that’s spawned around it, but I wouldn’t say no to a 1200 or something if one fell in my lap - since the going price for them nowadays is criminal. That’s a side concern to “which Amiga company is the good one?”)

                                Did you use it at the time? It was revolutionary, and while I agree it doesn’t really fit into the modern world, I would argue it was a better experience to use than modern systems. The rest is just nostalgia for a simpler time :) (source: a huge fan since ‘83; owned 2x 1000s, 1x 1200, and worked as a paid Amiga dev back in the day).

                                The legal situation is why the Vampire team decided to focus on AROS for its line of accelerators/clones (muddying the waters even further).

                                1. 2

                                  To clarify: I think at the time, the Amiga 500/1000/2000 was a nice system, but due to various factors like Commodore ineptitude, all the follow-ons were disappointing. Yes, it could have turned out better, but I’m talking about what we have now. What chafes me is the cult aspect: the dumb upgrades, the false mythology around the systems, and the grifter companies trying to sell router evaluation boards to eurotrash with more nostalgia than sense.

                                  I still think the Archimedes (due to a CPU fast enough it could just brute force its way to Amiga level graphics, ahead-of-its-time design, and influence on modern systems) and Macintosh (purely for software mouthfeel; Workbench and GEM are dire) are nicer systems, but the Amiga sucks all the oxygen from the room.

                                  1. 2

                                    I had to look up the Archimedes to refresh my memory (Acorn had zero presence in NA), though I’ve looked into RISC OS a bit in the past. I agree on the whole Amiga marketplace - scrabbling for scraps. Not sure I agree on Mac/Finder being better; I do agree GEM was not (though I’ve never used it on either MS-DOS or an Atari ST).

                                    I gave up on my Amiga (a 1200) when I sold it and bought a (faster) 386 system and switched to Linux. It wasn’t as nice (far from it), but it was more powerful. I even ran AmiWM for a while ;)

                                1. 2

                                  Are there still 6502 / Z80 / 68K chips being made? I haven’t heard of any modern-day hardware based on them.

                                  I do know the venerable 8051 is still popular for very-low-end embedded use cases, but even back in the day it was described as extremely awkward to program, so I’d be surprised if hobbyists used it! (But what do I know, I found the 6502 nearly impossible back in my teens, so I never got into assembly on my Apple II. My friend’s Z80-based system was easier, I thought.)

                                  1. 3

                                    Yes. 6502 chips are still being made/sold by Western Design Center (https://www.westerndesigncenter.com/). Z80s are still around in the Zilog Z80, Z180 and eZ80 lines. Freescale (descendant of Motorola) produces variants of the 68000/68020 as embedded products (or did until recently). There were also dozens of second sources and derivatives of these processors, and someone might still be selling those; I haven’t checked in a while.

                                    There are also specialty shops like Innovasic that recreate/reverse engineer old processors like this in FPGA/ASICs for long term support purposes. They aren’t cheap.

                                    I still use 8051s for hobby stuff. They’re weird but there’s, what, 50 years of tooling and experience to work with.

                                    1. 4

                                      Freescale (descendant of Motorola)

                                      Who are now part of NXP :)

                                      1. 1

                                        My dad, who worked for Zilog in their heyday circa 1980, would be happy to hear that.

                                        (He later worked for Xilinx, so he’d be happy about all the hobbyists using FPGAs too!)

                                      2. 1

                                        A Z80 variant is still being made IIRC, as is the 6502 (and 65816). There are of course plenty of FPGA implementations.

                                        1. 1

                                          You can buy 17 MHz (?) 65C02 chips for $8, brand new. They’re still being made. There’s a very interesting new board being made using them running at 8 MHz: https://www.commanderx16.com/

                                          8051 is not bad if most of your variables fit into 8 registers and everything fits into 256 bytes. Talking to more memory than that is pretty annoying.

                                          z80 and 6502 each have their own annoying quirks. With cunning you can hand-write some pretty tight stuff on either (in completely different ways) but both are awful for compiling C (or Pascal back then) to.

                                        1. 2
                                          • Build garden boxes
                                          • Move my saas app’s hosting from Linode to Fly
                                          • Demo data/account for said app
                                          • Since the box is (probably) going away at Linode, look at Dockerizing Stitcherd and getting its home page running on Fly.
                                          1. 3

                                            Had my eye on Fly for a while, but not used it yet. Would be interested in your thoughts once complete

                                            1. 1

                                              Yep, I am planning a write-up on the why and how. I have already brought a copy of my production database over (it’s tiny still - hence this is the time to move if it was going to move) and brought an instance of my app up on it, which was pretty simple and straightforward to get running. I’ve seen it described as a less black-boxy Heroku and I would agree with that.
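
                                              The whole flow was roughly this (from memory, so treat the exact subcommands as approximate):

                                                  fly launch            # generate fly.toml for the app
                                                  fly postgres create   # provision a small Postgres to copy data into
                                                  fly deploy            # build and ship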

                                              1. 1

                                                What is Fly? I tried googling “fly web host” but couldn’t find anything.

                                                1. 1

                                                  Ah, sorry I missed this! https://fly.io/ - a less black-boxy Heroku-style PaaS (I am tending to think about it as a hosted Hashicorp stack though)

                                                  1. 1

                                                    No worries! Ah interesting, thanks!

                                            1. 2

                                              One thing that’s helped me is having my whole dev environment set up with a Vagrant script, so I can use it in a VM on various work, home and cloud computers and still have access to all the tools I’m used to.

                                              Another useful thing is code-server, which is a browser interface for VSCode (which is already an Electron app so it’s not too hard to put in a browser). You can run it inside Vagrant (maybe on a cloud instance) and have access to a familiar IDE.
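
                                              A minimal sketch of that setup (the box name, port, and install one-liner are illustrative; code-server defaults to port 8080 and binds to 127.0.0.1, so you’d still need to point it at an address the forwarded port can reach):

                                                  # Vagrantfile
                                                  Vagrant.configure("2") do |config|
                                                    config.vm.box = "ubuntu/focal64"
                                                    # expose code-server to the host browser
                                                    config.vm.network "forwarded_port", guest: 8080, host: 8080
                                                    config.vm.provision "shell", inline: <<-SHELL
                                                      curl -fsSL https://code-server.dev/install.sh | sh
                                                    SHELL
                                                  end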

                                              1. 1

                                                That’s a fantastic idea! If I end up using VSCode at some point – I’ve bounced off it a few times, mostly because “emacs” :) – I may mimic that.

                                                1. 1

                                                  A little late to the discussion, but I recently switched to VSCode and I am using https://github.com/whitphx/vscode-emacs-mcx for key bindings. It’s not quite perfect of course, but nothing that trips me up on a day to day basis.

                                                  I’ve been using Emacs since ’97 or so off and on (mostly on - I used Eclipse when I was doing Java work) and so the muscle memory is quite hardwired at this point ;) I do miss recording macros on the fly (though I might guess something like that exists for VSCode already).

                                                  I do drop into a terminal and run emacs -nw or mg (usually for writing git commit messages).

                                              1. 1

                                                I am using CQRS/event sourcing (“lite”, I’d call it - using something mostly derived from https://kickstarter.engineering/event-sourcing-made-simple-4a2625113224) for a (very) few things at $DAYJOB, and yes, the ‘audit log for free’ is a large part of why I like it, but I also find it provides a nice pattern for defining a clean, loosely coupled API for updating the data models that is both reusable/composable and easier to test in isolation.
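
                                                The shape of it, roughly (hypothetical names, loosely following the linked article’s BaseEvent pattern rather than our actual code):

                                                    # An immutable event record plus an `apply` step that updates the aggregate.
                                                    class TaskCompleted < Events::BaseEvent
                                                      data_attributes :completed_at

                                                      # (aggregate, event) -> aggregate; testable in isolation,
                                                      # no controller or callback involved
                                                      def apply(task)
                                                        task.status       = "done"
                                                        task.completed_at = completed_at
                                                        task
                                                      end
                                                    end

                                                    # Callers just record the fact; persistence and projection live in one place.
                                                    TaskCompleted.create!(task: task, completed_at: Time.current)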

                                                1. 1

                                                  Company: Jobvite

                                                  Company site: https://www.jobvite.com/

                                                  Position(s): Lots: https://talent.jobvite.com/search/jobs

                                                  Location: Remote (Canada, US; we have people in the UK too); Onsite: Indianapolis IN, Kitchener ON, Bangalore India

                                                  Description: Applicant Tracking System and Recruitment Marketing software from SMB to Enterprise. From the careers site: “We are Jobvite and we are on a mission to help people and companies grow as we continue to build and expand the innovative SaaS solutions within our leading end-to-end talent acquisition suite. Jobvite empowers talent acquisition leaders to better understand recruitment data and improve recruiting results as they engage, hire, and grow diverse talent”

                                                  Tech stack: There are essentially two. The ATS is primarily Java, the RM suite is Ruby/Rails (and some React) (and a tiny bit of Go :) )

                                                  Contact: https://talent.jobvite.com/

                                                    1. 3

                                                      I’m not sure this is an April Fools’ joke. The discussion seems pretty reasonable, and it was proposed yesterday, not on April Fools’ Day.

                                                      1. 2

                                                        You just got Punk’d by Ian Lance Taylor! Bummer.

                                                        1. 1

                                                          Ian Lance Taylor proposing this seems pretty April Fools-ey, as he’s usually striking down new language change proposals. Also comments from Rob Pike. Though half of the people in the thread seem to be serious, maybe this is some kind of self-aware joke about how boring Go is?

                                                        2. 2

                                                          The submitter confirmed that this wasn’t intended as an April Fools’ Day joke.

                                                        1. 4

                                                          I am working on a SaaS startup to provide tools for boards of directors that will help improve the quality of the information they need to govern better, while reducing the amount of work for staff and management to provide that information. The first tool implemented is a simple risk registry to replace the spreadsheet file we were using (I am on the board of a small CU here in the Fraser Valley). Others are planned. Working on the marketing site for it and then the TOS and privacy policies.

                                                          1. 2

                                                            I am using a TOMOKO MMC023 (a cheap tenkeyless) and I have no complaints. I’ve been contemplating a 60%: a Geek-Customized SK64S from Banggood or an XD64 kit are the contenders. I don’t touch type, so ergonomic keyboards tend to be frustrating for me to use.

                                                            1. 8

                                                              I am one of several k8s admins at work and I really hate k8s. In the past I’ve been at another shop as a developer where we used DC/OS (marathon/mesos) which I found a lot easier from a developer perspective, but my own experiments with it made me want to stab that terrible Java scheduler that ate resources for no damn reason. (K8S is written in Go and is considerably leaner as far as resources, but a much bigger beast when it comes to config/deployment).

                                                              I’ve dabbled with Nomad before and I do know some advertising startups that actively use it for all their apps/jobs. If I was getting into the startup space again, I’d probably look at using it.

                                                              K8S is a hot mess of insane garbage. When it’s configured and running smoothly, a good scheduler helps a lot when doing deployments and rolling/zero-downtime updates. But these clusters tend to consume a lot of nodes, and it’s very difficult to go from 1 to 100 (having your simple proof of concept running on just one system and then scaling up to n, adding redundancy and masters). Some people talk about minikube or k3s, but they’re not true 0-to-scale systems.

                                                              I did a whole post on what I think about docker and scheduling systems a few years back:

                                                              https://battlepenguin.com/tech/my-love-hate-relationship-with-docker-and-container-orchestration-systems/

                                                              1. 4

                                                                You should look at juju. It uses LXC/LXD clustering to avoid a lot of the shortcomings of k8s (which are many and varied). Maybe Nomad is better, but it’s all expressed in a language named after the founding company. This in and of itself is enough reason to squint really hard and ask “why?”.

                                                                Also: https://github.com/rollcat/judo It’s like ansible, but written in Go and only for the most basic of all basic kinds of provisioning.

                                                                1. 3

                                                                  re: HCL

                                                                  I look at it this way. HCL is (from the README) a toolkit for building config languages, “inspired by libucl, nginx configuration, and others.” YAML is a pain to hand-edit when files get large (i.e. k8s). JSON is a pain too (no comments, for example - as an aside, why are we (still) using serialization formats for config files!?). TOML is… okay… but a bit strange to get the structure right. HCL brings consistency (mostly) across Hashicorp’s own products and, being open source, means others can adopt it as well.
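
                                                                  To illustrate (a made-up service block, not any particular product’s schema):

                                                                      # comments are first-class, unlike JSON
                                                                      service "web" {
                                                                        listen = ":8080"
                                                                        tags   = ["public", "rails"]
                                                                      }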

                                                                  1. 3

                                                                    My understanding is you can use JSON anywhere HCL is accepted by the tools, so if you’re generating config out of some other system you can emit JSON and not have to emit HCL.

                                                                    I much prefer writing HCL[2] for configuring things; it’s a little clearer than YAML (certainly fewer footguns) and supports comments, unlike JSON.
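
                                                                    e.g. a small hypothetical block maps over mechanically - labeled HCL blocks just become nested JSON objects:

                                                                        {
                                                                          "service": {
                                                                            "web": {
                                                                              "listen": ":8080",
                                                                              "tags": ["public", "rails"]
                                                                            }
                                                                          }
                                                                        }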

                                                                    1. 2

                                                                      It’s not the language itself that bothers me (it’s a little weird as I would rather use a more universally accepted solution, but that’s my personal preference and I do not impose that on anyone else). It’s that it is owned by a company that is known for taking products and making them closed and expensive. This is precisely what companies do, though, and it’s not too surprising. You can get an “enterprise” version of any product Hashicorp builds. The question remains: will HCL ever be forced into an “enterprise” category? Will it ever force users to accept a license that they do not agree with, or to pay to use it? YAML/JSON have the advantage of being community-built, so I doubt that will ever happen to them.

                                                                      I realize now that I’m grandstanding here and proclaiming the requirement of using FOSS – but I don’t wholeheartedly agree to that. I have no problem using proprietary software (I use several every day, in fact). I’m just remaining a little squinty-eyed at HCL specifically. I don’t know that I could bring myself to choose HCL for tasks at my day job for things that do not inherently require it.

                                                                      That brings me full circle back to my point: be careful; HCL is born from a commercial entity that may not always play nice. Hashicorp generally has in the past, but there are examples of how companies with the best intentions do not always keep their principles.

                                                                1. 1

                                                                  Forum software comes to mind.

                                                                  I know it’s not self hosted but there’s also nextdoor.com and your neighbors might already be using it.

                                                                  EDIT: I should have read all of the comments before posting.

                                                                  1. 5

                                                                    There are a lot of people in between the two camps not complaining either way. I want to like Wayland, really I do; i3 and thus Sway looked really slick and I was interested enough to try a few times, but came to the conclusion I don’t want a tiling window manager (regardless of the stack underneath it - why are almost all compositors tiling?!?).

                                                                    Here’s my point of view (all $0.02 worth of it): Wayland doesn’t do anything that is particularly better for me. The issues with the X11 code base/protocol are not an issue for me, so why should I switch? Some things seem like a step backwards for modularity too, e.g. every compositor needs to implement XYZ, so there’s a lot of duplicated effort (I know… wlroots).

                                                                    But then again, I’m just a 50-something curmudgeon that’s been running X11 for over 25 years with a cobbled-together workflow using WindowChef, sxhkd and some shell scripts. But Hikari is at least a little intriguing, and if it gets (more?) scriptable à la WindowChef I would give it another go.

                                                                    1. 2

                                                                      We use K8s at work (dev (but really it’s a royal pain for dev) and qa/integration - and it will be used for prod at some point). I spent some time playing with the Hashicorp stack over the holidays, and since @cadey’s recent posts about it I’ve been looking into Nix more lately.

                                                                      I am more than a little intrigued by the idea of Nix and NixOS (and I guess NixOps too), but the learning curve for it has been frustrating in that there are lots of example blog posts, but they all seem to be missing the bigger picture of how it all fits together (e.g. when I look at the repos associated with the posts, there is a plethora of .nix files but no sense of how/when things get called/resolved).

                                                                      I had little trouble getting the Hashicorp stuff running (up to and including Waypoint) and it was pretty nice to work with, but the infrastructure and overhead needed seem too much for a single-box side project.

                                                                      I would love to experiment with Nix for said side projects, but my VPS provider of choice would not make it easy to use without manual intervention. So I spent yesterday filling in some notes on a process supervisor/deployment framework that I’ve been thinking about for a while (often saying to myself it’s not needed if I have X) that will run on a Plain Old Linux Box but still allow for automation around builds and deploys.

                                                                      1. 2

                                                                        We use K8s at work (dev (but really it’s a royal pain for dev) and qa/integration - and it will be used for prod at some point). I spent some time playing with the Hashicorp stack over the holidays, and since @cadey’s recent posts about it I’ve been looking into Nix more lately.

                                                                        I think taking up Nix and immediately trying to use it in such a context is a lot to take in and will most likely end in frustration. Although NixOS is a Linux distribution, it is not like any other distribution, and switching to NixOS pretty much amounts to learning an alien Unix system. Sure, the regular Unix commands are there, but otherwise the system is not like any other Linux distribution. If you want to use Nix in more complex setups, there is no way around learning the language (easy if you are familiar with FP, otherwise harder) and how nixpkgs works. Sooner or later, you will have to write your own derivations, and that will be painful if you do not know the language and nixpkgs functions and conventions.

                                                                        A good place to start is using Nix as an extra package manager on a local Linux or Mac workstation/laptop. This not only gives you access to many packages (more than most other package sets), but also gives you ad-hoc development shells (with nix-shell). To make it less ad-hoc you can create per-project default.nix or shell.nix files. Basically using Nix as a way to manage virtual environments, but for any package. These don’t require you to buy into Nix immediately, and you can just use it on the side. If you don’t like it, no harm done. Blow away ~/.nix-profile and uninstall Nix.
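
                                                                        e.g. a per-project shell.nix can be as small as this (the package names here are just examples):

                                                                            # enter with `nix-shell`
                                                                            { pkgs ? import <nixpkgs> {} }:
                                                                            pkgs.mkShell {
                                                                              buildInputs = [ pkgs.ruby pkgs.go pkgs.postgresql ];
                                                                            }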

                                                                        A good way of playing with declarative configuration, without immediately switching to NixOS, is to use Home Manager. It allows you to declaratively configure your home environment (basically dot files + packages). It gives you some of the benefits of NixOS, without immediately jumping into a foreign environment. If you like Home Manager, you’ll like NixOS and it’s a good point to jump ship.

                                                                      1. 1

                                                                        I can’t really vouch for it (since I’ve used it for a sum total of about 30 minutes) but https://k6.io/ was pretty easy to get going.

                                                                        1. 2

                                                                          That’s about how long I’ve used k6 for as well! We evaluated it before writing our own tool; we tried so hard to avoid writing our own tool. The scenario we have is basically: we open a WebSocket and also do HTTP calls, and we have to trigger those HTTP calls based on the WebSocket, but we also have a periodic background task that runs every 30 seconds (heartbeat). I couldn’t get all of that to fit into k6, but it could be I missed something in the docs.

                                                                        1. 26

                                                                          Pro tip: this applies to you if you’re a business too. Kubernetes is a problem as much as it is a solution.

                                                                          Uptime is achieved by having more understanding of and control over the deployment environment, but Kubernetes takes that away. It attracts middle managers and CTOs because it seems like a silver bullet without getting your hands dirty, but in reality it introduces so much chaos and indirection into your stack that you end up worse off than before, and all the while you’re emptying your pockets for this experience.

                                                                          Just run your shit on a computer like normal, it’ll work fine.

                                                                          1. 9

                                                                            This is true, but let’s not forget that Kubernetes also has some benefits.

                                                                            Self-healing. That’s what I miss the most with a pure NixOS deployment. If the VM goes down, it requires manual intervention to be restored. I haven’t seen good solutions proposed for that yet. Maybe uptimerobot triggering the CI when the host goes down is enough. Then the CI can run terraform apply or some other provisioning script.

                                                                            Zero-downtime deployment. This is not super necessary for personal infrastructures but is quite important for production environments.

                                                                            Per-pod IPs. It’s quite nice not to have to worry about port clashes between services. I think this can be solved by using IPv6, as each host automatically gets a range of IPs to play with.

                                                                            Auto-scaling. Again not super necessary for personal infrastructure but it’s nice to be able to scale beyond one host, and not to have to worry on which host one service lives.

                                                                            1. 6

                                                                              Has anyone tried using Nomad for personal projects? It has self-healing, and with the raw exec driver one can run executables directly on NixOS without needing any containers. I have not tried it myself (yet), but would be keen on hearing about others’ experiences.
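
                                                                              For reference, a raw exec job is only a few lines of HCL (a sketch with made-up names; note the raw_exec driver is disabled by default and has to be enabled in the client config):

                                                                                  job "hello" {
                                                                                    datacenters = ["dc1"]
                                                                                    group "app" {
                                                                                      task "hello" {
                                                                                        # runs directly on the host: no container runtime needed
                                                                                        driver = "raw_exec"
                                                                                        config {
                                                                                          # hypothetical path to a Nix-built binary
                                                                                          command = "/run/current-system/sw/bin/hello"
                                                                                        }
                                                                                      }
                                                                                    }
                                                                                  }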

                                                                              1. 3

                                                                                I am experimenting with the Hashicorp stack while off for the holidays. I just brought up a Vagrant box (1GB RAM) with Consul, Docker and Nomad running (no jobs yet) and the overhead looks okay:

                                                                                              total        used        free      shared  buff/cache   available
                                                                                Mem:          981Mi       225Mi       132Mi       0.0Ki       622Mi       604Mi
                                                                                Swap:         1.9Gi       7.0Mi       1.9Gi
                                                                                

                                                                                but probably too high to fit Postgres, Traefik or Fabio and a Rails app into it as well; 2GB will probably be lots (I am kind of cheap, so the fewer resources the better).

                                                                                I have a side project running in ‘prod’ using Docker (for Postgres and my Rails app) along with Caddy running as a systemd service, but it’s kind of a one-off machine, so I’d like to move towards something like Terraform (next up on the list to get running) for bring-up, and Nomad for the reasons you want something like that.

                                                                                But… the question that does keep running through the back of my head: do I even need Nomad/Docker? For a prod env? Yes, it’s probably worth the extra complexity and overhead, but for personal stuff? Probably not… Netlify, Heroku, etc. are pretty easy and offer free tiers.

                                                                                1. 1

                                                                                  I was thinking about doing this but I haven’t done due diligence on it yet. Mostly because I only have 2 droplets right now and nobody depends on what’s running on them.

                                                                                2. 1

                                                                                  If you’re willing to go the Amazon route, EC2 has offered most of that for years. Rather than using the container as an abstraction, treat the VM as a container: run one main process per VM. And you then get autoscaling, zero downtime deploys, self-healing, and per-VM IPs.
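
                                                                                  In Terraform terms the building blocks look roughly like this (a sketch, not a complete config; the variables are assumed to exist):

                                                                                      resource "aws_launch_template" "app" {
                                                                                        image_id      = var.ami_id   # AMI with the one main process baked in
                                                                                        instance_type = "t3.small"
                                                                                      }

                                                                                      resource "aws_autoscaling_group" "app" {
                                                                                        min_size            = 2
                                                                                        max_size            = 10
                                                                                        desired_capacity    = 2
                                                                                        vpc_zone_identifier = var.subnet_ids
                                                                                        health_check_type   = "ELB"  # unhealthy VMs get replaced automatically

                                                                                        launch_template {
                                                                                          id      = aws_launch_template.app.id
                                                                                          version = "$Latest"
                                                                                        }
                                                                                      }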

                                                                                  TBH I think K8s is a step backwards for most orgs compared to just using cloud VMs, assuming you’re also running K8s in a cloud environment.

                                                                                  1. 2

                                                                                    That’s a good point. And if you don’t care about uptime too much, autoscaling + spot instances is a pretty good fit.

                                                                                    The main downside is that a load balancer alone is already ~$15/month if I remember correctly. And the cost can explode quite quickly on AWS. It takes quite a bit of planning and effort to keep the cost super low.

                                                                                3. 5

                                                                                  IMO, Kubernetes’ main advantage isn’t in that it “manages services”. From that POV, everything you say is 100% spot-on. It simply moves complexity around, rather than reducing it.

                                                                                  The reason I like Kubernetes is something entirely different: It more or less forces a new, more robust application design.

                                                                                  Of course, many people try to shoe-horn their legacy applications into Kubernetes (the author running git in K8s appears to be one example), and this just adds more pain.

                                                                                  Use K8s for the right reasons, and for the right applications, and I think it’s appropriate. It gets a lot of negative press for people who try to use it for “everything”, and wonder why it’s not the panacea they were expecting.

                                                                                  1. 5

                                                                                    I disagree that k8s forces more robust application design; fewer moving parts are usually a strong indicator of reliability.

                                                                                    Additionally, I think k8s removes some of the pain of microservices–in the same way that a local anesthetic makes it easier to keep your hand in boiling water–that would normally help people reconsider their use.

                                                                                  2. 5

                                                                                    And overhead. Those monster YAML files are absurd on so many levels.

                                                                                    1. 2

                                                                                      Just run your shit on a computer like normal, it’ll work fine.

                                                                                      I think that’s an over-simplification. @zimbatm’s comment makes good points about self-healing and zero-downtime deployment. True, Kubernetes isn’t necessary for those things; an EC2 auto-scaling group would be another option. But one does need something more than just running a service on a single, fixed computer.

                                                                                      1. 3

                                                                                        But one does need something more than just running a service on a single, fixed computer.

                                                                                        I respectfully disagree… I worked at a place which made millions over a few years with a single comically overloaded DO droplet.

                                                                                        We eventually made it a little happier by moving to hosted services for Mongo and giving it a slightly beefier machine, but otherwise it was fine.

                                                                                        The single machine design made things a lot easier to reason about, fix, and made CI/CD simpler to implement as well.

                                                                                        Servers with the right provider can stay up pretty well.

                                                                                        1. 2

                                                                                          I don’t see how your situation/solution negates the statement.

                                                                                          You’ve simply traded one “something” (Kubernetes) with another (“the right provider”, and all that entails–probably redundant power supplies, network connections, hot-swappable hard drives, etc, etc).

                                                                                          The complexity still exists, just at a different layer of abstraction. I’ll grant you that it does make reasoning about the application simpler, but it makes reasoning about the hardware platform, and peripheral concerns, much more complex. Of course that can be appropriate, but it isn’t always.

                                                                                          I’m also unsure how a company’s profit margin figures into a discussion about service architectures…

                                                                                          1. 5

                                                                                            I’m also unsure how a company’s profit margin figures into a discussion about service architectures…

                                                                                            There is no engineering without dollar signs in the equation. The only reason we’re being paid to play with shiny computers is to deliver business value–and while I’m sure a lot of “engineers” are happy to ignore the profit-motive of their host, it is very unwise to do so.

                                                                                            I’ll grant you that it does make reasoning about the application simpler, but it makes reasoning about the hardware platform, and peripheral concerns, much more complex.

                                                                                            That engineering still has to be done, if you’re going to do it at all. If you decide to reason about it, do you want to be able to shell into a box and lay hands on it immediately, or hope that your k8s setup hasn’t lost its damn mind in addition to whatever could be wrong with the app?

                                                                                            You’ve simply traded one “something” (Kubernetes) with another (“the right provider”, and all that entails–probably redundant power supplies, network connections, hot-swappable hard drives, etc, etc).

                                                                                            The complexity of picking which hosting provider you want to use (ignoring colocation issues) is orders and orders of magnitude less than learning and handling k8s. Hosting is basically a commodity at this point, and barring the occasional amazingly stupid thing among the common names, there’s a baseline of competency you can count on.

                                                                                            People have been sold this idea that hosting a simple server means racking it and all the craziness of datacenters and whatnot, and it’s just a ten spot and an ssh key and you’re like 50% of the way there. It isn’t rocket surgery.

                                                                                          2. 2

                                                                                            Servers with the right provider can stay up pretty well.

                                                                                            I was one of the victims of the DDoS that hit Linode on Christmas Day (edit: in 2015; didn’t mean to omit that). DO and Vultr haven’t had perfect uptime either. So I’d rather not rely on single, static server deployments any more than I have to.

                                                                                            1. 1

                                                                                              Can you share more details about this?

                                                                                              I’ve always been impressed by teams/companies maintaining a very small fleet of servers but I’ve never heard of any successful company running a single VM.

                                                                                              1. 4

                                                                                                It was a boring little Ubuntu server if I recall correctly, I think like a 40USD general purpose instance. The second team had hacked together an impressive if somewhat janky system using the BEAM ecosystem, the first team had built the original platform in Meteor, both ran on the same box along with Mongo and supporting software. The system held under load (mostly, more about that in a second), and worked fine for its role in e-commerce stuff. S3 was used (as one does), and eventually as I said we moved to hosted options for database stuff…things that are worth paying for. Cloudflare for static assets, eventually.

                                                                                                What was the business environment?

                                                                                                Second CTO and fourth engineering team (when I was hired) had the mandate to ship some features and put out a bunch of fires. Third CTO and fifth engineering team (who were an amazing bunch and we’re still tight) shifted more to features and cleaning up technical debt. CEO (who grudgingly has my respect after other stupid things I’ve seen in other orgs) was very stingy about money, but also paid well. We were smart and well-compensated (well, basically) developers told to make do with little operational budget, and while the poor little server was pegged in the red for most of its brutish life, it wasn’t drowned in bullshit. CEO kept us super lean and focused on making the money funnel happy, and didn’t give a shit about technical features unless there was a dollar amount attached. This initially was vexing, but after a while the wisdom of the approach became apparent: we weathered changes in market conditions better without a bunch of outstanding bills, we had more independence from investors (for better or worse), and honestly the work was just a hell of a lot more interesting due in no small part to the limitations we worked under. This is key.

                                                                                                What problems did we have?

                                                                                                Support could be annoying, and I learned a lot about monitoring on that job during a week where the third CTO showed me how to set up Datadog and similar tooling to help figure out why we had intermittent outages–the eventual solution was a cronjob to kill off a bloated process before it became too poorly behaved and brought down the box. The thing is, though, we had a good enough customer success team that I don’t think we even lost that much revenue, possibly none. That week did literally have a day or two of us watching graphs and manually kicking over stuff just in time, which was a bit stressful, but I’d take a month of that over sitting in meetings and fighting matrix management to get something deployed with Jenkins onto a half-baked k8s platform and fighting with Prometheus and Grafana and all that other bullshit…as a purely random example, of course. >:|

                                                                                                The sore spots we had were basically just solved by moving particular resource-hungry things (mainly the database) to hosting–the real value of which was having nice tooling around backups and monitoring, and which moving to k8s or similar wouldn’t have helped with. And again, it was only after a few years of profitable growth that traffic hit a point where that migration even seemed reasonable.

                                                                                                I think we eventually moved off of the droplet and onto an Amazon EC2 instance to make storage tweaks easier, but we weren’t using them in any way different than we’d use any other barebones hosting provider.

                                                                                                1. 4

                                                                                                  Did that one instance ever go completely down (becoming unreachable due to a networking issue also counts), either due to an unforeseen problem or scheduled maintenance by the hosting provider? If so, did the company have a procedure for bringing a replacement online in a timely fashion? If not, then I’d say you all just got very lucky.

                                                                                                  1. 1

                                                                                                    Yes, and yes–the restart procedure became a lot simpler once we’d switched over to EC2 and had a hot spare available…but again, nothing terribly complicated, and we had runbooks for everything because of the team dynamics (notice the five generations of engineering teams over the course of about as many years?). As a bonus, in the final generation I was around for, we were able to hire a bunch of juniors and actually teach them enough to level them up.

                                                                                                    About this “got very lucky” part…

                                                                                                    I’ve worked on systems that had to have all of the 9s (healthcare). I’ve worked on systems, like this, that frankly had a pretty normal (9-5, M-F) operating window. Most developers I know are a little too precious about downtime–nobody’s gonna die if they can’t get to their stupid online app, most customers–if you’re delivering value at a price point they need and you aren’t specifically competing on reliability–will put up with inconvenience if your customer success people treat them well.

                                                                                                    Everybody is scared that their stupid Uber-for-birdwatching or whatever app might be down for a whole hour once a month. Who the fuck cares? Most of these apps aren’t even monetizing their users properly (notice I didn’t say customers), so the odd duck that gets left in the lurch gets a hug and a coupon and you know what–the world keeps turning!

                                                                                                    Ours is meant to be a boring profession with simple tools and innovation tokens spent wisely on real business problems–and if there aren’t real business problems, they should be spent making developers’ lives easier and lowering business costs. I have yet to see k8s deliver on any of this for systems that don’t require lots of servers.

                                                                                                    (Oh, and speaking of…is it cheaper to fuck around with k8s and all of that, or just to pay Heroku to do it all for you? People are positively baffling in what they decide to spend money on.)

                                                                                                  2. 1

                                                                                                    eventual solution was a cronjob to kill off a bloated process before it became too poorly behaved and brought down the box … That week did literally have a day or two of us watching graphs and manually kicking over stuff just in time, which was a bit stressful,…

                                                                                                    It sounds like you were acting like human OOM killers, or more generally speaking manual resource limiters of those badly-behaved processes. Would it be fair to say that sort of thing would be done today by systemd through its cgroups resource management functionality?

                                                                                                    1. 1

                                                                                                      We probably could’ve solved it through systemd with Limit* settings–we had that available at the time. For us, we had some other things (features on fire, some other stuff) that took priority, so just leaving a dashboard open and checking it every hour or two wasn’t too bad until somebody had the spare cycles to do the full fix.
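
                                                                                                      (For what it’s worth: the Limit* directives are classic per-process rlimits; the cgroup knobs the parent comment asks about are directives like MemoryMax= and CPUQuota=. A sketch, with a made-up service path:)

                                                                                                          [Service]
                                                                                                          ExecStart=/usr/local/bin/bloated-process
                                                                                                          # cgroup memory ceiling; the kernel OOM-kills the unit above this
                                                                                                          MemoryMax=512M
                                                                                                          # cap the unit at half of one CPU
                                                                                                          CPUQuota=50%
                                                                                                          Restart=on-failure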

                                                                                          1. 10

                                                                                            There are more reasons to not buy a new laptop: in most cases, you are forced to buy a license for an unwanted proprietary operating system. Laptop CPUs are also equipped with malicious spyware technologies like Intel AMT/ME. By buying a new laptop, you vote with your money for these legacy and harmful technologies and unethical practices.

                                                                                             I am looking forward to laptops with OpenPOWER and RISC-V processors… Until then, I will probably not buy a new laptop.

                                                                                             A less harmful / compromise solution might be a DIY modular “laptop” consisting of a portable display, keyboard, mouse, powerbank and a single-board computer. It is all reusable components; SBCs are cheap and you can upgrade the parts as needed.

                                                                                            1. 3

                                                                                               There are such machines available now: Puri.sm, System76, Pinebook Pro, MNT Reform, EOMA68, even the Novena - the last three on Crowd Supply - and the Dell XPS 13 (but only for the preinstalled Linux). There’s even a group working on a PPC notebook.

                                                                                               I have spent a comparable amount of money as the author over roughly the same time period (~5000 CAD: an nx7010 (2003), a black MacBook (2009), and a 16G 2012 Retina MBP). The MBP is still running fine, but the other day I hit my first resource limitation… storage space (Docker for Mac needs a bunch). Yes, I can order a bigger SSD (and probably should; 8.5 years is a long time for an SSD), but my wife is thinking about a machine (she only uses her 5-year-old iPad and phone), so I am looking at a new machine and would hand the MBP down to her.

                                                                                               Thinking about the System76 Lemur. It’s user-upgradable (assuming it’s not maxed out at order time), serviceable (i.e. you can replace the battery), has the IME disabled, comes with Coreboot and Linux preinstalled, and has reasonable specs otherwise. Whatever machine I order (if I do), I am targeting a 5-8 year lifespan for it.

                                                                                              1. 5

                                                                                                 ME disablement is misleading. It removes a lot of the functionality of the ME, but it’s still running, has to be running, and is still unauditable and unreplaceable (for now) code in the boot path (since it does bringup).

                                                                                                1. 2

                                                                                                  Yep but if it can’t be the ideal, then it should be as close as possible to the ideal. Ironically I am typing this on an even more closed system.

                                                                                                   As an update, I’ve ordered upgrades (4GB RAM and a 256GB SSD) for my wife’s white MacBook (circa 2008) and a new (larger) SSD for my MBP. I’ll put Linux on the MacBook so she’ll have something current software-wise and I won’t be running out of space. I expect to get at least a couple more years out of each.