1. 1


    • Starting an integration against Xero (financial/bookkeeping system) to get our financial data exported. Lots of work around internal UX, finding customers who want to beta test and figuring out what our “MVP” will be.
    • Doing some mentorship for a (somewhat junior) co-worker, seeing lots of progress and feeling helpful :)


    • Helping my girlfriend set up a personal site. Surprised by how rusty you get when you have to set up the “basics”; seemingly I have forgotten how the basic stuff works. Also, doing a simple 3-page frontend design and looking at HTML5 Boilerplate (and wondering where it all went wrong…)
    • Setting up the same server for some easy future deployments/services/projects, considering using Nomad (based on watching the recent post here)
    1. 4

      This speaks from my heart. I’ve never missed pattern matching support in Python, and it will probably lead to a lot of confusion down the road.

      1. 7

        Pattern matching can be really powerful, but mixed with Python’s general move away from functional programming (such as hiding reduce away in functools), it might very well turn out to be a double-edged sword.

        1. 5

          You can move reduce, but you can never take it away. All languages are converging on the same bag of features: gradual typing, GADTs, CTFE/const expr, pattern matching.

          Basically every language will crib from ML until ML is everywhere.

          You also set the direction of Python and how it approaches functional programming and idioms. The functional language you want is already here.

          The fact that we don’t need to use MacroPy to enable pattern matching is pretty nice. Python 3.10 feels like the 2.1/2.2 release did.

      1. 1

        SQL is modeled on top of relational algebra, which underpins relational databases - and then implementations put a ton of new features on top. There is a solid mathematical foundation for how things should work, and then there’s 50+ years of experience developing software on top of that foundation.

        SQL is not pretty or slick, but it’s powerful. There are a lot of ORMs and other database engines/layers that can simplify it, and 95% of the time it’s perfectly performant because you do not need crazy queries. You can have entire, fully-fledged web platforms that have nothing but straightforward queries.

        But when you do need the power, to pull out a bunch of data and compare it and manipulate it at the source - lest you pull in 2GB of data that needs to be aggregated in-memory for a single HTTP request - it really shines. And I would argue that any developer worth their salt should know SQL well, at least if they have more than a few years of experience. Good data modeling and knowing how your database works (not all relational databases are created equal) can go a long way, even with sub-par code architecture. I also think that’s the same reason SQL is still going strong after all these years - a lot of other query languages fall short at a certain complexity or are specialized for very specific use cases.

        I have seen too many N+1 queries and other completely insane queries in otherwise nice codebases, where the developer has pulled out all the stops to make the code efficient, when it could be fixed by altering the query for a 99% performance increase.
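
        As a sketch of the N+1 shape (using sqlite3 purely as a stand-in; the table and column names are made up):

```python
# Illustration of the N+1 problem and its fix, using in-memory sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'ada'), (2, 'bob');
    INSERT INTO posts VALUES (1, 1, 'first'), (2, 1, 'second'), (3, 2, 'third');
""")

# N+1: one query for the authors, then one query *per author*.
authors = conn.execute("SELECT id, name FROM authors").fetchall()
for author_id, name in authors:
    posts = conn.execute(
        "SELECT title FROM posts WHERE author_id = ?", (author_id,)
    ).fetchall()

# Fixed: a single JOIN pulls the same data in one round trip.
rows = conn.execute("""
    SELECT a.name, p.title
    FROM authors a JOIN posts p ON p.author_id = a.id
""").fetchall()
print(rows)
```

With a real network-attached database, each of those per-author round trips costs milliseconds, which is where the dramatic speedups come from.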

        1. 5


          I don’t trust anyone else. They go broke or get scared off, like NetHack for Windows CE. I don’t know why anyone hosts on shared platforms or why they even exist.

          1. 17

            A couple of reasons:

            • no stable Internet connectivity at home
            • no stable energy at home
            • no money to buy servers (not everything can run on a Raspberry Pi)
            • hosting services is against the ISP’s TOS (very common in my country)
            • managing email deliverability is tedious, and it’s not even always technical
            • managing disk failure costs some money too (you have to pay for extra disks)
            1. 0

              Well, cellphone connectivity isn’t so great either. But things like email have a several-day delay model baked in. I front with Office 365, as I too have a life and couldn’t be bothered with lists and all that crap. My Exchange server has the PPTP RAS set to dial with the default gateway on the VPS, so it sends out via the “smart host”, and I only allow the MS servers inbound. I’ve been doing it for years now.

              Since my stuff is the PPTP client, how does the ISP know I’m hosting? Simple: they don’t. No connections go to my home; they all go to the VPS, which in turn forwards to the PPTP client.

              Power and servers are so cheap, along with virtualization… a Xeon E5 v2 board/CPU/16 GB of RAM is sub-$100 USD. It’s trivial.

            2. 7

              To save time! You pay someone money to spend their time on it. With my day job I make enough to pay for these services, leaving me with time to spend with my family and doing leisure activities (reading, playing games, social events). There are some things I do for fun and to learn, but I can pick and choose those.

              If you truly trust no one, then of course self-hosting is the only option. For me, there are quite a few companies I’ll happily trust with my projects.

            1. 8

              I feel there’s a fine line between breaking up functions for no reason and adding clarity. If you have a 2-liner of code that is 2 × 10 chained function calls, wrapping it up in a function called extractXfromY implicitly explains what the code does. Taking 5 lines of simple variable assignments, arithmetic, database calls or similar and pushing them into a separate class does little more than lower the line count of your functions.

              Just as line count is a terrible indicator of productivity and progress, it is also a terrible indicator of code quality. A 100 line function that has a clear purpose, with a few inline comments sprinkled into it, might be vastly preferable to unraveling the same function by having to jump back and forth between the “main” function and smaller bites of code just to figure out what is happening. As with all things, it depends.
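
              A tiny (hypothetical) illustration of the first case - the helper’s name does the explaining:

```python
# Hypothetical example: the chained expression gets a name that says
# what it does, so call sites read as intent instead of mechanics.
def extract_active_emails_from_users(users):
    return sorted(u["email"].strip().lower() for u in users if u.get("active"))

users = [
    {"email": "  Ada@Example.com ", "active": True},
    {"email": "bob@example.com", "active": False},
]
print(extract_active_emails_from_users(users))  # ['ada@example.com']
```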

              1. 1

                I agree that SLOC is not a perfect metric, and of course a reasonable SLOC looks different in different languages. In a language with tonnes of boilerplate, a 4-line function might be unachievable. In a higher-level language, a 100-line function/procedure/class/thing would be madness. It depends on context.

              1. 4

                I feel that paragraphs 2-4 will, in practice, restrict the licensed software from being used by most for-profit companies of more than a handful of people. I do not know of any larger companies where every employee/worker/owner has an equal vote (and ditto for equal equity), but I’d love to hear of such companies.

                Given the name of the license, this does seem to be the intended purpose, so I guess the license is just right for its, admittedly small, audience.

                1. 4

                  The author seems to have broadly syndicalist views (similar to my own). There are a few syndicalist companies around – there’s a games company that has been doing fairly well in that regard; I can’t recall the name offhand.

                  1. 1

                    My (of course limited) experience says that all successful software co-operatives of 2 or more equal people sooner or later grow into a “proper” company with employees.

                    Example from Germany: 2 or more people form a “GbR” and are personally liable. Very, very often, after a few years this will be transformed into a “GmbH” (roughly like an LLC, afaik) once you have employees or want to take on contracts from bigger companies. This is very common, and I wouldn’t even hold it against the founders; there are so many reasons why you wouldn’t want personal liability once the company grows. Oh, I don’t think this would conflict with 2d or 3 in general, but it seems very, very rare. And now I really wonder why, or if I just haven’t noticed those companies.

                  1. 2

                    So no downvote possible at all?

                    1. 5

                      “Downvotes” were never possible really, it was always intended as a way to flag that there is a (significant) problem with the post, rather than “I don’t agree” or “I don’t like it”.

                      1. 3

                        I think the gist is that you either agree with / appreciate a comment and upvote it, or you think it’s inflammatory / misleading / trolling enough that you flag it. Disagreement was never the intended reason for downvoting comments.

                      1. 19

                        We have been running our own self-hosted kubernetes cluster (using basic cloud building blocks - VMs, network, etc.), starting from 1.8 and have upgraded up to 1.10 (via multiple cluster updates).

                        First off, don’t ever self-host kubernetes unless you have at least one full-time person for a small cluster, and preferably a team. There are so many moving parts, weird errors, missing configurations and that one error/warning/log message you don’t know exactly what means (okay, multiple things).

                        From a “user” (developer) perspective, I like kubernetes if you are willing to commit to the way it does things. Services, deployments, ingress and such work nicely together, and spinning up a new service for a new pod is straightforward and easy to work with. Secrets are so-so, and you likely want to do something else (like Hashicorp Vault) unless you only have very simple apps.

                        RBAC-based access is great in theory, but the documentation is poor and you end up having to cobble things together until it works the way you want. A lot of defaults are basically unsafe if you run any container with code you don’t control 100%, but such is life when running privileged docker containers. There are ways around it, but auditing and tweaking all of this to be The Right Way™️ suddenly adds a lot of overhead to your non-app development time.

                        To re-iterate, if you can ignore the whole “behind the scenes” part of running kubernetes, it’s not too bad and you get a lot of nice primitives to work with. They’ll get you 90% of the way and let you have a working setup without too much hassle, but as with everything else the last 10% takes the other 90% of the time to get it just where you want it - granular network control between pods, blue/green or other “non-standard” deployment methods.

                        1. 6

                          I think the vast majority of web and/or system developer jobs where the main task is “feature development” can rely on the built-in structures in most languages (maps, lists and sets using various implementations) as long as they use common developer sense, but I have come across quite a few bottlenecks where lack of knowledge of the underlying data store (whether SQL or not) was causing trouble.

                          I find SQL DB internals fascinating, like tuning a formula 1 race car, so I am likely biased, but it seems like a lot of developers have some simple-but-crude notions of how a database works, with very little understanding of what it takes when you start seeing bigger amounts of data and/or more activity (“I put an index on it, why didn’t it automatically improve the performance on my 100m rows, 300 column wide DB when I select all fields on 20% of the rows?”)
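
                          To make that index point concrete, here’s a sketch using sqlite3 (table and index names are made up); EXPLAIN QUERY PLAN shows whether the planner actually uses an index:

```python
# Sketch: check whether SQLite's planner actually uses an index.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, kind TEXT, payload TEXT)"
)
conn.execute("CREATE INDEX idx_kind ON events (kind)")

# A selective equality predicate: the planner picks the index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE kind = 'click'"
).fetchall()
print(plan)

# But an index is not magic: SELECT * over 20% of a wide table still has
# to fetch every matching row from the table itself, so the win is much
# smaller than people expect.
```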

                          1. 8

                            A few things here might be easier with ZFS; here are some constructive recommendations based on personal experience. (Note that I haven’t used btrfs in a while, so my advice is going to be fairly one-sided toward ZFS.)


                            You do actually get bitrot repair with ZFS if you use either RAIDz or a mirror. Hardware RAID won’t defend against silent data corruption, which is almost negligent to ignore on spinning rust drives for long-term backups. ZFS lets you choose whether to use a mirror or RAIDz and will still repair corrupted data if a good copy exists somewhere. And, even if you use ZFS on a single drive (don’t), it will at least refuse to read the corrupted data.


                            For ZFS, I use Sanoid for snapshots, and Syncoid (part of Sanoid) for synchronization. This uses zfs send, is a ton more efficient than rsync, and is easily configured in NixOS. rsync.net and datto.com are two services that let you use them as zfs send targets over ssh. Personally, I have a remote dedicated backup server that I send to over a VPN to make sure the “1” part of the 3-2-1 rule works.


                            I think this is just specific to the filesystems you’re used to…

                            There’s an issue when you choose to encrypt backups: if data becomes corrupted, it’s hard to recover anything without a copy.

                            Not the case with ZFS on Linux 0.8. You get all the anti-corruption guarantees ZFS already gives you if you create an encrypted dataset. Also, you can zfs send -w or syncoid --sendoptions=w and get end-to-end encrypted backups where the backup target doesn’t have to know the key to your dataset to receive it. This is what the future is like.

                            File organization

                            I’d organize my top-level folders (music/documents/pictures/etc) with datasets in ZFS, which are all exposed as mountpoints on Linux or drives on Windows. Then, I’d add folders under them like normal volumes. You can also have different replication schemes/snapshot schemes/record sizes/compression methods/encryption methods for each to tune them to different workloads.

                            Take joy in knowing your information is safe probably until the day you die!

                            I’d argue that’s a [citation needed] unless you’re using a filesystem with built-in checksumming. NTFS is not one of those filesystems. Give ZFS a try, and truly have your information be safe until the day you die :-)

                            1. 9

                              While I agree (in theory, I haven’t used ZFS), what you’re describing does not seem to include the “… for mortals” part.

                              Most people who own a computer and care enough to have backups should be able to set up 2 hard drives and use Backblaze. The article even recommends getting a hard drive enclosure rather than fiddling with internal hard drives, to remove as much friction as possible.

                              1. 2

                                Fair! Though, I’ve seen the Raspberry Pi 4 perform quite well with a 64-bit Linux (specifically, NixOS 20.09pre), ZFS, and a dual-drive external USB 3.0 bay. And ZFS on Windows is also a thing now, although caveat emptor with it until it’s more stable. So, maybe not for mortals quite yet, but getting there for people who are willing to get familiar with the Raspberry Pi and the ZFS command line. Which you could argue is still not “mortals” :-)

                                I just question the lack of data checksums for long-term storage. FS checksumming is actually pretty important for this, and if you just store two copies of your data without checksums, you have no idea which is the “right” one if a cosmic ray happens to bitflip one of your drives. So maybe a better suggestion if you’re stuck on something like NTFS would be to store an (also mirrored) SHA256SUMS file with your data.
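
                                A sketch of that last suggestion, assuming plain Python and hashlib (the manifest name and layout mimic the common SHA256SUMS convention):

```python
# Sketch: write a SHA256SUMS-style manifest for a folder so you can
# later detect which copy a bitflip landed in.
import hashlib
from pathlib import Path

def sha256_of(path, chunk_size=1 << 20):
    # Hash in chunks so large media files don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(root, manifest="SHA256SUMS"):
    root = Path(root)
    # Collect hashes before writing, so the manifest doesn't list itself.
    lines = [
        f"{sha256_of(p)}  {p.relative_to(root)}"
        for p in sorted(root.rglob("*"))
        if p.is_file()
    ]
    (root / manifest).write_text("\n".join(lines) + "\n")
```

Store the manifest on both mirrors; if the copies ever disagree, the side whose hashes still match the manifest is the good one.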

                                1. 4

                                  My personal feeling is that most people do not care if a single file / photo becomes corrupt or has a glitch in it. For documents, images, audio and more, the program you use to consume it will happily fill in a blank or maybe show a weird symbol or play an odd noise.

                                  In many ways, backups are like password management: There’s the good/right way to do it, and then there’s the “good enough” way for most people, which balances “will actually be used” with “good enough protection”.

                                  Nobody who just barely cares enough about backups is going to set up a Raspberry Pi or do anything beyond what this article recommends. There’s a reason a lot of people use a NAS for backups: you plug in a bunch of disks (often included in the purchase), set it up using a nice web UI, and it’s up and running.

                                  1. 3

                                    I haven’t used ZFS with it but the rpi 4 is indeed a seriously impressive server given its size and cost!

                                    I haven’t gone the external USB route, and instead use NFS-mounted filesystems from my NAS over gigabit ethernet. I get great performance with that. I should try an external USB drive and compare with some benchmarking.

                                    I like the NFS from NAS option because it means that everything is instantly backed up so the pi is truly disposable.

                                    1. 2

                                      If you try ZFS with the Raspberry Pi 4, try to get a 64-bit aarch64 OS that has a good ZFS package (like NixOS). 64-bit architecture is a good idea for ZFS anyway, since block pointers, checksums, etc are all 64 bits wide. In my experience this setup has worked well enough that I’ve recommended a Raspberry Pi 4 and a USB 3 hard drive dock to others as a low-cost way to try ZFS with real drives. (Definitely don’t use the Raspberry Pi 3 for this, it has USB 2 shared with the ethernet controller, which will run you into all sorts of latency problems if you try to use it as a NAS).

                                      Raspbian, last I checked, only boots the ARM cores in 32-bit mode, and who knows what sort of ZFS modules they even provide. Compiling ZFS from source on a raspberry pi doesn’t sound that enjoyable, and NixOS’ binary cache seems to always have it.

                                      1. 2

                                        I’m sure NixOS is amazing, because everything I hear about it is amazing, but I also wonder how Ubuntu 20.04 stacks up. I’m more familiar with its administrative interfaces and the like. I know the ISO for the rpi is 64-bit.

                                        1. 2

                                          Oh, good to know. Will be interested in how that goes.

                                  2. 2

                                    Isn’t this getting easier with tools like FreeNAS or the bundled ZFS support in recent Ubuntu versions like 20.04?

                                    You make a good point though. When I set up my home NAS ~3 years ago I chose Synology despite it being closed source because it had the appliance characteristics I wanted and was extensible enough software-wise that I could be sure it would continue to meet my needs so long as it didn’t run out of disk :)

                                    (Ducks awaiting the rotten tomatoes for having chosen a closed source solution :)

                                  3. 1


                                    I think this is just specific to the filesystems you’re used to…

                                    With CBC, if an early block in a file gets corrupted, it won’t be possible to recover the rest.

                                    Not that this is a huge concern, if you have mirroring and checksumming set up properly.

                                    1. 1

                                      Encryption in ZFS is actually applied at the record (nominally 128k data block) level. GCM is recommended in practice, but, even if you use CBC mode, you’re likely to only corrupt a single record that’s mirrored elsewhere. It’s also overwhelmingly probable that the data checksums will catch it before it tries to decrypt, in all cases.

                                  1. 6

                                    For better or worse, I don’t agree with this:

                                    To me, email is a way of receiving simple communications that have a short time to live

                                    Maybe it should not be used for them, but newsletters, file sharing (!), tickets and receipts, etc. are not short-lived. I often need to look them up many weeks, maybe even months, later, and some I want to keep around and consume at my leisure. I think Hey addresses those issues well.

                                    The fact that the email protocol should probably never have been used for that is another issue entirely. If email was “pure” communication - prose from a sender to one or more recipients - that would be fine, but we’d need another protocol for receiving “things” that are easy to manage by machines (tags, content types, sender, etc.) whereas regular emails with text and attachments are anything but.

                                    1. 3

                                      You can have a wildcard domain, such as `*.mydomain.com`, that routes to some dispatch service.

                                      Then each process you run registers itself (in a database, writes to disk, etc.), and the dispatch service takes myprocessname.mydomain.com and routes it to that process.

                                      Theoretically, the dispatch service could also do a lookup of active processes running and dynamically route based on that, but I assume there might be some overhead with that approach (without caching).

                                      1. 1

                                        Hmm, this is interesting. So if I understand you correctly, you’re saying I can create a master process on the instance (or on a separate server?), that can read a remote database, figures out the process, and routes the request to that process?

                                        Are there any examples of this approach on GitHub, or in books you know and recommend? How might this approach change if individual processes and instances are shielded behind a load balancer?

                                        Can you name individual UNIX processes, or do you have to search by PID or command? I’m guessing that even if PIDs don’t change over the process lifetime, if the process goes down and it restarts, then it’ll have a different PID, and relying on a failing process to properly issue an HTTP call to update a remote registry isn’t wise because you’ll be coupling failure models within your system design.

                                        1. 2

                                          Also perhaps look into dbus: https://news.ycombinator.com/item?id=9451023

                                          1. 2

                                            I don’t know enough about process name / PID internals to know how easy or hard it is to look up - that’s also why I suggest that each process self-registers in some shared store (DB, disk, in-memory service, etc.). Someone further up suggested Consul which fits this role well - in general, “service discovery” is probably what you should be googling.

                                            The router (that takes incoming requests and sends them to the right process) can both live on the same instance or somewhere else, assuming you use HTTP for communication. If you want to use sockets or similar IPC, you’ll need to have it on the same instance.

                                            To handle failing processes, you could either have a heartbeat mechanism from the router checking that the process is up (with X failures before unregistering it) or you could just have a timeout on all incoming requests and make it so that a new process registering will overwrite the old process’ registration.

                                            It’s hard to be more specific without pulling in specific code samples or talking about actual implementation details.
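
                                            To be slightly more concrete anyway, here’s a minimal in-memory sketch of the register/lookup/timeout idea (all names hypothetical; a real setup would use something like Consul as the shared store):

```python
# Minimal in-memory sketch: processes self-register, the router looks
# them up by hostname, and stale entries expire via a timeout.
import time

class Registry:
    def __init__(self, timeout=30.0):
        self.timeout = timeout   # seconds before an entry is considered stale
        self.entries = {}        # name -> (address, last_seen)

    def register(self, name, address):
        # A restarting process simply overwrites its old registration.
        self.entries[name] = (address, time.monotonic())

    def lookup(self, name):
        entry = self.entries.get(name)
        if entry is None:
            return None
        address, last_seen = entry
        if time.monotonic() - last_seen > self.timeout:
            del self.entries[name]   # stale: treat as unregistered
            return None
        return address

def route(registry, host, base_domain="mydomain.com"):
    """Map e.g. 'myprocess.mydomain.com' to a registered process address."""
    suffix = "." + base_domain
    if not host.endswith(suffix):
        return None
    return registry.lookup(host[: -len(suffix)])

registry = Registry()
registry.register("api", "127.0.0.1:8001")
print(route(registry, "api.mydomain.com"))      # 127.0.0.1:8001
print(route(registry, "unknown.mydomain.com"))  # None
```

In a real deployment the Registry would live in a shared store and `route` would sit inside the reverse proxy handling `*.mydomain.com`; processes would re-register on a heartbeat interval shorter than the timeout.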

                                        1. 3

                                          I started out post-uni at a 150+ developer company, and while it didn’t suffer from all of the issues listed here, the company culture, and how rigid everything was, turned me off from that kind of structure. It was also a crazy realization, to me, that I would actually rather take a pay cut and work at a smaller company.

                                          The pro of working at a smaller company is that you have a much bigger opportunity to take responsibility and grow into new areas (whether within development or cross-departmental) and carve out your own space. But the trade-off is, in most cases, lower pay.

                                          1. 5

                                            Why didn’t they do this from the start?

                                            1. 8

                                              Disclaimer: I am only speculating here based on my own experience. I am not affiliated.

                                              I didn’t know how much work it was to release something that is even mildly popular as open source, let alone something that attracts a ton of attention, until I did it. It’s not just a matter of dumping a tarball somewhere and throwing up a link. You need to be prepared to help people build it and understand it. Making sure things can be built, modified and tested easily outside the small group that developed them can be a pile of work all by itself. Taking the time to also answer questions about them without either making your organization look bad for being unresponsive or for being dismissive of poorly considered questions requires resources.

                                              And when you’re trying to get something off the ground, your best resources are already spread thin.

                                              I don’t find it astonishing at all that a company could conclude that they lack the capacity to put their best face toward an open source release while they’re launching, and defer that release as a consequence.

                                              But I have no idea if that’s why or not. It resonates with me more than the code cleanup rationale others have mentioned.

                                              1. 3

                                                 Probably wanted to mature the product first. Their reputation is sort of staked on how good the software is.

                                                1. 1

                                                  That reasoning doesn’t make any sense to me, can you explain? They were releasing the software well before today to paying customers, so was the software not mature until now?

                                                  1. 12

                                                    They were releasing the software well before today to paying customers, so was the software not mature until now?

                                                    Well…yes, quite possibly. Releasing software doesn’t make it mature, so I’m not sure what you mean by this.

                                                    They may have also delayed open-sourcing to give themselves time to conduct security audits (and respond effectively to the findings), figure out licensing, negotiate SLAs with any third party source code hosting services, set up their bug bounty program, and [re]organize the codebase before going public. There’s also the possibility that they needed time to scale up their development team in anticipation of the increased volume of bug reports, security vulnerability disclosures, pull requests, and feature requests that inevitably accompanies open-sourcing. All these things take time, with many moving parts to consider.

                                                    1. 1

                                                      There’s a difference between “working code” and “good code”. Good code should always be working code, but working code might be something that is good enough, not very nice, not very readable but gets the job done.

                                                      If they release code for critical applications that looks like it was done by two interns over a couple of weeks, it might affect their reputation. If instead they clean up the code, make it nice, use best practices, etc., then it will make a much better impression upon release.

                                                1. 8

                                                  One observation after switching from 15 years of project development, usually in the lead development role, to working at a company with 300 developers working on a 20 year old product: while I used to feel more like a 10x developer, I now feel much more like a 1x developer.

                                                  I have come to see the 10x-versus-1x developer debate less as a matter of talent and dedication and more as a matter of who happens to have started the code base (or a new approach) versus who is enlisted to maintain it.

                                                  Working at Amazon particularly seems like the job in which 90% of your time is spent on deciphering and dealing with the idiosyncrasies the previous developers left for you, meaning your visible output will be only 10% of that of the guy who started the code from scratch.

                                                  1. 4

                                                    while I used to feel more like a 10x developer, I now feel much more like a 1x developer.

                                                    I really hate this characterisation, for several reasons. First, it conflates multiplicative effect (your multiplier on the team’s output) with additive effect (how much you contribute individually, in some arbitrary unit). Second, the scale is entirely wrong. Third, it assumes developers don’t change and are entirely fungible.

                                                    Let’s assume that x is some arbitrary unit of developer productivity, such that a given project needs px to succeed. In theory, you can achieve that with either p 1x developers or p/10 10x developers. That doesn’t tell the whole story, though.

                                                    I’ve worked with a few (thankfully a very few) developers who are -1x developers, and more that are -0.1x developers in terms of additive effect. The project would have gone faster if they’d just stepped away from the keyboard and never come back. In comparison to them, a 1x developer is great! They may make progress slowly, but they do make forward progress. The -0.1x developers are the ones where it takes more code-review time to get their work into a reasonable state than it would take someone vaguely competent to just do it.

                                                    These people aren’t always a lost cause, they may just be inexperienced. I’ve worked a lot with inexperienced contributors to open source projects who started needing 5-10 times as much of my time in code review and feedback than it would have taken for me to just write the code myself but ended up learning, improving, and then contributing a huge amount more overall than I could have written in the time I spent helping them. This is also often true for an experienced developer joining a new large project: it takes a while to understand a new codebase.

                                                    Even that, however, is ignoring the biggest impact for most developers: how much they alter the productivity of the rest of the team. The productivity of the team is the sum of the additive impacts of each developer multiplied by the multiplicative impact of each developer. On moderately large teams, the multiplicative factor is far more important than the additive one. A developer that makes everyone in their team 10-20% more productive is far more valuable than a prima donna who writes ten times as much working code as everyone else but demotivates everyone so much that they each contribute only 80% of what they otherwise would. There are a lot of ways that developers can have a high multiplicative effect. Some are obvious, such as mentoring, doing good code reviews, and so on. Some relate to maintaining infrastructure (a developer who is willing and able to replace a crufty old build system is worth their weight in gold), properly prioritising work, and so on.

                                                    1. 3

                                                      I also think it depends on how much greenfield development you do and on the total size of the codebase. I can be a 10x developer on a completely new feature that has no dependencies on or interactions with any other part of our product, but touching any of our core features sometimes makes me feel like a 0.5x developer, as I need to be careful about what I change, check that all interactions with other parts of the code work as intended, etc.

                                                      I imagine a product with 1M+ LOC will have mostly - if not solely - 1x developers, with maybe the 10-year+ lead architect (or similar role) slightly above that.

                                                      1. 2

                                                        Hot take: when you’re an early employee at a startup you might be able to be a 10X developer easily even if you weren’t before and you won’t be anymore after you leave.

                                                        Huge problems can be solved easily with a little thinking and duct tape. That doesn’t mean you’re producing shit code, but ‘satisfactory’ code. Your horizon ends in a month, not in a roadmap 3 years down the road.

                                                        Source: Been there, done that. I don’t think I’m a better developer than most people, but when I was working with a small team in a small company, quickly switching problems and languages to get stuff done and move fast makes you feel super productive, and you can get easily annoyed if you don’t have an OK-to-good solution after a week.

                                                        Whereas if you work anywhere where a tiny feature in a code base that’s even just 2-3 years old can take 2 weeks, you suddenly realize it was all a lie :)

                                                        1. 4

                                                          Quick link: https://en.wikipedia.org/wiki/Smoke_testing_%28software%29

                                                          Basically, if you try to run something and “smoke” comes out, you probably need to dig deeper. Smoke tests are superficial checks that everything looks good.

                                                          1. 2

                                                            It’s basic, cursory testing to ensure that nothing is badly broken, without diving into the smaller details.
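As a sketch, a smoke test might look like this in Python (`create_app` is a hypothetical application factory, not from any real framework):

```python
# A smoke test only checks that the application comes up and its most
# basic operations work - it deliberately skips the smaller details.

def create_app():
    # Hypothetical stand-in for a real application factory.
    return {"status": "ok", "routes": ["/", "/login"]}

def test_smoke():
    app = create_app()            # does it even construct?
    assert app["status"] == "ok"  # the most cursory health check
    assert "/" in app["routes"]   # the front page is at least wired up

test_smoke()  # no smoke came out, so deeper testing can proceed
```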


                                                          1. 1

                                                            I use a MacBook Pro 15” (2017) at work, and I love the mix between the unix’ish environment and a neat UI. Hardware-wise, it’s nowhere near perfect though, and I would stay away from any model pre-2020 (or whenever the 16” model came out and they switched keyboard designs), as the keyboards are horrendous and keys break.

                                                            Honestly, if I were doing it personally, I would aim to build a hackintosh (desktop or laptop) - you get better value for money on the hardware, but still get the aspects of macOS that are, imo, the reason you buy Macs in the first place. This does require time, patience and possibly accepting some quirks permanently, so it’s not for everybody. There are lots of guides out there on what hardware to choose for optimal compatibility.

                                                            1. 9

                                                              I feel like the development side of using Kubernetes is great - you can define your application code alongside its manifest files, which easily allows you to spin up new services, change routing, etc.

                                                              However, running a Kubernetes cluster requires as much or more effort than a team of ops people running your data center - there are so many moving parts and so many things that can break, and just keeping it up to date and applying security patches is a challenge in itself.

                                                              Also, for better or worse, you get a sort of built-in chaos monkey when running Kubernetes. Nodes can and will go down more often than a regular EC2 instance just chugging away, for various reasons (including performing upgrades). It also places some additional requirements on your application (decent, complete health-check and readiness-check endpoints, graceful shutdown handling, etc.)
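Those application-side requirements can be sketched roughly like this in Python (the `/healthz` and `/readyz` paths are common conventions rather than requirements, and the SIGTERM handler stands in for real graceful shutdown logic):

```python
import signal
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

ready = threading.Event()  # flipped once startup work is done

class ProbeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":    # liveness: the process is up
            self.send_response(200)
        elif self.path == "/readyz":   # readiness: safe to route traffic here
            self.send_response(200 if ready.is_set() else 503)
        else:
            self.send_response(404)
        self.end_headers()

    def log_message(self, *args):      # keep probe spam out of the logs
        pass

def make_server(port=0):
    server = HTTPServer(("127.0.0.1", port), ProbeHandler)
    ready.set()  # in a real app, set this only after initialisation finishes
    return server

def main():
    # Wire-up for a real deployment (not invoked here): Kubernetes sends
    # SIGTERM before killing a pod, so stop accepting traffic gracefully.
    server = make_server(8080)
    signal.signal(signal.SIGTERM, lambda *_: server.shutdown())
    server.serve_forever()
```

The point is less the code than the contract: the liveness probe must only fail when a restart would actually help, and the readiness probe must fail whenever the pod shouldn’t receive traffic (during startup and during shutdown).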

                                                              All in all, I am positive about kubernetes-the-deployment-target and less so about kubernetes-the-sysops-nightmare.

                                                              1. 14

                                                                This list seems to be based on a super Frankenstein’d, incompletely applied threat model.

                                                                There is a very real privacy concern in giving Google access to every detail of your life. Addressing that threat does not necessitate making choices based on whether the global intelligence community can achieve access into your data - and, applied less than skillfully, that approach probably makes your overall security posture worse.

                                                                1. 1

                                                                  I agree that mentioning the 5/9/14/however-many Eyes is unnecessary, and also not helpful. It’s not as if storing your data on a server in a non-participating country somehow makes you more secure. All of that data still ends up traveling through the same routers on its way to you.

                                                                  1. 1

                                                                    If you’re going to put a whole lot of effort into switching away from Google, you might as well do it properly and move to actually secure services.

                                                                    1. 11

                                                                      In a long list of ways, Google is the most secure service. For some things (e.g. privacy) they’re not ideal, but moving to other services almost certainly involves security compromises (to gain something you lose something).

                                                                      Again, it all goes back to what your threat model is.

                                                                      1. 3

                                                                        Google is only the most secure service if you are fully on board with their business model. Their business model is privacy-violating at the most fundamental level, in terms of behavioral surplus futures. Whatever your specific threat model, it then becomes subject to the opacity of Google’s auction process.

                                                                  1. 1

                                                                    Using an iPhone 6S, but the battery is crapping out, which means I’ll either get a replacement battery or a new phone. If it’s the latter, I’d consider the iPhone 8 the newest iPhone I would buy, as none of the newer models come in a comfortable size. Someone else mentioned the Samsung S10e, which could be promising - I have no loyalty towards either iPhone or Android (I have used both), but the thought of “migrating” all my stuff puts me off switching to Android for the moment.

                                                                    I do feel like my iPhone 6S has decent performance, and I wish I could get a modern phone with twice the battery life (compared to this phone when it was new), even if it means little or no performance improvement. Suggestions are welcome, but the iPhone 6S size is near the pain point of how big a phone I will tolerate.