1. 8

    I feel there’s a fine line between breaking up functions for no reason and adding clarity. If you have a 2-liner of code that is really 2 x 10 chained function calls, wrapping that up in a function called extractXfromY helps explain what the code does implicitly. Taking 5 lines of simple variable assignments, arithmetic, database calls or similar and pushing them into a separate class does little more than lower the line count of your functions.

    Just as line count is a terrible indicator of productivity and progress, it is also a terrible indicator of code quality. A 100 line function that has a clear purpose, with a few inline comments sprinkled into it, might be vastly preferable to unraveling the same function by having to jump back and forth between the “main” function and smaller bites of code just to figure out what is happening. As with all things, it depends.

    1. 1

      I agree that SLOC is not a perfect metric, and of course in different languages a reasonable SLOC will look different. In a language with tonnes of lines of boilerplate, a 4 line function might be unachievable. In a higher-level language, a 100 line function/procedure/class/thing would be madness. Depends on context.

    1. 4

      I feel that paragraphs 2-4 are, in practice, going to keep the licensed software from being used by most for-profit companies of more than a handful of people. I do not know of any larger companies where every employee/worker/owner has an equal vote (and ditto for equal equity), but I’d love to know of such companies.

      Given the name of the license, this does seem to be the intended purpose, so I guess the license is just right for its, admittedly small, audience.

      1. 4

        The author seems to have broadly Syndicalist views (similar to my own). There are a few syndicalist companies around – there’s a games company that has been doing fairly well in that regard, though I can’t recall the name offhand.

        1. 1

          My (of course limited) experience says that all successful software co-operatives with 2 or more equal people sooner or later escalate to a “proper” company with employees.

          Example from Germany: 2 or more people form a “GbR” and they’re personally liable. Very, very often after a few years this will be transformed into a “GmbH” (roughly like an LLC afaik) once you have employees or want to take on contracts from bigger companies. This is very common, and I wouldn’t even hold it against the founders; there are so many reasons why you wouldn’t want to keep the personal liability once the benefits of a GmbH are within reach. Oh, I don’t think this would conflict with 2d or 3 in general, but it seems very, very rare. And now I really wonder why, or if I just haven’t noticed those.

        1. 2

          So no downvote possible at all?

          1. 5

            “Downvotes” were never possible really, it was always intended as a way to flag that there is a (significant) problem with the post, rather than “I don’t agree” or “I don’t like it”.

            1. 3

              I think the gist is that you either agree with / appreciate a comment and upvote it, or you think it’s inflammatory / misleading / trolling enough that you flag it. Disagreement was never the intended reason for downvoting comments.

            1. 18

              We have been running our own self-hosted kubernetes cluster (using basic cloud building blocks - VMs, network, etc.), starting from 1.8 and have upgraded up to 1.10 (via multiple cluster updates).

              First off, don’t ever self-host kubernetes unless you have at least one full-time person for a small cluster, and preferably a team. There are so many moving parts, weird errors, missing configurations, and that one error/warning/log message whose exact meaning you can never quite figure out (okay, multiple of those).

              From a “user” (developer) perspective, I like kubernetes if you are willing to commit to the way it does things. Services, deployments, ingress and such work nicely together, and spinning up a new service for a new pod is straightforward and easy to work with. Secrets are so-so, and you likely want to do something else (like Hashicorp Vault) unless you only have very simple apps.

              RBAC-based access is great in theory, but the documentation is poor and you end up having to cobble things together until it works the way you want. A lot of defaults are basically unsafe if you run any container with code you don’t control 100%, but such is life when running privileged docker containers. There are ways around it, but auditing and tweaking all of this to be The Right Way™️ suddenly adds a lot of overhead to your non-app development time.

              To re-iterate, if you can ignore the whole “behind the scenes” part of running kubernetes, it’s not too bad and you get a lot of nice primitives to work with. They’ll get you 90% of the way and let you have a working setup without too much hassle, but as with everything else the last 10% takes the other 90% of the time to get it just where you want it - granular network control between pods, blue/green or other “non-standard” deployment methods.

              1. 6

                I think the vast majority of web and/or system developer jobs where the main task is “feature development” can rely on the built-in structures in most languages (maps, lists and sets using various implementations) as long as they use common developer sense, but I have come across quite a few bottlenecks where lack of knowledge of the underlying data store (whether SQL or not) was causing trouble.

                I find SQL DB internals fascinating, like tuning a Formula 1 race car, so I am likely biased, but it seems like a lot of developers have some simple-but-crude notions of how a database works, with very little understanding of what it takes when you start seeing bigger amounts of data and/or more activity (“I put an index on it, why didn’t it automatically improve the performance on my 100M-row, 300-column-wide DB when I select all fields on 20% of the rows?”).
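
                A cheap way to build that intuition is simply asking the database what it plans to do. Here is a small sketch using Python’s built-in sqlite3 (the table, sizes and query are all made up for illustration; real engines differ in the details, but the principle is the same):

                  import sqlite3

                  conn = sqlite3.connect(":memory:")
                  conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, status INTEGER, payload TEXT)")
                  conn.executemany(
                      "INSERT INTO events (status, payload) VALUES (?, ?)",
                      ((i % 5, "x" * 200) for i in range(100_000)),  # status = 0 matches ~20% of rows
                  )
                  conn.execute("CREATE INDEX idx_events_status ON events (status)")
                  conn.execute("ANALYZE")  # give the planner statistics to work with

                  # Ask the planner how it would run the low-selectivity "SELECT *" from the quote above
                  for row in conn.execute("EXPLAIN QUERY PLAN SELECT * FROM events WHERE status = 0"):
                      print(row)

                An index lookup still has to fetch every matching row from the table, so when ~20% of a wide table matches, the planner may well decide a full scan is cheaper than bouncing between the index and the table - which is exactly the kind of thing that surprises people.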

                1. 8

                  There are a few things here that might be easier with ZFS. Here are a few constructive recommendations based on personal experience. (Note that I haven’t used btrfs in a while, so my advice is going to be fairly one-sided toward ZFS.)

                  Mirroring

                  You do actually get bitrot repair with ZFS if you use either RAIDz or a mirror. Hardware RAID won’t defend against silent data corruption, which it is almost negligent to ignore on spinning rust drives for long-term backups. ZFS lets you choose whether to use a mirror or RAIDz and will still repair corrupted data if a good copy exists somewhere. And, even if you use ZFS on a single drive (don’t), it will at least refuse to read the corrupted data.

                  Synchronizing

                  For ZFS, I use Sanoid for snapshots, and Syncoid (part of Sanoid) for synchronization. This uses zfs send, is a ton more efficient than rsync, and is easily configured in NixOS. rsync.net and datto.com are two services that let you use them as zfs send targets over ssh. Personally, I have a remote dedicated backup server that I send to over a VPN to make sure the “1” part of the 3-2-1 rule works.

                  Encryption

                  I think this is just specific to the filesystems you’re used to…

                  There’s an issue when you choose to encrypt backups: if data becomes corrupted, it’s hard to recover anything without a copy.

                  Not the case with ZFS on Linux 0.8. You get all the anti-corruption guarantees ZFS already gives you if you create an encrypted dataset. Also, you can zfs send -w or syncoid --sendoptions=w and get end-to-end encrypted backups where the backup target doesn’t have to know the key to your dataset to receive it. This is what the future is like.

                  File organization

                  I’d organize my top-level folders (music/documents/pictures/etc) with datasets in ZFS, which are all exposed as mountpoints on Linux or drives on Windows. Then, I’d add folders under them like normal volumes. You can also have different replication schemes/snapshot schemes/record sizes/compression methods/encryption methods for each to tune them to different workloads.

                  Take joy in knowing your information is safe probably until the day you die!

                  I’d argue that’s a [citation needed] unless you’re using a filesystem with built-in checksumming. NTFS is not one of those filesystems. Give ZFS a try, and truly have your information be safe until the day you die :-)

                  1. 9

                    While I agree (in theory, I haven’t used ZFS), what you’re describing does not seem to include the “… for mortals” part.

                    Most people who own a computer and care enough to have backups should be able to set up 2 hard drives and use BackBlaze. The article even recommends getting a hard drive enclosure rather than fiddling with internal hard drives, to remove as much friction as possible.

                    1. 2

                      Fair! Though, I’ve seen the Raspberry Pi 4 perform quite well with a 64-bit Linux (specifically, NixOS 20.09pre), ZFS, and a dual-drive external USB 3.0 bay. And ZFS on Windows is also a thing now, although caveat emptor with it until it’s more stable. So, maybe not for mortals quite yet, but getting there for people who are willing to get familiar with the Raspberry Pi and the ZFS command line. Which you could argue is still not “mortals” :-)

                      I just question the lack of data checksums for long-term storage. FS checksumming is actually pretty important for this, and if you just store two copies of your data without checksums, you have no idea which is the “right” one if a cosmic ray happens to bitflip one of your drives. So maybe a better suggestion if you’re stuck on something like NTFS would be to store an (also mirrored) SHA256SUMS file with your data.
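
                      If you are stuck on NTFS, even a tiny script goes a long way. A rough sketch of that SHA256SUMS idea (the backup path is just an example):

                        #!/usr/bin/env python3
                        """Write a SHA256SUMS file for a backup tree, so a later bitflip can be detected
                        and you know which of your two copies to trust."""
                        import hashlib
                        from pathlib import Path

                        BACKUP_ROOT = Path("D:/backup")  # hypothetical backup drive

                        def sha256_of(path: Path) -> str:
                            h = hashlib.sha256()
                            with path.open("rb") as f:
                                for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB at a time
                                    h.update(chunk)
                            return h.hexdigest()

                        def write_sums(root: Path) -> None:
                            lines = []
                            for p in sorted(root.rglob("*")):
                                if p.is_file() and p.name != "SHA256SUMS":
                                    # same "<hash>  <relative path>" format that `sha256sum -c` understands
                                    lines.append(f"{sha256_of(p)}  {p.relative_to(root).as_posix()}")
                            (root / "SHA256SUMS").write_text("\n".join(lines) + "\n", encoding="utf-8")

                        if __name__ == "__main__":
                            write_sums(BACKUP_ROOT)

                      Store the SHA256SUMS file on both drives; when a file’s hash stops matching on one drive but still matches on the other, you know which copy to restore from.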

                      1. 4

                        My personal feeling is that most people do not care if a single file / photo becomes corrupt or has a glitch in it. For documents, images, audio and more, the program you use to consume it will happily fill in a blank or maybe show a weird symbol or play an odd noise.

                        In many ways, backups are like password management: There’s the good/right way to do it, and then there’s the “good enough” way for most people, which balances “will actually be used” with “good enough protection”.

                        Nobody who just barely cares enough about backups is going to set up a Raspberry Pi or do anything beyond what this article recommends. There’s a reason a lot of people use a NAS for backups: you plug in a bunch of disks (often included in the purchase), set it up through a nice web UI, and it’s up and running.

                        1. 3

                          I haven’t used ZFS with it but the rpi 4 is indeed a seriously impressive server given its size and cost!

                          I haven’t gone the external USB route, and instead use NFS-mounted filesystems from my NAS over gigabit ethernet. I get great performance with that. I should try an external USB drive and compare with some benchmarking.

                          I like the NFS from NAS option because it means that everything is instantly backed up so the pi is truly disposable.

                          1. 2

                            If you try ZFS with the Raspberry Pi 4, try to get a 64-bit aarch64 OS that has a good ZFS package (like NixOS). 64-bit architecture is a good idea for ZFS anyway, since block pointers, checksums, etc are all 64 bits wide. In my experience this setup has worked well enough that I’ve recommended a Raspberry Pi 4 and a USB 3 hard drive dock to others as a low-cost way to try ZFS with real drives. (Definitely don’t use the Raspberry Pi 3 for this, it has USB 2 shared with the ethernet controller, which will run you into all sorts of latency problems if you try to use it as a NAS).

                            Raspbian, last I checked, only boots the ARM cores in 32-bit mode, and who knows what sort of ZFS modules they even provide. Compiling ZFS from source on a raspberry pi doesn’t sound that enjoyable, and NixOS’ binary cache seems to always have it.

                            1. 2

                              I’m sure nixos is amazing because everything I hear about it is amazing but I also wonder how Ubuntu 20.04 stacks up. I’m more familiar with its administrative interfaces and the like. I know the ISO for rpi is 64bit.

                              1. 2

                                Oh, good to know. Will be interested in how that goes.

                        2. 2

                          Isn’t this getting easier with tools like FreeNAS or the bundled ZFS support in recent Ubuntu versions like 20.04?

                          You make a good point though. When I set up my home NAS ~3 years ago I chose Synology despite it being closed source because it had the appliance characteristics I wanted and was extensible enough software wise that I could be sure it would continue to meet my needs so long as it didn’t run out of disk :)

                          (Ducks awaiting the rotten tomatoes for having chosen a closed source solution :)

                        3. 1

                          Encryption

                          I think this is just specific to the filesystems you’re used to…

                          With CBC, if an early block in a file gets corrupted, it won’t be possible to recover the rest.

                          Not that this is a huge concern, if you have mirroring and checksumming set up properly.

                          1. 1

                            Encryption in ZFS is actually applied at the record (nominally 128k data block) level. GCM is recommended in practice, but, even if you use CBC mode, you’re likely to only corrupt a single record that’s mirrored elsewhere. It’s also overwhelmingly probable that the data checksums will catch it before it tries to decrypt, in all cases.

                        1. 6

                          For better or worse, I don’t agree with this:

                          To me, email is a way of receiving simple communications that have a short time to live

                          Maybe it should not be used for it, but newsletters, file sharing (!), tickets and receipts, etc. are not short lived. I often need to look them up many weeks, maybe even months later, and some I want to keep around and consume at my leisure. I think Hey addresses those issues well.

                          The fact that the email protocol should probably never have been used for that is another issue entirely. If email was “pure” communication - prose from a sender to one or more recipients - that would be fine, but we’d need another protocol for receiving “things” that are easy to manage by machines (tags, content types, sender, etc.) whereas regular emails with text and attachments are anything but.

                          1. 3

                            You can have a wildcard domain, such as `*.mydomain.com`, that routes to some dispatch service.

                            Then each process you run registers itself (in a database, on disk, etc.), and the dispatch service takes myprocessname.mydomain.com and routes it to that process.

                            Theoretically, the dispatch service could also do a lookup of active processes running and dynamically route based on that, but I assume there might be some overhead with that approach (without caching).

                            1. 1

                              Hmm, this is interesting. So if I understand you correctly, you’re saying I can create a master process on the instance (or on a separate server?) that reads a remote database, figures out the process, and routes the request to that process?

                              Are there any examples of this approach on GitHub, or in books you know and recommend? How might this approach change if individual processes and instances are shielded behind a load balancer?

                              Can you name individual UNIX processes, or do you have to search by PID or command? I’m guessing that even if PIDs don’t change over the process lifetime, if the process goes down and it restarts, then it’ll have a different PID, and relying on a failing process to properly issue an HTTP call to update a remote registry isn’t wise because you’ll be coupling failure models within your system design.

                              1. 2

                                Also perhaps look into dbus: https://news.ycombinator.com/item?id=9451023

                                1. 2

                                  I don’t know enough about process name / PID internals to know how easy or hard it is to look up - that’s also why I suggest that each process self-registers in some shared store (DB, disk, in-memory service, etc.). Someone further up suggested Consul which fits this role well - in general, “service discovery” is probably what you should be googling.

                                  The router (that takes incoming requests and sends them to the right process) can both live on the same instance or somewhere else, assuming you use HTTP for communication. If you want to use sockets or similar IPC, you’ll need to have it on the same instance.

                                  To handle failing processes, you could either have a heartbeat mechanism from the router checking that the process is up (with X failures before unregistering it) or you could just have a timeout on all incoming requests and make it so that a new process registering will overwrite the old process’ registration.

                                  It’s hard to be more specific without pulling in specific code samples or talking about actual implementation details.
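
                                  That said, a toy sketch of the overall shape might help (everything here - names, ports, the in-memory dict - is illustrative; a real setup would keep the registry in a shared store like Consul, Redis or a database and proxy real HTTP requests):

                                    """Toy version of the "self-register, route by subdomain" idea."""
                                    import time
                                    from typing import Dict, Optional, Tuple

                                    TTL_SECONDS = 30  # a registration older than this is treated as dead
                                    registry: Dict[str, Tuple[str, float]] = {}  # process name -> (backend URL, last heartbeat)

                                    def register(name: str, backend_url: str) -> None:
                                        """Called by each process on startup and on every heartbeat.
                                        A newer registration simply overwrites an older (possibly dead) one."""
                                        registry[name] = (backend_url, time.time())

                                    def route(host_header: str) -> Optional[str]:
                                        """Map 'myprocessname.mydomain.com' to the registered backend, if it's alive."""
                                        name = host_header.split(".", 1)[0]
                                        entry = registry.get(name)
                                        if entry is None:
                                            return None
                                        backend, last_seen = entry
                                        if time.time() - last_seen > TTL_SECONDS:
                                            registry.pop(name, None)  # expired; treat as unregistered
                                            return None
                                        return backend

                                    # Usage sketch
                                    register("myprocessname", "http://127.0.0.1:9001")
                                    print(route("myprocessname.mydomain.com"))  # -> http://127.0.0.1:9001
                                    print(route("unknown.mydomain.com"))        # -> None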

                              1. 3

                                I started out post-uni at a 150+ developer company, and while it didn’t suffer from all of the issues listed here, the company culture, and how rigid everything was, turned me off from that kind of structure. It was also a crazy realization, to me, that I would actually rather take a pay cut and work at a smaller company.

                                The pros of working at a smaller company are that you have a much bigger opportunity to take responsibility and grow in new areas (whether within development or cross-departmental) and can carve out your own space. But the trade-off is, in most cases, lower pay.

                                1. 5

                                  Why didn’t they do this from the start?

                                  1. 8

                                    Disclaimer: I am only speculating here based on my own experience. I am not affiliated.

                                    I didn’t know how much work it was to release something that is even mildly popular as open source, let alone something that attracts a ton of attention, until I did it. It’s not just a matter of dumping a tarball somewhere and throwing up a link. You need to be prepared to help people build it and understand it. Making sure things can be built, modified and tested easily outside the small group that developed them can be a pile of work all by itself. Taking the time to also answer questions about them without either making your organization look bad for being unresponsive or for being dismissive of poorly considered questions requires resources.

                                    And when you’re trying to get something off the ground, your best resources are already spread thin.

                                    I don’t find it astonishing at all that a company could conclude that they lack the capacity to put their best face toward an open source release while they’re launching, and defer that release as a consequence.

                                    But I have no idea if that’s why or not. It resonates with me more than the code cleanup rationale others have mentioned.

                                    1. 3

                                      Probably wanted to mature the product first. Their reputation is sort of staked on how good the software is.

                                      1. 1

                                        That reasoning doesn’t make any sense to me, can you explain? They were releasing the software well before today to paying customers, so was the software not mature until now?

                                        1. 12

                                          They were releasing the software well before today to paying customers, so was the software not mature until now?

                                          Well…yes, quite possibly. Releasing software doesn’t make it mature, so I’m not sure what you mean by this.

                                          They may have also delayed open-sourcing to give themselves time to conduct security audits (and respond effectively to the findings), figure out licensing, negotiate SLAs with any third-party source code hosting services, set up their bug bounty program, and [re]organize the codebase before going public. There’s also the possibility that they needed time to scale up their development team in anticipation of the increased volume of bug reports, security vulnerability disclosures, pull requests, and feature requests that inevitably accompanies open-sourcing. All these things take time, with many moving parts to consider.

                                          1. 1

                                            There’s a difference between “working code” and “good code”. Good code should always be working code, but working code might be something that is good enough, not very nice, not very readable but gets the job done.

                                            If they release code for critical applications that looks like it was done by two interns over a couple of weeks, it might affect their reputation. If instead they clean up the code, make it nice, use best practices, etc., then it will make a much better impression upon release.

                                      1. 8

                                        One observation after switching from 15 years of project development, usually in the lead development role, to working at a company with 300 developers working on a 20 year old product: while I used to feel more like a 10x developer, I now feel much more like a 1x developer.

                                        I have come to see the 10x versus 1x developer debate less as a matter of talent and dedication and more as a matter of who happens to have started the code base or a new approach versus who is enlisted to maintain it.

                                        Working at Amazon particularly seems like the job in which 90% of your time is spent on deciphering and dealing with the idiosyncrasies the previous developers left for you, meaning your visible output will be only 10% of that of the guy who started the code from scratch.

                                        1. 4

                                          while I used to feel more like a 10x developer, I now feel much more like a 1x developer.

                                          I really hate this characterisation, for several reasons. First, it conflates multiplicative effect (what’s your multiplier on the team) with additive effect (how much you contribute individually, measured in some arbitrary unit). Second, the scale is entirely wrong. Third, it assumes developers don’t change and are entirely fungible.

                                          Let’s assume that x is some arbitrary unit of developer productivity, such that a given project needs px to succeed. In theory, you can achieve that with either p 1x developers or p/10 10x developers. That doesn’t tell the whole story though.

                                          I’ve worked with a few (thankfully a very few) developers who are -1x developers, and more that are -0.1x developers in terms of additive effect. The project would have gone faster if they’d just stepped away from the keyboard and never come back. In comparison to them, a 1x developer is great! They may make progress slowly, but they do make forward progress. The -0.1x developers are the ones where it takes more code-review time to get their work into a reasonable state than it would take someone vaguely competent to just do it.

                                          These people aren’t always a lost cause; they may just be inexperienced. I’ve worked a lot with inexperienced contributors to open source projects who started out needing 5-10 times as much of my time in code review and feedback as it would have taken for me to just write the code myself, but ended up learning, improving, and then contributing a huge amount more overall than I could have written in the time I spent helping them. This is also often true for an experienced developer joining a new large project: it takes a while to understand a new codebase.

                                          Even that, however, is ignoring the biggest impact for most developers: how much they alter the productivity of the rest of the team. The productivity of the team is the sum of the additive impacts of each developer multiplied by the multiplicative impact of each developer. On moderately large teams, the multiplicative factor is far more important than the additive one. A developer that makes everyone in their team 10-20% more productive is far more valuable than a prima donna who writes ten times as much working code as everyone else but demotivates everyone so much that they each contribute only 80% of what they otherwise would. There are a lot of ways that developers can have a high multiplicative effect. Some are obvious, such as mentoring, doing good code reviews, and so on. Some relate to maintaining infrastructure (a developer who is willing and able to replace a crufty old build system is worth their weight in gold), properly prioritising work, and so on.
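
                                          One rough way to write that model down (my own notation, nothing rigorous): if $a_i$ is developer $i$’s additive output and $m_i$ is the factor by which they change everyone else’s productivity, then

                                            $$P_{\text{team}} \approx \Big(\sum_i a_i\Big)\prod_i m_i$$

                                          so a mentor with a modest $a_i$ but $m_i = 1.1$ can easily be worth more than a prima donna with a huge $a_i$ and $m_i = 0.8$.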

                                          1. 3

                                            I also think it depends on how much green-field development you do and the total size of the codebase. I can be a 10x developer on a completely new feature that has no dependencies or interaction with any other part of our product, but touching any of our core features sometimes makes me feel like a 0.5x developer, as I need to be careful about what I change, check that all interactions with other parts of the code work as intended, etc.

                                            I imagine a product with 1M+ LOC will have mostly - if not solely - 1x developers, with maybe the 10-year+ lead architect (or similar role) slightly above that.

                                            1. 2

                                              Hot take: when you’re an early employee at a startup you might be able to be a 10X developer easily even if you weren’t before and you won’t be anymore after you leave.

                                              Huge problems can be solved easily with a little thinking and duct tape. That doesn’t mean you’re producing shit code, but ‘satisfactory’ code. Your horizon ends in a month, not in a roadmap 3 years down the road.

                                              Source: Been there, done that. I don’t think I’m a better developer than most people but when I was working with a small team in a small company, quickly switching problems and languages to get stuff done and move quick makes you feel super productive and you can get easily annoyed if you don’t have an OK to good solution after a week.

                                              Whereas if you work anywhere where a tiny feature in a code base that’s even just 2-3 years old can take 2 weeks you suddenly realize it was all a lie :)

                                              1. 4

                                                Quick link: https://en.wikipedia.org/wiki/Smoke_testing_%28software%29

                                                Basically, if you try to run something and “smoke” comes out, you probably need to dig deeper. Superficial tests to check that everything looks good.

                                                1. 2

                                                  It’s basic, cursory testing, to ensure that nothing is badly broken, but not diving into smaller details.

                                                  https://en.m.wikipedia.org/wiki/Smoke_testing_(software)

                                                1. 1

                                                  I use a MacBook Pro 15” (2017) at work, and I love the mix of the Unix-ish environment and a neat UI. Hardware-wise, it’s nowhere near perfect though, and I would stay away from any model pre-2020 (or whenever the 16” model came out and they switched), as the keyboards are horrendous and keys break.

                                                  Honestly, if I were doing it personally I would aim to build a hackintosh (desktop or laptop) - you get better value for money on the hardware, but still get the aspects of macOS that are, imo, the reason you buy Macs in the first place. This does require time, patience and possibly accepting some quirks permanently, so it’s not for everybody. There are lots of guides out there on what hardware to choose for optimal compatibility.

                                                  1. 9

                                                    I feel like the development side of using Kubernetes is great - you can define your application code alongside its manifest files, which easily allows you to spin up new services, change routing, etc.

                                                    However, running a kubernetes cluster requires as much effort as, or more than, a team of ops people running your data center - there are so many moving parts, so many things that can break, and just keeping it up-to-date and updating security packages is a challenge in itself.

                                                    Also, for better or worse, you get a sort-of built-in chaos monkey when running kubernetes. Nodes can and will go down more often than a regular EC2 instance just chugging away, for various reasons (including performing upgrades). It also places some additional requirements on your application (decent, complete health-check + readiness-check endpoints, graceful shutdown handling, etc.).
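
                                                    To make that last point concrete, here is a minimal, framework-free sketch of what those requirements tend to mean for the app itself (the paths and port are arbitrary examples, not anything Kubernetes mandates):

                                                      """Sketch: an app that answers liveness/readiness probes and drains on SIGTERM."""
                                                      import signal
                                                      import threading
                                                      from http.server import BaseHTTPRequestHandler, HTTPServer

                                                      shutting_down = threading.Event()

                                                      class Handler(BaseHTTPRequestHandler):
                                                          def do_GET(self):
                                                              if self.path == "/healthz":    # liveness: the process is alive
                                                                  self.send_response(200)
                                                              elif self.path == "/readyz":   # readiness: should we receive traffic?
                                                                  self.send_response(503 if shutting_down.is_set() else 200)
                                                              else:
                                                                  self.send_response(404)
                                                              self.end_headers()

                                                      server = HTTPServer(("0.0.0.0", 8080), Handler)

                                                      def handle_sigterm(signum, frame):
                                                          # Kubernetes sends SIGTERM, then waits terminationGracePeriodSeconds before SIGKILL:
                                                          # start failing readiness so no new traffic arrives, then stop the server.
                                                          shutting_down.set()
                                                          threading.Thread(target=server.shutdown).start()

                                                      signal.signal(signal.SIGTERM, handle_sigterm)
                                                      server.serve_forever()  # returns once shutdown() is called; finish in-flight work after this

                                                    On the manifest side you would point livenessProbe/readinessProbe at those endpoints and give terminationGracePeriodSeconds enough headroom to finish the drain.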

                                                    All in all, I am positive of kubernetes-the-deployment-target and less so of kubernetes-the-sysops-nightmare.

                                                    1. 14

                                                      This list seems to be based on a super Frankenstein’d, incompletely applied threat model.

                                                      There is a very real privacy concern to be had giving Google access to every detail of your life. Addressing that threat does not necessitate making choices based on whether the global intelligence community can gain access to your data — and, applied less than skillfully, that probably makes your overall security posture worse.

                                                      1. 1

                                                        I agree that mentioning the 5/9/14/however-many eyes is unnecessary, and also not helpful. It’s not as if storing your data on a server in a non-participating country somehow makes you more secure. All of that data still ends up traveling through the same routers on its way to you.

                                                        1. 1

                                                          If you’re going to put a whole lot of effort into switching away from Google, you might as well do it properly and move to actually secure services.

                                                          1. 11

                                                            In a long list of ways, Google is the most secure service. For some things (i.e. privacy) they’re not ideal, but moving to other services almost certainly involves security compromises (to gain something you lose something).

                                                            Again, it all goes back to what your threat model is.

                                                            1. 3

                                                              Google is only the most secure service if you are fully onboard with their business model. Their business model is privacy violating at the most fundamental level, in terms of behavioral surplus futures. Whatever your specific threat model it then becomes subject to the opacity of Google’s auction process.

                                                        1. 1

                                                          Using an iPhone 6S, but the battery is crapping out, which means I’ll get a replacement battery or a new phone. If it’s the latter, I’d consider an iPhone 8 the newest iPhone I would buy, as none of the newer models come in a comfortable size. Someone else mentioned the Samsung S10e, which could be promising - I have no loyalty towards either iPhone or Android - and I have used both - but the thought of “migrating” all my stuff puts me off switching to Android for the moment.

                                                          I do feel like my iPhone 6S has decent performance, and I wish I could get a modern phone with twice the battery life (compared to the phone when it was new), even if it means little or no performance improvement. Suggestions are welcome, but the iPhone 6S size is near the pain point of how big a phone I will tolerate.

                                                          1. 3
                                                            > a hardware component, keyboard, is borked
                                                            > oh yeah let's change the whole OS
                                                            

                                                            And when I tell people to just install macOS on a regular machine (which is easy and non-destructive these days) they look at me like I’m blasphemous or something, which is funny when post-Jobs Apple has lost all its “religious” feel.

                                                            1. 9

                                                              Probably because it’s sketchy in a legal sense? They probably won’t go after you, but I don’t think someone as visible as, say, DHH wants to be dealing with that. Or with the effort of actually doing a hackintosh.

                                                              1. 6

                                                                I think Apple actually does not care about that anymore; they even silently made it easier with recent releases, with less restrictive environment checks, more hardware support and even native virtualization support, including KVM’s VirtIO…

                                                                1. 3

                                                                  TBF, I don’t think this knowledge is well-known. I did a Hackintosh back in the Snow Leopard days and it was too much work. Every update I worried would lock me out.

                                                                  1. 1

                                                                    Right now macOS installs itself as read-only except for the user-data space (like iOS), so all the “state” that differs between a MacBook install and a PC install can be stored in a single .zip archive no larger than a few megs.

                                                                2. 1

                                                                  Linus Tech Tips is making Hackintosh videos that get over a million views. I think it’s safe to say that they won’t go after anyone.

                                                                3. 2

                                                                  To be fair, installing a “hackintosh” version of macOS is not easy if you want all the bells and whistles to work (audio, wifi, sleep mode, etc.), especially if your computer is not the closest-match-to-a-mac hardware-wise. So a switch to a non-macbook in practice also means a switch to not-macOS for most people.

                                                                  1. 1

                                                                    How recently have you tried to do that? Across my 3 installs in the last 1-2 years, the whole job was to drag and drop packages of OSS kernel extensions into the EFI partition before installing anything. It’s extremely nice these days, as the whole macOS install is not altered in any way and the only machine-dependent artifact is your EFI partition with OpenCore or Clover. So it’s already as simple as “unzip this there and reboot”.

                                                                    For even more convenience, you can let people use automated installers like UniBeast, though I advise not to, as you don’t exactly know what it does. But they “get the job done”.

                                                                    But you have one valid point - hardware compatibility. I can’t say macOS is as picky as it used to be - you can even run it on AMD CPUs without hassle - but there are some things you should avoid: exotic wireless chips, Bluetooth chips and dual-GPU setups, for example. On the other hand, you’d want to avoid those on any non-Windows OS even if they partially work, so it’s not a macOS-only thing.

                                                                    1. 1

                                                                      If that’s the case, I might be looking to install macOS in the near future on my home desktop. Do you have any good links / guides to follow?

                                                                      1. 1

                                                                        Sure thing. I was about to list you a whole lot of links and guides (mostly similar, but a tiny bit different for AMD / Intel CPUs as well as buying guides which are handy), but apparently some hero did that in r/Hackintosh sidebar, which should cover most of your needs.

                                                                        1. 1

                                                                          If you look for the words ‘hackintosh’ and ‘guide’ you should find what you need. I bought an extremely cheap ‘old’ desktop PC that was specifically called out as fully supported (with its own guide) and it’s worked perfectly without any strange hacks or misbehaving - through a major OS version upgrade, too. It was so good I bought a MacBook Pro. I’m sure Apple realise it’s a way people find out how great the OS is.

                                                                  1. 3

                                                                    I really want to know what their video processing pipeline is like since they generate clips and varying video quality levels for what I assume is every device in existence. There were some nice nuggets here. I didn’t know about the Beacon api or the intersection observer. Seems like a mostly boring stack but considering they’ve been around for about 10 years and the site hasn’t slowed to a crawl on my intentionally crappy test laptop it means they’re doing something right.

                                                                    Did anyone pick up on whether they’re running all of their infra on AWS or just the vertica part? I thought the bandwidth costs would be killer.

                                                                    1. 4

                                                                      Why would they need to generate so many different quality levels? They probably just have 2 or 3, which is enough to cover most devices out there. Using ffmpeg it’s trivial to generate these videos, though you need the infrastructure and processing power behind it.
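
                                                                      Something along these lines, roughly - a sketch in Python that shells out to ffmpeg (the rendition ladder, flags and filenames are made up, not what any particular site uses):

                                                                        """Transcode one upload into a few H.264/AAC quality levels with ffmpeg."""
                                                                        import subprocess
                                                                        from pathlib import Path

                                                                        # (label, target height, H.264 CRF, audio bitrate) - an illustrative ladder
                                                                        RENDITIONS = [
                                                                            ("480p", 480, 26, "96k"),
                                                                            ("720p", 720, 23, "128k"),
                                                                            ("1080p", 1080, 21, "160k"),
                                                                        ]

                                                                        def transcode(src: Path) -> None:
                                                                            for label, height, crf, audio_bitrate in RENDITIONS:
                                                                                dst = src.with_name(f"{src.stem}_{label}.mp4")
                                                                                subprocess.run(
                                                                                    [
                                                                                        "ffmpeg", "-y", "-i", str(src),
                                                                                        "-vf", f"scale=-2:{height}",          # scale to target height, keep aspect ratio
                                                                                        "-c:v", "libx264", "-crf", str(crf), "-preset", "medium",
                                                                                        "-c:a", "aac", "-b:a", audio_bitrate,
                                                                                        "-movflags", "+faststart",            # moov atom up front for web playback
                                                                                        str(dst),
                                                                                    ],
                                                                                    check=True,
                                                                                )

                                                                        if __name__ == "__main__":
                                                                            transcode(Path("upload.mp4"))  # run once per upload, then cache the outputs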

                                                                      1. 2

                                                                        When you do it live, constantly, on terabytes of data, the infrastructure and processing power become the big problems.

                                                                        Edit: upon rereading it, they actually sound like they put a big emphasis on quality and compatibility too. So their question is, “if we can make this content incrementally better for X market segment, is it worth it?” Start from the biggest X’s and work your way down like any other priority list!

                                                                        1. 2

                                                                          There’s absolutely no way they’d do live transcoding; these sites usually only have two versions, and it’d be much cheaper to simply store both at all times.

                                                                          It’s actually a very simple thought experiment — you obviously cannot re-create the high-res version from the low-res one, and the low-res one would take so little space in storage compared to the high-res one that spending minutes trying to re-create it from the high-res one would simply make very little sense — they’re probably transcoded once on upload, and pretty much cached forever.

                                                                          BTW, I’d suggest you read the DDIA book, which explains a lot of these things. It has many insights into how actual popular applications are designed nowadays, including the actual Twitter implementation — which answered my own question on why it often takes so long to post a Tweet.

                                                                          1. 2

                                                                            They might only have two versions from your perspective (SD and HD), but having worked in video development, it’s likely they have 3-4 x those two versions for compatibility. The web has converged on a few technologies in the last few years, making it less cumbersome, but if they want to cover “most” devices, then I still expect them to have at least 2-3 sets of files.

                                                                          2. 1

                                                                            Do you think they do live transcoding? I’m certain they have multiple copies of the media transcoded to different qualities. It’s really not that much processing power when you have things like Ryzen boxes and GPUs which can rip through this in no time.

                                                                          3. 2

                                                                            At this point, they almost certainly don’t. But in the not too distant past, they would have had to have a multiplicity of encodings, because of the varying abilities of the various browsers/devices/codecs.

                                                                          4. 3

                                                                            This is tangential, but I have really enjoyed learning about how netflix handles encoding and processing their videos.

                                                                             Although Pornhub must process much more video than netflix does. I wonder what trade-offs PH makes compared to Netflix’s approach based solely on the amount of content they have.

                                                                            Here is a brief article from the Netflix Engineering blog about encoding. But I first started thinking about it when I watched this system design video from Gaurav Sen.

                                                                            1. 2

                                                                              Although Pornhub must process much more video than netflix does

                                                                              Are you sure about this? I don’t remember where I read it, but I’m sure at some point I read that one of the adult sites (likely this one) determined that most viewing behaviour is to watch a bit at the beginning, and then skip forward to about 80% of the way through the video. The consumption of Netflix [I’m guessing] would look very different, i.e., watching a film start to finish.

                                                                              I would have thought that this site could optimise videos for certain behavioural patterns.

                                                                            2. 3

                                                                              Self hosted, I’ve seen their servers in the datacenter.

                                                                              Porn industry giants usually self-host as much as possible.

                                                                              1. 3

                                                                                Self-hosted using Level 3 as the network provider per Rusty.

                                                                              2. 2

                                                                                Although idk about processing, I do remember that Rusty said in Reddit AMA that they use Limelight for video CDN.

                                                                              1. 4
                                                                                • OS: Linux (btw I use Arch Linux)
                                                                                • Editor: NeoVim, using neovim-qt
                                                                                • Terminal: GNOME terminal if I need a terminal outside of NeoVim
                                                                                • DE: Cinnamon as my desktop environment
                                                                                • Browser: Firefox Developer Edition
                                                                                • Music: Pragha and YouTube (sometimes using mpv)
                                                                                • Shell: Fish
                                                                                • Email: Fastmail, I just use the web interface since it is actually fast. I do compose my Emails in NeoVim

                                                                                This is what my desktop looks like:

                                                                                http://downloads.yorickpeterse.com/desktop.png

                                                                                 This is my NeoVim setup, which I run in full screen most of the time:

                                                                                http://downloads.yorickpeterse.com/nvim.png

                                                                                1. 1

                                                                                  that is a really neat font, love it! could you please tell me which one it is?

                                                                                  1. 4

                                                                                    The font I use for NeoVim is Source Code Pro, the desktop font is Noto Sans Regular.

                                                                                    1. 1

                                                                                      sweet! thanks a lot :)

                                                                                  2. 1

                                                                                    What’s the color scheme you use for NeoVim? I have my font size a bit bigger with the color scheme I use now (different editor, but still), but it seems perfectly legible in your screenshot.

                                                                                    1. 1

                                                                                      neovim-qt

                                                                                       Any particular reason for preferring that to nvim in a terminal?

                                                                                      1. 2

                                                                                         neovim-qt has a significantly lower input latency compared to running NeoVim in a terminal. I no longer have the data sadly, but most terminals (including GPU ones like Kitty and Alacritty) had something like 2-3 times the input latency. The worst are libvte terminals, which for me had a latency of around 80-90 ms. neovim-qt in turn hovered somewhere between 10 and 20 ms.

                                                                                        1. 1

                                                                                          Ah. I use a low latency terminal and DE, so I would probably not benefit much. xterm has 2ms latency (90%). https://lwn.net/Articles/751763/

                                                                                    1. 1

                                                                                      I do wonder how you “know” if you’ve built a properly secure device. I guess something like TLA could help you write a spec saying something like “in order to change ownership you need to have existing login access”?

                                                                                      The modeling is probably the most difficult part, but I don’t know what tool would be best to ingest a model with some constraints here
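
                                                                                      For what it’s worth, even a brute-force version of that property check is doable outside a real spec language - here’s a toy, entirely made-up ownership model explored exhaustively in Python, just to show the “enumerate every reachable state, check an invariant on every transition” idea; the hard modeling work you mention still applies to the real device:

                                                                                        """Toy exhaustive check of a made-up lock-ownership model (nothing like a real product)."""
                                                                                        from collections import deque

                                                                                        USERS = ("owner", "attacker")
                                                                                        # State: (current_owner, authenticated_user_or_None)
                                                                                        INITIAL = ("owner", None)

                                                                                        def actions(state):
                                                                                            cur_owner, session = state
                                                                                            for u in USERS:
                                                                                                if u == cur_owner:                 # only the owner's credentials work
                                                                                                    yield (f"login({u})", (cur_owner, u))
                                                                                            yield ("logout", (cur_owner, None))
                                                                                            if session == cur_owner:               # the guard we want the device to enforce
                                                                                                for u in USERS:
                                                                                                    yield (f"transfer_to({u})", (u, session))

                                                                                        def check():
                                                                                            seen, todo = {INITIAL}, deque([INITIAL])
                                                                                            while todo:
                                                                                                state = todo.popleft()
                                                                                                for name, nxt in actions(state):
                                                                                                    # Invariant: ownership only changes while the previous owner is authenticated
                                                                                                    assert nxt[0] == state[0] or state[1] == state[0], f"hijack via {name} from {state}"
                                                                                                    if nxt not in seen:
                                                                                                        seen.add(nxt)
                                                                                                        todo.append(nxt)
                                                                                            print(f"invariant held across {len(seen)} reachable states")

                                                                                        check()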

                                                                                      1. 2

                                                                                          That’s the crux of the issue. We know exactly how regular locks work, how lock picking works and how to mitigate that with modern locks. As the final line says, “Do not buy a smart lock”: the attack surface is enormous and a lot of these companies are only “security experts” on the surface.

                                                                                        1. 1

                                                                                          I get the impression that it’s pretty hard indeed. There are always things you don’t model (with timings as an obvious example). Of course, this thing seems to be broken in a more obvious way.