Threads for binarycleric

  1. 1

    These damn young people driving around, not understanding how their engine works or the chemical composition of their motor oil.

    I see where the author is coming from, but at the same time, sufficiently advanced technology should work seamlessly for the user. Most people don’t want to care about CPU architecture, OS versions, clock speeds, etc. They just want to message their friends and watch YouTube.

    Young people who are interested in how tech works will eventually learn but let the rest just enjoy it.

    1. 2

      Monoliths can potentially be shared by multiple teams and a single development standard can be enforced. Microservices can range in varying degrees of quality and operational health. Engineering discipline is tremendously important.

      However, it depends a lot on organizational discipline as well.

      How do you handle ownership during re-orgs? How do you handle employees leaving or changing teams? Who is on the hook for future updates and maintenance? A team? A person? What happens if the service needs to be changed and the owning team doesn’t have the capacity?

      I worry because at a previous job we seemed to have this unwritten rule that a person “owned” a service even if they were on a team with totally different goals. This meant that when people left, nobody else was trained on their services, and code that was vital to certain projects was left to rot. “Dying on the vine,” as they called it.

      1. 1

        PHP did an amazing job of becoming the standard for shared hosting and brought a lot of smart and talented people into the industry.

        1. 6


          Finally starting to read the SRE Workbook after months of putting it off.


          Building an Infinity Gauntlet with different lighting configurations and smart home features using an Arduino, some LEDs, and the shell of the Marvel Legends Infinity Gauntlet toy. I want to make it easy to reprogram and able to run on either battery power or a wall plug. Lots of soldering to do this week.

          1. 7

            If a ‘snap’ gesture doesn’t randomly turn off half the things in the house, you will have disappointed a random internet guy with absolutely no stake in this.

            1. 1

              Ha! That might be a version 2 project.

          1. 5

            This is a really difficult problem for all parties involved. IaaS/PaaS providers can only look at certain kinds of fairly general metrics: network traffic, database availability, HTTP requests being routed to appropriate processes, etc. Once a cloud provider is hosting code they didn’t write, or their customers depend on services the provider doesn’t directly maintain, there are potential points of failure the provider can’t fully observe.

            Typically cloud providers view availability as control-plane and administrative console availability. Are we able to provision new resources and are those resources staying up to the best of our ability? If the answer is yes, then we consider our service “available”. A single customer’s database being down or web processes being unavailable typically does not constitute an outage unless it is a widespread issue or the result of a misconfiguration or bad code deploy on our part.

            Full disclosure: I work at Heroku and we’ve had discussions about this exact topic numerous times. Please be respectful and don’t throw any flame my way, I’m an engineer and don’t make product decisions (only advocate for them).

            1. 3

              yeah. agree with most of this. a happy middle ground seems to be “give operators enough signals to evaluate health on an individual level themselves, while they implement robustness at the application level.”

              that’s tricky too of course, don’t want to leak details of the magic behind whatever abstraction you’re supplying to clients in the metrics you give. I don’t think there’s a cloud platform that’s doing this well at the moment – closest is probably GCP, I feel overwhelmed (in terms of volume) at times with the amount of SLA violations/outages they inform me about.

              1. 1

                Signals and events are a difficult problem and something I’ve wanted to implement for years. Sending people email during these issues was nice in 2011, but now I feel like an event stream or (at least) webhooks would be better to give customers control over what to do in the event of a service disruption.

              2. 1

                The article is not about customers’ services, but provider-run services that are not accounted for: “Then you’ve had a whole bunch of fun outages caused by something going wrong with their services”

              1. 2

                This article made me cringe when I read it in 2009 and it makes me cringe even harder today. Thinking like this may work when you are a startup and need to ship ASAP but that just doesn’t fly when you need stability and money/lives are on the line.

                1. 5

                  Full disclosure: I work at Heroku.

                  I started writing something more focused but eventually it turned into a brain dump. It would be shorter and to the point if I had more time.

                  My team is responsible for managing Postgres/Redis/Kafka operations at the company and our setup is a little… *ahem* different. We never touch the UI and rely entirely on AWS APIs for our day-to-day operations. Requests for servers come in via HTTP API calls and we provision object representations of higher-level “Services” that contain servers, which contain EC2 instances, volumes, security groups, elastic IPs, etc.

                  My team is maybe 20(?) or so people, depending on who you ask, and we own the entire provision -> operation -> maintenance -> retirement lifecycle. Other teams have abstracted some things for us, so we’re building on the backs of giants who have built on the backs of giants.

                  Part of our model is “shared nothing most of the time”. In the event of a failure, we don’t try and recover disks or instances. Our backup strategy revolves around getting information off the disks as quickly as possible and onto something more durable. This is S3 in our case and we treat pretty much everything else as ephemeral.

                  What I do and what you are looking to do are a little different, but my advice would be to investigate the native tools AWS gives you and try to work with them as much as possible. There are cases where you need to roll your own tools, but you should have a good reason for that (aside from the usual tech-contrarian opinions). “I don’t trust AWS” or “vendor lock-in” aren’t really things you should consider at an early stage. Get off the ground and worry about the details when you have the revenue to support those ideas. You have to build a wheel before you can build an interstate system.

                  Keep ephemeralization in mind. Continue doing more with less until you are doing almost nothing. If you have an operation that AWS can handle for you, just let them. Your objective is to build a business and serve customers, not to build the world’s best infrastructure. Keep UX and self-service in mind. If your developers can’t easily push code changes, recover from bad deploys and scale their apps then you have a problem.

                  Look into AWS CodeDeploy, ECS, Lambda, RDS, NLBs, etc. Make sure you understand the AWS networking model as working with VPCs and Security Groups can be quite complex. Don’t rely entirely on their policy simulator as it can be quite confusing at times. Build things in staging and TEST TEST TEST.

                  Give each developer their own sub-account for staging/test environments. Some of AWS’s account features make this really easy. Don’t ever use root credentials. Keep an eye on Trusted Advisor to make sure developers aren’t spinning up dozens of r4.xlarge instances that do nothing (or worse, mine bitcoin).

                  MAKE SURE YOUR S3 BUCKETS AREN’T WORLD WRITABLE. This happens more than you think. You’ve gotta pen test your network to make sure you set it up correctly.
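                  A quick programmatic audit helps here. The sketch below is pure Ruby with a simplified, hypothetical hash shape standing in for the grants an S3 get-bucket-ACL call returns; the helper name is mine, not an AWS API:

```ruby
# Sketch: flag S3 ACL grants that give the global AllUsers group any
# form of write access. The grant hashes are a simplified stand-in
# for what a get-bucket-ACL call returns.
ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

def world_writable?(grants)
  grants.any? do |grant|
    grant[:grantee][:uri] == ALL_USERS_URI &&
      %w[WRITE WRITE_ACP FULL_CONTROL].include?(grant[:permission])
  end
end
```

                  In practice you’d run a check like this across every bucket in the account, and also review bucket policies, which can grant public access independently of ACLs.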

                  Learn the core concepts and learn to script as much as possible. The UI should only be used for experimentation early on, after that, you should take the time to teach a computer how to do this stuff. Consider immutability as much as possible. It is far better to throw something away and replace it in AWS land than to try and bring it back online if the root cause isn’t quickly apparent.

                  Remember that AWS serves loads of customers and they have to prioritize. If your startup is only paying a few thousand a month then don’t expect immediate responses. They’ll do their best but at that stage, you are pretty much on your own. If you can afford Enterprise Support then pay for it. Money well spent.

                  Use Reserved Instances as much as possible. You save a ton of money that way and once you get to a certain size AWS will likely start to cut bulk discount deals.

                  If this all sounds scary and you are building a basic web app or API, do yourself a favor and use Heroku (or similar) to get started. If your organization doesn’t have the resources to bring on people to build and manage this full-time, you’re doing yourself a disservice by trying anyway. I learned that the hard way at a previous job when I had a CTO who was allergic to the idea of PaaS.

                  That’s just my $0.02.

                  1. 2

                    Do you mind my asking what the most painful parts of using AWS at Heroku are, if not already covered in your (very thorough) write-up?

                    1. 4

                      Hmm… Where to begin? I’m not an expert in all of these things but I’ve often heard complaints about the following:

                      • Insufficient capacity issues with instance types which involves making support calls to AWS to help us limp along. Smaller regions have this issue quite often.
                      • Lack of transitive routing with VPC peering, which makes our Private Spaces product a bit cumbersome. Private Links may help but we’re still investigating.
                      • STS credentials expiring during long data uploads, which means we need to switch to IAM credentials, which have hard limits.
                      • Cloudwatch being way too expensive for our use case so we have to poll a lot of our instances to determine events. We’ve spoken with them a few times about what we are trying to do and it is simply a use-case they aren’t accounting for right now. Maybe someday. The current pricing structure may have been feasible when more of Heroku was multi-tenant, but that isn’t the case anymore. I’ll accept that as a tradeoff.

                      Those are at least the most recent sticking points. We’ve been lucky enough to get in a room with some AWS developers in the past and it was reassuring to hear things like “we know all about it” and “we’re working on a solution”. They’re a huge organization and can be slow to make changes but I genuinely believe they are doing their best.

                      1. 3

                        Oh, oh, oh!

                        People not understanding that CPU credits on t2 instances are a thing. AWS gives you part-time access to the full power of a CPU on their cheaper instances but throttles you down if you use too much. It is nice for use-cases where bursting is required but will break your app like nobody’s business if you keep your instance under high load. There is a reason t2s are so cheap (~$40/month with on-demand pricing for a t2.medium).
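                        To make the throttling mechanics concrete, here’s a toy Ruby model of the credit balance. The numbers are illustrative simplifications (0.4 credits/minute roughly matches a t2.medium’s 24 credits/hour earn rate), not AWS’s exact accounting:

```ruby
# Toy model of t2 CPU credit accounting. Roughly: one credit buys one
# vCPU-minute at 100%, the instance earns a fixed trickle of credits
# per minute, and burns credits in proportion to actual CPU use.
def credit_balance(initial, earn_per_min, usage_fractions)
  usage_fractions.reduce(initial) do |balance, cpu_fraction|
    [balance + earn_per_min - cpu_fraction, 0.0].max
  end
end

# Bursty load (20% average) stays ahead of the earn rate, but an hour
# pegged at 100% drains the bank to zero, at which point AWS throttles
# the instance down to its baseline.
bursty = credit_balance(10.0, 0.4, [0.2] * 60)
pegged = credit_balance(10.0, 0.4, [1.0] * 60)
```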

                        You get what you pay for.

                        1. 2

                          Fascinating, thank you for the write-ups!

                    1. 6

                      From my experience, Ruby’s Net::HTTP should be avoided if possible. The library is pretty battle-tested in most cases but it can have odd quirks around error handling. Connection issues can raise any number of exceptions and it tends to be a leaky abstraction around other networking subsystems.

                      I know this isn’t necessarily the “ruby way” but I’d much rather have an HTTP library that doesn’t raise exceptions for connection issues, but rather use Go’s pattern of returning an error object. I’d rather reserve exceptions for problems with my code as opposed to external services.
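                      As a sketch of what I mean (a hypothetical wrapper, not a real library), connection-level failures could come back as a second return value instead of raising:

```ruby
require "net/http"
require "uri"

# Hypothetical Go-style wrapper around Net::HTTP: network-level
# failures are returned as a value, so `rescue` in application code
# stays reserved for actual bugs. SystemCallError covers the Errno::*
# family (ECONNREFUSED, ECONNRESET, ETIMEDOUT, ...).
NETWORK_ERRORS = [
  SystemCallError, SocketError, EOFError,
  Net::OpenTimeout, Net::ReadTimeout
].freeze

def safe_get(url)
  uri = URI(url)
  response = Net::HTTP.start(uri.host, uri.port,
                             use_ssl: uri.scheme == "https",
                             open_timeout: 2, read_timeout: 5) do |http|
    http.get(uri.request_uri)
  end
  [response, nil]
rescue *NETWORK_ERRORS => e
  [nil, e]
end

# response, err = safe_get("https://example.com/")
# err ? handle_unreachable(err) : process(response)   # hypothetical callers
```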

                      At my last job we were stuck on old versions of libcurl so many other alternatives weren’t possible. I ended up using Excon (pure ruby, not built on Net::HTTP) as a replacement and found it to be much easier to use and far more consistent when our code fell off the happy path.

                      This all points to a larger issue. The post talks about an HTTP client with extra features that wraps another HTTP client with extra features that wraps a leaky abstraction. Why is all this necessary to make HTTP connections? I get that the point of We::Call is to help developers improve how they make external calls, but all of this points to an outdated stdlib (seriously, no offense to the folks who work on Net::HTTP).

                      Why can’t we have a stdlib HTTP client that requires timeouts and is pluggable to use different connection backends (consider this a rhetorical question)? Looking at all the HTTP clients/wrappers in the Ruby ecosystem leads me to believe that the stdlib should either be updated or removed in favor of gems that can evolve outside of Ruby’s release cycle.

                      1. 2

                        Somewhat on-topic, the best tool I’ve found for managing migrations is sqitch. Not only is it way better than letting whatever obnoxious ORM you’re using manage the migrations, it’s actually pretty pleasant to use.

                        1. 2

                          This looks very similar to Flyway, which I love for the exact reasons you’re highlighting: you write the updates in pure SQL, making it trivial to use things like CREATE INDEX CONCURRENTLY and the like (though unlike sqitch, Flyway does use database version numbers—something I’m okay with, honestly). That said, I don’t find ORM migration tools universally awful. There’s nothing wrong with Django’s/Rails’/EF’s built-in tools for small projects; I’d just move to something like Flyway when the project gets a bit older and you start caring about details like index specifications and PostgreSQL update patterns like the one described here.

                          1. 1

                            One of my teammates at a previous job was able to open-source our schema management tool (dbsteward). It was developed in-house and used in a production environment for many years before things like ActiveRecord migrations were commonplace.

                            It requires a good bit of XML/XSLT knowledge but it worked very well for us in a HIPAA/PHI environment where data had to be tightly controlled and audited. It was a useful tool back in 2011 when working in a fairly legacy home-grown PHP codebase. I haven’t used dbsteward since I left that job, so I can’t really comment on using it with more modern frameworks.

                          1. 6

                            Adding and removing columns and keeping track of the basic state of the database is the easy part.

                            Something that would be worth expanding on: how do you test migrations? How do you verify your migrations do what you want and that your code works with the (now updated) data?

                            Also important: when do you run migrations? How do you do zero-downtime migrations?

                            1. 1

                              Heroku employee here, from the Department of Data. For migrations that could potentially cause high impact, we advise customers to create a fork of their database and run the migration there first. Observe any problems and figure out a rough timeline. This may add some additional cost because you are briefly paying for two databases, but the overall cost is low compared to the risk of production impact.

                              This isn’t a pure apples-to-apples comparison since you don’t have a production workload, but it is far better than running the migration on your laptop or a tiny test database and hoping for the best.

                              As far as zero-downtime migrations go, this requires some further engineering effort. The author mentions adding columns without a default value and then backfilling data. Solutions like these are best since they involve nearly zero production impact. However, discipline and follow-up are required since your database will be in a sub-ideal state for a time.
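                              The backfill step is where the discipline comes in: do it in small batches rather than one giant UPDATE, so individual lock times stay short and replicas keep up. Here’s a pure-Ruby stand-in for that batching logic (a hypothetical helper; the rows array plays the role of the table, and in real life each batch would be a single SQL UPDATE):

```ruby
# Batched-backfill sketch: fill :new_col from :old_col a few rows at
# a time, the way you would loop UPDATE ... WHERE new_col IS NULL
# LIMIT n against a live table.
def backfill!(rows, batch_size: 1000)
  batches = 0
  loop do
    pending = rows.select { |r| r[:new_col].nil? }.first(batch_size)
    break if pending.empty?
    pending.each { |r| r[:new_col] = r[:old_col] }
    batches += 1
    # in production: sleep briefly here so replicas can catch up
  end
  batches
end
```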

                              As for when… At Heroku we tend to do migrations and maintenance during the day, when help is more readily available and everyone is at their best. Some seasoned developers/sysadmins may insist on doing migrations at 3 AM on a Sunday, but this only increases the risk of mistakes since everyone is tired, grumpy, and trying to move as quickly as possible to get back to sleep.

                            1. 5

                              Many developers, past me included, forget that some basic vetting is required before pulling open-source code into your apps. If it is something large and well supported like various Node or Ruby libraries then you can be pretty confident, but pulling in something small, in size or in usage, should be done carefully. Doing something simple like checking install scripts, availability of documentation or reputability of the developer(s) should be enough to give you some sense of trust.

                              I wonder how difficult it would be to implement some kind of checks during the install process to make sure the library isn’t making network connections or writing files outside of the install path.

                              1. 3

                                Some repos have introduced (or plan to introduce) quality metrics of various sorts. Integrating these into the CLI clients might be a good start. For example, in my .npmrc (or whatever, I don’t use Node) I would specify that I want a warning if dependency resolution on any project ever results in pulling down a package with a score below X and a full-on failure if the score is below Y.
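                                The policy itself could be tiny, something like this Ruby sketch (names and thresholds are hypothetical; the X and Y above become warn_below and fail_below):

```ruby
# Hypothetical dependency-score policy: fail resolution below one
# threshold, warn below a looser one, allow otherwise.
def score_policy(score, warn_below:, fail_below:)
  return :fail if score < fail_below
  return :warn if score < warn_below
  :ok
end
```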

                                Of course this assumes that the scores are sufficiently difficult to game and that there is enough variance in scores that people can draw meaningful lines in the sand for their projects.

                              1. 1

                                I’m not sure how I feel about this. I can see the potential value, but I worry this will only lead to added code complexity. This feels like a shortcut to writing one-off private methods in a class or module, which I guess can be useful. I suppose most of my concerns are about developers misusing it rather than the method itself.

                                1. 4

                                  Using a raspberry pi to build a small wireless access point that routes all traffic through my Buffered VPN account and/or Tor. Overall it’s a fairly simple project but I’m using it as an excuse to get more familiar with iptables and openvpn. Upcoming projects at work will require a bit more networking knowledge so I figure this will be a good learning experience and perhaps a blog post.

                                  Learning more about Rust has been on the TODO list for a while now. I should probably brush up a bit since Rust Belt Rust is coming up in October.

                                  1. 3

                                    We often forget the difference between bootcamps and college. A college is supposed to be a place of learning, with an emphasis on students becoming well-rounded individuals with an academic focus on a smaller subset of topics. Sadly, in recent times many public universities are becoming more like trade schools, where the emphasis is teaching students work-applicable skills as opposed to teaching them how to learn and research. Too many people think going to school for computer science is functionally the same as going to a 12-week bootcamp.

                                    I view coding bootcamps as more in line with trade schools, just hyper-accelerated and lower quality. Trade schools are designed to give students a fairly intensive introduction to core concepts that allow them to get an entry-level position. You don’t graduate from a trade school and become a master mechanic, expert plumber, senior developer, etc.

                                    Managing expectations and providing students with a quality curriculum are both very important. Far too many of these bootcamps seem to just give students a checklist of fairly basic skills in the currently trending languages/frameworks. Many value quantity over quality. Seeing students enter a 12-week bootcamp and be thrown Ruby, Python, C#, Swift, and Go (with associated frameworks) just means that they leave with only basic skills that most junior/mid-level developers could Google in an afternoon.

                                    When interviewing candidates, I don’t particularly care where they went to school. I care about what they know. If all they can tell me about Ruby is how to run bundle install and rails generate, plus some extremely basic knowledge of Rails’ MVC model, then they aren’t going to be productive members of the team. Instead of shoving a dozen languages and frameworks at students, focus on one or two (say Rails and Ember) and teach them in more depth. End the bootcamp with 1-2 weeks of students self-learning a new one to get an idea of other environments.

                                    Honestly, I’d rather see our field utilize apprenticeships than have people go into private unsecured debt to attend bootcamps. Sure, an apprenticeship won’t be glamorous, but I’m willing to bet you’ll be a lot more valuable when it comes time to find your first real job.

                                    1. 4

                                      Catching up on all the VODs from SGDQ and trying to resist the urge to get into speedrunning.

                                      1. 1

                                        Oblivion was amazing :D

                                      1. 6

                                        There is a pretty big difference between comments that contain heated and/or unpopular opinions and comments that are outright hurtful and abusive. I value a community that lets people express their (non-abusive/hostile/trollish) views without fear of a ban or harassment. While removing bad actors is necessary, our community shouldn’t force people to follow some sort of hive mind in order to be accepted. Sadly, the line isn’t always clear.

                                        While I disagree with opinions stated by users like @nickpsecurity, I didn’t notice any comments that fell into petty personal attacks or bullying. Arguing about points mentioned in the content itself is fine to me but posting inappropriate things like commenting on the user’s gender identity, sexuality, race, appearance, etc. crosses a line.

                                        We have to make a distinction and this is why many projects/communities/conferences have adopted CoCs. While they can be unpopular to some, a good CoC outlines specifically what kinds of behaviors are not tolerated and what actions can be taken in these unfortunate events. Many CoCs floating around on the internet have a tendency to go too far but it isn’t difficult to lay out something basic that gives users assurances of somewhat professional behavior.

                                        I’ve marked some of those as Troll since in the past I’ve viewed trolling as an attempt by someone to provoke an emotional response or just being an asshole “for the lolz”. What if we made a “Being a Jerk”, “Boo! Not cool.”, or “ಠ_ಠ” downvote tag and had a certain ratio of these downvotes automatically collapse the thread on page load, like Reddit?

                                        1. 2

                                          Many CoCs floating around on the internet have a tendency to go too far but it isn’t difficult to lay out something basic that gives users assurances of somewhat professional behavior.

                                          I think your first clause gives the lie to the latter; coming up with a good CoC turns out to be really, really difficult.

                                        1. 4

                                          My current employer (Heroku) has a pretty unique way of doing this. After interviews, which can include showing code samples and answering technical questions in a non-trivia fashion, we offer to bring in a candidate and have them work with our team and codebase for a day or two. We call it the “starter project” and it is very useful for both us and the candidate. They get to see our processes (we’re a distributed team but some developers work at HQ), get to know the team members and see our code quality.

                                          At the end of their project, they are asked to give a 30-45 minute presentation on what they worked on, what challenges they faced, and what they learned. We don’t expect them to ship a feature or even have things fully working. This is purely an evaluation of how they work and approach problems. I’ve done take-home work, coding challenges, and whiteboarding in the past. This method has been by far my favorite.

                                          It isn’t without its problems since it involves people taking off a day or two of work and they sometimes have to be brought on-site (all expenses paid and we’re flexible if they can’t travel). Thankfully, it all occurs during the work day and candidates can use their own machine with a familiar development environment.

                                          1. 2

                                            Rust just isn’t, in my eyes, quite ready yet

                                            I know you said you’d cover this in a future blog post, but can you give a high-level list of bullet points? I’m sure the Rust devs would love to hear your thoughts.

                                             I’m sure I’ll get a “well actually” for this, but did you consider Go? If so, what turned you off the idea?

                                            1. 1

                                              Sorry, I meant to respond to this earlier. There’s not really anything I can say that’s likely to be of much use to the Rust developers, I’m afraid; I feel like the metaprogramming facilities in Rust aren’t yet as powerful as they should be (remember that C++ is my model here), and the language is harder to use than perhaps it could be (though I’m speaking mainly from secondhand experience here - I have a friend who’s using it relatively heavily and have seen him struggle with how to implement certain patterns). I know it’s also fiendishly tricky getting the Rust compiler itself to compile on OpenBSD (though of course the stable compiler is packaged). Probably the single biggest issue is that I couldn’t find an event loop library for Rust that looked like it fit the bill (must be able to handle timers - ideally both monotonic and realtime - signals, and process watch, as well as file descriptor readiness events, with ahead-of-time allocation where possible, and cross-platform of course).

                                              As for Go, I don’t think garbage collection belongs in an init process, and in particular I want to be certain that I can control allocation if not deallocation. I will talk a bit more about this in my blog.

                                              But ultimately, I like C++ and am familiar with it. That’s probably the main reason for the language choice, truth be told.

                                            1. 9

                                              This has been 5 years in the making for me. We first started when I joined the company around 5 years ago.

                                              1. 7

                                                I’ve only been at Heroku for a year but this has been more-or-less the only project I’ve worked on. So happy to see it live and I’m excited to help improve healthcare vendor app development. The healthcare industry in my hometown (Pittsburgh) is pretty large and I really hope local projects adopt this as an alternative to their more traditional deployment models.

                                                I’ve worked at a few shops that had to be HIPAA compliant. Having PaaS/IaaS as a viable alternative would have been huge for us back then. We spent so much time and money reinventing the wheels of other projects/products because they were deemed a security risk.

                                                1. 5

                                                  You should request a Heroku hat.

                                                  1. 1

                                                    As a Heroku hat wearer, I second this.

                                                  2. 2

                                                    As a mental health worker turned software engineer, HIPAA compliance is both near to my heart and a very difficult (but important!) problem for software. Thank you for this.

                                                    1. 1

                                                      So cool. Congrats, y’all, this is a gamechanger for sure.

                                                      1. 1

                                                        Great news! Being in a city with a large Health IT and Medtech ecosystem (Houston), I’ve felt bad for those companies when they find themselves at some tech talks and startup-leaning events. I remember one particular talk that focused heavily on “work your team shouldn’t be doing” and described a handful of free-tier and affordable SaaS tools for every imaginable need, plus a smattering of IaaS and PaaS, basically to help engineers focus more time on their company’s core product.

                                                        About a third of the audience was in Healthcare and so a third of the Q&A boiled down to “So how is this data stored?” and “Oh, I guess we can’t use that.”

                                                        1. 1

                                                          This is really cool! Would you mind my asking a few questions?

                                                          1. What was involved in making this service HIPAA compliant? As an addendum, was LetsEncrypt integration related? Sounds like a huge project!
                                                          2. Are there any common development patterns that shouldn’t be used on Shield?
                                                          3. And maybe a question for a lawyer, but, say Heroku had a bug that made the service non-compliant with HIPAA, would that expose me as the app developer/company to legal difficulties?

                                                          I read through the blog post, but I didn’t click through to the more detailed docs. My apologies if these questions are answered there.

                                                          1. 1

                                                            Hey, the project touched all teams and orgs. My involvement was fairly minimal as I didn’t directly work on Shield.

                                                            What was involved in making this service HIPAA compliant?

                                                            A ton of stuff. One thing you can see on the Heroku buildpack is that when someone makes a PR there is a “compliance” section. This is where another engineer has to confirm that they’ve reviewed the change, that it won’t introduce a security vulnerability, and that it guards against someone slipping in some kind of backdoor.

                                                            There are quite a few other things, but that one touched all engineers and codebases. HIPAA is as much about having a paper trail and being able to prove that you’re compliant as it is about actually being compliant.

                                                            Someone who worked more on the actual details might be able to say more, or maybe not depending on our policies, but since that one is publicly visible I figure it’s fine to mention.
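                                                            For illustration only, a compliance section in a repository’s PULL_REQUEST_TEMPLATE.md might look something like the sketch below. This is hypothetical (the actual Heroku template isn’t shown in this thread, and the checklist items are invented), but it captures the idea of a second engineer signing off on every change:

                                                            ```markdown
                                                            ## Compliance (hypothetical sketch)

                                                            - [ ] A second engineer has reviewed this change
                                                            - [ ] The change does not introduce a known security vulnerability
                                                            - [ ] The change does not add undocumented network calls, credentials, or backdoors

                                                            Reviewer sign-off: @reviewer-handle
                                                            ```

                                                            The point of a template like this is less about the checkboxes themselves and more about producing the paper trail auditors ask for.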

                                                            Would you mind my asking a few questions?

                                                            I would say that these would be better answered by one of our specialists from

                                                        1. 9

                                                          You could just, you know, install an operating system which you trust out of the box. With this script, at best, you’ve got an OS which you can distrust a little bit less. At worst, you’ve wasted loads of time on creating a false sense of security.

                                                          1. 4

                                                            For me, it’s not an issue of “trust”, but rather the direction of the OS. To me, trust in this context means that I want my vendor to be transparent about what they do and how they do it, and to give me switches to turn things off. Thus far, Microsoft has been doing both with Windows 10, so I don’t actually have any real trust issues with them.

                                                            I just happen to dislike some (though not all) of the integration points in Windows 10 with Microsoft cloud. Most of the telemetry, for example, really doesn’t bother me much, nor do Cortana, P2P updates, and a host of other things, but I do get irked by Windows forwarding everything I type into search to Bing, and I’m not thrilled with the ad integration/suggested start menu items. So I can turn off the things that I dislike and leave the rest. Throw in that I mostly do actually like Windows (it’s dramatically more configurable than macOS, there’s a rich software ecosystem, tools like Hyper-V/PowerShell/WSL make it very pleasant as a developer platform, and so on), and running a little script like this just isn’t a big deal.

                                                            (Many of these settings, by the way, have nothing to do with “trust”, but rather just switch Windows 10 into a more Windows 7-like mode. That’s a valid discussion to have, but not relevant to trusting or not trusting Microsoft.)
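                                                            As a concrete example of the kind of switch involved, disabling the Bing-backed web results in Start menu search is commonly done with a couple of per-user registry values. Treat this as a sketch: these value names exist on many Windows 10 builds, but their names and effects have changed between releases, so check your build before relying on it.

                                                            ```
                                                            Windows Registry Editor Version 5.00

                                                            ; Sketch only: disable web results in Start menu search.
                                                            ; Value names vary across Windows 10 releases.
                                                            [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Search]
                                                            "BingSearchEnabled"=dword:00000000
                                                            "CortanaConsent"=dword:00000000
                                                            ```

                                                            Scripts like the one under discussion are essentially bundles of dozens of tweaks in this style.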

                                                            None of this is honestly unique to Windows, by the way. macOS sends searches online by default, it tries very hard to get you to put contacts and files on iCloud, it sends telemetry by default, etc. And it’s not honestly a big deal there, either, because I can turn those settings off if I don’t like them. Even Linux distros frequently do this. You may have forgotten, but Ubuntu went through a few versions where Unity sent queries to Amazon and where they pushed you hard to put all your contacts and whatnot in their cloud thing, and so on.

                                                            And the simple fact is that I honestly trust Microsoft a lot more than I trust Lenovo, so the weak point in my trust chain is my physical hardware, not my OS.

                                                            1. 2

                                                              switches to turn things off

                                                              If these were easily accessible, this script wouldn’t be necessary. Also, their effectiveness seems dubious.

                                                            2. 3

                                                              Like many things, that works in theory but not in practice. Plenty of applications require Windows, and there is something to be said for compatibility. People can use Wine, but many of us want things to “just work”. Being willing to spend hours forcing square pegs into round holes (for business purposes) shows a pretty serious undervaluing of your time.

                                                              Truthfully: I play video games and gaming on non-Windows OSes is horrible.

                                                              1. 14

                                                                I play video games and keep a separate Windows PC just for gaming, through which no private information whatsoever flows (aside from Steam, Battlenet etc logins as necessary).

                                                                I have no idea how to get a Windows machine to be trustworthy so I skip the whole problem by having a Windows machine that I don’t need to trust in the first place. Works for me so far.

                                                                1. 2

                                                                  That’s actually my setup. :P I rarely do anything I care about in Windows. I use a fairly locked down Macbook for a lot of my “real” work.

                                                                2. 2

                                                                  Plenty of applications require Windows and there is something to be said for compatibility.

                                                                  That historically applied mostly to games or specialized software (AutoCAD, Photoshop, etc.). Most software right now is web based, and there are tons of people using just Chromebooks or their Android/iOS tablets. Desktop app lock-in is the weakest it has ever been.

                                                                  Truthfully: I play video games and gaming on non-Windows OSes is horrible.

                                                                  Gaming on non-Windows OSes has been pretty decent for me (OpenBSD, Linux + Steam, PlayStation 4).

                                                                  1. 1

                                                                    True, the number of apps that require Windows is shrinking all the time. As someone who uses a Mac for personal and professional programming, this makes me very happy.

                                                                    Re: gaming. Overwatch. :) Sure, there is a PS4 version, but I find the PC version to be far better.

                                                                3. 2

                                                                  Apple is silly expensive, and not everything is available on Linux.

                                                                  1. 5

                                                                    How is macOS a trustworthy system?

                                                                    1. 2

                                                                      Apple is silly expensive

                                                                      A ‘hackintosh’ might be worth considering

                                                                      everything isn’t available on Linux

                                                                      If your privacy is important enough to you, you can (almost always) find ways to work around this.

                                                                      There is also Windows Vista, which has no telemetry, and Windows 7/8, which are less extreme than Windows 10 (and whose telemetry is more easily disabled).