Genuinely surprised. I was under the impression Google Domains was a kind of leverage for TLD control. Honestly it’s kinda nice then to see it either failed or wasn’t the intention. I wonder how much Squarespace bought it for…
This could be the result of some forced hand acting behind the scenes. Only yesterday, EU’s anti-trust committee took some strong action against Google right? This can’t be a coincidence.
You overestimate the speed at which these organizations are able to move, to think that an anti-trust action announced yesterday could result in the sale of a whole business unit today.
That’s true, but correlation and causation are different things. Yesterday’s action may not have caused today’s sale of a business unit, but both could be correlated with something else that has been in the works for a long time.
You are forgetting the long, long history of Google shutting down their services at the drop of a hat.
https://killedbygoogle.com/
Oh man. After figuring out how to use OpenBSD as a college apartment router on an old PowerMac G4 that I’d picked up for $5, I moved that setup to a PC Engines ALIX to save on power. That ran for many years without any problems, and I only switched to the APU2 for better performance.
I’ve been running that APU2 for over six years now, and it’s never had a problem either. I’m really sad to see this line of passively-cooled small-form-factor PCs go.
What do you think about the Framework laptop as a spiritual successor to these ThinkPads?
The Framework is designed to be very thin, which makes it much more difficult to repair than an X201. It’s really too bad they went this route instead of prioritizing user-serviceable parts as the primary design principle.
Thin is a feature for me; the laptop fits with my other laptops in my bag. I’ve not found the thinness to hamper my efforts any time I’ve opened the case (rare but I’ve e.g. upgraded the speakers to the higher quality ones).
Design constraints are not bad things, and the thinness means nobody bats an eye when I pull it out. It’s expected that modern laptops be light and easy to transport; it’s unusual that they can also be repaired (ship-of-Theseus style, perhaps) by the end user.
who cares if someone bats an eye because your laptop is a few mm thicker than “normal”?
It makes “repairability” about “you want me to give up my sleek devices” instead of “just choose the right devices and life gets better.” The values we hold dear will lose if we don’t do them well enough to promote them to people who are not convinced.
if we sacrifice repairability in order to appeal to ultrabook enjoyers, that value has already lost.
I switched away from the Mac ecosystem because I wanted something that I could upgrade over time (amongst many other reasons). I think reminding folks that computers should be user-serviceable and -upgradeable, without being ugly bricks only a dork would use, is cool.
When people see my Framework (13) they see a thin, light laptop with an unusual logo. The modular ports are always fun for demos, and most folks are at least somewhat interested in the Framework’s ability to be upgraded, which is in stark contrast to Apple’s philosophy. Yeah, I run Linux; yeah, I miss some of the good third-party apps that are macOS-only; but not having to buy a whole new laptop in two years is worth it. Great conversations.
I used to carry around older ThinkPads, too: absolute thicc black chonkers. The only conversations those things raised were folks giving me shit because I was lugging around a heavy, ancient-looking beast.
I switched to Mac hardware for the opposite reason. I used to build my own desktops, but I kept running into things like needing to upgrade the motherboard to upgrade the CPU, needing new RAM to go with the new motherboard because it didn’t support the old kind, and then needing a new graphics card to actually get the benefits of the other bits. I have a NAS that I assembled myself, and after my last upgrade only the case and disks remain the same. That’s a reasonable trade for a NAS because the disks are its main reason to exist, so upgrading them separately from everything else is nice.
The main reason that the new Apple laptops feel fast is that they have carefully scaled everything to avoid bottlenecks. If you upgrade any one part, you’re unlikely to get much more performance, you’ll just see bottlenecks elsewhere. I’d rather have a machine that lasts a long time (I’m probably going to replace my MacBook Pro soon, it’s 10 years old now) than one where I can keep upgrading bits but need to ship-of-Theseus it to actually see a significant benefit.
That makes sense, but there does exist a world where you don’t have to buy the top-of-the-line laptop and use it for a decade. While I may upgrade my Framework to an AMD chip (partially because I can and partially for the increased battery life), I can also not do that and just increase the RAM and SSD size as-needed. That may not be as finely tuned as an Apple machine but I’m not going for “fastest bus throughput” there, I’m going for “I don’t have to buy a new laptop to double my RAM.”
To each their own though, I was about that Apple life for a few decades and they treated me well!
While I may upgrade my Framework to an AMD chip (partially because I can and partially for the increased battery life), I can also not do that and just increase the RAM and SSD size as-needed.
If you do, I’d be really curious to hear how well it works for you. It’s been almost 20 years since my daily work machine was one where I could upgrade components piecemeal, and I suspect a lot has changed since the days when the RAM technology that different CPUs supported was completely different. I upgraded the RAM in my PowerBook (largely because Apple charged an insane markup on SO-DIMMs), but suffered through multiple motherboard replacements because the solder kept coming off the DIMM slots. For newer machines (including the NAS), I’ve just bought the maximum that the motherboard supports, even when the RAM has been upgradable.
Replacing the CPU wasn’t a significant win for me in any machine since the end of the Socket 7 era (and even then often didn’t give the maximum speedup until I upgraded the motherboard and RAM as well).
I used to upgrade disks a lot, and the jumps from 1 GiB to 20 GiB and 40 GiB were each accompanied by running out of disk space before I could afford the bigger disk, but my personal and work laptops (bought 7 years apart) both have 1 TiB SSDs and that hasn’t been a space constraint for me. With the growth of cloud storage for cool storage, I suspect that my local SSD will gradually become more a local cache than a truly local filesystem, which reduces the pressure further.
You can upgrade the SSD in Macs (I’ve replaced the battery in mine, and that involved removing the SSD, which is vastly easier), but I’ve not felt the need to. The 1 TiB disk was expensive back then, but now it’s one of the cheaper options. I stopped running VMs locally a while ago, but I can imagine wanting a 2+ TiB disk if I had a bunch of VMs on my laptop.
capitulating to thinness fetishism because losers gave you shit for your thinkpad is the opposite of cool; it reflects a lack of confidence. the “cool” approach would be to hold the course if you really believe sacrificing cooling and repairability is not objectively worth fitting 1/4” more of who-knows-what in your backpack. if your principles crumble in the face of marketing-induced irrationality expressed by random people, maybe you don’t actually believe in them.
I know I shouldn’t feed the trolls, but here I go anyway.
capitulating to thinness fetishism because losers gave you shit for your thinkpad is the opposite of cool; it reflects a lack of confidence.
Or alternatively, as was the point of my comment, understanding that different users have different wants and needs might be a reasonable thing to do, as well as understanding that locking repairability and longevity to devices and laptops that most folks do not want to use only pushes folks away.
if your principles crumble in the face of marketing-induced irrationality expressed by random people, maybe you don’t actually believe in them.
If your principles require you to post inflammatory, ideological creed, you may want to reconsider your approach.
I don’t think it’s cool to associate thick computers with being a dork, or to assume ordinary people would be unable to overcome thinness fetishism if they understood the objective tradeoffs.
Your perspective seems to partially adopt Apple’s mindset, giving up some repairability for the sake of attracting customers while offering no objective benefit. That’s not necessarily wrong–Apple is extremely successful after all–but it suggests a pessimistic view of human nature, and is certainly not cool. IMO compromising principles for the sake of adoption at least requires some empirical justification, e.g. some evidence that there are non-dorks who use the Framework laptop, which is far from obvious to me.
Another feature is that they actually need to sell laptops, and in this day and age selling something as thick as an X200 is just not viable. I think it’s amazing what they’ve pulled off in a form factor that still looks like a ‘modern’ laptop.
Yeah exactly. I have a Logitech C925 on my home desk setup, because I like having an HD camera with FreeBSD and Linux support. But with my ThinkPads I know that when I’m away from my desk, I don’t have to cart it around with me and plug it in every time I want to make a video call.
Honestly the lack of an integral webcam is the one design decision on the Reform that I don’t understand. It’s 2023 and much of the tech world is hybrid or remote; being able to make a video call from your device is table stakes.
I’m curious as to how you mean this. I’m a happy Framework daily user, but now that I think about it I’d take a slightly thicker model in exchange for an externally replaceable battery. Aside from that I can’t think of anything I’d categorize as a tradeoff between “serviceable” and “thin.”
Although I do miss my MacBook’s amazing trackpad and software support for same.
Aside from that I can’t think of anything I’d categorize as a tradeoff between “serviceable” and “thin.”
I’ve built probably two or three hundred keyboards by hand. Any individual piece could break on that board and it would be at most a 20 minute repair; more like 5 minutes for the majority of the parts. Most of those 5 minutes would be waiting for the soldering iron to heat up.
Last weekend I had to replace a key switch mechanism in a ThinkPad. Luckily it was an old ThinkPad from 2011, so it still had key caps that were easier to remove, but replacing the switch mechanism was difficult; I had a spare donor board in my closet, but I destroyed three different switches in the process because miniaturization has made them so fragile and tiny. Something that would have been trivial on a larger device required tweezers and a magnifying glass.
So I would say that the X201-era Thinkpads are already making significant sacrifices to repairability in favor of miniaturization. I’ve also replaced the fan on a couple of these models, and it’s already very difficult to get it reseated precisely within the tolerances which will allow the case to close back up the way it’s supposed to. I have never even attempted to repair a super-thin laptop, (I’ve been avoiding them for what I hope are obvious reasons) but it is no great leap to assume that miniaturizing it even further would reduce these tolerances even further. Plus you have trade-offs like the RAM or the battery being soldered in, because it’s just a simple fact of engineering that connectors which allow modularity take up a lot of space.
Plus you have trade-offs like the RAM or the battery being soldered in, because it’s just a simple fact of engineering that connectors which allow modularity take up a lot of space.
That’s something Framework addresses fantastically. Their batteries and RAM are modular, despite the machines being so thin! See, being thin doesn’t require all of these trade-offs!
But I wouldn’t want to try and repair a single key-switch on that keyboard, I agree.
Hmm, so in the case of the Framework the individual key switches aren’t easily replaceable either (to my knowledge).
I’ve also replaced the fan on a couple of these models, and it’s already very difficult to get it reseated precisely within the tolerances which will allow the case to close back up the way it’s supposed to. I have never even attempted to repair a super-thin laptop, (I’ve been avoiding them for what I hope are obvious reasons) but it is no great leap to assume that miniaturizing it even further would reduce these tolerances even further. Plus you have trade-offs like the RAM or the battery being soldered in, because it’s just a simple fact of engineering that connectors which allow modularity take up a lot of space.
For what it’s worth these jobs are all straightforward on the Framework, despite it being thin. The fan module is four captive screws so you don’t lose them and the forum is full of people doing things like comparing different heatsink pastes for the sake of it (i.e. pulling the module on and off constantly). The RAM is as simple to replace as laptops from the 90s, and the battery is three screws (also captive).
I’ve repaired a lot of laptops (including a classic ThinkPad and a couple of modern super-thin ones). I was expecting this Framework to be more repairable than the super-thin ones, but was still pleasantly surprised when it arrived by how much nicer it was to work on than even my (high) expectations. Especially small things, like how almost every screw is captive so you can’t lose it, but you can still remove a screw if it somehow gets damaged. They’ve really thought about repair aspects in the design.
Huh, what do you find lacking about the Framework’s trackpad?
I use a modern MBP for work and a Framework at home, and while other PC laptops’ trackpads have seemed noticeably deficient to me, I consider the Framework’s trackpad on par with the MBP’s. Am I missing something though?
To be fair, I could just be experiencing Wayland’s (or KDE’s or GNOME’s or whatever’s) trackpad support being less than stellar, whereas the MBP’s, I gotta say, has traditionally been stellar++.
Huh, what do you find lacking about the Framework’s trackpad?
I find it really hard to go back to mechanical trackpads after using force touch ones. The ones that are hinged are really infuriating - I shouldn’t have to apply more force at the top than the bottom. I can’t tell what kind the Framework is using, however.
Genuine question, which repairs are made more difficult by the Framework’s lack of thickness?
I got a Gen12 Framework last month and I’m super impressed with its potential repairability so far. Replacing the display hinges looks potentially fiddly, as you have to work around the cables, but on a lot of ThinkPads you have to take a bunch of other stuff out instead.
(I’ve not owned an X201 but have owned an X61 and an X230. I loved them and I still have the X61, almost got convinced to do the X62 upgrade on it instead of buying a Framework. However glad I bought a Framework!)
I’ve never played with a Framework so I don’t have direct experience. I agree with the other response that the Framework is a little too thin (not to mention expensive!)
Satire and Linux, so I hoped the joke was clear enough. Although, the Windows 11 light theme could pass for KDE at a glance. Lunduke has good stuff, sometimes serious, sometimes historical, but sometimes also Phoronix-level clickbait.
I have a MacBook Pro M1 at work, and it is an amazing machine: silent, light and incredibly powerful. I have a quite decent personal Windows machine that I got during the dark ages of the MacBooks, and it feels like a turtle next to it. The next personal machine I am buying, once my Windows machine passes away, is going to be whatever is the latest Mx on the market.
+1. If you need a bit of computing power, go for a MacBook Pro. The M1 in there has more cores and thus more power than, e.g., the MacBook Air with the M2. I’m doing fresh builds of Firefox in less than 10 minutes on an MBP, compared to 3 minutes on a maxed-out Ryzen Threadripper or over 60 on a ThinkPad X390.
I also have an M1 MBP at work. It’s great and, yes, almost always silent. But I’d hardly call it light—that’s probably its biggest downside in my book.
I do something a bit weird to store 2FA backup codes and other core “secrets”:
Prepare a set of YubiKeys w/ on-device generated OpenPGP keypairs. Among other things I set good PINs and enable proof-of-presence (ykman openpgp keys set-touch enc on).
Encrypt secrets (for example, github-recovery-codes.txt) to this set of OpenPGP keys (see the sketch after these steps).
Put the encrypted secrets (github-recovery-codes.txt.gpg) in Google Drive/Dropbox/etc.
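In gpg terms, the encryption step is just multi-recipient public-key encryption. A minimal sketch, assuming hypothetical key IDs for the YubiKeys’ encryption subkeys:

    # Encrypt to every YubiKey in the set, so any one of them can decrypt.
    # The two key IDs below are made-up placeholders.
    gpg --encrypt \
        --recipient 0xAAAAAAAAAAAAAAAA \
        --recipient 0xBBBBBBBBBBBBBBBB \
        github-recovery-codes.txt
    # Produces github-recovery-codes.txt.gpg, which is what goes to the cloud.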
For me, the advantages are:
The secrets are backed up to the cloud, and if I keep one of these YubiKeys on my keychain I can access them away from home if needed.
Because I’m encrypting secrets to single-purpose, seldom-used YubiKeys that require proof-of-presence (and which I distinguish from my normal U2F/FIDO2/SSH YubiKeys with a bright sticker), it would be challenging even for someone with control of my computer to get at a secret that I didn’t intend to access—as with secrets printed on paper and only typed into the computer when needed, but in contrast with secrets kept in my password manager. This is how I justify to myself that 2FA backup codes stored in this way still constitute “something I have” instead of being just another password.
The secrets are stored electronically, which can be easier to deal with than typing or OCRing printed secrets.
Even if someone takes my safe they’d have a very difficult time doing anything with these secrets without knowing my YubiKey PIN.
Obvious downsides include:
It’s expensive to buy a whole extra set of YubiKeys.
This approach requires using GnuPG and various smart card tools, and all of that can be uncomfortably fiddly.
I had to write some Python scripts to do things like check the invariant “this collection of .gpg files is encrypted to the correct set of keys” (a rough equivalent is sketched below).
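For illustration, a rough shell equivalent of that invariant check (my actual scripts were Python; the expected key IDs here are hypothetical). It leans on the fact that gpg --list-packets prints one “:pubkey enc packet: … keyid XXXX” line per recipient even when it cannot decrypt the file:

    #!/bin/sh
    # Fail if any .gpg file is not encrypted to exactly the expected key set.
    # Hypothetical encryption-subkey IDs, space-separated:
    expected="AAAAAAAAAAAAAAAA BBBBBBBBBBBBBBBB"
    status=0
    for f in *.gpg; do
        # gpg may exit nonzero without the private keys; the recipient
        # packets are still printed, so ignore stderr and the exit code.
        got=$(gpg --list-packets "$f" 2>/dev/null \
              | awk '/pubkey enc packet/ { print $NF }' | sort)
        if [ "$got" != "$(printf '%s\n' $expected | sort)" ]; then
            echo "wrong recipient set: $f" >&2
            status=1
        fi
    done
    exit $status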
As other people have commented in this thread, printing 2FA backup codes and putting them in a good fire safe is a sensible and straightforward approach.
But I think the much more important thing is that they found this vulnerability and reported it to Apple, who then fixed it, making all macOS users safer in the process. I think that’s much more noteworthy than whether the vulnerability was given a fancy name after the fact.
I did this for a while. It mostly worked well but never worked great. The pcscd / gpg-agent dance was flaky, and most days I would have to restart one or the other.
Since OpenSSH added FIDO2 support, and it’s in OpenBSD by default, I have completely switched to using it, and I have to say it’s painless!
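For anyone curious, the switch really is a one-liner (OpenSSH 8.2 or newer plus a FIDO2-capable token; the comment string is arbitrary):

    # Generates a key whose private half lives on the token; the file in
    # ~/.ssh is only a handle and is useless without the hardware.
    ssh-keygen -t ed25519-sk -C "yubikey-laptop"
    # Older U2F-only tokens don't support ed25519-sk; use -t ecdsa-sk there.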
I’d previously tried to use an iPad Pro with the Apple Pencil (gen 1) as a note-taking device. It worked, and it’s superior for drawing, even. But it showed that the iPad isn’t designed as a dedicated paper replacement: the Pencil slips too easily on the glass, my palm was constantly smudging and rubbing on the screen, and I had to remember to keep the Pencil charged up. Worse, I couldn’t just leave the iPad open on my desk to glance at while I cross-referenced other materials for extended periods: because of the backlit display, it’s set to sleep after a minute or so.
Taken all together, these papercuts meant that even though I had an iPad Pro with an Apple Pencil, I would still turn to actual pen and paper more often. reMarkable 2 is the first device I’ve tried that I’m actually inclined to reach for over paper. The author nails it: using this thing is shockingly natural.
(I wish it had better ePUB navigation, on the other hand. And the desktop app could be a lot better, at least on macOS.)
My vote goes to 1Password, for its ease of use, built-in security model (client-side encryption), versatility in handling all kinds of data (notes, credit cards, etc.), and the reliability of its plugins with all websites and apps. Other password management apps that I’ve tried have frequently had problems with some websites. Sometimes 1Password still has edge cases where e.g. 2FA is not automatically filled in and you have to copy-paste it manually. But I haven’t seen a better app yet.
My work used LastPass, and I couldn’t have created a worse UI if I’d tried. There was no easy way to generate a new password; it took three clicks in non-obvious places to get to it.
I used LastPass for several years before switching to 1Password a year ago. Wish I had switched earlier. LastPass’s UI design needs a lot of work and over time actually got worse with various annoying small bugs.
Hard no to LastPass. I used it years ago, audited it one evening on a lark, found a few vulns, reported them, a couple got fixed, a couple got me told to fuck off.
When I previously used LastPass, there were some weird differences between the browser version and the desktop version - there were some things that each of them couldn’t do.
One oddity worth noting: I don’t use the desktop app with 1Password. I’ve found their browser extension, 1Password X, to be more stable (it also has the benefit of working on Linux).
I believe with the addition of HaveIBeenPwned integration on the LastPass security dashboard, they’re pretty much similar feature wise (though maybe 1Password can store 2FA tokens). I’ve used 1Password because it felt way less clunky than LastPass and it doesn’t require me to install a random binary on my Linux machines in order to access my passwords.
I switched to 1Password from LastPass a couple years ago and haven’t looked back.
LastPass got unusably slow for me after I had more than a few hundred entries in it. I don’t know if they’ve fixed their performance problems by now, but I can’t think of anything I miss.
Long-time 1Password user here. It’s by far the best tool I’ve ever used. And I believe it goes beyond the application itself, as the support team is also great. Given a matter as sensitive as all my credentials for logging in to several different services, having good support is mandatory IMO.
1Password here too. Excuse the cliché, but it just works. The cost is minimal for me — $4/mo, I think.
I’ve been slowly moving some 2FA to it, but it seems dependent on 1Password itself detecting that the site supports it vs. something like Authy where I can add any website or app to it.
I just switched to 1Password after 5-10 years on LastPass. There are some quirks and it’s not perfect, but I generally prefer it to LastPass.
The only thing LastPass truly does better is signup form detection. Specifically, I like the model LastPass uses of detecting the form submission; 1Password wants you to add the password prior to signing up, which gets messy if you fail signing up for some reason.
1Password wants you to add the password prior to signing up, which gets messy if you fail signing up for some reason.
Oh yeah, this is a constant frustration of mine. Also, whenever I opt to save the password, I seem to have a solid 4-5 seconds of waiting before I can do so. This seems to be 1Password X, FWIW. Back in the good old days of 1Password 6 or so, when vaults were just local files, the 1P browser extension seemed to save forms after submission.
I’ve been able to get my whole family onto a secure password manager by consolidating on 1Password. I don’t think I would have been successful with any of the other options I’ve found.
WhatsApp end-to-end encrypts all chats, by default, using the Signal protocol; Telegram only supports optional encryption of 1:1 messages with a more questionable protocol.
Either choice gives you better security guarantees than WhatsApp,
It’s totally fine to dislike Facebook or to want an open source client. I may have different priors than the author, and for my part I trust Telegram, as a company, less than I trust Facebook with my data. But I’d have to think WhatsApp is flatly lying about its use of the Signal protocol to consider my conversations on Telegram more private than those on WhatsApp.
Ultimately, though, I agree with the author that Signal is the best choice out of the three.
The section on MoCA was interesting—I didn’t even know that existed.
But I’m really confused about the network topology the author settled on (partially because it isn’t clearly described). Multiple routers is probably the wrong choice for this kind of situation—multiple switches and APs, sure, but not multiple routers.
If I were setting this up I’d have a single router between my local network and Sonic. The router would give out IPv4 DHCP assignments and IPv6 router advertisements to the LAN. You can set up all the switches and APs you like behind that router, but directly exposing your LAN to your ISP’s network seems like a brittle mistake (and also possibly a security nightmare).
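To make that concrete, here is one hypothetical way the single router could provide both services (dnsmasq shown; the interface name and address ranges are invented):

    # IPv4 DHCP plus IPv6 router advertisements from one daemon on the router.
    dnsmasq --interface=lan0 \
            --dhcp-range=192.168.1.100,192.168.1.200,12h \
            --enable-ra \
            --dhcp-range=::,constructor:lan0,ra-stateless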
Ah, I wasn’t clear enough. I ended up running my two consumer routers in AP mode. I have a single managed switch sitting between the local network and Sonic.
That’s also an idea, but it comes at considerably more cost and is a bit problematic for some home-network applications like PoE. With fibre, connecting surveillance cameras, wifi antennas, or anything else isn’t as simple as with copper cables. And given that PoE++ supports up to 70W, I could think of many applications where this might come in handy. :)
I like to follow the rule of “always pull an extra Cat5 or two if you have the room with any cable pull” (although I recently updated the rule to Cat6, and now it sounds like I should do 8.1). When I did this with fiber a few years back, I had no plans for the Cat5, but did end up using it for PoE later. As an aside, if you use Cat5 (not 5e) with PoE, IME it will stop working reliably at some point. :(
It is really simple, and I understand you, because it confused the heck out of me before I figured it out. Up until (including) Cat 6A, it was part of the standard to use RJ45-connectors. Their disadvantage is that it’s really hard to shield them, which is why Cat 7 brought a new connector type (GG45) which looks almost like RJ45 but is not compatible with it (you can plug an RJ45 into a GG45 socket, but not the other way round). Additionally, Cat 7 isn’t even an international standard and quite messy. Most people use Cat 7 cables but terminate them with RJ45 connectors, which makes zero sense because this way you don’t even make use of the special shielding and grounding in the cable. It’s effectively a waste of money.
Cat 8.1 came later and fixed a lot of stuff. It is an international standard and uses the RJ45 connectors again (which is possible due to advances in shielding technology). There is also Cat 8.2, which uses different connectors, but that’s another matter. The cables themselves (Cat 8.1 and 8.2) are the same.
What I meant with my comment was this: if you renovate your house and install cables, the cables are the only thing that matters. If you really upgrade to 40 Gb/s in 10 years, it is possible. Even if, by then, other connectors are the norm, you can replace them on the existing cables, but you obviously cannot easily replace the cables themselves in your wall.
tl;dr: If you want more than 10 Gb/s (which is not unreasonable anymore) and want to be future-proof, skip Cat 7 and go directly with Cat 8 cables and Cat 8.1 RJ45 connectors.
Call me spoiled, but a 10G network between my NAS and various computers (a Mac mini, a workstation) is life-changing for me. Daily backups are faster, there are no seeking delays when playing or scrubbing 4K videos, and file transfers in general are snappier. I live in an apartment now, so Cat 6e works fine for me. But if I moved, I would seek out solutions to have 10G connectivity in every room.
What kind of switches are you using? Last I really looked, 10 gigabit Ethernet hardware was still expensive enough to put it out of my reach for home use.
I’m about 1/2 way through replacing most of my home network with 10gbase-t - I just finished pulling new cat7 cable to replace cat5 that came with the house and wasn’t able to support 10g (or even 1g on a few of the links).
There still aren’t a lot of options for 10G home-lab-grade equipment. It seems like it’s either a nice used switch from eBay that makes my neighbors think I have a jet engine in my garage, or a really cheap unmanaged 10G switch (e.g. MikroTik or something similar).
Everything from MikroTik is managed, and the models with “router” in the name dual-boot SwOS/RouterOS. Heck, the 10G-capable Marvell switch chip they use even supports accelerated L3 forwarding, and they finally started using that (in betas and for IPv4 only for now, IIRC).
I’ve been using Mikrotik for many years now, but I feel that their software and hardware QA has gone downhill lately. I got burned by a variant of this 10Gb problem, and they still haven’t made it right. A lot of their layer 3 stuff is a little off (search for BGP issues) too.
That said, no one else is even close to their price point for a redundant-power switch (even most of the cheap stuff will accept power over passive PoE and a wall wart). My advice is to use them for L2 functionality, test heavily, and keep spares even for home networks. And allow a fair amount of time to get accustomed to their rather exotic configurations, which change more often than they should.
My first impression of this was “this guy has a lot at stake with nudes.”
I agree with the idea that we should hold companies to the same standard and stop excusing big companies whose products we happen to like (as a whole, not necessarily on the individual level). I don’t personally use iCloud for anything other than text documents, but I can see how it would be an issue for sensitive information.
In the category of data that people hold onto in their iCloud backups, nudes are probably the most sensitive and well-understood variety. I think it totally makes sense to invoke that as a way to remind people of the sensitivity of the data they’re handing over to other companies.
I don’t know if it’s a generational thing or if I’m just an odd guy, but I don’t have any nudes of myself or others. I would be more worried about any sort of tax forms, bills, recovery codes, etc that I was storing in text on iCloud.
I was looking for information about Android’s approach, and found the following on Google’s support:
If your backups are uploaded to Google, they’re encrypted using your Google Account password. For some data, your phone’s screen lock PIN, pattern, or password is also used for encryption.
If you back up to Google Drive, here’s what’s backed up:
Contacts
Google Calendar events and settings
SMS text messages (not MMS)
Wi-Fi networks and passwords
Wallpapers
Gmail settings
Apps
Display settings (brightness and sleep)
Language and input settings
Date and time
Settings and data for apps not made by Google (varies by app)
Photos are another story, I guess.
As for contacts, they may be encrypted for backups, but they’re all fully available from other Google services like GMail, right? 🤔
If your backups are uploaded to Google, they’re encrypted using your Google Account password. For some data, your phone’s screen lock PIN, pattern, or password is also used for encryption.
OK, so, let’s be real here:
If the data is encrypted with your Google Account password, then either they’re storing your password in cleartext on the device and/or in the cloud (both rather bad ideas, given that you’re supposed to use the password only to obtain an authentication session token), or you have to enter it all the time, which would be rather poor UX. (I presume they must be storing it on the device, encrypted with the lock PIN/pattern?)
Even if they themselves don’t have the password, I don’t see how they could possibly resist a request from a secret court to save the password the next time it is supplied by the user; this doesn’t compare favourably to what Apple was supposed to have been working on.
As for the lock PIN or pattern, what sort of encryption are they using? These are usually just a few digits long; there aren’t that many combinations to brute-force if you already have all the data locally.
If the data is encrypted with your Google Account password, then either they’re storing your password in cleartext on the device and/or in the cloud
Is this necessarily true? I feel like there could be some ways to “effectively” do this, without storing your password in cleartext. Here’s an example:
If you are asked for your password when you encrypt, Google can SHA-512 it and use the hash as the key, then derive the same key the same way when you decrypt.
Of course, I don’t know that Google is making that ask at each encryption / decryption. Also, that would mean you would lose your data if you forgot your password, which is probably not the case. However, I just want to point out there could be some clever use of cryptography going on here.
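This is, roughly, standard password-based encryption: derive the key from the password with a KDF at both ends and never store the password itself. A minimal sketch using openssl (with PBKDF2 rather than a bare SHA-512, and a hypothetical variable for the password; this is not a claim about what Google actually does):

    # Encrypt: the key is derived from the password on the spot, never stored.
    openssl enc -aes-256-cbc -pbkdf2 -iter 200000 \
        -pass pass:"$ACCOUNT_PASSWORD" -in backup.tar -out backup.tar.enc
    # Decrypt: re-derive the same key from the same password.
    openssl enc -d -aes-256-cbc -pbkdf2 -iter 200000 \
        -pass pass:"$ACCOUNT_PASSWORD" -in backup.tar.enc -out backup.tar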
Well, your reply started with “let’s be real”, but you’re only presuming what Google’s doing. I’m not sure they are as bad at encryption as you give them credit for, but I can’t prove that either.
At any rate, Google is working with US gov law enforcement, to the extent that US-based companies are obliged to. That’s not great, but that’s expected.
Agree with this 100%, Windows is the best Linux distro
You can roughly split software into two categories:
Software that breaks randomly if you don’t update it: youtube-dl
Software that breaks randomly if you update it: everything else
I only want to update software in the first category and not software in the second category, but because Linux userspace is all-in on making everything rely on very specific versions of everything else, you can only either update everything or nothing.
On Windows, the only way to ship software is to statically link all of your dependencies, so I can update software individually with no problems. There’s a small amount of Linux software running in WSL, all of which I am fine with never updating, so it works out.
I only want to update software in the first category and not software in the second category, but because Linux userspace is all-in on making everything rely on very specific versions of everything else, you can only either update everything or nothing.
Sounds like you should give guix or nix a try; they are built around that whole concept of isolating updates and making them trivial to roll back if you turn out to not want them.
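For instance, with Nix you can upgrade a single package in isolation and roll it back atomically if it misbehaves (nix-env shown, assuming the default nixpkgs channel; Guix has the analogous guix package --roll-back):

    # Upgrade only youtube-dl, leaving everything else untouched.
    nix-env -iA nixpkgs.youtube-dl
    # If the new version breaks something, revert to the previous generation.
    nix-env --rollback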
I see why the perception is this way, but really don’t think this should be the case. Mind if I quote you on this in a blogpost on how to practically use Nix later? :-)
Almost all software should sit in that camp, or be able to be configured to sit in that camp. There’s literally no reason at all for most software to touch the network. One of the most underrated aspects of having a system package manager is that you don’t have every program having to securely reimplement auto-update functionality. Updating is taken care of in your package manager, in one place, once. Updating is the only place the vast, vast majority of desktop software would ever “need” to touch the network.
Text editors, word processors, office software, email clients, video players.. the list goes on. None of them need to touch the internet at all.
I’m not talking about the internet. I’m talking about untrusted input. You are severely hampering your experience if you are never going to open a file from an untrusted source with your office software, email clients or video players. Even image viewers are potential vectors of attack. So, what software apart from a calculator falls into the category of “you never have to update it since it doesn’t interact with untrusted input”?
I also struggle to think of much software that falls into that first category. That’s the point I intended to make: most of the software we use needs to be (capable of being) updated regularly. Various package managers have their downsides, but adopting a stance of generally not updating software isn’t really a solution (unless one cares to spend way more effort staying on top of CVEs than I do).
Software on Windows tends not to be statically linked; rather, when you distribute software you ship the dynamic libraries with it. (The D in DLL stands for dynamic.)
Brew has an amazing compromise between sandboxing and updates. Try brew on Linux for things like this.
I always have the latest python provided through brew, but won’t mess up my system if I pip install something unstable.
Veering slightly off topic, I appreciate, but: has anyone actually used one of these? I love my HP-48gx, but I wouldn’t be averse to upgrading to something a bit more powerful if I didn’t have to give up keys or anything. I’ve been loath to upgrade ever since my abortive attempt at using a 49g+.
I’m curious as to what you use these calculators for, where upgrading would actually be a net win over your current kit? I haven’t touched my TI calculators (an 83plus and a TI-86) since 2001(?) and even then it was for one specific class, and checking my work, not doing the work.
(edit: I’m making the assumption, based on previous interactions with you, that you’re still a software engineer, and not in a role that necessitates complex mathematical models – though, even then, I’d assume you’d use NumPy and friends…)
Two things. First, I do a reasonable amount of volunteer teaching and tutoring, and having a physical calculator is really handy for that (and kids like a non-TI for that, too). Second, when I’m doing retro video game work, checking bills, etc., I prefer using a physical calculator. I’ll use calc-mode in a pinch, but I just really prefer having the dedicated physical object. Even the 48 is overkill for either task, but I like the larger screen and RPN.
I mostly stopped using my HP graphing calculator after I got an HP 35s. It’s pricey for what it is but I really like it for general calculation, for some of the same reasons. And for anything more involved I turn to NumPy or Mathematica.
This is excellent news. I think I’ll finally be able to get rid of my functional but complicated YubiKey OpenPGP applet + gpg-agent setup, while retaining the benefits of hardware isolation and touch for user presence—and upgrading to ECDSA in the process.
More importantly, this may also be what it takes to get some of my friends, who haven’t yet made the leap to hardware-token-backed SSH keys, to upgrade their security. Especially since it should work not just with the expensive YubiKey 4/5, but also with cheaper U2F-only keys.
This is excellent news. I think I’ll finally be able to get rid of my functional but complicated YubiKey OpenPGP applet + gpg-agent setup
I would love that too, but I use my GPG hardware key (NitroKey Start, which runs the gnuk firmware) for the pass password manager, as well as for signing git tags sometimes. But I agree that with the low prices of U2F keys, it may be enough to convince friends and colleagues to adopt a hardware token. Also, they can serve as an extra factor for PAM-based logins, which is nice.
I’m a bit disappointed that the interviewer didn’t mention a single question regarding addiction or any ethical dimension. It’s kind of been assumed that not liking pornography is just a conservative, right-wing thing, but I don’t think that’s correct. I personally perceive it to be pushing harmful stereotypes (both about what women should look like and about what intimacy should look like), and then there’s the problem with trafficking, and never knowing what’s actually going on behind the scenes. Chomsky says it well.
Setting aside things like these, which should be enough to say something isn’t right, and knowing the digital world (where creating addictions has become a common and often even necessary business model), reading
you have to be clever to innovate at the bleeding edge of the web.
makes me somewhat uneasy. Front-end developers especially should have to think about these questions; they are the ones tasked with creating “seamless experiences”, ultimately disregarding the influence those have on people’s daily and personal lives. I don’t think the interviewer should have just glossed over this. YouTube has hateful or harmful videos, but hosting them isn’t its raison d’être. PornHub will have a harder time arguing that hosting and spreading pornography isn’t a big part of what they are.
From the technical perspective it’s somewhat interesting, I guess. It’s about the problems of high-demand video streaming, probably above the level of most other video sites, but still way below sites like YouTube. That’s like having an interview with a CIA agent on what the best strategies are to manipulate a foreign election.
Edit: Rewrote a few sentences to avoid confusion, and replaced my analogy with a different one.
I’m a bit disappointed that the interviewer didn’t mention a single question regarding addiction or any ethical dimension.
Porn has been around a really long time. I’m pretty sure there’s nothing new to be discovered or discussed almost anywhere on earth on the topic, much less here.
Like, the human race has brute-forced about every part of that solution space we can. There is not a dirty thought we can have that hasn’t occurred to scores of other people at one point in history or another–of this I’m certain.
Not in the way it is now, as an endless torrent on demand. Modern porn has demonstrably changed society in ways that ancient porn did not. For example, women now believe that pubic hair is unclean and as a result of excessive pubic hair removal are getting health problems that pubic hair can prevent.
Also, just being around forever does not categorise something as innocuous or beneficial.
Hairstyles have been coming and going in fads ever since we left the trees and discovered hair can be cut and washed. Having this apply also to pubic hair is not exactly a huge change.
Quantity acquires a quality of its own, you know. Not to mention that quality is altogether different as well: 4K video isn’t the same as a blurry black and white photo. There’s a strange blindness to this effect in the tech industry, whether it comes to social media, endless tsunami of content on Netflix, or indeed porn. Much like Facebook’s idea that more communication is unconditionally better has backfired spectacularly, maybe it’s the same with porn. And then of course there’s also all the engineered “engagement” in all these areas. Don’t be so quick to say it’s all totally harmless.
I’m a bit disappointed that the interviewer didn’t mention a single question regarding addiction or any ethical dimension.
The audience is web developers wanting to read something interesting about web development at a big company, and the interviewer wants most of them to enjoy the article. Talking about the damage they might be doing doesn’t serve either purpose. Most would’ve just clicked the little X or otherwise moved on.
There’s been a lot of good writing on that subject for anyone looking for it. The key words are easy to guess.
You’re kinda circling back to the same point. Yes, talking about the ethical implications of our jobs is hard and uncomfortable, but it’s necessary. Of course most people don’t want to do it; of course most people don’t want to read about it. But it’s our responsibility to talk and to read about those things. “I don’t like doing it” is not a valid excuse for not doing something it’s your responsibility to do.
That said, the comparison with slavery is a bit out of place, imo.
You’re doing that trick many people do here where it becomes all or nothing in every post, forum, etc. The stress of introspecting on these topics makes many people do it at certain times and read relaxing content at other times. They’re fine splitting it up. Dare I say most people prefer that, based on it simply being the most popular way content is done online.
Then, other people think they should be mentally engaged with these topics at all times in all articles, forums, etc. due to their importance. They also falsely accuse people of not caring about social responsibilities if they don’t discuss them in every article where they might come into play. You must be in that group. The author of the original post and their audience are not. Hence the separation of concerns that lets readers relax, focusing on web tech before optionally engaging with the hard realities of life at another time in another article.
This isn’t a “what if my open source library was used by some military”-kind of question, I think that there is a much stronger connection between the two. Front end design is related to user behaviour, and I still consider this relation to be a technical question (UI design, user protection, setting up incentives, …).
If the interviewer had asked these questions, and the interviewee had chosen not to comment, that would have been something, but the article currently just brushes it away upfront by saying “Regardless of your stance on pornography, …”.
I’m a bit disappointed that the interviewer didn’t mention a single question regarding addiction or any ethical dimension
A tech-related, Lobsters-worthy discussion of the topic would focus on how they collected user behavior, analyzed it, measured whether they were reaching their goals, strategized for how to achieve them, and specific methods of influence with associated payoffs. It would actually be more Barnacles-like since marketing is behind a lot of that. These technical and marketing techniques are politically-neutral in that they are used by many companies to measure and advance a wide range of goals, including pornography consumption. They could be discussed free-standing with little drama if the focus was really on the technology.
You were doing the opposite. That quote is an ethical question (it even says so) where you have political views about pornography consumption, you wanted theirs explored, and you might have had some goal to be achieved with that. The emotional language in the rest of your post further suggested this wasn’t about rational analysis of a technology stack. You also didn’t care what the writer or any of their readers thought about that. So, I countered, representing the majority of people who just wanted to read about a web stack: a mix that either doesn’t care about the ethics of porn, or does but finds it a depressing topic to handle at another time.
I was on my 2nd cup of coffee when you wanted me to be thinking about lives being destroyed instead of reading peaceful, interesting things that are easier to wake up to. Woke up faster in a different way. Oh well. Now, I’m off this drama to find a Thursday submission in my pile.
A tech-related, Lobsters-worthy discussion of the topic would focus on how they collected user behavior, analyzed it, measured whether they were reaching their goals, strategized for how to achieve them, and specific methods of influence with associated payoffs.
I think these kinds of things were missing from the article. I know this isn’t the place to discuss pornography, and I try not to go into it in the comments. What I just brought up was a disappointment in the style and focus of the interview, and it being one-sided.
The emotional language in the rest of your post further suggested this wasn’t about rational analysis of a technology stack.
Well I do think it’s important, so I apologize for being a tad emotional. But other than what I wrote, I don’t have anything else to contribute. I neither run nor plan to run a streaming site, so I end up not having too strong opinions on what is being used in the backend stack ^^.
A mix that either doesn’t care about the ethics of porn, or does but finds it a depressing topic to handle at another time.
I understand that, that’s why I prefixed my top comment with what you quoted. I furthermore feel obligated to apologise if anyone had to go through any inconvenience thinking about the “ethics of porn” because of my comment, I guess? No but seriously, bringing up a concern like this, which I explicitly tried to link back to a technical question, should be ok.
“I furthermore feel obligated to apologise if anyone had to go through any inconvenience thinking about the “ethics of porn” because of my comment, I guess? No but seriously, bringing up a concern like this, which I explicitly tried to link back to a technical question, should be ok.”
There’s quite a few people here that are OK with it. I’m not deciding that for anyone. I just had to remind you that caring people who want a break in some places exist and that you do more good by addressing the porn problem where it’s at. I appreciate you at least considering the effect on us.
“I neither run nor plan to run a streaming site”
The main problem is on the consumer side, where there’s mass demand followed by all types of supply and clever ways to keep people hooked. You can’t beat that, since they straight-up want it. What you might do is work on profiles for porn sites with tools such as NoScript that make them usable without the revenue-generating ads. Then, lots of people push for their use. If there’s any uptake, the sites take a temporary hit in their wallet, but maybe offset it with ad-free Premium. I’m not sure of the effectiveness; I just know they’re an ad model, and tools exist to attack that.
Griping about it on technical sites won’t change anything because most viewers aren’t on technical sites, and those who are are rarely changed. So, it’s just noise. Gotta work on porn laws, labor protections for those involved, ethical standards in the industry itself, ad blocking, etc.
If you would like to discuss the ethical aspects, go to a different forum. I would recommend the community around Thaddeus Russell’s podcast for a critical and reasoned take from people who actually interact with sex workers: https://www.thaddeusrussell.com/podcast/2
I’ve mentioned it elsewhere, but I’m not here to discuss the ethical aspects, nor am I in a position to. My comments are related to the interviewer and his choice of questions.
You gave opinions, stated as scare-hints, without support:
“then there’s the problem with trafficking,”
“which should be enough to say something isn’t right,”
… and then, based upon the now well-built pretext that porn “isn’t right” (and is therefore ethically ‘wrong’), you commented on what the interviewer should have done - i.e. they should have had the same opinions and conceptions as yourself, and they should have turned the interview into one about ethics.
The interview was interesting to read, because of the info about the tech. As bsima says, please take ethical discussion elsewhere.
As you said, I prefixed the controversial parts by saying that it was my opinion. But I don’t think that the interviewer must have shared my views. The point I was raising was that I thought it wasn’t appropriate for the interview to just ignore a quite relevant topic, since this was about PornHub specifically, not their parent company.
IMO, just a final question like
“What are you doing to enforce age restrictions?”
or
“Due to recent reports, do you think pornography addiction among younger generations can be tackled technically, or does it need more (social) effort?”
would have been more than enough, as to just show this is being considered. I’m not a journalist, so I don’t know how these questions could be phrased better, but I hope you do get my point.
Looking at this thread, I didn’t respond to people who started talking about the harmfulness of pornography or the lack thereof, even though I would like to; I understand that it is off topic. In fact, most of this sub-thread has been more about the meta-discussion.
All I can say is that I will be more careful not to provoke these kinds of discussions in the future. I have been thinking critically a lot about the topic over the last few months, so my comment might not have been as neutral as some might have wished.
My analogy is that the direct consequences of technical questions are being more or less ignored, which I think is fair in both cases. Of course it’s not identical, but that’s stylistic devices for you.
I could come up with quite a few objections to pornography, but the chap in your video link is not only not convincing, he is also hinting that he watches porn even though he denies it. He backs up his statement “porn is degrading to women” by qualifying “just look at it” which implies that he does that enough to have an opinion.
Genuinely surprised. I was under the impression Google Domains was a kind of leverage for TLD control. Honestly it’s kinda nice then to see it either failed or wasn’t the intention. I wonder how much Squarespace bought it for…
This could be the result of some forced hand acting behind the scenes. Only yesterday, EU’s anti-trust committee took some strong action against Google right? This can’t be a coincidence.
You overestimate the speed at which these organizations are able to move, to think that an anti-trust action announced yesterday could result in the sale of a whole business unit today.
That’s true, but correlation and causation are different things. Yesterday’s action may not have caused this sale of business unit today but both these actions could be correlated to something else which may have been in the works since a long time?
You are forgetting the long, long history of Google shutting down their services at the drop of a hat.
https://killedbygoogle.com/
Oh man. After figuring out how to use OpenBSD as a college apartment router on an old PowerMac G4 that I’d picked up for $5, I moved that setup to a PC Engines ALIX to save on power. That ran for many years without any problems, and I only switched to the APU2 for better performance.
I’ve been running that APU2 for over six years now, and it’s never had a problem either. I’m really sad to see this line of passively-cooled small-form-factor PCs go.
What do you think about the framework laptop as a spiritual successor to these ThinkPads?
The framework is designed to be very thin, which makes it much more difficult to repair than an X201. It’s really too bad they went this route instead of prioritizing user serviceable parts as the primary design principle.
Thin is a feature for me; the laptop fits with my other laptops in my bag. I’ve not found the thinness to hamper my efforts any time I’ve opened the case (rare but I’ve e.g. upgraded the speakers to the higher quality ones).
Design constraints are not bad things, and the thinness makes it so nobody bats an eye when I pull it out. It’s expected that modern laptops be light and easy to transport, it’s unusual that they also are able to be repaired (ship of theseus style perhaps) by the end user.
who cares if someone bats an eye because your laptop is a few mm thicker than “normal”?
It makes “repairability” about “you want me to give up my sleek devices” instead of “just choose the right devices and life gets better.” The values we hold dear will lose if we don’t do them well enough to promote them to people who are not convinced.
if we sacrifice repairability in order to appeal to ultrabook enjoyers, that value has already lost.
I switched away from the Mac ecosystem because I wanted something that I could upgrade over time (amongst many other reasons). I think reminding folks that computers should be user-serviceable and -upgradeable, without being ugly bricks only a dork would use, is cool.
When people see my Framework (13) they see a thin, light laptop, with an unusual logo. The modular ports are always fun for demos, and most folks are at least somewhat interested in the Framework’s ability to be upgraded, which is in stark contrast to Apple’s philosophy. Yeah, I run Linux, yeah I miss some of the good third-party apps that are MacOS-only, but not having to buy a whole new laptop in two years is worth it. Great conversations.
I also used to carry around around older ThinkPads, too: absolute thicc black chonkers. The only conversations those things raised were folks giving me shit because I was lugging around a heavy, ancient-looking beast.
I switched to Mac hardware for the opposite reason. I used to build my own desktops, but I kept running into things like needing to upgrade the motherboard to upgrade the CPU, needing new RAM to go with the new motherboard because it didn’t support the old kind, and then needing a new graphics card to actually get the benefits of the other bits. I have a NAS that is a box I assembled and in my last upgrade only the case and disks remain the same. That’s a reasonable trade for a NAS because the disks are its main reason to exist and so upgrading them separately from everything else is nice.
The main reason that the new Apple laptops feel fast is that they have carefully scaled everything to avoid bottlenecks. If you upgrade any one part, you’re unlikely to get much more performance, you’ll just see bottlenecks elsewhere. I’d rather have a machine that lasts a long time (I’m probably going to replace my MacBook Pro soon, it’s 10 years old now) than one where I can keep upgrading bits but need to ship-of-Theseus it to actually see a significant benefit.
That makes sense, but there does exist a world where you don’t have to buy the top-of-the-line laptop and use it for a decade. While I may upgrade my Framework to an AMD chip (partially because I can and partially for the increased battery life), I can also not do that and just increase the RAM and SSD size as-needed. That may not be as finely tuned as an Apple machine but I’m not going for “fastest bus throughput” there, I’m going for “I don’t have to buy a new laptop to double my RAM.”
To each their own though, I was about that Apple life for a few decades and they treated me well!
If you do, I’d be really curious to hear how well it works for you. It’s been almost 20 years since my daily work machine was one where I could upgrade components piecemeal, and I suspect a lot has changed since the days when the RAM technology that different CPUs supported was completely different. I upgraded the RAM in my PowerBook (largely because Apple charged an insane markup on SO-DIMMs), but suffered through multiple motherboard replacements because the solder kept coming off the DIMM slots. For newer machines (including the NAS) I’ve just bought the maximum that the motherboard supports, even when the RAM has been upgradable.
Replacing the CPU wasn’t a significant win for me in any machine since the end of the Socket 7 era (and even then often didn’t give the maximum speedup until I upgraded the motherboard and RAM as well).
I used to upgrade disks a lot, and the jumps from 1 GiB to 20 GiB and 40 GiB were each accompanied by running out of disk space before I could afford the bigger disk, but my personal and work laptops (bought 7 years apart) both have 1 TiB SSDs and that hasn’t been a space constraint for me. With the growth of cloud storage for cool storage, I suspect that my local SSD will gradually become more of a local cache than a truly local filesystem, which reduces the pressure further.
You can upgrade the SSD in Macs (I’ve replaced the battery in mine, and that involved removing the SSD, which is the vastly easier job), but I’ve not felt the need to. The 1 TiB disk was expensive back then, but now it’s one of the cheaper options. I stopped running VMs locally a while ago, but I can imagine wanting a 2+ TiB disk if I had a bunch of VMs on my laptop.
capitulating to thinness fetishism because losers gave you shit for your thinkpad is the opposite of cool; it reflects a lack of confidence. the “cool” approach would be to hold the course if you really believe sacrificing cooling and repairability is not objectively worth fitting 1/4” more of who-knows-what in your backpack. if your principles crumble in the face of marketing-induced irrationality expressed by random people, maybe you don’t actually believe in them.
I know I shouldn’t feed the trolls, but here I go anyway.
Or alternatively, as was the point of my comment, understanding that different users have different wants and needs might be a reasonable thing to do; as is understanding that locking repairability and longevity to devices and laptops that most folks do not want to use only pushes folks away.
If your principles require you to post inflammatory, ideological creed, you may want to reconsider your approach.
I don’t think it’s cool to associate thick computers with being a dork, or to assume ordinary people would be unable to overcome thinness fetishism if they understood the objective tradeoffs.
Your perspective seems to partially adapt Apple’s mindset, giving up some repairability for the sake of attracting customers while offering no objective benefit. That’s not necessarily wrong (Apple is extremely successful, after all), but it suggests a pessimistic view of human nature, and is certainly not cool. IMO compromising principles for the sake of adoption at least requires some empirical justification, e.g. some evidence that there are non-dorks who use the Framework laptop, which is far from obvious to me.
Another feature is that they actually need to sell laptops, and in this day and age selling something as thick as an x200 is just not viable. I think it’s amazing what they’ve pulled off in a form factor that still looks like a ‘modern’ laptop.
To be fair the MNT Reform project has remained afloat.
Nobody floats the idea of the Reform being issued as a corporate laptop
The point is that it’s viable.
The only reason I wouldn’t use a Reform for work is the lack of a webcam.
Oh, they just recently released a compatible webcam actually! https://shop.mntre.com/products/mnt-reform-camera :)
That … doesn’t really count ;)
What? Why not, because it’s a separate thing you have to carry around?
Yeah exactly. I have a Logitech C925 on my home desk setup, because I like having an HD camera with FreeBSD and Linux support. But with my ThinkPads I know that when I’m away from my desk, I don’t have to cart it around with me and plug it in every time I want to make a video call.
Honestly the lack of an integral webcam is the one design decision on the Reform that I don’t understand. It’s 2023 and much of the tech world is hybrid or remote; being able to make a video call from your device is table stakes.
having an excuse not to enable video can be a killer feature for some :)
Fair enough. I’m one of those people who prefers having my video on and being able to see others, but I’d never make it policy.
I’m curious as to how you mean this. I’m a happy Framework daily user, but now that I think about it I’d take a slightly thicker model in exchange for an externally replaceable battery. Aside from that I can’t think of anything I’d categorize as a tradeoff between “serviceable” and “thin.”
Although I do miss my Macbook’s amazing trackpad and software support for same.
I’ve built probably two or three hundred keyboards by hand. Any individual piece could break on that board and it would be at most a 20 minute repair; more like 5 minutes for the majority of the parts. Most of those 5 minutes would be waiting for the soldering iron to heat up.
Last weekend I had to replace a key switch mechanism in a ThinkPad. Luckily it was an old ThinkPad from 2011, so it still had key caps which were easier to remove, but replacing the switch mechanism was difficult; I had a spare donor board in my closet, but I destroyed three different switches in the process because miniaturization had made them so fragile and tiny. Something which would have been trivial on a larger device required tweezers and a magnifying glass.
So I would say that the X201-era ThinkPads are already making significant sacrifices to repairability in favor of miniaturization. I’ve also replaced the fan on a couple of these models, and it’s already very difficult to get it reseated precisely within the tolerances which will allow the case to close back up the way it’s supposed to. I have never even attempted to repair a super-thin laptop (I’ve been avoiding them for what I hope are obvious reasons), but it is no great leap to assume that miniaturizing it even further would reduce these tolerances even further. Plus you have trade-offs like the RAM or the battery being soldered in, because it’s just a simple fact of engineering that connectors which allow modularity take up a lot of space.
That’s something fantastic Framework addresses. Their batteries and RAM are modular, despite being so thin! See, being thin doesn’t require all of these trade-offs!
But I wouldn’t want to try and repair a single key-switch on that keyboard, I agree.
Hmm, so in the case of the Framework the individual key switches aren’t easily replaceable either (to my knowledge).
For what it’s worth these jobs are all straightforward on the Framework, despite it being thin. The fan module is four captive screws so you don’t lose them and the forum is full of people doing things like comparing different heatsink pastes for the sake of it (i.e. pulling the module on and off constantly). The RAM is as simple to replace as laptops from the 90s, and the battery is three screws (also captive).
I’ve repaired a lot of laptops (including a classic ThinkPad and a couple of modern super-thin ones). I was expecting this Framework to be more repairable than the super-thin ones, but when it got here I was still pleasantly surprised by how much nicer it was to work on than even my (high) expectations. Especially small things like how almost every screw is captive so you can’t lose it, but you can still remove a screw if it somehow gets damaged. They’ve really thought about repair aspects in the design.
Huh, what do you find lacking about the Framework’s trackpad?
I use a modern MBP for work and a Framework at home, and while other PC laptops’ trackpads have seemed noticeably deficient to me, I consider the Framework’s trackpad on par with the MBP’s. Am I missing something though?
To be fair, I could just be experiencing Wayland’s (or KDE’s or Gnome’s or whatever’s) trackpad support being less than stellar, which I gotta say the MBP has traditionally been stellar++.
I find it really hard to go back to mechanical trackpads after using force touch ones. The ones that are hinged are really infuriating - I shouldn’t have to apply more force at the top than the bottom. I can’t tell what kind the Framework is using, however.
Genuine question, which repairs are made more difficult by the Framework’s lack of thickness?
I got a Gen12 Framework last month and I’m super impressed with its potential repairability, so far. Replacing the display hinges looks potentially fiddly as you have to work around the cables, but on a lot of thinkpads you have to take a bunch of other stuff out instead.
(I’ve not owned an X201 but have owned an X61 and an X230. I loved them and I still have the X61, almost got convinced to do the X62 upgrade on it instead of buying a Framework. However glad I bought a Framework!)
I’ve never played with a Framework so I don’t have direct experience. I agree with the other response that the Framework is a little too thin (not to mention expensive!)
This is obviously a screenshot of KDE. I can’t say whether this blog is trolling or just ill-informed.
It’s tagged here as “satire”. There are some joke articles on the site, though also some more serious articles.
Thanks. I missed the tag, clearly.
Satire and Linux, so I hoped the joke was clear enough. Although, Windows 11’s light theme could pass for KDE at a quick glance. Lunduke has good stuff, sometimes serious, sometimes historical, but sometimes also Phoronix-level clickbait.
The shiny new “Dolphin” file manager was a dead give-away, tag or no :)
I have a Macbook Pro M1 at work, and it is an amazing machine: silent, light and incredibly powerful. I have a quite decent personal windows machine that I got during the dark ages of the macbooks that feels like a turtle next to it. The next personal machine I am buying once my windows machine passes away is going to be whatever is the latest Mx in the market.
+1. If you need a bit of computing power, go for a MacBook Pro. The M1 in there has more cores and thus more power than, e.g., the MacBook Air with M2. I’m doing fresh builds of Firefox in less than 10 minutes on an MBP, compared to 3 minutes on a maxed-out Ryzen Threadripper or over 60 on a ThinkPad X390.
I also have an M1 MBP at work. It’s great and, yes, almost always silent. But I’d hardly call it light—that’s probably its biggest downside in my book.
I do something a bit weird to store 2FA backup codes and other core “secrets”:

1. Configure the YubiKey so decryption always requires a touch: ykman openpgp keys set-touch enc on
2. Save the codes to a file (e.g. github-recovery-codes.txt).
3. Encrypt that file to this set of OpenPGP keys (github-recovery-codes.txt becomes github-recovery-codes.txt.gpg).
4. Store the encrypted file (github-recovery-codes.txt.gpg) in Google Drive/Dropbox/etc.

For me, the advantages are that the ciphertext can sit on commodity cloud storage while decryption still requires the hardware key and a physical touch. Obvious downsides include having to verify that each of the .gpg files is encrypted to the correct set of keys.

As other people have commented in this thread, printing 2FA backup codes and putting them in a good fire safe is a sensible and straightforward approach.
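For the curious, a minimal sketch of steps 1 and 3, assuming gpg and ykman are installed; the key ID is a placeholder for your own:

# require a physical touch on the YubiKey for every decryption
ykman openpgp keys set-touch enc on

# encrypt to your key(s); repeat -r to encrypt to a whole set of keys
gpg --encrypt -r 0xDEADBEEF github-recovery-codes.txt

# upload the resulting github-recovery-codes.txt.gpg, then delete the plaintext
rm github-recovery-codes.txt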
seems kinda tactless to give a fancy name to this vulnerability and release the details only 3 days after the patches went out.
I mean, you know, not an Apple fanboy or anything, but pobody’s nerfect right?
Maybe, maybe not?
But I think the much more important thing is that they found this vulnerability and reported it to Apple, who then fixed it, making all macOS users safer in the process. I think that’s much more noteworthy than whether the vulnerability was given a fancy name after the fact.
Tactless or not, it does feel a bit like the resident owner of a fine glass house has chosen to start a stone-throwing contest.
You mean they should have announced the fancy name a few days in advance before the disclosure like everyone else does?
I did this for a while.. It mostly worked well but never worked great: the pcscd/gpg-agent dance was flaky, and most days I would have to restart one or the other. Since OpenSSH added FIDO2 support, and it’s in OpenBSD by default, I have completely switched to using it.. and I have to say it’s painless!
I even did a writeup showing how to use two different keys (resident and non-resident) on the same device: https://deftly.net/posts/2020-06-04-openssh-fido2-resident-keys.html
I want to use it. But as far as I understand, GitHub and others do not support it yet, right?
Ya, last I tried it didn’t work on GitHub. They always lag behind pretty bad with regard to OpenSSH features.
I’m confused, isn’t this a client-side OpenSSH feature? Shouldn’t GitHub be agnostic to whether the key lives on a FIDO2 device?
Is it a matter of GitHub not supporting the ed25519 key type?
The FIDO stuff is a new key type: ed25519-sk
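Since the server verifies the signature, it has to understand that key type (ed25519-sk, or ecdsa-sk) too, which is why GitHub’s support matters. For anyone who wants to try it client-side, a minimal sketch, assuming OpenSSH 8.2+ and a FIDO2 token plugged in:

# non-resident key: a private key stub lands in ~/.ssh/id_ed25519_sk
ssh-keygen -t ed25519-sk

# resident key: can later be recovered from the token itself with ssh-keygen -K
ssh-keygen -t ed25519-sk -O resident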
I’d previously tried to use an iPad Pro with the Apple Pencil (gen 1) as a note-taking device. It worked, and it’s superior for drawing, even. But it showed that the iPad isn’t designed as a dedicated paper replacement: the Pencil slips too easily on the glass, my palm was constantly smudging and rubbing on the screen, and I had to remember to keep the Pencil charged up. Worse, I couldn’t just leave the iPad open on my desk to glance at while I cross-referenced other materials for extended periods: because of the backlit display, it’s set to sleep after a minute or so.
Taken all together, these papercuts meant that even though I had an iPad Pro with an Apple Pencil, I would still turn to actual pen and paper more often. reMarkable 2 is the first device I’ve tried that I’m actually inclined to reach for over paper. The author nails it: using this thing is shockingly natural.
(I wish it had better ePUB navigation, on the other hand. And the desktop app could be a lot better, at least on macOS.)
My vote goes to 1Password, for ease of use, built in security model (client side encryption), versatility in handling all kinds of data (notes, credit cards, etc) and reliability of the plugins to work with all websites and apps. Other password management apps that I’ve tried have frequently had problems with some websites. Sometimes 1Password still has edge cases where e.g. 2FA is not automatically filled in and you have to copy paste it manually. But I haven’t seen a better app yet.
Yeah, me too. I ended up at 1Password after trying a lot of both offline and online systems.
Have you had a chance to compare it with LastPass?
My work used LastPass and I couldn’t have created a worse UI if I’d tried. There was no easy way to generate a new password. It took three clicks in non-obvious places to get to it.
I used LastPass for several years before switching to 1Password a year ago. Wish I had switched earlier. LastPass’s UI design needs a lot of work and over time actually got worse with various annoying small bugs.
Hard no to LastPass. I used it years ago, audited it one evening on a lark, found a few vulns, reported them, a couple got fixed, a couple got me told to fuck off.
And also, LastPass: Security Issues
When I previously used LastPass, there were some weird differences between the browser version and the desktop version - there were some things that each of them couldn’t do.
One oddity worth noting - I don’t use the desktop app with 1Password. I’ve found their browser extension, 1PasswordX, to be more stable (it also has the benefit of working on Linux).
I believe with the addition of HaveIBeenPwned integration on the LastPass security dashboard, they’re pretty much similar feature wise (though maybe 1Password can store 2FA tokens). I’ve used 1Password because it felt way less clunky than LastPass and it doesn’t require me to install a random binary on my Linux machines in order to access my passwords.
I switched to 1Password from LastPass a couple years ago and haven’t looked back.
LastPass got unusably slow for me after I had more than a few hundred entries in it. I don’t know if they’ve fixed their performance problems by now, but I can’t think of anything I miss.
Long time 1Password user here. It’s by far the best tool I’ve ever used. And I believe it goes beyond the application itself, as the support team is also great. Given a matter as sensible as all my credentials to login into several different services, having good support is mandatory IMO.
1Password here too. Excuse the cliché, but it just works. The cost is minimal for me — $4/mo, I think.
I’ve been slowly moving some 2FA to it, but it seems dependent on 1Password itself detecting that the site supports it vs. something like Authy where I can add any website or app to it.
I just switched to 1Password after 5-10 years on Lastpass. There’s some quirks, it’s not perfect, I generally prefer it to Lastpass.
The only thing Lastpass truly does better is signup form detection. Specifically I like the model Lastpass uses of detecting the form submission, 1Password wants you to add the password prior to signing up, which gets messy if you fail signing up for some reason.
Oh yeah, this is a constant frustration of mine. Also, whenever I opt to save the password, I seem to have a solid 4-5 seconds of waiting before I can do this. This seems to be 1Password X, FWIW. Back in the good old days of 1Password 6 or so, when vaults were just local files, the 1P browser extension seemed to save forms after submission.
I’ve been able to get my whole family onto a secure password manager by consolidating on 1Password. I don’t think I would have been successful with any of the other options I’ve found.
WhatsApp end-to-end encrypts all chats, by default, using the Signal protocol; Telegram only supports optional encryption of 1:1 messages with a more questionable protocol.
It’s totally fine to dislike Facebook or to want an open source client. I may have different priors than the author, and for my part I trust Telegram, as a company, less than I trust Facebook with my data. But I’d have to think WhatsApp is flatly lying about its use of the Signal protocol to consider my conversations on Telegram more private than those on WhatsApp.
Ultimately, though, I agree with the author that Signal is the best choice out of the three.
The section on MoCA was interesting; I didn’t even know that existed.
But I’m really confused about the network topology the author settled on (partially because it isn’t clearly described). Multiple routers is probably the wrong choice for this kind of situation—multiple switches and APs, sure, but not multiple routers.
If I were setting this up I’d have a single router between my local network and Sonic. The router would give out IPv4 DHCP assignments and IPv6 router advertisements to the LAN. You can set up all the switches and APs you like behind that router, but directly exposing your LAN to your ISP’s network seems like a brittle mistake (and also possibly a security nightmare).
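For what it’s worth, a minimal sketch of that single-router setup using dnsmasq (the interface name and address range are hypothetical, and the ISP-facing side is omitted):

# /etc/dnsmasq.conf on the router
interface=lan0
# IPv4 DHCP for the LAN
dhcp-range=192.168.1.100,192.168.1.200,12h
# IPv6: send router advertisements, stateless addressing from the delegated prefix
enable-ra
dhcp-range=::,constructor:lan0,ra-stateless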
Ah, I wasn’t clear enough. I ended up running my two consumer routers in AP mode, and I have a single managed switch sitting between the local network and Sonic.
Don’t be scared to consider Cat 8.1 (40 Gbit/s at up to 30 m), because it is standardized with RJ45 connectors, contrary to Cat 7, which isn’t.
If you’re starting from scratch, is there a good reason not to just do fibre for the runs and put RJ45 converters in the walls for easy plug-in?
That’s also an idea, but it comes at considerably more cost and is a bit problematic for some home-network applications like PoE. With fibre, it’s not as simple as with copper cables to connect surveillance cameras, wifi antennas, or anything else. And given PoE++ supports up to 70W, I could think of many applications where this might come in handy. :)
I like to follow the rule of “always pull an extra Cat 5 or two if you have the room during any cable pull” (although I recently updated the rule to Cat 6, and now it sounds like I should make it 8.1). When I did this with fiber a few years back, I had no plans for the Cat 5, but did end up using it for PoE later. As an aside, if you use Cat 5 (not 5e) with PoE, IME it will stop working reliably at some point. :(
Where can I find out more about what it means for Cat 8.1 to be standardized with RJ45?
Does this mean the Cat 8.1 spec specifies a certain RJ45 pinout? Or something else?
It is really simple, and I understand you, because it confused the heck out of me before I figured it out. Up to and including Cat 6A, it was part of the standard to use RJ45 connectors. Their disadvantage is that it’s really hard to shield them, which is why Cat 7 brought a new connector type (GG45) which looks almost like RJ45 but is not compatible with it (you can plug an RJ45 plug into a GG45 socket, but not the other way round). Additionally, Cat 7 isn’t even an international standard and is quite messy. Most people use Cat 7 cables but terminate them with RJ45 connectors, which makes zero sense, because this way you don’t even make use of the special shielding and grounding in the cable. It’s effectively a waste of money.
Cat 8.1 came later and fixed a lot of stuff. It is an international standard and uses the RJ45 connectors again (which is possible due to advances in shielding technology). There is also Cat 8.2, which uses different connectors, but that’s another matter. The cables themselves (Cat 8.1 and 8.2) are the same.
What I meant with my comment was this: if you renovate your house and install cables, the cables are the only thing that matters. If you really upgrade to 40 Gbit/s in 10 years, it is possible. Even if, by then, other connectors are the norm, you can replace them on the existing cables, but you cannot easily replace the cables themselves in your wall, obviously.
tl;dr: If you want more than 10 Gbit/s (which is not unreasonable anymore) and want to be future-proof, skip Cat 7 and go directly with Cat 8 cables and Cat 8.1 RJ45 connectors.
Ah thanks for the explanation, that’s very helpful. It didn’t even occur to me that Cat 7 wouldn’t have specified the use of an RJ45 connector at all.
You are very welcome! Yes, this fact is rarely mentioned and, for me at least, means that Cat 7 could very well not even exist.
Call me spoiled, but a 10G network between my NAS and various computers (a Mac mini, a workstation) is life-changing for me. Daily backups are faster, there are no seeking delays when playing or scrolling through 4K videos, and file transfers in general are snappier. I live in an apartment now, so cat6e works fine for me. But if I moved, I would seek solutions to have 10G connectivity in every room.
What kind of switches are you using? Last I really looked, 10 gigabit Ethernet hardware was still expensive enough to put it out of my reach for home use.
I am on a MikroTik switch, like the other threads already mentioned.
I’m about halfway through replacing most of my home network with 10GBASE-T - I just finished pulling new Cat 7 cable to replace the Cat 5 that came with the house and wasn’t able to support 10G (or even 1G on a few of the links).
There still aren’t a lot of options for 10G home-lab-grade equipment. It seems like it’s either a nice used switch from eBay that makes my neighbors think I have a jet engine in my garage, or a really cheap unmanaged 10G switch (e.g. MikroTik or something similar).
Everything from MikroTik is managed, and the models with “router” in the name dual boot SwOS/RouterOS. Heck, the 10G capable Marvell switch chip they use even supports accelerated L3 forwarding, and they finally started using that (in betas and for IPv4 only for now, IIRC)
I’ve been using Mikrotik for many years now, but I feel that their software and hardware QA has gone downhill lately. I got burned by a variant of this 10Gb problem, and they still haven’t made it right. A lot of their layer 3 stuff is a little off (search for BGP issues) too.
That said, no one else is even close to their price point for a redundant power switch (even most of the cheap stuff will accept power over passive POE and a wall wart). My advice is to use for L2 functionality, heavily test, and have spares even for home networks. And allow a fair amount of time to get accustomed to their rather exotic configurations, which change more often than they should.
My first impression of this was “this guy has a lot at stake with nudes.”
I agree with the idea that we should hold companies to the same standard and stop excusing big companies whose products we happen to like (as a whole, not necessarily on the individual level). I don’t personally use iCloud for anything other than text documents, but I can see how it would be an issue for sensitive information.
In the category of data that people hold onto in their iCloud backups, nudes are probably the most sensitive and well-understood variety. I think it totally makes sense to invoke that as a way to remind people of the sensitivity of the data they’re handing over to other companies.
I don’t know if it’s a generational thing or if I’m just an odd guy, but I don’t have any nudes of myself or others. I would be more worried about any sort of tax forms, bills, recovery codes, etc that I was storing in text on iCloud.
Indeed.
Pretty good article.
I went in thinking Apple was being hypocritical and now think that perhaps their move was pretty smart. Can’t push too much at once.
I was also pretty surprised at Alphabet’s different approach, which also seems pretty smart.
I was looking for information about Android’s approach, and found the following on Google’s support:
Photos are another story, I guess.
As for contacts, they may be encrypted for backups, but they’re all fully available from other Google services like GMail, right? 🤔
https://support.google.com/android/answer/2819582?hl=en
OK, so, let’s be real here:
If the data is encrypted with your Google Account password, then either they’re storing your password in cleartext on the device and/or in the cloud (both of which would be a rather bad idea, given that you’re supposed to use the password only to get an authentication session token), or you have to enter it all the time, which would be rather poor UX. (I presume they must be storing it on the device, encrypted with the lock PIN/pattern?)
Even if they themselves don’t have the password, I don’t see how they could possibly resist a request from a secret court to save such a password the next time it is supplied by the user; this doesn’t compare favourably to what Apple was supposed to have been working on.
As for lock PIN or pattern, what sort of encryption are they using? These are usually just a few digits long, there aren’t that many combinations to try out all the inputs if you already have all the data for it locally.
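To put rough numbers on the PIN concern: a 4-digit PIN has only 10,000 possibilities, so even a deliberately slow key-derivation step costing 100 ms per guess falls to an offline attack in under 20 minutes (10,000 × 0.1 s ≈ 17 minutes). Any credible design has to rate-limit guesses in tamper-resistant hardware rather than lean on the KDF alone, which is essentially what Google claims to do in the blog post linked a few comments down.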
Is this necessarily true? I feel like there could be some ways to “effectively” do this without storing your password in cleartext. Here’s an example: if you are asked for your password when you encrypt, Google can SHA-512 your password and use that to decrypt in the same kind of way.
Of course, I don’t know that Google is making that ask at each encryption / decryption. Also, that would mean you would lose your data if you forgot your password, which is probably not the case. However, I just want to point out there could be some clever use of cryptography going on here.
Well, your reply started with “let’s be real”, but you’re only presuming what Google’s doing. I’m not sure they are as bad at encryption as you make them out to be, but I can’t prove that either.
At any rate, Google is working with US gov law enforcement, to the extent that US-based companies are obliged to. That’s not great, but that’s expected.
how it works: https://security.googleblog.com/2018/10/google-and-android-have-your-back-by.html
I don’t know what Google does, but we know what Firefox Sync does, and it doesn’t require them to store your password in plaintext or to enter it all the time. They run your password through a key derivation algorithm, with different parameters so that the server-side hash and the encryption key wind up different in spite of starting with the same password.
The two derived keys are what the client retains a plain text copy of.
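A minimal sketch of that derivation trick, assuming OpenSSL 3.x’s kdf subcommand (the password, salts, and iteration count here are illustrative, not Firefox Sync’s actual parameters):

# same password, two different salts -> two unrelated 32-byte keys
openssl kdf -keylen 32 -kdfopt digest:SHA256 -kdfopt pass:hunter2 -kdfopt salt:auth -kdfopt iter:100000 PBKDF2
openssl kdf -keylen 32 -kdfopt digest:SHA256 -kdfopt pass:hunter2 -kdfopt salt:encrypt -kdfopt iter:100000 PBKDF2

The first output (or a further hash of it) is what the server stores to check logins; the second never leaves the client and is used as the encryption key.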
Agree with this 100%, Windows is the best Linux distro
You can roughly split software into two categories:
I only want to update software in the first category and not software in the second category, but because Linux userspace is all-in on making everything rely on very specific versions of everything else, you can only either update everything or nothing.
On Windows, the only way to ship software is to statically link all of your dependencies, so I can update software individually with no problems. There’s a small amount of Linux software running in WSL, all of which I am fine with never updating, so it works out.
Sounds like you should give guix or nix a try; they are built around that whole concept of isolating updates and making them trivial to roll back if you turn out to not want them.
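For the curious, a minimal sketch of what that looks like with Nix (assuming a standard nixpkgs channel; the package is just an example):

# install one package into your per-user profile without touching anything else
nix-env -iA nixpkgs.ripgrep

# undo the most recent change to that profile
nix-env --rollback

Every operation creates a new profile generation, so updating one program never forces an update of anything else.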
“Try guix or nix” feels like the “monads are just monoids in the category of endofunctors” of recommending hassle-free OS choices.
I see why the perception is this way, but really don’t think this should be the case. Mind if I quote you on this in a blogpost on how to practically use Nix later? :-)
Not at all.
I’ve been working on getting NixOS to run well under WSL2. I’ve gotten pretty close.
But there’s another split to consider:
The only software I can think of that falls into the first category is a calculator, to be honest… What others can you think of?
Almost all software should sit in that camp or be able to be configured to sit in that camp. There’s literally no reason at all for most software to touch the network. One of the most underrated aspects of having a system package manager is that you don’t have every program reimplementing auto-update functionality (securely, one hopes) on its own. Updating is taken care of in your package manager, in one place, once. Updating is the only case where the vast, vast majority of desktop software would ever “need” to touch the network.
Text editors, word processors, office software, email clients, video players.. the list goes on. None of them need to touch the internet at all.
I’m not talking about the internet. I’m talking about untrusted input. You are severely hampering your experience if you are never going to open a file from an untrusted source with your office software, email clients or video players. Even image viewers are potential vectors of attack. So, what software apart from a calculator falls into the category of “you never have to update it since it doesn’t interact with untrusted input”?
It’s generally considered to be unsafe to open untrusted files with Microsoft Office even if it’s entirely up to date…
I also struggle to think of much software that falls into that first category. That’s the point I intended to make: most of the software we use needs to be (capable of being) updated regularly. Various package managers have their downsides, but adopting a stance of generally not updating software isn’t really a solution (unless one cares to spend way more effort staying on top of CVEs than I do).
You are confusing package managers with operating systems here.
Also, Linux has had Snaps for a while now - they do exactly what you are implying here, but better: https://snapcraft.io/
Does this really happen? I’ve been running Arch Linux for 5 years now and it has happened maybe once. It seems like such an outdated meme.
Nothing about Linux forces you to update anything or to dynamically link anything.
My Ubuntu nags me about updates all the time.
Ubuntu is just one of many Linux distros (and IMHO one of the worst)
Sure. It’s always fun to waste time on configuring Arch.
Software on Windows tends not to be statically linked; when you distribute the software, you just ship the dynamic libs with it. (The D in DLL stands for dynamic.)
Brew has an amazing compromise between sandboxing and updates. Try brew on Linux for things like this. I always have the latest python provided through brew, but won’t mess up my system if I pip install something unstable.
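A minimal sketch of that pattern, assuming Homebrew on Linux is already installed (the package name is just an example):

# the interpreter stays current via brew upgrade
brew install python

# experiments live in a venv, isolated from the brew-managed interpreter
python3 -m venv ~/scratch-env
~/scratch-env/bin/pip install some-unstable-package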
Veering slightly off topic, I appreciate, but: has anyone actually used one of these? I love my HP-48GX, but I wouldn’t be averse to upgrading to something a bit more powerful if I didn’t have to give up keys or anything. I’ve been loath to upgrade ever since my abortive attempt at using a 49g+.
I’m curious as to what you use these calculators for, where upgrading would actually be a net win over your current kit? I haven’t touched my TI calculators (an 83plus and a TI-86) since 2001(?) and even then it was for one specific class, and checking my work, not doing the work.
(edit: I’m making the assumption, based on previous interactions with you, that you’re still a software engineer, and not in a role that necessitates complex mathematical models – though, even then, I’d assume you’d use NumPy and friends…)
Two things. First, I do a reasonable amount of volunteer teaching and tutoring, and having a physical calculator is really handy for that (and kids like a non-TI for that, too). Second, when I’m doing retro video game work, checking bills, etc., I prefer using a physical calculator. I’ll use calc-mode in a pinch, but I just really prefer having the dedicated physical object. Even the 48 is overkill for either task, but I like the larger screen and RPN.
Right on! Thanks for explaining!
I mostly stopped using my HP graphing calculator after I got an HP 35s. It’s pricey for what it is but I really like it for general calculation, for some of the same reasons. And for anything more involved I turn to NumPy or Mathematica.
This is excellent news. I think I’ll finally be able to get rid of my functional but complicated YubiKey OpenPGP applet + gpg-agent setup, while retaining the benefits of hardware isolation and touch for user presence—and upgrading to ECDSA in the process.
More importantly, this may also be what it takes to get some of my friends, who haven’t yet made the leap to hardware token-backed SSH keys, to upgrade their security. Especially since it should work not just with the expensive YubiKey 4/5, but also with cheaper U2F-only keys.
This is excellent news. I think I’ll finally be able to get rid of my functional but complicated YubiKey OpenPGP applet + gpg-agent setup
I would love that too, but I use my GPG hardware key (a NitroKey Start, which runs the gnuk firmware) for the pass password manager, as well as for signing git tags sometimes. But I agree that with the low prices of U2F keys, it may be enough to convince friends and colleagues to adopt a hardware token. Also, they can serve as an extra factor for PAM-based logins, which is nice.
I’m a bit disappointed that the interviewer didn’t mention a single question regarding addiction or any ethical dimension. It’s kind of been assumed that not liking pornography is just a conservative, right-wing thing, but I don’t think that’s correct. I personally perceive it as pushing harmful stereotypes (both about what women should look like and about how intimacy should look), and then there’s the problem of trafficking, and never knowing what’s actually going on behind the scenes. Chomsky says it well.
Setting aside things like these (which should be enough to say something isn’t right), knowing the digital world, where creating addictions has become a common and often even necessary business model, reading that interview makes me somewhat uneasy. Especially front end developers should have to think about these questions. They are the ones tasked with creating “seamless experiences”, ultimately disregarding the influence it has on people’s daily and personal lives. I don’t think the interviewer should have just glossed over this. YouTube has hateful or harmful videos, but their raison d’être isn’t hosting them. PornHub will have it a bit harder arguing that hosting and spreading pornography isn’t a big part of what they are.
From the technical perspective it’s somewhat interesting, I guess. It’s about the problems of high-demand video streaming, probably above the level of most other video sites, but still way below sites like YouTube. That’s like having an interview with a CIA agent on what the best strategies are to manipulate a foreign election.
Edit: Rewrote a few sentences to avoid confusion, and replaced my analogy with a different one.
Porn has been around a really long time. I’m pretty sure there’s nothing new to be discovered or discussed almost anywhere on earth on the topic, much less here.
Like, the human race has brute-forced about every part of that solution space we can. There is not a dirty thought we can have that hasn’t occurred to scores of other people at one point in history or another–of this I’m certain.
Not in the way it is now, as an endless torrent on demand. Modern porn has demonstrably changed society in ways that ancient porn did not. For example, women now believe that pubic hair is unclean and as a result of excessive pubic hair removal are getting health problems that pubic hair can prevent.
Also, just being around forever does not categorise something as innocuous or beneficial.
Hairstyles have been coming and going in fads ever since we left the trees and discovered hair can be cut and washed. Having this apply also to pubic hair is not exactly a huge change.
As the article notes, gynecologists disagree, but what do they know, I guess.
Like comparing chewing coca leaves to mainlining cocaine.
Quantity acquires a quality of its own, you know. Not to mention that quality is altogether different as well: 4K video isn’t the same as a blurry black and white photo. There’s a strange blindness to this effect in the tech industry, whether it comes to social media, endless tsunami of content on Netflix, or indeed porn. Much like Facebook’s idea that more communication is unconditionally better has backfired spectacularly, maybe it’s the same with porn. And then of course there’s also all the engineered “engagement” in all these areas. Don’t be so quick to say it’s all totally harmless.
Well-put.
The audience is web developers wanting to read something interesting about web development at a big company, and the author wants most of them to enjoy the article. Talking about the damage they might be doing doesn’t serve either purpose. Most would’ve just clicked the little X or otherwise moved on.
There’s been a lot of good writing on that subject for anyone looking for it. The key words are easy to guess.
You’re kinda circling back to the same point. Yes, talking about the ethical implications of our jobs is hard, and uncomfortable, but it’s necessary. Of course most people don’t want to do it; of course most people don’t want to read about it. But it’s our responsibility to talk and to read about those things. “I don’t like doing it” is not a valid excuse for not doing something it’s your responsibility to do.
That said, the comparison with slavery is a bit out of place, imo.
You’re doing that trick many people do here where it becomes all or nothing in every post, forum, etc. The stress of introspecting on these topics makes many people do it at certain times and read relaxing content at other times. They’re fine splitting it up. I’d dare say most people prefer that, based on that simply being the most popular way content is done online.
Then, other people think they should be mentally engaged with these topics at all times, in all articles, forums, etc. due to their importance. They also falsely accuse people of not caring about social responsibilities if they don’t discuss them in every article where they might come into play. You must be in that group. The author of the original post and their audience are not. Hence the separation of concerns that lets readers relax, focusing just on web tech, before optionally engaging with the hard realities of life at another time, in another article.
This isn’t a “what if my open source library was used by some military”-kind of question, I think that there is a much stronger connection between the two. Front end design is related to user behaviour, and I still consider this relation to be a technical question (UI design, user protection, setting up incentives, …).
If the interviewer had asked these questions, and the interviewee had chosen not to comment, that would have been something, but the article currently just brushes it away up front by saying “Regardless of your stance on pornography, …”.
A tech-related, Lobsters-worthy discussion of the topic would focus on how they collected user behavior, analyzed it, measured whether they were reaching their goals, strategized for how to achieve them, and specific methods of influence with associated payoffs. It would actually be more Barnacles-like since marketing is behind a lot of that. These technical and marketing techniques are politically-neutral in that they are used by many companies to measure and advance a wide range of goals, including pornography consumption. They could be discussed free-standing with little drama if the focus was really on the technology.
You were doing the opposite. That quote is an ethical question, even says so, where you have political views about pornography consumption, you wanted theirs explored, and you might have had some goal to be achieved with that. The emotional language in the rest of your post further suggested this wasn’t about rational analysis of a technology stack. You also didn’t care what the writer or any of their readers thought about that. So, I countered representing the majority of people who just wanted to read about a web stack. A mix that either doesn’t care about ethics of porn or does with it being a depressing topic they want to handle at another time.
I was on 2nd cup of coffee when you wanted me to be thinking about lives being destroyed instead of reading peaceful and interesting things easier to wake up to. Woke up faster in a different way. Oh well. Now, I’m off this drama to find a Thursday submission in my pile.
I think these kinds of things were missing from the article. I know this isn’t the place to discuss pornography, and I try not to go into it in the comments. What I just brought up was a disappointment in the style and focus of the interview, and it being one-sided.
Well I do think it’s important, so I apologize for being a tad emotional. But other than what I wrote, I don’t have anything else to contribute. I neither run nor plan to run a streaming site, so I end up not having too strong opinions on what is being used in the backend stack ^^.
I understand that, that’s why I prefixed my top comment with what you quoted. I furthermore feel obligated to apologise if anyone had to go through any inconvenience thinking about the “ethics of porn” because of my comment, I guess? No but seriously, bringing up a concern like this, which I explicitly tried to link back to a technical question, should be ok.
“I furthermore feel obligated to apologise if anyone had to go through any inconvenience thinking about the “ethics of porn” because of my comment, I guess? No but seriously, bringing up a concern like this, which I explicitly tried to link back to a technical question, should be ok.”
There’s quite a few people here that are OK with it. I’m not deciding that for anyone. I just had to remind you that caring people who want a break in some places exist and that you do more good by addressing the porn problem where it’s at. I appreciate you at least considering the effect on us.
“I neither run nor plan to run a streaming site”
The main problem is on the consumer side, where there’s mass demand followed by all types of supply and clever ways to keep people hooked. You can’t beat that, since they straight-up want it. What you might do is work on profiles for porn sites with tools such as NoScript that make them usable without the revenue-generating ads. Then, lots of people push for their use. If there’s any uptake, the sites take a temporary hit in their wallet, but maybe offset it with ad-free Premium. I’m not sure of the effectiveness; I just know they’re an ad model with tools existing to attack that.
Griping about it on technical sites won’t change anything because… most viewers aren’t on technical sites and those that are rarely changed. So, it’s just noise. Gotta work on porn laws, labor protections for those involved, ethical standards in industry itself, ad blocking, etc.
If you would like to discuss the ethical aspects, go to a different forum. I would recommend the community around Thaddeus Russell’s podcast for a critical and reasoned take from people that actually interact with sex workers: https://www.thaddeusrussell.com/podcast/2
I’ve mentioned it elsewhere, but I’m not here to discuss the ethical aspects, nor am I in a position to be able to. My comments are related to the interviewer and his choice of questions.
You gave opinions, stated as scare-hints without support:
… and then based upon the now well-built pretext that porn “isn’t right” (and is therefore ethically ‘wrong’) - you commented on what the interviewer should have done - i.e. they should have had the same opinions and conceptions as yourself - and they should have turned the interview into one about ethics.
The interview was interesting to read, because of the info about the tech. As bsima says, please take ethical discussion elsewhere.
As you said, I prefixed the controversial parts by saying that it was my opinion. But I don’t think that the interviewer must have shared my views. The point I was raising was that I thought it wasn’t appropriate for the interview to just ignore a quite relevant topic, since this was about PornHub specifically, not their parent company.
IMO, just a final question or two along those lines would have been more than enough, as a way to show this is being considered. I’m not a journalist, so I don’t know how these questions could be phrased better, but I hope you do get my point.
…and yet, it’s the ethical aspects that you brought up.
Looking at this thread, I didn’t respond to people who started talking about the harmfulness of pornography or the lack thereof. This even though I would like to – yet I understand that it is off topic. In fact most of this sub-thread has been more about the meta-discussion.
All I can say is that I will be more careful not to provoke these kinds of discussions in the future. I was thinking critically a lot about the topic in the last few months, so my comment might not have been as neutral as some might have wished.
This is more than a little hyperbolic.
My analogy is that the direct consequences of technical questions are being more or less ignored, which I think is fair in both questions. Of course it’s not identical, but that’s stylistic devices for you.
I could come up with quite a few objections to pornography, but the chap in your video link is not only not convincing, he is also hinting that he watches porn even though he denies it. He backs up his statement “porn is degrading to women” by qualifying “just look at it” which implies that he does that enough to have an opinion.