Threads for adtac

  1. 2

    It’s 2020, and we still don’t have a good Linux laptop that competes with Macbooks.

    1. 6

      What about Dell XPS Developer Edition?

      1. 1

        13” is a bit small for a developer machine.

        1. 3

          They also have the Precision with Ubuntu pre-installed. It’s the XPS15 with better hardware under the hood IIRC.

      2. 3

        Thinkpads compete just fine. In fact, they beat Macbooks outright. I cannot find a single flaw with the T495s I’m typing this on. Incredible battery life, sharp screen, good keyboard and trackpad, a decent number of ports, good cooling, lightweight and portable. Personally I like the aesthetics of the Thinkpad more than the Macbook’s too, but that’s subjective.

        1. 3

          Thinkpads compete just fine. In fact, they beat Macbooks outright.

          As someone who happily chooses to use a ThinkPad T480 after many years of using Apple laptops, I disagree vehemently. I bought mine when my Macbook Pro died and the only new Apple replacements were the touchbar-endowed, terrible-keyboard models that maxed out at 16GB RAM. That didn’t work for me, so I went T480.

          The screen is a downgrade. The keyboard is an upgrade. The touchpad is a cruel joke. Fortunately, I can just turn the touchpad off and use the trackpoint. Battery life is better. The cooling is worse. The CPU throttles regularly. I may open it up and re-paste it; I hear that helps.

          Getting Linux to work well on it was bumpy. I use Fedora. Setting up disk encryption so that it worked across two drives was a royal PITA. I still have to hold my jaw just right when I plug or unplug my Thunderbolt 3 docking station; most of the time I choose to shut down first. Resolution scaling doesn’t work half as well as it did on Mac. Jetbrains tools can lock up the entire GUI. The wired ethernet adapter on the Lenovo Thunderbolt dock is hideously slow; it’s actually faster to use wifi. Multiple displays still suck compared to Mac.

          Make no mistake. I like this machine, and am happier overall with it than I was with my Macbook setup. It wins for me, as a software developer, on balance. Especially when I consider that, when I bought it, this was a $2100 rig and the closest thing from Apple would’ve cost $3500 with half the RAM (albeit a faster CPU and SSD).

          But there’s no way I’d say it wins outright. Even if you gave me a week to tweak Linux the best I could, I could not hand it to any of my Macbook-toting friends (who are not software developers) and expect them to have a better experience with my hand tweaked thinkpad than they have out of the box on their Macbook.

          1. 1

            The screen is a downgrade

            I strongly prefer the matte screen on the Thinkpads. I also got the 400nit low-power screen and its colour range is incredible. I have used Macbooks briefly before and they definitely have good screens (especially so 5-6 years ago, when they had the highest-res screens in laptops), but my T495s’ screen is equally good, if not better, thanks to it being matte.

            The touchpad is a cruel joke

            Touchpads are the one thing that Macbooks have an edge in, and I’ll admit that. The T495s’ touchpad is nowhere near that level, but I still like it a lot. I also use the trackpoint a lot; it took a while to get used to, but it’s quite powerful.

            1. 1

              I strongly prefer the matte screen on the Thinkpads.

              While I think, based on looking over the shoulders of colleagues, that Apple has gotten the anti-glare coating on their glossy screens good enough that I could happily use them, I was comparing my T480’s screen to the matte 1920x1200 screen of the (2011 or 2012?) MBP17 that it replaced. That MBP17’s was, by quite some distance, my favorite laptop screen ever. If I could get that keyboard/battery/trackpad/screen with a modern motherboard, I’d happily do so.

              Based on your description of the 495, it sounds like they improved the matte screen between the T480 and the T495. I’d rate the 480’s as passable but not great.

              They may also have improved the touchpad; the T480’s touchpad makes me understand why so many Thinkpad users hate touchpads. (Or maybe Apple ruined me for those.) I like the trackpoint a great deal, though, so I’m happy as long as I can disable the touchpad. And I actually don’t run it fully disabled these days: I have all of its “click” functionality turned off, set its scrolling to two-finger-only mode, and use it like a big scroll wheel so that my trackpoint middle button functions like a traditional middle mouse button. I’m pretty happy with that.

              I really love the giant external battery on the T480. I routinely get 12 hours of heavy VMware usage or 22+ hours of browsing/editing usage with the 72Wh. I’m disappointed and annoyed that they seem to have discontinued this feature on the 490 series, and really hope they bring it back.

          2. 1

            Same question: what’s the battery life on Linux?

            1. 1

              I consistently get 9-10 hours and I haven’t even bothered to optimise it.

              1. 1

                To add another datapoint for you, on my T480 with the big 72Wh rear battery, I see 12-ish hours of heavy compiling/VM testing usage. 22+ hours of browsing and text editing. I’m running Fedora 31 with powertop and tlp packages to manage power, but no manual customization on those.
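
                In case it helps anyone replicate this, the setup really is minimal; a rough sketch (Fedora package names, other distros will differ):

                $ sudo dnf install tlp powertop    # power management daemon + diagnostics
                $ sudo systemctl enable --now tlp  # apply tlp's defaults now and on every boot
                $ sudo powertop                    # interactively inspect per-device power draw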

          1. 4

            While I’ll continue to use Firefox to keep developers accountable to other browsers and to do my part in maintaining a healthy market share (apart from a host of other reasons), I genuinely think it’s different this time. Browsers are doing a lot to maintain an open standard and things like ActiveX are no longer plaguing the browser market. DRM continues to be the proprietary evil it is, but other than that it’s a lot better than the IE days.

            1. 16

              Though nowadays the open standards can be a bit… misleading? A lot of them feel more like edicts from Google to enable features for ChromeOS, like WebUSB.

              1. 6

                I was happier not knowing that WebUSB is a thing. Thanks. :/

              2. 1

                Standards don’t seem to prevent web app makers from looking at your user agent and telling you to switch to Chrome. Google’s applications in particular are infamous for serving different code (with different performance and bugs) to different browsers.

              1. 3

                As there are no VMs, I can’t SSH into the machine and make changes, which is excellent from a security perspective since there is no chance of someone compromising and running services on it.

                What’s wrong with SSH? Extending this logic, if someone compromised your Google account, you’re toast. Just use passwordless login, an off-disk key with something like a Yubikey (password protected, of course), and disable root login.
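
                Concretely, that hardening is only a couple of sshd_config lines plus a key; a rough sketch (stock OpenSSH options, names are just examples):

                $ cat /etc/ssh/sshd_config   # the two relevant lines
                PasswordAuthentication no    # key-based auth only
                PermitRootLogin no           # no direct root logins
                $ ssh-keygen -t ed25519      # or keep the key on a Yubikey so it never touches disk
                $ ssh-copy-id user@host      # install the public key on the server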

                1. 4

                  SSH can be plenty secure but no SSH is even more secure.

                  1. 7

                    Until you invent a less-secure workaround for not having access to ssh.

                    1. 2

                      They’re using the appliance model here. They build the appliance with no ability to log into it. It’s uploaded to run on Google’s service. When it’s time to fix or upgrade, a new one is built, the old one is thrown away, and the new one is put in its place. It’s more secure than SSH if Google’s side of it is more secure than SSH.

                      Now, that part may or may not be true. I do expect it’s true in most cases since random developers or admins are more likely to screw up remote security than Google’s people.

                      1. 2

                        Uploading Docker images that can’t be SSH’d into is, IMHO, much more secure.

                    2. 3

                      If someone accesses my Google account, they can access my GCP account anyway. The advantage here is that my Google account is better protected, not just with 2-factor but because Google is always on the lookout. For example, if I am logging in from the USA and suddenly there is a login from Russia, Google is more likely to block that or warn me about it. That’s not going to happen with a VM I am running in GCP.

                      Just use passwordless login, an off-disk key with something like a Yubikey (password protected, of course),

                      None of that protects against vulnerabilities in the software, though. For example, my Wordpress installation was compromised and someone served malware through it. That attack vector goes away with Docker-container-based websites (attack vectors like SQL injection do remain, though, since the database is persistent).

                      1. 9

                        I am a PenTester by trade, and one of the things I like to do is keep non-scientific statistics and notes about each of my engagements, because I think they can help point out some common misconceptions that are hard for people to compare in the real world (granted, these are generally large corporate entities, not little side projects).

                        Of that data, only about 4 times have I actually gotten to sensitive data or internal network access via SSH, and that was because they were configured for LDAP authentication and I conducted password sprays. On the other side of the coin, mismanagement of cloud keys that led to the compromise of the entire cloud environment has occurred 15 times. The most common vectors are much more subtle, like Server Side Request Forgery that allows me to access the instance metadata service and gain access to deployment keys, developers accidentally publishing cloud keys to DockerHub or a public CI or in source code history, or logging headers containing transient keys. Key management in the cloud will never be able to have 2FA, and I think that’s the real weakness, not someone logging into your Google account.

                        Also, in my experience, actual log analysis from cloud environments does not get done (again, just my experience). The number of phone calls from angry sysadmins asking if I was the one who just logged into production SSH during an assessment, versus entire account takeovers in the cloud met with pure silence, is pretty jarring.

                        I often get the sense that just IP whitelisting SSH, or having a bastion server with only that service exposed and some sound network design, could go a long way.

                        1. 1

                          The most common vectors are much more subtle, like Server Side Request Forgery that allows me to access the instance metadata service and gain access to deployment keys, developers accidentally publishing cloud keys to DockerHub or a public CI or in source code history, or logging headers containing transient keys. Key management in the cloud will never be able to have 2FA, and I think that’s the real weakness, not someone logging into your Google account

                          Thanks for sharing this.

                          1. SSRF or SQL injection will remain a concern as long as it’s a web service, irrespective of Docker or VMs
                          2. Logging headers containing transient keys is a poor logging practice, which holds for both Docker and VMs
                          3. I agree that key management in the cloud is hard. But I think you have to deal with that on both Docker and VMs

                          I often get the sense that just IP whitelisting SSH, or having a bastion server with only that service exposed and some sound network design, could go a long way.

                          This won’t eliminate issues like SQL injection or SSRF. And IP whitelisting doesn’t work in the new world, especially when you are traveling and could be logging in from random IPs (unless you always log in through a VPN first).

                          1. 4

                            You seem to be kind of missing my point; I’m not arguing Docker vs VMs or even application security. The original comment was about SSH specifically, and I am making the argument that the corner cases for catastrophic failure with SSH tend to be around weak credentials or leaked keys, which are all decently well understood. Whereas in the cloud world, the things that can lead to catastrophic failure (sometimes not even from your own mistakes) are much, much more unknown, subtle, and platform specific. The default assumption of SSH being worse than cloud-native management is not one I agree with, especially for personal projects.

                            IP whitelisting doesn’t work in the new world especially when you are traveling and could be logging in from random IPs

                            For some reason I hear this a lot, and I seriously wonder: do you not think that’s how it’s always been? There’s a reason that some of the earliest RFCs for IPv6 address the fact that mobility is an issue. I’m not necessarily advocating this in personal-project territory, but this is the whole point of designing your network with bastion hosts. That way you can authenticate to that one location with very strict rules, logging, and security policies, and also not have SSH exposed on your other services.

                            1. 2

                              All fair points.

                    1. 3

                      Use whatever you like and don’t listen to people on the internet telling you what to do.

                      1. 8

                        I agree; listen to this guy. (Your argument is a bit self-defeating…)

                        I think it’s good to be receptive to new, truthful information so you can take it into account when making your own decision. For example, this article contains the information that the order of the i: Int syntax has something in common with the order of lambda syntax.

                        1. 7

                          Because why would you care to evaluate arguments and reasoning when you can just trust your gut instinct and save a few minutes now, at the cost of making things harder for yourself for the whole time you’re using your own language?

                          1. 2

                            why even discuss anything then?

                          1. 1

                            Is it possible to get the full article? It seems to be truncated (or is dig doing that?)

                            1. 1

                              that’s a limitation of UDP-based DNS AFAIK
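
                              Easy to rule out, though: dig can force the query over TCP, which has no UDP size limit (the reply below already advertises a 4096-byte EDNS buffer, so if TCP is still cut off, the limit is elsewhere):

                              $ dig +tcp domain.wpodns.adtac.in txt   # same query, retried over TCP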

                            1. 1

                              Do you think a better approach to disambiguations is possible? Can they be split and returned as multiple records?

                              $ dig domain.wpodns.adtac.in txt 
                              
                              ; <<>> DiG 9.11.14-RedHat-9.11.14-2.fc30 <<>> domain.wpodns.adtac.in txt
                              ;; global options: +cmd
                              ;; Got answer:
                              ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13825
                              ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
                              
                              ;; OPT PSEUDOSECTION:
                              ; EDNS: version: 0, flags:; udp: 4096
                              ;; QUESTION SECTION:
                              ;domain.wpodns.adtac.in.		IN	TXT
                              
                              ;; ANSWER SECTION:
                              domain.wpodns.adtac.in.	3600	IN	TXT	"Domain may refer to:\010\010\010== Mathematics ==\010Domain of a function, the set of input values for which the function is defined\010Domai"
                              
                              1. 1

                                For something like https://en.wikipedia.org/wiki/Set, that quickly expands to a few dozen queries internally (I chose “set” because I remember reading in my childhood that it’s the word with the most definitions in the dictionary).

                              1. 26

                                Hm, a language that uses product types instead of sum types to represent “result or error” is certainly not doing exceptions right.

                                1. 6

                                  Exactly

                                  If a function returns a value and an error, then you can’t assume anything about the value until you’ve inspected the error.

                                  You can still use the returned value and ignore the error. I’m not calling that “right”.

                                  1. 2

                                    That is intentional and useful. If the same error has different consequences when called by different callers, the error value should be ignored. However, when writing idiomatic Go, you generally return a zero value result for most error scenarios.

                                    Assigning _ to an error (or ignoring the return values altogether) is definitely something that could be trivially caught with a linter.
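
                                    For example, errcheck (a third-party linter, github.com/kisielk/errcheck) catches exactly this; something like:

                                    $ go install github.com/kisielk/errcheck@latest
                                    $ errcheck -blank ./...   # flags dropped error returns; -blank also flags errors assigned to _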

                                    1. 1

                                      I still think it’s a mistake to allow both to coexist. In this case, you need an additional tool to catch something. Whereas with proper sum types, it is simply not possible to access a return value if you get an error. What you do with that is up to you.

                                      val, err := foo()
                                      if err != nil { /* log and carry on, or ignore it altogether */ }
                                      bar(val) // <- nothing prevents you from doing that, and it’s very likely a mistake
                                      
                                  2. 5

                                    Thought the same. It gets this right:

                                    Go solves the exception problem by not having exceptions.

                                    And here it goes wrong:

                                    Instead Go allows functions to return an error type in addition to a result via its support for multiple return values.

                                    1. 4

                                      Another case of “Go should have been SML + channels”, really. That would have been as simple, cleaner, and less error prone. But what can you expect from people who think the only flaw of C is that it doesn’t have a GC…

                                      1. 1

                                        Not just GC but also bounds checking.

                                        1. 1

                                            That’s so obvious that I forgot about it. I don’t know of any modern language that doesn’t have a (bounds-checked) array type. On the other hand, they decided to keep null nil…

                                          1. 1

                                            You have to keep in mind that C is most certainly not a modern language & they were intentionally starting with C & changing only what they felt would add a lot of value without adding complexity.

                                            1. 1

                                              For sure, I just don’t get why anyone would want that. 🤷 I guess.

                                              1. 2

                                                Primarily linguistic simplicity in the service of making code easier to read.

                                    1. 3

                                      I think I went a bit overboard lol. The funny thing is, I use them all frequently.

                                      alias g="git show"
                                      alias gh="git show HEAD"
                                      alias gs="git status"
                                      alias gl="git log"
                                      alias ga="git add"
                                      alias gaa="git add -A"
                                      alias gca="git commit --amend"
                                      alias gcm="git commit -m"
                                      alias gpr="git pull --rebase origin master"
                                      alias gpf="git push --force"
                                      alias gco="git checkout"
                                      alias gd="git diff"
                                      alias gbl="git branch -v"
                                      alias gbd="git branch -D"
                                      alias gri="git rebase --interactive"
                                      alias grc="git rebase --continue"
                                      alias gra="git rebase --abort"
                                      alias gst="git stash"
                                      alias gsta="git stash apply"
                                      alias gx="gco -- \*; git reset HEAD \*"
                                      alias gcp="git cherry-pick"
                                      alias gcpc="git cherry-pick --continue"
                                      alias gcpa="git cherry-pick --abort"
                                      alias gpar="git remote | xargs -L1 git push --all"
                                      
                                      1. 8

                                        A tip: --force-with-lease is a safer choice than just --force.
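
                                        As an alias, it’s a drop-in swap:

                                        alias gpf="git push --force-with-lease"

                                        It refuses the push if the remote ref has moved since you last fetched, so you can’t accidentally clobber commits someone else pushed in the meantime.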

                                        1. 1

                                          alias g=“git show”
                                          alias gh=“git show HEAD”

                                          git show without any arguments will show you HEAD by default, so you don’t necessarily need a separate gh alias. (Of course, aliases are very personal and you should do whatever you like!)

                                        1. 2

                                          I’m a huge fan of https://fonts.google.com/specimen/VT323 - I use it everywhere!

                                          1. 10

                                            Holy fuck, this is the biggest abuse of clickbait I’ve ever seen. Not only does it use dubious maths to arrive at the sum, but the author just randomly picks a 0.5% number out of nowhere!

                                            1. 1

                                              Exactly. For an established brand like this I’d expect people to retry (perhaps by phone). Also, for cards expiring in 2020 there is no bug :)

                                              1. 5

                                                @adtac I don’t think it’s that clickbaity. The title says “potentially”, and he also used 0.05% and not 0.5%, and though the number is still random, the implication isn’t too bad for napkin calculations for fun and no profit.

                                                @vii

                                                Exactly. For an established brand like this I’d expect people to retry (perhaps by phone).

                                                I think most people I know wouldn’t, including myself.

                                            1. 2

                                              low-impact control center and multiple remote low-impact generation sites.

                                              Odd targets.

                                              1. 4

                                                This is probably similar to credit card scammers testing card validity by making a small purchase before the real thing. Lower visibility and a smaller chance of getting caught before the real attack. If this had actually brought down an entire state in America, you can bet it wouldn’t be in the lower half of the frontpage on Lobsters.

                                                Who knows how many other attacks have gone unnoticed in the past because of this.

                                              1. 18

                                                I use it myself and I really like it. Although I self host it, I still bought a subscription to support the dev. It used to be partly proprietary but he decided to just open source everything, which I thought was really cool. If anyone wants a dark theme scss file for it let me know.

                                                1. 17

                                                  I was originally planning on following the “open core” model a la Gitlab, but I became so disillusioned with proprietary software that I decided to never go down that path again. I’d never comfortably self-host proprietary software on my personal servers, so why subject others to it?

                                                  Sometimes it’s perfectly possible to build a sustainable business with free software to support yourself; as an example, Commento is already profitable [1].

                                                  Btw, thanks for supporting the project!

                                                  [1] the same can’t be said of many IPO’d Silicon Valley companies, eh? ;)

                                                1. 8

                                                  Consider the case where a dependency that is still in use on feature has been removed on master. When feature is being rebased onto master, the first re-applied commit will break your build, but as long as there are no merge conflicts, the rebase process will continue uninterrupted. The error from the first commit will remain present in all subsequent commits, resulting in a chain of broken commits.

                                                  Merge commits do not solve this in any way. If master has removed a dependency, no amount of git gymnastics will bring it back magically. Git cannot possibly understand the dependencies in your code (it’s just a version control system), so merging and rebasing will have the same effect.
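
                                                  What does help with the chain-of-broken-commits problem is rebase’s own --exec flag, which runs a command after each re-applied commit and pauses the rebase as soon as one fails. A sketch, assuming your project has a make test target:

                                                  $ git rebase master --exec "make test"   # stops at the first commit that no longer builds/passes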

                                                  In this case, we hope that Git identifies commit f as the bad one, but it erroneously identifies d instead, since it contains some other error that breaks the test.

                                                  That’s because you re-added the dependency in a commit G that was appended to the end of the tree. You should have re-added the dependency immediately after origin’s tip before your commits are rebased. That is, the order should’ve been ABCGDEF.

                                                  You pretend that the commits were written today, when they were in fact written yesterday, based on another commit.

                                                  Rebase does nothing like that. Git has two dates: commit date and author date. Commit date is updated when rebased, author date is kept intact (unless you go out of your way to specify --ignore-date).

                                                  You’ve taken the commits out of their original context, disguising what actually happened.

                                                  Context is perfectly preserved. I genuinely don’t understand what context is added by a meaningless merge commit saying “this is when I merged this branch” when you can derive that information from the commit date of the last commit (not author date, mind you).

                                                  Can you be sure that the code builds?

                                                  Once again, merge commits don’t solve this in any way.

                                                  I’ve come to the conclusion that it’s about vanity. Rebasing is a purely aesthetic operation. The apparently clean history appeals to us as developers, but it can’t be justified, from a technical nor functional standpoint.

                                                  Not true. I use rebase and cherry-pick liberally because it helps when I go through the commit log at a later date (which I do). It helps when I write release notes because everything is linear. It helps to fix merge conflicts at the commit level, rather than resolving one huge dump of changes in a merge commit. Commits should be atomic and self-contained.

                                                  If that puts you off you might be better off using a simpler VCS that only supports linear history.

                                                  Stop patronising.

                                                  1. 9

                                                    I just quickly skimmed through the article and I have some issues with it.

                                                    one cannot rm /proc/1024 to kill the process

                                                    Okay, let’s say I can do this; that is, rm /proc/1024 is equivalent to kill 1024, which sends SIGTERM. How would I send the dozen other signals? Importantly, how would I send SIGKILL? You may say that we don’t need SIGTERM and SIGKILL separately, but we do; SIGKILL is handled by the kernel and it does not permit the process to do anything. SIGTERM signals the process to terminate, which would allow it to do some cleanup work (like closing files). Without this distinction, it’d either be impossible to safely terminate processes or to predictably terminate misbehaving processes. Given that we have different signals to send, how would that work in a “database engine”? An echo SIGKILL >/proc/1024/signals? Is that any different from kill -9 1024? In fact the latter is much nicer.

                                                    one cannot ls /proc/1024/open_files to see the list of all open files for this process

                                                    you can, though.

                                                    $ tty
                                                    /dev/pts/1
                                                    $ touch /tmp/file
                                                    $ tail -f /tmp/file
                                                    

                                                    separately,

                                                    $ tty
                                                    /dev/pts/2
                                                    $ ps -A -o pid,cmd | grep tail | grep -v grep
                                                      899 tail -f /tmp/file
                                                    $ ls -hal /proc/899/fd/
                                                    lrwx------ 1 adtac adtac 64 Aug  8 23:04 0 -> /dev/pts/1
                                                    lrwx------ 1 adtac adtac 64 Aug  8 23:04 1 -> /dev/pts/1
                                                    lrwx------ 1 adtac adtac 64 Aug  8 23:04 2 -> /dev/pts/1
                                                    lr-x------ 1 adtac adtac 64 Aug  8 23:04 3 -> /tmp/file
                                                    

                                                    Much (if not all) of the system configuration can be set up by opening a resource and toggling a few buttons, retyping strings or adjusting colors. There is no need to learn the syntax of a specific configuration file, and no wasting of the CPU time on parsing that text file and reporting errors if any.

                                                    No, you still need to do all of that processing. With a “database engine”, one would have to talk to a different process in the system over IPC, which is way slower. And even after you get the data, how are you going to validate it? For example, a config’s value may be an email address. But if it’s empty, it’s an invalid value that should be caught at config parsing time.

                                                    Besides, trying to save cycles on configuration parsing is premature optimisation at its finest.

                                                    list of all processes belonging to user joe can be found faster by a database query rather than with ps aux | grep joe

                                                    You can just do this, which is just as simple as a “database query”.

                                                    $ ps -a -u joe
                                                    

                                                    Really, read the ps documentation. ps aux isn’t the only way to invoke the utility.

                                                    The universal database would also allow linking of an #include file directly into an includee; this will obsolete the arcane art of specifying compiler’s -I and -L flags and trying to predict which of several possible time.h files the compiler would actually pick.

                                                    How does having a “database engine” solve this?

                                                    1. 5

                                                      Okay, let’s say I can do this; that is, rm /proc/1024 is equivalent to kill 1024, which sends SIGTERM. How would I send the dozen other signals?

                                                      In Plan9, which took “everything is a file” much further than unix, they have process control files. So you echo kill > /proc/pid/ctl. You say that kill -9 is much nicer, though I can’t understand why: Plan9 requires fewer commands to learn, a simpler cognitive model (you do everything by manipulating files), etc.
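
                                                      And the dozen other signals are just different verbs written to the same file; if memory serves, Plan9’s proc ctl accepts things like:

                                                      $ echo kill > /proc/123/ctl   # terminate the process
                                                      $ echo stop > /proc/123/ctl   # suspend it
                                                      $ echo start > /proc/123/ctl  # resume it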

                                                      You can just do this, which is just as simple as a “database query”.

                                                      Again, it’s not. It’s a separate command. And unix/linux/etc commands are riddled with inconsistencies in their various flag arguments. If you have a single query language that can be applied across all system operations, that lowers the learning curve dramatically.

                                                      And ps is honestly a terrible example since it has two entirely different sets of flags for the same operations (SysV vs BSD). It’s a very tough command to learn, and once you learn it you’re very likely to stick with only half of the available flags.

                                                      How does having a “database engine” solve [include directives requiring library paths]?

                                                      Well, let’s look at how BeOS/Haiku does it. It’s fairly simple, as long as you’re not doing conditional #includes.

                                                      -> addattr -t string META:include /path/to/foo.h bar.c
                                                      

                                                      Then a compiler could use that information when building bar.c instead of requiring flags or environment variables defining a search path. I personally think it would be more useful if there were a “file” type, separate from the “string” type, that updated if the file moved, but that’s a whole other problem.

                                                      In short, both “everything is a file” (including GUI elements, etc) and “the filesystem is a database” have been implemented in open-source OSes before (plan9 and Haiku, respectively). I encourage you to read up on how they solved the problems you perceive with the concepts.

                                                    1. 11

                                                      Is “Set up your own” really great advice? Administer a server and trust yourself to correctly configure a VPN service? Yeah, no thanks. It doesn’t even give you the advantage of fighting IP geolocation since your VPS provider will probably assign you a static IP.

                                                      My advice on VPNs would be “Don’t use VPN services that require you to make an account”. You don’t need an account for a VPN; just look at Mullvad. I trust Mullvad to keep me safe from copyright letters more than my ISP.

                                                      1. 11

                                                        It’s almost impossible to misconfigure wireguard, bar some absurd mistakes like publishing your server’s private key. Generate a wireguard key pair on the client and the server, copy the corresponding public keys to the other machine, and you’re done. No ciphersuites to choose from, no key size to configure, nothing. And you now have a private tunnel (truly private!) with virtually zero performance overhead. Wireguard is less chatty than most, so it works well with intermittent disconnections too.
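
                                                        The entire flow, roughly (the interface name, addresses, and endpoint here are just examples):

                                                        $ wg genkey | tee private.key | wg pubkey > public.key   # on both machines
                                                        $ cat /etc/wireguard/wg0.conf                            # client side
                                                        [Interface]
                                                        PrivateKey = <client private key>
                                                        Address = 10.0.0.2/24
                                                        [Peer]
                                                        PublicKey = <server public key>
                                                        Endpoint = vpn.example.com:51820
                                                        AllowedIPs = 0.0.0.0/0
                                                        $ wg-quick up wg0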

                                                        1. 5

                                                          I agree with your take for the general public, but I think managing a self-administered VPN is within the capabilities of anyone reading this comment.

                                                          With base OpenBSD and maybe 20-30 lines between configuring iked and deploying an X.509 CA with ikectl, there’s little to screw up. This doesn’t solve the trust problem vis-a-vis one’s cloud provider, but if you’re avoiding known bad actors, this mitigation alone serves to decentralize your footprint to those engaged in surveillance capitalism.

                                                          1. 5

                                                            Wireguard is pretty straightforward to set up as well.

                                                            1. 1

                                                              I never got iked working. It’s a pain point to this day.

                                                            2. 2

                                                          I think “Set up your own” is great advice when you have enough budget.

                                                          Because if your current VPN provider doesn’t host their servers themselves, you are decreasing the number of providers that you need to trust. But also, it is really important to choose the right VPS provider.

                                                              1. 1

                                                            Every commercial VPN charges about the same as a bottom-of-the-line VPS on DigitalOcean, which I thought was sufficient to run a VPN.

                                                              2. 2

                                                            What about ProtonVPN? A lot of people here seem to be using ProtonMail, so I’m curious if this other service is trustworthy too.

                                                              1. 2

                                                                Buy a domain name and then do everything that people are saying here. Fastmail allows custom domains. For each new registration, create a new email name. For example, for Hilton.com, provide hilton@yourdomain.com when registering. In Fastmail you can catch all the names. That will let you understand who sold your email later.

                                                                1. 6

                                                                  I think that it is valuable to own your own domain, and I endorse this advice, but I do have a caveat to add to it.

                                                                  Generally speaking, using a custom domain for your email adds attack surface to any account linked to your email address. See this story about the Twitter handle @N. You very likely want to have a non-custom-domain email which you use for account recovery purposes.

                                                                  For me, the biggest thing about moving away from Google products is not actually that it loses the features or the network effects (Google is prone to shutting down everything I like, anyway), but that it loses the resilience against social engineering. Nothing like that is ever perfect, but at least you should try to minimize how many separate companies’ customer service processes are part of your attack surface, and pick them carefully.

                                                                  I’m using my Google Employee hat here not to give these words greater weight, but to disclose my bias.

                                                                  1. 4

                                                                    In addition, ICANN recommends registrants use an external address for administrative domain contacts: https://www.icann.org/en/system/files/files/sac-044-en.pdf

                                                                    1. 1

                                                                      This is exactly the kind of thing I was looking for, thank you for posting!

                                                                    2. 1

                                                                      Thank you for pointing this out. Unless I’m missing some piece in the chain, that would be: 1. The registrar 2. The DNS provider and 3. the email service, right? (I’m under the impression that with FastMail, they can provide both 2 and 3, but I’d have to check)

                                                                      1. 1

                                                                        That agrees with my analysis, yes.

                                                                    3. 3

                                                                      That will allow to understand who sold your email later.

                                                                      More importantly, it’ll make your identities across different websites unlinkable. Or at least harder to link.

                                                                      1. 1

                                                                        Would + tags (like you+hilton@example.net) help with that or are spammers getting smart and stripping them out?

                                                                        1. 2

                                                                          Some services [mistakenly] consider a plus character not valid for use in an email address.

                                                                          1. 1

                                                                          I’ve seen spammers that know about catch-all domains and are stripping even the unique part. Oh well.

                                                                        1. 9

                                                                            This unfortunately doesn’t cover current Thunderbolt 3 (and maybe soon-to-be USB 4.0) cables. They use the same USB-C connector but have their own range of capabilities, and they add their own confusion to the mix by not clearly identifying which cables support what.

                                                                            For Thunderbolt on MacOS, you get a “Cannot Use Thunderbolt Accessory” notification when you plug the device in if it’s not working properly, but there’s no additional information on why it’s not working, or any indication of whether it’s due to a cabling issue or some other hardware failure.

                                                                          1. 5

                                                                            The whole situation is a total catastrophe.

                                                                            1. 2

                                                                                The way I see it, there are two dimensions: connectors and capabilities. If we want to support each capability on each connector (at both ends of the wire), we’ll need to support the whole 2-D space, obviously. Perhaps the naming could be improved, but I really don’t see any problem with the number of combinations out there. The alternative is losing compatibility (physical/software), which everyone is okay with until things break for them.

                                                                            2. 1

                                                                                Maybe USB 4 will help clarify the issue: force all USB 4 cables to be Type-C and limit the varieties to those with or without power delivery. Then you just have to make sure it’s a USB 4 cable, with no more googling for "USB 3.1" "gen 2" "5 Amp"|"5A".

                                                                            1. 2

                                                                              I’m going to try https://cryptopals.com in a language I’m not familiar with just to see how long I take to learn a new language from scratch (it’s fun!). Most likely going to go with Rust.

                                                                                1. 1

                                                                                  Can’t an attacker just replace the hash with their malicious hash?

                                                                                  1. 1

                                                                                      There’s only one hash. Most curl attacks use the user-agent, timing attacks, etc., so if the returned script is malformed or malicious, the hash would not match whatever’s advertised on the website. This is only applicable when you read the script before piping it to sh; if you pipe scripts without reading them, it’s a lost cause and there’s no way to stop anybody.
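
                                                                                      Mechanically it’s just one extra verification step before anything executes; roughly (the URL and hash are placeholders):

                                                                                      $ curl -fsSL https://example.com/install.sh -o install.sh
                                                                                      $ echo "<sha256 advertised on the website>  install.sh" | sha256sum -c -
                                                                                      install.sh: OK
                                                                                      $ sh install.sh   # run only after the check passes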

                                                                                  2. 1

                                                                                    Is there any threat model where curl-hashpipe-sh is safer than straight curl-sh (with HTTPS and basic partial-content precautions)?

                                                                                    1. 1

                                                                                      It makes sense when your browser’s connection to the package website is trustworthy but the connection you’re curling from isn’t trustworthy.

                                                                                      Which, like, when does that happen? I put my browsers on jank WiFi more often than my servers, and if I can’t trust my server’s upstream why do I trust the install image or RAM?

                                                                                    2. 1

                                                                                      I started writing something similar a while ago but never finished it: https://github.com/zimbatm/curlsh

                                                                                      The tricky bit is that because curl and bash are available almost everywhere, they are being used for bootstrapping. So that tool would also have to be distributed widely.

                                                                                    1. 12

                                                                                          Some of us appreciate the attempt at politeness when people ask to ask. It feels really brusque otherwise.

                                                                                      1. 23

                                                                                        You can be polite while also just asking your question: “Hi there, I’m trying to do X with Y, and I’m stuck with Z. Any helpful tips? Thanks a lot!”

                                                                                        1. 2

                                                                                          Isn’t that basically what the linked articles in this thread suggest?

                                                                                        2. 11

                                                                                          If you’re at a gathering in-person, for sure; don’t just start a monologue. Such politeness is useless in async mediums like IRC and mailing lists IMO.

                                                                                          1. 12

                                                                                                Mailing lists, sure. On IRC or other nominal community centers, though, such abrupt inquiries feed even further into the one-way flow of effort in a lot of these communities.

                                                                                            It sucks to put the effort into helping newbies when all they do is show up, treat it like a real-time Stack Overflow, and vanish again until their next question.

                                                                                            I get the strong streak of antisocial behavior in programmers, but we’re not doing ourselves any favors by encouraging it.

                                                                                            1. 1

                                                                                                  I find it polite to stick around and maybe help someone else with a question I can answer. Trying to give back does get frustrating, though, even when someone opens with a well-formulated question and then quits. Maybe people don’t have shells or w/e, so this shouldn’t be attributed to bad manners, but still…

                                                                                                  Nothing is a utopia: even as a member of the channel community, knowing that people are online who can help you better than SO doesn’t guarantee they give a fsck or have time for you.

                                                                                          2. 4

                                                                                            I, too, generally feel awkward about asking a question without also somehow indicating ‘I respect you and your time, and I understand you have no obligation to me, and any answer would be kindness and helpfulness on your part’. There is also, of course, the need to actually respect somebody’s time by getting straight to the point / making it as easy as possible for them to answer or decline.

                                                                                            I generally combine these needs by asking my question straight away, but putting a polite/humble opening at the start of the same message.

                                                                                            Hi, I’m not sure this is the right place to ask this question; please feel free to ignore this if it isn’t. My question is as follows: [summary, what I’m trying to do, details, what I’ve tried, …]

                                                                                            1. 3

                                                                                                  The problem with the kind of tentative open exampled in the article is that it is indistinguishable from spam. You don’t know if it’s worth your time until they come right out with what they need. One can certainly introduce themselves and provide thanks in advance, to be polite and appreciative, without requiring the audience to guess at their intention or coax the topic out of them.

                                                                                              1. 3

                                                                                                It’s not politeness. People do this because they want to know they’ll get an immediate response if they bother to type out their question. They want to know they have a captive audience.