1. 4

Coming from a job as a systems administrator in a Windows/Linux environment to my new job in a Mac environment, I miss PowerShell a lot. I want to start using it on the Mac, though the experience on Windows was fantastic, especially because of the deep system integrations. And prior to that job I was “Unix all the things!”, so I definitely did not anticipate becoming a PS fan.

    1. 7

      I just use NetNewsWire locally on my phone. Anything I want to read elsewhere I throw in Instapaper. I just got tired of trying to find the right combo of aggregator and client, especially since I had to use so many different platforms. Going local has really simplified the system for me.

      1. 2

I have a similar setup: NetNewsWire on phone and laptop, and Feedly for sync.

        1. 2

I have NNW on my phone and laptop, with similar, but not identical, subscription lists. I don’t want to bother with a third-party sync service, so I just read most things twice (it helps with retention).

        1. 25

When I encounter material like this I usually just skip to the distribution recommendations.

          Avoid distributions that freeze packages as they are often quite behind on security updates.

This is a common fallacy, and they give their reasoning below: that most security issues do not get a CVE. That is only true if you believe every use-after-free, buffer underflow/overflow and other C issue is inherently a security issue. You might disagree, but I don’t think that is the case, as many lack any demonstration of being exploitable. It sidesteps the actual problem of CVEs versus security issues (in my opinion).

          Use a distribution with an init system other than systemd. systemd contains a lot of unnecessary attack surface; it attempts to do far more things than necessary and goes beyond what an init system should do. An init system should not need many lines of code to function properly.

This is again the same old rambling from anti-systemd enthusiasts. It is only true if you consider local exploitability, which I regard as a non-issue; you have other problems at that point. Since you can’t prevent security issues, or exploitable issues, you should seek out projects that take security issues seriously and demonstrate they can handle them once they are discovered. Remember, the number of CVEs in a project is a sign of maturity, not insecurity. I’d be more cautious of software that is popular but has no reported issues; it means people are not looking.

          Use musl as the default C library. musl is heavily focused on minimality which results in very small attack surface whereas other C libraries such as glibc are overly complex and prone to vulnerabilities. For example, over a hundred vulnerabilities in glibc have been publicly disclosed compared to the very few in musl. While counting CVEs by itself is often an inaccurate statistic, it can sometimes be used to represent an overaching issue such as in this case. musl also has decent exploit mitigations, particularly its new hardened memory allocator.

Again, I disagree with the premise and the conclusion. While the musl project is great, I don’t think these comparisons are useful; they just further FOSS maintainers’ distaste for CVEs.

          Preferably use a distribution that utilizes LibreSSL by default rather than OpenSSL. OpenSSL contains tremendous amounts of totally unnecessary attack surface and follows poor security practices. For example, it still maintains OS/2 and VMS support — ancient operating systems that are multiple decades old. These abhorrent security practices are what led to the dreaded Heartbleed vulnerability. LibreSSL is a fork of OpenSSL by the OpenBSD team that applies superior programming practices and eradicates a lot of attack surface. Within LibreSSL’s first year, it mitigated a large number of vulnerabilities, including a few high severity ones.

Again, while LibreSSL is a cool project (at best), I don’t think it serves the same purpose it did right after Heartbleed. OpenSSL has gotten a ton of eyeballs and development time since then, and LibreSSL breaking APIs, which leads to extensive patching upstream, makes proper adoption hard. It’s also still C, so “superior programming practices” is moot.

          TL;DR

This reads like an ode to some favorite distribution or way of living instead of giving sound advice. If you want to harden your system you should consider two things: exploitability and post-exploitability.

Post-exploitability: if systemd does get pwned, you can still mitigate some attack vectors. This is where compilation flags are important, along with kernel hardening. If this is what you care about, please do secure your system; a lot of the notes here are good in this regard as well. This is where I believe the threat model from Qubes OS comes in (please correct me on this).

However, if you care about exploitability, following this guide doesn’t give you much. You want a distribution that cares about reacting to security issues and CVEs and applying the appropriate patches. Which distributions are these? I usually defer to the Openwall distros embargo list. Distributions that have the overhead to participate have the intention of doing the right thing.

If the distro is not on this list, try looking up their security team and figure out if they are organized and publish advisories. This is again a good indication that they are trying to do the right thing. But it’s important to realize that security teams in volunteer-run distributions can never get to all of the CVEs. That can only be done by the three enterprise distros: Red Hat, Canonical and openSUSE. There are collaborations between all of us, but it’s a lot of work.
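
As a practical aside (and assuming an Arch system, since that’s what I know best), you can get a feel for how well a distribution keeps up by comparing your installed packages against its security tracker:

# arch-audit queries the Arch security tracker (security.archlinux.org) and
# lists installed packages with open advisories; Debian’s debsecan is similar.
arch-audit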

          Disclaimer: Been contributing to the Arch Linux security team since 2017.

          1. 10

            This article provides all sorts of advice but lacks a threat model. I don’t think its purpose is to provide one, and that’s why caution should be taken when implementing some of this. Really, a defense in depth style approach would be better: assume someone does have root via a local privilege escalation, if that’s what you’re worried about. How do you protect your assets?

No one who runs and has to secure a large academic cluster is going to run Gentoo, for instance. They’re probably going to install RHEL. The real question is how your defense in depth works, not if and when someone inevitably pwns your systemd.

            (Of course, if that’s part of your threat model, you may want to include it in your defense in depth too.)

            1. 5

              This article provides all sorts of advice but lacks a threat model.

Yessss, more of this. We can discuss security, but without a proper model of what we are protecting you are just doing a lot of fancy theater without really getting anywhere. Having a realistic model helps you implement proper measures and should be the starting point of any hardening endeavor. Concepts such as “exploitability” and “post-exploitability” are maybe bad words, but they give people something to hang ideas on.

              1. 2

                One thing I’ve heard reiterated is that a checklist is a “bare minimum” when implementing secure systems. This article is a checklist. Like any other checklist, following it exactly will likely result in an unusable system, and only following it will result in a false sense of security.

                1. 2

Those checklists are usually either guidelines or baselines internal to your company.

                  Guidelines are usually thought through and apply the security concepts mentioned above like defense in depth.

Baselines are the minimum that needs to be done for your system to meet the internal requirements of the company.

A random checklist on the internet, though, isn’t either of those two, as it’s totally external to your company.

For example, no financial institution is going to use Void instead of RHEL, because of the support. Nor will they start to upgrade all packages as soon as they’re available without testing them through their change management process.

                  1. 2

                    following it exactly

.. will result in a homogeneous state where you can take down all of the systems that follow this guide to perfection?

                    1. 2

                      lol, yeah, exactly - at that point, you only need to look at the guide to figure out how to attack anything that used it. Though some checklists are more just best practices. Like, don’t use empty root passwords - that sort of thing. Monotonically increasing security is the idea, but obviously not all checklists are going to do that…

              2. 7

                Oh, almost forgot.

                The most common verified boot implementation is UEFI Secure Boot however this by itself is not a complete implementation as this only verifies the bootloader and kernel, meaning there are ways to bypass this:

                Yay, no anti-Secure Boot FUD. Always a relief.

                UEFI secure boot alone lacks an immutable root of trust so a physical attacker can still reflash the firmware of the device.

This is wrong: most modern machines have TPMs, which work as an immutable root of trust if you are utilizing secure boot. You don’t have to use either the Intel or the AMD options for this. It would also detect firmware flashing.
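
As a rough illustration (assuming a Linux machine with a TPM 2.0 and the tpm2-tools package installed; the PCR selection below is just the conventional one), you can look at the measurements that change when firmware or secure boot state is tampered with:

# PCR 0 covers firmware/boot code and PCR 7 the secure boot policy; a reflash
# or policy change shows up as different values on the next boot, and anything
# sealed against these PCRs (e.g. a disk encryption key) stops unsealing.
tpm2_pcrread sha256:0,7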

                1. 1

                  A proper secure boot implementation should be doing measured boot too, no?

                  1. 1

It’s not a “secure boot implementation” but a “verified boot implementation” that would be doing both measured boot and secure boot.

Secure boot itself doesn’t do more than authenticate the files you are booting.

                2. 12

                  Use a distribution with an init system other than systemd. systemd contains a lot of unnecessary attack surface; it attempts to do far more things than necessary and goes beyond what an init system should do. An init system should not need many lines of code to function properly.

                  I saw this and just closed the tab.

                  1. 3

Right, I get that. But I’d still read over it, as you might get an idea or two. Taking it at face value, though, is not really going to give you a lot more than a painfully broken system.

                1. 5

                  This is an oldie but a goodie, for the simple fact that git is not only so common, but that knowledge of its ins and outs is valorized among programmers. I should be perfectly happy if someone read this entire page and the only lesson they took away was, ‘wow, git is a very poorly designed program.’

                  1. 6

                    I am in the midst of teaching my web development students how to use git and - as is true every year - I am appalled at how convoluted it is. Git is the software equivalent of English.

                  1. 1

                    The ratio of words-read to words-understood in that essay is making me rethink my life right now. I had absolutely no idea how deep that rabbit hole went.

                    1. 4

                      A split staggered keyboard makes no sense to me. I have an Ergodox and a Kinesis Advantage and the columns are much more logical if you can turn the parts or have better spacing.

                      Also, the relearning for those is trivial. Ok, the arrow keys are a pain!

                      1. 1

I love my ErgoDox, but after months of trying I simply couldn’t get used to the default arrow keys, so I moved them all to the right half, in the Vim order along the bottom row, and it’s working much better for me now. I just couldn’t adapt to using two hands for the arrow keys.

                        1. 1

                          Yeah, I also ditched the default layout. I don’t like that bottom row. I don’t use it now. The rest of the hardware layout is good though.

                      1. 2

In all these cases there were ways to keep a flat UI while making the interactive elements stand out more.

In the first example, they went from what clearly looks like a button to a white box on a white background with a purple border. It should have been possible to make that button flat, but give it a subtle shadow or something to make it stand out more.

                        I think it’s an interesting study but they aren’t really saying what should be done to improve flat UIs. Going back to skeuomorphism is probably not an option as it would make the UI look dated, but there has to be a middle ground that can work.

                        1. 3

                          Going back to skeuomorphism is probably not an option as it would make the UI look dated, but there has to be a middle ground that can work.

                          I think the concern (and I am not saying this was your argument, the sentence just led me in this direction) with looking dated is where the problem pretty much lies. When you look at industrial or medical products, there is an overwhelming concern with usability, not novelty. Yet there is still variety. I think that UI designers can achieve variety and aesthetic pleasure without abandoning a good idea simply because it looks old. This will probably happen naturally over the coming decades as these technologies become normal, which is why we no longer see insane steering wheels except in concept cars.

                          1. 3

                            It should have been possible to make that button flat, but give it a subtle shadow or something to make it stand out more.

I don’t want to “no true Scotsman” things here, but at that point it’s no longer a “flat UI”, right? I think the objection is mostly against UI elements which are truly flat, not those that are “less 3D”. I often use shadows myself, which gives kind of a “pop-out 3D” similar to the 3D UI of yesteryear, except more, well, fashionable I guess?

If you look at the GNOME/Librem screenshots in another reply here, the “3D effect” is done by using a gradient background which is slightly different from the surrounding colour, but with a solid border colour which doesn’t pop out (which is what the old UIs used) – stuff like the Bootstrap CSS theme also does it like that by default (or at least, used to – I haven’t used it in a few years).

Even the buttons on Lobsters, which have a solid border colour and solid background colour, kind of “pop out”. It takes very little to fool our brain into thinking something “pops out”, and the problem with flat UI is that it makes no effort at all to do that. Anything that does make that effort is – as far as I’m concerned – not really a flat UI.

                            1. 2

                              Exactly. There’s a lot you can achieve with subtle suggestions of depth without going in completely the opposite direction and distracting from the content (or just blending in; there’s a reason road signs aren’t detailed and realistic). I’m actually really glad for the cleaner interfaces we have today in general but it still needs to be done thoughtfully. There are definitely too many completely flat designs that just jump on the trend without putting any thought into why or how. That’s lazy. But a UI can certainly take elements of flat design while remaining highly usable.

                            2. 2

There most certainly exists a middle ground; signifiers are on a spectrum, and the example of underlined vs. merely contrasting-colored links shows you can get away with weakening them (although underlined links are not something I’d personally recommend) and users will still figure them out. This article goes into more detail on how to improve flat UIs.

                            1. 2

Very interesting point about asynchronous conversation on a shared document. I am definitely in the market for decent tools here. I can’t find a tool that I’m happy with. I’ve tried

• Google Docs (and suite)
• MS Word Online
• Quip
• Confluence
• VS Code
• MS Whiteboard
• Google Colab
• Markdown in git

but all fall short one way or another: poor collaborative editing, lack of diagrams or image management, or comments that are either broken or a pain to use. I wish for a collaborative org-mode, but I’d be happy to settle for Markdown with LaTeX equations and comments. The closest I’ve used was Quip, although it is not without its share of problems. Does anyone have other suggestions?

                              1. 3

                                If you work in a G Suite environment, Coda.io is pretty great. In the last few weeks they have added support for non-Google logins so you don’t need to use G Suite or Gmail to authenticate.

                                1. 2

                                  Coda.io

That looks very, very interesting; it also has LaTeX math formulas! Thanks!

                              1. 24

I have run both static and WordPress sites in the past, and WordPress definitely has a better publishing experience for most writers than a static site. Whenever the experience of publishing static sites is criticized, I usually only hear scoffing from developers glancing up from their wall of terminals: “How could this be hard for anyone else?” But the fact of the matter is that for most people, it is. If static is superior, then it would be awesome to see developers making tools that make it easier for the average writer to publish a static site, instead of being confused about someone “not getting Jekyll”. Make it easy to do it the right, secure way. My position right now makes me the interface between a lot of “average” users and complicated tools, and especially now that we are working remotely, I am intensely interested in making processes easier (and still safe) for my users, even if I have to sacrifice technical purity. But when I can help them complete the task in the best way, easily, that’s a big win.

                                1. 25

The tools exist. You can add a content manager on top of a statically generated site, like Netlify CMS or Forestry, just to cite two of them. They’re called headless CMSs and there are a lot of them.

There are other advantages to going static that @kev doesn’t talk about in this post. For example, I think the most important one for me is that you can skip the trouble and cost of maintaining a server with PHP, a database and a web server running 24/7.

                                  But at the end of the day I don’t think there’s only one right way to do things. I agree with @kev:

                                  WordPress is far from perfect, but it works for me. If using a static site works for you, that’s great. It would be a very boring world if we all liked the same thing.

                                  1. 2

I’ve tried Netlify, and I wasn’t impressed. Compared to just unzipping a version of WordPress on an Apache server and setting up a MySQL database, it felt very counter-intuitive and complicated to set up and use, and I didn’t even get it to work in the end. All these headless CMSs seem very ad hoc.

I think the most important one for me is that you can skip the trouble and cost of maintaining a server with PHP, a database and a web server running 24/7.

                                    As @kev mentioned elsewhere in this thread, you still need a web server running to serve static content, which almost certainly has PHP enabled anyway. The only difference is that you don’t need a database. And I admit that databases are somewhat opaque compared to how static site generators structure the content, but there’s a reason that most CMSs store their data in a SQL database rather than in plain text files.

                                    (As a sidenote, I’m not a huge fan of CMSs either. I myself have a static site, I just don’t use a static site generator.)

                                    1. 1

                                      You generally don’t host a static site on an actual web server; Netlify or GitHub/GitLab Pages or S3 or whatever is a layer of abstraction on top of that. I use Google Cloud Storage for https://snazz.xyz, so I build my site locally and have a script that copies the files to my GCS bucket.

                                      For my site’s traffic usage, replicated hosting on multiple continents is free (and then I pay by the GB of bandwidth after the first 5GB). Plus, I don’t have to do any maintenance whatsoever.

                                      1. 5

                                        I just can’t see how this is simpler than just having a web server running. And I feel much more in control running my own web server on a VPS than hosting my site in Amazon’s or Google’s cloud.

                                        1. 1

                                          I understand the need for control but I think that one of the main benefits of running a static website is that there is almost no vendor lock-in. That’s why I feel confident about hosting my personal blog on Netlify. If the company goes out of business tomorrow, migrating to something else will take 10 minutes at most.

                                          1. 3

                                            There’s still much more vendor lock-in than with a simple web server. A WordPress installation looks identical, no matter what VPS it is hosted on. Same goes for Apache. But Netlify’s “in-browser edit” interface is different than GitHub’s, which is different than GitLab’s, and so on. If you want to be truly free, you can never really allow yourself to get used to Netlify/GitHub/etc, because if you get used to any specific service, the barrier for leaving it will be higher.

                                            It’s not a huge deal, but it’s a big enough deal for me to feel uncomfortable with it.

                                            1. 1

                                              From my observations, people treat the web editor as last resort (among github users, its use for anything is strongly discouraged—in collaborative development context, for valid reasons).

                                              The advantage people love those things for is pushbutton deploy: you push generated pages to git, and the rest happens without you. With Github Pages’ built-in Jekyll, you push source files/configs/templates to git and generation also happens without you.

Myself, I’m not fond of autogenerated files in git, and rsync+ssh to my own web server is all the deployment automation I want (I made it a make target), but for some it’s a real selling point.
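
For the curious, the whole thing is roughly one rsync invocation; the paths below are placeholders (public/ for the generator output, user@host:/var/www/site for the destination), not my real setup:

#!/bin/sh
# Hypothetical "make deploy" body: copy the generated site to the web server
# over ssh, deleting remote files that no longer exist locally.
rsync -avz --delete public/ user@host:/var/www/site/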

                                              1. 2

                                                From my observations, people treat the web editor as last resort (among github users, its use for anything is strongly discouraged—in collaborative development context, for valid reasons).

                                                The reason I focus on the web editor is that lots of people in this thread are presenting it as a perfectly viable alternative to WordPress’ web editor, which I don’t really think it is.

                                                The advantage people love those things for is pushbutton deploy: you push generated pages to git, and the rest happens without you. With Github Pages’ built-in Jekyll, you push source files/configs/templates to git and generation also happens without you.

                                                I don’t know… I can’t see how this isn’t just another step in the process of updating the site that just makes it more complicated. Wouldn’t the most effortless solution just be a traditional shared web host with FTP access?

                                                (Also, with GitHub + Jekyll, you still need to generate it on your own system to preview it, so I don’t see the benefit of any generation happening on GitHub, and don’t get me started on issues of version mismatch between GitHub’s Jekyll and my local installation…)

                                                1. 1

version mismatch between GitHub’s Jekyll and my local installation

On Netlify you can specify the version of the SSG you’re using. I use Hugo and I can set it to the same version as my local one.

                                          2. 1

                                            I think it’s just a tradeoff. Instead of managing SSH keys, apt unattended-upgrades, and Certbot, I just run this script on my computer every time I want to upgrade my site:

#!/bin/sh

# Rebuild the site from scratch and sync the output to both GCS buckets.
cd ~/everything/site || exit 1   # bail out rather than rm/rsync from the wrong directory
rm -rf public
zola build && cd public || exit 1
gsutil -m rsync -d -r . gs://www.snazz.xyz
gsutil -m rsync -d -r . gs://snazz.xyz
                                            

                                            Maybe this is more complex overall, but the amount of stuff I have to keep in my head is much reduced this way. I can see why you might prefer to maintain control over the web host, but I’m perfectly happy not having to do any sysadmin tasks.

                                            1. 2

                                              Instead of managing SSH keys, apt unattended-upgrades, and Certbot

                                              None of these are strictly necessary. I have no need for a non-self-signed SSL certificate, so I don’t need to worry about renewing it.

                                              Maybe this is more complex overall, but the amount of stuff I have to keep in my head is much reduced this way. I can see why you might prefer to maintain control over the web host, but I’m perfectly happy not having to do any sysadmin tasks.

                                              I understand, I didn’t mean that managing a web server is zero-effort in any way.

                                    2. 10

                                      10 years back, my friends over at TheConversation.com.au created a “live” static site generator.

The public site was nginx serving HTML files off disk, but there was a full CMS with a database, versioning, etc. generating those files whenever an article was updated.

                                      This architecture had the significant advantage that it was really, really hard to cause a public facing outage.

                                      Also, truly huge amounts of traffic could be handled by a single, quite small server, even before adding a CDN to the picture.

                                      1. 4

Yes, the ability of even a small server to handle enormous numbers of page views is a definite technical advantage of a static site. I just think the number of moving parts in an SSG ends up nullifying the advantages in the eyes of the prospective publisher. Tools that couple the publishing interface (especially on mobile) of WordPress with the speed and small attack surface of a static site are definitely in order. @kev mentioned Grav as one such tool in some of his comments, although I haven’t had the chance to try it. But my experience with SSGs eventually drove me back to just writing straight HTML in a decent IDE.

                                        1. 3

                                          It’s been a loooong time, but isn’t that basically how MovableType worked?

                                          1. 2

                                            MovableType definitely has a mode which does that. Charlie Stross’ blog is static and built with MovableType (or used to be)

                                            1. 2

                                              One of the big features of WP over MT back in the day was “instant publishing”, you didn’t have to wait for the time-consuming “rendering” step.

                                          2. 1

                                            I used to do something very similar with a home baked thing in ruby. Once I hit on it I was surprised it wasn’t a more common pattern.

                                          3. 4

                                            Couldn’t agree more.

                                            By the way, if you’re looking for a happy medium, check out Grav (https://getgrav.org). I used it for a while, but did run into some issues with it.

                                            1. 3

                                              I think the thing missing from this discussion is that a lot of writers have their preferred text editor. I hate environments where I’m expected to write a large amount of text in anything other than my favourite text editor. Most writers I have met have an editor that they like (and often have customised a lot, with off-the-shelf plugins and custom key bindings / macros, even if they’re not programmers). If they’re using WordPress, they’ll write in something else, then copy and paste. If their favourite editor has native Markdown support, then they’ll use that and then copy it, otherwise they’ll paste text and then faff getting the formatting right.

The thing that they usually want is a mechanism to push directly from their editor to a live preview and then to deployment. That’s generally easy to hook up with a static site generator driven from git and a scriptable editor (these days, that means anything that’s not Notepad), but for a lot of commercial sites it often ends up involving a manual step where someone emails a Word document.
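
As a sketch of what “driven from git” can look like on the server side (everything here is made up for illustration: a bare repo on the web host, Jekyll as the generator, /srv/blog-src as the checkout and /var/www/blog as the docroot), a post-receive hook is usually enough:

#!/bin/sh
# Hypothetical post-receive hook in the bare repo on the web host: check out
# the pushed source and regenerate the site straight into the docroot.
GIT_WORK_TREE=/srv/blog-src git checkout -f main
cd /srv/blog-src && jekyll build --destination /var/www/blog

The “push from the editor” half is then just a key binding or macro that commits and runs git push.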

                                              1. 3

                                                I started blogging with UserLand Radio and later Movable Type. While primitive by today’s standards, this software attempted to bridge these worlds. The blogging experience was through a dynamic application and the publishing output was static HTML.

As time progressed, I would often output “SHTML” (server-side includes) or PHP with these tools. That way you could include more complicated dynamic pages for contact forms, surveys, etc. without using CGI scripts.

I’m curious as to whether the authors of tools like Grav, etc. have any experience with these older tools? Once WordPress appeared on the scene, everyone ran in that direction for the ease of publishing, but we’ve had a lot of pain with maintenance and security vulnerabilities since. I’m considering building something dynamic for my personal site, which combines link blogging and photography, because I know I’m more likely to publish to it when I have time, which also happens to be when I’m on my phone.

                                                1. 3

                                                  Radio and Frontier were quite advanced. They still have many features that are not present elsewhere. I miss Radio…

                                              1. 1

                                                Wistfully remembers the time before he was a sysadmin when this just meant checking his laptop to make sure it updated and then moving on with his life.

                                                1. 2

                                                  I am conflicted about this. The author implies (at least to me) that he used a deauth attack on the neighbor’s device. While I definitely am uncomfortable with the combination of these devices in the close quarters of an apartment complex, I am also concerned about interfering with the safety of someone else’s home. I would be interested to know what the legal standards are for apartments with exterior security equipment.

                                                  1. 1

                                                    No, he used a deauth attack on a device that he bought and owns.

                                                    I don’t have a video doorbell but my understanding is that their target market is homeowners who want to keep tabs on the most accessible part of their private property, the front door.

His main gripe seems to be that someone in his apartment building bought a video doorbell and put it outside their door in the shared hallway. Now that person has a permanent, effortless record of his or her neighbors’ comings and goings, and I can see how that could be problematic. However, instead of taking it up with the neighbor or landlord, the author chose to demonize the company that makes the doorbell instead.

                                                    1. 6

                                                      “However, instead of taking it up with the neighbor or landlord”

                                                      Like that was going to go anywhere.

                                                      “the author chose to demonize the company that makes the doorbell instead.”

The author chose to demonize the poor security practices of a security company making surveillance equipment. That’s probably all the author can do, other than ignore the problem.

                                                      1. 2

                                                        No, he used a deauth attack on a device that he bought and owns.

He only admitted to using the deauth attack on his own device, but the implication is that he wouldn’t have bought his own device at all if not to learn how to protect himself from his neighbors’ devices.

                                                    1. 6

                                                      This utility is portable, which means that you are no longer required to go through the installation process and no leftovers will remain on the HDD after its removal. if you place the program file to an external data device, you make it possible to take Windows Update Blocker everywhere with you and use it on any computer you come in contact with, who has things configured just right, and doesn’t want to take the chance that an update might mess up their system.

                                                      As a sysadmin, this sounds like “You can mess around with lots of computers and forget which ones you left in a non-updating and unpatched state! Yipee!” I guess if you’re just hacking your personal workstation that’s one thing, but the idea of someone using this on multiple computers gives me a bad feeling…

                                                      1. 1

Is the notify form not working for anyone else? I’ve tried three different browsers and I keep getting a “You must write an e-mail.” error.

                                                        1. 1

It’s not just you; I ran into the same issue. So I’m either signed up about eleven times or it’s not working.

                                                          1. 0

                                                            Haha same.

                                                        1. 1

I’m a big user of Linux and BSD at home, but at work (I sysadmin a Pre-K to 12th grade school) the vast majority of our infrastructure is Windows-based, from servers to workstations. Instead of trying to force one to be like the other, I just treat each one as what it is, so I’ve worked to optimize my use of Windows using “Windowsey” paradigms. PowerShell is pretty great, and window management is superior to the Mac’s out of the box. My biggest frustration is that many of the end-user security defaults are not best practice, but this is slowly starting to change.

                                                          1. 2

                                                            I live about an hour and a half away from NYC, but I’d love to attend if at all possible!

                                                            1. 2

                                                              In July 2017, security firm Dr. Web reported that its researchers had found Triada built into the firmware of several Android devices, including the Leagoo M5 Plus, Leagoo M8, Nomu S10, and Nomu S20.

                                                              Suggested article title: Google confirms that advanced backdoor was installed on a handful of Android devices by people who had supply chain access so good luck

Clickbaity title aside, if you have physical access to the manufacturing or firmware programming stages of any device, the end user is in no position to stop you from ruining their day. It might be easier to accomplish with a manufacturer like Leagoo, but it’s not impossible with what we consider more “reputable” manufacturers like Apple or Samsung. But the name of the game with technology these days is hoping that manufacturers aren’t selling you out. Not a happy place to be.

                                                              1. 1

                                                                (I initially drafted this as a response to https://lobste.rs/s/fifilx/hello_android#c_w3eaug but found that it addresses a number of replies, so I chose to make it top level)

To everyone saying that you should just pick a different device: consider that I can still buy a second-hand iPhone 5S for as little as 150 EUR and receive all operating system updates by just hitting the update button. For just a little more, I can get a refurbished 6S, and for a bit more a new one, which is still a very capable device. I have a wide range to choose from for every purpose, with a similar core promise and a track record.

Which is @tedu’s biggest point: navigating the Android space for a secure device is terrible, because the ecosystem fails at the basics, notably keeping devices in support. Especially for a backup/travel device, I want to be sure that I can just pull it out, hit “update” and be happy.

The reasons outlined are exactly why I moved off Apple in all aspects of my computing life except my phone: I have expectations for support, I want to keep my devices as long as possible, and Apple has delivered in that space. I have owned three smartphones since smartphones have existed, and I intend to use my current one until it dies.

A lot of the other issues described in the post are fundamental UX problems of the Android space. iOS has a well-working “update tonight” mode and will not use your mobile data.

It’s a super-hard failure for Android, which set out to be an operating system for everyone, that it can’t keep everyone supplied with the basics, notably operating system updates. Recommendations like “just buy top of the class/Google” are odd, because that kinda says that Android is only competitive with iOS if you buy in the same price class or from a single vendor. It’s basically supporting what tedu says directly in the post:

                                                                We can’t let the poors get access to the good updates.

Currently, if you want the poors to get access to the good updates without making them learn the intricacies of flashing their phone, the only proper recommendation is an older-generation iPhone.

                                                                No one should need to learn how to flash their phone to stay secure.

I don’t want to say you shouldn’t be happy with Android or that you shouldn’t get a phone that you can tinker with. But the outlined flaws are very well considered; rebuttals like “just get a different device” fall way short of the effort that went into the original post.

I’m very rigid in one of my views: from a security and a fairness perspective, we can’t have devices falling out of support that quickly, and this is one of the biggest problems we currently have in the phone space. Defending Google or the vendors using Android on that front doesn’t advance things in a world where we have managed to supply free, usable, well-supported and cheap updates for other mobile devices like laptops.

                                                                Don’t get me wrong: the moment there’s a “Fedora/Ubuntu for Phones” which builds that track record, I’m off iOS.

                                                                1. 1

Google’s Nexus lineup, along with the OnePlus One, used to address the concern of having an affordable yet up-to-date device on the market. Now, I would recommend the Pixel 3a as that device: relatively cheap (although still grossly inflated compared to five years ago) and guaranteed the latest updates for at least three years. However, I’m watching Purism closely, because I feel like it could bring some longevity to the mobile space.

                                                                  1. 1

                                                                    “Used to” being the key word. “Guaranteed up to three years” is also quite short, given the talk about planned obsolescence going around and the track record of Google in practice.

                                                                    We’re in the odd situation where Apple gives no guarantees, but outclasses everyone else.

                                                                    The whole point of the original post is that while such devices may exist, ecosystem trust is also a thing and I cannot rely on an Android device (a name vendors have to apply for) to generally be supported beyond sales date :). That trust has been eroded.

                                                                1. 7

                                                                  I’ve noticed some of these comments as well. The ones that stand out in my memory were directed at the poster, and not the content. Which makes me think that “off-topic” would be appropriate, since “troll” implies the commenter’s motive is suspect, and that’s not always the case. “Off-topic” doesn’t ding his or her motives, but just points out that what was said isn’t really relevant to the discussion. Attacks on a person - intentional or otherwise - aren’t advancing the mission of this site. So I guess I would start with “Off-topic” and escalate to “troll” for something egregious that’s clearly intended to provoke a reaction.

                                                                  1. 2

                                                                    A longer typing demonstration has been posted, although I had hoped it would also show the use of the meta keys and the trackball.

                                                                    1. 2

                                                                      It seems like pressing the upper row of keys would be challenging since you’d be stretching your fingers straight out. Does it work differently in practice than I’m visualizing?

                                                                      1. 2

                                                                        Hi! Maker here. Thanks for the good question, we’d like to add this one to the list of FAQs we’re making now.

Key reachability has been one of our primary concerns, so we’ve tested the prototype with someone whose hand is probably smaller than most adults’ (middle finger length 7.5 cm from the metacarpophalangeal joint to the fingertip), to confirm that all keys are really reachable from the default position. The answer is yes.

                                                                        But it also works with people with bigger hands. The trick is that you can adjust your placement on the “hand rest” to set the starting point of your fingers. So, the recommended usage is, you first adjust the position to make sure the fingers can reach the furthermost keys, and place the hand firmly on the hand rest. Then you can curl the fingers to reach all closer keys.

                                                                        If you have any other questions, please never hesitate to ask. Thank you. :)

                                                                        1. 1

I watched the new demo video and I think I see more clearly what you’re describing. It does look like you’re able to reach everything without moving your whole hand or stretching your fingers unnaturally. Will the furthest rows have a lower actuation force than the closest rows?

                                                                          1. 2

Oh… That’s quite an interesting idea. We haven’t tried it that way. The prototype uses Cherry MX Blue switches for all keys. So far I haven’t felt uncomfortable pressing the furthest keys (with our current default layout, they are mouse buttons). But perhaps different switches for the furthest buttons would improve the usability. I will certainly consider this. Thanks!