1. 2

    I use FreshRSS for my feeds, and the Reeder app on iOS, which connects to my FreshRSS instance. On the whole it works well and I’m very happy. Plus, it’s self-hosted, which is nice.

    I’ve used Tiny Tiny RSS in the past, but the lead developer can be obnoxious at times, so it put me off in the end.

    I’ve also heard really good things about Miniflux, but I like the simplicity of uploading the files and configuring a DB in a configuration file. Unfortunately, Miniflux is more involved than that. I do believe they offer hosting for a very small fee, though.

    1. 1

      Looks interesting. Is there a similar app that can check passwords stored in pass?

      1. 3

        gopass has an audit feature.

          Detected a shared secret for:
          Password is empty or all whitespace:
          Password is mangled, but too common / from a dictionary:
          Password is too short:
          Password is too systematic:
        
        1. 1

          Perfect! Exactly what I was looking for :)

        2. 0

          Not sure, as I’ve never used Pass. However, if it has an export feature, you could do that and pivot the data in something like Excel.

        1. 13

          What a misleading title :) I thought this would be a code review of the password manager software itself.

          1. 1

            Oh, that’s a good point, I hadn’t considered that people would interpret the title as a Bitwarden code review. You’re absolutely right, it is misleading so I’ve updated the title, thank you for the feedback.

          1. 3

            I recently did a similar review of my KeePassXC password database. Since KeePassXC doesn’t have auditing tools like the Bitwarden ones used in the article, here’s how I audited:

            Reused passwords and weak passwords

            I right-clicked the Password column and unchecked Hide Passwords. Then I left-clicked the Password column to sort by password and scanned down the list. Reused passwords showed as identical adjacent entries.

            Exposed passwords

            I manually pasted the email addresses I use into https://haveibeenpwned.com/.

            1. 0

              A little more manual, but not by much, and still the same result. Nice work and thanks for sharing, as I know A LOT of people use KeePass, myself included (for work).

            1. 36

              Or simpler, without the need for an additional <a>:

              <h1 id="pageTitle">Title</h1>
              
              ...
              ...
              
              <a href="#pageTitle">Back to top</a>.
              
              1. 26

                I don’t even understand why there was an article written for this… It’s so obvious to anyone with a basic understanding of HTML.

                1. 19

                  That’s the thing - many website owners, especially those who use WordPress, don’t know basic HTML. So they may be inclined to install a plugin instead of putting a couple of simple lines of HTML into their theme.

                  Plugins aren’t inherently bad, but they add unnecessary bloat. Especially for something as simple as this.

                  1. 4

                    unnecessary bloat

                    … then maybe don’t use WordPress. ;)

                    /s

                  2. 8

                    because if there is no article about it, this “obvious” knowledge becomes lost, overcomplicated solutions float to the top of search results, and everyone starts doing it the stupid way.

                    1. 8

                      As someone who recently had to start doing frontend work at my job, I just want to say this is non-obvious to me and I appreciate the article and comments.

                    2. 8

                      Apparently “#top” or just “#” not only works (I remember #top from… the early days, Netscape 4.x?) - but it’s in the standard:

                      “Note: You can use href="#top" or the empty fragment (href="#") to link to the top of the current page, as defined in the HTML specification.”

                      https://developer.mozilla.org/en-US/docs/Web/HTML/Element/a

                      Where the spec is… I’d say more than a little obtuse: https://html.spec.whatwg.org/multipage/browsing-the-web.html#scroll-to-the-fragment-identifier

                      Ed2: not sure why the author felt the need to define an HTML4-style named anchor for #top…

                      Ed: Also TIL: in HTML5 linking directly to an ID is preferred over an explicit named anchor.
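
                      For illustration, both variants from the MDN note above look like this (a minimal sketch, not code taken from the article):

                        <!-- Jumps to the top of the current page; no named anchor or id needed. -->
                        <a href="#top">Back to top</a>

                        <!-- The empty fragment does the same thing. -->
                        <a href="#">Back to top</a>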

                      1. 4

                        According to WHATWG and MDN, this is the preferred method.

                        1. 3

                          Problem with that is it would take you to the title of the page, which isn’t necessarily the top of the page.

                          1. 7

                            The id attribute can be given to any element so it could be done with a <header> or <article> element if that suits the page better. The name attribute on anchor elements is now considered obsolete.
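
                            For example (a minimal sketch; the id name “top” is just a placeholder):

                              <!-- Any element with an id can be a fragment target. -->
                              <header id="top">
                                <h1>Page title</h1>
                              </header>
                              ...
                              <a href="#top">Back to top</a>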

                        1. 1

                          Very nice, though it seems my browser (Chromium 81.0.4044.129) ignores scroll-behavior.
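
                          For context, the rule being discussed is presumably something along these lines (a sketch, not the article’s exact code):

                            <style>
                              /* Ask the browser to animate jumps to fragment targets;
                                 browsers that ignore scroll-behavior just jump instantly. */
                              html {
                                scroll-behavior: smooth;
                              }
                            </style>
                            <a href="#pageTitle">Back to top</a>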

                          1. 2

                            Interesting. I tested it with Brave (also based on Chromium) and it worked.

                          1. 0

                            I hate this use of ids & fragments for magical behavior - it messes up my browsing history and it’s annoying. I would expect a JS solution where JS is available and an optional fallback to ids only when no JS is executed.

                            1. 22

                              This is literally plain HTML. If something is magical here, it is the use of JavaScript to emulate a behavior that has been standard on the web since the nineties.

                              1. 5

                                I gave up on the back button roughly a decade ago.

                                1. 3

                                  I wanted to ask you what kind of browser would do such a silly thing, but apparently that’s (also?) what Firefox does: fragments do get added to history, and all the “back” button does is drop the fragment.

                                  I still find it peculiar that there’s even a need for such a button (on PC I have a Home button, and on mobile providing one should be the browser’s job imo), but it seems like there is a good reason why people use JS for this after all.

                                  1. 24

                                    I like that it gets added to the history. You can press a link to a reference or footnote, and then press back to go to where you were.

                                    1. 4

                                      There has been a craze for “hash-bang URLs” that abused URL fragments for keeping state and performing navigation. This let JS take over the whole site and make it painfully slow in the name of being a RESTful web application.

                                      That was when HTML5 pushState was still too poorly supported and too buggy to be usable. But we’re now stuck with some sites still relying on the hashbang URLs, so removing them from history would be a breaking change.

                                      1. 2

                                        It’s always crazy to see how people abuse the anchor tag. My favourite personal abuse: I kept finding that sysadmins and IT were always just emailing cleartext credentials for password resets, and during pentests I’d often use this to my advantage (mass-watching for password reset emails, for example). So I jokingly ended up writing a stupid “password” share system that embeds crypto keys in the hash URL and deletes itself on the backend after being viewed once: https://blacknote.aerstone.com/

                                        Again, this is stupid for so many reasons, but I did enjoy abusing it for the “doesn’t get sent server-side” use case. EDIT: I originally had much more aggressive “don’t use this” messages, but the corporate gods don’t like that.

                                        1. 1

                                          One useful trait of hash-bang URLs is that the fragment is not sent to the server. This is useful for things like encryption keys. MEGA and others definitely use this as plausible deniability that they cannot reveal the contents of past-requested content. Though, if given a court order, I suppose they can be forced to reveal future requests by placing a backdoor in the decryption JS.
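
                                          A minimal sketch of the general idea (not how MEGA or Blacknote actually implement it): the page’s script reads the key from the fragment, which the browser never includes in the HTTP request.

                                            <!-- Opened as e.g. /note/abc123#SECRET-KEY; everything after "#"
                                                 stays in the browser and is never sent to the server. -->
                                            <script>
                                              // Read the key material from the fragment, client-side only.
                                              var key = window.location.hash.slice(1); // drop the leading "#"
                                              if (key) {
                                                // ...fetch the ciphertext and decrypt it with `key` here...
                                              }
                                            </script>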

                                      2. 2

                                        Hmmm that’s a good point, and not something I had considered. Thanks for the feedback.

                                      1. 2

                                        Did you observe any screen tearing, by the way? I just installed 20.04 on my Intel laptop and I got very visible screen tearing. It went away after I changed the default DE to GNOME Wayland, but then I lost the new fractional scaling options and had to go back to either 100/200% scaling.

                                        1. 1

                                          Do you know if 20.04 removed the tear free option?

                                          https://wiki.archlinux.org/index.php/intel_graphics#Tearing

                                          I’m using this in Xubuntu 19.10 without any issues but I have not upgraded yet.

                                          1. 2

                                            I have two laptops with Intel video and TearFree is absolutely necessary in order to watch any video. I don’t know why it isn’t the default.

                                            1. 2

                                              I tried using that but I got very nasty graphical corruption (eg unreadable text). :/

                                            2. 1

                                              I can’t say I’ve noticed any screen tearing on my machine.

                                            1. 5

                                              Nice review, Kev. I’m currently running Pop!_OS 19.10 and have been trying out Ubuntu 20.04 in a VM. I also find the Ubuntu font jarring and difficult to look at. No idea why that is, though!

                                              1. 1

                                                Ah, nice to see that I’m not alone with my distaste for the Ubuntu font. :-)

                                              1. 3

                                                  I personally use Pop!_OS (Ubuntu based) because I like the direction System76 have taken the UI/UX. There’s very little to do out of the box, except run my package install script.

                                                  If I wasn’t using Pop!_OS, it would be Ubuntu. I’m a “set it and forget it” kind of guy, so I like to get going and leave my OS alone. The fact that Ubuntu and/or Pop are supported by large companies, and therefore unlikely to go anywhere, is a big bonus for me also. I try to stay away from the smaller indie distros.

                                                Ubuntu is also the biggest distro out there, so getting support if things go wrong is trivial.

                                                1. 3

                                                  Yes, and the big userbase means that it is supported by third parties like nvidia for cuda!

                                                1. 0

                                                    For things like email, I cannot accept anything other than 99.999999…% uptime. There cannot be any dropped emails ever… ever. Services like this and the others mentioned in the post just sketch me out, and I guess I just don’t have enough experience outside the Google email realm. Bias warning, I guess.

                                                  I do see why people avoid services like Gmail and GSuite, but I just can’t see any of the reasons being big enough to warrant fully leaving Gmail or GSuite. The service is reliable and more or less bullet proof. And when your online life, and now more than ever, your physical life are tied to an email address, what more do you need other than a reliable service?

                                                  Imagine missing a critical email related to your bank account because this vendor restarted the mail servers during maintenance. “Sorry for the inconvenience” doesn’t cut it in this case.

                                                    When I dig around in the site’s source and find a page like https://news.purelymail.com/posts/status/2020-03-04-planned-maintenance.html, I get sketched out.

                                                  1. 7

                                                      For things like email, I cannot accept anything other than 99.999999…% uptime.

                                                      E-mail was specifically designed to function in environments with something like 0.01% uptime. That’s why SMTP has batching, relaying, and retrying in the specification - it was designed in the days of dial-up, only connecting every now and then. Sometimes messages come a couple of hours late if you have an outage, but they still come.

                                                    1. 1

                                                      The network is reliable THE NETWORK IS RELIABLE!

                                                    2. 3

                                                      Imagine missing a critical email related to your bank account because this vendor restarted the mail servers during maintenance. “Sorry for the inconvenience” doesn’t cut it in this case.

                                                      The protocol was designed to be resilient to these kinds of faults. A compliant sending server will retry.

                                                      99.999999%

                                                        Are you sure even Gmail commits to 8-nines service availability? Even for paying customers?

                                                      1. 2

                                                        Here’s some interesting reading to soothe your mail anxiety:

                                                        RFC 5321: SMTP

                                                        …and specifically the part on retry strategies starting at page 66.

                                                          Email is, or used to be, defined by a relatively straightforward set of RFCs which sketched a fault-tolerant, asynchronous, store-and-forward system for moving textual content from one mailbox to another. The ‘straightforwardness’ has been muddled by the addition of several access-control-related facilities to deal with unsolicited mail, and the ‘textual content’ has been replaced by HTML and MIME, but the core protocols still stand.

                                                        1. 1

                                                            That’s true and all, but for the second time now I’ve had mails not even reach my mail server; I’m not sure how broken their setup must be.

                                                            I was sitting there, tailing log files on my mail server and sending test mails - nothing (a signup for a random web shop, and Kickstarter’s forgot-password mails earlier). Then I redid it with a Gmail address and it arrived in seconds.

                                                            So yeah, maybe it was SPF or something else on the DNS level, or the route to my mail server was dead… but it’s been working pretty consistently over the years…

                                                        2. 1

                                                          Imagine missing a critical email related to your bank account because this vendor restarted the mail servers during maintenance. “Sorry for the inconvenience” doesn’t cut it in this case.

                                                            If a mail server isn’t available, the sending server should retry, or the sender will get a bounce back informing them delivery failed. It won’t just disappear into the ether.

                                                        1. 2

                                                              I wouldn’t say SES is difficult. You set up a handful of DNS entries and you’re done. To add more email addresses you just click a link to verify it’s genuine.

                                                              As another commenter said, I personally think that email is as important as a mobile phone contract, or an ISP. If you want a quality product, especially for something so important, why not pay for it?

                                                          1. 1

                                                                For a dev it might be. I tried to find a solution that is simpler to set up and optionally useful for friends of mine.

                                                          1. 1

                                                            Why was the static site bigger?

                                                            1. 1

                                                              It’s good to minify html as well

                                                              Unless the ratio of markup to content is wildly skewed to the former, this doesn’t buy you much but does come with a pile of disadvantages.

                                                              1. 1

                                                                      I assume because the optimisations I have in WordPress include minification.

                                                                1. 1

                                                                  I’m a little surprised HTML and CSS benefit that much from minification.

                                                                  Also, thanks for posting this in such detail. Reading it caused me to make a one line change to my .gitlab-ci.yml that cut the total page size on my (static generated, gitlab pages-hosted) blog from 440-ish k to 220-ish k.

                                                                  If anyone else uses hugo with gitlab pages, the way to enable compression is to change

                                                                  pages:
                                                                    script:
                                                                    - hugo
                                                                  

                                                                  to

                                                                  pages:
                                                                    script:
                                                                    - hugo && gzip -k -6 $(find public -type f)
                                                                  

                                                                  If there are resources with the same name ending in .gz next to the uncompressed resources on your site, gitlab will serve them up to browsers that accept gzipped pages. (-6 was only chosen to be explicit about gzip’s current default, not measured and thought out.)

                                                              1. 7

                                                                There is a single line of JavaScript on the Close button when in the menu. This is because the menu is a separate HTML page, so clicking the Close button invokes a tiny line of JS that takes you back 1 page.

                                                                javascript:history.back() 
                                                                

                                                                If you know how to accomplish this without JavaScript, please do let me know.

                                                                There’s a nifty hack regarding opening and closing mobile hamburger menus in CSS with a checkbox.
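
                                                                          For reference, the checkbox hack generally looks something like this (a generic sketch of the technique, assuming the menu markup lives in the same page rather than on a separate one):

                                                                            <style>
                                                                              /* The hidden checkbox holds the open/closed state; the label is the
                                                                                 hamburger button; CSS shows the menu while the box is checked. */
                                                                              #menu-toggle { display: none; }
                                                                              #menu { display: none; }
                                                                              #menu-toggle:checked ~ #menu { display: block; }
                                                                            </style>
                                                                            <input type="checkbox" id="menu-toggle">
                                                                            <label for="menu-toggle">&#9776; Menu</label>
                                                                            <nav id="menu">
                                                                              <a href="/">Home</a>
                                                                              <a href="/about/">About</a>
                                                                            </nav>

                                                                          A second label for the same checkbox placed inside the menu can then act as the Close button, with no JS at all.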

                                                                1. 2

                                                                  Thanks, I’ll take a look at that.

                                                                1. 3

                                                                  61% of people use some form of analytics, and most of it is Google Analytics! That’s quite the figure.

                                                                  1. 2

                                                                    I find that surprising given the usually privacy-conscious HN crowd. Maybe they are a small, but relatively vocal minority.

                                                                    1. 1

                                                                      Yup, I thought so too.

                                                                    1. 3
                                                                      1. Personally, I don’t update feeds when I update posts. I sometimes update posts for a little spelling mistake I’ve noticed, or something similar, so I don’t want people to perceive I’m spamming them.

                                                                      2. Most people tend to limit their feeds to the most recent 10 or 20 posts. It’s rare to see an RSS feed that contains all the content on the site.

                                                                      3. I wouldn’t say so, no. I’ve used PolitePol in the past and had a lot of success with it. https://politepol.com/en/

                                                                      4. Not that I know of. As long as your feed is valid, you should be fine. You can check its validity here - https://validator.w3.org/feed/

                                                                      Good luck!

                                                                      1. 2

                                                                        Appreciate the validity check, will be very handy.

                                                                        1. 1

                                                                                    Re 2): I’m not sure if it’s my feed readers, but I have the impression I am not missing posts, so I don’t think limiting the feed is necessarily a good idea.

                                                                        1. 1

                                                                                    Never heard of it before, but it looks promising. I will give it a try!

                                                                                    Is it possible to have webmentions as a comment section on a static website?

                                                                          1. 1

                                                                            You can, but like anything dynamic, you’d either need to implement the comments section in client-side JavaScript or else have some way of triggering your site to rebuild and redeploy whenever a new Webmention is posted.

                                                                            1. 1

                                                                                        I’m doing that. My site is statically generated, built on my desktop and uploaded via good old rsync.

                                                                              I use this webmentions SaaS to collect my mentions and a client-side JS to fetch and display them. My design skills are limited, so you can have it looking much better than I did. This post has a lot of mentions at the end. The JS used is not minified and is at the end of the post, near line 152 if you want to check.

                                                                              Another cool service is bridgy which will send and collect mentions from silos and forward to your web site. I have it configured with my twitter and mastodon account. This way, I can have it send automatic tweets or toots when I post and also deliver replies and comments back to the original post.

                                                                              1. 1

                                                                                Yes, it’s possible with a dynamic webmention endpoint that somehow saves webmentions. I have such a dynamic endpoint that creates a file in a git repo, commits and pushes it. That triggers a pipeline to rebuild my blog with Hugo and then a link is displayed below the page which received the webmention.

                                                                                1. 1

                                                                                            I have done it on my static website using webmention.io. I just regularly pull the updates from webmention.io on my personal computer and then generate the comment section using some Hugo features. Shameless plug for my blog post explaining this.

                                                                                  1. 1

                                                                                    There’s https://webmention.io/ or you can have a self-hosted service somewhere.

                                                                                    1. 1

                                                                                                I believe so. I know a couple of people who are running Hugo and have this. However, I would assume that the comments are updated only when the site regenerates, rather than on-the-fly like WP.

                                                                                      1. 1

                                                                                        I use https://github.com/PlaidWeb/webmention.js to render my webmentions client-side so they’re always up to date, instead of needing to rebuild the site - I’ve written a bit more about it at https://www.jvt.me/posts/2019/06/30/client-side-webmentions/

                                                                                    1. 6

                                                                                      Welcome to the IndieWeb!

                                                                                      1. 0

                                                                                        Thanks! :-)

                                                                                      1. 2

                                                                                        But a static site is WAY quicker!

                                                                                        I don’t use dynamic sites, but if I had to, I think I’d just put a CDN or Varnish cache in front of them.

                                                                                        The website is dynamic, but unless there is a new article every second, you can cache the homepage and each article as if they were static HTML, right? Thus achieving the performance of a static site.

                                                                                                    As for the case of a high-traffic, frequently updated website (e.g., a newspaper site), caching for a minute or two can avoid a lot of needless identical calls to the PHP or other backend code.

                                                                                        1. 1

                                                                                          Pretty much, yeah. I cache my site and use a CDN. That’s what makes its performance comparable to an SSG.

                                                                                                      I suppose the advantage of an SSG is that you wouldn’t have to set that up in the first place, but once it’s done, the writing process is much easier (I think) with WP.

                                                                                        1. 2

                                                                                                        I actually prefer the static site, as it gets in the way of making content the least. If I used a CMS, I’d be too tempted to preview and tweak how things look. By forcing myself to just write a markdown file and get going, I’m more likely to get something written.

                                                                                          1. 2

                                                                                            That’s very true actually. I don’t play around in the CMS that much, but it can be a distraction.

                                                                                            1. 2

                                                                                                          Based on previous discussions here about SSGs, around 70% of the time is spent developing and tweaking the software, and the rest is for content.

                                                                                              (BTW this mirrored my own experiences back when I bothered to tweak my blog).

                                                                                              1. 1

                                                                                                            I do occasionally have the temptation to tweak, but generally I’ve been good about avoiding it. I do want to add mobile-friendly sidenotes at some point, and expect that to take a weekend as I obsess over which style I want to adopt.

                                                                                                            But 90% of my usage is: open a markdown file, run ./site server to do a quick check of how images and figures rendered, and run ./site deploy.