1. 1

    I was told that passwords should only be changed if they are known to be compromised.

    But that ignores silent unknown hacks.

    What is a good password rotation strategy?

    1. 3

      Every day I make a todo list on paper, in my notebook. I copy over any items from yesterday’s that I failed to do, trying to be realistic about what I’m going to accomplish. I have sets of other longer-term todo lists I pull from while constructing the daily ones.

      I enjoy the freedom that a physical book allows, being able to scratch my lists down next to drawings & diagrams.

      1. 2

        Plus it looks really cool to use a physical book for lists.

      1. 2

        TODOs written on yellow legal pads with mechanical pencils.

        1. 3

          Why specifically a mechanical pencil?

          1. 2

            I always liked the slender “lead” and there’s no need for a sharpener.

            1. 4

              Have you used the Uni rotating lead pencil? It is pretty nice.

              1. 2

                Nope, but I will check it out. Thanks!

        1. 6

          I just use Things. I have no plan to move away from the Apple jail ecosystem in the foreseeable future, so…

          1. 3

            I also use Things, just on my laptop though (I keep my phone off of email, calendar, etc.).

            This past Monday the macOS Catalina update rendered my MacBook unbootable (sent to Apple repair yesterday). In the meantime I’m running a live Ubuntu session from a bootable thumb drive.

            While Things is not available off Apple platforms, it’s nice that they store everything you do in a single SQLite database file. Until I have my MacBook back I’ll be running Things with a SQL editor.
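
            For anyone curious, the same poking around can be done from Python. A rough sketch of the idea (the path and the table/column names here are placeholders, not the real Things schema, so list the tables first and adapt):

            import sqlite3

            # Path and schema are hypothetical placeholders -- inspect your own copy first.
            con = sqlite3.connect("Things.sqlite3")

            # Discover the real table names before querying anything.
            for (name,) in con.execute("SELECT name FROM sqlite_master WHERE type='table'"):
                print(name)

            # Example query against a made-up 'tasks' table; adjust to the actual schema.
            for title, in con.execute("SELECT title FROM tasks WHERE completed = 0"):
                print("[ ]", title)

            con.close()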

            1. 1

              This past Monday the macOS Catalina update rendered my MacBook unbootable (sent to Apple repair yesterday).

              Same. Booted in safe mode, turned out it was a bad kext. Updated it and chugging along happily-er now.

              1. 1

                Mine doesn’t even respond to the boot-time keystrokes to boot into safe mode, or verbose mode, or from a thumb drive…

                I tried everything, but there’s nothing I could do without tearing it down.

                May I know your model? Because a friend of mine also had his install broken. Also, is the bad kext related to Little Snitch? Thx.

                1. 1

                  MacBook Pro (15-inch, 2017) – the bad kext was a corporate MDM thing (“Carbon Black”). But yikes, yours sounds muuuuch worse. I could access safe mode. Recovery was working but even once booted into recovery the dialogs were lagging for 5+ minutes.

            2. 2

              Things

              This comment made me check it out, and damn. I’ve been using Todoist for a couple years and this blows it out of the water. Thanks!

              1. 1

                +1 for Things. I have a soft spot for the idea of a bullet journal but Things is just so good.

                1. 3

                  Things is the only software I’ve ever missed after leaving Apple.

                  1. 1

                    I have a mac laptop, but an android phone, so I would be hesitant to use Things.

              1. 10

                I swear by my Bullet Journal. It replaced a text file-based system (a sort of pseudo org-mode) that depended too much on having a laptop or other computer.

                • The friction of having to rewrite todo items by hand helps me keep my lists short. That significantly reduces the stress of giant, “eternal” digital lists.
                • Keeping it physical means that my phone and all its countless distractions can stay in my pocket.
                • Marking things as done with a pen feels like more of an accomplishment than tapping a digital checkbox. All the more dopamine.

                Their app is a big help, too.

                1. 7

                  I personally found pen and paper to still beat digital note taking software. For some reason I always spend more time tweaking the software than actually using it. Either that or I get distracted by something else. With pen and paper you don’t have much to do but write, so I find it easier to stay focused.

                  1. 3

                    Exactly this, too. I had a small suite of scripts supporting my text file system. I fell off the wagon with it, in part, because of the effort that would have been involved in setting everything back up after a system reinstall. With a Bullet Journal, I just buy a new notebook and a new pen.

                  2. 2

                    I do something similar, though it’s far more basic than bullet journaling. I use a fairly thick journal, but it fits into a back pocket pretty easily.

                    I’ve been eyeing the reMarkable since it was first announced, but their only product is still much too big for my uses. And, honestly, I don’t know whether I’d be okay with something I have to keep charged. But I also don’t like the paper waste I generate, so I guess I’m still eyeing it.

                    1. 1

                      An ex-coworker of mine had a reMarkable. She really loved it. I think the key feature for her was the ability to import PDF documents and mark them up. She also liked how easy it was to organize notebooks for different contexts (as a manager, she was constantly referring to 1-on-1 notes by person, or project notes, or policy deployment notes). She didn’t seem to have issues with the battery very often; it was a recharge-it-every-other-day sort of thing, if I recall.

                      I personally prefer fancy fountain pens, otherwise I would have considered one myself.

                      1. 2

                        I’m a fountain pen person, myself, though I lean towards the non-fancy. Platinum Preppies are an amazing value, for example.

                    2. 1

                      I’m back on BUJO as well. I’ve had great success at using org-mode on more complicated projects, but my current project has few tasks, so it is easier to record my few todos in there. I don’t really bother with the calendar very much, I like the push notifications from my work calendar.

                      Bonuses include being able to doodle in boring meetings.

                      1. 1

                        I was using a text file based checklist system that I concocted myself, but moved the todos to Todoist. Which is decent, but I find it difficult to use it for larger projects, or for recording daily activities.

                        1. 1

                          Do you use it the way they describe it in their intro video? Or have you made any modifications of your own?

                          1. 1

                            Essentially the same, though I don’t really use the monthly calendar feature. (That says more about inattention to time than about that particular feature.)

                            I do use an ! notation to highlight the 1-3 things that most need to get done in a given day.

                            The most important thing, I’d say, is to give the base system a try for at least six weeks (so that you see an entire month through and more) so that you can develop an informed instinct for what you personally want to change.

                        1. 5

                          My old dual-disk NAS with two new mirrored drives sits in my brother’s basement, in his 19” rack, a wee bit under the ceiling. My rack on the ground floor houses his NAS. Gentlemen’s agreement to not push backups before 2 am.

                          I once had an old ThinkPad with two drives in the server room at my old company (I was the admin), but this seemed like a bad idea after the company was sued and got raided (fortunately the NAS was not of interest).

                          Family pictures and such are written to older 3.5” server HDs in silicone hard-disk protector frames with USB adapters. The disks are stored in a safe. The key stays in the lock: I use it as a fire-safe locker, but want to avoid a brute-force attack on it.

                          Before that I used a dedicated Plextor DVD recorder, gloves, virgin DVD media, and an ammo canister as storage for the DVDs. A good method, but ALWAYS finish the session and write the TOC.

                          Before that I used DAT tape.

                          I had data loss due to

                          • gravity (slip, bonk, f<beep/>k… DTLA drives had glass platters and are incompatible with free falls in staircases)
                          • humidity (a flood in the basement; not tap water, but waste water with heating oil)
                          • magnetism (forgot to occasionally reread tapes… for 15 years, plus my Mom put them under the roof)
                          • compression. I will never ever again compress backups.
                          1. 3

                            What was the compression format that you used that failed? How did it fail?

                            1. 1

                              (sorry for late answer, I’m sick @home)

                              bzip2, compressing a tar archive to preserve owners/permissions. I could not extract the data any more after block read errors.

                              It was “optimized” without the need for it: the backup media (DVD blanks were awfully expensive back then) was almost empty.
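
                              For illustration, compressing each file separately with a checksum alongside would have kept the damage to single files; a rough Python sketch of that idea (not a hardened backup tool):

                              import bz2, hashlib, os

                              SRC, DST = "photos", "backup"
                              os.makedirs(DST, exist_ok=True)

                              with open(os.path.join(DST, "MANIFEST"), "w") as manifest:
                                  for root, _dirs, files in os.walk(SRC):
                                      for name in files:
                                          path = os.path.join(root, name)
                                          with open(path, "rb") as f:
                                              data = f.read()
                                          # Each file gets its own .bz2: a damaged block only loses that one file.
                                          out = os.path.join(DST, path.replace(os.sep, "__") + ".bz2")
                                          with open(out, "wb") as f:
                                              f.write(bz2.compress(data))
                                          # Checksum of the original data, for verifying restores later.
                                          manifest.write(hashlib.sha256(data).hexdigest() + "  " + path + "\n")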

                          1. 4

                            It includes primarily proprietary as well as open-source components.

                            What is the reason behind the proprietary components? Why not all opensource?

                            1. 4

                              Near as I can tell because the developers see it as a proprietary OS with a few GPL components, notably the desktop, and that’s only GPL because of a bitter disagreement between one of the developers and the company that owned it at the time.

                              1. 4

                                  The Amiga community was always seemingly hostile to having source available. It’s changed a lot over the years but even today there’s a huge amount of Amiga software that is going to perish when the developers give up on it.

                                I think part of it is that the Amiga kept a very active cottage industry of small one- and two-person development teams since it never had really strong major software house support*.

                                  This lack of major software house support meant that (a) a large proportion of Amiga developers made (or wanted to make) a living off Amiga software and had relatively easy entry to the market and an enthusiastic captive market, and (b) software piracy was rampant.

                                * The Amiga had some support from major game developers like EA and LucasArts but only for a few golden years. It never had huge support from the really big companies, with only a few releases if any.

                                (This is all just speculation from someone who’s watched the Amiga for 30+ years.)

                                1. 1

                                  What is the reason behind the proprietary components? Why not all opensource?

                                  While you can use and test the OS for up to 30 minutes for free before you need to reboot, there is a license cost.

                                    Even if you do not put a value on your own time, developing a niche operating system is a costly endeavour since you need to buy stockpiles of hardware to be able to test drivers. I am afraid Apple and AMD are not sending out free hardware samples so MorphOS can be made to support them.

                                  1. 2

                                      There’s an argument to be made that making it free software makes both the appeal and the ability to support a wide variety of platforms expand far beyond what a small team on a niche hardware platform is able to achieve (like Haiku, for example).

                                    1. 1

                                        There’s an argument to be made that making it free software makes both the appeal and the ability to support a wide variety of platforms expand far beyond what a small team on a niche hardware platform is able to achieve (like Haiku, for example).

                                        Well, AROS is an example of an open-source operating system that was inspired by the Commodore Amiga platform. I do not think I am being unfair in saying that it is not in a better position than MorphOS despite using an open-source license.

                                      Technically, AROS does support more processor architectures but the alternative ports to ARM and PowerPC are generally less complete, less stable, and have access to a smaller pool of third-party software compared to the Intel-compatible versions.

                                        Being focused on a limited number of hardware devices and processor architectures is not necessarily a downside but can help you use your resources more effectively in order to provide a more polished end-user experience. (Think Apple macOS vs. Microsoft Windows.)

                                      In short, just making something open source does not magically make everything better. Even if you are generally an open source proponent, I think it is healthy to be able to acknowledge that.

                                      1. 1

                                        Being focused on a limited number of devices does make it easier to polish how it runs on that device for sure, but it also ties the software to the intrinsic appeal of the underlying hardware.

                                          I don’t think that it’s really possible to gauge what making the source of MorphOS freely available would do to its development or focus and whether that’s productive for its continued development, but interest and historical documentation would almost certainly benefit. But which of those is “success” is very subjective.

                                        1. 1

                                          making something open source does not magically make everything better

                                          “better” on its own does not mean much. Yeah, it does not inherently make it better in terms of quality, but it does in terms of other things. The freedom to modify the software, the long term preservation aspect, these things are extremely valuable.

                                          1. 1

                                            The freedom to modify the software, the long term preservation aspect, these things are extremely valuable.

                                            Being able to enter and use your neighbour’s car is also technically “valuable” if you get my point. Having potential value does not equal indisputable entitlements.

                                            More to the point, MorphOS already runs in qemu so the “preservation aspect” is pretty much covered.

                                  1. 3

                                    My first reaction was that creating a new regex syntax would complicate things. Regex is complex and confusing as it is, and adding another syntax would complicate adoption.

                                    I read the “why” section, and all the reasons seem valid.

                                    @andyc Do you see this new regex syntax becoming widespread in other applications?

                                      Have other regex syntaxes been created before, other than POSIX or Perl? If so, what happened to them?

                                    1. 4

                                      That’s a fair question, I would say:

                                      (1) Oil is very backward compatible, so you’re not forced to learn anything new if you don’t want to.

                                      If you already know bash syntax, you can use it. The [[ construct works in Oil!

                                      https://github.com/oilshell/oil/blob/master/doc/regex-manual.md#oil-is-shorter-than-bash

                                      (but it seems to be so ugly that people resist learning it)

                                      You can also use string patterns with Oil:

                                      if (x ~ '[[:digit:]]+')   # string
                                      

                                      Eggexes use / /, but strings are still valid.

                                        (2) The main reason I thought this made sense is that it integrates seamlessly with egrep, awk, and other tools (see the doc). We don’t have to “boil the ocean” and write new versions of those tools that accept a different syntax.

                                      I did a previous version of Eggex in 2014 which you could only use from Python, and that wasn’t worth it. I showed it to a few people and that was it.

                                      (3) This syntax is somewhat familiar if you know lex or re2c. It’s not entirely new. I tried to provide a smooth upgrade path as usual:

                                      https://github.com/oilshell/oil/blob/master/doc/regex-manual.md#backward-compatibility

                                        (4) Perl 6 already jumped ship too, with an entirely new and exotic regex syntax. It uses quoted literals and unquoted operators like eggex. Larry Wall said something to the effect of “every language borrowed Perl 5 regex syntax but we’ve moved on to something better”. So I’m not the only one who thinks it’s justified :)

                                      And Eggex is much more conservative than Perl 6.

                                      1. 2

                                        Thank you for your reply, and thank you for your work on Oil. It looks very interesting.

                                    1. 1

                                      Neovim, iTerm, Firefox, git

                                      1. 3

                                        English: I know you mean programming languages, but since I’m not a native English speaker I consider English a tool. Something I don’t like about English is how random written English seems, especially the vowels: they have so many sound variations. That’s especially difficult when you learn vocabulary by reading; sometimes I know the written word, but if someone pronounces it I can miss it if that’s the first time I’ve heard it.

                                        1. 2

                                          This happened to me today. I was saying “init” to a co-worker, and he didn’t understand at first because of my pronunciation.

                                        1. 2

                                          The Go runtime is terrible and rules out Go’s applicability to a huge set of problems.

                                          1. 2

                                            Do you mean the garbage collection itself, or how it implements it?

                                            1. 3

                                              I’m referring more to Goroutines, though garbage collection imposes similar problems. Because goroutines can switch more or less randomly (at least from the programmer’s point of view) between green threads and real threads, all programs have to deal with the problems of the latter. If it were up to me I’d never use real threads and my code would be 100x simpler for it.

                                              1. 8

                                                I don’t think about threads when I’m writing Go. What are the set of problems where green threads switching to system threads is undesirable?

                                                1. 1

                                                  If a program is using just coroutines (green threads, but I’ll use the term coroutine for this) then only one coroutine is running at a time, so I can be pretty sure a sequence of instructions will be “atomic” with respect to other coroutines. With true, preemptive threading, all that goes out the window.

                                                  1. 2

                                                    I’m really confused by this exchange. One of the primary purposes of goroutines (coroutines, green threads) is to exploit the parallelism of the processor. This naturally requires synchronization for shared memory access. Are you and Drew saying you don’t care about it and don’t want to think about it?

                                                    1. 1

                                                      If I want to exploit the parallelism of the processor, I’ll run multiple instances and have them communicate (the actor model). Shared memory, in my opinion, is evil and makes reasoning about code very hard to impossible, depending upon how extensively it’s used.
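
                                                      Roughly the shape I mean, sketched in Python rather than Go just to show the structure: worker processes that own their state and only exchange messages, never memory:

                                                      from multiprocessing import Process, Queue

                                                      def worker(inbox, outbox):
                                                          # The worker owns its state; all interaction happens via messages.
                                                          for item in iter(inbox.get, None):   # None is the shutdown signal
                                                              outbox.put(item * item)

                                                      if __name__ == "__main__":
                                                          inbox, outbox = Queue(), Queue()
                                                          Process(target=worker, args=(inbox, outbox), daemon=True).start()

                                                          for n in range(5):
                                                              inbox.put(n)
                                                          inbox.put(None)

                                                          print([outbox.get() for _ in range(5)])   # [0, 1, 4, 9, 16]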

                                                      1. 2

                                                        Isn’t that the whole idea behind using goroutines communicating through channels rather than threads modifying global state with mutexes? You could in theory write Go code with a bunch of goroutines modifying global state, but I don’t see why you’d do that when you dislike it so much and channels are so frictionless.

                                                        1. 1

                                                          That’s certainly an approach, unfortunately made more difficult by all the work you have to do to get those instances to behave nicely if you want to serve (say) 10k QPS on a single port.

                                                    2. 1

                                                      I think one key factor helping Go here is that it leans so much on copy semantics. When you can avoid reference types in concurrent code (I mean, you usually want to do this in other languages too) you’re almost exclusively dealing with stack allocations, and threads are a non-issue.

                                                      If you’re writing code that mutates something on the heap, you need to remember to put a lock around it in Go, because you don’t know what else might be fiddling with it.

                                                      In Python/Ruby you don’t have this problem since there’s a GIL, and in Rust you don’t have this problem because it won’t compile.

                                                2. 2

                                                  Can you list a few of those problems?

                                                  1. 3

                                                    The only possible response to running out of memory is a fatal panic.

                                                    1. 2

                                                      How much actual software does anything more constructive in that case? Heuristically, it’s so little that the Linux kernel (however controversially) doesn’t even give applications the chance—every allocation succeeds, with the OOM killer as a nuclear pressure relief valve.

                                                      This is not to say that it’s not a significant failing of the Go runtime, but I doubt it’s one that “rules out Go’s applicability to a huge set of problems”.

                                                      1. 3

                                                        I have found it a considerable obstacle to writing reliable servers on OpenBSD, where memory limits actually mean something. I don’t like it when my entire web server goes down because one request happened to be a bit large. I would like that request to fail and for others to continue. Or at the very least for some finite number of requests to fail before order is restored. I can certainly write such code in C.

                                                1. 1

                                                    I wrote a tool called DACT ( http://dact.rkeene.org/ ) which, while not originally designed for archival compression, may be a good option when combined with tar.

                                                  Some reasons why this is:

                                                  1. It splits the input file into a bunch of blocks and compresses (and verifies that the compressed data can be decompressed, optionally) each one individually (possibly using a different compression algorithm for each, but you can pick); tar uses a fixed output block size, so if you make the tar block size and the dact block sizes align then you can trivially recover from many kinds of corruption by ignoring broken dact blocks.
                                                  2. It has a couple of low-grade checksums (on the compressed and uncompressed data) – this could be improved with cryptographic hashes
                                                  3. Though many of the best compression algorithms are using external libraries like zlib and libbz2, there are some okay-ish ones I wrote that are simple enough to do by hand if needed

                                                  Good luck !

                                                  1. 1

                                                    Thanks for sharing.

                                                    How easy do you think it would be to do recovery on a damaged archive?

                                                    1. 1

                                                        Overall it shouldn’t be too difficult; the DACT format is described here: http://dact.rkeene.org/fossil/artifact/e942be8628bac375

                                                        So, it’s a stream of blocks, each of which describes how large it is, which is error-checkable (since if it doesn’t decompress to the right length, you know something is wrong with the block, and can start seeking forward in the archive for the next block, which is also error-checkable, so there’s no harm in getting it wrong). In the end, you will know how many blocks you missed in total.
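
                                                        A sketch of that recovery loop, assuming a toy layout of a 4-byte length prefix plus zlib data per block (purely for illustration, not the actual DACT on-disk format):

                                                        import struct, zlib

                                                        def recover_blocks(data):
                                                            """Pull whatever intact blocks remain out of a damaged block stream."""
                                                            good, pos, skipped = [], 0, 0
                                                            while pos + 4 <= len(data):
                                                                (length,) = struct.unpack_from(">I", data, pos)
                                                                chunk = data[pos + 4 : pos + 4 + length]
                                                                try:
                                                                    good.append(zlib.decompress(chunk))
                                                                    pos += 4 + length        # block checked out, jump to the next one
                                                                except zlib.error:
                                                                    pos += 1                 # corrupt: resync by scanning forward
                                                                    skipped += 1
                                                            return good, skipped             # skipped counts bytes passed over, not whole blocks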

                                                  1. 2

                                                    As I’ve become more active with Z80 homebrew computing, I’ve been going back and looking at old CP/M software. A lot of them are in some custom archival or compression format, and then I have to go back and figure out first what software was used, and then I have to find it and hope I don’t run into a dead end. It’s always a delight to run across something that was archived with LZMA or ZIP; I can even work those files on my Linux machine. Kind of anecdotal evidence in support of the article.

                                                    I assume that gzip will be the same way in the future, though you might not want to use it for reasons outlined in the article.

                                                    1. 2

                                                        How many different old archive formats have you run into? Are they easy to find tools for, or to reverse engineer?

                                                      1. 1

                                                          I’ve run into a handful of them; for most of them I wasn’t sufficiently interested to look much further, but a few hours of Googling didn’t turn up the tools themselves.

                                                    1. 5

                                                        My experience with my own data is that almost all of it already has its own compressed file format optimised for the kind of data it is. E.g. JPEG for photos, MP4 for video, etc. Adding another layer of compression to that is not only a waste of time but often makes the dataset slightly bigger. Text, program binaries, and VM images could be compressed for archival storage, but consider the fact that storage has gotten ridiculously cheap while your time has not. But if you really want to archive something, just pick a format that’s been around for decades (gz, bzip2, xz) and call it a day.

                                                      1. 4

                                                          “consider the fact that storage has gotten ridiculously cheap while your time has not.”

                                                        This is true for middle class folks and up. Maybe working class folks without lots of bills. Anyone below them might be unable to afford extra storage or need to spend that money on necessities. The poverty rate in 2017 put that at 39.7 million Americans. Tricks like in the article might benefit them if they’re stretching out their existing storage assets.

                                                        1. 4

                                                            Consider that a 1TB hard drive costs $40 - $50. That’s $0.04 per gigabyte. Now say you value your time at $10 an hour. Even one minute spent archiving costs more than 1GB of extra space, and the space saved is unlikely to be that much. If you don’t have $40 - $50, then of course, you can’t buy more space. That doesn’t mean space isn’t cheaper than time. It’s just another example of how it’s expensive to be poor.
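
                                                            Spelling out the arithmetic with the same assumptions, in Python:

                                                            cost_per_gb = 45.0 / 1000             # ~$0.045/GB at $45 per 1 TB drive
                                                            minute_of_time = 10.0 / 60            # ~$0.17 per minute at $10/hour
                                                            print(minute_of_time / cost_per_gb)   # ~3.7 GB of storage per minute of effort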

                                                          1. 1

                                                              One other thing to add to the analysis is that one can burn DVDs while doing other things. Each one only accounts for the time to put one in, click some buttons, take one out, label it, and store it. That’s under a minute. Just noting this in case anyone is guessing about how much work it is.

                                                            The time vs space cost still supports your point, though. Anyone that can easily drop $40-100 on something is way better off.

                                                        2. 3

                                                          Adding another layer of compression, especially if it’s the same algorithm, often won’t shrink the file size that much. However, it does make it very convenient to zip up hundreds of files for old projects, freelance work, and have it as a single file to reason about.

                                                          I would not be so cavalier with the archive file format. For me, it is far more important to ensure reliability and survivability of my data. Zipping up the files is just for convenience.

                                                          1. 4

                                                            That’s why there is tar, which, by itself, doesn’t do any compression.

                                                            1. 1

                                                              I was thinking that tar suffers from the same file concatenation issue that affects other solid container formats. But it looks like cpio can be used to extract a tarball, skipping any damaged sections.

                                                            2. 1

                                                              A benefit of zipping together files is that it makes transferring the zipped archive between machines/disks much easier and faster. Your computer will crawl at writing out a hundred small files, and one equally-sized file will be much faster.
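
                                                              For what it’s worth, keeping that bundle compression-free is trivial with Python’s standard library (just a sketch):

                                                              import tarfile

                                                              # "w" mode writes an uncompressed tar: one file to move, no compression layer.
                                                              with tarfile.open("project-2019.tar", "w") as tar:
                                                                  tar.add("project-2019")

                                                              # Listing it back is equally simple.
                                                              with tarfile.open("project-2019.tar") as tar:
                                                                  for member in tar.getmembers():
                                                                      print(member.name, member.size)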

                                                          1. 3

                                                            This is a good reminder that Twitter is someone else’s computer, and they can remove your content for any arbitrary reason and should not be trusted with it. It’s bad that so many influential people use Twitter as a form of public communication, and everyone should be interested in using alternatives that don’t allow random people to arbitrarily start bureaucratic processes that result in the removal of your content.

                                                            1. 2

                                                              Twitter is easy to use, that’s one of the reasons I use it.

                                                              Is it possible to have a public shared space that is not someone else’s computer? I know it’s possible, but maybe the real question is whether it is practical.

                                                              1. 0

                                                                Yeah, it’s possible to imagine specific software systems that are consistent with being public spaces where individuals have private control over their own data. This is what Urbit is trying to create, for instance, and it’s one of the things that decentralized social networking protocols like ActivityPub and Scuttlebutt are trying to allow developers to build. There are good structural reasons why this is harder than just turning over your data to a centralized service, and they’ve attracted less engineering effort than centralized services so far.

                                                                1. 1

                                                                  Yeah, it’s possible to imagine specific software systems that are consistent with being public spaces where individuals have private control over their own data.

                                                                  Like… websites? Blogs? Forums?

                                                            1. 1
                                                              1. Build a spaced repetition flashcard app and website with usable minimalist UI.
                                                              2. Build a simple static site for every domain name I own, but have otherwise not used.
                                                              3. Build that forum software I’ve always wanted to create.
                                                              4. Build that twitter bot I keep day dreaming about.
                                                              5. Build that check-list app that meets my belief about what a check-list really is.
                                                              1. 6

                                                                On the topic of vi historical clones, you should probably mention vip (from 1986) or vi (from at least ’87) as vi emulators for Emacs instead of viper. I find it far more fascinating that they already made these things just a few years after the project started.

                                                                Also, what’s your opinion on modern vi relatives like vis?

                                                                1. 4

                                                                  Like so many things in technology, I like the idea of vis as a second system, an improvement over what we have learned from vi and Vim. The issue is that the first system is so entrenched: not only code written (for languages) but, in the case of Vim, habits and automatic actions ingrained in my brain.

                                                                  1. 2

                                                                    vis as a second system, an improvement over what we have learned in vi and vim. The issue is that the first system is so entrenched

                                                                    vis is really neat. But people forget that legacy (“entrenched”) is valuable. So when people say “X is better except for lack of ubiquity”, that’s like saying “X is better except it’s not”.

                                                                    1. 1

                                                                      I agree with you. Comparing individual aspects of software it’s possible to say one is better, but including other, non-technical aspects, could allow for a different outcome. It reminds me that software is more than just software, it’s people too.

                                                                1. 26

                                                                  The security issues outlined by this article are clear, so I won’t comment on them. I did want to comment on this peripheral point:

                                                                  This same vulnerability also allowed the attacker to DOS any user’s machine. By simply sending repeated GET requests for a bad number, Zoom app would constantly request ‘focus’ from the OS.

                                                                  I’ve long thought that OSes (or WMs, whatever) should pretty much never bring a window (or dialogue box, or any such UI element) into view, or take input focus (keyboard or mouse) without explicit user interaction (i.e. key press, mouse click or screen tap). Instead, they should indicate to the user that some application or widget “wants attention”, for example by making an application’s entry in the WM’s task bar blink/flash. Then, the user can choose to take explicit, manual action (e.g. in this example, click on the task bar) to bring the window or other UI element up to z-index 0 and allow it to take input focus.

                                                                  Time and again, all through the years, we experience the UX pain of happily and intentionally typing (or clicking) in one UI element, and something pops up, takes input focus, and we unintentionally send keystrokes or mouse clicks into that surprise UI party crasher. How many decades of UI and UX research have come and gone since the earliest GUIs came on the scene? This kind of thing should never happen – yet, it does, and I find that just a little ridiculous.

                                                                  I welcome any counterexamples showing a case where it would be a good thing for a new UI element to steal input focus without the user first performing an input.

                                                                  1. 14

                                                                    If you peruse Raymond Chen’s blog at Microsoft, there are multiple entries about customers who want to make sure that their window is placed front and center and grabs all input. It’s easy for even a non-malicious application developer to convince themselves that their product is so good that this behavior will actually be welcomed by users.

                                                                    Chen does not agree, by the way.

                                                                      1. 1

                                                                        Thanks!

                                                                        It does look like both items link to the same post, though.

                                                                        1. 3

                                                                          Oops, copy/paste failure. Sorry about that! I’ve fixed the second one.

                                                                    1. 5

                                                                      I recently switched back to a Linux laptop, and a feature I love over OSX is that when an app wants focus, instead of just taking focus, PopOS (probably gnome?) displays a toast saying “NeedyApp is ready”. I can switch to it when I’m ready to.

                                                                      1. 3

                                                                        When my terminal opens a system dialog to unlock my password manager. I’ve hardcoded that as the only exception to “no stealing focus” in my i3 config.

                                                                        1. 3

                                                                          What does your “no stealing focus” config entry(-ies?) look like?

                                                                          1. 2

                                                                            I see your point, and can accept that others have different preferences, but if it were me, I’d let such a thing remain not an exception, and just stay unfocused and flashing in the task bar. But then, I reboot scarcely 5 times a year, so unlocking like this is something I rarely do.

                                                                          2. 1

                                                                            Indeed – I’ve been thinking lately that UIs should be given much of the same consideration we give to APIs regarding things like race conditions (as in your example) and backwards compatibility. The user, after all, is ultimately another component interacting with other components of the overall system…

                                                                            1. 1

                                                                              Reminds me of the JavaScript popup bombs from the early 2000s. A never-ending stream of popup windows and dialogues: close one and three appear.

                                                                            1. 17

                                                                              Using a static site generator means that you have to keep track of two sources: the actual Markdown source and the resulting HTML source.

                                                                              This is not representative of my experience. I can delete my generated /public folder and re-run Hugo again to generate a new site. I only have to keep track of my source.

                                                                              Finally, you constantly have to work around the limitations of your static site generator.

                                                                              This is very true. I use Hugo, and it changes very fast, often breaking some things. Also, it started out as a blog site generator, and adding features to make it a generic site generator has required paying attention to releases and updating my custom theme to prevent my site from breaking.

                                                                              But how can I then keep the style and layout of all my posts and pages in sync?

                                                                              I actually think this is kind of neat. I like the idea of a web of interconnected pages each similar but a little different, evolving over time. It reminds me of the old web.

                                                                              🤔 I should redesign my website.

                                                                              1. 6

                                                                                Hugo is dang frustrating. Recently I accidentally updated it from the distro, jumping ahead 15 versions. At some point they made pygments go from rendering my site in <100ms to rendering in 15s. Then I tried switching to Chroma, but it turns out there’s no extension system, so to tweak my code highlighters I needed to manually compile Chroma and then manually compile Hugo.

                                                                                Then I found out that if you use a tweet shortcode, you can’t preview your site offline. That broke me, and I went back to a 2016 build.

                                                                                1. 1

                                                                                  I feel you. That is why I use specific versions of hugo for my site, not one supplied by a distro.

                                                                                  Also I recommend using hljs. I like it (progressive enhancement) better than pygments. I’ve had some problems with pygments in the past, but I cannot recall what they were. (I’ve also had some minor annoyances with hljs, but I’m generally OK with it.)

                                                                                  Regarding the tweet shortcode… I never used that (I prefer screenshots of tweets and links, as I do not trust linking third party sources. They can change/disappear, and it would ruin my point). Could you link an issue or something so I could understand it? It made me curious.

                                                                                  1. 1

                                                                                    Yeah, I think I’ve been lucky in that not much has broken for me.

                                                                                    The biggest issue I ran into a couple of months ago was a change that prevented me from using a / in a tag name to fake sub-paths. I used this to match my old URL schema from before I was using Hugo.

                                                                                    Hugo removed (“fixed”) the tag issue, so I updated my blog to use another solution, BUT then Hugo reverted the change. Ha!

                                                                                  2. 4

                                                                                    I used to write my blog (for the past 19 years) in raw HTML. Then a few months ago I implemented my own markup language (based upon Orgmode with a lot of extensions for how I write posts), but I still only keep the rendered HTML, not the original source. That way, I’m not stuck with supporting outdated versions of my markup formatter.

                                                                                    1. 2

                                                                                      This is not representative of my experience. I can delete my generated /public folder and re-run Hugo again to generate a new site. I only have to keep track of my source.

                                                                                      But even then, you must keep both the source and the result in your head, because you have to worry about how your Markdown will be translated to HTML (dependent on the version and implementation of Markdown) and how your posts will be arranged in public/.

                                                                                      This is very true. I use Hugo, and it changes very fast, often breaking some things. Also, it started out as a blog site generator, and adding features to make it a generic site generator has required paying attention to releases and updating my custom theme to prevent my site from breaking.

                                                                                      This is a great summary of the problem! Websites created with static site generators are quite fragile.

                                                                                      1. 4

                                                                                        But even then, you must keep both the source and the result in your head, because you have to worry about how your Markdown will be translated to HTML (dependent on the version and implementation of Markdown) and how your posts will be arranged in public/.

                                                                                        I’m still not aligned with you on this. I do have to think about my HTML when I am building my custom theme, not when I’m building the site or uploading it.

                                                                                        The only thing I have to consider when creating new content is where my feature image is located, and to use the correct path in my content’s metadata. The thumbnails and their use in the page are auto-generated, the content is put where it needs to be, and related links, menu items, and homepages are all updated per my theme.

                                                                                        1. 1

                                                                                          I suppose it depends on what type of content you write, but I find it hard to maintain things like footnotes and tables across different implementations of Markdown and static site generators (or even the same generator but a different version). Not to mention things like elements with custom classes and inline HTML.

                                                                                          What I mean is, if you’re writing HTML directly you always know what the resulting HTML will be. If you write in Markdown, you always have to spend time thinking about what the HTML will look like. You usually have a good enough idea, but you could be wrong.
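
                                                                                          A concrete way to see it, assuming the Python-Markdown package is installed: the same source gives different HTML depending on which extensions (or which implementation) are active:

                                                                                          import markdown   # pip install markdown

                                                                                          src = "A claim.[^1]\n\n[^1]: The footnote."

                                                                                          print(markdown.markdown(src))                            # footnote syntax passes through as plain text
                                                                                          print(markdown.markdown(src, extensions=["footnotes"]))  # rendered as an actual footnote list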

                                                                                          1. 1

                                                                                            What I mean is, if you’re writing HTML directly you always know what the resulting HTML will be. If you write in Markdown, you always have to spend time thinking about what the HTML will look like.

                                                                                            I think this is why I hate ORMs. Instead of just writing the SQL I want, I have to learn how to express it in whatever weird interface the library exposes – assuming it’s even expressive enough.

                                                                                            1. 1

                                                                                              Is Markdown really in that much of a state of flux?

                                                                                              I’m using the OG Daring Fireball edition, I do know that there’s been some work to integrate stuff like footnotes and ToC (which Gruber opposed on philosophical grounds). It’s been frozen since… 2011? 2004

                                                                                              I’d rather use a tool that made it easier for me to write a blog post every day (let’s face it, every month more like it) than worry about future incompatibility in the HTML rendering tool I’m using.

                                                                                              1. 1

                                                                                                Is Markdown really in that much of a state of flux?

                                                                                                Not Gruber’s Markdown, but yes, certainly if you’re switching between implementations (or versions thereof) or static site generators (or versions thereof) that use different implementations.

                                                                                                I’d rather use a tool that made it easier for me to write a blog post every day (let’s face it, every month more like it) than worry about future incompatibility in the HTML rendering tool I’m using.

                                                                                                Indeed – that’s exactly how I feel writing posts in HTML. Nothing can go wrong, no matter what system I’m using or what programs I have installed.

                                                                                          2. 2

                                                                                            Websites created with static site generators are quite fragile.

                                                                                            Only if the site generator ever changes. I use one written in Clojure that has not been changed in years.

                                                                                            1. 1

                                                                                              That is why I made my own. A few lines of Python and you’re ready.
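
                                                                                              Something along these lines really is enough for a small site (a sketch with a hard-coded layout, assuming a content/ directory of body fragments):

                                                                                              import pathlib

                                                                                              HEADER = "<html><body><nav><a href='/'>home</a></nav>"
                                                                                              FOOTER = "</body></html>"

                                                                                              out = pathlib.Path("public")
                                                                                              out.mkdir(exist_ok=True)

                                                                                              # Wrap every content fragment in the shared layout and drop it into public/.
                                                                                              for page in pathlib.Path("content").glob("*.html"):
                                                                                                  (out / page.name).write_text(HEADER + page.read_text() + FOOTER)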

                                                                                            2. 1

                                                                                              I actually think this is kind of neat. I like the idea of a web of interconnected pages each similar but a little different, evolving over time. It reminds me of the old web.

                                                                                              🤔 I should redesign my website.

                                                                                              This cracked me the hell up :D

                                                                                            1. 12

                                                                                              This appeals to me. However:

                                                                                              But how can I then keep the style and layout of all my posts and pages in sync? Simple: don’t! It’s more fun that way. Look at this website: if you read any previous blog post, you’ll notice that they have a different stylesheet. This is because they were written at different times. As such, they’re like time capsules.

                                                                                              While that’s kind of cool in its own way, I don’t prefer it. Especially when it comes to a site menu.

                                                                                              My first web sites were hand-coded HTML. My motivation to learn PHP was that I wanted a consistent menu across all pages, and copy-paste was not maintainable, so I landed on PHP’s include. From there it was down the rabbit hole to becoming a web developer.

                                                                                              I use a static site generator now for nathanmlong.com, which I mostly write in Markdown. It wouldn’t kill me to write HTML, but I don’t want to copy and paste a menu everywhere.

                                                                                              1. 8

                                                                                                Case in point about the downsides, the cv link is correct on the author’s homepage. It is not correct on this page. That’s an easy mistake to make, and I’ve definitely made versions of it. However, it’s much more pleasant to fix when you can fix it everywhere by updating a template.

                                                                                                1. 3

                                                                                                  Thanks for the heads up :-)

                                                                                                  Edit: Solved by sed -i 's,href="cv",href="../cv",' */*.html. In my mind, simpler than a CMS or static site generator.

                                                                                                  1. 3

                                                                                                    “Simpler”, sure, maybe. At least for now. But maybe it won’t always be such a trivial sed command. Maybe you wrote the HTML slightly differently in certain spots.

                                                                                                    A simple or custom-built static site generator would avoid mistakes like this altogether. You could have one file for your head element. Nicer menus, sidebar, etc. And you could still write most or all of it in pure html if you wanted to.

                                                                                                    Simpler doesn’t necessarily mean better.

                                                                                                    1. 1

                                                                                                      If you need the same template for all of your pages, then yes – a templating engine is a good idea.

                                                                                                      But if you don’t need this, then a templating system makes the process unnecessarily complicated. Creating a template in a special language and fitting all pages to the same mold takes much more effort than most realize, especially in comparison with just writing single HTML pages.

                                                                                                      For example, look at my software page. I have some fancy HTML and CSS to render sidenotes in the margin (unless you use a small screen). Because the page is “self-contained”, I don’t have to worry if I ever edit the style sheet for other posts. But if I used a templating engine, I would have to worry about it.

                                                                                                2. 5

                                                                                                  Everything old is new again (or something like that)… you can always use server side includes for common elements.

                                                                                                  1. 2

                                                                                                    I like keeping my content and the final HTML site separate, and using the content to generate the site. It makes my content more flexible, but also makes generating the global menus easy, which is important to me so that my readers get a good experience.

                                                                                                    1. 2

                                                                                                      I haven’t actually used it but the caddy web server appears to have built-in templating features: https://caddyserver.com/docs/template-actions

                                                                                                      1. 1

                                                                                                        Dreamweaver supported keeping sites’ themes consistent when I tried it long ago. It was templates or something. Maybe one of the open editors can do that, too. Otherwise, it would be a good feature for them to add.

                                                                                                        1. 1

                                                                                          I hear you. I think the obvious solution then is to use something like PHP or SSI. Of course, that’s another layer of complexity, but not as much as a static site generator or CMS.