1. 4

    I’d like to write something book-like someday, and I’d idly mused about doing print on demand. This has convinced me that I actually don’t want to do that.

    This print work is fascinating and valuable and hard, and I don’t want to do it. I want to git push book readers_brain.

    1. 19

      Author here. You can definitely do a print edition of a book much more easily than I did. I specifically chose to:

      • Hand illustrate everything.
      • Use a two-column design with lots of asides.
      • Include every single line of code in the book along with contextual location information so that readers can build up a complete program.

      If your technical book doesn’t do all that, it’s a lot easier. A lot of CS books are done in LaTeX where the process is mostly automated. It is still a decent amount of work to edit, find a copyeditor, index, design a cover, etc. If you don’t have any graphic design background, you’ll end up with a more beautiful book if you go with a conventional publisher.

      1. 7

        The asides in the margins remind me of Tufte’s books. Did you take any influence from them?

        I think I remember reading that he also made sure every diagram and the text that refers to it appear on the same page, and that doing so involves a ton of labor. Not sure if that’s true, but I think so.

        I’m very impressed by the craftsmanship! :)

        1. 5

          Oh, yes, I have been a Tufte fanboy for many years. :)

        2. 4

          Thanks for the reply! All of that work definitely comes across. I wish my CS textbooks looked as good as the pictures you featured.

          Side thing: I’m totally bookmarking your tile_pages script. That kind of tool for verifying modifications to non-text documents seems super useful.

        3. 6

          I used a professional publisher, but did all of the typesetting of my books myself using LaTeX. The only difficult part was making the copyright page correspond to the publisher’s template. I wrote in a semantic markup language that happened to be valid LaTeX syntax, so for the ePub editions, I wrote a small program that parsed this into a DOM-like structure and emitted semantic HTML. I would probably just use something like Pandoc and SILE if I did this again. I did all of the diagrams with OmniGraffle (sadly, my favourite drawing program is an older version of OmniGraffle that doesn’t run on modern macOS; Omni Group went full hipster and turned their beautifully clean NeXT-inspired UIs that were a joy to use into things that look great while posing with a MacBook in a coffee shop and are a pain to use).

          I spent a while tinkering with the typeset output because I’m a typography geek, but it wasn’t really required. I used URW Grotesk for the titles and the standard LaTeX fonts for pretty much everything else.

          Getting a good copyeditor is probably the hardest part if you’re self-publishing. I worked with some fantastic ones and some mediocre ones, even with a professional publisher in the mix.

          640 pages over 15 months is a nicely sedate rate: it averages out to less than 1.5 pages per day. That’s a sustainable pace.

          1. 6

            I’m writing a book about publishing for technical people, and the interior formatting has almost been the doom of the project because I wanted an index in the print edition. As munificent says, there are much easier ways to do it depending on your particular priorities.

            My chapter about interior formatting has a recommendation specifically geared toward technical people: AsciiDoc. The format provides a good deal of power, and the tooling is pretty good — but a lot depends on what you want your end product to look like. Crafting Interpreters is a beautiful work and there’s no easy way to get that kind of format without the work (other than hiring that work out, either to a traditional publisher or independently).

            Leanpub also does a good job of giving you tooling to target ebook and print.

            1. 2

              I’m probably going to use Leanpub, primarily because I have no idea what I’d want a book to look like. I just want it to be readable.

              Ironic that a book about publishing has trouble in the designing/publishing pipeline!

              1. 4

                I just want it to be readable.

                You can try pandoc. I use it to convert GitHub-style Markdown to PDF/EPUB. The default conversion by pandoc is good enough, but I spent some time customizing it. There are templates made by others that you can use too. That said, I haven’t done print versions, only ebooks.
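
                For a sense of what that looks like, here’s a minimal sketch driving pandoc from Python (file names are placeholders; it assumes pandoc, and a LaTeX engine for the PDF output, are installed):

                    import subprocess

                    SOURCE = "book.md"  # hypothetical Markdown source

                    # EPUB: pandoc picks the output format from the file extension.
                    subprocess.run(["pandoc", SOURCE, "-o", "book.epub"], check=True)

                    # PDF: pandoc goes through LaTeX by default, so layout tweaks can
                    # be passed as -V variables instead of maintaining a custom template.
                    subprocess.run(
                        ["pandoc", SOURCE, "-o", "book.pdf", "-V", "geometry:margin=1in"],
                        check=True,
                    )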

                I wrote a blog post about my setup (it includes other resources at the end too): https://learnbyexample.github.io/customizing-pandoc/

                1. 3

                  Part of the value of the book is that I’m blazing the trail somewhat.

                  There are actually a number of fine choices for people who don’t care about having an index, and there’s even a program that will add an index to your PDF file. So it’s not terrible in general. I’ve published 4 novels and that experience was quite pleasant!

            1. 1

              Thanks to you and the others for all of the work put into making this a great online community!

              1. 4

                LLVM is gradually moving to GitHub code review. Phabricator has a few nice features (in particular, the idea of dependent revisions, so you can have one PR on top of another and have it automatically rebased on head when the first is merged), but the UI is awful. It’s marginally better if you use arc on the command line, but then you need a load of PHP goo installed on your dev machine to do things that GitHub does just with git commits (Phabricator supports multiple revision control systems, but these days it’s mostly used with git). It’s currently a big barrier to new contributors because figuring out how to use Phabricator is a painful learning experience.

                1. 4

                  GitHub’s PR review UI is far behind Phabricator in usability.

                  1. 3

                    That’s subjective. In my view, GitHub’s PR interface does about 70% of what I want, and it’s sometimes annoying when I find myself in the other 30%, but it works very well for the 70% that it does. I hope that over time it will evolve to fill in the rest. In contrast, Phabricator does 400% of what I want and it does most of it badly. There’s always a way of doing what I want with Phabricator, but it invariably involves spending ten minutes reading docs. I’ve never read documentation on GitHub’s PR interface.

                    To give an example, here is one of the most common things I do in a PR that I have no idea how to do in Phabricator: suggest a small change. With GitHub, I select the range of lines I want to modify and add a comment with a ```suggestion block in it. This can then be merged into the PR branch directly from the web UI. I presume something similar is possible in Phabricator (at least allowing the person to pull the change back with arc), but I’ve never done it.
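
                    For concreteness, the body of such a review comment is just a fenced block tagged suggestion; GitHub offers to apply it by replacing the selected lines with the block’s contents (the replacement line here is invented):

                        ```suggestion
                        the corrected version of the selected lines goes here
                        ```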

                    I’ve occasionally written a review with GitHub and forgotten to submit it, but it’s quite rare because the comment UI differentiates between leaving a single comment and starting a review. After I’ve started a review, I know that I need to do something explicit to finish the review. Phabricator is always in ‘start a review’ mode and so I think around 60% of my Phabricator reviews have ended up sitting there until I go back the next time someone comments and discover my draft is still there.

                    1. 3

                      We’ve found GitHub’s UI to have improved a fair bit in recent years, getting closer to parity with Phabricator. One thing in GitHub’s favor is its API: we’ve got a team using GitHub and they really like some of the Slack integration they’ve set up more than the Phabricator dashboard, for example.

                      We’ve also found that it’s more effort to get engineers up to speed with Phabricator than with GitHub for PRs.

                      1. 3

                        I agree that engineers are already familiar with GitHub. I don’t use Phabricator anymore, but I recall that the “context” around a diff was much better than GitHub’s. This is especially relevant when reviewing C/C++, where the surrounding context is of utmost importance in order to track objects’ lifetimes (yes, even if you use RAII).

                        1. 1

                          Yes, context around a diff was definitely much better in Phabricator than GitHub. This has gotten better on GitHub recently, though I haven’t done a detailed comparison of the two to see if Phab is still better.

                    2. 2

                      FWIW, you can replicate dependent revisions on GitHub with what GitHub users call “stacked PRs”.

                      Given branches main, feature/A and feature/B such that feature/A is a dependency of feature/B, creating a PR from A to main and B to A will automatically update the base of the latter to main when the former is merged.

                      I’ve personally found the experience to be quite clunky whenever I’ve had to do it, but the option exists. There’s definitely space here for some CLI automation.
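
                      As a sketch of what that automation might look like, here’s one way to open a two-PR stack with the PyGithub library (hypothetical; the token, repo, branch names, and titles are all placeholders):

                          from github import Github  # pip install PyGithub

                          repo = Github("YOUR_TOKEN").get_repo("owner/repo")  # placeholders

                          # First PR in the stack: feature/A against main.
                          repo.create_pull(
                              title="Part 1: groundwork",
                              body="First PR in the stack.",
                              base="main",
                              head="feature/A",
                          )

                          # Second PR: feature/B against feature/A. Per the behavior
                          # described above, GitHub retargets this PR's base to main
                          # once the first PR is merged.
                          repo.create_pull(
                              title="Part 2: built on part 1",
                              body="Depends on the feature/A PR.",
                              base="feature/A",
                              head="feature/B",
                          )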

                      1. 4

                        I’ve seen this, but it has the annoying property that you have to merge the PRs into each other and so things don’t end up in trunk until you’ve merged the last one. The Phabricator model lets you merge each feature and rebase, which avoids big divergence.

                    1. 5

                      I’m sad to see this. In my previous job at CodeYellow, we used Phabricator’s ticket tracker extensively. The other features we didn’t use so much, but just the ticket tracking, and especially the Kanban-like project boards, were extremely useful in giving insight into a project. In my new job at bevuta IT we’re using Redmine and I absolutely hate it. It’s like a worse, more enterprisey version of Trac; I can never find anything in it.

                      For CHICKEN, we’re still using Trac, and for all its limitations it’s acceptable, at least for smaller projects. I certainly don’t hate it, but I don’t love it either. Is there any decent ticket tracker out there? The GitLab and GitHub trackers are okay for bug tracking in tiny projects, but they’re not that great for feature planning and development. And for larger projects they’re effectively useless.

                      1. 2

                        I have yet to see a ticket tracker that serves all uses well. We’re tracking a very large project in Jira and it’s working well, though that’s partly because of the API and some custom stuff we’ve built on top. Oh, and “working well” also includes “annoyingly slow at times” and “fairly expensive”. Jira can be made to do almost anything.

                      1. 13

                        Incredibly trite article given the huge scope of the work they’ve done. They like the logo, the tools are “great”, generics would be nice and it’s quickish - and that’s it.

                        500k lines (even of Go, which is quite verbose) is a chunky codebase, and it’s clearly been done under load; presumably their existing engineers had to learn a new language while doing it. Wish they’d said more about the details.

                        1. 9

                          Their original article about Goliath honestly went into a lot more detail; this is more the “we’re done!” post, as far as I can tell. (That post also talks about why Kotlin, which is what I wanted when I was still there, lost out to Go.)

                          (And that article, in turn, had a number of lead-up posts discussing the massive Python refactoring required to enable this “change the wheels on the running train” program, including e.g. writing the Slicker Python refactoring tool. It’s worth tracking back through the links if you want all the gritty details.)

                          [Edit: and you might also like this article on how GraphQL was used to phase in things slowly. So there’s a lot of meat there; just not in this particular post.]

                          1. 1

                            Did they seriously look at anything other than Kotlin and Go?

                            1. 3

                              I left before this project really moved much, but I don’t think so. .NET Core didn’t exist yet (or was very new; I forget which), and no one wanted pure Java.

                          2. 3

                            I hope to write more about the Goliath project overall and how we’ve been running it. We’re actually not done yet.

                            Just for fun, this year I started giving out badges internally for things that were essentially random, e.g. who committed the 500,000th line of Go. When we hit that particular milestone, I decided to take a survey and get people’s opinions about the language, and then I took those results and wrote the blog post. To me, there was something to the fact that things were going essentially to plan with no big surprises. Not quite as interesting as if our conclusion had been “Go is terrible, we’re switching to Rust” :)

                            I’m happy to answer other questions and will take requests for future posts. We’re pretty open about our tech :)

                            The GraphQL post that @gecko linked to has details of how we’ve migrated while everything stays in motion. We will have more to say about our GraphQL client code (used for service-to-service calls), which has some interesting bits to it.

                            I don’t think learning Go has been much of a stumbling block for folks. We have had discussions around things like “when do you wrap errors” or “do you return a NotFound error or just nil if something is missing from the database”, but I’m not sure that most of those sorts of details are interesting.

                            1. 1

                              I hope to write more about the Goliath project overall and how we’ve been running it. We’re actually not done yet.

                              That would be great and very useful - my interest is partly because I have now personally seen one or two troubled Python->Go rewrites (and no successful ones yet). The sad nature of things is that rewrites that don’t work out do not lead to any blog posts, but at least it should be possible to read about the ones that did. Looking forward to reading what you write next!

                              1. 1

                                I wouldn’t be surprised if the ones that are “troubled” are ones where they either didn’t have full org support, or where they didn’t have their eyes wide open about what they were getting into. We did a lot of investigation early on to be able to size the work and have a realistic sense of just how much it was, and everyone was on board all the way up through the board of directors.

                              2. 1

                                I do wanna say @dangoor that I am really impressed by how smoothly this process all went, and I’ve really enjoyed the series of posts from you and others on the team over the last four years. I’m genuinely disappointed I wasn’t around to see that all go down/to help with it. Congratulations!

                                1. 1

                                  Thanks! We’re not done yet, but we’ve made huge progress and it has gone smoothly overall. Lots of work by lots of folks.

                                  Would’ve been great to have you be a part of the project, but I hope you’ve been enjoying your work in the intervening years!

                            1. 6

                              Thanks for sharing. I enjoy reading these retrospectives both when they’re fresh and when they’re a few years down the road. I’ve been looking at Go on and off for a few years but haven’t built anything in it yet. I am trying out a GraphQL server with it at the moment and find it a bit confusing at times, but overall pretty nice.

                              It’s good timing for me to see these types of posts, as I’m evaluating some Python lambdas I currently have that need to be changed significantly for a new system of record.

                              1. 2

                                Glad you found it helpful!

                              1. 8

                                Dave also did a ton of work on the “ES6 module system” (the standard JavaScript modules we have today). From that blog post of his, you can see that one of the things he wanted was “Future-compatibility for macros”.

                                1. 10

                                  Microservices solve an organizational problem, not a technical problem. After all, if your service is scalable, you can just have many copies of a monolith (and if it isn’t, splitting it into microservices won’t magically help). The problem they’re intended to solve is that in large projects in large organizations, one team tends to hold another team back, maybe by months or years, waiting for everyone to be ready for a synchronous product release.

                                  If you were a dictator with perfect information, you could reduce it to an optimization problem: what’s the cost of introducing a new service (including ongoing future cost and the overhead of maintaining a new communication interface), compared to the cost of adding new functionality to an existing service. Unfortunately dictators with perfect information don’t exist and a lot of organizations make poor decisions.

                                  1. 6

                                    Microservices solve an organizational problem, not a technical problem. After all, if your service is scalable, you can just have many copies of a monolith

                                    I largely agree with this, but one thing I’ll note about Khan Academy’s transition from a monolith to separate services is that we have certain services that have greatly benefitted from being separate services. Our monolith was built on Google App Engine and scaled out okay thanks to App Engine’s autoscaling. A couple of things that are technically better now that we have services:

                                    1. The service responsible for providing information about our content is able to have a much larger cache in each instance, and there are only a small number of these instances.
                                    2. We largely use Google Cloud Datastore as our database, and that matches the App Engine model perfectly well because there’s no notion of “too many connections” to Datastore. But we have some parts of our application that benefit from a relational DB, and PostgreSQL does care about the number of connections. At peak times, our monolith could have way too many instances for PostgreSQL. Now, that DB is accessed through a separate service which only has a few instances at peak.

                                    I do agree that microservices have a lot of intertwining with how the org is set up and works, but there are legitimate technical reasons to break a system apart.

                                  1. 1

                                    Nice post! I’ve always been tempted to write a book, but never done it, partly because I know what a time-suck it would be.

                                    I have to say that copy-editing has really gone down the tubes, even in commercially-published books. I usually notice grammar errors on every page of recent books, and in many cases could probably guess the author’s nationality within a few pages, especially if they’re Russian/Slavic or Indian.

                                    (Academic presses are the worst. I remember a multi-author anthology on P2P from Springer that cost something like $300, where it was obvious they’d done zero copy-editing and some chapters were almost unreadable.)

                                    1. 4

                                      In terms of copy-editing, you kinda get what you pay for. Indie publishers usually don’t have much money to spend on professionals, so they’ll often have no copy editing (that is my case) or maybe they’ll get some basic copy editing done. I’m fully aware that I’m part of the problem here; I just didn’t have the money to hire someone for an honest price. As someone who is not a native speaker, I’m sure that lots of my phrases sound strange, and sometimes they feel like you’re reading Portuguese instead of English.

                                      The book I’m working on now is a fiction book, and for that one I’m saving enough to get developmental editing, copy-editing, and a cover designer. The full package, right? :-)

                                      Regarding sloppy copy-editing and typesetting in traditional publishing, I have some opinions that ring true to me but that I have no evidence for. Many of these publishers are putting multiple books out per month. The development ecosystem, especially the Web, moves too fast, and publishers can’t produce books fast enough to benefit from being in the market at the right time, so in the end everything is kinda rushed. Publishing fast and getting into the market reduces their production cost and increases the chance of people buying the book before the topic at hand becomes boring or deviates too much from the book’s content.

                                      For example, about a decade ago I worked with a major publisher on a book that ended up being cancelled for external reasons. They had the editor communicating with me almost weekly, revising and commenting on my drafts as I saved them. It felt great and I learned a ton, even if the book never saw the light of day. On a recent book with another publisher, the editing period felt rushed, and in my subjective opinion it was too short for the kind of editing my non-native-speaker text needs before it is ready. I voiced this to some beta readers, but they were fine with the content, so I guess it might just have been my own insecurities.

                                      In the end, it is a bit of a trade-off: you can have more books, or fewer books with better quality. You can’t really have quality and large output unless people start paying more for books, so that authors have the funding to pay decent prices for the professionals they need. A similar problem plagues journalism as well: the fetish of speed doesn’t give journalists enough time to do deep investigations and research. It is all about breaking news and working the wire, publishing a gazillion small articles with no depth. Speed is a deceitful master.

                                      With all the tools and services available for indie publishing, it is no wonder that many writers focused on tech content prefer the self-publishing route over traditional publishing. I think I’m OK with having books that don’t have perfect copy-editing and typesetting if that gets more content out there. I’d be deeply sad if the other option for these authors were simply not to publish. Maybe the real culprit behind these problems is that indie tech writers are not aware that they need to hire these professionals at all. Many have no training in publishing and are doing it by the seat of their pants. Blog posts and books about it might help spread awareness of such needs.

                                      1. 2

                                        For what it’s worth, I would not have guessed you aren’t a native English speaker if you hadn’t brought it up. You write it better than many Americans I know :)

                                        1. 1

                                          Thanks a lot, this warms my heart. I always think my English sounds kinda broken :-)

                                        2. 1

                                          It can’t completely replace a human editor, but ProWritingAid can help clean up a manuscript before it reaches others and is not terribly expensive. It’s also able to open Scrivener files directly.

                                          1. 1

                                            I’m a happy customer of ProWritingAid as well :-) I really recommend it to everyone.

                                        3. 1

                                          I have to say that copy-editing has really gone down the tubes

                                          And so has the typesetting. Orphans and widows are the new normal in print. Why even bother to buy a printed book if its quality barely exceeds the result of printing a plaintext file?

                                          1. 1

                                            Which is weird, since InDesign, Pages, and (IIRC) Word all have widow/orphan prevention. Not sure if it’s enabled by default, but you’d think whoever designs the stylesheets for a book publisher would know to turn it on.

                                            TBH I don’t mind them that much. My pet peeves, typographically, are typewriter quotation marks and awful line stretching due to missing hyphenation. Oh, and Helvetica/Arial/Verdana as body text.

                                            1. 1

                                              Unfortunately, the quality of ebooks is really bad too. I’ve tried various readers, and unless you just put a PDF on an eInk display, it looks like you’re reading a Word document without the window chrome :(

                                              1. 1

                                                Well, ebooks in non-paged formats can’t be good. PDFs can, and I agree many authors/publishers are neglecting it. Hell, I keep telling people to stop neglecting the basic quality-of-life features of PDFs so much that I even made a reusable guide to it. ;)

                                                1. 1

                                                  ebooks in non-paged formats can’t be good

                                                  They could be, if the currently visible text were rendered into the desired rectangle with the TeX box-and-glue algorithm rather than whatever Blink/WebKit/KHTML fork the industry has settled on.

                                            2. 1

                                              I’m not saying professional copy-editors are useless, but for some books you ask yourself whether the author let a single person proofread it.

                                              Which kinda blows my mind, but maybe I’m the weird one for having offered to proofread several theses for friends.

                                              1. 1

                                                I guess that for many self-published tech authors, the first time a third party sees the book is at publication. Many will be writing in a vacuum, without consulting anyone, not even their friends, about the book. I’ve been like that too. Sometimes you don’t show it to others for fear of imposing some obligation on them. You’re afraid they’d rather not read or check your book, but would do it anyway because it would look bad if they didn’t. In the end it is just insecurity: people are afraid to impose on their friends, and sometimes unaware that there are highly skilled freelancers available for those tasks.

                                                Another important aspect, which plays a bigger role than people realise, is the need for speed. This is more observable among traditional publishers. They’re kinda terrified of some other publisher pushing a competing title just before their own title reaches the market. I don’t feel this fear is really warranted, but hey, they are the hundred-year-old companies who know how this works; they are probably right. I do know the fear that creeps up on you when you’ve been working for a long time on some title and competing titles start popping up on the market, and you calculate how many more will appear before you can launch. To be honest, I don’t see other authors as competitors, we’re all in this together, but if you’re writing a book about a new fancy web framework, and some weeks before you launch, eight titles arrive on the market with the exact same topic and start making a splash, you worry whether there will still be demand for it by the time your book arrives.

                                                This fear tends to make publishers cut corners in an attempt to reach the market earlier. Good copy-editing is one of the victims of this frenzy.

                                            1. 1

                                              Your Roguelike Development with JavaScript book sounds like a lot of fun! Congrats on the release.

                                              This blog post, and others like it that I’ve seen, are what have driven me to work on a book about publishing for technical people. I’m guessing that I’m about 75% through the first draft.

                                              As a self-published fiction author, I see self-published tech authors leaving a lot of opportunity on the table and making things harder on themselves because they don’t know about some of the great stuff that has been developed for indie authors. Things like Scrivener, Vellum, and Reedsy are what I’m talking about… you’ve clearly explored the indie publishing space a lot more than most. Thanks for sharing!

                                              Edit to add: Oh, and I just noticed you’re using Draft2Digital’s Universal Book Links for the links in your post. You’ve definitely come across the wonderful indiepub tools out there :)

                                              1. 1

                                                Thanks for the kind words :-)

                                                I try to keep up to date with all the tools wide and indie publishers are using. I’m writing fiction as well (not yet ready to publish, though) and these tools have been invaluable for me.

                                                Great idea on the book about publishing for technical people, I think there are a lot of technical people who can benefit from it. Keep pushing!

                                                1. 1

                                                  Thanks! Good luck with your fiction! It’s definitely a very different market.

                                              1. 2

                                                I write technical documents in Sphinx. What would you say are the relative benefits of using Scrivener instead?

                                                1. 4

                                                  Not OP, but I have used Scrivener for years, though for fiction. Here’s what I like about it, which may or may not actually be things you care about:

                                                  1. Long documents are broken up into smaller chunks (called scrivenings), with essentially arbitrary levels of nesting. You can select several of these and have the main document area show all of those chunks together.
                                                  2. It has a powerful “compile” feature that can take your book and produce ready-to-go ebook and print formats (or Word, if you need to get the doc into someone else’s hands).
                                                  3. If you’re working on the kind of thing where you’d want to change the flow of the document, there’s a corkboard view that lets you move the individual scrivenings around.
                                                  4. Notes that are not part of the final doc can be stored in the same Scrivener doc, just in a separate folder within it. All neatly accessible in the UI.
                                                  5. Each scrivening can have its own metadata, tracking its status (first draft, second draft, final draft, etc.), along with notes specific to it.
                                                  6. It can sync with iOS for writing/updating on the go (I sometimes prefer to use my iPad for this).
                                                  7. Word count targets are nice when you’re trying to get a project done.

                                                  I’m sure I’m leaving a lot of features out. This is one of those tools that I think will suit some people’s brains and not others. Kind of like todo list apps.

                                                  1. 1

                                                    I never used Sphinx so I can’t comment on it, but @dangoor’s comment is spot on. I think that Scrivener is beneficial for those who are writing longer works, as they can keep research, notes, links, and the content all in one place. It feels like a companion or an assistant, always ready to help you write your book.

                                                  1. 8

                                                    I feel like the graph vs. tree thing is something of a false dichotomy. At my workplace, we use Confluence and use its tools to make the information as accessible as possible:

                                                    • Yes, there’s a tree. That’s just how Confluence works.
                                                    • We cross-link a lot; Confluence makes this easy. In fact, easier than it has ever been, because the linking dialog you get with cmd-K (on Mac) quickly searches all of the pages for title matches.
                                                    • We use tags/labels on pages, providing another way to search.
                                                    • A top-level page for our engineering docs provides quick searches for those docs, plus the collection of labels.

                                                    Our documentation isn’t perfect and, sure, it’s not always obvious where in the hierarchy people should put stuff, but there really are a lot of ways to find information in Confluence.

                                                    1. 3

                                                      I don’t think it is a false dichotomy. Adding links between pages in a tree-based structure might approximate the graph structure, but it doesn’t fix the underlying issue. Whatever hierarchy is created will be wrong, and certain documents or information will not fit within it. That information is then either lost or placed in a poor location, or the hierarchy has to go through a restructure. The graph model doesn’t have this issue because there is no inherent structure. There is no hierarchy to get wrong.

                                                      Of course, you can just treat a tree-based documentation system as if it was a graph based system, and use tags/labels, cross-links and a flat hierarchy, but at that point why use the system over something that is designed to support that use-case?

                                                      1. 3

                                                        Q: What’s the difference between a tree-based structure and an index of pages/sections where the index is organised by category and the category has subcategories (usually numbered, e.g. 3.2.1)?

                                                        Perhaps the difference is if you mandate creating the index first, rather than it being a future summary of existing material? Perhaps the correct term is graph-first vs tree-first?

                                                        1. 1

                                                          Q: What’s the difference between a tree-based structure and an index of pages/sections where the index is organised by category and the category has subcategories (usually numbered, e.g. 3.2.1)?

                                                          If I’m understanding you correctly they are effectively the same thing. Textbooks, for instance, are tree-based structures.

                                                          Perhaps the difference is if you mandate creating the index first, rather than it being a future summary of existing material? Perhaps the correct term is graph-first vs tree-first?

                                                          The issue is that an index is only going to be correct at a specific snapshot in time. Unlike textbooks, company documentation is a living entity that changes over time (although textbooks do have revisions). Once a tree-like index is added to the documentation, it will eventually become out of date and either require redoing or cause the documentation to rot.

                                                          It doesn’t really matter when the index is added, I think we should attempt to avoid it completely. The alternative in a graph-based documentation system is to have categories and tags, with the main difference being that a single page can be in multiple categories or have multiple tags.

                                                    1. 3

                                                      Rendering HTML snippets server-side is a good idea until it isn’t. For most websites it should be a good approach, but you need to know the limitations and confirm they won’t be an issue in your case.

                                                      If you need to consider a lot of client state when rendering, you either need to pass the full state on every request or keep sessions server-side. And if you need more than one server instance, sessions become painful.

                                                      And if you use the same data over and over to render a multitude of different views, both the bandwidth and the server load might become an issue.

                                                      Cache invalidation is also something to think about, and I bet many people pick a React stack just to avoid thinking about what needs rerendering.

                                                      We’ve been there before. So on one hand it’s tried and true technology, but on the other hand we (should) know the limits of the approach. I’ve personally seen very difficult problems caused by an HTML snippet approach in two different projects, and minor issues in a couple more.

                                                      I’m no fan of modern web, but there are reasons we got where we are.

                                                      1. 2

                                                        I can imagine that a lot of sites could get by with the Hotwire approach by having just enough client-side state to avoid sessions on the server. Bits that need to be dynamic on the client can still be, and the server can still send JSON where needed. It’s more that the default approach is to have the server generate all of the visible stuff on the page, which can actually make cache invalidation/state management issues on the client side much simpler.

                                                        You wouldn’t make Figma with Hotwire, of course, but I think there are probably a fair number of content sites and simpler apps that could become simpler this way.

                                                        I might argue that an email client like Hey is probably not a good fit, but I guess it works for them 🤷‍♂️

                                                      1. 2

                                                        I use Feedly as well and have for years. Recently upgraded to Pro because of the email subscription and integration with Reddit.

                                                        I agree that their UI isn’t always great. Fortunately, there seem to be many apps that support Feedly’s API. I use Reeder, personally.

                                                        1. 3

                                                          You can buy Gigabit Ethernet C-mount cameras for around the same price as a nice webcam, and less than a DSLR.

                                                          Or alternatively Niklas Fauth is building one from scratch. https://twitter.com/FauthNiklas/status/1265017260575465474

                                                          1. 2

                                                            Can you easily use the video from those cameras with Zoom/Google Meet? (i.e. does it act like a local webcam?)

                                                            1. 2

                                                              Yes, on Linux at least. UV4L has an IP-stream-to-v4l2 converter that makes them available as standard camera sources. So it’s not total plug and play, but very achievable.
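
                                                              If you’d rather not use UV4L, the same general idea also works with ffmpeg plus the v4l2loopback kernel module; here’s a rough sketch (the stream URL and device number are placeholders, and the module must already be loaded, e.g. with modprobe v4l2loopback video_nr=10):

                                                                  import subprocess

                                                                  # Feed an IP camera stream into a v4l2loopback device so apps
                                                                  # like Zoom see it as a regular webcam. The URL and device
                                                                  # path are placeholders for your own setup.
                                                                  subprocess.run([
                                                                      "ffmpeg",
                                                                      "-i", "rtsp://camera.local/stream",  # hypothetical camera URL
                                                                      "-f", "v4l2",                        # write to a V4L2 device
                                                                      "-pix_fmt", "yuv420p",               # pixel format most apps accept
                                                                      "/dev/video10",
                                                                  ])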

                                                            2. 1

                                                              What’s the latency for these ethernet cameras? Can they be used for video calling?

                                                              1. 2

                                                                Yes, plenty good enough for video calling. Way less lag than the call itself has.

                                                            1. 4

                                                              The benefit of sticking to RC is much-reduced memory consumption. It turns out that for a tracing GC to achieve performance comparable with manual allocation, it needs several times the memory (different studies find different overheads, but at least 4x is a conservative lower bound). While I haven’t seen a study comparing RC, my personal experience is that the overhead is much lower, much more predictable, and can usually be driven down with little additional effort if needed.

                                                              This is highly questionable: yes, RC requires less memory, but its baseline is much slower than GC.

                                                              Plus, if one created new GCed systems these days, one certainly wouldn’t go with a language in which ~everything is a reference to be traced and collected (e.g. Java).

                                                              GC is fine, but if references make up more than 20% of your memory consumption, you are doing it wrong.

                                                              1. 8

                                                                I wonder if the “much slower” part applies when you have some measure of control over how long the operations take. Retaining and releasing an NSObject on the M1 is almost 5 times faster than on Intel, and even twice as fast when emulating Intel.

                                                                Certainly makes it harder to make performance comparisons when the CPUs behave so differently.

                                                                1. 2

                                                                  I’d expect these performance improvements to also benefit GC, though not as much, and depending on the GC algorithm used.

                                                              1. 2

                                                                Cool! Thanks for sharing!

                                                                For anyone who doesn’t want to take on building such a thing, I can recommend the Stream Deck which gives you programmable buttons with color LCD keycaps.

                                                                1. 1

                                                                  I agree that this is a nice bit of UX, but I’m not sure how much the typical user cares. Even so, if the tooling is there to do it, why not?

                                                                  Relatedly, the Go team recently accepted a proposal for the core go tool to support file embedding.

                                                                  1. 2

                                                                    He starts out by explaining that the typical user does care:

                                                                    Passing these around to friends and seeing some of them try to share the apps by copying the exes then wonder why they break made me realize something: to a lot of users, the app is the icon they click and everything else is noise. And the user is right.

                                                                  As he goes on to say, Mac apps have always been like this. In fact, before Mac OS X (2001), an app really was a single file. This worked because the old Mac APIs had a “Resource Manager”, which was conceptually similar to an embedded archive, and apps made calls like GetResource('PICT',128) to load associated data instead of going through the filesystem.

                                                                    During Mac OS X development (1998 IIRC; I was at Apple then but not in the exact area this was happening) there was internal debate about whether to keep this or whether to use bundles (directories that look like files in the GUI) as NeXTSTEP did. The Cocoa (OpenStep) APIs all assumed bundles, and it would have been hard to change them all away from the filesystem API.

                                                                  Apparently some people built a quick prototype that did exactly what the blog post describes: it mounted a Zip archive in the filesystem so Cocoa could run unmodified, while still having a single-file app. I heard that the app’s launch time regressed a lot, so the idea was dropped (performance was already bad enough in 10.0).

                                                                  But I wish they’d persevered with that approach. They could probably have optimized it a lot. As a bonus, apps would have been smaller (files in them wouldn’t be padded to 4K sector boundaries), copying them would have been a lot faster, and reading a bundled file into memory would be super fast if the entire app file were mmapped.

                                                                  1. 13

                                                                    It has become difficult for me to avoid the suspicion that this class of complaint is another way of saying that semver often doesn’t work very well in practice.

                                                                    1. 18

                                                                      I think it does work well in practice, for packages that practice it. I think a lot of people still have this “only want to increment the major version for big stuff” mindset as opposed to “major version just denotes breaking changes and it’s okay if we’re releasing version 105.2”.

                                                                      1. 4

                                                                        And for packages which can practice it. Many packages can’t change anything without altering previous behavior. It’s hard to think “people might depend on this bug, so it’s a breaking change.”

                                                                        1. 2

                                                                        I was thinking about this recently too… normally you would think of adding a new function as a minor change - not breaking compatibility, but not just an internal fix either.

                                                                          But, on the other hand, if you add a new function, it might conflict with an existing name in some third party library the user also imports and then boom they have a name conflict they must resolve.

                                                                          So you could fairly argue that all changes are potentially breaking changes…

                                                                          1. 5

                                                                            Isn’t this why namespaces are a thing?

                                                                            1. 3

                                                                              Not in languages like C, which still does have libraries.

                                                                              1. 2

                                                                                They’re a thing but there are frequently ways to have this problem anyway

                                                                                E.g.

                                                                                 from dependency import *
                                                                                

                                                                            in Python. “Don’t do that” is fair to say, but if somebody downstream already has, they have to deal with fixing the ambiguity.

                                                                                You can have subtler versions of this for example in C++ ADL can bite you:

                                                                                int foo(tpnamespace::Type v) { ... }
                                                                                

                                                                            if your dependency later adds a function named foo in their namespace, the meaning of

                                                                                foo(...)
                                                                                

                                                                                in your program may change.

                                                                                A world where every identifier is fully qualified to avoid running into this after an upgrade starts to look similar to a world with no namespaces at all.

                                                                                1. 1

                                                                              This is precisely it; you can import all and get conflicts. In the D language I use, you can do it with decent confidence too: the compiler automatically detects conflicts and offers ways to resolve them very easily (you can just write alias foo = mything.foo; to give it priority in this scope, among other things).

                                                                                  But nevertheless, if the conflict happens in a dependency of a dependency because one of its dependencies added something… it can be a pain. Some unrelated change caused a name conflict compile error that is halting your build.

                                                                                  (of course personally I say the real wtf is using dependencies with dependencies on dependencies. but meh)

                                                                          2. 3

                                                                            I think a lot of people still have this “only want to increment the major version for big stuff”…

                                                                            This has been my experience as well. Forcing a major increment for breaking changes has a secondary benefit in that it encourages developers to think hard about whether they actually need to break an API, or can judiciously deprecate to provide a smooth path forward.

                                                                          3. 11

                                                                            I would like to point out that you’re far less likely to come across a blog post that says “I’ve been using SemVer for the past several years and it’s been working very well in practice”. SemVer is probably one of those things that, when it works, you get to not think about it much and carry on with whatever it was you were actually trying to do.

                                                                            1. 4

                                                                              This class of complaint is part of how the system works in practice.

                                                                      Semver is basically a way of managing expectations between API producers and consumers. Complaining when producers don’t follow the stated guidelines is part of the feedback loop that maintains consensus about what changes are allowed.

                                                                              1. 2

                                                                                Exactly. The only other thing I would add is something about scale in terms of the number of independent people working on independent projects that can be combined together in any one of a number of ways. Or in other words, a lack of centralization.

                                                                                If the scale problem didn’t exist, then I absolutely would not want to deal with semver. But to some extent, the problems with semver are the problems with communication itself.

                                                                            1. 8

                                                                              In addition to the complaint about not following the breaking change requirement, I also dislike when packages spend years with tons of production users but refuse to reach “1.0” because they don’t want to commit to the semantic versioning requirement (lookin’ at you Hugo and Buffalo).

                                                                              By leaving things at 0.x.y, users have to assume that every single 0.x change could break them, and that’s annoying.

                                                                              1. 2

                                                                      I’ll add Terraform to this list. It’s otherwise a great tool, but version upgrades from 0.n to 0.n+1 have been a pain. That said, I believe the developers think this is the best way to maintain the project at the moment.

                                                                                1. 2

                                                                                  I think it’s less annoying than companies locking into something arbitrarily. I prefer this honesty in projects because, hey, maybe they will break stuff whenever they want. I want to know that a project might do this.

                                                                      I usually interpret this as the company not taking the time to commit to not breaking. With projects like Hugo that’s perfectly fine, as I get what I pay for. I’d much rather they take this approach than release a new major version every month without actually breaking anything (lookin’ at you, Firefox). Functionally, that’s the same as 0.x.y, but it’s hard or even impossible to tell when they really release breaking stuff.