1.  

    I agree with the ideas presented here, though I don’t care for the particulars around Gemini. That said, imperfect things have been known to take off :)

    The author talks about possibly wanting something that’s weekly or monthly. This is actually pretty common and popular today: the email newsletter. The podcast is another example of this. It’s possible to go overboard even with this slower-to-update content.

    I still think the idea of a push toward content that has a chance to go beyond hot takes is worthwhile. Even commenting systems like this one kind of encourage quick responses. Some of those responses are quite helpful and interesting; others, especially depending on the platform and moderators, can be toxic. It’s much more labor intensive, but a “letters to the editor” style of comments seems potentially more valuable.

    So far, I think Lobsters strikes a nice balance. The front page doesn’t move that quickly. Comment threads don’t tend to get long and out of control, or filled with vitriol and noise (which I’m sure is due to good moderation and a community that supports better discourse).

    In summary, I think there are already ways for people to jump off of the endless scrolling treadmill if they wish, at least when it comes to “news”.

    1.  

      ^ This. I subscribe to Stack Overflow’s weekly newsletters and read LWN only once a week, on Thursdays, when the weekly edition is released. I browse SlowerNews.com. I much prefer Lobsters over Hacker News for this reason. Currently, I’m thinking of unsubscribing from most NYTimes newsletters due to news overload.

    1. 3

      I really like this, and can see myself using this in some form.

      One feature that’s missing here that would push it over the line to “killer app” for me, would be the ability to share data as well as applications. That way, you could sync your tasks from a todo app from your laptop to your phone, for example.

      Anyway, thanks for sharing!

      1. 3

        You could probably get jlongster’s CRDT implementation running in here.

        1. 4

          schism would probably be a better option since it’s also in ClojureScript :)

          1. 2

            Oh very cool I’ll check these out. Thanks!

      1. 4

        Wouldn’t it make more sense to have some kind of HTTP header and/or meta tag that turns off JavaScript, cookies, and maybe selected parts of CSS?

        If we could get browser vendors to treat that a bit like the HTTPS padlock indicators, we’d have some kind of visual indicator that a page is “tracking free”.

        Link tracking will be a harder nut to crack. First we turn off redirects. Only direct links to resources. Then we make a cryptographic proof of the contents of a page - something a bit fuzzy like image watermarking. Finally we demand that site owners publish some kind of list of proofs so we can verify the page is not being individually tailored to the current user.

        1. 11

          The CSP header already allows this to an extent. You can just set script-src 'none' and no JavaScript can run on your web page.
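
          Concretely, that’s a single response header:

              Content-Security-Policy: script-src 'none'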

          1. 1

            Very true. Not visible to the user, though!

          2. 5

            Browsers already render both text/html and application/pdf, and hyperlinking works. There is no technical barrier to adding, say, text/markdown into the mix. Or application/ria (see below), for that matter. We could start by disabling everything that already requires permission, that is, audio/video capture, location, notifications, etc. Since application/ria would be a compat hazard, it probably should continue to be text/html, and what-ideally-should-be-text/html would be something like text/html-without-ria. This clearly works. The question is one of market, that is, whether there is enough demand for this.

            1. 5

              Someone probably should implement this as, say, a Firefox extension. PDF rendering in Firefox is already done with PDF.js. Do the exact same thing for Markdown: take a GitHub-compatible JS Markdown implementation with GitHub’s default styling. Have a “prefer Markdown” preference. When the preference is set, send Accept: text/markdown, text/html. Using normal HTTP content negotiation, if the server has a text/markdown version and sends it, it is rendered just like PDF. Otherwise it works the same, etc. Before server support arrives, the extension probably could intercept well-known URLs and replace content with Markdown for, say, Discourse forums. Sounds like an interesting side project to try.
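
              A minimal sketch of the server side of that content negotiation, assuming a Node server (the port and responses below are invented for illustration):

                  import * as http from "http";

                  http.createServer((req, res) => {
                    const accept = req.headers["accept"] ?? "";
                    // A client that prefers Markdown (like the proposed extension)
                    // sends "Accept: text/markdown, text/html".
                    if (accept.includes("text/markdown")) {
                      res.writeHead(200, { "Content-Type": "text/markdown" });
                      res.end("# Hello\n\nServed as Markdown.\n");
                    } else {
                      // Everyone else gets the usual HTML rendering.
                      res.writeHead(200, { "Content-Type": "text/html" });
                      res.end("<h1>Hello</h1><p>Served as HTML.</p>");
                    }
                  }).listen(8080);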

              1. 8

                Browsers already render both text/html and application/pdf, and hyperlinking works. There is no technical barrier to adding, say, text/markdown into the mix.

                Someone probably should implement this as, say, a Firefox extension.

                Historical note: this is how Konqueror (the KDE browser) started. Konqueror was not meant to be a browser, but a universal document viewer. Documents would flow through a transport protocol (implemented by a KIO library) and be interpreted by the appropriate component (called a KPart). (See https://docs.kde.org/trunk5/en/applications/konqueror/introduction.html)

                In the end Konqueror focused on being mostly a browser, or an ad-hoc shell around KIO::HTTP and KHTML (the parent of WebKit), and Okular (the app + the KPart) took care of all the main “document formats” (PDF, DjVu, etc.).

                1. 2

                  Not saying it’s a bad idea, but there are important details to consider. E.g. you’d need to agree on which flavor of Markdown to use, there are… many.

                    1. 2

                      Eh, that’s why I specified GitHub flavor?

                      1. 1

                        Oops, my brain seems to have skipped that part when I read your comment, sorry.

                        The “variant” addition in RFC 7763 linked by spc476, to indicate which of the various Markdowns you’ve used when writing the content, seems like a good idea. No need to make GitHub the owner of the specification, IMHO.

                      2. 1

                        What’s wrong with Standard Markdown?

                    2. 2

                      Markdown is a superset of HTML.

                      I’ve seen this notion put forward a few times (e.g., in this thread, which prompted me to submit this article), so it seems like this is a common misconception.

                    3. 4

                      Why would web authors use it? I can imagine some small reasons (a hosting site might mandate static pages only), but they seem niche.

                      Or is your hope that users will configure their browsers to reject pages that don’t have the header? There are already significant improvements on the tracking/advertising/bloat front when you block javascript, but users overwhelmingly don’t do it, because they’d rather have the functionality.

                      1. 2

                        I think the idea is that it is a way for web authors to verifiably prove to users that the content is tracking free. A Markdown renderer would be tracking free unless buggy. (It would be an XSS bug.) The difference from noscript is that script-y sites still transparently work.

                        In the envisioned implementation, like HTTPS sites getting a padlock, document-only sites will get a cute document icon to give users a warm fuzzy feeling. If the icon is as visible as the padlock, I think many web authors will use it when the page is in fact a document and it can be easily done.

                        Note that the Markdown renderer could still use JavaScript to provide interactive features: say, collapsible sections. That is okay because the JavaScript comes from the browser, which is a trusted source.

                      2. 3

                        Another HTTP header that maybe some browsers will support shoddily, and the rest will ignore?

                        1. 2

                          I’ve found the HTTP Accept header to be well supported by all currently relevant software. That’s why I think a separate MIME type is the way to go.

                        2. 2

                          I think link tracking is essentially impossible to avoid, as are redirects. The web already has a huge problem with dead links and redirects at least make it possible to maintain more of the web over time.

                          1. 4

                            This (at least judging from the examples and the introduction) makes a mess of progressive enhancement and accessibility. We do not want other elements to become interactive; on the contrary, we want to erase from collective memory the fact that we ever used divs and anchors as buttons in the first place. There’s not a single keyboard event listener in the source code…

                            1. 4

                              At least the “quick start” example on the site’s front page uses a <button>. I don’t think this is incompatible with a11y, but the examples certainly don’t do a good job of promoting good habits there.

                              1. 1

                                Furthermore:

                                Note that when you are using htmx, on the server side you respond with HTML, not JSON. This keeps you firmly within the original web programming model, using Hypertext As The Engine Of Application State without even needing to really understand that concept.

                                If you’re returning HTML, I’m assuming you’ll be returning the whole page (i.e. the way no-JS websites usually work), and to truly make the JS optional, you’d want to:

                                <!-- method/action keep the form usable without JS; hx-post lets
                                     htmx take over and swap in #myWidget when JS is available -->
                                <form method='post' action='clicked/' hx-post='clicked/'
                                      hx-swap='outerHTML' hx-select='#myWidget'>
                                  <button>Click me</button>
                                </form>
                                

                                That being said, I’m sure the HTML-attribute-driven approach has merits (also of interest, from the bookmark archive: https://mavo.io/).

                                1. 1

                                  There’s no specific need to return the whole page.

                                  Detect the request as coming via XHR, and omit the parts you don’t need.

                                  1. 1

                                    Yes, you could have the library add a header you can pick up on the server, and respond more succinctly. I was thinking about the minimal amount of markup / extra work that still maintains compatibility with no-JS.

                                    (Also, I guess I was a bit in disbelief that the meaning of that quoted paragraph was that you’re supposed to respond with HTML fragments, but the other examples make clear that this is in fact the intention)

                                    1. 1

                                      X-Requested-With has been a de facto standard for JS libs that wrap XHR for years, and a bunch of backend frameworks already support it (i.e. just rendering the view specific to an action without any global layout or header+footer type templating): https://en.wikipedia.org/wiki/List_of_HTTP_header_fields#Requested-With

                                      This approach works very well for maintaining no-JS compatibility: the header is only set by JS, so for no-JS requests it’s just a regular request, and the “full page” render is completed as per usual.
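
                                      As a rough sketch, assuming an Express backend (the route and markup are invented to match the earlier example), that check might look like:

                                          import express from "express";

                                          const app = express();

                                          app.post("/clicked", (req, res) => {
                                            if (req.get("X-Requested-With") === "XMLHttpRequest") {
                                              // JS request: return just the fragment htmx will swap in.
                                              res.send("<div id='myWidget'>Updated!</div>");
                                            } else {
                                              // No-JS request: render the full page as usual.
                                              res.send("<html><body><div id='myWidget'>Updated!</div></body></html>");
                                            }
                                          });

                                          app.listen(3000);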

                                2. 1

                                  I asked the author on HN about no-JS support, and they said: when possible.

                              1. 2

                                I was inspired a bit by Roam. I’ve recently started using a frankensystem… I use Bear for notes that are only useful to me.

                                For notes that could potentially be useful to others, I’ve started using NValt + a small bit of Go code + Hugo + GitHub + Netlify to generate my website, complete with backlinks between notes.

                                1. 1

                                  I appreciate the teaching you’re doing here to help people get going with a clean, simple static site. I also like the suggestion to put a checklist in the template.

                                  I don’t know about your plans for the end of the series, but I wonder if you’d want to recommend a simple static site generator to automate the checklist pieces. I like Hugo, but I don’t think it qualifies as “simple”… I wonder if there’s a generator that basically does just the bare minimum above a static site like this? (I assume there’s gotta be.) You could point people at staticgen, but that seems overwhelming given that you’ve created a resource which works well for those starting out.

                                  Edited to add: I’d hate to see RSS go away, and maintaining index pages and RSS feeds is something that static site generators do well.

                                  Also: you’re certainly entitled to any license you want to choose, but by choosing a ShareAlike license, you’re essentially requiring the folks who build on your light site to release all of their content under a ShareAlike license. While I’m happy to share lots of things, I don’t know that I’d want all of my writing to be endlessly reproducible.

                                  1. 1

                                    I’ve been thinking about how I can wrap it all up at the end. Integrating it with something like Hugo or Jekyll is what I’m thinking about doing, but as you said, that’s a whole different ballgame.

                                    I’ve seen some scripts on GitHub that automate blog post generation, so I may look into adapting one of those. TL;DR I’m not sure yet, but I agree there needs to be something to improve this workflow.

                                    Great point on the license - I need to either change it, or clarify that it relates to the website’s code, not any blog posts produced. Thanks!

                                  1. 11

                                    I like Go’s version of this: they declared it 1.0 when they were ready to guarantee backwards compatibility and really work for it. They didn’t have a lot of things on your list, and it clearly wasn’t a problem.

                                    What Go had was that compatibility guarantee and features people wanted.

                                    1. 3

                                      I think this is key. Dependency management years or decades down the line is a much bigger concern in production systems than any fleeting technical problem (it’s always possible to roll your own solution, in any language, for any problem the language doesn’t solve for you) & the biggest factor in dependency management is whether or not new code and old code can coexist. Fear of old code getting broken by upgrades is why a lot of big companies are still running Linux 2.4 kernels, standardizing on Python 1.x, or maintaining OS/360 machines decades after nobody in their right mind would use that tech for new projects.

                                      Where I work, there was a push for experiments in new features to be done in Julia because Julia was a lot better suited to our problems (doing simple statistics on large data sets in close to real time, where plenty of hardware was potentially available) than the languages we were using (a mix of Java and Perl). When Julia announced that they were about to decide on a 1.0 standard, this was very exciting, because it meant we could write production code in Julia that wasn’t tied to a particular minor version of the language & theoretically there could be independent implementations of the compiler that adhere to the spec.

                                      On that subject – the number of independent implementations is sometimes a concern. Occasionally, with a language where there’s only one usable implementation, the developers will make a change that makes existing code infeasible (either broken or too slow for your use case) and you’ll need to decide between rewriting your code & keeping an old version of the language (which, over enough time, eventually becomes maintaining a fork of the old version, as builds eventually break against newer versions of dependency libraries and such). When there are multiple independent implementations, it’s less likely that they will all break your code in the same way at the same time, so you have the additional option of switching to a different implementation. This is less common than one would hope, though – there aren’t even very many serious C compilers anymore, there never were very many serious fully-independent implementations of JavaScript or Perl, the attempts at alternative implementations of C# and Java have fallen out of date with the flagship implementations, and in a strange twist of fate, Python leads the pack here (with the odd company of Prolog, Smalltalk, and APL in tow)!

                                      1. 1

                                        Yeah I’m pretty sure Rust was the same way. They declared 1.0 when they felt like they weren’t going to break anything. I’m pretty sure they knew at the time that major features like async/await would be added after 1.0.

                                        I think Go had:

                                        • editor support but no IDE support.
                                        • no package manager. Did the “go” tool even exist at 1.0?
                                        • no package index
                                        • good testing tools
                                        • no debugger
                                        • some integration with Google monitoring tools but probably nothing external

                                        And very importantly they had good docs.

                                        I think a good test of 1.0 is you can write good docs with a straight face. Explaining a language helps debug the corners.

                                        Despite all those missing things, people started using it anyway. So there are two different questions: 1.0 and “production ready”. But I would say there will be early adopters who use a 1.0 in production (and some even before that!)

                                      1. 2

                                        PRs are only a thing because devs can’t manage branches and personal workflow, and would rather lean on the tool to solve their social problems.

                                        Look, if you work in a team and are well organized, and the team is communicating well within itself, you don’t have to fret about PRs. You can just let the devs have their own branches, and when necessary, have a merge party to produce a build for testing.

                                        Alas, not all projects can attain this level of competence. PRs are there to allow community, even if folks can’t organise well enough to know each other and how to contribute to each other’s work in a positive flow.

                                        For many projects, a rejected PR is just as valuable as an auto-merge. It gets people communicating.

                                        1. 2

                                          One potential problem with giving each dev their own branch and merging all branches at once to make a build is that the build has a lot of potential changes, and when something inevitably goes wrong it isn’t always clear what exactly caused it to break. Good communication and code quality can mitigate issues with integrating the code, but I don’t think they completely eliminate them. If the alternative is releasing a build with multiple PRs anyway, then this might not be a problem, but if your alternative is releasing a build with every single change then it’s a distinct disadvantage.

                                          1. 2

                                            Why would you want to have a merge party where you merge in a whole bunch of stuff at once rather than reviewing and merging smaller changes one at a time?

                                            1. 1

                                              Maybe because your team is productive.

                                              Because you’re pushing features forward, trust your fellow devs, and everyone is working well enough that it doesn’t matter - and it means that features can be tested in isolation. Plus, it’s very rewarding to get a branch merge done; you suddenly get a much bigger and better app for the next round of work ..

                                              1. 1

                                                Not in the long run, and not if your code is in production. One of the goals of the review process is to get the others on the team acquainted with the changes, so they can support them in the future, when the author goes on vacation or leaves the company. Merge parties cram in way too much information for purposeful comprehension. Remember, coding is mainly a joint activity of solving business problems, and its product will require support and maintenance.

                                                1. 1

                                                  Our Merge parties include team review, so .. not really encountering the issues you mention.

                                                  1. 1

                                                    How big are your change requests, how long do these reviews last?

                                                    1. 1

                                                      Weekly, takes a day for the team, and then we start the 3-day tests ..

                                                      But look, whatever, not everyone’s use case is the same. The point is, scale according to your needs, but don’t ignore the beauty of PRs as a mechanism when you need them. If you don’t need them, do something else that works.

                                          2. 1

                                            You can just let the devs have their own branches, and when necessary, have a merge party to produce a build for testing.

                                            Is that really a thing? I’ve never been on a team that did that, and it sounds like a mess (to put it lightly). I can’t imagine that it would scale well.

                                            1. 1

                                              For sure. I do it with the 3 other devs in my office space. We just tell each other ‘hey, I’m working on branch’, then when the time is right, we all sit together and merge.

                                              I mean, it’s probably the easiest flow ever.

                                              If you don’t do this, I wonder why? I guess it’s communication.

                                              1. 3

                                                I do it with the 3 other devs in my office space.

                                                That makes sense: the “3” and “in my office space” are a particular set of constraints. If you’ve got a system worked on by more people, and distributed across locations, that doesn’t scale in quite the same way, I think.

                                                1. 1

                                                  Get back to me when you’ve tried it with more than 10 people on the team or a project with more than 500k lines of code…

                                                  1. 1

                                                    Yeah, works just as fine at that scale too. Key thing is: devs communicating properly.

                                            1. 2

                                              Bold move, I hope that their reasons for it are justified and they’ll come out OK on the other end as I really love what Khan Academy is doing.

                                              How will you handle the server-side rendering for React if the services are in Go?

                                              1. 1

                                                Thanks, and I agree that it’s a challenge.

                                                We’ve been doing server-side React rendering for quite a while already. We’re changing the flow a bit, because it used to be that requests would go to Python which would then call out to Node for server side rendering. Going forward, our CDN will talk to the Node React render server directly to get complete pages.

                                              1. 2

                                                I’d be happy to answer any questions you have. We’ll definitely follow up with more blog posts as we go along.

                                                1. 3

                                                  Thank you for writing the article.

                                                  I have two questions:

                                                  1. Why did you decide to migrate all APIs to GraphQL?
                                                  2. How do you generate optimal database queries from a GraphQL request? ¹

                                                  ¹ I only have superficial knowledge about GraphQL, so my questions could be a bit naïve.

                                                  1. 3

                                                    Good questions!

                                                    First of all, I’ll give a shoutout to Michael Nygard for his documenting architecture decisions blog post. I didn’t have to create my own answer to the question of migrating to GraphQL. I was able to just look up the architecture decision record (from September 2017).

                                                    We saw the benefits as:

                                                    • Data from GraphQL queries is typed, which can help the front-end operate more reliably on it. Our REST endpoints also have type information, but we aren’t currently exporting this information to the front-end.
                                                    • GraphQL queries can collect a variety of data in one request. This should improve performance since the same data might take multiple REST API calls to fetch. Also GraphQL queries can omit unneeded data.
                                                    • Performance can also be improved since the web front-end client, Apollo, does caching, and this can help avoid repeated calls to the backend for data.
                                                    • Manually testing GraphQL queries is easier than testing REST APIs because of the built-in query explorer.
                                                    • Front-end developers can make some changes to queries without requiring any backend work to provide extra data.

                                                    and the drawbacks as:

                                                    • It can be harder to debug GraphQL queries when implementing them, since by default errors are handled rather than causing a traceback. We may want to explore improving the debugging story.
                                                    • For one-off queries, it may be easier to implement a REST endpoint, because to implement a GraphQL query involves building classes that describe the types of the different parts of the response rather than just describing the signature.
                                                    • Our current implementation of GraphQL makes us reliant on Apollo and Graphene. Both libraries have quirks that have to be learned and worked around.

                                                    As far as generating optimal database queries, GraphQL server implementations generally have the idea of writing a “resolver” which knows how to look up entities in the database. Since GraphQL’s schema is a graph, that means you could have an entity that’s an “invoice” and you could ask for a specific invoice by ID and request its line items (which are each distinct entities as well). In an RDBMS, there’s a normal 1:many relationship between invoice and line items, and the resolver would be able to collect up all of the line items with a single query.

                                                    Generally speaking, you can design your GraphQL schema to make lookups reasonably efficient for whatever kind of database tech you’re using. (If you’re using a graph database, it becomes really natural, I should think. We’re not, though.)
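
                                                    As a rough sketch of that resolver idea (illustrative only: the names and the ctx.db data layer below are invented, not our actual code), the invoice example might look like this in a JavaScript GraphQL server:

                                                        // A client query such as
                                                        //   { invoice(id: "42") { id lineItems { description amount } } }
                                                        // is answered by walking these resolvers down the graph.
                                                        interface Ctx {
                                                          db: {
                                                            invoices: { findById(id: string): Promise<unknown> };
                                                            lineItems: { findByInvoiceId(id: string): Promise<unknown[]> };
                                                          };
                                                        }

                                                        const resolvers = {
                                                          Query: {
                                                            // Look up a single invoice by ID.
                                                            invoice: (_parent: unknown, args: { id: string }, ctx: Ctx) =>
                                                              ctx.db.invoices.findById(args.id),
                                                          },
                                                          Invoice: {
                                                            // Collect all of this invoice's line items with a single query
                                                            // (the normal 1:many relationship in an RDBMS).
                                                            lineItems: (invoice: { id: string }, _args: unknown, ctx: Ctx) =>
                                                              ctx.db.lineItems.findByInvoiceId(invoice.id),
                                                          },
                                                        };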

                                                    I’ll note that our new GraphQL setup is more complex because of federation. Looking up entities in our datastore doesn’t change, but the queries themselves go through a query planner which distributes parts of the query to different services.

                                                1. 7

                                                  Rik Arends has been working on Makepad, which he describes as “a Rust IDE for VR that compiles to wasm/webGL, osx/metal, windows/dx11, linux/opengl”.

                                                  I have an Oculus Quest, which is also 1600x1440 per eye. It’s a completely wireless, standalone unit which is great, and Makepad is being built to run on it. I quite like it for games and experiences like that, but text wouldn’t be super sharp. That said, Rik has been actively working on this stuff, and I’m just guessing what the experience might be like.

                                                  If I’m actually coding something for VR, I think it could be awesome. But I don’t think I’d find much value in it over a 2D interface for normal coding.

                                                  1. 3

                                                    Thanks for the link to Makepad! I’ll be excitedly following along.

                                                    The main advantage I see to using VR for coding (in a traditional setup with a text editor, not some awesome visualisation of your software) is “real estate”. I use two 4k 27” monitors, but I’ve worked with three before in an office (before the company ran out of money, go figure) and found it even better. I like being able to see different parts of my code side by side.

                                                    I’m envisioning coding in VR to be like having a whole bunch of seamless monitors floating around my head.

                                                    I work from home now, but if I worked in an open plan office, the distraction-elimination aspect of VR would be a big plus. Imagine working in a Japanese Zen garden instead of being surrounded by other people under fluorescent lighting!

                                                    1. 1

                                                      Sorry for a super delayed reply…

                                                      I think that VR needs a whole lot more pixels before we’ll get to that sort of feeling. I like the image you’re projecting, and would love to see that happen but the hardware definitely has a ways to go.

                                                      1. 1

                                                        For sure. After doing more research I found that the useful metric here is “pixels per degree”. Retina devices do ~60. Anything higher isn’t really noticeable by humans. Mainstream VR devices don’t even crack 20.

                                                        Worse, Carmack said in his Oculus keynote last year that up until now they’ve been riding the coattails of the phone industry, but from now on will have to foot the bill for developing higher pixel density screens.

                                                  1. 7

                                                    The devil is in the details, though. The web-technology-based toolkits of course are huge, but the native toolkits vary widely in size. The problem is that under a certain size, you start sacrificing very real things like non-Western-language support and accessibility for disabled users.

                                                    But man…the web-technology-based toolkits are huge.

                                                    1. 1

                                                      But man…the web-technology-based toolkits are huge.

                                                      Well, yea.. using a full featured web browser to draw a UI for a ‘native’ app is a silly idea. That’s the trade-off for lowering the “UI toolkit” bar.

                                                      1. 1

                                                        The web-technology-based toolkits of course are huge, but the native toolkits vary widely in size.

                                                        Sciter is interesting because it’s in between. They basically made their own HTML+CSS engine specifically for making desktop apps, and apparently that approach has worked for memory consumption at least.

                                                      1. 9

                                                        Additionally, the technology landscape had shifted away from the tools we chose in late 2012 (jQuery, Signals, and direct DOM manipulation) and toward a paradigm of composable interfaces and clean application abstractions.

                                                        I always find statements like this amusing. Composable interfaces and clean application abstractions are what I always heard was what programs were supposed to be. Did people in 2012 not care about writing good programs?

                                                        Are they going to look back in another seven years and say “we shifted away from react, redux, and virtual DOM manipulation toward a paradigm of composable interfaces and clean application abstractions” as the winds shift again? Or will it shift to “we shifted away from X toward performant interfaces and clean application implementations?”

                                                        Just silly.

                                                        1. 5

                                                          Judging by [other] tech companies’ blog posts, there always seems to be enough time to move to $CURRENT_JS_ZEITGEIST_FRAMEWORK versus actually writing clean code to make your choice of libraries/frameworks irrelevant in the long run.

                                                          To their credit, it looks like Slack did the less sexy thing here while also upgrading to the current hotness.

                                                          1. 7

                                                            FWIW, React is 5 years old now. Where I work, we’ve been using it for all of that time and don’t have any plans to switch. Sure, maybe we’d use Preact to save some bytes on the wire sometimes, but it’s still fundamentally the same.

                                                            I’m not saying there will never be a thing that replaces React or that there aren’t people out there using some new hotness. My point is more that React is fundamentally easier to reason about than jQuery DOM manipulation, and until someone offers a real step change, I’d expect a lot of folks to stick with it.

                                                            1. 3

                                                              Related to this, I’m always surprised when a game company is able to switch rendering libraries (OpenGL -> Vulkan) in a few man-weeks but then I remember they usually abstract over it in their framework/engine.

                                                            2. 7

                                                              Did people in 2012 not care about writing good programs?

                                                              No they did not and they don’t now.

                                                              1. 3

                                                                A more accurate way of phrasing it would be “Additionally, as the original team got expanded and replaced, the Slack team had shifted away from the rapid prototyping tools we used in late 2012 and toward a paradigm of composable interfaces and clean application abstractions.” Apparently, they think their company is the whole universe.

                                                                1. 1

                                                                  It was a lot more work and discipline to develop and maintain a nicely organized codebase with their old tools than with their new tools, partly because composability and clean abstractions weren’t explicit design goals for those tools. React/redux really was a major improvement over previous front-end arrangements.

                                                                1. 7

                                                                  I came across this article as an interesting read. However, having worked at Uber, where we were super early adopters of microservices, getting to over 1,000 of them, which brought a lot of unexpected pain points, I feel I should add the downsides of microservices, and explain why you don’t really hear engineers from Uber boasting about how great thousands of microservices are.

                                                                  First, there’s testing: specifically, the difficulty of integration testing that results in outages. When you have microservices that depend on each other and are deployed independently, one of the most common causes of outages will be: ServiceA is deployed, then ServiceB, unaware of the latest change in ServiceA, is deployed, and boom, a problem that an integration test could have caught. Ok, so how do we write that test? Well, we now either need to have the same codebase or stop any deploys from going out without checking out the latest code for the services and running the tests they depend on. Ok, so that’s not really autonomous deployment… and try solving this problem for dozens of dependent services.

                                                                  Second, it’s library versions and conventions. When you start with 2 or 3 microservices that used to be the same monolith, you probably have the same versions of libraries and use the same conventions. Fast forward to 15 microservices and a vulnerability discovered in an old version of a dependency. Chances are, the versions of third-party libraries will be all over the place, as each microservice will update at different times, leaving some of them vulnerable. And the conventions on what style to follow or what linting rules to have will also drift apart.

                                                                  Third, it’s about (build) tooling. With a monolith, the same linting, static analysis, and test coverage requirements are in place everywhere. With microservices, unless there’s some team helping with tooling, it will likely be pretty ad hoc: some services having a high quality bar, others not really.

                                                                  Finally, ownership and responding to incidents. When it’s easy to create microservices, it’s tempting to do so. But people often underestimate the maintenance needs of these, or just ignore them if it’s too much. Over time, this can lead to zombie services: ones that are either not maintained/monitored actively, or ones that are deployed but have little to no use. Developers of small services might move on and leave these behind until someone else stumbles across them.

                                                                  All the above being said, we still use microservices extensively… except we’re conscious of (not) creating overly small and simple ones, and we realize that investing in tooling to solve the testing and library versions/conventions pain points is a must.

                                                                  1. 2

                                                                    We’re starting to break up our monolith and are definitely worried about the pain points you mention.

                                                                    Do you have an opinion about Pact to try to deal with some of the integration testing issues?

                                                                    We’re also planning to not go “micro” with our services. Our current plan is for roughly 1 service for every 2 engineers, but half of those services won’t even change very often (think stuff like feature flags). Hopefully the relatively small number of services will make our lives more manageable as well.

                                                                  1. 2

                                                                    Unless I entered something wrong, IngramSpark will print a 500-page color paperback on 70 lb paper, 10 books for $280 including shipping in the US. Here’s a link to their color printing page. (To be extra clear: this is $28 per book! Way less than the Lulu prices mentioned elsewhere.)

                                                                    I use both IngramSpark and KDP for my fiction books. IngramSpark has the nice advantage that bookstores will order from them (though you have to give a substantial discount off of retail price for that to work). I’ve been happy with my IngramSpark copies, though again that was black and white.

                                                                    1. 1

                                                                      Thanks for this, I had not heard of IngramSpark. Their pricing is very similar to Amazon’s, but they offer more options.

                                                                      You raised a point I had not thought about, selling via bookshops. My experience, as a buyer, is that small bookshops only stock popular technical books; shops in University towns might stock the more technical material.

                                                                      I’m tempted to keep the price under 30 (pounds or dollars), just to increase sales volume.

                                                                      1. 1

                                                                        Yeah, bookshops don’t stock my books, but you can walk into basically any book store (in the US, at least, not sure about the international reach) and request to order the book. I have gotten a couple of sales this way.

                                                                    1. 39
                                                                      1. A new build system
                                                                      1. 1

                                                                        I keep thinking about generalizing the Myrddin build system (https://myrlang.org/mbld, example at https://git.eigenstate.org/ori/mc.git/tree/mbld/bld.sub).

                                                                        I want to make it work with C and C++. I find it pleasant to use, and I want to use it in more places.

                                                                        It avoids the overconfigurability of most other build systems.

                                                                        1. 1

                                                                          Isn’t its simplicity inherent to the fact that it only supports one language?

                                                                          1. 1

                                                                            I don’t think so. As long as the target types stay the same, I think it’s possible to add more languages without exploding in complexity.

                                                                        2. 1

                                                                          Ha! been there

                                                                          Agreed that this is a great example!

                                                                          1. 1

                                                                            Congrats on shipping :)

                                                                            My Rake rewrite never made it past the finish line; it’s rusting somewhere on my disk.

                                                                            1. 1

                                                                              Thanks! Paver was a simple tool in a simpler time :)

                                                                              It’s been maintained by others for the past 10 years or so.

                                                                        1. 1

                                                                          As someone administering Jira and Confluence, I’d say bug tracker and wiki. Especially the bug tracker.

                                                                            1. 2

                                                                              Oh, that’s a good one! I must admit it has crossed my mind a few times, but indeed it sounds like one person could not pull that off. (OR CAN THEY!)

                                                                              I am, however, very excited about recent projects implementing parts of that engine, e.g. recently a flexbox library written in Rust and available cross-platform.

                                                                              1. 1

                                                                                Or a browser-like engine focused on apps

                                                                              1. 51

                                                                                We use Confluence, like we did at my last job.

                                                                                I fucking hate Confluence.

                                                                                1. 7

                                                                                  Same. It’s awful. Search is a train wreck, and it doesn’t even use wiki-markup.

                                                                                    We also use something like Doxygen, except it errors out instead of generating the docs.

                                                                                  1. 6

                                                                                    I’ll be the counterpoint here… Confluence sucks, but I think it sucks less than the other solutions I’ve seen. At least for certain problems, and especially when you need a resource for non-developers.

                                                                                      A sibling comment says “Search is a train wreck”. To which I say: try searching Google Docs. Confluence search will at least show you matches within a doc when you search, so you have a better chance of figuring out which doc is actually the one you want.

                                                                                    Confluence has the ability to embed various kinds of content in the page, which is quite nice. Google Drawings seems uniquely designed to make ugly drawings. In Confluence, I can use PlantUML or Draw.io.

                                                                                    Their new editor supports typing markdown keystrokes to do formatting. The rollout has been kind of bad, but I think the direction of the new editor is good.

                                                                                    “Spaces” can be confusing at first, but I think it will help us scale up to the organization sanely.

                                                                                    So if the problem you’re solving is “I want to document the API of this project”, Confluence is a terrible choice. If your problem is “I want a place in which all kinds of people in the organization can find docs, discussions, and decisions around various things we’re doing”, Confluence works better than the other choices I’ve seen so far.

                                                                                    1. 2

                                                                                      The sad thing with Confluence is they used to (about 5 years ago) have a method of inserting wiki markup so that wiki pages could be generated and pasted in, or if you just didn’t want to use the (frankly awful) WYSIWYG editor you didn’t have to. But they ripped that out.

                                                                                      Atlassian actually paid us a site visit to gauge opinions from everyone in the company. Pretty much everyone in the company asked for wiki markup to come back, to which they responded “huh”.

                                                                                      The API for Confluence is also pretty bad. We have some tools for generating document trees, e.g. when creating documentation for a new service we run a script which creates a tree of pages from templates, but little things like not being able to turn off “notify watchers” on an edit via the API mean you can hammer people’s inboxes (which in turn makes Gmail mad).

                                                                                      I agree that it sucks, but sucks less than other solutions. Which in itself is pretty sad.

                                                                                    2. 5

                                                                                      We used to use confluence a bit, but we sort of stopped using it because, well, we also don’t love confluence.

                                                                                      1. 8

                                                                                        Confluence is awful, but having a constellation of markdown files and Google docs is even worse. One source of truth.

                                                                                        The value we got from Confluence is that it gave a place where we could keep dev stuff next to business stuff, so it was easy for people to reference things more easily and have less silo-ing.

                                                                                        My only real complaint is that integrating with Confluence via a bot is a pain in the neck–we did this to have a wiki automatically updated with product information from deploys and builds, and that was Not Fun.

                                                                                        1. 3

                                                                                          The value we got from Confluence is that it gave a place where we could keep dev stuff next to business stuff, so it was easy for people to reference things more easily and have less silo-ing.

                                                                                          Something key here is that business folks generally have little interest in editing Markdown files and using Git. Confluence and other such systems may be monstrously annoying (and they are), but they’re better than alternatives.

                                                                                          Also, it works in the other direction. Devs want constellations of Markdown, but biz likes constellations of Word documents with comments and tracking. Trust me, Confluence is a better choice.

                                                                                          1. 1

                                                                                            We dragged them halfway with Confluence, then? We just need to Zeno them to git and markdown (eventually) ;-).

                                                                                            1. 1

                                                                                              Of course, that goes both ways: I find Con(ef)fluence’s UI so utterly intolerable that I will interact with it the absolute minimum required to not get fired, which in practice means literally never. So now it’s a wiki just for the business side, which is probably better than the “NAS full of outdated Word documents” approach it probably replaced, but it’s still not very useful.

                                                                                              Literally any other wiki software I’ve ever seen would be preferable.

                                                                                              But I’m not sure I agree with you that having business and tech share a wiki is a good plan. Business folks love putting paperwork (e.g. “this deployment of this service was signed off on by these people”) into the wiki, which you must never ever ever ever allow, or it will immediately dilute the useful content to homeopathic proportions and make the whole thing useless. So now you either need to be draconian about allowing business folks to put stuff in, in which case they won’t use it, or it turns into a paperwork repository, in which case nobody will use it.

                                                                                            2. 1

                                                                                              Confluence is awful, but having a constellation of markdown files…

                                                                                              That’s a bit of a false dichotomy. Most wikis can provide search, history, notifications…

                                                                                            3. 4

                                                                                              We use Confluence too. I really wish they had never made the change to the “smart” editor that prevents users from editing plain markdown (or similar).

                                                                                              Read some of the historical tickets around that change if you’re in the mood to shed a tear.

                                                                                              In the end though, the key thing is to have one central jumping-off place to get to your documentation, and Confluence works OK for that.

                                                                                              1. 3

                                                                                                Hear, hear. It’s a nightmare.

                                                                                                1. 2

                                                                                                  I can’t add anything new here - we too use Confluence and almost everyone hates it, but as I and dangoor have mentioned in other comments, there might not be anything better.

                                                                                                  One thing we did a while ago was move alert references out of Confluence to a git repo which uses mkdocs. This way if Confluence is down or there’s some network issue meaning we can’t get to it, on-call engineers can still have a local copy of all the alert references.

                                                                                                  1. 1

                                                                                                    I like Confluence. It has a WYSIWYG editor that actually works. Plenty of plugins for lots of features and integrations. Recently it even gained real time collaboration.

                                                                                                  1. 3

                                                                                    I agree that webrings should make a comeback, but WebRing.org still exists and I don’t really see any benefit to having git as the datastore unless you can submit a site without issuing a pull request. The opengraph/cards are definitely a nice touch, though.

                                                                                                    1. 4

                                                                                                      Wow, one look at WebRing.org explains to me why its existence is not enough to dissuade the creation of a new webring implementation. It’s a mess of ads and poor design. While it’s only nerd-compatible, I do like the idea of an implementation which results in a JSON representation that can be dropped into a static site. To be able to implement modern webrings in a way that doesn’t add to the “one more thing on the net that’s tracking me” seems like a win.
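
                                                                                      For instance (purely illustrative; the file name and shape below are guesses, not the project’s actual format), a static page could render its ring links from that JSON with a few lines of client-side code:

                                                                                          // Hypothetical webring.json: an ordered list of member sites.
                                                                                          interface Member {
                                                                                            name: string;
                                                                                            url: string;
                                                                                          }

                                                                                          // Render "previous / next" links for the current site.
                                                                                          async function renderWebring(selfUrl: string): Promise<string> {
                                                                                            const members: Member[] = await (await fetch("/webring.json")).json();
                                                                                            const i = members.findIndex((m) => m.url === selfUrl);
                                                                                            const prev = members[(i - 1 + members.length) % members.length];
                                                                                            const next = members[(i + 1) % members.length];
                                                                                            return `<a href="${prev.url}">&larr; ${prev.name}</a> | ` +
                                                                                                   `<a href="${next.url}">${next.name} &rarr;</a>`;
                                                                                          }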