1. 2

    I was inspired a bit by Roam. I’ve recently started using a frankensystem… I use Bear for notes that are only useful to me.

    For notes that could potentially be useful to others, I’ve started using NValt + a small bit of Go code + Hugo + GitHub + Netlify to generate my website, complete with backlinks between notes.

    1. 1

      I appreciate the teaching you’re doing here to help people get going with a clean, simple static site. I also like the suggestion to put a checklist in the template.

      I don’t know about your plans for the end of the series, but I wonder if you’d want to recommend a simple static site generator to automate the checklist pieces. I like Hugo, but I don’t think it qualifies as “simple”… I wonder if there’s a generator that basically does just the bare minimum above a static site like this? (I assume there’s gotta be.) You could point people at staticgen, but that seems overwhelming given that you’ve created a resource which works well for those starting out.

      Edited to add: I’d hate to see RSS go away, and maintaining index pages and RSS feeds is something that static site generators do well.

      Also: you’re certainly entitled to any license you want to choose, but by choosing a ShareAlike license, you’re essentially requiring the folks who build on your light site to release all of their content under a ShareAlike license. While I’m happy to share lots of things, I don’t know that I’d want all of my writing to be endlessly reproducible.

      1. 1

        I’ve been thinking about how I can wrap it all up at the end. Integrating it with something like Hugo or Jekyll is what I’m thinking about doing, but as you said, that’s a whole different ballgame.

        I’ve seen some scripts on GitHub that automate blog post generation, so I may look into adapting one of those. TL;DR I’m not sure yet, but I agree there needs to be something to improve this workflow.

        Great point on the license - I need to either change it, or clarify that it relates to the website’s code, not any blog posts produced. Thanks!

      1. 11

        I like Go’s version of this: they declared it 1.0 when they were ready to guarantee backwards compatibility and really work for it. They didn’t have a lot of things on your list, and it clearly wasn’t a problem.

        What Go had was that compatibility guarantee and features people wanted.

        1. 3

          I think this is key. Dependency management years or decades down the line is a much bigger concern in production systems than any fleeting technical problem (it’s always possible to roll your own solution, in any language, for any problem the language doesn’t solve for you) & the biggest factor in dependency management is whether or not new code and old code can coexist. Fear of old code getting broken by upgrades is why a lot of big companies are still running linux 2.4 kernels, standardizing on python 1.x, or maintaining OS/360 machines decades after nobody in their right mind would use that tech for new projects.

          Where I work, there was a push for experiments in new features to be done in julia because julia was a lot better suited to our problems (doing simple statistics on large data sets in close to real time, where plenty of hardware was potentially available) than the languages we were using (a mix of java and perl). When julia announced that they were about to decide on a 1.0 standard, this was very exciting, because it meant we could write production code in julia that wasn’t tied to a particular minor version of the language & theoretically there could be independent implementations of the compiler that adhere to the spec.

          On that subject – number of independent implementations is sometimes a concern. Occasionally, with a language where there’s only one usable implementation, the developers will make a change that makes existing code infeasible (either broken or too slow for your use case) and you’ll need to decide between rewriting your code & keeping an old version of the language (which, over enough time, eventually becomes maintaining a fork of the old version, as builds eventually break against newer versions of dependency libraries and such). When there are multiple independent implementations, it’s less likely that they will all break your code in the same way at the same time, so you have the additional option of switching to a different implementation. This is less common than one would hope, though – there aren’t even very many serious C compilers anymore, there never were very many serious fully-independent implementations of javascript or perl, the attempts at alternative implementations of C# and Java have fallen out of date with the flagship implementations, and in a strange twist of fate, python leads the pack here (with the odd company of prolog, smalltalk, and apl in tow)!

          1. 1

            Yeah I’m pretty sure Rust was the same way. They declared 1.0 when they felt like they weren’t going to break anything. I’m pretty sure they knew at the time that major features like async/await would be added after 1.0.

            I think Go had:

            • editor support but no IDE support.
            • no package manager. Did the “go” tool even exist at 1.0?
            • no package index
            • good testing tools
            • no debugger
            • some integration with Google monitoring tools but probably nothing external

            And very importantly they had good docs.

            I think a good test of 1.0 is you can write good docs with a straight face. Explaining a language helps debug the corners.

            Despite all those missing things, people started using it anyway. So there are two different questions: 1.0 and “production ready”. But I would say there will be early adopters who use a 1.0 in production (and some even before that!)

          1. 2

            PRs are only a thing because devs can’t manage branches and personal workflow, and would rather lean on the tool to solve their social problems.

            Look, if you work in a team and are well organized, and the team is communicating well within itself, you don’t have to fret about PRs. You can just let the devs have their own branches, and when necessary, have a merge party to produce a build for testing.

            Alas, not all projects can attain this level of competence. PRs are there to allow community, even if folks can’t organise well enough to know each other and how to contribute to each other’s work in a positive flow.

            For many projects, a rejected PR is just as valuable as an auto-merge. It gets people communicating.

            1. 2

              One potential problem with giving each dev their own branch and merging all branches at once to make a build is that the build contains a lot of changes, and when something inevitably goes wrong it isn’t always clear what exactly caused the break. Good communication and code quality can mitigate issues with integrating the code, but I don’t think they completely eliminate them. If the alternative is releasing a build with multiple PRs anyway, then this might not be a problem, but if your alternative is releasing a build with every single change then it’s a distinct disadvantage.

              1. 2

                Why would you want to have a merge party where you merge in a whole bunch of stuff at once rather than reviewing and merging smaller changes one at a time?

                1. 1

                  Maybe because your team is productive.

                  Because you’re pushing features forward, trust your fellow devs, and everyone is working well enough that it doesn’t matter; it also means that features can be tested in isolation. Plus, it’s very rewarding to get a branch merge done: you suddenly get a much bigger and better app for the next round of work ..

                  1. 1

                    Not in the long run, and not if your code is in production. One of the goals of the review process is to get the others on the team acquainted with the changes, so they can support them in the future, when the author goes on vacation or leaves the company. Merge parties cram in way too much information for purposeful comprehension. Remember, coding is mainly a joint activity of solving business problems, and its product will require support and maintenance.

                    1. 1

                      Our merge parties include team review, so .. we’re not really encountering the issues you mention.

                      1. 1

                        How big are your change requests, how long do these reviews last?

                        1. 1

                          Weekly, takes a day for the team, and then we start the 3-day tests ..

                          But look, whatever, not everyone’s use case is the same. The point is: scale according to your needs, but don’t ignore the beauty of PRs as a mechanism when you need them. If you don’t need them, do something else that works.

              2. 1

                You can just let the devs have their own branches, and when necessary, have a merge party to produce a build for testing.

                Is that really a thing? I’ve never been on a team that did that, and it sounds like a mess (to put it lightly). I can’t imagine that it would scale well.

                1. 1

                  For sure. I do it with the 3 other devs in my office space. We just tell each other ‘hey, I’m working on a branch’, then when the time is right, we all sit together and merge.

                  I mean, it’s probably the easiest flow ever.

                  If you don’t do this, I wonder why? I guess it’s communication.

                  1. 3

                    I do it with the 3 other devs in my office space.

                    That makes sense: the “3” and “in my office space” are a particular set of constraints. If you’ve got a system worked on by more people, and distributed across locations, that doesn’t scale in quite the same way, I think.

                    1. 1

                      Get back to me when you’ve tried it with more than 10 people on the team or a project with more than 500k lines of code…

                      1. 1

                        Yeah, it works just fine at that scale too. Key thing is: devs communicating properly.

                1. 2

                  Bold move, I hope that their reasons for it are justified and they’ll come out OK on the other end as I really love what Khan Academy is doing.

                  How will you handle the server-side rendering for React if the services are in Go?

                  1. 1

                    Thanks, and I agree that it’s a challenge.

                    We’ve been doing server-side React rendering for quite a while already. We’re changing the flow a bit, because it used to be that requests would go to Python which would then call out to Node for server side rendering. Going forward, our CDN will talk to the Node React render server directly to get complete pages.

                  1. 2

                    I’d be happy to answer any questions you have. We’ll definitely follow up with more blog posts as we go along.

                    1. 3

                      Thank you for writing the article.

                      I have two questions:

                      1. Why did you decide to migrate all APIs to GraphQL?
                      2. How do you generate optimal database queries from a GraphQL request? ¹

                      ¹ I only have superficial knowledge about GraphQL, so my questions could be a bit naïve.

                      1. 3

                        Good questions!

                        First of all, I’ll give a shoutout to Michael Nygard for his documenting architecture decisions blog post. I didn’t have to create my own answer to the question of migrating to GraphQL. I was able to just look up the architecture decision record (from September 2017).

                        We saw the benefits as:

                        • Data from GraphQL queries is typed, which can help the front-end operate more reliably on it. Our REST endpoints also have type information, but we aren’t currently exporting this information to the front-end.
                        • GraphQL queries can collect a variety of data in one request. This should improve performance since the same data might take multiple REST API calls to fetch. Also GraphQL queries can omit unneeded data.
                        • Performance can also be improved because Apollo, the web front-end client, does caching, and this can help avoid repeated calls to the backend for data.
                        • Manually testing GraphQL queries is easier than testing REST APIs because of the built-in query explorer.
                        • Front-end developers can make some changes to queries without requiring any backend work to provide extra data.

                        and the drawbacks as:

                        • It can be harder to debug GraphQL queries when implementing them, since by default errors are handled rather than causing a traceback. We may want to explore improving the debugging story.
                        • For one-off queries, it may be easier to implement a REST endpoint, because implementing a GraphQL query involves building classes that describe the types of the different parts of the response rather than just describing the signature.
                        • Our current implementation of GraphQL makes us reliant on Apollo and Graphene. Both libraries have quirks that have to be learned and worked around.

                        As far as generating optimal database queries, GraphQL server implementations generally have the idea of writing a “resolver” which knows how to look up entities in the database. Since GraphQL’s schema is a graph, that means you could have an entity that’s an “invoice” and you could ask for a specific invoice by ID and request its line items (which are each distinct entities as well). In an RDBMS, there’s a normal 1:many relationship between invoice and line items, and the resolver would be able to collect up all of the line items with a single query.

                        Generally speaking, you can design your GraphQL schema to make lookups reasonably efficient for whatever kind of database tech you’re using. (If you’re using a graph database, it becomes really natural, I should think. We’re not, though.)
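                        That invoice example can be sketched in plain Python. To be clear, this is a conceptual illustration of how resolvers work, not Graphene’s real API, and the data and names are made up:

                        ```python
                        # Conceptual sketch only (not Graphene's actual API): how resolvers turn a
                        # nested GraphQL selection into lookups, with the invoice -> line items
                        # 1:many relationship resolved in a single pass.

                        # Hypothetical in-memory stand-ins for the database tables.
                        INVOICES = {"42": {"id": "42", "customer": "Acme Co."}}
                        LINE_ITEMS = [
                            {"invoice_id": "42", "description": "Widgets", "amount": 9.5},
                            {"invoice_id": "42", "description": "Shipping", "amount": 4.0},
                        ]

                        def resolve_invoice(invoice_id):
                            """Resolver for the top-level invoice(id: ...) field."""
                            return INVOICES[invoice_id]

                        def resolve_line_items(invoice):
                            """Resolver for the lineItems field: one query (here, one scan)
                            collects every line item belonging to the parent invoice."""
                            return [li for li in LINE_ITEMS if li["invoice_id"] == invoice["id"]]

                        # Rough equivalent of:  { invoice(id: "42") { customer lineItems { amount } } }
                        invoice = resolve_invoice("42")
                        result = {
                            "customer": invoice["customer"],
                            "lineItems": [{"amount": li["amount"]} for li in resolve_line_items(invoice)],
                        }
                        print(result)
                        ```

                        The point is that each field’s resolver only needs to know how to fetch its own entities given the parent, so the 1:many relationship costs one lookup rather than one per line item.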

                        I’ll note that our new GraphQL setup is more complex because of federation. Looking up entities in our datastore doesn’t change, but the queries themselves go through a query planner which distributes parts of the query to different services.

                    1. 7

                      Rik Arends has been working on Makepad, which he describes as “a Rust IDE for VR that compiles to wasm/webGL, osx/metal, windows/dx11, linux/opengl”.

                      I have an Oculus Quest, which is also 1600x1440 per eye. It’s a completely wireless, standalone unit which is great, and Makepad is being built to run on it. I quite like it for games and experiences like that, but text wouldn’t be super sharp. That said, Rik has been actively working on this stuff, and I’m just guessing what the experience might be like.

                      If I’m actually coding something for VR, I think it could be awesome. But I don’t think I’d find much value in it over a 2D interface for normal coding.

                      1. 3

                        Thanks for the link to Makepad! I’ll be excitedly following along.

                        The main advantage I see to using VR for coding (in a traditional setup with a text editor, not some awesome visualisation of your software) is “real estate”. I use two 4k 27” monitors, but I’ve worked with three before in an office (before the company ran out of money, go figure) and found it even better. I like being able to see different parts of my code side by side.

                        I’m envisioning coding in VR to be like having a whole bunch of seamless monitors floating around my head.

                        I work from home now, but if I worked in an open plan office the distraction-elimination aspect of VR would be a big plus. Imagine working in a Japanese Zen garden instead of being surrounded by other people under fluorescent lighting!

                        1. 1

                          Sorry for a super delayed reply…

                          I think that VR needs a whole lot more pixels before we’ll get to that sort of feeling. I like the image you’re projecting, and would love to see that happen, but the hardware definitely has a ways to go.

                          1. 1

                            For sure. After doing more research I found that the useful metric here is “pixels per degree”. Retina devices do ~60. Anything higher isn’t really noticeable by humans. Mainstream VR devices don’t even crack 20.
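                            A quick back-of-envelope check of those numbers; the ~100° per-eye horizontal FOV here is my assumption, and real optics aren’t uniform across the view:

                            ```python
                            def pixels_per_degree(horizontal_pixels, horizontal_fov_degrees):
                                # Crude uniform approximation; real headset optics pack more
                                # pixels toward the center of the view.
                                return horizontal_pixels / horizontal_fov_degrees

                            # Hypothetical headset: a 1440-px-wide per-eye panel spread over ~100 degrees.
                            vr = pixels_per_degree(1440, 100)
                            print(round(vr, 1))  # 14.4, well under the ~60 ppd of "retina" displays
                            ```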

                            Worse, Carmack said in his Oculus keynote last year that up until now they’ve been riding the coattails of the phone industry, but from now on will have to foot the bill for developing higher pixel density screens.

                      1. 7

                        The devil is in the details, though. The web-technology-based toolkits of course are huge, but the native toolkits vary widely in size. The problem is that under a certain size, you start sacrificing very real things like non-Western-language support and accessibility for disabled users.

                        But man…the web-technology-based toolkits are huge.

                        1. 1

                          But man…the web-technology-based toolkits are huge.

                          Well, yea.. using a full featured web browser to draw a UI for a ‘native’ app is a silly idea. That’s the trade-off for lowering the “UI toolkit” bar.

                          1. 1

                            The web-technology-based toolkits of course are huge, but the native toolkits vary widely in size.

                            Sciter is interesting because it’s in between. They basically made their own HTML+CSS engine specifically for making desktop apps, and apparently that approach has worked for memory consumption at least.

                          1. 9

                            Additionally, the technology landscape had shifted away from the tools we chose in late 2012 (jQuery, Signals, and direct DOM manipulation) and toward a paradigm of composable interfaces and clean application abstractions.

                            I always find statements like this amusing. Composable interfaces and clean application abstractions are what I always heard programs were supposed to be built on. Did people in 2012 not care about writing good programs?

                            Are they going to look back in another seven years and say “we shifted away from react, redux, and virtual DOM manipulation toward a paradigm of composable interfaces and clean application abstractions” as the winds shift again? Or will it shift to “we shifted away from X toward performant interfaces and clean application implementations?”

                            Just silly.

                            1. 5

                              Judging by [other] tech companies’ blog posts, there always seems to be enough time to move to $CURRENT_JS_ZEITGEIST_FRAMEWORK versus actually writing clean code to make your choice of libraries/frameworks irrelevant in the long run.

                              To their credit, it looks like Slack did the less sexy thing here while also upgrading to the current hotness.

                              1. 7

                                FWIW, React is 5 years old now. Where I work, we’ve been using it for all of that time and don’t have any plans to switch. Sure, maybe we’d use Preact to save some bytes on the wire sometimes, but it’s still fundamentally the same.

                                 I’m not saying there will never be a thing that replaces React or that there aren’t people out there using some new hotness. My point is more that React is fundamentally easier to reason about than jQuery DOM manipulation, and until someone offers a real step change, I’d expect a lot of folks to stick with it.

                                1. 3

                                  Related to this, I’m always surprised when a game company is able to switch rendering libraries (OpenGL -> Vulkan) in a few man-weeks but then I remember they usually abstract over it in their framework/engine.

                                2. 7

                                  Did people in 2012 not care about writing good programs?

                                  No they did not and they don’t now.

                                  1. 3

                                    A more accurate way of phrasing it would be “Additionally, as the original team got expanded and replaced, the Slack team had shifted away from the rapid prototyping tools we used in late 2012 and toward a paradigm of composable interfaces and clean application abstractions.” Apparently, they think their company is the whole universe.

                                    1. 1

                                      It was a lot more work and discipline to develop and maintain a nicely organized codebase with their old tools than with their new tools, partly because composability and clean abstractions weren’t explicit design goals for those tools. React/redux really was a major improvement over previous front-end arrangements.

                                    1. 7

                                      I came across this article as an interesting read. However, having worked at Uber, where we were super early adopters of microservices (getting to over 1,000 of them, which brought a lot of unexpected pain points), I feel I should add the downsides of microservices, and why you don’t really hear engineers from Uber boasting about how great thousands of microservices are.

                                      First, testing: specifically, the difficulty of integration testing, which leads to outages. When you have microservices that depend on each other and are deployed independently, one of the most common causes of outages will be: ServiceA is deployed, then ServiceB (unaware of the latest change in ServiceA) is deployed, and boom, a problem that an integration test could have caught. Ok, so how do we write that test? Well, we now either need to have the same codebase, or stop any deploys from going out without checking out the latest code for the dependent services and running their tests. Ok, so that’s not really autonomous deployment… and try solving this problem for dozens of dependent services.
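                                      This is the failure mode that consumer-driven contract testing (tools like Pact formalize it) tries to catch: the consumer records the response shape it depends on, and the provider’s deploy pipeline verifies it still honors that shape. A minimal hand-rolled sketch, with all service names and fields hypothetical:

                                      ```python
                                      # All service names, fields, and responses here are hypothetical.

                                      # Contract that ServiceB (the consumer) publishes: the fields it
                                      # expects in ServiceA's response, along with their types.
                                      CONSUMER_CONTRACT = {"id": int, "email": str, "display_name": str}

                                      def provider_response():
                                          # Stand-in for what ServiceA's handler actually returns today.
                                          return {"id": 7, "email": "a@example.com", "display_name": "Ada"}

                                      def verify_contract(contract, response):
                                          """Run in ServiceA's deploy pipeline: report every field a
                                          consumer relies on that is missing or has changed type."""
                                          problems = []
                                          for field, expected_type in contract.items():
                                              if field not in response:
                                                  problems.append(f"missing field: {field}")
                                              elif not isinstance(response[field], expected_type):
                                                  problems.append(f"wrong type for field: {field}")
                                          return problems

                                      print(verify_contract(CONSUMER_CONTRACT, provider_response()))  # [] means compatible
                                      ```

                                      The win is that ServiceA’s pipeline can run this check against every published consumer contract without checking out or building any consumer code.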

                                      Second, library versions and conventions. When you start with 2 or 3 microservices that used to be the same monolith, you probably have the same versions of libraries and use the same conventions. Fast forward to 15 microservices and a vulnerability discovered in an old version of a dependency. Chances are, the versions of third-party libraries will be all over the place, as each microservice updates at different times, leaving some of them with the security vulnerability. And the conventions on what style to follow or what linting rules to have will also drift apart.

                                      Third, (build) tooling. With a monolith, the same linting, static analysis, and test coverage requirements are in place everywhere. With microservices, unless there’s some team helping with tooling, it will likely be pretty ad hoc: some services having a high quality bar, others not so much.

                                      Finally, ownership and responding to incidents. When it’s easy to create microservices, it’s tempting to do so. But people often underestimate the maintenance needs of these services, or just ignore them if they’re too much. Over time, this can lead to zombie services: ones that are not maintained or monitored actively, or ones that are deployed but have little to no use. Developers of small services might move on and leave these behind until someone else stumbles across them.

                                      All the above being said, we still use microservices extensively… except we’re conscious of (not) creating overly small and simple ones, and we realize that investing in tooling to solve the testing and library versions/conventions pain points is a must.

                                      1. 2

                                        We’re starting to break up our monolith and are definitely worried about the pain points you mention.

                                        Do you have an opinion about Pact to try to deal with some of the integration testing issues?

                                        We’re also planning to not go “micro” with our services. Our current plan is for roughly 1 service for every 2 engineers, but half of those services won’t even change very often (think stuff like feature flags). Hopefully the relatively small number of services will make our lives more manageable as well.

                                      1. 2

                                        Unless I entered something wrong, IngramSpark will print a 500-page color paperback on 70 lb paper, 10 books for $280 including shipping in the US. Here’s a link to their color printing page. (To be extra clear: this is $28 per book! Way less than the Lulu pricing mentioned elsewhere.)

                                        I use both IngramSpark and KDP for my fiction books. IngramSpark has the nice advantage that bookstores will order from them (though you have to give a substantial discount off of retail price for that to work). I’ve been happy with my IngramSpark copies, though again that was black and white.

                                        1. 1

                                          Thanks for this, I had not heard of IngramSpark. Their pricing is very similar to Amazon’s, but they offer more options.

                                          You raised a point I had not thought about: selling via bookshops. My experience, as a buyer, is that small bookshops only stock popular technical books; shops in university towns might stock the more technical material.

                                          I’m tempted to keep the price under 30 (pounds or dollars), just to increase sales volume.

                                          1. 1

                                            Yeah, bookshops don’t stock my books, but you can walk into basically any book store (in the US, at least, not sure about the international reach) and request to order the book. I have gotten a couple of sales this way.

                                        1. 39
                                          1. A new build system
                                          1. 1

                                            I keep thinking about generalizing the Myrddin build system (https://myrlang.org/mbld, example at https://git.eigenstate.org/ori/mc.git/tree/mbld/bld.sub).

                                            I want to make it work with C and C++. I find it pleasant to use, and I want to use it in more places.

                                            It avoids the overconfigurability of most other build systems.

                                            1. 1

                                              Isn’t its simplicity inherent in the fact that it only supports one language?

                                              1. 1

                                                I don’t think so. As long as the target types stay the same, I think it’s possible to add more languages without exploding in complexity.

                                            2. 1

                                              Ha! Been there.

                                              Agreed that this is a great example!

                                              1. 1

                                                Congrats on shipping :)

                                                My Rake rewrite never made it past the finish line; it’s rusting somewhere on my disk.

                                                1. 1

                                                  Thanks! Paver was a simple tool in a simpler time :)

                                                  It’s been maintained by others for the past 10 years or so.

                                            1. 1

                                              As someone administering Jira and Confluence, I’d say bug tracker and wiki. Especially the bug tracker.

                                                1. 2

                                                  Oh, that’s a good one! I must admit it has crossed my mind a few times, but indeed it sounds like one person could not pull that off. (OR CAN THEY!)

                                                  I am, however, very excited about recent projects implementing parts of that engine, e.g. recently a flexbox library written in Rust and available cross-platform.

                                                  1. 1

                                                    Or a browser-like engine focused on apps

                                                  1. 51

                                                    We use Confluence, like we did at my last job.

                                                    I fucking hate Confluence.

                                                    1. 7

                                                      Same. It’s awful. Search is a train wreck, and it doesn’t even use wiki-markup.

                                                      We also use something like Doxygen, except it errors out instead of generating the docs.

                                                      1. 6

                                                        I’ll be the counterpoint here… Confluence sucks, but I think it sucks less than the other solutions I’ve seen. At least for certain problems, and especially when you need a resource for non-developers.

                                                        A sibling comment says “Search is a train wreck”. To which I say: try searching google docs. Confluence search will at least show you matches within a doc when you search, so you have a better chance of figuring out which doc is actually the one you want.

                                                        Confluence has the ability to embed various kinds of content in the page, which is quite nice. Google Drawings seems uniquely designed to make ugly drawings. In Confluence, I can use PlantUML or Draw.io.

                                                        Their new editor supports typing markdown keystrokes to do formatting. The rollout has been kind of bad, but I think the direction of the new editor is good.

                                                        “Spaces” can be confusing at first, but I think it will help us scale up to the organization sanely.

                                                        So if the problem you’re solving is “I want to document the API of this project”, Confluence is a terrible choice. If your problem is “I want a place in which all kinds of people in the organization can find docs, discussions, and decisions around various things we’re doing”, Confluence works better than the other choices I’ve seen so far.

                                                        1. 2

                                                          The sad thing with Confluence is they used to (about 5 years ago) have a method of inserting wiki markup so that wiki pages could be generated and pasted in, or if you just didn’t want to use the (frankly awful) WYSIWYG editor you didn’t have to. But they ripped that out.

                                                          Atlassian actually paid us a site visit to gauge opinions from everyone in the company. Pretty much everyone in the company asked for wiki markup to come back, to which they responded “huh”.

                                                          The API for Confluence is also pretty bad. We have some tools for generating document trees, e.g. when creating documentation for a new service we run a script which creates a tree of pages from templates, but little things like not being able to turn off “notify watchers” on an edit via the API mean you can hammer people’s inboxes (which in turn makes Gmail mad).
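
                                                          As a sketch of what such a page-generation script might build, assuming Confluence’s standard content-creation endpoint (`POST /rest/api/content`) — the space key, title, and parent id below are all hypothetical placeholders:

                                                          ```javascript
                                                          // Hedged sketch: the JSON payload Confluence's REST API expects when
                                                          // creating a page. All values here are invented for illustration.
                                                          function buildPagePayload({ spaceKey, title, storageHtml, parentId }) {
                                                            const payload = {
                                                              type: 'page',
                                                              title,
                                                              space: { key: spaceKey },
                                                              // Confluence stores page bodies in its "storage" (XHTML-like) format.
                                                              body: { storage: { value: storageHtml, representation: 'storage' } },
                                                            };
                                                            if (parentId) payload.ancestors = [{ id: parentId }]; // nest under a parent page
                                                            return payload;
                                                          }

                                                          // A template tree can then be created by walking the tree and POSTing
                                                          // each payload to `${baseUrl}/rest/api/content` with your credentials.
                                                          const payload = buildPagePayload({
                                                            spaceKey: 'DOCS',
                                                            title: 'New Service Runbook',
                                                            storageHtml: '<p>Generated from template</p>',
                                                            parentId: '12345',
                                                          });
                                                          console.log(JSON.stringify(payload, null, 2));
                                                          ```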

                                                          I agree that it sucks, but sucks less than other solutions. Which in itself is pretty sad.

                                                        2. 5

                                                          We used to use confluence a bit, but we sort of stopped using it because, well, we also don’t love confluence.

                                                          1. 8

                                                            Confluence is awful, but having a constellation of markdown files and Google docs is even worse. One source of truth.

                                                            The value we got from Confluence is that it gave a place where we could keep dev stuff next to business stuff, so it was easy for people to reference things more easily and have less silo-ing.

                                                            My only real complaint is that integrating with Confluence via a bot is a pain in the neck; we did this to have a wiki automatically updated with product information from deploys and builds, and that was Not Fun.

                                                            1. 3

                                                              The value we got from Confluence is that it gave a place where we could keep dev stuff next to business stuff, so it was easy for people to reference things more easily and have less silo-ing.

                                                              Something key here is that business folks generally have little interest in editing Markdown files and using Git. Confluence and other such systems may be monstrously annoying (and they are), but they’re better than alternatives.

                                                              Also, it works in the other direction. Devs want constellations of Markdown, but biz likes constellations of Word documents with comments and tracking. Trust me, Confluence is a better choice.

                                                              1. 1

                                                                We dragged them halfway with Confluence, then? We just need to Zeno them to git and markdown (eventually) ;-).

                                                                1. 1

                                                                  Of course, that goes both ways: I find Con(ef)fluence’s UI so utterly intolerable that I will interact with it the absolute minimum required to not get fired, which in practice means literally never. So now it’s a wiki just for the business side, which is probably better than the “NAS full of outdated Word documents” approach it replaced, but it’s still not very useful.

                                                                  Literally any other wiki software I’ve ever seen would be preferable.

                                                                  But I’m not sure I agree with you that having business and tech share a wiki is a good plan. Business folks love putting paperwork (e.g. “this deployment of this service was signed off on by these people”) into the wiki, which you must never ever ever ever allow, or it will immediately dilute the useful content to homeopathic proportions and make the whole thing useless. So now you either need to be draconian about allowing business folks to put stuff in, in which case they won’t use it, or it turns into a paperwork repository, in which case nobody will use it.

                                                                2. 1

                                                                  Confluence is awful, but having a constellation of markdown files…

                                                                  That’s a bit of a false dichotomy. Most wikis can provide search, history, notifications…

                                                                3. 4

                                                                  We use confluence too. I really wish they had never made the change to the “smart” editor that prevents users from editing plain markdown (or similar).

                                                                  Read some of the historical tickets around that change if you’re in the mood to shed a tear.

                                                                  In the end though, the key thing is to have one central jumping off place to get to your documentation and confluence works ok for that.

                                                                  1. 3

                                                                    Hear, hear. It’s a nightmare.

                                                                    1. 1

                                                                      I like Confluence. It has a WYSIWYG editor that actually works. Plenty of plugins for lots of features and integrations. Recently it even gained real time collaboration.

                                                                      1. 1

                                                                        I can’t add anything new here - we too use Confluence and almost everyone hates it, but as I and dangoor have mentioned in other comments, there might not be anything better.

                                                                        One thing we did a while ago was move alert references out of Confluence to a git repo which uses mkdocs. This way, if Confluence is down or a network issue means we can’t reach it, on-call engineers still have a local copy of all the alert references.

                                                                      1. 3

                                                                        I agree that webrings should make a comeback, but WebRing.org still exists and I don’t really see any benefit to having git as the datastore unless you can submit a site without issuing a pull-request. The opengraph/cards are definitely a nice touch though.

                                                                        1. 4

                                                                          Wow, one look at WebRing.org explains to me why its existence is not enough to dissuade the creation of a new webring implementation. It’s a mess of ads and poor design. While it’s only nerd-compatible, I do like the idea of an implementation which results in a JSON representation that can be dropped into a static site. To be able to implement modern webrings in a way that doesn’t add to the “one more thing on the net that’s tracking me” seems like a win.

                                                                        1. 2

                                                                          Is node still single threaded?

                                                                          Also, some of the graphs are confusing. The range bars dip below what should be the minimum limit. Sometimes the latency is below the 90ms (100 - 10%) floor. In one graph, go receives a request in negative microseconds.

                                                                          1. 3

                                                                            Is node still single threaded?

                                                                            Yes, Node is still essentially single threaded. (Essentially because you can, as of 11.7, create Worker threads, but it’s nothing like goroutines or BEAM processes)
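
                                                                            A minimal self-contained sketch of a Worker (the worker body is passed as a string with `eval: true` so it fits in one file); each Worker is a real OS thread with its own event loop and V8 instance, which is why they’re far heavier than goroutines or BEAM processes:

                                                                            ```javascript
                                                                            const { Worker } = require('worker_threads');

                                                                            // Worker body as a string, run with eval: true so this
                                                                            // sketch stays in a single file.
                                                                            const workerSource = `
                                                                              const { parentPort, workerData } = require('worker_threads');
                                                                              // CPU-bound work happens off the main thread.
                                                                              parentPort.postMessage(workerData.reduce((a, b) => a + b, 0));
                                                                            `;

                                                                            const worker = new Worker(workerSource, { eval: true, workerData: [1, 2, 3, 4] });
                                                                            worker.on('message', (sum) => console.log('sum from worker:', sum)); // prints 10
                                                                            ```

                                                                            Messages are structured-cloned between threads, so by default nothing is shared; that copying cost is another reason Workers aren’t a substitute for lightweight processes.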

                                                                            1. 1

                                                                              Agree. Bars are based on standard error and I added them as both positive and negative, yet they’re likely to reflect mostly positive error from the graphed mean line. What would be the right way to do this?

                                                                              1. 2

                                                                                A lot of natural processes actually follow a lognormal distribution, which nicely handles the impossibility of values below zero.
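
                                                                                As a hedged illustration (the sample latencies below are invented), fitting in log space and exponentiating back gives asymmetric error bars that can never cross zero:

                                                                                ```javascript
                                                                                // Summarize positive-valued samples with a lognormal fit:
                                                                                // the "error bars" become multiplicative, so the lower
                                                                                // bound is always strictly positive.
                                                                                function lognormalSummary(samples) {
                                                                                  const logs = samples.map(Math.log);
                                                                                  const mu = logs.reduce((a, b) => a + b, 0) / logs.length;
                                                                                  const variance =
                                                                                    logs.reduce((a, x) => a + (x - mu) ** 2, 0) / (logs.length - 1);
                                                                                  const sigma = Math.sqrt(variance);
                                                                                  return {
                                                                                    median: Math.exp(mu),        // geometric mean = lognormal median
                                                                                    lower: Math.exp(mu - sigma), // "-1 sigma" bound, always > 0
                                                                                    upper: Math.exp(mu + sigma), // "+1 sigma" bound
                                                                                  };
                                                                                }

                                                                                const latenciesMs = [92, 101, 97, 110, 95, 130, 99];
                                                                                console.log(lognormalSummary(latenciesMs));
                                                                                ```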

                                                                            1. 35

                                                                              I increasingly feel that as individual it’s kind of pointless to protest against systemic problems like this. Systemic problems need systemic solutions. We didn’t fix problems with CFCs, DDT, and asbestos by individuals boycotting them or choosing alternatives; we fixed it by recognizing that the current situation was not in the common good by any rational standard, and passing legislation to restrict or outlaw the harmful substances.

                                                                              Unfortunately the current political zeitgeist in both the United States and the EU is such that it’s very hard to address this. We can’t even address climate change or factory farming in any meaningful manner – topics where the vast majority of people think something should be done and where the solutions are blindingly obvious – so I have little hope on this topic.

                                                                              A lot of people are not in favour of government interventions. I’m not really, either. But for better or worse, it is the only real organisation we have to look out for the common good. Wielded correctly, it can be a great force for good. I don’t really buy the arguments that government regulations on environment, privacy, etc. automatically lead to a “nanny state”. Traffic laws are very strict, and that seems to work well enough.

                                                                              I do think that we should also think very hard about some things we don’t want governments to do, as well as how the government communicates with its citizens. In particular, laws that attempt to legislate morality are probably not a good idea, and don’t even get me started about stuff like the Windrush Scandal, where the government simply cheated its own citizens. Things like this erode people’s patience with government, making it even harder to pass desperately needed laws on privacy and the environment.

                                                                              A lot of the current political attention is taken up by extraordinarily silly events: Brexit, Donald Trump, etc. All of this further destroys faith in government. The Republicans/Conservatives don’t really care, because they were against government in the first place. By running the government in an inept way they are proving their own point. There is a very strange incentive to mess things up.
                                                                              No matter what happens with elections in the next few years, I think it’s still a massive win for the anti-government right wing, as it has pushed the perception of government as an inept organisation among people of all political convictions.

                                                                              This got a bit more political than I intended; but I think it’s important, as all the time spent writing privacy tools could perhaps be better spent lobbying your local legislature, or supporting a political party that opposes this kind of stuff, or … something. Honestly, I’m not entirely sure how to address it. My own solution has thus far mostly been to think about it and debate it on occasion, which is interesting, but not very effective.

                                                                              To answer the question: I do the “low hanging fruit” stuff; whitelist cookies, block 3rd-party cookies outright, adblocker, etc. Anything more than that probably has diminishing returns. This is mainly aimed at the “I want all your data” internet companies. I am not so worried about the NSA to be honest.

                                                                              1. 4

                                                                                You might enjoy this talk by PHK: https://www.youtube.com/watch?v=3jQoAYRKqhg. Tongue in cheek analysis of how we’ve gotten ourselves into this situation and that political problems require political solutions.

                                                                                1. 2

                                                                                  as individual it’s kind of pointless to protest against systemic problems like this

                                                                                  There’s a difference between protesting against (effecting change at large) and dealing with (effecting change for yourself) systemic problems. To me the question reads like it’s about the latter.

                                                                                  1. 1

                                                                                    I get your point, but I don’t think the two are that distinct, especially considering that preventing getting tracked is quite hard, and preventing NSA-type programs even harder.

                                                                                    1. 3

                                                                                      I agree with you about the difficulty in preventing NSA-type programs, and I also agree that “preventing getting tracked is quite hard”. Where I differ is in the “getting tracked” part: I think an individual can greatly reduce how much they are tracked, if they wish to do so.

                                                                                      To your original point: as an individual, I can greatly reduce my carbon emissions… which would be a drop in the ocean and wouldn’t improve climate change outcomes and how they affect me. Reducing my external data footprint is totally possible and improves my privacy.

                                                                                      1. 2

                                                                                        You’re correct that the situations aren’t completely analogous. But if you look at the general (non technical) population then we see that effective tracking protection is very hard, if not almost impossible. So while we both have the knowledge, skills, time, and patience to reduce our data footprint to quite some degree, most people simply don’t.

                                                                                        So now we have two options: we either go and try to educate every individual, or we take collective action to make things better for everyone. Given that the entire premise of tracking the hell out of everyone without their consent is ridiculous to start with and provides little benefit, and that educating individuals is hard and time-consuming, it seems to me that taking collective action is the only viable course of action if we really want to change things.

                                                                                        1. 1

                                                                                          My point was that data footprint is one area in which we can improve our own individual situation and help others we know to do the same for theirs. I agree 100% with you that the situation is terrible and that collective action would be better.

                                                                                          A company like FastMail could conceivably make a tool that:

                                                                                          1. Migrates from gmail to FastMail and sets up forwarding on gmail with markers on the message on FastMail to show which service providers need email address changed
                                                                                          2. Bundles Firefox with a good balance between privacy and usability (Facebook container!)
                                                                                          3. Bundles a VPN (I guess ProtonMail could do this too, because I think they actually offer a VPN)

                                                                                          If someone made a turnkey package that gets people 80% of the way there without too much effort, that seems like a win. Obviously, people have to pay for those services, and that’s one problem: it’s not clear to me how many people are willing to trade some money for privacy.

                                                                                  2. 1

                                                                                    I feel like it’s important to tell you that no, at most it’s a bare majority of people, not vast majority, who think solutions to farming and climate change are obvious. As an example, talking from the United States, our carbon numbers are already so small that the science says even if our emissions went to zero today, it wouldn’t make a dent at all in the globe’s climate. At this point the only obvious answer is to move inland.

                                                                                    1. 1

                                                                                      As an example, talking from the United States, our carbon numbers are already so small that the science says even if our emissions went to zero today, it wouldn’t make a dent at all in the globe’s climate

                                                                                      That’s true of ~every country. Not sure what led you to think it’s a helpful addition to the conversation.

                                                                                  1. 1

                                                                                    I work at Khan Academy (an education non-profit) and math is one of our strongest subjects. People tend to like our stuff because of simple, clear explanations given in just a few minutes per topic. If you look at the vectors and spaces page, for example, you see a bunch of very specific topics so you can jump in and move on within minutes.

                                                                                    Different people learn better from different sorts of material, so find the style that works best for you!

                                                                                    1. 1

                                                                                      It is proving so difficult, and so excruciating, to leave both GMail (for ProtonMail in my case) and Chrome (for Brave).

                                                                                      1. 4

                                                                                        Why is it difficult or excruciating? There are many email services and browsers, many of which are better than gmail and Chrome in my opinion.

                                                                                        1. 2

                                                                                          Speaking just for myself here, but I’m probably up to 40 or 50 online accounts, each of which has my email address.

                                                                                          Changing them all to a new email address most certainly would be difficult.

                                                                                          1. 7

                                                                                            When I switched away from gmail, I forwarded all messages to my new account. It took ~6 months before I had gotten most of my contacts to use the new address, and I still got a ‘real’ message every month or so after 2 years.

                                                                                            It’s been about a year since I last got an important message to it. Everything I use has changed over in the fullness of time.

                                                                                            Now that I own the address, I can move providers without having to go through that again :)

                                                                                            TL;DR: The best time to plant a tree is 20 years ago; the second best time is today.

                                                                                            1. 3

                                                                                              My primary email address has been at a domain I own since 2005, and I’m really thankful for that. Right now, my address just sends the mail along to gmail, but I can move to another email provider without having to change things with all of the services I use.

                                                                                            2. 0

                                                                                              Not for me.

                                                                                              1. 3

                                                                                                Can I ask why?

                                                                                                1. 0

                                                                                                  I am missing functionality from Brave and ProtonMail.

                                                                                            3. 2

                                                                                              It is mostly difficult because Gmail is a great FREE service. That is of course for a reason: YOU are the money-generating product. So if you want to turn that around, you will have to accept paying for an alternative.

                                                                                              1. 3

                                                                                                I already pay for ProtonMail and I would gladly pay for a browser. That’s not the point. GMail and Chrome are, IMHO, still far superior products.