1. 6

    How come nobody has written about JavaScript becoming the new C++ yet? I do find pattern matching pretty nice, though.

    1.  

      More like Perl.

      1.  

        It isn’t and it isn’t likely to. C++ isn’t C++ just because it has lots of features. C++ is C++ because it has lots of features and they don’t quite work together.

        ES features work together mostly rather smoothly. Things like classes and statics desugar straightforwardly to prototypes and properties on constructors, async doesn’t interact poorly with anything in particular. GC papers over a multitude of sins: you can’t get lots of the really nasty interactions from uninteresting looking code, like introducing a segfault by closing over an iterator that got invalidated.

      1. 8

        Is it just me or are other people also bothered by the overuse of emoticons and the low quality of this writing? I wish that style of writing would stay confined to SMS and not bleed into technical articles.

        1. 14

          I don’t know how you measure “quality of writing”, but I thought it was fine.

          No typos that I could see, and good sentence structure. Tasteful use of emojis, I thought. Distasteful would be one every paragraph or two, but they only used like 5 in the whole article.

          Just a different way to express yourself. A waaaay more casual one.

          1. 5

            i’m a fan of emoji used to decorate text, but not used to replace words.

            1. 3

              That’s a good way to put it. Indeed.

            2. 5

              Tasteful use of emojis I thought.

              Maybe it’s just me, but when I see “Oh 💩, it compiles to JavaScript.” I just have to think of trying-to-be-cool parents, which I find tiring. And that’s setting aside that I don’t believe vulgar language should be used at all in written documents.

              1. 1

                Can’t disagree with that, tbh (on the use of the shit emoji).

              2.  

                “Tasteful use of emojis” sounds like “subtle trolling” to me.

                Although it’s ok, I only see the same “tofu box” emoji.

                1.  

                  ‘Subtle trolling’ is quite like emoji use in that if you notice it, it’s wrong. The whole point of trolling is to rial someone up without them realising they’ve been rialed up.

                  1.  

                    Did you mean to write “rile” or is this comment itself an example of subtle trolling? ;) ❤

                    1.  

                      I did mean to write ‘rile’, yeah.

                    2.  

                      I am reacting to the over-use of emoji as a way to get personal. It’s good, but companies tend to use it to promote a product, so I have become allergic. Made with :love: by $BigCorp. Put a :tiger: in your :engine: …

                      On the other hand, I see no reason against putting emoji on one’s own blog, readers are not pushed straight onto the posts after all…

                2. 4

                  Didn’t bother me since it was at least a different style. I like seeing a mix of styles. If it annoyed you, I think you’ll like some of his comments on the HN thread, which get right to the point. Specifically, he has a list of what’s bad and what’s great.

                  1.  

                    Indeed, these are more substantial. I might be a little bit burned out by the “code ninjas” out there and the impression that software engineering is a dying art. Now you can do a two month bootcamp on React and VSCode and get a job. Even Google stopped asking for CS degrees.

                    1. 5

                      Careful. In my day job I work on implementing dependent type systems, with an eye for improving low level binary format parsing by leveraging formal verification. And yet I dropped out of CS and I use VS Code. Opening up other pathways to people getting into programming does not mean that we have to discount the importance of a high quality CS education. We would also be wise to not assume that a CS degree correlates with a good aptitude for programming.

                      1.  

                        Yes, you’re absolutely right, and I’d really like a wider range of people to get into software engineering, CS degree or not. However, from my anecdotal experience, I find there is a growing gap in knowledge and values, and am wondering why it seems so.

                        1.  

                          I find, in fact, that many university courses are actually doing more harm than good, peddling decades-old software engineering practices (like the gospel of Java, OOP, imperative programming and UML) rather than teaching core principles of programming languages, mathematics, and algorithms that age more slowly and are critical to encouraging and inspiring the next generation of CS researchers. This is partly industry’s fault, and partly the fault of universities.

                          I see industrial programming as more of a vocational trade, and employers should shoulder more of the burden of teaching up-to-date best practices. Let the universities do what they do well: theory. Don’t expect CS graduates to be excellent programmers from day one, but do expect them to eventually become much more effective and nimble in the long run than an entry-level boot-camp employee (depending on that employee’s desire for self-education). By the same token, I think universities should not get caught up in chasing the treadmill of the latest technology, and should be up front with prospective students about that.

                      2.  

                        Even Google stopped asking for CS degrees.

                        As far back as 2008, Steve Yegge was saying you don’t need a CS degree to get a job at google: https://steve-yegge.blogspot.com/2008/03/get-that-job-at-google.html

                        So I would say Google’s hiring practices and the bootcamp movement are largely unrelated, at least judging by the fact that I was a relatively early bootcamp grad and that was in 2011/2012.

                  1. 1

                    The major problem with a function like posix_spawn() is the number of tweaks one can make to a process. Sure, you have replacements for stdin, stdout and stderr and maybe a new working directory, and oh, we can also switch users (if root) and the environment but there’s also system limits like “number of open file descriptors” and “maximum memory to use” and “core file size”. There might be more that I’m missing.

                    1.  

                      Yeah, there’s lots. A potentially unbounded number, plus tomorrow someone will always think of one more they’d like to add.

                      The approach of having an opaque attrs object solves this much better than trying to meet every possible use case with a many-parameter function.

                      It makes sense to take the most commonly changed things as parameters, just to make most call sites clearer.
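
                      To make the shape concrete, here is a rough sketch using Python’s os.posix_spawn, which mirrors the C API: the common things are plain parameters, and everything else goes through the open-ended file_actions list (the child command here is just a stand-in):

```python
import os
import sys

# Common knobs (executable, argv, env) are positional parameters;
# everything else is expressed as an open-ended list of actions.
r, w = os.pipe()
pid = os.posix_spawn(
    sys.executable,
    [sys.executable, "-c", "print('hi')"],
    {},                               # child environment
    file_actions=[
        (os.POSIX_SPAWN_DUP2, w, 1),  # child stdout -> pipe write end
        (os.POSIX_SPAWN_CLOSE, r),    # child drops the read end
    ],
)
os.close(w)
out = os.read(r, 64)                  # read what the child printed
os.waitpid(pid, 0)
print(out)                            # b'hi\n'
```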

                    1.  

                      Slightly odd corner case in Lobsters’ tagging system here: this is a written article. It is tagged video because it’s about video. Usually that tag is meant to be used to allow people to filter out posts where the content is a video rather than the written word.

                      Not sure if lobsters’ tagging actually has a solution for this? So I’m going to just go off on a tiny bikeshed aside here: perhaps the ‘video’ tag could be called type:video or something like that? The same thing comes up for people wanting to filter out posts that are links to PDFs then being unable to see ordinary written posts on the web about topics like PDF rendering.

                      1.  

                        Yeah I was actually a little wary about putting the video type but there didn’t seem to be any other tag that fit better - definitely nothing on HLS or streaming.

                        I think a type: prefix sounds like a great solution.

                      1. -10

                        We don’t need yet another heavyweight UI to make people “click OK and get on with things”, we need people to understand how Git actually works.

                        Remember that Git is not a new iPhone and it doesn’t need to be usable by idiots, morons and people with zero knowledge - actually, it raises the bar a bit, but if you sit down and focus on how Git really works, you’ll be able to answer the question “What do I need to do to achieve that…?” for yourself in less than an hour.

                        Instead of copying and pasting magic spells from Stuck Overblown.

                        1. 22

                          Please could you not be this acerbic here in future? Thanks.

                          1. 5

                            I think both sides are right, people could become more expert at easier tools. The nice thing with this UI is it doesn’t seem to hide the underlying git commands, you can hover over the buttons to see what they do.

                            1. 3

                              What @0x2ba22e11 said.

                              1. 1

                                Despite your tone, you’re not entirely wrong when it comes to coders.

                                Admittedly Git could do with some overhaul in its command line arguments to make it more logical, but it’s a power tool and it’s possible to wrap your head around it. Not everyone does, but I think they should. Like with their editor/IDE of choice.

                                If someone’s more productive with a UI like this and there’s no real detriment, let the markets decide.

                                Git can be used to track non-code content, though. The Apollo 17 project comes to mind, as it’s one of my favorite things online.

                                I contributed to it using CLI Git, but for less technical people to get involved in things like that, there’s nothing wrong with a GUI.

                              1. 1

                                Even after reading the first several paragraphs and skimming the rest of this article I still couldn’t tell what a “memory model” is. Does anyone have a decent explanation?

                                1. 2

                                  A memory model is a mathematical model of what happens in a program where there are multiple concurrently running threads of execution which may read and write to shared memory. It specifies what guarantees the synchronisation primitives are required to provide.

                                  As an application developer, you can use it to reason precisely about what guarantees you’re supposed to be getting from the system by reasoning about the code you wrote and what primitives you used, without having to reason about their internal implementations.

                                  As a system implementer, it gives you a semantics to implement so you can try to do that with the absolute minimum amount of synchronisation that will possibly suffice. So long as the result is that programs’ observable behaviour meets the spec, you can do whatever.

                                  AFAIK the first good example that was widely used was the Java Memory Model. It’s also a relatively simple and well explained one, so I’d recommend reading about it.
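
                                  As a tiny illustration of the application-developer side, here is a Python sketch (not the JMM itself): the Event provides the happens-before guarantee that makes the reader’s observation safe, without either thread reasoning about how Event is implemented:

```python
import threading

data = 0
ready = threading.Event()

def writer():
    global data
    data = 42        # this write happens-before ready.set()
    ready.set()      # release: publish the write

def reader(out):
    ready.wait()     # acquire: synchronizes-with ready.set()
    out.append(data) # guaranteed to observe 42, per the model

out = []
t1 = threading.Thread(target=writer)
t2 = threading.Thread(target=reader, args=(out,))
t2.start()
t1.start()
t1.join()
t2.join()
print(out)           # [42]
```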

                                  1. 2

                                    Fantastic answer! Exactly what I was looking for.

                                    1. 2

                                      Thank you. I’m glad to hear it helped.

                                      1. 2

                                        I just remembered this link: Close encounters of the Java Memory Model kind, which explains the JMM in great detail (and links to some more introductory texts too).

                                    2. 1

                                      Memory model. Lobsters has many of them, too.

                                    1. 8

                                      Always a joy to read Conor’s writing :)

                                      Note that Epigram has been dead for a while, Idris is its spiritual successor (I believe it actually evolved from an attempt to build an Epigram compiler). Idris is explicitly aiming to be a “real” programming language; Agda is very similar, but is more often used from a mathematical/logical side of Curry-Howard, rather than the programming side.

                                      Neither Idris nor Agda has the 2D syntax of Epigram, but they both have powerful Emacs modes which can fill in pieces of code (Haskell’s “typed holes” are the same idea, but (as this paper demonstrates) Haskell’s types are less informative).

                                      1. 10

                                        Indeed, I suppose it’s that Idris evolved from what was intended to be the back end of Epigram. It certainly owes a lot to Conor McBride and James McKinna’s work on Epigram. I don’t know if “real programming language” is exactly the right way to put it, though, so much as being the language I wanted to have to explore the software development potential of dependent types. Maybe “real” will come one day :).

                                        1. 1

                                          Will we see a longer-form post/paper or something that isn’t merely Twitter teasers about Blodwen anytime soon? :)

                                          1. 6

                                            Hopefully! I know I need to get on with writing things so that more people can join in properly. I have plenty of time to work on it for a change in the next few months, so that’s nice…

                                            1. 1

                                              Do you have a writeup about it? I’m wondering why you’re replacing Idris, which is somewhat established already - I mean, that probably is the reason you’re replacing it, but I still wonder what concretely necessitated a whole new language instead of a 2.0.

                                              1. 6

                                                It isn’t a whole new language, it’s a reimplementation in Idris with some changes that experience suggests will be a good idea. So it’s an evolution of Idris 1. I’ll call it Idris 2 at some point, if it’s successful. It’s promising so far - code type checks significantly faster than in Idris 1, and compiled code runs a bit faster too.

                                                Also, I’ve tried to keep the core language (which is internally called ‘TTImp’ for ‘type theory with implicits’) and the surface language cleanly separated. This is because I occasionally have ideas for alternative surface languages (e.g. taking effects seriously, or typestate, or maybe even an imperative language using linear types internally) and it’ll be much easier to try this if I don’t have to reimplement a dependent type checker every time. I don’t know if I’ll ever get around to trying this sort of thing, but maybe someone else will…
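
                                                The split can be pictured with a toy sketch (names hypothetical, nothing like Blodwen’s actual internals): several surface syntaxes elaborate into one shared core representation, so only the core ever needs a checker:

```python
from dataclasses import dataclass

@dataclass
class App:
    """Toy core term: apply function fn to argument arg."""
    fn: str
    arg: str

def core_check(term, sigs):
    # One shared checker: does fn's domain match arg's type?
    fn_ty = sigs.get(term.fn)
    return fn_ty is not None and fn_ty[0] == sigs.get(term.arg)

def elab_applicative(src):
    # Surface syntax A: "f x"
    fn, arg = src.split()
    return App(fn, arg)

def elab_pipeline(src):
    # Surface syntax B: "x |> f"
    arg, fn = (s.strip() for s in src.split("|>"))
    return App(fn, arg)

sigs = {"f": ("Int", "Bool"), "x": "Int"}
print(core_check(elab_applicative("f x"), sigs))   # True
print(core_check(elab_pipeline("x |> f"), sigs))   # True: same core term
```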

                                                I started this because the Idris implementation has a number of annoying problems (I’ll go into this some other time…) that can only be fixed with some pretty serious reengineering of the core. So I thought, rather than reengineer the core, it would be more fun to see (a) if it was good enough to implement itself, and (b) if dependent types would help in any way.

                                                The answer to (a) turned out to be “not really, but at least we can make it good enough” and to (b) very much so, especially when it comes to name manipulation in the core language, which is tricky to get right but much much easier if you have a type system telling you what to do.

                                                I don’t have any writeup on any of this yet. It’ll happen eventually. (It has to, really - firstly because nobody ever made anything worthwhile on their own so a writeup is important for getting people involved, and secondly because it’s kind of what my job is :))

                                                1. 1

                                                  I’m so excited by all of this, can’t wait to see what comes out of it, and it can’t come soon enough:D Idris totally reinvigorated my love for programming tbh

                                            2. 2

                                              I just follow along with the commits. Edwinb is usually pretty good with his commit messages, so you can kind of get a story of the development from that! :)

                                              1. 1

                                                I’ve got to admit it’s very weird reading a reply by someone with your identical name/spelling, thanks!

                                              2. 1

                                                What’s Blodwen?

                                                1. 2

                                                  An implementation of Idris in Idris: https://github.com/edwinb/Blodwen/

                                                  Has a neat implementation of Quantitative Type Theory that I’m hoping to integrate in my own programming language!

                                                  1. 1

                                                    Nice! What’s your language? Btw your second link is broken

                                                    1. 3

                                                      Fixed! This is mine: https://github.com/pikelet-lang/pikelet - scratching my itch of Rust not being enough like Idris, and Idris being not designed with low level systems programming in mind. Probably won’t amount to much (it’s rather ambitious), but it’s been fun playing around, learning how dependent type checkers work! I still need to learn more about what Epigram and Idris do, but it takes passes of deepening to really get a handle on all the stuff they learned. I’m probably making a bunch of mistakes that I don’t know about yet!

                                                      1. 1

                                                        Nice logo. Looks harmless and fun.

                                                        1. 1

                                                          Nice. I’m starting to realize how I wasn’t the only one to have thought “wouldn’t it be nice to have a purely functional systems language with cool types”:D

                                                          What I wanted to make was very similar to Idris, but I would’ve put way more focus on lower-level stuff. Honestly, my way of combining it was likely misguided as I was a total rookie back then (still am, but comparatively, I at least know how much I didn’t know…)

                                                          1. 1

                                                            Oh cool! I’m sure there are others with a similar itch to scratch - it’s just coming into the realm of the adjacent possible. Pikelet is just my attempt at pushing that process forwards.

                                                            We’ve been growing a nice community over at https://gitter.im/pikelet-lang/Lobby - you are most welcome to drop by if you like!

                                                2. 1

                                                  Thanks for putting this in context, that’s really useful.

                                                  Also: sounds like I’m missing a (200x) in the title, if you know the correct year.

                                                1. 3

                                                  Cute! And this

                                                  computers are about humans, and […] it is not possible to reason in an aseptic way just thinking at the technological implications

                                                  is very well put. ❤

                                                  1. 3

                                                    Urgh, damn it. I guess I should download Wikipedia while Europeans like me are still allowed to access all of it… It’s only 80 GB (wtf?) anyway.

                                                    1. 3

                                                      That and the Internet Archive. ;)

                                                      Regarding Wikipedia, do they sell offline copies of it so we don’t have to download 80GB? Seems like it’d be a nice fundraising and sharing strategy combined.

                                                      1. 3

                                                        I second this. While I know the content might change in the near future, it would be fun to have memorabilia of a digital knowledge base. I regret throwing out the Solaris 10 DVDs that Sun sent me for free back in 2009. I was too dumb back then.

                                                        1. 2

                                                          It’s a bit out of date, but there’s wikipediaondvd.com, and lots more options at dumps.wikimedia.org.

                                                          I wonder how much traffic setting up a local mirror would entail, might be useful. Probably the type of thing that serious preppers do.

                                                          1. 1

                                                            You can help seeding too.

                                                        2. 4

                                                          Actually Wikipedia is exempt from this directive, as is also mentioned in the linked article. While I agree that this directive will have a severely negative impact on the internet in Europe, we should be careful not to rely on false arguments.

                                                          1. 1

                                                            Do you remember the encyclopedias of the 90s? They came on a single CD. 650MB.

                                                            1. 5

                                                              To be explicit, this is not a “modern systems are bloated” thing. The English Wikipedia has an estimated 3.5 billion words. If you took out every piece of multimedia, every talk page, all the metadata, and the edit history, it’d still be 30 GB of raw text uncompressed.
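
                                                              A back-of-envelope check of that figure (the bytes-per-word average is an assumption):

```python
words = 3.5e9          # estimated words in English Wikipedia
bytes_per_word = 8.6   # assumed: average English word plus separator
print(words * bytes_per_word / 1e9)   # ~30 GB of raw text
```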

                                                              1. 4

                                                                Oh that’s not what I was implying. The commenter said “It’s only 80 GB (wtf?)”

                                                                I too was surprised at how small it was, but then remembered the old encyclopedias and realized that you can put a lot of pure text data in a fairly small amount of space.

                                                                1. 1

                                                                  Remember that they had a very limited selection, with low-quality images, at least on the ones I had. So it makes sense that there’s a big difference. I feel you, though, on how we used to get a good pile of learning in a small package.

                                                                2. 1

                                                                  30 GB of raw text uncompressed

                                                                  That sounds like a fun text encoding challenge: try to get that 30GB of wiki text onto a single layer DVD (about 4.6GB?)

                                                                  I bet it’s technically possible with enough work. AFAIK Claude Shannon experimentally showed that human readable text only has a few bits of information per character. Of course there are lots of languages but they must each have some optimal encoding. ;)
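
                                                                  Back-of-envelope, taking a value inside Shannon’s measured range of roughly 0.6-1.3 bits per character (the exact figure used here is an assumption):

```python
chars = 30e9           # 30 GB at one byte per character
bits_per_char = 1.2    # assumed entropy of English text
ideal_bytes = chars * bits_per_char / 8
print(ideal_bytes / 1e9)   # ~4.5 GB: single-layer DVD territory
```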

                                                                  1. 2

                                                                    Not even sure it’d be a lot of work. Text packs extremely well; IIRC compression ratios over 20x are not uncommon.
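
                                                                    Easy to see with Python’s standard library; note this uses deliberately repetitive input, so the ratios are far better than real prose would achieve:

```python
import lzma
import zlib

text = b"The quick brown fox jumps over the lazy dog. " * 2000

print(len(text) / len(zlib.compress(text, 9)))         # DEFLATE ratio
print(len(text) / len(lzma.compress(text, preset=9)))  # LZMA ratio
```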

                                                                    1. 1

                                                                      Huh! I think gzip usually achieves about 2:1 on ASCII text and lzma is up to roughly twice as good. At least one of those two beliefs has to be definitely incorrect, then.

                                                                      Okay so, make it challenging: same problem but this time a 700MB CD-R. :)

                                                                      1. 4

                                                                        There is actually a well-known text compression benchmark based around Wikipedia, the best compressor manages 85x while taking just under 10 days to decompress. Slightly more practical is lpaq9m at 2.5 hours, but with “only” 69x compression.

                                                                        1. 1

                                                                          What does 69x compression mean? Is it just 30 GB / 69 = .43 GB compressed? That doesn’t match up with the page you linked, which (assuming it’s in bytes) is around 143 MB (much smaller than .43 GB).

                                                                          1. 5

                                                                            From the page,

                                                                            enwik9: compressed size of first 10e9 bytes of enwiki-20060303-pages-articles.xml.

                                                                            So 10e9 = 9.31 GiB. lpaq9m lists 144,054,338 bytes as the compressed output size + compressor (10e9/144,054,338 = 69.41), and 898 nsec/byte decompression throughput, so (10e9*898)/1e9/3600 = 2.49 hours to decompress 9.31GiB.

                                                                          2. 1

                                                                            Nice! Thanks.

                                                              1. 18

                                                                Not surprised. I read Clean Code a year back and Martin’s style there was to advocate an extremely specific, almost-obsessive style of coding without deeply evaluating the why or talking about context. The impression was that it was the Right Way To Code, and any deviation from that was a failing. Stands to reason it’d be the same here.

                                                                For a good book on software architecture, I’d recommend Documenting Software Architectures by Paul Clements et al. While it’s focused on just representing architecture, I found that learning how to explain things led me to being better able to think about architectures.

                                                                1. 5

                                                                  Thank you for recommending that book, I am contemplating ordering it.

                                                                  I think I read The Clean Coder as a “junior” dev and it actually taught me a few things, so my impression from my first encounter with his writing was very positive. Then I reread it as I was now in a more senior role and contemplated recommending it to junior devs starting their first job. I was sadly disappointed by the book. I also get the same feeling when I watch RCM talks on YouTube, etc. I think it’s the holier-than-thou attitude that repels me now; as a junior I looked for guidance and didn’t mind so much, but now I find his advice too simplistic and the “Clean X” brand shallow, maybe even a bit narcissistic.

                                                                  His recent ranting blog posts are also disappointing, but I guess not unexpected.

                                                                  1. 8

                                                                    I read “Coders at Work” and “The Architecture of Open Source Applications” a while back. I wouldn’t describe either as exclusively full of great advice. In fact both often document some really questionable decisions, completely uncritically. However, I think they give a glimpse of people working with very different mindsets from one another because of very different constraints, and projects that make large scale technical decisions for very different reasons.

                                                                    1. 3

                                                                      For intermediate developers, I can also highly recommend The Pragmatic Programmer. It’s been years since I read it, so I don’t know how well it held up over the years, but I remember when I read it, it had a big influence on how I view programming.

                                                                  1. 1

                                                                    I’m a bit confused by the mention of putting parts of the heap in slower memory. Shouldn’t that be the kernel + swap’s job? Or does the kernel only page out entire process address spaces (which would clearly be too coarse for what the article’s discussing)?

                                                                    1. 4

                                                                      Linux on x86 and amd64 can page out individual 4 KiB pages, so the granularity of that is fine.

                                                                      It’s plausible that they might be able to get much better behaviour by doing it themselves instead of letting the kernel do it. Two things spring to mind:

                                                                      If they’re managing object presence in user space, they know which objects are in RAM so they can refrain from waking them up when they definitely haven’t changed. Swap is mostly transparent to user processes. You really don’t want to wake up a swapped out object during GC if you can avoid it, but you don’t know which objects are swapped out without calling mincore() for every page, which is not very fast.

                                                                      Other thing that springs to mind: AFAIK handling page faults is kinda slow - an x86 running Linux will take something like a large fraction of a microsecond each time a fault occurs, and the fault mechanism in the CPU is quite expensive (it has to flush some pipelines, at least). So doing your paging-in in userspace with just ordinary instructions that don’t invoke the OS or the slow bits of the CPU may be a big win.
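
                                                                      The first point can be sketched in Python (a toy design, nothing like a real VM’s heap): because the runtime tracks residency itself, a GC-style walk can skip evicted objects instead of faulting them back in:

```python
import os
import pickle
import tempfile

class Heap:
    """Toy user-space residency tracking (hypothetical design)."""
    def __init__(self):
        self.resident = {}   # oid -> live object in RAM
        self.evicted = {}    # oid -> path in slow storage

    def evict(self, oid):
        fd, path = tempfile.mkstemp()
        with os.fdopen(fd, "wb") as f:
            pickle.dump(self.resident.pop(oid), f)
        self.evicted[oid] = path

    def touch(self, oid):
        # Explicit page-in, instead of a hardware page fault.
        if oid in self.evicted:
            path = self.evicted.pop(oid)
            with open(path, "rb") as f:
                self.resident[oid] = pickle.load(f)
            os.unlink(path)
        return self.resident[oid]

    def walk_resident(self):
        # The "GC scan": touches only objects known to be in RAM.
        return sorted(self.resident)

h = Heap()
h.resident["a"] = [1, 2, 3]
h.resident["b"] = "cold data"
h.evict("b")
print(h.walk_resident())   # ['a'] -- the evicted object stays cold
print(h.touch("b"))        # 'cold data'
```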

                                                                    1. 1

                                                                      I wonder what they have been using that data for. Were they doing something boring with it like selling it to advertisers or were they doing something interesting like, say, maybe finding malware distribution sites by finding URLs that have really strong positive correlation with malware-infected computers?

                                                                      1. 2

                                                                        I wonder how many projects are collecting data just for the heck of it, because everyone else is doing that too and mayyybe it’ll be useful one day?

                                                                      1. 2

                                                                        Something I’m not clear on after reading that: does this work because http-fetch resumes at the start of each pack file, or is http-fetch actually able to resume in the middle of a pack file? Hence does this work pretty much any time a repo is available via http or does it require the remote side to do some extra work to break up the pack files into chunks small enough that resuming works?

                                                                        1. 2

                                                                          Sorry, I didn’t test that case. I assumed that the “resume an incomplete packfile” case would vary between git server implementations, so I worked on the assumption that the download would restart the packfile from the beginning, since I wanted a universal solution that should work with any server implementation.

                                                                          1. 2

                                                                            Ah, thanks. I wasn’t quite sure whether splitting the packfiles into 1MB blocks was something you did to make it easier for resumption to work, or whether it was only done to make it easier to test that the method works.

                                                                            FWIW, all the commonly used httpds that I know of (e.g. Apache, Microsoft IIS, nginx; I’m almost sure lighttpd does too) support HTTP range requests for static files out of the box with no configuration. I wouldn’t be surprised if resuming individual files via HTTP turned out to work on every single implementation that you find in the wild.

                                                                        1. 1

                                                                          I’m impressed by how good a job this article makes of establishing its motivation.

                                                                          My first thought upon reading the title was “that sounds like it could be useful but something is probably direly wrong if you need that”. I wasn’t thinking about bad network connectivity, I was thinking about repos that are large fractions of a terabyte. The first paragraph then immediately explains why this can actually be a problem for ordinary sized repos, too.

                                                                          1. 6

                                                                            … and yet another language realizing that spending all its bracket budget on luxuries like array syntax backfires spectacularly.

                                                                            The first rule of language design should be: “You got 4 pairs of brackets, (), {}, [] and <>. Use them wisely – once they are spent, they are pretty much gone forever.”

                                                                            I’m all for generics, but the draft syntax coupled with Go’s ident Type syntax turns the code into a one-letter alphabet soup.

                                                                            1. 2

                                                                              Dare we resort to 「」 and 『』?

                                                                              No. ;)

                                                                              1. 1

                                                                                I like it. Why should type parameters use a totally different syntax to normal parameters? They’re not really that different. From the sounds of it, they’ll sometimes be dispatched at runtime anyway (a bit like how in Go objects will be heap-allocated if you return pointers to them but stack-allocated otherwise, I guess).

                                                                                Also I’m firmly of the opinion that lists are waaay more important to programming than types.

                                                                                1. 2

                                                                                  Why should type parameters use a totally different syntax to normal parameters?

                                                                                  Because they are completely different. If you want to turn Go into a dependently-typed language where the difference between types and terms ceases to exist, go ahead, but until then, types and terms are fundamentally different and should receive different syntax. Especially because the types can often be inferred, it’s important to see which of the parameter lists was left out.

                                                                              1. 2

                                                                                A question that I didn’t see touched on in the article: can you confirm that the upward trend seen in the WP Engine logs really is real people, and not just an artefact of the number of click-fraud spam bots operating on the internet increasing over time? Ideally you’d be able to look at the rates at which people go through conversions that the spam bots can’t fake (e.g. buying stuff) and verify that they go up and down roughly in line with the stats reported by WP Engine.

                                                                                I think the advice to concentrate more on conversion numbers (e.g. someone bought a thing) seems like a very good idea. Bots aren’t going to fake all of those so well.

                                                                                1. 10

                                                                                  I know the Excel team maintain a C(++?) compiler that they use internally for Excel; I don’t really know if it’s used elsewhere.

                                                                                  1. 7

                                                                                    I have a very vague memory of reading a blog post from someone at Microsoft (I think Raymond Chen) years ago which mentioned that they had a compiler whose entire purpose was to build exactly one DLL that shipped with Windows. I think its purpose was something along the lines of bridging 16- and 32-bit ABIs for backwards compatibility with Windows 3 programs, so it needed to contain a mixture of 32-bit and 16-bit code, and apparently that’s not something that any sensible compiler’s code generator will ever do for you, so someone made a one-off compiler for it?

                                                                                  1. 1

                                                                                    It looks like a particle accelerator experiment in a cloud chamber married a cluster of galaxies.

                                                                                    1. 12

                                                                                      Kind of an aside, but I’m pleased by the lack of vitriol in this.

                                                                                      1. 13

                                                                                        Almost all of Theo’s communications are straightforward and polite. It’s just that people cherry-picked and publicized the few occasions where he really let loose, so he got an undeserved reputation for being vitriolic.

                                                                                        1. 2

                                                                                          Pleasantly surprised, even.

                                                                                        1. 1

                                                                                          Diagnosing and fixing the issue was good. Coming up with and publishing the uptime faker was great.