1. 2

    I’m a big fan of having a template for commit messages for your open source project. As an example, finagle and netty both have templates, and they make it much easier to understand both the purpose of a commit and how it achieved that purpose, which is what we typically want out of a commit message.
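
    If memory serves, netty’s template asks contributors to structure messages along these lines (quoting from memory, so treat the exact wording as approximate):

    One-line summary of the change

    Motivation:
    Why is this change needed? What problem does it solve?

    Modifications:
    What was changed to accomplish it?

    Result:
    What does the world look like after this change?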

    1. 16

      The classic version of this is “How to ask questions the smart way”.

      1. 10

        Aside from being poorly written, this article tries to skewer event loops with the argument that CPU work will block your thread and prevent you from achieving high throughput. This doesn’t have to do with blocking vs non-blocking. Your CPU resources will be consumed regardless of the approach you take. The actual difference in throughput here is how many cores you can consume. Presumably if you were actually trying to achieve high throughput in production with node, you would have several node processes per machine and load balance your work across the different processes.

        I’m not a huge proponent of node, but this article is not good.

        1. 7

          I helped out with the Twitter decision to vote no on JSR 376, and here’s what we said. The short version is that we felt like the JSR doesn’t have very much bang for the buck as it stands, although it’s an opportunity to tackle a real problem.

          1. 5

            Something a bit less obvious–you can write an async recursive loop with futures if you’re clever, but your future implementation needs to have built-in support for it. In Scala (using Twitter futures, although Scala futures have supported this too since 2.10.3):

            import com.twitter.conversions.time._          // enables the 5.seconds syntax
            import com.twitter.util.{Future, JavaTimer, Timer}
            implicit val timer: Timer = new JavaTimer()     // Future.sleep needs a Timer in scope
            def loop(): Future[Unit] = Future.sleep(5.seconds).onSuccess { _ =>
              println("loopin'")
            }.before(loop())
            

            This is quite tricky–the originally returned future is never satisfied, and the chain keeps telescoping forever, so if you’re not smart about this you’ll end up with a big space leak. If you’re curious, the Scala Promise implementation has a good explanation of how this works.

            1. 17

              I don’t think journalistic ethics have caught up with the ethics around doxing yet. The problem is that journalism tries to answer some basic questions, like “who, what, where, why, when, how”, and historically “who” has been a meatspace “who” because that was the only “who”. Now folks have persistent online identities, so it would be reasonable to refer to this guy just as MalwareTech and it would be fine. “Who” in this case doesn’t have to just be “an anonymous person online”, because MalwareTech is himself an identifiable person online, separate from what he does in meatspace.

              Clearly, some kinds of doxing aren’t OK in journalism, like publishing someone’s address, telephone number, or social security number, but violations of privacy have always been somewhat fuzzy. Cf. paparazzi, or revealing who Elena Ferrante was.

              For what it’s worth, the SPJ code covers this kind of thing, but I suspect it will still take more time for journalists to get a good sense of how this works in the internet era.

              1. 3

                And now I’m sitting here, slightly confused about whether Simon Peyton Jones gave an enlightening talk about online privacy that I missed, before I followed the link …

              1. 3

                So eventually, after we platformize our hack and are comfortable from having run parallel infrastructures for some time, we’ll be handing off our DNS infra to the folks that probably know how to do it better than us.

                So far I’m 2 for 2 on “companies I’ve heard of running their own complicated DNS set up, despite it not being a core part of their business” vs “companies who would have been far better off outsourcing their DNS.”

                1. 1

                  What does this look like when you’re in your own datacenter?

                  1. 1

                    One of:

                    • You put the entirety of your zones on the external DNS service and you put only caching (if any) nameservers inside the DC.
                    • You put the public-facing part of your zones on the external DNS service and you do split-horizon DNS to have an internal.example.com subdomain visible only inside your DC, inside which all your internal-only records go. You put at least one nameserver inside your DC which responds to requests from inside your DC only, believes itself to be authoritative for internal.example.com, and delegates all other requests.

                    In both cases you get an API and UI for managing your publicly visible DNS entries because every worthwhile DNS provider does that.
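
                    To make the split-horizon option concrete, here’s a minimal sketch of what the internal nameserver’s config might look like, assuming BIND (the ACL range, zone name, and file path are made up for illustration):

                    acl "dc" { 10.0.0.0/8; };  // hosts inside the DC

                    view "internal" {
                        match-clients { "dc"; };
                        recursion yes;  // resolve everything else on behalf of DC hosts
                        zone "internal.example.com" {
                            type master;  // authoritative for internal-only records
                            file "zones/internal.example.com.db";
                        };
                    };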

                    1. 1

                      You can still outsource DNS in that scenario. Maybe it makes less sense, but it’s just as possible as when you’re entirely cloud-hosted.

                    2. 1

                      I think that an ideal state is “companies should only do the core part of their business”, but the reality is that “companies have to own whatever they need to own to ensure their customers can access their product”.

                      If that means running your own code or your own DNS or your own fileserver then that’s what you gotta do. It’s obviously more expensive but some companies don’t have the luxury of saying (as I’ve heard many on hn say) “amazon is down lol that means the internet’s broken guess we can go to lunch until it’s working again”.

                      1. 1

                        This can’t possibly be true if “own” means “run themselves”. Every company that sells products using the internet needs, amongst many other things, DNS service. Proportionally very few of those companies are capable of running a DNS service with higher uptime than, say, Route 53.

                    1. 2

                      My main problem with commit messages in git is typically that I find them inscrutable after a few months, or on a piece of code I’m not familiar with. My team has adopted a strategy of ensuring that commit messages state the motivation for the commit and then explain how it fixes the problem, which I really like, and which you can find here. It’s quite lightweight.

                      1. 4

                        I work on finagle, which underlies the technology that duolingo switched to (finatra) so I have a horse in this race, but I wanted to talk a little more about what you were saying about averages being useless.

                        The tricky thing is that latency is something we get when we make a request and get a response. This means that latency is subject to all kinds of things that actually happen inside of a request. Typically requests are pretty similar, but sometimes they’re a little different–like the thread that’s supposed to epoll and find your request is busy with another request when you come in, so you wait a few extra microseconds before being helped, or you contend on a lock with someone else and that adds ten microseconds, and all of those things add up to your actual latency.

                        In practice, this ends up meaning that your latency is subject to the whims of what happens inside of your application, which is probably obvious to you already. What’s interesting here is what kinds of things might happen in your application. Typically in garbage collected languages the most interesting thing is a garbage collection, but other things, like having to wait for a timer thread, might also have an effect. If 10% of your requests need to wait for a timer thread that ticks every 10ms, then they’ll create a uniform distribution from the normal request latency + [0, 10ms).

                        This ends up meaning that when people talk about normal metrics being mostly useless for latency, it’s usually because they mean the aggregate latency, which has samples that had to wait for the timer thread, samples that had to sit through a garbage collection, etc. However, it isn’t that the distributions they construct are particularly odd, but more that they are composed of many other quite typical distributions. So there’s a normal distribution for the happy path, a normal distribution for the garbage collections, and a uniform distribution for when you were scheduled on the timer, and put all together they end up making a naive average difficult to interpret.
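
                        To make that concrete, here’s a toy simulation of such a mixture in Scala (every rate and distribution here is invented purely for illustration):

                        import scala.util.Random

                        val rng = new Random(42)
                        def sampleMs(): Double = {
                          val happy = math.max(0.1, 1.0 + 0.2 * rng.nextGaussian())                 // happy path: ~1ms
                          val timer = if (rng.nextDouble() < 0.10) 10.0 * rng.nextDouble() else 0.0 // 10% wait on a 10ms timer: uniform [0, 10ms)
                          val gc = if (rng.nextDouble() < 0.01) 100.0 + 20.0 * rng.nextGaussian() else 0.0 // 1% sit through a ~100ms gc pause
                          happy + timer + gc
                        }
                        val xs = Vector.fill(100000)(sampleMs()).sorted
                        println(f"mean=${xs.sum / xs.size}%.2f p50=${xs(xs.size / 2)}%.2f p99=${xs(xs.size * 99 / 100)}%.2f")

                        The mean lands somewhere that describes none of the three underlying populations, which is the sense in which a naive average is hard to interpret.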

                        But we can make an informed guess here, which is that the happy path is probably around 10ms now, and was probably around 750ms before, which is quite a nice improvement. As far as the unhappy path, my suspicion is that JVM gc pauses are better than Python gc pauses, but it’s quite difficult to tell for sure. My guess would be that their gc pauses are on the order of 100s of milliseconds, and were previously also on the order of 100s of milliseconds, so the p9999 is probably still better than the p50 they saw previously.

                        Anyway, this is just to say that averages are useless, but also that knowing just the p50 or p99 is sort of useless too. Really what I want to be able to see are the actual histograms. As a side note, finatra exports real histograms, so if you can get ahold of one of the duolingo people, I’d be pretty interested to see some of those graphs.

                        1. 2

                          Agreed, a histogram – or any more details around performance – would have been useful. It’s unclear what they measured and what was sped up, so it’s hard to evaluate anyway.

                          And this is the problem with precision without accuracy: if you’re telling me 750ms and 10ms, that sends me a different signal than 750ms and 14ms. In fact, if I wasn’t going to dive deep into the perf aspects, I might have either dropped numbers altogether (and stated “more than an order of magnitude improvement”), or said “50 times faster”, and then I would’ve gotten the gist of the speedup (which seems awesome) without tripping over the concrete numbers (especially 14).

                        1. 3

                          I think this article is interesting, but unrelated to the core interest of lobsters, which is computer technology. I think folks can find these kinds of articles on other websites, and it would be best to keep this kind of stuff off lobsters to keep the signal-to-noise ratio high.

                          1. 16

                            I slightly disagree. It fits in well. UI design is a very important part of CS. I will caveat my comment by saying this probably should have the off-topic tag.

                            Edit: I swear there used to be an off-topic category.

                            1. 19

                              I think CS people tend to find these kinds of metro-layout type discussions interesting for the even more specific reason that they’re closely related to CS sub-areas like automated graph layout. In fact if automated graph layout worked ‘perfectly’ for some definition of perfectly, you would just use that to make metro maps.

                              1. 4

                                Indeed. The blog post from the Transit app team was an excellent read on this topic:

                                https://medium.com/transit-app/how-we-built-the-worlds-prettiest-auto-generated-transit-maps-12d0c6fa502f

                            2. 8

                              I disagree completely. Computer science is largely about representation and communication of data, and few sets of data affect more people than those related to transit systems. Even if one finds the visual / graphic aspects uninteresting, the implicit analysis of the data set can inform all manner of algorithmic thinking.

                            1. 22

                              Are there any stripe folks on lobsters who know why stripe chose Consul as the service-discovery tool, instead of straight-up DNS or zookeeper? b0rk phrases it as “effective and practical”, but hashicorp’s first commit to Consul was almost exactly three years ago, so even if they’ve been using it since it came out, that’s only three years. In comparison, kubernetes, which she contrasts as “a new technology coming out”, had its first commit two and a half years ago, but that commit was already 250 files, so it was probably in development for at least half a year before that. I wonder if maybe the way Stripe talks about Consul has changed since they started using it–since they’ve used it for a couple of years, they think of it as battle-hardened, even though in the larger world of distributed systems it is not particularly broadly used. This might be true for Stripe, since they have already worn down the rough edges for their use case, but if I were looking at choosing a service discovery system for a company, I don’t think I would consider Consul anti-flashy.

                              One thing that worried me about Consul is exactly what Stripe ran into when running it: that it’s built on top of a framework for guaranteeing consistency. In practice, strong consistency might not be what you want out of a service discovery framework. It might be appropriate if you really don’t want your servers to ever accidentally talk to the wrong server and you reuse names (or IP addresses, which is often what service discovery nodes point to), since after you remove a node from a cluster, you can be pretty certain that all clients will see the removal, provided the service discovery cluster can talk to them. In the long run, a better solution to this problem than requiring strong consistency in your service discovery tool is requiring that clients and servers authenticate each other, so they agree that they’re talking to who they think they’re talking to. If I were picking a flashy new service discovery framework, I would probably look at eventually consistent tools like Netflix’s Eureka. If I were trying to do something battle-hardened, I would probably pick Zookeeper.

                              Looking at Zookeeper naively, you might ask, “Why is this strongly consistent hierarchical file system the go-to default for service discovery?” One reason is that it was designed for generic configuration changes, so it has the “watches” API, which notifies clients when data changes, and the “ephemeral znodes” API, which removes a node’s entry when its session dies; together these can be the fundamental building blocks of a service discovery tool. That’s the long and short of why people have used it for practically a decade.
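
                              As a sketch of those building blocks (using the stock ZooKeeper Java client from Scala; the paths are hypothetical, and /services/web is assumed to already exist as a persistent znode), a server registers itself as an ephemeral znode and a client watches the parent for membership changes:

                              import org.apache.zookeeper.{CreateMode, WatchedEvent, Watcher, ZooDefs, ZooKeeper}

                              val zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 5000, new Watcher {
                                def process(e: WatchedEvent): Unit = () // session-level events
                              })

                              // A server announces itself; the znode disappears when its session dies.
                              zk.create("/services/web/host1_8080", Array.emptyByteArray,
                                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL)

                              // A client reads the membership and leaves a watch that fires on the next change.
                              val members = zk.getChildren("/services/web", new Watcher {
                                def process(e: WatchedEvent): Unit = println(s"membership changed: ${e.getType}")
                              })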

                              Other than that, zookeeper doesn’t have a lot that particularly commends it. It does great throughput (for a strongly consistent store), and many people have used it for service discovery for a long time, so you can be pretty confident in it. On the other hand, when it’s not confident it can make progress safely, it just doesn’t make progress–this can mean that new nodes can’t start up (because you can’t add them to the cluster) and that old nodes can’t be removed (because you can’t remove them from the cluster). Leader elections can also be pretty painful. Unfortunately, these are also problems that Consul faces, because it made the same choices about consistency that Zookeeper did.

                              Now that they’re using DNS on top of Consul, they have two single points of failure. Although we treat DNS like air and assume it’s an unlimited resource, DNS is still served by name servers, and they can still go down. With that said, DNS is really battle-hardened, so usually the problem comes when somebody poisons your DNS server somehow. This problem is mitigated by being in an environment where experts run your DNS servers, but it can still be bad.

                              The other thing is that network partitions are real, and you don’t necessarily want to take down your own website because your service discovery cluster can’t talk to every remote host. Just because your service discovery cluster is partitioned from them doesn’t mean that they’re partitioned from each other! The nastiest problem then isn’t when Consul is down, but when Consul is up and is sending you garbage that makes you think everyone is down. A solution ends up being to only trust Consul as a source of new information–your load balancer accepts new nodes from Consul, but ignores information about dead nodes until it can validate for itself that they’re dead.
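
                              A small sketch of that policy (the types here are hypothetical, just to pin down the asymmetry): trust service discovery for additions, but only drop a node once your own health check agrees it’s dead.

                              trait HealthChecker { def isAlive(node: String): Boolean }

                              final class SafeMembership(checker: HealthChecker) {
                                private var active = Set.empty[String]

                                def onDiscoveryUpdate(reported: Set[String]): Unit = {
                                  active ++= reported                            // additions are trusted immediately
                                  val suspects = active -- reported              // removals are only a hint...
                                  active --= suspects.filterNot(checker.isAlive) // ...drop only what we verify is dead
                                }

                                def snapshot: Set[String] = active
                              }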

                              As b0rk mentioned, DNS can be pretty slow to update, which is usually the reason why people don’t want to use just DNS. If you’re happy with DNS’s propagation speed, it might make sense to cut out the middleman and skip running your own service discovery tool. With that said, it can be a hassle to have to wait on the order of minutes for a service discovery update–in particular, it makes rolling restarts especially slow. If you want to sequence rolling 5% of your cluster each time, that’s twenty batches, and waiting even a minute of propagation per batch adds at least twenty minutes to your deploy. You can use blue/green deploys to make it easier to roll back, but as your cluster size grows, that becomes increasingly expensive.

                              With all that said, I think this is a really cool story of taking a technology, getting it to work (upstreaming bugfixes! be still my heart), improving reliability by relaxing consistency, and simplifying healthchecking. Despite my skepticism of the newfangled hashicorp stuff, service discovery is a well-known danger area, so having zero incidents in a year is pretty dang good. I hope companies continue to put out blog posts like this one–the history and explanations of why decisions were made are great. Stripe does have an advantage since they’re employing b0rk though ;)

                              1. 1

                                Are these guys duplicating their own work? http://www.scala-native.org/

                                Sorry, bad wording: are the Scala guys duplicating the work done by the Java guys?

                                1. 3

                                  I think the difference is that scala-native doesn’t target the JVM, while this AOT compilation allows a mixed-mode style, so that you’re still on the JVM but some libraries are precompiled to native code. Note that there’s even a style of running that still allows JIT-ing of AOT-ed code.

                                  For what it’s worth, the scala-native people aren’t coming up with something that has never been done before. There are existing tools to compile Java to native code, most notably gcj. I think what’s exciting about scala-native is the proposed extensions to Scala that allow you to control your memory in a much finer-grained way, and the improved FFI.

                                  1. 1

                                    Is it the same guys?

                                    I would expect scala-native to support the native ABI (or at least lightweight bindings to it), whereas that doesn’t seem to be a goal for this project. Whether that’s actually an important use case (and/or worth the cost of working without support for anything written in Java) is an open question.

                                    1. 1

                                      More like Oracle trying to badly duplicate what already works in Scala.

                                      Will be Scala.js vs. GWT all over again: Two implementations, one works, one doesn’t.

                                      (I expect that Java-AOT, like GWT, will not even try to have any sensible kind of interop/integration with JS/native, making the results foreign objects in their respective environments. Java-AOT will likely be some shrink-wrapped JVM+app code thing.)

                                    1. 1

                                      This will be really useful when it comes out. From the perspective of someone who helps write (gently) latency-sensitive systems: we expect to have to warm up all JVM-based services, for many reasons. Off the top of my head: hydrating caches, resizing socket buffers, JIT-ing, getting GC heuristics going, connection establishment, and ensuring lazily evaluated code has already run. All of these have been tunable or fixable except for JIT-ing, which absolutely must happen, and which can’t be done any way other than exercising the code paths. This change will allow us to consider a brave new world where we don’t need to figure out how to coordinate warm-up requests for all of our applications. This will be especially useful for applications with a broad workload, where it’s a hassle to figure out how to warm up every workload, and for cases where it’s difficult or impossible to send synthetic traffic that doesn’t mutate a persistent store.

                                      I’m pretty excited for JDK9. It has been a long time coming, but it looks like there will be some really exciting goodies in there.

                                      1. 6

                                        These articles are interesting, but they often gloss over what is the most interesting part to me, the decision to go from something off the shelf to something home-grown. Why doesn’t HAProxy scale horizontally? Why not something like nginx? Are there hardware load balancers that could have helped here? What’s the expected ROI of scaling load balancing horizontally instead of vertically? What are other folks doing in the industry?

                                        1. 2

                                          I’ve noticed an interesting desire amongst engineers to save money and use open source, ignoring the cost of their effort.

                                          A site we run at $work also had performance issues with our HAProxies. We simply replaced them with commercial software LBs and went on to other things. We didn’t blog about it because the project took a month or so and wasn’t very glamorous.

                                          1. 5

                                            I don’t think it’s necessarily a desire to save money, it’s a desire to use software you can understand, modify and enhance as needed. I’m guessing the commercial load balancer you’re using is pretty much a black box - if you have problems you’re going to have to rely on vendor support ([insert horror story here]) instead of being able to fix it yourself. Troubleshooting is a helluva lot easier if you have source code…

                                            Yes, going with a commercial product is better in a lot of cases, but there are always trade-offs.

                                            1. 4

                                              Agreed - there’s always the risk of bugs and black boxes. On that topic, the question is whether the velocity you gain is worth it. After all, many are comfortable running on EC2 with ELBs, despite both of them being very opaque.

                                              Bug-wise, I can only talk about my experience; we’ve had no major implementation bugs and the experience has been very smooth. We have been running these devices for several years.

                                              This of course could change, but as a reference point, I also have a cluster of several hundred Nginx proxies which work very well, but we’ve had some showstopper bugs over the years. At those times, having the ability to dive into the code has not been helpful, due to the complexity of the product and the fact that these bugs happen infrequently enough that we don’t have an nginx internals expert on staff. Sure, we can read/fix the code, but the MTTR is still high.

                                              In GitHub’s case, they now need to keep at least 1 or 2 members of staff full time on this codebase, otherwise their knowledge will begin to degrade. The bus-factor risk is high.

                                              For future features, they can at best have a small team working on this problem without it becoming a distraction for a company their size. I do see they plan to open source this, which may reduce the size of that issue, assuming the project gets traction.

                                              In my case, I pay a vendor for a team of hundreds of specialists working full time on this problem. We have gained many features for “free” over the past years.

                                              In terms of debugging - the inspection capabilities on the software we chose have been unmatched by anything else I’ve used. We can do realtime deep inspection of requests. This isn’t anywhere near the blackboxyness of ELBs which most are comfortable to use.

                                              For control, the cluster has a very complete REST API and to assist teams, somebody wrote a Terraform provider in their 20% time.

                                              We run the devices in a service provider model, meaning we have a centrally managed hardware platform and then we do containerised virtual loadbalancer deploys so that teams who have unique use cases can get their own instances. The devices are integrated into our BGP routing mesh and so traffic is seamlessly directed to where it should be. This is all vendor supported.

                                              In terms of traffic, we do tens of gigs over millions of connections at peak. We use many protocols - not just HTTP - and a good portion of our requests are short-lived.

                                              As you might infer, I’m very happy with my choice :)

                                        1. 2

                                          I haven’t ever really thought of the distinction, but looking over the list of usages, I think that tuples feel the most pythonic in situations where the arity is fixed. Overloading tuple as “read-only list” is of course possible but feels like a waste of tuple as vocabulary.

                                          It’s kinda weird that this post starts off by pointing to the “read-only list” as a bad explanation, but then basically recommends using tuples as read-only lists.

                                          1. 1

                                            I think that the author is pointing out tweets that he disagrees with, not saying that “read-only list” is a bad explanation.

                                            Note:

                                            Now to start off, I want to say that I respect the hell out of David Beazley. The guy literally wrote the book on Python, and he knows way more about Python than I ever will. He’s also one of the most entertaining Python people you can follow on Twitter. But hey, that doesn’t mean I can’t disagree sometimes.

                                          1. 1

                                            Could we make the science tag work the way the pdf tag works, where you also need another tag to submit it? That way we could keep the “computer-y” science and keep filtering / searching for it neatly, but stuff that’s just science can see itself out?

                                            1. 2

                                              Finished The Ethical Slut, which left a mixed taste in my mouth (shush). Perhaps it’s just the times and communities that I’m aware of, but a lot of the book seemed to be defensive and oddly patronizing about what it covered (the practice of polyamory). At the time of its initial publication, it kinda made sense that they’d pick that language and constantly reaffirm their readers, but at this point it just reads as kinda bubbly, overly sentimental, and a bit out of touch with the times.

                                              That said, I’d still recommend it to others for the same reason it was recommended to me: exposure to a very different way of thinking about people and relationships.

                                              Other than that, continuing my grind through the Programming Phoenix book. Sad thing is that anything I learn from it will probably be out-of-date in like six months, but hey, might as well try. :)

                                              Anybody have good recs for military fantasy or science fiction? I need some popcorn reading (and yes I’ve read everything related to the Slammerverse). Maybe I should pick up one of the translations of the Legend of Galactic Heroes novels…?

                                              1. 3

                                                My favorite science fiction book is Diaspora by Greg Egan, and I recommend it whenever I can. It’s hard sci-fi, though.

                                                For popcorn sci-fi, I like everything by Brian Daley. He’s written many Star Wars novels, but he writes other stuff too.

                                                1. 2

                                                  I just started Distress by Greg Egan and have enjoyed it so far, even if, for a non-native English speaker, the first chapter is challenging to keep up with.

                                                  1. 2

                                                    I love Egan’s work. Have you had a chance to read Permutation City? Absolutely fantastic work about cellular automata and the hard problem of consciousness.

                                                    1. 1

                                                      Not yet! So far I’ve only read Diaspora and Teranesia, but I plan to read his other work. Books pile up so quickly…

                                                      1. 2

                                                        If you liked Diaspora, you will also enjoy Permutation City and Schild’s Ladder.

                                                  2. 3

                                                    If you haven’t read the Imperial Radch trilogy by Ann Leckie, you should check it out.

                                                  1. 1

                                                    I’m getting back on the reading train, so I won’t bore folks with everything I’m reading, but greatest hits I’ve finished so far this month have included “A Tale for the Time Being” by Ruth Ozeki, “The Brief Wondrous Life of Oscar Wao” by Junot Diaz, and “Ratfucked” by David Daley. The next few books on my list are “Bowling Alone” by Robert Putnam, “Hopscotch” by Julio Cortazar, and “Altered Carbon” by Richard Morgan. I also recently received the new Jonathan Safran Foer book in the mail, so I might have to rearrange my queue to squeeze it in.

                                                    For everything I read you can follow me on goodreads.

                                                    1. 8

                                                      Just an opinion…I’d prefer not to see much space stuff posted here, because it’s basically kinda pointless.

                                                      If it’s an article about engineering an engine (with equations or schematics), writing avionics software, or the math involved in orbital calculations, I think such stories are a good fit.

                                                      Unfortunately, space articles tend towards popsci pieces that have very little real actionable information and are overall of low quality–“Check out this one weird trick SpaceX uses to build their engines”, “NASA spent 30MUSD on a new rocket, you’ll never guess what happened next”, etc. etc. They cover a topic that by definition matters to a minuscule percentage of anybody reading this site, and tend to just be junk food for the brain.

                                                      If people want to be inspired to leave our pale blue dot, more power to them; that said, they should get inspiration on a forum that caters to that sort of content. Lobsters is primarily a site for the practitioner of technology in software and hardware engineering, with a smattering of related business and art.

                                                      Letting in space content is a step towards diluting the content and discussion.

                                                      1. 5

                                                        I agree strongly with this. I’d rather see us going in the direction of removing the science tag rather than subdividing it into space.

                                                        1. 1

                                                          I often enjoy the content under the science tag, and I assume some others do too. I think it would be more beneficial to keep allowing uninterested users to filter the science tag rather than removing it altogether.

                                                          1. 5

                                                            No, I’d rather have those users go somewhere else for science. Suppose 80% of users were interested in American politics–it still wouldn’t be appropriate for lobsters.

                                                            1. 1

                                                              Science and technology have a lot of overlap. If we’re going to ban science articles, we should probably ban culture, too. It’s better to allow anything within the general umbrella of ‘technology’ and let users filter anything they don’t like.

                                                              1. 3

                                                                My position is more nuanced than what I wrote. Here’s what I’ve written previously:

                                                                I think we have a lot of tags that are clearly relevant to lobsters when they are tech focused (like culture, person, potentially government) but don’t make sense as standalone tags. For example, under “person” we could have an obituary for David Bowie, but it would probably be off-topic for lobsters.

                                                                Maybe a good compromise would be to make these tags like the video / pdf tags, where they don’t stand by themselves, but instead qualify for lobsters because of the other tags? That way, if someone wanted to post something under the “person” tag, they would have to make sure they could add another, more lobsters-y tag, like “unix”.

                                                                I think that we’re basically agreeing, but my concern is about the role of tags as gatekeepers for content. I don’t think some of our too-broad tags (like community, like science) are serving the community well.

                                                                (repost of https://lobste.rs/s/ht7s9h/adding_policy_government_tag/comments/w70xki#c_w70xki)

                                                                1. 1

                                                                  Ah, but the culture tag is mostly used to address workplace practices and mores of developer ecosystems. As such, it is quite on topic for the softer side of the practice of software and hardware engineering.

                                                                  I don’t disagree that science often overlaps with us, but the vast vast majority of science reporting and articles online that get linked around are feelgood garbage.

                                                                  There is a very real Overton Window in action, and the feature creep of science and space goes basically something like:

                                                                  1. “Science is kinda related to technology, so we should allow science submissions.”
                                                                  2. “Space is a type of science, so we should allow pictures of space.”
                                                                  3. “Pretty bird pictures involve biology, so that’s kinda like science and space, so let’s do that.”
                                                                  4. “Well hey I found a cool set of pictures about things birds sit on and we allow bird pictures so yeah.”
                                                                  5. “Here’s 12 things successful techies do to attract birds to their backyard.”
                                                                  6. ah fuck it buzzfeed.

                                                                  This is especially egregious with cogsci, culture, and politics, and signals the mid-stage cancer of an online community.

                                                          2. 3

                                                            Unfortunately, space articles tend towards popsci pieces that have very little real actionable information

                                                            I don’t find articles with equations and schematics particularly actionable either.

                                                            1. 2

                                                              I don’t use Docker or BSD at the moment, but I still can appreciate that that content might be a handy reference later.

                                                              If an article can’t make us better at something later, it probably shouldn’t be here–except for things like art which sort of have their own built-in dispensation.

                                                            2. 3

                                                              I don’t mind posts related to space so long as they are at least indirectly about technology and they fit my heuristic for any topic (be it programming, maths, biology, nanotech, medicine, &c.), which is to say, it meets either one of two criteria: 1) would I likely have not seen this content at all had it not been posted here or 2) do I think the fine folk here might have interesting information about or insights on the content. The latter in particular can be a delightful reason to see pop-ish content, because often a fellow crustacean will follow up with links that go into much more pleasing detail.

                                                              </my2cents>

                                                            1. 7

                                                              A signals intelligence agency, holding on to undisclosed bugs that let it gather signals intelligence? Inconceivable.

                                                              1. 3

                                                                This is nuanced. The NSA intends to gather intelligence, but also to prevent the gathering of intelligence by other intelligence agencies, so there’s a tension between leaving yourself at risk and keeping your enemies at risk. The question becomes whether the NSA does its job better when it gathers almost no intelligence with these kinds of exploits but protects America by disclosing them, or when it gathers lots of intelligence but leaves America at risk by not disclosing them, or through a mix of disclosure and non-disclosure. A mix is probably going to be the right answer, but negotiating where that line in the sand should be is a potentially scary matter to leave in the hands of the NSA. Yet it must necessarily be left there, since disclosing which zero-days it holds would itself hand out a tool, both for folks to protect themselves against and for other actors to use.

                                                                I don’t know if there’s a good solution here. I guess we have to really hope that we’ve set up incentives correctly for the NSA.