1. 8

    Reviews for quality are hard and time-consuming. I personally can’t really review code by looking at the diff; I can only give superficial comments. To understand the code, most of the time I need to fetch it locally and try to implement the change myself in a different way. To make a meaningful suggestion, I need to implement and run it on my machine (and the first two attempts won’t fly). Hence, a proper review for me takes roughly the same time as the implementation itself.

    I think this isn’t true for most PRs I deal with. To make a meaningful suggestion, I often need to think about the code by browsing the project’s source code. On projects that I really know, I may have already thought about how to implement a certain feature, and it’s fun to see what others came up with. I only run the code that I’m reviewing when I suspect there’s something wrong with it but it’s not clear from reading the code. I generally don’t try to implement it myself unless I’ve come up with a different solution and explaining it in words would take more time than writing some code / pseudocode.

    So, instead of scrutinizing away every last imperfection in the diff, my goal is to promote the contributor to autonomous maintainer status. This is mostly just a matter of trust. I don’t read every line of code, as I trust the author of the PR to handle ifs and whiles well enough (this is the major time saver). I trust that people address my comments, and let them merge their own PRs (bors d+). I trust that people can review others’ code, and share commit access (r+) liberally.

    This saves time, but doesn’t improve quality. It’s the objective of code review to find bugs and design issues. IMHO, it’s a waste of time to pretend the code was reviewed. Even the most senior programmer makes mistakes. Scrutinizing code from someone who is senior is as important as scrutinizing code from someone who is not.

    1. 34

      If there are any questions or remarks, I am right here!

      1. 15

        I wish I could upvote this story multiple times. The perfect combination of being approachable while still being packed with (to me) new information. Readable without ever being condescending.

        One thing I learned was that DNA printers are a thing nowadays. I had no idea. Are these likely to be used in any way by amateur hackers, in the sense that home fusion kits are fun and educational, while never being useful as an actual energy source?

        1. 14

          So you can actually paste a bit of DNA into a website and they’ll print it for you. They ship it out by mail in a vial. Where it breaks down is that before you inject anything into a human being… you need to be super duper extra totally careful. And that doesn’t come from the home printer. It needs labs with skilled technicians.

          1. 7

            Could any regular person make themselves completely fluorescent using this method? Asking for a friend.

          2. 5

            You may be interested in this video: https://www.youtube.com/watch?v=2hf9yN-oBV4 Someone modified the DNA of some yeast to produce spider silk. The whole thing is super interesting (if slightly nightmarish at times if you’re not a fan of spiders).

            1. 1

              So that’s going to be the next bioapocalypse then. Autofermentation, but where as well as getting drunk, you also poop spider silk.

          3. 8

            Love the article. Well done.

            1. 5

              Thanks for the awesome article! Are there any specific textbooks or courses you’d recommend to build context on this?

              1. 12

                Not really - I own a small stack of biology books that all cover DNA, but they cover it as part of molecular biology, which is a huge field. At first I was frustrated about this, but DNA is not a standalone thing. You do have to get the biology as well. If you want to get one book, it would have to be the epic Molecular Biology of the Cell. It is pure awesome.

                1. 2

                  You can start with molecular biology and then a quick study of bio-informatics should be enough to get you started.

                  If you need a book, I propose this one; it is very well written IMO and covers all this stuff.

                2. 2

                  Great article! I just have one question. I am curious why this current mRNA vaccine requires two “payloads”? Is this because it’s so new and we haven’t perfected a single shot, or some other reason?

                  1. 2

                    It’s just the way two current mRNA vaccines were formulated, but trials showed that a single shot also works. We now know that two shots are not required.

                    1. 2

                      The creators of the vaccine say it differently here (https://overcast.fm/+m_rp4MLQ0): if I remember correctly, they claim that one shot protects you but doesn’t prevent you from being infectious, while the second makes sure that you don’t infect others.

                    2. 2

                      As I understand it[1] a shot of mRNA is like a blast of UDP messages from the Ethernet port — they’re ephemeral and at-most-once delivery. The messages themselves don’t get replicated, but the learnt immune response does permeate the rest of the body. The second blast of messages (1) ensures that the messages weren’t missed and (2) acts as a “second training seminar”, refreshing the immune system’s memory.

                      [1] I’m just going off @ahu’s other blogs that I’ve read in the last 24 hours and other tidbits I’ve picked up over the last 2 weeks, so this explanation is probably wrong.

                      1. 1

                        Not an expert either, but I think this is linked to the immune system response: like with some other vaccines, the system starts to forget, so you need to remind it what the threat was.

                      2. 1

                        Is there any information on pseudouridine and tests on viruses incorporating it in their RNA?

                        The one reference in your post said that there is no machinery in cells to produce it, but the wiki page on it says that it is used extensively in the cell outside of the nucleus.

                        It seems incredibly foolhardy to send out billions of doses of the vaccine without running extensive tests, since naively any virus that mutated to use it would make any disease we have encountered so far seem benign.

                        1. 1

                          From https://en.wikipedia.org/wiki/Pseudouridine#Pseudouridine_synthase_proteins:

                          Pseudouridines are RNA modifications that are done post-transcription, so after the RNA is formed.

                          That seems to mean (to me, who is not a biologist) that a virus would have to grow the ability to do/induce such a post-processing step. Merely adding Ψ to sequences doesn’t provide a virus with a template to accelerate such a mutation.

                          1. 1

                            And were this merely a nuclear reactor or adding cyanide to drinking water I’d agree. But ‘I’m sure it will be fine bro’ is how we started a few hundred environmental disasters that make Chernobyl look not too bad.

                            ‘We don’t have any evidence because it’s obvious so we didn’t look’ does not fill me with confidence given our track record with biology to date.

                            Something like pumping rats with pseudouridine up to their gills, then infecting them with rat HIV for a few dozen generations and measuring whether any of the virus starts incorporating pseudouridine in its RNA, would be the minimum study I’d start considering as proof that this is not something that can happen in the wild.

                            1. 2

                              As I mentioned, I’m not a biologist. For all I know they did that experiment years ago already. Since multiple laymen on this forum came up with that concern within a few minutes of reading the article, I fully expect biologists to be aware of the issue, too.

                              That said, in a way we have that experiment already going on continuously: quickly evolving viruses (such as influenza) have been messing with the human body for generations. Apparently they encountered pseudouridine regularly (and were probably at times exposed to PUS1-5 and friends that might have accidentally swapped out a U for a Ψ in a virus) but still didn’t incorporate it into their structure, despite the presumed improvement to their fitness (while eventually leading our immune system to incorporate a response to it).

                              Which leads me to the conclusion that

                              1. I’d have to dig much deeper to figure out a comprehensive answer, or
                              2. I’ll assume that there’s something in RNA processing that makes it practically impossible for viruses to adopt that “how to evade the immune system” hack on a large scale.

                              Due to lack of time (and a list of things I want to do that already spans 2 or 3 lifetimes) I’ll stick to 2.

                        2. 1

                          I enjoyed the article; it reminded me of my days at university :-)

                          So here are some quick questions in case you have an answer:

                          • Where does the body store info about which proteins are acceptable vs not?
                          • How many records can we store there?
                          • Are records indexed?
                          • How does every cell in the body get this info?
                          1. 12

                            It is called negative selection. It works like this:

                            1. The body creates lots of white blood cells by random combination. Each cell has random binding sites that bind to specific proteins, and it will attack whatever they bind.
                            2. Newly created white blood cells are set loose in a staging area, which is presumed to be free of threats. All cells that trigger an alarm in the staging area kill themselves.
                            3. White blood cells, negatively selected not to react to self, mature and are released to production (toy sketch below).
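
                            For the programmer types here, a toy sketch of that loop in Python (my own illustration with made-up protein names; the real process is of course vastly more complicated):

                            import random

                            # Made-up protein names; SELF is what the body's own cells present.
                            SELF = {"insulin", "collagen", "keratin"}
                            SHAPES = list(SELF) + ["spike", "flu_ha", "toxin_x"]

                            # 1. Create lots of white blood cells, each with a random binding target.
                            candidates = [random.choice(SHAPES) for _ in range(10_000)]

                            # 2. Staging area: any cell that would bind (attack) self kills itself.
                            matured = [shape for shape in candidates if shape not in SELF]

                            # 3. Survivors are released; whatever they still bind is treated as a threat.
                            print(f"{len(matured)}/{len(candidates)} cells survived negative selection")
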
                            1. 1

                              Interesting, thanks for sharing!

                            2. 5

                              “How does info spread through the body?”

                              I came across this page relatively recently and it really blew my mind.

                              “glucose is cruising around a cell at about 250 miles per hour”

                              The reason that binding sites touch one another so frequently is that everything is moving extremely quickly.

                              Rather than bringing things together by design, the body can rely on high-speed stochastic events to find solutions.

                              This seems related, to me, to sanxiyn’s post pointing out ‘random combination’ - the body:

                              • Produces immune cells which each attack a different, random shape.
                              • Destroys those which attack bodily tissues.
                              • Later, makes copies of any which turn out to attack something that was present.

                              This constant, high-speed process can still take a day or two to come up with a shape that’ll attack whatever cold you’ve caught this week - but once it does, that shape will be copied all over the place.

                              1. 2

                                I did some projects in grad school with simulating the immune system to model disease. Honestly we never got great results because a lot of the key parameters are basically unknown or poorly characterized, so you can get any answer you want by tweaking them. Overall it’s less well understood than genetics, because you can’t study the immune system in a petri dish. It’s completely fascinating stuff though: evolution built a far better antivirus system for organisms than we could ever build for computers.

                            1. 3

                               BOFH 3 used to be funny. Not anymore.

                              1. 1

                                I read the 3rd entry in the linked list - it wasn’t funny then, either.

                              1. 26

                                All highly biased articles that fail to explain their inflammatory titles suck.

                                1. 3

                                   I wouldn’t even call this an article; it’s just a bunch of random quotes and a list of things someone doesn’t like, for some pretty non-obvious reasons we can only guess at (in some of the cases, anyway).

                                  1. 1

                                     Completely agree. I was hoping someone could shine some light on some of the choices. Obviously, this comes from someone involved with Plan 9. But if there were no Plan 9, what would this list look like?

                                    1. 1

                                      It’d likely be something like suckless. Here’s a rant about POSIX locales that might be relevant. These are easy to find if you search. I’d propose you do so.

                                       I think cat-v can be summarized as striving for mathematical simplicity and solving problems in the right place. I find this simplicity invaluable because it reduces noise a lot.

                                  1. 3

                                    That list is incomplete. It only shows committers associated with a GH account.

                                    1. 2

                                       Agreed. Wish it showed non-linked committers too, using just nicknames or email usernames.

                                      1. 6

                                        Incidentally, today I worked a bit on a tool to get author statistics on any git repo. It’s not quite ready, so no source yet, but here are the top authors (everyone with more than 2k commits) and the range during which they were active:

                                        282,220 commits by 453 authors from Jul 1992 to Oct 2020
                                        
                                        26,159  9%      Oct 1994–Oct 2020       christos <christos@NetBSD.org>
                                        12,844  5%      Jun 1995–Oct 2020       thorpej <thorpej@NetBSD.org>
                                        10,036  4%      Jan 2000–Oct 2020       wiz <wiz@NetBSD.org>
                                        8,786   3%      Apr 1993–Mar 2005       mycroft <mycroft@NetBSD.org>
                                        7,896   3%      Jul 1992–Oct 2020       mrg <mrg@NetBSD.org>
                                        7,286   3%      Jul 1997–Nov 2017       matt <matt@NetBSD.org>
                                        6,405   2%      Mar 1993–Mar 2005       cgd <cgd@NetBSD.org>
                                        6,283   2%      Dec 1999–May 2016       pooka <pooka@NetBSD.org>
                                        6,091   2%      Oct 1996–Aug 2020       lukem <lukem@NetBSD.org>
                                        5,324   2%      Dec 1999–Oct 2020       tsutsui <tsutsui@NetBSD.org>
                                        5,220   2%      Dec 2001–Oct 2020       jmcneill <jmcneill@NetBSD.org>
                                        4,662   2%      Jul 2000–Oct 2020       skrll <skrll@NetBSD.org>
                                        4,492   2%      Jun 1999–Mar 2006       itojun <itojun@NetBSD.org>
                                        4,435   2%      Mar 2000–Oct 2020       martin <martin@NetBSD.org>
                                        4,182   1%      Sep 2005–Jun 2020       joerg <joerg@NetBSD.org>
                                        3,928   1%      Mar 1999–Jun 2020       ad <ad@NetBSD.org>
                                        3,785   1%      Jul 1993–Feb 2005       pk <pk@NetBSD.org>
                                        3,706   1%      Jul 1999–Sep 2020       jdolecek <jdolecek@NetBSD.org>
                                        3,442   1%      Jun 2001–Apr 2015       yamt <yamt@NetBSD.org>
                                        3,388   1%      Nov 1997–Oct 2020       msaitoh <msaitoh@NetBSD.org>
                                        3,246   1%      Apr 1997–Mar 2006       augustss <augustss@NetBSD.org>
                                        3,125   1%      May 2011–Oct 2020       riastradh <riastradh@NetBSD.org>
                                        2,955   1%      Nov 1997–Oct 2020       simonb <simonb@NetBSD.org>
                                        2,798   1%      Jun 1997–Jun 2014       drochner <drochner@NetBSD.org>
                                        2,767   1%      Feb 2014–Sep 2020       maxv <maxv@NetBSD.org>
                                        2,638   1%      Oct 2002–Nov 2019       dyoung <dyoung@NetBSD.org>
                                        2,410   1%      Apr 1997–May 2014       kleink <kleink@NetBSD.org>
                                        2,389   1%      Jun 1997–Sep 2020       bouyer <bouyer@NetBSD.org>
                                        2,355   1%      Nov 2007–Oct 2020       dholland <dholland@NetBSD.org>
                                        2,106   1%      Jan 2003–Jun 2014       dsl <dsl@NetBSD.org>
                                        2,094   1%      May 2005–Oct 2020       macallan <macallan@NetBSD.org>
                                        2,091   1%      May 2001–Oct 2020       uwe <uwe@NetBSD.org>
                                        2,063   1%      Jun 1993–Mar 1998       jtc <jtc@NetBSD.org>
                                        2,029   1%      Feb 1995–Aug 2008       fvdl <fvdl@NetBSD.org>
                                        

                                        I don’t think this is quite accurate either, since people submitting patches to the mailing list and such will probably have the author set to whoever applied it. I see some occasional references to “patch from somesuch@example.com” in some commit messages, but it’s not very structured. I don’t think CVS records committer and author separately, so this is probably the best you’ll be able to get without trying to grep the actual authors out of the commit messages with some heuristic.

                                        The activity range is also a bit misleading, as some people went away for 10 years and then came back, and one author has almost 900 commits in a month (the HTML version shows this in a chart, but that’s not ready yet). Besides, “number of commits” is of course only a rough indication of actual useful contributions in the first place, but it’s more or less the best you can get with an automated tool.

                                        The full list is here: https://gist.github.com/arp242/ea24e64943622ea0678de9f77c11f53f

                                        Just the metadata without the actual file contents (git clone --filter=blob:none) is 441M by the way. For comparison, Linux is 1.2G (and that only goes back to 2005, as history before that isn’t in git).
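
                                        If you want a rough approximation of these numbers in the meantime, plain git log gets you most of the way. A quick sketch (not the tool mentioned above, and untested against the full NetBSD conversion):

                                        import subprocess
                                        from collections import defaultdict

                                        # One line per commit: "Author Name <email>|YYYY-MM"
                                        out = subprocess.run(
                                            ["git", "log", "--all", "--format=%an <%ae>|%ad", "--date=format:%Y-%m"],
                                            capture_output=True, text=True, check=True,
                                        ).stdout.splitlines()

                                        stats = defaultdict(lambda: [0, "9999-12", "0000-01"])  # commits, first month, last month
                                        for line in out:
                                            author, month = line.rsplit("|", 1)
                                            s = stats[author]
                                            s[0] += 1
                                            s[1] = min(s[1], month)
                                            s[2] = max(s[2], month)

                                        for author, (count, first, last) in sorted(stats.items(), key=lambda kv: -kv[1][0])[:30]:
                                            print(f"{count:7,}  {first} to {last}  {author}")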

                                        1. 2

                                          Cool :)

                                    1. 2

                                       Nice review. One thing not covered is the whole ‘how to detect QUIC compatibility’ question on the initial connection. For example, there is talk of using DNS for this.

                                      1. 1

                                         Last time I checked, the server had to send an Alt-Svc header via HTTP/2 or HTTP/1. For reference, the HTTP/2 upgrade happens via TLS NPN or ALPN.

                                        1. 1

                                          That’s how it works, but DNS is also an option.
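
                                           For the curious, the two discovery mechanisms look roughly like this (illustrative values): an Alt-Svc response header advertising HTTP/3 on UDP port 443, and the newer DNS HTTPS record carrying the same hint.

                                           Alt-Svc: h3=":443"; ma=86400
                                           example.com.  3600  IN  HTTPS  1 .  alpn="h2,h3"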

                                        1. 7

                                          QUIC being hard to parse by router hardware is a feature, not a bug. IIRC (and I may not) this is why encryption was originally introduced in the protocol. I believe that it wasn’t until TLS 1.3 started maturing that it was integrated into QUIC to also provide strong security guarantees, but to be honest I’m really unsure on this point and I’m too lazy to Google at the moment. Maybe someone else can tell us?

                                           In any case, the reason QUIC being hard to parse by routers is a feature is that it ensures protocol agility. I don’t know the details, but there are things that in theory could be done to improve TCP’s performance, yet in practice cannot be, because routers and other middleboxes parse the TCP headers and then break because they’re not expecting the tweaked protocol. QUIC’s encryption ensures that middleboxes are largely unable to do this, so the protocol can continue evolving into the future.

                                          1. 2

                                             While there are definite benefits to it, like improved security from avoiding all attacks that modify packet metadata, it also means you can’t easily implement “sticky sessions”, for example, i.e. keeping the client connected to the same server for the whole duration of the connection. So yeah, it’s always a convenience/security tradeoff, isn’t it…

                                            1. 2

                                              I am not really a QUIC expert but I don’t really understand the issue here. The Connection ID is in the public header, so what prevents a load balancer from implementing sticky sessions?

                                              1. 2

                                                 Oh, I’m far from an expert too. You’re right: if the router understands QUIC, it will be able to route sticky sessions. If it only understands UDP (as is the case with all currently deployed routers), it won’t be able to, since the source port and even the IP can change within a single session. But that’s a “real-world” limitation, not a limitation of the protocol, actually.
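
                                                 To make that concrete: a QUIC-aware balancer can key its routing off the Destination Connection ID instead of the UDP 4-tuple. Here is a toy sketch (illustrative only; it assumes short-header packets and a fixed 8-byte DCID, which real deployments have to arrange for themselves):

                                                 import hashlib

                                                 BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

                                                 def pick_backend(udp_payload: bytes, dcid_len: int = 8) -> str:
                                                     # Short-header QUIC packets start with one flags byte, followed by the
                                                     # Destination Connection ID (its length is not encoded in the packet).
                                                     dcid = udp_payload[1:1 + dcid_len]
                                                     index = int.from_bytes(hashlib.sha256(dcid).digest()[:4], "big") % len(BACKENDS)
                                                     return BACKENDS[index]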

                                                1. 5

                                                  What kind of router are you thinking of?

                                                   A home router that can’t route UDP back to the Google Chrome application is just going to force Google to downgrade to TCP.

                                                  A BGP-peered router has no need to deal with sticky sessions: They don’t even understand UDP.

                                                  A load balancer for the QUIC server is going to understand QUIC.

                                                  A corporate router that wants to filter traffic is just going to block QUIC, mandate trust of a corporate CA and force downgrade to HTTPS. They don’t really give two shits about making things better for Google, and users don’t care about QUIC (i.e. it offers no benefits to users) so they’re not going to complain about not having it.

                                                  1. 2

                                                     You should take a look at QUIC’s preferred address transport parameter (which is similar to MPTCP’s option). This allows the client to stick with a particular server without the load balancer being QUIC-aware.

                                              2. 2

                                                Google QUIC used a custom crypto and security protocol. IETF QUIC always used TLS 1.3.

                                              1. 3

                                                Given that he’s written a book on the subject, I think he would be able to get a $400k+ salary in SV. I think the comparison of royalties and SV salaries doesn’t hold.

                                                1. 3

                                                  Converting all my Bitbucket Mercurial repositories to GitHub. Bye bye Atlassian.

                                                  1. 3

                                                     I humbly recommend hg.sr.ht; it’s a bit minimalistic, but I like that about it.

                                                     Whatever you choose, I sympathize with ditching Atlassian. We use it at work, and occasionally I really wish we had something, anything, else.

                                                    1. 3

                                                      I’ve considered sourcehut but at this time I just want a single place to put all my repositories. I’ve been using GH for many years and it’s not perfect but at least I don’t think they will delete my repositories any time soon.

                                                  1. 3

                                                    What about performing sentiment analysis on all those commit logs?

                                                    1. 2

                                                      I’m reminded of once reading about a team that set up their computers to take a webcam snapshot of the developer’s expression (face) when a git conflict occurred.

                                                      1. 2

                                                        Haha I’d think most commit messages would be a little too dry/robotic to get a good sentiment reading on? What do you think?

                                                        1. 1

                                                           Well… I was bored the other day, which led me to just post this… https://lobste.rs/s/0zxoap/suggested_improvements_for_tool

                                                        1. 11

                                                          I’d say low compilation/synthesis time. Waiting minutes at a time for each run will ruin you.

                                                          1. 1

                                                            True since the beginning of software development!

                                                          1. 6

                                                            Saturday I’m heading up to New Hampshire to learn about motorcycle suspension.

                                                            1. 2

                                                              Did you ever read Trevitt’s book?

                                                            1. 18

                                                              A lot of comments here are about individual actions, which are all great—but part of the point of my article is that joint political action is that much more powerful.

                                                              1. 2

                                                                Sure it is, but like you said:

                                                                 “Thing is, policy-makers aren’t doing very much.”

                                                                 What makes you think they will change their minds? We are not even voting them out of office. Quite the opposite.

                                                                1. 1

                                                                  Lasting change won’t happen until we are better represented.

                                                                  When is the first Lobste.rs user going to run for federal office?

                                                              1. 3

                                                                In places where biking isn’t possible (too long, not safe, etc.), I think WFH is probably the best way to reduce CO2/NOx/etc. My list would be:

                                                                1. Work from home
                                                                2. Eat local food
                                                                3. Ecological home improvements (solar panels, hybrid water heaters, etc.)

                                                                The last is important if you’re doing the first. ;-)

                                                                1. 1

                                                                  If someone can run arbitrary programs inside your container they already have plenty of access, so adding Manhole doesn’t seem like much of a security risk.

                                                                   Actually, this makes it very easy to change a running program and inject a bunch of code without any traceability. So, in terms of doing a post-break-in analysis, it might be harder if the intruder used Manhole.

                                                                  1. 4

                                                                     One reason variables have been abbreviated is that we quickly get bored of typing long names. These days I see more people using longer names, with the advent of autocompletion. One thing that is not mentioned is how much of a problem this really is when a developer is familiar with the abbreviations used.

                                                                    1. 2

                                                                      I have been using autocompletion in text editors and IDEs for 15+ years. You make it sound as if this is a new development.

                                                                      1. 1

                                                                        Of course it’s not a new development, but it has vastly improved in the last 5 years and I believe people have been using it more across many languages.

                                                                      1. 5

                                                                        Seems like an unreasonable request to me.

                                                                        1. 8

                                                                          I’ve been running Homebrew using a custom $HOME-based prefix for years and it works most of the time.

                                                                          1. 2

                                                                            Not to be confused with checking for pointers being non-null in goto-based error handling code.