1. 13

    Leaving a job because the tech is old seems very odd to me.

    However, 5 years is an incredibly long time to spend doing any one thing. I certainly wouldn’t stay in one job that long unless it were maybe one that let me change what I was doing every few years without leaving?

    1. 18

      Just to add to your first paragraph: it did seem that the author tried to make suggestions about which tech would be better, but didn’t get any support. It can be quite frustrating to work with legacy code and not be listened to because your ideas don’t have an immediate return on investment (“Yeah, good idea, but we don’t have the time/resources/money to set your plan into action this time either”). Programming is creative, and having ideas repeatedly struck down kills the creativity and joy.

      1. 7

        That’s a very sensible thing to do IMO. If your skills are stagnating and your day-to-day work gives you no opportunity to advance them, then at some point you’ll want to update your skills. Either with something new, or something old that you’re unfamiliar with. Don’t mistake this advice for “chasing the shiny new thing”; rather an acknowledgement that new skills are important to your professional development/expanding your mind.

        1. 2

          I literally cannot stop acquiring new tech skills (or other skills, even!). My job is almost never involved in this.

          I understand that I’m not everyone. But that’s why I said it seems odd to me and not that it’s a bad idea. Maybe it’s a good idea for you. I don’t go to work for the tech (or I would have a very different job!)

        2. 2

          It may seem odd, but I was in the very same position a couple of months back. I was a contractor for a company that deals with hedge funds and similar financial stuff; my excitement over that industry was never really there, though the remuneration was damn great.

          However, we had to manage a very old codebase this company had built in-house using Zend Framework 1! At the time, I took the job with the idea that we would obviously be using recent tech; they had said it was PHP plus recent JS flavours (Vue.js, React). I tried to trudge on for a while but just could not get productive. I ended up resigning because I felt so bad that I couldn’t deliver the value they sought, nor did I want to learn outdated tech and end up with an obsolete skillset.

        1. 5

          I feel that a 5-year (or similar) lag is actually a good thing. Why rush to try new tech when the current (old?) one is working?

          1. 10

            That would depend on who you ask. It might be a good choice for the business/employer. If, however, the employee wants to develop his/her own skills in newer areas and make sure not to end up only knowing old tech that isn’t attractive to future employers, then it would be bad for the employee’s future career.

            The distinction might also depend on which tech it is and where it sits in its lifecycle on the professional job market. Some really old tech is still very much relevant today, but that’s not true in every case. So if your current job is about something that is, let’s say, “deprecated” and won’t lead your career anywhere, then it might be smart to quit while it hasn’t impeded your future job prospects too much.

          1. 4

            Having servers listening for SSH on publicly accessible networks feels wrong, having worked in places where bastions/jump boxes were the norm. A bastion is useful if, say, vulnerabilities are found in the OpenSSH server code. It also simplifies offboarding.

            1. 8

              I’ve heard this argument before but it’s never really made sense to me for three reasons:

              • There have been vastly more remote code execution vulnerabilities in the Linux network stack than in the OpenSSH server.
              • OpenSSH runs privilege separated. Compromising the process that runs on port 22 doesn’t give you anything except a stepping stone for talking via a tightly controlled interface to the parent.
              • Perimeter security is not really security. If I can compromise your jump box, I can easily then ssh to every other machine on the network, using automated tooling.

              If OpenSSH makes a noticeable difference to your attack surface, I am very impressed by your baseline security. If you’re running a general-purpose network stack, from any OS, this is almost certainly not the case, irrespective of the other services you’re running.

              Offboarding should be done by disabling the accounts (globally). If you’re relying on disabling access to a jump box to disable remote access, you’d better be absolutely sure that your perimeter security is 100% secure. Spoiler: it isn’t. Worse, the kind of malicious employee that you actually want to lock out will have trivially punched a hole in it before they left (for example, by setting up a task on another box that, on boot, sshs to a machine outside your firewall and does remote port forwarding. This can be done as any unprivileged user on any system in your network and then allows them to initiate arbitrary inbound connections to your network).

              1. 2

                Having some sort of egress control from privileged environments is so important and yet so often ignored or actively resisted. (Folks in my experience really want access to NPM and GitHub from everywhere, even if they know intellectually it’s probably a bad idea.) Even simple audit logging of processes opening outbound connections to “new” hosts could save you from a lot of bad outcomes (e.g., the backdoor you mention just sitting around idle for months until someone decides to use it).

                The main argument in favor of bastion/jumphost setups IMHO is simplifying network routing. You can keep private IPs for your internal network and use the bastion as a bridge, vs. having to put boxes that may have no other reason to receive direct IP traffic from the Internet at large into a publicly-routable network.

                Regardless, I agree that the odds are that your biggest hole will be elsewhere (secrets management, unsecured database and internal API endpoints, automatic installation of untrusted libraries and/or containers, etc., etc.)

                1. 2

                  Exactly. I’ve never understood why people think their product’s http server is somehow innately more secure than ssh, a battle-tested tool used for mutually authenticated remote access. I think people just get spooked when they see all the failed login attempts from sniffers and whatnot like it’s somehow different than a crawler poking around port 80.

                  1. 1

                    The places I’ve worked, we had one key for the jump box and another for the hosts behind it. The automated tooling only worked if you had both. We also had different keys for dev, staging, and prod. Our internal systems could not reach the outside world, they could only reach the gateway systems. It was more work, but I think it would beat the problem you describe above.

                  2. 1

                    I agree. Note that if an organization uses certs for normal day-to-day operations, offboarding would simply be to not issue the daily cert for the given user. Older certs would have expired.

                  1. 4

                    All of this is effectively what AppEngine provided out of the box 12 years ago.

                    1. 3

                      We’re also going to continue to bump up against limitations on the data layer. This is when we are going to want to start looking into partitioning and sharding the database.

                      Weird that sharding is only mentioned in the final section.

                      These both require more overhead, but effectively allow the data layer to scale infinitely.

                      I disagree. Even sharding runs into problems of hot partitions/keys and data locality.

                      I’d say, in fact, that all of these solutions are mechanisms to solve data locality. CPUs are infinity times faster than memory / network / storage these days. Therefore scaling is a “simple” matter of putting compute beside all the relevant data.

                      Simple. 🙄🤬😭

                      1. 3

                        Weird that sharding is only mentioned in the final section.

                        Sharding solves problems but makes life harder for ops people and, sometimes, provides no benefit. Say you guess wrong and only users starting with M get popular; your sharding system is basically wasted while everything still falls over.

                        1. 1

                          There’s a new generation of databases that has reduced the toil for our ops people, anyway. I’m thinking of things like Spanner, DynamoDB, and TiDB. All with different models, but all promising horizontal scalability in their own way.

                      2. 3

                        Even though this has been possible on PaaS solutions for a while, I’d say the article is still valuable.

                        I find it a good rule of thumb to understand why the stack beneath is built the way it is. Maybe a limitation is an enabler for future growth, or it might just be a limitation of the PaaS platform, as the platforms often strive to be generic and usable for most workloads. Without knowing the reasoning behind it, you couldn’t know if your use case could benefit from something less generic.

                        Another good piece of advice is to scale as you get users. Maybe the service isn’t gaining any traction because your app was a month or two too late to market due to scaling ahead of time. Or excessive scaling could kill the startup through high costs.

                      1. 2

                        I can see this leading to digital tariffs, like its physical counterpart for goods crossing borders. The EU could impose a tariff/surcharge for traffic going to or from the US or China and thus favour and/or protect EU-based technology companies.

                        Like in the physical world, one must have an enforcement mechanism if some traffic isn’t lawful: in this case, a firewall.

                        The reason I speculate about this scenario is that an outright block might seem too authoritarian, but a tariff would maybe be seen as protecting local workers, which might make for a more favourable public image.

                        1. 1

                          woah crazy. I know nothing about selfhosting like that. What’s that it talks about with a reverse proxy? How’s that go from a public IP on vultr to a closet in your home?

                          1. 4

                            I think his main motivation was to not expose his residential IP address. If he’s on a dynamic IP address, a cron job could access a secret URL on his VPS over at Vultr. The receiving end over at the VPS could then look at the client’s IP address (i.e. his current residential IP address) and reconfigure nginx (or HAProxy, etc.) to route traffic to that specific IP address instead of the old one.
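
                            Roughly, the receiving end could be a tiny handler sitting behind that secret URL; here is a sketch in Python (the path, config file, and port are made up):

                            #!/usr/bin/env python3
                            # Sketch of the VPS side: whoever fetches the secret path becomes
                            # the new nginx upstream. The handler listens directly, so the
                            # client address is the home connection's current source IP.
                            import subprocess
                            from http.server import BaseHTTPRequestHandler, HTTPServer

                            SECRET_PATH = "/update-home-ip-3f9c1a"
                            UPSTREAM_CONF = "/etc/nginx/conf.d/home_upstream.conf"

                            class UpdateHandler(BaseHTTPRequestHandler):
                                def do_GET(self):
                                    if self.path != SECRET_PATH:
                                        self.send_error(404)
                                        return
                                    home_ip = self.client_address[0]  # current residential IP
                                    with open(UPSTREAM_CONF, "w") as conf:
                                        conf.write("upstream home { server %s:443; }\n" % home_ip)
                                    subprocess.run(["nginx", "-s", "reload"], check=True)
                                    self.send_response(200)
                                    self.end_headers()

                            HTTPServer(("", 8081), UpdateHandler).serve_forever()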

                            1. 1

                              If you wanna be fancy you can have a gpg key pair and send an encrypted payload to a doesn’t-need-to-be-secret url. Swap out for other asymmetric encryption schemes to your liking. I just picked what I know.

                              1. 1

                                You can also simply run a client for any dynamic DNS service at home; then you have an A record to look up. It’s probably more reliable to not depend on DNS propagation times and just push your IP periodically, like @enpo wrote.

                                I use duckdns.org, though I don’t host anything public at home; I do access my MP3s with Subsonic, so I don’t need a static IP.

                              2. 2

                                In addition to @enpo’s suggestion, you could easily do this with a VPN or SSH.

                                1. 1

                                  Ah! Why didn’t I think of that :)

                                  The clever thing about a VPN is that you can leave the residential firewall as-is with no open ports to the outside world.

                                  You’ll also benefit from basically no delay if your ISP changes your IP address as the VPN would just reconnect.

                                  On top of this, your layer 7 reverse proxy (nginx, HAProxy, etc.) wouldn’t need to be reconfigured in the event of an IP address change, as the IP address over the VPN would be private (RFC 1918), and most importantly: static.

                                  Just benefits and no downsides :)

                              1. 1

                                If I understood the manual page correctly, this generates a config file for OpenBGPd, BIRD and others, but what I didn’t find any documentation for is how often to run the binary. I guess the program should run on an interval in order to get fresh certificates in the config file.

                                1. 4

                                  I recommend running rpki-client from cron(8) at least once an hour. See this example crontab entry: https://github.com/openbsd/src/blob/master/etc/crontab#L22

                                  I’ll work to update the man page to hint at once an hour. Thanks

                                  1. 1

                                    Thank you! :)

                                1. 3

                                  Perl5:

                                  #! /usr/bin/env perl
                                  use Modern::Perl '2015';
                                  ###
                                  
                                  my %hist;
                                  my $id = 1;
                                  while (<DATA>) {
                                      chomp;
                                      push @{$hist{$_}}, $id;
                                      $id++;
                                  }
                                  my $rank =1;
                                  foreach my $score (sort {$b<=>$a} keys %hist) {
                                      foreach my $id (@{$hist{$score}}) {
                                          printf("id: %2d, points: %2d, ranking: %d\n",
                                                 $id, $score,$rank
                                                );
                                      }
                                      $rank += scalar @{$hist{$score}};
                                  }
                                  __DATA__
                                  39
                                  26
                                  50
                                  24
                                  39
                                  67
                                  39
                                  50
                                  48
                                  24
                                  
                                  1. 2

                                    That encouraged me to do it in (mostly) AWK:

                                    $ cat ranking.txt
                                    1,39
                                    2,26
                                    3,50
                                    4,24
                                    5,39
                                    6,67
                                    7,39
                                    8,50
                                    9,48
                                    10,24
                                    
                                    $ cat ranking.awk
                                    BEGIN {
                                        FS = ",";
                                        OFS = ",";
                                    
                                        ranking = 0;
                                        count = 0;
                                        last_points = 0;
                                    
                                        print "ranking", "points", "id";
                                    }
                                    
                                    {
                                        if (last_points == $2) {
                                            count++;
                                        } else {
                                            ranking += count + 1;
                                            count = 0;
                                            last_points = $2;
                                        };
                                    
                                        if (ranking == 0) {
                                            ranking = 1;
                                        };
                                    
                                        print ranking, $2, $1;
                                    }
                                    
                                    $ sort -t, -k2nr ranking.txt | awk -f ranking.awk
                                    ranking,points,id
                                    1,67,6
                                    2,50,3
                                    2,50,8
                                    4,48,9
                                    5,39,1
                                    5,39,5
                                    5,39,7
                                    8,26,2
                                    9,24,10
                                    9,24,4
                                    
                                  1. 3

                                    I don’t think I would enjoy using an offline laptop, but I might enjoy having some sort of limiting with regards to speed (dial-up?) and time spent with the network link open. That would encourage me to use the network more efficiently and I think that would be a fun challenge.

                                    1. 1

                                      I’m really curious. I want to say that this looks really interesting to me, I’d like to try it (likely not possible at my corporate day job), and I really think I would enjoy it.

                                      What parts of the experience do you think you would not enjoy?

                                      1. 2

                                        I would probably miss being able to catch up on RSS, email and IRC from time to time.

                                        I often read documentation and think about cool projects I could do with it, but I seldom bother to actually go through with them. Being offline would probably trigger me to actually do some projects. Maybe I would enjoy it after all?

                                    1. 5

                                      Here you have a much smaller, and IMHO much more readable, example of the same:

                                      defmodule Ranking do
                                        def ranking(participants) do
                                          participants
                                          |> Enum.sort(&(&1.points >= &2.points))
                                          |> Enum.map_reduce({nil, 0, 0}, &do_ranking/2)
                                          |> elem(0)
                                        end
                                      
                                        defp do_ranking(%{points: points} = curr, {%{points: points}, rank, count}) do
                                          count = count + 1
                                          {Map.put(curr, :ranking, rank), {curr, rank, count}}
                                        end
                                      
                                        defp do_ranking(curr, {_, rank, count}) do
                                          next_rank = rank + count + 1
                                          {Map.put(curr, :ranking, next_rank), {curr, next_rank, 0}}
                                        end
                                      
                                        def rank_mock_data() do
                                          [
                                            %{id: 1, points: 39},
                                            %{id: 2, points: 26},
                                            %{id: 3, points: 50},
                                            %{id: 4, points: 24},
                                            %{id: 5, points: 39},
                                            %{id: 6, points: 67},
                                            %{id: 7, points: 39},
                                            %{id: 8, points: 50},
                                            %{id: 9, points: 48},
                                            %{id: 10, points: 24}
                                          ]
                                          |> ranking()
                                        end
                                      end
                                      
                                      Ranking.rank_mock_data()
                                      |> inspect(pretty: true)
                                      |> IO.puts()
                                      

                                      EDIT: The earlier version was slightly buggy, as it didn’t always increase the ranks correctly; this fixes it.

                                      1. 1

                                        Cool! Thanks for sharing :)

                                      1. 6

                                        A few more things on the subject:

                                        When secrets are passed as arguments they are discoverable by any process on the machine. Instead, it is good form to only accept paths to secrets as arguments and rely on the POSIX filesystem to enforce ACLs.
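
                                        For example, on Linux anything passed in argv is readable by any local process through /proc; a tiny sketch:

                                        import sys

                                        pid = int(sys.argv[1])  # PID of the process you are curious about
                                        with open("/proc/%d/cmdline" % pid, "rb") as f:
                                            argv = f.read().split(b"\0")
                                        print(argv)  # secrets passed as flags show up right here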

                                        The flag parser should convert empty values to defaults if there is one. For example, if the default user is root and $USER is empty, -user "$USER" would default to root. This is necessary to avoid having all the users of the CLI find out about the default and re-expand it in their scripts with "${USER:-root}".
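
                                        In other words, something along these lines (a sketch using Python’s argparse; the flag name and default are only for illustration):

                                        import argparse

                                        def default_if_empty(default):
                                            # an empty value (e.g. -user "$USER" with $USER unset) falls back to the default
                                            return lambda value: value if value != "" else default

                                        parser = argparse.ArgumentParser()
                                        parser.add_argument("-user", type=default_if_empty("root"), default="root")

                                        print(parser.parse_args(["-user", ""]).user)       # -> root
                                        print(parser.parse_args([]).user)                  # -> root
                                        print(parser.parse_args(["-user", "alice"]).user)  # -> alice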

                                        For more complex cases where a configuration file starts being needed, I have seen a few projects use the command-line args as the configuration format. This has the advantage of enforcing that all configuration options are also overridable from the command line.

                                        1. 1

                                          When secrets are passed as arguments they are discoverable by any process on the machine.

                                          I think for the most part this is a theoretical problem. It may have been true when shared machines were more common (especially badly configured ones which allow ps to show processes from other users), but even with VPSs you typically have one user running one program. With containers, even more so. If someone has sufficient access to read this kind of information then chances are they can read things like a path with secrets or /proc/self/environ, too.

                                          At any rate, last time I had this conversation I wasn’t able to find any examples of actual breaches happening because of secrets being read from the process information, and neither was my co-worker, so I don’t think this is something to overly worry about.

                                          The flag parser should convert empty values to defaults if there is one. For example if the default user is root and $USER is empty, -user “$USER” would default as root. This is necessary to avoid having all the users of the CLI find out about the default and re-expand it in their script with “${USER:-root}”.

                                          I guess that depends; sometimes you want to pass an empty string. In addition, if you don’t want to pass a value then, well, don’t pass a value. Preëmptively adding -option "$OPTION" to every flag kind of defeats the point of explicit overrides from the environment. While there are probably some use cases for -flag "${FLAG:-def}", I think that in general it should probably be used sparingly.

                                          1. 4

                                            I agree with you that the first one is partially a /proc problem; PID hiding is standard on any hardened configuration and always a pain point on lots of Linux hosts. If you look at some of Grsecurity’s configuration options this can be accounted for.

                                            That being said, I totally disagree with your evaluation of breaches and the use of /proc. I have actively used /proc abuses to compromise AWS keys about 4 times in the last year, and actively use race conditions abusing /proc to find calls from sudo/su and hijack user-controlled files for privilege escalation. Here is an example of one way to do that; it uses ps internally to read /proc but achieves the same thing.

                                            1. 1

                                              But you already had access to the system, no? That Python script certainly requires access to run that script. If you have that kind of access then there are many things you can do.

                                              That script of yours seems to replace the content of myscript when someone runs sudo ./myscript? If you have that kind of access then you’re pwned anyway. Your script seems like a fancy wrapper around find . -type f -a -executable and/or grep sudo .history and then injecting that exploit in the scripts you find. Hell, you could make a sudo alias in their shell config? Either I’m not fully understanding how that script works, or it’s a bit less dramatic than it may seem at first glance.

                                              If you look at some of Grsecurity’s configuration options this can be accounted for.

                                              You don’t need Grsecurity: there’s the hidepid mount option. And on BSD systems there’s a sysctl to do the same.

                                              1. 2

                                                The Python script was just an example of how you could use it; there are literally infinite ways to abuse that. But when talking about the “risk” being that /proc will store secrets, it’s generally assumed that your threat model is a local user with file read access, no?

                                                Just to clarify that: you can have some sort of read primitive for reading from /proc without actually having a shell. In the cases of AWS key hijacking I actually just needed an SSRF with support for file://, which allowed me to read the contents of /proc/$PID/cmdline (in this case I had to brute-force the PID).

                                                You also have to remember that oftentimes payloads may not be running in the context of a user with login and full shell access, e.g. the www user of a running web service.

                                                1. 1

                                                  I actually just needed a SSRF with support for file:// which would allow me to read the contents of /proc/$PID/cmdline

                                                  Right; that is probably a better example of your point than that Python script.

                                                  How would you protect against this, beyond hidepid=1? The most common method is to use environment variables, but you can read /proc/self/environ as well, so this doesn’t seem to offer a lot of additional protection against some sort of exploit that allows filesystem access from the app.

                                                    The suggestion from the original commenter who started the thread (“good form to only accept paths to secrets as arguments and rely on the POSIX filesystem to enforce ACLs”) doesn’t strike me as helping here, either.

                                                  1. 1

                                                    Yeah I wasn’t super clear about that so sorry for the confusion, I really should draft up some quick fire examples for these.

                                                    The difficulty of protection on mainline Linux is part of the reason I think a lot of people advocate for not doing flag-based secrets, but like you point out there are also environment variables! As far as I know there is no “baked-in” way of properly protecting /proc from the user’s own processes. The last time we discussed some workarounds for this we actually set hidepid=2, which is more restrictive, and then launched the process in a privilege separation model (i.e. like OpenSSH), so that the config was applied at the supervisor or through a broker.

                                                    Frankly, I think that’s crap. I think the better way to deal with this is RBAC through SELinux, Grsecurity, and friends. But, that can be a bit too much of an ask for most people as the actual support for SELinux is only in a few distros and Grsecurity is no longer easy to obtain.

                                                    1. 1

                                                      A simple chroot() should be enough; you don’t need OpenSSH-style privsep, as for most applications there’s no part that needs privileged access after starting. That may not be easy for all applications though, especially things like Ruby on Rails and such. It’s probably easier to add for something like a statically compiled Go binary. You can perhaps also unmount procfs in a container; I never tried.
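
                                                      Roughly what I mean, assuming the process starts as root (the path and IDs are placeholders):

                                                      import os

                                                      # read config/secrets first, while the real fs is visible
                                                      os.chroot("/var/empty")  # needs root
                                                      os.chdir("/")
                                                      os.setgroups([])
                                                      os.setgid(65534)  # e.g. nogroup
                                                      os.setuid(65534)  # e.g. nobody
                                                      # from here on, /proc simply doesn't exist in our view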

                                                      I think stuff like SELinux is very powerful, but also very complex and hard to get right. In practice, most people don’t seem to bother, and even if you do it’s easy to make a mistake. Every time I’ve used it I found it very opaque and was never quite sure if I had covered everything. I think something like unveil() from OpenBSD is a much better approach (…but it doesn’t exist in Linux, yet anyway).

                                                      1. 1

                                                        Generally I don’t like to consider chroot(8) and namespaces(7) security protections, but in this specific case I think they would work pretty well to prevent access, and that really is what I should have been thinking of.

                                                        The reason I pointed out RBAC systems was because I have managed both a pretty large scale SELinux and Grsecurity based RBAC deployment, and you are 100% right about SELinux. It is the biggest pain. Grsecurity RBAC is actually one that I hope more people go back and play with for inspiration, it is trivial to set up and use and even has a learning system that can watch what a system is doing. I used to build Grsecurity profiles as part of testing by running all tests in monitor mode and automatically applying the profiles at deploy time and if possible applying gran to use model checking, and very very rarely ran into issues. But, yes they are not “simple” off the bat.

                                                        I was sort of staying in Linux land, but I think a better way to handle things is unveil() in most cases; I just don’t know of a way to replicate that in Linux without some sort of RBAC.

                                                        1. 1

                                                          So the more I think about it, the more it seems that this is a much harder problem than I thought.

                                                          For now I added a note about hidepid=1 and that securing secrets is complex. I can’t really find a good/comprehensive article on the topic.

                                                          One method that seems reasonably secure without too much complexity is to read the values you want and then modify the cmdline and/or environment to remove sensitive information. That’s not 100% secure, but at least it prevents leaking secrets with stuff like pwned.php?file=../../../../../proc/self/environ and is probably “secure enough” to protect against most webapp vulnerability escalations.

                                                          There are also some dedicated apps/solutions for this (like HashiCorp’s Vault). Not sure what to think of that.

                                                          1. 1

                                                            Imagine you launch a container, as many web apps do these days; you’d read a secret file into the application and then promptly delete the file. If anyone found an exploit in the application with regard to the file system, the secret would already be gone.
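
                                                            Something like this at startup, before the listener is opened (the path is made up):

                                                            import os

                                                            SECRET_PATH = "/run/secrets/api_key"

                                                            with open(SECRET_PATH) as f:
                                                                api_key = f.read().strip()
                                                            os.remove(SECRET_PATH)  # gone before we accept traffic

                                                            # ... only now start listening for connections ...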

                                                            If the container is restarted, the file would be accessible again as it is recreated from the container image. You’d probably want to not listen for any new connections before the secret initialization has completed.

                                                            Would this be a good solution or would it introduce other problems?

                                          2. 1

                                          Secrets are the one thing I prefer to pass via environment variables or config files. Everything else I prefer to pass as flags, for pretty much the same reason I don’t want to use environment variables for them.

                                          1. 5

                                            ’In the general case, applications written in PHP are I/O bound.. ’ .. really? my Wordpress is incredibly slow because of waiting for IO?

                                            ‘where the JIT really shines is in the area of mathematics’.. because that’s where most people needed optimisation in PHP?

                                            1. 3

                                              I haven’t benchmarked Wordpress, so my answer would only be anecdotal evidence: most of the stuff I write in PHP (typical web applications) does indeed spend a good amount of time waiting for the database. If I spot a bottleneck, it’s often caused by a bad query to the database.

                                              I agree that today’s most typical use cases for PHP will not benefit much from JIT. The author does however mention that this will allow for more interesting use cases in the future.

                                            1. 7

                                              This uses two web services (webmentions.io, Bridgy) and two JavaScript libraries (Eleventy, Preact) with who knows how many dependencies. Not exactly a simple setup.

                                              If I ever get tempted to add comments to my (static site) blog, then I would rather add some simple PHP and keep it free of Javascript. I’m not tempted though because these days discussions seem to happen on the aggregators (like lobste.rs) anyways and I’m fine with that.

                                              1. 1

                                                I’d probably also use a quick self-written script to process webmentions. If the project didn’t have any servers at all, I’d probably host it on AWS Lambda or similar.

                                                The current status quo is, as you’ve pointed out, based around commenting happening on aggregators. An aggregator isn’t much different from, say, Facebook or other social media sites. If they decide to change policies or go bankrupt, your content is often gone.

                                                By utilizing your own domain to write comments, publish your own posts, or even give a like/boost, you’re set for the future. You are in control of the content. The blog you’re interacting with can publish a copy inline with a link back to you, via a webmention. That way, if someone who has commented via their own site goes offline, you’d still have a meaningful comment section on your own site.

                                                The only issue is how to find great content. This can be mitigated by following other sites. If a site you follow posts a comment or a like about a third site, you could check it out and maybe follow it as well.

                                                Besides, you could feed comments from aggregators into your site, as the article describes when talking about Facebook. An issue I’ve been wondering about, without getting any clear answer to, is copyright. When a blog sends you a webmention, it’s some sort of message saying inlining the content is okay. Not sure if the same story goes for Facebook or aggregators. Would be great if someone has some insight they’d like to share.

                                                1. 1

                                                  It seems to me that Webmentions has not considered the legal aspects completely.

                                                  This copies comments someone published elsewhere and republishes them on my site. Am I allowed to do that? The example in the article does not contain licence info (e.g. Creative Commons) which would explicitly allow republication.

                                                  I’m under German law, so I have to moderate what is commented on my website. I’m on the hook if someone publishes Nazi propaganda on my website. It seems that filtering (e.g. for spam) is not really thought through yet.

                                                  All in all, I have no desire to inline any one-line comments into my blog. What would be nice is (automatic) backlinks to other blog posts linking to me and to aggregators (e.g. an HN discussion page). I once tried to track referrers for that, but (at least for my audience) referrers are useless today.

                                                  1. 1

                                                    Completely agree on the copyright issue. That one has bothered me for a while. I haven’t implemented webmentions on my site because of that. I could refrain from inlining the content and rather just provide a link, but I’m not sure if that is too poor of a user experience.

                                                    You don’t have to publish webmentions automatically. You could review them and only publish if you deem them sane and spam-free – just as you would with regular comments.

                                                  2. 1

                                                    The current status quo is, as you’ve pointed out, based around commenting happening on aggregators. An aggregator isn’t much different from, say, Facebook or other social media sites. If they decide to change policies or go bankrupt, your content is often gone.

                                                    The problem with this line of reasoning is that a lot of content aggregators, particularly the big names like Reddit and Stack Overflow, outlive the blogs they link to and crib from. Not the other way around.

                                                1. 2

                                                      I use email as social media, mainly because my dad doesn’t see the point of anything else. His email server has an alias that delivers to him, my mom, my brother, and me. An email with a subject and a single image attachment isn’t substantially different from an Instagram post.

                                                  Ridiculous though it may be, it works pretty well. And since my brother lives in Beijing (16 hours ahead), email is actually better than something more synchronous like a group chat.

                                                  1. 2

                                                        Cool. Are there any security measures in place to make sure that some random sender can’t spam you all?

                                                    1. 1

                                                      Nope! Just complete and utter obscurity. My dad’s email server isn’t exactly highly trafficked.

                                                  1. 1

                                                    How is code review done if topic branches only live on your local computer and merging also happens there before it’s pushed to origin?

                                                    1. 1

                                                          I believe this was written before the pull request model was common, but there’s no reason why you can’t have the feature branch merged via a PR.

                                                    1. 8

                                                          One must also make sure that the feature flags are ephemeral. We have 310 different configuration options in our application at the time of writing this. Many of them are flags to enable or disable certain features. This makes sense because not every customer wants the same thing.

                                                          I would have used a more temporary system for flags if we were to implement them during A/B testing or similar. Every configuration option that has ever existed must be kept around for legacy reasons.

                                                      1. 10

                                                        This was a hard-learned lesson for me. I have “temporary” feature flags that have been in production for nearly a decade now. Any feature flag system I’d be integrating today needs some kind of expiry date and notification process.
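
                                                            Even something as crude as this sketch would help (names and dates are invented):

                                                            import datetime, warnings

                                                            FLAGS = {
                                                                # name: (enabled, expiry date)
                                                                "new_checkout": (True, datetime.date(2020, 6, 1)),
                                                            }

                                                            def flag_enabled(name):
                                                                enabled, expires = FLAGS[name]
                                                                if datetime.date.today() > expires:
                                                                    # the notification part: nag until someone removes the flag
                                                                    warnings.warn("flag %r expired on %s" % (name, expires))
                                                                return enabled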

                                                        1. 2

                                                              Yeah, I think you have to have a process in place to integrate feature-flagged stuff into your product after a while so you don’t have to deal with them a decade later. That, of course, can be done if you have a SaaS or single-install-source solution. If you have a situation like Enpo’s above, with different, customized installations for each client, you are pretty much toast.

                                                        2. 2

                                                          That is … mind boggling. How on earth do you even attempt to test any amount of the 796945721845778842475919139075620414921136393375300542318387282866272704053390430589730860455603066078739412704697191536795836240056697896249671921625859110264739008206646881054299114131923686294626708836563443497056478753259286321601841784170972278535996798204378021886389407248684599038298054366260840551142981370313185123638250325060383962886770938435048882386658766596481560405515515254199457134973524360454582648135836670684347420975064802837641388048575559158251497106943523511427144443326952041559678971773755844300171372821558992540618349430789236271936082094239920238839249942858712222326623974397184065086164132932404402666686761 setup options?

                                                              How many of those flags are single-deploy / single-user ones, written sort of on-demand for a certain client and hence only used by one deploy? Was doing it as a fork / patch / (some other way using version control) ever considered? How is it to work with day to day?

                                                              Sorry, I have so many questions – it is just such an extreme case that I am so curious how it actually works day to day – is it a pain most days or just something you don’t think about?

                                                        1. 2

                                                            I was actually going to write an article on this with “Commit Driven Development” or some similar title. The idea is to commit each successive code update in a program/project so that by looking over the commits one can understand its evolution.

                                                            This helps in two ways: 1. it shows how a simple program becomes a large application/project, and 2. it teaches project development with an explanation that one can follow to write a similar program and learn the process.

                                                          1. 2

                                                            Another fun idea would be to make a base project with some basic features, and then give a group of students a few tasks to complete within the existing code base. After completing the work, the group could discuss how they have solved it and how that is reflected in the commit messages. There would probably be a lot of different takes on the assignment.

                                                            Learning outcomes would include:

                                                            • Project methodology (agile)
                                                            • Writing good commit messages
                                                            • Having a good history
                                                            • How other groups have solved the same task
                                                            1. 1

                                                                Exactly. I’ve tried to document a similar thing (but wasn’t able to complete it). Here is the example that I was writing, with a slightly different context, and then I thought about “commit driven dev…”

                                                              I know this link does not look good in terms of CSS.

                                                          1. 3

                                                            As someone who’s been trying to get their team at work to do this, I couldn’t agree more.

                                                              A related thought I’ve had recently: there’s (in some ways at least) sort of a continuum between code review and pair programming. As commits get increasingly fine-grained and well-described (in their commit messages), reviewing them and providing feedback on each starts to resemble the reviewer and the author collaboratively developing the code, only it’s more asynchronous (i.e. it doesn’t require both people to be sitting in front of the same screen at the same time).

                                                            1. 2

                                                              My take on pair programming is that it often reduces the number of errors during the pre-commit phase. I don’t think that pair programming removes any and all bugs because of its synchronous nature.

                                                                Doing code review asynchronously could go in either direction. If the reviewer isn’t focused at that particular time, it will lead to a sub-optimal review process. However, I would argue that a reviewer in the zone is best suited to find any issues. Being asynchronous means that you have more time to understand the code and can test it yourself.

                                                                You’ll often get dragged in during a synchronous pair programming session. You hear your colleague’s train of thought and get excited about the task at hand. This is where I lose some of my ability to think twice about an idea.

                                                                The same could probably be achieved by looking at the diff alone the day after a pair programming session. It might reveal a new perspective.

                                                            1. 1

                                                                I am wondering if commits added during code review should be “fixups” of the previous commits. Those fixup commits would be autosquashed before the branch gets merged.

                                                              • History not altered during code review process
                                                              • History not polluted on master

                                                              What do you think about this?

                                                              1. 2

                                                                Great idea! Thank you for sharing :)

                                                                  It did feel “wrong” to recommend using --force-with-lease during the review process. Using git commit with either --fixup or --squash after code review would be the best of both worlds.

                                                              1. 2

                                                                Despite having the same goal as the author we argue against squashing and interactive rebasing here:

                                                                http://blog.plasticscm.com/2018/10/checkin-with-reviewers-in-mind-how-to-fix-pull-requests.html

                                                                1. 2

                                                                  To quote the link:

                                                                  We like to preserve those valuable explanations.

                                                                  In my case, there are none or few valuable commit messages at first. I’m not that disciplined. That’s why I’ll do sub-par commit messages at first. When the entire feature is done, it’ll be a lot clearer how it evolved and what would be logical units of work.

                                                                  I agree that if you get it right the first time around, a rebase would be of no use. That might be a skill I eventually get the hang of.

                                                                  1. 1

                                                                    I guess for us it’s kind of easy because we develop Plastic SCM with Plastic SCM (which forces us to do branch per task development). More on that here:

                                                                    http://blog.plasticscm.com/2016/06/plastic-vs-git-rerere-and-rebase.html