1.  

    Why did this end up a Slack channel rather than IRC? Many, if not most, of these tools have IRC presences, and frankly I just like not having to run multiple Slack instances, which have numerous drawbacks.

    1.  

      IRC works really well when you have a core team who spend a lot of time in a topic-specific channel where visitors can stop by and engage with them. That’s great for channels focused on individual open source tools, as you point out. One of Slack’s strengths for community channels is that it handles low-volume conversation better, because you can check it intermittently and catch up on what you missed. Being able to respond to questions that were asked when you weren’t around is a big plus.

      Our intention here definitely wasn’t to try to take the place of project-specific IRC channels at all. We just felt that there wasn’t really a place to talk about experiences with different tools and tactics at a higher level. We get emails from people all the time about topics like this, and our hope is that making these sorts of discussions public will be helpful to the community.

      1.  

        It’s absolutely not the case that I fear you’re trying to displace people from project channels to Slack; apologies if it’s come across that way. It’s more frustration that, for a community that’s fairly well established on IRC, there’s pressure to fragment across platforms.

        While you present the argument that Slack is better for low-traffic communities, I’m not sure I agree. You mainly rely on these points:

        • You can check Slack intermittently and catch up on what you missed
        • You can respond to things that happened when you weren’t around

        Both of these points are well covered by IRC. While it’s true that the core protocol doesn’t support this, it’s now practically standard to use a bouncer service that provides it, or to self-host your own. A list of ZNC-specific providers can be found here: https://wiki.znc.in/Providers, and there are also services like https://www.irccloud.com/ that provide the entire system, including a web client.

        Slack has its own downsides, like really poor community management: it defers to out-of-band systems to deal with things like harassment, essentially showing its colours as a business service offering. For example, there’s no way for an individual user to ignore another user they don’t get along with or are being harassed by; Slack instead suggests this be resolved with HR policies.

        1.  

          “Slack is better than IRC” is like saying “Gmail is better than SMTP”.

          Slack owns:

          • the server (that replaces the IRC server)
          • the heavy client (that replaces the IRC client / bouncer)
          • the light web client / application (that replaces an SSH server)

          People who appreciate running the programs they use themselves go to IRC (and get their hands dirty).

          People who prefer not to be involved in maintaining anything go to Slack (and live in the “cloud”).

          This is how I get my hands dirty, on a server:

          $ abduco -A irc weechat

          You can even have this in a laptop .bashrc:

          alias irc='ssh user@your-server.tld abduco -A irc weechat'
          

          And then you have the same feature of “being able to respond to questions that were asked when you weren’t around”. :)

      1. 2

        That’s kinda cool, though it seems like a lot of effort vs. bundling youtube-dl into your Lambda package.

        1. 4

          There are definitely a variety of much easier ways to extract audio from YouTube videos! The tutorial isn’t really meant to be a highly realistic use-case, but rather a general illustration of how these tools, services, and techniques can be used in conjunction with each other. The general approach of prototyping simple APIs with Express and then deploying them using API Gateway/Lambda is an extremely useful pattern.
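
          For illustration, here’s a minimal sketch of that pattern, assuming the aws-serverless-express package (the route and names here are illustrative, not from the tutorial):

          // Prototype a simple API as a plain Express app.
          const express = require('express');
          const awsServerlessExpress = require('aws-serverless-express');
          const app = express();
          app.get('/hello', (req, res) => res.json({ message: 'hello' })); // hypothetical route
          // Once it works locally (e.g. app.listen(3000)), wrap the same app in a
          // Lambda handler and put API Gateway in front of it.
          const server = awsServerlessExpress.createServer(app);
          exports.handler = (event, context) =>
            awsServerlessExpress.proxy(server, event, context);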

        1. 1

          Despite all the Node.js-related content in this article, is anybody here using Snap or Flatpak already?

          1. 3

            I used Flatpak to install one or two desktop apps, but I wasn’t impressed. That could be the packagers’ fault or the system’s.

            All real problems aside, it’s also a bit annoying that you seem to have to run the applications with a long command line that I kept forgetting. Maybe just providing a directory of shims/wrapper scripts with predictable names would’ve gone a long way (I mean, /usr/local/bin might be debatably OK as well).

            My solution for non-GUI-heavy things so far has been nixpkgs, so I can, for example, run a brand-new git or tmux on Ubuntu 16.04.

            1. 2

              You might also be interested in checking out Exodus for quickly getting access to newer versions of tools like that. It automatically packages local versions of binaries with their dependencies, so it’s great for relocating tools onto a server or into a container. You can just run

              exodus git tmux | ssh my-ubuntu-server.com
              

              and those tools are made available in ~/.exodus/bin. There’s no need to install anything special on the server first, as there is with Snap, Flatpak, and Nix.

              1. 1

                Thanks, I’ve heard about Exodus, but I think it’s a bit of a hack (a nice one, though), and first I’d need to have those new versions installed somewhere, which I usually don’t :)

                I’m actually a big fan of package managers and community effort; it’s just that sometimes I’m on the wrong OS and would like certain tools in a “very fresh” state. So far nixpkgs is perfect for me for this.

            2. 2

              I use snap for a few things, and have even made a classic snap or two of some silly personal stuff. They seem to work fine, but ultimately feel out of place due to things like not following XDG config paths. They also get me very little over an apt repo, or even an old-school .deb, since most of the issues (e.g. you must be root) remain. Generally speaking, given that Linux distros already have package managers, I’m more interested in things like AppImage, which brings genuinely non-packaged but trivial-to-install binaries to Linux.

              (What I really want is to live in a universe where 0install took off, but I think that universe is gone.)

              1. 2

                Yes, quite a few popular projects: Spotify, Skype, Firefox, Slack, VLC, Heroku, etc.

              1. 1

                Meh, this is really a cat-and-mouse game. Just test it like:

                // Flag browsers that expose the webdriver property, which is
                // set when Chrome is driven by automation tools.
                if (navigator.webdriver || navigator.hasOwnProperty('webdriver')) {
                  console.log('chrome headless here');
                }
                

                And there goes the article, until the author can find a way to bypass this too…

                1. 6

                  The point of the article is sort of that it’s a cat-and-mouse game. The person doing the web browsing is inherently at an advantage here, because they can figure out what the tests are and get around them. Making the tests more complicated just makes things worse for your own users; it doesn’t really accomplish much else.

                  // Patch hasOwnProperty so the 'webdriver' check above returns false.
                  const oldHasOwnProperty = navigator.hasOwnProperty;
                  navigator.hasOwnProperty = (property) => (
                    property === 'webdriver' ? false : oldHasOwnProperty.call(navigator, property)
                  );
                  // Report false for direct reads of navigator.webdriver too.
                  Object.defineProperty(navigator, 'webdriver', {
                    get: () => false,
                  });
                  
                  1. 1

                    Yet there are other checks that would surely work for a given time window, like testing for specific WebGL rendering that headless Chrome can’t perform, or targeting a specific set of bugs that affect only headless Chrome.

                    https://bugs.chromium.org/p/chromium/issues/detail?id=617551
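
                    For example, a minimal sketch of the WebGL check could look like this (the SwiftShader renderer string is an assumption that varies across Chrome versions, so treat it as illustrative):

                    // Query the unmasked WebGL renderer; headless Chrome has historically
                    // reported a software renderer such as "SwiftShader".
                    const gl = document.createElement('canvas').getContext('webgl');
                    const info = gl && gl.getExtension('WEBGL_debug_renderer_info');
                    const renderer = info ? gl.getParameter(info.UNMASKED_RENDERER_WEBGL) : '';
                    if (!gl || /swiftshader/i.test(renderer)) {
                      console.log('possibly chrome headless');
                    }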

                    1. 1

                      Well, eventually you just force people to run Chrome with remote debugging, or Firefox with Marionette, in a separate X session, mask the couple of variables that report remote debugging, and then you have to actively annoy your users to go any further.

                      I scrape using Firefox (not even headless) with Marionette; I also browse with Firefox with Marionette because Marionette makes it easy to create hotkeys for strange commands.

                      1. 1

                        Even if there were no way to bypass that, don’t you think that you’ve sort of already lost in some sense once you’re wasting your users’ system resources to do rendering checks in the background just so that you can restrict what software people can choose to use when accessing your site?

                        1. 3

                          If a headless browser is required to scrape the data (rather than just requesting webpages and parsing HTML), then the website is already perverse enough. No one would be any more surprised if it also ran WebGL-based proof of work before rendering its most expensive thief-proof news articles from a blob of Malbolge bytecode, with logic based on GPU cache timing.

                          1. 1

                            You’re paying a price, certainly. But depending on your circumstances, the benefits might be worth the cost.

                    1. 3

                      Great article!

                      Weirdly, when I go to that link (or even reload the page), I end up 3/4 of the way down the page instead of at the top. Browser bug, or something to do with the page scripting?

                      1. 1

                        This appears to be caused by the embedded iframes stealing focus. I tried some workarounds, but they unfortunately don’t seem to resolve the issue. If anybody knows a better fix for this, then I would love to hear it!

                      1. 2

                        Oh man, so many cool tools!

                        BTW, you have a typo in the link to the powerline-shell page.

                        And the x-macro link too!

                        1. 1

                          Thanks, that should be fixed now!

                        1. 3

                          A very Pareto-optimal post :)

                          I would question whether the approach taken is suitable for finding “good” blog posts. Hacker News gets gamed by plenty of people. There’s also a cult-of-personality effect that seems to take hold, where certain people’s blog posts get submitted regularly because they’re guaranteed to be upvoted. Being the first to submit a Gabriel Weinberg or Daring Fireball link guarantees votes for the poster, regardless of quality.

                          Still, the Pareto approach is really well explained here, and it’s a shining example of the difference between an HN-optimal post and a good one, IMHO :)

                          1. 3

                            Thanks! I completely agree about the cults of personality. That was my motivation for the second list of posts, where I restricted the maximum number of distinct submitters for a blog. It significantly limited the number of candidate blogs, but it did effectively eliminate the blogs that people race to submit.
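
                            As a rough sketch, the filter amounts to something like this (the data shape and cutoff here are hypothetical, not the actual analysis code):

                            // Keep only blogs submitted by at most N distinct users, which
                            // drops the blogs that many different people race to submit.
                            const MAX_DISTINCT_SUBMITTERS = 2; // assumed cutoff
                            const submittersByBlog = new Map();
                            for (const { blog, submitter } of submissions) {
                              if (!submittersByBlog.has(blog)) submittersByBlog.set(blog, new Set());
                              submittersByBlog.get(blog).add(submitter);
                            }
                            const candidateBlogs = [...submittersByBlog]
                              .filter(([, submitters]) => submitters.size <= MAX_DISTINCT_SUBMITTERS)
                              .map(([blog]) => blog);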

                          1. 3

                            How did you handle duplicates in this analysis? Many posts get submitted multiple times and it’s not clear if you counted those as different or combined them.

                            1. 3

                              I didn’t do any sort of deduplication, so articles that were submitted multiple times were counted as distinct in the analysis. I think that makes sense for the mean and median scores, but perhaps the duplicates should have been subtracted from the total article count for each blog. I just did a quick pass over the data, and it looks like 92.4% of the URLs submitted to Hacker News are unique. When limiting that to the submissions that were identified as blog articles, the fraction is only slightly higher, at 93.5%. I don’t think that should make much of a difference, but you still raise a good point!
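
                              That quick pass is essentially just a set-based uniqueness count; this is a hypothetical reconstruction, where the submissions array and its shape are assumptions:

                              // Fraction of distinct URLs among all submissions.
                              const urls = submissions.map((s) => s.url);
                              const uniqueFraction = new Set(urls).size / urls.length;
                              console.log(`${(uniqueFraction * 100).toFixed(1)}% of submitted URLs are unique`);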

                              1. 2

                                I would expect a really high percentage of URLs to be unique because of spam (and articles that are so low-quality as to be almost indistinguishable from spam). I don’t mean to make work for you, but what does it look like if you only consider articles that get at least 5 upvotes? Or reach the front page?

                                1. 3

                                  97.1% of blog submissions that get at least 5 upvotes are unique, and that rises to 98.2% for submissions that get at least 10 (which is the number I like to use as an approximation for making the front page). This is probably partially caused by even really great articles having a good chance of never getting upvoted, though (in addition to spammy submissions being removed).