1. 7

    The first argument seems to ignore the fact that you can (and IMO should) use ProxyJump/ProxyCommand instead of agent forwarding. I don’t buy the other arguments either. Automation can help turn every host into a bastion, but this just increases your attack surface.
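
    For reference, a ProxyJump setup is just a few lines of ssh configuration (OpenSSH 7.3 or later; host names here are illustrative):

    ```
    # ~/.ssh/config — hop through the bastion without forwarding the agent;
    # keys never leave your machine
    Host internal-*.example.com
        ProxyJump bastion.example.com
    ```

    On older OpenSSH, ProxyCommand with `ssh -W %h:%p bastion.example.com` achieves the same thing.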

    1. 22

      Even a blog post needs tags, categories, and images. When it comes to stock images, there are two sites that have a wide selection of free to use, no credit required works:

      Strong disagree here. I’m a firm believer in the “clear and cold” writing style: your writing should be clear and concise. The reader isn’t there for your memes or hero headers or zany gifs. They’re there for your words. If an image doesn’t make the words clearer, then it doesn’t belong.

      Case in point: at 97 KB, your typewriter image is the heaviest thing on the site. All it does is make me have to scroll in order to read your actual content.

      1. 11

        The useless practice of hero images has become so prevalent that I acquired a habit of scrolling past them without even looking.

        The memes also have negative value because they take up space but carry no information.

        Related to this, in newspaper articles I often see random images that have nothing to do with the article like, say, a man waiting for a bus in an article about mass transit. To add insult to injury, it’s also captioned with “A man waiting for a bus”.

        1. 3

          I do agree with you for images, but not for tags and categories. Sometimes I stumble upon a great article on a subject and would like to find more articles from the same author on that subject. When there are tags, they are often useful; when there are not… you have to go to the archives (which sometimes don’t even exist…) and ctrl+f several keywords across several pages to find what you’re looking for. Sometimes some Google-fu helps, but sometimes not.

          To me it’s like those blogs that don’t serve RSS because the authors don’t use it themselves. This drives me mad.

          1. 2

            It seems many recommend the use of these header images to increase engagement. Of course, this often comes from SEO websites that may do it to compensate for a lack of content, and there is no source for such a claim. I couldn’t find any proper research on this topic. I also don’t like having to scroll past an unrelated image to see content, but maybe many people find it engaging.

            1. 3

              To be precise: it’s to increase engagement on social media, by giving the post a thumbnail.

          1. 22

            So (1) nobody cared to improve the old tools, writing new ones is more fun, and (2) updating the old tools to reflect current reality would break old scripts, so the rational choice is to both let the old tool rot (thus quite possibly breaking anything that relies on it) as well as introduce a new tool that definitely isn’t compatible with the old scripts. Why do I feel like this line of arguing has a problem or two?

            Pray tell, what happens when the interface provided by the iproute2 utilities no longer reflects reality? Let them rot and write yet another set of new tools? Break them? Introduce subtle lies?

            Oh, by the way, if you’re configuring IPv6 on Linux, don’t use the old tools. They’re subtly broken and can waste a lot of your time. I’ve been there. Don’t mention it.
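
            A minimal comparison, if anyone needs convincing (output depends on your interfaces, of course):

            ```shell
            # iproute2 shows every address, including all IPv6 addresses, per interface
            ip addr show

            # IPv6-only view; net-tools ifconfig is unmaintained on many distributions
            # and its IPv6 output is incomplete
            ip -6 addr show
            ```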

            Meanwhile, I’m glad that OpenBSD’s ifconfig can show more than one IP per interface. And I can use the same tool to set up my wifi too. It’s a tool meant to work.

            1. 12

              The BSDs maintain the kernel and the base system in lockstep. This is not the case for Linux distributions. Over the years, Linux developers started to do the same. That’s why we now have iproute2, ethtool, iw and perf, which are userland tools evolving at the same speed as the kernel (and sharing its version number).

              1. 8

                nobody cared to improve the old tools

                The people who want to use the old tools want them to keep working the same way they always have. They already work that way, so the people who want to use the old tools have no motivation to make changes.

                updating the old tools to reflect current reality would break old scripts

                It would also piss off the people who want to keep using the old tools, since by definition they would no longer keep working the same way.

                The names ifconfig and netstat are now claimed and cannot be re-used for a different purpose, in much the same way that filenames like COM1 and AUX are claimed on Windows and cannot be re-used.

                Meanwhile, I’m glad that OpenBSD’s ifconfig can show more than one IP per interface.

                My understanding is that OpenBSD reserves the right to change anything at any time, from user-interface details down to the C ABI. “The people who want to use the old tools” are discouraged from using OpenBSD to begin with, so it’s not surprising that OpenBSD doesn’t have to wrestle with these kinds of problems.

                1. 4

                  the right to change anything at any time

                  While this is true, I think you are taking it a little too literally. You won’t, for example, upgrade to the latest snapshot and find that ls has been replaced by some new tool with a completely different name, or that CVS has been replaced by Git. And while POSIX doesn’t require (best I can tell) a tool named ifconfig, it’s very unlikely you would find it replaced by something else.

                  1. 5

                    Right. And by following the discussions on tech@, I’ve gotten the impression that Theo (as well as many other developers) deeply cares about avoiding unnecessary changes to the user-facing parts as tools get replaced or extended. Case in point: daemon configuration. The system got extended with the rcctl tool, but the old way of configuring things via rc.local and rc.conf.local still works as it always did. Nothing like the init system swaps on Linux. Still, extending or changing the behavior of a tool, even at the risk of breaking some old script, seems to be preferred to making new tools that require everyone to adapt.

                    After a decade of using Linux as well as OpenBSD, I’d say that OpenBSD is way more committed to keeping the user facing parts finger compatible while breaking ABI more freely (“we live in a source code world”). In the Linux world I’ve come to expect ABI compatibility but user compatibility gets rekt all the time.

              1. 1

                I don’t understand why keepalived is being used instead of an exabgp watchdog script. You can write a simple health check utility in perl, python, etc and have that directly announce or withdraw the route in exabgp instead of having exabgp watch for files on a filesystem.
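
                A setup like that is small: a process section in exabgp.conf plus a script whose stdout drives the announcements. A sketch only — the exact syntax differs between ExaBGP versions, and the paths and prefixes here are made up:

                ```
                # exabgp.conf — ExaBGP runs the script and reads commands from its stdout
                process service-watchdog {
                    run /usr/local/bin/check-service.sh;
                }

                # check-service.sh prints, depending on the health check result:
                #   announce route 192.0.2.53/32 next-hop self
                #   withdraw route 192.0.2.53/32 next-hop self
                ```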

                1. 1

                  Keepalived’s primary role is to configure IPVS, so it needs to do healthchecking anyway, and ExaBGP can just reuse the results of these checks. The second reason is that healthchecking the VIP from the load balancer itself may bring several small issues (among them, RP filtering, source address selection and routing).

                  1. 1

                    But why use IPVS and ExaBGP? Your router should be able to do the load balancing for you automatically with anycast. At work we do not use any hardware load balancers or layer 3 IPVS stuff like CARP/VRRP/HSRP. Everything is handled with ExaBGP for all services (web, database, etc.) and it works beautifully.

                    1. 2

                      With just ECMP routing, any change will break existing connections. Add an additional node in your database cluster and many of your existing connections will break. This may or may not be acceptable depending on your use case. Also note IPVS and VRRP have no relation except being used together through Keepalived.

                      1. 1

                        Not using persistent database connections so I guess it wouldn’t matter.

                1. 3

                  I wanted some comments on my blog recently without using disqus and preferably without needing a server at all as the blog is already static. I ended up putting together a prototype that uses AWS lambda to handle comment submissions and generate a static json index file which can be read by some javascript.

                  Runs for free and works well enough for low traffic sites: https://github.com/joealcorn/chatter

                  1. 1

                    At some point, I wanted to do something similar. The only sore point is that it was unclear to me whether you can prevent Lambdas from running in parallel. Otherwise, you have a small race condition when you regenerate the JSON index file. It’s solvable by using Dynamo as a distributed lock. Or by not doing anything: as long as you have comments flowing (which is likely if you hit the race condition), the index will become correct at some point.

                    Thanks for sharing the code!

                    1. 1

                      I got around the issue of needing locks (I wanted it to be completely server-less and essentially free to run) by having the index generation regenerate the entire index rather than update the existing one, but there is another race condition with my solution.

                      The way it works at the moment is:

                      • Have a lambda function write new comments to a bucket with a flake id as the filename (so they’ll be roughly time ordered)
                      • Have a second lambda function to generate the index file that will execute when certain events are fired on the bucket (file created, deleted, updated)

                      The race condition I hit is because of S3’s eventual consistency: when the generation function runs, the newly created comment might not show up in the file list. My solution would be to kick off the generation function on a delay, but I haven’t implemented that yet.

                      And of course, if you have many, many comments the index generation will slow down, but this works fine for my use case.
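
                      The full-rebuild step itself is tiny; roughly (illustrative names, not the actual chatter code):

                      ```python
                      import json

                      def rebuild_index(comment_blobs):
                          """Regenerate the entire index from every stored comment.

                          comment_blobs maps S3 key -> JSON body. Keys are flake ids,
                          which sort lexicographically in creation order, so sorting
                          the keys gives roughly chronological comments.
                          """
                          ordered = [json.loads(body) for _, body in sorted(comment_blobs.items())]
                          return json.dumps({"comments": ordered})
                      ```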

                      Zappa makes all of this really easy to manage, I wouldn’t want to use lambda without this tooling.

                  1. 3

                    Does anyone know if there’s a simpler alternative to Google Analytics which only shows hit counts? For my site, all I’d love to know is which pages have been viewed how many times. I really don’t care about anything else.

                    I wish Netlify would provide some sort of basic log analysis of static sites, telling me the view count of each page.

                    1. 5

                      If you have access to your web server logs, GoAccess may be a good candidate. It’s quite easy to use and not really intrusive.

                      1. 1

                        I actually don’t since I’m on Netlify. Otherwise this would be an ideal solution.

                        Most static websites are hosted on either GitHub Pages or Netlify, and (as far as I know) neither of those lets you see the access logs.

                        1. 4

                          You can host a 1x1 pixel on Amazon S3 and enable logging for the associated bucket. Add a query string to identify the current page. A simple transformation on the logs (to remove original URI, keeping only the one in query string) and you should be able to use GoAccess.
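
                          The transformation can be a one-liner; for example, with hits logged as requests for `/pixel.gif?page=<real-page>` (the pattern is a sketch — adapt it to the exact S3 access-log format):

                          ```shell
                          # Replace the pixel URI with the page carried in the query string,
                          # so GoAccess counts real pages
                          sed -E 's|GET /pixel\.gif\?page=([^ ]+)|GET \1|' access.log > pages.log
                          ```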

                      2. 1

                        Does anyone know if there’s a simpler alternative to Google Analytics which only shows hit counts?

                        I think what you’re looking for is a web counter from the 90’s :)

                        1. 1

                          I don’t! But this sounds like a good service for someone to provide. Something SUPER lightweight. Could even eventually show it on https://barnacl.es

                          1. 1

                            Back in the day, https://www.awstats.org/ was a thing.

                            1. 1

                              It still is. I know quite a few customers who still use awstats.

                          1. 5

                            Using a wireless card as an AP is painful. Most of the time, you can only use one of the frequency bands (2.4 or 5 GHz) because there is a single radio. And many cards overly restrict the 5 GHz band. I got similar problems with an Intel 7260. It’s safer to buy a “dumb” AP which knows how to map VLANs to SSIDs and use it instead of a wireless card.

                            1. 11

                              With any provider, the same “scam” technique would work with “+”. The author seems to assume Netflix would know the canonical form of such an address, but “+” doesn’t have to be special. Some providers use “-” instead (free.fr for example). You can use any character you want: in Postfix, this is recipient_delimiter.
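
                              In Postfix, it is a single main.cf setting; with “-” as the delimiter:

                              ```
                              # /etc/postfix/main.cf
                              # john-netflix@example.com is delivered to john@example.com;
                              # the part after the delimiter is ignored for routing
                              recipient_delimiter = -
                              ```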

                              1. 1

                                With zsh, there is built-in support for naming directories (static named directories). It’s also possible to let a function decide how to translate a name to a directory (dynamic named directories). Moreover, it’s possible to change to a directory without cd. It doesn’t have the fuzzy-matching stuff, so some work is still needed to get the same results. I am only using the first feature, but I have described both of them in this blog post: https://vincent.bernat.im/en/blog/2015-zsh-directory-bookmarks. What’s great about such features is that they work everywhere a filename works: you can use cd, but you can also directly call your editor on a file and you get completion.
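
                                A static named directory is one line in your zsh configuration (the name and path here are just examples):

                                ```
                                # ~/.zshrc — ~blog now works everywhere a path does
                                hash -d blog=~/projects/blog

                                # e.g.:  cd ~blog
                                #        vim ~blog/drafts/post.md   # with completion
                                ```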

                                1. 32

                                  In the Hacker News thread about the new Go package manager, people were angry about Go, since the npm package manager was obviously superior. I can see the quality of that now.

                                  There’s another Lobsters thread right now about how distributions like Debian are obsolete. The idea being that people use stuff like npm now, instead of apt, because apt can’t keep up with modern software development.

                                  Kubernetes’ official installer is some curl | sudo bash thing instead of providing any kind of package.

                                  In the meantime I will keep using only FreeBSD/OpenBSD/RHEL packages and avoid all these nightmares. Sometimes the old ways are the right ways.

                                  1. 7

                                    “In the Hacker News thread about the new Go package manager people were angry about go, since the npm package manager was obviously superior. I can see the quality of that now.”

                                    I think this misses the point. The relevant claim was that npm has a good general approach to packaging, not that npm is perfectly written. You can be solving the right problem, but writing terribly buggy code, and you can write bulletproof code that solves the wrong problem.

                                    1. 5

                                      npm has a good general approach to packaging

                                      The thing is, their general approach isn’t good.

                                      They only relatively recently decided locking down versions is the Correct Thing to Do. They then screwed this up more than once.

                                      They only relatively recently decided that having a flattened module structure was a good idea (because presumably they never tested in production settings on Windows!).

                                      They decided that letting people do weird things with their package registry is the Correct Thing to Do.

                                      They took on VC funding without actually having a clear business plan (which is probably going to end in tears later, for the whole node community).

                                      On and on and on…

                                      1. 2

                                        Go and the soon-to-be-official dep dependency management tool manage dependencies just fine.

                                        The Go language has several compilers available. Traditional Linux distro packages together with gcc-go is also an acceptable solution.

                                        1. 4

                                          It seems the soon-to-be-official dep tool is going to be replaced by another approach (currently named vgo).

                                        2. 1

                                          I believe there’s a high correlation between the quality of the software and the quality of the solution. Others might disagree, but that’s been pretty accurate in my experience. I can’t say why, but I suspect it has to do with the same level of care put into both the implementation and in understanding the problem in the first place. I cannot prove any of this, this is just my heuristic.

                                          1. 8

                                            You’re not even responding to their argument.

                                            1. 2

                                              There’s npm registry/ecosystem and then there’s the npm cli tool. The npm registry/ecosystem can be used with other clients than the npm cli client and when discussing npm in general people usually refer to the ecosystem rather than the specific implementation of the npm cli client.

                                              I think npm is good but I’m also skeptical about the npm cli tool. One doesn’t exclude the other. Good thing there’s yarn.

                                              1. 1

                                                I think you’re probably right that there is a correlation. But it would have to be an extremely strong correlation to justify what you’re saying.

                                                In addition, NPM isn’t the only package manager built on similar principles. Cargo takes heavy inspiration from NPM, and I haven’t heard about it having a history of show-stopping bugs. Perhaps I’ve missed the news.

                                            2. 8

                                              The thing to keep in mind is that all of these were (hopefully) done with best intentions. Pretty much all of these had a specific use case… there’s outrage, sure… but they all seem to have a reason for their trade offs.

                                              • People are angry about a proposed go package manager because it throws out a ton of the work that’s been done by the community over the past year… even though it’s fairly well thought out and aims to solve a lot of problems. It’s no secret that package management in go is lacking at best.
                                              • Distributions like Debian are outdated, at least for software dev, but their advantage is that they generally provide a rock solid base to build off of. I don’t want to have to use a version of a python library from years ago because it’s the only version provided by the operating system.
                                              • While I don’t trust curl | sh it is convenient… and it’s hard to argue that point. Providing packages should be better, but then you have to deal with bug reports where people didn’t install the package repositories correctly… and differences in builds between distros… and… and…

                                              It’s easy to look at the entire ecosystem and say “everything is terrible” but when you sit back, we’re still at a pretty good place… there are plenty of good, solid options for development and we’re moving (however slowly) towards safer, more efficient build/dev environments.

                                              But maybe I’m just telling myself all this so I don’t go crazy… jury’s still out on that.

                                              1. 4

                                                Distributions like Debian are outdated, at least for software dev,

                                                That is the sentiment that seems to drive the programming-language-specific package managers. I think what is driving this is that software often has way too many unnecessary dependencies, making it hard and time-consuming to set up the environment needed to build the software.

                                                I don’t want to have to use a version of a python library from years ago because it’s the only version provided by the operating system.

                                                Often it is possible to install libraries at another location and redirect your software to use that though.

                                                It’s easy to look at the entire ecosystem and say “everything is terrible” but when you sit back, we’re still at a pretty good place…

                                                I’m not so sure. I foresee an environment where actually building software is a lost art, where people directly edit interpreted files in place inside a virtual machine image/flatpak/whatever because they no longer know how to build the software and set up the environment it needs. And then some language-specific package manager for distributing these images.

                                                I’m growing more disillusioned the more I read Hacker News and lobste.rs… Help me be happy. :)

                                                1. 1

                                                  So like Squeak/Smalltalk images, then? What’s old is new again, I suppose.

                                                  http://squeak.org

                                                  1. 1

                                                    I’m not so sure. I foresee an environment where actually building software is a lost art, where people directly edit interpreted files in place inside a virtual machine image/flatpak/whatever because they no longer know how to build the software and set up the environment it needs. And then some language-specific package manager for distributing these images.

                                                    You could say the same thing about Docker. I think package managers and tools like Docker are a net win for the community. They make it faster for experienced practitioners to setup environments and they make it easier for inexperienced ones as well. Sure, there is a lot you’ve gotta learn to use either responsibly. But I remember having to build redis every time I needed it because it wasn’t in ubuntu’s official package manager when I started using it. And while I certainly appreciate that experience, I love that I can just install it with apt now.

                                                  2. 2

                                                    I don’t want to have to use a version of a python library from years ago because it’s the only version provided by the operating system.

                                                    Speaking of Python specifically, it’s not a big problem there because everyone is expected to work within virtual environments and nobody runs pip install with sudo. And when libraries require building something binary, people do rely on system-provided stable toolchains (compilers and -dev packages for C libraries). And it all kinda works :-)
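
                                                    A minimal sketch of that workflow (the package is just an example):

                                                    ```shell
                                                    # Create and use an isolated environment; nothing touches the
                                                    # system site-packages, so sudo is never needed
                                                    python3 -m venv .venv
                                                    . .venv/bin/activate
                                                    pip install paramiko   # lands under .venv/
                                                    ```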

                                                    1. 4

                                                      I think virtual environments are a best practice that unfortunately isn’t followed everywhere. You definitely shouldn’t run pip install with sudo, but I know of a number of companies where part of their deployment is to build a VM image and sudo pip install the dependencies. However, it’s the same thing with npm. In theory you should just run as a normal user and have everything installed to node_modules, but this clearly isn’t the case, as shown by this issue.

                                                      1. 5

                                                        nobody runs pip install with sudo

                                                        I’m pretty sure there are quite a few devs doing just that.

                                                        1. 2

                                                          Sure, I didn’t count :-) The important point is they have a viable option not to.

                                                        2. 2

                                                          npm works locally by default, without even doing anything to make a virtual environment. Bundler, Cargo, Stack etc. are similar.

                                                          People just do sudo because Reasons™ :(

                                                      2. 4

                                                        It’s worth noting that many of the “curl | bash” installers actually add a package repository and then install the software package. They contain some glue code like automatic OS/distribution detection.

                                                        1. 2

                                                           I’d never known true pain in software development until I tried to make my own .debs and .rpms. Consider that some of these newer packaging systems might have been built because Linux packaging is an ongoing tire fire.

                                                          1. 3

                                                             With fpm (https://github.com/jordansissel/fpm) it’s not that hard. But yes, using the Debian- or Red Hat-blessed way to package stuff and getting packages into the official repos is definitely painful.

                                                            1. 1

                                                               I used the Gradle plugins with success in the past, but yeah, writing spec files by hand is something else. I am surprised nobody has invented a more user-friendly DSL for that yet.

                                                              1. 1

                                                                 A lot of the difficulties with Debian packages come from policy. For your own packages (not targeted to be uploaded to Debian), it’s far easier to build packages if you don’t follow the rules. I won’t pretend this is as easy as with fpm, but you get some bonuses from it (building in a clean chroot, automatic dependencies, service management like the other packages). I describe this in more detail here: https://vincent.bernat.im/en/blog/2016-pragmatic-debian-packaging

                                                              2. 2

                                                                It sucks that you come away from this thinking that all of these alternatives don’t provide benefits.

                                                                I know there’s a huge part of the community that just wants things to work. You don’t write npm for fun, you end up writing stuff like it because you can’t get current tools to work with your workflow.

                                                                I totally agree that there’s a lot of messiness in this newer stuff that people in older structures handle well. So… we can knowledge-share and actually make tools on both ends of the spectrum better! Nothing about Kubernetes requires a curl’d installer, after all.

                                                              1. 1

                                                                Having one file to specify dependencies and one file to lock them is quite flexible. You can choose to commit the lock file or not. The proposed approach removes this flexibility and the choice. Similar results could be obtained with dep by using a different solver (sticking to minimal versions) to generate the lock file.

                                                                1. 2

                                                                    In Go, the dependencies are already specified in the source code, so we don’t need a file for this. The go.mod file is used to specify dependency versions, and it can also act as a lock file because of the Minimal Version Selection algorithm.
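
                                                                    A go.mod under vgo looks roughly like this (the module path and versions are illustrative); with minimal version selection, every build picks exactly these minimum versions, which is why no separate lock file is needed:

                                                                    ```
                                                                    // go.mod
                                                                    module example.com/hello

                                                                    require (
                                                                        github.com/pkg/errors v0.8.0
                                                                        golang.org/x/text v0.3.0
                                                                    )
                                                                    ```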

                                                                1. 3

                                                                   I wish more folks involved in packaging for Linux distros were familiar with Homebrew. Obviously not everything Homebrew does is applicable to Debian, but the ability for folks to show up and easily contribute new versions with a simple PR is game-changing. Last night I noticed that the python-paramiko package in Debian is severely out of date, but the thought of trying to learn the various intricacies of contributing to Debian well enough to update it turns me right off.

                                                                  1. 15

                                                                    As an upstream dev of code that’s packaged with Homebrew, I have noticed that Homebrew is by far the sloppiest of any packagers; there is basically no QA, and often the packagers don’t even read the instructions I’ve provided for them. I’ve never tried it myself, but it’s caused me a lot of headaches all the same.

                                                                    1. 2

                                                                      I just looked at the packaging information for paramiko and I have more questions than before:

                                                                      How does this setup even work in case of a security vulnerability?

                                                                      1. 4

                                                                         Unfortunately, Debian still has a strong ownership model. Unless a package is team-maintained, an unwilling maintainer can stall any effort to update it, sometimes actively, sometimes passively. In the particular case of Paramiko, the maintainer has very strong opinions on this matter (I know that first-hand).

                                                                        1. 1

                                                                          Strong opinions are not necessarily bad. Does he believe paramiko should not be updated?

                                                                        2. 3

                                                                          How does this setup even work in case of a security vulnerability?

                                                                          Bugs tagged as security problems (esp. if also tagged with a CVE) get extra attention from the security team. How that plays out depends on the package/bug, but it can range from someone from the security team prodding the maintainer, all the way to directly uploading a fix themselves (as a non-maintainer upload).

                                                                          But yeah in general most Debian packages have 1-2 maintainers, which can be a bottleneck if the maintainer loses interest or gets busy. For packages with a lot of interest, such a maintainer will end up replaced by someone else. For more obscure packages it might just languish unmaintained until someone removes the package from Debian for having unfixed major issues.