1. 14

    I personally really like sslh: it’s a protocol multiplexer that accepts all connections and acts as a reverse proxy by picking an upstream server based on the protocol that each client seems to expect. For example, if a client connects and immediately sends traffic that looks like an HTTP request, sslh will let an upstream HTTP server deal with the client. If the client does nothing, after a configurable timeout sslh will proxy the connection over to an SSH server.

    I use sslh at work in order to expose Prometheus metrics and an SSH server on a single port of a Docker container. Just like that (*snaps fingers*) I’ve gained SSH access to all of my containers that are also scraped by Prom.
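
    The invocation is roughly along these lines (ports illustrative, not my actual setup; sslh can also read the same settings from a config file):

    # clients that speak HTTP go to the exporter on 9101;
    # silent clients fall through to sshd on 22 after the timeout
    $ sslh --listen 0.0.0.0:9100 --ssh 127.0.0.1:22 --http 127.0.0.1:9101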

    1. 2

      Ah, that’s a pretty cool tool. I hadn’t seen anything quite so generalized before.

      1. 1

        Why do you use a container for more than one process?

        1. 3

          I set up supervisord as the entry point in a base image, and then have derived images add their config files into the directory from which supervisord picks up program configs. I’ve tried conjuring up a similar setup using OpenRC and whatever SysV init comes with BusyBox, but it never did turn out as smooth as my supervisord setup, so I’m rolling with that.
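
          Sketched from memory, with package names and paths illustrative rather than my exact setup, the base image amounts to:

          # Dockerfile of the base image
          FROM alpine
          RUN apk add --no-cache supervisor
          COPY supervisord.conf /etc/supervisord.conf
          ENTRYPOINT ["supervisord", "-n", "-c", "/etc/supervisord.conf"]

          where supervisord.conf pulls program definitions from a well-known directory:

          [supervisord]
          nodaemon=true

          [include]
          files = /etc/supervisor.d/*.ini

          Derived images then only need to COPY their own *.ini program configs into /etc/supervisor.d/.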

          Edit: you asked “why?” and I answered the “how?” – well done, me! As to why: one of my projects at work ships as a Docker container that runs on machines that I have no access to. For ease of debugging, tailing logs, and generally poking around, nothing beats having good old shell access. Running sshd inside my container alongside the main process nicely sidesteps the need to provision shell access to the Docker host, which is organizationally unpalatable when done on a systematic basis. Simply hiding behind Prom’s metrics port is way easier.

          1. 1

            The how is just as interesting, as I have tried it with systemd (which blatantly refuses to run in Docker), so I now use supervisord as well for mostly the same purpose. However, I thought that a container was meant for one process, so I perceived myself to be doing containers “wrong”: treating them as lightweight VMs for trusted software, where actual KVM would be way too much.

            What kind of project is it?

            1. 1

              You can also use something like https://github.com/Yelp/dumb-init which works very well too. Probably more lightweight than supervisord, though.
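
              For reference, dumb-init’s job is just to sit at PID 1, forward signals, and reap zombies; a minimal (illustrative) setup is:

              # in a Dockerfile; "myapp" is a hypothetical single process
              RUN apk add --no-cache dumb-init
              ENTRYPOINT ["/usr/bin/dumb-init", "--"]
              CMD ["myapp"]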

              1. 1

                The nice part of supervisord is it also works everywhere else as well, so you only have to learn one tool.

                1. 1

                  I use dumb-init in containers that only have a single process in them, while supervisord allows me to ship multiple independent but related processes in a single container.

                2. 1

                  I’ve come to accept that anything goes inside a Docker container that would have previously gone into a complete Linux system. After all, Docker is just convenience machinery atop Linux namespaces, which is exactly in line with how I’m using Docker: to isolate & virtualize a complete system.

                  The project I’m currently working on is a log ingestion daemon for a very temperamental legacy application that emits Valuable Business Data™️ in a variety of mostly textual formats. The daemon acts as a bridge between this legacy application and a streaming data pipeline by tailing files and transforming them into streams of events. Most of the difficulty with this daemon has to do with really weird stuff that the legacy application does, so convenient introspection via a sidecar sshd has proven invaluable.

              2. 2

                “One process per container” is unjustified dogma left over from the early days of Docker. I like the notion of one service per container, where any given application is composed of multiple smaller services. When a service depends on two or more processes that are tightly coupled together, and it would never make sense to handle them separately, they should go in the same container.

                And when the service is in fact only one process, there’s still the issue that it probably doesn’t behave anything like init, which can be a problem, especially if it does any forking.

            1. 3

              I wonder how long docker will remain at the “forefront” of containers.

              If not for Docker’s vendor adoption, would “podman for dev, and k8s for prod” start becoming more common?

              1. 1

                I don’t know; I only hope we’ll see container software that works on more Unixes, like FreeBSD and OpenBSD (and maybe OSX[1]).

                [1] Yes, I know Docker kinda works on OSX.

                1. 2

                  I dearly love FreeBSD, but reading the mailing lists sometimes makes me wonder how anything ever gets done. The amount of pushback occasionally seen for sometimes seemingly trivial improvements[1]… and the historically long support lifecycle – major versions supported for 5 years!

                  It’s no wonder that the wheels grind slowly sometimes, and things tend to have fairly clunky usability at times (for things like jails, bhyve, etc). Good tech, often saddled with middling interfaces that have to change somewhat glacially.

                  Hats off to all the devs[2] who continue to “get things done” anyway.


                  [1]: Wasn’t the term “bike-shedding” even coined on the FreeBSD mailing lists back in ’99 or so?
                  [2]: Same goes for devs for OpenBSD, DragonFlyBSD, NetBSD, Linux, etc.

                  1. 2

                    Only those who are skilled in the ways of placating the hordes of angry greybeards shall be deemed worthy of a commit bit. /s

                    1. 1

                      sometimes seemingly trivial improvements

                      I have problems with such things. Sometimes the improvements fix a problem that doesn’t need fixing.

                  2. 1

                    Docker itself will probably always have a place, but the current trend in devops is cutting Docker out of the loop and orchestrating containers at a lower level. The buzzwords for these lower-level components escape me because I’m not as entrenched in this area as I’d like to be.

                  1. 4

                    Having a Makefile or similar is a really good idea: it’s easier and less error-prone than having to remember a whole bunch of steps each time. I think the author would have benefitted from going a bit further: running make via a VCS hook. That consolidates two steps, and ties the necessity of running make to the nice-to-have of VCS (which we might otherwise forget or avoid). Note that this also requires breaking the habit of running make manually; if it’s deeply ingrained in our muscle memory we could try changing the target name, to break our “autopilot”.

                    For a site that lives in a single directory on a single machine, it’s probably easiest to publish via a post-commit hook. My site has a few remotes which I push to as backups/mirrors, so I have one of those publish the site via a post-receive hook; this lets me commit early and often, without worrying about half-finished things being published (although I also have a separate directory for unfinished work, that I can git mv into place when I’m happy with it).
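
                    For illustration, a post-commit hook is just an executable script at .git/hooks/post-commit; something like this (the “publish” target is hypothetical):

                    #!/bin/sh
                    # rebuild and publish the site after every commit
                    make publish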

                    1. 2

                      :O this is wonderful, I totally forgot about commit hooks. Thank you for this tip! I also have a similar setup, frontend for dev and public for build. I’m going to look into this further. https://githooks.com/ seems like a pretty good resource.

                      Do you have your own server running your site, and hence control over server-side hooks?

                      I feel that, to prevent publishing unpublishable things, I either gotta come up with some protocol for determining whether a commit contains unpublishable things, and skip publishing when such a commit is pushed, or continue doing it manually – especially since my original problem was not committing, not forgetting to publish.

                      Since my site lives in S3 I could probably leverage GitHub webhooks/lambda to further automate, hmmmm

                      1. 2

                        I don’t use “server-side hooks”; I push changes from one place on my laptop to another ;)

                        I make changes to my site via a working copy at ~/blog, which pushes to a bare clone at ~/Programming/repos/chriswarbo-net.git. That bare clone has a post-receive hook which publishes the site; it also propagates those new commits to a copy on my server and a mirror on github. I actually manage all of my git repos this way; although none of my other projects are Web sites so they don’t do the publishing step.
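
                        Schematically, that post-receive hook amounts to something like this (remote names and paths illustrative, not my exact hook):

                        #!/bin/sh
                        # runs inside the bare clone after each push
                        GIT_WORK_TREE=/srv/www/site git checkout -f master   # publish step
                        git push server master                              # copy on my server
                        git push github master                              # github mirror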

                        Regarding unpublishable things: I just stick them in a directory called /unfinished which isn’t linked to from other pages. When something’s finished I’ll move it to a location which is linked to (either /blog or /projects).

                        1. 1

                          Nice, that’s clever; I think I’m going to adopt/steal that approach!

                      2. 2

                        Having a Makefile or similar is a really good idea:

                        Make is an amazingly powerful tool as long as you don’t stray too far from its core competency of turning $this_file.a into $this_file.b and building a graph of the dependencies and processes for doing that. When your Makefile has more dummy targets than real ones, that’s a good sign you should have just written a shell script instead. (I’m looking at you, Pelican.)
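
                        That is, Make shines on file-to-file rules shaped like this sketch (pandoc standing in as the converter):

                        # one output file from one input file; the recipe line starts with a tab
                        %.html: %.md
                        	pandoc --standalone -o $@ $<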

                        1. 1

                          Oh sure, by “or similar” I just meant a single command, needing no arguments, to build+test+push+etc., which can be easily extended. A script will do, or a complex build system du jour will do; although Make is (probably) fine.

                          I used to use Make for my site (which I render from Markdown using Pandoc), but ended up with two problems:

                          • The Makefile became very complicated, as I tried to avoid repeating myself by calculating filenames, dependencies of index pages, etc. It seemed to work, but I had to learn a lot about (GNU) Make’s special variables, evaluation order, multiple-escaping, recursive invocation, etc.
                          • The output directory would sometimes be stale, for reasons I couldn’t figure out; so I got into the habit of deleting it first, which loses the only real advantage of Make. This is especially bad since some pages take a long time to render, since they do a bunch of computation during the rendering.

                          I now use Nix, since I was already using it for per-page dependencies, and its language is saner than Make.

                        2. 2

                          Another way to do this would be to just have the Makefile test whether the VCS is in a clean state. If it’s not, you can exit and fail to run with a message “Hey you! commit first!”, or just a warning if you want :)

                          a git example:

                          git status | grep clean
                          

                          will exit 1 if “clean” is not found, and Make will then of course exit as well. I’m sure there is a better way, but the above works quite well in practice. (Obviously there are edge cases, like having a new, not-yet-committed file named “clean”.)
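
                          For what it’s worth, git’s plumbing can make that test directly, which dodges the grep edge cases for tracked files:

                          # exits non-zero when tracked files have uncommitted changes
                          git diff-index --quiet HEAD --

                          (Untracked files still slip through; checking that git status --porcelain prints nothing is the stricter test.)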

                        1. 0

                          Wow, Blue’s is still around…

                          1. 2

                            The same reason nearly every proprietary software company targets Windows first or Windows exclusively: if there’s a PC on someone’s desk or in their lap, statistically speaking, it’s probably running Windows. Cross-platform compatibility is expensive and time-consuming to do, and video games are some of the most complicated software made. There’s no such thing as “well, just use this API and you get cross-platform for free”, and those that advertise it are selling something very limited and not useful to serious game dev.

                            Also, while there is a large overlap between gamers and computer geeks, no video game company wants to alienate the non-geek fans holding the money. Simplifying the supported platform down to a certain CPU, OS, and video card makes this possible.

                            1. 3

                              There’s no such thing as “well, just use this API and you get cross-platform for free”, and those that advertise it are selling something very limited and not useful to serious game dev.

                              Is anyone advertising cross-platform development “for free”? If you’re talking about SDL, Unity, Unreal, etc. I can confirm that they really aren’t very limited whatsoever.

                              1. 1

                                Also, the parent poster only talks about the downsides of cross-platform development, while there are definite upsides as well, especially with regard to code quality. But code quality is not considered very important, or paid much attention to, in games development.

                            1. 6

                              PHP gets a lot of crap from the people I currently work with, but I really never had a problem with it. I’m more of a sysadmin type, not a strong coder. I don’t pick up languages easily but I always found PHP to be pretty straightforward to work with. There was an MVC framework for PHP called CodeIgniter that was excellent… even someone with my limited ability was able to set up a functional CRUD app within about 30 minutes of sitting down with it.

                                I’m convinced that 99% of PHP’s poor rep comes from the sheer amount of bad code written with it, simply because it lowered the barrier to web programming so dramatically. Believe me, I’ve had my share of seeing (and supporting) crappy PHP. But I’ve also seen some very elegant frameworks and libraries written with it.

                              1. 3

                                  Personally, I dislike the weird syntactical things and inconsistencies more than anything. Concat with a .? Instance calls with instance->method()? Static methods with Class::method()? camelCase() or snake_case() in the standard library? explode() and implode() instead of split() and join()? And what is up with the use Something\SomethingElse syntax? I just don’t get it.

                                1. 3

                                  The . operator for concatenation and the -> method calls are Perl legacy and Perl got that from C/C++.

                                  1. 1

                                    I know where they come from, but it doesn’t make it any better

                                    1. 1

                                      It doesn’t matter, it honestly doesn’t. I always see a ton of bike shedding among holier-than-thou developers against PHP and it always gives me a bad impression of them. It’s like watching people making fun of fat people at a gym.

                                      1. 1

                                        For one, that analogy makes no sense at all. Having an opinion on the syntax (saying I don’t like them) doesn’t mean you have to agree with me. It also has nothing to do with making fun of anyone or anything. It’s a personal preference.

                                        1. 1

                                          Looks like I replied to the wrong thread. We all have our preferences and that’s fine. What I was referring to in my comment is the disgusting way the developer community dog-piles on a language largely out of personal preference; I have personally witnessed it alienate good people and I have had enough.

                                  2. 3

                                    I don’t mind the things when PHP is inconsistent with other languages. I hate when it’s inconsistent with itself. Why array_map, count, and strlen? Why is stdClass spelled like that, when DateTime is spelled like that?

                                    1. 1

                                      That’s more of what I was getting at in the camelCase vs snake_case thing. It really makes no sense, there’s no rhyme or reason about why things are the way they are. I would think at some point, they’d look at their standard library and say “let’s actually put the standard in this.” It would be backwards incompatible, sure, but they could alias and deprecate for a few versions to make it easier on slow pokes. It’s just disappointing that it’s still a cluster.

                                      I built my NHL94 stat tracking site with PHP because I had buddies who said they’d help and it’s all they knew (spoiler: they didn’t help), but that was the last major thing I did with it (2013). I pretty much refuse to use it for anything if I have a choice in the matter. There are langs that do everything it does and more that are put together better.

                                1. 14

                                      Just some feedback on that README: all those animated GIFs would be a lot easier to read if they were just plain text:

                                  $ some_command
                                  output
                                  
                                  $ other_thing
                                  more output
                                  

                                  It’s just really hard to follow with all the animation, and you can’t look at anything for more than what feels like half a second before the text disappears.

                                  1. 7

                                    I’ll make the change. Thanks for the feedback.

                                    1. 5

                                          Yes. Animations and videos are never the way to go for answering your audience’s very first question of “what the heck is this and why should I care?” If your README.md says “play this video to learn about FooBar”, you’ve already lost me as an interested user.

                                      1. 2

                                        Personally, I agree, but there are people who would rather watch a video.

                                        1. 3

                                              I am in the video camp; my compromise in the end was a single gif followed quickly by text. Hopefully that keeps everyone happy now.

                                      2. 2

                                        Not to mention that they’re not accessible to people who use a screenreader.

                                        1. 2

                                          That is an excellent point that I had not considered. Hard to get out of your own bubble.

                                      1. 8

                                        It depends largely on how many network interfaces you need. If this box will be your firewall/router too, the APU2 board https://www.pcengines.ch/apu2.htm ticks all of these boxes:

                                        • 3x independent gigabit interfaces
                                        • 64-bit x86 CPU with AES-NI instructions (which lets you run any Linux/BSD your heart desires)
                                        • cheap
                                        • small
                                        • low-power
                                        • fanless
                                        • hacker-friendly

                                              If you only need 1 network interface, then the sky is the limit. Virtually any used x86-64 hardware manufactured in the last decade can work, so hit up your friends and family for their old netbooks and whatnot. If you want to buy new, there are Intel NUCs, NUC knock-offs, Chinese-made embedded systems on Amazon (one brand name of these is QOTOM), low-power ATX mobo+CPU combos off of Newegg, old Chromeboxes, and the new $35 Atomic Pi.

                                              And those are just the x86 options. If speed is not super important, the Raspberry Pi can work okayish. There are some pretty decent higher-powered ARM devices on the market now, especially in the $50-$100 price range. If I were to buy one of those, I would try to get one supported by Armbian: https://www.armbian.com/

                                        1. 2

                                                I currently have the old ALIX board, which has the advantage of Power over Ethernet; that’s missing from the APU2 boards.

                                          1. 1

                                                  +1 for the APU line of stuff. Initial setup over serial can be a pain, but they are uniquely capable and well made.

                                            1. 1

                                                    If you only need 1 network interface, then the sky is the limit. Virtually any used x86-64 hardware manufactured in the last decade can work, so hit up your friends and family for their old netbooks and whatnot. If you want to buy new, there are Intel NUCs, NUC knock-offs, Chinese-made embedded systems on Amazon (one brand name of these is QOTOM), low-power ATX mobo+CPU combos off of Newegg, old Chromeboxes, and the new $35 Atomic Pi.

                                              That’s a great point. I’ve seen old Intel Atom NUCs around $50 on eBay, those might fit the bill.

                                            1. 10

                                              Does this mean we’re gravitating towards a de facto Linux monoculture? As much as I’m happy it’s based on a GPL kernel, I’m also feeling a kind of sadness over reduction of diversity… Even though Windows was already really more or less POSIX-like, at least compared to some OSes from earlier days. Now, apart from fringe/research/hobby OSes (though these cannot be completely ignored, fortunately), the only thing I can think of that feels like it may reasonably flirt with the mainstream would probably be Fuchsia?

                                              1. 13

                                                GuixSD on GNU Hurd or bust.

                                                1. 7

                                                        I don’t think we will see Windows fall by the wayside; we will see POSIX go.

                                                        I think what we will see is the birth of Linux as the de facto cross-platform standard. SmartOS did it (Linux compat), FreeBSD did it (Linux compat), and Windows did it (WSL). All the cloud/VM vendors stop at Linux compat, and maybe Windows compat. Occasionally one will specialize for macOS as well (Parallels comes to mind), but most basically stop at Linux support.

                                                  Sure, the FreeBSD one isn’t all caught up to 4.19 standards, but neither was WSL1. SmartOS mostly is caught up. WSL2 will be caught up, and likely stay that way.

                                                        I think this isn’t really a knock against the WSL1 approach so much as they didn’t want to spend so many resources babysitting the Linux changelog, which tends to be massive. WSL2 should make handling the Linux changelog much, much easier… one hopes :)

                                                  I’m not sure I’m personally in favor of POSIX dying, but I think that’s the future here. Linux compat won.

                                                  1. 3

                                                    The NT kernel is not going anywhere. They’re just shipping a papered-over VM + a patched Linux kernel with Windows.

                                                    1. 6

                                                            In theory, yes; but I’ve seen an argument here and there that “meh, it seems we don’t really have to support Windows, as it’s got WSL, so we can just keep coding only for Linux”. Extending this some time into the future, I would expect the NT kernel might fade into insignificance. (A.k.a. become commoditized, into “just one more semi-hardware platform for Linux”.)

                                                      1. 7

                                                              Extending this some time into the future, I would expect the NT kernel might fade into insignificance

                                                              Not likely. The only reason Windows has survived into the twenty-teens is Microsoft’s muscling of Windows into enterprises and onto consumer PCs in the past, plus a fanatical devotion to maintaining compatibility with software compiled back in the mid-1990s. If Microsoft as a company were starting to falter under its own Jurassic weight, there might be some merit to the argument as competitors (Apple) swooped in to claim the abandoned marketshare, but that’s not the case. The company is stronger than ever.

                                                        My crazy, wild, out-there prediction is that it won’t be too long before Microsoft simply starts giving away Windows to consumers. They do about a billion in revenue from consumer Windows, almost all of that via OEMs presumably. So giving it to end-users will not hurt anything but would do wonders to encourage adoption among the next generation of developers. These new coders will probably have no particular loyalty to Apple when Windows is just as good for web development now that it ships with a fully functional copy of Linux under the hood.

                                                        1. 5

                                                          As an individual developer, that’s how I feel about it. I’m not interested in developing or maintaining a Windows front end for my Mac emulator; it’s enough to know that it works in a Linux VM.

                                                          This is another instance of the same logic that gives us “We don’t need to make a native desktop app because you can use our website in a browser.” If supporting Windows by supporting Linux is cheating, then so is shipping an Electron app.

                                                          1. 4

                                                            The thing is, I don’t want nor intend to slap some quick label on it, like “cheating”. It’s just that I see some awesome advantages of the situation, as well as some maybe not so obvious …disadvantages? questions? concerns? It definitely puts me in some kind of a meditative/pensive mood, and I wanted to share those… observations? as I find them interesting, and think they may be unexpected and thought provoking to others. That’s also kinda why I tried to emphasize this less obvious side of things.

                                                          2. 4

                                                                Extending this some time into the future, I would expect the NT kernel might fade into insignificance.

                                                                I saw this sentiment on HN, too. I don’t buy it, though. The main customers of the Windows kernel are consumers, businesses, and huge businesses (enterprises). Windows is superior to Linux for consumers since it has a better user experience. Its main competitors are tablets and netbooks for those who mainly use browsers. The gamers will continue to drive sales. Then there are workstation users like A/V, who mostly split between Windows and Mac, with more transitioning to Windows over time.

                                                                On the business side, they basically need a lot of desktops, servers, COTS apps, and custom apps. The apps were probably built to work closely with the Windows kernel, since that’s the path of least resistance. Lots of the Microsoft and 3rd-party tech will have obscure protocols and data formats to make escape to, or integration with, competitors harder. That’s called intentional lock-in. Even without it, lots of the companies (esp. enterprises) will have built huge stacks of software that they can’t port without risk of major losses. Microsoft keeping strong backward compatibility means choosing them is low- to no-risk. That rigged choice is unintentional lock-in.

                                                                Microsoft’s billions of dollars’ worth of lock-in to things strongly coupled with the Windows kernel means it isn’t going anywhere for a while. The only thing that could negatively affect the kernel is copyright/patent reform that lets people build and sell bug-for-bug-compatible implementations of any software. Then a project like ReactOS might get massive investment from governments or companies that don’t want to be dependent on Microsoft. Meanwhile, companies are dependent and can’t afford to move, so they’ll stay on there, much like they and others have stayed on mainframes for decades. Yeah, people said all this stuff about mainframes and AS/400s, too. Still highly profitable businesses due to lock-in.

                                                            1. 1

                                                              Ah, I follow now.

                                                          3. 2

                                                            This particular development doesn’t mean that, no. That is already well underway on the ‘server’ end. Despite the relative merits of other UNIX-like OSs, the majority will use Linux. It’s down to who knows it and who you can hire. The choice is between distros.

                                                            Microsoft know this and have embraced it. .NET is now a first class citizen on Linux. Now Linux will be a first class citizen on Windows. For developers, this is excellent. Server OS running perfectly (with Docker!) alongside your workstation OS? Just brilliant. Maybe this will tempt some away from their Macs.

                                                                I was all ready to use W10 as my workstation OS and thought the plans looked perfect, but the quality issues (sorry, but I’m not waiting for that Start menu to open!) and the advertising have kept me coming back to MacOS.

                                                            1. 2

                                                              You might even say they’ve embraced and extended it…

                                                            2. 2

                                                              Does this mean we’re gravitating towards a de facto Linux monoculture?

                                                              I think the answer is yes, but I’m hoping the answer is no. Apart from competition being a good thing, I also hope the future is something based on message passing, not a traditional kernel.

                                                              1. 1

                                                                MacOS is still BSD-based.

                                                              1. 9

                                                                I’m glad they mentioned that the filesystem operations will be faster. I wish they would explain further as to how they achieved that. This was my biggest concern using WSL last year while I was working on a Magento 2 site. Serving pages was slow, DI compilation was slow, JS build scripts were slow, everything was slow.

                                                                1. 16

                                                                  I’m guessing it’s probably the obvious route given the description they use – Linux gets its own block backing store like a regular VM and manages its own cache just like a regular disk, and all the ugly coherence issues are completely dodged. Nobody ever needed a 1:1 filesystem translation in the first place, and they’re too hard and/or impossible to build without suffering the woeful perf issues WSL1 had

                                                                  Really sad they’re giving up on the WSL1 approach – an independent reimplementation of Linux had immense value for all kinds of reasons. Must be heartbreaking for a lot of people who worked so hard to get it to where it was

                                                                  Anyone care to place wagers on how long it’ll take after release before it’s hacked to run FreeBSD? :)

                                                                  1. 15

                                                                    Must be heartbreaking for a lot of people who worked so hard to get it to where it was

                                                                    I thought the same thing! I’m also a bit bummed because the current implementation makes use of the “personalities” feature of the NT kernel, whereas the new one is a VM.

                                                                    1. 10

                                                                      It is a bit of a shame indeed. Also a good example of Agile development. I suppose the initial developers thought it would be a nice way to go, so they tried it out and shipped it. I suppose they couldn’t be sure in advance that the file system would be that much slower, or that people would be that concerned about it. Now that they know that, they had to switch over to a VM to get the performance level that people demanded. Too bad, but hard to predict ahead of time.

                                                                      1. 6

                                                                        Nobody ever needed a 1:1 filesystem translation in the first place

                                                                        Honestly, that was the one reason I would find WSL interesting: seamlessly sharing data between Linux and Windows programs.

                                                                        1. 3

                                                                          an independent reimplementation of Linux had immense value for all kinds of reasons

                                                                          There’s nothing really stopping someone else from doing it (I encourage them to call it ENIW).

                                                                          For what it’s worth, I believe FreeBSD still has a Linux emulation layer.

                                                                          1. 2

                                                                            I believe SmartOS does too.

                                                                          2. 3

                                                                            Yeah this was the logical conclusion of the WSL team initially targeting ABI compatibility with Linux. Diversity of implementation would probably have had a better chance of surviving if the WSL team targeted POSIX source compatibility, like macOS. That would have given them more wiggle-room.

                                                                            That’s not to say the WSL team made the wrong decision to target ABI compatibility, they likely have different goals for their corporate customers.

                                                                            1. 6

                                                                              Their goal wasn’t an OS though, they want to use the old Mac OS X argument: you can run all your Windows programs on your Mac, so your only machine should be a Mac.

                                                                              Just swap Mac and Windows.

                                                                              1. 2

                                                                                If they just wanted POSIX source compatibility, they could have continued development of the POSIX subsystem / SFU. Apparently there’s something to be gained from being able to run unmodified Linux binaries (otherwise we’d just have Windows Ports). My guess is that the goal is to grow Azure – it’s likely to be a bigger revenue source than Windows in the future.

                                                                              2. 3

                                                                                Really sad they’re giving up on the WSL1 approach – an independent reimplementation of Linux had immense value for all kinds of reasons. Must be heartbreaking for a lot of people who worked so hard to get it to where it was

                                                                                    It seemed like it involved a lot of kludges bolted onto NT though – to the point that it seemed to me the easier approach was refurbishing the old POSIX subsystem and making it feel modern. There was a lot of (wasted) potential there.

                                                                                1. 1

                                                                                  Anyone care to place wagers on how long it’ll take after release before it’s hacked to run FreeBSD? :)

                                                                                  It sounds like it’s a modified Linux kernel, so it would probably also have to be a modified FreeBSD kernel.

                                                                              1. 3

                                                                                These two top my list because although there are a lot of options, I haven’t yet found anything that does exactly what I want in either category:

                                                                                • Household expense tracker that has the reporting I want and the simplicity my wife wants
                                                                                • Photo archive with very flexible organization options and a ridiculous amount of metadata attached to individual photos. I recently discovered Amazon Photo and that’s pretty close but then all of my photos belong to Amazon.

                                                                                I’ll get around to them soon. No, really. This time for sure.

                                                                                1. 3

                                                                      looks much too complicated

                                                                                  1. 2

                                                                        While I have not used it myself, https://mailinabox.email/ is useful to some people and arguably easier to set up.

                                                                                    1. 1

                                                                          yes, that looks more doable

                                                                                      1. 1

                                                                                        I set up Mailinabox in 2015 on a Vultr box and, aside from upgrades, it has had zero downtime! Love the project

                                                                                      2. 1

                                                                          If you just want an email address and have no desire to learn how mail on the Internet works, yes, it is much too complicated and you should not bother. You are better off buying your email account from a provider and paying for it either in cash (e.g. fastmail) or in personal data (gmail). Or deploying one of the free email-in-a-box solutions to your VPS and hoping that it does what you want.

                                                                                        If, however, you want to understand how mail works on the real internet and the challenges involved in making it work well, then there is no substitute for running your own mail server and its subsystems. The same way you can’t really learn to program without actually doing a fair amount of it.

                                                                                        1. 1

                                                                                          I agree. If anything, this article convinced me to never, ever even attempt to run my own mail server.

                                                                                          1. 1

                                                                                            Poste makes all of this much easier.

                                                                                            1. 2

                                                                                poste.io looks doable

                                                                                          1. 5

                                                                                            Wow, this is a nice walkthrough of a setup that’s very similar to mine.

                                                                                I just spent part of my weekend grokking and getting DKIM and DMARC working on my personal mail server. The motivation was that Gmail suddenly decided to start sending all mail from my domain into people’s spam folders, and a large number of the people that I email use Gmail. A little while after I got these working, mail from me started going to inboxes again. (I have a couple of unrelated Gmail accounts to test with.) I always thought that just SPF would be good enough for a small-time single-instance mail server like mine, but apparently that’s not true anymore.
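
                                                                                For anyone heading down the same path: all three mechanisms boil down to DNS TXT records, shaped roughly like this (selector and policy values illustrative):

                                                                                example.com.                  TXT  "v=spf1 mx -all"
                                                                                mail._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<base64-encoded public key>"
                                                                                _dmarc.example.com.           TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"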

                                                                                            Right now I use a third-party spam filtering service that does a terrific job. I almost never get false negatives and only get one or two false positives a month. Is rspamd comparable out of the box or do you have to spend a lot of time training it?

                                                                                            1. 2

                                                                                              What third party spam filtering service are you using?

                                                                                              1. 1

                                                                                                Rspamd needs a bit of training volume before it starts attempting to classify messages based on the probabilistic filtering, but it isn’t much. I was happy with it after about a month, and I don’t send or receive much mail.

                                                                                                The other antispam measures it uses were enough to block most of the spam in the meantime.
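
                                                                                    For reference, the manual training commands are just (message paths illustrative):

                                                                                    $ rspamc learn_spam spam/msg.eml
                                                                                    $ rspamc learn_ham ham/msg.eml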

                                                                                              1. 2

                                                                                                “passive revocation” === expiration

                                                                                                1. 2

                                                                                                  I think everybody knows what expiration is and how it’s implemented. The point is that expiration can also be used deliberately as a revocation policy.
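
                                                                                      Short-lived credentials are the textbook case; e.g. signing an SSH certificate that simply ages out on its own (names and lifetime illustrative):

                                                                                      $ ssh-keygen -s ca_key -I alice -n alice -V +1h id_ed25519.pub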

                                                                                                  1. 1

                                                                                                    Yeah this title seems to be a good candidate for a moderator to fix.

                                                                                                  1. 3

                                                                                                    It sounds like you want a wiki!

                                                                                                    I use Dokuwiki to organize my own notes (a.k.a. my second brain). It has its own markup syntax. I’m not super thrilled with it anymore but it works fine if you don’t need anything fancy and is very easy to set up.

                                                                                                    At work, our tribal knowledge is in Mediawiki. It’s not trivial to install and get going, but not super difficult either. Lots of extensions, lots of extensibility. Easy to use and has WYSIWYG editing these days, so you don’t normally have to know its markup in order to use it.

                                                                                                    I really want to try BookStack but haven’t gotten around to it yet.

                                                                                                    1. 2

                                                                                                      I’m assuming he means AMD64, not the old Titanium x64 ;)

                                                                                        Although OP is no doubt happy with their Rube Goldberg setup, all of the arguments against an AMD64 box fall apart in light of modern Intel-based SBCs like the LattePanda. As /u/whjms pointed out, NUC power consumption can be pretty low too. My i5 NUC is whisper quiet and low power.

                                                                                                      The key problem with ARM SBCs isn’t CPU grunt, but RAM. That’s getting better with things like Jetson TX2 but they’re crazy expensive for normal use.

                                                                                                      Don’t get me wrong, it’s a good post on how to set up a decent ARM cluster though.

                                                                                                      1. 3

                                                                                                        When you start talking about price vs performance and power utilization for a single box, you’re going to do a lot better with something like a NUC, or an el-cheapo ATX motherboard and case, some Chinese x86-64 box, or even a 5-year-old laptop with its lid closed.

                                                                                                        But when you specifically want a cluster for whatever those reasons might be (redundancy, scalability, science), ARM starts to make more sense since the SBCs for ARM are small, cheap, and power-efficient. Between 2-8 GB of RAM and 4 CPU cores per cluster node is pretty usable for a lot of applications.

                                                                                          There are small-ish x86 SBCs, but they usually don’t compete with ARM in terms of price and power efficiency. The closest one I’ve seen so far is the recently-released Atomic Pi for $35. The price point is right, but it draws about twice as much power as a similar ARM SBC, from what I can tell. (There is also some speculation that all of these Atomic Pi units came from a manufacturing run ordered by a major car company which then cancelled the product. That would mean they were bought at auction for a song, and once they sell out, there probably won’t be any more.)

                                                                                                        1. 2

                                                                                                          I assume you mean Itanium not Titanium.

                                                                                            ARM SBCs still use many times less power, cost many times less than a NUC, take up less physical space, and generate less heat. It just depends on what your use case requires. Intel NUCs are an absolutely wonderful option as well, but so is an SBC cluster, depending on what you’re doing with it.

                                                                                                          1. 1

                                                                                                            I assume you mean Itanium not Titanium.

                                                                                                            Yup, thanks for the correction or I’d spend the rest of my life calling it Titanium. I haven’t seen one in a very long time…

                                                                                                            1. 4

                                                                                                              we used to call it the Itanic.

                                                                                                          2. 1

                                                                                                            The key problem with ARM SBCs isn’t CPU grunt, but RAM

                                                                                                            ROCK64 and ROCKPro64 come with up to 4GB of DDR4. I don’t think they’re dual channel though :(

                                                                                                            1. 1

                                                                                                              The board in the original post (Odroid N2) also has 4GB DDR4. Some SBCs even have regular dual-channel DDR4 (laptop) RAM slots (Odroid H2).

                                                                                                          1. 5

                                                                                                            Was IE6 frustrating to design for after it lost the lead in standards support while retaining the lead in users?

                                                                                                            Yes, surely.

                                                                                                            Is it wrong to exclude a useragent just because it is frustrating to design for and you are too rushed to write fallback code?

                                                                                                            In my opinion, yes. Because that takes away from the Web’s goal of accessibility for all.

                                                                                                            What if your visitor has no choice but to use IE6?

                                                                                                            We should learn from this story… But what?

                                                                                                            1. 4

                                                                                                              Perhaps IE6 might have been good because it held back the web as a platform for a while; it depends on how bleak your view of the web is.

                                                                                                              1. 6

                                                                                                                What if your visitor has no choice but to use IE6?

                                                                                                                What if a visitor has no choice but to use netcat without a pager? Do you ensure your HTML will fit in a short scrollback buffer?

                                                                                                                1. 1

                                                                                                                  What if a visitor has no choice but to use netcat without a pager? Do you ensure your HTML will fit in a short scrollback buffer?

                                                                                                                  Uh, I’ll bite. Who is browsing the web in 2019 with netcat?

                                                                                                                  Follow-up question: How would you watch a video on youtube using netcat?

                                                                                                                  1. 2

                                                                                                                    Follow-up question: How would you watch a video on youtube using netcat?

                                                                                                                    I’ll bite. What if they just read the comments?

                                                                                                                    1. 4

                                                                                                                      Then the joke’s on them. Never read the comments!

                                                                                                                    2. 1
                                                                                                                      Neo's eyes light up as he steps closer to the screens
                                                                                                                      that seem alive with a constant flow of data.
                                                                                                                      
                                                                                                                                          NEO
                                                                                                                                Is that...?
                                                                                                                      
                                                                                                                                          CYPHER
                                                                                                                                The Matrix?  Yeah.
                                                                                                                      
                                                                                                                      Neo stares at the endlessly shifting river of
                                                                                                                      information, bizarre codes and equations flowing across
                                                                                                                      the face of the monitor.
                                                                                                                      
                                                                                                                                          NEO
                                                                                                                                Do you always look at it encoded?
                                                                                                                      
                                                                                                                                          CYPHER
                                                                                                                                Have to.  The image translators
                                                                                                                                sort of work for the construct
                                                                                                                                programs but there's way too much
                                                                                                                                information to decode the Matrix.
                                                                                                                                You get used to it, though.  Your
                                                                                                                                brain does the translating.  I
                                                                                                                                don't even see the code.  All I
                                                                                                                                see is blonde, brunette, and
                                                                                                                                redhead.  You want a drink?
                                                                                                                      
                                                                                                                      1. 1

                                                                                                                        That was a hyperbolic reaction to ‘what if your visitor has no choice but to use IE6’.

                                                                                                                         IE6 has been dead and buried for years; barely anything on the internet still works in IE6 (for good reasons, e.g. IE6 doesn’t support any good ciphers for HTTPS, and you don’t want to allow downgrade attacks).

                                                                                                                        1. 3

                                                                                                                           This comment is anachronistic. IE6 wasn’t “dead and buried for years” in 2009, when YouTube displayed this banner. The chart in the article shows it had about 25% market share, which is pretty significant!

                                                                                                                        2. 1

                                                                                                                           Not sure if this would work on YouTube, but you could extract the link to the .mp4 file, download it, and then use a textmode video viewer to watch it; something like the sketch below.

                                                                                                                          You could also review stuff like the video description.
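
                                                                                                                           A rough sketch of that idea, assuming yt-dlp (or the older youtube-dl) and mpv are installed; the video URL and filename here are placeholders, and YouTube’s extraction details change all the time:

                                                                                                                               import subprocess

                                                                                                                               # Hypothetical video URL; yt-dlp resolves and fetches the actual media stream.
                                                                                                                               url = "https://www.youtube.com/watch?v=XXXXXXXXXXX"
                                                                                                                               subprocess.run(["yt-dlp", "-f", "mp4", "-o", "video.mp4", url], check=True)

                                                                                                                               # mpv's tct output renders the video as coloured characters in the terminal.
                                                                                                                               subprocess.run(["mpv", "--vo=tct", "video.mp4"], check=True)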

                                                                                                                        3. 1

                                                                                                                           I would like to accommodate them somehow. I have not thought about this problem much so far.

                                                                                                                           But I have tested and verified basic functionality of my site with Links, Lynx, with JS disabled, IE8 (IE6 testing to come), and NN3.

                                                                                                                        4. 2

                                                                                                                           The IE6 problem was, in part, self-inflicted. When IE had >95% of the market share, the standards should just have been modified in several key areas to match it, like the different box model. IE was the standard, not some stuffy document written by some guys at W3C.

                                                                                                                           Quite a few of these items were just arbitrary choices, without a clear, objectively “better” way to do things. Funnily enough, everyone now recommends that you do * { box-sizing: border-box }, which is IE’s old box model :-/

                                                                                                                           There were some other pain points too, but this was often the biggest, and also really hard to work around.

                                                                                                                          1. 3

                                                                                                                            You have an argument for the box model.

                                                                                                                             A lot of other IE6 behaviors, however, were simply nonsense. Floats and margins could be applied twice, or applied a second time only to the first line of the affected text; text could be duplicated; a box could change size when the only CSS property you changed was its color.

                                                                                                                             There’s plenty of nonsense in the web specifications, too (the way inline borders work, for example, is just stupid). But let’s be real here: most of IE6’s spec violations were buggy behavior that nobody would ever want.

                                                                                                                            1. 1

                                                                                                                               To clarify: I wasn’t advocating that the W3C specs should be rewritten to be “bug compatible” with IE. You are correct in pointing out that there were also many genuine bugs, but IMHO the webdev community made their lives harder by stubbornly sticking to the W3C standards instead of accepting a few changes that were genuine differences of opinion, the box model being the most obvious one (and also the most frustrating one; it was the #1 problem I had with even pretty trivial websites).

                                                                                                                               Another example might be IE’s attachEvent vs. W3C’s addEventListener, and probably a few more.

                                                                                                                            2. 2

                                                                                                                               To some extent, this is what Chrome is doing now. While not rewriting standards, they are shipping bleeding-edge, unapproved standards (that often change later), resulting in sites that “work only in Chrome”, much like the old “Best viewed in IE6” stickers.

                                                                                                                               Chrome is at least (fairly) consistent, whereas IE was, as the article also explains, unpredictable in so many ways. But there is a reason HTML5 was published along with a “standards-compliant way of parsing HTML” in code, and it helped everyone in the long term.

                                                                                                                              1. 1

                                                                                                                                I don’t think the situations are that comparable, as IE6 grew out of the “standards war” with Netscape. The situation in the mid/late 90s was much more chaotic than it is now. If you think IE sucked then try Netscape.

                                                                                                                            3. 2

                                                                                                                               As we have seen from the aftermath of this story, very few people truly have “no choice” but to use IE6. They may be strongly encouraged to use it by various forces, but sufficient forces in the other direction can still trigger mass migrations.

                                                                                                                               And the “frustrating” and “rushed” wording seems to be an attempt to frame the developers as lazy. I think this is a misleading frame. Of course we/they can make anything work in anything, given enough time. But time spent creating hacky workarounds for buggy old browsers used by a small minority is time not spent implementing useful features that everyone else can use.

                                                                                                                              1. 1

                                                                                                                                 It’s history. Despite the impression textbooks give, not all history comes with a tidy list of lessons-learned bullet points.

                                                                                                                                1. -1

                                                                                                                                  after it lost the lead in standards support

                                                                                                                                   IE6 never led in standards support; that was the whole problem. Microsoft implemented whatever they felt like from the standards or de facto usage, and there was no consistency in what was or was not supported. Some things worked well, some not at all, and some were obviously buggy and behaved exactly the opposite of how they were supposed to, and MS gave zero fucks about any of it. Their only concern was that their web browser shipped with their OS, and that’s the sole reason for its popularity. There was no incentive for Microsoft to cater to web developers until Google came along and started pushing Chrome onto the world.

                                                                                                                                  Is it wrong to exclude a useragent just because it is frustrating to design for and you are too rushed to write fallback code?

                                                                                                                                   Designing a website for IE6 was not the same as today’s mixed JavaScript environment, where you just load a polyfill or whatever to get the functionality you want. Many very basic CSS and JavaScript features were often entirely missing (or worse, and more usually, entirely broken), meaning you literally could not make your site work and look the same in IE as in all other browsers without simply dropping a whole bunch of functionality.

                                                                                                                                  Because that takes away from the Web’s goal of accessibility for all.

                                                                                                                                   If that’s the angle you want to take, lack of support for accessibility standards was yet another of IE6’s major failings. I agree that sites should be designed to degrade the “experience” gracefully based on the capabilities of the user agent, but here in reality, marketing departments want their sites to look a certain way, and the half a percent of their users with a non-mainstream browser aren’t at the top of the priority list.

                                                                                                                                  1. 3

                                                                                                                                     There was no incentive for Microsoft to cater to web developers until Google came along and started pushing Chrome onto the world

                                                                                                                                     It wasn’t Chrome that did it; Google advertised Firefox on the most popular web page in the world, well before Chrome existed:

                                                                                                                                    • Google advertised Firefox in April 2006.
                                                                                                                                    • Internet Explorer 7 was released in October 2006.
                                                                                                                                    • Google Chrome was released in September 2008.
                                                                                                                                    1. 3

                                                                                                                                       IE6 never led in standards support; that was the whole problem.

                                                                                                                                       IE actually was pretty good at standards from 1997-2001, especially relative to Netscape’s floundering at the time. Its CSS implementation was superior, especially since Netscape 4.x just compiled CSS to JavaScript. The problem was that Microsoft squandered and abused their lead and monopoly.

                                                                                                                                  1. 9

                                                                                                                                     I take it a step further and try to buy used or refurbished when I can. It saves tons of money, and I never deal with beta-quality releases. I don’t do anything (e.g. gaming) that requires the latest and greatest hardware. The pace of hardware advancement today is nothing like it was 15 years ago, when a 2-year-old machine was effectively a doorstop. My daily driver at home is a 5-year-old Dell Latitude with 4 cores, 16 GB of RAM and a 500 GB SSD. I plug it into a fancy dock with all the ports, and it has left me wanting for absolutely nothing so far.

                                                                                                                                    1. 2

                                                                                                                                      Hey, even gaming is fine on old computers - just pull a https://www.xkcd.com/606/.

                                                                                                                                      1. 4

                                                                                                                                        Christ… that comic came out 10 years ago. Time flies.

                                                                                                                                        1. 3

                                                                                                                                          You laugh, but I really did play Portal for the first time YEARS after it was released, and only because it was free on Steam at the time.

                                                                                                                                      1. 11

                                                                                                                                         You build more successful working (and personal) relationships if you start with the assumption that the other party is trustworthy until they prove otherwise, because statistically speaking, that is likely to be the case. When optimists and pessimists compete for success, the optimists win. Every. Single. Time.

                                                                                                                                         I definitely understand the concern; nobody wants to be taken advantage of. But I don’t think anything you’ve suggested is proof that they’re out to get free work from you. It is absolutely possible that this could be their motive, but any sane hiring manager or business professional understands that these kinds of practices hurt their reputation in the long run.

                                                                                                                                        The first interview also involved a lot of very specific questions along the lines of “we’re trying to solve this problem right now; how would you do that?”, which made me feel like they were just pumping me for solutions to their current problems, rather than interviewing me for a job.

                                                                                                                                        They’re trying to gauge your knowledge of the technology they use and there simply isn’t a better way to do that. They’re trying to see whether you come up with some of the same ideas that they have had so far, or if you have any novel ideas that could be worth looking into. Is there any chance that they will take a few of those ideas and run with them, without hiring you? Absolutely. Any time you give advice for free, that is a risk you take. Most of us will gladly make this trade-off in exchange for the chance to strengthen a potentially valuable relationship.

                                                                                                                                        There’s a common misconception that as tech professionals, our most valuable assets are our skills, knowledge, and experience. This is incorrect. Our most valuable asset is time, and our expertise is what makes the time valuable. Anyone can learn what we know, we are not special snowflakes. The only reason the hiring manager cares about your expertise is because it makes your time more valuable to them than if they had to train all of that knowledge into you. Thus, you have to demonstrate your knowledge to them to show them that your time is more valuable to them than the other applicants’ time is.

                                                                                                                                         Are there even any ways to demonstrate your expertise as a software architect other than to, you know, architect some software? Chances are, if you do a good job on this, the person hiring you will say, “great job solving that problem, we have many more that we would like to pay you to solve.”

                                                                                                                                        1. 2

                                                                                                                                           Amen! I think people tend to overvalue their ideas; what is important is the capacity to turn them into concrete artifacts. The prerequisite, though, is being able to articulate your vision clearly, as an architect can’t build a cathedral by himself.

                                                                                                                                          1. 1

                                                                                                                                            Well put! :)

                                                                                                                                          1. 1

                                                                                                                                            Author here, I’d greatly appreciate any feedback, ideas, critiques! Thanks :)

                                                                                                                                            1. 1

                                                                                                                                               I’m personally not a fan of Python async code, as it adds visual noise in my opinion (the await at nearly every line; see the sketch below), but I can understand why you would choose that model.

                                                                                                                                               Apart from that noise, the task model looks well thought out and approachable. I think that is pretty important in any kind of tool that wants to be an alternative to Puppet, Ansible, and what have you. Part of what made Ansible big is probably the number of modules that volunteers added themselves.
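
                                                                                                                                               To illustrate, here is a tiny generic asyncio sketch (nothing to do with Pitcrew’s actual API; example.com is just a placeholder). The async/await keywords ripple through every function in the call chain:

                                                                                                                                                   import asyncio

                                                                                                                                                   async def fetch_status(host: str) -> str:
                                                                                                                                                       # Every I/O step needs its own await...
                                                                                                                                                       reader, writer = await asyncio.open_connection(host, 80)
                                                                                                                                                       writer.write(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
                                                                                                                                                       await writer.drain()
                                                                                                                                                       status = await reader.readline()
                                                                                                                                                       writer.close()
                                                                                                                                                       await writer.wait_closed()
                                                                                                                                                       return status.decode().strip()

                                                                                                                                                   async def main() -> None:
                                                                                                                                                       # ...and every caller has to become async, too.
                                                                                                                                                       print(await fetch_status("example.com"))

                                                                                                                                                   asyncio.run(main())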

                                                                                                                                              1. 1

                                                                                                                                                 Having people contribute back is key to the success of something like this. While Ansible adopted a third-party repository of modules, my hope with Pitcrew is to make contributions more first-party: think Homebrew, with its “fork and PR” strategy of contributing.

                                                                                                                                                 I’m also struggling with the noise of async code in Python; I really wish it could be more like Ruby fibers.

                                                                                                                                              2. 1

                                                                                                                                                 How does this compare to Fabric? Or Ansible with Mitogen?

                                                                                                                                                1. 1

                                                                                                                                                   I don’t have hundreds of hosts handy, so it’s not easy for me to benchmark, but that would be fun work.

                                                                                                                                                   Fabric seems to use a process per connection, which seems like a downside compared to nonblocking I/O (rough sketch of the nonblocking approach below).

                                                                                                                                                   It looks like it should be quite similar to Mitogen. Having spent much time in the YAML forests of Ansible, I can say I don’t personally want to live there (see: inner platform effect). Also, Pitcrew should be able to support an inverted-control strategy, where you sync the code and then execute it locally on the target, to avoid round-trip latency.
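
                                                                                                                                                   For the curious, the nonblocking model looks roughly like this. A minimal sketch using the third-party asyncssh library as a stand-in (an assumption on my part, not necessarily what Pitcrew does internally; the host names are hypothetical):

                                                                                                                                                       import asyncio

                                                                                                                                                       import asyncssh  # third-party: pip install asyncssh

                                                                                                                                                       async def uptime(host: str) -> str:
                                                                                                                                                           # Each connection is just a coroutine; no per-host process is forked.
                                                                                                                                                           async with asyncssh.connect(host) as conn:
                                                                                                                                                               result = await conn.run("uptime", check=True)
                                                                                                                                                               return result.stdout.strip()

                                                                                                                                                       async def main() -> None:
                                                                                                                                                           hosts = ["web1", "web2", "web3"]  # hypothetical inventory
                                                                                                                                                           outputs = await asyncio.gather(*(uptime(h) for h in hosts))
                                                                                                                                                           for host, out in zip(hosts, outputs):
                                                                                                                                                               print(f"{host}: {out}")

                                                                                                                                                       asyncio.run(main())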