1.  

    This is incredible. I wonder if there is a legal effort to produce specs for Wine to have clean-room code.

    1.  

      Most likely, but it might be good to stay under the radar until the dust settles since Microsoft is already going after the guy who compiled the source code (see Glaeqen’s comment above).

      1.  

        They like to picture themselves as nice and open source friendly.

        But they do not hesitate to enforce copyright on software that is more than 15 years old.

        1.  

          Would you not seek to enforce copyright on a book you wrote fifteen years ago? Or song?

          1.  

            I mean, MS aren’t selling XP any more, while books and songs still have value. I guess the most charitable explanation is that parts of this are still in Windows 10. Still, this angers my inner rms.

            1.  

              XP is (probably) full of source code that MSFT paid other companies for and used with their permission. Even if they wanted to, they probably can’t release a working source tree of Windows XP without getting permission to do so from the other license holders. And for what? Giving people explicit permission to use a product that they no longer are interested in supporting? It’s all downsides.

              Still, this angers my inner rms

              I’m pretty sure RMS would see the unauthorized release of proprietary source code as wrong and unethical.

              1.  

                I’m pretty sure RMS would see the unauthorized release of proprietary source code as wrong and unethical.

                Sorry, but this is my RMS, not yours

                Anyway, I don’t care much really, but no-one is asking MS to support anything or give permission.

                1.  

                  no-one is asking MS to support anything or give permission.

                  Indeed not, this is just a childish prank. Anyone with a cursory knowledge about how software licensing works (both proprietary and FLOSS) will steer well clear of this.

                2.  

                  I’m pretty sure RMS would see the unauthorized release of proprietary source code as wrong and unethical.

                  I have my doubts, particularly if the binaries have been released beforehand.

                  Now, personally, in the case of Windows XP, and considering the number of computers that depend on it (and were abandoned when Microsoft abandoned XP), I believe the regulator should step in and actually force Microsoft to free the source code, in the name of the balance of power between Microsoft and its users.

                  Creator rights and business rights should be protected, but not beyond what’s reasonable. In this situation, the public interest should weigh far more, and the government should act accordingly.

                  This would already be a compromise, an alternative to forcing Microsoft to maintain Windows XP forever. With the freed source, Windows XP users could pool their money to maintain XP themselves.

              2.  

                15 years

                No, as I actually like the EU Greens’ proposal regarding copyright terms (5 years, extendable twice to 15 years by registering and paying a fee).

                15 years is already plenty, in keeping with the original spirit of copyright, which was to give authors a temporary monopoly in the interest of the public domain.

                With excessive copyright terms, the author gets little to no benefit, while the public domain suffers greatly.

                1.  

                  the EU Greens’ proposal regarding copyright terms (5 years, extendable twice to 15 years by registering and paying a fee)

                  Do you have a source for that? A cursory Google shows up nothing of relevance.

                  1.  

                    Unfortunately not. And this was easily 5–10 years ago.

                    I do not know what their current stance is, nor have I seen much activity on the topic (“copyfight”, pirate parties, etc.) in a long time. Which saddens me.

                    I do however see that the greens still seem to care about the topic.

                    1.  

                      OK, I found something related but UK rather than EU.

                      1.  

                        Yeah, that actually meant “life + 14”:

                        The vision then goes on to propose “generally shorter copyright terms, with a usual maximum of 14 years”. By this, we mean that rather than the current maximum of 70 years after the creator’s death, it should only be 14 years after their death. Unfortunately, as written, this appears a bit ambiguous and has caused confusion, so it needs clearing up!

          1. 0

            No one is dumb enough to use some twistedly licensed language. There are too many good ones out there which are free-licensed. Who the hell is going to use this Zen shit?

            1. 6

              Apparently a reasonable number of Japanese people. Proprietary languages are more popular outside of mainstream software engineering and/or the western world.

              1. 3

                The only closed-source compilers I know of are for highly popular languages, C/C++. (I think the D compiler used to be closed-source? But I don’t know if it still is so I’m not counting it.)

                It seems like it’d be very hard for a newish language to gain traction as a commercial product, given the massive chicken/egg problem.

                I’m guessing the attention Zen is getting in Japan may have more to do with Japanese-language docs and marketing. Which is something other language projects could emulate, of course.

                1. 3

                  DMD, the reference D compiler, is now fully licensed under the boost software license. This only happened within the last two years maybe (?) so you’re forgiven for not knowing.

                2. 2

                  You say mainstream but you mean populist: there’s more proprietary software being developed than open source, even if there’s more people using open source than proprietary.

                  1. 1

                    Sure, but they’re using closed-source tooling? Sounds alien to me. Why would you?

                    1. 3

                      It’s more common than you think. When your customers have an iPhone, or you sell consulting for Salesforce, it’s clearly inevitable; but even just considering how many Visual Studio users there are, you have to appreciate how many more people are using closed-source tooling (for something) than open-source.

                      1. 1

                        Seems obvious now in hindsight. Fair point.

              1. 2

                Great stuff. I didn’t know of search/replace. Does anyone here know if there is a flag or combination of tools so I can get truncated matches? i.e. so that the match is:

                $ rg dep
                README.md:
                7: ... Kubernetes *dep*loyment ...
                

                rather than the full line? I do want some context, but if I search a minified file I don’t want the whole line.

                1. 3

                  I think the solution you got in the replies is the best work-around for now, but there is an open feature request for this: https://github.com/BurntSushi/ripgrep/issues/1352

                  1. 1

                    Cool. Thank you.

                  2. 3

                    Printing only the matching part is -o, the same flag grep uses for this.

                    I always use -M 240 to cut off very long lines.

                    You could rg -o '.{0,40}pattern.{0,40}' maybe?

                    Caveat: I only know what like 4% of rg’s flags do so maybe there’s a more direct way
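
                    Spelled out, the trick might look something like this (just a sketch; grep’s -o behaves the same way as rg -o here, and the 40-character windows are arbitrary):

                    ```shell
                    # Print only a window of context around each match instead of the whole line.
                    # Works with rg -o as well as grep -oE; 'dep' is the example pattern.
                    printf 'some very long minified line with a Kubernetes deployment inside it\n' \
                      | grep -oE '.{0,40}dep.{0,40}'
                    # Each printed match is at most 83 characters (40 + 3 + 40), however long the input line is.
                    ```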

                    1. 3

                      Clever trick, matching some amount before and after and then chopping with M. I like it. Cool. Let me see if I can just edit this in myself. I, too, know very few of the flags (and there are so many!).

                      1. 3

                        Thank you! The -M thing was a separate thought, I just have that there with an alias because super long lines mess up my workflow

                    2. 2

                      That’s a use case I haven’t come across before, but I can see why it would be helpful with minified and other such lengthy inputs. The rg -o '.{0,40}pattern.{0,40}' suggestion in another comment seems the best way to do this.

                      If you are on github, you could also ask the community: https://github.com/BurntSushi/ripgrep/discussions

                    1. 4

                      Big fan of all these tools. Honestly, my primary reason for fandom is that the language is much more approachable: easier syntax, easier dependency management, easier compilation. This means I can edit it much more easily. I have insignificant forks of these for local nonsense. This is the good life.

                      I’d honestly never even try to edit find, mostly because the thought of having to write C bothers me. I just know I’ll fuck up memory management somehow and the tool will segfault.

                      Plus, either these guys are superstars or the language lends itself to making the architecture really good. I added an xsv command with some flags to do some stats and it was like 30 mins of work. Then I compile it, stick it in my ~/bin, and I am more powerful than the gods. At the time I barely knew the language; I learnt just enough Rust to make that happen.

                      1. 16

                        I’ve asked this before, but does anyone know why “rewritten in rust” and “overuse of colour and emojis” correlate? I have no need to switch from coreutils, but as someone who disables colours in my terminal sessions, I wouldn’t even want to (with the exception of ripgrep, where I get the technical advantage over rgrep).

                        1. 30

                          I kind of think that “overuse of color and emojis” is a bit of an oversimplification, but I take your meaning. Or at least, I might say, “more thought and care given toward the overall user experience.” However, you might disagree with that, since you might think that colors and emojis actually make the user experience worse. (Although, to be honest, I’m not sure how much these tools use emojis.) With that said, I think it’s at least reasonable to say that many of the “new” tools (and not just Rust) are at least paying more attention to improving the overall user experience, even if they don’t actually improve it for every user. For example, I know for ripgrep at least, not everyone likes its “smart” filtering default, and that is absolutely a totally reasonable position to have. There’s a reason why I always include smart filtering in every short description of ripgrep; if you aren’t expecting it, it is not only surprising but frightening, because it violates your assumptions of what’s being searched. It’s a disorienting feeling. I know it all too well.

                          As for why this is happening, I’m not sure. If we wanted to get hand wavy about it, my personal take is that it’s some combination of lower barriers to entry to writing these kinds of tools and simultaneously providing more head space to even think about this stuff. So that means that you not only have more people entering the space of writing CLI tools, but you also have more breathing room to pay attention to the finer details of UX. This isn’t altogether surprising or unprecedented. Computing history is littered with building new and better abstractions on top of abstractions. As you move higher up the abstraction ladder, depending on the quality of said abstractions, you get more freedom to think about other things. This is, after all, one of the points of abstraction in the first place. And Rust is definitely an example of this IMO. And it’s not just about freeing yourself from worry about undefined behavior (something that I almost never have to do with Rust), but also about easy code reuse. Code reuse is a double edged sword, but many of these applications shared a lot of code in common that handle a lot of the tricky (or perhaps, “tedious” is a better word) details of writing a CLI application that conforms to common conventions that folks expect.

                          I also don’t think it is the only phenomenon occurring either. I think building these kinds of tools also requires tapping into a substantial audience that no longer cares (or cares enough) about POSIX. POSIX is a straight jacket for tools like this, and it really inhibits one’s ability to innovate holistically on the user experience. The only way you can really innovate in this space is if you’re not only willing to use tools that aren’t POSIX compatible, but build them as well. My pet theory is that the pool of these people has increased substantially over the past couple decades as our industry has centralized on fewer platforms. That is, my perception is that the straight jacket of POSIX isn’t providing as much bang for its buck as it once did. That isn’t to say that we don’t care about portability. We do. There’s been a lot of effort in the Rust ecosystem to make everything work smoothly on Linux, macOS and Windows. (And POSIX is a big part of that for Unix, but even among Unixes, not everything is perfectly compatible. And even then, POSIX often doesn’t provide enough to be useful. Even something as simple as directory traversal requires platform specific code. And then there’s Windows.) But beyond that, things drop off a fair bit. So there’s a lot of effort spent toward portability, but to a much more limited set of platforms than in the older days. I think the reason for that is a nearly universal move to commodity hardware and a subsequent drop in market share among any platform that isn’t Windows, macOS or Linux.

                          Sorry I got a bit rambly. And again, these are just some casual opinions and I didn’t try to caveat everything perfectly. So there’s a lot of room to disagree in the details. :-)

                          1. 5

                            Just to provide feedback as a user of ripgrep, xsv, bat, and broot: I have experienced no annoyance with respect to colourization or emojification of my terminal emulator. If I had to hypothesize, I think easy Unicode support in Rust allows people to embed emojis, so they do.

                            1. 4

                              The key is overuse. Some colour can be very helpful! But most of these tools paint the screen like a hyperactive toddler instead of taking the time to think of what would improve the user’s experience.

                              1. 26

                                taking the time to think of what would improve the user’s experience

                                I addressed this. Maybe they have taken the time to think about this and you just disagree with their choices? I don’t understand why people keep trying to criticize things that are border-line unknowable. How do you know how much the authors of these tools have thought about what would actually improve the user experience? How do you know they aren’t responding to real user feedback that asks for more color in places?

                                We don’t all have to agree about the appropriate amount of color, but for crying out loud, stop insinuating that we aren’t taking the appropriate amount of time to even think about these things.

                                1. 2

                                  “How much colour is too much colour” is kind of an empirical question; while design is certainly some matter of taste and trade-offs, generally speaking human brains all work roughly the same, so there is one (or a small range of) “perfect” designs. It seems quite a different problem than Ripgrep’s smart filtering you mentioned in your previous comment, which has more to do with personal preference and expectations.

                                  See for example these Attention management and Color and Popout pages; the context here is very different (flight control systems), but it’s essentially the same problem as colour usage in CLI programs. I don’t know if there’s more research on this (been meaning to search for this for a while, haven’t gotten around to it yet).

                                  Have some authors spent a long time thinking about this kind of stuff? Certainly. But it’s my observation, based on various GitHub discussions and the like, that a lot of the time it really does get added willy-nilly because it’s fashionable, so to speak. Not everything that is fashionable is also good; see, for example, the fashion for thin grey text on websites (which has thankfully died down a wee bit now), which empirically makes things harder to read for many.

                                  When I worked on vim-go, people would submit patches to the syntax highlighting all the time, adding something for some specific thing. Did that improve readability for some? Maybe, I don’t know. For a while most of these patches were accepted because “why not?” and because refusing patches is kind of draining, but all of the maintainers agreed that the added colouring didn’t really improve vim-go’s syntax highlighting and was superfluous at best. There certainly wasn’t a lot of thought put into this on our part, to be honest, and by the time we started putting thought into it, it was too late and we didn’t want to remove anything and break people’s stuff.

                                  1. 6

                                    “How much colour is too much colour” is kind of an empirical question; while design is certainly some matter of taste and trade-offs, generally speaking human brains all work roughly the same, so there is one (or a small range of) “perfect” designs. It seems quite a different problem than Ripgrep’s smart filtering you mentioned in your previous comment, which has more to do with personal preference and expectations.

                                    While I agree that it’s a quantifiable question, there are two classic problems here.

                                    All quantifications in user design are of the form “70% of users find this useful” for statement A, and “60% don’t find it useful” for statement B. The often-committed mistake is then assuming that you should implement “A & ^B”, ignoring that you now need to analyse the overlap.

                                    The second is that good quantifications are a lot of work and need tons of background knowledge, with standard books on color and interface perception doubling as effective close-combat weapons.

                                    A classic answer to the above problem is that good UI uses at least two channels, potentially configurable. So if the group that doesn’t find B useful isn’t having problems with it, having both is a good option. Your cited Color and Popout page is a very good example of that. And it gracefully degrades for people who, for example, do not see color well. Emoji-based CLI programs do this especially well: emoji don’t take up a lot of space, are easily differentiable, and are accessible to screen readers while still keeping their symbolic character; the line after them is there for people who need the details.

                                    I agree with your fashion argument, but I see it in a much more positive light: user interface trends have the benefit of making good basics the default, if they are successful. This is community practice learning: I would say that the phase of gray text made the design community realise that readability is not optional when presenting text. This may seem trivial, but it isn’t surprising that this trend came up when visual splendor became much more easily available on websites and was the focus of that time.

                                    For a practical summary of research and reading, I can highly recommend “Information Visualization: Perception for Design” by Colin Ware. Take care, though: it was updated to the 4th edition this year and many vendors still try to sell you the 3rd. For a book of around $70, I’d hate for you to fall into that trap ;). It’s the book I learned from in university courses and found it very accessible, practical, but also scientifically rigorous. It also spends a lot of time on when visual encoding should be applied and when not, and it has clarity and accessibility as its biggest goals.

                                    Also, even scientific research isn’t protected from the fads you describe: for a long time, with enough computational power available, everyone tried to make visualisations three-dimensional. That’s generally seen as a mistake today, because either you just add fake depth to your bar diagram while it remains essentially 2D (wasting the channel), or you run into problems of perspective and occlusion, which make it hard to judge distances and relationships and force you to turn the image all the time, because 3D data is still projected onto 2D. Reading 3D data is a special skill.

                                2. 4

                                  What are some examples? Curious what makes you think that the authors did not consider user experience when implementing nonstandard features specifically in pursuit of user experience? No doubt their efforts may not land well with some of the users. I just think it’s a bit dismissive to assume that the authors didn’t put thought into their open source projects, and pretty rude to characterize the fruits of their labor as a “hyperactive toddler”.

                                  1. 8

                                    As a personal data point: I use fd, ripgrep, and hexyl, and they’re fine. However, I tried exa (a replacement for ls) and exa -l colors absolutely everything, which I find overwhelming compared to ls -l (which for me colors just the files/directories/symlinks). To me it seems like the exa developers pushed it a bit too far :-)

                                    1. 5

                                      Cool. It definitely seems that exa in particular colorizes a lot of things by default. My initial thought is “wouldn’t it be nice if I could customize this” and it turns out you totally can via the EXA_COLORS variable (see man exa).

                                      I think the ideal colorized tool would roughly do the following: it would make coloring configurable, ship with reasonable defaults, and then some presets for users with disabilities, colorblindnesses, or those who prefer no color at all.
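
                                      For example, to switch off colour for some categories entirely, an EXA_COLORS override might look like this (a sketch based on my reading of man exa; the two-letter keys are the permission-bit codes documented there, and “00” means no styling):

                                      ```shell
                                      # In ~/.bashrc or similar: a hypothetical EXA_COLORS override.
                                      # ur/uw/ux = user permission bits, gr/gw/gx = group, tr/tw/tx = others.
                                      export EXA_COLORS='ur=00:uw=00:ux=00:gr=00:gw=00:gx=00:tr=00:tw=00:tx=00'
                                      ```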

                                      1. 2

                                        exa -lgh --color=never

                                        seems flag-heavy, but that’s just me, and there’s probably more than one way to do it

                                        1. 7

                                          Flag heaviness doesn’t matter much in this case though, since it can be trivially aliased in shell configuration.
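
                                          For instance (lsx is a made-up name here):

                                          ```shell
                                          # In ~/.bashrc or ~/.zshrc: hide the flag-heavy invocation behind an alias.
                                          alias lsx='exa -lgh --color=never'
                                          ```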

                                      2. 3

                                        compared to ls -l (which for me colors just the files/directories/symlinks).

                                        This is likely local configuration, whether you’re aware of it or not. GNU ls will happily color more or fewer things, in different ways, based on the LS_COLORS environment variable and/or configuration files like ~/.dir_colors. See also the dircolors utility.
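
                                        A minimal sketch of what that configuration looks like (di/ln/ex are the standard dircolors keys for directories, symlinks, and executables; the numbers are ANSI SGR codes):

                                        ```shell
                                        # Colon-separated type=SGR pairs; GNU ls picks these up with --color.
                                        export LS_COLORS='di=01;34:ln=01;36:ex=01;32'
                                        ls --color=auto   # directories bold blue, symlinks bold cyan, executables bold green
                                        ```

                                        Running dircolors -p dumps the full default database if you want to see every available key.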

                                        1. 1

                                          Interesting, TIL. I don’t have a ~/.dir_colors but LS_COLORS is indeed full of stuff (probably added by fish?). In any case, exa was a bit much, the permissions columns are very very colorful. Maybe it’s to incentivize me to use stricter permissions 😂

                                        2. 0

                                          Agree. Most people’s shell prompts are essentially rainbow unicorn vomit.

                                    2. 1

                                      If we wanted to get hand wavy about it, my personal take is that it’s some combination of lower barriers to entry to writing these kinds of tools and simultaneously providing more head space to even think about this stuff

                                      It seems plausible, adding colours or UI extensions sounds like a good “first patch” for people learning Rust and wanting to contribute to “real world” projects.

                                      1. 6

                                        That’s not exactly what I had in mind. The syntax highlighting that bat does, for example, is one of its central features AFAIK. I don’t know exactly how much integration work it took, but surely a decent chunk of the heavy lifting is being done by syntect. That’s what I mean by headspace and lower barriers to entry.

                                    3. 17

                                      why “rewritten in rust” and “overuse of colour and emojis” correlate?

                                      JS community does the same. I think it’s not specific to Rust, but specific to “modern” rewrites in general (modern for better or worse).

                                      I see a similar thing in C/C++ rewrites of old C software – htop and ncmpcpp both use colours while top and ncmpc did not. Newer languages, newer eyecandy.

                                      1. 5

                                        JS community does the same. I think it’s not specific to Rust, but specific to “modern” rewrites in general (modern for better or worse).

                                        The phrase “modern” is a particular pet peeve of mine. It’s thrown around a lot and doesn’t seem to add anything to most descriptions. It is evocative without being specific. It is criticism without insight. Tell me what makes it “modern” and why that is good. The term by itself means almost nothing which means it can be used anywhere.

                                        1. 2

                                          AIUI “modern” as it relates to TUIs means “written since the ascendancy of TERM=xterm-256color and Unicode support, and probably requires a compiler from the last 10 years to build.” Design-wise, it’s the opposite of “retro”.

                                          I don’t see how it’s a criticism (what’s it criticizing?), or why every word needs to be somehow insightful. It’s just a statement that it’s a departure from tradition. It’s like putting a NEW sticker on a product: it doesn’t mean anything more than “takes more current design trends into account than last year’s model”.

                                          1. 1

                                            I think a “new” sticker on a product tells you more than sticking “modern” on a software project page. At least you know it isn’t used or refurbished. What constitutes modern is a moving target. It may be helpful if you have knowledge of the domain in which it’s being used, but otherwise it’s just fluff.

                                            Worse, I think it doesn’t present a nuanced view of the design choices that go into the product. In my mind it subtly indicates that old is bad and new is good. That thinking discourages you from learning from the past or considering the trade-offs being made.

                                            Moreover I think it bugs me because I work in a NodeJS shop. When I ask people what’s great about a package they tell me it’s modern. It’s just modern this or modern that. It barely means anything. So maybe take this with a grain of salt.

                                            1. 2

                                              Huh. I think this must be a cultural difference. Working with C and C++ packages, ‘modern’ has a bit more meaning because of the significant changes that have happened in the languages themselves in a reasonably recent fraction of their existence. (For example, “modern” C++ generally avoids raw pointers, and “modern” C generally doesn’t bother with weird corner cases on machines that aren’t the 32- or 64-bit architectures I can currently buy.)

                                              It’s even true, to a lesser extent, in Python, where “modern” usually refers to using async/generators/iterators as much as possible. While I agree that “modern” definitely does lack nuance, it fits in an apt package description and means roughly “architected after 2010”, and I think that is a reasonable use of six characters.

                                              1. 2

                                                Here’s another way of looking at it:

                                                You make a library. It’s nice and new and modern. You put up a website that tells people that your package is modern. The website is good, the package is good. It’s a solved problem so you don’t work on it any more. Ten years pass and your website is still claiming that it is modern. Is it? Are there other ways that you could have described your project that would still be valid in ten years? In twenty years?

                                                The definition of modern exists in flux and is tied to a community, to established practices, and, critically, a place in time. It is not very descriptive in and of itself. It’s a nod, a wink, and a nudge nudge to other people in the community that share the relevant context.

                                                1. 1

                                                  I definitely see your point, but I’d also argue that if I put something on the internet and left it alone for 10 years, it would be obvious that its “modern” (if it’s still up at all) is that of another age. If you’d done this 10 years ago, you’d likely be hosted on SourceForge, which these days is pretty indicative of inactivity. It also doesn’t change that your package is appreciably different from the ones serving a similar purpose that are older.

                                                  There are buildings almost 100 years old that count as “modern” (also, there are ‘modern’ buildings made after ‘postmodern’ ones. Wat?). It’s a deliberately vague term for roughly “minimal ornament, eschewing tradition, and being upfront about embracing the materials and techniques of the time”; what “the time” is is usually obvious because of this (and IMO it is in software as well). The operative part isn’t that it’s literally new, more that it was a departure from what was current. And when a modern thing gets old, it doesn’t stop being modern; it just gets sidelined by things labelled modern that embrace the tools and techniques of a later time. Architects and artists don’t have an issue with this, so why should we?

                                                  Libuv is, I think, a good example: I’d call it “modern”, but it’s not new. That said, it doesn’t claim to be.

                                                  Honestly, given how tricky it is for me to pin this down, I feel like I should agree with you that it’s cruft, but I just… don’t… I think it’s because there’s such a strong precedent in art and architecture. Last time I was there, half of the Museum of Modern Art was items from before the Beatles.

                                                  I do think it sounds a bit presumptuous, though.

                                                  1. 1

                                                    Honestly, given how tricky it is for me to pin this down, I feel like I should agree with you that it’s cruft, but I just… don’t… I think it’s because there’s such a strong precedent in art and architecture. Last time I was there, half of the Museum of Modern Art was items from before the Beatles.

                                                    Haha, well, I think we’ll have to agree to disagree then.

                                                    Ultimately, I’m being a bit of a hardliner. There is value in shorthand, and to be effective we need to understand things in their context. I think being explicit allows you to reach a wider audience, but it is more work, and sometimes we don’t have the extra energy to spread around. I’d rather have the package exist with imprecise language than have no package at all.

                                        2. 2

                                          That’s a fair point. I guess I’ve just been noticing more Rust rewrites, or haven’t been taking JS CLI software seriously?

                                          1. 6

                                            I don’t blame you – I haven’t been taking JS software seriously either ;) Whenever I see an interesting project with a package.json in it I go “ugh, maybe next time”. Rust rewrites at least don’t contribute to the trend of making the software run slower more rapidly than the computers are getting faster.

                                        3. 10

                                          but as someone who disables colours in my terminal sessions

                                          As someone who appreciates colors in the terminal, I’m pretty into it. I think it’s just a personal preference.

                                          1. 2

                                            Wrong, but ok ;)

                                            But seriously: I don’t think so many tools and projects would be putting the effort into looking the way they do, if nobody wanted it. I just think that colour is better used sparingly, so that issues that really need your attention are easier to spot.

                                          2. 10

                                            Because it’s easy in Rust. It has first-class Unicode support, and convenient access to cross-platform terminal-coloring crates.

                                            1. 5

                                              I suspect that the pool of tool users has expanded to incorporate people with different learning styles, and also that as times change, the aesthetic preferences of new users track aesthetic changes in culture as a whole (like slang usage and music tastes).

                                              Personally, I find color extremely useful in output, as it helps me focus quickly on the important portions first and then read the rest at leisure. I’ve been using *nix since I was a kid, and watching tools evolve to have color output has been a joy. I do find certain tools to be overly colorful, and certain new tools to not fit my personal workflow or philosophy of tooling (bat isn’t my cup of tea, for example). That said, not all “modern” rewrites feature color, choose being the example that comes up for me immediately.

                                              (On emojis I’m not really sure, and I haven’t really seen much emoji use outside of READMEs and such. I do appreciate using the checkmark Unicode character instead of the ASCII [x] for example, but otherwise I’m not sure.)

                                              1. 3

                                                I think it is more of a new tool trend than new language trend. I see similar issue in other new tools not written in Rust.

                                                1. 2

                                                  Perhaps it’s simply that Rust has empowered a lot of young people, and young people like colors and emojis?

                                                  1. 1

                                                    I wrote this blog post as an answer to this article. I am also wondering why this “overuse of color” is so popular among “rewritten in rust” kind of tools.

                                                    1. 1

                                                      I think this is generally true of CLI tools written since Unicode support in terminals and languages became commonplace. I don’t have any examples, but I’ve gotten a similar impression from the Go community. I think emojis and colors in terminals are kind of in vogue right now, as is rewriting things in Rust, so… yeah, that’s my hypothesis on the correlation.

                                                      Aside, as someone with rather bad visual acuity and no nostalgia for the 80s, I like it.

                                                    1. 27

                                                      3 of the 17 tools are from @sharkdp.

                                                      He wrote:

                                                      hexyl could also be added to this list as a replacement for xxd.

                                                      1. 4

                                                        bat is great; it is used in neuron to provide search preview on the console along with fzf and ripgrep.

                                                        1. 1

                                                          Why did they pick new names?

                                                          Wouldn’t it be possible (and simpler for migration) to replace the existing tools instead, like BSD did when they replaced the Linux tools with their self-implemented ones? (Just in this case for safety, not ideology.)

                                                          1. 17

                                                            It can be tricky to replace the standard ones since third party scripts might depend on a particular implementation detail you missed in your clone.

                                                            1. 11

                                                              Isn’t BSD older than Linux, and based on actual Unix? Maybe GNU is a better example of replacing the original Unix tools.

                                                              1. 5

                                                                That would probably be something like this: https://github.com/uutils/coreutils

                                                                1. 3

                                                                  The Linux utilities were named after the BSD ones. The BSD ones share a history with the AT&T Unix ones. Sometimes the Linux ones are disambiguated with a “g” prefix (for “GNU”).

                                                                  1. 2

                                                                    To allow breaking changes to API (which for a CLI is just flags and behaviour).

                                                                1. 4

                                                                  Feels like half this conversation isn’t talking about what the problem actually is. It’s probably the equivalent of this:

                                                                  # config.json
                                                                  {
                                                                    "number_of_threads": 1
                                                                  }
                                                                  
                                                                  # load_config.py
                                                                  import json
                                                                  
                                                                  def get_num_threads():
                                                                      with open("/my/config.json") as f:
                                                                          config = json.load(f)  # json.loads would try to parse the path string itself
                                                                      # typo'd key: the file has "number_of_threads", we ask for "num_threads",
                                                                      # so .get returns None and int(None) raises TypeError at runtime
                                                                      return int(config.get('num_threads'))
                                                                  

                                                                  This is totally an API thing. Scala’s Map[A,B]#get takes an A and returns an Option[B]. So does Python’s, effectively, but you’ll only discover it at runtime if you aren’t familiar with the API. Java’s returns a B, which in the world of Java means B | null. Python’s get returns B | None, but B is less constrained. So yeah, when you use a tool you’ve got to know all the pieces of the tool. A tool that needs less memorization of its idiosyncrasies is better than one that needs more; it lets you operate at a higher level.
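One way to get the Option-style discipline back in Python is to handle the None branch explicitly at the call site. A sketch (function and key names follow the hypothetical config example above):

```python
from typing import Dict, Optional

def get_num_threads(config: Dict[str, int]) -> int:
    # dict.get is effectively (key) -> Optional[value], like Scala's Map#get,
    # but Python won't force you to handle the None branch before using it
    n: Optional[int] = config.get("num_threads")
    if n is None:
        # fail fast with a clear error instead of int(None) blowing up later
        raise KeyError("num_threads missing from config")
    return n
```

With a type checker such as mypy, the Optional[int] annotation makes the unhandled-None mistake visible before runtime, which is roughly the discipline Scala’s Option gives you for free.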

                                                                  1. 19

                                                                    A mostly text-based shell interface to my computer, which is not stuck in the last century: https://matklad.github.io/2019/11/16/a-better-shell.html

                                                                    1. 7

                                                                      Interesting things happen with arcan-tui and userland. Powershell and powershell-a-likes are not the answer.

                                                                      1. 1

                                                                        Userland is really nifty and definitely breaks some new ground in Unix user/computer interaction.

                                                                      2. 4

                                                                        Pretty much agree with your post. Removing the distinction between shell and terminal emulator would allow new and interesting modes of operation. One of them could be pausable and introspectable pipes. Another one could be remote SSH sessions that have access to the same tools as the local one.

                                                                        1. 3

                                                                          Try PowerShell.

                                                                          1. 3

                                                                            First paragraph of the post explains that I am not looking for powershell. It indeed is a big improvement over bash, but in areas I personally don’t care about.

                                                                            1. 1

                                                                              If you read the post, this isn’t what the OP is going for. Powershell brings some excellent new capabilities to the table with object pipelines, and has some nice new ideas around things like cmdlets and extensibility, but his post goes into much more detail about user-experience aspects Powershell doesn’t even come close to providing.

                                                                            2. 3

                                                                              Why does cargo test block my input? Why can’t I type cargo test, Enter, exa -l, Enter and have this program automatically create the split?

                                                                              What I really want is an extensible application container, a-la Emacs or Eclipse, but focused for a shell use-case.

                                                                              I would like Oil to be able to support this kind of thing, and at least in theory it’s one of the most promising options.

                                                                              And ironically, because I’m “cutting” the interactive shell, it should be more possible than with bash or other shells, because we’re forced to provide an API rather than writing it ourselves.

                                                                              I had a discussion with a few people about that, including on lobste.rs and HN. The API isn’t very close now, but I think Oil is the best option. It can be completely decoupled from a terminal, and only run child processes in a terminal, whereas most shells can only run in a terminal for interactive mode.

                                                                              Related comment in this thread: https://lobste.rs/s/8aiw6g/what_software_do_you_dream_about_do_not#c_fpmlmo

                                                                              Basically a new “application container” is very complementary to Oil. It’s not part of the project, but both projects would need each other. bash likely doesn’t have the hooks for it. (Oil doesn’t either yet, but it has a modular codebase like LLVM, where parts can be reused for different purposes. In particular, the parser has to be reused for history and completion.)

                                                                              1. 3

                                                                                Amusingly, using :terminal in neovim changed a lot of things for me. I could then go to normal mode and go select text further up in the ‘terminal’. Awesome!

                                                                                1. 2

                                                                                  I mapped it to CTRL-Z to get a consistent behaviour between terminal and non-terminal Neovim

                                                                                  1. 2

                                                                                    Yeah, this speaks to some of the power he references in his post that Emacs brings to the table. IMO one of the things that makes Neovim so impressive is that it takes the Vim model but adds Emacs-class process control.

                                                                                    I’d love it if people would do more with the front end / back end capabilities neovim offers, beyond just using it for IDE integrations and the like.

                                                                                  2. 2

                                                                                    You’re basically describing a regular computing environment.

                                                                                    1. 1

                                                                                      Sounds like your idea and my idea have some interesting possibilities when combined :)

                                                                                    1. 6

                                                                                      Decent write-up overall. I did have a couple of niggles though.

                                                                                      The first thing I look for when using a new toolset is whether it has an easy way to make it available for my user, without using the distribution package manager to install it system-wide.

                                                                                      I didn’t really understand this part of the article. Both of these languages compile to binaries that don’t require a VM to run (unlike Python). Why is he concerned about installing it “system wide”? Go can be installed locally too, by the way. There’s no need for these Python-esque virtual environment managers.

                                                                                      The first problem I found using Go, was when I was figuring out how the module resolution worked along with the GOPATH, it became quite frustrating to set up a project structure with a functional local development environment.

                                                                                      I haven’t read about someone complaining about GOPATH in a very long time. That really surprised me.

                                                                                      Reasons I would use Rust
                                                                                      If the project has critical requirements about security
                                                                                      If the project has critical requirements about performance

                                                                                      I sympathize with the difficulty here because languages are extremely difficult to categorize like this, but I feel like he’s suggesting that Go isn’t safe and doesn’t focus on performance. It very much is safe and has a very strong focus on performance.

                                                                                      1. 1

                                                                                        I feel the same way about toolchains. I want them in the scope of my user, not polluting things for all users. That’s because I sleep easier knowing I can diagnose issues by switching to a different user instead of having to mount a different /. It probably comes from years of someone installing brew or pip as root and ending up with a random hodge-podge of crap all over the system that makes each project non-replicable.

                                                                                      1. 9

                                                                                        What is your favorite pitfall in Date?

                                                                                        Has to be toISOString(). It claims to return ISO 8601, which can contain the timezone offset, but instead it just gives you the GMT string, even though it’s perfectly aware of the timezone information:

                                                                                        // It's 15.44 in Europe/Warsaw
                                                                                        > dt.getTimezoneOffset()
                                                                                        -120
                                                                                        > dt.toISOString()
                                                                                        '2020-08-02T13:44:03.936Z'
                                                                                        
                                                                                        1. 5

                                                                                          That is a valid ISO 8601 timestamp. The ‘Z’ (“zulu”) means zero UTC offset, so it’s equivalent to 2020-08-02T15:44:03.936+02:00.
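The equivalence is easy to check mechanically. A small Python sketch (+00:00 stands in for the Z suffix, which older versions of fromisoformat don’t accept):

```python
from datetime import datetime

# the two renderings name the same instant; aware datetimes compare by instant
utc = datetime.fromisoformat("2020-08-02T13:44:03.936+00:00")
cest = datetime.fromisoformat("2020-08-02T15:44:03.936+02:00")
assert utc == cest

# the offsets differ, the point in time does not
print(utc.utcoffset(), cest.utcoffset())
```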

                                                                                          1. 3

                                                                                            Oh, it is valid, yes. It’s just less useful than one containing the TZ information that is stored in that Date object. It’s correct, but less useful than it could be (and with little extra effort).

                                                                                            1. 3

                                                                                              Ah, I misunderstood you, then. When you wrote “claims to return ISO 8601” I thought you meant that it wasn’t actually an ISO 8601 string.

                                                                                              So what you mean is that the “encoding” of the ISO 8601 string should reflect the local timezone of the system where you call .toISOString()? I.e. 2020-08-02T15:44:03.936+02:00 if you called .toISOString() on a CEST system and 2020-08-02T09:44:03.936-04:00 if you called it on an EDT system?

                                                                                              1. 2

                                                                                                I’d expect it to not lose the timezone information, given that it already uses a format that supports that information. It’s not incorrect, it’s just less useful than it could be (and with little extra effort). Perhaps that’s just the implementation, not the spec – but I’m yet to see it implemented differently. It’s not a huge deal, it’s just frustrating that it could’ve been better at little cost and yet no one bothered, apparently.

                                                                                                It’s not about the system it’s called on – that determines the timezone that’s already in the object, as my code snippet showed. I’d expect the data that’s already there to be included in the formatting, instead of being converted to UTC, lost, and disregarded. If implemented better, toISOString could’ve been a nice, portable, lossless serialization format for Dates – but as it is, a roundtrip gives you a different date than you started with, because it will always come back as UTC.
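The lossless roundtrip being wished for here is what you get when the offset survives formatting. Python’s datetime happens to behave that way, which makes for a convenient sketch of the idea (not a claim about what the JS spec requires):

```python
from datetime import datetime, timedelta, timezone

# an aware datetime carrying a +02:00 (CEST) offset
cest = timezone(timedelta(hours=2))
dt = datetime(2020, 8, 2, 15, 44, 3, 936000, tzinfo=cest)

s = dt.isoformat()                 # keeps the +02:00 offset in the string
back = datetime.fromisoformat(s)

# lossless roundtrip: same instant *and* same offset come back
assert back == dt
assert back.utcoffset() == dt.utcoffset()
```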

                                                                                                1. 2

                                                                                                  I would actually assume that getTimezoneOffset is a class method that just looks at your system’s configured time zone and does not read anything from the Date object. I’m pretty sure the object does not store information about the timezone of the system in which it was generated, because it’s never needed. You can always convert to the timezone you want at read time.

                                                                                                  This is also what PostgreSQL does. If you create a column for “timestamps with timezone” it will discard the timezone information at write time and just use UTC (because why not?). The only thing that is different when you choose a timestamp column with timezone is that at read time it will convert values from columns to the configured timezone. All it stores is the number of seconds since the epoch.

                                                                                                  If you look at Firefox’s JS source, it looks like they also just store the seconds since the Unix epoch in a Date object, no timezone information: https://github.com/mozilla/gecko-dev/blob/d9f92154813fbd4a528453c33886dc3a74f27abb/js/src/vm/DateObject.h

                                                                                              2. 3

                                                                                                I don’t believe Date contains a time offset. As far as I’m aware, as in many languages, the problem is not that the APIs ignore the time offset – they would have to silently reach into the client locale to get it, which would be misleading and make it easy to create bugs. The problem is that they named it “Date” when it’s really just a point in absolute time. Combine a Date with the client locale’s time offset and you’ve got yourself a date, but a Date is not a date.

                                                                                            2. 5

                                                                                              This is a namespacing error that’s common when methods are on objects like this. getTimezoneOffset is a property here of the client locale, not of the date time object.

                                                                                            1. 6

                                                                                              Related comment: https://lobste.rs/s/xbl6uc/cloudflare_outage_on_july_17_2020#c_nt8atu

                                                                                              For most people, I recommend shared hosting for websites, rather than standing up your own on a VPS. Shared hosting has the main advantage of the cloud and then some – somebody else maintains the system for you.

                                                                                              I have a Linode server with some of my own sites, and the uptime is a lot lower than that of the Dreamhost site. So basically I use the VPS for playing around with stuff, and shared hosting for sites I actually want to be up.

                                                                                              I feel like I should write a blog post about shared hosting because a large number of people seem not to know what it is. Short answer: A single computer can serve a lot of websites!


                                                                                              Also the combined cost is very doable: I pay exactly $10 a month for the VPS (could be $5), and less than $10 a month for the shared hosting. I’d rather pay a token amount for steady service than be sucked into a tech stack with free offerings.

                                                                                              1. 3

                                                                                                A static site should be OK almost anywhere. And a good shared hosting company should be good at uptime.

                                                                                                1. 2

                                                                                                  The last time I was thinking about writing about shared hosting, I came across this interesting 2008 back-and-forth about shared hosting and Rails, between the creator of Rails and Dreamhost itself:

                                                                                                  https://dhh.dk/posts/21-the-deal-with-shared-hosts

                                                                                                  https://www.dreamhost.com/blog/how-ruby-on-rails-could-be-much-better/

                                                                                                  Basically around 2005-2008 frameworks like Rails and Django became popular. They did not work well on shared hosting for various reasons.

                                                                                                  So people started using VPSes and eventually the cloud. And a deficiency of those systems is that if you’re not mindful of server performance, you might need to add something like Cloudflare on top. You lost that when you moved off shared hosting.

                                                                                                  However, the funny thing is that Rails and Django are no longer as popular. But we didn’t move back to shared hosting, even though they started to make sense again!

                                                                                                  I think there are some newer hosts that are designed for more of the client JS-heavy website architectures, like https://www.netlify.com/ but I haven’t used them. It does seem like the shared hosting services had a lot of technical infrastructure in place to go into that market, but they didn’t really understand the open source software that people wanted to deploy (e.g. the web dev trends). It seems like a missed opportunity.

                                                                                                  Shared hosting gives me a shell on a box that somebody else maintains, and that serves my web pages reliably, and that’s pretty much what I need. Cloud hosting doesn’t give you that. Cloudflare is extra complexity and insecurity for most use cases.

                                                                                                  1. 3

                                                                                                    I have used Netlify and Vercel, but considering that the JS runs in the browser, any webserver is actually fine for a static site with JS.

                                                                                                    I have my own VPS to host my sites mainly because I like it, but Shared hosting should be more than OK for the traffic I get.

                                                                                                    1. 3

                                                                                                      I think there are some newer hosts that are designed for more of the client JS-heavy website architectures, like https://www.netlify.com/ but I haven’t used them.

                                                                                                      But that’s not the same as old-school shared hosting, that’s a big Cloud™ Edge® CDN thing – you don’t become more independent and decentralized because of it. Purely for reliability, sure; heck, you could add AWS Lambda to that, to keep your old little server apps running no problem.

                                                                                                      a deficiency of those systems is that if you’re not mindful of server performance, you might need to add something like Cloudflare on top

                                                                                                      I highly doubt that anyone needed a MitM proxy because of insufficient VPS performance when an old-school shared box would’ve been sufficient.

                                                                                                      Cloudflare took off because it’s free, “cloud” bandwidth (EC2 and S3 especially) is not free, and they made loooots of promises. “Come to us, we make everything faster and securer and better, we’re so great and it’s free if you’re not enterprise! We’ll defend you against DDoS and scary attackers trying to SQL inject your MongoDB based app and scary shady spammy Tor users trying to do shady things on your read only public static pages too!! We have nice DNS hosting too! All free!!”

                                                                                                      My guess would be that far fewer people put CF in front of their sites because they needed it, rather than just responding to the marketing.

                                                                                                      1. 1

                                                                                                        I started using them for DNS for some side domains I wanted free DNS service for, since my current DNS provider, DNSimple, only offers 5 domains in my package. But I could have gone to Linode too, also for free. I think Linode runs over Cloudflare – or is it DigitalOcean who does that?

                                                                                                        1. 1

                                                                                                          As I found out during the CF incident, DNSimple have switched to using CF DNS servers with whitelabel branding.

                                                                                                          1. 1

                                                                                                            Oof

                                                                                                        2. 1

                                                                                                          Yeah I’ve never used it, but my point was that I think Dreamhost could have gone into that market. They just needed to “bridge the gap” with some tools.

                                                                                                          They had basically all the server infrastructure in place (?) The Cloud CDN stuff is mostly an implementation detail, and as long as it serves the traffic, nobody cares (aside from marketing by induced anxiety). The user experience is what counts, e.g. git push to deploy.

                                                                                                          With Dreamhost I have to set up my own 5-line rsync script. That's obviously doable, but it is a barrier. I think learning the shell is perhaps one of the most significant barriers for most people using shared hosting.

                                                                                                          People on this site may not be able to relate to that, but if you spend some time looking over the shoulder of a well-paid software engineer or other tech employee with say 1-5 years experience, you will see they have little experience with the shell.

                                                                                                          The cloud lets them kinda avoid the shell. They don’t have to think about file permissions, e.g. is this directory executable? Where do I find the damn logs in this shared hosting setup? The lack of logs really stymied me at a different shared host 10-15 years ago.

                                                                                                          But it seems Dreamhost somewhat got left behind by Rails/Django, and left behind again by JS frameworks and static site generators. Although I’d be interested to hear from people who are still using shared hosting with those open source tools.


                                                                                                          Also, what I mean with the VPS comment is that it’s pretty easy for the uninitiated to misconfigure a web server or database. So instead of fixing the underlying issue, they might patch a cache on top.

                                                                                                          The config burden is on you with a VPS, whereas it’s not with shared hosting.

                                                                                                          Honestly 90%+ of the caches I’ve ever seen are patching over some performance issue that the developers/sys admins didn’t understand. It’s the lazy performance fix. Cloudflare is to some extent the lazy performance fix.

                                                                                                          1. 3

                                                                                                            I think learning shell is perhaps one of the most significant barriers for most people using shared hosting.

                                                                                                            Not all shared hosts even allow shell access. As I remember, shared hosting was all about FTP :D

                                                                                                            1. 1

                                                                                                              (late reply) Dreamhost definitely allows it, and I think most do these days.

                                                                                                              I think I caught on to “Shared Hosting 2.0”. I remember Shared Hosting v1 did NOT support SSH because it wasn’t very safe. And yes I remember all the Windows and Mac programs that supported FTP to publish to servers because of this.

                                                                                                              But probably by the time Shared Hosting 2 came around, it had already gotten a bad name in some circles. And when it didn’t run frameworks like Rails and Django, that was sorta the nail in the coffin.

                                                                                                              But I really think it is quite good now. So yeah I want to write about it, and the SSH vs. FTP issue is a good thing to mention. I would NOT use it if I only had FTP access. The whole point is to get a shell!

                                                                                                              And it helps that the shell is on a Debian machine which is very similar to my own Ubuntu machine. Lots of things “just work”.

                                                                                                      2. 2

                                                                                                        I’ve been on Dreamhost since 2004, and sometimes I get weird looks, but it’s remarkably low-fuss. All content comes from Makefiles that stitch together HTML. When I do occasionally need some non-static functionality, I’ve usually been able to add it using small CGI scripts. Coding CGI for simple tasks feels like a breath of fresh air after working with complex web frameworks.

                                                                                                        1. 1

                                                                                                          Yeah I actually run a FastCGI script in Python on Dreamhost! It works great, although the Python support for FastCGI has completely rotted! (I had to fork an old Python 2 FastCGI lib)

                                                                                                          FastCGI lets me keep a zip file open across requests, with its index :) And save Python startup time.

                                                                                                          If you see the .wwz prefix here, that’s a zip file with a ton of files served by a FastCGI script.

                                                                                                          https://www.oilshell.org/release/0.8.pre8/test/spec.wwz/survey/osh.html

                                                                                                          I think I mentioned this on the blog like 3 years ago… But yeah I think it would be cool if you can write FastCGI scripts in Oil. Shared Hosting is really a better cloud for so many use cases.

                                                                                                          Contrary to popular belief, the uptime of a single box often exceeds the uptime of a distributed system. (TODO: I should write a blog post about this; I stated it here a while ago and lots of people agreed.)

                                                                                                        2. 1

                                                                                                          The real problem is elsewhere, imho. I had a VPS for over a decade. It worked flawlessly: great uptime, no trouble, rarely updated, and still no signs of compromise (identity not stolen, domains not sending bad mail, nothing that actually affected me then or in the two years since I turned it off). Then the provider decided to sunset the product. I had to back everything up, never spent the time to put it back up, and now my website doesn’t work.

                                                                                                          The only thing that was critical was mail, so I did that.

                                                                                                          1. 2

                                                                                                            AWS VPS is $5/mo. It’s also the same service as their enterprise product which makes most of their profit. And amazon is (for better or worse) slowly eating the world. So it’s very unlikely to be shut down.

                                                                                                            1. 1

                                                                                                              True, I wonder if a time will ever come when they’ll say “It’s time to up sticks and move off your t1.micro”, but it doesn’t sound likely.

                                                                                                              Well, I’m still probably not going to get myself into that state again. Cattle, not pets, for me.

                                                                                                              1. 1

                                                                                                                Well, I’m still probably not going to get myself into that state again. Cattle, not pets, for me.

                                                                                                                Curious, how do you intend to do that? You can get a physical server, but you still have to arrange for rack space and an internet connection. You can be your own ISP, but that’s prohibitively expensive. Not to mention, both of those options will be more volatile than a cloud provider.

                                                                                                                1. 1

                                                                                                                  My current thought is probably a cloud persistence layer (so managed SQL, probably Aurora, with managed storage, probably S3), DNS on the cloud (which obviously I have now instead of my BIND) and everything else on Kube.

                                                                                                                  Never going back to the solo VPS with my stack on it. It was the right thing to do because I was broke but I’m not broke now and I can easily pay five times as much to not have to worry about it being up in the future. Cloud managed services for everything, as far as I’m concerned.

                                                                                                        1. 1

                                                                                                          Protobuf has worked fine with grpc servers. Fulfills any need I have. Used Thrift in the past. Also fine. Both generally ergonomic.

                                                                                                          1. 1

                                                                                                            Very cool. Are there native graphics toolkits with the reusability and layout support that the web offers? Cross platform would be great but even just Gnome/Linux would be fine.

                                                                                                            1. 3

                                                                                                              Software didn’t go wrong. Everyone realized that I care about a product that enables new functionality so much that I will put up with some bugs for that.

                                                                                                              Buggy software > non-existent software for most of the space of bugs and software

                                                                                                              1. 2

                                                                                                                What I would appreciate is that, as we add accommodations for people, we also add a user agent setting that says “I require minimal differentiation”, or that it becomes commonplace to use “if you need X, click here to differentiate”, so that I can navigate things without having to fill out a few hundred entries on forms.

                                                                                                                1. 2

                                                                                                                  Almost every payment form defaults to using the same billing address and shipping address, but allows you to differentiate. Names could easily work the same way without a surveillance-helping user agent setting.

                                                                                                                1. 4

                                                                                                                  Is there a way to improve startup time for snaps? I’m not particularly concerned with them except that they’re slow to launch and also clutter up my mount -l output.

                                                                                                                  1. 2

                                                                                                                    They’re slow to start up? I’ve never noticed that and am curious how you’ve measured that.

                                                                                                                    That being said the mount issue is real. systemd also generates .mount units for those mounts which end up in /etc/systemd, which in turn means that my etckeeper install gets polluted with a whole lot of total garbage.

                                                                                                                    1. 2

                                                                                                                      It’s visually apparent, but because you asked, I decided to screen record and see if it’s actually the case and it certainly is. Look for yourself. I have only SSDs mounted and these are all on the same drive so I don’t imagine that’s the problem.

                                                                                                                      These are available on the Ubuntu Software store if you want to replicate.

                                                                                                                  1. 4

                                                                                                                    Seems like the right way to do it. This looks like the perfect thing for a derivative distro. Also a good way to test whether long-tail packages without a package maintainer can go away.

                                                                                                                    1. 1

                                                                                                                      long tail packages without a package maintainer

                                                                                                                      Maintainers are optional. Plenty of packages lack a maintainer. This is absolutely alright as long as, when something breaks, somebody steps in to fix it. If nobody does, then the packages can indeed be removed.

                                                                                                                      Please do not remove that which isn’t broken.

                                                                                                                      1. 2

                                                                                                                        Hmm, that’s true. If there’s no maintenance burden, why bother removing them? Okay, I sympathize.

                                                                                                                        1. 3

                                                                                                                          There is a maintenance burden. Every package in the archive, whether or not there’s an active maintainer, adds to a burden felt by all maintainers and, ultimately, users. They inflate the package DB metadata, which makes everyone’s package updates take longer; they get tangled up in transitions (e.g. libc6 updates, or Debian-specific changes like packaging updates to match newer policy versions); and by existing in the archive they both imply a level of support to users that isn’t there and hurt the reputation of Debian as a reliable distribution.

                                                                                                                          1. 1

                                                                                                                            By your logic, Debian should stop packaging so much software and just shrink.

                                                                                                                    1. 15

                                                                                                                      Cloud Run sounds cool I guess, and I might try it sometime. But honestly, I don’t see a problem with just getting a conventional server. I have a $5/month Digital Ocean server, and I run like 10 things on it. That’s the nice thing about a plain old Linux server, as long as none of your individual things takes up a ton of resources or gets too much traffic, you can fit quite a few of them on one cheap server.

                                                                                                                      1. 2

                                                                                                                        Do you manage SSH certs for those 10 yourself? What happens when the services go down? What about logging?

                                                                                                                        1. 4

                                                                                                                          It’s all running on 1 server, so there’s only one SSH key to manage. Well, one for every device I connect to it from, but that’s not that many, and there really isn’t anything to manage.

                                                                                                          Everything is set up through systemd services. I wrote unit files for the services that didn’t already have them (Nginx, Postgres, etc.). It’s perfectly capable of restarting things and bringing them up if the server reboots. Everything that has logs is set up with logrotate and transports to SumoLogic. I did set up a few alerts through there for services that I care about keeping running and that have been troublesome in the past. I also have some automatic database backups to S3. These are all one-off toy projects used pretty much only by me, and this level of management has proved sufficient and low-maintenance enough to keep them up to my satisfaction.
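
For reference, a unit file of the sort described above might look like this minimal sketch; the service name, paths, and user are hypothetical placeholders:

```ini
# /etc/systemd/system/toyapp.service -- hypothetical example unit
[Unit]
Description=Toy side-project web app
After=network.target postgresql.service

[Service]
ExecStart=/home/deploy/toyapp/run.sh
Restart=on-failure
User=deploy

[Install]
WantedBy=multi-user.target
```

`Restart=on-failure` gives you the restart-on-crash behavior, and enabling the unit (`systemctl enable toyapp`) hooks it into `multi-user.target` so it comes back up after a reboot.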

                                                                                                                          Of course, I would re-evaluate things and probably set up something dedicated and more repeatable if any of those services ever got a significant number of users, generated revenue, or otherwise merited it. There’s plenty of options for exactly how, and which one to use would depend on the details.

                                                                                                                          1. 3

                                                                                                                            They said a single server so yes a single SSH key I’d imagine, every major init system on Linux has service crash detection and restart, and syslog (and if you are feeling brave GoAccess).

                                                                                                                            1. 1

                                                                                                              Assuming you meant SSH and mistyped cert instead of key: it’s one machine, so one key.

                                                                                                              Assuming you meant SSL instead of SSH: I run everything in Docker Compose. I use this awesome community-maintained nginx image[1] that sets it up as a reverse proxy and automates getting Let’s Encrypt certificates for each domain I need, with just a little config in the compose file.

                                                                                                                              From there I write a block in the nginx configuration for each service, add the service to my compose file and voila it is done.

                                                                                                                              [1]https://docs.linuxserver.io/images/docker-letsencrypt

                                                                                                                              1. 1

                                                                                                                                Good point, could have meant SSL Certs. I use the Let’s Encrypt automated package. It’s quite good these days - can set up your nginx config for you mostly-correctly right off the bat, and renews in place automatically. I just set up a cron job to run it once a week, pipe the logs to Sumologic, and then forget about it. Worked fine automatically when I was serving multiple domains from the same nginx instance too, though I’m not doing that right now.
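
The weekly cron job described above might look something like this sketch; the schedule and log path are hypothetical placeholders, and it assumes certbot's standard `renew` subcommand:

```
# crontab entry (crontab -e): attempt renewal every Monday at 03:00,
# appending output to a log file for the log shipper to pick up
0 3 * * 1 certbot renew --quiet >> /var/log/certbot-renew.log 2>&1
```

`certbot renew` is safe to run frequently since it only renews certificates that are close to expiry.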

                                                                                                                                1. 1

                                                                                                                                  Sorry, I did mean SSL certs. You are right about automating it and that’s what I would do for professional work. For a side-project, however, I prefer eliminating it completely and letting Google do it.

                                                                                                                  From there I write a block in the nginx configuration for each service, add the service to my compose file and voila it is done

                                                                                                                  Can you share more details of your setup here?

                                                                                                                              2. 1

                                                                                                                                I used this too but then my provider sunset the hardware I was on and migration was a nightmare because it’s easy to fall into bad patterns with this mode.

                                                                                                                                Admittedly it was over 10 years of cruft but still.

                                                                                                                                1. 2

                                                                                                                                  That did honestly kind of happen to me too. I had a server like that running with I think Ubuntu 14.04 LTS for quite a while. Eventually I decided it needed upgrading to a new server with 18.04 - security patches, old instance, etc. It was a bit of a pain figuring out the right way to do the same things on a much newer version. It only really took about a full day or so to get everything moved over and running though, and a good opportunity to upgrade a few other things that probably needed it and shut off things that weren’t worth the trouble.

                                                                                                                                  I’d say it’s a pretty low price overall considering the number of things running, the flexibility for handling them any way I feel like, the low price, and the overall simplicity of 1 hosting service and 1 server instead of a dozen different hosting systems I’d probably be using if I didn’t have that flexibility.

                                                                                                                              1. 3

                                                                                                                                I used to use Arch and that was fun because there was lots of stuff in AUR and the rolling release worked well. But then I moved away for a year, came back, and turned my machine on and I couldn’t upgrade. Finding the sequence of packages was an intractable problem. None of the package managers (I’d used pacman religiously) could do it.

                                                                                                                                The Ubuntu stuff worked mostly, though. So now I use Ubuntu.

                                                                                                                                Funny. I think I started with Red Hat around Red Hat 7. At least I distinctly remember correctly partitioning everything, then fucking the bootloader installation up so that you couldn’t get to Windows, then fucking the partitioning up while trying to fix that. Dad was mildly mad. No one was convinced that using the Linux desktop alone was a good idea. Remember being very excited for Red Hat 9 Shrike to come out :)

                                                                                                                                So RH > Fedora > Ubuntu > Debian > Ubuntu > Arch > Ubuntu and I’m honestly never going to try another distro. Only got Ubuntu because they shipped CDs across the world! Unbelievable. So hard to get modern software in India and I had the newest compiler for free! Also my first international package. Wish I’d had digital cameras back then. I was so very excited! Obviously being a child I also played around with Enlightenment DR17 and software composited desktops, making it utterly unusable by anyone who wasn’t in love with HΛCKΣR░ΛΣSTHΣTIC (ほ園ラ)

                                                                                                                                1. 2

                                                                                                                  As an aside, the first thing you end up needing is default values, and the syntax:

                                                                                                                                  PARAM_ONE=${1:-$DEFAULT_PARAM_ONE}
                                                                                                                                  

                                                                                                                                  does the trick.
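
Putting that idiom into a tiny script makes the behavior concrete; the function name and default value here are made up for illustration:

```shell
#!/bin/sh
# Demonstrates the ${1:-default} fallback idiom shown above.
DEFAULT_NAME="world"

greet() {
  # use the first argument if given and non-empty, else fall back to the default
  name=${1:-$DEFAULT_NAME}
  echo "hello, $name"
}

greet            # -> hello, world
greet "shell"    # -> hello, shell
```

The related form `${1:?message}` instead aborts with an error when the parameter is missing, which is handy for required arguments.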

                                                                                                                  I think only Ruby comes close to being a nice modern language with easy shelling out and no libraries to install. Perl is painful (positional arguments, aaagh), and Python requires you to pip-install stuff, but vanilla Ruby gives you backticks.