1. 3

    Very cool. It’s this kind of stuff that’s often not very well documented in the usual places.

    1. 1

      Thanks!

      1. 8

        I am sort of wondering whether it wouldn’t be possible to just ship a 32 bit x86 executable instead of amd64, then the 32 bit tricks could potentially be pulled. Since it doesn’t seem to need any libraries it wouldn’t incur any additional dependencies I think.

        1. 3

          I didn’t think of trying that…

          1. 1

            I thought he was using 32-bit already? On my system, __NR_pause is defined to be 29 (his number) in asm/unistd_32.h, and 34 in asm/unistd_64.h. He’s also using eax over rax… Perhaps using int 80h and not syscall uses the 32-bit abi?

            1. 2

              int 0x80 is definitely the 32 bit Linux system call entry point.
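
              If anyone wants to double-check on their own box, something like this should print the pause(2) number under each ABI (assumes gcc; the -m32 one also needs the 32-bit headers, e.g. gcc-multilib):

              echo SYS_pause | gcc -E -x c -include sys/syscall.h - | grep -v '^#' | grep .        # 64-bit ABI: 34
              echo SYS_pause | gcc -E -x c -m32 -include sys/syscall.h - | grep -v '^#' | grep .   # 32-bit ABI: 29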

              1. 1

                To be honest, my knowledge of assembler is minimal and even that is 20 years out of date…

            1. 2

              Very useful thing, especially since you can spawn more than one concurrent “strace” (handy for multiple variants of the same binary).

              As strace is very helpful for sysadmins too (we have a proverb here that you can debug almost anything with strace + tcpdump), I just threw it onto my company’s internal mailing list. Hope they’ll appreciate it.

              But the distribution model for such “sysadmin toolbelt” tools in Python is kinda problematic for old-timers who use CentOS 7 on their desktop, still have Perl as their default language and only started digging into Py2 recently. There should be a way to contain the whole dependency trail + interpreter of such a package in a single static binary to put in /usr/local/bin, like they do.

              Also for portability on remote servers, which is a more reasonable argument for most of you.

              1. 2

                It’s available as a Docker image, if that helps (tho I doubt it if the env is locked down):

                docker pull imiell/autotrace

                I could look at unrolling it into a single Python file somehow… would that help?

                1. 1

                  That’s no problem for me, I can use pip install --user … and have $HOME/bin on my $PATH.

                  A Docker container is of course very handy, but I don’t think it would appeal to the people I described above, because it’s “too hip”.

                  Don’t get me wrong, I’m not strictly looking at this particular software at this time, but wondering more broadly about software distribution for non-C stuff which is not yet packaged by BigCo Enterprise Linuxes.

                  1. 1

                    Unfortunately the “static binary” story on Python is pretty meh right now. Perhaps one day there will be support in the interpreter for creating things like that.

                2. 2

                  BTW, one thing I learned while doing this was that you can’t run strace and ltrace on the same pid. Or at least it looked that way to me. So I wonder whether you can run two straces?

                  1. 1

                    There should be a way to contain the whole dependency trail + interpreter of such a package in a single static binary to put in /usr/local/bin, like they do.

                    try https://lobste.rs/s/os2xxj/exodus_painless_relocation_linux

                    1. 1

                      That doesn’t create a single static binary as far as I can tell.

                  1. 4

                    This:

                    https://zwischenzugs.com/2018/05/21/autotrace-debug-on-steroids/

                    A terminal takeover and auto-debugger of processes that you want to examine.

                    1. 8

                      This is fantastic. I’ve been pretty vocal about this on twitter, largely alongside @aprilwensel / @compassioncode. I love and use SO all the time, but it definitely isn’t a friendly place. My experiences contributing rather than consuming have been disappointing enough that I likely wouldn’t ever do it again if things didn’t change.

                      Take this example: https://stackoverflow.com/questions/10967795/directinput8-enumdevices-sometimes-painfully-slow/40449680#40449680

                      I posted this answer to this extremely obscure DirectX issue. This was literally the only result on google for the keywords I was searching. (Along with those stupid aggregator sites that just copy the content from SO…) I was a new account and could not post a comment on the question, only an answer. My answer is not an answer, it’s merely that I have the same problem and here’s what I’ve learned, and I made that known. The person that asked the question replied to my answer with “Very interesting! Thanks! […] It’s good to know I’m not the only one it affects!”. I GOT 3 PEOPLE ATTEMPTING TO DELETE MY ANSWER. How dare I try to contribute something useful? Better delete it for not actually answering the question! Thankfully I was able to respond to the deletions and keep it. Now it seems like someone else responded with a similar “I’m getting this too, but haven’t solved it” answer, and I’m glad they did because it’s starting to piece together this problem.

                      That sucked, and made me jaded towards contributing.

                      Another example is: https://stackoverflow.com/questions/3240633/web-based-vnc-client No it’s not programming, but it was like the first result for web based vnc client. It has like 31 votes, the answer has 23. It’s obviously a useful post. Yet it’s closed as off topic. It seems like content people want to see based on the upvotes. Now it’s just frozen in time with outdated content, and that sucks. Yes there’s other sites where this is better suited, I’d argue that a VNC client is primarily a developer tool though. I’d be surprised if developers/IT in some form make up less than say 70% of people that use VNC. You can ask questions about other developer tools like vim, emacs or MSVC on SO.

                      For the duplication issue what happens currently is if the question is identified as a duplicate, the question is closed with a message that feels like “Hmfp, RTFM, Why didn’t you search, this is a duplicate of this other question, use your eyes.” What should happen instead is, A: Don’t close the question. B: Link it as a possible duplicate of the other question. C: Post a message like, “We suspect that this question is a duplicate of this one and have associated it with the other question. If you believe this is incorrect then [click here] to revert this.” Then improve the flow for people ending up on this question to funnel them to the parent question, while still letting people answer this one, and better show the list of duplicate questions in the parent (depending on how well they can do SEO with this, obviously linking to a bunch of essentially dead/duplicate questions wouldn’t be great on the SEO side.)

                      1. 2

                        I gave up on SO for similar reasons - the mods seemed to care less about having a useful site than enforcing the ‘rules’.

                        1. 2

                          Even before the unfriendliness and mod absurdity, or at least before complaining about it got popular, I haven’t had very good experience with using SO for hard problems like the one you answered. I don’t think SO’s system is well-designed for attracting people capable of offering insight on such questions, or surfacing those types of questions to them when they do happen to show up and browse. It seems more like a system where newbies who are missing semicolons get their advice in ~10 seconds, while problems hard enough to stump experienced devs for hours or more get crickets.

                          1. 2

                            That looks very useful and I wasn’t aware of it.

                            But also looks quite different to me. Indeed she explicitly says:

                            “Granted mounting is not a requirement of building docker images. You can always go the route of orca-build and umoci and not mount at all. umoci is also an unprivileged image builder and was made long before I even made mine by the talented Aleksa Sarai who is also responsible for a lot of the rootless containers work upstream in runc.”

                            This pursues that approach, and is concerned with raw builds rather than k8s.

                            1. 1

                              ^and is^and the OP is^

                              1. 1

                                FWIW you may edit your comments on this site, it’s much nicer than Twitter. ;)

                                edit: oh, there’s a time limit.

                          1. 1

                            There’s a lot of conflation between ‘git’ and ‘github’ here.

                            Point 3 is utterly bogus. If your needs are so specific, use git, not github.

                            ‘Setting up a website for a project to use Git requires a lot more software, and a lot more work, than setting up a similar site with an integrated package like Fossil.’

                            I run a git server at home. All I need is ssh. What’s ‘a lot of software’ about that?
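
                            For anyone curious, the whole setup is roughly this (host name and path are made up):

                            # on the server: create a bare repository to push to
                            ssh git@home.example 'git init --bare ~/repos/project.git'

                            # on any client: clone, pull and push over plain ssh
                            git clone git@home.example:repos/project.git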

                            1. 4

                              The part you quoted talks about providing a whole website for the project, not just repository access.

                              Fossil seems to have a built-in web server, which provides access to the repository, tickets and wiki. The closest thing distributed with Git is GitWeb, which requires Perl and a web server speaking CGI. It only allows you to browse the repository. For anything else you need even more third-party software like Gitea for example.

                              So setting up a whole website for a project using Git indeed…

                              requires a lot more software, and a lot more work, than setting up a similar site with an integrated package like Fossil.

                              1. 1

                                Fair enough, but adding JIRA or something is simple enough. What the author is saying is: ‘I want fossil’, which is fine, but isn’t a good reason not to use git.

                                1. 1

                                  cgit works fine. Often things aren’t packaged together on purpose within linux. This doesn’t mean you can only use gitweb. Also you can send patches over email if you choose. It’s not really fair to call it third party software since literally all of it including git is third party software.

                                2. 1

                                  I agree with you about the conflation between git and Github. Github doesn’t play very well with cmd line git. You can’t send/receive patches, for example. For lots of people git == Github, they are interchangeable; you and I know differently.

                                  Like @seschwar said, you seem to have missed the first part of the sentence, “website for a project”. This requires more than ssh and git. Fossil is a statically linked binary that includes it all. And with the fossil ui command you get the entire website and all functionality locally as well, and it will all seamlessly sync with as many other copies of the repo as you want; the server is not special in any way, except that it exists at a safe, well-known address and becomes the main published repo.

                                1. 11

                                  It’s not true that no-one thought the early internet was rubbish. I did, and a lot of my peers did too. We just saw a slow and clunky technology filled with problems and didn’t have the imagination to see further. Strikes me that this is exactly like blockchain. Also, the title talks about Bitcoin, but then discusses blockchain, which is confusing. It’s like dismissing Yahoo, and then dismissing the internet because of it. Very odd.

                                  1. 12

                                    Email was already faster than physical post in 1992 when I got on the internet as a student.

                                    Mailing lists and Usenet presented an awesome opportunity for interaction with people all over the world.

                                    Bitcoin purports to be a better payment system. I can go online now, find a widget on Alibaba, pay for it with my credit card and get it delivered in a week or so. In what way does BTC improve on this scenario?

                                    1. 2

                                      Bitcoin purports to be a better payment system.

                                      Bitcoin is a technology. Many people have worked on it for various reasons, and it’s used by many people for various purposes. It doesn’t make sense to talk about its purport as if that were a single unified thing.

                                      At least for now it’s an alternate payment system, with its own pros and cons.

                                      Cryptocurrencies are still actively iterating different ideas. Many obscure ideas are never tried for lack of a network effect. Bitcoin and its brethren are young technology. I don’t think we can truly understand its potential until people have finished experimenting with it. That day hasn’t come.

                                      I think there is an innate human tendency to rush to judgment, to reduce the new to the seen before. When we do so, I think we miss out on the potential of what we judge. This is particularly true for young technology, where the potential is usually the most important aspect.

                                      Email was faster than physical post in 1992 but without popular usage lacked general utility. In hindsight it all seems so obvious however.

                                      1. 9

                                        I wrote:

                                        Bitcoin purports to be a better payment system.

                                        You write:

                                        Bitcoin is a technology. Many people have worked on it for various reasons, and it’s used by many people for various purposes. It doesn’t make sense to talk about its purport as if that were a single unified thing.

                                        I’m going off the whitepaper here:

                                        Bitcoin: A Peer-to-Peer Electronic Cash System

                                        Abstract. A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution.

                                        I’ve been following the cryptocurrency space since I first installed a miner on my crappy laptop and got my first 0.001 BTC from a faucet, and the discussion has overwhelmingly been about Bitcoin as a payment system, or the value of the token itself, or how the increasing value of the token will enable societal change. Other applications, such as colored coins, or the beginnings of the Lightning Network, have never “hit it off” in the same way.

                                        1. 1

                                          Hmm, I’m not sure how that abstract is supposed to show how bitcoin purports to be a “better” payment system, just that it was originally envisioned as a payment system.

                                          Anyway, since then the technology presented in that paper has been put to other uses besides payments. Notarization, decentralized storage and decentralized computation are some examples. A technology is more than the intention of an original inventor.

                                          Other applications, such as colored coins, or the beginnings of the Lightning Network, have never “hit it off” in the same way.

                                          Evaluating the bitcoin technology, if that’s what we’re discussing, requires more than looking at just the bitcoin network historically. It requires looking at other cryptocurrencies, which run on similar principles. It also requires that we understand how the bitcoin network itself might improve in the future. It doesn’t make sense to write off bitcoin technology simply for slow transaction times, when there remains a chance that the community will fix it in time, or when there are alternatives with faster transaction times.

                                          Besides that, there are the unthought-of uses that the technology may have in the future. And even ideas that people have had that have never been seriously tried. With all that in mind, the potential of bitcoin technology can’t really be said to be something we can grasp with much certainty. We will only understand it fully with experimentation and time.

                                          1. 4

                                            Notarization, decentralized storage

                                            There was quite a bit of tech predating Bitcoin that used hashchains with signatures or distributed checking. I just noted some here. So, people can build on tech like that, tech like whatever counts as a blockchain, tech like Bitcoin’s, and so on. Many options without jumping on the “blockchain” bandwagon.

                                            1. 1

                                              Well the advantage of a cryptocurrency blockchain vs the ones you cite is that:

                                              • you have a shared, “trustless”, appendable database including an ability to resolve version conflicts
                                              • the people who provide this database are compensated for doing so as part of the protocol

                                              A cryptocurrency blockchain has drawbacks, sure, but it’s not like it doesn’t bring anything to the table.

                                            2. 3

                                              Unfortunately, what you said can be applied to every emerging tech out there. See VR and AR. The difference is that VR and AR have found enterprise-y niches like furniture preview in your home or gaming. Likewise, crypto-currency has one main use case which is to act as a speculative tool for investors. Now, cryptocurrency’s main use case is becoming threatened by regulation on a national level (see China, South Korea). Naturally, its practicality is being called into question. No one can predict the future and say with 100% certainty that X tech will become the next internet. But what we’re saying is that the tech did not live up to its hype and it’s pointless to continue speculating until blockchain has found real use cases.

                                              1. 1

                                                Unfortunately, what you said can be applied to every emerging tech out there.

                                                Yes, probably.

                                                The difference is that VR and AR have found enterprise-y niches like furniture preview in your home or gaming.

                                                Personally I’m skeptical that furniture preview and gaming truly explore the limits of what these technologies can do for us.

                                                Likewise, crypto-currency has one main use case which is to act as a speculative tool for investors.

                                                I mean, right now you can send money electronically with it.

                                                Now, cryptocurrency’s main use case is becoming threatened by regulation on a national level (see China, South Korea).

                                                You seem to be saying that regulation is going to happen everywhere. How could you know that?

                                                No one can predict the future and say with 100% certainty that X tech will become the next internet.

                                                I’m not talking about the difference between 99% certainty and 100% certainty. My argument is that we don’t understand the technology because we haven’t finished experimenting with it, and it’s through experimentation that we learn about it.

                                                But what we’re saying is that the tech did not live up to its hype

                                                The life of new technology isn’t in its hype - it’s in its potential, something which I think we haven’t uncovered. There’s tons of crazy ideas out there that have never even seen a mature implementation - programs running off prediction markets, programmable organizations, and decentralized lambda, storage, and compute.

                                                it’s pointless to continue speculating until blockchain has found real use cases.

                                                Not sure what you mean by speculating - financially speculating? I’m not advocating for that. Perhaps you mean speculating in the sense of theorizing - in that case I think there is value in that since the “real use cases” that you are demanding only get discovered through experiment, which is driven by speculation.

                                        2. 1

                                          And what if we now shift the debate to blockchain as a whole, and not just Bitcoin?

                                      1. 2

                                        I feel like “the good parts” may be apt in cases where there’s a lot that isn’t good about the thing in question – such as in the case of Crockford’s famously-titled book on jervascrupt. Is that somehow the case with git log?

                                        1. 1

                                          Depends on whether you consider the less penetrable parts of the man page to be bad. Readers that don’t probably won’t be interested in the post.

                                        1. 6

                                          Isn’t good old graphviz a perfectly good tool for producing git diagrams?

                                          1. 2

                                            Quite probably - I’ve used that quite a bit too - https://zwischenzugs.com/2017/12/18/project-management-as-code-with-graphviz/ - have you done this before/seen this written up?

                                          1. 8

                                            Sort of an aside discussion, but the author’s choice to distribute the code as a Docker image: is that becoming a thing now?

                                            I’m notorious among my peers for installing and trying everything under the sun, and usually having to blow out and reinstall my computer about once a year (usually coinciding with Apple’s release of an updated MacOS). Maybe I’m late to the party, but Docker images are a much cleaner way of distributing projects in a working environment, are they not?

                                            1. 13

                                              This feels like the kind of thing I’d be grumpy about if I were any older; software distribution is one of our oldest and therefore most-studied problems. Java tried to solve it with a universal runtime. Package managers try to solve it with an army of maintainers who manage dependencies. Giving up on all that and bundling the entirety of latex and all its dependencies (one of the intermediate images is 3 point 23 fucking gigs!) just to distribute a 279 line style file and make it easier to use feels… kind of excessive?

                                              That said, I’m not old and grumpy and this is awesome. I kind of hope that this becomes a thing, it’s easy to install and easy to remove (and know that you’ve left no traces on your system) and this image will presumably be usable for a very long time.

                                              EDIT: I wrote the above comment while I was waiting for the image to finish downloading. It’s now finished and the final image takes up 5.63GB of my disk space. I don’t mind for this one-off package but would certainly mind if this method of distribution started catching on. Maybe we should just all use nix?

                                              1. 3

                                                I wrote the above comment while I was waiting for the image to finish downloading. It’s now finished and the final image takes up 5.63GB of my disk space. I don’t mind for this one-off package but would certainly mind if this method of distribution started catching on. Maybe we should just all use nix?

                                                Docker has some mechanisms for sharing significant parts of those images… at least if they’re created from the same base. The problem obviously is that people are free to do whatever, so that sharing is far from optimal.

                                                1. 1

                                                Agreed, I assumed this was going to be something like a 200-line Python script with maybe 2 or 3 dependencies.

                                                2. 4

                                                  A docker image is the new curl|sh install method.

                                                  Basically ignore any concerns about security, updates, ‘I want this shit now Ma.’

                                                  1. 4

                                                    A random docker image is less likely to fuck up your home dir, though.

                                                    1. 2

                                                      I’ve spent a lot more time working with the shell than Docker. I find this Docker image a lot easier to understand and verify than various things I’ve been told to curl | sh.
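
                                                        For instance, you can at least see what went into an image, layer by layer, before you run anything from it - a quick sketch of the kind of inspection I mean:

                                                        # list the build steps baked into each layer of the image
                                                        docker history --no-trunc imiell/autotrace

                                                        # dump the image config (entrypoint, cmd, env) without running it
                                                        docker inspect imiell/autotrace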

                                                      1. 1

                                                        Couldn’t you just download and verify a script with curl -o filename.sh http://domain.name/filename.sh? How does a random Docker image end up being easier to verify? With a script you can just read through it, and verify what it does. With a Docker image you basically have to trust an image from 2014 of an entire operating system.

                                                        This honestly looks like one of the worst candidates for a Docker image. You have a tiny plaintext file, which is all this is installing, and you are being told to download a multi-gigabyte blob. I can understand why some people recommend using Docker for development, and for running things in places where you might not have control of the entire system, but it just seems unnecessary here.

                                                        1. 1

                                                          I don’t see how it’s installing just a style. It’s installing TeX, which is a big hairy package.

                                                          When I pull down the install for rustup, I end up with a 360 line shell script, which isn’t super easy to verify. For haskell’s stack, it’s 720. I swear I’ve seen 1500 before.

                                                      2. 1

                                                        Agree re security (you could get hacked) but at least it won’t accidentally wipe your hard drive while trying to uninstall (as has happened a few times I’m aware of).

                                                      3. 3

                                                        In this case especially, as the instructions to install and configure are pretty painful:

                                                        https://github.com/Jubobs/gitdags

                                                        Oh, there are none. But there is this:

                                                        http://chrisfreeman.github.io/gitdags_install.html

                                                        As an aside, the Docker image has a couple of features I’m quite proud of (in a small way).

                                                        1. The default command of the container outputs help text.

                                                        2. If the convert_images.sh script spots a Makefile, it runs it, eg:

                                                        https://github.com/ianmiell/gitdags/blob/master/examples/Makefile

                                                        which reduces build time significantly if you have a lot of images.

                                                        1. 4

                                                          Just scrolling through that second link gives me anxiety; oh my god this is going to go wrong in fifty different ways. Getting it all together in a configured package (docker image) was pretty smart.

                                                          1. 3

                                                            I don’t know… Looking at the second link, the install instructions are actually fairly simple if you have TeX and the dependencies installed. Even if you don’t, it’s just a LaTeX distribution, the tikz package, and the xcolor-solarized package.

                                                            In which case the instructions are only:

                                                            $ cd ${LATEX_ROOT}/texmf/tex/latex && git clone https://github.com/Jubobs/gitdags.git
                                                            $ kpsewhich gitdags.sty   # Check to see if gitdags.sty can be seen by TeX
                                                            

                                                            I feel like an entire Docker container is a little overkill. Might be an OK way to try the software, especially if you don’t have a TeX distribution installed, but it wouldn’t be a good way to actually use it.

                                                            1. 1

                                                              From the link:

                                                              ‘First, do NOT use apt-get to install. The best is to install TexLive from the Tex Users Group (TUG).’

                                                              1. 1

                                                                Yeah, like I said, the instructions are fairly simple if you have a TeX distribution installed. If that version does happen to be from a distribution, I’m sure it works anyway - he did just say that was the best way.

                                                                If you don’t happen to have TeX installed, it’s not that complicated to install it manually from TUG anyway.

                                                              2. 1

                                                                Looking at the second link, the install instructions are actually fairly simple if you have TeX and the dependencies installed

                                                                Yeah, I already have TeX on my system, so I don’t really see what the problem is.

                                                              3. 1

                                                                The default command of the container outputs help text.

                                                                I haven’t seen that before; for a “packaging” container (rather than an “app deployment” container) it’s a nice touch I’ll be copying, thanks :-)

                                                              4. 2

                                                                I’ve used this very package with nix on OS X, I think a docker image is… a bit over the top personally. It wasn’t that bad to install and setup compared to any other latex package.

                                                              1. 3

                                                                As a developer who’s worked with this company - I can really appreciate their process and how they operate.

                                                                1. 1

                                                                  Did we work together?

                                                                  1. 1

                                                                    Indirectly, probably - worked with the Sportsbook at an older company.

                                                                1. 1

                                                                  I just use the Python program RASH. It stores bash history in an sqlite db.

                                                                  1. 1

                                                                    Nice - that’s what I was originally looking for…

                                                                  1. 3

                                                                    My passwords regularly end up in my $HISTFILE, both by accident and when connecting to certain services, so it would be good not to store those in a central repository. Not sure how you would tackle this issue…

                                                                    1. 3

                                                                      Store the hash, and blacklist content in the $HISTFILE based on the hash. If you hit that one-in-a-quadrillion false positive then you just accept that you lost some data for the sake of security.
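
                                                                      Roughly something like this (the file names and the destination are made up):

                                                                      # record only the SHA-256 of a secret, never the secret itself
                                                                      printf '%s' 'my-secret-password' | sha256sum | cut -d' ' -f1 >> ~/.hist_blacklist

                                                                      # before shipping a history line, drop it if its hash is blacklisted
                                                                      line='some command line'
                                                                      hash=$(printf '%s' "$line" | sha256sum | cut -d' ' -f1)
                                                                      grep -qxF "$hash" ~/.hist_blacklist || printf '%s\n' "$line" >> /path/to/shipped_history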

                                                                      1. 1

                                                                        Yeah I like that….

                                                                      2. 2

                                                                        Aside from the shared secret security, it would be easy to add a blacklist file to the code (checking it’s an 0400 file). I could implement this if you want.

                                                                        1. 1

                                                                          Some encryption would be required - am pondering whether this should be SSL or a simpler scheme using the shared key.

                                                                          1. 3

                                                                            If you think about it, there’s not actually a need for the central server to read the logs: it just needs to store & serve them to authorised clients.

                                                                            You could have a single key shared by the clients, with SHA256(key || ‘client-server key’) being the client-server connexion key and SHA256(key || nonce) being the line-encryption key. Then the clients have simple configuration and the server cannot read the records, but all clients can read any client’s records.

                                                                            More complex schemes are possible, but this should be good enough for what I think you want to do.
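
                                                                        Sketched in shell just to make the derivations concrete (the nonce handling and cipher choice here are only illustrative, not a spec):

                                                                        shared_key='the one key every client shares'           # illustrative value
                                                                        nonce=$(od -An -N16 -tx1 /dev/urandom | tr -d ' \n')   # per-record nonce

                                                                        # client-server connexion key: SHA256(key || 'client-server key')
                                                                        conn_key=$(printf '%s%s' "$shared_key" 'client-server key' | sha256sum | cut -d' ' -f1)

                                                                        # line-encryption key: SHA256(key || nonce)
                                                                        line_key=$(printf '%s%s' "$shared_key" "$nonce" | sha256sum | cut -d' ' -f1)

                                                                        # encrypt one history line; the server only ever stores nonce + ciphertext
                                                                        history_line='some command line'
                                                                        printf '%s\n' "$history_line" | openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$line_key" -base64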

                                                                            1. 1

                                                                              ssh tunnel?

                                                                              1. 1

                                                                                Am keen to keep it as simple as possible - re-using the secret key seems like the shortest path (but I may be missing a technique).

                                                                        1. 3

                                                                          Neat idea. How about making your .bash_history file a named pipe and having the script read from that, instead of using PROMPT_COMMAND?
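
                                                                          Something like this, roughly (the log destination is made up, and you’d want the reader supervised so the pipe never goes unread):

                                                                          # replace the history file with a FIFO (back up the old one first)
                                                                          mv ~/.bash_history ~/.bash_history.orig 2>/dev/null
                                                                          mkfifo ~/.bash_history

                                                                          # reader: each time a shell appends its history, drain the pipe into the real log
                                                                          while true; do
                                                                              cat ~/.bash_history >> /path/to/central/log   # hypothetical destination
                                                                          done &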

                                                                          1. 4

                                                                            I did this at a hackathon. I found it kept having weird sync issues with a lot of shells open, but didn’t have time to fully investigate.

                                                                            1. 1

                                                                              Won’t this block your shell after a while if the reading process somehow dies?

                                                                              1. 1

                                                                                Yep. Equally PROMPT_COMMAND could cause delays/hangs depending on connectivity to the remote system.

                                                                              2. 1

                                                                                That’s a very creative idea that I hadn’t thought of. I did want to avoid interfering with the client system’s history mechanism though.

                                                                                1. 1

                                                                                  Ah, good point!

                                                                              1. 3

                                                                                you can get free dns services (including just-slave) from hurricane electric: https://dns.he.net/

                                                                                1. 3

                                                                                  Also, Cloudflare provides DNS in their free plan. Though it doesn’t cover all record types, it’s still pretty good.

                                                                                  1. 2

                                                                                    That’s very interesting (and quite rare), thanks! How did you hear about it?

                                                                                    1. 2

                                                                    i’ve been running a he ipv6 tunnel for some years now, i guess it was recommended by a friend back then. i can only recommend the free hurricane electric services, never had any problems. they even sent me a new t-shirt when my free-ipv6-certification-sage-t-shirt got lost in international mail :)

                                                                                  1. 6

                                                                                    I also run my own DNS server, but I prefer to maintain just the master. I pay ~$15/yr to outsource the slaves to a third party company who specializes in such things, and I don’t have to worry as much if my VPS provider decides to go down for a few hours, etc. I get a more reliable DNS system, and I still get to maintain control, graph statistics, etc, to my heart’s content.

                                                                                    Glad to see the discipline of self-hosting isn’t completely going the way of the dodo in this day and age!

                                                                                    1. 2

                                                                  Any recommendation for a good third-party company for such outsourcing?

                                                                                      I also run my own DNS. The main reason is that I run my own mail using https://mailinabox.email/, which has been a reasonably simple and pain-free experience. Paying someone to get better stability could be interesting.

                                                                                      1. 3

                                                                    I have added nameservers from BuddyNS as my secondary DNS. For the moment I’m just using their free plan, since I’ve delegated to only one nameserver out of the 3 which are serving my zones, and the query count is low enough to keep me on the free plan.

                                                                                        1. 1

                                                                      I loved BuddyNS, but I went over their query limit, and the only payment they accept is PayPal - I’ve boycotted PayPal since they stole $900 from me… I wish they would take other forms of payment.

                                                                                        2. 3

                                                                                          I asked for some recommendations online. My biggest requirements were a ‘slave only’ offering, DNSSEC/IPv6 support, and ‘not Dyn’ (I just can’t give Oracle money these days). With all that in mind, I ended up choosing dnsmadesimple.com (edit: looks like they’re $30/yr, not $15 as above. Mea culpa) It was seriously easy to get everything set up (less than 20 minutes!) and now I don’t have to worry about what happens when my master goes down.

                                                                                          1. 1

                                                                                            Do you mean dnsmadeeasy.com or do you mean dnsimple.com?

                                                                                            dnsmadesimple.com doesn’t exist

                                                                                            1. 2

                                                                                              My deepest apologies, this is what I get for Internetting when I’m about four cups of coffee short.

                                                                          dnsmadeeasy.com is the correct one.

                                                                                          2. 3

                                                                                            Hello everyone! This is my first post. :)

                                                                      I’m Vitalie from LuaDNS. We don’t offer slaves right now (only AXFR transfers), but if you don’t mind fiddling with git, you can add your BIND files to a git repository and push them to us via GitHub/Bitbucket/YourRepo. You can keep using your own DNS servers as slaves for redundancy.

                                                                                            You get backups via git and free Anycast DNS for 3 zones. :)

                                                                                            Shameless Plug

                                                                                          3. 1

                                                                                            Interesting - that’s not a bad idea.

                                                                                            If I were a corp I wouldn’t want this method, but for the single user, the investment has been well worth the pay-off - even if I decide to go with a vendor in future, I’ll understand what I’m paying for.

                                                                                          1. 1

                                                                                            Sort of odd to describe using trap for cleanup but not mention trap ... EXIT.
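
                                                                                For reference, the usual pattern looks something like this (the temp dir is just an example):

                                                                                #!/bin/bash
                                                                                set -e
                                                                                tmpdir=$(mktemp -d)
                                                                                trap 'rm -rf "$tmpdir"' EXIT   # cleanup runs on normal exit and on errors under set -e

                                                                                echo "scratch space: $tmpdir"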

                                                                                            1. 4

                                                                                              Ever typed anything like this?

                                                                                              $ grp somestring somefile
                                                                                              -bash: grp: command not found
                                                                                              

                                                                                              Sigh. Hit ‘up’, ‘left’ until at the ‘p’ and type ‘e’ and return.

                                                                                  Yeah, but I find using “up” “Ctrl-a” “Ctrl-d” “grep” easier, especially as an emacs user.

                                                                                  Generally speaking, I would say that’s the biggest “hidden” feature of bash: emacs bindings by default. And that’s not only limited to movement commands like C-a, C-e, C-p, M-b, etc. You can kill lines or words with C-k or M-d, and yank them back in when needed with C-y. There’s even an “infinite kill-ring” (by far the coolest name for an editor feature), to replace the last yanked section with the next item in the kill ring. Of course, not everything is implemented, so there’s no hidden mail client or M-x butterfly, but if you are already used to the default emacs editor bindings, you get used to this quickly. And afaik, all tools that use GNU readline can do this. I just tested it out with Python, and it seemed to work.

                                                                                  I also recall reading something about vi-bindings in the bash manpage, but I can’t testify to how useful, harmful, annoying or useless they are.
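
                                                                                  If you want to see what your shell currently has bound (in either editing mode), readline will tell you:

                                                                                  bind -P | grep -v 'is not bound'   # human-readable list of the current readline bindings
                                                                                  bind -p > ~/my-bindings.inputrc    # the same list in inputrc format, handy as a starting point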

                                                                                              1. 6

                                                                                                Emacs bindings by default is also one of the biggest hidden features of MacOS: the bindings work in all GUI text fields.

                                                                                                1. 1

                                                                                                  Wow, I learned something new today. Prompted by your comment, I found this table comparing emacs bindings, OSX’s version of emacs bindings, and the more traditional Mac-style bindings for various operations.

                                                                                                  Looks like emacs’s M- bindings are mapped to ctrl-opt- on MacOS, which isn’t super convenient (e.g. I don’t see myself getting in the habit of using ctrl-opt-f over opt-rightarrow to move forward a word), but most of the C- bindings are convenient enough.

                                                                                                  1. 1

                                                                                                    I just discovered this a few days ago by accident because I have it set in GTK so I can use the bindings in my browser. I had to use a colleague’s (who is on macOS) browser and I just used them without thinking and only later realised ‘hey, wait a minute, why did that work?’.

                                                                                                    1. 1

                                                                                                      This is a major reason why I stay on OS X. I’m pretty sure I could reconfigure some Linux to get most of this, but probably not all of the niceness of text fields

                                                                                                      Would love to be proven wrong though

                                                                                                      1. 3

                                                                                      I haven’t used GNOME for a while now, but I remember there being an option somewhere to use emacs keybindings. And it seems all you need nowadays is to install the GNOME tweak tool to activate it.

                                                                                                        (Alternatively, just use Emacs as your OS, heard they’ve got good support for emacs keybindings)

                                                                                                        1. 2

                                                                                                          Just FYI: That page is outdated, being written for 2.x era Gnome. Now the Emacs Input toggle is under the Keyboard & Mouse section.

                                                                                                    2. 3

                                                                                                      Yeah, won’t disagree. Occasionally, I find myself reaching for the caret because it comes to mind first.

                                                                                                      I’m an avid vi user, but the vi bindings on the command line never take for me. I always go back to the emacs ones.

                                                                                                      1. 3

                                                                                        I use vi bindings and love them! I also never use ^ because I prefer interactive editing.

                                                                                                        It’s really nice that they work in Python and R as well as bash (because Python and R both use readline).

                                                                                                        In fact I think a large part of the reason that my OCaml usage trailed off is that the REPL utop doesn’t support readline. It only has emacs bindings!

                                                                                                        For those who don’t know, here is the beginning of my .inputrc:

                                                                                                        $ cat ~/.inputrc 
                                                                                                        set editing-mode vi
                                                                                                        
                                                                                                        set bell-style visible    # no beep
                                                                                                        
                                                                                                        1. 2

                                                                                                          Deleting words with C-w is also very helpful ime.

                                                                                                          1. 1

                                                                                                            I use fc for that. Opens your $EDITOR with the last command in a file, the edited command will be run