1. 9

    This is a good starting point. My list of 6 ways to “level up” would look something like this:

    • NSE scripts and Lua engine - I fully agree on this one. It’s actually crazy how much you can do with it, and I think it’s one of the underrated parts of nmap. I’ve started writing scripts on the fly to try and deal with firewalls and other things that get in the way.
    • Timing options - One major thing that bites people when they start to get deeper into nmap is scans taking too long. The moment I realized that some of the defaults are a little too aggressive was the moment my options started to grow and fix some of my pain points. Reducing retries, reducing timeouts to an amount acceptable for the type of network, and lowering version-detection intensity are all things I suggest when dealing with, say, a /8 and UDP scans. Also look into timing templates; I haven’t been fully happy with mine yet, but it’s a work in progress.
    • Contextualize - Are you on link local? Use ARP/NDP/etc. Got a firewall being a jerk and telling you all ports are up? Switch to full connect scans (honestly I do this a ton now).
    • OS detection sucks - These days I almost never run OS detection; so many routers and “security devices” change the results that it’s just going to slow you down. The only exception is nmap over IPv6 with NDP. I’m actually trying to isolate a bug there, but I can’t seem to get the NDP ICMP to work properly without OS detection enabled.
    • Interacting with the XML/grepable output - this is key to my survival; I don’t think I could conduct a pentest without some heavy parsing. It also teaches you to be wary of certain unnamed script outputs.
    • Spoofing - This is much less common for the vast majority of users, but I’ve run into a few situations recently where I could ARP poison and then use nmap spoofing options to trick clients into accepting my UDP packets. It’s tricky and I still want to re-lab the stuff to write it up, but there aren’t many tools that do it as easily.
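    On the XML point above: nmap’s -oX output is easy to pick apart with nothing but the Python standard library. A minimal sketch (the XML fragment below is hand-written in the shape of nmap output, not captured from a real scan):

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment in the shape of `nmap -oX -` output (not a real scan).
NMAP_XML = """
<nmaprun>
  <host>
    <address addr="10.0.0.5" addrtype="ipv4"/>
    <ports>
      <port protocol="tcp" portid="22"><state state="open"/></port>
      <port protocol="tcp" portid="80"><state state="open"/></port>
      <port protocol="tcp" portid="443"><state state="filtered"/></port>
    </ports>
  </host>
</nmaprun>
"""

def open_ports(xml_text):
    """Return {address: [open port numbers]} from nmap XML output."""
    root = ET.fromstring(xml_text)
    results = {}
    for host in root.iter("host"):
        addr = host.find("address").get("addr")
        ports = [
            int(p.get("portid"))
            for p in host.iter("port")
            if p.find("state").get("state") == "open"
        ]
        results[addr] = ports
    return results

print(open_ports(NMAP_XML))  # {'10.0.0.5': [22, 80]}
```

    The same approach extends to pulling out service/version fields, and it’s far less brittle than grepping the normal output.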
    1.  

      You should write that up. I’d read it. I’m just someone trying to improve my networking skills in my own time, rather than a pentester (hence aiming it at casual users).

      1. 8

        I’m trying to build this up:

        https://therunbooks.com/

        barely anything there at the moment, but this might give a flavour:

        https://therunbooks.com/doku.php?id=networking:dns-lookup-failure

        Looking for help :)

        1. 1

          This is a cool idea. I’ll try and remember it as I deal with problems in the future.

          1. 1

            Thanks! It feels like a lonely furrow, and it might end up being only useful to me :) So any feedback welcome.

        1. 3

          Um. Following this link I got redirected to some kind of spam website that was blocked by my browser.

          Edit: clicked through a bunch more times to try and reproduce, got something slightly different:

          1. 2

            I’ve seen this on compromised WordPress sites before. If it’s the same as what I investigated previously, they do something like push the spam/ad/etc. to 1% of traffic and that makes it difficult to inspect/discover.

            1. 1

              Does it say Comcast in there? Could that be targeted to that connection?

              1. 1

                That’s… worrying. It’s a bog-standard WordPress site. What happens if you go to https://zwischenzugs.com?

                1. 1

                  I clicked through a dozen times and nothing happened. It definitely didn’t happen every time on the original link either.

                  1. 11

                    Looks like it’s a malicious ad coming in. Hard to say which ad network it came from, since the site is loading an obscene number of them…

              1. 0

                There are so many…

                tmux - meant I could ‘pick up where I left off’ when working on the move and crappy internet connections

                vim - super fast and available everywhere

                  1. 9

                    The author is wrong.

                    It’s not about accidental vs intentional security, it’s about the effectiveness of the technology in reaching the end goal.

                    Flying: did I get to my destination?

                    Elevator: did it take me to the right floor?

                    Voting machines: did it count the votes correctly?

                    It’s way way harder to be sure about the last one when computers are involved, because humans aren’t.

                    1. 0

                      I find it strange that /etc/hosts is not parsed into a trie. It is searched linearly.

                      1. 2

                        Isn’t it evaluated linearly to be deterministic…? If it were a trie data structure, what would that buy you, and would “a lookup” still be deterministic enough for a sysadmin to understand the precedence defined within the file?

                        I suppose a big question is – is it quicker to find a plain needle in the file vs build a Trie for a typical hosts file …

                        1. 2

                          If it was a Trie datastructure what would that buy you

                          Lookup time proportional to the length of the name instead of the number of entries. This allows you to have a very large hosts file without slowing down internet use.

                          would “a lookup” still be deterministic

                          yeah, you can ensure that happens in the implementation.

                          I suppose a big question is – is it quicker to find a plain needle in the file vs build a Trie for a typical hosts file …

                          Building the trie is slower than searching through the file, but that only needs to be done when the file is edited.
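                          A toy sketch of the idea (hypothetical; no real resolver does this), keeping the file’s top-to-bottom precedence by letting the first entry for a name win:

```python
class HostsTrie:
    """Toy character trie over hostnames (illustration only)."""

    def __init__(self):
        self.root = {}

    def add(self, name, addr):
        """Insert a name; the first entry for a name wins, like /etc/hosts."""
        node = self.root
        for ch in name:
            node = node.setdefault(ch, {})
        node.setdefault("__addr__", addr)  # keep the earliest address

    def lookup(self, name):
        """Walk one node per character: cost grows with the name's length,
        not with the number of entries in the file."""
        node = self.root
        for ch in name:
            node = node.get(ch)
            if node is None:
                return None
        return node.get("__addr__")

trie = HostsTrie()
trie.add("localhost", "127.0.0.1")
trie.add("ads.example.com", "0.0.0.0")
trie.add("localhost", "10.0.0.1")  # later duplicate: ignored, as in the file

print(trie.lookup("localhost"))        # 127.0.0.1
print(trie.lookup("ads.example.com"))  # 0.0.0.0
print(trie.lookup("nope.example"))     # None
```

                          Lookups stay deterministic because insertion order decides which address sticks, exactly as the linear scan would.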

                          1. 2

                            Do you have numbers on how big it would have to be before that matters? Without them, this reeks of premature optimization.

                      1. 3

                        Very cool. It’s this kind of stuff that’s often not very well documented in the usual places.

                        1. 1

                          Thanks!

                          1. 8

                            I am sort of wondering whether it wouldn’t be possible to just ship a 32-bit x86 executable instead of amd64, so the 32-bit tricks could still be pulled. Since it doesn’t seem to need any libraries, it wouldn’t incur any additional dependencies, I think.

                            1. 3

                              I didn’t think of trying that…

                              1. 1

                                I thought he was using 32-bit already? On my system, __NR_pause is defined to be 29 (his number) in asm/unistd_32.h, and 34 in asm/unistd_64.h. He’s also using eax over rax… Perhaps using int 80h rather than syscall selects the 32-bit ABI?

                                1. 2

                                  int 0x80 is definitely the 32 bit Linux system call entry point.

                                  1. 1

                                    To be honest, my knowledge of assembler is minimal and even that is 20 years out of date…

                                1. 2

                                  Very useful thing, especially when you can spawn more than one concurrent strace (handy for multiple variants of the same binary).

                                  As strace is also very helpful for sysadmins (we have a proverb here that you can debug almost anything with strace + tcpdump), I just threw it on my company’s internal mailing list. Hope they’ll appreciate it.

                                  But the distribution model for such “sysadmin toolbelt” tools in Python is kinda problematic for old-timers who use CentOS 7 on their desktop, still have Perl as the default lang and only started digging into Py2 recently. There should be a way to contain the whole dependency trail + interpreter of such a package in a single static binary to put in /usr/local/bin, like they do.

                                  Also for portability on remote servers, which is a more reasonable argument for most of you.

                                  1. 2

                                    It’s available as a Docker image, if that helps (tho I doubt it if the env is locked down):

                                    docker pull imiell/autotrace

                                    I could look at unrolling to single python file somehow… would that help?
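                                     For what it’s worth, the stdlib zipapp module gets partway to the “single file” idea: one .pyz you can drop in /usr/local/bin (it still needs a Python interpreter on the box, so not a true static binary). A rough sketch with a made-up throwaway project:

```python
import pathlib
import tempfile
import zipapp

# Fake one-file "project" purely for illustration.
src = pathlib.Path(tempfile.mkdtemp())
(src / "__main__.py").write_text('print("hello from a single file")\n')

# Pack it into a single runnable archive with a shebang line.
target = src.parent / (src.name + ".pyz")
zipapp.create_archive(src, target, interpreter="/usr/bin/env python3")

print(target.exists())  # True; run it with: python3 <target>
```

                                     Third-party deps can be pip-installed into the source dir before packing; it’s not as tidy as a static binary, but it avoids dragging a whole dependency trail through the target system’s package manager.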

                                    1. 1

                                      That’s no problem for me, I can use pip install --user … and have $HOME/bin on my $PATH.

                                      Docker container is of course very handy, but I don’t think it would appeal for such people I described above because it’s “too hip”.

                                      Don’t get me wrong, I’m not strictly looking at this particular software at this time, but wondering a bit wider about software distribution for non-C stuff which is not yet packaged by BigCo Enterprise Linuxes.

                                      1. 1

                                        Unfortunately the “static binary” story on Python is pretty meh right now. Perhaps one day there will be support in the interpreter for creating things like that.

                                    2. 2

                                      BTW, one thing I learned while doing this was that you can’t run strace and ltrace on the same pid. Or at least it looked that way to me. So I wonder whether you can run two straces?

                                      1. 1

                                        There should be a way to contain whole dependency trail + interpreter of such package into single static binary to put in /usr/local/bin like they do.

                                        try https://lobste.rs/s/os2xxj/exodus_painless_relocation_linux

                                        1. 1

                                          That doesn’t create a single static binary as far as I can tell.

                                      1. 4

                                        This:

                                        https://zwischenzugs.com/2018/05/21/autotrace-debug-on-steroids/

                                        A terminal takeover and auto-debugger of processes that you want to examine.

                                        1. 8

                                            This is fantastic. I’ve been pretty vocal about this on Twitter, largely alongside @aprilwensel / @compassioncode. I love and use SO all the time, but it definitely isn’t a friendly place. My experiences contributing rather than consuming have been disappointing enough that I likely wouldn’t ever do it again if things didn’t change.

                                          Take this example: https://stackoverflow.com/questions/10967795/directinput8-enumdevices-sometimes-painfully-slow/40449680#40449680

                                          I posted this answer to this extremely obscure DirectX issue. This was literally the only result on google for the keywords I was searching. (Along with those stupid aggregator sites that just copy the content from SO…) I was a new account and could not post a comment on the question, only an answer. My answer is not an answer, it’s merely that I have the same problem and here’s what I’ve learned, and I made that known. The person that asked the question replied to my answer with “Very interesting! Thanks! […] It’s good to know I’m not the only one it affects!”. I GOT 3 PEOPLE ATTEMPTING TO DELETE MY ANSWER. How dare I try to contribute something useful? Better delete it for not actually answering the question! Thankfully I was able to respond to the deletions and keep it. Now it seems like someone else responded with a similar “I’m getting this too, but haven’t solved it” answer, and I’m glad they did because it’s starting to piece together this problem.

                                          That sucked, and made me jaded towards contributing.

                                          Another example is: https://stackoverflow.com/questions/3240633/web-based-vnc-client No it’s not programming, but it was like the first result for web based vnc client. It has like 31 votes, the answer has 23. It’s obviously a useful post. Yet it’s closed as off topic. It seems like content people want to see based on the upvotes. Now it’s just frozen in time with outdated content, and that sucks. Yes there’s other sites where this is better suited, I’d argue that a VNC client is primarily a developer tool though. I’d be surprised if developers/IT in some form make up less than say 70% of people that use VNC. You can ask questions about other developer tools like vim, emacs or MSVC on SO.

                                          For the duplication issue what happens currently is if the question is identified as a duplicate, the question is closed with a message that feels like “Hmfp, RTFM, Why didn’t you search, this is a duplicate of this other question, use your eyes.” What should happen instead is, A: Don’t close the question. B: Link it as a possible duplicate of the other question. C: Post a message like, “We suspect that this question is a duplicate of this one and have associated it with the other question. If you believe this is incorrect then [click here] to revert this.” Then improve the flow for people ending up on this question to funnel them to the parent question, while still letting people answer this one, and better show the list of duplicate questions in the parent (depending on how well they can do SEO with this, obviously linking to a bunch of essentially dead/duplicate questions wouldn’t be great on the SEO side.)

                                          1. 2

                                            I gave up on SO for similar reasons - the mods seemed to care less about having a useful site than enforcing the ‘rules’.

                                            1. 2

                                              Even before the unfriendliness and mod absurdity, or at least before complaining about it got popular, I haven’t had very good experience with using SO for hard problems like the one you answered. I don’t think SO’s system is well-designed for attracting people capable of offering insight on such questions, or surfacing those types of questions to them when they do happen to show up and browse. It seems more like a system where newbies who are missing semicolons get their advice in ~10 seconds, while problems hard enough to stump experienced devs for hours or more get crickets.

                                              1. 2

                                                That looks very useful and I wasn’t aware of it.

                                                But also looks quite different to me. Indeed she explicitly says:

                                                “Granted mounting is not a requirement of building docker images. You can always go the route of orca-build and umoci and not mount at all. umoci is also an unprivileged image builder and was made long before I even made mine by the talented Aleksa Sarai who is also responsible for a lot of the rootless containers work upstream in runc.”

                                                This pursues that approach, and is concerned with raw builds rather than k8s.

                                                1. 1

                                                  ^and is^and the OP is^

                                                  1. 1

                                                    FWIW you may edit your comments on this site, it’s much nicer than Twitter. ;)

                                                    edit: oh, there’s a time limit.

                                              1. 1

                                                There’s a lot of conflation between ‘git’ and ‘github’ here.

                                                Point 3 is utterly bogus. If your needs are so specific, use git, not github.

                                                ‘Setting up a website for a project to use Git requires a lot more software, and a lot more work, than setting up a similar site with an integrated package like Fossil.’

                                                I run a git server at home. All I need is ssh. What’s ‘a lot of software’ about that?

                                                1. 4

                                                  The part you quoted talks about providing a whole website for the project, not just repository access.

                                                  Fossil seems to have a built-in web server, which provides access to the repository, tickets and wiki. The closest thing distributed with Git is GitWeb, which requires Perl and a web server speaking CGI. It only allows you to browse the repository. For anything else you need even more third-party software like Gitea for example.

                                                  So setting up a whole website for a project using Git indeed…

                                                  requires a lot more software, and a lot more work, than setting up a similar site with an integrated package like Fossil.

                                                  1. 1

                                                    Fair enough, but adding JIRA or something is simple enough. What the author is saying is: ‘I want fossil’, which is fine, but isn’t a good reason not to use git.

                                                    1. 1

                                                        cgit works fine. Often things aren’t packaged together on purpose within Linux; that doesn’t mean you can only use GitWeb. Also, you can send patches over email if you choose. It’s not really fair to call it third-party software, since literally all of it, including git, is third-party software.

                                                    2. 1

                                                        I agree with you about the conflation between git and Github. Github doesn’t play very well with command-line git; you can’t send/receive patches, for example. For lots of people git == Github, they are interchangeable; you and I know differently.

                                                      like @seschwar said, you seem to have missed the first part of the sentence, “website for a project”. This requires more than ssh and git. Fossil is a statically linked binary that includes it all. And with the fossil ui command you get the entire website and all functionality locally as well, and it will all seamlessly sync with as many other copies of the repo as you want, the server is not special in any way, except that it exists at a safe, well-known address and becomes the main published repo.

                                                    1. 11

                                                      It’s not true that no-one thought the early internet was rubbish. I did, and a lot of my peers did too. We just saw a slow and clunky technology filled with problems and didn’t have the imagination to see further. Strikes me that this is exactly like blockchain. Also, the title talks about Bitcoin, but then discusses blockchain, which is confusing. It’s like dismissing Yahoo, and then dismissing the internet because of it. Very odd.

                                                      1. 12

                                                        Email was already faster than physical post in 1992 when I got on the internet as a student.

                                                        Mailing lists and Usenet presented an awesome opportunity for interaction with people all over the world.

                                                        Bitcoin purports to be a better payment system. I can go online now, find a widget on Alibaba, pay for it with my credit card and get it delivered in a week or so. In what way does BTC improve on this scenario?

                                                        1. 2

                                                          Bitcoin purports to be a better payment system.

                                                          Bitcoin is a technology. Many people have worked on it for various reasons, and it’s used by many people for various purposes. It doesn’t make sense to talk about its purport as if that were a single unified thing.

                                                          At least for now it’s an alternate payment system, with its own pros and cons.

                                                          Cryptocurrencies are still actively iterating different ideas. Many obscure ideas are never tried for lack of a network effect. Bitcoin and its brethren are young technology. I don’t think we can truly understand its potential until people have finished experimenting with it. That day hasn’t come.

                                                          I think there is an innate human tendency to rush to judgment, to reduce the new to the seen before. When we do so, I think we miss out on the potential of what we judge. This is particularly true for young technology, where the potential is usually the most important aspect.

                                                          Email was faster than physical post in 1992 but without popular usage lacked general utility. In hindsight it all seems so obvious however.

                                                          1. 9

                                                            I wrote:

                                                            Bitcoin purports to be a better payment system.

                                                            You write:

                                                            Bitcoin is a technology. Many people have worked on it for various reasons, and it’s used by many people for various purposes. It doesn’t make sense to talk about its purport as if that were a single unified thing.

                                                            I’m going off the whitepaper here:

                                                            Bitcoin: A Peer-to-Peer Electronic Cash System

                                                            Abstract. A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution.

                                                            I’ve been following the cryptocurrency space since I first installed a miner on my crappy laptop and got my first 0.001 BTC from a faucet, and the discussion has overwhelmingly been about Bitcoin as a payment system, or the value of the token itself, or how the increasing value of the token will enable societal change. Other applications, such as colored coins, or the beginnings of the Lightning Network, have never “hit it off” in the same way.

                                                            1. 1

                                                              Hmm, I’m not sure how that abstract is supposed to show how bitcoin purports to be a “better” payment system, just that it was originally envisioned as a payment system.

                                                              Anyway, since then the technology presented in that paper has been put to other uses besides payments. Notarization, decentralized storage and decentralized computation are some examples. A technology is more than the intention of an original inventor.

                                                              Other applications, such as colored coins, or the beginnings of the Lightning Network, have never “hit it off” in the same way.

                                                              Evaluating the bitcoin technology, if that’s what we’re discussing, requires more than looking at just the bitcoin network historically. It requires looking at other cryptocurrencies, which run on similar principles. It also requires that we understand how the bitcoin network itself might improve in the future. It doesn’t make sense to write off bitcoin technology simply for slow transaction times when there remains a chance that the community will fix them in time, or when there are alternatives with faster transaction times.

                                                              Besides that, there are the unthought-of uses that the technology may have in the future. And even ideas that people have had that have never been seriously tried. With all that in mind, the potential of bitcoin technology can’t really be said to be something we can grasp with much certainty. We will only understand it fully with experimentation and time.

                                                              1. 4

                                                                Notarization, decentralized storage

                                                                There was quite a bit of tech predating Bitcoin that used hashchains with signatures or distributed checking. I just noted some here. So, people can build on tech like that, tech like whatever counts as a blockchain, tech like Bitcoin’s, and so on. Many options without jumping on the “blockchain” bandwagon.

                                                                1. 1

                                                                  Well the advantage of a cryptocurrency blockchain vs the ones you cite is that:

                                                                  • you have a shared, “trustless”, appendable database including an ability to resolve version conflicts
                                                                  • the people who provide this database are compensated for doing so as part of the protocol

                                                                  A cryptocurrency blockchain has drawbacks, sure, but it’s not like it doesn’t bring anything to the table.

                                                                2. 3

                                                                  Unfortunately, what you said can be applied to every emerging tech out there. See VR and AR. The difference is that VR and AR have found enterprise-y niches like furniture preview in your home or gaming. Likewise, cryptocurrency has one main use case, which is to act as a speculative tool for investors. Now, cryptocurrency’s main use case is becoming threatened by regulation on a national level (see China, South Korea). Naturally, its practicality is being called into question. No one can predict the future and say with 100% certainty that X tech will become the next internet. But what we’re saying is that the tech did not live up to its hype, and it’s pointless to continue speculating until blockchain has found real use cases.

                                                                  1. 1

                                                                    Unfortunately, what you said can be applied to every emerging tech out there.

                                                                    Yes, probably.

                                                                    The difference is that VR and AR has found enterprise-y niches like furniture preview in your home or gaming.

                                                                    Personally I’m skeptical that furniture preview and gaming truly explore the limits of what these technologies can do for us.

                                                                    Likewise, crypto-currency has one main use case which is to act as a speculative tool for investors.

                                                                    I mean, right now you can send money electronically with it.

                                                                    Now, crypto currency’s main use case is becoming threatened from regulation on a national level (see China, South Korea).

                                                                    You seem to be saying that regulation is going to happen everywhere. How could you know that?

                                                                    No one can predict the future and say with 100% certainty that X tech will become the next internet.

                                                                    I’m not talking about the difference between 99% certainty and 100% certainty. My argument is that we don’t understand the technology because we haven’t finished experimenting with it, and it’s through experimentation that we learn about it.

                                                                    But, what we’re saying is that the tech did not live up to it’s hype

                                                                    The life of a new technology isn’t in its hype; it’s in its potential, something which I think we haven’t uncovered. There are tons of crazy ideas out there that have never even seen a mature implementation: programs running off prediction markets, programmable organizations, and decentralized lambda, storage, and compute.

                                                                    it’s pointless to continue speculating until block chain has found real use cases.

                                                                    Not sure what you mean by speculating - financially speculating? I’m not advocating for that. Perhaps you mean speculating in the sense of theorizing - in that case I think there is value in that since the “real use cases” that you are demanding only get discovered through experiment, which is driven by speculation.

                                                            2. 1

                                                              And if we shift now the debate on blockchain as a whole and not just bitcoin?

                                                          1. 2

                                                            I feel like “the good parts” may be apt in cases where there’s a lot that isn’t good about the thing in question – such as in the case of Crockford’s famously-titled book on JavaScript. Is that somehow the case with git log?

                                                            1. 1

                                                              Depends on whether you consider the less penetrable parts of the man page to be bad. Readers that don’t probably won’t be interested in the post.

                                                            1. 6

                                                              Isn’t good old graphviz a perfectly good tool for producing git diagrams?

                                                              1. 2

                                                                Quite probably - I’ve used that too quite a bit - https://zwischenzugs.com/2017/12/18/project-management-as-code-with-graphviz/ - have you done this before/seen this written up?

                                                              1. 8

                                                                Sort of an aside discussion, but the author’s choice to distribute the code as a Docker image: is that becoming a thing now?

                                                                I’m notorious among my peers for installing and trying everything under the sun, and usually having to blow out and reinstall my computer about once a year (usually coinciding with Apple’s release of an updated MacOS). Maybe I’m late to the party, but Docker images are a much cleaner way of distributing projects in a working environment, are they not?

                                                                1. 13

                                                                  This feels like the kind of thing I’d be grumpy about if I were any older; software distribution is one of our oldest and therefore most-studied problems. Java tried to solve it with a universal runtime. Package managers try to solve it with an army of maintainers who manage dependencies. Giving up on all that and bundling the entirety of latex and all its dependencies (one of the intermediate images is 3.23 fucking gigs!) just to distribute a 279-line style file and make it easier to use feels… kind of excessive?

                                                                  That said, I’m not old and grumpy and this is awesome. I kind of hope that this becomes a thing, it’s easy to install and easy to remove (and know that you’ve left no traces on your system) and this image will presumably be usable for a very long time.

                                                                  EDIT: I wrote the above comment while I was waiting for the image to finish downloading. It’s now finished and the final image takes up 5.63GB of my disk space. I don’t mind for this one-off package but would certainly mind if this method of distribution started catching on. Maybe we should just all use nix?

                                                                  1. 3

                                                                    I wrote the above comment while I was waiting for the image to finish downloading. It’s now finished and the final image takes up 5.63GB of my disk space. I don’t mind for this one-off package but would certainly mind if this method of distribution started catching on. Maybe we should just all use nix?

                                                                    Docker has some mechanisms for sharing significant parts of those images… at least if they’re created from the same base. The problem obviously is that people are free to do whatever, so that sharing is far from optimal.
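The docker CLI itself can show how much of that layer storage is actually being shared. A rough sketch, assuming the docker CLI is installed (guarded so it is a no-op where it isn’t):

```shell
# Layers are content-addressed, so images built FROM the same base reuse
# those layers on disk; `docker system df` summarises the real disk use.
if command -v docker >/dev/null 2>&1; then
    docker system df                   # image sizes vs. actual shared storage
    # per-layer breakdown of the first local image, if any exist
    docker history --no-trunc "$(docker images -q | head -n 1)" 2>/dev/null \
        || echo "no local images to inspect"
    layers=checked
else
    echo "docker CLI not found; skipping"
    layers=skipped
fi
```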

                                                                    1. 1

                                                                      Agreed, I assumed this was going to be something like a 200-line Python script with maybe 2 or 3 dependencies.

                                                                    2. 4

                                                                      A docker image is the new curl|sh install method.

                                                                      Basically ignore any concerns about security, updates, ‘I want this shit now Ma.’

                                                                      1. 4

                                                                        A random docker image is less likely to fuck up your home dir, though.

                                                                        1. 2

                                                                          I’ve spent a lot more time working with the shell than Docker. I find this Docker image a lot easier to understand and verify than various things I’ve been told to curl | sh.

                                                                          1. 1

                                                                            Couldn’t you just download and verify a script with curl -o filename.sh http://domain.name/filename.sh? How does a random Docker image end up being easier to verify? With a script you can just read through it and verify what it does. With a Docker image you basically have to trust an entire-operating-system image from 2014.

                                                                            This honestly looks like one of the worst candidates for a Docker image. All this is installing is a tiny plaintext file, yet you are being told to download a multi-gigabyte blob. I can understand recommending Docker for development, or for running things in places where you don’t control the entire system, but here it just seems unnecessary.
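For comparison, the download-then-inspect flow described above can be sketched like this (a locally created stand-in file replaces the real curl download so the example is self-contained; the echo line is a placeholder, not a real installer):

```shell
# Stand-in for: curl -fsSL -o install.sh https://example.com/install.sh
cat > install.sh <<'EOF'
#!/bin/sh
echo "pretend install step"
EOF
head -n 20 install.sh      # actually read what you are about to execute
sha256sum install.sh       # compare against a published checksum, if one exists
sh install.sh              # run only once you are satisfied
```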

                                                                            1. 1

                                                                              I don’t see how it’s installing just a style. It’s installing TeX, which is a big hairy package.

                                                                              When I pull down the install for rustup, I end up with a 360 line shell script, which isn’t super easy to verify. For haskell’s stack, it’s 720. I swear I’ve seen 1500 before.

                                                                          2. 1

                                                                            Agree re security (you could get hacked), but at least it won’t accidentally wipe your hard drive while trying to uninstall (as has happened a few times I’m aware of).

                                                                          3. 3

                                                                            In this case especially, as the instructions to install and configure are pretty painful:

                                                                            https://github.com/Jubobs/gitdags

                                                                            Oh, there are none. But there is this:

                                                                            http://chrisfreeman.github.io/gitdags_install.html

                                                                            As an aside, the Docker image has a couple of features I’m quite proud of (in a small way).

                                                                            1. The default command of the container outputs help text.

                                                                            2. If the convert_images.sh script spots a Makefile, it runs it, eg:

                                                                            https://github.com/ianmiell/gitdags/blob/master/examples/Makefile

                                                                            which reduces build time significantly if you have a lot of images.
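That Makefile-detection behaviour could look roughly like this sketch (not the actual convert_images.sh; build_images and made-by-make are made-up names, and pdflatex plus ImageMagick’s convert are assumed for the fallback path):

```shell
#!/bin/sh
# Prefer a user-supplied Makefile (which can skip unchanged images),
# otherwise fall back to converting every .tex file from scratch.
build_images() {
    if [ -f Makefile ]; then
        make
    else
        for f in *.tex; do
            [ -e "$f" ] || continue              # no .tex files at all
            pdflatex -interaction=nonstopmode "$f" >/dev/null
            convert -density 300 "${f%.tex}.pdf" "${f%.tex}.png"
        done
    fi
}

# Demo: a directory containing a Makefile short-circuits the full conversion.
dir=$(mktemp -d)
printf 'all:\n\ttouch made-by-make\n' > "$dir/Makefile"
(cd "$dir" && build_images)
ls "$dir"
```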

                                                                            1. 4

                                                                              Just scrolling through that second link gives me anxiety; oh my god this is going to go wrong in fifty different ways. Getting it all together in a configured package (docker image) was pretty smart.

                                                                              1. 3

                                                                                I don’t know… Looking at the second link, the install instructions are actually fairly simple if you have TeX and the dependencies installed. Even if you don’t, it’s just a LaTeX distribution, the tikz package, and the xcolor-solarized package.

                                                                                In which case the instructions are only:

                                                                                $ cd ${LATEX_ROOT}/texmf/tex/latex && git clone https://github.com/Jubobs/gitdags.git
                                                                                $ kpsewhich gitdags.sty   # Check to see if gitdags.sty can be seen by TeX
                                                                                

                                                                                I feel like an entire Docker container is a little overkill. Might be an OK way to try the software, especially if you don’t have a TeX distribution installed, but it wouldn’t be a good way to actually use it.
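If you do go the native route, one hypothetical refinement of the snippet above is to clone into the per-user TeX tree (TEXMFHOME) instead of ${LATEX_ROOT}, so no root access is needed. Guarded so it is a no-op on machines without TeX:

```shell
# kpsewhich can report the per-user tree TeX already searches (usually ~/texmf)
if command -v kpsewhich >/dev/null 2>&1; then
    texmfhome=$(kpsewhich -var-value TEXMFHOME)
    mkdir -p "$texmfhome/tex/latex"
    git clone https://github.com/Jubobs/gitdags.git "$texmfhome/tex/latex/gitdags" \
        || echo "clone failed (offline, or directory already exists)"
    kpsewhich gitdags.sty    # prints a path once TeX can see the package
    tex_status=done
else
    echo "no TeX installation found; skipping"
    tex_status=skipped
fi
```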

                                                                                1. 1

                                                                                  From the link:

                                                                                  ‘First, do NOT use apt-get to install. The best is to install TeXLive from the TeX Users Group (TUG).’

                                                                                  1. 1

                                                                                    Yeah, like I said, the instructions are fairly simple if you have a TeX distribution installed. If that version does happen to be from a distro package, I’m sure it works anyway - he only said TUG is the best way.

                                                                                    If you don’t happen to have TeX installed, it’s not that complicated to install it manually from TUG anyway.

                                                                                  2. 1

                                                                                    Looking at the second link, the install instructions are actually fairly simple if you have TeX and the dependencies installed

                                                                                    Yeah, I already have TeX on my system, so I don’t really see what the problem is.

                                                                                  3. 1

                                                                                    The default command of the container outputs help text.

                                                                                    I haven’t seen that before; for a “packaging” container (rather than an “app deployment” container) it’s a nice touch I’ll be copying, thanks :-)
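For anyone else wanting to copy the same touch, a minimal sketch of how it might be wired up (my guess at the approach, not the actual Dockerfile; alpine:3.19 and help.txt are placeholder choices):

```dockerfile
FROM alpine:3.19
COPY help.txt /usr/local/share/help.txt
# CMD rather than ENTRYPOINT, so `docker run image <real-command>` still
# overrides the default instead of printing the help text.
CMD ["cat", "/usr/local/share/help.txt"]
```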

                                                                                  4. 2

                                                                                    I’ve used this very package with nix on OS X, I think a docker image is… a bit over the top personally. It wasn’t that bad to install and setup compared to any other latex package.