1. 4

    Update: posted to HN just now. You’re welcome to upvote it there for an initial boost of visibility if you like. Hope this won’t trigger their “voting ring detector” or something.

    edit: Eh, it flopped on HN anyway. Pity, but I had to be prepared for that; it’s always a roulette there. Probably better for my mental health this way :P I’m all the happier, then, that I could share it with you here on lobste.rs: I got so many awesome comments with valuable discussion and kind words, and above all a chance to tell you about it and hopefully make your work, and through it your life, a teeny tiny bit better/easier. Ah, by the way: I don’t have an active twitter account, so if you find people sharing this there (or elsewhere), or custom screenshots/screencasts they made with it, I’d be very happy if you could let me know via a private message on lobste.rs! But no pressure. TIA

    1. 2

      Yes, I think the direct link excludes your vote. Better go to “https://news.ycombinator.com/show” and manually up-vote “Show HN: up – a tool for writing Linux pipes with instant live preview”. Good luck!

      PS: Worst case, your story might get reselected by the mods, or I think you can somehow ask the mods to resubmit it.

      Seems it got resubmitted https://news.ycombinator.com/item?id=18292712

      1. 1

        Hahaha, lol, the storm unfolds indeed, if with a small delay :D Funny to watch it happen :) Thanks a lot for the pingback! :) And quite some interesting ideas and thoughts there, too. Oooh, there will be a lot to digest after the dust settles. I’m starting to have second thoughts, worrying this may indeed result in someone accidentally losing files :( Eh, hope not; let’s hope it doesn’t take me too long to release a new version with at least a pausing functionality…

        edit: And I’m still surprised by the amount of positivity and interest I’m getting even there! It really warms the heart to read each and every post, here, there, everywhere, from people who wanted such a tool and now discover it exists (even if they sometimes weren’t aware of that wanting!), and that I could be helpful to them… just hoping now that I won’t be getting too many “give me back my lost files” issues on github, starting a week from today, till the end times…

    1. 6

      I wish VFMD had become more popular. It has a nice, exhaustive and deterministic spec, and the potential to be well specified with regard to worst-case complexity.

      1. 3

        How is this different from, or related to, CommonMark?

        1. 2

          It predates CommonMark. It’s a real specification: unambiguous and exhaustive (I haven’t read CM, but from what I’ve heard it’s more example-based, and still ambiguous and non-exhaustive). As a genuine spec, it’s kind of a “declarative algorithm expressed in words”: it’s precise as to expected observable behaviours, but beyond that it doesn’t force a particular implementation (thus it’s not a “specification by reference implementation”). It comes with a huge test suite, starting from simple cases, up to and including various non-trivial cases known to be ambiguous among various implementations.

          Also, I believe that with a few non-disruptive extra restrictions (an O(N) regexp engine, and a limit on how many lines a link can span) it could be made O(N·M) worst-case (N being the number of lines in the text, M the length of the longest line).

          However, it didn’t win the popularity contest, as there’s only one guy behind it, who’s neither an “SE celebrity” nor a good marketing/PR person. But he did put in the extra effort to make it very readable and to maintain a clean, solid website for VFMD, still keeping it active after many years.

          1. 3

            It’s a real specification, it is unambiguous and exhaustive (I haven’t read CM, but from what I heard it’s more example-based, and still ambiguous and non-exhaustive).

            I think you are missing something, because this is exactly the goal of the CommonMark spec; the reason it was created is the ambiguity of Daring Fireball’s Markdown description.

            From the latest spec:

            […] John Gruber’s canonical description of Markdown’s syntax does not specify the syntax unambiguously. Here are some examples of questions it does not answer: […]

            or the website:

            John Gruber’s canonical description of Markdown’s syntax does not specify the syntax unambiguously. […]

            @akavel wrote:

            […] there’s only one guy behind it […]

            I think what made CommonMark so successful is that they’ve been very open to input from day one (there is a big community participation forum) and that they released a lot of specification versions over a short period of time (considering how old the initial Markdown draft is).

            1. 1

              I searched some more now, and what I found is a “commonmark vs vfmd” page on the vfmd wiki. It compares VFMD with CommonMark as of 2014, when the latter was emerging. The page links to a 2014 discussion on HN between vfmd’s author and one of the CommonMark authors. I suppose the quote that stayed in my memory, which IIUC was valid at least at the time, was:

              The problem is that there’s no formal grammar and the spec of “Standard Markdown”, while being more specific than John Gruber’s, is still full of ambiguities.

              Some examples of ambiguities:

              […]

              Thing is, a specification-by-example like this would have to keep an ever-growing list of corner cases and give examples for each of them. […] Hence the need for a formal grammar, which is the shortest way of expressing something unambiguously. […] (Shameless plug: vfmd (http://www.vfmd.org/) is one such Markdown spec which specifies an unambiguous way to parse Markdown, with tests and a reference implementation.)

              (“Standard Markdown” was the original name of CommonMark, before John Gruber objected to this naming.) And John MacFarlane’s (CM co-author’s) reply further down basically confirmed this:

              Your comments (coming from someone who has actually tackled this surprisingly difficult task) are some of the most valuable we’ve received[…] We considered writing the spec in the state machine vein, but I advocated for the declarative style. It may be worth rethinking that and rewriting it, essentially spelling out the parsing algorithm.

              I haven’t tracked CommonMark since then, so I don’t know whether they processed this feedback eventually, or not. Whereas VFMD was ready then already. And it was not “less open”; it was just less popular, and its author was not an Internet celebrity.

              1. 1

                Ah, thank you for checking why you’ve remembered this that way - makes more sense now.

                “commonmark vs vfmd” page on vfmd wiki

                Tried out many of the examples in Pandoc; some of the criticism appears to have been addressed, some not. Some of the examples are, I guess, personal preference with no “right” way to do it, no matter how much you discuss the topic.

                I haven’t tracked CommonMark since then, so I don’t know whether they processed this feedback eventually, or not.

                I can’t find any implementation in that vein, or anything that helps you implement CM beyond reading the spec and the source, but there are so many implementations of CommonMark now. But yes, it has acted on this feedback.

                Whereas VFMD was ready then already. And it was not “less open”; it was just less popular, and its author was not an Internet celebrity.

                There are claims that they’d already worked on CommonMark in 2012. And even if not, ideas, concepts and projects that come later sometimes do eclipse someone’s earlier work that never had the opportunity to take off.

                So I can’t tell you why VFMD hasn’t taken off while CommonMark, for whatever reasons, did. I think Roopesh Chander’s work on VFMD is stunning and his efforts should be praised…

                I don’t know who you are referring to as an “Internet celebrity”, whether you mean “John MacFarlane” or “Jeff Atwood”.

                Also, Pandoc’s first commits date to November 2009, long before VFMD, so I totally get why someone who built something like Pandoc would put effort into creating CommonMark.


                It predates CommonMark. It’s a real specification: unambiguous and exhaustive (I haven’t read CM, but from what I’ve heard it’s more example-based […]

                After this discussion I don’t see any proof for your claim, and “from what I heard in 2014” is really not helpful; it only confuses people with polarizing statements. Read both specs for a proper discussion. Both have examples.

                1. 2

                  I do understand now, thanks to this discussion, that CM may have changed more than I expected over this time. If I get back to some efforts on parsing Markdown in the future, I will most certainly take CM seriously into consideration and comparison, to develop an opinion on it anew and with a fresh eye. I’m very happy to hear it may have improved so much. With that said, for the time being I cannot invest my time into this comparison, and I don’t have a need for it. But I will surely be more considerate with my claims in this area from now on. Thank you very much for this again.

                  I generally agree with what you’ve written in this last post. As to Pandoc, the VFMD docs do mention and acknowledge the influence of John’s work quite clearly. By “Internet celebrity” I mean Jeff Atwood, and I totally don’t mean this in a bad way; just as a statement of fact, and a meditation on the importance of publicity and popularity for technology impact and adoption. It makes me somewhat sad that idealised “pure merit” is not enough to succeed, but that’s just how this world works, so I find it pointless to argue with it.

        2. 1

          Interesting, will have to look into that further.

        1. 7
          1. pushd / popd vs ‘cd -‘

          A word of caution on the use of pushd/popd in scripts: keep their uses close together or you can get very lost. I prefer using subshells for this sort of thing since, when the subshell is finished, the environment (including the current working directory) is restored.
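
          The subshell approach above can be sketched as follows (mktemp stands in for real work):

          ```shell
          # the cd is confined to the ( ... ) subshell, so the parent
          # shell's working directory is untouched once it exits
          start=$(pwd)
          tmpdir=$(mktemp -d)
          (
            cd "$tmpdir"
            echo "inside: $(pwd)"
          )
          [ "$(pwd)" = "$start" ] && echo "back in: $(pwd)"
          rmdir "$tmpdir"
          ```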

          1. source vs ‘.’

          One thing not mentioned here is how source will find things. From the reference manual:

          If filename does not contain a slash, the PATH variable is used to find filename. When Bash is not in POSIX mode, the current directory is searched if filename is not found in $PATH.

          It’s generally a good idea to always use a full or relative path to a file being sourced (that is, something with a slash in it) or you could be in for some real surprises.
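
          A quick sketch of that lookup order (the filenames here are made up; `.` is used since it behaves like source in bash):

          ```shell
          # without a slash, the file found via $PATH wins over the one
          # in the current directory; with ./ the local file is used
          pathdir=$(mktemp -d); workdir=$(mktemp -d)
          echo 'origin="PATH copy"'  > "$pathdir/settings.sh"
          echo 'origin="local copy"' > "$workdir/settings.sh"
          cd "$workdir"
          PATH="$pathdir:$PATH"
          . settings.sh        # no slash: found via $PATH first
          first=$origin
          . ./settings.sh      # slash: unambiguously the local file
          second=$origin
          echo "$first / $second"
          ```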

          1. 4

            […] I prefer using subshells for this sort of thing […]

            I’d usually go with putting cds into a shell-function, rather than spawning a subshell.

            Mostly because it’s, for my taste, easier to read, reason about and test.

            modify files
            create directory
            ( cd directory
            run something
            )
            proceed in original pwd
            

            vs.

            run_something() {
              [ -d "$1" ] || _error "dir missing $1"
              cd "$1"
              run something
            }
            
            modify files
            create directory
            run_something directory
            proceed in original pwd
            

            I guess it might also help with trap-statements.

            1. 5

              Hrm, I don’t think that will work.

              #!/bin/bash 
              
              run_something() {
                  cd tmp
              }
              
              echo "${BASH_VERSION[*]}"
              echo "Before: " $(pwd)
              run_something
              echo "After: " $(pwd)
              

              When I run it:

              % ./tmp/t.sh 
              4.4.19(1)-release
              Before:  /home/woz
              After:  /home/woz/tmp
              
              1. 4

                Oh, wow - thank you for checking. I can’t reproduce my earlier example either.

                Sorry for the misinformation; I can’t check my scripts at my old employer anymore. Maybe I wrapped it in a function and still used a subshell (which contradicts my criticism).

                I guess I’d then go with using a subshell inside a function, though that doesn’t make my earlier statement any more correct.

                #!/bin/bash 
                
                run_something() {
                    (
                        cd tmp
                    )
                }
                
                echo "${BASH_VERSION[*]}"
                echo "Before: " $(pwd)
                run_something
                echo "After: " $(pwd)
                
            2. 4

              A word of caution on the use of pushd/popd in scripts: keep their uses close together or you can get very lost.

              I tend to favour push/pop, but I also use indentation to help match them up, e.g.

              mkdir foo
              pushd foo
                bar baz quux
              popd
              
            1. 1

              What is it that keeps Mercurial from using python 3? It looks like the only thing using python2.7 on my system.

              1. 5

                It’s a big porting effort. hg essentially works with bytes, not strings, and having the fundamental string type change from bytes to codepoints is a huge disruption. PEP 461 was a big request of the hg devs before any porting could happen. That just made the port feasible; now all that’s left is a lot of hard work.
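
                To illustrate the PEP 461 point: %-style formatting for bytes came back in Python 3.5, which byte-oriented code like hg’s relies on heavily. A quick check from the shell (requires a python3 >= 3.5 on PATH):

                ```shell
                # bytes %-formatting (PEP 461); on Python < 3.5 this
                # raises TypeError instead of producing bytes
                rev=$(python3 -c 'print((b"changeset %d:%s" % (42, b"abc")).decode())')
                echo "$rev"
                ```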

                Progress is being made, but slowly.

                1. 1

                  There was also the “Python startup time” issue, which is probably less of an issue in comparison to what @JordiGH pointed out - but it’s nonetheless interesting.

                  1. 2

                    AIUI, Gregory’s current attempt at fixing that is a Rust loader that only starts up Python for tasks that take long enough to justify the startup time.

                    The Python devs are also trying to reduce warmup time, but I’m a bit skeptical that they’ll manage to bring it down far enough.

                1. 5

                  These projects are way easier to deploy when you use Docker. It will hide 90% of the stupid stuff that is automated for you in the Dockerfile.

                  1. 4

                    Is this a bug or a feature? If the “stupid stuff” goes unexplained, one could be tripped up by the part of it that really mattered.

                    It’s not enough to be “open source”, it needs to be transparent and credible too, so that one can reasonably maintain it. Hiding things in a Dockerfile doesn’t pass this test.

                    1. 8

                      I think it’s probably “enough” for a project to be whatever the maintainers want it to be. A Dockerfile is just an abstraction like a Makefile or even a shell script; the built artefact is effectively a tar file containing the assembled software, ready for deployment. I’m not a fan of the ergonomics of the docker CLI, but the idea that you’re “hiding” anything with Docker any more than you are with any other packaging and deployment infrastructure seems specious at best.

                      1. 0

                        Instead of focusing on a single word, try considering the opposing two: being credible and transparent. Clearly this isn’t.

                        For one thing, the reason you don’t do this is that it’s easy to be taken advantage of when exploitative code is placed in a big pile of things. For another, it’s bad form not to communicate your work well, because maintainers struggling to deal with an issue may create more (and possibly even worse) versions they claim “fix” something, and in the fog of code it might not be easy to tell which end is up.

                        I’m surprised you’d defend bad practice, since nearly everyone has had one of these waste a few hundred hours of their time. Your defense sounds even more specious than focusing on the wrong word and missing the point of the comment.

                        1. 2

                          I highlighted the word enough because your comment seems to have come from a place of entitlement and I was trying to call that out. The project doesn’t owe you anything.

                          Indeed, most of my comment was attempting to address your apparent suggestion that using a Dockerfile instead of some other packaging or deployment mechanism is somehow not transparent (or, I guess, credible?). I’m not really defending the use of Docker in any way – indeed, I don’t have any use for it myself – merely addressing what I think is specious criticism.

                          Regardless of what point you were trying to make, your comment comes across as an effectively baseless criticism of the project for not delivering their software in a way that meets your personal sense of aesthetics. Things are only hidden in a Dockerfile to the extent that they are conveniently arranged for consumption by consumers that do not necessarily need to understand exactly how they work. This isn’t any different to any other process of software assembly that abstracts some amount of the internal complexity of its operation from the consumer; e.g., I am not in the habit of reviewing the Linux kernel source that executes on my desktop.

                          If you want to know how the Dockerfile works, you could always look inside! It seems no more or less transparent or credible than a shell script or a markdown document that executes or describes a similar set of steps.

                          1. -1

                            I build them, so I know what’s inside. You’re looking for something to be outraged at, and you find it in my words.

                            Perhaps you can defend those who write programs with meaningless variable names, and stale comments that no longer reflect the code they were next to.

                            Point your anger somewhere else. Meanwhile, who speaks up against something unintentionally vague or misleading? Or are you also going to defend syntax errors and typos next?

                            1. 1

                              I’m not angry – merely flabbergasted! How is a Dockerfile “vague or misleading”? By definition, it contains a concrete set of executable steps that set up the software in the built image.

                    2. 1

                      I hate the docker setups that are just one piece of the puzzle, where you’re expected to spend a few days writing a docker-compose file to piece together the whole thing.

                      1. 1

                        Which problems do you encounter when writing docker-compose files? I’ve mostly had the experience that the upstream Dockerfile is horrible (for example, Seafile tries to pack everything into a single image, which causes feature creep in the setup scripts), but writing docker-compose.yaml has always felt rather straightforward (besides volumes; volume management still confuses me on occasion).
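
                        For what it’s worth, a compose file for the common “app plus database” case stays small. This is a hedged sketch with made-up service and image names, not any particular project’s setup; the named volumes at the bottom are the part that tends to confuse:

                        ```yaml
                        version: "3"
                        services:
                          app:
                            image: example/app:latest    # hypothetical image
                            ports:
                              - "8080:8080"
                            depends_on:
                              - db
                          db:
                            image: postgres:10
                            environment:
                              POSTGRES_PASSWORD: example
                            volumes:
                              - db-data:/var/lib/postgresql/data

                        # named volumes are created and managed by Docker itself
                        volumes:
                          db-data:
                        ```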

                    1. 2

                      Something like this could easily be supported on Linux, either by calling .fehbg from a script or by contributing such a feature to tools like Komorebi (disclaimer: haven’t tried it out yet).

                      Also, there appears to be support for .heif via libheif1 (available from Debian Testing/Buster onwards right now), and ImageMagick also appears to have support starting with version 7.0.7-22.

                      1. 2

                        I’m still a bit worried about the difficulty of “downscaling” Kubernetes to a single node, or to a simple 3-node cluster of low-end machines.

                        GKE documentation says 25 % of the first 4 GB are reserved for GKE: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#memory_cpu

                        What is your experience regarding this?

                        Edit: Sorry Caleb, I discovered your post here just after having emailed you with the same question :-)

                        1. 4

                          Just started reading introductory docs and encountered this

                          2 GB or more of RAM per machine. Any less leaves little room for your apps.

                          So, the Kubernetes software uses 2 GB for its own needs? That’s a huge amount of memory. I remember php+mysql wikis and forums running on VMs with 128 MB without problems, including the OS and the database.

                          1. 3

                            I haven’t tested it myself, but I remember having read this in Kubernetes docs. This is what gave me cold feet…

                            I’d be surprised if a regular node always had this memory requirement. I mean, how would people build k8s Raspberry Pi clusters then?

                            1. 1

                              I’m confused too. I’m wondering if the requirement in GKE docs is not about optional features like StackDriver and Kubernetes dashboard. I haven’t had the time to test it myself. Curious if someone here knows more about this?

                            2. 3

                              This would only be for the master nodes, which are provided for free on GKE.

                              On several machines that I have, it’s more around 400m, which includes kube-proxy (reverse-proxy management for containers), flannel (network fabric) and the kubelet (container management). That can seem huge, but it offers guarantees and primitives that a php+mysql wiki could use to be easily deployable, and hopefully more resilient to underlying failures.

                              1. 1

                                I haven’t tested it myself, but I remember having read this in Kubernetes docs. This is what gave me cold feet…

                              2. 1

                                This would only be for the master nodes, which are provided for free on GKE.

                                1. 2

                                  Are you sure? The part of the doc I linked is specifically about the cluster nodes and not the master.

                                  1. 1

                                    Sorry, wrong thread… It does reserve 25%, which is a safe bet to ensure that the kube-system pods run correctly.

                              1. 3

                                 I migrated my text-file notes into a local MediaWiki instance on my laptop several years ago. That was fun at first, but I grew unhappy with the editing process being too slow. At some point I moved the contents, with the help of pandoc, into gitit (Markdown files kept in a Git repo with a MediaWiki-like interface). That allowed occasional edits from the console and proper history management (e.g. reverting or squashing edits). I was still unhappy with the inconvenience of editing text in a web browser, and increasingly missed the editing facilities of emacs running in a terminal. So I switched the setup once more.

                                I’m now using a simple Python script that uses asciidoctor (or pandoc) to render md or adoc files from a Git repo as HTML and serves that at a localhost port for FF or Chrome. It adds an invisible one-liner ‘…/page?edit’ link element at the end of each HTML page (with an accesskey=“e”) that triggers the Python script to start up a terminal window with emacs and the md/adoc source of the page I’m currently browsing. Emacs is using git-auto-commit mode for the Git repo directory. Put together, that means I can browse through my Git repo in a Wiki-like fashion and at the press of Shift+Alt+e get an emacs terminal window right into the current document where edits are auto-tracked in Git. Interaction is instant and this combines all the benefits of wiki browsing, Git and text editing through emacs for me.

                                1. 2

                                   I was running a gitit installation as a daemon on my laptop for years; I set it up initially for my software engineering studies at TU Vienna, but eventually dropped out and stopped using gitit. Gitit is/would be nice, but isn’t maintained much, notably with pandoc developing further than gitit, though there are still occasional contributions.

                                  Today I’d probably go with something that integrates with emacs, provides a nice web/mobile/kindle-view and has good search capabilities.

                                  At some point I’ve setup a blog with Ikiwiki, but I really can’t recommend this tool.

                                1. 4

                                   In case you haven’t noticed, YouTube’s ranking mechanisms have become more “fraudulent”, at least in recent months. So besides providing un-editorialized, non-black-box algorithmic rankings, this service might also offer a view of purposefully excluded content.

                                   An example of how bad it is: they’ve started (on purpose or not) removing videos from uploaders who aren’t taking part in YouTube monetization and have a Patreon link in their description. 0 1

                                   I’d expect/hope that such a practice would be challenged in EU courts as unfair competition.

                                   I wonder how “Fair Trending” implemented their technology, whether they are using official YouTube APIs, and whether Alphabet is going to kill this service once it gains traction.

                                  Their about 2 page doesn’t answer the “how do we get to the data?”.


                                   Disclaimer: this is a crosspost of my comment on Hacker News.

                                  1. 4

                                    Fair Trending uses the YouTube API. My lawyer reviewed their API ToS before I built the app to see if it would be a violation and we couldn’t find anything.

                                    1. 1

                                       Thank you for your response; it speaks to how polished this project is that you went that far with it.

                                      1. 2

                                         Thanks for the info about YouTube removing and deprioritizing smaller channels, very enlightening as to what’s going on: https://www.youtube.com/watch?v=WRB8O08PjnA

                                  1. 5

                                    I swear by org-mode. https://orgmode.org

                                    1. 2

                                      What do you do when you’re not at a computer?

                                      1. 2

                                         I’m new to the emacs crowd, but just today I installed Orgzly on my Android phone; syncing is a little bit odd, though. I’m also not sure yet whether I prefer the built-in calendar/reminders system or would rather go with some emacs <=> CalDav integration (if such a thing exists).

                                        1. 1

                                           That post seems a bit like overkill to me. I personally prefer the built-in Dropbox sync (disclaimer: only built into the Google Play version, not the F-Droid one), but people who keep their setup clean of closed code recommend Syncthing for it.

                                          1. 1

                                             You can call it overkill, but right now it’s the only way of syncing with this tool: I don’t have the Play Store, and I don’t have Dropbox either. I think Dropbox is acting in bad faith.

                                            1. 5

                                              Have you considered using Syncthing? It’s a peer to peer file synchronization utility that doesn’t rely on Google, and doesn’t store your data anywhere but your devices.

                                              1. 1

                                                Syncthing is mentioned in the thread I’ve linked to in my initial comment. I’ll still give it a try, since I haven’t considered it at all. Note: I haven’t used Syncthing in the past two years, maybe it has improved.

                                                1. 1

                                                   Syncthing was pretty terrible on Android: it was regularly out of sync, and it took my battery life from ~28 hours to ~4. Wondering if there are specific setups that make Syncthing use less CPU.

                                                  1. 1

                                                    I must have randomly stumbled into a working configuration, since my Keepass database stays pretty well-synced and my phone will usually last a day without needing charging. Sorry it doesn’t work for you, though.

                                                2. 1

                                                  I keep my org-mode files in my Nextcloud instance, and in the Android app mark all the files to be kept in sync. Orgzly auto-syncs them now, no need for Tasker or anything.

                                        1. 2

                                           Lots of things can already be done with portable sh code. typeset, for example, is quite powerful but mentioned nowhere in the repo:

                                          $ var=VaL; typeset -l lvar=$var; typeset -u uvar=$var
                                          $ echo $lvar $uvar
                                          val VAL
                                          

                                          In fact, this builtin is often wrapped by the shell itself, at least in OpenBSD’s ksh:

                                          $ alias | fgrep typeset
                                          autoload='typeset -fu'
                                          functions='typeset -f'
                                          integer='typeset -i'
                                          local=typeset
                                          
                                          1. 2

                                            typeset is not posix

                                            • posix sh has no way to do what typeset -f does
                                            • bash has no way to do what typeset -fu does in your ksh
                                            • typeset -i doesn’t really have the same effect in bash and ksh
                                            • the posix way to convert a string to lowercase/uppercase involves tr or awk
                                            • most popular shells support local, but it’s not posix either

                                            “portable sh code” != “it works on my openbsd”
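
                                             For comparison, the POSIX route for the case-conversion point is short enough, mirroring the typeset -l / typeset -u example upthread:

                                             ```shell
                                             # POSIX-portable lowercase/uppercase conversion via tr
                                             var=VaL
                                             lvar=$(printf '%s' "$var" | tr '[:upper:]' '[:lower:]')
                                             uvar=$(printf '%s' "$var" | tr '[:lower:]' '[:upper:]')
                                             echo "$lvar $uvar"
                                             ```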

                                            1. 1

                                               Note how I did not speak of POSIX; you mistook the ksh example for a general assumption. My reply to pl’s comment tells you what I meant by “portable” (admittedly, the wording was a bit misleading).

                                              1. 1

                                                what’s your definition of portable?

                                            2. 1

                                               What do you mean by portable sh code? I just tried dash and it has no typeset builtin.

                                              1. 2

                                                At least available in Bourne and Korn shell derivatives; (Debian’s) Almquist Shell does not implement that particular builtin.

                                            1. 3

                                              Does it work properly with stdin and EOFs sent with ^D?

                                              1. 6

                                                Yes.

                                                1. 1

                                                  I didn’t know this was an issue. :-) With tail -f -, hitting ctrl+d does not quit the program, but with cat - it does.
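
                                                  The difference is easy to reproduce non-interactively, with a pipe’s end-of-file standing in for ^D (GNU coreutils assumed):

                                                  ```shell
                                                  # cat exits as soon as its stdin reaches EOF:
                                                  printf 'a\nb\n' | cat -

                                                  # tail -f keeps waiting for more input even after EOF, so it
                                                  # never exits on its own; timeout(1) kills it after a second
                                                  # (exit status 124):
                                                  printf 'a\nb\n' | timeout 1 tail -f - || echo "tail was still running"
                                                  ```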

                                                1. 1

                                                  Can someone maybe explain when one should use an API framework and when to use a micro framework such as Flask or Django?

                                                  1. 6

                                                    I wouldn’t call Django a micro framework. It is more of a full-featured web framework that works well for building web applications; however, it is not specifically designed for building small-footprint APIs (although that is perfectly possible, e.g. together with django-rest-framework). Flask is a general-purpose framework, but with a smaller footprint; that is a real micro framework imho. Molten looks much like Flask, but it adds some interesting aspects like type hinting and dependency injection, and is primarily designed for building APIs.

                                                    So, in general: the more your use case leans towards a full-blown (multi-page) web app, the more likely Django is a good candidate to look at; the more you’re just building APIs, the more something like Molten could be a thing to look at. Flask is somewhere in between.

                                                    However, as all three of those frameworks can fit more or less any web-based use case, remember there are other things to consider when choosing a framework, such as the developers’ knowledge and preferences, the availability of third-party libs, popularity/community, etc.

                                                    1. 2

                                                      Yeah, Django is optimized for making it incredibly easy to create server-side, database-driven applications. It’s also great for creating APIs, but it’s still optimized for the use case of “define your data model and we’ll take care of the rest”.

                                                      For some people like myself who are web-dev newbies, that can be a huge bonus, but lots of folks want more flexibility and less overhead, and that’s where packages like this come in.

                                                  1. 3

                                                    So cool!

                                                    When reading articles such as this, the main questions I’m asking myself are “Why didn’t I try to figure this out earlier?” and “What else am I missing out on?”

                                                    I mean, I’ve read this Wikipedia section and a page by ~mascheck at some point - but this posting puts #! in executable files into a much better perspective.

                                                    1. 5

                                                      Looks really cool, there are way too few alternatives to Discourse, and I hope most developers/admins will agree that mailman or HyperKitty never managed to become a decent web application (with or without JavaScript).

                                                      A link to forum.nim-lang.org in the git repo would be nice, though. :-)

                                                      The rst-syntax example page is interesting; as we learned on April 1st here on lobste.rs, you want to scrape/mirror/resize/convert foreign image embeddings.

                                                      1. 5

                                                        Thanks for the feedback, I added a little link below the image to forum.nim-lang.org.

                                                        Indeed, this issue regarding foreign image embeddings didn’t even pop into my head. I shall make a note of it and hope nobody embeds a huge image in the meantime :)

                                                        1. 3

                                                          Sure! :-)

                                                          Ah, completely missed that the image is a link to the forum, but now it’s better - might become a longer list, once more projects are using NimForum :-).

                                                          PS: Maybe one could get you a lobsters “nim-hat”.

                                                      1. 6

                                                        Main home server:

                                                        Jukebox:

                                                        Tiny virtual servers:

                                                        • DNS (bind9)
                                                        1. 3

                                                          Thanks for the pointer to weeWX; I’d thought more of using Grafana to display weather data. Are you able to create alerts (something is moving in your flat) with Motion?

                                                          1. 3

                                                            Yes, you can tell Motion to run a command when motion starts, when motion ends, etc. I don’t use that functionality at home, but at work I use it to send an XMPP message, e.g. when somebody enters the server room and when the video is completed (including a link to the video), so I can keep track of who enters and what they do.
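
                                                            For reference, those hooks look roughly like this in motion.conf (the notify script and the URL are made up; %t and %f are Motion’s camera-id and filename specifiers):

                                                            ```
                                                            on_event_start /usr/local/bin/notify-xmpp "motion started on camera %t"
                                                            on_movie_end   /usr/local/bin/notify-xmpp "video ready: https://cams.example/%f"
                                                            ```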

                                                            I have had to fiddle a little with ignoring a part of the image that constantly flickers in the server room; I can recommend Motion, it works well.

                                                            weewx does enough that I haven’t bothered doing something with the data myself - I’ve only changed the display (colours and such) to integrate it into my website.

                                                        1. 4

                                                          Self-hosting is a long-term project for me and I’m working on it infrequently… I should probably write a blog posting at some point. I really need to start using some provisioning/automation tool… I can’t decide which ‘container’ technology I’d like to use.

                                                          Already hosting:

                                                          • Monitoring
                                                          • Music Player Daemon
                                                          • NFS (I’ve dedicated storage, which is physically separated from the application-hosting hardware)
                                                          • WireGuard as transport-layer encryption and authentication for the separate NFS exports
                                                          • WireGuard VPN
                                                          • nginx

                                                          Planned:

                                                          • mail (not decided which setup)
                                                          • radicale
                                                          • a web photo view, hopefully with rich metadata
                                                          • git-annex (I still haven’t figured out how I can have a non-bare git-annex repo that lets non-annex-aware applications access the data)
                                                          • some self hosted ‘dropbox’ alternative (not decided which tool)
                                                          • some issue tracker
                                                          • Firefox Sync Server
                                                          • XMPP/Matrix
                                                          • DNS
                                                          • Offsite and/or cloud backup (I ‘only’ got 2.5 Megabyte/s upload, so 4TB to upload will take at least three weeks)

                                                          The whole setup (three computers) constantly uses about 60 W (I’ve an energy meter installed).

                                                          The setup costs me about 30 Euros for the Internet, ~12 Euros for electricity, and 5 Euros for a server in a datacenter.

                                                          If I’d store backups on Backblaze ‘B2’, it’d cost me at least 20 Euros per month to have cloud backups ($0.005 per GB per month for storing uploaded data, and $0.01 per GB if I need to retrieve the data). I should probably not mention this in public, but another possibility would be running the Backblaze Personal Backup client in Wine (which I tried out in 2014) - but this would clearly be a violation of the terms, and you’d have to hack something together that ‘transparently’ encrypts all files in front of the Backblaze Wine client and still supports delta uploads.
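
                                                          The arithmetic behind those numbers, assuming 4 TB = 4000 GB, $0.005 per GB/month, and the full 2.5 MB/s being usable (real uploads will be slower, hence the ‘at least’):

                                                          ```shell
                                                          awk 'BEGIN {
                                                              gb = 4000                       # 4 TB expressed in GB
                                                              printf "storage: %.2f USD/month\n", gb * 0.005
                                                              printf "upload:  %.1f days\n", gb * 1000 / 2.5 / 86400
                                                          }'
                                                          ```

                                                          That comes out to 20 USD/month of storage and roughly 18.5 days of uploading at the nominal rate.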

                                                          1. 5

                                                            Beautifully-done illustrations on top of the good info.

                                                            1. 4

                                                              I’m thankful for the rich “metadata” structure of SVGs. Some (1, 2) are done in Visio:

                                                              <!-- Generated by Microsoft Visio, SVG Export StringTable.svg Page-12 -->

                                                              Others in Inkscape:

                                                              <!-- Created with Inkscape (http://www.inkscape.org/) -->

                                                              I’ve been experimenting with https://draw.io in order to avoid Visio; their FOSS model is a little bit weird (I’m not sure how open it actually is). At least you can use the file-system export (a compressed, base64-encoded XML structure) from their web interface. The HTML “export” is an embedded SVG, and SVGs might need additional manual rework in case you want to publish them.
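
                                                              Those generator comments make it easy to check where a given diagram came from; a quick sketch (the file is created here just for illustration):

                                                              ```shell
                                                              # SVG editors leave a generator comment near the top of the file
                                                              printf '<!-- Created with Inkscape (http://www.inkscape.org/) -->\n' > diagram.svg
                                                              head -n 5 diagram.svg | grep -o -i -E 'visio|inkscape'    # Inkscape
                                                              ```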

                                                              1. 2

                                                                Yeah I’d really like to know how he made the illustrations! They are great.

                                                              1. 3

                                                                I managed to check it out last night, and it appears to be working as advertised.

                                                                Key generation is super awesome; a built-in QR-code reader to transfer configurations/public keys between phone and desktop would be a great feature for semi-automated setups.

                                                                The error reporting is still a little bit weird; for example, I can’t configure 10.0.0.1/24 as Allowed IPs for a peer (error message: “Bad address”). 10.0.0.0/24 works, though, so maybe it’s just a user error.


                                                                With the WireGuard (WG) Android connectivity I can/could now:

                                                                • Stream music to my phone from my mpd server with httpd/lame configured as output (MPDroid), or just control my mpd server at home (works already)
                                                                • Access my phone via Termux/sshd (works already); sshfs over LTE works unexpectedly well, or adb via the VPN
                                                                • Do backups with Syncopoli and rsync:// instead of ssh (keyfile management in Syncopoli is confusing)
                                                                • Sync with a radicale calendar server (probably contacts/notes too?)
                                                                • Access a read-only monitoring web interface, getting alerts via a self-hosted Matrix instance?
                                                                • Report back the location of my phone (couldn’t find a tool for that yet; the Termux API examples can report the location, though, so it might be done with a Python script)

                                                                None of this requires root, I’m using CopperheadOS, which has root-access disabled.

                                                                I need to figure out how to properly prevent random apps from accessing those services. rsync:// supports secret-based authentication, so that might be good enough.
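
                                                                That rsync secret-based authentication is configured on the daemon side; a sketch (module, path, and user names are made up):

                                                                ```
                                                                # /etc/rsyncd.conf
                                                                [backup]
                                                                    path = /srv/backup/phone
                                                                    auth users = phone
                                                                    secrets file = /etc/rsyncd.secrets
                                                                    read only = false

                                                                # /etc/rsyncd.secrets -- one "user:password" per line,
                                                                # must not be world-readable
                                                                # phone:some-long-secret
                                                                ```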

                                                                Basically I’d like to avoid having each service do its own authentication/key management, and instead have one ‘global instance’ (WG) deal with encryption.

                                                                I’ve seen that Orbot supports tunneling on a per-app basis, so it might be possible to implement that for WG too.

                                                                I’m still not sure if this all makes sense, but it feels rewarding to set up, so I’m trying to push forward what is possible. Especially backups are a huge pain point on Android; I hope I’ll solve that for myself soon.

                                                                Everything could be replaced by $VPN-technology, but WG, besides Tor, is the first tool that has kept me excited for long enough.

                                                                1. 3

                                                                  Report back the location of my phone

                                                                  I’ve found OwnTracks works well for this use case. Reports back location and battery info. Downside is that MQTT brokers are a bit fiddly to configure and use.

                                                                  1. 1

                                                                    Thank you for the pointer; unfortunately they won’t provide a Google-services-free version (ticket).

                                                                    1. 1

                                                                      That’s certainly a bummer. Skimming the thread, seems to be a result of there being no free replacements for the geofencing APIs.

                                                                  2. 1

                                                                    Key generation is super awesome; a built-in QR-code reader to transfer configurations/public keys between phone and desktop would be a great feature for semi-automated setups.

                                                                    The TODO list actually has this on it. Hopefully we’ll get that implemented soon. You’re welcome to contribute too, if you’re into Android development.

                                                                    The error reporting is still a little bit weird; for example, I can’t configure 10.0.0.1/24 as Allowed IPs for a peer (error message: “Bad address”). 10.0.0.0/24 works, though, so maybe it’s just a user error.

                                                                    The error reporting is very sub-par right now indeed. We probably should have more informative error messages, rather than just bubbling up the exception message text.

                                                                    That “bad address” is coming from Android’s VPN API – 10.0.0.1/24 is not “reduced” as a route; you might have meant to type 10.0.0.1/32. Probably the app could reduce this for you, I suppose. But observe that normal Linux command line tools also don’t like unreduced routes:

                                                                    thinkpad ~ # ip r a 10.0.0.1/24 dev wlan0
                                                                    Error: Invalid prefix for given prefix length.
                                                                    thinkpad ~ # ip r a 10.0.0.0/24 dev wlan0
                                                                    thinkpad ~ # ip r a 10.0.0.1/32 dev wlan0
                                                                    
                                                                  1. 1

                                                                    Cool. I wasn’t aware of PRoot, rootless and the rootless-container project in general. Since there is no mention of fakeroot and fakechroot, do you know how this compares?

                                                                    1. 2

                                                                      fake{root,chroot} is based on LD_PRELOAD-style interception of the libc syscall wrappers. It has the advantage of not depending on the kernel’s namespace implementation, but the disadvantage of a performance penalty.

                                                                      proot, on the other hand, intercepts syscalls via ptrace(2).
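
                                                                      The LD_PRELOAD trick can be sketched in a few lines. This is only a demo of the mechanism, not fakeroot’s actual implementation; it assumes a glibc system with a C compiler installed:

                                                                      ```shell
                                                                      # write a tiny interposer that makes libc report uid 0
                                                                      printf '#include <sys/types.h>\nuid_t getuid(void){return 0;}\nuid_t geteuid(void){return 0;}\n' > fakeid.c
                                                                      cc -shared -fPIC -o fakeid.so fakeid.c

                                                                      # a dynamically linked id now believes it runs as root
                                                                      LD_PRELOAD=$PWD/fakeid.so id -u    # prints 0, not your real uid
                                                                      ```

                                                                      Statically linked programs, or anything making syscalls without going through libc, bypass this entirely, which is exactly the gap ptrace-based tools fill.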

                                                                      1. 1

                                                                        Thank you for your response, I see. So it’s not possible to run it inside a container then? fakeroot with LD_PRELOAD is a pain; you basically can’t debootstrap Jessie on Stretch because of this.

                                                                        1. 1

                                                                          I thought one of them did LD_PRELOAD interception, which is fast enough that you don’t notice the performance penalty, but doesn’t work for things (e.g. Go binaries?) that make syscalls directly rather than going through libc’s wrappers; and the other did ptrace() interception, which works on everything but makes syscalls much slower (though compilers spend a large proportion of their time doing things which aren’t syscalls, so it was like a 20% perf hit for random C programs last time I tried).

                                                                          1. 2

                                                                            Both are using LD_PRELOAD. What you are thinking of is fakeroot-ng(1), which is ptrace(2)-based.

                                                                            1. 1

                                                                              Thank you.