1. 8

    No “generic” library or framework I’ve ever seen has been able to deliver 100% re-usability. Even string libraries aren’t entirely reusable; for example, constant-time comparison is required in many security applications, but non-security applications tend to favour raw speed. Of course you could add a flag to make it more generic and re-usable.
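
    To make that concrete, here’s a minimal Python sketch of the two flavours of comparison (the naive version stands in for a speed-oriented library routine; the constant-time one is what the standard library’s hmac.compare_digest provides):

```python
import hmac

def fast_compare(a: bytes, b: bytes) -> bool:
    # Speed-oriented comparison: == bails out at the first mismatch,
    # so the running time leaks how many leading bytes matched.
    return a == b

def constant_time_compare(a: bytes, b: bytes) -> bool:
    # Security-oriented comparison: examines every byte regardless
    # of where the first mismatch occurs.
    return hmac.compare_digest(a, b)

print(fast_compare(b"secret", b"secret"))          # True
print(constant_time_compare(b"secret", b"secrex")) # False
```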

    If you keep adding flags like this, then for components that are large enough you end up with so many flags for each different kind of sub-behaviour (or so many variants of the same component) that the result becomes unwieldy to use and maintain, and performance will suffer too.

    That’s why “use the right tool for the job” is still great advice, and so is Fred Brooks’ old advice to “build one to throw away, you will anyway” when building something new.

    1. 4

      I’ve always worked toward the “guideline” that an abstraction should shoot to cover 80% of the problem, but should be very easy to “punch through” or “escape” for that last 20%

      If possible, I won’t “add a flag” to support a feature, but will instead try to write the library in a way that allows it to be disabled or skipped when needed. Suddenly the worry of the “perfect abstraction” goes away, and you are left with a library that handles most cases perfectly, and allows another lib or custom code to take over when needed.
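
      A rough Python sketch of that shape (the names and the comparison hook here are hypothetical, purely to illustrate the escape hatch):

```python
import hmac

def default_eq(a: bytes, b: bytes) -> bool:
    # The 80% case: plain, fast equality.
    return a == b

def find_token(tokens, needle, eq=default_eq):
    """Return the index of needle in tokens, comparing entries with eq."""
    for i, tok in enumerate(tokens):
        if eq(tok, needle):
            return i
    return -1

# Most callers never touch the hook:
print(find_token([b"alpha", b"beta"], b"beta"))  # 1

# Security-sensitive callers swap in their own comparison, instead of
# the library growing a constant_time=True flag:
print(find_token([b"alpha", b"beta"], b"beta", eq=hmac.compare_digest))  # 1
```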

      1. 3

        That’s a very good approach. I also like the opposite approach, which is the “100% solution” to a narrowly (but clearly!) defined problem. The Scheme SRE notation is an example of this, as is the BPF packet filter virtual machine.

        This allows you to make a tradeoff to choose whether a tool fits your needs.

        1. 1

          I’ve always worked toward the “guideline” that an abstraction should shoot to cover 80% of the problem, but should be very easy to “punch through” or “escape” for that last 20%

          I always liked Python’s convention of exposing everything (almost; I’m not sure if the environment of a closure is easily exposed), and using underscores to indicate when something should be considered “private”.

          I emulate this in Haskell by writing everything in a Foo.Internal module, then having the actual Foo module only export the “public API”.

        2. 1

          This seems like something that should be solved outside the library that deals with string manipulation. For example, in Clojure I’d write a macro that ensured that its body evaluated in a constant time. A naive example might look like:

          (defmacro constant-time [name interval args & body]
            `(defn ~name ~args
               (let [t# (.getTime (java.util.Date.))
                     result# (do ~@body)]
                 (Thread/sleep (- ~interval (- (.getTime (java.util.Date.)) t#)))
                 result#)))
          

          with that I could define a function that would evaluate the body and sleep for the remainder of the interval using it:

          (constant-time compare 1000 [& args]
             (apply = args))
          

          I think that decoupling concerns and creating composable building blocks is key to having reusable code. You end up with lots of Lego blocks that you can put together in different ways to solve problems.

          1. 6

            To me that smells like a brittle hack. On the one hand you might overestimate the time it will take, making things slower than necessary; on the other, you could underestimate it, which means you’d still have the vulnerability.

            Also, if the process or system load can be observed at a high enough granularity, it might be easy to distinguish between the time it spends actually comparing and sleeping.

            1. 1

              I specifically noted that this is a naive example. This is my whole point, though: you don’t know what the specific requirements might be for a particular situation. A library that deals with string manipulation should not be making any assumptions about timing. It’s much better to have a separate library that provides constant timing, and to wrap the string-manipulation code with it.

              1. 6

                Except in this case constant time is much more restrictive than wall-clock time. It’s actually important to touch the same number of bits and cache lines – you truly can’t do that by just adding another layer on top; it needs to be integral.

                1. 1

                  In an extreme case like this I have to agree. However, I don’t think this is representative of the general case. The majority of the time it is possible to split up concerns, and you should do that if you’re able.

                  1. 5

                    But that’s the thing. The temptation of having full generality “just around the corner” is exactly the kind of lure that draws people in (“just one more flag, we’re really almost there!”) and causes them to end up with a total mess on their hands. And this was just using a trivial text-book example you could give any freshman!

                    I have a hunch that this is also the same thing that makes ORMs so alluring. Everybody thinks they can beat the impedance mismatch, but in truth nobody can.

                    I guess the only way to truly drive this home is to implement some frameworks yourself and hit your head against the wall a few times when you truly need to stretch the limitations of the framework you wrote.

                    1. 2

                      My whole argument is that you shouldn’t build things in a monolithic fashion, though. Instead of adding one more flag, separate concerns where possible and create composable components.

                      Incidentally, that’s pretty much how the entire Clojure ecosystem works. Everything is based around small, focused libraries that solve a specific problem. I also happen to maintain a micro-framework for Clojure. The approach I take there is to make wiring explicit and let the user manage it in the way that makes sense for their project.

                      1. 3

                        Monolithic or not, code re-use is certainly a factor in the “software bloat” that everyone complains about. Software is getting larger (in bytes) and slower all around – I claim a huge portion of this is the power of abstraction and re-using components. It just isn’t possible to take the one tiny piece you care about; pull on a thread long enough and almost everything comes with it.

                        Note that I’m not really making a value judgement here, just saying there are high costs to writing everything as generically as possible.

                        1. 1

                          You definitely have a point here. On my first job I was tasked with implementing a feature in a legacy WinAPI app. It involved downloading some data via HTTP (IIRC it downloaded information on available updates for the program). Anyway, I was young and inexperienced, especially on the Windows platform. The software was mainly pure C, but a few extensions had been coded in C++.

                          So when I wrote my download code, I just used STL iostreams for the convenience of the stream operators. Thing is, mine was the first C++ code in the code base to use a template library; all the other C++ code was template-free, C-with-classes style. The size of the binary doubled for a tiny feature.

                          I rewrote the piece in C, and the results were as expected: no significant change in the size of the EXE. Looking back, it makes me shudder what I was tasked to implement and what I implemented. However, I am also not happy with the slimmed-down version of my code.

                          Nowadays the STL just isn’t a big culprit anymore, when you look at deployment strategies that ship statically-linked Go microservices inside fat Docker images onto some host.

            2. 1

              That constant-time comparison doesn’t work, because you can still measure throughput. Send enough requests that you’re CPU-bound, and you can see how far above the sleep time your average goes.

          1. 6

            I love and use Emacs every day as a text editor. Tools like org-mode and general Emacs customization are great!

            However, outside of the text-editing sphere, the Emacs implementations of things such as a shell, email, and a window manager always seem “almost there” but unfortunately not usable. This saddens me because I would love to never leave Emacs.

            That being said, things like TRAMP completely shifted my ideas on how to manage remote files, so who knows. I am optimistic about the continued progress of the emacs ecosystem.

            1. 8

              Yes, I agree! For the shell environment, the drawback of Emacs buffers becomes apparent. Most shell emulations (Emacs has several) work fine as long as the executed programs do not produce much text, like cat-ing a large file. When that happens, the shell becomes sluggish or freezes up, which in turn increases the cognitive burden, i.e. “May I execute this line or will this stop my workflow?” This is a major reason why I do not use the shell within Emacs. In general st feels much more responsive than Emacs, and that saddens me.

              For mail, I simply do not get enough of it to consider the elaborate mail configurations necessary. Mostly I just do IMAP searches to find a message I’m looking for, and that works well enough for me. I still find the approach with offline mailboxes quite nice, though there are still some rough corners.

              As far as I understand it, when exwm is used the window manager will freeze up if Emacs hangs, and that is something I do not want to experience. Hence I’ve tried to make Emacs play nicer with the window manager by mostly opening new Emacs frames instead of using the internally managed windows. I’m satisfied with that setup.

              TRAMP is almost there. I wish it had a mosh-like connection to the remote server, but I understand that this is actually quite hard to implement. Still, ssh editing via TRAMP works quite nicely, especially once you configure ssh aliases and keys properly.

              1. 4

                As a heavy Emacs-in-terminal user I’m pretty happy with the ability to just bg Emacs and run cat and less when needed. And having a terminal multiplexer helps too, of course.

                But I realize that if you’re in a windowing environment having everything in Emacs becomes more desirable.

                As an aside, isn’t a “normal” terminal emulator like rxvt already much faster than Emacs? What does st bring to the table?

                1. 3

                  bg

                  May I ask how you put an Emacs (in terminal mode, i.e. emacs -nw) in the background? I am running Emacs with a Spacemacs + evil configuration (mostly for org-mode), and C-z completely messes up the state: the key bindings stop working as usual, but Emacs isn’t put in the background. Maybe it’s Spacemacs’ fault. Just wondering.

                  1. 2

                    I use vanilla emacs, running under tmux. I just hit Ctrl-Z and it’s in the background, visible in the output of jobs. fg brings it back.

                    I think it’s your specific configuration in this case.

                    1. 1

                      Thank you! Then indeed it’s probably the spacemacs configuration in the terminal mode. Will have to look there.

                      1. 3

                        Ctrl-z is the default toggle key for evil. You can set evil-toggle-key to some other shortcut:

                        https://askubuntu.com/questions/99160/how-to-remap-emacs-evil-mode-toggle-key-from-ctrl-z

                        1. 1

                          Many thanks! It helped indeed and I learned something.

                          I find it so strange that Ctrl-Z was chosen for this toggle, since that is the combination used in terminals to send programs to the background. Maybe there are not many people using Emacs in the terminal with evil mode.

                          1. 1

                            The dude in the answers who modified the source to fix this really doesn’t understand the Emacs mindset ;)

                    2. 3

                      Yeah, I prefer the window environment, especially for writing TeX documents and using pdf-tools to view them. Most of the time I have a terminal around somewhere, so I use both simultaneously. For example, I have three windows open: the TeX code in one Emacs frame, the PDF in another, and then the terminal that runs latexmk -pvc.

                      As an aside, isn’t a “normal” terminal emulator like rxvt already much faster than Emacs? What does st bring to the table?

                      Yes, I used urxvt before but switched to st at some point. The differences between those two are minor compared to a shell inside Emacs. The blog post by Dan Luu showed that st performed quite well, and it further highlights the point about the throughput of the Emacs shells. But yeah, the preference for st is mostly personal.

                      1. 2

                        Alright, that’s giving me LaTeX flashbacks from uni, I know just what you mean!

                    3. 1

                      Most shell emulations (Emacs has several) work fine as long as the executed programs do not produce much text, like cat-ing a large file. When that happens, the shell becomes sluggish or freezes up, which in turn increases the cognitive burden, i.e. “May I execute this line or will this stop my workflow?” This is a major reason why I do not use the shell within Emacs. In general st feels much more responsive than Emacs and that saddens me.

                      I’ve found it’s long lines that cause Emacs to freeze. I tried working around this by having a comint filter insert newlines every 1000 characters, which worked but with really long lines that filter itself would slow down Emacs. One day I got fed up, and now I pipe the output of bash through a hacked version of GNU fold to do this newline insertion more efficiently. Unfortunately bash behaves differently when part of a pipe, so I use expect to trick it into thinking it’s not. Convoluted, but WorksForMe(TM)!

                      (The code for this is in the fold.c and wrappedShell files at http://chriswarbo.net/git/warbo-utilities/git/branches/master ).

                    4. 2

                      However, outside of the text-editing sphere, the Emacs implementations of things such as a shell, email, and a window manager always seem “almost there” but unfortunately not usable. This saddens me because I would love to never leave Emacs.

                      Shell use depends; as @jnb mentions, for a lot of text it’s cumbersome. But especially with eshell, if you alias find-file and find-file-other-window (e.g. ff and ffo), then you get something you can get very used to, very quickly.

                      Maybe it’s not universal, but I’ve been using Gnus for a while now, and I just can’t change to anything else ^^. Integration to org-mode is great, the only thing that’s lacking imo is good integrated search with IMAP.

                      Honestly, I can’t say anything about window managers. I use Mate, and it works.

                      1. 1

                        The search in Gnus and various other quirks (like locking up sometimes when getting new mail) caused me to finally switch to notmuch recently. I miss some of the splitting power, but notmuch gets enough of what I need to be content. The search in notmuch is really good, although it has a potentially serious hindrance, so I can’t recommend it without reservations.

                        find-file from eshell is why I’ve been making a serious effort to try it out. I also implemented a /dev/log virtual target (M-x describe-variable <RET> eshell-virtual-targets) so I could redirect output to a new buffer easily.

                      2. 2

                        Regarding the shell: I also had shell issues, but now I use the shell exclusively in Emacs. I work over ssh/tmux into a remote machine and only use the Emacs term. I made a little ansi-term wrapper that provides the benefits of eshell (well, the scrolling, yanking, etc.) but still uses ansi-term, so it can run full-screen programs like htop. I’ve been using it for years now. Might be worth checking out.

                        plug: https://github.com/adamrt/sane-term

                        1. 1

                          Oh my God. Not only is that beautiful and perfectly suited to what I was aiming to do, it also solves a couple of tangential problems I had with the section about loading the environment variables from .profile. Thank you so much!

                          1. 1

                            Definitely will! I always run into issues with curses programs in Emacs shell modes, which is the only thing that keeps me from using the Emacs shell exclusively.

                        1. 2

                          Quite useful! I always use -e, but have avoided -u since my scripts tend to provide lots of optional env vars; I’m not too familiar with the many cryptic sigils that bash allows inside ${}, but I’ll try to remember :- (mnemonic “that Prolog thing”) for giving default values so that I can start using -u by default.

                          1. 1

                            Looks good! I really like zippers on lists, but still haven’t internalised them for nested data structures :(

                            I’m currently writing JSON-to-JSON commands which need to avoid holding whole objects/arrays in memory (basically “mapping” over a few levels of nested objects, e.g. summing the array in {"x1":{"x2":[1,2,3]}} to get {"x1":{"x2":6}}). I’m currently doing this in a hacky way by manually reading one Char at a time from stdin, rather than using aeson’s parser. Do you think Waargonaut might be suitable for such streaming usage? (Note that one consequence of this requirement is that invalid input can cause parse errors after some output has already been given)

                            1. 2

                              There is work being done on the hw-json package that would enable streaming, but it’s still early days. I don’t support “write” using the succinct data structure yet, only “read”, although that is something I am working on.

                              Until I have that or streaming working, I would suggest having a look at lens-aeson. That would let you have a ByteString -> ByteString function where you can lens down into the JSON and change values without having to decode the entire thing into memory.

                              For your example, you would end up with something like:

                              inputByteString & key "x1" . key "x2" . _Array %~ sum
                              

                              Doing that from memory so take it with a sack of salt.

                              Waargonaut will have this functionality too, just at the moment aeson has me beat by 113 contributors and ~7 years of effort. :D

                              1. 1

                                Ah, I didn’t know lens-aeson (or GHC) was smart enough to fuse producers and consumers like that. I’ll give it a go; if it works it’ll be much nicer than my current pile of IO () actions (actually m () where m is IO in main and State in tests).

                                1. 3

                                  My tenuous understanding is that an immense amount of work has been put in to make the ByteString operations fuse and be fast. Commensurately, the majority of lens functionality boils down to lambdas and fmap, which GHC is very, very good at optimising. So the two sort of collide, and then we have nice victories like that. :)

                            1. 1

                              I played around with Maude many years ago; I never used it for anything serious, but did write an implementation of the BitBitJump language/machine.

                              1. 0

                                I like the idea that canonicalization moves code closer to its hylaean form.

                                1. 2

                                  Chaitin uses the phrase “elegant” to refer to the shortest possible form of a program (although practically speaking, I imagine such programs would look to a human like line noise ;) )

                                1. 2

                                  I use mu4e in Emacs. I used to use Gnus, but it’s slow and Emacs’s lack of concurrency causes it to freeze the UI when working. Mu4e offloads querying, etc. to a separate commandline tool, so Emacs remains responsive.

                                  1. 1

                                    Mu4e for me as well. I wrote something about my setup a few years ago, and it’s still more or less the same.

                                  1. 2

                                    what laptop do you use?

                                    I use an IBM X60s thinkpad, refurbished by GlugLug (now MiniFree) with LibreBoot.

                                    What drew you to it?

                                    It was the first laptop to get FSF’s “Respects Your Freedom” certification. I bought it as soon as it was announced. I also find the 12” screen to be just right for a portable machine; it’s larger, but lower resolution, than the OLPC XO-1 I was using previously, and not as cumbersome as the 15” machine I had a decade ago.

                                    1. 7
                                      1. pushd / popd vs ‘cd -’

                                      A word of caution on the use of pushd/popd in scripts: keep their uses close together or you can get very lost. I prefer using subshells for this sort of thing since, when the subshell is finished, the environment (including the current working directory) is restored.

                                      1. source vs ‘.’

                                      One thing not mentioned here is how source will find things. From the reference manual:

                                      If filename does not contain a slash, the PATH variable is used to find filename. When Bash is not in POSIX mode, the current directory is searched if filename is not found in $PATH.

                                      It’s generally a good idea to always use a full or relative path to a file being sourced (that is, something with a slash in it) or you could be in for some real surprises.

                                      1. 4

                                        […] I prefer using subshells for this sort of thing […]

                                         I’d usually go with putting the cd calls into a shell function, rather than spawning a subshell.

                                         Mostly because, for my taste, that’s easier to read, reason about, and test.

                                        modify files
                                        create directory
                                        ( cd directory
                                        run something
                                        )
                                        proceed in original pwd
                                        

                                        vs.

                                        run_something() {
                                          [ -d "$1" ] || _error "dir missing $1"
                                          cd "$1"
                                          run something
                                        }
                                        
                                        modify files
                                        create directory
                                        run_something directory
                                        proceed in original pwd
                                        

                                        I guess it might also help with trap-statements.

                                        1. 5

                                          Hrm, I don’t think that will work.

                                          #!/bin/bash 
                                          
                                          run_something() {
                                              cd tmp
                                          }
                                          
                                          echo "${BASH_VERSION[*]}"
                                          echo "Before: " $(pwd)
                                          run_something
                                          echo "After: " $(pwd)
                                          

                                          When I run it:

                                          % ./tmp/t.sh 
                                          4.4.19(1)-release
                                          Before:  /home/woz
                                          After:  /home/woz/tmp
                                          
                                          1. 4

                                            Oh, wow - thank you for checking. Can’t reproduce what I gave as an earlier example either.

                                             Sorry for the misinformation; I can’t check my scripts at my old employer anymore. Maybe I wrapped it in a function and still used a subshell (which contradicts my criticism).

                                             I guess I’d then go with using a subshell inside a function, but that still doesn’t make my earlier statement more correct.

                                            #!/bin/bash 
                                            
                                            run_something() {
                                                (
                                                    cd tmp
                                                )
                                            }
                                            
                                            echo "${BASH_VERSION[*]}"
                                            echo "Before: " $(pwd)
                                            run_something
                                            echo "After: " $(pwd)
                                            
                                        2. 4

                                          A word of caution on the use of pushd/popd in scripts: keep their uses close together or you can get very lost.

                                          I tend to favour push/pop, but I also use indentation to help match them up, e.g.

                                          mkdir foo
                                          pushd foo
                                            bar baz quux
                                          popd
                                          
                                        1. 16

                                          This pleases me, since it’s exactly what the Web was made for:

                                          • Scratching an itch, without having to ask for anyone’s permission
                                          • A “user agent” being given a task to perform, and going off to perform it on behalf of the user
                                          • Solving problems using information aggregation/retrieval services (torrent search engines)
                                          • A “mash up” of different services (torrent searches and subtitle archives)
                                          • Once the data is obtained, letting the user choose how to process it (save or stream, choice of players, etc.)
                                          • Sensible defaults, with the ability to override, or even hack on the code to better suit the user’s goals

                                          Compare this to the prevailing view of the Web today:

                                          • No “user agents” other than browsers and search crawlers
                                          • No browsers other than Chrome, usually Firefox and IE/Edge, perhaps Safari
                                          • No search crawlers other than Google’s
                                          • Information cannot be accessed without permission (e.g. API keys)
                                          • Information is hoarded in silos, allowing “mashups” might give ‘competitors’ an advantage
                                          • Users must retrieve information manually, since that way they’ll see the accompanying advertisements
                                          • Pages don’t serve data, they provide “apps”
                                          • Pages are blank unless their Javascript is run (sometimes they show an animated GIF ‘spinner’, misleading users into thinking that something is happening)
                                          • External/user-provided processing is discouraged, since it reduces “engagement” with the app
                                          1. 1

                                            Unfortunately we are living in the real world, where things have to be commercially viable.

                                            1. 12

                                              I think “have to” is rather strong. Lots of good stuff has come from personal projects, volunteer efforts, charitable organisations, industry bodies, governmental departments, etc. Is Lobste.rs “commercially viable”, or is it “not too costly”?

                                              Plus commercial viability doesn’t have to be at odds with the original ideas for the Web.

                                              1. 3

                                                Not all things.

                                                1. 3

                                                  Some things, sure.

                                                  If you push a great many services out to the edge, like streaming and filesharing and searching, all of a sudden the bar for commercial viability drops a lot lower than you’d expect.

                                              1. 21

                                                That’s a good question! Here is a quick braindump, happy to provide more information on all of these points.

                                                Basic physical needs

                                                IQ and focus are affected by these things:

                                                • get enough (~8h) sleep every day
                                                • stay hydrated
                                                • exercise cores / cardio a bit
                                                • (personal) meditation to improve focus / self-awareness
                                                • (if possible) find a good work environment
                                                Discipline

                                                Creating good automatisms allows you to go faster and not break out of the flow:

                                                • comfortable dev environment, that’s very personal
                                                • logical git commits
                                                • do one thing at a time; multi-tasking gives the impression of work but is very inefficient.
                                                • use a binary search approach for debugging
                                                • learn to say no nicely (some people try to push their work onto you)
                                                • learn to create focus times in the day with no interruptions
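
                                                Since the “binary search approach for debugging” point is easy to misread, here is a small Python sketch of the idea (git bisect automates exactly this over commit history; the revision list and predicate below are made up):

```python
def first_bad(revisions, is_broken):
    """Return the first revision where is_broken holds, assuming all
    good revisions come strictly before all bad ones."""
    lo, hi = 0, len(revisions) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_broken(revisions[mid]):
            hi = mid          # the bug is at mid or earlier
        else:
            lo = mid + 1      # the bug was introduced after mid
    return revisions[lo]

# 16 revisions, bug introduced in revision 11: found in ~4 checks
# instead of up to 16 with a linear scan.
revs = list(range(1, 17))
print(first_bad(revs, lambda r: r >= 11))  # 11
```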
                                                Knowledge

                                                Learn how things work to be able to think on first principles. StackOverflow doesn’t have answers for everything.

                                                1. 5

                                                  This is a great post and the advice here is greatly underrated in our industry. The difference in my quality of work on a day where I’ve had 8 hours of restful sleep vs. a day where I had 6 hours of sleep and am dehydrated, or have a lingering cold, or something similar is dramatically more than you’d expect. Everyone sort of accepts that if you have a migraine, or the flu, your work will suffer. But even the littler things make a big difference when you get to the (for me anyway) very high-level intellectual utilization that programming demands.

                                                  As a process thing, whenever possible I like to create a personal branch and make a series of small commits as I go, knowing that I will squash them into more logical groupings before merging my work. This lets me experiment with a direction reversibly without forcing my peers to see 25 commits over a single workday.

                                                  I’m also a big fan of carving out uninterrupted blocks of time (no meetings, chitchat, chores, etc.) but as I work fully remote this is likely both easier for me as well as more desirable to me, assuming people to some extent self-select into fully remote work.

                                                  1. 1

                                                    Thanks! If only I could post this to myself 10 years ago :)

                                                  2. 2

                                                    use a binary search approach for debugging

                                                    What does this mean?

                                                    1. 5

                                                      I assume somewhat like git bisect, e.g. we know that version/commit 30 has a bug, we know that version 10 didn’t. Rather than checking version 29, then 28, etc. instead check version 20 (preferably automatically, with a regression test). If it works, check version 25, and so on. This can make narrowing down issues much easier, especially when (a) every commit can be checked (e.g. it always compiles successfully, doesn’t crash during initialisation, etc.; this can be enforced with things like pre-commit hooks) and (b) the commit history has lots of small commits, rather than e.g. squashing large changesets into a single diff.
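                                                      To make the halving concrete, here’s a minimal sketch in Python (the names `first_bad` and `is_bad` are made up for illustration, not from git or any real tool):

```python
def first_bad(good, bad, is_bad):
    """Return the first version in (good, bad] where is_bad holds,
    assuming is_bad is False at `good`, True at `bad`, and monotonic."""
    while bad - good > 1:
        mid = (good + bad) // 2
        if is_bad(mid):
            bad = mid   # the bug is at mid or earlier
        else:
            good = mid  # the bug was introduced after mid
    return bad

# Suppose the bug appeared in version 23: found in ~5 probes instead of 20
print(first_bad(10, 30, lambda v: v >= 23))  # → 23
```

                                                      With a regression test playing the role of `is_bad`, this is exactly the loop that `git bisect run` automates.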

                                                      Note that the same approach can be taken when printf debugging, or stepping in a debugger: check the state near the start/input part of the code, check the state near the end/output part, and check the state roughly half way through the processing. This narrows down the problem to one half of the program/trace. Add checks roughly half way through the dodgy half, and iterate until the problematic step is found. This can be more efficient than e.g. stepping through each statement one after another; or reading page after page of logs.

                                                      1. 4

                                                        I assume somewhat like git bisect

                                                        Ah, I wouldn’t exactly call that debugging. Debugging, to me, is the step that comes after finding the problem commit, if there is such a thing.

                                                        Note that the same approach can be taken when printf debugging, or stepping in a debugger

                                                        Mmm-hmm. All fine until it’s race conditions you’re debugging. My week at work..

                                                        1. 3

                                                          Mmm-hmm. All fine until it’s race conditions you’re debugging. My week at work..

                                                          Yeah, consistently reproducing an error is an important prerequisite, but sometimes the most difficult part!

                                                          1. 1

                                                            Ah, I wouldn’t exactly call that debugging. Debugging, to me, is the step that comes after finding the problem commit, if there is such a thing.

                                                            Even then, you might still not “get” what’s happening. In that case, if you can figure out your input, the expected output, and what the output actually is, you have a good starting point.

                                                            From there, if you are judicious about choosing what part of the input to vary, you can quickly eliminate classes of problems. “Oh, the username probably doesn’t matter here because this shows up all the time”, “ah turns out that it’s this inner function that is returning problematic data, so stuff after it might not be the cause”

                                                            Draw a circle around the entire state of your commit, and then find ways to cut off huge chunks of it until you’re certain that the bug is in what’s left. If you’re not rigorous about this it doesn’t work well, but if you figure out “logical certainties” you’ll likely quickly isolate the bug

                                                            (This is part of why TDD works well in debugging. You found a snippet of code that is misbehaving. You dig deeper until you find a minimal problematic snippet. A quick refactor to place that snippet in its own function, and now you have a test case!)

                                                            1. 1

                                                              Yeah, I keep thinking race conditions. Where you could draw multiple circles that all converge at different points, all points being code that is obviously correct. The commit is right, it’s just probabilistically exposing the presence of a bug somewhere else in the code base. And that’s why TDD doesn’t work, because the bug isn’t in the minimal problematic snippet.

                                                              1. 1

                                                                I’m not real sure what your race conditions looked like, but you could maybe synchronize everything in a program (wrap mutex, log access, &c), or synchronize nothing, or something in between. That would be sort of binary searchable, or at least div-&-conquerable.

                                                                You’re not writing Go, by chance, are you?

                                                                1. 1

                                                                  C

                                                        2. 5

                                                          Not sure this is what OP meant, but it reminds me of the approach I take to debugging. Your goal is to come up with a hypothesis that cuts the search space in half. Think through what could be causing this bug and try to falsify each possibility. This talk by Stuart Halloway does a good job explaining the approach. https://www.youtube.com/watch?v=FihU5JxmnBg

                                                          1. 2

                                                            Binary search is an algorithm that allows you to find the solution in O(log(N)) attempts. It’s the same algorithm used by git-bisect but it can be used in the real world as well.

                                                            It’s part of the discipline because it forces you to make assumptions and probe the extremities first instead of trying things randomly. Usually it’s possible to make assumptions about the bug location. Split in the middle with some debug statement, find which side is broken. Loop.

                                                        1. 6

                                                          I used to learn more about computer science in my spare time, reading papers, playing with niche and research systems (cutting edge and ancient), etc. especially w.r.t. programming language theory.

                                                          This led to me getting a scholarship and quitting my Web dev job to do a PhD; hence it’s now not a hobby, either because I’m getting paid to do it, or because none of my time is really “spare” now (depending on how cynical I’m feeling ;) ).

                                                          I also like cycling, not as a sport but just getting outside and exploring (e.g. “pootling”).

                                                          I’m into heavy metal, so I keep an eye out for local gigs and go to a few festivals every year (it’s a great way to catch up with old friends who’ve spread out over the country/world).

                                                          I’ve got quite into real ale/craft beer and cider too, but that’s largely a coping mechanism for being British, where alcohol is a large component of social life, but the ‘standard’ drinks are horrible lagers and fizzy pop cider. I’ve tried making home brew a couple of times too, but my chocolate stout tasted more like farm runoff than a delicious dessert :(

                                                          1. 4

                                                            A while ago I was looking for a tool to benchmark a project across commits and draw pretty graphs (like in this article) and it seemed like (a) almost nobody does this and (b) those who do seem to mostly roll their own tools.

                                                            There are a handful of more general-purpose implementations, but the nicest I came across was Airspeed Velocity. As the name implies, it was originally built for Python, but interestingly it allows plugins to manage the execution environment; this is meant to allow users to choose between virtualenv or anaconda, but I hijacked it to use Nix. This makes it essentially language-agnostic: the benchmarks are still written as Python scripts, but since they’re run in an arbitrary environment (defined by Nix), we can just call out to a subprocess (written in whatever language, with whatever dependencies) to do the actual work. So far I’ve used it to benchmark projects written in Racket and Haskell :)

                                                            1. 3

                                                              Is Nix actually doing bit for bit binary reproducible builds?

                                                              Edit: like reproducible-builds.org

                                                              1. 10

                                                                In many cases, yes we do produce bit-for-bit reproducible builds. A substantial amount of Nixpkgs produces bit-for-bit reproducible builds. Nix has built-in tools to help verify that, too, like --check and the repeat option: https://nixos.org/nix/manual/#conf-repeat

                                                                1. 2

                                                                  Nix “derivations” (roughly, build products) are categorised as “fixed output” or not. All Nix derivations are identified using a hash, based on their inputs, but fixed output derivations also have another hash hard-coded in their metadata. After the build has finished, the output is checked against this hash and Nix will abort if they don’t match. This is mostly used for downloading/checking out source code, to prevent problems caused by URLs serving up different files. Fixed output derivations aren’t guaranteed to be reproducible (e.g. if the URL’s content has changed), but it does guarantee that if it succeeds then the output will be identical (modulo hash collisions).

                                                                  Derivations which aren’t fixed-output don’t have their output checked in this way, so reproducibility mostly comes down to trusting the build scripts. As @grahamc says, Nix has facilities to compare the output of two runs of the same build and see if they’re the same.

                                                                  On a related note there is also a feature for “intentional” builds: these take normal build outputs, identified by the hashes of their inputs, and allow them to be referred to by the hash of their outputs.

                                                                1. 1

                                                                  I like the idea of TAP, but it doesn’t actually enable very much. As far as I can tell the only tooling built on it is a Jenkins plugin, the Smolder visualisation and some commands which draw emoji on the terminal. Plus these seem to break if the total number of tests isn’t known up-front. I was expecting to find a wealth of tools for analysing and exploring datasets of test results, but AFAICT there aren’t any :(

                                                                  1. 3

                                                                    I think a more general/abstract way of saying this is that indented strings don’t compose (easily). For example, given strings a, b and c, how can we construct an if/then/else with condition c and branches a and b? Concatenating the strings with some keywords and newlines will break if a and b contain their own indented lines (e.g. their own if branches created by concatenating). We could alter the strings to find/replace some extra indentation after any newlines, but editing in-band metadata like that is dangerous; for example, it may corrupt string literals. To do so safely we’d need to actually parse the contents, which might be difficult considering they’re only fragments, the parse might be ambiguous, etc. Since we’re doing metaprogramming, the strings may also contain artefacts for some metalanguage (e.g. CPP macros) which will break our parser.
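                                                                    A toy Python illustration of that corruption (my own example, using the standard library’s `textwrap.indent`): re-indenting after every newline also edits the newlines inside the fragment’s own string literal, changing what the program prints:

```python
import textwrap

# A code fragment whose multi-line string literal is part of its *content*:
branch = 'print("""two\nlines""")'

# Compose it into an if-branch by blindly indenting after each newline...
composed = "if c:\n" + textwrap.indent(branch, "    ")
print(composed)

# ...the literal now contains "two\n    lines": the in-band edit corrupted it
assert '    lines"""' in composed
```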

                                                                    A more general solution is to use a more complicated datastructure which tracks indentation separately to the content, e.g. using pairs of (indentation_level, list_of_lines_or_blocks). This can work, but is more complex and requires programmers to write libraries for doing this in every programming language (most languages provide string concatenation by default). It also requires a rendering step to produce the final string (e.g. for writing to a file), which requires us to keep track of two different representations, and has the same parsing difficulties if we want to use existing strings of code in our metaprogramming (either read from a file, or because some other metaprogramming step rendered too early).

                                                                    Note that these metaprogramming problems are real. They may not be felt directly by day to day users of a language, and there is some merit to claims that metaprogramming should be avoided to prevent confusion. However, there’s no avoiding this in tooling, for example any time we want to give an informative error message or suggestion (linters, compilers/transpilers/interpreters, etc.), generate code from a template (doctest, formatters, refactoring tools, static/dynamic analysis, etc.). The harder it is to make tooling, the less tooling there will be; this affects all language users indirectly.

                                                                    At this point I also feel the need to point out the benefits of distinguishing between reading (determining a program’s structure) and parsing (determining a program’s language constructs). Lisps make this distinction, which makes tooling so easy that we can choose whether we prefer writing with delimiters or with indentation; we can also choose whether to use math notation, prefix notation or infix notation; or a mixture of all the above, and use simple tools to convert between them as desired.

                                                                    1. 13

                                                                      You can’t call something an “objective argument” when your only evidence is “experience”. At the very least do a proper controlled study.

                                                                      1. 7

                                                                        I think I can if my definition is right. I go by popular usage of the words:

                                                                        Subjective: Something that’s in one’s own mind whose form or reasoning outsiders can’t see. Maybe also something derived from that.

                                                                        Objective: Something in the real world that’s measurable, where we know we’re talking about the same thing.

                                                                        Empirical: Builds on objective claims adding things like experiments.

                                                                        The linked statement would be objective because it was based on real-world measurements we can all understand of language style and problems. It’s not scientific or empirical since there were no controlled studies and replication to be sure there was a causal link. This objective claim does get people thinking, though. It can also be a starting point for empirical studies and claims.

                                                                        That was my reasoning anyway.

                                                                        1. 5

                                                                          “Popular usage” is only an objective criterion if we accept your definition of “objective”. Which makes it a circular definition. “Objective” actually means “existing independent of or external to the mind” or “uninfluenced by emotions or personal prejudices”, as per Farlex. The argument you present sounds very much like the product of a particular mind, and very little effort has been made to examine it as part of a broader context.

                                                                          Also, science isn’t rationalizing your claims with evidence. That’s just debate. Science is only unbiased when one starts from a clean slate and lets the evidence found speak for itself.

                                                                          Finally, regardless of your intentions, these kinds of posts will always be seen as flamebait. This is something many people have a subjective view on, to little substantive end. But I appreciate that much of the response has been focused on the hard claims being made, rather than personal beliefs—even if it is ultimately personal beliefs motivating that response.

                                                                        2. 1

                                                                          I think it’s fine to say it’s an objective argument, but that doesn’t imply that it’s the most important argument. I think it’s a legitimate bullet point on a list of pros/cons, but a comprehensive list will have plenty of other bullet points too.

                                                                        1. 2

                                                                          I was shouting “de Bruijn codes!” at the screen while watching; then he explained de Bruijn codes, and it made me happy :)

                                                                          1. 1

                                                                            As a genuine question from someone who hasn’t used procedural programming productively before, what would be the benefits of a procedural language to justify its choice?

                                                                            1. 3

                                                                              I would say less conceptual/cognitive overhead, but I don’t know if that’s something that can be said of this language as a whole, as I have no experience with it.

                                                                              By that I mean something like: I have a rough idea of what code I want from the compiler, how much mental gymnastics is required to arrive at the source-level code that I need to write?

                                                                              I would imagine that’s an important consideration in a language designed for game development.

                                                                              1. 4

                                                                                Yeah, it makes perfect sense.

                                                                                To dumb down Kit’s value prop, it’s a “Better C, for people who need C (characteristics)”.

                                                                              2. 2

                                                                                On top of alva’s comment, they compile fast and are easy to optimize, too.

                                                                                1. 1

                                                                                  I looked this up for some other article on lobste.rs. I found wikipedia to have a nice summary

                                                                                  https://en.wikipedia.org/wiki/Procedural_programming

                                                                                  Imperative programming

                                                                                  Procedural programming languages are also imperative languages, because they make explicit references to the state of the execution environment. This could be anything from variables (which may correspond to processor registers) to something like the position of the “turtle” in the Logo programming language.

                                                                                  Often, the terms “procedural programming” and “imperative programming” are used synonymously. However, procedural programming relies heavily on blocks and scope, whereas imperative programming as a whole may or may not have such features. As such, procedural languages generally use reserved words that act on blocks, such as if, while, and for, to implement control flow, whereas non-structured imperative languages use goto statements and branch tables for the same purpose.

                                                                                  My understanding is that if you use say C you are basically using procedural language paradigms.

                                                                                  1. 2

                                                                                    Interesting. So basically what was registering in my mind as imperative programming is actually procedural.

                                                                                    Good to know. Thanks for looking it up!

                                                                                    1. 2

                                                                                      I take “imperative” to mean based on instructions/statements, e.g. “do this, then do that, …”. An “instruction” is something which changes the state of the world, i.e. there is a concept of “before” and “after”. Lots of paradigms can sit under this umbrella, e.g. machine code (lists of machine instructions), procedural programming like C (where a “procedure”/subroutine is a high-level instruction, made from other instructions), and OOP (where method calls/message sends are the instructions).

                                                                                      Examples of non-imperative languages include functional programming (where programs consist of definitions, which (unlike assignments) don’t impose a notion of “before” and “after”) and logic programming (similar to functional programming, but definitions are more flexible and can rely on non-deterministic search to satisfy, rather than explicit substitution)
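                                                                                      As a toy contrast in Python (my own example, not from the thread): the first version is a sequence of state changes, with a “before” and “after” at every step; the second is a single definition, saying the sum of a list *is* its head plus the sum of its tail:

```python
# Imperative: instructions that update `total` step by step
def sum_imperative(xs):
    total = 0
    for x in xs:
        total = total + x  # state changes at each step
    return total

# Definitional: no mutation, no notion of before/after
def sum_definitional(xs):
    return 0 if not xs else xs[0] + sum_definitional(xs[1:])

print(sum_imperative([1, 2, 3]), sum_definitional([1, 2, 3]))  # → 6 6
```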

                                                                                      1. 1

                                                                                        If functional programs don’t have a notion of before and after, how do you code an algorithm? Explain Newton’s method as a definition.

                                                                                          1. 1

                                                                                            both recursion and iteration say “do this, then do that, then do … “. And “let” appears to be assignment or naming so that AFTER the let operation a symbol has a meaning it did not have before.

                                                                                             // open some namespaces
                                                                                             open System
                                                                                             open Drawing
                                                                                             open Windows.Forms
                                                                                             open Math
                                                                                             open FlyingFrog
                                                                                            

                                                                                            changes program state so that certain operations become visible AFTER those lines are executed, etc.

                                                                                            1. 3

                                                                                              It is common for computation to not actually take place until the result is immediately needed. Your code may describe a complicated series of maps and filters and manipulations and only ever execute enough to get one result. Your code looks like it describes a strict order the code executes in, but the execution of it may take a drastically different path.

                                                                                              A pure functional programming language wouldn’t be changing program state, but passing new state along probably recursively.

                                                                                              1. 1

                                                                                                but you don’t really have a contrast with “imperative” languages - you still specify an algorithm. In fact, algorithms are all over traditional pure mathematics too. Generally the “state” being changed is on a piece of paper or in the head of the reader, but …

                                                                                              2. 1

                                                                                                so that AFTER the let operation

                                                                                                If we assume that let is an operation, then there is certainly a before and an after.

                                                                                                 That’s not the only way to think about let though. We might, for example, treat it as a form of linguistic shorthand, treating:

                                                                                                let x = somethingVeryLongWindedInvolving y in x * x
                                                                                                

                                                                                                as a shorthand for:

                                                                                                (somethingVeryLongWindedInvolving y) * (somethingVeryLongWindedInvolving y)
                                                                                                

                                                                                                There is no inherent notion of before/after in such an interpretation. Even if our language implements let by literally expanding/elaborating the first form into the second, that can take place at compile time, alongside a whole host of other transformations/optimisations; hence even if we treat the expansion as a change of state, it wouldn’t actually occur at run time, and thus does not affect the execution of any algorithm by our program.

                                                                                                Note that we might, naively, think that the parentheses are imposing a notion of time: that the above tells us to calculate somethingVeryLongWindedInvolving y first, and then do the multiplication on the results. Call-by-name evaluation shows that this doesn’t have to be the case! It’s perfectly alright to do the multiplication first, and only evaluate the arguments if/when they’re needed; this is actually preferable in some cases (like the K combinator).
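                                                                                                 Python is eager, but the idea can be mimicked with explicit thunks (a sketch of call-by-name, not how lazy languages are actually implemented): the K combinator below returns its first argument’s value and never forces its second, so even a crashing computation is harmless to pass in:

```python
# K combinator over thunks: force the first argument, ignore the second
def k(x_thunk, y_thunk):
    return x_thunk()

def boom():
    raise RuntimeError("never evaluated")

# boom is passed but never called, so no exception is raised
print(k(lambda: 42, boom))  # → 42
```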

                                                                                            2. 2

                                                                                              If functional programs don’t have a noton of before and after, how do you code an algorithm?

                                                                                              Roughly speaking, we define each “step” of an algorithm as a function, and the algorithm itself is defined as the result of (some appropriate combination of) those functions.

                                                                                              As a really simple example, let’s say our algorithm is to reverse a singly-linked-list, represented as nested pairs [x0, [x1, [x2, ...]]] with an empty list [] representing the “end”. Our algorithm will start by creating a new empty list, then unwrap the outer pair of the input list, wrap that element on to its new list, and repeat until the input list is empty. Here’s an implementation in Javascript, where reverseAlgo is the algorithm I just described, and reverse just passes it the new empty list:

                                                                                               var reverse = (function() {
                                                                                                 function reverseAlgo(result, input) {
                                                                                                   // [] === [] is false in JS (reference comparison), so test emptiness via length
                                                                                                   return (input.length === 0)? result : reverseAlgo([input[0], result], input[1]);
                                                                                                 };
                                                                                                 return function(input) { return reverseAlgo([], input); };
                                                                                               })();
                                                                                              

                                                                                              Whilst Javascript is an imperative language, the above is actually pure functional programming (I could have written the same thing in e.g. Haskell, but JS tends to be more familiar). In particular, we’re only ever defining things, in terms of other things. We never update/replace/overwrite/store/retrieve/etc. This style is known as single assignment.
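                                                                                              To check this in a REPL, here’s a self-contained version of the same definition. Note that emptiness has to be tested via `length`: `input === []` would always be false in JavaScript, since each `[]` expression creates a fresh array.

```javascript
// Self-contained sketch of the reverse algorithm above, REPL-ready.
// Emptiness is tested via `length`, since `input === []` is always
// false in JavaScript (each [] creates a fresh array).
var reverse = (function() {
  function reverseAlgo(result, input) {
    return (input.length === 0) ? result : reverseAlgo([input[0], result], input[1]);
  }
  return function(input) { return reverseAlgo([], input); };
})();

// [1, [2, [3, []]]] reversed is [3, [2, [1, []]]]
console.log(JSON.stringify(reverse([1, [2, [3, []]]])));
```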

                                                                                              For your Newton-Raphson example, I decided to do it in Haskell. Since it uses Float for lots of different things (inputs, outputs, epsilon, etc.) I also defined a bunch of datatypes to avoid getting them mixed up:

                                                                                              module Newton where
                                                                                              
                                                                                              newtype Function   = F (Float -> Float)
                                                                                              newtype Derivative = D (Float -> Float)
                                                                                              newtype Epsilon    = E Float
                                                                                              newtype Initial    = I Float
                                                                                              newtype Root       = R (Float, Function, Epsilon)
                                                                                              
                                                                                              newtonRaphson :: Function -> Derivative -> Epsilon -> Initial -> Root
                                                                                              newtonRaphson (F f) (D f') (E e) (I x) = if abs y < e
                                                                                                                                          then R (x, F f, E e)
                                                                                                                                          else recurse (I x')
                                                                                              
                                                                                                where y  = f x
                                                                                              
                                                                                                      x' = x - (y / f' x)
                                                                                              
                                                                                                      recurse = newtonRaphson (F f) (D f') (E e)
                                                                                              

                                                                                              Again, this is just defining things in terms of other things. OK, that’s the definition. So how do we explain it as a definition? Here’s my attempt:

                                                                                              Newton’s method of a function f + guess g + epsilon e is defined as the “refinement” r of g, such that |f(r)| < e. The “refinement” of some number x depends on whether x satisfies our epsilon inequality: if so, its refinement is just x itself; otherwise it’s the refinement of x - (f(x) / f'(x)).

                                                                                              This definition is “timeless”, since it doesn’t talk about doing one thing followed by another. There are causal relationships between the parts (e.g. we don’t know which way to “refine” a number until we’ve checked the inequality), but those are data dependencies; we don’t need to invoke any notion of time in our semantics or understanding.
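                                                                                              The same definition transliterates directly into single-assignment JavaScript (plain numbers stand in for the Haskell newtype wrappers, so this is just a sketch, not the typed version above):

```javascript
// Single-assignment JavaScript transliteration of the Newton-Raphson
// definition above; plain numbers stand in for the newtype wrappers.
function newtonRaphson(f, fPrime, epsilon, x) {
  var y = f(x);                 // data dependency: we need f(x)...
  return (Math.abs(y) < epsilon)
    ? x                         // ...to decide whether x is "refined" enough
    : newtonRaphson(f, fPrime, epsilon, x - y / fPrime(x));
}

// Square root of 2 as the positive root of f(x) = x^2 - 2:
var root = newtonRaphson(function(x) { return x * x - 2; },
                         function(x) { return 2 * x; },
                         1e-10, 1);
// root ≈ 1.4142135623...
```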

                                                                                              1. 2

                                                                                                Our algorithm will start by creating a new empty list, then unwrap the outer pair of the input list, wrap that element on to its new list, and repeat until the input list is empty.

                                                                                                Algorithms are essentially stateful. A strongly declarative programming language like Prolog can avoid or minimize explicit invocation of algorithms because it is based on a kind of universal algorithm that is applied to solve the constraints that are specified in a program. A “functional” language relies on a smaller set of control mechanisms to reduce, in theory, the complexity of algorithm specification, but “recursion” specifies what to do when just as much as a “goto” does. Single assignment may have nice properties, but it’s still assignment.

                                                                                                To me, you are making a strenuous effort to obfuscate the obvious.

                                                                                                1. 3

                                                                                                  Algorithms are essentially stateful.

                                                                                                  I generally agree. However, I would say programming languages don’t have to be.

                                                                                                  When we implement a stateful algorithm in a stateless programming language, we need to represent that state somehow, and we get to choose how we want to do that. We could use successive “versions” of a datastructure (like the accumulating parameter in my ‘reverse’ example), or we could use a call stack (very common if we’re not making tail calls), or we could even represent successive states as elements of a list (lazy lists in Haskell are good for this).
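                                                                                                  The last option works even in JavaScript if we encode laziness with thunks. A quick sketch (the helper names `iterate` and `take` are made up for illustration, not from any particular library):

```javascript
// Sketch: an algorithm's successive states as elements of a lazy list,
// encoded with thunks. `iterate` and `take` are hypothetical helpers.
function iterate(step, state) {
  // A lazy list is a pair [head, thunk-of-tail]
  return [state, function() { return iterate(step, step(state)); }];
}

function take(n, lazyList) {
  // Force only as many tails as we ask for
  return (n === 0) ? [] : [lazyList[0]].concat(take(n - 1, lazyList[1]()));
}

// Successive states of a counting "algorithm": 0, 1, 2, ...
var states = iterate(function(n) { return n + 1; }, 0);
// take(4, states) gives [0, 1, 2, 3]
```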

                                                                                                  A strongly declarative programming language like Prolog can avoid or minimize explicit invocation of algorithms because it is based on a kind of universal algorithm that is applied to solve the constraints that are specified in a program.

                                                                                                  I don’t follow. I think it’s perfectly reasonable to say that Prolog code encodes algorithms. How does Prolog’s use of a “universal algorithm” (depth-first search) imply that Prolog code isn’t algorithmic? Every programming language is based on “a kind of universal algorithm”: Python uses a bytecode interpreter, Haskell uses beta-reduction, even machine code uses the stepping of the CPU. Heck, that’s the whole point of a Universal Turing Machine!

                                                                                                  “recursion” specifies what to do when just as much as a “goto” does.

                                                                                                  I agree that recursion can be seen as specifying what to do when; this is a different perspective of the same thing. It’s essentially the contrast between operational semantics and denotational semantics.

                                                                                                  I would also say that “goto” can be seen as a purely definitional construct. However, I don’t think it’s particularly useful to think of “goto” in this way, since it generally makes our reasoning harder.

                                                                                                  To me, you are making a strenuous effort to obfuscate the obvious.

                                                                                                  There isn’t “one true way” to view these things. I don’t find it “strenuous” to frame things in this ‘timeless’ way; indeed I personally find it easier to think in this way when I’m programming, since I don’t have to think about ‘time’ at all, just relationships between data.

                                                                                                  Different people think differently about these things, and it’s absolutely fine (and encouraged!) to come at things from different (even multiple) perspectives. That’s often the best way to increase understanding, by finding connections between seemingly unrelated things.

                                                                                                  Single assignment may have nice properties, but it’s still assignment.

                                                                                                  In name only; its semantics, linguistic role, formal properties, etc. are very different from those of memory-cell-replacement. Hence why I use the term “definition” instead.

                                                                                                  The key property of single assignment is that it’s unobservable by the program. “After” the assignment, everything that looks will always see the same value; but crucially, “before” the assignment nothing is able to look (since looking creates a data dependency, which will cause that code to be run “after”).

                                                                                                  Hence the behaviour of a program that uses single assignment is independent of when that assignment takes place. There’s no particular reason to assume that it will take place at one time or another. We might kid ourselves, for the sake of convenience, that such programs have a state that changes over time, maybe going so far as to pretend that these hypothetical state changes depend in some way on how our definitions are arranged in a text file. Yet this is just a (sometimes useful) metaphor, which may be utterly disconnected from what’s actually going on when the program runs (or, perhaps, when a logically-equivalent one, spat out of several stages of compilation and optimisation, does!).

                                                                                                  Note that the same is true of the ‘opposite’ behaviour: garbage collection. A program’s behaviour can’t depend on whether or not something has been garbage collected, since any reference held by such code will prevent it from being collected! Garbage collection is an implementation detail that’s up to the interpreter/runtime-system; we can count on it happening “eventually”, and in some languages we may even request it, but adding it to our semantic model (e.g. as specific state transitions) is usually an overcomplication that hinders our understanding.

                                                                                                  1. 1

                                                                                                    A lot of what you see as distinctive in functional languages is common to many non-functional languages. And look up Prolog - it is a very interesting alternative model.

                                                                                                    1. 1

                                                                                                      A lot of what you see as distinctive in functional languages is common to many non-functional languages.

                                                                                                      You’re assuming “what I see”, and your assumption is wrong. I don’t know where you got this idea from, but it’s not from me.

                                                                                                      I actually think of “functional programming” as a collection of styles/practices which have certain themes in common (e.g. immutability). I think of “functional programming languages” as simply those which make programming in a functional style easier (e.g. eliminating tail calls, having first-class functions, etc.) and “non-functional programming languages” as those which make those styles harder. Most functional programming practices are possible in most languages.

                                                                                                      In other words, I agree that “A lot of [features of] functional languages is common to many non-functional languages”, but I have no idea why you would claim I didn’t.

                                                                                                      Note that in this thread I’ve not tried to claim that, e.g. “functional programming languages are better”, or anything of that nature. I was simply stating the criteria I use for whether to call a style/language “imperative” or not; namely, if its semantics are most usefully understood as executing instructions to change the state of the (internal or external) world.

                                                                                                      And look up Prolog - it is a very interesting alternative model.

                                                                                                      I’m well aware of Prolog. The research group I was in for my PhD did some fascinating work on formalising and improving logic programming in co-inductive settings; although I wasn’t directly involved in that. For what it’s worth I’m currently writing a project in Mercury (a descendent of Prolog, with static types among other things).

                                                                                        1. 1

                                                                                          So procedural languages are similar to imperative languages, but with somewhat more abstraction?

                                                                                      1. 4

                                                                                        The only thing which stops me from diving into Emacs is a reliable terminal emulator. I know ansi-term, but I also know its bugs, like completely ignoring window size change and SIGWINCH forwarding.

                                                                                        1. 6

                                                                                          I went “full Emacs” some months ago, when I switched to Eshell as my main shell… I recommend giving it a try! I’ve not opened a regular terminal since, I think.

                                                                                          Edit: the above was possible because over time Emacs has supplanted my other usages of a terminal:

                                                                                          • Magit
                                                                                          • mu & mu4e for email
                                                                                          • ag-mode as grep replacement
                                                                                          • dired for file/directory exploration
                                                                                          • TRAMP for accessing remote files (as we’ve migrated to ephemeral machines without ssh access I don’t really use this any more)
                                                                                          1. 2

                                                                                            But you’re probably not working with numerous remote machines, mainly baremetal, with no ability to safely export configuration files / profile to the destination user. I know TRAMP, but it’s not about remote file access, but remote execution/support.

                                                                                            Also, even if I want to go “full Emacs”, there are still some advanced n/curses applications which I don’t or can’t leave at all, often on these “remote machines”.

                                                                                            Let’s say I just want proper terminal emulation in Emacs – ansi-term is quite close to that, but it fails in full-screen sessions (basically misinterpreting/dropping ECMA “escape” codes), as well as missing the SIGWINCH forwarding I mentioned. Of course I don’t force anyone to write it exclusively for me, but I’m kinda shocked there’s still no proper support for complete VT100/VT5xx capabilities in GNU Emacs, as it’s about 30+ years old.

                                                                                            1. 1

                                                                                              but I’m kinda shocked there’s still no proper support for complete VT100/VT5xx capabilities in GNU Emacs, as it’s about 30+ years old.

                                                                                              I think this is due to the fact that for most commonly-used ncurses programs, it’s both less work and more benefit to just write a pure-elisp frontend and skip the curses UI entirely. Obviously doesn’t work for all obscure curses programs out there, but I think that has a lot to do with why ansi-term hasn’t had these shortcomings fixed.

                                                                                            2. 1

                                                                                              It’s important to note that Eshell was not meant as a shell replacement by its authors, but as a way to easily get output from commands to a buffer, so it is very limited in features.

                                                                                              That said, if it works for you, you do you!

                                                                                              1. 3

                                                                                                Do you have a cite for that? I’d believe that was true at one point, but it’s certainly not the case any more; it is a very full-featured shell these days.

                                                                                                1. 2

                                                                                                  It used to be here, but it appears to not be there anymore! I rescind my statement.

                                                                                            3. 5

                                                                                              That’s a rather unconventional complaint about Emacs; normally it’s the other way around (“why would I want a terminal in my text editor?”) ;)

                                                                                              You mentioned ansi-term, and others have mentioned Eshell, so I thought I’d mention that I use shell-mode for most commandline stuff. It’s a “dumb” terminal and hence can’t handle things like curses, but that lets it act more like a normal Emacs buffer (e.g. I can navigate and edit the content, without the terminal or running command intercepting my keypresses). It seems to handle window size changes and SIGWINCH perfectly well (i.e. ROWS and COLUMNS get updated).

                                                                                              You’re right that Emacs is pretty bad at non-dumb terminals, like with ansi-term. I don’t bother with those modes at all and use a separate terminal (st + dtach + dvtm) for things which need that, like curses applications (that’s also much faster!). IMHO the value of Emacs is to have a standard way to navigate/search/edit buffers of text (plus easy scripting). TUIs like curses are more like character-based graphics than actual text; they’re not meant to be user navigable or editable (it will mess up the display and changes might get overwritten). Likewise applications which control user interaction (e.g. capturing key presses) don’t work well with the unified way of doing things (keybindings, etc.) that Emacs imposes.

                                                                                              Note that there are many ways to do things with Emacs, so if terminals are your main pain point you could try (if you haven’t already) different mixes of components. For example, you could try running Emacs inside a terminal, or inside screen/tmux/dvtm so your terminals are alongside Emacs rather than inside it. You could use emacsclient/emacs --daemon to have multiple instances connected to a single session. You mention TRAMP in another comment; note that it can handle remote shells as well as just files. This is most obvious in eshell, where we can do cd /ssh:user@remote:/home/user and carry on as normal, with TRAMP sending our commands over to the remote machine. If we’re connected to a remote machine with TRAMP (e.g. in eshell, or dired) when we run M-x shell the resulting shell will be running on that machine, and things like tab-completion for filenames will be sent over TRAMP.

                                                                                              1. 2

                                                                                                This is probably the most comprehensive answer I’ll find in the web to this date, thank you :)

                                                                                                But, in shell-mode (M-x shell) $COLUMNS and $LINES aren’t updated at all. However, they are in M-x eshell. Pretty weird considering that the first one knows a bit about ANSI/ECMA, while eshell does not.

                                                                                                TRAMP is a very nice tool even when I didn’t know about remote execution, but now it seems to be even better, thanks again :)

                                                                                                But still, sometimes I have the need to spawn these curses applications, so I’m constantly looking for alternative options. Running an external terminal emulator seems to be a trade-off, but it’s still outside the Emacs world. And, when diving into Emacs, I took the “use Emacs for (almost) everything” approach, as I think it should be taken this way.

                                                                                                There’s a thing about XEmbed, but I can’t find any reliable documentation; it seems to still be an experimental feature.

                                                                                                That EXWM might be great with xterm being fed by ELisp-structured data parsed into X resources, but it’ll require some additional code :)

                                                                                              2. 5

                                                                                                One option is to use Emacs as a window manager via EXWM—then you can have an xterm as a buffer!

                                                                                                1. 1

                                                                                                  I could web-search for it, but I’d rather ask a person: is that one based on xwidget?

                                                                                                  1. 4

                                                                                                    It’s actually just a window manager written in Emacs Lisp using a pure Emacs Lisp implementation of the X11 protocol!

                                                                                                    1. 1

                                                                                                      Ah, like StumpWM, except elisp instead of common lisp.

                                                                                                    2. 1

                                                                                                      No, it is a completely unrelated project

                                                                                                  2. 1

                                                                                                    What specifically do you need terminal emulation for? In my experience curses applications (like WICD) worked well enough, maybe being slightly sluggish.