“Software Debugging for Microcomputers”, Robert C. Bruce (1980). (It’s the last book I wanted to get for my next debugging book review.)

    Based on what I’ve read so far, the book is pretty poor, but it’s hard to say how bad it is for its time. The programming advice is terrible, though. The author comes so close to getting the key point and then completely misses the mark. One of the more egregious mistakes is talking about functions in BASIC (not subroutines) and then never using them (that I can tell). Even worse, functions are explained wrong. So it teases you with a powerful abstraction mechanism that you not only don’t have, but that is never used! (Edit: I found out that there is a version of BASIC for the machine in the book that supports functions as the author describes them.)

    There’s an abridged version under a different title you can get on the Internet Archive, although it’s missing the part about functions.

    1. 24

      Maybe they’re a code smell, but you’d better get used to them, because the cure is often worse than the disease.

      The examples presented are toys. The author is able to eliminate them because the domain they are working in is well defined and there are enough examples to work out a model for it. It also seems that the models are resilient to new information or cases. That’s a luxury.

      I doubt this happens a lot, especially when dealing with business logic. It certainly hasn’t happened much in my work. It’s tempting to see a special case and then try to get rid of it by abstracting things in some manner (I like the phrase “extended domain” used in another comment), but that abstraction tends to do at least one of two things: it makes expressing that special case (or even most cases) less concise, or it incurs a noticeable performance penalty. It may be worth it. It might not. It’s hard to know before you do it sometimes.

      And then there’s the problem of thinking that the abstraction can adequately describe all the cases that you think could happen, only to find out something much different comes along. You might end up bending that abstraction anyway, making things uglier than just accepting that you’ve got some special cases to deal with.


        Hey Geoff,

        I think all these caveats are fair and I should have made them clearer in the article (see my response to Michael below). I especially agree that the following is a risk of using this technique unwisely:

        You might end up bending that abstraction anyway, making things uglier than just accepting that you’ve got some special cases to deal with.

        When new information arises, you need to be open to re-evaluating or scrapping your extended domain.

        The only thing I’d challenge is this:

        I doubt this happens a lot, especially when dealing with business logic.

        In my experience it happens a decent amount, especially when you view “removing special cases” on a spectrum. You might not get your pristine palace of an abstraction, but you might find one that improves your code, or part of your code, and doesn’t have any downsides. Or you might not. It’s just a code smell, something to be alert to. It doesn’t necessarily mean things are rotten.

      1. 5

        As someone who feels pretty comfortable with pointers, I found this article to be:

        1. longer than 5 minutes to go through,
        2. fairly confusing and unfocused, and
        3. pitched at too many levels of abstraction at once: there’s assembly, calling conventions, syntax, and types.

        I may be biased, as I’m in the middle of teaching a beginner C course right now, and this would definitely go over the students’ heads. I find it’s much more helpful to offer different mental models and visualizations of the underlying abstract machine. One explanation will not satisfy and click with everybody.

        I do like beginning with something seemingly simple. “What is a variable?” is a great question to start peeling back the layers in C. Especially defining a pointer as a variable! A common misconception I see is conflating pointers and objects on the heap. A variable is an object on the stack. A variable has a name so you can use it. Some objects don’t have a name, so they’re a little harder to use. A pointer can be a variable. A pointer can point anywhere, so it can help you use that object without a name (or any other object)! Alas, as I mentioned earlier, this would also whoosh over some heads.
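        To make the distinction concrete, here’s a minimal C sketch (the names are mine, purely for illustration) of a named stack variable, a pointer variable, and a nameless heap object that is reachable only through a pointer:

        ```c
        #include <stdio.h>
        #include <stdlib.h>

        int main(void) {
            int x = 42;         /* a named object with automatic storage ("on the stack") */
            int *p = &x;        /* the pointer itself is also a named variable */
            printf("%d\n", *p); /* prints 42: reach x by its name, or through p */

            /* malloc returns an object with no name at all; the pointer q
               is the only handle we have on it. */
            int *q = malloc(sizeof *q);
            if (q == NULL)
                return 1;
            *q = 7;
            printf("%d\n", *q); /* prints 7 */
            free(q);
            return 0;
        }
        ```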

        1. 3

          But not all variables live on the stack: global and static variables (even those defined in functions) live in the data segment [1]. Probably a better high-level definition: a variable is a named location to store data, but that location can be ephemeral (in the case of non-static variables defined in a function).

          [1] Or bss segment.
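          A small C sketch of the storage classes involved (the identifiers are made up for illustration):

          ```c
          #include <stdio.h>

          int global_data = 10; /* initialized global: data segment */
          int global_bss;       /* zero-initialized global: bss segment */

          int next_id(void) {
              static int id = 0; /* static, though defined in a function: lives
                                    in the data segment and persists across calls */
              int tmp = id + 1;  /* automatic: an ephemeral location created anew
                                    on each call (on the stack, or perhaps only
                                    in a register) */
              id = tmp;
              return tmp;
          }

          int main(void) {
              int a = next_id();
              int b = next_id(); /* the static id persisted between calls */
              printf("%d %d %d %d\n", a, b, global_data, global_bss); /* prints "1 2 10 0" */
              return 0;
          }
          ```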

          1. 1

            That is indeed a better definition! I try to avoid the topic of global variables with beginners to avoid nasty practices, but they should not get a faulty definition.

            1. 1

              And note that some variables may only ever exist in registers.

          1. 12

            No! Every new sprint is a mini waterfall and the stakeholders secretly still treat your sprints as grant tasks.

            1. 1

              What is a grant task?

              1. 2

                I have a feeling “grant” was meant to be Gantt.

                1. 1

                  Yeap, it was a typo - sorry about that.

            1. 9

              I haven’t deeply researched the history of project methodologies, but I’d be unsurprised if the belief that “all we did was waterfall until Agile came along” turned out to be false. Pretty much every OOP book I’ve read from roughly 1980-1995 talks about how it works with an “incremental model”, where you deliver short-term wins continuously and constantly re-evaluate customer requirements.

              1. 9

                You might find this thread helpful. It turns out iterative development was the norm, waterfall was an illustration of what wasn’t done, and one guy used that to con folks for money on top of the DOD’s top-down management. Then the world changed for the worse. That’s the story so far.

                1. 4

                  Nothing I’ve read from programming books since the 1970s suggests that Waterfall was “the way” that software was developed. I’ve also not researched this deeply, but I don’t think Waterfall was ever taken seriously. Everything I read talks about the importance of testing intertwined with implementation, in some form, when developing software. Furthermore, there was always talk of iteration to get APIs to be better. The key difference from the early days is that those cycles were longer, which gives the appearance of Waterfall in action.

                  1. 3

                    I think our understanding of classical engineering is partially shaped by the agile industrial complex.

                    Nowadays if you attend a Scrum training somewhere, you get a trainer who typically never worked on a classical engineering project. Yet they have learned the story that classical engineering was waterfall, that no project was ever successful, and that every developer suffered. So they perpetuate this story. Since they don’t know the material, they just talk about waterfall, classical methods, and the V-model interchangeably, as if they were all the same. This portrayal of classical methods has shaped our mental model so much that it has also shaped how classical engineering is implemented.

                    I have an older colleague who dies a little inside every time a 30-something agile consultant starts lecturing people on the perils of classical engineering and the gospel of agility. “If the V-Model were waterfall, it would be the ‘\’-Model.”

                    I also found Bertrand Meyer’s article on this interesting: http://se.ethz.ch/~meyer/publications/methodology/agile_software.pdf . Especially his defense of requirements documents, IIRC.

                    Maybe this is a general remark about the adoption of ideas. It is always a good idea to read and study the primary and early sources of an idea. Quite often, ideas deteriorate as they are adopted. Most material on user stories is a very shallow remainder of the ideas that Cohn wrote down in his book in the 2000s. The same goes for the original “waterfall paper”, the agile manifesto, and even the Scrum guide (ever realize it doesn’t mention “agile” once, or “story” once?). I’d actually be curious about historic, alternative frameworks and processes that were created at the same time but were not widely adopted and have silently vanished. I think a lot of wisdom could be found there.

                    1. 2

                      Do requirements documents need to be defended?

                      If you have no requirements document, how do you know when you’ve fulfilled your contractual obligations?

                      1. 1

                        You probably don’t have contractual obligations. You’re probably building internal software and the requirements could change for those on some sprint-like basis. And that’s okay.

                        Essentially you just have a tight iteration loop and all is well.

                        1. 1

                          Well, many agile consultants will tell you they are the devil and not agile. (They are not agile if you set requirements in stone at some point). When in fact, user stories (advocated as the alternative) are just a different flavour of requirement engineering.

                          Personally I think writing a large requirements document upfront might be a bit risky if you already fear that your project is based on risky assumptions. On the other hand in the Scrum-hamster wheel, it is sometimes hard for the team and the PO to think for a bit and come up with a better solution than the lowest hanging fruit.

                          I have had bad experiences with requirements documents when customers commissioned software development, because often they were too detailed and outlined problems the user didn’t actually have. There I would have favoured a more agile approach.

                          In SaaS contexts, I would have loved to have a well-maintained requirements document at times, especially when parts of the system were rewritten or exchanged.

                    2. 2

                      From the picture of the process of Agile in the article, I see: Design, Develop, Test, Deploy. But where is verification? Poof, gone.

                      Typically I say that there are two kinds of validation (“does the software possess certain qualities?”): testing and verification. The former is established empirically (user tests, examples, etc.), the latter mathematically (through proof and reasoning).

                      Testing essentially establishes that the software has the intended functionality; after successful testing, there should not be any confusion over what the software does, i.e. no misunderstanding over whether the software provides certain functionality.

                      Verification essentially explores unknown consequences, not foreseen by testing. Only by rigorous argumentation can one explore all the possibilities in which problems may lie hidden. After successful verification, there should not be any junk in the software, i.e. no unexpected behaviors.

                      Both phases provide essential information for clear and concise documentation.

                      Agile vs. waterfall discussions seem to attack a strawman, while not providing any useful information for completing the verification phase.

                    1. 5

                      Using the assignment operator = will instead override CC and LDD values from the environment; it means that we choose the default compiler and it cannot be changed without editing the Makefile.

                      This is true, but really, if you want to ensure variables are set in a Makefile, pass them as overrides (make ... CC=blah ...), don’t set them in the environment. The environment is a notoriously fragile and confusing way to specify all these things. (They certainly don’t work with the GCC and binutils stuff I work with on a regular basis!)

                      My advice for Makefiles is to be explicit. It’s tedious and boring, but so much easier to debug.

                      1. 4

                        The reason that setting things in the environment is fragile is that people follow advice to ignore the environment. It’s very useful for cross-compilation to simply set the appropriate environment and go.

                        1. 1

                          There’s also no easy way to pass arguments and options to a makefile except through environment variables. You can also play games with the target, but there’s only so much you can do with that.

                          1. 2

                            I don’t believe that’s true. You can also pass macro definitions as arguments to make; e.g., make install PREFIX=/opt/tools

                            1. 1

                              Yes, overrides passed on the command line can be arbitrary expansions.

                              % cat Makefile
                              STUFF = 1 2 3 4
                              all:
                                      @echo $(FOO)
                              % make 'FOO=$(firstword $(STUFF))'
                              1
                              1. 0

                                Yeah, but environment variables are turned into make variables in the same way as variables given after the make command. The only difference is that they also get placed in the environment of subcommands.

                                1. 2

                                  I’m reasonably sure that is not true either. From my reading of the manual, an explicit assignment of a macro within the Makefile will override a value obtained from the environment unless you pass the -e flag to make. The manual suggests the use of this flag is not recommended. In contrast, a macro assignment passed on the command line as an argument will override a regular assignment within the Makefile.

                                  Additionally, some macros receive special handling; e.g., $(SHELL), which it seems is never read from the environment as it would conflict with the common usage of that environment variable by user shells.

                                  1. 2

                                    As far as I can tell, they both get placed in the environment of subcommands. The manual is (as per many GNU manuals) unclear on the matter: “When make runs a recipe, variables defined in the makefile are placed into the environment of each shell.” My reading is that anything set in Make should be passed through, but this does not appear to be the case.

                                    % cat Makefile
                                    FOO = set-from-make
                                    all:
                                            @sh ./t.sh
                                    % cat t.sh
                                    echo "FOO is '${FOO}'"
                                    % make
                                    FOO is ''
                                    % FOO=from-env make
                                    FOO is 'set-from-make'
                                    % make FOO=from-override
                                    FOO is 'from-override'
                                    1. 1

                                      IMO the GNU make manual is pretty clear on this.


                                      Variables can get values in several different ways:

                                      • You can specify an overriding value when you run make. See Overriding Variables.
                                      • You can specify a value in the makefile, either with an assignment (see Setting Variables) or with a verbatim definition (see Defining Multi-Line Variables).
                                      • Variables in the environment become make variables. See Variables from the Environment.
                                      • Several automatic variables are given new values for each rule. Each of these has a single conventional use. See Automatic Variables.
                                      • Several variables have constant initial values. See Variables Used by Implicit Rules.


                                      An argument that contains ‘=’ specifies the value of a variable: ‘v=x’ sets the value of the variable v to x. If you specify a value in this way, all ordinary assignments of the same variable in the makefile are ignored; we say they have been overridden by the command line argument.


                                      Variables in make can come from the environment in which make is run. Every environment variable that make sees when it starts up is transformed into a make variable with the same name and value. However, an explicit assignment in the makefile, or with a command argument, overrides the environment. (If the ‘-e’ flag is specified, then values from the environment override assignments in the makefile. See Summary of Options. But this is not recommended practice.)

                                      1. 1

                                        Yes, I don’t disagree with any of this and it’s consistent with usage. My point was about variables getting into the environment of shell commands in recipes. The wording suggests all variables are put into the environment, but based on the first result in the example that’s clearly not the case.

                                        1. 1

                                          Oh I see. The manual is less clear on that point:

                                          By default, only variables that came from the environment or the command line are passed to recursive invocations. You can use the export directive to pass other variables.

                                          It should probably say “passed to child processes through the environment” or something similar.

                                          $ cat Makefile
                                          export VAR2='hi'
                                          all:
                                                  echo $$VAR1
                                                  echo $$VAR2
                                          $ make
                                          echo $VAR1

                                          echo $VAR2
                                          'hi'
                          1. 14

                            Before the 2000’s, software development was mostly done in a Waterfall approach.

                            This viewpoint, which has become commonplace and fashionable today, is largely inaccurate. Before 2000, most complex problems in business, commerce and industry were undertaken using incremental and iterative approaches.

                            A single linear analysis and design, then development, then testing methodology had fallen from favor for general software development long before the “agile” movement.

                            It saddens me somewhat that history has been re-written for many young developers to be that “enlightened” agile replaced “evil” waterfall.

                            This article on the history of incremental and iterative approaches might be interesting: http://www.craiglarman.com/wiki/downloads/misc/history-of-iterative-larman-and-basili-ieee-computer.pdf

                            1. 6

                              You’re absolutely correct. I really dislike the way “waterfall” is used left and right these days to describe how things were, because it’s inaccurate: the term waterfall was coined as a description of something that should be avoided. It was an observation of a failing development model, never did its author recommend it as a practice.

                              1. 4

                                I suspect authors use the “Waterfall” canard both as a rhetorical tool to bolster their claims and out of simple laziness. Getting the nuance of the evolution of software development right is tough, and it’s especially tough to distill it down to a paragraph with reasonable accuracy.

                                I’d be a lot happier if it was just left out of these kinds of pieces, even when they say useful things (like I think this article does). You are absolutely correct: this characterization of software practices is more or less historical negationism.

                              1. 19

                                I’m deep diving c2 and it’s a mindbending mix of primary-source history, paradigm archaeology, elitist flamewars, and utter crackpottery. This is a proto “falsehoods programmers believe about X” I found in a tangent of a tangent of an argument on whether OOP or databases were better. Because those two things are apparently incompatible.

                                The linked article is pretty great though!

                                1. 16

                                  “I’m deep diving c2 and it’s a mindbending mix of primary-source history, paradigm archaeology, elitist flamewars, and utter crackpottery.”

                                  Those were heady times, but threads on lobste.rs are going to appear much the same in 20 years.

                                  One of my favorite things is to go to used book stores and flip through print books and magazines from 20-60 years ago. You get the sense of life of an era. You see misplaced passionate certainty, and then you look at us and you have a chance to imagine what the future will think.

                                  1. 4

                                    I discovered it shortly before they shut down comments from new folks. c2 is awesome. I recommend digging through it all. :)

                                    1. 3

                                      …it’s a mindbending mix of primary-source history, paradigm archaeology, elitist flamewars, and utter crackpottery.

                                      Ah yes. Good thing none of those has ever showed up on Lobsters. /s

                                      …an argument on whether OOP or databases were better. Because those two things are apparently incompatible.

                                      They actually are.

                                      1. 2

                                        A mismatch doesn’t equal incompatible. My work handles lots of data that absolutely should be in a relational database, and any additional friction that causes is a reasonable trade-off for the other benefits (reporting, sane modelling, etc).

                                        1. 2

                                          I don’t think I’ve seen anybody on lobsters (yet!) claim that they solved the halting problem.

                                          1. 1

                                            …that happened?

                                        2. 2

                                          …and it’s a mindbending mix of primary-source history, paradigm archaeology, elitist flamewars, and utter crackpottery.

                                          I find the same phenomenon in books about “programming your microcomputer” from the 70s and 80s, especially the crackpottery.

                                          1. 1

                                            That list of peculiarities sounds like a weird mix of challenging fun and hellish nightmare.

                                            1. 1

                                              There was an interesting crackpot (or?) on C2 called “TopMind” (“top” for “Table-Oriented Programming”). He was antagonistic towards OOP but the OOP people were pretty hostile to his critiques too, so I kind of liked him as an underdog. And OO was almost religious there, due to its founding as an OO pattern discussion forum. TopMind’s ideas included the benefits of storing procedural code in tables, the superiority of FoxPro, etc.

                                              1. 1

                                                I think the most cringeworthy example here is this, which is a perfect example of How Not To Argue For HOF.

                                            1. 4

                                              “your architecture should allow to run HLL code much faster than a compiler emitting something like RISC instructions, without significant physical size penalties”

                                              My reason is more about running reliably and/or securely even if there is a penalty. I’m curious if it could speed things up, though. To support the possibility, Intel’s is already a higher-level architecture than the RISC-like instructions the micro-architecture uses. Intel’s is also one of the highest-performance implementations. Perhaps a full-custom design of a HLL-centric processor could similarly boost things. I don’t know, though. I will address two of these.

                                              “JWZ’s Lisp-can-be-efficient-on-stock-hardware claim isn’t much better than Smalltalk-can-be-efficient-on-custom-hardware, I find. Just how can it be?”

                                              Something like this that achieves a lower performance penalty than the 25% the author allows for.

                                              “There are various other kinds of computers, such as convenient realizations of neural networks or cellular automata, but they’re nowhere as popular either, at least not yet”

                                              Deep neural networks got super popular. Then engineers figured out that you could ignore a lot about how modern chips were designed, especially precision, when implementing them. Analog circuits also turned out to implement them really well. There’s quite a diverse array of custom architectures making them faster, more energy efficient, etc. Some are analog/digital hybrids. The most common deployment involves regular CPUs with GPUs, since they’re commodities. Hardware implementation on cutting-edge nodes, especially analog, has a high cost.

                                              1. 2

                                                My reason is more about running reliably and/or securely even if there is a penalty.

                                                I think this too. I’d rather work on an architecture that makes compilation simpler and, more importantly, memory management safer.

                                                1. 3

                                                  ARM has Jazelle for this purpose too, although I don’t think it’s very popular.

                                                  1. 1

                                                    There are also Java processors like aJile and JOP. JOP’s comparison page lists more. I’m not sure how often they’re used. I can see the benefits of some, though.

                                                  2. 1

                                                    Yeah, but I am pretty sure that it is dead because JITting to regular ARM code was faster.

                                                  1. 6

                                                    You can find me @GeoffWozniak@hackers.town and @GeoffWozniak@mastodon.club. I’m migrating to the hackers.town account for most things (Emacs, debugging, old tech books, some Canadian political things) and will use the mastodon.club one less since the site, although happily in Canada, tends to fail a lot and doesn’t get updated.

                                                    1. 3

                                                      I like hackers.town. One of the admins recently started using my guide to try to re-theme the site:


                                                      I think he posted it on User Styles. I was really happy to see someone use one of my tutorials.

                                                    1. 3

                                                      I use Emacs for writing, development, and pretty much anything text-based, as well as for mail, as a directory browser, and sometimes as a file manager. I’ve been using it for over 20 years, and I’m starting to force myself to use things like M-x find-dired and M-x rgrep more to find out if I can ditch the shell for that sort of thing. It’s showing promise. I use it even more than I used to, although I have no plans to use it as a window manager. (emacs -nw forever!)

                                                      But I think the author is being remarkably charitable in some respects. I tried EMMS. It was bad. (Is it still bad?) And as has been mentioned in this thread, the shell emulation is okay, but falls down when there is a lot of output. If you read the author’s Eshell companion piece, there is a lot more overselling. The lack of input redirection in Eshell is a killer. Throwing large amounts of output into a buffer is incredibly slow and saying you don’t need to use grep in a pipeline means you don’t know what grep is for. If you have really large files to work with, Emacs is not suitable. Sure, M-x grep is great, but good luck to you if the file it’s in is, say, a 1GB+ log file from your build. And less almost works in M-x ansi-term for these cases. Almost.

                                                      I love Emacs and I’ll continue to use it, but I’m not going to say to use it for everything. Terminals and shell interaction are still better in some cases.

                                                      1. 6

                                                        I love and use emacs every day as a text editor. Tools like org-mode and general emacs customization are great!

                                                        However, outside of the text-editing sphere, the emacs implementations of things such as a shell, email, and a window manager always seem “almost there” but unfortunately not usable. This saddens me, because I would love to never leave emacs.

                                                        That being said, things like TRAMP completely shifted my ideas on how to manage remote files, so who knows. I am optimistic about the continued progress of the emacs ecosystem.

                                                        1. 8

                                                          Yes, I agree! For the shell environment, the drawback of emacs buffers becomes apparent. Most shell emulations (emacs has several) work fine as long as the executed programs do not produce much text; cat-ing a large file is where trouble starts. When that happens, the shell becomes sluggish or freezes up, which in turn increases the cognitive burden, i.e. “May I execute this line or will this stop my workflow?” This is a major reason why I do not use the shell within Emacs. In general, st feels much more responsive than Emacs, and that saddens me.

                                                          For mail, I simply do not get enough mail to consider the elaborate mail configurations necessary. Mostly I just do IMAP searches to find a message I’m looking for, and that works well enough for me. But I still find the approach with offline mailboxes quite nice; there are just still some rough corners.

                                                          As far as I understand it, when exwm is used, the window manager freezes up if emacs hangs, and that is something I do not want to experience. Hence I’ve tried to make emacs play nicer with the window manager by mostly opening new Emacs frames instead of using the internally managed windows. I’m satisfied with that setup.

                                                          TRAMP is almost there. I wish it had a mosh-like connection to the remote server, but I understand that this is actually quite hard to implement. Still, editing over ssh via TRAMP works quite nicely, especially once you configure ssh aliases and keys properly.

                                                          1. 4

                                                            As a heavy Emacs in terminal user I’m pretty happy with the ability to just bg emacs and run cat and less when needed. And having a terminal multiplexer helps too of course.

                                                            But I realize that if you’re in a windowing environment having everything in Emacs becomes more desirable.

                                                            As an aside, isn’t a “normal” terminal emulator like rxvt already much faster than Emacs? What does st bring to the table?

                                                            1. 3


                                                              May I ask how you put emacs (in terminal mode, i.e. emacs -nw) in the background? I am running emacs with a spacemacs + evil configuration (mostly for org-mode), and C-z completely messes up the state (the key bindings no longer work as usual) but doesn’t put emacs in the background. Maybe it’s spacemacs’ fault. Just wondering.

                                                              1. 2

                                                                I use vanilla emacs, running under tmux. I just hit Ctrl-Z and it’s in the background, visible in the output of jobs. fg brings it back.

                                                                I think it’s your specific configuration in this case.

                                                                1. 1

                                                                  Thank you! Then indeed it’s probably the spacemacs configuration in the terminal mode. Will have to look there.

                                                                  1. 3

                                                                    Ctrl-z is the default toggle key for evil. You can set evil-toggle-key to some other shortcut:
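
                                                                    For example (the replacement binding here is just an illustration; pick anything you don’t otherwise use):

                                                                    ```elisp
                                                                    ;; Must be set before evil loads; C-z then suspends Emacs as usual.
                                                                    (setq evil-toggle-key "C-c z")
                                                                    ```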


                                                                    1. 1

                                                                      Many thanks! It helped indeed and I learned something.

                                                                      I find it so strange that Ctrl-Z was chosen for this toggle, given that it’s the combination used in terminals to send programs to the background. Maybe not many people use emacs in the terminal with evil mode.

                                                                      1. 1

                                                                        The dude in the answers who modified the source to fix this really doesn’t understand the Emacs mindset ;)

                                                                2. 3

                                                                  Yeah, I prefer the window environment, especially for writing TeX documents and using pdf-tools to view them. Most of the time I have a terminal around somewhere, so I use both simultaneously. For example, I have three windows open: the TeX code in one emacs frame, the pdf in another, and then the terminal that runs latexmk -pvc.

                                                                  As an aside, isn’t a “normal” terminal emulator like rxvt already much faster than Emacs? What does st bring to the table?

                                                                  Yes, I used urxvt before but switched to st at some point. The differences between those two are minor compared to a shell inside emacs. The blog post by Dan Luu showed that st performs quite well, which further highlights the point about the throughput of the emacs shells. But yeah, the preference for st is mostly personal.

                                                                  1. 2

                                                                    Alright, that’s giving me LaTeX flashbacks from uni, I know just what you mean!

                                                                3. 1

                                                                  Most shell emulations (emacs has several) work fine as long as the executed programs do not produce much text, like cat-ing a large file. When that happens, the shell becomes sluggish or freezes up, which in turn increases the cognitive burden, i.e. “May I execute this line or will this stop my workflow?” This is a major reason why I do not use the shell within Emacs. In general st feels much more responsive than Emacs, and that saddens me.

                                                                  I’ve found it’s long lines that cause Emacs to freeze. I tried working around this by having a comint filter insert newlines every 1000 characters, which worked, but with really long lines the filter itself would slow down Emacs. One day I got fed up, and now I pipe the output of bash through a hacked version of GNU fold to do this newline insertion more efficiently. Unfortunately bash behaves differently when part of a pipe, so I use expect to trick it into thinking it’s not. Convoluted, but WorksForMe(TM)!

                                                                  (The code for this is in the fold.c and wrappedShell files at http://chriswarbo.net/git/warbo-utilities/git/branches/master ).
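
                                                                  If you want to approximate the idea with stock tools instead of my scripts: expect ships an `unbuffer` command that allocates a pty, which as far as I can tell is the same trick for making bash think it’s interactive, and stock GNU fold does the line breaking (less efficiently than the hacked version).

                                                                  ```shell
                                                                  # Sketch with stock tools (my approximation, not the linked scripts):
                                                                  # expect's `unbuffer` gives bash a pty so it still acts interactive,
                                                                  # and fold caps line length before Emacs's comint buffer sees it:
                                                                  #
                                                                  #   unbuffer -p bash -i 2>&1 | fold -w 1000
                                                                  #
                                                                  # fold's line-breaking half on its own:
                                                                  printf 'aaaaaaaaaa\n' | fold -w 3
                                                                  ```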

                                                                4. 2

                                                                  However, outside of the text editing sphere, the emacs implementations of things such as a shell, email, and a window manager always seem “almost there” but unfortunately not usable. This saddens me because I would love to never leave emacs.

                                                                  Shell depends: as @jnb mentions, it’s cumbersome for a lot of text, but especially with eshell, if you alias find-file and find-file-other-window (e.g. ff and ffo), then you get something you can get very used to, very quickly.

                                                                  Maybe it’s not universal, but I’ve been using Gnus for a while now, and I just can’t change to anything else ^^. Integration with org-mode is great; the only thing that’s lacking imo is good integrated search with IMAP.

                                                                  Honestly, I can’t say anything about window managers. I use Mate, and it works.

                                                                  1. 1

                                                                    The search in Gnus and various other quirks (like locking up sometimes when getting new mail) caused me to finally switch to notmuch recently. I miss some of the splitting power, but notmuch gets enough of what I need to be content. The search in notmuch is really good, although it has a potentially serious hindrance, so I can’t recommend it without reservations.

                                                                    find-file from eshell is why I’ve been making a serious effort to try it out. I also implemented a /dev/log virtual target (M-x describe-variable <RET> eshell-virtual-targets) so I could redirect output to a new buffer easily.
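
                                                                    In case anyone wants to try the same thing, here’s a rough sketch of how such a target might look. This is my guess from the `eshell-virtual-targets` docstring, not my exact code:

                                                                    ```elisp
                                                                    ;; Each entry is (NAME OUTPUT-FUNCTION PASS-MODE); with PASS-MODE nil,
                                                                    ;; the function is called with each chunk of output.
                                                                    (with-eval-after-load 'esh-var
                                                                      (add-to-list 'eshell-virtual-targets
                                                                                   (list "/dev/log"
                                                                                         (lambda (data)
                                                                                           (with-current-buffer (get-buffer-create "*eshell-log*")
                                                                                             (goto-char (point-max))
                                                                                             (insert data)))
                                                                                         nil)))
                                                                    ;; Usage inside eshell:  some-long-command > /dev/log
                                                                    ```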

                                                                  2. 2

                                                                    Regarding the shell: I also had shell issues but now use the shell exclusively in emacs. I work over ssh/tmux into a remote machine and only use the emacs term. I made a little ansi-term wrapper that gives some of the benefits of eshell (the scrolling, yanking, etc.) but still uses ansi-term underneath, so it can run full-screen programs like htop. I’ve been using it for years now. Might be worth checking out.

                                                                    plug: https://github.com/adamrt/sane-term

                                                                    1. 1

                                                                      Oh my God. Not only is that beautiful and perfectly suited to what I was aiming to do, it also solves a couple of tangential problems I had with the section about loading the environment variables from .profile. Thank you so much!

                                                                      1. 1

                                                                        Definitely will! I always run into issues with curses programs in emacs shell modes, which is the only thing that keeps me from using emacs shell exclusively.

                                                                    1. 3

                                                                      “There are still exceptions to this where you do need a native app so you see people often reaching for Electron…”

                                                                      This is deeply unfortunate.

                                                                      1. 1

                                                                        I wish I knew more about Lisp Machine architecture. I wonder how closely this stuff could map to a modern RISC architecture (like ARM), and if there would be some way of exploiting some instruction to do hardware type checking quickly?

                                                                        1. 6

                                                                          If you can track down the paper Architecture of the Symbolics 3600 then you’ll probably learn a lot.

                                                                          I don’t think it maps well to something like ARM, but it could probably be made to work. This presentation should help give you an idea.

                                                                          1. 3

                                                                            Recent ARM versions have hardware support for memory tagging; this might be useful when accessing memory. You could assume the stored value is of a common type (tag) and get the CPU to verify it for you while you execute the happy path.

                                                                            Additionally, many modern CPUs support a much smaller address space than 64 bits, so storing those type tags in the high bits of pointers wouldn’t force fixnums down to 24–26 bits wide; they could be more like 48 bits wide.

                                                                            Sadly my knowledge about all of this is very limited :(

                                                                            1. 1

                                                                              The AMD64 architecture is specifically designed to discourage using the high bits in a pointer as tag bits because it causes hell every time they try to expand the address space.

                                                                              If your values are always aligned to 8 bytes or such (which AMD64 helps facilitate, because the stack has to start aligned), you have a few low bits which are unused and can be used for tags freely.
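
                                                                              A minimal sketch in C of that low-bit tagging scheme (the tag names and values here are mine, not from any particular Lisp):

                                                                              ```c
                                                                              #include <assert.h>
                                                                              #include <stdint.h>
                                                                              #include <stdio.h>
                                                                              #include <stdlib.h>

                                                                              /* malloc'd memory is at least 8-byte aligned on AMD64, so the
                                                                               * low 3 bits of a pointer are always zero and can hold a tag. */
                                                                              enum { TAG_FIXNUM = 0, TAG_CONS = 1, TAG_MASK = 7 };

                                                                              typedef uintptr_t value;

                                                                              /* Fixnums live shifted left by 3, with tag 0 in the low bits. */
                                                                              static value    tag_fixnum(intptr_t n) { return ((uintptr_t)n << 3) | TAG_FIXNUM; }
                                                                              /* Arithmetic right shift recovers the integer (gcc/clang behavior). */
                                                                              static intptr_t untag_fixnum(value v)  { return (intptr_t)v >> 3; }

                                                                              static value tag_ptr(void *p, int tag) { return (uintptr_t)p | (uintptr_t)tag; }
                                                                              static void *untag_ptr(value v)        { return (void *)(v & ~(uintptr_t)TAG_MASK); }

                                                                              int main(void) {
                                                                                  value n = tag_fixnum(-42);
                                                                                  assert((n & TAG_MASK) == TAG_FIXNUM);  /* dispatch on the tag */
                                                                                  assert(untag_fixnum(n) == -42);

                                                                                  int *cell = malloc(sizeof *cell);      /* 8-byte aligned */
                                                                                  *cell = 7;
                                                                                  value c = tag_ptr(cell, TAG_CONS);
                                                                                  assert((c & TAG_MASK) == TAG_CONS);
                                                                                  assert(*(int *)untag_ptr(c) == 7);     /* mask low bits to deref */

                                                                                  puts("ok");
                                                                                  free(cell);
                                                                                  return 0;
                                                                              }
                                                                              ```

                                                                              The type check is then a cheap mask-and-compare, which is the kind of thing the Lisp machines did in hardware.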

                                                                              1. 1

                                                                                That said, it’s no harder to mask off high bits than low bits.

                                                                                1. 1

                                                                                  Very true, but masking the low bits is more future-proof.

                                                                          1. 5

                                                                            I like (and agree with) the sentiment, but the argument as presented is not convincing. I suspect that’s because it’s trying too hard to push the SCM product rather than talk about writing commits/checkins/whatever with the reviewers in mind.

                                                                            The case presented is not compelling because it’s just as plausible and possible to do in Git (and probably Mercurial too). Maybe PlasticSCM makes it easier? I’m not sure. Regardless, the point about squashing commits is weak since you could just as easily squash the entire series of commits down to the smaller series presented. Furthermore, there’s no reason the commit message on the single commit that touches over 100 files can’t be as descriptive as a small series of commits to help guide the reviewers.

                                                                            1. 3

                                                                              This is how the core Mercurial team works, btw. The unit of review is the commit, not the PR (which the core hg team doesn’t even really do).

                                                                              It produces commits that are each individually understandable, which is great because your log is actually readable and contains useful information:


                                                                              Look at how small the commits tend to be, and look at how the commit messages tend to explain just what this one change is doing. This also means that your commit history is now source-level documentation, thanks to hg annotate/blame. The commit message is the one place your tools force you to write something about your code, so you should take the opportunity to actually write something meaningful.

                                                                              A history that nobody takes time to write is one that nobody takes time to read either, and at that point, what you really wanted was an ftp server to host your code with the occasional rollback mechanism to undo bad uploads.

                                                                              1. 1

                                                                                Except for the advertising section, it’s pretty similar to what I ask of my team: that they commit per component or logical unit (although they clearly aren’t listening; maybe I need to be more strict).

                                                                                They could also propose using rebasing to transform the checkpoint form into the reviewer form; I understand it could be used for that.
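
                                                                                In Git, one way to do that cleanup is `git rebase -i`, or even just a soft reset. A self-contained sketch (the toy repo and messages are invented; only the last two commands are the technique):

                                                                                ```shell
                                                                                #!/bin/sh
                                                                                # Build a throwaway repo with WIP "checkpoint" commits.
                                                                                set -e
                                                                                repo=$(mktemp -d)
                                                                                cd "$repo"
                                                                                git init -q
                                                                                git config user.email you@example.com
                                                                                git config user.name you
                                                                                echo base > file.txt && git add file.txt && git commit -qm "initial"
                                                                                for i in 1 2 3; do
                                                                                  echo "step $i" >> file.txt
                                                                                  git commit -qam "wip: checkpoint $i"
                                                                                done
                                                                                # Checkpoint form -> reviewer form: squash the last three
                                                                                # commits into one with a message written for the reviewer.
                                                                                git reset -q --soft HEAD~3
                                                                                git commit -qm "component: one reviewable change"
                                                                                git log --oneline
                                                                                ```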

                                                                              1. 4

                                                                                I don’t know if the author is being sarcastic, but it fits COBOL to a T including the fixed point calculations, and it is even listed. Can OP confirm?

                                                                                1. 3

                                                                                  The buttons at the bottom are clickable.

                                                                                  1. 4

                                                                                    The post is an extreme case of burying the lede.

                                                                                    1. 1

                                                                                      Ah, missed that :(

                                                                                  1. 14

                                                                                    I’m all for unionizing, not out of “unionize all the things”, but because I’m interested in how different power structures might effect different results: experimentation, if you will, but not at the cost of free association.

                                                                                    I grew up in a family of public school teachers. I’ve seen unions mess up badly.

                                                                                    1. 24

                                                                                      My wife is a teacher in the public system, and we both have mixed feelings about the union representing teachers. It does some wonderful things, but then goes out of its way to defend some absolute garbage people simply because they are in the union. And heaven forbid you have any criticisms of how the union operates. Neither of us thinks it’s so much a union thing as it is the lack of care put into building a large organization, since similar problems exist in companies.

                                                                                      1. 14

                                                                                        I’m not super-attached to the idea of unions, but it’s pretty obvious to me that we are getting exploited by the companies–especially startups–that we work for.

                                                                                        I’m not sure that a full-blown union system is the answer, mostly because I trust the soft skills and systems thinking of engineers about as far as I can throw them, but we need to start organizing as a class of labor on some basic things that keep screwing up the market for all of us:

                                                                                        • Forced arbitration
                                                                                        • Broad NDAs
                                                                                        • Broad non-competes
                                                                                        • Broad assignments of invention and other IP
                                                                                        • Lack of profit sharing
                                                                                        • Bad equity for early-mid stage engineers
                                                                                        • Uneven salary systems

                                                                                        Every company and startup gets some of these wrong, and few (if any) get them all right, but because it’s accepted as “standard practice” we all end up having to endure them.

                                                                                        I don’t think we can find a one-size-fits-all solution for, say, salary ranges or other more esoteric issues, but my belief is that the specific things enumerated above are both achievable and universally beneficial for developers. They would benefit both the folks who think they can be the smartest engineer in the company and somehow make out like in the 90s, and the lifers who just quietly and competently do their jobs and switch companies when it’s time.

                                                                                        We need to push for them.

                                                                                        1. 2

                                                                                          “I trust the soft skills and systems thinking of engineers about as far as I can throw them”

                                                                                          I was a bit surprised to read that. I know engineers are infamous for falling short on “soft” skills but isn’t systems thinking supposed to be a forte of engineers?

                                                                                          1. 2

                                                                                            One would think so!

                                                                                            In my experience, the first thing most smart (note: not wise, just smart) engineers reach for when consulted about a misbehaving situation, especially one involving humans, is a system. They have this idea that some intricate set of deterministic protocols and social customs will save them from the ickiness and uncertainty of dealing with other sentient rotting meat. They’re invariably wrong.

                                                                                            Outside of dealing with other people in meatspace, my current work in web stuff has similarly colored my opinion of “systems thinking”, to the point where I basically don’t trust anybody to reliably engineer anything larger than a GET route backed by a non-parameterized query to a sqlite database; they tend to want to add extra flexibility, containers, config files, a few ansible scripts for good measure, maybe some transpiler to the mix to support a pet stage 1 language feature, and all this other nonsense.

                                                                                            So, sadly, I’m reluctant to trust those folks who overengineer and underempathize to successfully build and manage a union.

                                                                                            1. 2

                                                                                              Engineers are famous for thinking that a new bit of technology could revolutionize systems which include human social behaviors.

                                                                                              I’ve met 2-3 engineers in the past decade who I would call ‘systems thinkers’. I’d like to make it onto my own list, someday.

                                                                                          2. 6

                                                                                            I have on my reading list https://www.press.uillinois.edu/books/catalog/47czc6ch9780252022432.html, which talks about the self-organized unions in 1930s that preceded NLRB, and the ways in which they were more democratic and more responsive to membership.

                                                                                            1. 1

                                                                                              I eagerly await your synopsis of it and maybe I’ll pick it up myself. I enjoy your writing!

                                                                                              1. 0

                                                                                                Taft-Hartley in 1947 had a terrible effect on unions, partly by banning wildcat strikes and secondary boycotts, both of which had forced union leaders to be responsive to members.

                                                                                            1. 29

                                                                                              I share the author’s frustrations, but I doubt the prescriptions as presented will make a big difference, partly because they have been tried before.

                                                                                              And they came up with Common Lisp. And it’s huge. The INCITS 226–1994 standard consists of 1153 pages. This was only beaten by C++ ISO/IEC 14882:2011 standard with 1338 pages some 17 years after. C++ has to drag a bag of heritage though, it was not always that big. Common Lisp was created huge from the scratch.

                                                                                              This is categorically untrue. Common Lisp was born out of MacLisp and its dialects, it was not created from scratch. There was an awful lot of prior art.

                                                                                              This gets at the fatal flaw of the post: not addressing the origins of the parts of programming languages the author is rejecting. Symbolic representation is mostly a rejection of verbosity, especially that of COBOL (ever try to actually read COBOL code? I find it very easy to get lost in the wording), and a way to more closely represent the domains targeted by the languages. Native types end up existing because there comes a time when the ideal of maths meets the reality of engineering.

                                                                                              Unfortunately, if you write code for other people to understand, you have to teach them your language along with the code.

                                                                                              I don’t get this criticism of metaprogramming since it is true of every language in existence. If you do metaprogramming well, you don’t have to teach people much of anything. In fact, it’s the programmer that has to do the work of learning the language, not the other way around.

                                                                                              The author conveniently glosses over the fact that part of the reason there are so many programming languages is that there are so many ways to express things. I don’t want to dissuade the author from writing or improving on COBOL to make it suitable for the 21st century; they can even help out with the existing modernization efforts (see OO COBOL), although they may be disappointed to find out COBOL is not really that small.

                                                                                              If you do click through and finish the entire post you’ll see the author isn’t really pushing for COBOL. The key point is made: “Aren’t we unhappy with the environment in general?” This, I agree, is the main problem. No solution is offered, but there is a decent sentiment about responsibility.

                                                                                              1. 1

                                                                                                Also, if you want a smaller Lisp than CL with many of its more powerful features, there’s always ISLisp, which is one of the more under-appreciated languages I’ve seen. It has many of the nicer areas of CL, with the same syntax (unlike Dylan, which switched to a more Algol-like one), but still has a decent specification weighing in at a mere 134 pages.

                                                                                              1. 4

                                                                                                If it’s OK to post non-programming stuff…

                                                                                                Tonight I’ll be singing and playing keyboard as part of a live band performance in Seattle. I’ll be spending part of today practicing for that.

                                                                                                Then tomorrow I fly back home to Kansas to be with my family (parents and siblings) for a week.

                                                                                                1. 1

                                                                                                  If it’s OK to post non-programming stuff…

                                                                                                  Yes! The point is to share things you do that may or may not be tech related to get to know fellow lobsters.