1. 7

    Using a notebook, and writing out what I’ve tried so far by hand.

    The number of circles I can drive myself in without self-reflection far outweighs any amazing gdb tricks I can paste here.

    1. 1

      +1 to that. I treat it as some sort of scientific notebook and write down my hypotheses, experiments, results, etc.

    1. 6

      This is one of the biggest advantages of mocks: you don’t need their underlying type to be extensible, just their API to be stable.

      I don’t understand why it’s not “make your code’s interface stable, and add new interfaces rather than upgrading old ones” as opposed to “just don’t assume the behavior of any code you call will follow any assumptions at any time”.

      1. 1

        Make new interfaces but keep the old. One is silver, the other is gold.

      1. 7

        This doesn’t discuss the huge advantage protobufs have at Google that may even be worth all the pain described: the amount of amazing tooling made is absolute insanity. There are storage formats, SQL-like query engines, pretty printers abound, magical serializers that can read/write anything written in the future or past, … the list is much longer and more proprietary.

        The technical and social pressure to make sure your code works with these 10,000 engineering-years of tooling labor is much, much bigger.

        Also they have an interesting claim:

        At the root of the problem is that Google conflates the meaning of data with its physical representation.

        I question anyone that doesn’t think at some point the rubber meets the road. Your encoded data has to have some meaning associated with it at some level.

        When you’re at Google scale, this sort of thing probably makes sense. After all, they have an internal tool that allows you to compare the finances behind programmer hours vs network utilization vs the cost to store x bytes vs all sorts of other things. Unlike most companies in the tech space, paying engineers is one of Google’s smallest expenses. Financially it makes sense for them to waste programmers’ time in order to shave off a few bytes.

        Paying engineers is actually not one of the smallest expenses (look at Google’s quarterly statements, stock-based comp alone is equivalent to ~50% of all capital costs..) and not the purpose of the tool. For all but INSANE amounts of engineering, it’s usually better to waste the network, cpu, disk, whathaveyou than software engineers.

        Finally, I hate protobufs as well and despise that I use them in my day-to-day work, but this rant felt really lacking inside and out.

        1. 6

          I think the point is that protobuf may be the right choice for Google, but almost certainly the wrong choice everywhere else.

          1. 1

            magical serializers that can read/write anything written in the future or past,

            Show me a protobuf 3 decompiler that doesn’t depend on .NET

          1. 10

            Taking control of your personal knowledge is one of the greatest experiences I’ve had. I switched to emacs purely for Org-mode, stayed for the elisp in ~2006 or so.

            Plain text also has a wonderful property of being a super sturdy format you know you’ll be able to read. I’ve made all kinds of quick reports out of my org files because I can just run grep | sed | awk quickly, and then write up elisp if I want to keep it around longer.

            Unfortunately, it also has the property of being a filesystem/local interface, and thus most mobile/ipad/etc users would balk at the steps you need to go through to get your notes where you’d like them. Dropbox is probably the only solution that works well for users, but then it’s dropbox. Note the auto-committing logic here makes it very difficult to actually use your backups as you’re fighting against ongoing commits running in the same workspace. I really need to write the emacs extension that allows me to use tramp w/arbitrary binary that handles data reading/writing.

            1. 3

              Taking control of your personal knowledge is one of the greatest experiences I’ve had

              I’m in the same boat. I strongly believe that personal knowledge, such as one’s personal notes, should be based on a future-proof system.

              Unfortunately, [plain-text note taking] also has the property of being a filesystem/local interface, and thus most mobile/ipad/etc users would balk at the steps you need to go through to get your notes where you’d like them

              We may not be able to create as smooth a UX as all those mobile note-taking apps, but I believe we should be able to come close enough. I currently dogfood my own app (called Cerveau, based on the open-source neuron project) which allows me to edit git-backed plain text notes from a web browser and mobile. It basically allows you to use Git(hub) as storage, while providing a nice editing and browsing interface.

              1. 2

                I’m in the same boat. I strongly believe that personal knowledge, such as one’s personal notes, should be based on a future-proof system.

                Strongly agree here.

                I manage my knowledge via a web-based system which uses text files as its base data storage format, which I can always zip, move, and adapt to a new system.

                1. 1

                  That sounds interesting, what tool do you use?

                  1. 1

                    Thank you for your interest.

                    I use a system called “hike” at this time, which you can see a demo of here: http://hike.qdb.us/

                    The username and password are both admin, I use that to keep out crawl bots, because at this time they fill the system with lots of junk.

                    It’s still a work in progress, and I think only useful to myself, but here are some general ideas:

                    Textfiles are identified by their hashes.

                    You can attach something to an existing file using the >>hash format.

                    Hashtags are used for grouping and categorizing.

                    A hashtag in a parentless item is assigned to that item. However, if an item has a parent (using >>), the hashtag is assigned to the parent item.

                    Let me know if you have any questions.
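
                    The attach and tagging rules described above are simple enough to sketch. Here is a hypothetical Go parser illustrating them; the `item` fields and the `parse` helper are my own invention for illustration, not hike’s actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// item models one text entry: ">>hash" names a parent, "#word" is a tag.
type item struct {
	Parent string   // hash after ">>", empty for root items
	Tags   []string // words beginning with "#"
}

// parse applies the rules from the comment above: ">>hash" sets the parent,
// and a hashtag belongs to the item itself unless the item has a parent, in
// which case the tag is assigned to the parent instead.
func parse(text string) (it item, tagOwner string) {
	for _, w := range strings.Fields(text) {
		switch {
		case strings.HasPrefix(w, ">>"):
			it.Parent = strings.TrimPrefix(w, ">>")
		case strings.HasPrefix(w, "#"):
			it.Tags = append(it.Tags, w)
		}
	}
	if it.Parent != "" {
		return it, it.Parent
	}
	return it, "self"
}

func main() {
	_, owner := parse(">>a1b2c3 reply text #golang")
	fmt.Println(owner) // the tag is assigned to parent a1b2c3
}
```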

              2. 2

                Unfortunately, it also has the property of being a filesystem/local interface, and thus most mobile/ipad/etc

                This is something that has held me back many times from using such a system. In the past I only used markdown notes in a single directory, more like a journal (so no wiki, and pretty messy), syncing with Syncthing or Dropbox, which I believe is one of the best big providers for plain text. And still, editing on mobile was a pain back then; I’m not sure if there’s any better interface to work on plain text now.

                When that stopped filling my needs I tried Evernote and moved to Notion.so, which I have been using since then. I’m open to exploring alternatives, which made me end up here looking at VimWiki, which I could couple with WebDAV to sync with my VPS.

                On a side note: Notion has a nice export to Markdown and CSV that almost nails it for me. It basically follows the directory structure of the content you have in the platform.

                1. 5

                  This is why Joplin fits my need perfectly.

                  • Syncs via WebDAV
                  • Great mobile apps
                  • Markdown everything.
                  • On desktop it has an option to spawn your favorite external editor of choice so Vim away!
                  • 100% open source

                  I’m in love. I haven’t ever felt this ‘together’ in terms of my personal and professional knowledge bases (no hyperbole).

                2. 2

                  I’m not sure how great plain text is compared to say sqlite. Faster tagging, full text search, one file, are all things that would make a note-system better in my eyes, than having free-format files lying around in a file system. And it’s not like data has to be plain-text for it to survive, sqlite and a lot of other formats with public domain/free software parsers are just as accessible, perhaps even more when considering how file-system unfriendly mobile devices are (often there’s not even a proper file manager).

                  1. 1

                    SQLite is one of the more compatible formats, but I still can’t edit it by hand.

                    My solution is to have a tree of plaintext files, which are then indexed into SQLite for searching.
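
                    That split (plain text as the source of truth, a disposable index for search) can be sketched roughly in Go. The in-memory inverted index below stands in for the real SQLite table, which is an assumption about the setup; the point is that the index can be dropped and rebuilt from the files at any time:

```go
package main

import (
	"fmt"
	"strings"
)

// buildIndex maps each lowercased word to the names of the notes containing
// it. The plain-text notes stay authoritative; this structure (or an SQLite
// FTS table in a real setup) is derived data that can always be regenerated.
func buildIndex(notes map[string]string) map[string][]string {
	idx := make(map[string][]string)
	for name, body := range notes {
		seen := make(map[string]bool)
		for _, w := range strings.Fields(strings.ToLower(body)) {
			if !seen[w] {
				idx[w] = append(idx[w], name)
				seen[w] = true
			}
		}
	}
	return idx
}

func main() {
	notes := map[string]string{
		"todo.txt":  "buy milk and eggs",
		"ideas.txt": "milk the cloud billing idea",
	}
	idx := buildIndex(notes)
	fmt.Println(len(idx["milk"])) // → 2 (both notes match)
}
```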

                    1. 1

                      This feels like putting the cart before the horse to me. If you want optimized search why not retain the power and recoverability of plain text and use sqlite for indexing and metadata storage?

                      1. 2

                        I don’t recognize any special power in plain text. You just need a tool to access the database, and it’s as good as plain text, when it comes to unix utilities. Plus you don’t have to bother with duplicate states and updating the indexing or metadata storage, since it’s all in one file.

                        1. 1

                          Your sqlite file gets corrupted - Game Over.

                          One text file gets corrupted? You lose whatever bytes from that one text file.

                          Every decision is a trade off between utility and convenience.

                          1. 2

                            Your sqlite file gets corrupted - Game Over.

                            Not necessarily. I mean, first of all it’s easy to create a backup, and then there are tools to recover as much as possible. Sure, you could engineer an attack to corrupt just the right bytes, but then I could just as well say “what if you run rm -f *.md?”

                      2. 1

                        Doing this now and it’s great. Title and content of notes are text fields, tags and links are structured many-to-many relations. Keeping metadata out of the note contents means I don’t have to parse anything and querying notes is super easy.

                        1. 1

                          What tool do you use for that?

                          1. 1

                            I wrote my own.

                    1. 2

                      It’s hard not to reach for pen, paper & a calculator (M-x calc) right off the bat. Before you realize it you’re looking up exact numbers and seeing what the calculation should be.

                      I’ve been trying to force myself to do the mental math & estimation on the spot. I don’t write much/anything down, and I see how far off I was some time later. I’m starting to learn more of these tricks. The dumbest one I’ve done is just memorizing all kinds of useful numbers & terms. Anki reminds me to practice all my SI units from time to time.

                      It’s especially helpful in conversations where people are bouncing back and forth a few different options, but everyone is acting like all options are equal. They are rarely equal.

                      1. 1

                        I find myself reaching for the calculator more, but more to check magnitude while I think. I’d like to rely on recall a bit more, when in good practice it makes for more fluid thought.

                        Fluid BoE calculation has two useful qualities. The first is that it doesn’t pull you out of the zone when you are trying to be creative. Reifying the creative thought in the “will it work specifically” mindset converts the mind from ideation to editorialization.

                        The second, related to the first, is that BoE calculations mostly serve as indicators that the idea won’t not work (the double negative is necessary). This prunes the solution space while staying creative.

                      1. 3

                        Generally very good set of rules. A couple of exceptions, at least in part.

                        Avoid defining variables used only once

                        If a function takes a few parameters, especially when they’re repeated primitive types, it’s often extremely useful to use single-use variables to disambiguate. I’d generally prefer to see

                        var (
                          cfg   = x.GetConfig()
                          addr  = cfg.Address(id)
                          level = y.GetLevel()
                          retry = cfg.ShouldRetry(z)
                        )
                        return other.NewThing(addr, level, retry)

                        over

                        cfg := x.GetConfig()
                        return other.NewThing(cfg.Address(id), y.GetLevel(), cfg.ShouldRetry(z))

                        especially as the set of intermediate values grows.

                        Prefer github.com/pkg/errors#Wrap to stdlib errors wrapping

                        IMO, the rationale isn’t convincing enough to justify the dependency.

                        1. 3

                          I have mixed feelings about pkg/errors vs stdlib; I switched my app to stdlib a few weeks ago, and I just woke up to 24 of these errors in my mailbox:

                          Session.GetOrCreate: context deadline exceeded

                          There’s only one place this can logically occur in the function, but I kinda miss the stack trace to confirm that this is really the right location, and also because right now I’m not sure if this comes from an HTTP call or a cron job, for example. I know this is fixable with better error wrapping context, but it’s easy to forget and sometimes also tricky to get right, as you don’t want to add too much context as that will just lead to duplicate info in a long error string.

                          I also miss Wrap() returning nil on nil errors; it removes the need for a lot of if err != nil checks at the end of functions.

                          I already wrap the errors package to add Wrap() and Wrapf(), and I think I’ll add stack traces too. Either way, I think pkg/errors still has some value.

                          1. 1

                            you don’t want to add too much context as that will just lead to duplicate info in a long error string

                            IME, never been an issue, and strictly preferable to automated (file:line) stack traces.

                            1. 1

                              I’ve seen it quite a few times with stuff like:

                              _, err := os.Open(file)
                              if err != nil {
                                  return fmt.Errorf("somefun %q: %w", file, err)
                              }

                              Which, in Go 1.14, produces:

                              somefun "/file": open /file: no such file or directory

                              There are some other functions as well. The standard library is better about this now compared to a few years ago, but in the past it didn’t always add useful context like the filename or whatnot. With 3rd party libraries you never know what they do (return err is still a common pattern, in spite of many people’s best efforts to convince Gophers that blindly returning errors is not a good idea), so you actually need to check.

                              It’s all just a certain amount of cognitive overhead that I’d rather spend on actually solving the issue at hand. It kind of reminds me of C and the like where you need to worry about finicky memory management details. It’s not unworkable or anything, but I think it could be improved (we’ll see what happens with the “Go 2” proposals, although they don’t really solve this specific issue IIRC, and arguably even makes it worse).

                          2. 1

                            IMO, the rationale isn’t convincing enough to justify the dependency.

                            What makes a dependency justified to you? To me it’s simple, deep interfaces. SQL, unix, etc. pkg/errors, while not deep, is very very simple. Wrapf is an intern project at best.

                            1. 1

                              What makes a dependency justified to you?

                              In your terms, the dependency must be good (i.e. simple, or in John Ousterhout terms, narrow) and substantial (i.e. deep). If it’s substantial but not good, it’s a non-starter. If it’s good but not substantial, I’ll copy the functionality into my project.

                              1. 1

                                I’ll copy the functionality into my project.

                                That sounds like vendoring with extra steps? If you copy/rewrite while staring at the code, you’re already taking on the IP burden, why not just copy the dependency, and add patches where necessary?

                                1. 1

                                  Because a little copying is better than a little dependency.

                          1. 16

                            This reminds me of the complexity clock article. With anything sufficiently complicated, you will probably need the power of a programming language. Once you’re at that point, you might as well use an established language for your configuration, instead of rolling your own.

                            1. 2

                              Oooh, what a nice thought-provoking article! That’s going in my bookmarks. Thanks for the link.

                              Once you’re at that point, you might as well use an established language for your configuration, instead of rolling your own.

                              I also like the second part of their advice: “so go back to coding the config in your project’s main language, and invest in your build/deploy cycle to make config-changing [via deployment] trivial.”

                              1. 2

                                Folks using a language like Lisp, on the other hand, could do every approach equally well, with tool support and lightning-fast deployments. The same goes for any language that makes building DSLs within the general-purpose language easy. If it’s strongly typed, then you get type checks on the final output. An example would be the Ivory language embedded in Haskell.

                                This still supports the author’s claim that the developers should use a better “build-test-deploy” cycle.

                                1. 1

                                  My thinking is this clock is basically “we keep picking things that don’t allow for each part of the clock simultaneously”.

                                  You can always add a “we only call READ not EVAL when loading config” and have users figure out how they want to generate s-expressions if limiting dynamism is important.

                                  1. 1

                                    Sounds like accurate thinking.

                                2. 1

                                  I’ve seen this anti-pattern in a few places, and I completely agree. All programming languages are imperfect, but all of the established languages are worlds better at expressing logic than any DSL or mess of XML, JSON, etc you could ever dream up.

                                1. 2

                                  An emacs config will teach you the power of documentation.

                                  If you do not comment the why, the motivation, etc., for some lines in your config, you won’t know when you can remove something decades down the line.

                                  I’d do the following:

                                  • play with emacs, along with its default keybindings, for a while; hack up a personal config however you’d like, but use as few packages as possible.

                                  • after a few months of that, try out the evil binding based distributions (DOOM, spacemacs).

                                  • decide whether or not to include evil mode in your emacs config.

                                  Your config will probably live for years - put it in version control.

                                  Personally, my emacs config is here.

                                  1. 3

                                    This is a great repo!

                                    Idris really shines here, as does the Java & SMT example - neither of which I ever would have thought about even trying. These comparative things can lead to dumb stuff, but for me it’s really great to see things that are not SPARK or Coq.

                                    1. 1

                                      I learnt some Idris and really liked it. Based on https://whatisrt.github.io/dependent-types/2020/02/18/agda-vs-coq-vs-idris.html I decided to spend some effort learning Agda. It was far from trivial to set up and takes a very long time to compile. So far I’m not impressed (but I have only spent one evening so far, so I only just started).

                                    1. 1

                                      Bazel. I recommend Bazel. The learning curve is steep, but the payoff is tremendous. Imagine a world where you have fully-deterministic build artifacts (container images and binaries, code coverage, and a list of all build actions). The level of control and insight you gain into the build, actions, and dependency graph (querying for the optimal order of service rollout, by client dependencies, for example) is incredible. There are similar tools out there, like Buck, but once you get past the wtf stage, Bazel is just incredible (use bazelisk).

                                      You can take a look at my scraps in github.com/dan-compton/world. Note that I’ve got a number of things going on there – deploying nats-streaming and grpc-based bidi pub-sub, a node app, a port of migra (declarative schema management) – all of which are built and/or run by the build system. My deploys are handled via jsonnet and rules_k8s (or yaml in some cases because I’m not a fan of jsonnet (the object oriented kind)).

                                      With some minor effort it’s possible to deploy services to my local minikube cluster THEN TO GCP by just switching the cluster target. How cool is that!?

                                      1. 1

                                        Does bazel relate to splitting a monorepo into several small repos remotely? I don’t follow.

                                        1. 2

                                          Sure, you can create a virtual monorepo with bazel. https://www.youtube.com/watch?v=2gNlTegwQD4

                                          1. 1

                                            Does a monorepo require a monolithic build system?

                                      1. 1

                                        Also in the “weird messaging” department, this comment on preemptible goroutines:

                                        [Programs that use syscall] will see more slow system calls fail with EINTR errors. Those programs will have to handle those errors in some way…

                                        I’m assuming this change didn’t break compatibility, so this means those programs should already be handling those errors, but getting EINTR on “more” of them may reveal bugs. Or is it saying you’ll get EINTR where you never could before? That “more” is confusing.

                                        1. 1

                                          A system at $work that’s been running without problems for 2 years broke when we updated to Go 1.14 (and works when we build the exact same code under 1.13). It’ll be fun to find out where the problem is. We don’t do any syscalls directly, and didn’t see error messages, so fun times ahead.

                                          1. 1

                                             I think it’s an error code that you should already be handling, but programs may be assuming the happy path by accident. Because preemption presumably uses signals to stop a goroutine on a thread, you are now basically guaranteed to get the occasional EINTR.

                                            1. 1

                                              Lots of people do not loop on EINTR, and if their syscalls are fast enough, they never run into it.

                                              This will create lots and lots of bugs in production code IMO. It’s unfortunate because I don’t think their API guaranteed anything else, but behavior of a popular enough API is your API, and that’s it.

                                            1. 12

                                              I’m not sure I’ve ever seen someone write up an architecture or design doc where category theory was the linchpin. Calculus? Sure. Statistics? Almost every time.

                                               I’m a bit baffled by this article: is the big rant against Haskell that it’s not future-proof enough? Most folks rail against Haskell as being not current-proof enough.

                                              1. 4

                                                While only a minor point in the article, I was intrigued by the reflection that the conversion from Ruby to Go was relatively easy because the test suite executes Hub’s executable as a user would, instead of calling individual functions.

                                                This has me thinking a few things:

                                                1. “Integration”-style testing, where you interact with a system as a user would, is strictly better than unit testing
                                                2. Often there is a cost to this style of testing, though, as it can be challenging to setup different states
                                                3. Maybe CLI’s are particularly well suited to this style of testing, since there’s usually only so many ways to interact with them (stdin, stdout, flags, etc)

                                                I wonder how Hub dealt with the CLI making network requests in tests, though?

                                                1. 6

                                                  Akkartik’s trace testing provides a similar piece of non-language specific indirection.

                                                   It always bothers me to some degree that heavily depending on the programming language for your testing environment (like mock objects) provides great ergonomics for creating tests. However, they cement an implementation so deeply that I wonder if they create much more commitment than any author intended.

                                                  1. 5

                                                    I guess different people mean different things when they say ‘better’, but I’d agree in the vague sense that I would rather be comfortable with some integration tests, than with good ‘unit test’ coverage and no integration/end-to-end tests.

                                                    Also, if your system wasn’t tested in the first place, integration testing is often a necessary step before you can start unit testing as it requires reorganizing and decoupling your code.

                                                    1. 3

                                                      Answering my own question: it looks like the tests use given blocks to specify a fake API and responses that the CLI hits.

                                                    1. 1

                                                      Technical debt is too broad of a term, and our industry should have more terms for things we would not deem “new features”.

                                                      To me technical debt is not something where the execution and behavior of the system is equivalent in all black box observable ways, except maybe memory access patterns. That is more likely an ergonomics issue.

                                                      • Problem 1: the system has technical debt, and has definable risk to the system soon. Heroism rarely helps, as you need to engineer your way back out of the problem. Maybe you call into external systems you have deprecated, you use an old API that you’re solely supporting now, an old database needs to be migrated to the new database, etc. These can be listed, prioritized, and generally worked on just like any feature work. If you can define the risk, you can define the expected value of the change.

                                                      • Problem 2: the system has poor ergonomics, and has unknowable risk to future project velocity. Heroism usually helps here in the form of many refactor patches. These are usually bad class hierarchies, using libraries in weird unintended ways, lack of modularity, poor reusability in tests, etc. These are very difficult to prioritize, because defining the expected value of the change requires metrics most organizations don’t have (time to develop new feature in the area of code with the ergonomic problem).

                                                      The solutions for each of these usually look radically different. I wish we had even more terms for each type, so software engineering could come up with general patterns in how to address each.

                                                      1. 8

                                                        C++ is great if someone else set up all the tooling, compilers, and libraries for me.

                                                        1. 3

                                                          This is a really cool idea for a small widescreen laptop, where tiling left-to-right is desirable, but tiling top-to-bottom results in windows too short to be useful.

                                                          The first tiling window manager I ever used was Ion, which unfortunately ended up in a debate between the author and distro maintainers over distributing modified and/or outdated versions of the software under the same name. Similar debates keep popping up even today.

                                                          After Ion, I tried other alternatives such as awesome and wmii, but it wasn’t the same. Today I just use whatever window manager happens to be installed, with hotkeys to navigate to whichever window is above/below/left of/right of the currently focused window (https://github.com/cout/windowfocus). This works for me on the desktop, but I might have to try PaperWM next time I’m using linux on my laptop (I particularly like the scratch layer – Ion had a separate untiled workspace instead).

                                                          1. 2

                                                            iirc PaperWM started as a fork of ion3 and then they rewrote it so you might find some roots there. I’ve personally tried tons of tiling WMs but I couldn’t find a replacement for the joy of manually splitting + tabs that ion3 gave.

                                                             Then I migrated to notion, which you might enjoy (https://github.com/raboof/notion), as it’s a fork of Ion3 with fixes/improvements after Tuomov dropped development. A new version is going to be released soon without Ion’s licensed code, so hopefully distros will be willing to adopt it :)

                                                            Edit: typos

                                                            1. 1

                                                              Been using notion full time for a while, it’s great. I’ve been using ion since ~ion2’s release. Long lived software is wonderful.

                                                            2. 2

                                                              ion3 was a good window manager, but the maintainer was difficult to get along with.

                                                              i3wm is an entirely new window manager based on the ion3 model of subdividing windows (as opposed to the awesome/dwm model of automatically resizing all the windows when a new one appears), and it’s pretty great. I personally use it in combination with GNOME Flashback (so I have all the GNOME goodies like volume-control keys and disk automounting).

                                                              1. 2

                                                                I really like a lot of i3wm’s features, but I literally can’t live without key chorded full screen zoom and, while I know that integrating compiz or the like is possible, I’m not sure how to achieve it, so I went back to running KDE/Gnome.

                                                            1. 12

                                                              I haven’t worked with a quality-focused team since ~2009, so it has nothing to do with weakness, and turning this into a moral choice that someone is making seems misplaced to me. I think it’s a capitalist choice, and yet again capitalism optimizing for nothing useful.

                                                              The “worse is better” theory winning is not some victory lap for C; I believe it’s just a consequence of the fact that consumers/clients have no other choices, and when they do, the cost and effort of switching is an almost impossible hurdle. The idea of me switching to an iPhone, or my wife switching to Android, presents an almost insurmountable set of unknowns.

                                                              1. 2

                                                                I don’t think the article really states it as a moral choice, but rather as an emergent property of software development as it is practiced.

                                                                1. 1

                                                                  I’m sure there’s a philosophical name for this. It’s a practice that results in morally problematic results, despite that practice not being a deliberate moral choice. Sort of like how capitalism as currently practiced fills the ocean with microplastic garbage despite nobody making a choice to do that.

                                                                  1. 5

                                                                    Hot take: most “morality” is just a matter of aesthetics. Billions of people would presumably rather be alive than not existing because a non-capitalist system is grossly inefficient at developing the supporting tech and markets for mass agriculture. Other people would prefer that those folks not exist if it meant prettier beachfront property, or that their favorite fish was still alive.

                                                                    Anyways, that’s well off-topic though I’m happy to continue the conversation in PMs. :)

                                                                    1. 8

                                                                      Just as “software development” is a pretty broad term, “capitalism” is a pretty broad term. I wouldn’t advocate eliminating capitalism any more than I would advocate eliminating software development. The “as currently practiced” is where the interesting discussion lies.

                                                                    2. 3

                                                                      There’s an economic name for it - externality - though economics is emphatically not philosophy.

                                                                      1. 1

                                                                        Sort of like how capitalism as currently practiced fills the ocean with microplastic garbage despite nobody making a choice to do that.

                                                                        This is a classic False Cause logical fallacy.

                                                                        Capitalism is not the cause of microplastic pollution. The production of microplastics and subsequent failure to safely dispose of microplastics is the cause of microplastic pollution.

                                                                        Microplastics produced in some centrally-planned wealth-redistribution economy would be just as harmful to the environment as microplastics produced in a Capitalist economy (although the slaves in the gulags producing those microplastics would be having less of a fun time).

                                                                        Further example:

                                                                        • Chlorofluorocarbons were produced in Capitalist economies.
                                                                        • Scientists discovered that chlorofluorocarbons are poking a hole in the ozone layer and giving a bunch of Australians skin cancer.
                                                                        • People in Capitalist economies then decided that we should not allow further use of chlorofluorocarbons.
                                                                        1. 3

                                                                          Again, the key phrase here is not “capitalism”, but “as currently practiced”. Capitalism doesn’t cause microplastics, but it doesn’t stop them either. In other words microplastics are “an emergent property of capitalism as it is practiced”. You could practice it differently and not produce microplastics, but apparently the feedback mechanism between the bad result (microplastics/bloated software) and the choices (using huge amounts of disposable plastics/using huge amounts of software abstractions) is not sufficient to produce a better result. (Of course assuming one thinks the result is bad to begin with.)

                                                                          1. 0

                                                                            Of course assuming one thinks the result is bad to begin with.

                                                                            That is really the heart of the matter, as far as I see it. In contemporary discourse, capitalism as a values system (versus capitalism as a set of observations about markets) does not have a peer, does not have a countervailing force.

                                                                            I’m sure there’s a philosophical name for this

                                                                            @leeg brought this up as well, but “negative externality” is in the ballpark of what you are looking for. An externality is simply some effect on a third party whose value is not accounted for within the system. Environmental pollution is a great example of a negative externality: many current market structures do not penalize pollution at a level commensurate with the damage caused to other parties. Education is an example of a positive externality: the teachers and administrators in schools rarely achieve a monetary reward commensurate with the long-term societal and economic impact of the education they have provided.

                                                                            Societies attempt to counteract these externalities by some degree of magnitude (regulations and fines for pollution, tax exemptions for education), and much ink is spilled in policy debates as to whether or not the magnitudes are appropriate.

                                                                            Bringing back in my first statement: capitalism (née economic impact) is not only a values system, but the only one that is assumed to be shared in contemporary discourse. This results in a lot of roundabout arguments, in pursuit of other values, being made in economic terms.

                                                                            What people really wish to convey, what really motivates people, may be something else. However, they cannot rely on those values being shared, and resort to squishy, centrist, technocratic studies and statistics that hide their actual values, in hopes other people will at least share in the appeal to this-or-that economic indicator (GDP, CPI, measures of inequality, home ownership rates, savings rates, debt levels, trade imbalances, unemployment, et cetera). This technocratic discussion fails to resolve the actual difference in values, and causes conflict-averse people to tune it out entirely, thus accepting the status quo (“capitalism”). I lament this, despite being very centrist and technocratically-inclined myself.

                                                                            Rambling further would eclipse the scope of what is appropriate for a post on Lobsters, so I will chuck it your way in a DM.

                                                                  1. 11

                                                                    Huzzah, more spooky action at a distance, just what programs need. The points of contact between modules become your messages, which are essentially global in scope. And the rules may contradict each other or otherwise clash, and understanding what’s going on requires you to go through each module one by one, understand them fully, and then understand the interactions between them. This isn’t necessarily a deal breaker, but it also isn’t any simpler than any other method.

                                                                    Interesting idea, but I’m deeply unconvinced. It seems like making an actual complex system work with this style would lead to exactly the same as any other paradigm: a collection of modules communicating through well-defined interfaces. Because this is a method of building complex machines that our brains are good at understanding.

                                                                    1. 7

                                                                      IMO this comes from the fact that easily writing/extending software you’ve spent N years understanding, and reading that software later, are two entirely different activities, and they push your development style in different directions.

                                                                      The ability to write software that integrates easily pushes folks to APIs that favor extension, inversion of control, etc. This is the “industrial java complex” or something like it - and it appears in all languages I’ve ever worked on. I’ve never seen documentation overcome “spooky action at a distance”.

                                                                      The ability to read software and understand it pushes you to “if this and this, then this” programming, but can create long methods, lots of direct coupling of APIs etc. I’ve never seen folks resist the urge to clean up the “spaghetti code” that actually made for delicious reading.

                                                                      It’s my opinion that this is where we should build more abstractions and tools for human software development, similar to literate programming, layered programming, or model-oriented programming. One set of tools is for writing software quickly and correctly, and another set is for reading and understanding, e.g. macroexpand-1 or gcc -E style views of code for learning & debugging, and a very abstract, easy-to-manipulate view of code that allows minimal changes for maximal behavioral extension.

                                                                      ¿por qué no los dos?

                                                                      1. 2

                                                                        The points of contact between modules become your messages, which are essentially global in scope.

                                                                        This was exactly my thought, too. It reminds me of a trade-off in OOP where I think you had to decide whether you want to be able to either add new types (classes) easily or add new methods easily. One approach allowed the one, the other approach the other. But you could not have both at the same time. Just can’t wrap my head around what exactly was the situation… (it might have been related to the visitor pattern, not sure anymore)

                                                                        In this case, the author seems to get easy addition/deletion of functions by having a hard time changing the “communication logic” / blocking semantics (which operation blocks which other operation, defined by block and waitFor). While in the standard way the “communication logic” is easy to change, because you just have to replace && by || or whatever you need, but the addition of new functions is harder.
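A minimal sketch of that trade-off, with hypothetical shape classes of my own (none of these names come from the thread):

```python
import math

# OO axis: each variant owns its operations.  Adding a new variant
# (one new class) is easy; adding a new operation means editing
# every existing class.
class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return math.pi * self.r ** 2

class Square:
    def __init__(self, s):
        self.s = s
    def area(self):
        return self.s ** 2

# Functional axis: each operation branches on the variant.  Adding a
# new operation (one new function) is easy; adding a new variant
# means editing every existing function.
def perimeter(shape):
    if isinstance(shape, Circle):
        return 2 * math.pi * shape.r
    if isinstance(shape, Square):
        return 4 * shape.s
    raise TypeError(shape)
```

Adding a Triangle costs one new class on the OO axis but an edit to every function on the functional axis; adding a new operation is exactly the reverse.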

                                                                        1. 3

                                                                          That’s sometimes known as the “expression problem”.


                                                                      1. 7

                                                                        I use rc(1), the only shell that doesn’t confuse me endlessly with absurd quoting problems. Now I actually enjoy writing shell scripts…

                                                                        1. 2

                                                                          I wrote a dotfile manager in rc and it was such a breath of fresh air. Just reading the documentation honestly made me happy, and not much documentation does that! I don’t think I could ever use it as an interactive shell though, and I still write most scripts in portable sh, but I do wish rc were more ubiquitous.

                                                                          1. 1

                                                                            I loved using RC but eventually gave up and use zsh (home) and bash (work).

                                                                            1. 1

                                                                              I use rc as my fulltime shell as well - specifically Byron’s rc which cleans up some of the silly “if not” logical things.

                                                                            1. 18

                                                                              Torn between “this is a clickbait title, language is irrelevant the important part is using the right algorithm” and “well, having set operations in the standard library does sure help with that”.

                                                                              1. 9

                                                                                I think the biggest difference is that Python allows you to focus on the task at hand, rather than plumbing like implementing linked lists, dealing with memory, etc. This makes it much easier to use the right algorithm.

                                                                                The downside is that some “simple” operations in Python can be quite complex under the hood, and that in C you always have a good idea of what exactly is happening.
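To make the comparison concrete, here’s a sketch (mine, not from the article) of the kind of thing Python’s built-in set makes trivial: order-preserving deduplication in a single O(n) pass, where the C version would first need a hash table written or vendored in.

```python
def dedup(items):
    # Track previously seen values in a set: membership tests are
    # O(1) on average, so the whole pass is O(n) instead of the
    # O(n^2) you'd get from "if item not in list".
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

print(dedup([3, 1, 3, 2, 1]))  # → [3, 1, 2]
```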

                                                                                1. 13

                                                                                  Unless you’re in the habit of looking at the generated assembly, you really /don’t/ though. C as a shorthand assembler was really only a thing on the PDP, and we have been drifting further away from that since.

                                                                                  1. 1

                                                                                    Right, fair enough. I meant that there are (usually) no hidden complexities in C code, and that it’s usually reasonably obvious what the computer will do (although perhaps not “exactly”). In Python, it can be quite easy to create really slow code if you don’t have good insight into how Python treats your code.

                                                                                    This is mostly an issue with new(ish) programmers who have experience with only Python (or similar languages) and lack a certain “insight” into these things.

                                                                                    1. 3

                                                                                      I meant that there are (usually) no hidden complexities in C code

                                                                                      You mean like cache misses? Branch mispredictions? Compiler optimisations you’d never think would happen due to undefined behaviour? Never mind memory safety.

                                                                                      1. 1

                                                                                        You mean like cache misses? Branch mispredictions?

                                                                                        There’s no way to avoid those though, even with assembly. In that regard C’s performance profile is very similar to assembly, and I say that as someone who isn’t impressed with the machine code that modern compilers produce.

                                                                                        The undefined behaviour is a really good point though. GCC 2.95 4 lyfe. :(

                                                                                        1. 1

                                                                                          There’s no way to avoid those though, even with assembly.

                                                                                          True, but I don’t think that changes the fact that I consider “no hidden complexities in C code” to be incorrect. What you’re saying is that there’s no way to avoid it; my point is that writing C is no silver bullet (nor, IMHO, silver, nor even a good bullet).

                                                                                        2. 1

                                                                                          I totally get your point, but arp242 talked about hidden complexities, not hidden simplifications.

                                                                                          I’d probably rather have a cache miss than an assignment turned into a deep copy.
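For what it’s worth, in Python a plain assignment never copies anything, deep or shallow; a deep copy only happens when you ask for one. A quick sketch:

```python
import copy

a = [[1, 2], [3, 4]]
b = a                  # plain assignment: just a new name, no copy at all
c = copy.deepcopy(a)   # the deep copy is explicit, never implicit

a[0].append(9)
print(b[0])  # [1, 2, 9] — b aliases a
print(c[0])  # [1, 2]    — c was copied before the mutation
```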

                                                                                          1. 1

                                                                                            I don’t consider anything I wrote to be a hidden simplification.

                                                                                            Deep copies only matter if they show up in the profiler, at which point you’ll know exactly where they are.

                                                                                            1. 2

                                                                                              I’m sorry. (I considered compiler optimizations to be hidden simplifications).

                                                                                  2. 4

                                                                                    I’m just mad it was written in C in the first place when sort, uniq, and cut would’ve been just as slow and probably used more memory and taken up multiple process table slots…but would’ve been more UNIXy.

                                                                                    1. 1

                                                                                      This is the logic that gets you C++