1. 1

    I use checklists. Right now I’m using Notion for these. Notion may have templates for what you’re looking for. You can also make custom templates easily… Another option that may be interesting is using meditations for daily tracking

    1. 1

      Some composition of updating my resume, attempting to choke people, inefficiently watering my small makeshift veggie garden, and setting up a worm bin

      1. 2

        “It’s hard for juniors” is a strange angle in my opinion. There are a lot of things that are difficult at that level. That doesn’t mean we should throw them out.

        Some projects reap the benefits of a microservice architecture. Some don’t. We should all be able to agree that (hopefully) no one is forcing us to use microservices. Use what makes sense. Meeting the “let’s all use microservices” sensationalism with more sensationalism gets us nowhere.

        1. 3

          listening: The Swerve: How the World Became Modern

          personal reading: starting The Master and Margarita

          career reading: probably going to pivot from SICP to Clean Architecture

          1. 5

            I partly grew up in the UK, where the culture is that one must be quite sophisticated to read Russian literature.

            When I started dating my current girlfriend — who is Russian — I boasted that I had just finished reading The Master and Margarita.

            She gleefully responded with “Oh! I really enjoyed that book when I read it as a child!”

            1. 2

              The Russian school curriculum always struck me as ridiculous. I seriously doubt that children can truly comprehend Crime and Punishment, War and Peace, Fathers and Sons, or Master and Margarita (I certainly didn’t, although I loved Master and Margarita). Whole layers of meaning would simply go unrecognised and unexamined.

            2. 3

              The Master and Margarita is one of my favourite books, hope you enjoy it!

            1. 2

              Listening to The Path to Power (The Years of Lyndon Johnson #1) in the car. What an in-depth book. I know more about early-to-mid-20th-century Texas politics than I do about the whole span of my state’s politics.

              Glad I’m almost through it (around 38h deep right now).

              Personal reading: The Obstacle Is the Way

              Tech: Working (slowly) through SICP, about to pick up where I left off in Introduction to Graph Theory

              1. 2

                woo!

                • To Save Everything, Click Here: The Folly of Technological Solutionism by Evgeny Morozov. I wish I could force everyone working in tech to read this book. It was published in 2014 but could have come out yesterday. Largely, it’s about the moral consequences of the technical decisions large companies are making.
                • The Foundation series by Isaac Asimov. I somehow had the idea that Asimov’s corpus was full of stodgy, hard-to-read sci-fi, but I was dead wrong. The world he builds is amazing and the series reads well.
                • The Earthsea series by Ursula K. Le Guin. If you haven’t read this, do it now. Don’t wait.
                • The Takeshi Kovacs series by Richard Morgan. I just so happened to read this at the beginning of the year, before the Netflix series came out. If you liked that, the books are even better. If you’re looking for cyberpunk fiction, this will scratch that itch.
                • 1Q84 by Haruki Murakami. I don’t even really know how to describe this book. Expect fantasy, sci-fi, time travel, love, critiques of religion; there’s so much going on here. Highly recommend.
                1. 2

                  I have to chime in to also suggest trying something by Murakami

                1. 1

                  Recently I stumbled across https://martinfowler.com/articles/feature-toggles.html; it might be worth checking out if you want a more in-depth look at feature flag implementations.
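
                  For a concrete picture before diving into the article: at its simplest, a feature flag is just a named boolean consulted at a decision point. A minimal sketch in TypeScript (the flag name, config source, and code paths are made up for illustration, not taken from the article):

                  ```ts
                  type Flags = Record<string, boolean>;

                  // Flag values would normally come from config or a flag service;
                  // "newCheckoutFlow" is a hypothetical flag.
                  const flags: Flags = { newCheckoutFlow: false };

                  function isEnabled(name: string): boolean {
                    return flags[name] === true;
                  }

                  function checkout(): string {
                    return isEnabled("newCheckoutFlow")
                      ? "new checkout flow" // new path, dark-launched behind the toggle
                      : "legacy checkout";  // existing behaviour
                  }

                  console.log(checkout()); // "legacy checkout" until the flag is flipped
                  ```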

                  1. 1

                    $work:

                    • finish up some small but important tasks for some worker services
                    • return to a stale feature branch from a while back on our main product service
                    • evaluate the current effectiveness of our logs on Elasticsearch
                    • set something up to delete old Elastic indices (rough sketch below)

                    !$work:

                    • finish up migrating side project to postgres to play with PostGIS
                    • wire side project to normalize incomplete addresses w/ the Open Street Maps public API
                    • hopefully make it through a few pages of SICP, progress through the book has stalled for a while
                    • figure out some rabbit holes to go down to continue to hone my chops (feel like I need to push through a learning wall)
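
                    For the index-cleanup item above, a rough sketch of the kind of script that could do it, assuming time-based index names like logs-2018.05.01, Elasticsearch’s standard _cat/indices and DELETE /<index> endpoints, and a fetch-capable runtime; the host, index prefix, and retention period are placeholders:

                    ```ts
                    // Rough sketch only: deletes indices whose date-stamped name is older
                    // than the retention window. Assumes names like "logs-2018.05.01".
                    const ES_HOST = "http://localhost:9200";
                    const RETENTION_DAYS = 30;

                    async function deleteOldIndices(): Promise<void> {
                      // _cat/indices?format=json returns one object per index, including its name.
                      const res = await fetch(`${ES_HOST}/_cat/indices?format=json`);
                      const indices: { index: string }[] = await res.json();
                      const cutoff = Date.now() - RETENTION_DAYS * 24 * 60 * 60 * 1000;

                      for (const { index } of indices) {
                        const m = index.match(/^logs-(\d{4})\.(\d{2})\.(\d{2})$/);
                        if (!m) continue; // skip indices that don't follow the naming scheme
                        if (Date.parse(`${m[1]}-${m[2]}-${m[3]}`) < cutoff) {
                          await fetch(`${ES_HOST}/${index}`, { method: "DELETE" });
                        }
                      }
                    }

                    deleteOldIndices();
                    ```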
                    1. 20

                      My sense now is that Alan Kay’s insight, that we can use the lessons of biology (objects are like cells that pass messages to each other), was on target but it was just applied incompletely. To fully embrace his model you need asynchrony. You can get that by seeing the processes of Erlang as objects or by doing what we now call micro-services. That’s where OO ideas best apply. The insides can be functional.
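
                      As a rough sketch of that model (not Kay’s or Erlang’s actual machinery, just an illustration in TypeScript with made-up names): an “object” owns a mailbox, callers only send messages, the mailbox is drained asynchronously, and the inside is a pure function from state and message to new state.

                      ```ts
                      // Sketch of an asynchronous, Erlang-process-like "object": callers fire
                      // messages and never touch state; the mailbox is drained asynchronously;
                      // the inside is functional.
                      function spawn<S, M>(initial: S, step: (state: S, msg: M) => S) {
                        let state = initial;
                        const mailbox: M[] = [];
                        let draining = false;

                        async function drain(): Promise<void> {
                          if (draining) return;
                          draining = true;
                          while (mailbox.length > 0) {
                            state = step(state, mailbox.shift()!); // pure "inside"
                            await Promise.resolve();               // yield: processing is decoupled from sending
                          }
                          draining = false;
                        }

                        return {
                          send(msg: M): void { // fire-and-forget; no caller reads or writes `state` directly
                            mailbox.push(msg);
                            void drain();
                          },
                        };
                      }

                      type CounterMsg = { kind: "add"; n: number };
                      const counter = spawn(0, (total, msg: CounterMsg) => total + msg.n);
                      counter.send({ kind: "add", n: 2 });
                      ```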

                      1. 17

                        “If you want to deeply understand OOP, you need to spend significant time with Smalltalk” is something I’ve heard over and over throughout my career.

                        1. 5

                          It’s also a relatively simple language with educational variants like Squeak to help learners.

                          1. 7

                            I have literally taken to carrying around a Squeak environment on a USB stick to show to people. Given a quick spiel about message passing, even experienced engineers tend to get lost in it for a few hours and come out the other side looking at software in a different way.

                          2. 4

                            If you don’t have any Smalltalk handy, Erlang will do in a pinch.

                            1. 2

                              And if you don’t have Erlang handy, you can try Amber in your browser!

                            2. 1

                              I went through the Amber intro that /u/apg shared. I’d love to dive deeper. If anyone has any resources for exploring Smalltalk/Squeak/etc. further, I’d love to see them. Especially resources that explore what sets the OO system apart.

                              1. 2

                                I’m told that this is “required” reading. It’s pretty short, and good.

                            3. 16

                              I even wrote a book on that statement. My impression is that “the insides can be functional” could even be “the insides should be functional”; many objects should end up converting incoming messages into outgoing messages. Very few objects need to be edge nodes that turn incoming messages into storage.

                              But most OOP code that I’ve seen has been designed as procedural code where the modules are called “class”. Storage and behaviour are intertwingled, complexity is not reduced, and people say “don’t do OOP because it intertwingles behaviour and storage”. It doesn’t.

                              1. 2

                                This.

                                Whether the implementation is “functional” or not, the internals of any opaque object boundary should at least be modellable as a collection of [newState, worldActions] = f(oldState, message) behaviours.

                                We also need a unified and clearer method for namespacing and module separation, so that people aren’t forced to make classes (or closures-via-invocation) simply to split the universe into public and private realms.

                                To say that the concept of objects should be abandoned simply because existing successful languages have forced users to mis-apply classes for namespacing is as silly as the idea that we should throw out lexical closures because people have been misusing them to implement objects (I’m looking at you, React team).
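
                                A minimal TypeScript sketch of that [newState, worldActions] = f(oldState, message) shape, with illustrative names; the inside stays pure and the caller performs the returned world actions:

                                ```ts
                                // The boundary: a pure step function returns the next state plus the
                                // effects it wants performed, instead of doing I/O itself.
                                type State = { count: number };
                                type Msg = { kind: "increment" } | { kind: "reset" };
                                type WorldAction = { kind: "log"; text: string };

                                function update(state: State, msg: Msg): [State, WorldAction[]] {
                                  switch (msg.kind) {
                                    case "increment":
                                      return [{ count: state.count + 1 }, []];
                                    case "reset":
                                      return [{ count: 0 }, [{ kind: "log", text: "counter was reset" }]];
                                  }
                                }

                                // The impure shell: holds the state, applies update, performs the actions.
                                let state: State = { count: 0 };
                                function dispatch(msg: Msg): void {
                                  const [next, actions] = update(state, msg);
                                  state = next;
                                  for (const a of actions) console.log(a.text);
                                }

                                dispatch({ kind: "increment" });
                                dispatch({ kind: "reset" });
                                ```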

                              2. 5

                                If there’s one lesson I’ve learned from software verification, it’s that concurrency is bad and we should avoid it as much as possible.

                                1. 8

                                  I’m not entirely sure this is correct. I’ve been using Haskell/Idris/Rust/TLA+ for a while now, and I’m of the opinion that concurrency is just being tackled at the wrong conceptual level, in that most OOP/imperative strategies mix state+action when they shouldn’t.

                                  Also can you qualify what you mean by concurrency? I’m not sure if you’re conflating concurrency with parallelism here.

                                  I’m using the definitions offered by Simon Marlow of Haskell fame, from Parallel and Concurrent Programming in Haskell:

                                  In many fields, the words parallel and concurrent are synonyms; not so in programming, where they are used to describe fundamentally different concepts.

                                  A parallel program is one that uses a multiplicity of computational hardware (e.g., several processor cores) to perform a computation more quickly. The aim is to arrive at the answer earlier, by delegating different parts of the computation to different processors that execute at the same time.

                                  By contrast, concurrency is a program-structuring technique in which there are multiple threads of control. Conceptually, the threads of control execute “at the same time”; that is, the user sees their effects interleaved. Whether they actually execute at the same time or not is an implementation detail; a concurrent program can execute on a single processor through interleaved execution or on multiple physical processors.

                                  While parallel programming is concerned only with efficiency, concurrent programming is concerned with structuring a program that needs to interact with multiple independent external agents (for example, the user, a database server, and some external clients). Concurrency allows such programs to be modular; the thread that interacts with the user is distinct from the thread that talks to the database. In the absence of concurrency, such programs have to be written with event loops and callbacks, which are typically more cumbersome and lack the modularity that threads offer.

                                  1. 5

                                    Also can you qualify what you mean by concurrency?

                                    Concurrency is the property that your system cannot be described by a single global clock: there exist multiple independent agents, and the behavior of the system depends on their order of execution. Concurrency is bad because it means you have multiple possible behaviors for any starting state, which complicates analysis.

                                    Using Haskell/Rust/Eiffel here helps but doesn’t eliminate the core problem, as your system may be larger than an individual program.
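
                                    A tiny illustration of “multiple possible behaviors for the same starting state”, sketched in TypeScript with made-up numbers; note there are no data races here at all, only ordering:

                                    ```ts
                                    // Two tasks read-modify-write shared state across an await boundary.
                                    // The final value depends on which task's write lands last.
                                    let balance = 100;

                                    async function spend(amount: number): Promise<void> {
                                      const seen = balance;                                        // read
                                      await new Promise((r) => setTimeout(r, Math.random() * 10)); // some async work
                                      balance = seen - amount;                                     // write from a possibly stale read
                                    }

                                    async function main(): Promise<void> {
                                      await Promise.all([spend(30), spend(50)]);
                                      // Prints 70 or 50 depending on the interleaving, never the intended 20:
                                      // one start state, several behaviors, and that is what complicates analysis.
                                      console.log(balance);
                                    }

                                    main();
                                    ```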

                                    1. 10

                                      All programs run in systems bigger than the program

                                      1. 1

                                        But that’s not an issue if the interaction between the program and the system is effectively sequential (not concurrent), which I think is the point being made. A multi-threaded program, even if you can guarantee it is free of data races and the like, may still have multiple possible behaviors, with no guarantee that all of them are correct within the context of the system in which it operates. Analysis is more complex because of the concurrency. A non-internally-concurrent program, on the other hand, can be tested against a given input sequence and produce a deterministic output, so we can know it is always correct for that input sequence. Reducing the overall level of concurrency in the system eases analysis.

                                        1. 2

                                          You can, and probably should, think of OS scheduling decisions as a form of input. I agree that concurrency can make the state space larger, but I don’t believe it is correct to treat concurrency/parallelism as mysterious or qualitatively different.

                                      2. 3

                                        Using Haskell/Rust/Eiffel here helps but doesn’t eliminate the core problem, as your system may be larger than an individual program.

                                        They help by reducing the scope to the I/O layers interacting with each other. I think an example would be helpful here, as so far there isn’t anything concrete to support your stated position.

                                        But let’s ignore language for the moment; here’s an example from my work. We have a network filesystem that has to behave generally like a POSIX filesystem across systems. This is all C and in-kernel, so mutexes and semaphores are the overall abstractions in use, for good or ill.

                                        I’ve been using TLA+ both as a learning aid for validating my understanding of the existing code, and to try to find logic bugs in general, for things like flock() needing to behave across systems.

                                        Generally what I find is that these primitives are insufficient for handling the interactions of I/O across system boundaries. Take a call to flock() or even fsync(): you need to ensure all client systems behave in a certain way when one (or more) of them makes the call. What I find is that the behavior as programmed works in the general cases, but when you set up TLA+ to mimic the mutexes/semaphores in use and their calling behavior, they are riddled with logic holes.

                                        This is where I’m trying to argue that the abstraction layers in use are insufficient. If we presume we used Rust in this case, primarily as it’s about the only language that could fit a kernel-module use case, there are a number of in-node concurrent races across kernel worker threads that could just “go away”, freeing us to validate our internode concurrent behavior logic via TLA+ and then ensure our written code conforms to that specification.

                                        As such, I do not agree that concurrent programming should be avoided whenever possible. I only argue that OOP by default encourages practices that are a bad fit for programming in a concurrent style (mixing state+code in an abstraction that is ill suited for it). That doesn’t mean OOP is inherently bad, just a poor fit for the domain.

                                        1. 1

                                          I feel that each public/private boundary should have its own singular clock and use it to sequence interactions among its encapsulated parts. But there can never really be a single global clock for a useful system, and most of our problems come from taking the illusion of such a clock further than we should have.

                                      3. 4

                                        I would go in exactly the tangential direction and say that the best software treats concurrency as the basis of all computation; in particular, agnostic concurrency. If objects are modeled with the right scope of visibility and influence, they should be able to handle messages in a perfectly concurrent and idempotent manner, regardless of cardinality.

                                        1. 2

                                          Take Clojure, for example: there, concurrency is not that bad, and there is no reason to avoid it. Mutability and the intertwining of abstractions are what lead to problematic situations. Functional programming solves that by its nature.

                                          1. 4

                                            Even if the program is immutable, you ultimately want it to have some effect on the outside world, and functional programming doesn’t magically fix the race conditions there. Consider a bunch of immutable, unentwined workers all making HTTP requests to the same server. Even if there are no data races, you can still exceed the server’s rate limit due to concurrency.
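
                                            A sketch of that situation in TypeScript (the URL and the 5 req/s limit are made up; assumes a fetch-capable runtime): every worker is independent and shares nothing in memory, yet the burst still violates the server’s limit, so some coordination has to be reintroduced outside the pure code.

                                            ```ts
                                            // Fifty independent "workers", no shared mutable state, no data races.
                                            // They still hit the server with ~50 requests at once, blowing through a
                                            // hypothetical 5 req/s limit; the contended resource lives outside the program.
                                            const ENDPOINT = "https://api.example.com/resource"; // placeholder URL

                                            async function worker(id: number): Promise<Response> {
                                              return fetch(`${ENDPOINT}?worker=${id}`);
                                            }

                                            async function main(): Promise<void> {
                                              await Promise.all(Array.from({ length: 50 }, (_, i) => worker(i)));
                                              // Avoiding this means adding a shared limiter or queue, i.e. coordination,
                                              // which immutability alone doesn't provide.
                                            }

                                            main();
                                            ```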

                                      1. 2

                                        I’d love to see this pattern used more where it makes sense. My biggest concern is making sure documentation exists for the return values, ideally editor-supported.

                                        1. 1

                                          Totally agree. That’s a space where JS (and to be fair most of the dynamic languages) can still improve a lot.
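
                                          One concrete form of editor-supported documentation for return values, sketched in TypeScript with illustrative names (JSDoc on plain JS functions gets a similar effect):

                                          ```ts
                                          // Typing the return value makes its shape and meaning show up on hover,
                                          // instead of living only in prose docs.
                                          type WorldAction = { kind: "notify"; message: string };

                                          interface StepResult<S> {
                                            /** State to carry into the next step. */
                                            newState: S;
                                            /** Effects for the caller to perform; the function itself does no I/O. */
                                            actions: WorldAction[];
                                          }

                                          function step(count: number): StepResult<number> {
                                            const next = count + 1;
                                            return { newState: next, actions: [{ kind: "notify", message: `count is now ${next}` }] };
                                          }
                                          ```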

                                        1. 3

                                          I’ve had the Little Schemer sitting on my shelf for far too long. I really need to go through it. I wonder if I can go through the book using CHICKEN.

                                          1. 4

                                            CHICKEN happens to provide everything you need by default, but any modern implementation will do. You may need to define a few small procedures as you go if your platform doesn’t already have them – atom?, add1 and sub1 from memory, and possibly others – but recent editions include definitions for these and actually executing the programs is almost beside the point anyway, for the first half of the book at least.

                                            1. 2

                                              Probably, although in my experience every exercise is perfectly doable on paper and gives a welcome break from the screen.

                                              1. 4

                                                I also prefer going through those exercises on paper. But like @evhan said, you can use CHICKEN, or just about any other Scheme (with a handful of small extra definitions that are nonstandard; IIRC those are mentioned in the preface).

                                                1. 1

                                                  Personally I’d love to see a breakdown of what it takes to go through The * Schemer and SICP outside of Racket land. I’ll have to give CHICKEN a try!

                                                  1. 2

                                                    I’m not sure what would be needed for SICP or the other Schemers, but I’ll keep that in mind.

                                                    As for The Little Schemer, the book itself provides the definition/implementation of every function as you go, with a few exceptions, such as quoting and define, but it does tell you how those are invoked in both Scheme and Common Lisp.