1. 1

    Emacs has been slowly infecting my entire workflow for almost 7 years now. I started with vanilla Emacs and failed. Then I moved to Spacemacs, and it was great for a while, but then the dev branch became where all the features were and it turned into a stability mess. Now I’m on Doom Emacs and life is great. Having an LSP mode for every language I work in, being able to easily pull in a package from MELPA or wherever, and getting a consistent experience across all my languages/projects has been amazing. Also, org keeps me from losing my mind.

    1. 2

      It respects .gitignore/.dockerignore, so it was a great tool to use when diagnosing weirdly large Docker images.

      1. 2

        Did anyone else go to libera.chat and find their nick had already been registered?

        1. 9

          Many people did. Someone ran a bot to try to scoop up usernames. Contact support in #libera and they’ll sort you out.

        1. 1

          I’m rewriting a database migration (MarkLogic -> Postgres, pray for my soul) and doing some formal methods work for a new product offering.

          1. 1

            I’ve been teaching C++ to college students for a few years now, and I wish I explained this topic as well as this does.

            1. 7

              I felt this one. I haven’t been doing this that long, relatively speaking, but it feels like we are eschewing formal documentation and planning for a more fly-by-the-seat-of-our-pants, get-it-right-with-enough-iteration approach. I’ve talked to a few “technology-savvy investors” in the past few years who have told me that software architects are an antipattern.

              On the one hand, the people against a more formal architecture approach might be right. If you break your features down into chunks manageable enough that they can go from zero to hero in production in a sprint, do we need a full design process and documentation that will end up wrong or out of date before anyone has a chance to fix a bug?

              I think the problem ends up being the struggle of maintaining a cohesive architecture across all of these manageable, hopefully vertical, slices of functionality. I’ve come into a few shops that have followed a low/no-documentation route (“The user story is the documentation!” said the project manager), and indeed they ship code fast. But by the time I get there, we get tasked with trying to build a unifying style or standard to move everything to, because the original developers are gone/dead/committed to an asylum for their health, no one is willing to make changes to the legacy codebase with any confidence, and ticket times are climbing.

              So where is the middle ground? I would be happy to never have to create another 10+ page proposal, print it out, and defend it in front of people who are smarter than me and know it. But there is something missing in modern software development, and I think it’s some sort of process for getting our bad ideas out in the planning phase, before writing code. Formal methods certainly fill this role, but no one has the time outside of the “if my code fails, someone dies” crowd.

              I guess this is a long-winded way of saying “I use formal planning to get my bad ideas out of my system before I create software that works but breaks in subtle ways, or is just flat-out wrong.”

              1. 8

                I think it’s a market thing. Subtly broken software that’s out now is better than well-designed software a year from now, from a customer-adoption perspective… sadly.

                1. 2

                  I agree. Usually (hopefully) the lag isn’t that pronounced, but it’s all about how fast we can get a feature in front of users to drive adoption or validate our “product market fit” to investors. I’m glad that where I’m at we have a formal architecture review, albeit abbreviated, for new features, so we are trying to straddle that fine line between velocity and, hopefully, well-designed code. Sadly this just means the architect (me) gets a lot of extra work.

                2. 5

                  Have you heard the good word of lightweight formal methods?

                  (Less facetiously: the key thing that makes the middle ground possible is finding notations that are simple enough to be palatable while also being machine-checkable. I volunteer a lot with the Alloy team because I think it has the potential to mainstream formal methods, even more so than TLA+.)

                  1. 2

                    A friend has gotten me on an mCRL2 kick for modeling one of our new systems. I’ll check out Alloy as well. I love playing with PlusCal/TLA+ when I have the time for it.

                    1. 4

                      The thing I really like about Alloy is that the whole BNF is like a page. So it’s not as big a time investment to learn it as something like mCRL2 or TLA+.

                      Also, I think you’re the second person to mention mCRL2 in like a week, which is blowing my mind. Before this I’d only ever heard about it from @pmonson711.

                      1. 3

                        I think I’ve finally warmed up to the idea of writing more about how and why I lean on mCRL2 as my main modelling tool.

                        Regarding Alloy and the time investment, I agree the syntax is much simpler. It did take me a huge mental investment to get comfortable with modelling anything that changes over time, though. It’s a great tool for data structure/relations modelling, but if you need to understand how things change with time, then TLA+’s UNCHANGED handling feels like a necessity.

                        1. 2

                          I think that’s the most valuable part of the whole exercise for me. I usually model everything I can as some sort of state machine with well-defined entrance and exit behaviors, so seeing the effect of changes to that system over time is really useful to me.

                          1. 2

                            I’m in the process of working out how to change Alloy’s pitch away from software specifications and more towards domain modeling, precisely because “how things change” is less omnipresent in requirements modeling than in software specs.

                            1. 1

                              I think I gave a less positive view of Alloy than I wanted in this comment. Alloy can certainly model time, but I always seem to end up ad-hoc defining the UNCHANGED, much like in the talk Alloy For TLA+ Users around the 30-minute mark.

                            2. 2

                              Haha! Oddly enough, he is the friend who put me on that path.

                              1. 2

                                Thanks, you two, for bringing up mCRL2! I wouldn’t have discovered it if y’all hadn’t talked about it.

                                1. 1

                                  The thing I really like about Alloy is that the whole BNF is like a page. So it’s not as big a time investment to learn it as something like mCRL2 or TLA+.

                                  I think that is the main selling point for me especially. I have a lot of interest in formal methods and modeling because I can see how damn useful it can/will be; just carving out the time and the mental bandwidth to do it is a struggle. I am by no means good at any of the tools, but I am putting a lot of effort into making the time investment worth it, not just to me but to the product as a whole. Currently I am trying to unravel my thoughts on how we apply/enforce a fairly granular security model in our system, and it’s been a great thought exercise. I’m fairly confident it will prove useful, or drive me insane.

                          1. 1

                            Event sourcing is such an intriguing pattern with deceptively sharp edges. I’ve heard “you get an audit log for free” a few times, but the event store ends up being a very poor audit log, and querying the raw events is usually hard/cumbersome or outright impossible depending on how you store them. My audit logs always end up being yet another projection.

                            I’ve found that event sourcing only gets nasty when you inevitably have a breaking change to an event contract and have to upgrade/downgrade events for different projections/handlers.

                            I feel like it’s a powerful pattern even on the small scale, but you need a disciplined set of developers and a strong ops strategy to make it work.
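
                            To make those two points concrete, here’s a toy Python sketch (every name is made up, and the “store” is just a list in memory): upcasters absorb breaking contract changes before any projection sees an event, and the audit log is one more read model fed from the same stream rather than queries against the raw store.

                            ```python
                            from dataclasses import dataclass
                            from typing import Any, Callable

                            @dataclass
                            class Event:
                                stream_id: str
                                version: int      # schema version of the event contract
                                type: str
                                data: dict[str, Any]

                            def upcast(event: Event) -> Event:
                                # One place to absorb breaking contract changes: translate
                                # old versions into the current shape before any projection
                                # or handler sees them.
                                if event.type == "GameScored" and event.version == 1:
                                    # Invented example: v2 added a "player" field v1 lacked.
                                    return Event(event.stream_id, 2, event.type,
                                                 {**event.data, "player": "unknown"})
                                return event

                            audit_log: list[dict[str, Any]] = []

                            def project_audit(event: Event) -> None:
                                # The audit log as its own projection: a read model shaped
                                # for auditors, instead of queries against raw events.
                                audit_log.append({"stream": event.stream_id,
                                                  "action": event.type,
                                                  "details": event.data})

                            def replay(events: list[Event],
                                       projections: list[Callable[[Event], None]]) -> None:
                                for raw in events:
                                    event = upcast(raw)
                                    for project in projections:
                                        project(event)
                            ```

                            Replaying the whole log through something like `replay(events, [project_audit])` then gives you the audit view without ever querying raw events.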

                            1. 4

                              event sourcing only gets nasty when you inevitably have a breaking change to an event contract and have to upgrade/downgrade events for different projections/handlers

                              Our solution to this was a robust capability model. Capabilities limit access to resources, but a change in a capability is itself an event. So at the point when a contract changes, that change is itself modeled in the event log, and hence only affects events that occur after it.
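
                              A toy sketch of the shape in Python (names simplified and details invented for illustration; the real model has far more going on): grants and revocations sit in the same ordered log as everything else, so replay applies each capability change only to the events that come after it.

                              ```python
                              from dataclasses import dataclass
                              from typing import Any

                              @dataclass
                              class Event:
                                  seq: int                 # position in the event log
                                  type: str
                                  data: dict[str, Any]

                              def replay(events: list[Event]) -> list[Event]:
                                  # Capability changes are ordinary events, so replaying in
                                  # log order naturally scopes each change to everything
                                  # that occurs after it.
                                  capabilities: set[str] = {"read"}   # assumed initial grant
                                  honored: list[Event] = []
                                  for event in sorted(events, key=lambda e: e.seq):
                                      if event.type == "CapabilityGranted":
                                          capabilities.add(event.data["capability"])
                                      elif event.type == "CapabilityRevoked":
                                          capabilities.discard(event.data["capability"])
                                      elif event.data.get("requires", "read") in capabilities:
                                          honored.append(event)
                                  return honored
                              ```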

                              1. 2

                                Our solution to this was a robust capability model.

                                That sounds really interesting! Is there something I can read, or can you say a bit more about that?

                              2. 1

                                The article mentions being able to completely rebuild the application state. It makes sense in theory, but how does it work out in practice?

                                I imagine that you might have to do event compaction over time or else the event storage would be massive. Seems like an area where those sharp edges might come out.

                                1. 3

                                  A lot of folks end up using snapshots every n events or something, discarding the previous events. It’s an optimization that becomes essential fairly quickly.
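
                                  Something like this Python sketch (names and cadence invented; the reducer is domain-specific): rebuilds start from the newest snapshot instead of replaying from the beginning, and whether the older events get archived or truly discarded is a separate retention decision.

                                  ```python
                                  from dataclasses import dataclass
                                  from typing import Any

                                  SNAPSHOT_EVERY = 100  # invented cadence; tune per workload

                                  @dataclass
                                  class Snapshot:
                                      last_seq: int          # highest event sequence covered
                                      state: dict[str, Any]  # materialized aggregate state

                                  def apply(state: dict[str, Any],
                                            event: dict[str, Any]) -> dict[str, Any]:
                                      # Placeholder fold; the real reducer is domain-specific.
                                      return {**state, **event}

                                  def rebuild(events: list[tuple[int, dict[str, Any]]],
                                              snapshot: Snapshot | None = None) -> Snapshot:
                                      # Start from the latest snapshot rather than from seq 0.
                                      state = dict(snapshot.state) if snapshot else {}
                                      last = snapshot.last_seq if snapshot else 0
                                      for seq, event in events:
                                          if seq > last:
                                              state = apply(state, event)
                                              last = seq
                                      return Snapshot(last, state)
                                  ```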

                                  1. 2

                                    I run a darts scorer in my spare time. The state of every single game is stored only as an event stream. Account and profile data uses a conventional relational model.

                                    I never rebuild the entire application state, only the game I’m interested in. Should I introduce new overall statistics, I would just rebuild game after game to backfill the statistics data.

                                    Storage and volume haven’t been a problem on my tiny VPS. PostgreSQL sits at 300,000 games, with each one having about 300 events. If you want, I can look up the exact numbers in the evening.