Threads for kstatz12

  1. 1

    I’m so glad they nixed adding extra syntax for null checking

    1. 3

      Spacemacs is a popular alternative Emacs framework.

      A week ago, I tried to research the difference between Doom and Spacemacs, but I didn’t learn of any fundamental differences. Both frameworks support either Emacs or Vim keybindings. Both define systems of keybindings starting with the Space key, though I imagine some specific key bindings might be different. And both of them come with optional modules/layers of configuration that you can supplement with your own Elisp.

      After my exploration a week ago, I ended up sticking with my existing Spacemacs config because I couldn’t even install Doom on my work computer – the install command dumped a five-page Elisp stack trace about a nil somewhere. However, that error may have only happened because I was using a corporate proxy with network issues.

      1. 3

        Spacemacs, in my experience, was a little too abstracted from the Emacs base, and you had to point to the dev branch, which was wildly unstable. I hope it has gotten better

        1. 3

          Can confirm this; we get a lot of questions in the #emacs channel on libera from people who are confused about the basics and no one can answer them because they’ve seemingly-arbitrarily gone and changed a bunch of stuff from the normal Emacs way without really explaining how.

          1. 2

            I’m not sure what you mean by confused users having “seemingly-arbitrarily gone and changed a bunch of stuff”. Do you mean that they just installed Spacemacs, which itself makes changes that seem arbitrary to vanilla Emacs users, or are you saying that Spacemacs users are more likely to make seemingly-arbitrary changes on top of Spacemacs (which they would probably do by editing their .spacemacs file)? Is your comment framework-agnostic – do you also hear from confused Doom users in that channel?

            1. 3

              I mean the first one.

              We get confused Doom users but not as many; I think Doom learned from the mistakes of Spacemacs and either didn’t diverge in config as drastically, or documented things better.

              1. 2

                Anecdata: as a vim user, I tried Spacemacs twice (months or years apart) and was kinda completely lost both times; the second time nothing worked out of the box, not even the install instructions.

                That day I installed Doom and it was kinda nice. I didn’t stick with it, but the experience was so much better.

          2. 1

            I can’t comment on Spacemacs’s level of abstraction relative to Doom, but it is still true that with Spacemacs you have to point to the unstable develop branch to use recent features. For comparison, the develop branch has had 1,488 commits since February 2020, while the master branch has had only one commit (and it’s tiny).

            That’s not as unstable as it sounds, though: Spacemacs won’t update itself automatically on the develop branch even if you set dotspacemacs-check-for-update to t. Your configuration will only update when you choose to run git pull in your ~/.emacs.d. (Spacemacs might update your packages – I forget if it does that automatically – but it lets you roll the package updates back if you think they broke anything.)

          3. 2

            The biggest difference for me was kinda silly, but with Doom I was able to figure out how to set my preferred theme in ~5min, whereas after almost an hour of reading Spacemacs’ documentation, I couldn’t figure out where I could set my preferred theme so that it would load on startup. Small and silly, but it just kept going from there: I was able to grok Doom and how to configure it much more quickly than Spacemacs.

          1. 1
            • putting the finishing touches on my weird fever dream of a weekend project https://github.com/kstatz12/Perfy
            • adding semantic tokens to the csharp language server
            • documenting parts of our codebase that make me sad
            1. 3

              Trying my hand at contributing to an open-source repo that is missing a feature I use heavily (code formatting from an LSP) in F#.

              1. 2

                Doing some dangerous refactors while the codebase is quiet

                1. 6

                  This comes right on time, I’m attempting to introduce formal models to my team at $WORK, and have been going through so many materials (lots of them by you) these past few days. Will check this out today! Thank you!

                  1. 6

                    PLEASE let us know how it goes. I’ve got a couple of failures under my belt when it comes to such efforts. Would love to compare notes and learn.

                    1. 3

                      I have yet to have it catch on at work at all. Even lighter-weight things like property-based testing are outside the comfort zone of a lot of people. I will say, what has piqued the most curiosity is when you show someone something relevant to what they’re working on, instead of speaking philosophically / abstractly.

                      For example, today I was able to draw up a quick proof in Isabelle to show that introducing a feature flag to a code path wouldn’t break the existing behavior when the flag was disabled. Simple example, but it was received well.

                      1. 3

                        Now that I think about it, property-based testing sounds like a much easier sell than “formal methods” proper. It’s really just a variation on unit testing, which is official industry “best practice” now (for good or ill). For programmers who already write lots of asserts, the notion of invariants should already be familiar. And once you have your team sold on property-based testing, formalizing invariants in a spec should be a little easier (I hope).
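
                        To make the invariant idea concrete, here’s a minimal hand-rolled sketch in Python (no Hypothesis or other library, and all the function names here are invented for illustration): generate lots of random inputs and check that a stated property is never observed to be false.

```python
import random
from collections import Counter

def check_property(prop, gen, runs=200):
    """Evaluate prop on many generated inputs; return a counterexample or None."""
    for _ in range(runs):
        data = gen()
        if not prop(data):
            return data  # found a counterexample
    return None  # never observed to be false

# One generator, written once and reused across several properties.
def random_int_list():
    return [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]

# Invariants of sorted(): the output is ordered and is a permutation of the input.
def output_is_ordered(xs):
    ys = sorted(xs)
    return all(a <= b for a, b in zip(ys, ys[1:]))

def output_is_permutation(xs):
    return Counter(sorted(xs)) == Counter(xs)

assert check_property(output_is_ordered, random_int_list) is None
assert check_property(output_is_permutation, random_int_list) is None
```

                        A real property-based testing library adds counterexample shrinking and smarter input generation on top of this, but the basic shape is the same.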

                        1. 2

                          Absolutely – properties and theorems are expressed the same way; theorems just must also be proven. With property-based testing, you’re just evaluating the property with tons of input data and saying “even though we haven’t proven this, we’ve never observed it to be false.” That’s empiricism at its finest, and it’s something people are more open to than math proofs.

                          This also offers a way to start with property-based testing and “upgrade” to proof if you choose to. The Cogent team has written about this; they’re using exactly that workflow for formally verified file system implementations. No use trying to prove something that doesn’t even pass a property-based test.

                          The way I sell property-based testing is that it’s also generally way less actual work to write the test. If you have to write test data generators, they’re written one time and reused between different properties. Test setup can be soul crushing, so it’s a way to automate the boring stuff away. That’s in line with the “practical programmer” ethos.

                      2. 1

                        I keep trying to introduce this on a team-wide scale at the $WORK, but so far it’s really just crazy ol’ kstatz12 tinkering in his imagination closet

                        1. 1

                          I can’t even get my colleagues to read my informal correctness proofs, so not sure how I’d sell them on formal methods…

                          1. 1

                            tell them that folks practicing formal methods at a $dayjob today will be the ‘data scientists with exorbitant pay options’ of tomorrow.

                            My thinking is that the ‘break through’ will be called ‘compile-time testing and verification’, and it will happen when major languages and their toolchains (including compile time debugging) will integrate formal methods into them.

                            There will be ‘learn formal methods in 21 days’ books and boot camps, a million YouTube videos about how to ‘Ace an interview for compile-time testing and verification jobs’, and recently graduated students coming to your place of work and asking when you are going to get off the dinosaur way of doing things.

                            There will be companies going public that will list ‘formally verified’ software as their ‘competitive advantage’. Manufacturing and robotics will be touting that their formally verified software powers the world… And so on.

                            … And, perhaps, data science will be remembered as a historically relevant attempt to derive business value from log files :-)

                      1. 1

                        Emacs has been slowly infecting my entire workflow for almost 7 years now. I started with vanilla Emacs and failed. Then I moved to Spacemacs, and it was great for a while, but then the dev branch became where all the features were, and it turned into a stability mess. Now I’m on Doom Emacs and life is great. Having an LSP mode for every language I work in, being able to easily integrate a package from MELPA or wherever, and a consistent experience across all my languages/projects has been amazing. Also, org keeps me from losing my mind

                        1. 2

                          It respects .gitignore/.dockerignore, so it was a great tool to use when diagnosing weird large Docker images

                          1. 2

                            Did anyone else go to libera.chat and find their nick had already been registered?

                            1. 9

                              Many people did. Someone ran a bot to try to scoop up usernames. Contact support in #libera and they’ll sort you out.

                            1. 1

                              I’m rewriting a database migration (MarkLogic -> Postgres, pray for my soul) and working on doing some formal methods work for a new product offering

                              1. 1

                                I’ve been teaching C++ to college students for a few years now, and I wish I explained this topic as well as this

                                1. 7

                                  I felt this one. I haven’t been doing this that long, relatively speaking, but it feels like we are eschewing formal documentation and planning for a more fly-by-the-seat-of-our-pants, get-it-right-with-enough-iteration approach. I’ve talked to a few “technology-savvy investors” in the past few years who have told me that software architects are an antipattern.

                                  On the one hand, the people against a more formal architecture approach might be right. If you break your features down into such manageable chunks that they can go from zero to hero in production in a sprint, do we need a full design process and documentation that will end up wrong or out of date before anyone has a chance to fix a bug?

                                  I think the problem ends up being the struggle of maintaining a cohesive architecture across all of these manageable, hopefully vertical, slices of functionality. I’ve come into a few shops that have followed a low/no-documentation route (“The user story is the documentation!” said the project manager), and indeed they ship code fast. But by the time I get there, we get tasked with trying to build a unifying style or standard to move everything to, because the original developers are gone/dead/committed to an asylum for their health, no one is willing to make changes to the legacy codebase with any confidence, and ticket times are climbing.

                                  So where is the middle ground? I would be happy to never have to create another 10+ page proposal, print it out, and defend it in front of people who are smarter than me (and know it), but there is something missing in modern software development, and I think it’s some sort of process for getting our bad ideas out in the planning phase before writing code. Formal methods certainly fill this role, but no one has the time outside of the “if my code fails someone dies” crowd.

                                  I guess this is a long-winded way of saying “I use formal planning to get my bad ideas out of my system before I create software that works but breaks subtly or is just flat-out wrong.”

                                  1. 8

                                    I think it’s a market thing. Subtly broken software that’s out now is better than well-designed software a year from now, from a customer adoption perspective… sadly.

                                    1. 2

                                      I agree. Usually (hopefully) the lag isn’t that pronounced, but it’s all about how fast we can get a feature in front of users to drive adoption or validate our “product market fit” to investors. I’m glad that where I’m at we have a formal architecture review, albeit abbreviated, for new features, so we are trying to straddle that fine line between velocity and, hopefully, well-designed code. Sadly this just means the architect (me) gets a lot of extra work

                                    2. 5

                                      Have you heard the good word of lightweight formal methods?

                                      (Less facetiously: the key thing that makes the middle ground possible is finding notations that are simple enough to be palatable while also being machine-checkable. I volunteer a lot with the Alloy team because I think it has potential to mainstream formal methods, even more so than TLA+.)

                                      1. 2

                                        A friend has gotten me on an mCRL2 kick for modeling one of our new systems. I’ll check out Alloy as well. I love playing with PlusCal/TLA+ when I have the time for it

                                        1. 4

                                          The thing I really like about Alloy is that the whole BNF is like a page. So it’s not as big a time investment to learn it as something like mCRL2 or TLA+.

                                          Also, I think you’re the second person to mention mCRL2 in like a week, which is blowing my mind. Before this I’d only ever heard about it from @pmonson711.

                                          1. 3

                                            I think I’ve finally warmed up to the idea of writing more about how and why I lean on mCRL2 as my main modelling tool.

                                            Regarding Alloy and the time investment, I agree the syntax is much simpler. It did take me a huge mental investment to get comfortable with modelling anything that changes over time. It’s a great tool for data structure/relations modelling, but if you need to understand how things change with time, then TLA+’s UNCHANGED handling feels like a necessity.

                                            1. 2

                                              I think that’s the most valuable part of the whole exercise for me. I usually model everything I can as some sort of state machine with well-defined entrance and exit behaviors, so seeing the effect of changes to that system over time is really useful to me

                                              1. 2

                                                I’m in the process of working out how to change Alloy’s pitch away from software specifications and more towards domain modeling, precisely because “how things change” is less omnipresent in requirements modeling than in software specs.

                                                1. 1

                                                  I think I gave a less positive view of Alloy than I wanted in this comment. Alloy can certainly model time, but I always seem to end up ad hoc defining the UNCHANGED, much like in the talk Alloy for TLA+ Users around the 30-minute mark.

                                                2. 2

                                                  Haha! Oddly he is the friend who put me on that path

                                                  1. 2

                                                    Thanks, you two, for bringing up mCRL2 – I wouldn’t have discovered it if y’all hadn’t talked about it!

                                                    1. 1

                                                      The thing I really like about Alloy is that the whole BNF is like a page. So it’s not as big a time-investment to learn it over something like mcrl2 or TLA+.

                                                      I think that is the main selling point for me especially. I have a lot of interest in formal methods and modeling because I can see how damn useful it can/will be. Just carving out the time and the mental bandwidth to do it is a struggle. I am by no means good at any of the tools, but I am putting a lot of effort into making the time investment worth it, not just to me but to the product as a whole. Currently I am trying to unravel my thoughts on how we apply/enforce a fairly granular security model in our system, and it’s been a great thought exercise, and I’m fairly confident it will prove useful, or drive me insane.

                                              1. 1

                                                Event sourcing is such an intriguing pattern with deceptively sharp edges. I’ve heard “you get an audit log for free” a few times, but the event store ends up being a very poor audit log, and querying the raw events is usually hard/cumbersome or impossible depending on how you store them. My audit logs always end up being yet another projection.

                                                I’ve found that event sourcing only gets nasty when you inevitably have a breaking change to an event contract and have to upgrade/downgrade events for different projections/handlers.

                                                I feel like it’s a powerful pattern even on the small scale, but you need a disciplined set of developers and a strong ops strategy to make it work
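
                                                To illustrate that “yet another projection” point, here’s a toy Python sketch (the event kinds and names are made up, not from any real system): the current state and the audit view are both just folds over the same event stream.

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str
    data: dict

def balance_projection(events):
    """Fold the stream into current state (an account balance here)."""
    balance = 0
    for e in events:
        if e.kind == "deposited":
            balance += e.data["amount"]
        elif e.kind == "withdrew":
            balance -= e.data["amount"]
    return balance

def audit_projection(events):
    """The 'free' audit log, materialized as yet another projection."""
    return [f"{i}: {e.kind} {e.data}" for i, e in enumerate(events)]

events = [
    Event("deposited", {"amount": 100}),
    Event("withdrew", {"amount": 30}),
]
# balance_projection(events) -> 70; audit_projection(events) -> two log lines
```

                                                Querying the raw events directly is where it gets painful in practice; a dedicated read model like audit_projection is usually the way out.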

                                                1. 4

                                                  event sourcing only gets nasty when you inevitably have a breaking change to an event contract and have to upgrade/downgrade events for different projections/handlers

                                                  Our solution to this was a robust capability model. Capabilities limit access to resources, but a change in a capability is itself an event. So at the point when a contract changes, that change is itself modeled in the event log, and hence only affects events that occur after it.

                                                  1. 2

                                                    Our solution to this was a robust capability model.

                                                    That sounds really interesting! Is there something I can read, or can you say a bit more about that?

                                                  2. 1

                                                    The article mentions being able to completely rebuild the application state. It makes sense in theory, but how does it work out in practice?

                                                    I imagine that you might have to do event compaction over time or else the event storage would be massive. Seems like an area where those sharp edges might come out.

                                                    1. 3

                                                      A lot of folks end up using snapshots every n events or so, discarding the previous events. It’s an optimization that becomes fairly essential quickly
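
                                                      A minimal sketch of that snapshotting optimization in Python (the class and its fields are invented for illustration, with a running integer sum standing in for real aggregate state):

```python
class SnapshottingStore:
    """Toy event store that snapshots every `every` events."""

    def __init__(self, every=100):
        self.every = every
        self.events = []        # in a real system, events before the snapshot could be archived
        self.snapshot = (0, 0)  # (state at snapshot, number of events covered)

    def append(self, delta):
        self.events.append(delta)
        if len(self.events) % self.every == 0:
            # Roll the snapshot forward so future rebuilds stay cheap.
            self.snapshot = (self._replay_from_snapshot(), len(self.events))

    def _replay_from_snapshot(self):
        state, covered = self.snapshot
        for e in self.events[covered:]:
            state += e          # stand-in for a real apply(state, event)
        return state

    def current_state(self):
        # Rebuild = snapshot + replay of only the events since it.
        return self._replay_from_snapshot()
```

                                                      With every=2, appending 1, 2, 3 leaves a snapshot of (3, 2) and a current state of 6, so the rebuild replays only the one event after the snapshot.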

                                                      1. 2

                                                        I run a darts scorer in my spare time. The state of every single game is stored only as an event stream. Account and profile data use a conventional relational model.

                                                        I never rebuild the entire application state, only the game I’m interested in. Should I introduce new overall statistics, I would just rebuild game after game to backfill the statistics data.

                                                        Storage and volume haven’t been a problem on my tiny VPS. PostgreSQL sits at 300,000 games, with each one having about 300 events. If you want, I can look up the exact numbers in the evening.