1. 1

    Sleeping

    1. 1

      Preparing a course for people who want to sit the CKA exam. A knowledge-transfer exercise, really.

      1. 1

        curl | bash I understand. npm and its complexities, nope

        1. 9

          Have you read Science and Sanity by Alfred Korzybski? This feels a lot like reading that, specifically where he describes non-Aristotelianism as having relational rather than subject-predicate methods, and a probability that a cause creates an effect rather than a given cause creating a given effect. Modelling from the perspective of processes as opposed to actions and outcomes.

          Unfortunately, nothing in the article stands out to me as actionable. We create simple models, with feedback loops if necessary (which do exist; you just need an LM741 and you’re good), and if they’re good enough we run with them until we have to refine them (which again is a feedback loop). We generally don’t need to model every single potential database client, or the probability that the client falls into some given category; we just provide the client with a suitable API for whichever actions any of the above might need to perform (extracting permissions and such into a wholly different system).

          I would appreciate it if you could lay out the core argument of the article for me in plain terms, because all I can gather is that it’s wrong to model systems this way, and I feel like I’m missing the point.

          1. 5

            +1 for the LM741. The system does not let me add a second point for Korzybski.

            Feedback loops do exist, and we have the whole field of Control Systems that studied them long before they hit software.

            And as time goes by, control systems are increasingly programmed in “real” programming languages and systems, so not only are they here to stay, we also need to learn more about them. And about feedback loops, of course.

            There’s a lot of rediscovery in this article of things that were studied with pencil and paper, only now with pseudo- and real code.

          1. 1

            I have an M1 Mac mini with Parallels installed on it. There seems to be no Ubuntu vagrant box available that suits me, so I will build one.

            1. 6

              I will happily say: resting! I hope you do too!

              1. 2

                Which reminds me of the paper What you always wanted to know about Datalog

                1. 1

                  I remember having run this! If memory serves me well, the port was done by two programmers, and the others joked about them for porting it to Unix. Sadly, I cannot locate the article that reported this.

                  1. 13

                    I’ve used m4. I mean I’ve actually written it, not just copy-pasted stuff.

                    I liked it, but I’m a very strange person.

                    I appreciated the consistency and simplicity of its design. At the same time, I see why its extremely dense and unusual syntax creates a very high barrier to entry. Also, it’s inextricably linked to autoconf for most people who’ve worked with it, and I can see why having to learn an entire Turing-complete language to customize config scripts is not something everyone is thrilled by.

                    1. 4

                      No sendmail.mc/.cf configuration ever?

                      1. 7

                        Nooo, I had managed to finally forget Sendmail configuration and then I had to read this!

                      2. 2

                        FWIW I also really enjoy using m4, and I managed to get it deployed into production too LOL

                      1. 8

                        I think only the author and I think about m4 this way. I was using it some years ago to create templates for terraform, and lately for deploying redash on Kubernetes.

                        I still think Exploiting the m4 macro language [pdf] is one of the best reading materials on it.

                        Let’s give m4 a second life. Everyone keeps inventing template languages, and we have had one in front of us for 50 years or so.
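                        To make that concrete, here is a minimal sketch of m4 as a plain template language. The macro names and values are invented for illustration; dnl discards the rest of its line, so the define lines leave no stray blank output.

```m4
define(`IMAGE', `myregistry/redash')dnl
define(`REPLICAS', `3')dnl
app: redash
image: IMAGE:latest
replicas: REPLICAS
```

                        Running this through m4 yields the three app/image/replicas lines with the macros substituted. Uppercase names are just a convention to keep macros from colliding with ordinary words in the template body.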

                        1. 4

                          I know m4 is used in telecommunications somewhat - it’s on an OpenSIPS example page somewhere.

                        1. 3

                          There was even a web site for it

                          1. 5

                            It is not the databases that ruin good ideas. It is the fear of changing a vital component of the production system that does. Sometimes even database migrations feel like a heart transplant, and the powers that be require zero downtime. Therein lies the fear, the anxiety, and the killer of any idea.

                            1. 4

                              Zero (intentional) downtime is a much more expensive requirement than I think people give it credit for, mostly because it’s usually more of a death-by-a-thousand-cuts kind of expensive than a put-a-big-project-on-the-roadmap kind of expensive.

                              In some cases it’s a legit requirement, but I suspect if the ongoing cost were more obvious, it’d be a much less frequent requirement.

                              1. 3

                                Zero (intentional) downtime is a much more expensive requirement than I think people give it credit for

                                I think if folks don’t understand this, they should look at the uptime of their own systems as a start. While I have no data to back this up, I feel comfortable saying that the uptime/effort curve is logarithmic: past a certain point, only an exponential increase in effort will get you a linear increase in uptime. Knowing that, you should be able to look at a system you run and form a rough idea of how much effort it would take to scale its uptime.
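                                To put rough numbers on that curve, a quick back-of-the-envelope sketch; the figures follow directly from the definition of availability, nothing here is empirical:

```python
# Allowed downtime per year for each "nines" availability target.
# Every extra nine divides the budget by ten, while the effort to
# meet it tends to grow much faster than that.
HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years

def downtime_hours(nines: int) -> float:
    """Hours of downtime per year allowed at `nines` nines of uptime."""
    return HOURS_PER_YEAR * 10 ** -nines

for n in range(1, 6):
    print(f"{n} nine(s): {downtime_hours(n):8.3f} h/year allowed")
```

                                A single two-hour maintenance window per month already blows a 99.9% budget for the year, which is one concrete way to see what “zero intentional downtime” actually costs.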

                                1. 3

                                  I think that is totally true if you’re talking about minimizing unintentional outages, but it seems much less clear to me that it applies to intentional ones, if only because it’s possible to actually get all the way to zero intentional downtime without infinite effort.

                                  One can argue that there’s no good reason to distinguish the two, that downtime is downtime whether it’s scheduled or not. But I think in some contexts there’s a meaningful difference between, “Our service will be unavailable from 2AM to 4AM on Thursday of next week,” and, “Oops, something broke all of a sudden and it took us two hours to recover.”

                                  1. 1

                                    Fair enough. I work in a context where we have guarantees around uptime, planned or unplanned, which is where my comment came from. But yes, I do think it’s a lot easier to drive intentional downtime to zero.

                                2. 1

                                  I think there are definitely ways to mitigate issues with database change in particular:

                                  1. Just use your error budget. It’s there to be used.
                                  2. Make a new database with the new schema, deploy the system writing to both, run a migration process behind the scenes then turn off the original database. Have some error budget in the bank for a rollback.
                                  3. Build tolerance into the system interfaces to be able to deal with the old and new scenario. This is what we do at Google constantly, we just build out a protocol buffer with more and more fields and mark the deprecated fields as such, then file bugs to get rid of those fields from the code when we can.
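                                  Option 2 can be sketched roughly like this; the class and the dict-backed “stores” are invented for illustration, not any particular database API:

```python
# Sketch of a dual-write cutover: the application writes to both the old
# and the new store, a backfill copies historical rows behind the scenes,
# reads stay on the old store until the backfill finishes, then traffic flips.
class DualWriteStore:
    def __init__(self, old, new):
        self.old, self.new = old, new
        self.reads_on_new = False  # flipped once the backfill is done

    def write(self, key, value):
        self.old[key] = value   # still the source of truth
        self.new[key] = value   # shadow write into the new schema

    def read(self, key):
        return (self.new if self.reads_on_new else self.old)[key]

    def backfill(self):
        # Copy anything the shadow writes haven't covered yet.
        for key, value in self.old.items():
            self.new.setdefault(key, value)
        self.reads_on_new = True  # cut over; the old store can be retired
```

                                  In a real migration the stores are databases and the backfill runs as a batch job; the error budget in the bank is what lets you flip reads_on_new back if the cutover goes badly.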

                                  State doesn’t have to be scary, especially with databases. I worry about things like corrupt files, cascading failures and security holes. Basically anything that requires specialist knowledge and foresight, which no one can have all of.

                                  1. 3

                                    Oh, it is absolutely possible to mitigate the risks. But everything you mentioned requires additional work compared to a “shut the whole system down, upgrade all of the components, start it up again” model.

                                    Google easily falls into the “sometimes it’s worth the cost” category, of course. Taking Google down for a couple hours on a Saturday night is obviously totally out of the question.

                                    But I’ve worked at B2B shops that insisted on zero-downtime code deployment and were willing to have the engineering team do the extra work to make it happen, even though none of the customers would have noticed or cared if we’d taken the system offline in the middle of the night. In one case, our system was a front end for an underlying third-party service that had regular scheduled downtime, but we had to stay online 24x7 even though our system was completely useless during the other system’s outages. “Our system never goes down!” was apparently a valuable enough talking point for our sales team that management was willing to pay the price, even though it added no actual value for customers.

                              1. 3

                                There’s also a 2017 book from O’Reilly, BGP in the data center for anyone interested.

                                1. 7

                                  This is my favorite blogpost of all time. It’s someone thoroughly exploring and playing with the idea of the Sierpinski Triangle and having an absolute ball. I don’t understand every part of the post, but to me this is the perfect embodiment of the fun of “doing math” and really exploring a space.

                                  • They play the Sierpinski Triangle on a piano
                                  • They build it with cellular automata
                                  • They plot it on a 3D model of a Cow
                                  • They explore Chaos with it
                                  • They explore it in higher dimensions
                                  • They smear different terms across space during construction
                                  • They view it as a Markov Chain
                                  • They view it as an L-System
                                  • They view it as a graph

                                  Just SO MUCH Sierpinski Triangle! I saw that u/pushcx posted this a few years ago and no one commented…but I really wanted to post it again because when I think of what types of blogs I want to read this always pops into my head. True exploration.
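                                  The cellular-automaton construction in that list is a nice one to try yourself: elementary Rule 90, where each cell becomes the XOR of its two neighbours, traces out the Sierpinski triangle row by row. A small sketch:

```python
# Rule 90: next cell = left neighbour XOR right neighbour (zero boundary).
# Starting from a single live cell, the rows trace the Sierpinski triangle.
def rule90(row):
    padded = [0] + row + [0]
    return [padded[i - 1] ^ padded[i + 1] for i in range(1, len(padded) - 1)]

row = [0] * 16 + [1] + [0] * 16
for _ in range(16):
    print("".join("#" if c else " " for c in row))
    row = rule90(row)
```

                                  This is also why the triangle is Pascal’s triangle mod 2: XOR is exactly addition mod 2.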

                                  1. 1

                                    This book links it with the Towers of Hanoi game.

                                  1. 4

                                    I do not understand linkers, but the post reminded me of this book Linkers and Loaders

                                    1. 3

                                      And it reminds me of John Levine mentioning in this book that the number of linker authors on this planet is just a handful (or something to that effect). No wonder we have just this one book on the guts of linkers and loaders!

                                      1. 3

                                        I treat them as a black box and don’t understand them either, but this comment on the same article gave me a bunch of insight into it:

                                        https://news.ycombinator.com/item?id=27446341

                                        The analogy is that a linker (and an OS loader of shared libraries!) is like a compacting garbage collector, which I just wrote and spent a while debugging, so it’s burned into my brain. My way of digesting this:

                                        • A garbage collector walks a graph of objects in memory starting from a live set / stack roots; a linker walks a function call graph, starting from main()
                                        • A compacting garbage collector produces a contiguous block of objects; A linker produces the contiguous executable (well at least the “text” portion of it).
                                        • GCs are concerned with data pages; linkers are concerned with code pages.
                                        • A garbage collector has to find edges within objects (there are several ways to do this); a linker has to find calls within functions
                                        • A leaf object has no pointers (e.g. Point {int x, int y}); a leaf function calls no others (abs() or something)
                                        • The Cheney GC algorithm is clever about the order in which objects are moved, so it can modify pointers correctly. The linker has the same issue with regard to adjusting code offsets
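                                        A toy sketch of that trace-move-fixup shape; the function table is invented for illustration, and no real linker works on a dict like this. Reachable functions are traced from main, laid out contiguously, and each call site gets the callee's final offset:

```python
# Toy "linker": each function has a size and a list of callees. We trace
# the call graph from main, assign contiguous offsets (compaction), and
# collect the fixups a real linker would patch into the call sites.
FUNCS = {  # name -> (size_in_bytes, callees)
    "main": (32, ["parse", "abs"]),
    "parse": (64, ["abs"]),
    "abs": (16, []),           # leaf function, like a pointer-free object
    "unused": (128, ["abs"]),  # unreachable: dropped, like garbage
}

def link(funcs, root="main"):
    offsets, order = {}, []

    def visit(name):
        if name in offsets:
            return
        offsets[name] = sum(funcs[f][0] for f in order)  # next free slot
        order.append(name)
        for callee in funcs[name][1]:
            visit(callee)

    visit(root)
    fixups = [(caller, callee, offsets[callee])
              for caller in order for callee in funcs[caller][1]]
    return offsets, fixups
```

                                        Dropping unused here is the optional dead-code-elimination part; the essential work is resolving every call site to a final offset once the layout is known.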
                                        1. 3

                                          I don’t think it’s fair to say a linker is like a garbage collector. A linker may have a garbage collector (and that is probably a good thing for a linker to have), but its purpose is to resolve symbol references.

                                          1. 2

                                            It’s not about the linker having a garbage collector, but about the similarities in what a compacting gc and linker do. They both walk from a set of roots, trace, fix up, and move things around. The linker could also yeet dead code, but doesn’t have to. The isomorphism is the traversal.

                                      1. 2

                                        I think I’ve been a brilliant jerk. And easily more jerk than brilliant. Life happened and I got corrected, but kudos to those next to me who endured me while I was changing, because that took time.

                                        1. 1

                                          Good work jpmens and .CY NIC!

                                          1. 1

                                            SNMP and OSI, but we’re not going to talk about CMIP and CIMOM? Perhaps for the best…

                                            1. 1

                                              The last time I heard of CMIP was back in 1995 when I was sitting for an undergraduate exam on networks.

                                            1. 11

                                              If you like mazes, there’s an excellent resource: Mazes for Programmers. It’s a book I usually choose to gift to friends of mine who can write code.

                                              1. 4

                                                That book is wonderful.

                                              1. 1

                                                Install the monit client and have the system itself email or Slack you alerts when it is out of disk space, for example.

                                                If you can afford an RPi or any other cheap server at home, use a prometheus node exporter on each of the machines and scrape the data with that local machine. You’ll have the joy of learning the basics of prometheus and alertmanager.

                                                You can make it a learning experience:

                                                • install prometheus and its exporters
                                                • setup alertmanager and send alerts somewhere
                                                • learn about webhooks and how to trigger different actions when you receive them
                                                • make it so that only your machine can scrape data
                                                • make it so that you can save data older than 15 days to a local InfluxDB (or other)
                                                • use the RIPE Atlas software probe to measure availability (if the situation allows you to run it alongside anything else on your machine).
                                                • the journey can continue

                                                Because it is a small-scale infrastructure, you have the wonderful opportunity to expose yourself to many technologies, one step at a time, without feeling their weight pressed upon you.
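                                                For the first two bullets, a minimal prometheus.yml can look like the sketch below. The addresses and job name are placeholders; 9100 is node_exporter’s default port and 9093 is Alertmanager’s.

```yaml
# Minimal Prometheus scrape config for a couple of home machines.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: node
    static_configs:
      - targets:
          - 192.168.1.10:9100   # machine running node_exporter
          - 192.168.1.11:9100

alerting:
  alertmanagers:
    - static_configs:
        - targets: ["localhost:9093"]
```

                                                From there, alerting rules and the 15-day local retention default are the natural next things to poke at.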