1. 4

    At first I thought this was about Lucene and Xapian… Wikipedia redirects “search engine” to “web search engine”, so I guess the battle is lost.

    The world also needs more search engines, not just web search engines. I think the design space is rather underexplored.

    1. 2

      Agreed. Having a better search engine for things like GitHub, where punctuation should explicitly not be ignored, would be great. I’d guess, though, that the market doesn’t really want such a thing, as attempts to do this outside of/on top of GitHub haven’t exactly succeeded.

      1. 1

        Could you share which attempts didn’t exactly succeed?

        Personally, I was very impressed by livegrep, but yes, it didn’t take off.

    1. 17

      Further to this point: strive to design your data structures so that, ideally, there is only one way to represent each value. That means, for example, NOT storing your datetimes as strings. This implies that your parsing step also includes a normalization step. In fact, storing anything important as a string is a code smell.

      1. 9

        A person’s name should be stored as a string. City names. Stock symbols. Lots of things are best stored as strings.

        1. 3

          Names of things are best stored as strings, yes.

          1. 1

            What I recommend, though, is to encode them in their own named string types, to prevent the strings from being used in ways they are not meant to be used. We often use that to encode ID references to things which should be treated as mostly “opaque BLOBs”.
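
            To make that concrete, here is a minimal sketch in Go (the type names are invented for illustration, not taken from any real codebase): distinct named string types keep one kind of ID from being passed where another is expected.

              package main

              // Hypothetical example: distinct named string types for IDs.
              type CustomerID string
              type OrderID string

              func cancelOrder(id OrderID) { _ = id /* look up and cancel the order */ }

              func main() {
                  var c CustomerID = "cust-42"
                  cancelOrder(OrderID("ord-7")) // conversions must be explicit
                  // cancelOrder(c)             // compile error: CustomerID is not OrderID
                  _ = c
              }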

          2. 3

            Should stock symbols contain emojis or newlines, or be 100 characters long? Probably not. I assume there are standards for what a stock symbol can be. If you have a StockSymbol type constructed by a parser that disallows these things, you can catch errors earlier. Of course then the question is what do you do when the validation fails, but it does force a decision about what to do when you get garbage, and once you have a StockSymbol you can render it in the UI with confidence that it will fit.
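
            For instance, a rough Go sketch of that kind of parser-constructor (the length and character rules are made up for illustration, not a real exchange standard):

              package stock

              import (
                  "errors"
                  "unicode"
              )

              // StockSymbol can only be obtained via ParseStockSymbol (the field is
              // unexported), so downstream code can rely on its invariants.
              type StockSymbol struct{ s string }

              func ParseStockSymbol(raw string) (StockSymbol, error) {
                  if len(raw) == 0 || len(raw) > 5 {
                      return StockSymbol{}, errors.New("symbol must be 1-5 characters")
                  }
                  for _, r := range raw {
                      if r > unicode.MaxASCII || !unicode.IsUpper(r) {
                          return StockSymbol{}, errors.New("symbol must be uppercase ASCII letters")
                      }
                  }
                  return StockSymbol{raw}, nil
              }

              func (s StockSymbol) String() string { return s.s }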

          3. 3

            Interesting. If you don’t mind, I’d like to poke at that a bit.

            Why should you care what anything is stored as? In fact, why should you expect the rest of the universe, including the persistence mechanism, to maintain anything at all about your particular type system for your application?

            1. 7

              They can maintain it in their own type system. The issue is information loss. A string can contain nearly anything, and thus I know nearly nothing about it and must bear a heavy learning burden (parsing or validating). A datetime object can contain many fewer things, and thus I know quite a lot about it, lowering the learning burden.

              You can also build and maintain “tight” connections between systems where information is not lost. This requires owning and controlling both ends. But generally these tight connections are hard to maintain because you need some system which validates the logic of the connection and lives “above” each system being connected.

              Some people use a typed language with code generation, for instance.

              1. 3

                @zxtx was telling readers who want to leverage type systems to design their own programs in a way that derives more benefit from those type systems. The rest of the universe can still do its own thing.

                I’m not sure the string recommendation is always correct, though. There are several languages out there that are heavily string-based and successfully power all kinds of things. I’ve also seen formally-verified implementations of string functionality. Still, zxtx’s advice does sound like a good default.

                In addition to it, we should probably have verified libraries for strings and common conversions, plus contracts and/or types to ensure calling code uses them correctly. Then developers could use either option safely.

                1. 3

                  Sure some things inevitably have to be strings: personal names, addresses, song titles. But if you are doing part-of-speech tagging or word tokenization, an enumerative type is a way better choice than string. As a fairly active awk user I definitely sympathize with the power of string-y languages, but I think people new to typed languages overuse rather than underuse strings.
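
                  For example (a quick Go sketch, with a deliberately tiny, made-up tag set):

                    package pos

                    // PartOfSpeech is a closed enumeration. A plain string tag could
                    // silently hold "nuon" or "VERB "; this type cannot.
                    type PartOfSpeech int

                    const (
                        Noun PartOfSpeech = iota
                        Verb
                        Adjective
                        Adverb
                    )

                    func (p PartOfSpeech) String() string {
                        switch p {
                        case Noun:
                            return "NOUN"
                        case Verb:
                            return "VERB"
                        case Adjective:
                            return "ADJ"
                        case Adverb:
                            return "ADV"
                        }
                        return "UNKNOWN"
                    }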

                  1. 2

                    Unfortunately, even folks who have used typed languages for years (or decades) still overuse strings. I’m guilty of this.

                2. 3

                  I admit to going back and forth on this subject…

                  As soon as you store a person’s name as a PersonName object… it’s no longer a POD and you’re constrained to a tiny, tiny subset of operations on it… (with the usual backdoor of providing a toString method).

                  On the other hand, there’s Bjarne Stroustrup’s assertion that if you have a class invariant to enforce, that’s the job of an object / type.

                  Rich Hickey, the Clojure guy, has an interesting talk exactly on this subject, with an interestingly different take…

                  Instead of hiding the data in a type with an utter poverty of operators, leave everything as a POD of complex structure which can be validated, spec’d, checked, and asserted on using a Clojure spec.

                  i.e. if you want something with a specific shape, you have the spec to rely on; if you want to treat it as ye olde list or array of strings… go ahead.

                  1. 11

                    I stuck to simple examples of the technique in my blog post to be as accessible as possible and to communicate the ideas in the purest possible way, but there are many slightly more advanced techniques that allow you to do the kind of thing you’re describing, but with static (rather than dynamic) guarantees. For some examples, I’d highly recommend taking a look at the Ghosts of Departed Proofs paper cited in the conclusion, since it addresses many of your concerns.

                    1. 1

                      Ok. That took me a while to digest… but it was worth it. Thanks.

                      For C++/D speakers it’s worth looking at this first to get the idea of phantom types…

                      https://blog.demofox.org/2015/02/05/getting-strongly-typed-typedefs-using-phantom-types/
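
                      A rough Go analogue of the same trick uses an unused (“phantom”) type parameter; all names below are made up for illustration:

                        package main

                        import "fmt"

                        // Unit tags exist only at compile time; no Metres or Feet value is ever created.
                        type Metres struct{}
                        type Feet struct{}

                        // Distance carries a phantom type parameter recording its unit.
                        type Distance[Unit any] struct{ value float64 }

                        func add[Unit any](a, b Distance[Unit]) Distance[Unit] {
                            return Distance[Unit]{a.value + b.value}
                        }

                        func main() {
                            runway := Distance[Metres]{3000}
                            approach := Distance[Metres]{500}
                            fmt.Println(add(runway, approach).value) // 3500

                            // altitude := Distance[Feet]{10000}
                            // add(runway, altitude) // compile error: mismatched units
                        }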

                    2. 4

                      As someone who worked professionally with both Clojure (before spec, but with Prismatic Schema) and OCaml, I have to say I utterly prefer to encode invariants in a custom type with only a few operations over the Clojure way of having everything in a hashmap with some kind of structure (hopefully) and lots of operations which operate on them.

                      My main issue writing Clojure was that I did apply some of these (really useful and versatile) functions on my data, but the data didn’t really match what I had expected so the results were somewhat surprising in edge cases and I had to spend a lot of brain time to figure out what was wrong and how and where that wrong data came to be.

                      In OCaml I rarely have the problem and if I want to use common functions, I can base my data structures on existing data structures that provide the functions I want to over the types I need, so in practice not being able to use e.g. merge-with on any two pieces of data is not that painful. For some boilerplate, deriving provides an acceptable compromise between verbosity and safety.

                      I can in theory do a similar thing in Clojure as well, but then I would need to add validation basically everywhere which makes everything rather verbose.

                      1. 3

                        I’ve used Clojure for 8 years or so, and have recently been very happy with Kotlin, which supports sealed types that you can case-match on, and with very little boilerplate—but also embraces immutability, like Clojure.

                        With Clojure, I really miss static analysis, and it’s a tough tradeoff with the lovely parts (such as the extremely short development cycle time.)

                      2. 3

                        The ability to “taint” existing types is the answer we need for this. Not a decorator / facade sort of thing, just a taint/blessing that exists only within the type system, with a specific gatekeeper being where the validation is done and the taint removed/blessing applied.

                        1. 2

                          In Go, wrapping a string in a new type is zero-overhead, and you can cast it back easily. So it’s mostly just a speedbump to make sure that if you do something unsafe, you’re doing it on purpose and it will be seen in code review. If the type doesn’t have very many operators, you might have more type casts that need to be checked for safety, but it’s usually pretty easy to add a method.

                          On the other hand, the Go designers decided not to validate the string type, instead accepting arbitrary binary data with it only being convention that it’s usually UTF8. This bothers some people. But where it’s important, you could still do Unicode validation and create a new type if you want, and at that point there’s probably other validation you should be doing too.
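
                          Roughly like this, as a sketch (names invented for the example): the wrapper is free at runtime, and the explicit conversion is the speedbump that shows up in code review.

                            package user

                            import "strings"

                            // Email is a distinct type with zero runtime overhead over string.
                            type Email string

                            // ParseEmail is the single place the (deliberately simplistic) check lives.
                            func ParseEmail(s string) (Email, bool) {
                                if strings.Count(s, "@") != 1 {
                                    return "", false
                                }
                                return Email(s), true
                            }

                            func Welcome(e Email) string {
                                // Passing a raw string here requires a visible Email(...) conversion,
                                // which is easy to spot in review.
                                return "Welcome, " + string(e)
                            }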

                          1. 1

                            The last one is the best.

                            Instead of scaling out code, we should be scaling out tests. We’re doing it backwards.

                            I’ve been meaning to put together a conference proposal on this but haven’t gotten around to it. It’s the kind of thing that blows people’s minds.

                            1. 1

                              Can you expand a little on this? Sounds interesting.

                              1. 3

                                People don’t understand what tests do. If you ask them, they might say they help your code be less buggy, or they show your business customers that your program does what they’re paying for.

                                That’s all true, but horribly incomplete. Tests resolve language.

                                That is, whether it’s science, programming, running a business, or any of hundreds of other areas where human language intersects science, tests are the only tools for determining what’s true or not in unambiguous terms. Come up with some super cool new way of making a superconductor? Great! Let’s have somebody go out and make it on their own, perform a test. If the test passes, you’re on to something. Yay! If the test fails? Either you’re mistaken, or the language and terms you’re using to describe your new process have holes the reproducer was unable to resolve. Either way, that’s important information. It’s also information you wouldn’t have gained without a test.

                                In coding, as I mentioned above, we have two levels of tests. The unit level, which asks “Is this code working the way I expected it to?” and the acceptance level, which asks “Is the program overall performing as it should?” (I understand the testing pyramid, I am simplifying for purposes of making a terse point). But there are all sorts of other activities we do in which the tests are not visible. Once the app is deployed, does it make a profit? Is your team working the best way it can? Are you building this app the best way you should? Are you wasting time on non-critical activities? Will this work with other, unknown apps in the future? And so on.

                                We’ve quantified some of this with things like integration testing (which only works with existing apps). Frankly, we’ve made up other stuff out of whole cloth, just so we can have a test, something to measure. In almost all cases, when we make stuff up we end up actually increasing friction and decreasing productivity, just the opposite of what we want.

                                So how do we know if we’re doing the best job we can? Only through tests, whether hidden or visible. How are we doing at creating tests? I’d argue pretty sucky. How can we do tests better? More to the point, if we do tests correctly, doesn’t it make whatever language, platform, or technology we use a secondary effect as opposed to a primary one? We spend so much time and effort talking about tools in this biz when nobody can agree on whether we’re doing the work right. I submit that this happens because we’re focusing far, far more on our reactions to the problems than on the problems themselves. If we can create and deploy tests in a comprehensive and tech-independent manner, we can then truly begin discussing how to take this work to the next level. Either that or we’re going to spend the next 50 years talking about various versions of hammers instead of how to build safe, affordable, and desirable houses, which is what we should be doing.

                                There’s a lot missing in my reply, but once we accept that our test game sucks? Then a larger and better conversation can happen.

                                1. 1

                                  It will take me some time to digest this properly… it’s a completely different angle from the one I usually approach the matter from. (I’m not saying you’re wrong, I’m just saying you’re coming at it from such a different angle that I’m going to have to step back and contemplate.)

                                  To understand where I’m coming from let me add…

                                  I regard tests as a lazy pragmatic “good enough” alternative to program proving.

                                  If we were excellent mathematicians, we would prove our programs were correct exactly the way mathematicians prove theorems.

                                  Except we have a massive shortage of that grade of mathematicians, so what can we do?

                                  Design by Contract and testing.

                                  DbC takes the raw concepts of program proving (pre-conditions and post conditions and invariants) and then we use the tests to setup the preconditions.

                                  Writing complete, accurate postconditions is hard, about as hard as writing the software, so we have a “useful subset” of postconditions for particular instances of the inputs.

                                  Crude, very crude, but fairly effective in practice.
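
                                  (To illustrate, a tiny Go test sketch with the function and numbers invented: the test fixes one concrete precondition and asserts a partial postcondition for just that instance, rather than a full specification.)

                                    package pricing

                                    import "testing"

                                    // DiscountCents is a hypothetical function under test (prices in cents).
                                    func DiscountCents(total int) int {
                                        if total >= 10000 {
                                            return total * 90 / 100
                                        }
                                        return total
                                    }

                                    // Precondition: total = 20000. Postcondition (partial): the result is
                                    // lower than the input and equals the expected value for this instance.
                                    func TestDiscountAppliesOverThreshold(t *testing.T) {
                                        got := DiscountCents(20000)
                                        if got >= 20000 || got != 18000 {
                                            t.Fatalf("postcondition violated: got %v", got)
                                        }
                                    }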

                                  My other view of unit tests is closer to yours…

                                  They are our executable documentation (proven correct and current) of how to use our software and what it does. So a design principle for tests is that they should be good, readable, understandable documentation.

                                  Now I will shut up and contemplate for a day or two.

                                  1. 1

                                    We are saying the same thing. Heck we might even be agreeing. We’re just starting from completely opposite sides of the problem. Formally validating a program proves that it matches the specification. In this case the formal specification is the test.

                                    I think when I mention tests you may be thinking of testing as it was done in the IT industry, either manual or automated. But I mean the term in the generic sense. We all test, all the time.

                                    What I realized was that you can’t write a line of code without a test. The vast majority of the time that test is in your head. Works for me. You say to yourself “How am I going to do X?”, then you write some code. You look at the code. It appears to do X. Life is good.

                                    So you never get away from tests. The only real questions are what kinds of tests, where do they live, who creates them, and so forth. I’m not providing any answers to these questions. My point is that once you realize you don’t create structure without some kind of tests somewhere, even if only in your head, you start wondering exactly which tests are being used to create which things.

                                    My thesis is that if we were as good at creating tests as we are at creating code, the coding wouldn’t matter. Once again, just like I don’t care whether you’re an OCaml person or a JavaScript person, for purposes of this comment I don’t care if your tests are based on a conversation at a local bar or written in stone. That’s not the important part. The thing is that in various situations, all of these things we talk about doing with code, we should be doing with tests. If the tests are going to run anyway, and the tests have to pass for the project to be complete or the problem solved, then it’s far more important to talk about the meaning of a successful completion of the project or a solution to the problem than it is to talk about how to get there.

                                    Let’s picture two programmers. Both of them have to create the world’s first accounting program. Programmer A sits down with his tool of choice and begins slinging out code. Sure enough, in a short time, voila! People are happy. Programmer B spends the same amount of time creating tests that describe a successful solution to the problem. He has nothing to show for it.

                                    But now let’s move to the next day. Programmer A is just now beginning to learn about all of the things he missed when he was solving the problem. He’s learning this for a variety of reasons, many of which involve the fact that we don’t understand something until we attempt to codify it. He begins fixing stuff. Programmer B, on the other hand, does nothing. He can code or he can hire a thousand programmers. The tech details do not matter.

                                    Programmer B, of course, will learn too, but he will learn by changing his tests. Programmer A will learn inside his own head. From there he has a mental test. He writes code. It is fixed. Hopefully. Programmer A keeps adjusting his internal mental model, then making his code fit the model, until the tests pass, ie nobody complains. Programmer B keeps adjusting an external model, doing the same thing.

                                    Which of these scales when we hire more coders? Which of these are programs the programmer can walk away from? Formal verification shows that the model meets the spec. What I’m talking about is how the spec is created, the human process. That involves managing tests, in your head, on paper, in code, wherever. The point here is that if you do a better, quicker job of firming the language up into a spec, the tech stuff downstream from that becomes less of an issue. In fact, now we can start asking and answering questions about which coding technologies might or might not be good for various chores.

                                    I probably did a poor job of that. Sorry. There’s a reason various programming technologies are better or worse at various tasks. Without the clarification tests provide, discussions on their relative merits lack a common system of understanding.

                                2. 1

                                  ADD: I’ll add that most all of the conversations we’re having around tech tools are actually conversations we should be having about tests: can they scale, can they run anywhere, can they be deployed in modules, can we easily create and consume stand-alone units, are they easy to use, do they do only what they’re supposed to do and nothing else, are they really needed, is it difficult to make mistakes, and so on. Testing strikes me as being in the same place today as coding was in the early-to-mid 80s when OO first started becoming popular. We’re just now beginning to think about the right questions, and nowhere near coming up with answers.

                                  1. 1

                                    Hmm… In some ways we hit “Peak Testing” a few years back, when we had a superb team of manual testers, well trained, excellent processes, excellent documentation.

                                    If you got a bug report it had all the details you needed to reproduce it: configs, what behaviour was expected, what behaviour was found, everything. You just sat down and started fixing.

                                    Then test automation became The Big Thing and we hit something of a Nadir in test evolution which we are slowly climbing out of…

                                    This is how it was in the darkest of days…

                                    “There’s a bug in your software.”

                                    Ok, fine, I’ll fix how do I reproduce…..

                                    “It killed everything on the racks, you’ll have to visit each device and manually rollback.”

                                    (Shit) Ok, so what is the bug?

                                    “A test on Jenkins failed.”

                                    Ok, can I have a link please?

                                    “Follow from the dashboard”

                                    What is this test trying to test exactly?

                                    “Don’t know, somebody sometime ago thought it a good idea”.

                                    Umm, how do I reproduce this?

                                    “You need a rack room full of equipment, a couple of cloud servers and several gigabytes of python modules mostly unrelated to anything”.

                                    I see. Can I have a debug connector to the failing device.

                                    “No.”

                                    Oh dear. Anyway, I can’t seem to reproduce it… how often does it occur?

                                    “Oh we run a random button pusher all weekend and it fails once.”

                                    Umm, what was it doing when it failed?

                                    “Here is a several gigabyte log file.”

                                    Hmm. Wait a bit, if my close reading of these logs is correct, the previous test case killed it, and only the next test case noticed… I’ve been looking at the wrong test case and logs for days.

                            2. 1

                              Because throughout your program you will need to do comparisons or equality checks and if you aren’t normalizing, that normalization needs to happen at every point you do some comparison or equality check. Inevitably, you will forget to do this normalization and hard to debug errors will get introduced into the codebase.
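
                              A small Go illustration of the datetime case (values invented): normalize once, at the parsing boundary, and the comparisons downstream stop being a trap.

                                package main

                                import (
                                    "fmt"
                                    "time"
                                )

                                func main() {
                                    // The same instant written two ways; comparing the raw strings lies.
                                    a := "2019-01-21T10:00:00Z"
                                    b := "2019-01-21T11:00:00+01:00"
                                    fmt.Println(a == b) // false

                                    // Normalize once, at the boundary, by parsing into time.Time.
                                    ta, _ := time.Parse(time.RFC3339, a)
                                    tb, _ := time.Parse(time.RFC3339, b)
                                    fmt.Println(ta.Equal(tb)) // true: no per-comparison normalization needed
                                }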

                              1. 1

                                Ok. Thank you. I figured out what my hang up was. You first say “Strive to design your data structures so that ideally there is only one way to represent each value.” which I was completely agreeing with. Then you said “In fact storing anything important as a string is a code smell” which made me do a WTF. The assumption here is that you have one and only one persistent data structure for any type of data. In a pure functional environment, what I do with a customer list in one situation might be completely different from what I would do with it in another, and I associate any constraints I would put on the type to be much more related to what I want to do with the data than to my internal model of how the data would be used everywhere. I really don’t have a model of how the universe operates with “customer”. Seen too many different customer classes in the same problem domain written in all kinds of ways. What I want is a parsed, strongly-typed customer class right now to do this one thing.

                                See JohnCarter’s comment above. It’s a thorny problem and there are many ways of looking at it.

                                1. 1

                                  I think ideally you still do want a single source of truth. If you have multiple data structures storing customer data you have to keep them synced up somehow. But these single sources of data are cumbersome to work with. I think in practice the way this manifests in my code is that I will have multiple data structures for the same data, but total functions between them.

                                  1. 2

                                    Worked with a guy once where we were going to make a domain model for an agency. “No problem!” he said, “They’ve made a master domain model for everything!”

                                    This was an unmitigated disaster. The reason was that it was confusing a people process (determining what was valid for various concepts in various contexts) with two technical processes (programming and data storage). All three of these evolved dramatically over time, and even if you could freeze the ideas, any three people probably wouldn’t agree on the answers.

                                    I’m not saying there shouldn’t be a single source of data. There should be. There should even be a single source of truth. My point is that this single point of truth is the code that evaluates the data to perform some certain action. This is because when you’re coding that action, you’ll have the right people there to answer the questions. Should some of that percolate up into relational models and database constraints? Sure, if you want them to. But then what do you do if you get bad incoming data? Suppose I only get a customer with first name, last name, and email? Most everybody in the org will tell you that it’s invalid. Except for the marketing people. To them all they need is email.

                                    Now you may say but that’s not really a customer, that’s a marketing lead, and you’d be correct. But once again, you’re making the assumption that you can somehow look over the entire problem space and know everything there is to know. Do the mail marketing guys think of that as a lead? No. How would you know that? It turns out that for anything but a suite of apps you entirely control and a business you own, you’re always wrong. There’s always an impedance mismatch.

                                    So it is fine however people want to model and code their stuff. Make a single place for data. But the only way to validate any bit of data is when you’re trying to use it for something, so the sole source of truth has to be in the type code that parses the data going into the function that you’re writing – and that type, that parsing, and that function are forever joined. (by the people and business that get value from that function)

                                    1. 2

                                      I suspect we might be talking a bit past each other. To use your example, I might ask what it means to be a customer. It might require purchasing something or having a payment method associated with them.

                                      I would in this case have a data type for Lead that is only email address, a unique uuid and optionally a name. Elsewhere there is code that turns a Lead into a Customer. The idea being to not keep running validation logic beyond when it is necessary. This might mean having data types Status = Active | Inactive | Suspended which needs to be pulled from external data regularly. I can imagine hundreds of different data types used for all the different ways you might interact with a customer, many instances of these data types created likely right before they are used.
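
                                      Concretely, something like this Go sketch (every name here is invented for the example): the validation runs once, where a Lead becomes a Customer, not at every later use.

                                        package crm

                                        import "errors"

                                        type Status int

                                        const (
                                            Active Status = iota
                                            Inactive
                                            Suspended
                                        )

                                        // Lead needs only an email; the name is optional.
                                        type Lead struct {
                                            ID    string
                                            Email string
                                            Name  *string
                                        }

                                        type Customer struct {
                                            Lead          Lead
                                            PaymentMethod string
                                            Status        Status
                                        }

                                        // Promote is the one place a Lead is checked before becoming a Customer.
                                        func Promote(l Lead, paymentMethod string) (Customer, error) {
                                            if paymentMethod == "" {
                                                return Customer{}, errors.New("a customer needs a payment method")
                                            }
                                            return Customer{Lead: l, PaymentMethod: paymentMethod, Status: Active}, nil
                                        }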

                            3. 1

                              Mostly agree, but I’d like to add that the ability to pass along information from one part of the system to another should not necessarily require understanding that information from the middle-man perspective. Often this takes the form of implicit ambient information such as a threadlocal or “context” system that’s implemented in library code, but as a language feature it could be made first-class.

                            1. 2

                              I’m in the Java world, so, like @technomancy says, I already have a deployment mechanism in the “uber” or “fat” JAR. Not only that, but with PaaS like Heroku, I don’t even have to build the JARs locally.

                              For development, I do use Docker containers for things like a postgres database, mkdocs, etc., because otherwise the configurations are a pain and conflict with other things, and am also looking at Test Containers to make some of that testing with postgres easier.

                              1. 2

                                This week I’m splitting my Java live coding time between JitterChat, an IntelliJ Plugin to embed the Twitch chat into a tool window, and my Kid Money Manager project (which is in production and being used by its first customer: me).

                                Also updating my “Human Learning” presentation for a local meetup in San Mateo, CA on Wednesday evening.

                                1. 1

                                  Oh this brings back memories of hacking the CGA card, but nothing like what this person does. Very nice!

                                  1. 1

                                    I’ve been using Publii that uploads to Netlify, but I don’t like it, and it’s always been meant to be temporary anyway.

                                     I’ve played with Hugo and other static site generators, but I could never find a theme that worked for me. My next step is to move to straight HTML and continue uploading to Netlify until I find something better. I’m a bit of a control freak over my formatting, especially since I have lots of code examples, which is why a lot of the themes haven’t worked for me.

                                     If I get sick of writing HTML directly, then I may throw together my own static site generator in Java just for educational purposes, not because I think the world needs another one.

                                    1. 2
                                      • Preparing to train a bunch of C# developers in Java next week in the Denver, CO area
                                        • Will be interesting to see how good they are – it’s a big Enterprise company, so wide range of skills
                                      • Creating “lessons learned” notes from my daily live code streams that I’ve been doing on Twitch
                                        • My main app is the “Kid Money Manager”, which is just a few more hours away from being put into production
                                        • Been having a blast doing my “Java Makeover” stream, so writing up notes from that as well
                                      1. 2

                                         As previously mentioned here, I’ve been doing livecoding of a Java/Spring app on Twitch [here](https://twitch.tv/jitterted) called “kid-bank” (tracking how much money my son has, since we didn’t think it was worth opening up a “youth” account for him).

                                        The open-source OBS broadcasting software has been rock solid, but the usability sucks (the bane of much open-source). And, since the streaming is mostly gamer-oriented, finding the right settings has been lots of trial-and-error, especially on the Mac where OBS sees the retina screen differently than other software, so resolutions are wonky. SwitchResX may be a solution, but haven’t tried that route yet.

                                        Twitch.tv as a platform is OK, though it’s surprisingly lacking in features (you can’t schedule recurring broadcasts, for one). Also, the latency is a bit high for me (6-8sec) and that means responding to chat comments is a bit…weird (may be due to the odd resolution I’m broadcasting in). The 3rd party app support is aimed at Windows, which makes sense given the gamer-focus, so for me it’s all a hodgepodge of little apps and lots of manual work to get it right.

                                        All that aside, it’s been a blast doing the livecoding! On my first session, I started out assuming that nobody was out there – because I wasn’t talking to anyone about it yet – but someone dropped by and had some nice/useful comments, which made it more fun. I’ve now done 9 live sessions – one of them was over 3 hours, which is waaay too long for livecoding – and plan on doing them as much as possible, at least once a week. I’ve got two more scheduled for this week.

                                         I’ve got a half-dozen sessions that I still have to put through the video editing process; the ones done so far have been posted on YouTube. On the one hand, I’ve gotten really good at editing out the many “uhhmmmm”s that I can’t seem to help (I am doing a lot of thinking out loud); on the other hand, I’ve spent way too much time doing the editing given that I’m not making any money off it (yet?).

                                        My blog post has all the details if you’re interested: https://www.tedmyoung.com/kid-bank-the-money-tracker-for-kids/.

                                        1. 5

                                          Why are the first two pages just a picture of a fox?

                                          Like, it doesn’t have anything to do with the content; even if you stretch, it’s just a picture of a fox.

                                          A 2000x1200px picture of a fox.

                                          1. 3

                                            If it makes you feel better, people will still worry more about minifying their production code while doing stuff like that.

                                            1. 1

                                              Hmm. It’s pretty reasonably sized on mobile. I guess Medium just makes it scale up until it’s wide enough to fill the browser width?

                                              1. 2

                                                Yup, welcome to Medium’s idea of a responsive web page.

                                            1. 12

                                              Working on https://www.laarc.io/ People keep saying how eerily similar it feels to HN circa 2007.

                                              There’s been an influx of new members seemingly out of nowhere each day. And they tend to be some of the best hackers I’ve met. JungleCat’s “scent map” pathfinding technique is particularly impressive: https://www.laarc.io/item?id=285

                                              There are a bunch of easter eggs scattered throughout the site, which users seem to like. And setting up a Discord server turned out to be super important for building the community. https://discord.gg/qaqkc9z

                                              We’re gearing up for an actual Show Lobsters post. Emily has been one of our secret weapons; she’s like an assassin that targets items on a todo list. It’s scary watching her work. We have an iPhone app coming soon, powered by our firebase API (e.g. https://laarrc.firebaseio.com/v0/item/1.json?print=pretty)

                                              screenshot

                                              The API is nearly identical to HN’s. https://github.com/HackerNews/API But we’ll be adding support for tags, which is one of N hundred things to do this weekend.

                                              1. 4

                                                Very cool! What do you see as some notable differences between laarc and Lobste.rs?

                                                1. 4

                                                  I think the communities will always complement one another. The main difference can be illustrated by example:

                                                  https://www.laarc.io/l/obscure

                                                  https://www.laarc.io/l/chess

                                                  https://www.laarc.io/l/startups

                                                  https://www.laarc.io/l/happiness

                                                  https://www.laarc.io/l/hn

                                                  https://www.laarc.io/l/templeos

                                                  And personally, I find it motivating to deliver a high-quality production-grade site written in modern Arc: https://github.com/laarc/laarc

                                                  The world is starting to notice that writing webapps in React or Vue can end up being an order of magnitude more work. Elixir is doing some pretty wonderful work with realtime serverside re-rendering. Arc has had this since the very beginning. And you get some amazing features like thread-local variables.

                                                  You know how in Express you have to pass around req and res everywhere? In Arc you can just write (the-req*) to get the current request. Many functions have been rewritten to take advantage of this. For example, the authentication code is one of the simplest you’ll see anywhere: if you call (get-user) with no arguments and it returns nil, the user isn’t logged in. If (get-auth) returns nil, that means the user visited a link like /logout but it didn’t have the proper /logout?auth=<hash> code, so therefore the server should ignore the logout attempt. And so on.

                                                  But really, the most exciting part of laarc is the people. Everyone has so many neat personal projects, and everyone is so nice. It’s a spirit I’ll be embedding into the site’s soul.

                                                2. 2

                                                  Looks really cool. I’ve somehow gotten the idea that HN is inflated by people doing social media campaigning, in more or less honest ways, to get onto the front page for advertising. That’s what I like about Lobsters: it simply feels more authentic somehow.

                                                  1. 1

                                                    Looks interesting, though I wish it would use larger default font sizes; 10pt is pretty hard to read for the titles, and I have to zoom in to 125% to make it easier to read.

                                                  1. 2

                                                    Presentation: Working on a 2.0 version of my “Human Learning” talk that I gave 3 times in 2018 (two of which were recorded: https://www.youtube.com/playlist?list=PLBHctPrH7Z2-BcpRWJ0uFanCbTHqdJ_1T). Have been submitting it to conferences for 2019, so we’ll see if any take me up on it. Also have plans to expand it to an online video course.

                                                    Live Coding: Going to do some live coding on Twitch now that I have the basics working. Haven’t figured out my regular schedule yet, likely weekend mornings. Going to do it in stealth while I work the kinks out – finding good, recent information on doing this well on a MacBook Pro has not been easy. Have an in-progress blog post about it.

                                                    Kid-Bank: Continuing to work on a money tracking project that tells my son how much he has, since we didn’t want to bother opening up a “youth” bank account (the ones I found don’t pay interest, as if kids are 2nd class citizens or something). This is what I’ll be coding live (see above) and is perhaps a bit overkill using Spring Boot, Microsoft Azure, et al, but is also fodder for upcoming training courses that I’m delivering.

                                                    1. 5

                                                      I find it simplest to ask, which is more likely to be lost or corrupted accidentally, white space or not white space? I copy and paste some code from your blog. Am I more likely to lose indentation or curly braces?

                                                      1. 2

                                                        And yet which are you more likely to understand from a novice or person with stupid tools, the well indented stuff or the well bracketed stuff without any leftward whitespace?

                                                        1. 5

                                                          The code that runs without me painstakingly recreating the correct indentation.

                                                          1. 1

                                                            The problem with things that are easy for novices is the “expertise reversal effect”, where it hurts non-novices (those approaching or having expertise) possibly more than it helps the novices. It’s a difficult balance, for sure.

                                                            1. 2

                                                              Especially in this field, with its current growth providing many more novices than experts.

                                                        1. 6

                                                          I’ve been working on my journal software a lot over the holiday, and have been using it to track various things, like diet and receipts. I’ve also been using it to track my creative ideas. In the past week, I’ve been refining the facilities exposed to the Lua scripting that I integrated into this software the week before.

                                                          This week, I plan on building some basic web-based tools with it, like a small shopping list page, and an agenda mode that allows me to see what I’ve marked as TODOs for each given day in a week, so I know if I need to move things around.

                                                          I have also started working on tracking and classifying my expenses better, in an attempt to get better at saving my money. So far, that’s a work-in-progress Python script over a CSV of financial activity from my bank.

                                                          There are other ideas I have in mind for this week, but those are the main two that I know I’ll be working on for sure. And since I’ve been following this set of emails, I’ve started trying to embrace working on only a few things, rather than spending my time dreaming of all the things I could do.

                                                          1. 2

                                                            Ooh, those Creative Compass essays go well with Jon Acuff’s “Finish” (that I just started reading). Perfectionism is such a huge issue for me.

                                                            1. 1

                                                              I’d be interested to see what you think when you finish that book (no pun intended). The one thing about Growing Gills that’s been a bit of a difficult thing for me, at least without someone else to go through it with, is that it’s more of a workbook than a “read this and just get ideas” book. That being said, the emails that I’ve been signing up for alongside it have been good, and I think once I get to a certain place (I’m currently addressing other issues in my life outside of creative output), it may be a valuable tool.

                                                          1. 4

                                                            I think it’s also because programming is mostly an unregulated industry. Aviation software is no different from other software and yet they use formal methods and specifications. When the cost of errors is high enough in some industry then that industry implements methods and practices that either end up being formal methods or close enough approximations.

                                                            1. 2

                                                              Agreed, and to me, that’s why software development is not yet “engineering”.

                                                              Costs are interesting, because surely companies are spending/wasting/leaking money due to lack of use of formal methods, but in aviation the cost is usually lives lost/almost lost or huge lawsuits. In most corporations, though, “bugs” are seen as the cost of doing business and, unless there’s some huge (usually press-release-worthy) problem, there’s usually no push to look at such methods.

                                                            1. 3

                                                              {rant}

                                                              I’m all for writers being paid for their work, but some of these Medium “articles” are no better written than a million other posts. To put this one under the “paid” section is certainly not appropriate for this shallow analysis.

                                                              {/rant}

                                                              Apparently the TL;DR is YAGNI (You Ain’t Gonna Need It), summarized at the end:

                                                              Design a system that is easy to maintain and solves the problems you need to solve today.

                                                              1. 1

                                                                The real kicker is: I know I’ve read better articles on this topic, but this one is almost more memorable because it’s associated with Google+.

                                                              1. 6

                                                                I feel like everyone should read Chuck Moore’s Programming a Problem Oriented Language (1970) (also available on Amazon), if not for the walkthrough of building a beautifully designed program, then for snippets of wisdom like this:

                                                                The Basic Principle has a corollary:

                                                                • Do Not Speculate!

                                                                Do not put code in your program that might be used. Do not leave hooks on which you can hang extensions. The things you might want to do are infinite; that means that each one has 0 probability of realization. If you need an extension later, you can code it later - and probably do a better job than if you did it now. And if someone else adds the extension, will they notice the hooks you left? Will you document that aspect of your program?

                                                                1. 1

                                                                  Yep, that’s the YAGNI (You Ain’t Gonna Need It) principle at work, which I talk about every time I teach a class. Good to know an early source for the principle!

                                                                  1. 1

                                                                    Not sure what library this is using, but it just crashed Chrome on my Mac after chewing through 4GB of memory and 100% CPU.

                                                                    1. 6

                                                                      Like many security holes, this could’ve been much worse.

                                                                      Key point:

                                                                      Of course you could run your own internal Maven repository for your company and have every project rely on that exclusively, with only carefully reviewed and verified packages being imported there. Most won’t have time to be so careful about dependencies.

                                                                      If you’re at a company that’s producing apps for internal or external use, you need to be using an internal managed repository, otherwise you’re wide open to these kinds of attack vectors. And this is not a new problem, it’s been around for decades.

                                                                      1. 3

                                                                        When I read this, I thought it was interesting, but at the same time I was kind of appalled at how poorly companies manage their internal documentation. It’s certainly not uncommon, as I’ve seen this behavior at just about every company I’ve been at (eBay, Google, Apple, Guidewire, etc.). Apple, perhaps not surprisingly, was the absolute worst of the bunch, since they’ve got internal systems going back decades (and their NIH syndrome and IP paranoia hurt as well). For the others, it varied from group to group, but the entropy of internal wikis, etc., was always high.

                                                                        I always wished companies would measure the loss of productivity due to outdated internal docs and then use that to justify hiring a dedicated “wiki gardener”, but gave up that fight long ago.