1. 2

    Continuing to do some work on TinyDevCRM (or “tinydev”): https://bytes.yingw787.com/posts/2020/02/08/tinydevcrm_1/ (Can’t publish a new post due to new lobste.rs rules, which is all good with me!)

    Hopefully I can get to the MVP stage by the end of next week, though I’m not sure how likely that is.

    1. 1

      I use links2 as a CLI-based browser. It’s surprisingly great and I learned a good deal about what’s important in an HTML document and how I want to write front-ends. I’ve also looked at Midori, but if I’m going with a GUI I still like Firefox.

      Chrome I use for WebRTC-related tasks like video chats with friends and professional contacts. I’m definitely not a FOSS purist, but I try to rely on FOSS whenever I can.

      1. 3

        I come to Lobste.rs for the genuine tech conversations that I find harder to get on other sites like Hacker News. Beyond Hacker News and Lobste.rs, other sites are kind of just “here’s how to create a basic REST API” or some fluffy crap like that, and I don’t bother going there. I would be really sad if I didn’t have you guys to talk to because we didn’t take care of our community.

        I really like that the two people I’ve invited to Lobste.rs prefaced their request for an invite by discussing some of the blog posts I submitted here, which confirmed their interest and their ability to add knowledgeably to the discussion.

        I think it’s so great that Lobste.rs is valuable now, as those requests showed. It’s important we understand we have leverage in extending invites, and we should learn to say no when we’re uncomfortable or unsure. I personally find it hard (which is why I’m mentioning it), but I find the alternative of losing this community scarier.

        1. 3

          It is always super weird to see someone quote me. Really cool! But also super weird. I’m not fully adjusted to that yet ;)

          If you find Haskell’s type system interesting, I’d also recommend checking out an ML! The module system in ML is really cool, powerful, and unique. I’m shocked more languages haven’t adopted similar module systems.

          Also, a couple of quick pedantic nitpicks:

          I was reading through some of Hillel Wayne’s blog posts, and one of them discussed the Curry-Howard correspondence, which somehow proves a 1:1 association between aspects of a mathematical proof and aspects of a type system. You can convert a proof into a type, if your type system supports it. I think Haskell’s type system respects this. I don’t think Python’s type system does.

          I don’t think I explained the CHC very well in that post. All sound static type systems are “proofs”, where what you are proving is “this is well-typed”. This is true for both Haskell and Python (mypy). But different languages can encode different consequences of being well-typed: for example, if you can express the type of sorted lists, you can say “I’ve proven this is well-typed ⇒ this sort function correctly sorts the list.” Haskell’s type system is more powerful than most other commercial languages, but it’s not powerful enough to encode arbitrary theorems. For that you need dependent typing a la Idris or Coq.

          Also, proving arbitrary theorems at the type level is really, really hard. Tradeoffs!
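
          To make the propositions-as-types idea concrete, here’s a minimal sketch (names are just for illustration): each polymorphic type below reads as a logical theorem, and any total implementation of it is a proof of that theorem.

          -- "A implies (B implies A)": the type is the theorem, the body is the proof.
          weaken :: a -> b -> a
          weaken x _ = x

          -- Transitivity of implication: (A -> B) -> (B -> C) -> (A -> C).
          trans :: (a -> b) -> (b -> c) -> (a -> c)
          trans f g = g . f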

          Python has hypothesis, but having used it a tiny bit for an open-source contribution, I don’t think it’s the same without the Haskell type system.

          Gotta defend hypothesis a bit here! It sounds like you were using quickcheck in the context of learning from a book, while you were using hypothesis for a real-world project. In that context quickcheck is gonna seem more elegant. But Hypothesis is world-class and a lot of more modern PBT libraries follow its example. It’s a lot better at generating inputs like “two dictionaries where the values of the first dict are a superset of the keys in the second.”

          1. 2

            OCaml seems really cool, and I’ve heard of it being used in industry; is it the most popular ML for practitioners? I also know of ReasonML, which Facebook used to build Facebook Messenger. But I think there’s still so much to learn about Haskell, and given the choice between going broad across type systems and going deep, I’d rather go deep into Haskell and become intermediate, or even advanced, if I’m fortunate enough to have the time and resources to do so!

            Hmm, I didn’t know that Idris or Coq would be powerful enough to encode arbitrary theorems. If I were to study type theory further (an interesting subject), I’d try to learn those!

            Yeah, I think Corbin mentioned that Hedgehog was closer to hypothesis than QuickCheck was; I think I was too quick to dismiss hypothesis. I do like the notion of using generators over types, and the integrated shrinking (when I wrote this I assumed that QuickCheck had integrated shrinking; I think that’s actually a Hedgehog or maybe hypothesis innovation). I definitely like your blog post’s point about creating integration tests from a combination of hypothesis and pycontracts; I’ll give that a shot for my next project :D

            1. 2

              QuickCheck does shrinking. The Arbitrary typeclass has a shrink :: a -> [a] method which should return a list of smaller versions of its argument.

              If you don’t want to create an instance of Arbitrary, you can use the function forAllShrink. Its type is a more polymorphic version of (RandomSeed -> a) -> (a -> [a]) -> (a -> Bool) -> Bool: the first argument is a generating function, the second is a shrinking function, and the third is the property to check.
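
              For instance, a rough sketch of that API in use (the property and the shrinker here are toy examples, just for illustration):

              import Test.QuickCheck

              -- Generate Int lists with listOf/choose, shrink them by dropping one
              -- element at a time, and check that reversing twice is the identity.
              prop_reverseTwice :: Property
              prop_reverseTwice =
                forAllShrink (listOf (choose (0, 100 :: Int)))   -- generating function
                             dropOne                             -- shrinking function
                             (\xs -> reverse (reverse xs) == xs) -- property to check
                where
                  dropOne xs = [take i xs ++ drop (i + 1) xs | i <- [0 .. length xs - 1]]

              Running quickCheck prop_reverseTwice exercises both the generator and the shrinker.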

          1. 19

            As somebody who first learned Python, and then Haskell, and then left both, this review reminds me of many of my own experiences.

            I was reading through some of Hillel Wayne’s blog posts, and one of them discussed the Curry-Howard correspondence, which somehow proves a 1:1 association between aspects of a mathematical proof and aspects of a type system. You can convert a proof into a type, if your type system supports it. I think Haskell’s type system respects this. I don’t think Python’s type system does.

            Yes, but undefined happens to be a value of every type, in Haskell; this corresponds to being able to prove any statement, including the false ones. So while it is indeed possible to turn proofs into programs, it’s not possible to turn Python or Haskell programs into useful valid proofs.

            (Somebody may pedantically note that every Python or Haskell program gives a proof, just not of the fact that we might hope that they prove; they usually have Python-shaped or Haskell-shaped holes in them.)
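
            A tiny illustration of that point (toy example): undefined lets us “prove” propositions that are certainly false, so a well-typed Haskell program is not automatically a proof of what its type seems to claim.

            -- Reads as the proposition "A implies B", which is not a theorem...
            bogus :: a -> b
            bogus _ = undefined   -- ...yet it type-checks, because undefined inhabits every type.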

            I don’t think you can do anything like [fmap] with Python or any other practitioner’s language that I’m familiar with.

            I think that this is part of the roots of the Haskeller memetic complex. Because Haskell has the lightweight syntax of an ML, and its prelude is rich in algebraic tools, suddenly there is a blindness as to the fact that the tools have been there all along. For example, in Python 3:

            >>> plusOne = lambda x: x + 1
            >>> lmap = lambda f: lambda xs: [f(x) for x in xs]
            >>> nada = object()
            >>> mmap = lambda f: lambda x: nada if x is nada else f(x)
            >>> dot = lambda f: lambda g: lambda x: f(g(x))
            >>> zf = dot(lmap)(mmap)(plusOne)
            >>> zf([1, 2, 3])
            [2, 3, 4]
            >>> zf([1, nada, 3])
            [2, <object object at 0x7ff65c5db0d0>, 4]
            

            Even the base error type, bottom or _|_, is defined as an evaluation that never completes successfully, either because a computation failed, or because there’s an infinite loop. … I don’t think Python’s BaseException has a built-in notion of infinite loops evaluating to an error condition. This appears super powerful to me, because the big error type introduced by going over network connections are network timeouts, which appear to most programs as infinitely long computations.

            There are many opinions on this, but I am of the opinion that the sort of incompleteness generated by an unresponsive network peer, known as FLP impossibility or perhaps FLP “incompleteness” of network requests, is not the same as the incompleteness that Turing, Gödel, Tarski, Lawvere, and others showed via diagonalization. They are connected, in that the FLP result implies that a Turing machine which simulates many computers on a network may encounter a Turing-complete subproblem which prevents the simulated network peers from responding.

            In Alice-and-Bob framing, suppose Alice is waiting for Bob to reply over the network. The FLP impossibility of Alice having to wait forever for Bob’s reply is connected to the Turing undecidability of whether a simulation of Alice and Bob will hang forever while simulating Bob’s preparation of their reply.

            Haskell has Test.QuickCheck, a property-based testing framework that generates values to check a declared property. Python has hypothesis, but having used it a tiny bit for an open-source contribution, I don’t think it’s the same without the Haskell type system.

            I think that this is another memetic root; specifically, the root is type-driven dispatch. To compare apples to apples, a better analogue of QuickCheck in Python might be PayCheck, while Hedgehog is closer to Hypothesis in Haskell. As soon as the one-dimensional comparison becomes two-dimensional, the flexibility of Hypothesis, and crucially the ability to select arbitrary strategies for generating test cases, far outweighs the convenience of simple type-checking. This makes sense, in a certain way; if Haskell’s type system were sound in the first place, then QuickCheck would be little more than a totality checker. Hypothesis’ own author recommends Hedgehog over QuickCheck, and in my language, Monte, I implemented both a classic unit-test runner and also a Hypothesis-style property test runner.

            Ultimately, I don’t think [indentation] is a big issue. Haskell has code auto-formatters like haskell-formatter, much like Python’s black. Use it and move on. Bikeshed about something else, you know?

            Since you didn’t mention it, and none of your code samples seem to use it, and to give an example of something to bikeshed over, it’s worth pointing out that Haskell supports curly-brace-oriented syntax as well as indentation. This is a recurring topic in programming language design: We want code to be readable, and that leads to caring about whitespace.
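
            For example, roughly, the same (toy) definition in layout syntax and again with explicit braces and semicolons:

            -- Layout (indentation) syntax:
            describe :: Int -> String
            describe n =
              let sign = if n < 0 then "negative" else "non-negative"
              in  sign ++ " number"

            -- The same definition with explicit braces and semicolons:
            describe' :: Int -> String
            describe' n =
              let { sign = if n < 0 then "negative" else "non-negative" }
              in  sign ++ " number"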

            1. 8

              This makes sense, in a certain way; if Haskell’s type system were sound in the first place, then QuickCheck would be little more than a totality checker.

              I don’t think that follows unless you’re using incredibly specific types. There are many total functions of type Ord a => [a] -> [a], but not all of them would pass the tests of a sorting function.
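
              A quick illustration (toy example): both of these are total and share that type, but only one of them sorts.

              import Data.List (sort)

              sortA, sortB :: Ord a => [a] -> [a]
              sortA = sort      -- actually sorts
              sortB = reverse   -- total and well-typed, but fails any sorting property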

              1. 3

                This is a fair observation. QuickCheck test selection indeed is driven not just by types, but by typeclasses and associated behaviors, and I was remiss to dismiss it so glibly.

                The main desire is that one might express algebraic laws, and ultimately all of these testing tools are only approximations to actual equational reasoning. That doesn’t preclude a type system from giving useful annotations.

              2. 4

                Wow, amazing comment! I understand about half of this. I wish I had more than one upvote to give you.

                I am really curious as to whether Python’s NoneType would map to Haskell’s bottom in that regard. Would that be the Python-shaped hole you’re talking about?

                I did see a tutorial by David Beazley on implementing a lambda calculus in Python. I didn’t think it was all that practical tbh, and I think he said the same in the video: https://www.youtube.com/watch?v=pkCLMl0e_0k

                Could you give a brief summary of how FLP impossibility and Turing incompleteness are different? I know of “Turing completeness” and that it’s a thing, but my knowledge ends there. I haven’t heard of FLP impossibility before. Bookmarked that website.

                Huh, the author of hypothesis says QuickCheck doesn’t have integrated shrinking. From his statement on generators rather than type classes, I understand your point on how Hedgehog and Hypothesis are more similar.

                Hmm, there weren’t a whole lot of examples of curly-brace-oriented syntax. Most examples used indentation. The Haskell Wikibook does mention it though: https://en.wikibooks.org/wiki/Haskell/Indentation#Explicit_characters_in_place_of_indentation

                (I do care about whitespace and clean code; I read through “Clean Code” my second year out of college, but in my experience as a practitioner so far I’d rather use a tool and automated process to have a consistent way of doing things with multiple people, in order to spend political capital elsewhere.)

                1. 9

                  (Phew this got long, time to turn it into a blog post ;) )

                  I am really curious as to whether Python’s NoneType would map to Haskell’s bottom in that regards.

                  It’s important to distinguish between the empty type and the unit type. In (type) theory there are no values of the empty type, and there is one value of the unit type. In a language without _|_, like Agda, we might define them something like this:

                  data Empty : Type
                  
                  data Unit : Type where
                      unit : Unit
                  

                  This says that Empty is a Type, that Unit is a Type, that unit is a Unit. Since we’ve given no constructors (or “introduction forms”) for Empty, there’s no way to make one. Likewise we can write function types like Int -> Empty which have no values (since there’s no way for any function to return a value of type Empty), we can write tuple types like Pair Int Empty that have no values, etc. We can also write function types like Empty -> Int which do have values (e.g. const 42), but which can never be called (since there are no Empty values to give as an argument). Incidentally, the fact that we can have types with no values is one of the reasons we can’t have a value with type forall a. a (in Haskell) or (a : Type) -> a (in Agda): they claim to return a value of any type, but we might ask for an Empty.

                  The unit type is trivial, literally. Since there’s only one value of this type (unit), whenever we know that a value will have type Unit we can infer that the value must be unit, and hence (a) if it’s a return value, we can always optimise the function to simply unit and (b) if it’s an argument, it has no effect on the return value (there’s nothing to branch on). In this sense, the unit type contains 0 bits of information (compared to Bool which contains 1 bit).

                  As an aside, Either is called a “sum type” because it contains all the values from its first argument plus those of its second: Either Unit Unit is equivalent to Bool (with values Left unit and Right unit). Pairs are called “product types” since they contain every combination of their first and second arguments, which is the number of values in each multiplied together. In this sense Empty acts like zero: the type Either Empty a doesn’t contain any Left values, so it’s equivalent to a, just like 0 + a = a for numbers. Likewise Pair Empty a contains no values, since there’s nothing to put in the first position, just like 0 * a = 0 for numbers. Either Unit a acts like 1 + a, since we have all of the values of a (wrapped in Right) plus the extra value Left unit; note that this is also the same as Maybe a, where Right acts like Just and Left unit acts like Nothing. Pair Unit a is the same as a since each a value can only be paired with one thing, and we know that thing is always unit; this is just like 1 * a = a with numbers.
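
                  Written out as Haskell, a quick sketch of those isomorphisms (function names are just for illustration):

                  -- Either () a adds exactly one extra value to a (1 + a), i.e. Maybe a:
                  toMaybe :: Either () a -> Maybe a
                  toMaybe (Left ())  = Nothing
                  toMaybe (Right x)  = Just x

                  -- Pairing with () adds no information (1 * a = a):
                  dropUnit :: ((), a) -> a
                  dropUnit ((), x) = x

                  -- Either () () has exactly two values, so it is just Bool (1 + 1 = 2):
                  toBool :: Either () () -> Bool
                  toBool (Left ())  = False
                  toBool (Right ()) = True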

                  Back to your question: things get confusing when we try to apply this to “real” languages, especially since a lot of the terminology is overloaded. In Haskell the type Unit is written () and the value unit is also written ().

                  In Haskell the type Empty is called Void, but that is not the same as void in languages like Java! In particular, remember that we cannot have a function which returns Empty, so void being Empty would mean we could not implement methods like public static void putStrLn(String s). Such methods certainly do exist, since they’re all over the place in Java, but what happens if we call them? They will run, and return a value back to us (hence it can’t be Empty). What is the value we get back? We know, without even running the method, that we’ll get back null. That’s just like the Unit type (where we know the value will be unit without having to run the code). Hence Java’s void acts like Unit, and null acts like unit.

                  If we follow a similar line of reasoning in Python, we find that None acts like unit (or Java’s null) and hence NoneType is like Unit (or Java’s void). AFAIK Python doesn’t have anything which acts like Empty (Haskell’s Void) since, lacking any values, Empty is only useful for type-level calculations (of the sort which get erased during compilation), which Python doesn’t tend to do.
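
                  In Haskell terms, a small sketch of the above using Data.Void:

                  import Data.Void (Void, absurd)

                  -- () is Haskell's Unit: its only (non-bottom) value is ().
                  theUnit :: ()
                  theUnit = ()

                  -- Void is Haskell's Empty: a function out of it exists, but (ignoring
                  -- bottom) it can never be called, since no Void value can be constructed.
                  unreachable :: Void -> Int
                  unreachable = absurd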

                  That just leaves “bottom”. There are a couple of ways to think about it: we can be “fast and loose” where we ignore bottom as a nuisance, which mostly works since bottom isn’t a “proper” value: in particular, we can’t branch on it; hence any time we do a calculation involving bottoms which results in a “proper” value, we know that the bottoms were irrelevant to the result, and hence can be ignored. Any time a bottom is relevant, we have to abandon our nice, logical purity in any case, since catching them requires IO, so why bother complicating our pure logic by trying to include them?

                  Alternatively we can treat bottom as an extra value of every type, in a similar way to null inhabiting every type in Java. From this perspective Haskell’s Void type does contain a value (bottom), and the () type contains two values (() and bottom). In this sense we might think of Java’s void corresponding to Haskell’s Void, but I find this line of thinking difficult to justify. In particular, we can branch on null in Java (in fact we should be doing such “null checks” all the time!), whereas (pure) Haskell doesn’t even let us tell whether we have () or bottom.

                  As a final thought, whenever comparing dynamic and static languages, it’s important to not conflate different concepts which just-so-happen to have similar names. In particular I find it useful to think of dynamically typed languages as being “uni-typed”: that is, every value has the same type, which we might write in Haskell as a sum like data DynamicValue = S String | I Int | L List | O Object | E Exception | .... Python is actually even simpler than this, since “everything is an object” (not quite as much as in Smalltalk, but certainly more than e.g. Java), so Python values are more like a pair of a class (which itself is a Python value) and a collection of instance properties.

                  This is important because “dynamic types”, like in Python, aren’t directly comparable to “static types” like those of Haskell; they’re more comparable to “tags” (e.g. constructors). In particular, think about a Python function branching based on its argument’s “dynamic type”; this is the same as a Haskell function branching on its argument’s constructor. What does this tell us about “bottom” in Python? From this perspective, there isn’t one (at least not reified; an infinite loop might be comparable, but it’s not something we can e.g. assign to a variable). Python’s None is just a normal value like any other, in the same way that Haskell’s Nothing is a normal value (we might sometimes use it to stand for an error, but that’s not inherent to the system); likewise Python’s exceptions are also normal values (we can assign them to variables, return them from functions, etc.); the idea of “throwing and catching” (or raise/except for Python) is actually a perfectly normal method of control flow (it’s actually a limited form of delimited continuation), and this is orthogonal to error representation and handling.

                  This makes raising an exception in Python very different to triggering a bottom in Haskell, since Haskell provides no way to branch on a bottom (or whether we even have one). In Python we can raise a value (with the exception “tag”) as an alternative to using return, and we can catch those values, inspect their tag and value, etc. to determine what our overall return value will be, with none of this being visible by the caller. To do anything like that in Haskell we need some “proper” data to inspect and branch on, hence why I say None, exceptions, etc. are just normal values (even though they’re used to represent errors, and we treat them specially because of that; but the same could be said about Either String values in Haskell, for example). Consider that in Haskell the only way to even check if a bottom exists is to use IO, but the idea behind IO is that it’s like State RealWorld, i.e. every IO a value is a pair of (RealWorld, a), so conceptually we never “actually” see a bottom; it’s more like triggering a bottom changes the RealWorld, and we’re branching on that.

                  1. 4

                    Thank you for the clarification on None vs bottom! I think it definitely helps me understand how different Python is from Haskell.

                    I like the idea of saying data DynamicValue = S String | I Int | L List | O Object | E Exception for dynamic languages, and I think from a type theory perspective I finally understand what people say when they say “In {Ruby, Python}, everything is an object”.

                    I hate try/catch logic in Python. It works, but I much prefer the functorial structure espoused by Haskell.

                    I agree on the description of IO! The book does mention that IO is technically without effects, since it is merely a description of what IO actions should take place rather than an imperative execution of them.

                    1. 4

                      I like the idea of saying data DynamicValue = S String | I Int | L List | O Object | E Exception for dynamic languages

                      That’s a good intuitive starting point, but it’s not the whole story. The “unitype framework” is a way of understanding dynamic types from a static typing perspective, but we can also start from a “dynamic typing perspective” and see where it gets us. A dynamically-typed language is one where you can manipulate type definitions at runtime, and test for type membership at runtime. So saying DynamicValue = S String | I Int ... will always be incomplete, because we might not know something’s type even after creating it!

                      An intuition I like here is that dynamic types are boolean predicates: x is of type T iff T(x). This means we can define the type of, say, all functions with format as an optional keyword, or the type of all iterables that aren’t strings, or the type of all types that appear as return values in functions.

                      (The downside of this is it gets hellishly confusing very quickly, and allows for a lot of spooky action at a distance. Also, there’s no guarantee that checking type membership is total. That’s one reason even highly dynamic languages shy away from going deep in dynamic types.)

                      I think from a type theory perspective I finally understand what people say when they say “In {Ruby, Python}, everything is an object”.

                      Everything can also be an object in a static system, too. My rule of thumb for what “everything is an object” means: are object methods also objects? Is the call stack an object? If you are using message passing, are the messages objects? Can you send messages to other messages?

                      1. 1

                        RE: “Everything is an object”, I usually put it in quotes since the meaning is usually context-dependent. A colleague once said that to me regarding Java, and I replied that Java classes aren’t objects yet Python classes are; and Python’s if branches aren’t objects yet Smalltalk’s are (at least, the closures are); and so on.

                        RE: Unitypes, etc. yes the static/dynamic distinction can be built up from the dynamic side too (I tend not to think that way since fixed points and things trip me up). The important thing is that people aren’t talking past each other, e.g. it’s fine to argue about whether “foo can/can’t do bar”, but not so useful when each person is using “foo” and “bar” to mean different things ;)

                    2. 3

                      The overloaded word “void” across languages is confusing. In Rust and Typescript (and others?) they called it “never”, which I think makes it really clear: values of this type never actually exist. And “this function returns never” is like a pun: “this function never returns” and “this function’s return type is ‘never’”.

                      1. 3

                        This was wonderfully clear. Thank you.

                  1. 8

                    I’m pleased to say that I made it to the end of this 1,857 page (by the e-reader PDF version) montrosity(sic).

                    Congratulations. It’s a long road but IMHO very worthwhile. Well done.

                    Does newtype exist as part of native top-level APIs for Haskel(sic) libraries? How much churn do production Haskell type signatures suffer from? This is an unknown quantity to me.

                    I have never worried about this in my few years of writing Haskell. Often people export the newtype but not its constructor, so you as a client of a library cannot know nor care what the representation is behind the newtype.
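
                    A typical pattern looks something like this (a made-up module, just to illustrate): the newtype and a smart constructor are exported, the data constructor is not, so clients never see or depend on the representation.

                    module Tiny.UserId
                      ( UserId        -- the type is exported...
                      , mkUserId      -- ...but only this smart constructor can build one
                      , userIdText
                      ) where

                    newtype UserId = UserId String   -- representation hidden from clients

                    mkUserId :: String -> Maybe UserId
                    mkUserId s
                      | null s    = Nothing
                      | otherwise = Just (UserId s)

                    userIdText :: UserId -> String
                    userIdText (UserId s) = s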

                    The Semigroup/Monoid and Applicative/Monad upgrade paths are both examples of probably the hardest change to make in a Haskell environment - introducing a new superclass. These are managed carefully, over the course of several GHC releases, and often include transitional packages that make it easier to write code for both the old and new versions (e.g., the fail package that assists with the MonadFail migration completed by GHC 8.8). I hope we get a better solution one day, but for now I believe two things are true:

                    1. These changes only happen when the power:weight ratio makes them well worthwhile.
                    2. The GHC team has done a good job of managing these changes across multiple GHC versions.
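
                    For example, the Semigroup/Monoid change (completed around GHC 8.4, if I remember right) mostly amounts to adding a Semigroup instance alongside an existing Monoid one; roughly:

                    newtype LogLines = LogLines [String]

                    -- Semigroup is now a superclass of Monoid, so (<>) lives here...
                    instance Semigroup LogLines where
                      LogLines a <> LogLines b = LogLines (a ++ b)

                    -- ...and Monoid only needs mempty (mappend defaults to (<>)).
                    instance Monoid LogLines where
                      mempty = LogLines []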

                    Haskell has no null definition of type Char

                    I don’t understand this. The “not-a-”-ness that would be expressed by a “null Char” is encapsulated by the Nothing constructor of the Maybe data type.
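
                    In other words (a tiny toy example), “there might be no character here” lives in the type rather than in a sentinel value:

                    -- No null Char needed: the absence of a character is a Nothing.
                    firstChar :: String -> Maybe Char
                    firstChar []      = Nothing
                    firstChar (c : _) = Just c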

                    Haskell can raise errors partway through evaluating an error message. This (feature?) seems alien to me:

                    I assume you mean “evaluating an expression”. This is a consequence of being a non-strict language. You can’t stream results and guarantee the absence of errors across a structure at the same time if you allow undefined or error into your language. Note that this works:

                    Prelude> foldr const 3 (2:undefined)
                    2
                    

                    Re: your broken links complaint. Here is my favourite page for GHC language extensions, and I should probably just make it my homepage: https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/glasgow_exts.html

                    1. 3

                      Thank you! I saw the spelling mistake I made; I updated the copy and credited you :)

                      It’s great to hear support for the upgrade paths for Semigroup/Monoid and Applicative/Monad. Both of them, after reading the error messages, were rather trivial to solve, so I wouldn’t say they should be a blocker for anybody, especially those of us who are already practitioners. The broken-links issue is in the same vein; even though it was a bit irritating, it’s not like the pages didn’t exist anymore, and I found where I needed to go with one DuckDuckGo query.

                      As for the “not-a-“ness, I think another user here or on Hacker News pointed out that in many languages, Char maps to Int (like an ASCII table), and it’s not really possible to have a null Int because anywhere on the number line would be a valid Int. So it wouldn’t make sense in that case to have an association to an invalid / null Char. I thought it was important to point out because the book had mentioned certain instances where program correctness and typecheck guarantees diverged. In practice, totally agree that lifting into Maybe or Either is the proper way to handle “ValueError”.

                      I understand the point about error handling; it just seems like there could be an under-the-hood optimization that would print out only the exception instead of eagerly logging everything. How would you tie this kind of stdout to an automated logging / monitoring tool (like if you were using an ELK stack) if it had to parse through arbitrary partially applied stdout?

                      1. 4

                        It just doesn’t make sense to have an empty character. A character is a character. A string is 0 or more characters. It’s like having an empty int. There’s zero, but that’s not ‘no integer’, it’s an integer. If you want a ‘null character’ you can use '\0' but it IS a character.

                    1. 2

                      but because the language is non-strict by default

                      You mean non-eager, I think.

                      1. 4

                        I believe non-eager can mean either lazy or non-strict. At least according to the book, they’re two different things. Laziness is a stricter notion than non-strictness:

                        Haskell is not fully lazy; it is merely non-strict, so it is not required to remember the result of every function application for a given set of arguments, nor would it be desirable given memory constraints.
                        

                        Complete laziness would imply complete memoization of generated results, not just sharing, which would blow up memory.

                        https://wiki.haskell.org/Lazy_vs._non-strict

                      1. 1

                        Hmm, not sure if it’s possible (I’ve read somewhere on Hacker News that Erlang companies say they’re Java shops to deceive competitors and retain their edge), but it would be super awesome to name the people who could back up and vouch for these quotes. Especially for light technical or non-technical stakeholders, having a network of people to reach out to in case things go wrong may help de-risk taking that first step. Just a friendly suggestion :)

                        1. 11

                          I’ve read somewhere on Hacker News that Erlang companies say they’re Java shops to deceive competitors and retain their edge

                          Hah, I always thought this was a bit overblown. From my understanding, talking with insiders at companies, their “edge” is never really a specific language or anything. It’s always something like “we have a team of Math and CS PhDs who spend all day writing faster compression algorithms” or “we spend tons of money on developer training” or other human factors like that. Things that, even if you knew what they were doing, would be really hard to repeat for yourself.

                          IME the main reason companies don’t talk about their tech is for brand management. If a company says “yeah we use Erlang” on the record, they’re now That Company That Uses Erlang. If they eventually decide to stop using Erlang, they’re now That Company That Decided Erlang Was a Bad Idea, or even worse That Company That Keeps Chasing New Technology.

                          That’s one reason all of the case studies are things that are on-the-record, where a person at the company wrote a thing where they explicitly say “here’s how FM benefited us.” I have quotes by individuals on my consulting page. I want to add more, but getting them takes time.

                          1. 2

                            Yeah, I guess that makes sense. I talked to one company during one interview round that mentioned they did a rewrite of a service from Ruby to Elixir and back to Ruby, and now they have a particular connotation in my mind despite my best intentions.

                            Whoops, I didn’t see that those are clickable links with named citations! I thought it was just “Amazon the company found TLA+ useful”. That’s totes on me. I think my only suggestion then might be adding names with the company name; “$NAME and $NAME at $COMPANY used TLA+” vs. “$COMPANY used TLA+”.

                            1. 3

                              Given you and @iamnearlythere had exactly the same issue, it’s 100% a UI failing on my end.

                        1. 1

                          Hopefully this doesn’t come as too irrelevant, but one thing I think would be really cool for a Linux laptop manufacturer would be a “native cloud backup” solution, where you can separate out user-specific data onto its own partition and back the entire partition up to a remote server. I recently switched from macOS to Linux as my daily driver, and the biggest thing I’ve missed from native macOS might be Time Machine.

                          Is this a feasible thing to do? It would be a massive selling point to me (like if System76 had FOSS native cloud backup software integrated with Pop!_OS, it would be a selling point against Ubuntu)

                          1. 2

                            Meta: I love this thread, I’ve seen it around but I didn’t know how consistent it would be. I generally post to the Indie Hackers Daily Stand-up thread to keep myself accountable, but I’ve also signed up for YC Startup School W2020 and Pioneer.app; maybe others could find those places to be helpful too :)

                            This week I’m looking at wrapping up my coverage of “The Haskell Book” and writing a technical blog post on a Pythonista’s review of Haskell. This is in order to begin work on the TinyDevCRM MVP, which I hope to release by the end of next week (maybe next Friday or so), with another technical blog post.

                            1. 6

                              To be fair you can replace the cushions of the QC35 and also connect it to your laptop using a 3.5 mm to 2.5 mm cable.

                              I wonder if the small rechargeable LiPo battery in the QC35 is more environmentally friendly than 100 replacement batteries (although you can easily buy rechargeable ones).

                              1. 1

                                Hmm, you’re right. I did a little bit more searching and, in addition to what you mentioned, it looks like the Bose 700s do come with the same 2.5 mm to 3.5 mm cable. I’ll post a correction and credit you.

                                I’m not quite sure about the environmental friendliness of AAA vs. LiPo, but I searched on Bose’s website and I don’t see a replacement battery program or replacement guide for either the QC35s or the Bose 700s. I kind of just hope that Home Depot actually does e-recycle those parts instead of dumping them overseas. The big selling points of a replaceable battery for me are the zero charge time and multiple vendors -> low prices :)

                                1. 2

                                  https://onezero.medium.com/unraveling-the-secret-supply-chain-behind-an-amazonbasics-battery-e7b9ead4d72e delved a bit into the supply chain world of alkaline batteries. Personally I’ve been using NiMH rechargeable batteries in a hand-me-down pair of QC15s, but the tradeoff I’ll bring up is that I never carry these headphones; for travel I carry a boring wired pair of earbuds. Travelling with the QC15s means carrying them along with a battery charger.

                                  I sometimes think we’re at a point where battery serviceability needs to be regulated into existence, though I’m hesitant, as that’s just one part of the problem and doesn’t address battery supply chain issues (particularly low-quality/fake batteries, reselling old batteries, proof of service when selling a used item with replaced batteries, etc.). There are other problems too (google ‘repairable loophole’ in the e-waste export space).

                                  1. 1

                                    I wasn’t surprised that Amazon’s supply chain is…locked down…but I did not know it takes 100x more energy to produce an alkaline battery than it outputs during its lifetime.

                                    I did a quick DuckDuckGo search, and apparently not only are there NiMH AAA batteries, but there are also Lithium-ion AAA batteries and Lithium-polymer AAA batteries, with different tradeoffs between battery capacity, rechargeability, and cost. I like that about standards: you can usually pick from a number of options :)

                                  2. 1

                                    @jelly Updated and credited!

                                    1. 2

                                      When I replaced my ear pads I don’t remember being able to see or access the LiPo battery easily, so replaceable batteries are a valid point.

                                1. 4

                                  I think it’s a cold shower we FOSS lovers should have when approaching certain proprietary software; it’s not all sunshine and rainbows: http://itvision.altervista.org/why.linux.is.not.ready.for.the.desktop.current.html

                                  I personally would love to get my hands on a Windows 10 LTSC release.

                                  1. 4

                                    That page is the biggest load of FUD I think I’ve ever seen. A lot of the things listed there are simply untrue, or completely miss the whole point.

                                    ‘Linux’ is not comparable to ‘Windows’. You shouldn’t say ‘there’s one way to do X on Windows but there are five ways to do X on Linux so Windows is better and more consistent’. Look at individual distributions. A distribution is an operating system. Windows is an NT distribution, essentially. Look at some KDE-based distro then make the comparisons.

                                    Look at this crap:

                                    Few software titles, inability to run familiar Windows software (some applications which don’t work in Wine - look at the lines which contain the word “regression” - have zero Linux equivalents).

                                    Questionable patents and legality status. USA Linux users cannot play many popular audio and video formats until they purchase appropriate codecs.

                                    A small number of native games and few native AAA games for the past six years

                                    There’s no concept of drivers in Linux

                                    There’s a lot of hostility in the open source community.

                                    The USA’s laws are shit, Linux is unpopular, something completely made up, and a complaint about the community. Wow, what a load of fantastic constructive technical issues with Linux!!!!111

                                    1. 3

                                      The LTSC build does not support very recent hardware, so if you own an AMD Ryzen 3000-series CPU, you will have no luck running this version of Windows.

                                    1. 10

                                      I often notice that the absence of UNIX principles in the software world more or less directly translates to planned obsolescence in products. When software is modular, and each component does only one job and does it well with well-defined and simple interfaces, you can easily hack on it and adapt it to your needs.

                                      The author (@yingw787) describes this process with his headphones, where he was able to make use of the headphone jack for a bluetooth adaptor and simply attach a microphone suiting his needs. Most notably, the batteries and cushions are replaceable. If we look at competing all-in-one-headphones, when the batteries age, when the supported Bluetooth standard becomes older and older or simply when the non-replaceable cushions deteriorate, you have no other choice but to throw them away at some point. And that’s what planned obsolescence is in a nutshell.

                                      We need to be more critical of companies designing their products for the landfill. Everybody’s talking about “climate change” and environmentalism, but many have no issues buying a MacBook with glued in batteries or headphones where nothing is replaceable, even things that are used heavily. Things used to be much more serviceable in the past, but somehow most people do not care about it anymore.

                                      1. 8

                                        Not that I disagree, but even as a staunch Apple critic I somehow contest the MacBook example. Most people don’t buy it for the hardware, but for the software. And if it needs to be mobile.. it’s not an iMac without a battery. Also 90% of the people preferred the old models with replaceable batteries and nobody I know ever said “Thank god it’s thinner and lighter and I happily accept that I can’t change the battery” - so they accept it because there’s no alternative, if you want/need to stay on OSX. And that is a different discussion not suited here.

                                        Not my personal problem in this case, but for everyone there are things where it’s easy/medium/hard to change their habits and to make compromises. I know a few people who would find it a lot easier to get rid of their car than to switch from OS X.

                                        1. 1

                                          Maybe I’m not most people, but I personally buy laptops for the hardware, and not the software :) I had a Dell Precision 5530 early last year (the enterprise-grade XPS 15), and I didn’t like the feel of the hardware. I got the Lenovo ThinkPad P1 Gen 2 (the enterprise-grade X1 Extreme), and I liked it, so I installed Ubuntu on it. I would say the OS is our interface to the hardware, and an open OS allows us to commoditize the hardware and gives us leverage.

                                          I won’t say it’s easy, but for me the biggest oof in transitioning from macOS to Ubuntu was the lack of SelfControl.app and Timing.app, which managed my blockers and time tracking. Instead, I purchased a tool called RescueTime, which comes with a redirect proxy (?) that blocks certain distracting websites even if I block them using /etc/hosts, and tracks my time. Not as polished as macOS, but it works for my personal dev workflows. Laptop sleep, night shift, etc. work fine out of the box for 19.10 and Linux kernel 5.x.

                                        2. 2

                                          Neither of the two latest laptops I’ve gotten from work has had user-replaceable batteries.

                                          1. 2

                                            Hmm, what do you mean by user-replaceable batteries? Like ones with the release latch baked into the battery case, or ones where it has a detachable power bus inside a unibody laptop case?

                                            I personally consider both user-replaceable, and have replaced the battery in my Mid-2012 15’’ MacBook Pro twice with no complications. I looked online at the MacBook Air battery, and to my surprise other companies do make replacement batteries for those too. The batteries aren’t glued in or anything.

                                            1. 2

                                              The ones with the latch. But if one can open the laptop using standard screws and there’s no glue involved that’s fine too.

                                              1. 1

                                                Okay cool! I personally don’t mind the Apple-specific screw heads. You can get a full set of different screw heads on Amazon for $20. And nope, no glue involved!

                                          2. 1

                                            I’m a big fan of voting with my wallet. I’m willing to pay much more for things that give me, the consumer, as much leverage as possible. Maybe it won’t counterbalance the rest of society, and maybe it will. I consider it to be my stand and my voice :)

                                          1. 4

                                            Great article, makes me want to buy QC25s if I ever need another pair of ANC phones. I have the Teufel Mute BT (from work) and I’m only semi-happy with them. But at least they do have a 3.5 jack.

                                            1. 1

                                              Thank you! I looked at the Teufel Mute BT, and it looks like it also has a headphone jack and AAA batteries! All that’s missing are replaceable cushions; I couldn’t find those on the website.

                                              1. 2

                                                These are mine: https://stuff.art-core.org/2020/teufelmutebt.jpg - not sure if they updated the model, mine are from ~christmas 2017.

                                                Also neither the sound nor the ANC are great, but I can’t really compare the ANC as this is my first pair of noise-cancelling headphones.

                                            1. 3

                                              I replaced my fancy noise-cancelling ones with fancy (wired) in-ear ones. They block out just as much, but are soo much simpler. (Come to think of it, I also only use wired mice. Same reason.)

                                              1. 2

                                                Simplicity is such a huge selling point! Explainability -> maintainability!

                                              1. 3

                                                GNU Grep is older than the GNU project?

                                                1. 3

                                                  Huh. Yeah I guess, probably for the same reason UNIX is older than Linux. That makes sense; Linux is a port of UNIX.

                                                  1. 5

                                                    What I meant was that grep is probably from 1974 (although I remember the story being that it’s even older). GNU grep (i.e. the grep by the GNU project, not by Ken Thompson) seems to have been released in 1992.

                                                    1. 3

                                                      Oh. Yes. That’s a good point. :facepalm:

                                                      I’ll update the blog post and credit you. Thank you for pointing that out!

                                                      1. 1

                                                        Updated and credited!

                                                    2. 3

                                                      Linux is a port of UNIX

                                                      Ssssh! I can hear them in the bushes, waiting to hear something like this!

                                                      1. 5

                                                        They’re generally not subtle, the war cry of “ACTUALLY” echoes across the steppes, the thundering herd of GNUs approach rapidly to set things straight!

                                                  1. 5

                                                    I’ve been both coding and consuming books and videos about how to be a good coder for many, many years.

                                                    Lately, I’ve been teaching what I call “Good Enough Programming”. If the code does something useful, and you can walk away from it, it’s good enough. All of this 50-year-old code is like that.

                                                    Oddly enough, nobody talks much about good enough programming. In fact, if you take a look around, it’s not the natural state of things. Projects go on forever and their software is always being tweaked, poked at, and updated. We’ve got a lot of coders wanting to be perfect, kick-ass programmers, but not a lot of coders writing good enough code. It’s very odd.

                                                    1. 3

                                                      I think the difference for software engineering is the trope of “changing the world”, which means hiring “animals” (Paul Graham’s essay: http://www.paulgraham.com/start.html) who pour their heart and soul into somebody else’s property in the hope of attaining riches, which is held up by society as the goal to attain. Throw in the fact that the field is constantly 50% buzzed 20-year-olds, and you get a culture of coders wanting to be perfect.

                                                      Kudos to you for fighting the good fight! Code that puts food on the table today and tomorrow is good enough and should allow you to sleep well at night.

                                                    1. 3

                                                      Oooh nice post!

                                                      Your mention of hyperproperties immediately made my mind jump to limplock situations: https://danluu.com/limplock/; I read later on and saw you mentioned partial degradation, which limplock / slowlock would fall under. I’m not sure whether large companies use formal reasoning to evaluate their deployments; I’m guessing it would be far, far too slow to be practical.

                                                      I’m also interested in whether hyperproperty usage can be generalized to any solution to a “gradient problem”, where a non-perfect solution is perfectly acceptable (e.g. spam filtering, ML/AI, content moderation), and whether it just hasn’t been because only security research pays off.

                                                      1. 9

                                                        Yeah I’m big on this philosophy and a related idea is the “lindy effect”

                                                        https://en.wikipedia.org/wiki/Lindy_effect

                                                        I’m working on a shell because it’s been around for 50 years now, so I expect it to be around in 50 more years.

                                                        There’s an unbroken chain from Thompson shell in 1970, to Bourne shell, to Korn shell, to bash to Oil. Because Oil is compatible unlike other alternative shells. There is wisdom you’re abandoning when you break things – i.e. don’t throw the baby out with the bathwater.

                                                        When foundations are stable, you can make progress on top. The alternative is what I call “motion without progress”.

                                                        Funnily enough the only car I ever owned was a ’99 accord, which I sold a couple years ago :)


                                                        I also meant to blog about two designs I’ve encountered that have lasted 50 years:

                                                        • Ohm Walsh speakers. I bought mine new in 2014, but the design is unique and has been honed over 50 years. The company basically makes one product, and continually makes it better without changing the design. That’s because it works and changing it would break it. Whenever people hear my speakers in person I can see the expression change on their face – they’re surprised.

                                                        https://ohmspeaker.com/

                                                        https://en.wikipedia.org/wiki/Lincoln_Walsh

                                                        https://ohmspeaker.com/technology/ – Unfortunately their ad copy is bad, but the speakers are an omnipolar design, and this is not bullshit, unlike most claims about audio.

                                                        • Brompton folding bikes. Another company that made an advance 50 years ago and has been honing it since. Nice history of the company:

                                                        https://www.youtube.com/watch?v=Lh4yStWFNvs

                                                        A folding bike is a surprisingly constrained engineering problem. It has to be small but hold your weight at the same time. When you get a design that works, changing it will break it. I had another folding bike designed in the 2000’s and it was shitty in multiple dimensions compared to the Brompton. They have been doing something real for all these decades.

                                                        1. 3

                                                          Oooooh I love your design taste! I’ve actually been looking at folding bikes for a couple years, but never pulled the trigger. Maybe if I move to a better apartment with an elevator or a less grungy bike room :)

                                                        1. 11

                                                          GNU grep ships as a standalone executable (or at least, when I run which grep and dpkg -L grep, I don’t see any packaging dependencies for /usr/bin/grep)

                                                          With ldd /usr/bin/grep on Arch I get dependencies on linux-vdso.so.1, libpcre.so.1, libc.so.6, libpthread.so.0 and /lib64/ld-linux-x86-64.so.2. Not sure if any of these come shipped with Linux itself, but Arch at least lists glibc and pcre as package dependencies. Maybe it’s statically compiled on Debian derivatives? But is this even possible?

                                                          One thing I know I can’t do for my personal projects is constantly dedicate time to working on it after I’ve “shipped”.

                                                          I think this sentence describes a negative feeling which has bugged me for a few years. When I was younger I constantly fiddled with things, but nowadays (with a job in software) I feel tired about my hobby projects and just want something to be finished, completed, working. And then continue with another hobby. And when I then decide to run my software again after 1 or 2 years I still want it to run. Which is something that can break with my current main language, Python, e.g. because in Python 3.5 or so they made await a reserved keyword, so existing libraries broke. Maybe I should start writing C, not so much stuff changing there (every 10 years a new standard or so? and compilers will happily compile old versions).

                                                          Now that I can point a finger at the feeling, I can start to think about actions. Maybe C for hobby really would be the solution.

                                                          1. 10

                                                            Damn, you’re right, it’s not a single executable. I’m getting something similar; I should know better than to assume /usr/bin contains single executables lol. I’ll post a correction and credit you :)

                                                            1. 11

                                                              Updated, hopefully this is better :) Thank you very much for the correction!

                                                              On a side note, does anybody here know of good groups to get content editing for blog posts? I don’t have anybody to bounce drafts off of, which means these content errors keep getting through.

                                                            2. 7

                                                              As if C is the only alternative to Python. ;)

                                                              If you want a high level language for future-proof software, OCaml maintainers take backwards compatibility very seriously, and OPAM (the package manager) is versioned and allows installing any old version of a library.

                                                              With compiled languages, you can also make a statically linked binary so that you can still use the program even if building it requires big changes to the source. Rust and Go make that easy, but you can link anything with musl or another static C standard library.

                                                              1. 3

                                                                And when I then decide to run my software again after 1 or 2 years I still want it to run.

                                                                Might I suggest Clojure? https://www.groundedsage.dev/posts/the-clojure-mindshare/#heading-stability

                                                                1. 2

                                                                  linux-vdso is part of the kernel. ld-linux is the dynamic linker; you need it to run anything. Libc and libpthread provide the most basic possible functionality for interacting with the system, and it’s difficult to imagine something that doesn’t depend on them. Unfortunately, there are non-standard glibc features that some programs depend on, and that has caused problems in the past, but I can’t imagine that grep depends on any of those.

                                                                  That leaves pcre as the only real dependency, which, OK, means it’s not technically freestanding, but I don’t think that really detracts from the point of the article. Not least because it’s fairly easy to make a grep that runs without it; it’s only used for the -P flag.

                                                                  1. 1

                                                                    Go doesn’t make use of libc for the most part. It does its own syscall interaction.

                                                                    1. 2

                                                                      Interesting. This makes me wonder where the border between kernel and libc actually sits. So, fopen(3) is implemented in libc. open(2) is more low-level and specified by POSIX, but where is it implemented? As it’s part of POSIX, I’d expect it to be implemented in the kernel. Go would then not use fopen(3), but open(2) to implement its I/O, right? Or is syscall(2) the only function directly talking to the kernel, and open(2) is already something in libc? Does someone know where to look for such information?

                                                                      I’m just curious. I don’t normally go that low-level, but something not using libc sounds a bit arcane to me.

                                                                      1. 3

                                                                        So at some level, syscall(2) is really a wrapper for the SYSCALL instruction (or similar for !AMD64). open(2) is a wrapper around syscall(2), but the semantics of how files are opened are implemented at the kernel level. At some level, to the program, files don’t exist; they are provided to the application by a very elaborate and consistent set of lies. Those lies are usually standardized by things like POSIX.

                                                                        For a very abstract example, take this webassembly code. Files don’t exist in webassembly by itself, but by importing open and write from the webassembly environment, hello world can be written to the standard output of the host machine. Also see my talk on this.

                                                                        TL;DR: syscalls don’t real, but the kernel lies and makes them appear real.

                                                                        1. 2

                                                                          Thank you!

                                                                        2. 2

                                                                          open() is implemented in libc, likely as a thin wrapper around the syscall instruction. Here, for instance, is how it’s implemented in my libc. But do note that it also needs to set errno (in that example, it’s handled by the syscall function), which is not something the kernel knows or cares about.

                                                                          1. 1

                                                                            Thank you!