As a genuine question from someone who hasn’t used procedural programming productively before, what would be the benefits of a procedural language to justify its choice?


      I would say less conceptual/cognitive overhead, but I don’t know if that’s something that can be said of this language as a whole, as I have no experience with it.

      By that I mean something like: I have a rough idea of what code I want from the compiler, how much mental gymnastics is required to arrive at the source-level code that I need to write?

      I would imagine that’s an important consideration in a language designed for game development.


        Yeah, it makes perfect sense.

        To dumb down Kit’s value prop, it’s a “Better C, for people who need C (characteristics)”.


        On top of alva’s comment, they compile fast and are easy to optimize, too.


          I looked this up for some other article on lobste.rs. I found Wikipedia to have a nice summary:


          Imperative programming

          Procedural programming languages are also imperative languages, because they make explicit references to the state of the execution environment. This could be anything from variables (which may correspond to processor registers) to something like the position of the “turtle” in the Logo programming language.

          Often, the terms “procedural programming” and “imperative programming” are used synonymously. However, procedural programming relies heavily on blocks and scope, whereas imperative programming as a whole may or may not have such features. As such, procedural languages generally use reserved words that act on blocks, such as if, while, and for, to implement control flow, whereas non-structured imperative languages use goto statements and branch tables for the same purpose.

          My understanding is that if you use, say, C, you are basically using procedural-language paradigms.


            Interesting. So basically what was registering in my mind as imperative programming is actually procedural.

            Good to know. Thanks for looking it up!


              I take “imperative” to mean based on instructions/statements, e.g. “do this, then do that, …”. An “instruction” is something which changes the state of the world, i.e. there is a concept of “before” and “after”. Lots of paradigms can sit under this umbrella, e.g. machine code (which are lists of machine instructions), procedural programming like C (where a “procedure”/subroutine is a high-level instruction, made from other instructions), OOP (where method calls/message sends are the instructions).

              Examples of non-imperative languages include functional programming (where programs consist of definitions, which (unlike assignments) don’t impose a notion of “before” and “after”) and logic programming (similar to functional programming, but definitions are more flexible and can rely on non-deterministic search to satisfy them, rather than explicit substitution).
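              To make the contrast concrete, here’s a small JavaScript sketch (illustrative only, not from the thread): the first version is a sequence of instructions mutating state over time; the second is a single timeless definition with only data dependencies.

              ```javascript
              // Imperative style: a sequence of instructions mutating state.
              function sumImperative(xs) {
                var total = 0;                       // state "before"
                for (var i = 0; i < xs.length; i++) {
                  total += xs[i];                    // each step changes the state
                }
                return total;                        // state "after"
              }

              // Definitional style: the sum of a list *is* its head plus the sum
              // of its tail (and the sum of an empty list *is* zero). Nothing is
              // mutated; there is no "before" or "after", only data dependencies.
              function sumDefinitional(xs) {
                return xs.length === 0 ? 0 : xs[0] + sumDefinitional(xs.slice(1));
              }
              ```

              Both compute the same result; the difference is whether we describe steps that change state, or relationships between values.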


                If functional programs don’t have a notion of before and after, how do you code an algorithm? Explain Newton’s method as a definition.


                    Both recursion and iteration say “do this, then do that, then do…”. And “let” appears to be assignment or naming, so that AFTER the let operation a symbol has a meaning it did not have before.

                    // open some namespaces
                    open System
                    open Drawing    
                    open Windows.Forms
                    open Math
                    open FlyingFrog

                    changes program state so that certain operations become visible AFTER those lines are executed, etc.


                      It is common for computation to not actually take place until the result is immediately needed. Your code may describe a complicated series of maps and filters and manipulations and only ever execute enough to get one result. Your code looks like it describes a strict order the code executes in, but the execution of it may take a drastically different path.

                      A pure functional programming language wouldn’t be changing program state, but passing new state along probably recursively.
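                      One way to see that in the thread’s own JavaScript (an illustrative sketch using generators, which are lazy; not a claim about how Haskell implements laziness): the pipeline below describes squaring every natural number, yet only the first five squares are ever computed.

                      ```javascript
                      // An infinite lazy sequence of naturals: nothing runs until
                      // a consumer pulls values out.
                      function* naturals() {
                        for (let n = 0; ; n++) yield n;
                      }

                      // Lazy map: applies f only when the next value is requested.
                      function* map(f, xs) {
                        for (const x of xs) yield f(x);
                      }

                      // Pull the first value satisfying a predicate; the rest of
                      // the (infinite!) pipeline never executes.
                      function first(pred, xs) {
                        for (const x of xs) if (pred(x)) return x;
                      }

                      // "Square every natural number" -- but only 0..4 get squared.
                      const answer = first(sq => sq > 10, map(n => n * n, naturals()));
                      ```

                      The code reads as if it processes an infinite list in order, but execution only happens on demand.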


                        but you don’t really have a contrast with “imperative” languages - you still specify an algorithm. In fact, algorithms are all over traditional pure mathematics too. Generally the “state” being changed is on a piece of paper or in the head of the reader, but …


                    If functional programs don’t have a notion of before and after, how do you code an algorithm?

                    Roughly speaking, we define each “step” of an algorithm as a function, and the algorithm itself is defined as the result of (some appropriate combination of) those functions.

                    As a really simple example, let’s say our algorithm is to reverse a singly-linked-list, represented as nested pairs [x0, [x1, [x2, ...]]] with an empty list [] representing the “end”. Our algorithm will start by creating a new empty list, then unwrap the outer pair of the input list, wrap that element onto its new list, and repeat until the input list is empty. Here’s an implementation in Javascript, where reverseAlgo is the algorithm I just described, and reverse just passes it the new empty list:

                    var reverse = (function() {
                      function reverseAlgo(result, input) {
                        return (input.length === 0)? result : reverseAlgo([input[0], result], input[1]);
                      }
                      return function(input) { return reverseAlgo([], input); };
                    })();

                    Whilst Javascript is an imperative language, the above is actually pure functional programming (I could have written the same thing in e.g. Haskell, but JS tends to be more familiar). In particular, we’re only ever defining things, in terms of other things. We never update/replace/overwrite/store/retrieve/etc. This style is known as single assignment.

                    For your Newton-Raphson example, I decided to do it in Haskell. Since it uses Float for lots of different things (inputs, outputs, epsilon, etc.) I also defined a bunch of datatypes to avoid getting them mixed up:

                    module Newton where
                    newtype Function   = F (Float -> Float)
                    newtype Derivative = D (Float -> Float)
                    newtype Epsilon    = E Float
                    newtype Initial    = I Float
                    newtype Root       = R (Float, Function, Epsilon)
                    newtonRaphson :: Function -> Derivative -> Epsilon -> Initial -> Root
                    newtonRaphson (F f) (D f') (E e) (I x) = if abs y < e
                                                                then R (x, F f, E e)
                                                                else recurse (I x')
                      where y  = f x
                            x' = x - (y / f' x)
                            recurse = newtonRaphson (F f) (D f') (E e)

                    Again, this is just defining things in terms of other things. OK, that’s the definition. So how do we explain it as a definition? Here’s my attempt:

                    Newton’s method of a function f + guess g + epsilon e is defined as the “refinement” r of g, such that |f(r)| < e. The “refinement” of some number x depends on whether x satisfies our epsilon inequality: if so, its refinement is just x itself; otherwise it’s the refinement of x - (f(x) / f'(x)).

                    This definition is “timeless”, since it doesn’t talk about doing one thing followed by another. There are causal relationships between the parts (e.g. we don’t know which way to “refine” a number until we’ve checked the inequality), but those are data dependencies; we don’t need to invoke any notion of time in our semantics or understanding.


                      Our algorithm will start by creating a new empty list, then unwrap the outer pair of the input list, wrap that element onto its new list, and repeat until the input list is empty.

                      Algorithms are essentially stateful. A strongly declarative programming language like Prolog can avoid or minimize explicit invocation of algorithms because it is based on a kind of universal algorithm that is applied to solve the constraints that are specified in a program. A “functional” language relies on a smaller set of control mechanisms to reduce, in theory, the complexity of algorithm specification, but “recursion” specifies what to do when just as much as a “goto” does. Single assignment may have nice properties, but it’s still assignment.

                      To me, you are making a strenuous effort to obfuscate the obvious.


                        Algorithms are essentially stateful.

                        I generally agree. However, I would say programming languages don’t have to be.

                        When we implement a stateful algorithm in a stateless programming language, we need to represent that state somehow, and we get to choose how we want to do that. We could use successive “versions” of a datastructure (like accumulating parameter in my ‘reverse’ example), or we could use a call stack (very common if we’re not making tail calls), or we could even represent successive states as elements of a list (lazy lists in Haskell are good for this).
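                        For instance, sticking with JavaScript (a sketch with hypothetical names): the successive states of a countdown can be materialised as elements of a list, with no mutation anywhere.

                        ```javascript
                        // Build the list of successive "states" of a countdown,
                        // single-assignment style: states(3) is defined as [3,
                        // followed by the states of 2], and so on down to [].
                        function states(n) {
                          return n < 0 ? [] : [n].concat(states(n - 1));
                        }
                        ```

                        Each “state” is defined purely in terms of the next; the algorithm’s state lives in the data, not in any variable that changes over time.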

                        A strongly declarative programming language like Prolog can avoid or minimize explicit invocation of algorithms because it is based on a kind of universal algorithm that is applied to solve the constraints that are specified in a program.

                        I don’t follow. I think it’s perfectly reasonable to say that Prolog code encodes algorithms. How does Prolog’s use of a “universal algorithm” (depth-first search) imply that Prolog code isn’t algorithmic? Every programming language is based on “a kind of universal algorithm”: Python uses a bytecode interpreter, Haskell uses beta-reduction, even machine code uses the stepping of the CPU. Heck, that’s the whole point of a Universal Turing Machine!

                        “recursion” specifies what to do when just as much as a “goto” does.

                        I agree that recursion can be seen as specifying what to do when; this is a different perspective of the same thing. It’s essentially the contrast between operational semantics and denotational semantics.

                        I would also say that “goto” can be seen as a purely definitional construct. However, I don’t think it’s particularly useful to think of “goto” in this way, since it generally makes our reasoning harder.

                        To me, you are making a strenuous effort to obfuscate the obvious.

                        There isn’t “one true way” to view these things. I don’t find it “strenuous” to frame things in this ‘timeless’ way; indeed I personally find it easier to think in this way when I’m programming, since I don’t have to think about ‘time’ at all, just relationships between data.

                        Different people think differently about these things, and it’s absolutely fine (and encouraged!) to come at things from different (even multiple) perspectives. That’s often the best way to increase understanding, by finding connections between seemingly unrelated things.

                        Single assignment may have nice properties, but it’s still assignment.

                        In name only; its semantics, linguistic role, formal properties, etc. are very different from those of memory-cell-replacement. Hence why I use the term “definition” instead.

                        The key property of single assignment is that it’s unobservable by the program. “After” the assignment, everything that looks will always see the same value; but crucially, “before” the assignment nothing is able to look (since looking creates a data dependency, which will cause that code to be run “after”).
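                        A tiny JavaScript sketch of that unobservability (hypothetical names, not from the thread): the textual order of single-assignment definitions doesn’t matter, because nothing can “look” at a value until its data dependencies force it.

                        ```javascript
                        // Each name is assigned exactly once. The definition of c
                        // appears "before" a and b exist, but c only looks at them
                        // when it is finally forced (called).
                        const c = () => a() + b();
                        const a = () => 2;
                        const b = () => 3;

                        // By the time anything can observe a or b, they are defined;
                        // code that tried to look "before" simply couldn't run.
                        const result = c();  // 5
                        ```

                        Reordering the three definitions leaves the observable behaviour unchanged, which is exactly the point: the program’s meaning depends on data dependencies, not on when each “assignment” happens.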

                        Hence the behaviour of a program that uses single assignment is independent of when that assignment takes place. There’s no particular reason to assume that it will take place at one time or another. We might kid ourselves, for the sake of convenience, that such programs have a state that changes over time, maybe going so far as to pretend that these hypothetical state changes depend in some way on the way our definitions are arranged in a text file. Yet this is just a (sometimes useful) metaphor, which may be utterly disconnected from what’s actually going on when the program runs (or, perhaps, when a logically-equivalent one, spat out of several stages of compilation and optimisation, runs!).

                        Note that the same is true of the ‘opposite’ behaviour: garbage collection. A program’s behaviour can’t depend on whether or not something has been garbage collected, since any reference held by such code will prevent it from being collected! Garbage collection is an implementation detail that’s up to the interpreter/runtime-system; we can count on it happening “eventually”, and in some languages we may even request it, but adding it to our semantic model (e.g. as specific state transitions) is usually an overcomplication that hinders our understanding.


            RSS is used in all manner of places for useful content syndication. I use RSS daily for heaps of stuff from regular news to LSE RNS subscriptions via Investegate, through to using RSS2Email and RSS-Bridge to move social networks into my mail inbox, where I can keep up with family and friends at my leisure rather than Instagram’s terms.

            It only ‘feels’ dead because it’s not something that regular people often see, but it’s still used a lot.


              Yeah, it’s about as dead as electricity & plumbing.


              I’m sad to see that in 2018 they are still using the terms master & slave instead of primary & replica(s).


                That’s the official terminology for Redis.


                  Ah, that explains it then. I guess my despair should be directed toward Redis instead.


                  Stop it, please.

                  You’re overloading the terms master and slave in an unreasonable fashion when considering the current context, as well as derailing the post. Conflating the two terms with what is provided by your political correctness filter likely means there’s an issue with you to be sad about, not the people who put together this incredibly informative diagram.


                    Using the primary definitions is “overloading”? Master/Slave in engineering contexts is a lazy, crappy, metaphor that can usually be replaced by something more informative, more precise, and less offensive.


                      I don’t think I’m overloading the terms. They carry connotations with them because of history.

                      Furthermore, I don’t think that pointing out issues with terminology that is alienating to people whose ancestors were slaves is a political statement. It seems like a simple change that would make technical language more inclusive.

                      The post and diagrams are otherwise excellent but people who want to improve tend to be receptive to critical feedback.


                        Folks that language-police redefine words or use newer definitions of old ones all the time. Then they say language can’t evolve when it’s other people evolving it with preferred words these folks dislike. Makes me think many don’t believe what they say about always acting in terms of the history of words.

                        If they’re inconsistent and enforcing against political opposition, then I oppose that type of language policing, since it’s political control instead of keeping words’ meaning consistent with history.

                        I’d love to see them do it first, since it could make for entertaining, confusing reading. All of them would be calling out negative aspects from Old English or something while the rest of us just have a conversation. ;)


                          Normally I find your comments remarkably insightful. But I had a hard time even parsing this one!


                            I was trying to be done with this. You disagreed and followed up in an exceedingly-polite way. So, I’ll make another attempt in a similarly non-judgmental way focusing on different perspectives. :)

                            I should first define what I mean by free speech for purposes of forums, database books, and so on. It’s mostly free where we avoid, call out, and/or censor only the things where a majority of people in different groups agree are bad. There’s near-universal agreement from Left to Right that someone dropping the N-bomb or saying someone is inherently inferior due to skin color is racist. That group X causes more of specific problems for whatever reason in specific areas, whether Master/Slave is inherently offensive, what privilege someone has or doesn’t in various contexts, and so on are in dispute among groups. So, it’s not censored given free speech exists to protect dissent, not popular things. We allow the differing opinions even if one side’s pisses another off. We just keep things civil, minimize trashtalk, avoid or call out weak arguments (esp. emotional), and focus on data supporting or refuting views. I especially like looking for inconsistencies, as you’ll see.

                            My understanding is that you in this thread and a lot of other people in similar threads object to the use of specific words due to historical connotations. The argument is that they’re irreversibly tied to something evil in history, either created for evil (e.g. the N-word) or supporting it in a major way later (e.g. swastikas). Then, these words get used in a new context where they might offend someone due to the historical evils attached to them. Some also claim that using the words is condoning or further promoting the evil. I was just assuming the first claim (negative baggage) for your objection. Given these premises, your group thinks the moral response is to object to the use of those words, replace them with non-offensive terms, and ensure the source carefully considers their words to minimize the harm they’re creating.

                            I’m calling your group SpeakCarefully vs SpeakFreely for my group. The SpeakCarefully group tells us we can’t use these words in new ways since their historical meaning and potential effects have to be considered. Yet, the SpeakCarefully side redefines words to suit their political purposes all the time. Take racism, whose original use by the person who coined it was a level of suffering, isolation, and forced change that I’d imagine is quite different from what most non-whites mean when they say they experienced racism. They’ve changed the meaning of the term to its current form, structural racism, which suits their political agenda. SpeakCarefully folks often use that one. SpeakFreely folks and right-leaning people changed the word to yet a different meaning: an act of discrimination based on race, with adjectives like “individual” or “structural” used to narrow it down or elaborate. I think that makes more sense if we’re redefining it anyway, since it’s more consistent with English words in other topics.

                            In any case, I see an inconsistency where the SpeakCarefully types tell me the words have inherent baggage that can never be changed. We have to avoid them or mention them in ways consistent with that history. Then, they don’t follow their own standard when they redefine words such as racism. They even go further in many arguments telling the other side they’re using an incorrect definition: it should be their definition. Yet, their definition was itself incorrect if they changed the meaning. That has a few problems:

                            1. If a term is incorrect upon change, then SpeakCarefully’s re-definition is incorrect as well. Yet, they defend it and their right to redefine things for their group’s own purposes. SpeakFreely are told they can’t redefine words. This is inconsistency.

                            2. SpeakCarefully is selective about what we can redefine. They say this is what is offensive. Yet, what offends them is different from what offends another group. So, it’s not really what’s offensive so much as what their group decides to enforce on everyone else. This is why SpeakFreely types think SpeakCarefully folks aren’t mitigating harm or correcting immorality: they’re enforcing their views, for which there isn’t a national consensus, on people who don’t have those views. Once you know that, these seemingly harmless conversations correcting immoral behavior become something different entirely. And more complicated.

                            3. If we stick with No 1., SpeakCarefully are also ignoring the harms that their new definition might cause those who were affected by either pride in or baggage from the original definition. In my link, natives sent via “racism” to schools bathing them in kerosene and forcing them to not understand their parents at all might be offended by suburban blacks encountering prejudicial hiring saying they were “victims of racism.” If the historical meaning stays, then there is probably only a tiny percentage of non-natives in modern America who were victims of similar treatment and reasoning (aka racism in the original definition).

                            4. As I alluded to in “do it first,” I found that SpeakCarefully doesn’t actually consider the negatives and offenses in all the words they use. This builds on No 2. There are specific terms affecting specific groups under a specific ideology (i.e. theirs) that are considered harmful. They may, at their sole discretion, add to or remove from the list. Yet, they continue to use terms that might offend those outside their political group. Most of them haven’t done a systematic, historical investigation of their vocabulary to only use words non-offensive to all groups. As my poem alludes, it may be impossible to not offend all groups. The inconsistency is they say they care about doing harm with words due to historical meanings, that SpeakFreely should carefully consider the harm of words they use, and SpeakCarefully don’t actually do that themselves outside a subset of terms and concerns their group stays on (esp. race/gender/religion/age).

                            I’m with SpeakFreely because I think the above inconsistencies show the SpeakCarefully side are often well-intentioned but have seriously inconsistent use of words and political action. They say not to redefine words with baggage but do it themselves, sometimes with the same exact words. They’re selective about exceptions despite admonishments talking more like “always do this.” They define harm based on their political sub-group’s beliefs on what’s harmful. No other group’s beliefs are allowed even if its members are in minority groups SpeakCarefully is claiming to protect. A smaller set of SpeakCarefully moves further to cut off speech, community membership, or even jobs over this stuff. Yet, they themselves violate their own rules causing the same kinds of emotional harm to the same kinds of people, assuming the words actually cause harm as they claim. They neither stop it nor punish their members for doing this.

                            That makes this more about political beliefs of a sub-group (or many sub-groups), pushing them, and/or enforcing them. That’s political domination even if you do it by asking nicely for the other side to self-censor or be censored. Unlike what there’s consensus on, political domination should be resisted by default until a debate hashes out new consensus. At the least, I want to see SpeakCarefully modify their behavior to match their stated justifications or modify their statements of belief to match their inconsistent behavior.

                            EDIT: On a side note, it’s hard to do these discussions without ascribing malice to or flinging insults at other side. The SpeakCarefully vs SpeakFreely model is my attempt at representing the different views with simple terms that both have positive connotation since each group thinks their beliefs/actions are positive. Then, I focus on group views, actions, and potential inconsistencies. In another post, maybe successes or failures. Anyway, I wanted to know if you liked or disliked those in terms of softening these debates a bit. I’m trying to keep them accurate and neutral if not positive.


                              I appreciate the thoughtful response but I don’t want to continue hijacking this thread so I’m going to respond to you via private message.


                                Probably best move.


                            Words’ meanings have to be consistent with history? I’m not even sure what that means, but language is always changing and we have choices about how we speak. Terms like “jew” for bargain (usually unfairly) or to “gyp” or to call someone an “indian giver” used to be in common usage just a few years ago, but they are usually now rightly considered to be both offensive and indicative of ignorance or malice on the part of the speaker. It’s weird to me that civility is so controversial. “Master/slave” is a terrible metaphor as well as being offensive - I can’t imagine why anyone would find it worth defending.


                              “ I’m not even sure what that means, but language is always changing and we have choices about how we speak.”

                              You’re making my argument for me while countering some on your side. They reach for historical meanings or intent when saying words are inherently offensive. If it’s fluid and contextual, then their arguments don’t hold water, since that can be redefined by new groups to mean things like database setups. They’re inconsistent because this is about pushing politics rather than a thorough assessment of each word we use with its historical connotations.

                              “But they are usually now rightly considered to be both offensive and indicative of ignorance or malice on the part of the speaker.”

                              You gave good examples that were designed for evil and currently used close to that context. Then, you apply it to a different situation with current terms. The people using master/slave are doing something comparatively harmless. Some setups in control systems even follow the literal meaning. Your position would ditch them, too, just on ideological grounds saying they’re always evil. Yet, in many setups, there are in fact master (management) and slave (“workers following orders no matter cost”) setups.

                              “It’s weird to me that civility is so controversial. “

                              This is the kind of statement that motivates my replies. Anyone who disagrees with your position on language definitions or evolution is not “civil.” Yet you don’t want your opponents to be able to label you similarly just because you hold a different position. This is also an example of the personal attacks your side makes that most on it are cool with.

                              “Master/slave” is a terrible metaphor “

                              I already said that in this thread. Finding a better metaphor makes sense. It’s just that most blacks I know in the South aren’t worried about how older whites labeled database functions. That’s you, some other Lobsters, and a specific set of liberal ideology. People on your side here talk like they’re fighting for justice or black folks’ concerns. They didn’t ask you to rename DBs, protocols, etc. They asked you to help them avoid government abuse, get good jobs, get taken seriously for their tech skills, and so on.

                              What “offensive” things you put energy into tell me you aren’t really fighting for minority members’ needs. We just hooked up a few more in my [terrible, fairly-racist-on-top] company, moving them to better positions. I worked on that personally, plus coaching. Their lives might improve. What have you done lately for individuals with slavery in their background to help them stop flipping burgers, bagging groceries, delivering packages to geeks all day, assembling shit at factories, or (for the privileged) doing menial office tasks? And especially outside your political group, like they were mine?

                              I’m betting either nothing or close to it. The ethics of language and its policing are so easy in comparison. Just another online argument. I dare you to try the other thing if you haven’t. Especially in a combo of the hood and white businesses. I can’t wait to see what you write afterward, whether you agree or disagree with me. :)


                                If it’s fluid and contextual, then their arguments don’t hold water, since that can be redefined by new groups to mean things like database setups.

                                Language is fluid, but it’s not completely redefinable. I have been told by people who use “jew” as a pejorative way of saying “bargain hard” to lighten up and not read offensive content into everything. According to these people, it’s not meant as a slur anymore and they tell me how pissed off they are at politically correct busybodies trying to police the language. This has happened to me a couple of times in Europe and once in the old south. I’m not impressed - and it has left me with a lot of skepticism about arguments like yours.

                                The people using master/slave are doing something comparatively harmless.

                                It never occurred to me that it was a problem until I saw objections recently. But that was my blindness, not evidence of harmlessness.

                                Some setups in control systems even follow the literal meaning.

                                No they do not. Violence is intrinsic to slavery. Do, e.g. “master” bus signals coerce “slaves”? It’s never been a good metaphor - it is a lazy use of language.

                                It’s just that most blacks I know in the South arent worried about how older whites labeled database functions.

                                Done a survey, have you? I don’t see how that’s relevant. What’s odd to me is how much energy people like you put into complaining about efforts to change nomenclature to something else. What are the emotional stakes for you? Why do you care so much? To me, it was just a minor thing: oh yeah, this metaphor I used for years without thinking about it is both crappy and offensive, we should change it. No big deal. But somehow, for you it’s important to keep calling replicas or clients or backups or controlled signals “slaves”. Why?

                                One of the uses I run into the most, in IEEE 1588, always bothered me, because the metaphor kind of justifies the underlying design error.


                                  “I have been told by people who use “jew” as a pejorative way of saying “bargain hard” “

                                  I would oppose that usage since it was inherently negative. The Jewish business people down here usually do hustle really hard. So do a lot of other types of hustlers. That it’s a capitalist system where hustling is rational makes this a stupid complaint by such people just as much as a racial slur. I’ve been clear I’m for blocking racist language or acts if it’s something there’s a consensus on among groups. Some groups want to expand things way beyond what others accept, with them solely defining what’s acceptable or unacceptable. I block that stuff. Other things, like this slur, most people out here would agree are racist. Many people that do it even admit it but don’t care.

                                  “. “master” bus signals coerce “slaves”? “

                                  In some setups (esp industrial or medical), the control systems can send commands that physically damage the receiver, physical property, or people. They’re usually designed to not do this. I’m not going further since I’m opposed to master/slave already on grounds it’s non-intuitive. No sense further justifying it.

                                  “What’s odd to me is how much energy people like you put into complaining about efforts to change nomenclature to something else. “

                                  It’s the other way around. Everyone was talking tech when someone like you put effort into language changes. They wanted everyone to stop what they were doing, think deeply about their morality, think deeply about the language problem, think deeply about alternatives with their tradeoffs, and then change it in every system in existence. That’s a lot of responsibility and energy. Then, you’re shocked that there’s initial resistance or that I’d throw a few posts at exploring this political act of people demanding the industry to change. The bigger the thing, the larger the energy is put into response.

                                  It’s also political activism on our technical forum representing the one side that does that among the many sides here and elsewhere. I supported that being against the rules in general. People on your side wanted it allowed so they can inject their politics into every discussion trying to force others’ compliance. Although we’re just talking tech, people on your side want to expend energy on this. They often initiate these tangents, too. If you all do that, expect others to put energy into representing their beliefs or just countering the weaker parts of yours. It’s like a few of you think every political discussion will involve your side making a comment with everyone else just nodding, saying “Thanks!,” and changing their life. That’s not how people and politics works.

                                  1. 1

                                    It’s the other way around. Everyone was talking tech when someone like you put effort into language changes.

                                    There are two errors in your remark. First, you begin by assuming the master/slave language was “tech” and didn’t have a political/moral content - which is convenient for your argument in the way that assuming one’s conclusion usually is convenient. Second, you want to characterize your emotional and frankly out of proportion freak out as rational, while my casual observations are, to you, some grand conspiracy to impose a political agenda. It’s not a big deal to me, but apparently is a red flag for you - for reasons I don’t get at all.

                                    1. 0

                                      I’m just counterpointing parts of comments here like I do in other threads. Hardly freaking out. Since you bring it up, a conspiracy is when people work together to do something that benefits them usually in secrecy. The group pushing this brand of politics are certainly working together at the forum level since one comment might be hit by several replies that are usually the same people. The word fails on coordination and secrecy: I don’t think you all are doing that since there’s no need or evidence. You’re just responding to comments from your viewpoints.

                                      A subset are also for censoring anyone with dissenting views on these topics. Ensuring alternative views are represented or they don’t acquire more power isn’t something hypothetical if censorship is an actively-stated goal. Dropping comments on those threads is the bare minimum response to such a thing. So, I do it.

                                      This conversation has dragged on a while, though. I’m done with my part in it since I think we’ve explored the topic plenty. I do want to re-emphasize I think Master-Slave sucks for other reasons and encourage better wording for new developments.

                          3. 2

                            Our people are no longer slaves. That you can’t separate the proper use of certain terminology given the relationship of the components in this system from your own history and feelings is your problem. The ones providing this service should not have to worry about coddling you or anyone else. There’re too many eggshells to avoid stepping on today. It’s mentally taxing.

                            Thinking about this selfishly, I just want useful information. If people continue on with your pervasive and detrimental mindset it will be too hard for others to disseminate what they know and the situation will become less inclusive overall. I worry about this daily.

                            1. 2

I’m literally just asking for a pair of terms to be replaced with synonyms. Text substitution is an easily solved problem.

                              1. 0

                                I’m literally just asking for a pair of terms to be replaced with synonyms.

                                No, you’re not.

                                I’m sad to see that in 2018 they are still using the terms master & slave instead of primary & replica(s).

You’re indirectly admonishing them for being racially insensitive because they used a pair of innocuous words. You’re not just asking for text replacement, you’re trying to shame them into fixing their perfectly acceptable mindset.

This is like back in the late 80’s when people started getting indignant about “the culturally insensitive” act of placing Christmas trees in company lobbies. I’m trying to stay respectful here, but you and everyone else on the internet who would unreasonably police thoughts and words like that while thinking nothing of it piss me off. This problem is widespread and getting worse. If you, a seemingly very intelligent individual, can’t see where this is going then I am right to worry.

                                That is my point of view on this issue, I’m done. Do what you will.

                                1. 4

                                  If you think Christmas trees & slavery are somehow on the same level, you’re a lost cause, my friend.

                              2. 1

Here’s a nice poem you can give people illustrating how it’s a no-win scenario if we’re really trying to avoid offending anyone. What I’ve seen among groups that tell me not to be offensive is that they’re often not really doing that. They have specific beliefs their group likes and doesn’t like. The doesn’t-like is offensive. What they like may also be offensive to lots of people in other groups for similarly arbitrary or decent reasons (varies a lot). They don’t care, though, since they’re not really about not offending people so much as not offending people that agree with them on what’s offensive. Aka forcing others to comply with their beliefs and practices.

                          4. 1

                            In my opinion replica means knock-off, which is not the same as a clone or slave of master.

Granted, a slave is usually promoted to master in failovers. A replica (kit) car is not the same thing as a name-brand car … And I wouldn’t expect a replica to be a suitable replacement for a production system.

                            On the other hand I much prefer the term hot-spare for these kinds of redundant services.

                            1. 0

                              In my opinion replica means knock-off,

Then you should consult a dictionary to correct your opinion.

                              1. 6

rep·li·ca /ˈrepləkə/ noun: an exact copy or model of something, especially one on a smaller scale. “a replica of the Empire State Building.” Synonyms: copy, carbon copy, model, duplicate, reproduction, replication. Also: a duplicate of an original artistic work. “it is a replica of an antique plaque.”

… I stand by what I said. As a native English speaker, to me a replica is not the same as the authentic item/service; it typically implies inferior quality.

                                1. 3

                                  I fail to see how any of what you posted implies inferior quality.

                                  1. 2

                                    Ever shot a gun with “replica” printed on the side? I doubt you have, and I bet you know why

                                  2. 2

Some of that definition means equivalence. That’s the opposite of lower quality. So, the meaning of the word isn’t tied to the replica’s quality. The quality can be anything.

                                  3. 3

                                    Dictionaries are not authoritative sources on what words mean to people. Most words have multiple meanings, some of which are not in dictionaries, and they have connotations, relating them to networks of meaning.

                                    1. 2

                                      And then dictionaries periodically add words that are in popular use.

                                2. 1

                                  Is this a topic of discussion for some people? Call it peanut and butter, as long as we all understand clearly.

                                1. 1

                                  Evidently the author has somehow not heard of Android’s Digital Wellbeing features.

                                  1. 4

                                    Given that ‘digital wellbeing’ is apparently a preview feature in Android 9.0 that only works on Pixel devices, I doubt many people have heard of this feature.

                                    And excuse me if I don’t put much stock in Google wanting people to ‘disconnect’.

                                    1. 1

                                      The reviews indicate that Digital Wellbeing, while a tad slower out of the gate due to Android’s slower upgrade pattern, is at least as well designed as Apple’s approach:


                                  1. 5

                                    While functional programming languages like Haskell are conducive to modularity and otherwise generally good software engineering practices, they are unfit as implementation languages for what I will call interactive systems. These are systems that are heavily IO bound and must provide some sort of guarantee with regard to response time after certain inputs. I would argue that the vast majority of software engineering is the engineering of interactive systems, be it operating systems, GUI applications, control systems, high frequency trading, embedded applications, databases, or video games. Thus Haskell is unfit for these use cases. Haskell on the other hand is a fine implementation language for batch processing, i.e. non-interactive programs where completion time requirements aren’t strict and there isn’t much IO.

                                    It’s not a dig at Haskell, this is an intentional design decision. While languages like Python/Java remove the need to consider memory allocation, Haskell takes this one step further and removes the need to consider the sequential steps required to execute a program. These are design trade-offs, not strict wins.

                                    1. 5

                                      While languages like Python/Java remove the need to consider memory allocation, Haskell takes this one step further and removes the need to consider the sequential steps required to execute a program.

                                      Haskell makes it necessary to explicitly mark code which must be performed in sequence, which, really, is a friendlier way of doing things than what C effectively mandates: In C, you have to second-guess the optimizer to ensure your sequential code stays sequential, and doesn’t get reordered or removed entirely in the name of optimization. When the IO monad is in play, the Haskell compiler knows a lot of its usual tricks are off-limits, and behaves itself. It’s been explicitly told as much.

                                      Rust made ownership, previously a concept which got hand-waved away, explicit and language-level. Haskell does the same for “code which must not be optimized as aggressively”, which we really don’t have an accepted term for right now, even though we need one.
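A minimal sketch of that idea in plain GHC Haskell (the functions here are arbitrary examples, not from any particular codebase): pure code leaves the compiler free to inline, share, or reorder evaluation, while anything threaded through `IO` keeps its stated order.

```haskell
-- A pure function: GHC may inline it, share its result, or evaluate it
-- whenever (or never) without changing the program's meaning.
square :: Int -> Int
square x = x * x

-- IO actions: the order of the two putStrLn calls is fixed by the
-- data dependency that do-notation threads through (>>=).
main :: IO ()
main = do
  putStrLn "first"            -- always runs before the next line
  putStrLn (show (square 3))  -- always runs second; prints 9
```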

                                      1. 8

                                        The optimiser in a C implementation absolutely won’t change the order in which your statements execute unless you can’t observe the effect of such changes anyway. The definition of ‘observe’ is a little complex, but crucially ‘my program is faster’ isn’t an observation that counts. Your code will only be reordered or removed in the name of optimisation if such a change is unobservable. The only way you could observe an unobservable change is by doing things that have no defined behaviour. Undefined behaviour exists in Haskell and Rust too, in every language.

                                        So I don’t really see what this has to do with the concept being discussed. Haskell really isn’t a good language for expressing imperative logic. You wouldn’t want to write a lot of imperative logic in Haskell. It’s very nice that you can do so expressively when you need to, but it’s not Haskell’s strength at all. And it has nothing to do with optimisation.

                                        1. 3

                                          What if you do it using a DSL in Haskell like Galois does with Ivory? Looks like Haskell made their job easier in some ways.

                                          1. 1

Still part of Haskell and thus still uses Haskell’s awful syntax. Nobody wants to write `a <- local (ival 0)`, `b' <- deref b; store a b'`, or ``n `times` \i -> do`` when they could write `int a = 0;`, `a = *b;`, or `for (int i = 0; i < n; i++)`.

                                            1. 8

                                              “Nobody wants to”

                                              You’re projecting your wishes onto everybody else. There’s piles of Haskell code out there, many DSL’s, and some in production. Clearly, some people want to even if some or most of us don’t.

                                              1. 1

                                                There is not ‘piles of Haskell code out there’, at least not compared to any mainstream programming language. Don’t get confused by its popularity amongst people on lobsters, hackernews and proggit. It’s an experimental research language. It’s not a mainstream programming language. It has piles of code out there compared to Racket or Idris or Pony, but compared to Python or C or C++ or Ruby or Java or C# or god forbid Javascript? It might as well not exist at all.

                                                1. 2

I’m not confused. Almost all languages fail, getting virtually no use past their authors. The next step up gets a few handfuls of code. Haskell has had piles of it in comparison, plus corporate backing and use at small scale. Then there are larger-scale backings like Rust or Go. Then there are companies with big market share throwing massive investments into things like .NET or Java. There are also FOSS languages that got lucky enough to get similarly high numbers.

So, yeah, “piles of code” is an understatement, given that most efforts didn’t go that far and a pile of paper with source might not cover the Haskell out there.

                                                  1. 1

                                                    I don’t care how popular Haskell is compared to the vast majority of languages that are used only by their authors. That’s completely irrelevant to the discussion at hand.

                                                    Haskell is not a good language for expressing imperative concepts. That’s plainly and obviously true. Defending it on the basis that it’s widely used ignores that firstly languages aren’t better simply because they’re widely used, secondly that languages can be widely used without necessarily being good at expressing imperative concepts, and thirdly that Haskell isn’t widely used.

                                              2. 4

                                                int a = 0 is okay, but not great. a = *b is complete gobbledygook that doesn’t look like anything unless you already know C, but at least it’s not needlessly verbose.

`for (int i = 0; i < n; i++)` is needlessly verbose and it looks like line noise to anyone who doesn’t already know C. It’s a very poor substitute for actual iteration support, whether it’s `n.times |i|` or `for i in 0..n` or something else to express your intent directly. It’s kind of ridiculous that C has special syntax for “increment variable by one and evaluate to the previous value”, but doesn’t have special syntax for “iterate from 0 to N”.

                                                All of that is kind of a minor nit pick. The real point is that C’s syntax is not objectively good.
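For what it’s worth, Haskell (the language the thread started from) sits on the “state intent directly” side here: iteration is an ordinary combinator rather than loop syntax. A minimal sketch (the range and body are arbitrary):

```haskell
import Control.Monad (forM_)

-- "iterate i from 0 to 4" stated directly, no counter bookkeeping
main :: IO ()
main = forM_ [0 .. 4] $ \i ->
  print i   -- prints 0, 1, 2, 3, 4 on separate lines
```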

                                                1. 2

                                                  How in the world are people unfamiliar with ruby expected to intuit that n.times|i| means replace i with iterative values up to n and not multiply n times i?

                                                  1. 2

                                                    A more explicit translation would be 0.upto(n) do |i|.

                                                  2. 0

                                                    You do know C. I know C. Lots of people know C. C is well known, and its syntax is good for what it’s for. a = *b is not ‘gobbledygook’, it’s a terse way of expressing assignment and a terse way of expressing dereferencing. Both are very common in C, so they have short syntax. Incrementing a variable is common, so it has short syntax.

                                                    That’s not ridiculous. What I am saying is that Haskell is monstrously verbose when you want to express simple imperative concepts that require a single character of syntax in a language actually designed around those concepts, so you should use C instead of Haskell’s weird, overly verbose and syntactically poor emulation of C.

                                            2. 3

How does Haskell allow you to explicitly mark code that must be performed in sequence? Are you referring to seq? If you’re referring to the IO monad, it’s a fair point, but I think it’s generally considered bad practice to default to using the IO monad. This sort of thing creates a burden when programming Haskell, at least for me. I don’t want to have to constantly wonder if I’ll need to port my elegant functional code into sequential IO-monad form in the future. C++/Rust address this sort of decision paralysis via “zero-cost abstractions,” which make them both more fit to be implementation languages, according to my line of reasoning above.

                                              1. 5

                                                Personally, I dislike discussions involving “the IO Monad”. The key point is that Haskell uses data flow for control flow (i.e. it’s lazy). We can sequence one thing after another by adding a data dependency (e.g. making bar depend on the result of foo will ensure that it runs afterwards).

                                                Since Haskell is pure, compilers can understand and optimise expressions more thoroughly, which might remove ‘spurious’ data dependencies (and therefore sequencing). If we want to prevent that, we can use an abstract datatype, which is opaque to the compiler and hence can’t be altered by optimisations. There’s a built-in datatype called IO which works well for this (note: none of this depends at all on monads).
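A minimal sketch of sequencing-by-data-dependency, with hypothetical `foo`/`bar` standing in for the names above:

```haskell
foo :: IO Int
foo = do
  putStrLn "foo runs"
  pure 1

bar :: Int -> IO ()
bar n = putStrLn ("bar sees " ++ show n)

-- bar consumes foo's result, so foo's effect must happen first;
-- the dependency, not any statement ordering, fixes the sequence.
main :: IO ()
main = foo >>= bar
```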

                                                1. 3

                                                  The trouble is that oftentimes when you’re building time-sensitive software (which is almost always), it’s really inconvenient if the point at which a function is evaluated is not clear from the source code. Since values are lazy, it’s not uncommon to quickly build up an entire tree of lazy values, and then spend 1-2 seconds waiting for the evaluation to complete right before the value is printed out or displayed on the screen.

                                                  You could argue that it’s a matter of setting correct expectations, and you’d be right, but I think it defeats the spirit of the language to have to carefully annotate how values should be evaluated. Functional programming should be about functions and pure computation, and there is no implicit notion of time in function evaluation.
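The classic small-scale version of that thunk build-up is `foldl` versus `foldl'`; a sketch:

```haskell
import Data.List (foldl')

-- foldl builds a chain of unevaluated thunks; the whole chain is
-- only forced when the result is finally demanded (e.g. at print).
lazySum :: Int
lazySum = foldl (+) 0 [1 .. 1000000]

-- foldl' forces each intermediate accumulator, avoiding the build-up.
strictSum :: Int
strictSum = foldl' (+) 0 [1 .. 1000000]

main :: IO ()
main = print (lazySum, strictSum)  -- same values; memory behaviour differs
```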

                                                  1. 4

                                                    I agree that Haskell seems unsuitable for what is generally called “systems programming” (I’m currently debugging some Haskell code that’s been over-complicated in order to become streaming). Although it can support DSLs to generate suitable code (I’ve not experience with that though).

                                                    I was just commenting on using phrases like “the IO Monad” w.r.t. evaluation order, etc. which is a common source of confusion and hand-waving for those new to Haskell, or reading about it in passing (since it seems like (a) there might be something special about IO and (b) that this might have something to do with Monads, neither of which are the case).

                                                    1. 2

                                                      building time-sensitive software (which is almost always)

                                                      Much mission-critical software is running in GC’d languages whose non-determinism can kick in at any point. There’s also companies using Haskell in production apps that can’t be slow. At least one was using it specifically due to its concurrency mechanisms. So, I don’t think your “almost always” argument holds. The slower, less-predictable languages have way too much deployment for that at this point.

Even “time-sensitive” doesn’t mean what it seems to mean outside real-time, since users and customers often tolerate occasional delays or downtime. Those they don’t tolerate might also be fixed with some optimization of those modules. Letting things be a bit broken and fixing them later is the default in mainstream software. So it’s no surprise it happens in lots of deployments that are supposedly time-critical as a necessity.

                                                      In short, I don’t think the upper bounds you’ve established on usefulness match what most industry and FOSS are doing with software in general or timing-sensitive (but not real-time).

                                                      1. 2

Yeah, it’s a good point. There certainly are people building acceptably responsive apps with Haskell. It can be done (just like people are running Go deployments successfully). I was mostly speaking from personal experience on various Haskell projects across the gamut of applications. Depends on cost/benefit, I suppose. For some, the state-of-the-art type system might be worth the extra cycles dealing with the occasional latency surprise.

                                                        1. 2

                                                          The finance people liked it because it was both closer to their problem statements (math-heavy), the apps had lower defects/surprises vs Java/.NET/C, and safer concurrency. That’s what I recall from a case study.

                                                  2. 1

                                                    If you’re referring to the IO Monad, it’s a fair point, but I think generally it’s considered bad practice to default to using the IO monad

                                                    Lmao what? You can define >>= for any data type effectively allowing you to create a DSL in which you can very precisely specify how the elements of the sequence combine with neat do notation.
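A toy illustration of that point: a hypothetical `Logged` type whose `(>>=)` decides how steps combine (here, concatenating a log in order), which `do`-notation then drives:

```haskell
-- A tiny DSL: a computation that also accumulates a log.
newtype Logged a = Logged (a, [String])

instance Functor Logged where
  fmap f (Logged (a, w)) = Logged (f a, w)

instance Applicative Logged where
  pure a = Logged (a, [])
  Logged (f, w1) <*> Logged (a, w2) = Logged (f a, w1 ++ w2)

instance Monad Logged where
  -- (>>=) defines how steps chain: run k on a, append its log.
  Logged (a, w1) >>= k = let Logged (b, w2) = k a
                         in Logged (b, w1 ++ w2)

step :: Int -> Logged Int
step n = Logged (n + 1, ["incremented to " ++ show (n + 1)])

program :: Logged Int  -- do-notation now sequences via our (>>=)
program = do
  a <- step 0
  step a
```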

                                                    1. 2

Yes, that’s exactly the problem to which I’m referring: Do notation considered harmful. Also, do notation isn’t enough to specify evaluation sequencing, since values are lazy. You must also carefully use `seq`.
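A small sketch of the `seq` point (the binding is an arbitrary example): binding a value in a do-block does not evaluate it; `seq` (or a bang pattern) does:

```haskell
main :: IO ()
main = do
  let x = sum [1 .. 1000000 :: Int]  -- still an unevaluated thunk here
  x `seq` putStrLn "x is now forced" -- seq evaluates x before the print
  print x
```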

                                                      1. 1

Ah well, I use a Haskell-like language that has strict-by-default evaluation and seems to address a lot of those other concerns, at least at a cursory glance :)

                                                        Either way the benefits of do, in separating the logic and execution of procedures, look great to me. But I may be confusing them with the benefits of dependent typing, nevertheless the former facilitates the latter when it comes to being able to express various constraints on a stateful system.

                                                2. 3

For systems Haskell, you might like Habit from the people behind House, a Haskell OS. I just found some answers to the timing part that I’ll submit in the morning.

                                                  1. 1

                                                    The House website seems incredibly out of date!

                                                    1. 3

Oh yeah. It’s mostly historical. They dropped the work for the next project, then dropped that for an even better one. We got some papers and demos out of it.

                                                      1. 2

                                                        But so damn cool.

                                                        1. 2

                                                          Exactly! Even more so, there’s a lot of discussion of how to balance the low-level access against Haskell’s high-level features. They did this using the H Layer they describe in some of their papers. It’s basically like unsafe in Rust where they do the lowest-level stuff in one way, wrap it where it can be called by higher-level Haskell, and then do what they can of the rest in Haskell. I figured the concepts in H Layer might be reusable in other projects, esp safe and low-level. The concepts in Habit might be reusable in other Haskell or non-Haskell projects.

It being old doesn’t change that. A good example is linear logic, which dates from the 1980s. It got used in ML years later, I think; then linear plus singleton types appeared in some safer C’s in the 2000s; and an affine variant of one of them landed in Rust, which made a huge splash with its “no GC” claim. Now, linear and affine types are being adapted to many languages. The logic is decades old, with people talking about using it for language safety for 10-20 years. Then someone finds it useful in a modern project with major results.

                                                          Lots of things work that way. It’s why I submit older, detailed works even if they have broken or no code.

                                                    2. 1

None of the examples of “interactive systems” you mention are normally IO bound. Sub-second response time guarantees, OTOH, are only possible by giving up GC and using a real-time kernel. Your conclusion that Haskell is unusable for “these use cases” seems entirely unfounded. Of course, using Haskell for real-time programming is a bad idea, but no less bad than anything that’s, essentially, not C.

                                                      1. 2

I’ve had a few personal experiences writing large Haskell applications where it was more trouble than I thought it was worth. I regularly had to deal with memory leaks due to laziness and 1-5 second stalls at IO points where large trees of lazy values were evaluated at the last minute. I said this in another thread: it can be done, it just requires a bit more effort and awareness. In any case, I think it violates the spirit of Haskell programming to have to carefully consider latency issues, GC times, or lazy value evaluation when crafting pure algorithms. Having to trade off abstraction for performance is wasteful IMO; I think Rust and C++ nail this with their “zero-cost abstractions.”

                                                        I would label most of those systems IO bound. My word processor is normally waiting on IO, so is my kernel, so is my web app, so is my database, so is my raspberry pi etc.

                                                        1. 1

I guess I’m picking nits here, but using lots of working memory is not “memory leaks”, and a program that is idling due to having no work to perform is not “IO bound”. Having “to carefully consider latency issues, GC times, [other tradeoffs]” is something you have to do in every language. I’d venture that the ability to do so on a subconscious level is what distinguishes a skilled developer from a noob. This also, I think, plays a large part in why it’s hard for innovative/weird languages to find adoption; they throw off your sense of how things should be done.

                                                          1. 1

                                                            Yes you have to consider those things in all languages which is precisely my point. Haskell seeks to abstract away those details but if you want to use Haskell in any sort of “time-sensitive” way, you have to litter your pure, lazy functional code with annotations. That defeats the purpose of the language being pure and lazy.

                                                            And yes, waiting on user input does make your program IO bound. If your program is spending more time waiting on IO and less time churning the CPU, it is IO bound. IO bound doesn’t simply mean churning the disk.

                                                          2. 1

                                                            I brought that up before as a counterpoint to using Haskell. A Haskeller gave me this link which is a setting for making it strict by default. Might have helped you out. As a non-Haskeller, I can’t say if it makes the language harder to use or negates its benefits. Worth looking into, though, since it was specifically designed to address things like bang patterns that were cluttering code.
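Presumably that link is GHC’s `Strict` language extension (available since GHC 8.0). A sketch of what it buys, using a made-up `mean` function that would otherwise need bang patterns on its accumulators to avoid thunk build-up:

```haskell
{-# LANGUAGE Strict #-}
-- With Strict, bindings in this module are strict by default,
-- so the accumulators s and n are forced on every recursive call
-- without any explicit bang patterns.
mean :: [Double] -> Double
mean = go 0 0
  where
    go s n []       = s / fromIntegral (n :: Int)
    go s n (x:rest) = go (s + x) (n + 1) rest

main :: IO ()
main = print (mean [1, 2, 3, 4])  -- prints 2.5
```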

                                                      1. 3

                                                        I think this varies by language. IMO the sweet spot for productivity on projects past a certain size is:

                                                        • Statically-typed, null-safe languages,
                                                        • with integration tests covering the major flows / use cases,
                                                        • and a small number of unit tests for core library code.

                                                        This is pretty much the inverse of the “testing pyramid” I’ve seen discussed elsewhere — where the base of the pyramid is a large number of unit tests, and on top of that sit far fewer integration tests — but assuming the above are true, I agree with the author that in my experience additional unit tests rarely provide additional value and incur high maintenance costs.

                                                        That being said, for dynamically-typed languages like Ruby (or, to some extent, even statically-typed languages with inexpressive, unsound type systems like Go), you really do need high unit test coverage on large codebases to ensure correctness. You can’t rely on the compiler to tell you if you did something wrong, humans inevitably make mistakes, and on big enough projects the original authors of a piece of code will sometimes (often?) miss catching those mistakes, or may not even be around anymore. I think this is one of the downsides of using those kinds of languages for large projects: the time you save by not writing out the types, you end up paying back many times over in maintaining extensive unit test suites.

                                                        For small teams and codebases you can pretty much do whatever, though. If enough of the whole thing can fit in everyone’s heads, humans can be a sufficiently smart compiler and test framework.

                                                        1. 3

                                                          In some cases, you can think of things like type signatures in a statically typed language as a replacement for a certain class of unit test. I.e. you get the type system to verify certain properties instead of just using unit tests to bounce data off it. At some point, you reach the limits of your type system (just how much you can express in the type system will vary from language to language), so you fill the gaps with unit tests.

                                                          If you look at it this way, the “testing pyramid” is still the right way up: you have a large base of very localised type signatures, with unit tests filling the gaps, and then a smaller number of integration tests forming the top of your pyramid.
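
                                                          A small (hypothetical) Go sketch of a type signature standing in for a unit test: by making `NonEmpty` constructible only through a validating constructor, every function that accepts one gets the “input is non-empty” guarantee from the compiler, instead of from a unit test per caller. All the names here are made up for illustration.

                                                          ```go
                                                          package main

                                                          import (
                                                          	"errors"
                                                          	"fmt"
                                                          )

                                                          // NonEmpty can only be built through New, so any function taking a
                                                          // NonEmpty needs no "what if it's empty?" unit test at the call site.
                                                          type NonEmpty struct{ s string }

                                                          func New(s string) (NonEmpty, error) {
                                                          	if s == "" {
                                                          		return NonEmpty{}, errors.New("empty string")
                                                          	}
                                                          	return NonEmpty{s}, nil
                                                          }

                                                          // First is safe by construction: a NonEmpty always has a first byte.
                                                          func First(n NonEmpty) byte { return n.s[0] }

                                                          func main() {
                                                          	n, err := New("hello")
                                                          	if err != nil {
                                                          		panic(err)
                                                          	}
                                                          	fmt.Printf("%c\n", First(n)) // prints: h
                                                          }
                                                          ```

                                                          The remaining gap (Go can’t express “non-empty” in the type itself) is exactly where the unit tests for `New` live.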

                                                          1. 1

                                                            I can see why you might think of Go’s type system as inexpressive but I’m curious as to why you think it’s unsound.

                                                            1. 2

                                                              Since Go allows nil to be passed in the place of any interface, it’s unsound (in the sense that soundness is used in type theory): when the Go type-checker says that an expression is of some interface type, we may at runtime get a value that isn’t actually of that interface type — instead, we can get nil.

                                                              Found this article from Brown’s CS department that gives a decent explanation of soundness:

                                                              The central result we wish to have for a given type-system is called soundness. It says this. Suppose we are given an expression (or program) e. We type-check it and conclude that its type is t. When we run e, let us say we obtain the value v. Then v will also have type t.

                                                              In practice a fair amount of Go code is even less reliable from a type theory perspective than just not-nil-safe: to compensate for not having generics it seems somewhat common practice to cast to and from the empty interface{} type and hope you didn’t mess up somewhere, much like void pointers in C.
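
                                                              A minimal Go sketch of that nil loophole (the `Greeter` interface and `shout` function are hypothetical, just to show the hole):

                                                              ```go
                                                              package main

                                                              import "fmt"

                                                              type Greeter interface{ Greet() string }

                                                              func shout(g Greeter) string {
                                                              	// The type-checker says g is a Greeter, but nil also inhabits
                                                              	// every interface type, so this call can panic at runtime.
                                                              	return g.Greet() + "!"
                                                              }

                                                              func main() {
                                                              	defer func() {
                                                              		if r := recover(); r != nil {
                                                              			fmt.Println("compiled fine, but panicked at runtime")
                                                              		}
                                                              	}()
                                                              	shout(nil) // accepted by the compiler
                                                              }
                                                              ```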

                                                          1. 4

                                                            I switched my work language from a compiled language (C#) to an interpreted one (Ruby) a couple of years ago. Before that switch, I tended to agree with the author more. After working in a non-compiled language for a while, I’ve started to lean more towards always writing tests for everything. That happened after I kept making the kind of brain-dead mistakes that would be caught by the compiler in such languages - wrong/misspelled variable name, syntax errors, basic logic mistakes, etc.

                                                            You could find these bugs if you reviewed your code really carefully. But I find it very time-consuming and unappealing to review my own code carefully enough to find this stuff. Writing a few quick tests proves that it actually works, and keeps proving it for the lifetime of the code.

                                                            I thought I was smart and careful enough to not have to worry about that kind of thing, but the number of times I’ve been bitten much later than I would like tells me that I’m not. Maybe you’re better and you actually don’t make those kinds of mistakes, but do try counting how many times it’s happened to you.

                                                            This also points back towards sticking with compiled languages after all, but that’s a whole different discussion.

                                                            1. 1

                                                              Indeed, rigorous unit tests can serve as a crutch for languages that lack static type-checking.

                                                            1. 5

                                                              How old is this post? I see there’s an update from 2009 when someone pointed out the fundamental difference between check boxes and radio buttons to the author.

                                                              1. 3

                                                                Wayback machine says 2007.

                                                                1. 1

                                                                  Sadly, he doesn’t date his content :(

                                                                1. 1

                                                                  “Their explanation was that the slow Cortex A7 cores have a single-cycle access to their level-1 caches, whereas the fast Cortex A15 cores have a four-cycle penalty.”

                                                                  That in itself was interesting. I wonder why the fast core is slower on memory access.

                                                                  1. 2

                                                                    Maybe the cycles are faster/shorter but the L1 cache speed is the same?

                                                                    1. 1

                                                                      That was one possibility I considered. Another was it might have more memory with a longer, lookup operation. Both could even happen. Could be some synchronization logic in there. Not a hardware specialist, though, so could only guess.

                                                                      1. 2

                                                                        This reminds me of the situation in the Nintendo DS (warning: huge page): both CPUs access memory through the same bus (which seems to be the case with big.LITTLE, too), but the bus is clocked at the same frequency as the ARM7 (the weaker CPU). This makes memory accesses on the ARM9 slower relative to its CPU frequency, and can even make the ARM9’s net performance worse than the ARM7’s.

                                                                  1. 4

                                                                    Importantly, all of the techniques that do generate good code from JavaScript require a virtual machine that performs dynamic recompilation. If the most efficient way of running general-purpose code on a processor is to implement an emulator for a different (imaginary) architecture, then it is hard to argue that the underlying substrate is truly general purpose.

                                                                    I think this fails to reject alternative explanations, such as that JavaScript is not general-purpose code. :)

                                                                    1. 1

                                                                      That explains nothing. If you need all that machinery even for code that isn’t general purpose, the situation can only be worse!

                                                                    1. 12

                                                                      Yet what I found even more troubling was that in order to write effective tests, a programmer had to know all of the ways that a piece of software could fail in order to write tests for those cases. If she forgot the square root of -1 was undefined, she’d never write a test for it. But, obviously, if you knew where your bugs were, you’d just fix them. Errors typically hide in the places we forget to look

                                                                      I used to think like this, but then I realised that tests are not about catching every kind of issue that could occur. The greatest value in tests is that they ensure that stuff that once worked continues to work. If someone makes a change and it breaks something that was working before, the tests will catch that. If you find a new bug that wasn’t covered by the tests, then write a new test that fails on that bug. Now you won’t have that bug go unnoticed again.
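
                                                                      A sketch of that workflow in Go, using the square-root example from the quote (the `Sqrt` wrapper here is hypothetical): once the bug with negative input is found, it gets pinned down by a check so it can’t silently regress.

                                                                      ```go
                                                                      package main

                                                                      import (
                                                                      	"errors"
                                                                      	"fmt"
                                                                      	"math"
                                                                      )

                                                                      // Sqrt is a hypothetical wrapper that treats negative input as an
                                                                      // error instead of silently returning NaN.
                                                                      func Sqrt(x float64) (float64, error) {
                                                                      	if x < 0 {
                                                                      		return 0, errors.New("sqrt: negative input")
                                                                      	}
                                                                      	return math.Sqrt(x), nil
                                                                      }

                                                                      func main() {
                                                                      	// The regression check: having once been bitten by Sqrt(-1),
                                                                      	// encode the expectation so the bug can't come back unnoticed.
                                                                      	if _, err := Sqrt(-1); err == nil {
                                                                      		fmt.Println("BUG: Sqrt(-1) did not report an error")
                                                                      	} else {
                                                                      		fmt.Println("ok: Sqrt(-1) is rejected")
                                                                      	}
                                                                      }
                                                                      ```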

                                                                      1. 7

                                                                        Absolutely. The other thing? It is much easier to debug into a unit test than into application code that will only be called under very specific circumstances. So the “write a new test that fails on that bug” is the core of my debugging strategy.

                                                                        1. 1

                                                                          So regression tests, essentially.

                                                                        1. 34

                                                                          I’m impressed by the lack of testing for this “feature”. It may have a huge impact for end users, but they managed to ship it with noob errors like the following:

                                                                          Why is www hidden twice if the domain is “www.www.2ld.tld”?

                                                                          Who in their right mind misses that, and how on Earth wasn’t it caught at some point before it made it to the stable branch?

                                                                          1. 11

                                                                            url = url.replace(/www/g, '') - job well done!
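
                                                                            For what it’s worth, a naive global strip really does reproduce the double-hiding symptom from the bug report (a sketch in Go, not the actual Firefox code):

                                                                            ```go
                                                                            package main

                                                                            import (
                                                                            	"fmt"
                                                                            	"strings"
                                                                            )

                                                                            func main() {
                                                                            	// A global "www." strip hides both subdomains, not just the
                                                                            	// leading one - the same symptom the bug report describes.
                                                                            	fmt.Println(strings.ReplaceAll("www.www.example.com", "www.", ""))
                                                                            	// prints: example.com
                                                                            }
                                                                            ```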

                                                                            1. 21


                                                                              What’s really eye-opening is that comment just below wrapped in the pre-processor flag! Stunning.

                                                                              1. 9

                                                                                Wow, so whoever controls www.com can disguise as any .com page ever? And, as long as it’s served with HTTPS, it’ll be “secure”? That’s amazing.

                                                                                1. 5
                                                                                  1. 5

                                                                                    Not just .com. On any TLD so you could have lobster.www.rs

                                                                                  2. 3

                                                                                    If I may ask, how is this worse than url = url.replace(/www/g, '')? If anything, the current implementation uses a proper tokenizer to search and replace instead of a naive string replace.

                                                                                    1. 2

                                                                                      That’s just my hyperbole.

                                                                                2. 10

                                                                                  Right, the amateurishness of Google here is stunning. You’d think with their famed interview process they’d do better than this.

                                                                                  On a tangential rant, one astonishing phenomenon is the helplessness of tech companies with multibillion-dollar capitalizations at relatively simple things like weeding out obvious bots or fixing the ridiculousness of their recommendation engines. This suggests a major internal dysfunction.

                                                                                  1. 14

                                                                                    To continue off on the tangent, it sounds like the classic problem with any institution when it reaches a certain size. No matter which type (public, private, government…), at some point the managerial overhead becomes too great and the product begins to suffer.

                                                                                    Google used to have a great search engine. It might even still be great for the casual IT user, but the signal-to-noise ratio has tanked completely within the past ~2 years. Almost all of my searches are now made on DuckDuckGo and it’s becoming increasingly rare that I even try Google, and when I do it’s mostly an exercise in frustration and I spend the first 3-4 searches on quoting and changing words to get proper results.

                                                                                    1. 5

                                                                                      Large institutions collapsing under their own managerial weight is more of a ‘feature’ in this case.

                                                                                      1. 1

                                                                                        What are a few examples of queries for which DDG produces better results than Google?

                                                                                        1. 2

                                                                                          I’m not able to rattle off any examples, sorry. I’ll try to keep it in mind and post an example or two, but don’t hold your breath :)

                                                                                          I’ve been using DDG as my primary search engine for 2-3-4 years now, and have tried to avoid Google more and more in that same time frame. This also means that all the benefits of Google having a full profile on me are missing from the equation, and I don’t doubt that explains a lot of the misery I experience in my Google searches. However, I treat DDG the same and they still manage to provide me with better search results than Google…

                                                                                          In general, every search that includes one or more common words tends to be worse on Google. It seems to me that Google tries to “guess” the intent of the user way too much. I don’t want a “natural language” search engine; I want a search engine that searches for the words I type into the search field, no matter how much they seem like misspellings.

                                                                                  1. 5

                                                                                    One of the keys to understanding Go code is to translate the following types to their underlying data structures:

                                                                                    Map: pointer to hashmap

                                                                                    Slick: struct{ptr, len, cap}

                                                                                    Interface: struct{ptr, type}

                                                                                    After that Go is surprisingly unsurprising.
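
                                                                                    A quick sketch of why the {ptr, len, cap} model pays off: it predicts exactly when two slices share a backing array and when append detaches them.

                                                                                    ```go
                                                                                    package main

                                                                                    import "fmt"

                                                                                    func main() {
                                                                                    	// A slice is a small header {ptr, len, cap}; assignment copies
                                                                                    	// the header, not the backing array.
                                                                                    	a := make([]int, 3, 4) // len 3, cap 4
                                                                                    	b := a                 // b shares a's backing array
                                                                                    	b[0] = 42
                                                                                    	fmt.Println(a[0]) // 42: same array

                                                                                    	// Appending within capacity reuses the array; growing past it
                                                                                    	// allocates a new one and detaches b from a.
                                                                                    	b = append(b, 9)  // len 4, cap 4: same array
                                                                                    	b = append(b, 10) // exceeds cap: new array
                                                                                    	b[0] = 7
                                                                                    	fmt.Println(a[0]) // still 42: b now has its own array
                                                                                    }
                                                                                    ```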

                                                                                    1. -1

                                                                                      Never heard of a Slick before! What language has those!?

                                                                                      Oh, you probably meant slice…

                                                                                    1. 1

                                                                                      Alternate scenario: use AppEngine.

                                                                                      1. 1

                                                                                        App Engine has a lot of limitations in practice. In particular, you can’t run native code which makes it hard (impossible?) to exec arbitrary programs. At work, we call external programs (e.g. Git, Maven, Go) a lot through exec and pipes. We could emulate their behaviour instead, but that’s a Red Queen’s race we don’t really want to run.
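
                                                                                        For anyone unfamiliar, this is the kind of thing the classic App Engine sandbox disallowed; a trivial sketch, using `echo` as a stand-in for Git/Maven/Go:

                                                                                        ```go
                                                                                        package main

                                                                                        import (
                                                                                        	"fmt"
                                                                                        	"os/exec"
                                                                                        	"strings"
                                                                                        )

                                                                                        func main() {
                                                                                        	// Spawn an external program and read its output over a pipe.
                                                                                        	out, err := exec.Command("echo", "hello from a subprocess").Output()
                                                                                        	if err != nil {
                                                                                        		fmt.Println("exec failed:", err)
                                                                                        		return
                                                                                        	}
                                                                                        	fmt.Println(strings.TrimSpace(string(out)))
                                                                                        }
                                                                                        ```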

                                                                                        1. 1

                                                                                          Yeah, it won’t work for your needs. But the OP didn’t seem to have needs like that.

                                                                                          1. 1

                                                                                            I don’t think OP was talking about any particular needs. This was a general architecture for launching web applications without black box technologies. E.g.,

                                                                                            The industry has provided a number of hosted options that handle most of this for you. Instead of building all of this yourself, you can rely on Beanstalk, AppEngine, GKE, ECS, etc. Most of these services setup sensible permissions, load balancers, subnets, etc… automatically. They take a lot of the hassle out of getting an application up and running quickly that has the reliability your site needs to run for a long time.

                                                                                            Regardless, I think it’s useful to understand what functionality each of these platforms provides and why they provide it. It makes it easier to select a platform based on your own needs.

                                                                                            We’ve built a scalable web application with backups, rollbacks (using blue/green deployments between production and staging), centralized logging, monitoring, and alerting. This is a good point to stop, since growth from here tends to depend on application-specific needs.

                                                                                      1. 2

                                                                                        “if the GNI is much higher than the GDP this can mean that the country receives large amounts of foreign aid”

                                                                                        The lone country in their accompanying graph where this inequality holds is Norway. I find it hard to believe that Norway is the recipient of any foreign aid…

                                                                                        1. 4

                                                                                          In Norway’s case, the reason is probably their gigantic sovereign wealth fund. Income from foreign investments held by nationals is included in GNI (as it’s income flowing in to the country) but not included in GDP (because it’s not economic activity that takes place in the country itself).

                                                                                        1. 1

                                                                                          Decades later, SF has finally managed to recreate the tenements of the Lower East Side & East Village. But with wifi.

                                                                                          More importantly, the author seems to have found a way to live in a better housing situation in SF without working as a programmer but doesn’t explain how; seems suspicious…

                                                                                          1. 2

                                                                                            Why not use transactions and (compound?) unique indexes in order to make the whole operation atomic?

                                                                                            1. 3

                                                                                              It’s a trade-off; there’s a cost to using transactions and you can avoid that cost if your operations are idempotent. As you say though, if it’s infeasible to make an op idempotent then transactions are an excellent fallback :)
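
                                                                                              A toy Go illustration of the distinction (the order/balance names are made up): a “set to value” write can be retried blindly after an ambiguous failure, while an increment cannot, and the latter is where you reach for transactions or unique indexes.

                                                                                              ```go
                                                                                              package main

                                                                                              import "fmt"

                                                                                              func main() {
                                                                                              	// Idempotent: recording a fact. Retrying leaves the same final
                                                                                              	// state, so no transaction is needed.
                                                                                              	status := map[string]string{}
                                                                                              	markPaid := func(orderID string) { status[orderID] = "paid" }
                                                                                              	markPaid("order-123")
                                                                                              	markPaid("order-123") // blind retry: harmless
                                                                                              	fmt.Println(status["order-123"]) // prints: paid

                                                                                              	// Not idempotent: incrementing. A blind retry double-counts,
                                                                                              	// which is where transactions / unique indexes come in.
                                                                                              	balance := 0
                                                                                              	credit := func(amount int) { balance += amount }
                                                                                              	credit(10)
                                                                                              	credit(10) // blind retry: wrong, balance is now 20
                                                                                              	fmt.Println(balance) // prints: 20
                                                                                              }
                                                                                              ```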

                                                                                              1. 1

                                                                                                It returns 404 now.

                                                                                                1. 1

                                                                                                  Um, yeah, the author deleted the post. So what do I do now? Delete my post here? Hope that the post might come back?

                                                                                                  1. 1

                                                                                                    Yeah, even the cached version doesn’t work!

                                                                                                1. 1

                                                                                                  Wow, this is a remarkable romp through computing history!