Computer science moves at the pace of its algorithms and discoveries. Languages will always come and go, unless a language springs from a good theory.
If you want to understand why this is true, just look at the history of mathematics. Read about algebraic notations, the kinds of abacuses that were used, slide rules, mechanical calculators. You will find that what we have today is a small fragment of what once existed, and what survives was carried to the present because there are few obviously better ways to do the same things.
On this basis, I’d propose that the current “top 20” by RedMonk cannot form any kind of long-running status quo. It’s a large list of programming languages rooted in the same theory (JavaScript, Java, Python, PHP, C#, C++, Ruby, C, Objective-C, Swift, Scala, Go, TypeScript, Perl, Lua).
There is going to be only one in 30 years, and I think it’ll fall somewhere along the C or JavaScript axis. They are syntactically close, and a lot of software was and still is written in these languages. Although even more has been written in C++, it’s far too contrived to survive without being reduced back to something like C.
CSS may have some chance of surviving, but it’s quite different from the rest. About Haskell I’m not sure. I think typed lambda calculus will appear, or reappear, in a better form elsewhere. That language will be similar to Haskell though, and may even bear the same name.
Unix shells and their commands will probably survive, while PowerShell and DOS will wither. Windows already seems to have its days numbered. Sadly, not because of the open source movement; Microsoft just botched things up again.
R seems like a write-and-forget language. But it is rooted in Iverson’s notation. Perhaps the notation itself will stick around, but not its current instances.
I think hardware becoming more concurrent and diverging from the linear execution model will permanently shake up this list in the short term. The plethora of programming languages that prescribe a rigid evaluation strategy simply will not survive. Though I have a bit of a bias toward thinking this way, so I may not be a reliable source for predicting the future.
But I think this is still better than looking at programming language rankings.
I think, most importantly, we haven’t even seen anything like the one language to rule them all. I expect that language to be in the direction of Conal Elliott’s work on compiling to categories.
A language built around category theory from the start: you have many different syntactic constructs, and the ones you use in a given expression determine the properties of the category that the expression lives in. Such a language could locally have the properties of all current languages and could provide optimal interoperation.
BTW, I think we won’t be calling the ultimate language a “programming language”, because it’ll be as good for describing electrical circuits, mechanical designs, and biological systems as for describing programs. So I guess it’ll be called something like a specification language.
“we haven’t even seen anything like the one language to rule them all.”
That’s exactly what the LISPers always said they had. Their language could be extended to do anything. New paradigms and styles were regularly backported to it as libraries. It’s also used for hardware development and verification (ACL2).
Well, it’s hard to say anything about LISPs in general, since the family is so vast and academic, and especially hard for me, since my contact with any LISP is quite limited. But from my understanding of how LISP is commonly used, it doesn’t qualify.
First of all, I think dropping static analysis is cheating, but I don’t intend to wade into an eternal flame war here. What I mean by “the properties of the current languages” is: no implicit allocations, borrow checking, and inline assembly like in Rust; purity and parametricity like in Haskell; capability security like in Pony; and so on. And not only the semantics of these, but also compilers taking advantage of those semantics to provide static assistance and optimizations (like using the stack instead of the heap, laziness and strictness analysis, etc.).
And I’m not just talking about being able to embed these in a given language; you should also be able to write code such that, if it’s simple enough, it’s usable in many of them. For instance, it’d be hard to come up with language semantics in which the identity function cannot be defined, so the identity id x = x should be usable under any local semantics (after all, every category needs identity morphisms). You should also be able to write code that interfaces between these local semantics without leaving the language or its static analysis.
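For a flavor of how little the identity function demands of its host semantics, here is the same idea expressed with Python’s gradual typing (my illustration; a TypeVar stands in here for “any local semantics”):

```python
from typing import TypeVar

T = TypeVar("T")

def ident(x: T) -> T:
    """The identity morphism: definable and valid at every type."""
    return x

# Usable unchanged under any choice of type:
assert ident(42) == 42
assert ident("hello") == "hello"
assert ident([1, 2, 3]) == [1, 2, 3]
```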
I know you can embed these things in LISP, expose enough structure from your LISP code to perform static analysis, get LISP to emit x86 assembly etc. etc. But, IMHO, this doesn’t make LISP the language I’m talking about. It makes it a substrate to build that language on.
I think one major difference between math and computer science, and why we’re not going to see a lot of consolidation for a while (not even in 30 years, I don’t think), is that code on the internet has a way of sticking around, since it does more than just sit in research papers or provide a tool for a single person.
I doubt we’ll see 100% consolidation any time soon, if for no other reason than that it’s too easy to create a new programming language.
Hardware changes might shake up this list, but I think it’ll take 30 years for that to be realized, and a lot of programming languages will fall out of it.
We’re definitely still going to have COBOL in 30 years, and Java, and C. The rest I’m unsure of, but I’ll bet we’ll be able to recognize the lineage of a lot of the current top 30 when we look in 30 years.
R seems like a write-and-forget language. But it is rooted in Iverson’s notation.
Did you mean to write J or APL? I understand R as the statistics language.
I’m disappointed to read the negative comments on TFA complaining that the author has “merely” identified a problem and called on us to fix it, without also implementing the solution. That is an established pattern: a revolution has four classes of actor.
Identifying a problem is a necessary prerequisite to popularising the solution, but all four steps need not be undertaken by the same person. RMS wrote the GNU Manifesto, but did not write all of GNU. Martin Luther wrote the ninety-five theses, but did not carry out the entire Protestant Reformation. Karl Marx and Friedrich Engels wrote the Communist Manifesto but did not lead a revolution in Russia or China. The Agile Manifesto signatories wrote the manifesto for agile software development but did not all personally tell your team lead to transform your development processes.
Understandably, there are people who do not react well to theory, and who need the other three activities to be completed before they can see their role in the change. My disappointment is that the noise of so many people saying “you have not told me what to do” drowns out the few asking themselves “what is to be done?”
I read through it and concluded that the author says nothing. I cannot exactly identify the problem he raises. And judging by the first lines, I think he’s an idiot.
That computing is so complex is not the fault of nerdy 50-year-old men. If the nerdy 50-year-olds had designed this stuff, we’d be using Plan 9 with Prolog and not have as many problems as we do now.
The current computing platforms were created by companies and committees with commercial interests. They’ve provided all the great and nice specs such as COBOL, ALGOL, HDMI, USB, UEFI, XML and AHCI, just a few to start the list with. All of the bullshit is the handwriting of the ignorant, not of those playing Dungeons & Dragons or solving Rubik’s cubes.
Not surprised. With Oracle’s Java copyright lawsuit going the way it is, Google faces license-price extortion if they stick with Java, so they pretty much need to kill Android if the lawsuit succeeds. Building their own platform also gives them the copyrights, so the ruling will be helpful in maintaining a tight grip on it.
As I understand it (and as noted by alva above), Fuchsia competes with Linux, not Java. It’s a microkernel, not a language VM or a language. The article was technically confused; the Oracle line was just a throwaway and not terribly accurate.
Java is going to be in Android forever, simply because there are hundreds of thousands of apps written in Java. Kotlin was a logical move because it’s very compatible with Java.
I assume the thing everyone is kind of getting at is Flutter, which is the preferred app framework for Fuchsia, and it’s not Java-encumbered.
A programmer’s mental model changes as he learns, and it is very flexible. Statically typed languages cannot match that.
Many statically typed languages still allow you to make really bad errors and botch things in numerous ways despite their type system. The classic example of this is the whole C language. But it is not the only statically typed language riddled with pitfalls. For example, you may use a variable before its contents have been set, in some corner case the language designer did not manage to cover. Then you get a surprise null, despite careful use of Maybe or a nullable property.
Another example of this kind of failure is the introduction of corruption bugs. Too many popular statically typed languages do not protect you from data corruption when handling mutable data, and do not provide tools to protect your mutable data from such bugs.
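A concrete sketch of such a corruption bug (my illustration, written in Python for brevity, though the aliasing behavior is the same in Java, C#, or Go, where the types all check out too): a static type like “list of ints” says nothing about sharing, so a callee can silently corrupt the caller’s data.

```python
DEFAULT_PORTS = [80, 443]  # meant to be a shared default

def with_extra_port(ports, extra):
    # Bug: mutates its argument instead of copying it.
    # The types all check out (list in, list out), so a conventional
    # static type system has nothing to say about this.
    ports.append(extra)
    return ports

custom = with_extra_port(DEFAULT_PORTS, 8080)
assert custom is DEFAULT_PORTS           # same object, not a copy
assert DEFAULT_PORTS == [80, 443, 8080]  # the shared default is corrupted
```

Languages with borrow checking (Rust) or enforced purity (Haskell) do rule this class of bug out, which is part of what combining such guarantees would buy.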
I think dynamically typed languages are easier to use because they genuinely let you decide later on some of the design problems you face. They are polymorphic without extra work, and programmers who use them naturally produce more abstract code. I also think you can prove dynamically typed programs correct, and you don’t need full type annotations for that, which means the program can remain dynamically typed afterwards.
They are, simply, better programming languages.
Most of these arguments are unrelated to static vs dynamic typing. It sounds like you’re arguing that dynamic languages are easier to quickly prototype ideas, however, which I agree with.
In such situations, I like to bring up Strongtalk and Shen w/ Sequent Calculus & Prolog. Both add typing to high-productivity, dynamic languages/environments.
A message veiled in a personal learning story to make it more palatable. I would not care much, but it scratches an itch.
We’ve got a slight tests-versus-type-checking debate going on between the lines. These subjects should not be lumped together, because you can also do Python without any such tools and get along just fine.
My opinion about types and type checking has changed as well, but it has grown in a very different direction from the poster’s. I have also had some sort of enlightenment along the way. I am a die-hard dynamic-typing proponent. I did not need a static type system, and I did not need tests either. The author had to work with something other than Python to form his opinion. I had to go in the other direction, deeper into Python and finally into my own improved variation of Python.
If type checking is sound and decidable, then it must be incomplete. I’ve realized this is really important, because the set of correct and useful programs that do not pass a type checker is large. Worse, these are often the most important sort of programs, the kind that spare a person from stupid or menial work.
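One standard example of this incompleteness (my illustration, not the commenter’s): a program whose result type depends on a runtime value. It runs fine in Python, but a Hindley-Milner-style checker must reject the call, because no single type can be assigned to the result.

```python
def apply_n(f, n, x):
    """Apply f to x, n times."""
    for _ in range(n):
        x = f(x)
    return x

# Each application wraps the value in another list, so the result's type
# depends on the runtime value of n: int, [int], [[int]], ...
# An ML-style checker requires f : a -> a, which rejects this call,
# yet the program is perfectly well-defined and useful.
assert apply_n(lambda t: [t], 3, 0) == [[[0]]]
```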
“If it compiles, it works” and “a type system makes unit tests unnecessary” are hogwash. It doesn’t really matter how much you repeat them or whether you clothe them in a learning story. There was a recent post pointing out how difficult it is to construct a proof that even a small program is actually correct. This means you cannot expect a program to work or be correct merely because it type-checks, in any language.
There is one thing required to make computation and logical reasoning possible in the first place: recognizing the variant and invariant parts of the program. I’ve realized that spamming variants is the real issue in modern dynamically typed languages. That cannot be solved by adding type annotations, because you still have tons of variables in your code that could theoretically change. And you really have to check whether they do; otherwise you have not verified that your program is correct.
Statically typed languages commonly do better at keeping the variance small, but they are also stupid in the sense that they introduce additional false invariants that you are required to satisfy in order to make type checking succeed. And you cannot evaluate the program before the type checker is satisfied. This is an arbitrary limitation, and I think people defending it for any reason are just dumb. A type checker shouldn’t be a straitjacket for your language. It should be a tool, required only when you’re going to formally verify or optimize something.
While working on software I’ve realized the best workflow is to make the software work first, then validate, then optimize. Languages like Python are good for the first purpose, some ML variants are good for the middle, and C and its kin are good for the optimization. So our programming languages have been designed orthogonally, cutting across the workflow that makes the most sense.
the set of correct and useful programs that do not pass a type checker is large
If it’s large then you should be able to give a few convincing examples.
I haven’t had the problem the quote implies. Basic type systems were about enforcing specific properties throughout the codebase and/or catching specific kinds of errors. They seem to do that fine in any well-designed language. When designers slip up, users notice, and avoiding whatever makes the protection scheme fail becomes a best practice.
Essentially, the type system blocks some of the most damaging kinds of errors so I can focus on the verification conditions and errors it cannot prevent. It reduces my mental burden by letting me track less stuff. One can design incredibly difficult type systems that try to do everything under the sun, and those can add as many problems as they subtract. That’s a different conversation, though.
This set includes programs that could be made to pass a type checker, if you put extra work into them or used a type checker specific to them. Otherwise the set is empty: for every program you can construct a variation where the parts that do not type-check are lifted outside the realm of the type checker. For example, stringly typed code.
The recipe for constructing a program that does not pass a type checker is to vary things that have to be invariant for the type checker. For example, if you have a function that loads a function, we cannot determine the type of the function that is produced. If the loaded function behaves like an ordinary function, you end up with a dilemma you can resolve either by giving it some weird different type that encodes the idea that you do not know the call signature, or by not type checking the program.
Analogous to the function example: if you define the creation of an abstract datatype as a program, then you also have a situation where the abstract datatype may exist, but you cannot type the program that creates it, and you will know the type information for the datatype only after the program has finished.
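A minimal version of the function-loading scenario might look like this in Python (my sketch of the situation being described): the call signature of the loaded object is simply not knowable until runtime, since both names could come from a config file or the network.

```python
import importlib

def load_callable(module_name, attr_name):
    """Fetch a callable by name at runtime. A static checker cannot
    know its signature, because the names are only data until now."""
    return getattr(importlib.import_module(module_name), attr_name)

sqrt = load_callable("math", "sqrt")
assert sqrt(9.0) == 3.0
```

A checker can only assign `sqrt` a type like `Any` or `Callable[..., Any]`, which is exactly the “weird different type that encodes not knowing the call signature”.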
And also consider this: when you write software, you are yourself making an effort to verify that it does what you want. People are not very formal, though, and the ways you find to convince yourself that the program works do not necessarily align with the way the type system thinks about your program. You are also likely to vary the ways you conclude that the thing works, because you are not restricted to just one way of thinking about code. This is also visible in type systems themselves, which can differ wildly from one another, such that the same form of the same program does not type-check in another type system.
I think in the future I’ll try to collect examples of these kinds of tricky situations. I am going to encounter them, because my newest language will have type inference and checking integrated into it, despite the language being very dynamic by nature.
There is some work involved in giving you proper examples, and I suspect people will have moved on to reading something else by the time I finish, but we’ll eventually return to this subject anyway.
Looking forward to seeing your examples, but until then we don’t have any way to evaluate your claims.
About your function-loading example: it may or may not be typeable, depending on the deserialisation mechanism. Again, I can’t really say without seeing an example.
When you write software, you are yourself making an effort to verify that it does what you want.
That’s exactly why I find type systems so useful. I’m doing the effort when writing the code either way; types give me a way to write down why it works. If I don’t write it down, I have to figure out why it works all over again every time I come back to the code.
A message veiled in a personal learning story to make it more palatable.
Why do you think this is a veiled message rather than an actual personal story?
If type checking is sound, and if it is decidable, then it must be incomplete.
Only if you assume that some large set of programs must be valuable. In my experience useful programs are constructive, based on human-friendly constructions, so we can get by with a much more restricted language than something Turing-complete.
If type checking is sound, and if it is decidable, then it must be incomplete.
That’s not a bug. That’s a feature.
If you can implement a particular code feature in a language subset that is restricted from Turing completeness, then you should. It makes the code less likely to have a security vulnerability or bug. (See LangSec)
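One way to read that advice in practice (my sketch, not taken from the LangSec literature): recognize untrusted input with a regular, non-Turing-complete grammar rather than an ad-hoc hand-rolled parser, so the recognizer provably terminates on every input.

```python
import re

# "key=value" config lines, recognized by a regular grammar.
# Regular expressions without backreferences always terminate,
# so this parser cannot be driven into unbounded computation.
LINE = re.compile(r"^([A-Za-z_][A-Za-z0-9_]*)=([^=]*)$")

def parse_line(line):
    m = LINE.match(line)
    if m is None:
        raise ValueError("malformed config line")
    return m.group(1), m.group(2)

assert parse_line("port=8080") == ("port", "8080")
```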
You forgot to point out that Python provides a traceback:
Traceback (most recent call last):
File "smalldemo.py", line 17, in <module>
main()
File "smalldemo.py", line 8, in main
bar(target)
File "smalldemo.py", line 11, in bar
foo(target)
File "smalldemo.py", line 14, in foo
value = target.a['b']['c']
KeyError: 'b'
And the varying error messages it gives are enough to pinpoint which of these operations failed.
It looks like a good exercise project, though. Someone took the time to work on the documentation, and it doesn’t entirely fail at explaining what this thing is doing. A good exercise especially for practicing how to explain the purpose of a thing. That can get tricky.
EDIT: I just realized that you were referring to the author’s statement about which dict get is causing the error. My statement below is not relevant to that.
The traceback is good, but if you are getting lots of deeply nested JSON documents, some fields might be present in one document and not in another within the same collection. So you end up in this loop: process a piece of the collection, hit an exception, stop and fix it. Repeat for a while until you think the code is stable. Then at some point in the future another piece of a new collection blows up. C’est la vie.
Trust me, no forgetfulness occurred here. If 'b' and 'c' were variables, which they commonly are, you wouldn’t know which one held the value that caused the KeyError. And furthermore, the example was more about the TypeErrors, such as the one raised when a dictionary is replaced by a list.
The traceback sheds no light on that. The only way to make the traceback useful is to split the operation across multiple lines, and that’s why such code ends up verbose and un-Pythonic.
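For concreteness, the verbose rewrite being alluded to looks something like this (hypothetical data; `target_a` stands in for the post’s `target.a`):

```python
target_a = {"b": {"c": 1}}       # stands in for the post's target.a
outer_key, inner_key = "b", "x"  # keys held in variables, as is typical

# On one line, target_a[outer_key][inner_key] raises KeyError: 'x',
# but the traceback cannot say *which* lookup the 'x' belongs to.
# Splitting the access makes the failing line identify the lookup:
try:
    level1 = target_a[outer_key]  # KeyError here: the outer lookup failed
    value = level1[inner_key]     # KeyError here: the inner lookup failed
except KeyError as exc:
    failed_key = exc.args[0]

assert failed_key == "x"
```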
It would be better to talk about tagged records instead of sum types, because then you would immediately understand what the subject is about.
I’m commenting because I think it’s interesting to point out. I plan to build a type system for my language that relies on conversions between types, and on tagged records. It won’t have typecases, though, because I found that the definition of a type ended up being very unstable. Also, type annotations have no real reason to feed back into the value space.
This thing feels genuine, but I can’t stop the feeling that something is missing.
The BBS interface. ;)
In 2010, Oracle sued Google. In 2012, District Court ruled API uncopyrightable. In 2014, Appeals Court ruled API copyrightable. Google petitioned Supreme Court, which denied the petition. In 2016, District Court, operating under the assumption that API is copyrightable, ruled Google’s use was fair use. In 2018, Appeals Court ruled Google’s use was not fair use. Now the case is back in District Court to determine the damage, operating under the assumption that API is copyrightable and Google’s use was not fair use.
Most people do not understand the significance of this decision, so it’s enough for Oracle to re-roll the dice until they get the answer they want.
Besides, I think the crowds inflate the significance of this. It’s almost as if somebody here unconditionally respected copyrights.
LCS is an acronym for the longest common subsequence problem. You may know it if you’ve studied how diff works, because it’s one way, though not the only way, to calculate the diff between two text files. The point of using it here is to keep the number of moves of indeterminates small while shuffling them into the same order. LCS reveals the longest sequence of indeterminates that are already in the same order.
I added this into the post as well.
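For readers who haven’t met it, a textbook dynamic-programming solution looks like this (a generic sketch, not the implementation the post uses):

```python
def lcs(a, b):
    """Longest common subsequence of two sequences, via dynamic programming."""
    # dp[i][j] = LCS length of a[:i] and b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    # Walk back through the table to recover one longest subsequence.
    out, i, j = [], len(a), len(b)
    while i and j:
        if a[i-1] == b[j-1]:
            out.append(a[i-1]); i -= 1; j -= 1
        elif dp[i-1][j] >= dp[i][j-1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

# Elements of the LCS are already in the same relative order in both
# inputs, so everything outside it is what has to move (or, for diff,
# what was inserted or deleted).
assert lcs("abcdef", "acf") == ["a", "c", "f"]
```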
Most of the points raised in the post are quite awful.
The things that the code doesn’t tell you, or that are hard to gather from the code, should go into the comments.
Whenever I think something will be even slightly surprising later on, I write a comment describing why it’s there and what it’s needed for. That has turned out to be useful because it lets you resume work on the code much faster than otherwise.
If you don’t write code that can be read, then you should learn. Don’t use comments as a crutch as that will fail.
A nice ideal, but why insist that code take on the job of conveying the full semantic intent of the algorithm it’s implementing? Including natural language comments can separate the concern of semantic communication from the concern of accurate and clear implementation.
I do not think I insisted on anything like that.
Readability is about the ability to understand the written code. There is no implication that the code should tell you anything more than what it inherently tells.
This is a problem with any score-keeping or “gamification” mechanism online.
Though I honestly wonder where people go to talk once reddit/lobsters has been explored.
When I joined reddit years ago I cared about my karma count so much that I would delete comments when they started to get downvoted, so I wouldn’t lose karma. But after hitting about 10k I stopped caring, and now I can say what I want and stand behind it even if it’s not a popular opinion.
Reminds me of the first episode of Black Mirror season 3. Only when you ignore scoring systems can you truly be free.
I had a friend visit a few days ago to learn programming. That made me finally realize what’s wrong with Quorum.
I was teaching the basics of Java: variables, types, conditionals, constants, methods. I showed my friend how to think about programming by writing a naive Fibonacci function from its definition, discussing properties of the resulting code and such. After a few minutes I prompted him to do the same exercise with the factorial function.
I told him not to worry about Java’s syntax and to write down the ideas first. The code he ended up writing was something like:
static public void factorial() {
if n = 0 then factorial = 1
if n = 1 then factorial = 1
else if n=2 then factorial * factorial - 1
}
Well… you can see some obvious BASIC influence there, along with influences from other languages commonly used to train people.
I then helped him by rewriting these concepts in actual Java, leaving the recursive part of the code for him to figure out on his own, again discussing what was going on.
The problem is that there is practically no human being on Earth who has not already learned some programming. If you ask for people’s opinions with A/B tests, they tend to pick the patterns that are familiar to them.
What’s familiar to today’s people is not going to be familiar to tomorrow’s.
It sounds like this page misuses the term “straw man”.
To “attack a straw man” is to refute an argument that was not made by anyone.
There are people who have used these arguments; they’re only strawmen in the sense that they have been disassociated from the people who made them.
To “attack a straw man” is to refute an argument that was not made by anyone.
While this might be a definition, it’s most certainly not the only one people think of when talking about “straw man” or “straw person” arguments. Another usage I have heard, and what I understand this page to imply, is to simplify or dumb down an opponent’s position, and then attack that easily attackable argument, thereby avoiding the actual issue. I believe that is what is being done here: they take points raised against Lisp and Lisp-like languages, and show that these either miss the point or don’t really make sense anyway.
But regardless of whether it’s a “misuse” of the term or not, I believe everyone on this site is bright enough to understand what is intended, whatever this or that proposed formal definition says, and to enjoy an article on its own merits rather than demanding total semantic correctness for each term, especially when it’s not crucial to the intent of the piece.
An update on how the performance characteristics of this thing changed after the JIT would have been interesting. The paper’s 11 years old now.
There’s a follow-up paper (2010) with the JIT implemented and reporting some benchmarks. I don’t think the project continued after that, although the lead author has continued research in related areas.
Oh darn, I didn’t realize he was also an author of a paper I was saving for next week when more readers are on. He’s doing some clever stuff. Interestingly, he also ported Squeak to Python while Squeak people made it easier to debug Python using Squeak’s tooling. Some odd, interesting connections happening there.
I hate this project. It had big names, big fanfare, big talks, big money grants; they were hiring people.
But very little that mattered was done in the end. We barely remember this project now. Nobody has cared about this thing in 15 years. It created absolutely nothing of value, and all the ideas presented had already been done better in the 1980s. It was kind of Bret Victorian in that sense.
And now it’s going to die. If they had just pushed on, something might have come out of this. Now it is guaranteed that all of it will be forgotten in a very short time.
If you’re going to shoot for the moon, could you at least aim upwards and put enough fuel in the tank?
I love all the saltiness. I wish GPU prices would rise a little more so we’d see more bickering.
I think the prices will keep rising, though. Crypto prices are rising, and it’ll be profitable to buy those cards off the shelves.
It’s hard to see how this could be bad, except for a few PC enthusiasts who end up paying a bit more for their hardware in the short term. GPU hardware also faces demand for improvement for general-purpose parallel workloads, and that cannot happen purely on the terms of cryptocurrency mining, because the demand mining creates can collapse without warning.
Good times ahead.
There are plenty of tutorials like this out there. Some are cross-platform; this one is Linux-only and omits the W^X restriction that’s expected these days. Most of these tutorials write hex bytes into a buffer and then call it.
I’m a bit tired of seeing clones of the same story. It’d be nice if people writing new tutorials would sometimes continue from someone else’s tutorial.
The post in question: Big-O: how code slows as data grows
The comment by ‘pyon’:
You should be ashamed of this post. How dare you mislead your readers? In amortized analysis, earlier cheap operations pay the cost of later expensive ones. By the time you need to perform an expensive operation, you will have performed enough cheap ones, so that the cost of the entire sequence of operations is bounded above by the sum of their amortized costs. To fix your list example: a sequence of cheap list inserts pays the cost of the expensive one that comes next.
If you discard the emotion, he gives a fairly interesting additional note about what amortized analysis means. Instead of valuing the information, Ned reacts to the part that questions his authority. What a brittle ego, to write a small novel’s worth of rhetoric instead of shrugging it off. Childish.
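For anyone who wants the substance of pyon’s note without the vitriol: the canonical example is a doubling dynamic array, where each expensive resize is paid for by the cheap appends that preceded it. A rough simulation (my sketch, not from either post):

```python
class GrowableArray:
    """Simulates a doubling dynamic array, counting element copies."""

    def __init__(self):
        self.capacity, self.length, self.copies = 1, 0, 0

    def append(self, item):
        if self.length == self.capacity:
            self.copies += self.length  # move every element to a bigger buffer
            self.capacity *= 2
        self.length += 1

arr = GrowableArray()
n = 1 << 16
for i in range(n):
    arr.append(i)

# Total copy work is 1 + 2 + 4 + ... + n/2 = n - 1, so n appends cost O(n):
# amortized O(1) per append, even though a single append that triggers a
# resize costs O(n) on its own.
assert arr.copies == n - 1
```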
If @pyon had just phrased the first part of the comment like “You’re making a number of simplifications regarding ‘amortization’ here that I believe are important…”, this would probably not have escalated. This is what Ned means by being toxic: being correct, and being a douche about it.
Indeed; the original article appeared on Lobsters and featured a thoughtful discussion on amortization.
I wonder whether better word choice without changing the meaning would help one step earlier: the original post did include «you may see the word “amortized” thrown around. That’s a fancy word for “average”», which sounds a bit dismissive towards the actual theory. Something like «Notions of ‘‘amortized’’ and ‘‘average’’ complexity are close enough for most applications» would sound much more friendly.
(And then the follow-up paints the previous post as if it was a decision to omit a detail, instead of a minor incorrectness in the text as written, which can be (maybe unconsciously) used to paint the situation as «correctness versus politeness», and then options get represented as if they were mutually exclusive)
I feel like that would have put the author in a more defensible position on this specific point, yes. Being clear about where additional nuance exists and where it doesn’t is something that anyone writing about technical subjects should strive for, simply because it’s useful to the reader.
I don’t think that clarification would have much of an effect on most readers, since the hypothetical reader who’s misled would have to study complexity theory for some years to get to the point where it’s relevant, and by that time they’ll probably have figured it out some other way. We should all be so lucky as to write things that need several years of study before their imperfections become clear. :)
But more to the point, while I can’t know anything about this specific commenter’s intent, somebody who’s determined to find fault can always do so. Nobody is perfect, and any piece of writing can be nit-picked.
Several years sounds like an upper bound for an eventually successful attempt. A couple of months can be enough to reach the point in a good algorithms textbook where this difference becomes relevant and clear (and I do not mean that someone would do nothing but read the textbook).
I would hope that the best-case effect on the readers could be a strong hint that there is something to go find in a textbook. If someone has just found out that big-O notation exists and liked how it allows to explain the practical difference between some algorithms, it is exactly the time to tell them «there is much more of this topic to learn».
These two posts together theoretically could — as a background to the things actually discussed in them — create an opposite impression, but hopefully it is just my view as a person who already knows the actual details and no newbie will actually get the feeling that the details of the theory are useless and not interesting.
As for finding something to nitpick: my question was whether the tone of the original paragraph could have turned it from «finding» into «noticing the obvious». And whether that tone changed the desire to post a «well, actually…» comment into the desire to complain, though probably nobody will ever know, not even the participants of the exchange.
Not having previous familiarity with this subject matter, I was guessing at how advanced the material was. :)
I agree about your best case, and that it’s worth trying for whenever we write.
I’ve never found anything that avoids the occasional “well, actually”, and not for want of trying. This is not an invitation to tell me how to; I think it’s best for everyone if we leave the topic there. :)
I consider a polite «well, actually» a positive outcome… (Anything starting with a personal attack is not that, of course)
It’s possible to share a fairly interesting additional note without also yelling at people. Regardless of what Pyon had to say, he was saying it in a very toxic manner. That’s also childish.
Correct. But I don’t just care about the emotion. I care about the message.
Instead of trying to turn the web into a safe haven of some kind, why not admire it in all its colors? Colors of mud and excrement among the colors of flowers and warmth, madness alongside clarity. You have very little power over whether people get angry or aggressive about petty things. But you can change a lot in yourself by not taking up everything that’s said. Teaching your community this skill is also pretty valuable in life overall.
I don’t want my community to be defined by anger and aggression. I want beginners to feel like they can openly ask questions without being raged or laughed at. I want people to be able to share their knowledge without being told they don’t deserve to program. I want things to be better than they currently are.
Maintaining a welcoming, respectful community is hard work and depends on every member being committed to it. Part of that hard work is calling out toxic behavior.
I want beginners to feel like they can openly ask questions without being raged or laughed at.
While I agree this is critically important, it’s not entirely fair to conflate “beginners asking questions” and “people writing authoritative blog posts”.
Yeah. That kind of self-regulation and dedication to finding signal in noise are endlessly rewarding traits worth practicing. And to extend your metaphor, we weed the garden because otherwise the weeds will choke out some of the flowers.
But I don’t just care about the emotion. I care about the message.
I’m with you unless the message includes clear harm. I’ll try to resist its effect on me, but I’ll advocate that such messages be removed. That commenter was being an asshole on top of delivering some useful information. Discouraging the personal attacks increases the number of people who will want to participate and share information. As Ned notes, such comment sections or forums also get more beginner-friendly. I’m always fine with a general rule of civility in comments, given such proven benefits.
Edit: While this is about a toxic @pyon comment, I think I should also point to a comment like the ones I’m advocating for: one that delivers great information without any attacks. pyon has delivered quite a lot of them in discussions on programming language theory. Here’s one on hypergraphs:
https://lobste.rs/s/cfugqa/modelling_data_with_hypergraphs#c_bovmhr
I personally always care about the emotion (as an individual, not as a site moderator), it’s an important component of any communication between humans. But I understand your perspective as well.
I may have been unclear. I do too. I was just looking at it from other commenters’ perspective: how I’d think if I didn’t care about the emotion but wanted good info and opportunities in the programming sphere. I’d still have to reduce harm/toxicity to other people with ground rules, to foster good discussion and bring more people in.
So, whether emotional or not, you still can’t discount the emotional effect of comments on others. We should still put some thought into that, with reducing personal attacks being among the easiest compromises, since they add nothing to discussions.
It’s ridiculous to say that someone who cannot ignore personal attacks has a brittle ego and is childish, while also defending personal attacks and vitriol as the thing we should celebrate about the internet. Rather, we should critique people for being assholes. The comment was critiquing the manner and tone in which he explained amortized analysis, but he’s not allowed to say that the comment’s manner and tone was bad? It’s ridiculous. The comment was bad, not because of the point it made, but because it made the point badly.
Compare this approach:
I believe this post simplifies the idea incorrectly. In amortized analysis, earlier (cheap) operations pay the cost of later (expensive) ones. When you need to perform an expensive operation, you will have performed enough cheap ones that the cost of the entire sequence of operations is bounded by the sum of their amortized costs. In the context of your list example, a sequence of cheap list inserts would pay the cost of the expensive one that comes next.
This is the same content, free of “shame” and accusations of “misleading.” The original comment is a perfect example of the terrible tone people take, as discussed in this post and in my previous post about Simon Peyton-Jones’ email.
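To make the amortized argument concrete, here is a minimal sketch in Python (the `total_copy_work` counter is an illustrative name made up for this example, not anything from the post): it counts the element writes done by a doubling dynamic array, showing how the occasional expensive resize is paid for by the cheap appends that came before it.

```python
# Sketch: cost accounting for n appends to a doubling dynamic array.
# Cheap appends "pre-pay" for the occasional expensive resize, so the
# total work stays linear in n (amortized O(1) per append).

def total_copy_work(n_appends):
    """Count element writes (new elements plus resize copies) for n appends."""
    capacity, size, work = 1, 0, 0
    for _ in range(n_appends):
        if size == capacity:      # expensive operation: copy every element
            work += size
            capacity *= 2
        work += 1                 # cheap operation: write the new element
        size += 1
    return work

# Total work is bounded by 3n: n element writes plus fewer than 2n resize copies.
for n in (10, 1000, 100_000):
    print(n, total_copy_work(n), 3 * n)
```

Running it shows the total work hugging the linear bound even though individual resizes keep getting more expensive, which is exactly the sequence-of-operations view the corrected wording describes.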
Instead of giving the information value, Ned reacts to the part that questions his authority.
The author does give it value. You’ve missed the point. The author isn’t saying it’s incorrect or not valuable; he’s saying that this attitude from experts (who use their expertise as a tool to put others down) is highly toxic.
If you discard the emotion, he gives a fairly interesting additional note about what amortized analysis means. Instead of giving the information value, Ned reacts to the part that questions his authority.
It’s not clear that Ned interprets pyon as questioning his authority. His criticism is of pyon’s tone, which is histrionic. The comment isn’t bad if we discard the cutting intro; but what is the effect if we include it? It would be more balanced for Ned to discuss the details and value of pyon’s post, but that does not invalidate Ned’s point.
I sensed that the author was pushing an agenda of his own, so I went to see what’s going on.
This is actually post-rationalization, because although it gives a good rationale, it is not really what’s going on. What is actually going on is that the modulus and division are connected.
The way they are connected can be described as:

`a = (a / b) * b + a % b`

Division gives a different result depending on rounding. The C99 spec says that the rounding goes toward zero, but there have also been floor-division implementations, and systems where you can choose the rounding mode.
With floor division, `19 / -12` gives you `-2`; that is consistent when the modulo operator gives you `-5`. With round-toward-zero division, `19 / -12` gives `-1`, so `19 % -12` must give you `7`. When both operands are positive, rounding toward zero and floor rounding give the same results.
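To illustrate, here is a small sketch in Python (`div_floor` and `div_trunc` are names made up for the two conventions): both satisfy the identity `a = q*b + r`, yet they disagree on `19 / -12` exactly as described above.

```python
# Two integer-division conventions, both satisfying a == q*b + r.

def div_floor(a, b):
    """Quotient rounded toward negative infinity (floor division)."""
    q = a // b                # Python's // already floors
    return q, a - q * b

def div_trunc(a, b):
    """Quotient rounded toward zero (C99-style division)."""
    q = abs(a) // abs(b)
    if (a < 0) != (b < 0):    # signs differ: negate the magnitude
        q = -q
    return q, a - q * b

print(div_floor(19, -12))     # (-2, -5)
print(div_trunc(19, -12))     # (-1, 7)

# The identity holds for both conventions.
for q, r in (div_floor(19, -12), div_trunc(19, -12)):
    assert 19 == q * -12 + r
```

Note that when both operands are positive the two helpers agree, which is why the ambiguity only bites with mixed signs.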
I also checked the x86 spec. It’s super confusing about this, but despite what the online debugger I tried seemed to show, the idiv instruction actually truncates toward zero.
Forgive my extreme mathematical naivety, but `a = (a / b) * b + a % b` doesn’t make much sense to me. Given that `(a / b) * b` will always equal `a`, doesn’t this imply that `a % b` is always 0?

`/` in this context is integer division, not rational division, so e.g. `7 / 3 = 2`. The division operator in this case is not division in the algebraic sense, and it does not cancel with the multiplication such that `a / b * b = a` (for `b != 0`). Otherwise your reasoning would be correct.

To still make this super clear, let’s look at
`19 / -12`. The real-number division of this would give you `-1.58...`. But we actually have either division rounded toward negative infinity (floor division) or division rounded toward zero, and it’s not always clear which one it is. Floor division returns `-2`, and division rounding toward zero returns `-1`.

The modulus is connected by the rule I gave earlier. Therefore `19 = q * -12 + (19 % -12)`. If you plug in `-2` for `q` here, you get `-5 = 19 % -12`, but if you plug in `-1`, you get `7 = 19 % -12`.

Whatever intuition there was gets lost in the constraint of sticking to integers or approximate numbers. It’s therefore preferable to treat the modulus as if it were connected with floor division, because the floor-division modulus carries more information than the truncated remainder. But this is not true on every system, because hardware and language designers are fallible just like everybody else.
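For what it’s worth, both conventions can be checked directly from a Python prompt, since Python’s `//` and `%` follow the floor rule while `math.trunc` and `math.fmod` give the C-style truncating pair (a sketch, using the same numbers as above):

```python
import math

a, b = 19, -12

# Floor convention: Python's built-in integer operators.
q_floor, r_floor = a // b, a % b               # -2 and -5

# Truncating convention: C99-style quotient and matching remainder.
q_trunc = math.trunc(a / b)                    # -1
r_trunc = int(math.fmod(a, b))                 # 7

# Both pairs satisfy the identity a == q*b + r.
assert a == q_floor * b + r_floor
assert a == q_trunc * b + r_trunc
print((q_floor, r_floor), (q_trunc, r_trunc))
```

This is also a quick way to see the cross-language difference: `19 % -12` is `-5` in Python but `7` in C.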