if the Great And Wise Zed Shaw can’t figure it out in ten seconds then it must just be impossible.
Surprise! This is not new; he’s been “criticizing” OOP in exactly the same fashion for years. The only part I don’t understand is why the Python community supported him for so long.
Because we are a welcoming bunch ;)
You could’ve fooled me: https://www.reddit.com/r/Python/comments/5efe3t/the_case_against_python_3/
While the criticism of the Zed Shaw article in the reddit thread is harsh, I don’t think it is unjustifiably harsh. Eevee’s criticism in the blog post that this thread links to is also harsh. Does that mean that lobste.rs isn’t a welcoming community? I tend to think not; we just value good arguments backed up by valid reasoning.
Does that mean that lobste.rs isn’t a welcoming community?
The comment with the most votes in the original thread on lobste.rs is extremely unfriendly and is backed up by exactly zero good arguments other than “it’s stupid.”
I’m not sure why, but in the last few months, lobste.rs has grown a higher concentration of short snarky comments that are nearly content free. That doesn’t feel especially friendly to me.
Do you have any other examples of this trend? IMO, @tedu’s comment that you are referring to is about something I would expect most lobsters to find pretty obvious, and the upvotes are a sign of agreement. There aren’t, for example, 40 more comments with meme links.
You can start with tedu’s comment history. I don’t know what meme links, obviousness or agreement have to do with this.
Can you give explicit examples? It is your claim, after all. It’s not my place to go out and find your evidence.
For the memes, what I mean is that one reason I do not like reddit is that productive conversations are drowned in a sea of meme posts. Lobste.rs, at least, has not reached that point.
Meme posts are not the only kinds of low value comments.
I’m on vacation. I don’t feel like digging through senseless comments to win an Internet argument. You can start with tedu’s history if you are so inclined.
After ten years of internet forums, I’ve come to accept that it’s human nature. People will naturally prefer short, amusing posts in an internet forum they visit outside of work or study. The reason to prefer Lobsters is not that those posts will never happen, but that, at least for now, they are rare. There’s no way to get people to always post well thought-out arguments with sources.
What apy is referring to is what happens on Reddit and Hacker News.
I’ve been on Internet forums a lot longer than that. I grew up on them. I understand it’s human nature. But that’s not going to stop me from demanding/hoping for better.
That submission has more downvotes than upvotes. What does that tell you about the biggest Python subreddit’s ability to deal with criticism?
“python 3 sux lol” is not really criticism. if I were part of the python subreddit I’d have downvoted it too.
even if it was pure trolling, it was clunkily done and lacked entertainment value (unlike, say, this brilliant troll that even guido played along with)
“python 3 sux lol” is not really criticism
Have you read the article in question?
i have, and from the perspective of someone who has actually used python2 and python3 casually (so definitely neither a power user nor overly invested in defending it), I found it had a few good points buried in a mass of ranting that tried to make things sound worse than they actually were.
i mostly agree with this reddit comment https://www.reddit.com/r/Python/comments/5efe3t/the_case_against_python_3/dac7fmn/
The guy tried to claim Python 3 isn’t Turing complete because nobody has implemented a Python 2 interpreter in it, ffs. That’s hardly criticism worthy of consideration.
It’s a reductio ad absurdum: Python core devs claim that the Python 3 VM can’t run Python 2 bytecode (which would make interoperability easier), and the only way that could literally be true is if Python 3 weren’t Turing complete.
You’ll get all the jokes and rhetorical devices once you stop assuming that the author is a moron.
In : print "Hi, my name is Łukasz Langa."[::-1]
.agnaL zsaku�� si eman ym ,iH
“Good luck figuring out how to fix that.” … wat?
In : print u"Hi, my name is Łukasz Langa."[::-1]
.agnaL zsakuŁ si eman ym ,iH
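(For anyone following along, a small sketch of why the first version breaks: in Python 2, an unprefixed string literal is a byte string, so `[::-1]` reverses the raw UTF-8 bytes and splits the two-byte encoding of “Ł” into invalid garbage. In Python 3 the default `str` is unicode, so the same slice just works:)

```python
# Python 3: str is a sequence of unicode code points,
# so slicing with a negative step reverses characters, not bytes.
s = "Hi, my name is Łukasz Langa."
print(s[::-1])  # → .agnaL zsakuŁ si eman ym ,iH

# The Python 2 failure mode, reproduced explicitly with bytes:
# reversing the UTF-8 bytes splits the 2-byte sequence for "Ł"
# (0xC5 0x81) into 0x81 0xC5, which is not valid UTF-8.
b = s.encode("utf-8")
print(b[::-1].decode("utf-8", errors="replace"))  # shows � replacement chars
```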
You only “cannot” run Python 2 inside the Python 3 VM because no one has written a Python 2 interpreter in Python 3. The “cannot” is not a mathematical impossibility; it’s a simple matter of the code not having been written. Or perhaps it has, but no one cares anyway, because it would be comically and unusably slow.
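(To make the “simple matter of code not having been written” point concrete, here’s a toy interpreter for Brainfuck, a famously tiny Turing-complete language, written in a few lines of Python. A Python 2 interpreter is the same kind of artifact, just enormously more work; nothing about the host VM forbids it. This is my own sketch, not anything from the article.)

```python
def run_bf(prog, inp=""):
    """Interpret a Brainfuck program; return its output as a string."""
    # Precompute matching bracket positions for the loop constructs.
    jumps, stack = {}, []
    for i, c in enumerate(prog):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i

    tape = [0] * 30000      # the classic fixed-size tape
    ptr = pc = ii = 0
    out = []
    while pc < len(prog):
        c = prog[pc]
        if c == ">":   ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == ",":
            tape[ptr] = ord(inp[ii]) if ii < len(inp) else 0
            ii += 1
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return "".join(out)

print(run_bf("+" * 65 + "."))  # → A  (65 increments, then output as a char)
```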
More critically, Turing machines, the untyped lambda calculus, or <insert theoretical model of general-purpose computation here> also can’t interpret Python 2 or any other realistic programming language, even in principle, in spite of being Turing-complete. What went wrong?
Turing-completeness is just about the ability to compute all partial computable functions. Programming languages do a lot more than that for you.
<insert theoretical model of general-purpose computation here> also can’t interpret Python 2 or any other realistic programming language, even in principle
This absolutely isn’t true.
(elsewhere) Are you implicitly assuming no I/O, no concurrency
There exist general purpose mathematical formalisms for both of these.
Do what now? A Turing machine could interpret any programming language devised by mankind…just really really slowly and also probably by emulating the entire computer underneath it.
No computer we could ever build could completely emulate a Turing machine, though, because Turing machines by definition have infinite memory.
(The same goes for the Lambda Calculus, which is entirely equivalent to a Turing machine; either can perfectly emulate the other.)
Pretty much nothing we build ever emulates any of the idealized constructs we made up to let us reason about things.
A real computer has weird properties, like a non-zero chance of randomly dying because somebody pulled the plug trying to charge their smartphone. I have read some computer science books and none of them ever mentions this peculiar property.
A Turing machine could interpret any programming language devised by mankind…
Are you implicitly assuming no I/O, no concurrency and no computational complexity constraints in the specification of a programming language? There’s a reason why Turing machines are used a lot less in complexity theory than in computability theory.
You said a Turing machine couldn’t emulate Python 2 in principle. In principle, it absolutely could, even going so far as simulating an entire multi-core x86_64 computer with monitor and keyboard, running Linux, running Python 2. Such a machine would be unimaginably large and would take an unimaginable amount of time to run even one instruction, but it’s certainly possible. I was objecting to the idea that a Turing machine somehow couldn’t emulate Python, which is patently false.
It’s not about time. It’s about things like “paint this pixel on a computer screen”, “print this text on a piece of paper” or “wait until the user has entered a line of text and pressed Enter”. You’d have to “creatively redefine” the terms “computer screen”, “paper” and even “user” if you want to implement real-world programming languages on top of mathematical models of computation (only).
I’m not so sure. In a real computer, things like painting a pixel to the screen, sending data to a printer, or waiting for keyboard input are done by reading and writing certain sections of memory that the hardware can also read and write. There’s no reason a Turing machine couldn’t use its memory the same way for communication with external devices.
A Turing machine, which lives in the abstract world of mathematics, obviously cannot physically drive a monitor, or send packets over a network. You have to mathematically model any I/O devices you wish to communicate with. You don’t have to “creatively redefine” anything, you just have to formalize the details of how the desired I/O devices work. You can find the details in the specification/reference manual for the real world device in question.
One way to formalize I/O with a Turing machine is to have an extra tape that you write to when you want to communicate with an I/O device. The I/O device will read from this tape and write to it later. This is essentially how real computers work. The CPU in your computer cannot “paint pixels on a computer screen” any more than a Turing machine can, and I think we can all agree a CPU can interpret Python 2. Instead, you perform I/O by writing to particular addresses in memory, with the assumption that there exists an I/O device that can read those locations, interpret the data you have written there, and perform the desired physical action.
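(The extra-tape/memory-mapped idea can be sketched as a toy model. All the names here are made up for illustration, and the “device” is just a Python function standing in for real hardware: the program’s side performs only memory writes, and a separate device loop turns writes at one designated address into visible output.)

```python
# Toy model of memory-mapped output. The "program" never does I/O;
# it only writes to cells of a shared memory/tape. A separate "device"
# watches two designated cells and performs the physical action.

OUT_ADDR = 100    # cell the device reads a byte from
FLAG_ADDR = 101   # program sets this to 1 to signal "new byte ready"

memory = [0] * 128
screen = []       # stands in for the physical display

def device_tick():
    """One step of the I/O device: consume a byte if one is ready."""
    if memory[FLAG_ADDR] == 1:
        screen.append(chr(memory[OUT_ADDR]))
        memory[FLAG_ADDR] = 0   # acknowledge, so the program may write again

def emit(byte):
    """The program's side: pure memory writes, nothing else."""
    memory[OUT_ADDR] = byte
    memory[FLAG_ADDR] = 1
    device_tick()   # in real hardware the device runs concurrently

for b in b"hi":
    emit(b)

print("".join(screen))  # → hi  (the device has "painted" the text)
```

The point of the sketch is that the boundary between “computation” and “I/O” is exactly one agreed-upon memory location, which is all a Turing machine’s extra tape amounts to.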
When a programming language specification says “prints a character to the screen”, it means “writes to standard output” or something similar. No programming language means it so literally as to preclude VNC or output redirection.
At any rate, all that ultimately means is that a certain memory address is written to, one that the hardware has defined as the display buffer or whatever. If you define your Turing machine to treat writes to certain locations on the tape as “writes to the screen”, then you can again simulate Python perfectly.
Saying that a Turing machine doesn’t have a “real monitor” and therefore can’t simulate Python is like saying running Python on a computer without a graphics card isn’t really running Python.