I used to work in “academic technology” (sort of IT meets education) at a university, and I definitely agree with the author that anyone who claims that “kids today” are a bunch of computer geniuses is badly mistaken.
The mistake, I believe, is one of confusing frequency of use with expertise. Young people tend to use mobile devices and computers regularly, and for more “things” than older people (due in part, perhaps, to device convergence). However, they largely fail to grasp even basic abstractions like files, or the difference between a web browser and the web site currently displayed.
Their computer use also tends to focus almost exclusively on communication and media consumption. They know how to open a web browser and get to Facebook and Netflix. But many have never used a computer to create anything more complicated than a poorly formatted Word document. Many others have used Photoshop or something else in the same vein, but only by rote, following specific instructions to generate specific output.
Their computers are constantly “slowing down” (becoming infected with malware) and they don’t understand why. This is actually a massive impediment to BYOD policies at universities and to online learning in general: you can’t assume that arbitrary young people can keep a computer working properly for an entire semester. (This is, perhaps, more of a sad commentary on the state of the web and the commercial software ecosystem, but that’s another comment.)
I disagree with the author, however, that things have gotten worse. The problem as I see it is that things haven’t gotten any better. When I was a kid, I was basically the only person I knew who knew more about computers than how to get to the games. In fact, for a good part of my childhood, I was the only person I knew who had a computer at home.
Today, some small fraction of children have a Raspberry Pi at home, are learning to program, and actually end up with some grasp of how things work. So from where I’m sitting, it looks a lot like the situation hasn’t changed much in 30+ years.
This is kind of sad from my perspective (I find computing and all its possibilities to be fascinating), but does it really matter? Do we need everyone to know how computers work? Should everyone be a programmer? Would anything change for the better if they were? What would we sacrifice in getting to that point?
People used telephones for decades with almost no knowledge of how they worked. Not everyone had to become a phreaker for phones to change the world. But it’s not a perfect analogy: phones aren’t as powerful, and phreaking was always more or less illegal, or at least unsavory.
So I guess I’m just not sure.
I got lost at “anymore”. This is comparing a few thousand children from the 80s to a few tens of millions of children today (worldwide; I just made these numbers up, but I think they’re sufficiently accurate guesses for this purpose). It’s hard to take seriously any conclusion based on such a comparison, especially since it’s all anecdotes.
Yeah, in order to get a relatively accurate sample of, say, 50 million, you’d need a random sample of something like 5,000 students. I used a calculator and then forgot all my numbers, but yeah, I’m pretty sure his sample was nowhere near random.
Yeah, point taken; I shouldn’t really have put numbers in when they weren’t going to mean anything. I wanted to convey that he’s comparing “kids these days” to “everyone I know at work”; I’m sure there were better ways to make the point.
I was actually just taking your numbers literally to see what the results would be, as an aid to gut feel; I wasn’t trying to discredit your statement. It’s around 5,000 for ±4% on 50 million. Obviously his set would be biased by his region, his age range, socioeconomic status, etc. I was merely saying there’s no way he had a random sample, given some conservative estimates.
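(For anyone who wants to redo the back-of-envelope: below is a minimal sketch of the standard sample-size formula for estimating a proportion, with a finite-population correction. The 95% confidence level and worst-case p = 0.5 are my assumptions, not anything from the thread, and the n you get moves around a lot with the margin and confidence level you choose.)

    import math

    def sample_size(margin, population, z=1.96, p=0.5):
        """Random-sample size needed to estimate a proportion.

        margin     -- desired margin of error (0.04 means +/-4%)
        population -- total population size N
        z          -- z-score for the confidence level (1.96 ~ 95%)
        p          -- assumed true proportion; 0.5 is the worst case
        """
        # Infinite-population formula: n = z^2 * p * (1 - p) / E^2
        n = (z ** 2) * p * (1 - p) / (margin ** 2)
        # Finite-population correction (nearly a no-op when N is huge)
        return math.ceil(n / (1 + (n - 1) / population))

    N = 50_000_000  # the made-up "tens of millions of kids" figure from above
    for margin in (0.04, 0.014):
        print(f"+/-{margin:.1%} at 95% confidence: n = {sample_size(margin, N)}")
    # +/-4.0% at 95% confidence: n = 601
    # +/-1.4% at 95% confidence: n = 4900

Whatever margin you pick, the required n is tiny next to N = 50 million; the hard part was never the sample size, it’s getting a sample that’s anywhere near random.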
Ah! Okay. I mean, that’s legitimate analysis and interesting to me because I wasn’t sure how to compute such a thing.
I’m glad you weren’t annoyed by it this time, but I do try to catch myself when I’m tempted to sound more credible by using rhetorical devices like “let’s assume these numbers…”; it’s a bad habit. In general, I prefer to make a weaker statement that I can actually support.
In this case, it ought to be obvious even without numbers that the author didn’t support any of his claims. :)