TLDR: I don’t like smartphones because they are not PCs.
This comment was brought to you using a smartphone.
But I’m sure you also own a PC for all the real work, right? The problem the author points out is that fewer and fewer people even own PCs anymore, since there are fewer and fewer primary incentives to do so. Even kids nowadays mostly spend their money on expensive smartphones, so there’s often no money left for a computer.
Later on, when a kid might come up with an app idea, which can be a secondary incentive, there won’t be a PC to work on it. It may sound funny, but this is a real problem, and it will be devastating.
[Comment removed by author]
I totally agree with you - people who use an iPad today are unlikely to have been the type who used the PC as a creation device.
What we are slowly losing is the malleability of a PC in the house. I would bet that there are many adults in comp-sci who started off tinkering with the home PC that was probably bought to help the family do taxes or write school reports. It is becoming harder and harder to come across that kind of opportunity today.
The implicit assertion here is that PCs will remain the only viable way to make things. Something like TouchDevelop is still toy-ish, but I’ve been able to make little games and apps while sitting at bars. I think what we have so far is primitive compared to the possibilities; there’s still so much to explore.
But I’m sure you also own a PC for all the real work, right?
More and more people are shifting entirely to tablets and phones for “real work” – or more specifically – all their work. I have seen this first hand: a friend of mine has been living without a “classic computer” since the iPad Pro release. He does 100% of his work on his iPhone and iPad Pro and claims he is more productive than ever. Companies are doing this as well – as iPads are easier to maintain.
I suspect (for better or worse), the general purpose programmable computer will be a specialized tool used by engineers and will fall out of the consumer space in the next couple decades. Developers will rage about it – and it won’t matter. Just like when people raged about the inability to repair their own cars due to growing computer control and complexity – and it didn’t matter.
General purpose computers have not been an unqualified success for non-technical users; is it any surprise that people drowning in a foetid sea of viruses, malware, Windows, MDI, the OS X Finder, et al would grab hold of the first lifeline that allowed them to simply get on with their technologically mediated lives?
More and more people are shifting entirely to tablets and phones for “real work” – or more specifically – all their work.
I don’t really buy it, because the tablet market is stagnating - sales are shrinking. Admittedly, the PC market also hasn’t been great, but in contrast to tablets, many six-year-old Windows 7 PCs can still run current software fine. So, there is less incentive to buy a new PC every three years.
He does 100% of his work on his iPhone and has iPad Pro and claims he is more productive than ever.
As long as we don’t have statistics over a large population, this is just an anecdote. Of course, there will be some people who use just tablets.
I think general purpose computers aren’t dying yet, because (1) people keep around and use their old PCs; and (2) cheap Windows laptops are approximately in the same price bracket as usable tablets or Chromebooks. I do agree that usage patterns have changed a lot, moving from local applications to cloud applications. So, there could be a rapid change from general purpose computers with a keyboard to computers that only have a browser (and a keyboard).
I am of two minds about this. For the general population, computing will be safer. Family incidents like malware, lost files, viruses, etc. will be fewer. But it’s indeed also harder for someone who would like to hack on their system to do so.
[1] http://www.dailytech.com/Its+Official+the+Tablet+Market+is+Stagnant/article37123.htm
If you haven’t read this yet, take the time to do so. The AI portion of the article is interesting, but the walk through of the NES version’s game assembly (and ensuing bugs) was downright fascinating.
What I’d do if I really wanted to turn off syntax highlighting is to highlight only string literals and comments, to separate code from non-code. Which is, in the end, what he did anyway.
Rob Pike (I think?) wrote a bit about how syntax highlighting traditionally only highlights the obvious, rather than little things like the difference between = and ==.
I’m thinking that syntax highlighting would help more if it were context-sensitive. As in, it understood the content of your code, sort of like another pivot on Hungarian notation.
E.g., smart pointers are green, raw pointers are pink. Variables containing user input are red, sanitized input is purple. That sort of thing.
You could flip through different color views to see different semantics of your code. In this function, which of these things are const and non-const? Which of these things are declared on the stack, or on the heap?
I think that wouldn’t be the job of the highlighter per se - more like a linter (or any static analysis tool, really) giving feedback to the editor/IDE, which then highlights accordingly. It’s already what most editors and IDEs are capable of: my methods are not highlighted the same as builtin methods, and if I use an undeclared variable or something from a module I forgot to import, it’s also highlighted to show me my mistake. (At least that’s what I get using IDEs like IntelliJ for Java or Scala, and editors such as Sublime or Atom with linters for Elixir, JavaScript, … - I can’t say for languages I don’t use.)
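The linter-feeds-the-editor idea above can be sketched in a few lines. This is a hypothetical toy, not any real editor’s implementation: it uses Python’s ast module to split names in a snippet into classes a highlighter could color differently (imported/assigned vs. undeclared), much like the “undeclared variable” highlighting described above. The snippet and its typo (aera) are made up for illustration.

```python
import ast
import builtins

# A made-up snippet to analyze; "aera" is a deliberate typo.
source = """
import math
radius = 2
area = math.pi * radius ** 2
print(aera)
"""

tree = ast.parse(source)

# Pass 1: collect names the snippet defines (plus Python builtins).
# A real linter tracks scopes, function args, etc.; this toy does not.
defined = set(dir(builtins))
for node in ast.walk(tree):
    if isinstance(node, ast.Import):
        defined.update(alias.asname or alias.name for alias in node.names)
    elif isinstance(node, ast.Assign):
        for target in node.targets:
            if isinstance(target, ast.Name):
                defined.add(target.id)
    elif isinstance(node, ast.FunctionDef):
        defined.add(node.name)

# Pass 2: any name that is read but never defined is "undeclared" -
# exactly the class of token an editor would paint as a mistake.
undeclared = sorted(
    node.id
    for node in ast.walk(tree)
    if isinstance(node, ast.Name)
    and isinstance(node.ctx, ast.Load)
    and node.id not in defined
)
print(undeclared)  # → ['aera']
```

The editor itself never needs to understand the language; it just maps the linter’s token classes to colors, which is why the same mechanism extends to the const/heap/tainted-input views suggested upthread.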