I feel resource constraints are gone now. People built apps to send text back and forth that use 1.5GB of memory with a constantly active CPU.
I hate it.
I do not think resource constraints are gone, but I do believe there is a growing separation between software that is resource-aware and software that is not.

Right now my system monitor is reporting the memory usage of the five programs I am using (at the moment) as:

chromium-browser : 431.7 MB (approximately 30 tabs open, most of which are inactive)
firefox : 352.3 MB (one tab open, active)
spotify : 324.6 MB
mate-terminal : 45 MB
vim : 12.1 MB
Comparing Vim to Firefox is like comparing apples to oranges, and I think it would be ridiculous to say that Firefox requiring ~30x as much memory makes it a ~30x worse application than Vim. But I do feel that the divide between these two is much greater than the divide between a web browser and a text editor was twenty years ago. I don’t know whether that is a good thing or a bad thing overall, or if it really even matters in the grand scheme of things, but it does feel different.
The Firefox vs Vim comparison might be a bit unfair, given how people have basically made the browser an operating system, so the browser somehow needs to provide everything. But then compare Spotify (a media player, only capable of playing one song at a time and playlists, plus some browsable database) with a player with a local library or an MPD client, and you realize what a mess it is. But of course the times when you could build your own FOSS clone of ICQ or Spotify are mostly gone.
I think your comparison of Spotify vs $OTHER_MEDIA_PLAYERS is much better than my Firefox vs Vim comparison.

I understand where you are coming from when you say that browsers were basically made an operating system, but I do think it is important to note that current browsers offer significantly less functionality than the OSs I use on a daily basis at the cost of equal or greater memory usage. It doesn’t seem like you are trying to hold a particularly strong position that BROWSER == OS, but if such a position were strongly asserted I would contend that a noticeable gap in functionality vs memory footprint exists. That’s just my opinion though.

I think modern browsers offer functionality roughly comparable to minimalistic OSes, except resources are very virtual and sandboxed.
So, we’d compare Firefox to Windows or Linux when they’re idle. Back when I checked that, the Ubuntu builds were around the same size in memory. So, that fits.
I just posted something in a web editor and it has ~500ms input lag (rough estimate; certainly far too much to be comfortable). Resource constraints are far from gone, people just accept stuff like this for some reason 🤷♂️
Agreed. What has increased every year is our tolerance for broken stuff (and requiring a ludicrous amount of resources is one such example).
The only thing that changed is that people managed to stack the pile of shit that is software development even higher. I can’t wait until that tower comes tumbling down.
Recommended talk.
Software is immutable. Why must the tower crumble?
Main thing is that safe, portable software now runs fast enough to be a good default. Twenty years ago, just having bounds checks and GC made apps unbearably slow; today it only does so for performance-critical apps. Fortunately, the tech for finding or proving the absence of problems in unsafe code is also extremely good compared to then.
I’m not sure about that. 20 years ago we also had Java, C#, Python, and PHP. Their roles and perceived performance haven’t changed that much. Even though their implementations and the hardware they run on have improved dramatically, we now expect more from them, and we’re throwing more code at them.
All of those language runtimes have seen dramatic performance improvements, as has the hardware available to run them. In 2000, writing a 3D game in C# would’ve been insane; today it’s just Unity.
At one point, Java was about 10-15x slower than C or C++ apps. The well-written apps in native code were always faster, with fewer resources, than the others you mentioned. That’s both load and run time. I always noticed a difference on both a 200MHz PII w/ 64MB of RAM at home and the 400MHz PIII w/ 128MB RAM at another place. Hell, going from just Visual Studio 6 to .NET bogged things down on that PIII. Visual Basic 6 was about as fast as REPL development, with everything taking about a second.
We do have a trend where faster hardware makes even slower software seem comparable or faster. The responsiveness of native apps was usually better unless they were of the bloated variety. Modern apps are often slower in general while using more resources, too. If they ran like old ones, I could buy a much cheaper computer with fewer resources. Good news is, besides Firefox and VLC, I have plenty of lightweight apps to choose from on Linux that make my computer run snappy like it did 10-20 years ago. :)
A conjecture: people upgrade to the latest hardware to get there BEFORE the devs do. Then soon, because the devs have also upgraded, they write FOR these new machines. From a game-theoretic standpoint, the CPU vendors should give the Carmacks of the world the fastest systems they can muster.
This was Kay’s idea, but processing hardware doesn’t improve at a fast enough rate to justify it anymore. Unless your project is mired deep in core development for a decade, customers won’t have a twice-as-fast machine by the time it’s released.
Yeah, we’re sort of in a golden age wherein, when it comes to bloat, the easiest way for a developer to avoid it does not involve avoiding useful language features like garbage collection, but simply avoiding being sloppy with time complexity & space complexity.
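For a concrete example of the kind of sloppiness meant here, a minimal Python sketch; the data and names are made up purely for illustration:

    # Made-up data, just to make the example runnable.
    big_list = list(range(20_000))
    queries = list(range(0, 40_000, 7))

    # Sloppy: every membership test scans the whole list, so the loop is
    # roughly O(len(queries) * len(big_list)).
    hits_slow = [q for q in queries if q in big_list]

    # Paying attention to complexity: build a set once, then each lookup is
    # O(1) on average, so the whole thing is roughly linear overall.
    big_set = set(big_list)
    hits_fast = [q for q in queries if q in big_set]

    assert hits_slow == hits_fast  # same result, very different scaling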
I get the impression that a lot of that is actually due to improvements in compiler tech (especially JITs) and GC design, rather than hardware. The division between scripting language, bytecode language, and compiled language is fuzzier because most scripting languages get bytecode-compiled and most bytecode languages get JIT-compiled, so you can take arbitrarily high-level code and throw out a lot of its runtime overhead, basically making high-level code run more like low-level code. And when you do that, you can write higher-level code in your actual language implementation too, which can make it easier to write complicated optimizations and such.
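To make the bytecode-compilation point concrete, a minimal sketch assuming CPython (the exact instructions vary by interpreter version):

    import dis

    def add_all(items):
        total = 0
        for x in items:
            total += x
        return total

    # CPython compiles the function body to bytecode before running it;
    # dis.dis prints those instructions, i.e. the kind of intermediate form
    # that JIT-based runtimes then compile further to machine code.
    dis.dis(add_all)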
I’m not really familiar with what specific optimizations might have been introduced, though, and all I know about the advances in GC tech is that even people who think they know about GC tech are apparently generally 20 years out of date on it…
It’s hard to imagine “python for data science” in 1999. It’s even harder to imagine something like Julia in 1999 – a high-level high-performance garbage-collected language with strong implicit types and a REPL, intended for distributed statistical computing. It’s not that such things were impossible in ’99, but it was very much limited to weird academic projects in lisp / forth / smalltalk / whatever.
XML is dead, the millennials’ best kill so far.
If you look at stuff like Chrome devtools, or even the jetbrains tooling for debugging, I feel like writing software and debugging it has become a lot more accessible.
I remember struggling a lot as a kid to figure out how to get even the most basic of toolchains running. Nowadays things have “real documentation” (not always, of course), and there’s some hope of using a tool without having to read its entire manual front-to-back.
Some people consider that a negative. Personally, I’m not interested in doing a first-principles study of every CLI tool I use.
I started programming on the MSX in my early teens. The MSX is roughly similar to the C64 or BBC Micro: you turn it on, you’re dropped in a BASIC environment, and you can program.
After we got a Windows machine things were a lot harder; remember, this is around 1998 and I was a 14-year-old non-native English speaker. I got “Sam’s teach yourself C++ in 10 minutes” in my native language from the local bookstore, which is probably the shortest programming book I’ve seen to date. I think the 4th or 5th chapter introduced templates. You can imagine how well that went for me. In hindsight, that book was not just a waste of my pocket money, it also did me a massive disservice by making me believe I wasn’t smart enough for modern (at the time) programming.
Also, getting a development environment meant getting bootlegged copies of Visual Studio. It was a pain. Perhaps there was an easier way back then, but it was non-obvious, at least to me.
So, I stopped programming for a few years. It wasn’t until I installed Linux (which got replaced by FreeBSD a bit later) around 2004 that I really got back into programming, as by that time it was so much easier to get started, especially in a Unix-y environment.
With “web as OS” we’re essentially back to the MSX/C64/BBC Micro days, where you start the machine and you’re dropped into a programmable environment. Regardless of the criticisms of the web tech and how it’s used, I feel this is a very important and powerful advantage that’s often overlooked.
That’s a little harsh given some environments were easy to get if it wasn’t C++. Quite a few languages had installers that let you just get right to it with good documentation. ActiveState’s for Perl and Python on Windows let me experiment with them quickly. I remember I used FreeBASIC for one experiment. It was easy to download, easy to compile, and its commands were online. Lisp had Lisp in a Box with Practical Common Lisp online.
The browser environments are certainly more seamless. It’s just not a huge leap from providing a doc/ebook and an installer. There’s just an extra step in the second scenario. Once installed, you’re doing the same thing. C++ and its tooling are just their own level of pain. I found that out after trying to quickly absorb it using Sams Teach Yourself C++ in 21 Days a long, long time ago. ;) Hey, look what (pdf) I just randomly found checking to see if I remembered its name.
Oh yeah, there were undoubtedly many things out there that I simply didn’t know about; after all, I was just a 14-year-old kid who had no idea what he was doing. We didn’t even have internet at first; that didn’t come until half a year later or so (dialup, of course, so you had to be quick about it!)
But that was kind of my point: you really had to search for solutions and at least vaguely know what you were doing, which was quite a different experience from a machine that’s “programmable by default”.
Oh I see what you mean. I had the same problem. Google wasn’t there. Had to go with whatever was on the machine, outdated books in thrift stores, etc. My early ideas about programming were probably way off, too. I can’t remember them now.
Curious if you’re still using FreeBSD nowadays?
…massive disservice by making me believe I wasn’t smart enough for modern (at the time) programming.

I feel as if modern programmers (pesky humans) have a knack for doing this to others just getting started or wanting to grow. Curious too how you think people have changed since the time you read that book?
No, I stopped using it when the old pkg_* stopped working and everyone had to forcibly upgrade to pkgng. There were many bugs (broke literally all four of my systems) and the state of it back then was horrible. I didn’t like the design in the first place, but if it had at least worked it would have been palatable, but it didn’t even do that.

I don’t know what the current state is; incidentally there was a HN thread yesterday which said it got better, although it still has weird bugs 🤷♂️
The book was just really bad; I mean, the entire title is obviously just complete bullshit: teaching anyone C++ in 10 minutes is nothing short of a ludicrous and fraudulent claim, and the content of the book was to match. But … I didn’t know any of that at the time.
There are still really bad books out there; a few years ago a friend was trying to learn C for some embedded project she had to do for uni (industrial design studies). Her C book was beyond horrible; even I had trouble understanding some parts and I actually already know how to program in C. I offered her my copy of K&R – which is probably also not the best introductory book, but much better than what she had – but by this point she was sufficiently demotivated to just give up.
Books are like software: a lot just gets written by some rando who just wants to make a buck and doesn’t really care. If you put something on the market with a bit of advertising around it, you’re going to get buyers, because the quality can be rather hard to appraise beforehand.
Nowadays, there are a lot more resources in the form of online content, forums, Stack Overflow, YouTube, Khan Academy, meetups, code camps, what-have-you. It’s not like this was completely absent back in the day, but now it’s so much more accessible, especially for interested 14-year-olds. To be honest, I think sometimes there’s a bit too much focus on helping new users.
Besides what @nickpsecurity said, I think the other thing that has changed is that formal methods & fancier types are slowly creeping in from the edges:

20 years ago, linear & affine types were neat academic exercises; now we have at least one major programming language with them.
Languages like ML, Haskell, &c were side curiosities; now you can often find major projects written in them.
Tools like symbolic executors, abstract interpreters, “design by contract,” and so on are relatively normal now, not fancy wares of academic high towers.
Property testing, fuzzing, and other types of random mutation testing aren’t seen as black arts, but rather as mundane things that most people can use.
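As an illustration of how routine property testing has become, a small sketch using Python’s hypothesis library (assuming it is installed and the test is run under pytest; the property itself is just an example):

    from hypothesis import given, strategies as st

    # Property: sorting is idempotent and preserves length, for any list of ints.
    # hypothesis generates and shrinks the counterexamples automatically.
    @given(st.lists(st.integers()))
    def test_sort_properties(xs):
        once = sorted(xs)
        assert sorted(once) == once
        assert len(once) == len(xs)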
There are both jokes and serious observations, I like that :-)
I have a problem with two items specifically though:
Since we have much faster CPUs now, numerical calculations are done in Python which is much slower than Fortran.

Actual numerical calculations are run by vectorized C code, even if it’s called from Python. Python is there to describe the logic around it.

Unit testing has emerged as a hype and like every useful thing, its benefits were overestimated and it has inevitably turned into a religion.

It also made software orders of magnitude* more reliable. I consider testing and version control the two most important innovations in software development since I started on it.
*) I can exaggerate things as I see fit :-)
Same nitpick, but reminder that SciPy has more Fortran in it than C, which makes the author’s statement even more confusing.
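To illustrate the vectorization point, a minimal NumPy sketch (illustrative only; the speed difference depends on the machine and array size):

    import numpy as np

    xs = np.random.rand(1_000_000)

    # The Python-level code only describes the operation; the element-wise
    # work runs inside NumPy's compiled loops (C here, often Fortran in SciPy).
    total_vec = float(np.sum(xs * xs))

    # A pure-Python version of the same reduction, typically orders of
    # magnitude slower because every element goes through the interpreter.
    total_py = sum(x * x for x in xs)

    print(total_vec, total_py)  # equal up to floating-point rounding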
To be honest, not that much has changed in the last 20 years, fundamentally. But I’m happy I got to see the whole stack as we have it nowadays grow layer by layer.
Sorry if I misunderstand, but people pay their own money to go to these corporate conferences?
$2K is an odd-ball. There are $200-$500 conferences where people may buy tickets from their own pocket, and $8K conferences that are designed for extracting money out of corporations.
Also, interestingly, I only bought my first digital camera in 2001 - and nobody took selfies back then… That’s what threw me off more in that sentence :)
Computer programming 40 years ago (when I started) was always dominated by two groups: the highly prosperous academic and the dirt-poor hacker kid.
The academic had experience on all the old metal. The hacker kid, only the new stuff. Or, depending on the hackers’ interests, often a bit of old stuff too. Both camps were often as competent as each other, but in different ways. The old guys could design software, then implement it and then write tests; the hackers mostly coded software, then fixed what was broken - or didn’t. Back then the more you hacked things, the more academic you actually became. Often, hackers wrote papers that academics would absorb with resistance.
Everyone had their favourite flavour of machine, language, and coding style. Your true value then depended on how well you could shove that flavour down anyone else’s throat. Managers didn’t care how you did it - just whether you did it, and where it was running, and when the users could be given access to it, and so on.
The tooling was bonkers - there was always some new way to write code. A lot of the time, you’d spend just writing tools, then hack up the project by gluing all the tools together.
Things haven’t changed much. In fact, they’ve stayed exactly the same. The only real ‘difference’ is that the cyclomatic complexity has gone up a few factors, and there’s way more RAM than there needs to be for most tasks. This is because the academics mostly win, and we get “good enough” bloated mediocrity in our operating environments, as always happens with academia, while industry keeps all the hackers fighting with each other for the latest/greatest shiny new toy that does - basically the same things that last year’s toys did, only ‘better’.
We had that problem back in the 70’s too. The computer industry is very, very decadent in this regard.
I use a lot more software whose primary maintainers are from Eastern Europe/Russia. For a few years in the mid-2010s there seemed to be a slow explosion of interesting open-source projects coming from Chinese programmers, notably Gogs, but that seems to have dried up now…
Thankfully this turned into a joke/satire quickly because I got really triggered with that one, haha
(Sadly it’s true tho)
I think the biggest thing I can think of is that the web became one of the main platforms, rather than sort of an afterthought.
The one that really “got me” was that one gigabyte isn’t enough storage anymore.
It’s comments like this that make me look at MenuetOS and other small OSes and wonder why there isn’t some modern assembly language that makes certain programming easier while remaining small.
Chuck Moore has a new ColorForth variant called uhdforth, which may be a good fit for that. His decision to build it on top of Windows instead of making it a standalone OS is really confusing to me, though. (Maybe it has something to do with the difficulty of interfacing with modern video hardware? Jumping into long mode is not actually that hard, & he’s already rolling his own everything, including threading.)
See https://tonsky.me/blog/disenchantment/
Compute power is cheap so we can do a lot more testing, automated via continuous integration servers.
20 years ago was just before “The Free Lunch Is Over”, so your programs got twice as fast every 18 months back then.
Not even sure which of the comments were intended to be snarky and which are just real developments :P
One of the biggest things for me is version control; I hardly knew anybody who used it in the year 2000.
Absolutely the most important factor, I think. Revision control existed earlier, but it’s not only gotten nigh-universal (even hobbyists use it), but it’s improved a lot (for instance, I use both svn and git for work, and git is so much easier to deal with because it’s not file-based). In 2000, we would have at best been using CVS.
When I started using VCS I already used SVN for most of the things (university, work, private stuff) but PHP was still on CVS - iirc merging was a bit of a complicated thing. git was a lot easier to “repair” if someone did something weird, but I don’t remember much. CVS wasn’t horrible, actually.
SVN is a pretty minor improvement over CVS, IMO. It removed some of the flakiness & weird corner cases. I’m still much happier merging, trusting diffs & logs, & moving around big chunks of code in git. (About the only thing I prefer from SVN & CVS is the way you could revert uncommitted changes to a file by deleting it and running update).
Subversion was better with submodules and binary files. For games with assets, for example. (Talking small games here, not real[tm] professional[tm] game development :)
As someone who had to do branch merges and cleanups (not to mention flaky network connections causing broken commits) in CVS, I say SVN was a remarkable improvement. Also, and very importantly, SVN was not file-based, but commit-based, even though it used nearly the same UI as CVS did. It was honestly a remarkable feat.
And, frankly, that UI was much easier for me to understand than git’s (which, honestly, is a UI nightmare).
Still, when bitbucket and github launched, they ushered in ubiquity of source control, and that wouldn’t have happened without git and hg.
I managed to avoid having to do branch cleanups & merges in CVS, though I’ve heard that it’s a nightmare.
You’re right – I’m technically wrong when I say SVN is file-based. SVN doesn’t identify identical blocks of code as they move between files, as git does. Additionally, I’ve frequently found that the svn state of the root directory of some tree won’t track with individual files, so that ‘svn log’ will give out of date information unless you do an update first – a gotcha that can be very confusing to people who are used to other RCSes.
Is this intended as a joke or satire?
Either or.
Mature programs must now degrade into a Gordian knot of abstraction; to stupefy anyone working on your program, control flow must be laundered through at least 4 different interfaces before accomplishing anything.
App like this at work. Gordian Knot is right.
Devops.

vs 2000? Information. Faster computers. Much more F/LOSS. Accessible programming languages.