1. 9
  1.  

  2. 9

    For a teenager and an aspiring computer programmer, the 00s were a great time to learn.

    You can replace ‘00s’ with ‘80s’ or ‘90s’ and this would still be true (I’ve heard this sentiment many times). Perhaps not the 70s or earlier, since home computers were not really a thing then.

    I think the key point in this discussion/rant is that computers are mostly an appliance and a consumption device. Tools for creating things with them have gotten harder to work with over time. Part of that has to do with the complexity of the systems, but part of it is also the paucity of profits that come from providing such tools. The solution, such as it is, seems to be adding more options to our C compilers.

    Also: it turns out Mastodon is just as bad for long threads as Twitter.

    1. 2

      I don’t think inherent complexity has much to do with why systems have gotten less flexible over time. I think we’re mostly looking at the result of changing norms.

      Personal computers in the early 80s were marketed along the lines of “master for loops with this machine and you can take control over your life”. This wasn’t necessarily totally accurate (a lot of those machines had 8-bit integer arithmetic, so using them even for personal finances could be tricky), but it at least made clear that the point of the machine was that you’d set aside half an hour with the manual and gain control over the machine in turn.

      There was a concerted effort, spearheaded by Jobs, to turn general-purpose computers into single-function computing appliances, hide programming from non-technical users, and force the hobby community to turn itself into a much more professionalized “microcomputer software industry”.

      I don’t think any of those things were really necessary – we used to have a distinction between workstations (big expensive machines used by professionals for important work and paid for by the company) and micros (small, buggy machines without memory managers, where commercial software was thin on the ground and you were really expected to write everything yourself even if you weren’t a programmer), and that division was really empowering for both sides (even as, by clocking in or out, you could cross the boundary between small computing and big computing).

      1. 6

        I think you are misrepresenting the state of advertising and computers in the 80s.

        Look at this ad for a TRS-80.

        It’s not selling programming, it’s not selling for loops–it’s selling the software that solves the types of problems the user has, writing stuff and balancing the checkbook.

        The distinction between workstations and micros is similarly incorrect. Workstations–say, HP or Sun or SGI boxes–were outnumbered by cheap PC-clones or IBM-ATs or Apple boxes or whatever, running boring business applications.

        There’s this desire to say “Ah, but in the golden age of computing, where every user was a programmer-philosopher-king!”, but that just isn’t borne out by history.

        1. 6

          There’s this desire to say “Ah, but in the golden age of computing, where every user was a programmer-philosopher-king!”, but that just isn’t borne out by history.

          Absolutely. Don’t get me wrong: just because I argue that dev tools were easier to work with in the 80s doesn’t mean they were good. It’s a subtle difference that some fail to take into account.

          1. 1

            Thank you for clarifying that, I understand your point more clearly now. :)

          2. 1

            I’m slightly exaggerating for the sake of emphasis. There was never a golden age of programmer-centric home computers, but there was a silver age of home computers that expected that most non-programmers would do a little programming, and catered to the middle-ground between novice and developer in a way that didn’t require a mission statement and career goals. (If you had a computer in your home, you probably wrote a little bit of code, and nobody was forcing you to write more.)

            There were business-specific ad campaigns that focused on existing shrink-wrapped software, and were essentially ads for the software in question. But, those same manufacturers would have programming related campaigns. And, if you weren’t in the market for spreadsheets, you’d quickly find that if you wanted your computer to be anything more than an expensive paperweight, you’d need to either play a lot of games or learn simple programming concepts.

            Sure, workplaces could use micros to run dedicated business apps. But, if someone bought that same machine for their own home, programming would be presented to them, by the machine and the machine’s documentation, as the primary way of interacting with the machine unless they bought third party shrink-wrapped software.

            We’ve moved to a programmer/user division as part and parcel of a privileging of workplace deployment needs over exploration (as the thread mentioned) – a situation where users aren’t expected to be in full control over their machines anyway because they have sold their time to Moloch and must use their machine in only Moloch-approved ways. It’s fine that such systems exist (we all sacrifice our 40 hours a week into Moloch’s gaping maw), but it’s pretty stupid to have even the machines in our bedrooms set up like they expect our house to have a professional sysadmin and an internal dev team (to write new applications or edit the code of licensed stuff to meet our needs).

            The ability, as a non-programmer, to write hairy messy code in the comfort of your own home, and the understanding that it’s expected of you to write code for yourself and nobody else, is really important. The alternative is that everybody who masters for loops thinks that they’re ready to work for IBM, and they end up using plaintext databases for password storage at a fortune 500 company because they don’t understand the difference between big computing and small computing.

            1. 5

              The ability, as a non-programmer, to write hairy messy code in the comfort of your own home, and the understanding that it’s expected of you to write code for yourself and nobody else, is really important.

              Why? Why does this matter to non-programmers?

              I have an app that streams movies and porn. I have an app that lets me tweet at other people who feel that tweeting is important. I have a web browser and an app to file taxes. I have an app to go look at my bank transactions and remit rent. I have an app to collect e-books I never get around to reading.

              What problems do I have that require programming or number crunching that are not solved, better and more easily, by just using something somebody else wrote and leases back to me for the convenience?

              And if I need to do something really weird, why not just hire a coder to solve it for me?

              ~

              I’m not even being facetious here. The argument for everybody knowing how to program is, increasingly, like the argument for everybody knowing how to cook, how to debate properly, how to shoot, or any other thing we used to expect functioning adults to be able to do.

              It doesn’t matter anymore. It’s not required. Federation of skills and services is inefficient.

              It’s obsolete.

              1. 5

                First off, spreadsheets are the most popular programming environment ever created. Yes, spreadsheets. It’s a form of programming, only it’s not called programming, so people do it. [1]

                Second, people not exposed to “programming” are often unaware of what can be done. A graphic designer is given 100 photos to resize. Most, I fear, would, one at a time, select “File”, then “Open”, then select the file, then “Okay”, then select “Image”, then “Resize”, then type in a factor, hit “Okay”, then “Save” and then “Okay”. One Hundred Times.

                I think there’s an option in Photoshop to do that as a batch operation but 1) that means Photoshop has to provide said functionality and 2) the user has to know about said option.

                As a “programmer”, I know there exist command-line tools to do that, and it’s (to me) a simple matter to do

                for image in src/*; do convert -resize 50% "$image" dest/"$(basename "$image")"; done
                

                (I think I got the syntax right) and then I can go get a lovely beverage while the computer chugs away doing what the computer does best—repetitive tasks. It’s a powerful concept that many non-programmers don’t even realize exists.

                [1] It’s scary how much of our economy depends upon huge spreadsheets passed around, but that’s a separate issue.

                1. 2

                  Oh, I’m quite aware of spreadsheets–but that’s not programming by some people’s definition, because they don’t come with a bunch of manuals that non-programmers can cavort through.

                  As for your second point, I see what you’re getting at–but the majority of people will keep doing things the dumb, slow way because they don’t think that learning a new way (programming or not) is easy enough or because there is simply no incentive to be more efficient.

                2. 2

                  The argument for everybody knowing how to program is, increasingly, like the argument for everybody knowing how to cook

                  Yes, it is. If you know how to cook, then you aren’t at the mercy of McDonalds.

                  I’m not really arguing for everybody “knowing how to code” in the sense that some people use that phrase.

                  Every UI is a programming language. We’re stuck with shitty UIs because we think our users are unable to cross some imagined chasm of complexity between single-use and general-purpose, but that chasm is mostly an artifact of tooling we have invented in order to reproduce the power structure we benefit from.

                  There’s no technical reason that you need to learn how to program in order to program – only social reasons. And, I don’t consider that acceptable.

                  It’s fine if people decide to remain ignorant of programming (even when it’s literally easier to learn enough to automate some problem than to solve it with shrink-wrapped software). It’s not fine that the road to proficiency is being hidden.

                  Ultimately, if a non-programmer requests a programmer to write some code, it’s typically done for money. It’s done for money because there’s a gap between the professional class of programmers (who write professional code with professional tools for money) and the non-programmer (who must embark upon a quest to become a programmer, typically with shades of career-orientation, before writing a line of code). But, the ability for a non-programmer to say “I’m not willing to pay you to spend five minutes writing this code; I’m going to spend twenty minutes and do it myself” is missing, because it’s not possible with current tooling to go from zero to novel-yet-useful code in 20 minutes. So, programmers (as a class) get to overvalue their services by using tools that require more initial study to use.

                  Everybody knows how to cook (delta some tiny epsilon – very rich people who can eat out every night, small children, the institutionalized). Most cooks are not chefs (or even line cooks) – their success in cooking hinges on whether or not they are willing to eat the food they make, and so they don’t need to live up to the standards of paying customers. This provides a steady stream of people who already know how to cook enough to know that they like it, who can graduate on to professional cooking jobs, but it also provides a built-in competition for those professionals. And, it’s something that is only so widespread because there is an expectation that everyone can learn, an understanding that everyone benefits from doing so, and a wide variety of tools and learning materials covering the entire landscape from absolute novice to world-famous expert. Nobody mistakes being able to boil an egg for being able to stuff a deboned whole chicken.

                  When there is no place for absolute amateurs, everybody with a minimum competence gets shunted into the professional category. This is a problem when there’s no licensing system. It’s a huge problem with the tech industry. We need to stop it. The easiest way to stop it is to make it easy for non-programmers to compete on relatively even ground with professionals – which isn’t as hard as it looks, because users have many needs that are too rare to be met by a capitalist system.

                  (To give a cooking example: I like to put cinnamon and nutmeg in my omelettes. No professional cook would ever do that. So, if I want that wonderful flavor combination, I need to do it myself. Every user has stuff like that, where they would like their software to work in a particular way that no professional programmer will ever implement.)

                  1. 2

                    ooh ooh pick me pick me

                    Most of the stuff we use is created by companies, which are trying to make it maximally useful for a given effort, so they end up covering, like, 90% of use cases. That could mean every person can get 90% of their stuff done with it, but it could also mean that it’s perfect for five people and only 80% useful for the other five. Programming can help (not fix, but help) patch up that 20%.

                    In practice, though, most programming languages aren’t suited for duct-taping consumer apps. When I say “everybody would benefit from learning to program”, I’m thinking things like spreadsheets, or autohotkey, or maybe even javascriptlets.

                    1. 1

                      Yeah. There’s a tooling issue, in that most programming languages these days are made for programmers, and the ones that aren’t don’t play nice with the ones that are. This is a huge gap, and one that benefits capital exclusively.

            2. 0

              Tools for creating things with them have gotten harder to work with over time.

              Really? Every modern browser has a built-in development environment!

              1. 4

                Even on mobile browsers? I think not.

                1. 1

                  This is a very good point. I was looking at this issue in the light of my own experience, which has been with personal computers of various vintages. But most new users come into contact with computers through phones and tablets now!

                  (I have copied program listings from magazines into my ZX Spectrum, to date myself).

                  If we confine ourselves to MacOS/Windows, even these have good scripting environments that can be capable programming environments - PowerShell beats bash in this regard, I think.

                  As an aside, in last year’s Advent of Code, a post was made on the subreddit complaining that an assignment built on a previously solved assignment (i.e. the code was to be reused). It turns out that this person solved the assignments on their mobile device and discarded the code after submitting a correct answer.

                2. 4

                  Every modern browser has a scripting language sandbox with a giant, awkward, broken, poorly-documented API, which you need internet access and a guide to even start on. No editor either. And then, to share your work, you need to buy an account on somebody else’s web server and learn to use SFTP. Most users don’t even know that writing javascript is something a regular person can do.

                  In comparison, early home computers (including the IBM PC) ran BASIC by default at boot-up. You would need to go out of your way to override this by putting in a cartridge or floppy before you started the machine, and in many cases the machine didn’t come with any software other than BASIC. And, your machine would ship with a beginner’s guide intended to teach BASIC to people who could barely read, an “advanced BASIC” guide for people who couldn’t code but had read the beginner’s guide already, full API documentation, schematics for the machine, the source code for the BASIC interpreter (sometimes), and BASIC code for a handful of demo programs. An effort was made to ensure that every machine showed a clear, easy to follow path from end user to mastery over one programming language (and, typically, you got an only-slightly-muddier path in the documentation itself for progressing to a basic grasp of assembly language or machine code).

                  For most people who have a web browser, programming is still “something somebody else does”. For anybody with an Apple II, Vic-20, C-64, TRS-80, Sinclair, BBC Micro, PC-8300, or really any home computer manufactured between 1977 and 1983 save the Lisa, programming is “something I could do if I spent a couple hours with these manuals”.

                  1. 4

                    This is incorrect. Firefox gives you an editor in the form of the scratchpad. MDN documents almost all of the web APIs currently supported by browsers, and if that doesn’t float your boat, the W3C spec + caniuse works as well. There are issues with the web, yes. Ease-of-entry is not one of them.

                    Also, while new and experimental features are buggy, by and large browsers are not buggy or awkward from a web developer or consumer’s POV.

                    1. 3

                      MDN documents almost all of the web APIs currently supported by browsers, […]

                      That’s exactly what enkiv2 is saying:

                      […] which you need internet access and a guide to even start on.

                      There are issues with the web, yes. Ease-of-entry is not one of them.

                      I disagree: you need to know what you’re doing in order to start making things. Most people don’t know how to open the devtools. (EDIT: and then, there’s the “ecosystem”, a huge pile of overengineered abstraction layers causing nothing but bloat.)

                      by and large browsers are not buggy or awkward from a web developer or consumer’s POV.

                      Iceweasel (from the Parabola repositories) has a bunch of bugs (search doesn’t work in the address bar, …), and is awfully bloated (takes a while to launch, uses half a GiB of RAM for 2 tabs, …), in my opinion.

                      1. 2

                        Iceweasel (from the Parabola repositories) has a bunch of bugs (search doesn’t work in the address bar, …), and is awfully bloated (takes a while to launch, uses half a GiB of RAM for 2 tabs, …), in my opinion.

                        I should clarify. If you run a “normal” browser on a “normal” OS you won’t run into many issues. Also, compared to the vintage computers the OP is referring to (especially the Apple II, which had its startup sound specifically engineered to sound more pleasant since it crashed so often), the web is solid as a rock.

                        1. 1

                          Facebook & twitter are slow as molasses & glitchy on stock chrome on a stock windows 10 install on a brand new machine.

                          1. 1

                            I would argue that’s the developer’s fault. On vintage machines (and calculators), it’s just as easy or easier to produce a badly optimized solution that runs horribly. The current trend in web development is to force the client to do all the work, which causes issues on less powerful machines.

                            Also, that’s anecdotal evidence. My experience with Facebook and Twitter on the web, using Chrome and Windows 10 on a Thinkpad T540p, has been pretty good. Unless you have solid evidence that the web in general is slow and glitchy, that statement has no backing.

                            1. 1

                              Man, if you’re going to consider a systemic problem (like “almost every major web app is slow and glitchy, and most of the minor ones too”) as though it’s a cluster of unrelated particulars and ask for proof of every one, I don’t know what to tell you. Using the web at all is pretty good evidence that the web is slow and glitchy, and the experience of writing web apps explains why they would be expected to be slow and glitchy in a pretty convincing way.

                              I mean, maybe you just have really low standards? But, I don’t think it’s OK to cater to low standards in a systematic way, even if you can get away with it.

                              1. 1

                                Do you consider lobsters slow and glitchy? What about most blogs? Stack overflow? I can name tons of sites that get it right. The ones that don’t in my experience are few and far between. Facebook is the only popular site I can think of at the moment, but I really don’t think that counts since their native mobile app sucks just as much or more. Which would imply it’s facebook’s fault, not the web’s.

                                News sites are generally bad but that’s an issue with ads, not the web itself. There are cultural problems in web development but from a purely technical pov I don’t think the web is a bad platform.

                                1. 0

                                  Do you consider lobsters slow and glitchy?

                                  It took in excess of 20 seconds to load this comment, on a broadband connection. What do you think?

                                  What about most blogs?

                                  The only blogs that have what I would consider acceptable overhead are non-CMS-based minimally-formatted static HTML sites like prog21. The average blogger or medium blog takes tens of seconds to load. Depending on the platform, sometimes a blog page becomes a problem in the middle of reading an article, causing the tab to crash. (This isn’t necessarily an ad thing – it’ll happen on medium, which has no ads and no third-party or user-supplied scripts.)

                                  Stack overflow?

                                  Stack overflow has, on occasion, taken more than 10 minutes to load a single page on my machines.

                                  So, from my perspective, most web sites do not have acceptable performance. Even fast sites are slower than they could be, given absolutely minimal effort. (And, this is not even considering the embarrassing level of bloat introduced by web standards – just using HTML and HTTP expands the number of bytes that need to be transferred across the network to render a static page by a factor of eight or more over markdown+gopher.) In other words, even if performance was acceptable from a user perspective (and I’m a professional developer with a newish machine that’s been tuned to improve performance – anything that’s slow for me is a hundred times slower for the proverbial grandmother), there’s a lot of low-hanging fruit in terms of improvement.

                      2. 1

                        Firefox gives you an editor in the form of the scratchpad.

                        Hidden deep enough in menus that, unless you knew it existed and were looking for it, you would never find it.

                        MDN documents almost all of the web APIs currently supported by browsers, and if that doesn’t float your boat the W3C spec + caniuse works as well.

                        Doesn’t ship with every offline browser. Isn’t linked to from the default home page.

                        Ease-of-entry is not one of them.

                        I think I’ve made my case that the web doesn’t do a fraction as much work to ensure that every end user finds it easy to get on the road to being a programmer as every mom-and-pop computer shop did in 1981.

                        by and large browsers are not buggy or awkward from a web developer or consumer’s POV

                        I disagree completely. Web developers are constantly complaining about things being awkward, inconsistent, or buggy – and front-end and back-end developers who switch to working with web standards for a project or two have every reason to sympathize.

                        Just because web development has become marginally easier since 2006 doesn’t mean it was ever acceptable, in terms of effort/reward ratio.

                        1. 6

                          I think I’ve made my case that the web doesn’t do a fraction as much work to ensure that every end user finds it easy to get on the road to being a programmer as every mom-and-pop computer shop did in 1981.

                          This is flagrantly false, between Stack Overflow, MDN, MSDN, W3Schools, and others.

                          There is so much more information out there, better presented and better organized and better indexed and at lower cost, than there ever was in 1981.

                          1. 1

                            If you need to be told that it exists, then it isn’t accessible to people who identify as non-programmers.

                            I’m not talking about the ease with which someone who has already determined that they would like to become a professional programmer can find documentation. That, obviously, has improved.

                            I’m talking about the ease with which a completely novice user can wander into programming without any particular desire to learn to program, and learn to program despite themselves.

                            (Some people in very particular fields still do learn to program despite themselves. Those people are mostly research scientists. I don’t consider that an improvement.)

                            1. 6

                              I’m talking about the ease with which a completely novice user can wander into programming without any particular desire to learn to program, and learn to program despite themselves.

                              They only have to Google “How do I build a website” or “How do I write a website”.

                              Just because they aren’t rifling through thick manuals they bought with their micro doesn’t mean that non-programmers don’t have equivalent (or better!) resources.

                              1. 3

                                Just because they aren’t rifling through thick manuals they bought with their micro doesn’t mean that non-programmers don’t have equivalent (or better!) resources.

                                Exactly. I get nostalgic about my Casio fx9750’s programming manual, but you won’t find me claiming it was a better resource than anything you could have found online. For a beginner, there really isn’t a good alternative to solid documentation and question-and-answer sites.

                                1. 1

                                  If you find it online, it’s not a piece of documentation you have – it’s a piece of documentation you seek, that happens to be free and delivered quickly. You need to know that it exists, and you need to know how to find it, and both of those things are barriers.

                                  For somebody to google “how do I write a website” they need to believe that a website is the appropriate way to solve whatever half-understood problem they have. Their problem may be something more like “how do I sort paid invoices by attachment type in paypal” – in other words, a useful feature missing from a popular service, which is best implemented by a shell script. Searching for this will not teach them how to solve the problem, because they didn’t put anything about programming in the query, because they don’t know that the best way to solve this is by writing some code. They will instead get zero relevant results, and instead of thinking “I should write code to do this”, they will think “I guess it can’t be done”.

                                  1. 3

                                    “how do I sort paid invoices by attachment type in paypal” – in other words, a useful feature missing from a popular service, which is best implemented by a shell script.

                                    What?!

                                    1. 1

                                      Paypal will let you export a CSV of invoice summaries containing information about attachment names. So, the sensible way is to export that CSV and use shell tools to sort by attachment extension – in other words, write a couple lines of code to handle a corner case that the original developers of the site couldn’t foresee.

                                      (This particular example is taken from my life. I’ve commissioned a bunch of artworks, and I want to separate those records from other unrelated invoices, so that I know which works have been finished and paid for even though it’s taken the better part of a year for them to be made & they’re not in any particular order.)
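                                      To make that concrete, here’s a minimal sketch of the couple of lines in question. The file name `invoices.csv` and the assumption that the attachment filename sits in column 3 are both made up for illustration – the real export layout will differ:

                                      ```shell
                                      #!/bin/sh
                                      # Sketch: sort exported invoice rows by attachment file extension.
                                      # Assumes invoices.csv has a header row and the attachment name in
                                      # column 3 -- both assumptions about the export format.
                                      head -n 1 invoices.csv                       # keep the header
                                      tail -n +2 invoices.csv \
                                        | awk -F, '{ n = split($3, p, "."); print p[n] "," $0 }' \
                                        | sort -t, -k1,1 \
                                        | cut -d, -f2-
                                      ```

                                      A few lines of shell, no build step, and the one-off problem is handled – which is roughly the scale of programming being argued for here.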

                                      1. 1

                                        Why not write a couple lines of js in a greasemonkey userscript so you don’t have to go to the trouble of exporting as CSV, opening a terminal, and running a shell script?

                                        1. 2

                                          Because attachments are never listed in the summary page (which also has a very small maximum pagination). Web services are intended for display, and not made accessible for further user-driven hacking – particularly financial systems like paypal – so doing this kind of work in a browser is made even more awkward than it otherwise might be.

                                          Even had we a reasonable page size (say, ten thousand, instead of twenty) and the necessary information, javascript is going to be a much more awkward solution – we need to navigate arbitrarily-labeled tag soup in order to handle what is essentially tabular data. Using shell tools (which are optimized for tabular data) is easier.

                                          Even so, this whole discussion is about what we, as hackers, would do. What hackers would do is basically irrelevant. The problem is that what a non-hacker will do to solve such a one-off problem is see if someone has already solved the problem, find that nobody has, and give up – when the ideal solution is for the non-hacker to be able to hack enough to solve the problem on their own.

                                          1. 1

                                            Fair enough.

                3. 3

                  Some parts of this make me think of a book I’m reading right now, ‘You are not a gadget’ by Lanier. The quote in this thread about making computers more human-literate versus getting people to be more computer-literate makes me think a lot about a question brought up by the book: ‘are we building a digital utopia for people or machines?’ It seems valid to me to want these things considered by those designing products like computers and the software that runs on them, which have become so ubiquitous.

                  1. 2

                    Yup. Lanier is on the same wavelength as Ted Nelson, Alan Kay, and a lot of the general computers-for-the-people crowd. (He used to be the most visible guy in this group, during the 90s, though I think the current title-holder is Bret Victor.) All of those guys are pretty influential on the growing community of humanist-tech people on the fediverse.

                    Currently, a lot of the rhetoric is around nice things that used to exist in the industry and have gone away. However, Ted Nelson, Doug Engelbart, J. C. R. Licklider, and Alan Kay were saying this stuff before there was an industry – so the extent to which the utopian ideal of human-centric computing was ever historically achieved is only relevant insomuch as it’s an indication of the ease with which such ideas could be implemented again in the future.

                  2. 2

                    Reminds me of these blog posts by viznut:

                    He sometimes sounds a bit crazy, but he has some valid points nonetheless.

                    1. 2

                      It’s clear that he’s a smart guy who’s familiar with Marxist critiques. The ‘resource leak bug’ post doesn’t seem like anything new, but he’s described it pretty lucidly. Thanks!

                    2. 1

                      The author has edited this thread into a blog post, and it’s worth taking another look, even as some of the good observations from the original thread by other users were lost: http://ajroach42.com/observations-on-modern-computing-the-last-10-years-were-a-misstep/