1. 14

  2. 23

    I cannot recall the last time a single tool gained such widespread acceptance so swiftly, for so many use cases, across entire demographics.

    I can! I remember the paperless office meme during the ascent of Excel. IBM started advertising the meme in the 1960s, Lotus 1-2-3 became available in 1983, and Excel was published in 1987. I could imagine what it must have been like in the late 1980s, with a feeling of sweeping change as everything was encoded into spreadsheets. In my childhood in the 1990s, I learned Excel at school while my parents learned Excel at the dinner table.

    If a single dumb, stochastic, probabilistic, hallucinating, snake oil LLM with a chat UI offered by one organisation can have such a viral, organic, and widespread adoption—where large disparate populations, people, corporations, and governments are integrating it into their daily lives for use cases that they are discovering themselves—imagine what better, faster, more “intelligent” systems to follow in the wake of what exists today would be capable of doing.

    Indeed. Lotus 1-2-3 was designed around a precisely specified, if baroque, cell-updating algorithm, while rumor has it that Excel’s is hand-coded and has so many corner cases that even its maintainers no longer fully understand it. This is not unlike how we can precisely analyze and explain tiny machine-learning systems but struggle to grasp the inner workings of LLMs.

    The non-deterministic operation of the typical LLM is worth noting, as it doesn’t fit into the analogy. People can learn to predict how Excel will behave; I’m not so sure about LLMs.

    An increasing number of decision-making systems in corporations, governments, and societies will start being offloaded to AI blackboxes for efficiency and convenience, which will slowly eat away at human agency, like frogs in slowly boiling water.

    I wonder how many fields of study are silently corrupted by Excel. Anything with dates, for sure. We have renamed genes to appease Excel. And that’s not considering incorrect formulae; Excel has no way to prove that a spreadsheet is correct, eventually correct, partially correct, etc.

    1. 2

      The whole point is “widespread acceptance so swiftly”. I’m not sure your argument is strong enough here, considering the paperless office meme ran from the 1960s to the 1990s, spanning four decades, and probably only about 1 in 10 households had a PC at the beginning of the 1990s, with perhaps half of US households by the end of the decade.

      I think one reason “this time, it feels different” is that almost everybody has a freakishly fast computer in their hands. Is this new AI thing gonna last? Who knows.

      1. 1

        I’m not convinced that ChatGPT’s explosive spread is significant, considering it is mostly free as in beer and promoted with all the tools that have been used in the last decade to push software products. Plenty of people sign up for services all the time but don’t continue using them.

        The growth of cell phones and internet access involved quite significant investments of money, and they offered recurring revenue to the service providers. I’m not saying LLMs won’t be profitable, but right now they’re billion-dollar bets by huge companies that they will be the next big thing, not something there’s an intrinsic demand for.

    2. 18

      I have been actively writing software, tinkering, and participating in technology/internet stuff for about 22 years. I cannot recall the last time a single tool gained such widespread acceptance so swiftly, for so many use cases, across entire demographics.

      I can! I remember the World-Wide Web exploding in 1995. I’d been using Mosaic and then Netscape for a year or two, but suddenly the web “went viral” and became part of the shared culture. I remember how deeply surreal it felt to see a URL on a billboard for the first time, or to hear radio announcers carefully spelling out “H-T-T-P-colon-slash-slash-W-W-W-dot-…”

      (But no, this wasn’t as fast as the rise of ChatGPT. Everything is faster now, we’re all living in “Slow Tuesday Night”.)

      1. 3

        I remember the World-Wide Web exploding in 1995.

        Even though I was only 11 years old, the progression was shockingly fast. In 1994 we were running an old Compaq XT with a 20MB hard drive (admittedly pretty old and crusty even at the time) and it did everything we needed it for. In 1995 an Internet Cafe (i.e. a coffee shop with two 486 or P1 PCs) opened not too far from my house. I would bring 5.25” 360kB floppies with me and download all kinds of things from FTP sites and newsgroups and bring them home to peruse at my leisure in black & orange phosphor. I went from “hooked on computers” to “hooked on Internet” in very short order, but my parents’ finances were pretty tight.

        Christmas of 1997 we got a Pentium 133 with a 1.6GB (!!!) hard drive and a dial-up Internet subscription. Game on! I went from MSDOS 6.22 to Windows 95 to dual-booting Slackware in short order. Gaming changed from crude text-based and graphical adventures (Loom!) to Diablo and Starcraft and Quake. By late 1999 or early 2000 we upgraded from 33.6kb dialup to 1.5Mb DSL. It was wild how much changed in 5 years.

      2. 18

        I remain utterly baffled by people’s capacity for self-deception when it comes to LLMs.

        You can literally go and chat to one right now and get it to, often within minutes, spit out nonsense. Often well-written but subtle nonsense, but nonsense nonetheless.

        Which makes sense, given the algorithm explicitly tries to generate text which probabilistically sounds like a correct answer.
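
        Roughly what that looks like at the decoding step, as a toy sketch (the vocabulary and scores here are made up; a real model learns them over a huge vocabulary, but decoding is still softmax-plus-sampling over what sounds likely next):

        ```python
        import math
        import random

        # Toy next-token scores for continuing "The capital of Australia is".
        # Plausibility drives the scores, not truth.
        logits = {"Sydney": 3.2, "Canberra": 2.9, "Melbourne": 1.1, "purple": -4.0}

        def sample_next_token(logits, temperature=1.0):
            # Softmax over the scores, then sample: the most plausible-sounding
            # token wins most often, whether or not it is factually correct.
            scaled = {t: v / temperature for t, v in logits.items()}
            z = sum(math.exp(v) for v in scaled.values())
            probs = {t: math.exp(v) / z for t, v in scaled.items()}
            r = random.random()
            cumulative = 0.0
            for token, p in probs.items():
                cumulative += p
                if r < cumulative:
                    return token
            return token

        print(sample_next_token(logits))  # often "Sydney": fluent, confident, wrong
        ```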

        This odd assumption that somehow, via magic, this technology will do all of the things it fails to do now (despite its underlying algorithm being literally incapable of doing so) seems to be what’s driving this.

        The excitement is based on the self-perpetuating idea of what it could be + a snazzy demo. And isn’t that a description of every bit of vapourware out there?

        I don’t mean to be disrespectful, but I do find this LLM craze a good litmus test of AI ‘experts’. Those who back the knowingly false nonsense spoken about LLMs expose themselves as either liars, incompetent, or in the most charitable interpretation, wishful thinkers.

        Unfortunately I think ML as a field has a lot of bullshit and people twiddling knobs with no understanding of what underlies what they do, despite calling themselves ‘data scientists’. It’s been a huge turn off for me from when ‘big data’ proponents first claimed ridiculous things that don’t seem to have panned out. Plus ça change.

        This is not to say that it isn’t useful for anything, it’ll be very useful for spam, fraud, copyright theft (which is a lot of what it actually does anyway), and in the nicer realms perhaps automation of the most rote, pattern-based non-novel activities out there.

        For things which have precise requirements and are trivially novel, it is worse than useless, it is actively harmful.

        Obviously you can come up with 100 examples of ‘this time it feels different’ things that were meant to change the world but people forget so quickly… equally so with this when ChatGPT + friends fail to fulfill the nonsense claimed for them (but do succeed at those things they are good at). People will just act as if they only meant the latter all along…

        1. 3

          Have you compared levels of nonsense between generations of GPT? GPT-2 would fall for all silly questions like “who’s the king of the USA?”, but GPT-3 catches many of them. GPT-3 will happily hallucinate and give vague answers about anything you ask it, but GPT-4 less so. It’s not perfect, but it is going in the right direction, and there is real tangible progress. What makes you believe that it won’t improve further?

          1. 11

            I’m expecting some performance plateaus to show up sooner or later as a result of it getting progressively harder to source training data that didn’t itself come out of an earlier version of ChatGPT.

            1. 2

              A lot of complex tasks reach plateaus where techniques that work moderately well simply don’t improve. For example:

              If you want to predict the weather in a temperate climate, you can get around 60% accuracy by predicting the same as yesterday. You can also get similar accuracy by predicting the same as this day last year. You can then build statistical models on top that try to exclude outliers. No matter how complex these models are, you don’t get to 70% accuracy. To get to 80-90% accuracy, you need to do fluid dynamics modelling of the atmosphere.
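
              A minimal sketch of that persistence baseline, on a made-up rain/no-rain series (the data is invented purely for illustration; the ~60% figure above is this comment’s ballpark, not something the toy data reproduces):

              ```python
              # Hypothetical daily observations (True = rain). Real data would come
              # from a weather archive; this only shows the baselines being compared.
              rain = [True, True, False, False, False, True, False, False, True, True,
                      False, False, True, False, False, False, True, True, False, False]

              def accuracy(predictions, observations):
                  hits = sum(p == o for p, o in zip(predictions, observations))
                  return hits / len(observations)

              # Baseline 1: tomorrow looks like today ("persistence").
              persistence = rain[:-1]
              print("persistence:", accuracy(persistence, rain[1:]))

              # Baseline 2: always predict the most common outcome ("climatology").
              most_common = max(set(rain), key=rain.count)
              climatology = [most_common] * (len(rain) - 1)
              print("climatology:", accuracy(climatology, rain[1:]))

              # Statistical tweaks on top move these numbers only so far; the big
              # jump comes from physically modelling the atmosphere, as noted above.
              ```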

              If you want to translate natural language and you have a dictionary of words, you can do a reasonable job (at least within a language family) translating them independently. You get a big improvement moving to translating bigrams (pairs of words) and trigrams. Above that, you get little improvement.

              Dall-E and Stable Diffusion both also got a lot of hype, but they still get fingers wrong. Worse, at least Stable Diffusion is massively racist. Yesterday, my partner used it to generate an image to accompany a blog post. The prompt asked for a student sitting an exam with a robot looking over their shoulder. The first picture was okay (except for some weirdness around the eyes) but about two thirds of them had an Indian person instead of a robot, no matter how we tweaked the prompts. Now, hopefully, that’s because there are a lot of Indian students photographed at robotics competitions and it doesn’t know which of the features in the image is the student and which the robot, but it could equally be racism in the training data. Either problem can be solved only with better-labelled data, and that’s expensive.

              I can’t think of a single probabilistic process that doesn’t show massive wins early on and then plateau. LLMs will definitely hit that, the only question is how close we are to that point.

              1. 1

                LLMs are simply not capable of inferring results from insufficient data, they’re essentially running statistics on words in a corpus with zero understanding of what is being discussed.

                The idea that a technique that literally CANNOT do what people claim of it will one day evolve into being able to do them SOMEHOW is my whole objection.

                Case in point is Tesla’s FSD. Trivially novel tasks are not suited to such techniques, nor could they ever be. That’s not to say they’re not useful for some things.

                1. 2

                  Tesla’s FSD is a fraud, and it’s not an LLM.

                  The notion of an AI “understanding” anything is irrelevant. It’s a philosophical distraction and a tangential argument about semantics. It’s not falsifiable: an AI could conquer the universe, and you could still say it merely executed a universe-conquering algorithm without truly understanding what it did.

                  So I still think we are on track to make an algorithm that can execute almost any text-based task as well or better than a human. The algorithm won’t have a clue that it exists, beyond parroting how humans refer to it. It won’t understand that it is beating humans at intellectual tasks. Humans will continue to change the definition of AI to make these achievements not count as intelligence.

                  1. 1

                    I never said LLM was a fraud? I said it can’t do what people claim of it because it cannot handle novel input.

                    When I say ‘understanding’ I mean it modelling reality, e.g. understanding physics for a physics question, understanding logic for a logic question, etc. That you think that is ‘philosophical’ is umm… ok.

                    The issue is that when past data is sparse (trivially the case for many realms, e.g. driving, hence why I mention it) and you have literally no realistic model for inference, but rather some unknowable process based on what is perceived to sound like a correct answer, you are going to get eloquent-sounding nonsense.

                    Nobody here, nor anywhere else that I’ve read (and I have read fairly widely), who is promoting what you are promoting here has explained how this can be overcome.

                    I think there’s a reason for that and the fact you quite literally ignored the first paragraph in the parent post encourages my belief in that.

              2. 1

                I think you are far and away too pessimistic about this tech. Instead of approaching LLMs as another tool for translating natural language to computing actions you’re attacking the hype around it. Of course there’s hype, it’s new and cool. That doesn’t matter. There are two important factors about LLMs that matter to me personally:

                • LLMs can be tweaked to get reliable and actionable input and output for other systems.
                • Many of the things that humans want from other humans are not precise answers but rather things that evoke feelings or help them generate ideas and LLMs can do this in ways that were previously impossible. Essentially text that approaches the nonsensical qualities that humans sometimes display.

                LLMs feel about as important as databases to me right now but they’re newer, different and not as explored so I could be wrong.

                To those people who upvoted OP: have you tried, like really tried, to utilize gpt-4 or gpt-3.5-turbo or any local models to process naturally posed questions and requests into reasonable commands and formatted API responses? That’s one of the powers this stuff grants you. You write a few sloppy sentences about what you want, add preprocessing and extra prompting behind the scenes, and then, with systems that read from the generated output, you can use it for good-enough end-user responses.
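
                A minimal sketch of that pattern (the endpoint and response shape are OpenAI’s public chat-completions API circa 2023; the prompt, the “create_ticket” schema, and the fallback handling are made up for illustration):

                ```python
                import json
                import os
                import requests

                def request_to_command(user_text):
                    """Turn a sloppy natural-language request into a structured command,
                    and validate the JSON before any other system trusts it."""
                    resp = requests.post(
                        "https://api.openai.com/v1/chat/completions",
                        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
                        json={
                            "model": "gpt-3.5-turbo",
                            "temperature": 0,
                            "messages": [
                                # Hypothetical schema, purely for illustration.
                                {"role": "system",
                                 "content": 'Reply ONLY with JSON like {"action": "create_ticket", '
                                            '"title": "...", "priority": "low|normal|high"}.'},
                                {"role": "user", "content": user_text},
                            ],
                        },
                        timeout=30,
                    )
                    content = resp.json()["choices"][0]["message"]["content"]
                    try:
                        command = json.loads(content)
                    except ValueError:
                        return None  # bad output: regenerate, or hand off to a human
                    if not isinstance(command, dict) or command.get("action") != "create_ticket":
                        return None
                    return command

                print(request_to_command("hey, the login page is broken again, pretty urgent"))
                ```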

                And if they don’t like the first gen, generate it again.

                It’s just another tool that you should vet for your work. It may require more effort than you want to put in in order to make it useful for your domain but other people have different reqs.

                1. 2

                  LLMs feel about as important as databases to me right now but they’re newer, different and not as explored so I could be wrong.

                  I think that’s a great analogy. In particular:

                  • Databases are essential to some classes of applications.
                  • Some of those application classes are incredibly important in the real world.
                  • Lots of things need only a very simple database to get the benefit.
                  • Techniques from databases are useful in a lot of other places.
                  • Most things don’t need a database.
                  1. 1

                    ‘To those people who upvoted OP’ or, like, OP himself perhaps? Slightly rude there. Yes I have, thanks. But do go on assuming I haven’t used this technology, as I am sure that is far more convenient.

                    ‘Instead of approaching LLMs as another tool for translating natural language to computing actions you’re attacking the hype around it.’

                    OK, replace ‘LLM’ with ELIZA. Do you see the issue?

                    The problem is whether the technique is capable of doing what is claimed of it. No amount of ‘it’s getting better!’ or digs at me can get around the fact that LLMs are simply not capable of solving trivially novel problems (again, I see all these people criticising me have totally ignored that, very telling).

                    You can’t use an LLM to correctly infer a model from sparse data and then make determinations from it; it’s literally impossible.

                    I find the database analogy utterly bizarre. Databases are precise and follow set rules which you can assess and find out in detail exactly what they do.

                    LLMs infer things from data sets and by their nature have not one inkling about what they speak.

                    And again as I’ve said, I’m sure they will be useful for some things. They just won’t be replacing programmers or radiologists or magically changing the nature of knowledge work. My objection is firmly directed at the CLAIMS made for it.

                    1. 1

                      I find the database analogy utterly bizarre. Databases are precise and follow set rules which you can assess and find out in detail exactly what they do.

                      I use databases for storing and querying data; I’ve seen ChatGPT used for spitballing ideas and napkin calcs for cargo-laden airships, with accurate formulas and usage of those formulas.

                      From your previous post:

                      This is not to say that it isn’t useful for anything, it’ll be very useful for spam, fraud, copyright theft (which is a lot of what it actually does anyway), and in the nicer realms perhaps automation of the most rote, pattern-based non-novel activities out there.

                      It’s not just spam. It’s idea and lead generation at the very least. https:// rentry airships_gpt_full

                      But do go on assuming I haven’t used this technology, as I am sure that is far more convenient.

                      It just feels like you used it for a bit then concluded that it was and forever will be a toy. I might be wrong in that assumption. Sorry for making it if it’s wrong.

                      1. 2

                        I use databases for storing and querying data; I’ve seen ChatGPT used for spitballing ideas and napkin calcs for cargo laden airships with accurate formulas and usage of those formulas.

                        OK so databases are used for spitballing ideas? I mean… no? My point is comparing them to databases (rigid, specific, understandable method for obtaining data, you can even run PLAN commands) is bizarre as LLMs are the precise opposite.

                        It’s not just spam. It’s idea and lead generation at the very least.

                        Yes perhaps I was a bit mean there, sorry. I definitely do think there are uses for it, people keep missing that I say that though, perhaps because I was a wee bit too spicy in that OP. I have used it for idea generation myself!

                        It just feels like you used for a bit then concluded that it was and forever will be a toy. I might be wrong in that assumption. Sorry for making it if it’s wrong.

                        Nope, as I’ve said over and over again, my objection is based on the nature of LLMs - they’ve been around for a while and I seriously doubt many were claiming they can do what people now claim they can do.

                        The fundamental issue is that they cannot deal with novel input. They essentially perform a clever pattern match against their giant corpus, algorithmically determining what sounds like a correct answer to a query.

                        Where data is sparse in that corpus, it has no model of reality to refer to when determining what is a sensible answer. It sticks with ‘what sounds like the right answer’ and thus defaults to eloquent nonsense. This is not something that can be fixed iteratively; it’s a limitation of the technique.

                        There are some fields (driving is a good example) where there is infinite, trivial novelty (complicated junction, ok now it’s snowing, ok now there’s glare, ok now it’s icy, ok now there’s fog, ok now there are 3 vehicles doing complicated manoeuvres with 1 obscured, ok etc. etc.).

                        My issue is not with LLMs, it’s with people claiming they can do things they very obviously cannot, or that ‘trust me bro’ it’ll iterate to these magical things in the future.

                        This perception is pushed by people who stand to make literal $bns from this, perhaps $100s of bns or more, plus endless ML folks who (I am a little cynical as to how well they understand the fundamentals of what they do, shall we say) are equally benefiting from the gravy train. Combine that with a number of people who kid themselves, those who are honestly confused or don’t understand the technique, and the fanboys, and we see why this hype cycle is what it is.

                        I can’t stand lies, I can’t stand liars, it’s that simple for me.

                  2. 1

                    This is not to say that it isn’t useful for anything, it’ll be very useful for spam, fraud, copyright theft (which is a lot of what it actually does anyway), and in the nicer realms perhaps automation of the most rote, pattern-based non-novel activities out there.

                    This is overly dismissive. I use ChatGPT and Copilot a lot during the day, because they make me more productive, and I can assure you I am not a spammer or a fraudster.

                    Claiming that LLMs are useless because they can produce nonsense is like saying that autocomplete is useless because sometimes the option you want isn’t listed in the pulldown. Clearly the world has a different opinion on that.

                    As for progress, I am not an expert, but so far each generation of GPT has shown remarkable steps forward. If it suddenly stops with GPT4 I am ok with that, because I can already put it to good use.

                    1. 1

                      You intentionally ignored the second part of the sentence there. But again, people prefer to ignore that because it’s much more exciting to imagine that LLMs can do things LLMs can’t do.

                      I never claimed ‘LLMs are useless because they can produce nonsense’. I think it’s quite telling that critics have to misrepresent these things. And ‘the world’ had a different opinion on crypto from me. It also had a different opinion on evolution by natural selection. I’ll leave you to fill in the gaps as to why that’s a bad analogy.

                      If you’re happy using copilot to essentially plagiarise other people’s code without license where, again, due to the literal nature of how LLMs work, subtle errors that you might miss creep in then fine. Personally I would consider this to be ‘worse than useless’.

                  3. 17

                    The haiku at the end has the wrong syllable count. LLMs cannot count syllables or letters in words reliably. I have been informed that this is an artifact of the tokenization process.
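
                    For the curious, it’s easy to see why: the model works on subword tokens, not letters. A quick sketch with the tiktoken package (cl100k_base is the encoding used by the GPT-3.5/GPT-4 family; the example words are arbitrary):

                    ```python
                    # pip install tiktoken
                    import tiktoken

                    enc = tiktoken.get_encoding("cl100k_base")

                    for word in ["syllable", "refrigerator", "haiku"]:
                        token_ids = enc.encode(word)
                        pieces = [enc.decode([t]) for t in token_ids]
                        print(word, "->", pieces)

                    # The model sees whatever chunks print here, not letters or syllables,
                    # so counting either is inference, not lookup.
                    ```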

                    I think LLMs are neat, but a lot of “this time is different” just assumes that the current progress will continue. Maybe? I don’t have a prediction one way or the other. But I will say that at the current level of technology, if you lose your job, it was a bullshit job.

                    1. 4

                      It is indeed an open question whether the recent LLM progress was just a fluke that is already approaching its ceiling, or just a baby step into a whole new field. For now, it seems like it’s the latter. GPT 2 -> 3 -> 4 were noticeable improvements, and there’s no sign of them stopping. We still have the possibility of both developing more powerful hardware and throwing more data at them, as well as continued advances in reducing model sizes and improving training. We’re also at an inflection point where LLMs are useful for preparing and cleaning training data for themselves.

                      if you lose your job, it was a bullshit job.

                      That’s meaningless. There was a time when being a messenger was an important job. There was a time when being a watchmaker was a proper craft.

                      Is programmer a bullshit job? Lawyer? Pharmacist? Psychotherapist? Is Prompt Engineer a serious job?

                      1. 5

                        I mean “Bullshit Job” in the sense of the kind of job in David Graeber’s book of the same name and I mean if you lose your job to an LLM with the capabilities it has in May 2023. There are lots of jobs today that are made more efficient by an LLM, but if it can be made so efficient that it can be eliminated entirely, I don’t think the job needed to be done in the first place. I dunno, maybe some companies can go from having three people in marketing down to two, but I can’t think of anyone at my small company who we could eliminate today with an LLM. Everybody does a lot of different small things, and getting faster in one area just means we can do something else with the left over time.

                        One of my grandfathers was a traveling salesman, and part of his job was adding up the numbers for his sales. The adding part was made obsolete by Lotus 1-2-3, and the travel part is being obviated by Zoom. I’m not totally sure, but I think his old job doesn’t exist anymore because the company went broke due to globalization. But now I have a job that didn’t exist then. I don’t think it makes sense to worry about technological unemployment. People have enough to do to keep us all busy!

                        1. 9

                          As I recall, Graeber deliberately defines a bullshit job as one that the person doing it thinks is bullshit. If you start defining other people’s jobs for them then that’s not quite what he was saying, besides being ruder and perhaps patronising.

                          1. 3

                            Ironically, I think many of Graeber’s bullshit jobs are fairly safe from AI: a lot of them are the kind where the employer wants warm bodies.

                            1. 1

                              I remember I was visiting San Francisco when I read Bullshit Jobs and the neighborhood I was visiting had little advertising banners for the neighborhood itself up on the lamp posts with a hashtag on them. (I can’t find a picture of it on my phone, but it was something dumb like “#feelmore in Filmore!”) I remember thinking that it was clearly a bullshit job to have to think up the hashtag for the campaign, because there was no way any human being would ever bother to tweet the hashtag or search for it. I bet you could get an LLM to spit out hashtags for a neighborhood very quickly.

                            2. 1

                              I think you’re stretching David Graeber’s Bullshit Jobs definition. We have obviously useful, value-producing creative and intellectual jobs threatened by AI. Not just data-entry assistants to middle managers for regional branch compliance advisory departments, but doctors, graphic artists, and programmers who until recently were thought irreplaceable. I do expect that for now many will keep jobs to supervise AI and have things to do thanks to Jevons’ paradox, but if we get closer to AGI, it will turn things upside down.

                            3. 2

                              We still have possibility of … throwing more data at them

                              Do we? What data have they not used that they could? I find it hard to believe that a company with such enormous financial resources would not already have, to all intents and purposes, exhausted all possible training data.

                              We’re also at inflection point where LLMs are useful for preparing and cleaning training data for themselves.

                              Are we? How do we know that? Has it happened? Can you give a reference?

                            4. 1

                              I think there are some realistic jobs that could feel threatened. GPT-3 may not be great at spitting out correct Python code (in my experience), but it has been exceptional when I ask it for very formulaic stuff. I’d expect copywriters, maybe people dreaming up marketing campaigns, legal interns typing up form letters, and anyone whose work plays into its strength of “make up something that sounds reasonable” should be concerned.

                              That said, I believe this will then add jobs for people checking the output, doing editing, and the like. Similarly, for the image-generating AIs, I could see smart artists adding an “Artisanal computer graphics made by a human” sticker onto their work and charging more as the flood of AI-generated fluff starts floating around.

                              LLMs will likely impact jobs that today are needed, but that doesn’t mean the job isn’t valuable to someone today.

                              I say this entrenched in the “ChatGPT couldn’t spit out a single working program for me” camp, seeing AI as an assistant, not a replacement, in any field that requires correctness (such as compiling code).

                            5. 5

                              I cannot recall the last time a single tool gained such widespread acceptance so swiftly, for so many use cases, across entire demographics.

                              I can! The cellular telephone, once it had gotten down to a reasonable cost and form factor. It seemed that almost overnight everyone in my small southern hometown went from landline only to carrying a cell phone everywhere.

                              1. 4

                                There is a very funny scene in the movie Lady Bird where her dirtbag boyfriend goes from “cellphones cause cancer and let the government track you” to owning one overnight. Reader, I was that dirtbag in 2004.

                                1. 1

                                  I think that process was pretty slow. My father had a mobile phone for work from the early ‘90s (initially an analogue one and then GSM). I got one in 1998, but I didn’t use it much. A couple of years later, I was paying less in total on a pre-pay mobile than some of my friends were paying for line rental (plus call costs) on their landlines, but even in 2007 (when the first iPhone launched) I knew several people who didn’t own a mobile, and many of the ones that did often didn’t take it with them (they were expensive, with a thriving stolen-phone market, so you didn’t want to take them anywhere they might be stolen).

                                  GSM was standardised in 1991. I think it took less than two decades from that point until they were ubiquitous, but certainly more than one. There were a few points where adoption jumped (pre-pay plans, cheap data plans).

                                  That said, LLM adoption is still quite low outside of the tech bubble. When I talk to non-geek people, they often haven’t even heard of [Chat]GPT-n.

                                2. 5

                                  Humanity’s voice fades. — GPT-4

                                  As a natural-born human this is exactly what I’m “afraid” of: being buried in so much regurgitated noise even genuine humans have a hard time saying anything meaningful or being heard when they do.

                                  1. 10

                                    I find it very revealing that the author did not notice that this is six syllables long. It’s like a thumbnail encapsulation of why LLMs are dangerous. Not because they will take over the world, but because we’ll believe them when they hallucinate that it’s safe to push the big red button to launch the missiles. :-)

                                    1. 2

                                      Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. - Brian W. Kernighan

                                      If people start using ChatGPT to create code that they couldn’t have made themselves they’ll never be able to debug it. In fact, will they need a next gen system to debug it? Does GPT 3 need GPT 4 to debug its code? Or perhaps this is really only an issue at boundaries? (I.e., code at the edge of GPT-3’s capabilities)

                                      1. 1

                                        It’s actually pretty easy to use GPT to generate code I couldn’t have made myself, and still debug it. It’s good at pulling from libraries I haven’t heard of before, and then I can look up the library methods.

                                  2. 4

                                    I appreciate the two “I can” replies this got so far, and I can even offer my own, although on a more modest scale: I remember when containers were a curiosity, then Docker happened, and now they’re a backbone of software development almost everywhere.

                                    However, it’s worth noticing that analogies with the past have one major weakness, which is that the world changes. Both Excel and the Web happened in a world that was considerably different from today’s.

                                    The two main differences that I think can make LLM have wildly unexpected impacts are 1) the web itself, and 2) social networks.

                                    These two things made everything so much more interconnected and volatile. Today’s world is orders of magnitude more susceptible to butterfly effects, and those effects can be much bigger, too.

                                    That said, I’m still very much on the LLM-skeptical side: I think they are very much just dumb machines. The impactful consequences will come from how carelessly we use them.