1. 81
  1.  

  2. 30

    This is an extreme example of why most B2B software has poor UX and poor performance. The hospital or organization buying the EMT software had a business problem and wrote up its requirements; a vendor fulfilled those requirements, cleared a bunch of regulatory hurdles on top, and is now getting paid, irrespective of how pleasant or unpleasant the software is to use. Or irrespective of whether 50% of the people who should use it actually do.

    The only time the maker of this software will be incentivized to fix the performance issue (which really is a UX issue) is the next time the renewal of this juicy contract comes around the corner. And only if the customer (the EMT) demands that it be addressed.

    If you’ve ever used B2B software, you’ve experienced the same. Take Concur, the travel and expense app, whose mobile and web apps are terrible: booking a business trip or expensing it takes minutes longer than with any consumer competitor. It’s frustrating to use, crashes randomly, and stops working when it loses network connectivity. Yet my company switched from a more usable alternative (Expensify) to this worse app, because Concur knows very well how to sell to the paying customer: the CFO. They have jumped through all the invisible hoops on reporting, integrating with legacy payment systems, and so on, which matter more to a CFO than a pleasant UX. And in this market, it’s the CFO you need to sell to, not the end user. I know this: I worked at a startup where we built a business travel booking app that was more performant and far easier to use than anything on the market. Employees loved it. We just failed to sell it to the CFO.

    The only place where you consistently see performant apps is where performance impacts the bottom line. A good example is Uber, where I work. Because we have millions of daily paying customers, we ran tests which showed that every extra second the app took to load resulted in X% fewer trip bookings, translating into $Y million lost per day plus $Z million per day in customer churn. With this data, we first started to instrument performance, tracking how slow or fast parts of the app are and looking at p95, p99 and p999 latencies. Then we started systematically fixing the issues and going deeper. We now have a dedicated team of engineers working on app and network performance, who spend their time building tooling and optimizations around all of this. We could do this because there was a clear enough connection between performance and revenue that hiring people was a no-brainer: their cost was a fraction of the revenue they generated.
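
    Tail-latency tracking like this is simple to sketch, even though the production tooling is far more involved. Here is a minimal, self-contained Python illustration; the simulated latencies and the nearest-rank percentile helper are invented for the example and are not Uber’s internal tooling:

    ```python
    # Minimal sketch: collect latency samples and report tail percentiles.
    # Real instrumentation would stream samples from the app into a metrics
    # pipeline; here we just simulate app-launch latencies in memory.
    import random

    def percentile(samples, p):
        """Nearest-rank percentile of a list of latency samples (ms)."""
        ordered = sorted(samples)
        index = max(0, int(round(p / 100 * len(ordered))) - 1)
        return ordered[index]

    # Simulated app-launch latencies in milliseconds.
    latencies_ms = [random.lognormvariate(6, 0.5) for _ in range(10_000)]

    for p in (50, 95, 99, 99.9):
        print(f"p{p}: {percentile(latencies_ms, p):.0f} ms")
    ```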

    If there are lives at risk, that should somehow feed back into a metric that impacts the vendor’s revenue. Unfortunately, it’s really the EMT/hospital that needs to figure out what this metric is and use it as a carrot/stick to get software with the kind of performance they need, similar to how some car makers already define hardware performance metrics and use them.

    1. 14

      I’m not sure what it is about the field (Regulatory capture? Excessive conservatism? Pathologically outdated hardware?), but I’ve literally been praised by doctors for providing SaaS stuff that is “super-fast” when IMO it’s merely acceptable. We use MySQL Workbench to ensure that we never perform full-table scans, and we provision hardware (on-site / off-site, at the customer’s option) that doesn’t suck. And we avoid doing anything that will crush the browser under too many DOM nodes or force server round-trips just to open a menu (which are two likely explanations for the problem with the ePCRs; another is that they might be using an eInk display).
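
      For what it’s worth, that kind of check can also be scripted rather than done by eye in Workbench. A minimal sketch, assuming mysql-connector-python and a made-up patients table (in MySQL’s EXPLAIN output, an access type of ALL means a full table scan); the connection details and query are hypothetical:

      ```python
      # Minimal sketch: flag full table scans in a query plan.
      # The connection details, table and query are hypothetical examples.
      import mysql.connector

      conn = mysql.connector.connect(user="app", password="secret", database="clinic")
      cur = conn.cursor(dictionary=True)
      cur.execute("EXPLAIN SELECT * FROM patients WHERE last_name = %s", ("Smith",))
      for step in cur.fetchall():
          # Access type 'ALL' means MySQL will read every row of the table.
          if step["type"] == "ALL":
              print(f"Full table scan on {step['table']}; consider adding an index")
      cur.close()
      conn.close()
      ```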

      1. 13

        Performance seems to be an issue with most appliances of this kind. ATMs and ticket machines are horribly slow, and while nobody’s life is at risk there (probably), I almost missed a train the other day because it took a minute longer than I expected to buy a ticket.

        1. 13

          Don’t you sometimes wish you could swoop in and fix these issues?

          Whenever I see terrible software, I feel an urge to fix it. I would love to be able to knock on the company’s door, be instantly given access to their code base, fix the issue, and then disappear, just like I do with open source projects.

          1. 5

            Most open source projects have the luxury of not being beholden to the regulatory concerns of the medical field.

          2. 10

            Damn. You can bet that them not using it probably killed people. It reminds me of the prescription problem, which is more about tradition than performance. Many doctors like writing prescriptions by hand, in cursive. Some percentage of those are misread, with harm to the patient. I ran into one doctor who had an app on his iPad that auto-filled prescriptions, let him review them, and filed them electronically. He bragged that it both saved time and prevented mistakes. The others I have to look at with a contradictory feeling: they’re working to help their patients while, at the same time, not caring that they harm them, for no justifiable reason.

            The automated versions should be mandatory [when they work right]. Seems like a good regulation given the circumstances.

            1. 6

              I suspect a major reason they didn’t switch to the ePCR was habit. Changing habits is hard, and often stops us from making optimal decisions, even for ourselves. So while the electronic version may be better (even for the EMTs who don’t want to use it) it might just not be better enough to get them to switch.

              That being said, I suspect it was still horrendously slow for what it was doing.

              1. 6

                I have also thought about this from another perspective. If you have an application you make for yourself, it can be worth it to improve the performance. If you do this, and it costs you 10 minutes, but it saves you 10 seconds every time you use the program, you need to use it 60 times before it pays off.

                On the other hand, if you have a site or program that is used intensively by a lot of people, improving performance becomes much more attractive. There are about 500 million tweets sent per day. Twitter is horribly slow to load on my phone, even over a perfect connection (a tweet takes about 10 seconds to load, while, for example, Wikipedia pages load in less than 1). Every second that Twitter spends loading costs humanity at least 500,000,000 / 60 / 60 / 24 / 365 ≈ 15.9 years, per day. The actual number is probably way, way higher, since most tweets are read multiple times.

                Now, I don’t claim that this is anywhere near as important as saving lives, but it’s still a baffling figure. If you work on an application that even some users use repeatedly, it’s often worth improving its performance (though of course it’s not worth optimizing if the delay isn’t noticeable, or if other steps in the process take far longer). If you think about things this way, it’s ridiculous how much time is wasted by unoptimized systems.
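
                Both figures are easy to sanity-check with a throwaway Python snippet, using the numbers above:

                ```python
                # Back-of-the-envelope check of the figures above.
                optimization_cost_s = 10 * 60   # 10 minutes spent optimizing
                saving_per_use_s = 10           # 10 seconds saved per use
                print(optimization_cost_s / saving_per_use_s)   # 60.0 uses to break even

                tweets_per_day = 500_000_000
                seconds_per_year = 60 * 60 * 24 * 365
                # Human time lost per day, in years, for each second of load time.
                print(round(tweets_per_day / seconds_per_year, 1))   # ~15.9
                ```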

                1. 1

                  I have also thought about this from another perspective. If you have an application you make for yourself, it can be worth it to improve the performance. If you do this, and it costs you 10 minutes, but it saves you 10 seconds every time you use the program, you need to use it 60 times before it pays off.

                  In this situation performance is also important, but what is usually more important is that the task is automated and that you can free up a human for other things.

                  I usually stick to the rule of thumb that I do things manually the first three times, but if something pops up a fourth time, I start automating it. And if I get that far, I usually also start optimizing, because I will usually reach the aforementioned 60 uses without question.

                  1. 3

                    In this situation performance is also important, but what is usually more important is that the task is automated and that you can free up a human for other things.

                    Two other bonuses:

                    • We can do a bunch of sanity checks, etc. in the automated solution, which would seem like too much hassle when doing it manually. Counterpoint: automated scripts might do stupid things that a person wouldn’t, e.g. rm -rf $DIR/* when $DIR hasn’t been set (a guard against exactly this is sketched after the list).
                    • Once something’s automated, we can work at a higher level of abstraction, e.g. a script can ensure that generated files follow some particular naming pattern and directory structure; that makes it easier to make tools for browsing or querying those files. Counterpoint: easy to end up with a house of cards, which does bad things if some underlying job breaks.
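
                    A minimal Python sketch of that kind of pre-flight sanity check; the TARGET_DIR variable and the cleanup task are invented for illustration:

                    ```python
                    # Minimal sketch: defensive checks before a destructive automated step.
                    import os
                    import shutil
                    import sys

                    target = os.environ.get("TARGET_DIR")

                    # Refuse to proceed on an unset or suspicious path instead of silently
                    # expanding to something like "/" (the rm -rf $DIR/* trap).
                    if not target or os.path.abspath(target) == "/":
                        sys.exit("TARGET_DIR is unset or unsafe; refusing to delete anything")
                    if not os.path.isdir(target):
                        sys.exit(f"{target} is not a directory; refusing to delete anything")

                    shutil.rmtree(target)
                    print(f"Removed {target}")
                    ```
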
                    1. 2

                      Counterpoint: easy to end up with a house of cards, which does bad things if some underlying job breaks.

                      Another rule of thumb: only use shell scripting at the top level of your processing (for example, to start things up), or use shell scripting all the way down to everything but the lowest level, like Slackware does. My present self regularly thanks my past self for sticking to this.

                2. 5

                  I’m curious how much hardware contributes to the problem, rather than software. I had a run-in with a similar device as a patient in England, and the refresh time on the tablet was horrendous; but the e-ink-ish display used (think Kindle, but not as high contrast) seemed to require a complete blanking phase before redraw, which the programmer of the actual software had no way to overcome. It was also terribly unresponsive to clicks with the integrated stylus (the EMT switched to using a pencil eraser, which worked much better), and the EMT cursed it repeatedly. Yes, it did seem to be running a rather overweight OS that looked like it might have been Windows Mobile (I was too concerned about the piece of metal in my leg to look too carefully), and the interface itself seemed poorly thought out, but at a glance I can’t imagine that any amount of profiling of the underlying business code was going to make a difference to the outcome.

                  1. 3

                    Computer games easily reach 60 FPS, yet people still manage to create forms that exhibit noticeable lag.

                    1. 3

                      It’s one thing to say, “Make sure your medical software performs well.” It’s another thing to say, “Your slow CSV parser could kill someone!”

                      Yup. It certainly could. Everyone makes hundreds of little decisions every day that could result in horrible consequences. It’s impossible to know ahead of time, but occasionally something bad does happen as a result of a decision I made.

                      Also, if my CSV parser is the reason your medical software doesn’t perform well, it’s your responsibility to (a) ask me to fix it, or (b) use a different CSV parser.

                      Using fear to get people to optimize for performance will probably have worse consequences than the slow performance itself. Fear makes people hyper-focused on something, and we don’t want to make people hyper-focused on performance, because hyper-focus causes other important areas to be ignored, like maintainability or security. We merely want developers to remember that performance is important and shouldn’t be disregarded.

                      1. 2

                        I don’t know. I want to think not, because I can’t wrap my mind around that kind of scale. I can’t think about that many inputs to the system, that many steps in the chain, that many people. There are too many other software problems to worry about.

                      2. 3

                        I have a friend who works in the IT helpdesk at a hospital, and has for a long time. Their IT is a horrible mess, for systemic and organizational reasons that then flow throughout all their infrastructure. He’s nearly the only person there who seems to actually care and try to make things better, as far as he can tell at least, and everyone above him who should have the power to actually fix things a) doesn’t actually have that power, and so b) is only still working at that place if they want a cushy job where they don’t have to do much.

                        I’ve heard similar things about other hospitals. Or seen it in government projects which were similarly institutionalized. This manifests as a software problem, but it’s a symptom of a human problem.

                        1. 1

                          The mantra “premature optimization is the root of all evil” gets taken too far. As a rule of thumb that you should measure before optimizing, it makes sense. But some people interpret it as “don’t worry about optimizing until the features are done.” Most programs are line-of-business programs without significant gains to be made from big-O improvements or parallelization. By the time you realize your program is sluggish, it’s too late to turn back. That’s why performance must be taken into account from the beginning: the programming language and the memory/concurrency/eventing models must be chosen correctly to avoid death by a thousand cuts. Once you’ve got a heavyweight app in the browser using a big single-page-app framework and a few extra libraries, you’re way too far gone to do much about optimizing the little things that add up.