1. 5
  1. 8

    One reason, I’d figure, is that the standards for working software are so low. The clear example is UNIX: responsible for however many billions or trillions in losses, for at least one death during the most recent New York blackout, where it was used to control a power station, and for being worse than what came before it, such as Multics. The WWW, with its JavaScript and security flaws everywhere, is a newer iteration of the same bad thing.

    As the computing industry grapples with its role in society, many people, both in the field and outside it, are talking about a crisis of ethics.

    This seems to me to conflate computing in general with malicious companies such as Facebook, Twitter, and Google. It is similar to blaming automobiles themselves, of every type, for the actions of the companies that happen to produce them: dismantling public transportation, pushing for laws against jaywalking, and technically facilitating people running each other over.

    There is a massive rush to hire chief ethics officers, retool codes of professional ethics and teach ethics to students. But as a scholar of computing – and a teacher of a course on computing, ethics and society at Rice University – I am skeptical of the assumptions that what ails technology is a lack of ethics, and that the best fix is to teach technologists about ethics.

    This brings to mind a large difference of opinion. People can generally agree that, say, a doctor should take care of someone and “do no harm”, even though some don’t follow even that. What of those, like me, who feel proprietary software in any shape or form is unethical? I’m inclined to believe none of these ethics courses have much to say about that.

    Instead, in my view, the solution is government action, which aims at balancing regulation, ethics and markets. This isn’t a radical new idea: It’s how society treats cars and driving.

    How ominous. I’m reminded of what I’ve read of Ted Kaczynski’s thoughts on how cars were at first optional and are now practically necessary and regulated in ways that shape society. I’ve read of people being arrested or fined for modifying their cars to add secret compartments; OnStar permits a corporation to shut down vehicles remotely, and I’ve even seen commercials that glorify this. I don’t accept that future, or anything like it, for computing, and we’re all aware of the similar restrictions that have been pushed for years.

    The reason for that improving safety record is not that people learning to drive studied the ethics of responsible and safe driving. Rather, they were taught, and tested on, the rules of the road, in order to obtain a driving license. Other regulations improved how roads were built, required car makers to adopt new safety features, mandated accident insurance, and outlawed drunk driving and other unsafe behaviors. I believe a similar approach – regulation, in addition to ethics education for technologists, as well as market competition – is needed today to make modern technology safe for society as a whole.

    So, what, computing should require a license? Computers are mostly a threat when used improperly in critical infrastructure, such as the aforementioned New York blackout. I could agree with real safety features, such as mandatory bounds and type checking, being put into machines, but that’s not what the author means here, I’d think; any “safety” feature would likely be aimed at preventing one from running software how they want, such as mandatory code signing where only the government and a few corporations have the right keys. We have plenty of DRM examples of how this would work, and I figure many of you, unfortunately, own a cell phone that doesn’t let you run whatever you want.
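    To make the bounds-checking point concrete, here is a minimal sketch in Python with invented buffer sizes and indices. Python enforces this check natively; the explicit version below just shows what the check is, and what unchecked C code silently skips.

```python
# Sketch of a bounds-checked write. Buffer size and indices are
# invented for illustration. A checked language refuses the bad
# access; unchecked C code would corrupt adjacent memory instead.
buf = [0] * 8

def checked_write(buffer, index, value):
    # The bounds check: reject any index outside the buffer.
    if not 0 <= index < len(buffer):
        raise IndexError(f"index {index} out of range for length {len(buffer)}")
    buffer[index] = value

checked_write(buf, 3, 42)       # in range: succeeds
try:
    checked_write(buf, 12, 99)  # out of range: refused, not corrupted
except IndexError as err:
    print("blocked:", err)
```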

    In the 1980s, internet pioneers adopted a philosophy that “information wants to be free” – so website owners didn’t charge readers for access to the content. Instead, internet companies used advertising to support their efforts. That led them to collect personal data on their users and offer micro-targeted advertising to make money, which social scientist Shoshana Zuboff calls “surveillance capitalism.”

    I blame the WWW here. Project Xanadu has an outline for a payment system, but the WWW of course lacks this (I’m aware of HTTP response 402, Payment Required) and finds it practically impossible to add at this point, because the WWW is very poorly designed. A positive I see from regulation would be that malicious idiots could no longer push their poorly written garbage and watch it become popular, but I don’t figure that would do anything about the garbage already thoroughly entrenched.
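    Status 402 really is reserved in HTTP with no standard payment semantics behind it, which is rather the point. A small sketch with Python’s standard library (address and body text are invented) shows that all a server can do is emit the code and hope the client knows what to make of it:

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PaywallHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 402 is reserved "for future use" in HTTP; there is no
        # standard payment flow attached to it, so this is all a
        # server can actually say.
        self.send_response(402)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Payment Required\n")

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), PaywallHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

try:
    urllib.request.urlopen("http://127.0.0.1:%d/" % server.server_address[1])
except urllib.error.HTTPError as err:
    print(err.code, err.reason)  # 402 Payment Required
server.shutdown()
```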

    The real problem with surveillance capitalism is not that it is unethical – which I believe it is – but that it is completely legal in most countries. It is unreasonable to expect for-profit corporations to avoid profitable businesses that are legal. In my view, it is not enough to simply criticize internet companies’ ethics. If society finds the surveillance business model offensive, then the remedy is not an ethical outrage, but making laws and regulations that govern it, or even prevent it altogether.

    I agree more than disagree, here. Perhaps people should believe that companies shouldn’t do something merely because it’s legal, but that’s perhaps a tangent for this topic. Still, I’d be wary that any regulations, considering all of the bribery that happens, would be written in a way that punishes most everyone except the worst offenders, such as Google and Facebook. One also must consider that both of these corporations and more are part of the government’s spying program, so there’s a dearth of motivation to handicap them.

    For decades, the information-technology industry has successfully lobbied against attempts to legislate or regulate it, arguing that “regulation stifles innovation.” Of course, that assumes all innovation is good. It has become evidently clear that this is not always the case: Some of the internet giants’ innovation has harmed democratic society in the U.S. and around the world.

    It’s really rather telling that this article focuses more on basic talking points about Facebook than on issues such as DRM and proprietary file formats. This isn’t an article aimed at someone who knows much about computing.

    In fact, one purpose of regulation is to chill certain kinds of innovation – specifically, those that the public finds wrong, distasteful or unhelpful to the advancement of society. Regulation can also encourage innovation in ways society deems beneficial. There is no question that regulations on the automobile industry encouraged innovation in safety and fuel efficiency.

    Again, regulation also brought laws against walking in certain ways, and the companies that produce these machines helped dismantle or cripple public transportation and design cities in a way that necessitates using their products. One reason it’s sensible to regulate automobiles at all is that they can relatively easily kill people, unlike a computer.

    Some members of Congress have proposed a number of ambitious plans to tackle information warfare, consumer protection, competition in digital technology and the role of artificial intelligence in society. But much simpler – and more widely supported – rules could make a huge difference for individual customers and society as a whole.

    I don’t really support the government deciding what is and isn’t truth. Neither of the articles concerning “consumer protection” or “competition in digital technology” mentions using the DMCA to prevent competitors from, say, producing compatible ink cartridges. Why should this be framed in terms of “consumers” and, following, “feeders”? For that matter, why do the automobile comparisons not mention those who build their own vehicles? Take note of what this tone entails.

    For instance, federal regulators could require software terms and licenses include plain language that’s easily understood by anyone – perhaps modeled on the longstanding “plain English rule” for corporate financial filings to the U.S. Securities and Exchange Commission. Laws or rules could also require companies to disclose data breaches quickly, both to officials and the public at large. That might even spark innovation as firms increase their efforts to prevent and detect network intrusions and data theft. Another relatively easy opportunity would be to regulate automated judicial decision systems, including requiring that they not be deployed before passing an independent audit showing that they are fair and unbiased.

    This doesn’t seem bad, save for the passage about the judicial system. The hysteria seems to be about real or imagined bias against some groups by opaque algorithms, rather than about opaque algorithms being used at all.
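    At its simplest, the “independent audit” the article imagines would compute something like a demographic-parity gap. A hedged sketch in Python, where the groups, decisions, and numbers are all invented:

```python
# Hypothetical audit check: do the two groups get approved at
# similar rates? All data here is made up for illustration.
decisions = [
    # (group, approved)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group):
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic parity gap: 0 means equal approval rates.
gap = abs(approval_rate("A") - approval_rate("B"))
print(f"parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

Note this checks only outcomes; it says nothing about why the algorithm decided as it did, which is the opacity complaint above.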

    The bottom line is that technology advances have been moving very fast, while public policy has lagged behind. It is time for public policy to catch up with technology. If technology is driving the future, society should do the steering.

    All computing legislation I’m familiar with is misguided, stupid, or benefits only corporations. I’m not optimistic about more legislation, and this article makes not a single mention of repealing some of these bad decisions, such as the DMCA, allowing software to be patented, or the “Computer Fraud and Abuse Act of 1984”.

    In brief, I’m not against the idea of regulation, having taken a good look at what life in China is like, but I don’t trust the cretins who would be the ones to write it.

    1. 5

      We regulate fields that then have software in them, like aerospace and the automobile industry. I don’t know that requiring audits of random code you put on GitHub to make sure you conform to the regulations is the answer.

      Is the GDPR the right way to go? How about the EU’s other efforts to control internet content? The regulation is hard to write, compliance is hard to confirm, and it’s all for an issue that isn’t directly killing people, so the effects are uncertain and hard to verify. That’s not a recipe for good public policy.

      1. 3

        Far as system security goes, it worked under the TCSEC, with most of the B3/A1 systems still stronger than today’s. Far as safety goes, DO-178C is forcing companies to do a combination of strong review, testing everything, automated analyses/tests, and even source-to-object-code correspondence to catch compiler errors. They’re delivering high-quality stuff, with reusable components showing up to keep costs down.

        The problem with almost every article about software regulation is that it’s written as if regulation has never happened. If they started with results, we could go on to discussing how to fix the problems of previous attempts. Galois is also pushing a concept called “goal-based assurance” that’s more focused on achieving specific results than on forcing companies to achieve them in a specific way. That might allow more innovation with less of the straitjacketing that regulations sometimes lead to.

        Throw in open protocols and data formats in all software products, past and present, to reduce lock-in.

        1. 2

          Conversely, metal is regulated for safety; that is why we don’t need to regulate cars.

          Why does everyone in academia want to regulate software this week?

          1. 1

            If I had to guess, it’s because of the speculation around why the latest 737 MAX crashed. Everyone is blaming “software” despite the investigation still being underway.