One of the earliest accounts of technology control appears in ancient Greek mythology in the story of Prometheus, who stole fire from the gods and gave it to humans. This stolen gift enabled the development of human civilization.
We humans are now attempting to control a new technology, artificial intelligence, and it is fair to wonder if we will be more successful than the Greek gods were.
State legislatures in New York and New Jersey have proposed legislation that represents the first, tentative steps toward regulation. While the two proposed laws are different, both include information-gathering provisions aimed at risks to such things as privacy, security and economic fairness.
Both states owe a debt to New York City’s efforts to understand what AI is, exactly, so it could be defined in law. The initial group established by the City Council could not agree on a definition, which may explain why some of the proposed laws aim at algorithm-based decisions rather than the broader concept of AI. This may be a good start to regulating the use of algorithms in the stock market, the city’s primary interest, but it clearly leaves a lot undone.
New York State’s legislative action still lacks the governor’s signature, but that signature is likely to come. There is no real controversy about the law, since its only action would be to create the New York State Artificial Intelligence, Robotics and Automation Commission.
Among the subjects the commission would report on is a key economic issue: “The impact of AI, robotics and automation on employment in New York state.” That should stir up enough controversy to satisfy the politicians that there are important constituencies interested in AI.
The New Jersey proposal is a different animal entirely. It is basically a consumer protection measure that would require certain businesses to assess and report the risk to consumers posed by their data collection and their use of algorithms or other automated decision systems. Businesses subject to the requirement would be those with more than $50 million in annual revenue that possess or control personal data on more than 1 million New Jersey consumers, their computers or their mobile phones. The legislation would also cover all data brokers, irrespective of size.
It is not clear whether either approach, New York’s or New Jersey’s, is likely to serve as a pattern for other states or cities considering AI regulatory legislation.
All the legislative efforts so far reflect a general uncertainty about what AI is and what about it, if anything, should be regulated. It is all mixed up with the growing distrust of so-called “big tech” and, more generally, with the public’s sometimes puzzling attitude toward AI.
Mark Twain wrote about a puzzlement of human behavior nearly a century and a half ago in his book “The Adventures of Tom Sawyer.” In one episode Tom had been tasked by his Aunt Polly with whitewashing the fence around the house. And, of course, other kids came to watch him work. Instead of complaining about the chore, Tom seemed to be enjoying himself so much that they wanted to help and get in on the fun. Eventually, after the other kids began to offer trade goods in exchange for a turn with the brush, Tom relented and accepted their work and their payment.
It’s a funny story, and a revealing one from a behavioral theory standpoint. Boys who would do anything to avoid a job like whitewashing eagerly paid to do the same work – a puzzle, surely.
It would be reassuring, but inaccurate, to say that our behavior today is much more rational. But consider robots as an example. We whine and complain when forced to talk to a robot that answers a customer service phone. But we eagerly pay money to talk to a robot installed in our home. Another puzzle.
The potential economic impact of AI is huge and may also be very disruptive, and it goes well beyond robots taking our jobs. Our financial markets, for example, require participants with offsetting views in order to find equilibrium points and any kind of order. Even on the stock market’s darkest days, every sale is matched to a purchase. For every seller who thinks a stock will go down there has to be a buyer who thinks it will go up. That is how a market system works.
If all market participants, large and small, are using similar AI systems to manage their portfolios, though, even the smallest market twitch could thrust the entire system into all-sellers or all-buyers. Is there a way to regulate this in order to prevent disruptive or disastrous volatility? We don’t know yet, but we’d better find out.
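The worry about homogeneous AI trading can be illustrated with a toy simulation. This is only a hypothetical sketch, not a model of any real market or trading system: each simulated trader buys or sells based on a score that mixes one shared market signal with the trader's own private view. The `shared_weight` parameter is an invented knob for how much every trader relies on the same model.

```python
import random

random.seed(42)

def order_flow(n_traders, shared_weight):
    """Count buyers and sellers. Each trader buys if its score > 0.
    The score blends one market-wide signal with the trader's own view;
    shared_weight = 1.0 means every trader follows the identical model."""
    shared = random.gauss(0, 1)  # one small market "twitch", seen by all
    buys = 0
    for _ in range(n_traders):
        own = random.gauss(0, 1)  # the trader's independent opinion
        score = shared_weight * shared + (1 - shared_weight) * own
        if score > 0:
            buys += 1
    return buys, n_traders - buys

# Diverse strategies: buyers and sellers roughly balance.
print(order_flow(1000, 0.1))
# Identical strategies: the same twitch pushes everyone to the same side.
print(order_flow(1000, 1.0))
```

With diverse views the order flow splits close to fifty-fifty, so trades can match; when every trader weights the shared signal fully, the market becomes all buyers or all sellers, which is the volatility scenario the paragraph describes.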