Basis for AI decisions still not understood by humans
When senior fund manager Takuya Hiroi’s artificial intelligence system recommends a trade, something about the process unsettles him.
“I just don’t quite understand why the AI system gives an instruction to buy stocks of such and such an issue,” said Hiroi, who has been using an AI system for two and a half years at Astmax Asset Management Inc. (ASTAM), a subsidiary of Yahoo Japan Corp.
Hiroi isn’t alone. AI systems think for themselves, but their train of thought, the basis for the decisions they make, remains a mystery to humans.
That raises the question of whether humans are prepared to entrust their lives and safety to AI systems that remain a so-called “black box.”
Yjam Plus!, one of ASTAM’s investment trusts, uses an AI system that draws on big data collected by Yahoo Japan, alongside business conditions, corporate financial information and other conventional data, to forecast stock prices and decide which issues to invest in.
In operation since late 2016, Yjam Plus! invests in the stocks of about 180 listed companies in Japan.
None of the information the investment trust uses as big data has traditionally figured in investment practice. The weather is one example: the AI system draws on rainfall and snowfall patterns in Tokyo to predict the stock price of a mass apparel retailer.
The AI system also sets its sights on companies that are drawing intensive attention in search-engine queries.
Society generates all kinds of information, and somewhere in its combinations must lie patterns associated with stock price fluctuations. Humans never notice them, but an AI system can spot such patterns if it is fed enough data to learn from. So goes the thinking behind AI-assisted investing.
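As a rough illustration of that thinking, the task can be framed as supervised learning: feed a model unconventional features, such as daily rainfall, alongside stock returns, and let it hunt for predictive patterns. The sketch below uses synthetic data and hypothetical feature names throughout; it is not ASTAM’s system.

```python
# Hypothetical sketch: can unconventional "big data" features, such as
# Tokyo rainfall, help predict a retailer's next-day stock return?
# All data here is synthetic; this is not ASTAM's model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_days = 1000

# Synthetic features: rainfall (mm), snowfall (cm), search-engine hits.
rainfall = rng.gamma(shape=2.0, scale=3.0, size=n_days)
snowfall = rng.exponential(scale=0.5, size=n_days)
search_hits = rng.poisson(lam=500, size=n_days).astype(float)

# Synthetic target: a weak, noisy link between the features and returns,
# standing in for the patterns an AI system is hoped to uncover.
returns = (-0.002 * rainfall + 0.001 * search_hits / 500
           + rng.normal(scale=0.02, size=n_days))

X = np.column_stack([rainfall, snowfall, search_hits])
X_train, X_test, y_train, y_test = train_test_split(
    X, returns, test_size=0.2, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
print("Out-of-sample R^2:", model.score(X_test, y_test))
```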
With the development of computers, high-frequency share trading, with thousands of transactions per second, is now commonplace. But AI-assisted investment is a world apart from such computer trading, Hiroi said.
“In automated trading, you look at stock price fluctuations, and you can analyze the investment results retroactively, such as by attributing them to improved corporate performance or to undervalued stock prices,” he explained. “But you can never track decisions made by an AI system.”
Humans can’t decipher decisions made by AI systems, a situation known as the “black box problem.”
“If you don’t see the basis for decisions made by an AI system, then you won’t see the reasons for the decisions,” said Tomoya Suzuki, a professor of mechanical systems engineering with Ibaraki University, who studies AI-assisted stock investments. “If investments in one issue were to fail, the only thing you could do about it would be to stop investing in that issue.”
He said AI systems are good at short-term transactions but aren’t so great at making forecasts for long-term investments. In fact, most of the available AI-assisted, long-term investment trusts have less-than-stellar performance records.
Yjam Plus! posted an impressive return of about 18 percent in the year through the end of May 2018, but sank to a loss of about 18 percent in the year ending in May 2019.
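Note that those two years do not cancel out: compounding a gain of 18 percent with a loss of 18 percent leaves an investor roughly 3 percent below the starting value, as the quick calculation below shows.

```python
# Compounding +18% with -18% does not return to break-even.
start = 1.00
after_two_years = start * 1.18 * 0.82  # 0.9676
print(f"net change: {(after_two_years - 1) * 100:.1f}%")  # -3.2%
```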
“AI-assisted investment is a new challenge,” said Kazuhiko Okubo, the head of sales for publicly offered investment trusts with ASTAM. “We ask investors to look forward to its future performance.”
The use of big data in mid- to long-term investments runs the risk of the AI system taking in “noisy” data that may have nothing to do with stock price fluctuations.
“Overall, AI systems have surpassed humans in terms of stability and high-speed processing,” Suzuki said. “But AI systems could never beat humans as ‘speculators.’ ”
ETHICAL QUESTION OF AUTONOMOUS DRIVING
AI systems that make decisions in ways humans don’t understand also pose ethical problems.
“A green light has been detected,” a computer voice announced aboard a bus stopped at an intersection.
The bus began to slowly turn right, but the driver’s hands remained off the steering wheel.
That was a scene from a demonstration test held on a Saturday in May in Kiryu, Gunma Prefecture, which involved self-driving vehicles, including a bus, running along public roads. The bus took about 30 minutes to cover a round-trip distance of 3.6 kilometers, with publicly solicited monitors riding aboard.
The bus limited its speed to about 20 kph. When other vehicles got stuck behind it, or when it encountered a car parked on the street, the bus switched to a manual driving mode to let the following vehicles pass.
That is because the bus was designed only to run along predesignated, memorized routes and was not equipped with an AI system for selecting routes autonomously.
“To avoid accidents, self-driving technology should achieve the ultimate form of what I would call ‘could-be driving’ (driving that takes into account every conceivable situation),” said Takeki Ogitsu, an associate professor of mechanical science and technology with Gunma University, who helped develop the system used in the test.
Suppose a runaway car comes head on toward a self-driving car.
Turn the wheel to the right and an elderly person would be killed. Turn it to the left and a baby would be killed. Go straight and the passengers’ lives would be at risk. Which option should the AI system choose?
That dilemma is a variant of the “trolley problem,” a thought experiment in ethics that originally involved a runaway rail car. The dilemma has troubled developers of autonomous driving systems, not least Ogitsu.
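One way to see why the dilemma resists an engineering answer: an automated choice ultimately minimizes some cost function, and the weights in that function are themselves an ethical judgment made by a human. In the hypothetical sketch below, the “right” answer flips as soon as the weights change; nothing in the algorithm itself settles which weights are correct.

```python
# Hypothetical sketch: the self-driving dilemma as cost minimization.
# Each option is scored by the weighted harm it causes; the weights
# are an ethical judgment the engineer, not the algorithm, has made.
OPTIONS = {
    "turn_right":  {"elderly_person": 1, "baby": 0, "passengers": 0},
    "turn_left":   {"elderly_person": 0, "baby": 1, "passengers": 0},
    "go_straight": {"elderly_person": 0, "baby": 0, "passengers": 1},
}

def choose(weights):
    """Return the option with the lowest weighted harm."""
    def cost(harms):
        return sum(weights[k] * v for k, v in harms.items())
    return min(OPTIONS, key=lambda opt: cost(OPTIONS[opt]))

# Equal weights: every option costs the same, so the "choice" is an
# arbitrary tie-break (here, dictionary order).
print(choose({"elderly_person": 1.0, "baby": 1.0, "passengers": 1.0}))
# Weight one life differently and the answer flips.
print(choose({"elderly_person": 5.0, "baby": 1.0, "passengers": 1.0}))
```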
Suppose, for example, there is an AI system that can distinguish a stray cat from a pet cat.
“I don’t believe society would approve of a decision to distinguish between a stray cat and a pet cat so that the former will be run over,” Ogitsu said.
And what if it were about humans, not cats?
“Go deeper into the issue and discussions come to a dead end,” he said.
SB Drive Corp., a subsidiary of SoftBank Corp., has set a goal of having 10,000 self-driving buses operating on the streets by the end of fiscal 2025. The buses will use AI in a system for monitoring their interiors, but not in driving-related systems.
Yuki Saji, president of SB Drive, cited the narrowness of Japan’s roadways in explaining why.
Suppose a self-driving bus traveling at 20 kph crosses into the oncoming lane to avoid an obstacle, whereupon it sees a car approaching head-on at 60 kph. A span of only two or three seconds remains before they collide.
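The two-to-three-second figure follows from simple closing-speed arithmetic, assuming (hypothetically) a gap of roughly 50 to 65 meters when the oncoming car comes into view:

```python
# Closing-speed arithmetic behind the "two or three seconds" figure.
# The 50-65 m gap is an assumed value for illustration.
bus_kph, car_kph = 20, 60
closing_mps = (bus_kph + car_kph) * 1000 / 3600  # 80 kph ~= 22.2 m/s
for gap_m in (50, 65):
    print(f"gap {gap_m} m -> {gap_m / closing_mps:.2f} s to impact")
# prints roughly 2.2 s and 2.9 s
```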
“If a (self-driving) car were to try to decide based on accumulated experience in such an unpredictable situation, it would only end up in the trolley problem,” Saji said.
“Understanding of the host society will be key to realizing self-driving technology,” he added. “There should be, for example, fewer cars parked on the street.”
URGENT NEED FOR SETTING RULES
Developers of AI systems are also being called on to address the black box problem.
Fujitsu Ltd. is studying an AI system that could present the grounds for the decisions it makes and provide links to supporting evidence, such as literature.
“An AI system could make a wrong inference,” said Koji Maruhashi, a research manager with Fujitsu Laboratories Ltd. “As long as the grounds for decisions remain a mystery, it will be difficult to use AI systems in autonomous driving, financing and health care, because accountability is rigorously required in those fields.”
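Fujitsu has not published details of that system here, but one common way to present the grounds for a decision is to report how much each input contributed to the output. The sketch below, built on a plain linear model with synthetic data and made-up feature names, illustrates the general idea; it is not Fujitsu’s method.

```python
# Hypothetical sketch of one explainability technique: for a linear
# model, each feature's contribution to a prediction is simply its
# coefficient times its value, so the "grounds" can be listed.
# Generic illustration with synthetic data, not Fujitsu's system.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
feature_names = ["revenue_growth", "debt_ratio", "search_volume"]
X = rng.normal(size=(200, 3))
y = (0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * X[:, 2]
     + rng.normal(scale=0.1, size=200))

model = LinearRegression().fit(X, y)

# Explain a single prediction by listing per-feature contributions.
sample = X[0]
prediction = model.predict(sample.reshape(1, -1))[0]
print(f"prediction: {prediction:.3f}")
for name, coef, value in zip(feature_names, model.coef_, sample):
    print(f"  {name}: contribution {coef * value:+.3f}")
```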
What if an AI system were to teach itself to boost stock prices and dabble in “stock price manipulation,” forcing up prices on purpose to gain profits?
Under the current Financial Instruments and Exchange Law, such an act would not be subject to criminal punishment or fines unless the human in charge of the transactions intended to entice other investors.
In September last year, the Bank of Japan compiled a report by a panel of experts, including university professors and lawyers, which sounded an alarm: it said the option should be considered of obligating developers and operators of AI systems to ensure that no transactions take place that are not based on natural supply and demand.
Efforts are also under way to work out international rules.
The Organization for Economic Cooperation and Development (OECD) adopted five principles of AI development during a Meeting of the Council at Ministerial Level on May 22.
“AI actors should respect … human rights and democratic values,” one of the principles says. “AI actors should commit to transparency,” says another.
Strong arguments were made at an OECD conference of experts that decisions should be entrusted to AI systems only if doing so entails no adverse effects on individuals.
“There is this issue of how to strike a balance between convenience, safety and costs,” said Susumu Hirano, dean of Chuo University’s Faculty of Global Informatics, who served on the conference of experts. “There should be more discussions on how AI systems should be used in fields that have to do with human lives and rights.”
As AI systems are becoming part of social infrastructure, the host society is also being called on to develop relevant rules.