On the control problem of artificial intelligence (AI) and other perennial issues
In 2019, Stuart Russell published his acclaimed bestseller “Human Compatible: Artificial Intelligence and the Problem of Control” (there is already a Wikipedia article about it). Russell, a British professor of computer science at Berkeley (CA), has been one of the leading experts in the field of artificial intelligence (AI) for years. He is also co-author of one of the standard works on the subject, now in its third edition.
To state it up front: the book received a very good media response. It is broadly conceived as a comprehensive compendium on the status, consequences and risks of artificial intelligence today and tomorrow. I think it asks the right questions openly and credibly, including questions that have been taboo in AI research, and it is definitely worth the read. Virtually every conceivable issue of AI is discussed in a current context from the perspective of a natural scientist.
The Times, for example, wrote: “This is not quite the popular book that AI urgently needs. Its technical parts are too difficult, its philosophical ones too easy. But it is fascinating, and significant.”
My impression points in the same direction, and I would like to pick up on a few critical points here: despite the generally commendable contributions to an ethics of dealing with AI (“beneficial” AI), there are a few statements that give one pause.
Most of them are the familiar “issues” of the guild of “techies” and AI researchers.
1. The concept of “knowledge” is imprecise and ignores certain research findings
People are still asking how “knowledge” could be stored in computers (p. 50), although we have known since the early noughties at the latest that the Nonaka/Takeuchi paradigm of simply making tacit knowledge explicit has not worked and has caused numerous knowledge-management projects to fail. The philosopher Michael Polanyi had already explained clearly (1958) why purely explicit knowledge is inconceivable (cf. quote).
Polanyi, M. (1958), Personal Knowledge (cf. also: https://en.wikipedia.org/wiki/Tacit_knowledge)
From a scientific point of view, explicit knowledge is pure information in coded, processable form. How it is understood and what is done with it is left to the actors in a given context (turning information into actionable knowledge or insight).
Russell certainly recognizes the problems of knowledge management in their various dimensions and points to breakthroughs on the horizon, e.g. in speech recognition, information extraction and machine translation. However, more recent findings in the field of semantic technologies (e.g. the significance of “knowledge graphs” or ontologies) are not mentioned at all, which is somewhat surprising. Instead there is the terse sentence: “One way to summarize the difficulties is to say that reading requires knowledge and knowledge (largely) comes from reading” (p. 81).
I think no social scientist would dare to write such a sentence, because it presents as self-evident something that is anything but. Unfortunately, it is also one of the paradoxes of the zeitgeist that ever more texts are produced while less and less is read and understood (keyword: formalization of the individual sciences). Services with smart tools can take over part of this, but there are always certain purposes behind it where the human being makes the difference at the level of action.
It is also doubtful whether Carnegie Mellon University’s NELL project (Never-Ending Language Learning) really is the most sophisticated “language bootstrapping” project in the world, as Russell writes (p. 81). In my view, when it comes to the dynamic knowledge representation of content and context, library science is at least as far along as some analytical capacities in the private sector, precisely thanks to semantic technologies (linked open data, integration of ontologies, taxonomies, specialized vocabularies and terminologies, text mining, etc.).
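To make the idea of a knowledge graph concrete for readers unfamiliar with semantic technologies, here is a minimal toy sketch in Python: facts stored as (subject, predicate, object) triples plus a tiny pattern query, in the spirit of RDF/SPARQL. All entity names are invented for illustration; production systems would use dedicated triple stores or libraries such as rdflib.

```python
# Toy knowledge graph: facts as (subject, predicate, object) triples.
# Entity names are illustrative only, not from any real dataset.
triples = {
    ("HumanCompatible", "writtenBy", "StuartRussell"),
    ("HumanCompatible", "type", "Book"),
    ("StuartRussell", "affiliatedWith", "UCBerkeley"),
    ("AIMA", "writtenBy", "StuartRussell"),
    ("AIMA", "type", "Book"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard,
    roughly like a variable in a SPARQL basic graph pattern."""
    return sorted(t for t in triples
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

# All subjects written by Stuart Russell:
books = [t[0] for t in query(p="writtenBy", o="StuartRussell")]
print(books)  # → ['AIMA', 'HumanCompatible']
```

The point of the representation is that context is explicit and queryable: new vocabularies or ontologies extend the same triple structure instead of requiring a new schema.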
2. The political-philosophical part is the weakest
As the Times reviewer has already hinted, the part containing the perfectly valid reflections on the political and philosophical aspects of AI research is the weakest: it makes objectively understandable suggestions, but then gets bogged down in dilemmas and speculations that might be solved with AI, though only very vaguely. Many contradictions probably also stem from the tenets of the British utilitarian school (J. Bentham, J. S. Mill), which the AI community seems to have adopted as its worldview. However, as we have known since Wittgenstein at the latest, the world’s problems can only be solved to a limited extent with instrumental reason and pure utility calculations.
What are individually and collectively/socially “reasonable” or “beneficial” needs and goals of “beneficial machines” (elsewhere “altruistic machines”), if he presupposes as a fundamental thesis that “machines are beneficial to the extent that their actions can be expected to achieve our objectives” (p. 11)? What is provably beneficial AI (chapter 7: “Provably beneficial AI”)? The researcher makes a few well-intentioned remarks in order to exclude certain misunderstandings about the specification of “human values” from the outset (“Not what I mean”, p. 177), but nevertheless inevitably enters the waters of moral dilemmas and complexity traps when he tries to separate preferences logically and technically from “values” (“desirability of anything from pizza to paradise”, p. 178); the former choices, he suggests, could in principle be handled cleanly by machines.

However, he then quickly realizes that this could become too difficult and potentially disastrous, and confines himself to stating that machines should simply learn to make better principled predictions for individuals (section “The third principle: Learning to predict human preferences”), although such predictions are known to be “highly uncertain and incomplete”. Such ambivalences run through several sections. On the other hand, he argues very clear-sightedly against certain taboos within the AI community (whataboutery, p. 156: “… if the risks are not successfully mitigated, there will be no benefits.”), and in the section “Humble machines” he muses about machines that should also incorporate humility and treat “uncertainty” about our objectives as a design goal.
How could it be – he evidently complains – that the AI community and related disciplines (control theory, operations research) for years researched only benefit maximization, cost minimization, goal achievement and the like, without recognizing the blind spot of “uncertainty in operational decision making” and dealing with it accordingly (p. 176)?
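The intuition behind “humble machines” can be sketched numerically. The following is a toy version of the kind of off-switch argument associated with Russell’s group (my own simplified stand-in, not his actual formalism): a machine is uncertain about the human’s utility u for a proposed action and compares acting unilaterally, switching itself off, and deferring to a human who permits the action only when it is actually beneficial.

```python
import random

def expected_value(samples):
    return sum(samples) / len(samples)

# Toy "off-switch game": the machine is uncertain about the human's
# utility u for a proposed action, modeled here as samples from a
# standard normal belief (an arbitrary illustrative choice).
random.seed(0)
belief = [random.gauss(0.0, 1.0) for _ in range(100_000)]

ev_act = expected_value(belief)                           # act regardless
ev_off = 0.0                                              # switch itself off
ev_defer = expected_value([max(u, 0.0) for u in belief])  # human vetoes u < 0

best = max([("act", ev_act), ("off", ev_off), ("defer", ev_defer)],
           key=lambda t: t[1])
print(best[0])  # prints "defer"
```

As long as the machine remains genuinely uncertain about our preferences, deferring strictly dominates: the human veto filters out the harmful cases, so E[max(u, 0)] exceeds both E[u] and 0. Once the machine believes it knows u exactly, the incentive to defer disappears, which is precisely the point of keeping uncertainty in the objective.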
Finally, one must give Russell credit for openly acknowledging certain speculative aspects of his argumentation and weighing them self-critically; that is, he demonstrates an ability no machine yet possesses, the ability to contradict itself (see the conclusion below). He weighs reasons for optimism against reasons that call for caution. In chapter 9, “Complications: Us”, he also acknowledges the extent to which humans themselves and their behavior are an obstacle to progress in AI; in a kind of AI-compatible sociology and psychology (stupidity, envy and other vices), Russell discusses all the aspects of human behavior that can be dysfunctional to AI goals, up to and including the question of whether human beings even know what they want.
Regarding “uncertainty” and “predictability” he remains very realistic: “Very little of our knowledge is absolutely certain”, including the shortcomings of content and context for AI progress in the near future (p. 68): “… faster machines just give you the wrong answer more quickly.”
“Happiness should be an engineering discipline”
His assessment of the socio-economic problem of employment is sympathetic: these aspects should not be left to economists alone (how true), the subject is too important for that (p. 113). “The art of life itself” should be promoted through education (p. 122). He is in favor of a universal basic income (UBI) in principle but, following Keynes, sees a psychological difference between “those who strive and those who just enjoy”, that is, between the “strained”, who recognize the value of work and can realize themselves in it (truly human), and the cheerful, carefree souls (delightful) who merely “profit”. The truth probably lies somewhere in between. So-called “life architects” could support individuals in building “personal resilience” and “developing natural curiosity”. So far so good, but we are still very far away from that.
My objections here may concern minor aspects, which does not diminish the merits of the work in the field of control problems (e.g. the regulation of AI or the limits of superintelligence). Russell deals very responsibly with the relevant problems and challenges (“his message is not one of gloom and doom but of explaining the nature of the dangers and difficulties, before setting out a plan for how we can rise to these challenges.” – Toby Ord).
My conclusion on the current state of AI follows instead the statements of a critical observer of AI, Reinhard Sprenger (“the grandmaster of management”). Incidentally, he prefers the term “machine intelligence” (MI) to artificial intelligence, with which I strongly agree.
AI has a fundamental problem: it does not make mistakes.
Sprenger: susceptibility to error is an advantage of human intelligence; thanks to it, the species has survived for millions of years. “Our thinking and actions do not follow an algorithm; rather, they adapt, are capable of learning and anticipate, while always making small mistakes that we correct. (…) Machine intelligence can therefore be intelligent in the sense of extremely fast data processing. But it will never be intelligent in the human sense.
“At no time was it particularly intelligent to run a race against machines that one cannot win. Machines are always faster. We lose all the games there – and yet come out the winners. In which games? Wherever feeling and intuition matter, and practical virtues such as wisdom and prudence. Human intelligence qualifies for the creative, the complex … (…) An almost endless series: autonomy, context sensitivity, analogy formation, conscience (…) – all not programmable. But above all, contradiction! The ability to contradict oneself will probably always remain reserved for the human being. That is its highest nobility.”
Stuart Russell, Peter Norvig (2010), Artificial Intelligence: A Modern Approach, 3rd edition.
Nonaka, I.; Takeuchi, H. (1995), The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation.
The CEO of a leading semantic technology provider puts it this way: “Nine out of 10 of the ‘quite huge’ companies already use knowledge graphs for different purposes, but in many cases it’s about knowledge management, it’s about data integration.”
Cf. R. Sprenger: “Many fear becoming superfluous because of artificial intelligence. Yet AI has a fundamental problem: it does not make mistakes”, in: NZZ, 26.1.2019; cf. also his book “Radikal digital: Weil der Mensch den Unterschied macht” (2018; roughly: “Radically digital: because people make the difference”). There he leaves no doubt that everything that can be digitized will be digitized, and that there is economic potential in this. But it will not happen as quickly as most people assume; the changes should be viewed in the long term, and alarmism is out of place. There is an old insight about dealing with the new: historically, the short-term effects of technological upheavals have always been overestimated, while the long-term effects have been underestimated.
Some researchers are already working on this problem – how a machine can develop natural curiosity, similar to the way children learn (recursive self-improvement and reinforcement learning) – e.g. Jürgen Schmidhuber of the IDSIA AI institute in Lugano: http://people.idsia.ch/~juergen/interest.html
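The core idea behind such “artificial curiosity” research can be sketched very simply (a minimal stand-in of my own, not Schmidhuber’s actual formulation): the agent rewards itself for encountering transitions its internal world model cannot yet predict, so novelty attracts attention and the reward fades as the model learns.

```python
# Toy sketch of curiosity as intrinsic reward: the agent's reward is
# the prediction error of a trivial world model. States "a", "b", "c"
# and the transition sequence are invented for illustration.
def make_world_model():
    memory = {}  # state -> predicted next state
    def predict(state):
        return memory.get(state)
    def update(state, next_state):
        memory[state] = next_state
    return predict, update

predict, update = make_world_model()
transitions = [("a", "b"), ("b", "c"), ("a", "b"), ("b", "c")]

rewards = []
for state, next_state in transitions:
    guess = predict(state)
    # Intrinsic reward: 1.0 when the model is surprised, 0.0 otherwise.
    rewards.append(0.0 if guess == next_state else 1.0)
    update(state, next_state)

print(rewards)  # surprise fades as the model learns: [1.0, 1.0, 0.0, 0.0]
```

In real curiosity-driven reinforcement learning the world model is a learned predictor and the intrinsic reward is typically a continuous measure of prediction error or learning progress, but the qualitative behavior is the same: once something is predictable, it stops being interesting.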