E.U. & U.S. Public Policy Forum
Workshop on Issues in Artificial Intelligence and Philosophy (II)
The “Workshop on Issues in Artificial Intelligence and Philosophy (II)” was held at the Institute of European and American Studies, Academia Sinica, on November 16–17, 2018. Eight scholars and experts in related fields were invited to present their recent research findings and to discuss developments and research in contemporary artificial intelligence and philosophy with other invited attendees.
The first paper presented was Artificial Intelligence, Self-driving Vehicles, and Ethic Programs. It stated that artificial intelligence technologies aim to maximize goal achievement. However, when facing ethical choices, the actions taken by artificial intelligence may not be so “humanlike.” The two presenters therefore intended to: 1) construct an ethical case base via online questionnaires; 2) construct an ethical decision-making model that achieves the greatest consistency between decisions and case studies; and 3) provide philosophers with an analysis of the similarities and differences between artificial intelligence and existing ethics. The presenters hope that their findings will reduce the disparities between artificial intelligence and human actions.
The second paper presented was Davidson on Turing Test. The main purpose of Turing’s “Imitation Game” is to discuss whether machines have the ability to think. After characterizing and briefly commenting on the game, Davidson presented his criticisms of Turing’s test in terms of his well-known notion of triangulation. However, Kuczynski argued that Davidson’s criticisms do not hold. The discussion during this session focused on reexamining Kuczynski’s argument and pointed out that the question remains open to debate.
The third paper presented was Normative Controversies on AI Risk Prediction. It first examined the nature and extension of AI, and then proposed a definition and application of AI prediction technology. It further discussed the controversies surrounding AI prediction technology, which concern both the prediction technology itself and disputes over the related data, such as the reliability of AI prediction results and the inexplicability of the prediction process. Can AI prediction technology be used to prevent human-induced risks? If so, how can it be used effectively?
The fourth paper presented was Why Computational Weak Artificial Intelligence is Impossible to Succeed. R. Penrose argued in his work that computational weak artificial intelligence cannot succeed. However, since the argument is extremely complicated, it is easily misunderstood. In a publication by S. Russell and P. Norvig, on the other hand, the authors offered three arguments against Penrose’s viewpoint. The presenter briefly explained these arguments and pointed out that the limitations of Russell and Norvig’s objections stem from their lack of a thorough understanding of Penrose’s viewpoint. The paper also demonstrated that a large gap still remains between AI experts and philosophers.
The fifth paper presented was On AI’s Possibility of Making a Purposive Interpretation and Analogy in Law. The author argued for the key role of purposive interpretation of law both in legal reasoning and in computational models of AI. Understanding this specific method of legal interpretation contributes not only to understanding the nature of legal reasoning but also to understanding how far a computational model of legal reasoning is possible. Because a law’s purpose serves different functions in legal reasoning, the first step is to investigate the logical structure of purposive interpretation, the nature of “purpose,” and how the normativity of legal interpretation can emerge from a law’s purpose.
The sixth paper presented was Problems of Artificial Consciousness and Inspirations of Social Theory. First, the presenter explained the differences between studies of animal consciousness and studies of artificial consciousness. Next, the presenter delved into social theories of consciousness and their implications for issues of artificial consciousness, specifically for how to build and test artificial intelligence. Unlike traditional views, which focus on the features of systems, the presenter proposed shifting one’s position to that of the interpreter, eventually moving discussions of consciousness to the level of groups.
The seventh paper presented was The Moral Relationship Between Robots and Human Beings. To make machines efficient, they should function without human supervision. But the decisions these machines make must be morally acceptable to humans. A natural idea is to teach robots to think about morality anthropomorphically. However, the author argued against this practice: because robots cannot build meaningful personal relationships with humans, they should not interfere with humans’ autonomy as deeply as humans can.
The eighth and final paper presented was The Possibility of Writing Poems and Hymns with Artificial Intelligence: An Analysis from a Philosophical Perspective. Artificial intelligence now provides an opportunity to re-understand and redefine art. The author proposed that Zang Di’s analysis of the possibility of understanding the direction and development of new poetry through language may be incorrect. Nevertheless, the author believed that this is not a move toward the separation of art and science, but rather a means of establishing a new and different association between art and science.
After the presentation of eight papers over the two-day period, the workshop concluded successfully on the afternoon of November 17. Scholars and experts in related fields engaged in extensive discussions of recently published results on the philosophical and ethical issues that current advancements in artificial intelligence are facing. The workshop built research momentum for academic developments on artificial intelligence and philosophical issues, and consequently introduced many new research ideas and raised awareness in related fields.