"AI enables technical systems to perceive their environment, to deal with what they perceive and to solve problems in order to achieve a certain goal", says an article (1) on artificial intelligence published on the website of the European Parliament. Most people do not realise how often they encounter what is called artificial intelligence. Many well-known platforms such as YouTube and Google use artificially intelligent programmes to analyse every activity of the user, build a profile and display search results and suggestions tailored to that profile, with the main aim of keeping the user on the platform for as long as possible. An algorithm that can, for example, guess a user's favourite band from their activities on the web is called a weak AI. Without really realising it, we have allowed "intelligent" computer systems to do more and more of our work for us, to make decisions for us and to filter the information we receive about our environment. While we can always decide whether or not to accept a search result or suggestion offered to us by an AI on the internet, once we are automatically strapped into a car without a steering wheel we have no way of knowing whether we can still influence the AI's driving behaviour.
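The kind of profiling such a weak AI performs can be illustrated with a deliberately trivial sketch. Everything below is invented for illustration: the band names, the activity log and the function are assumptions, and real platforms combine far more signals with far more complex models.

```python
from collections import Counter

# Hypothetical activity log: each entry is one band the user played.
activity = ["Radiohead", "Muse", "Radiohead", "Portishead", "Radiohead", "Muse"]

def guess_favourite_band(plays):
    """Guess the favourite band as the most frequently played one.

    Returns None for an empty log. This is the simplest possible
    'profile': a play count per band, with the maximum taken as the guess.
    """
    if not plays:
        return None
    return Counter(plays).most_common(1)[0][0]

print(guess_favourite_band(activity))  # prints: Radiohead
```

Even this toy version shows the pattern the text describes: raw user activity goes in, a profile (here, play counts) is built, and a tailored suggestion comes out.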
A strong AI, on the other hand, is a computer programme that thinks beyond the task for which it was originally created and matches or even surpasses the intellectual abilities of a human being. The favourite-band algorithm would cross this line if it suddenly also suggested which outfit best fits the music scene around your favourite band and from then on constantly e-mailed you suggestions for suitable clothing. The AI might next decide to open its own online shop, because its outfit suggestions have actually inspired many people to buy. It might then invest the money it earns in an arms company, simply because the share price happens to be rising unusually sharply. From data on the internet, the AI learns which region the arms company supplies and then also invests in a second arms company that supplies the opposing side, in order to increase its returns through intensified hostilities.
Most people would probably decide differently at one point or another because their conscience would not allow them to make these decisions. Presumably, some would already balk at bombarding other people with advertising suggestions. A strong AI, on the other hand, has neither a conscience nor any kind of personal goal; it simply makes decisions because they seem logical to it. Strong AIs are highly intelligent, conscienceless machines that would likely have access to all the data stored on the internet and the ability to analyse it in a fraction of a second, and could thus intervene in pretty much every process of our human world, to whatever degree and in whatever way they please.
The development of strong AI is a serious systemic risk to our human world. To keep this development under control, it is of fundamental importance to start with the weak AI that already exists today, to identify and characterise its dangers, and then to work out and implement safety concepts so that an uncontrolled development towards stronger AI cannot happen in the first place.
The questions of how far we want to hand over decision-making power to programmed algorithms and machines, and whether we agree with all the possible consequences, are questions we should answer before creating such machines. There is no guarantee that we will get a chance to answer them, or to make changes, afterwards.
1. Europäisches Parlament. Was ist künstliche Intelligenz und wie wird sie genutzt? Last accessed on: 26.09.22.
2. Keith Wynroe, David Atkinson and Jaime Sevilla. Literature review of Transformative Artificial Intelligence timelines. Epoch AI, 17.01.2023. Last accessed on: 06.03.23.
Approaches to greater AI safety
The fundamental goal of the safe use of artificial intelligence is that humans ultimately remain in control of the technology: not a few of them, but ideally every single individual. Without multi-layered control instances, it is very likely that a few people will take control of AI technology and thereby control many others, or that the AI technology will end up controlling us itself. This outcome is logically predictable once one understands how powerful artificial intelligence already is today, and how much more powerful it is likely to become as systemic complexity steadily increases.
Already, many AI technologies undermine our right to free personal development by controlling what information we have access to on the internet, and how and when, which makes them highly manipulative. "At stake here are nothing less than our most important constitutionally guaranteed rights", as an article on "Das Digitale Manifest" published in "Spektrum" (1) accurately puts it.
One conceivable approach to greater AI safety could consist of multi-layered human control over the technology, with society, politics and research as the three main control instances.
Control at the societal level
First of all, there is a need for general education about the influence, scope and consequences of the AI systems deployed today and of those that may be deployed in the future. This would be significantly advanced by supporting factual, rational discourse on the topic. At the societal level, this can be achieved by people taking the initiative to inform themselves about AI and then starting a discourse, at any level: in the family, among friends, in educational institutions, in companies, or as a society-wide survey. Through education and the discourse that follows, society can put to government and research the demands it considers important for a safe way of dealing with artificial intelligence. Those who want to contribute even more to this goal can also consider aligning their careers accordingly; a good guiding concept is offered by the book "80,000 Hours" by Benjamin Todd and the corresponding website (2).
Control at the political level
The state should establish legal frameworks for the production and use of artificial intelligence whose primary goal is not maximum profit but human safety. These should be decided in consultation with society, research and the ethics committee, constantly reviewed and adjusted where necessary. A logical element of such a framework would be, for example, to require government permission for the production and use of AI systems, as well as for conducting research projects on the topic. In addition, to promote transparent use, it could be established that AI must be labelled as such. Furthermore, the state could increase safety by prioritising research funding for projects that analyse and prevent the risks associated with AI development. Strict oversight by the ethics committee of both political activities and research projects on AI is recommended.
Control by research
To support society, research should make general information about AI, current research results and the current use of AI accessible to everyone; collecting, simplifying and also translating this information is important for that goal. More research should be conducted on AI safety, from which important legal frameworks can be developed and then passed on to politics. Another important task is to investigate whether artificially intelligent systems have the capacity to suffer; this should be carried out under the supervision of the ethics committee and in compliance with defined ethical guidelines. International research collaborations can reduce the risk of a technological arms race.
Literature on AI safety
1. Helbing et al. Digitale Demokratie statt Datendiktatur. Spektrum, 17.12.2015. Last accessed on: 19.10.22.
2. Arden Koehler. How to use your career to help reduce existential risk. 80,000 Hours website, August 2020. Last accessed on: 19.10.22.
3. Ajeya Cotra. Two-year update on my personal AI timelines. LessWrong, 03.08.2022. Last accessed on: 06.03.23.
4. Benjamin Hilton. Preventing an AI-related catastrophe. 80,000 Hours website, August 2022. Last accessed on: 06.03.23.