AI is like a game of football – you can’t play without rules!

Does Switzerland need an AI law?

AI is like a game of football – it can’t be played without binding rules! I believe Switzerland must start regulating artificial intelligence procedures as quickly as possible. The hesitant approach currently being taken will come back to haunt us.

Do you know of a sport that gets by without rules and referees? Admittedly, in many sports the referee is only needed when something really important is at stake (football being the exception here…). But I don’t know of any sport that manages without a minimal set of rules. Rules keep the game attractive and interesting, and they describe the correct way to play. At the same time, they define the framework conditions so that the risks for those involved are minimized.

The same applies to the law. Laws arise when new circumstances and procedures emerge in real life that entail risks for those affected. It is equally important that these procedures are set up so that the parties involved can rely on them and, should a dispute arise, it can be decided who is in the right (the binding nature of the law). Someone must be able to decide, on the basis of a clear rule, whether there is a free kick or not.

The development of AI

In the following, I will only talk about artificial intelligence (AI) in general and will not go into individual processes in detail. A simple observation suffices: AI processes and their applications have already become firmly established. Many users rely on AI-based processes daily, sometimes without even realizing it. Hundreds of new applications in which artificial intelligence plays a key role are launched on the market every week. So this is not a technology that has to be pushed onto the market with heavy start-up investment and political support; these processes have already taken hold and are developing at a breathtaking pace. Compared to this, the dot-com bonanza of 1999 was barely a breeze; we are dealing with a veritable storm here!

Applied to sport: it is as if a new trend sport had reached half of humanity within five years. The necessary infrastructure has been set up worldwide and is already heavily used. So people play for all they are worth, even though nobody really knows what the rules are. Sooner or later, this is bound to lead to a lot of trouble. If the size of the pitch is not defined, the number of players is not specified and the size of the balls is unclear, things become difficult. Clever players are already trying to exploit these gaps: they join the game by circumventing the vague field boundaries and scoring a goal from behind, while others manipulate the results or try to change the course of the game.

AI risks

Now, AI processes are generally not as harmless as a new sport. Admittedly, there are many applications in which AI plays a valuable role and where no negative effects are to be expected. Unfortunately, however, the new risks that come with the use of such processes are incomparably greater (and they come on top of the existing ones). As is usually the case in the tech scene, these are either denied or met with the argument that “experience has to be gained first”. That phase is long gone. AI processes are used frequently and readily, while their transparency and traceability are rarely ensured. Research has only just begun to develop approaches for making AI results comprehensible. Today, AI processes are used in black-box mode, in which the way results are generated is barely comprehensible. You could also say the game takes place in a darkened stadium, and after 102 minutes the scoreboard reads 2:1. You are correct in assuming that various rules of the game are being fundamentally violated here. Just think of the banal requirements of commercial law: “Every transaction must be completely traceable and correct.” This redefines the term “creative accounting”! ChatGPT: “Please change my bookings so that the hotel stays of officials from certain countries of origin are always automatically booked as further training.”
Things get even hairier when you look at the data protection rules. Complete transparency is always required when it comes to automated decisions, which are subject to strict requirements (if they are permitted at all). By law, the impact of such decisions and what they mean for the data subject must be clarified BEFORE such procedures are used. In case of doubt, a “human in the loop” must make the decision. Strictly speaking, under current legislation such procedures should not even be put into circulation: they must be adequately tested and assessed by a neutral body before being rolled out (data protection impact assessment).
Data protection law helps, but AI also creates completely new risks and attack vectors. These include, for example, data poisoning, model evasion, model inversion and many more. The classic security risks continue to exist, supplemented by new attack methods (e.g. voice synthesis for the purpose of social engineering).
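For readers who want a concrete idea of what “data poisoning” means, here is a minimal, purely illustrative sketch in Python. The dataset, model and poisoning rate are made up for the example; real attacks are far more subtle, but the principle is the same: whoever can tamper with the training data can quietly influence the model’s behavior without ever touching the deployed system.

```python
# A minimal sketch of label-flipping "data poisoning" (illustrative only).
# Dataset, model and poisoning rate are hypothetical; real attacks are more subtle.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: model trained on clean data
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy with clean training data:   ", clean_model.score(X_test, y_test))

# Attacker flips the labels of 30% of the training records
rng = np.random.default_rng(0)
flip_idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

# Same model, same features, silently corrupted labels
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("accuracy with poisoned training data:", poisoned_model.score(X_test, y_test))
```

Comparing the two scores shows how quality and trustworthiness of results depend on data that nobody in the organization may be monitoring.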

Without going into further risks such as discrimination or the violation of ethical principles, it can safely be said that, in my opinion, the combined risk potential of these procedures significantly exceeds the risks posed by defective medicines. For this reason, the EU AI Act, for example, requires a classification of risks as the basis for the measures to be taken: procedures with unacceptable risks may not be placed on the market at all, while high-risk procedures are subject to strict requirements.

Lack of know-how

The AI bonanza has suddenly put an incredible number of AI experts on the pitch (“14 days ago I didn’t know what an LLM was, today I’m programming one”). I suspect most of them don’t even know which direction they are supposed to be playing in. There is a gold-rush atmosphere – every dust-covered application is suddenly given the attribute “AI-supported”. Most of the so-called experts are still fishing in murky waters, or, as mentioned, are playing with the wrong balls or don’t know who their teammates are. This is not an accusation, just an observation. After all, we have seen that without sensible rules, no reliable game can be played. And on what basis are they supposed to train, anyway? Unfortunately, those who are not directly involved can also become victims of these players. So we have to prevent spectators who want to take part in this attractive game from coming to harm (most of the players are there voluntarily, which is still no justification for letting them tear up the turf).

I know that I do not know my data (information governance)

A classic information governance issue should not be missing here: the organization does not even know what quality its data has or, even worse, has no idea what data is being used at all! This is a completely neglected topic; just think of the billions in damage caused by faulty data in the aerospace industry alone. Admittedly, AI was not an issue back then, so one might have assumed that such errors could not occur! With AI, however, it is less about a single data point and more about a large mass of data that flows uncontrolled into automated evaluations that nobody can trace afterwards. Anyone who argues that this problem can also be solved with AI should take a closer look at the topics of semantics and ontologies; there is currently no way around humans here.
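To make the point a little more tangible, here is a minimal sketch of what a simple data-quality gate could look like before data flows into an automated evaluation. The field names, trusted source systems and checks are purely hypothetical; the point is merely that someone has to define and enforce such rules, and that this is a governance task, not a technical afterthought.

```python
# A minimal sketch of a data-quality gate (hypothetical field names and sources).
# It blocks incomplete records, duplicates and data of unknown provenance
# before they flow into an automated evaluation.
import pandas as pd

REQUIRED_COLUMNS = {"customer_id", "amount", "source_system"}
TRUSTED_SOURCES = {"erp", "crm"}  # assumption: only these systems are governed

def quality_gate(df: pd.DataFrame) -> pd.DataFrame:
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"required columns missing: {missing}")

    before = len(df)
    df = df.dropna(subset=list(REQUIRED_COLUMNS))        # no incomplete records
    df = df.drop_duplicates()                            # no duplicate records
    df = df[df["source_system"].isin(TRUSTED_SOURCES)]   # provenance must be known
    print(f"quality gate: kept {len(df)} of {before} records")
    return df

# Hypothetical usage: clean_df = quality_gate(pd.read_csv("bookings.csv"))
```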

Conclusion: Based on the risks alone, there should no longer be any question as to whether AI regulation is necessary. But where do we stand with this?

What is the status of regulation?

Many are already aware of the EU’s efforts to regulate these issues. The EU AI Act has been in force since August 2024. The EU is a pioneer in developing laws and rules for the use of AI processes; the first efforts and preliminary investigations were initiated back in 2018. Care was taken to ensure that the regulation can be applied to any form of AI, be it in an embedded system, in an autonomous vehicle or in ordinary software, such as for screening applicants or granting loans.

Many countries and individual US states have taken up the idea and are in the process of drafting or implementing their own regulations.

Here are the most important basic rules for the use of AI under the EU AI Act:

1. Human agency and oversight, including fundamental rights, human agency and human oversight.
2. Technical robustness and safety, including resilience to attack and security, fall-back plans and general safety, accuracy, reliability and reproducibility.
3. Privacy and data governance, including respect for privacy, quality and integrity of data, and access to data.
4. Transparency, including traceability, explainability and communication.
5. Diversity, non-discrimination and fairness, including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation.
6. Societal and environmental wellbeing, including sustainability and environmental friendliness, social impact, society and democracy.
7. Accountability, including auditability, minimization and reporting of negative impact, trade-offs and redress.

Where does Switzerland stand?

The federal government has been slow to react to developments in the field of AI. A study is currently underway and should be completed by the end of 2024. It will then probably be a while before any concrete measures or legislative proposals emerge. In fact, we are encountering a situation here that we already know from data protection: the EU forges ahead with a groundbreaking law, and Switzerland can only follow what has been regulated there. Gone are the days when Switzerland set standards in the regulation of digitization (we were one of the first countries in Europe to allow fully digitized accounting)! Unfortunately, it must be admitted at this point that, strictly speaking, it no longer matters much what Switzerland does. As with the GDPR, the EU AI Act has created a de facto standard that we can no longer ignore. This also has to do with the fact that the EU law applies extraterritorially, i.e. it also covers providers whose products may be used within the EU. Anyone wishing to export to the EU will therefore have to comply with the rules.

Well, then the rules of the game are effectively set. It may be possible to fine-tune certain principles or tweak certain settings, but the size of the pitch, the number of players and how many balls are used should be clear. The question of who the rules apply to has also been settled. Are there still plans for a Swiss Finish? Marginal adjustments may well be in the interests of the Swiss economy, but there is less and less leeway here. Presumably it is still possible to decide what color the pitch markings should be (as long as it is white).
It is probably no coincidence that Switzerland does not have a Swiss Finish for ball sports!

Quo vadis?

Although the EU law is new, it has of course not been able to keep up with all the developments that have happened in the meantime. There is also a lot of room for interpretation, and the question always arises as to who decides whether the rules of the game address the risks correctly on the one hand and keep the game attractive on the other. As experience shows, the EU cannot exactly be accused of being very business-friendly, which is also a general point of criticism raised by the Swiss protagonists. In my view, it must remain possible to develop innovative processes. Every game developer tinkers with a new idea for months, if not years, before bringing it to market. But just as they need a test audience, today’s shrewd AI developer must also ensure that their application is not only sufficiently tested, but also fully complies with the overarching rules.

Today, there is a latent risk that, because development costs have fallen so dramatically, some hastily stitched-together applications will be thrown onto the market without sufficient testing. This is a fundamental flaw of modern software development processes (keyword: CI/CD), which must be corrected in the long term. Only what has actually been sufficiently tested may be released into the field. This requires sandbox procedures, not only for the technology but also for the law. Legislators are called upon to create a framework that favors innovative development environments.
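To illustrate the idea that only what has been tested may be released, here is a minimal sketch of a release gate as it could be wired into a CI/CD pipeline. The file names and required evidence are hypothetical; the point is simply that the pipeline refuses to ship anything for which no documented testing and assessment exists.

```python
# A minimal sketch of a release gate (hypothetical file names and process):
# the pipeline refuses to ship an AI component unless documented evidence
# of testing and assessment is present.
import sys
from pathlib import Path

REQUIRED_EVIDENCE = [
    Path("reports/test_results.xml"),      # automated test run
    Path("reports/model_evaluation.md"),   # documented evaluation on held-out data
    Path("reports/dpia.pdf"),              # data protection impact assessment
]

missing = [str(p) for p in REQUIRED_EVIDENCE if not p.exists()]
if missing:
    print("release blocked, missing evidence:", ", ".join(missing))
    sys.exit(1)

print("all evidence present, release may proceed")
```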
The risks are similar to those of medicines: long-term damage cannot be ruled out and must be prevented at all costs! This means that clear rules must be set for the development of such applications. We will not be able to avoid expanding the testing of such new procedures and possibly even having them examined by the authorities (approval procedures). The trend is moving in this direction, as the EU AI Act has shown, and it will be a major challenge. Why? Today, software development is still a matter for people; in ten years’ time, only machines will be developing software. The last remaining barrier is then the test procedure and approval. As with drug development, a defined approval process, field tests and cohort studies, as well as a comprehensive test regime, are required.

Conclusion: The game of AI, or rather the basic concept of the “AI ball”, has arrived in the world. On the basis of this evolving game system, new variants are constantly being developed that involve more or less risk. Rules are needed to keep the game flowing and to manage the risks for those involved. The basic rules must always be adhered to, and new game variants or equipment must be tested BEFORE they are distributed so that the risks for all participants are known and can be managed. The freedom of individual countries to formulate deviating rules is shrinking. After all, anyone who wants to take part in international championships must adhere to the basic principles! Innovative approval procedures offer an opportunity to speed up the launch of new products and procedures. This requires a law.

Bruno Wildhaber, krm.swiss

 
