Context and case study
AI is making inroads and is keeping us busy. For several years now, krm has been using semantic technologies for data cleansing with the MATRIO Method®, which would be inconceivable without AI mechanisms. We addressed these topics in detail in the Information Governance Guide 2021 (p. 41 ff.). All page references in this article refer to the current guide.
The question arises as to what impact AI technologies will have on the testing and certification of products and systems. The following questions are at the forefront:
- Can such systems be tested and certified at all?
- What adjustments to the test procedures are necessary?
- What must the product provider pay attention to?
- What must the user pay attention to?
We use a fictitious but practical example here for illustration:
A product provider offers an app for scanning expense receipts, which are to be attached as supporting documents to an expense report. Expense reporting is automated: the captured data is filled in directly and processed further. The assignment to the correct expense category is done automatically (by AI). The system is marketed as self-learning.
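To make the case study concrete, here is a minimal sketch of such a category assignment. All names, fields, and keyword rules are invented for illustration; the hard-coded rules stand in for the provider's actual self-learning classifier.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, simplified model of the receipt-scanning app described above.
@dataclass
class Receipt:
    vendor: str
    amount: float
    vat_number: Optional[str]
    text: str  # full OCR text of the scanned receipt

# Keyword rules as a stand-in for the self-learning classifier; a real
# system would learn these associations rather than hard-code them.
CATEGORY_KEYWORDS = {
    "travel": ["taxi", "train", "flight"],
    "meals": ["restaurant", "lunch", "dinner"],
    "lodging": ["hotel", "overnight"],
}

def assign_category(receipt: Receipt) -> str:
    text = receipt.text.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "uncategorized"  # falls through to manual review

receipt = Receipt("Hotel Alpenblick", 240.0, "CHE-123.456.789",
                  "Hotel Alpenblick, 1 overnight stay, CHE-123.456.789 MWST")
print(assign_category(receipt))  # -> "lodging"
```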
Can such systems be used in a legally compliant manner?
Fortunately, the legislator (at least in Switzerland) has not set any technology-specific framework conditions that would prohibit the use of AI. The situation is different in Germany, for example, where the GoBD specify very precisely what is permissible and what is not (see, for example, the GoBD's extensive rules on document scanning, p. 197). We take the Swiss situation as our starting point here. As mentioned, AI tools can be used without restriction, provided that the general audit principles are observed and the central compliance requirements are met.
Can such systems be tested or certified?
Here, too, there are basically no restrictions. As always, we have to say goodbye to the notion of blanket "auditability", because it does not exist, even in AI-supported systems. What is audited is:
- the product and its technical functions to ensure compliance (p. 138, 165 ff.) = product certification, and/or
- its use in a concrete application scenario = application testing.
Products can certainly be tested without a concrete application scenario. This is the normal case with all product tests, because appropriate use of the product may be assumed. Only in the case of DMS software has the belief apparently taken hold that this is not possible. The technical functionalities, e.g. semantic concept recognition based on corpora (e.g. for the recognition of index terms), can be tested and evaluated at any time independently of the use case, as the sketch below illustrates.
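As an illustration of how such a functionality can be exercised in isolation, here is a minimal sketch of corpus-based key-term recognition using TF-IDF weighting. It assumes scikit-learn is available; the corpus and parameters are illustrative only and are not taken from any specific product.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Tiny illustrative corpus of receipt texts.
corpus = [
    "Taxi from airport to hotel, paid cash",
    "Business lunch with client at restaurant",
    "Train ticket Zurich to Bern, second class",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(corpus)

# The highest-weighted terms per document approximate the recognized
# index terms; this behavior can be tested without any concrete use case.
terms = vectorizer.get_feature_names_out()
for doc_id, row in enumerate(tfidf.toarray()):
    top = sorted(zip(terms, row), key=lambda t: t[1], reverse=True)[:3]
    print(doc_id, [term for term, score in top if score > 0])
```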
What are the requirements for a successful audit?
As with all systems that we or other auditors audit for compliance, minimum basic requirements must be met before an audit can be performed. The linchpin is proper documentation, and a distinction must be made here: while the specifications are central for software products, the user must provide sufficient procedural documentation (p. 251 ff.). The latter is not witchcraft, but with AI the question naturally arises: what can the user document at all, and how can they support the auditor so that the auditor can issue a positive report?
What adjustments to the test procedures are necessary?
None; the test procedure corresponds to the established procedure (p. 277 ff.).
What must the product provider pay attention to?
As mentioned, new requirements for process transparency are emerging. Everything we have listed here so far can be summarized under the term Explainable AI (XAI). The aim is to make automated behavior comprehensible.
The requirements for XAI algorithms can be restated as follows: the results must be comprehensible to a subject matter expert (the so-called white-box approach).
- Transparency: The model for extracting data is described and can be controlled and adapted by the developer.
- Interpretability: A human can understand and comprehend the machine learning model.
- Explainability: The results of the application to the specific application area are comprehensible and have been shown to be correct (this includes the first two factors, transparency and interpretability; see the sketch below).
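A toy sketch of what a white-box approach can look like in code: a decision tree whose complete rule set can be rendered in human-readable form (transparency, interpretability) and whose individual predictions can be traced back to those rules (explainability). The features, data, and categories are invented for illustration and assume scikit-learn is available.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Features per receipt: [amount_chf, is_weekend, has_vat_number]
X = [[35, 0, 1], [120, 1, 0], [15, 0, 1], [480, 0, 1]]
y = ["meals", "lodging", "travel", "lodging"]

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Transparency/interpretability: the full model in human-readable form.
print(export_text(model,
                  feature_names=["amount_chf", "is_weekend", "has_vat_number"]))

# Explainability: a concrete prediction that can be traced along the
# printed rules by a subject matter expert.
print(model.predict([[40, 0, 1]]))
```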
What does this mean in concrete terms for our example?
- The data basis for the extraction of the key terms is known and can be adjusted. This also applies to the automated recognition of key terms (corpus analysis). It can be controlled when and how the analysis basis is expanded, e.g. by including other data sources or by networking.
- It is comprehensible how data fields are filled in automatically (e.g. recognition of a VAT number; see the sketch below). In combination: the provider can fully explain why a certain receipt is assigned to the internal expense category … or not.
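For the deterministic part of such field filling, traceability is straightforward. The following sketch recognizes a Swiss VAT/UID number (format CHE-123.456.789) in OCR text; the pattern is simplified for illustration and is not the provider's actual extraction logic.

```python
import re
from typing import Optional

# Simplified pattern for a Swiss UID/VAT number, optionally followed by
# the language-specific VAT suffix (MWST/TVA/IVA).
VAT_PATTERN = re.compile(r"CHE-\d{3}\.\d{3}\.\d{3}(?:\s?(?:MWST|TVA|IVA))?")

def extract_vat_number(ocr_text: str) -> Optional[str]:
    match = VAT_PATTERN.search(ocr_text)
    return match.group(0) if match else None

print(extract_vat_number("Hotel Alpenblick, CHE-123.456.789 MWST, Total CHF 240.00"))
```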
What must the user pay attention to?
In our case study, the user needs a comprehensive, i.e. multi-level, protection concept. This is most easily compared with automated invoice processing (p. 307). A control system with multi-level checks and balances is mandatory. Again, this is not new, but manual controls become even more important when AI is used. The procedural documentation remains the most important element for ensuring traceability (p. 251).
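What such multi-level checks and balances might look like at the routing level, sketched minimally: it assumes the classifier returns a confidence score, and all thresholds, levels, and messages are illustrative.

```python
def route(category: str, confidence: float, amount: float) -> str:
    # Level 1: automated plausibility check on the AI classification.
    if confidence < 0.8:
        return "manual review: low classifier confidence"
    # Level 2: business rule, e.g. high amounts always need a second look.
    if amount > 500:
        return "manual approval: amount above threshold"
    # Level 3 (not shown): periodic random sampling as an ongoing control.
    return f"auto-booked to '{category}'"

print(route("meals", 0.95, 42.50))    # -> auto-booked to 'meals'
print(route("lodging", 0.55, 300.0))  # -> manual review: low classifier confidence
```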
Incidentally, the legal requirement for XAI also follows from the regulations on automated profiling (data protection) and automated decision-making. Both the GDPR and the Swiss Data Protection Act contain corresponding provisions.