AI Compliance Check

Risks of AI processes and regulatory background

AI processes are omnipresent today. The question of whether they should be used no longer arises: they are a reality, and we encounter them more and more frequently in everyday life. With the implementation and use of AI, however, the risks increase for both providers and users. While earlier technology solutions were often developed in a near legal vacuum, this is no longer the case with AI. A large number of regulations, such as the EU AI Act and data protection laws, constrain the development and use of AI. But how can these risks be mitigated and the use of AI optimized? This is where the AI lifecycle comes in: AI processes must be managed and handled properly across their entire lifecycle.

Risk-optimized use of AI

We make your AI processes compliant

EXPERTISE

Key questions on the use of AI

Do I have to comply with legal requirements for my solution, and if so, which ones?

Can I use my solution in other countries without any problems?

Is my solution compliant with data protection law?

What do I need to bear in mind when processing personal data?

Who owns the generated data?

What are the security risks and threats?

How secure does my AI process need to be and what do I need to look out for?

Why is prompt engineering necessary and what are the specific risks?

Do I have to make the procedure comprehensible, and if so, how?

What AI architecture models are there and what risks do I need to be aware of?

Offer

Our offer

We advise you on the development and use of AI-supported processes by:

Evaluating existing concepts and procedures

Carrying out data protection impact assessments

Creating a risk assessment of the planned solution

Developing and describing recommendations for modification

Presenting recommendations to decision-makers

Carrying out follow-up checks in the company and suggesting improvements

Expertise

Our know-how

krm has been a data and risk specialist from the very beginning. We have been dealing with information lifecycle management and the correct handling of data since 2002. Our consultants have years of experience as cyber security experts, auditors, records managers and developers of automated processes. We have developed an AI-based solution for identifying personal data and have implemented automated identification processes. Did you know, for example, that a data protection impact assessment is mandatory in many cases?

Risk

AI lifecycle-specific risk topics

Selecting the right AI tool

AI security, security concepts

Data protection risk assessments

AI lifecycle management

Prompt Engineering

AI version management

AI risk analysis and threat management

Data ownership, IP and data use concepts

Secondary Data Use

Application concepts for AI

Data erasure in the context of AI and GenAI

Transparency and traceability of procedures and processes used (black box vs. white box)

AI ethics

Topics

Consulting topics (excerpt)

Healthcare (sorting and classification)

Creation of patient dossiers, metadata validation and semantic processing of the dossiers. Assignment of metadata from various taxonomies/ontologies. Lifecycle management of data from physical form through to electronic destruction. Ensuring data integrity at both the technical and the content level.
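
To illustrate what metadata validation against a controlled vocabulary can look like in practice, here is a simplified sketch; the field names, taxonomy terms and dossier structure are illustrative assumptions, not the system described above.

```python
# Minimal sketch: validating dossier metadata against controlled vocabularies.
# Field names, taxonomy terms and the dossier structure are illustrative
# assumptions, not the actual system described above.

ALLOWED_TERMS = {
    "document_type": {"discharge_letter", "lab_report", "radiology_report"},
    "department": {"cardiology", "oncology", "radiology"},
}

REQUIRED_FIELDS = {"patient_id", "document_type", "department", "created_at"}


def validate_metadata(dossier: dict) -> list[str]:
    """Return a list of validation issues for one dossier record."""
    issues = []
    for field in REQUIRED_FIELDS - dossier.keys():
        issues.append(f"missing required field: {field}")
    for field, allowed in ALLOWED_TERMS.items():
        value = dossier.get(field)
        if value is not None and value not in allowed:
            issues.append(f"{field}={value!r} is not a term of the controlled vocabulary")
    return issues


if __name__ == "__main__":
    record = {"patient_id": "P-1042", "document_type": "lab_report", "department": "icu"}
    print(validate_metadata(record))
    # -> ['missing required field: created_at',
    #     '"department=\'icu\' is not a term of the controlled vocabulary']
```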

Privacy

Concept and construction of an AI-based system for the recognition and identification of personal data (PID Cockpit). Recognition and marking of critical metadata and implementation of data protection-specific requirements.
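
As a simplified illustration of the underlying idea of personal data recognition, the following sketch flags candidate identifiers with simple patterns; the PID Cockpit itself uses AI-based recognition, and the patterns shown here are assumptions for illustration only.

```python
# Minimal sketch of rule-based personal-data detection: regular expressions
# flag candidate identifiers in free text. The PID Cockpit described above
# uses AI-based recognition; the patterns here are illustrative only.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "swiss_ahv_number": re.compile(r"\b756\.\d{4}\.\d{4}\.\d{2}\b"),
    "phone": re.compile(r"\+\d{2}\s?\d{2}\s?\d{3}\s?\d{2}\s?\d{2}"),
}


def find_personal_data(text: str) -> list[dict]:
    """Return candidate personal-data hits with their category and position."""
    hits = []
    for category, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append({"category": category, "value": match.group(), "span": match.span()})
    return hits


if __name__ == "__main__":
    sample = "Contact Max Muster, max.muster@example.ch, AHV 756.1234.5678.97."
    for hit in find_personal_data(sample):
        print(hit)
```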

Human Resources (risk assessment / data protection impact assessment)
  • Risk analyses and data protection impact assessments of AI-based HR tools for assessing employees and creating appraisals and satisfaction analyses (e.g. using sentiment analysis; see the sketch after this list).
  • Comprehensive data protection impact assessments of systems that use AI processes.
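
The following simplified sketch illustrates the kind of sentiment scoring such HR tools may perform and whose risks we assess; the word lists and scoring rule are illustrative assumptions, not any specific product.

```python
# Minimal sketch of lexicon-based sentiment scoring on employee feedback,
# to illustrate the kind of processing whose data protection risks are
# assessed above. Word lists and scoring rule are illustrative assumptions.

POSITIVE = {"good", "great", "helpful", "supportive", "motivated"}
NEGATIVE = {"bad", "stressful", "unfair", "overworked", "frustrated"}


def sentiment_score(comment: str) -> float:
    """Score a free-text comment between -1 (negative) and +1 (positive)."""
    words = [w.strip(".,!?").lower() for w in comment.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total


if __name__ == "__main__":
    feedback = "The team is supportive, but the workload is stressful and unfair."
    print(round(sentiment_score(feedback), 2))  # -> -0.33
```
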
Banking (AI-supported evaluation of customer information)

Digitized or digitally recorded customer information from ID cards, official forms and online forms can be evaluated and validated with AI. This enables largely automated processes for standard cases. The necessary control system is sophisticated – we know how to do it.
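
A simplified sketch of such a control step might look as follows; the fields, checks and confidence threshold are illustrative assumptions, not our actual control system.

```python
# Minimal sketch of a control step after AI-based extraction of customer
# data from ID documents: standard cases pass automatically, everything
# else is routed to human review. Fields, checks and the confidence
# threshold are illustrative assumptions, not the control system above.
from datetime import date


def route_extraction(extracted: dict, confidence: float) -> str:
    """Return 'auto' for standard cases, 'human_review' otherwise."""
    required = ("surname", "given_name", "date_of_birth", "document_number")
    if confidence < 0.9:
        return "human_review"
    if any(not extracted.get(field) for field in required):
        return "human_review"
    try:
        birth = date.fromisoformat(extracted["date_of_birth"])
    except ValueError:
        return "human_review"
    if birth > date.today():
        return "human_review"
    return "auto"


if __name__ == "__main__":
    record = {
        "surname": "Muster",
        "given_name": "Max",
        "date_of_birth": "1985-04-12",
        "document_number": "C1234567",
    }
    print(route_extraction(record, confidence=0.97))  # -> auto
```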

Data cleansing, use of the semantic middleware PoolParty

We have been a partner of the Semantic Web Company for several years and advise on the use of PoolParty (semantic middleware). Here, too, LLMs are used for corpus analysis and taxonomy/ontology creation, for example. The specific use case involves data deletion/cleansing.
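
The following generic sketch illustrates the idea of taxonomy-driven data cleansing; it does not use PoolParty's API, and the terms, tags and retention rule are illustrative assumptions.

```python
# Generic sketch of taxonomy-driven data cleansing: records whose tags match
# deprecated taxonomy terms or whose retention period has expired are
# flagged for deletion. This illustrates the idea only and does not use
# PoolParty's actual API; terms, tags and retention rules are assumptions.
from datetime import date, timedelta

DEPRECATED_TERMS = {"legacy_crm_export", "newsletter_optout_2015"}
RETENTION = timedelta(days=10 * 365)


def records_to_delete(records: list[dict], today: date) -> list[str]:
    """Return the ids of records that are candidates for deletion."""
    candidates = []
    for record in records:
        expired = today - record["created"] > RETENTION
        deprecated = bool(DEPRECATED_TERMS & set(record["tags"]))
        if expired or deprecated:
            candidates.append(record["id"])
    return candidates


if __name__ == "__main__":
    data = [
        {"id": "r1", "created": date(2012, 3, 1), "tags": ["customer"]},
        {"id": "r2", "created": date(2023, 6, 1), "tags": ["legacy_crm_export"]},
    ]
    print(records_to_delete(data, today=date(2025, 1, 1)))  # -> ['r1', 'r2']
```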

Prepared

Checklist

The checklist shows ten important questions on the legally compliant use of AI technologies:

  1. Is particularly sensitive personal data involved? Have you carried out a data protection impact assessment?
  2. Is your data correctly anonymized?
  3. Is the data basis known and the quality assured?
  4. Is the training data sufficient and correct?
  5. Are the AI process and the AI algorithms comprehensible and transparent to the users/data subjects?
  6. Is it possible to exclude/limit discrimination, prejudice and false results?
  7. Are control processes established so that the requirements can be checked (e.g. use of the dual control principle for “human in the loop” data checks)?
  8. Does your system fall into a regulated risk class (under the AI Act)?
  9. How do you check that you are not infringing any copyrights by using the data?
  10. Do you know the processing chains (who is involved in processing the data)?

CONTACT US NOW

Call: +41 44 888 10 11

or by email to info@krm.swiss