Bern, 27.09.2019 – On September 27, the Federal Council adopted the report “Reducing regulatory costs. Relaxation of documentation and archiving regulations”. In addition to the federal government’s ongoing efforts to reduce duplication in documentation and archiving, the Federal Council presents in its report possible relaxations and simplifications in 29 regulations to reduce the administrative burden on companies.
On March 7, 2017, the National Council adopted Postulate 15.3122 de Courten (Reduction of regulatory costs. Relaxation of documentation and archiving regulations). The postulate instructs the Federal Council to examine how companies can be relieved of official documentation and archiving regulations through coordination and data exchange among the responsible cantonal and federal authorities.

To obtain an overview of existing documentation and archiving regulations for companies at the federal level, the State Secretariat for Economic Affairs SECO conducted a survey of the departments in October 2017. At the same time, the survey asked for indications of any potential for reducing regulatory costs. Within the scope of the survey, the federal offices reported 194 documentation and archiving regulations, although no claim is made to completeness.

In addition to the results of the survey, the report also describes the ongoing efforts of the Confederation to reduce duplication in documentation and archiving. The Federal Council’s data policy for Switzerland can make a contribution here, as can joint master data management, so that repeated entry of data is no longer necessary (once-only principle). The implementation of Motion 16.4011 (Digitization. No duplications in data collection) also pursues the goal of relieving companies by reducing duplication.
Significance: The report is based on Postulate 15.3122 de Courten, which “instructs the Federal Council to examine how companies can be relieved of official documentation and archiving requirements through coordination and data exchange among the responsible cantonal and federal authorities.”
The Federal Council defined the mandate somewhat more broadly and commissioned an analysis within the federal administration. The offices were asked to examine the potential for optimization and submit proposals for simplifying official communications (both internal and external). This resulted in a collection of regulations that, in the offices’ view, could be adapted.
It is not surprising that companies will benefit only to a very limited extent, since 95% of business transactions can already be conducted electronically thanks to the GeBüV (the Swiss Business Records Ordinance).
Only in the area of taxes is there still a small potential for improvement, and the same applies to sector-specific requirements; the healthcare sector was explicitly mentioned here.
However, master data management is likely to offer the greatest potential. Multiple entry of the same data is one of the central problems in the federal administration; it leads to quality problems and causes enormous maintenance effort. The Federal Council is therefore right to play the “once-only” card. Excerpt from the original text:
Joint management of master data means that the public and businesses need to provide their data to the administration only once. They also enjoy greater transparency over their personal data, as they can track which authority has requested it. For their part, public authorities can provide services faster, more efficiently, and with better quality. Data can be exchanged in high quality and without media discontinuity. Once-only also enables close knowledge sharing and collaboration between administrative agencies.
We would agree with that without reservation. What applies to the federal government also applies to any other organization. But implementing the once-only principle is a big challenge. In our view, master data cleansing can only be done with modern AI tools; graph databases and semantic technologies are a must. We describe how such master data can be cleansed, among other things, in our white paper “Data cleansing with semantic technologies”.