Landscape of Trusted Information Standards

With recent AI tools such as deepfakes and OpenAI's GPT-x models, it is possible to automate the fabrication of complex digital content at large scale and to create large numbers of bots that convincingly mimic human behaviour, e.g. posting product reviews, engaging in political and societal discourse, promoting stocks, or acting as influencers and interacting with followers. Such tools could even mimic specific humans, e.g. making a head of state appear to deliver provocative statements or even declare war.

The effects are difficult to undo, even if the content is later exposed as fabricated, and the consequences pose a fundamental challenge to European society and business.

While the issue of disinformation is already being addressed on multiple levels (including an EU High-Level Expert Group (HLEG) in 2017/18 and various content-marking standardisation efforts), the additional dimension of automated fabrication of disinformation has not yet been acted on. Most countermeasures require standards: standards for tracing information back to its source or creator, standards for bot-resistant pseudonymous identities, and standards or protocols for assigning and tracking trust.
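To illustrate what a provenance-tracing standard might involve, the sketch below binds a creator identity and timestamp to a hash of a piece of content, signs the resulting record with an Ed25519 key, and verifies it on the consumer side. It is a minimal, hypothetical example built on the Python `cryptography` library; the record fields, the `did:example:alice` identifier and the overall structure are illustrative assumptions, not part of any existing standard.

```python
# Minimal sketch of content provenance via digital signatures.
# The record format is hypothetical and illustrative only.
import json
import hashlib
from datetime import datetime, timezone
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def make_provenance_record(content: bytes, creator_id: str) -> bytes:
    """Bind a creator identity and timestamp to a hash of the content."""
    record = {
        "creator": creator_id,  # a pseudonymous identifier is sufficient
        "created": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(record, sort_keys=True).encode()

# Creator side: sign the provenance record with a private key.
private_key = ed25519.Ed25519PrivateKey.generate()
content = b"Example article text ..."
record = make_provenance_record(content, creator_id="did:example:alice")
signature = private_key.sign(record)

# Consumer side: verify the record against the creator's public key
# before trusting the content's claimed origin.
public_key = private_key.public_key()
try:
    public_key.verify(signature, record)
    print("Provenance record verified:", json.loads(record))
except InvalidSignature:
    print("Record rejected: signature does not match.")
```

Note that trust here attaches to the signing key rather than to a legal name, which is why such a scheme can support bot-resistant pseudonymous identities without sacrificing privacy.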

This report provides an overview of recent and imminent technological capabilities and their impact on democracy, business and legal systems. It explores sustainable countermeasures; its main conclusion is that detection tools for fabricated persons and content are not sufficient on their own and must be supplemented by standards for handling the identity and trustworthiness of information sources without sacrificing privacy.

The report concludes with concrete recommendations to European standardisation bodies, policymakers and other stakeholders.

Read the full Landscape of Trusted Information Standards report on Zenodo.