20 November 2019
Federal Ministry for Digital and Economic Affairs, Vienna, Austria

Addressing Misinformation in Social Media: Perceptions of Artificial Intelligence Tools

The second Co-Inform project workshop was jointly organised by IIASA and the Austrian Federal Ministry for Digital and Economic Affairs. The goal of the workshop was to discuss prototypes of artificial intelligence tools to detect and prevent misinformation, such as MisinfoMe, developed within the Co-Inform project.

© Alexey Zatevakhin | Dreamstime.com

Feedback from stakeholders will help to further develop this tool, as well as two other tools currently under development within the project: a browser plug-in for citizens and a dashboard for fact-checking journalists and policymakers. The browser plug-in will provide users with misinformation ratings of social media posts and corrective information collected from several fact-checking sources. The tool crawls a user's Twitter timeline, labels each tweet, blurring tweets identified as potentially misinforming and explaining the reason for the blurring, and displays the tweet's credibility score based on the thresholds of the aforementioned MisinfoMe approach; a simplified sketch of this idea is given below. The dashboard for fact-checking journalists and policymakers will aim to detect, track, and predict the spread and evolution of misinformation on the web.
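The following sketch illustrates, under stated assumptions, how a plug-in of this kind might rate a tweet: it looks up the URLs the tweet links to against fact-checker assessments, averages their credibility, and blurs the tweet when the score falls below a threshold. It is not the Co-Inform implementation; all names, scores, and the threshold value are illustrative.

```python
# Minimal sketch (not the actual Co-Inform code) of rating a tweet by the
# URLs it shares. Scores, URLs, and the blur threshold are assumptions.

from dataclasses import dataclass

# Hypothetical fact-checker assessments: URL -> credibility in [0, 1]
FACT_CHECK_SCORES = {
    "http://example.com/debunked-story": 0.1,
    "http://example.com/verified-report": 0.9,
}

BLUR_THRESHOLD = 0.5  # assumed cut-off below which a tweet is blurred

@dataclass
class TweetRating:
    credibility: float
    blurred: bool
    reason: str

def rate_tweet(urls: list[str]) -> TweetRating:
    """Average the fact-checker scores of the URLs a tweet links to."""
    scores = [FACT_CHECK_SCORES[u] for u in urls if u in FACT_CHECK_SCORES]
    if not scores:
        return TweetRating(1.0, False, "No reviewed URLs found")
    credibility = sum(scores) / len(scores)
    if credibility < BLUR_THRESHOLD:
        return TweetRating(credibility, True,
                           "Links to content rated as misinforming by fact-checkers")
    return TweetRating(credibility, False, "Linked content rated as credible")

print(rate_tweet(["http://example.com/debunked-story"]))
```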

During the workshop, participants discussed why such artificial intelligence tools should be developed, whether to build trust, to make people think twice while reading news items, or for reasons of transparency. These reasons were ranked using the DecideIT multi-criteria decision analysis tool (a simplified illustration of such a ranking follows below). The participants also discussed the functionalities of the tools and took part in decision-making experiments. The goal was to collect the perceptions of three groups of stakeholders, namely journalists/fact-checkers, citizens, and policymakers, on the various functionalities and features of artificial intelligence tools for misinformation detection and prevention.
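As a rough illustration of what a multi-criteria ranking involves, the sketch below scores hypothetical alternatives against weighted criteria and sorts them. DecideIT itself supports interval-valued weights and more sophisticated evaluation; the weights, scores, and alternatives here are invented purely to convey the idea.

```python
# Simplified weighted-sum illustration of multi-criteria ranking.
# Weights, scores, and alternatives are made up for illustration only.

CRITERIA_WEIGHTS = {"building trust": 0.4, "think twice": 0.35, "transparency": 0.25}

# Hypothetical stakeholder scores (0-10) per alternative and criterion
ALTERNATIVE_SCORES = {
    "browser plug-in": {"building trust": 7, "think twice": 9, "transparency": 6},
    "dashboard":       {"building trust": 8, "think twice": 5, "transparency": 9},
}

def rank(alternatives: dict, weights: dict) -> list[tuple[str, float]]:
    """Rank alternatives by their weighted-sum score across all criteria."""
    totals = {
        name: sum(weights[c] * s for c, s in scores.items())
        for name, scores in alternatives.items()
    }
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

print(rank(ALTERNATIVE_SCORES, CRITERIA_WEIGHTS))
```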

Photos from the workshop


How does the MisinfoMe tool work?

MisinfoMe assesses a profile (on Twitter, Facebook, or a website) on the basis of published reports, for example, when a fact-checker has reviewed a tweet from the selected profile or when the profile appears on public lists of misinforming accounts. The assessment relies on the URLs the profile has shared and on fact-checkers' assessments of those URLs.
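A minimal sketch of this idea, assuming invented URLs and verdicts rather than MisinfoMe's actual data sources or aggregation rule, is to collect the URLs a profile has shared, look each one up in a table of fact-checker verdicts, and summarise the result as a profile-level credibility score.

```python
# Illustrative sketch (not MisinfoMe's implementation): assess a profile by
# aggregating fact-checkers' verdicts on the URLs it has shared.
# URLs, verdict labels, and the aggregation rule are assumptions.

from collections import Counter

# Hypothetical fact-checker verdicts per URL
URL_VERDICTS = {
    "http://example.org/false-claim": "not_credible",
    "http://example.org/accurate-article": "credible",
}

def assess_profile(shared_urls: list[str]) -> dict:
    """Count fact-checker verdicts over the URLs a profile has shared."""
    verdicts = Counter(URL_VERDICTS.get(url, "unknown") for url in shared_urls)
    reviewed = verdicts["credible"] + verdicts["not_credible"]
    score = verdicts["credible"] / reviewed if reviewed else None
    return {"verdict_counts": dict(verdicts), "credibility_score": score}

print(assess_profile([
    "http://example.org/false-claim",
    "http://example.org/accurate-article",
    "http://example.org/unreviewed-post",
]))
```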




Last edited: 25 November 2019

CONTACT DETAILS

Nadejda Komendantova

Research Group Leader and Senior Research Scholar Cooperation and Transformative Governance Research Group - Advancing Systems Analysis Program

CONTACT DETAILS

Nikita Strelkovskii

Research Scholar Cooperation and Transformative Governance Research Group - Advancing Systems Analysis Program

Research Scholar Exploratory Modeling of Human-natural Systems Research Group - Advancing Systems Analysis Program

RESEARCH PROJECT

Co-Inform

International Institute for Applied Systems Analysis (IIASA)
Schlossplatz 1, A-2361 Laxenburg, Austria
Phone: (+43 2236) 807 0 Fax: (+43 2236) 71 313