The new program is the latest attempt to automate the detection of threats such as foreign operatives who try to sway public opinion by posting false or malicious “news” stories, which can influence elections and other important events.
Speaking to the Washington Post, Tessa Lyons, the product manager in charge of fighting misinformation at Facebook, said the system ranks users on a scale of zero to one, though the score is not meant to be a definitive indicator of trustworthiness.
Critics are concerned that users have no apparent way to obtain their rating. At present, only Facebook’s misinformation team uses the measurement.
The report states the trustworthiness rating will be one of many metrics Facebook uses to improve the site.
The social network will also look at which news outlets users consider trustworthy, as well as which users tend to flag other people’s content as problematic.
Since the first reports of fake news on the platform, Facebook has taken numerous steps to combat false information on its site. The social network has been accused of letting bad actors use the platform to influence elections, and it is also working to curb the problem in other regions, including Brazil, India and the E.U.