Infodemia, Fake News and Medicine: Science and the Quest for Truth

Besides fighting the COVID-19 pandemic, there is another critical problem that medicine and science must face at this crucial moment: the spread of inaccurate information online. By the end of March 2020, more than 2,100 Iranians had been poisoned by oral ingestion of methanol. Iran, as an Islamic country, has severe restrictions on alcohol, but in this case patients reported that social media messages had suggested they could prevent infection by SARS-CoV-2 by drinking alcohol. Almost 900 patients poisoned by illicit alcohol were admitted to the Intensive Care Unit (ICU), and 296 died (a fatality rate of 13.5%).1

In the past, news was produced and distributed by a few organizations or private companies; today, in the age of the Internet and social media, anyone can broadcast news online. Fake news is best defined as deliberately false information spread via social or conventional media.2 Fake medical news can mislead in order to damage an organization and/or a person. Another problematic motivation behind a fake medical report is to profit from some specific food, supplement, or treatment.

WHO Director-General Tedros Adhanom Ghebreyesus recently said: "We are not just fighting an epidemic; we are fighting an infodemic." Knowing that stressful times such as a pandemic are associated with an overload of information and misinformation, immediately after COVID-19 was declared a Public Health Emergency of International Concern a platform to share tailored information with specific target groups was launched: the WHO Information Network for Epidemics (EPI-WIN).3 The infodemic, the global epidemic of misinformation, can have severe consequences for healthcare and for society. Content created on the web has the potential to provide the right information and to change people's behavior positively.
Still, it is also capable of generating opinions and social behaviors that may put health in danger.4 The first and most consequential piece of misinformation in public health is the misconception that the measles, mumps, rubella (MMR) vaccine causes autism, created by a fraudulent article published in The Lancet.5 This misinformation was widely disseminated on social media and, combined with conspiracy theories and other beliefs, strengthened an anti-vaccination movement. As a consequence, by 2020 many countries, including the United Kingdom, Greece, Venezuela, and Brazil, had lost their measles elimination status.6,7

There are examples of fake news in cardiology too. Social media disseminated much misinformation about a potential oncogenic effect of antihypertensive drugs, driving many patients to stop taking medications of proven benefit. Battistoni et al. demonstrated that there is no support for promoting or encouraging a ban on antihypertensive drugs because of a possible risk of neoplasms.8 O'Connor, taking heart failure fake news as an example, makes a strong argument calling on cardiologists to firmly oppose exaggerated therapies, untested entities, unproven vaccines, and nutraceuticals.9

Echoing the quote of Jonathan Smith, fake news diffuses significantly farther, faster, deeper, and more broadly than the truth.10 Robots accelerate the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.10 This is particularly important, as Pennycook et al. identified that previous contact with a piece of information (familiarity) increases the feeling that it is true. Furthermore, they also demonstrated that repetition amplifies this feeling of "illusory truth".11 How can we fight against this threat (Figure 1)?
A promising approach is to rely on computational methods to detect fake news and misinformation. Most techniques to tackle this problem come from the field of Artificial Intelligence (AI), mainly using Natural Language Processing (NLP) and Machine Learning (ML) methods. To automatically classify a piece of text as fake news or not, several ML and NLP techniques are of aid, including feature extraction,12 social context modeling,13,14 knowledge-based systems,15 and sentiment analysis,16 among others.
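As a minimal illustrative sketch of the kind of supervised text classification these methods perform (a toy bag-of-words Naive Bayes classifier written for this editorial, not the implementation of any system in the cited work):

```python
import math
import re
from collections import Counter, defaultdict

def tokenize(text):
    # Lowercase and split into word tokens (a deliberately simple tokenizer).
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayesFakeNewsClassifier:
    """Toy bag-of-words Naive Bayes classifier with labels such as 'fake'/'real'."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> number of documents
        self.vocab = set()

    def train(self, documents):
        # documents: iterable of (text, label) pairs from an annotated dataset.
        for text, label in documents:
            self.label_counts[label] += 1
            for word in tokenize(text):
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, text):
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # Log prior plus log likelihood with add-one (Laplace) smoothing.
            score = math.log(self.label_counts[label] / total_docs)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for word in tokenize(text):
                score += math.log((self.word_counts[label][word] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

In practice, production systems use far richer features and models; the sketch only shows the shape of the task: learn word statistics from annotated examples, then score unseen text against each label.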
Feature extraction is particularly important for providing useful information to ML methods. Features can be gathered either directly from the text or from external sources. Examples include 1) title representativeness; 2) quotes from external sources; 3) presence of citations of other organizations and studies; 4) use of logical fallacies; 5) emotional tone of the article; 6) inference consistency, e.g., confusing association with causation or generalizing a fact into an incorrect conclusion; 7) originality; 8) credibility of citations; 9) number of ads; 10) degree of confidence in the authors; and 11) number of social calls, among others. ML algorithms can use some of these features to fit a classifier model able to distinguish between fake and truthful content. The classifier learning process uses a previously annotated dataset as a training set, where the examples are the articles and the annotation indicates whether each one is fake or not. In some cases, it is necessary to pre-process the data before extracting the features, using, for example, tokenization (dividing the text into smaller parts called tokens), lowercasing, removal of common words that lack proper meaning (stop words), sentence segmentation, etc.12

Besides relying on feature engineering and extraction, recent methods based on Deep Learning take the content of the texts into account directly, in an end-to-end fashion. For example, Fang et al. developed a model that judges the authenticity of news with a precision rate of 95.5% based only on their content, using convolutional neural networks and a self multi-head attention mechanism.17 Other promising AI approaches consist of analyzing features of the social network in which the possibly fake information circulates. This scenario is relevant because it is increasingly common to use non-human accounts, or bots, to create fake news and spread it through a social network.15
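The pre-processing and feature-extraction steps above can be sketched as follows; the stop-word list and the three features here are hypothetical illustrations (crude proxies for "title representativeness", "emotional tone", and "quotes from external sources"), not any published feature set:

```python
import re

# A hypothetical, tiny stop-word list; real systems use much larger ones.
STOP_WORDS = {"the", "a", "an", "is", "are", "of", "and", "to", "in", "that"}

def preprocess(text):
    """Lowercase, tokenize, and remove stop words, as described in the text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]

def extract_features(title, body):
    """Crude illustrative features; real feature sets are far richer."""
    body_tokens = preprocess(body)
    title_tokens = set(preprocess(title))
    return {
        # Title representativeness: share of title words that appear in the body.
        "title_overlap": (sum(1 for t in title_tokens if t in body_tokens)
                          / max(len(title_tokens), 1)),
        # Emotional tone proxy: density of exclamation marks in the body.
        "exclamation_density": body.count("!") / max(len(body), 1),
        # External-source quoting proxy: crude count of quotation marks.
        "quote_marks": body.count('"'),
    }
```

A feature dictionary like the one returned here would then be fed, alongside the annotated labels, to a classifier of the kind described above.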
Thus, analyzing social network users' profiles, for example, can provide useful information for fake news detection. Furthermore, post-based features focus on analyzing how people express their opinions about fake news through social media posts. Users form different networks on social media in terms of interests, topics, and relations; network-based features evaluate the patterns of the networks to which a user belongs.
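As a hypothetical sketch of such network-based features, the snippet below measures how widely a post cascades through a resharing graph (a plain-dict adjacency structure invented for illustration, not any platform's API):

```python
from collections import deque

def cascade_size(shares, seed_user):
    """Breadth-first count of users reached when a post spreads along 'shares'.

    shares: dict mapping a user to the list of users who reshared from them
    (a hypothetical resharing graph)."""
    seen = {seed_user}
    queue = deque([seed_user])
    while queue:
        user = queue.popleft()
        for neighbor in shares.get(user, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return len(seen)

def network_features(shares, seed_user):
    # Two simple per-post network features of the kind described above.
    return {
        "cascade_size": cascade_size(shares, seed_user),
        "direct_reshares": len(shares.get(seed_user, [])),
    }
```

Features like these, combined with profile- and post-based ones, give a classifier signals that text content alone cannot provide, e.g., bot-driven cascades that grow unusually fast or wide.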
One crucial strategy to avoid the dissemination of fake news is the provision of evidence-based information to the general public by reliable organizations and institutions such as WHO, PAHO, national health authorities, and academic societies.3 Linked to this is the creation of health content that is accessible to laypeople, with greater collaboration between journalists and scientists to minimize errors in communication.6 Finally, all physicians and healthcare providers should always offer corrective information when confronting fake news. This strategy has proved useful, and repeating corrections also appears to be successful in reducing the effect of misinformation.6 Applying multiple checks to social media information, detecting and limiting the growth of false information, and recognizing profit-related motivations are vital for managing fake medical information.2

As in the successful anti-smoking strategy, government participation is essential in the fight against fake news. Combating false information must be seen not as a momentary action but as a continuous effort. Thus, creating regulatory support, implementing educational actions, and paying special attention to children and young adults are essential.18

In summary, we are facing a new and unprecedented situation as the COVID-19 pandemic spreads. Fake news can lead to particularly serious health events. All scientists, physicians, and healthcare collaborators must work together to fight fake medical misinformation. This fight must not be lost.