
Social Media's Fight Against AI-Generated Cybersecurity and Medical News

Social media websites such as Facebook and Twitter have always been home to misinformation. Most of it, flagged and unflagged, has been aimed at the general public. However, imagine the possibility of misinformation (information that is false or misleading) spreading in scientific and technical fields like cybersecurity, public safety, and medicine.


There is growing concern about misinformation spreading in these critical fields as a result of common biases and practices in publishing scientific literature, even in peer-reviewed research papers.


FastCompany found that it's possible for AI systems to generate false information in critical fields like medicine and defense that is convincing enough to fool even experts.


Typical misinformation often aims to harm the reputation of companies or public figures. Misinformation within expert communities, by contrast, has the potential for far more alarming outcomes, such as delivering incorrect medical advice to doctors and patients, which could put lives at risk.


To test this threat, FastCompany studied the impact of spreading misinformation in the cybersecurity and medical communities. They used AI models known as transformers to generate false cybersecurity news and COVID-19 medical studies, then presented the fabricated cybersecurity reports to cybersecurity experts. The transformer-generated misinformation was able to fool those experts.
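For readers curious about the mechanics, the sketch below shows how little effort it takes for a general-purpose transformer language model to continue a technical-sounding prompt. The source does not say which model or tooling the study used; this example assumes the Hugging Face transformers library and the publicly available GPT-2 model purely for illustration.

```python
# A minimal sketch of transformer text generation, assuming the
# Hugging Face `transformers` library and the public GPT-2 model.
# (The study's actual model and settings are not specified in the source.)
from transformers import pipeline

# Load a small, general-purpose transformer language model.
generator = pipeline("text-generation", model="gpt2")

# Seed the model with the opening of a plausible-sounding security advisory;
# it will produce a fluent continuation with no regard for factual accuracy.
prompt = "A newly discovered vulnerability in a popular VPN client allows"
outputs = generator(prompt, max_new_tokens=60, num_return_sequences=1)

print(outputs[0]["generated_text"])
```

The point of the sketch is that fluency and factual accuracy are entirely separate: the model optimizes for plausible-sounding text, which is exactly what makes machine-generated misinformation hard for even experts to spot.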


