The biggest danger of AI is ‘deepfakes’, says Microsoft

Explosion of Artificial Intelligence: Benefits and Concerns

Recent months have seen a surge in the use of Artificial Intelligence (AI) that has taken many by surprise. While generative AI tools such as ChatGPT and Midjourney mark a paradigm shift, some anticipate that the technology will bring more problems than advantages. The threat of malicious actors using deepfakes to run disinformation campaigns has been a persistent worry for regulators and for companies such as Microsoft.

Microsoft’s President’s Concerns

Brad Smith, the president of Microsoft, testified before US legislators and highlighted his biggest concern about the proliferation of deepfakes. Deepfakes, also called synthetic media, are AI-generated videos or images that imitate the look and sound of a person. Smith warned that countries such as Russia, China, and Iran would likely exploit AI, and he called on lawmakers to create regulations addressing such forgeries, particularly their use in foreign influence operations.

Regulatory Responses

In response to concerns like Smith’s, the US government has taken steps to establish standards and regulations against the malicious use of AI. The National Institute of Standards and Technology (NIST) has published guidelines to help organizations identify deepfakes. The US Senate has also introduced the DEEPFAKES Accountability Act (Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act), a bill that would create criminal penalties for individuals who create deepfakes with the intent to deceive.

Conclusion

The explosion of AI brings both benefits and concerns. While the technology has the potential to revolutionize many aspects of life, malicious actors may use it to spread disinformation. Microsoft’s president, Brad Smith, has urged lawmakers to regulate against such abuse, and the US government has begun creating standards and rules in response.