WIRED reported on May 28 that University of Washington professor Bo Zhao employed AI techniques similar to those used to create deepfakes to alter satellite images of several cities. Zhao and colleagues swapped features between images of Seattle and Beijing to show buildings where there are none in Seattle and to remove structures and replace them with greenery in Beijing.
You may want to take a look at the photos in the article to see how deepfake maps could really become problematic.
You may recall the satellite images showing the expansion of large detention camps in Xinjiang, China, between 2016 and 2018. Those images provided some of the strongest evidence of a government crackdown on more than a million Muslims, prompting international condemnation and sanctions.
Other aerial images have had a powerful impact, including those of nuclear installations in Iran and missile sites in North Korea. Now it appears that AI-powered image-manipulation tools may confuse us about what is real and what is not.
Zhao used an algorithm called CycleGAN to manipulate satellite photos. The algorithm, developed by researchers at UC Berkeley, has been widely used for all sorts of image manipulation. The results could mislead governments or be spread on social media, sowing misinformation or casting doubt on genuine images.
“I absolutely think this is a big problem that may not impact the average citizen tomorrow but will play a much larger role behind the scenes in the next decade,” says Grant McKenzie, an assistant professor of spatial data science at McGill University in Canada.
“Imagine a world where a state government, or other actor, can realistically manipulate images to show either nothing there or a different layout,” McKenzie says. “I am not entirely sure what can be done to stop it at this point.”
We have thus far seen just a few crudely manipulated satellite images spread on social media, including a photograph purporting to show India lit up during the Hindu festival of Diwali. That image was apparently touched up by hand. How long will it be before sophisticated “deepfake” satellite images are used to hide weapons installations or justify military actions?
Altered aerial imagery could also have commercial significance since such images are valuable for digital mapping, tracking weather systems, and guiding investments.
US intelligence has acknowledged that manipulated satellite imagery is a growing threat. “Adversaries may use fake or manipulated information to impact our understanding of the world,” says a spokesperson for the National Geospatial-Intelligence Agency, the part of the Pentagon that oversees the collection, analysis, and distribution of geospatial information.
The spokesperson says forensic analysis can help identify forged images but acknowledges that the rise of automated fakes may require new approaches. Software may be able to identify telltale signs of manipulation, such as visual artifacts or changes to the data in a file. But AI can learn to remove such signals, creating an endless contest between those creating deepfaked images and those trying to detect them.
Zhao at the University of Washington plans to explore ways to identify deepfake satellite images. He says that studying how landscapes change over time could help spot suspect features. “Temporal-spatial patterns will be really important,” he says.
But that won’t matter if deepfaked satellite imagery spreads like wildfire over social media. As we have noted over the last several years, misinformation of any kind can spread quickly on social media and do a lot of damage. It would be helpful if the government developed technology that could flag manipulated images.
Hat tip to Dave Ries.
Sharon D. Nelson, Esq., President, Sensei Enterprises, Inc.
3975 University Drive, Suite 225 | Fairfax, VA 22030
Email: firstname.lastname@example.org Phone: 703-359-0700
Digital Forensics/Cybersecurity/Information Technology