In March 2022, a video surfaced online showing Ukrainian President Volodymyr Zelensky appearing to ask his troops to lay down their arms in the face of the Russian invasion. The artificial intelligence (AI)-generated video was of poor quality and the ruse was quickly exposed, but as synthetic content becomes easier to create and more convincing, such an attempt could someday have major geopolitical consequences.

This is partly why computer scientists are developing better methods for algorithmically generating video, audio, images, and text—generally for more constructive uses, such as helping artists realize their ideas—and also creating counter-algorithms to detect such synthetic content. Recent research shows progress in making detection more reliable, sometimes by looking beyond the subtle signatures of specific generation tools and instead relying on underlying physical and biological signals that are hard for AI to mimic.

It is also entirely possible that generated content and detection methods will remain locked in a constant back-and-forth as both sides grow more sophisticated. “The main problem is how to handle new technologies,” says Luisa Verdoliva, a computer scientist at the Federico II University of Naples, referring to the new generation methods that keep appearing. “In that respect, it never ends.”

In November, Intel announced its Real-Time Deepfake Detector, a video-analysis platform. (The term “deepfake” comes from the use of deep learning, a field of AI that uses multilayer artificial neural networks, to create fake content.) Likely customers include social media companies, broadcasters, and nongovernmental organizations that can distribute detectors to the general public, says Ilke Demir, a research scientist at Intel. A single Intel processor can analyze 72 video streams at once. The platform will eventually use multiple detection tools, but when it launches this spring, it will rely on the FakeCatcher detector that Demir co-created with Umur Çiftçi of Binghamton University.

FakeCatcher studies changes in complexion to infer blood flow, a process called photoplethysmography (PPG). The researchers built the software to focus on particular color patterns in particular areas of the face and to ignore everything else. Had they let it use all the information in the video, it might have learned during training to rely on signals that video generators can more easily manipulate. “PPG signals are special in the sense that they are all over your skin,” Demir says. “It’s not just about the eyes or the lips. And changing the lighting doesn’t eliminate them, but any generative operation actually eliminates them, because the kind of noise they add distorts spatial, spectral, and temporal correlations.” In other words, FakeCatcher checks that color fluctuates naturally over time as the heart pumps blood, and that those fluctuations are consistent across areas of the face. In one test, the detector was 91 percent accurate, nearly nine percentage points better than the next-best system.
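The general idea can be illustrated with a short sketch. The toy code below is not Intel’s implementation; it assumes the per-frame average green-channel value of several facial skin regions has already been extracted (the `region_signals` input is a stand-in for that step) and simply checks whether the regions share a plausible pulse frequency and agree with one another.

```python
import numpy as np

def ppg_consistency_score(region_signals: np.ndarray, fps: float = 30.0) -> float:
    """region_signals: (num_regions, num_frames) mean green-channel value
    of each facial skin region in each frame."""
    # Remove the static skin tone so only the temporal fluctuation remains.
    detrended = region_signals - region_signals.mean(axis=1, keepdims=True)

    # Heart rate lives roughly in the 0.7-4 Hz band (about 42-240 bpm).
    freqs = np.fft.rfftfreq(detrended.shape[1], d=1.0 / fps)
    spectra = np.abs(np.fft.rfft(detrended, axis=1))
    band = (freqs >= 0.7) & (freqs <= 4.0)

    # Real skin regions should share one dominant pulse frequency;
    # generated faces tend to lose that cross-region agreement.
    peak_freqs = freqs[band][np.argmax(spectra[:, band], axis=1)]
    agreement = 1.0 / (1.0 + float(np.std(peak_freqs)))

    # Pairwise correlation of the fluctuations is a second consistency cue.
    corr = np.corrcoef(detrended)
    mean_corr = float(corr[np.triu_indices_from(corr, k=1)].mean())

    return 0.5 * agreement + 0.5 * max(mean_corr, 0.0)

# Example: four regions, ten seconds of video at 30 frames per second.
t = np.arange(300) / 30.0
pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)  # a ~72 bpm pulse
real = np.stack([100 + pulse + 0.1 * np.random.randn(300) for _ in range(4)])
print("consistency score:", ppg_consistency_score(real))
```

A real face should score high on both cues; a generated one, whose “skin” noise has no shared heartbeat, should not.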

The creation and detection of synthetic media is an arms race in which each side builds on the other. Given a new detection method, one can often train a generation algorithm to better fool it. A key advantage of FakeCatcher is that it is not differentiable, a mathematical property meaning that it cannot easily be reverse-engineered to train generators against it.

The Intel platform will also eventually use a system recently developed by Demir and Çiftçi that relies on facial motion. While natural movement follows the structure of the face, deepfake movement looks subtly different. So, instead of training a neural network on raw video, their method first applies a motion-magnification algorithm to the video, making movement more visible, before passing it to the neural network. In one test, their system determined with 97 percent accuracy not only whether a video was fake but also which of several algorithms created it, more than three percentage points better than the next-best system.
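As a rough illustration of the principle (not the specific augmentation Demir and Çiftçi use), the sketch below amplifies each frame’s deviation from the video’s average appearance, in the spirit of Eulerian video magnification, before the frames would be handed to a classifier.

```python
import numpy as np

def magnify_motion(frames: np.ndarray, alpha: float = 10.0) -> np.ndarray:
    """frames: (num_frames, height, width) grayscale video, values in [0, 1].
    Amplifies each frame's deviation from the average appearance."""
    baseline = frames.mean(axis=0, keepdims=True)  # roughly the static face
    motion = frames - baseline                     # the part that moves
    return np.clip(baseline + alpha * motion, 0.0, 1.0)

# The magnified frames, rather than the raw ones, would then be fed
# to the deepfake classifier.
video = np.random.rand(16, 64, 64).astype(np.float32)
print(magnify_motion(video).shape)  # (16, 64, 64)
```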

[Infographic: “FakeCatcher,” showing a photo of a man with sampling dots on his face. Image: Intel]

Researchers at the University of California, Santa Barbara, took a similar approach in a recent paper. Michael Goebel, an electrical engineering graduate student at UC Santa Barbara and a co-author of the paper, notes that there is a spectrum of detection methods. “On one end, you have very unrestricted methods that are just pure deep learning,” meaning they use all the data available. “On the other end, you have methods that do things like gaze analysis. We are somewhere in between.” Their system, called PhaseForensics, focuses on the lips and extracts motion information at various frequencies before feeding that processed data into a neural network. “By using the motion features themselves, we kind of hardcode part of what we want the neural network to learn,” Goebel says.
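To make that “hardcoding” concrete, here is a hedged sketch of the general idea, not the actual PhaseForensics code: it assumes a lip-landmark trajectory (`lip_y`, a hypothetical input) has already been tracked across frames, then summarizes its motion energy in several frequency bands, which is the kind of precomputed feature a downstream network would consume.

```python
import numpy as np

def lip_motion_features(lip_y: np.ndarray, fps: float = 30.0, n_bands: int = 8) -> np.ndarray:
    """lip_y: vertical position of a lip landmark in each frame, shape (num_frames,).
    Returns the motion energy in n_bands frequency bands."""
    signal = lip_y - lip_y.mean()                 # drop the static position
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

    # Sum spectral energy in equal-width bands up to the Nyquist frequency.
    edges = np.linspace(0.0, freqs[-1], n_bands + 1)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])

# Three seconds of tracked lip motion at 30 fps (random stand-in data).
print(lip_motion_features(np.random.rand(90)))
```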

He notes that one advantage of this middle ground is generalizability. If you train an unrestricted video detector on the output of certain generation algorithms, it will learn to detect their signatures, but not necessarily the signatures of other algorithms. The UCSB team trained PhaseForensics on one dataset and then tested it on three others. Its accuracy was 78 percent, 91 percent, and 94 percent, about four percentage points better than the best comparison method on each respective dataset.

Audio deepfakes have also become a problem. In January, someone uploaded a fake clip of actress Emma Watson reading an excerpt from Hitler’s Mein Kampf. Researchers are working on this front, too. In one approach, scientists at the University of Florida developed a system that models the human vocal tract. Trained on real and fake audio recordings, it estimates a set of realistic cross-sectional areas at various points along the sound-producing airway. Given a new suspect sample, it can determine whether the audio is biologically plausible. The paper reports roughly 99 percent accuracy on one dataset.

Their algorithm doesn’t need to see deepfake audio from a particular generation algorithm in order to guard against it. Verdoliva, of Naples, has developed another such method. During training, her algorithm learns to extract speakers’ biometric signatures. In deployment, it takes real recordings of a given speaker, uses what it has learned to derive that speaker’s signature, and then looks for the same signature in the questionable recording. On one set of tests, it achieved an AUC (a metric that accounts for the trade-off between false positives and false negatives) of 0.92 out of 1.0; the best competing method scored 0.72.
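A minimal sketch of that verification idea, under stated assumptions and not Verdoliva’s actual system, might look like the following. The `embed_speaker` function is a placeholder for whatever trained speaker-embedding network the real method uses; the rest simply compares the suspect clip’s embedding against a reference built from known-real recordings.

```python
import numpy as np

def embed_speaker(audio: np.ndarray) -> np.ndarray:
    # Placeholder: a real system would run a trained speaker-embedding network.
    return np.array([audio.mean(), audio.std(), np.abs(np.fft.rfft(audio)).mean()])

def authenticity_score(real_clips: list, suspect: np.ndarray) -> float:
    """Cosine similarity between the suspect clip's embedding and the average
    embedding of known-real recordings of the same speaker."""
    reference = np.mean([embed_speaker(c) for c in real_clips], axis=0)
    candidate = embed_speaker(suspect)
    cos = np.dot(reference, candidate) / (
        np.linalg.norm(reference) * np.linalg.norm(candidate) + 1e-9)
    return float(cos)  # a low score suggests the voice may be synthesized

real = [np.random.randn(16000) for _ in range(3)]  # three real one-second clips
print(authenticity_score(real, np.random.randn(16000)))
```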

Verdoliva’s group has also worked on identifying generated and manipulated images, whether they were altered by AI or by old-fashioned cutting and pasting in Photoshop. They trained a system called TruFor on photos from 1,475 cameras, and it learned to recognize the kinds of signatures those cameras leave behind. Looking at a new image, it can detect inconsistencies between different areas (even for cameras it has never seen) or tell whether the whole image lacks a plausible camera signature. In one test, TruFor scored an AUC of 0.86, while the top competitor scored 0.80. It can also highlight which parts of an image most influence its judgment, helping people double-check its work.
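A toy illustration of the camera-fingerprint idea, not TruFor’s actual architecture, is shown below: extract a high-frequency noise residual (here a simple local-mean subtraction stands in for a learned noise extractor) and compare its statistics across image blocks, since spliced or generated regions often disagree with the rest of the picture.

```python
import numpy as np

def noise_residual(img: np.ndarray) -> np.ndarray:
    """img: (height, width) grayscale image. Returns a high-pass residual by
    subtracting a local mean (a stand-in for a learned noise extractor)."""
    padded = np.pad(img, 1, mode="edge")
    local_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return img - local_mean

def block_residual_stats(img: np.ndarray, block: int = 32) -> np.ndarray:
    """Standard deviation of the residual per block; spliced or generated
    regions often show residual statistics that differ from the rest."""
    res = noise_residual(img.astype(np.float64))
    h, w = res.shape
    return np.array([[res[i:i + block, j:j + block].std()
                      for j in range(0, w - block + 1, block)]
                     for i in range(0, h - block + 1, block)])

img = np.random.rand(128, 128)
print(block_residual_stats(img))  # a 4x4 grid of per-block noise levels
```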

High school students have now joined the AI-content game, prompting ChatGPT’s text-generation system to write their essays. One solution is to ask the makers of such systems, known as large language models, to watermark the generated text. Researchers at the University of Maryland recently proposed a method that randomly designates a set of “green list” words and then gives those words a slight preference as the model writes. Anyone who knows the secret green list can measure the prevalence of its words in a piece of text to determine whether the text likely came from the algorithm. One problem is that the number of capable language models is growing, and we cannot expect all of them to watermark their output.
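Here is a simplified sketch of how such a green-list check can work; the real University of Maryland scheme operates on model tokens and seeds the green list from a keyed hash of the preceding token, whereas this toy version hashes whole words and uses no secret key.

```python
import hashlib
import math

def on_green_list(prev_word: str, word: str, fraction: float = 0.5) -> bool:
    """Deterministically decides whether `word` is on the green list induced by
    the previous word; the hash stands in for the watermark's secret key."""
    digest = hashlib.sha256(f"{prev_word}:{word}".encode()).digest()
    return digest[0] / 255.0 < fraction

def watermark_z_score(text: str, fraction: float = 0.5) -> float:
    """Counts how many words fall on their green list and compares that count
    to what unwatermarked text would produce by chance."""
    words = text.lower().split()
    n = max(len(words) - 1, 1)
    hits = sum(1 for prev, w in zip(words, words[1:]) if on_green_list(prev, w, fraction))
    # Without a watermark, hits is roughly Binomial(n, fraction);
    # a large z-score suggests the text was nudged toward green-list words.
    return (hits - fraction * n) / math.sqrt(n * fraction * (1 - fraction))

print(watermark_z_score("the cat sat on the mat while the dog ran past"))
```

During generation, the model would add a small bonus to the logits of green-list words; detection then needs only the key, not the model itself.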

One Princeton student, Edward Tian, created a tool called GPTZero that looks for signs that text was written by ChatGPT, even without a watermark. Humans tend to make more surprising word choices and to vary their sentence length more. But GPTZero appears to have limitations. One user who put GPTZero through informal testing found that it correctly labeled 10 out of 10 AI-generated texts as synthetic, but also incorrectly flagged 8 out of 10 human-written texts as synthetic.
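Those two cues, often called perplexity (how surprising the word choices are to a language model) and burstiness (how much sentence length varies), can be sketched as follows. This is an illustration only, not GPTZero’s code; the perplexity part uses a trivial unigram model as a stand-in for a real language model.

```python
import math
import re
from collections import Counter

def burstiness(text: str) -> float:
    """Standard deviation of sentence length; human writing tends to vary more."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    return math.sqrt(sum((n - mean) ** 2 for n in lengths) / (len(lengths) - 1))

def unigram_perplexity(text: str) -> float:
    """How 'surprising' the word choices are under a toy unigram model."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    log_prob = sum(math.log(counts[w] / total) for w in words)
    return math.exp(-log_prob / total)

sample = "The cat sat. Then, in a rather unexpected turn, the very same cat refused to sit at all."
print("burstiness:", burstiness(sample), "perplexity:", unigram_perplexity(sample))
```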

Synthetic-text detection is likely to lag far behind detection for other media. According to Tom Goldstein, a professor of computer science at the University of Maryland who co-authored the watermarking paper, that is because of the diversity in how people use language and because there simply are not as many signals to work with. An essay may contain a few hundred words, rather than the million pixels of an image, and words are discrete, unlike the subtle variations in pixel color.

There is a lot at stake in detecting synthetic content. It can be used to deceive teachers, courts, or voters. It can be used to create humiliating or intimidating adult content. The very idea of deepfakes can undermine trust in mediated reality. Demir calls this future “dystopian.” In the short term, she says, we need detection algorithms; in the long term, we also need provenance protocols, perhaps based on watermarks or blockchains.

“People would like to have a magical tool that can do everything perfectly and even explain itself,” Verdoliva says of detection methods. Nothing like that exists, and probably never will. “You need several tools.” Even if a quiver of detectors can catch deepfakes, the content will have at least a short life online before it disappears, and it will have an impact. So, Verdoliva says, technology alone cannot save us. Instead, people will need to become aware of the new, fake-filled reality.
