
Real or Fake? Finding the best ways to detect digital deception


Deepfake technology has people wondering: is what I'm seeing real or fake? University researchers are building deepfake detection tools to help journalists, intelligence analysts, and other trusted decision makers.

ROCHESTER, N.Y., Nov. 20, 2024 /PRNewswire-PRWeb/ — Seeing is believing. Well, it used to be, anyway.

How do deepfakes work? The process uses deep learning algorithms to analyze thousands of images and videos of the person being replicated. The neural network learns patterns, such as facial features and expressions, and then uses them to generate new, synthetic footage of that person.
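The article does not describe how any particular system generates these forgeries, but one widely used approach is an autoencoder with a shared encoder and one decoder per identity. The PyTorch sketch below is a toy illustration of that general idea, with made-up dimensions and random stand-in data rather than real face crops; it is an assumption for illustration, not the method of any tool mentioned here.

```python
# Minimal sketch of the autoencoder face-swap idea behind many deepfakes.
# A shared encoder learns identity-agnostic facial structure; one decoder
# per person learns to reconstruct that person's face. Swapping decoders
# at inference time renders person B's face with person A's pose/expression.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                          # latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity
loss_fn = nn.L1Loss()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-4,
)

# Stand-in batches of 64x64 aligned face crops (real training uses thousands).
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for _ in range(10):  # toy training loop
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The "deepfake" step: encode person A's frame, decode with person B's decoder.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a[:1]))   # B's face, A's expression
```

Training on thousands of real images of each person, as described above, is what makes the swapped output convincing.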

Today, artificial intelligence (AI) is being used to manipulate media.

It can face-swap celebrities. It allowed a de-aged Luke Skywalker to guest star in The Mandalorian. It also falsely showed Ukrainian President Volodymyr Zelensky surrendering to the Russian invasion.

Deepfakes are videos, audio, or images that have been altered using AI. In a deepfake, people can be shown saying and doing things that they have never said or done.

This capability has profound implications for entertainment, politics, journalism, and national security. As deepfakes become more convincing, the challenge of distinguishing fact from fiction grows, threatening the credibility of news sources and the stability of democratic institutions.

At RIT, a team of student and faculty researchers is leading the charge to help journalists and intelligence analysts figure out what is real and what is fake. Their work, called the DeFake Project, has more than $2 million in funding from the National Science Foundation and Knight Foundation.

The RIT team aims to mobilize the best deepfake detectors around—observant humans armed with the right tools. “There is real danger in shiny new deepfake detectors that confidently offer often inaccurate results,” said Saniat (John) Sohrawardi, a computing and information sciences Ph.D. student leading the DeFake Project. “We need to provide journalists—and other experts who vet reality—with forensic tools that help them make decisions, not make the decisions for them.”

Journalists agree, and they are working with RIT.

Scott Morgan, a reporter and producer with South Carolina Public Radio, said it is increasingly hard to spot a fake and that a good detection tool would be invaluable. For now, he often relies on a “would that person really have said that” kind of approach.

“And ultimately, that’s what DeFake is trying to be—a tool that supplements the journalist’s gut feeling and complements old-fashioned legwork, but doesn’t replace them,” said Morgan. “Because even an AI-driven program that analyzes videos for the teeny-tiniest of clues that it might have been doctored shouldn’t be left to make decisions about what to do with that information or disinformation.”

Spotting the Fake

Matthew Wright, endowed professor and chair of the Department of Cybersecurity, first saw a high-quality deepfake lip sync of President Obama in 2017. He called it a real “OMG moment.”

“It was really disconcerting,” said Wright. “The potential to use this to make misinformation and disinformation is tremendous.”

As an expert in adversarial machine learning, Wright was studying how AI can impact cybersecurity for good and for bad. Deepfakes seemed like a natural extension of that work.

In 2019, Wright and the newly formed DeFake Project team answered a call from the Ethics and Governance of Artificial Intelligence Initiative to build a deepfake detector. After the researchers developed some specialized techniques, their detector worked perfectly on curated deepfake datasets, reaching 100 percent accuracy. Then they pulled up some YouTube videos to run through the detector.

“It would make mistakes,” said Wright. “But this wasn’t just our design. There is a cottage industry around developing deepfake detectors and none of these are foolproof, despite the companies’ claims.”

Detectors can become confused when video is even slightly altered, clipped out of context, or compressed. For example, in 2021, a Myanmar news outlet used a publicly available deepfake detector to analyze a video of a chief minister confessing to a bribe. The tool was 90-percent confident that the video was fake, yet expert analysis later determined it was in fact real.
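Neither the DeFake detector nor the tool used in the Myanmar case is reproduced here; the Python sketch below only illustrates the fragility described above. It round-trips a frame through JPEG at decreasing quality and watches a placeholder score drift. The functions `detector_score` and `recompress`, and the input file name, are hypothetical stand-ins, not real project code.

```python
# Illustrative only: shows how recompression perturbs the pixels a detector
# sees. `detector_score` is a hypothetical stand-in, not the DeFake model.
from io import BytesIO
from PIL import Image
import numpy as np

def detector_score(frame: np.ndarray) -> float:
    """Placeholder 'fakeness' score in [0, 1]; a real detector is a trained model."""
    # Toy heuristic: high-frequency energy, which compression visibly changes.
    diffs = np.abs(np.diff(frame.astype(np.float32), axis=1))
    return float(diffs.mean() / 255.0)

def recompress(frame: np.ndarray, quality: int) -> np.ndarray:
    """Round-trip a frame through JPEG at the given quality, as social platforms do."""
    buf = BytesIO()
    Image.fromarray(frame).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.array(Image.open(buf).convert("RGB"))

frame = np.array(Image.open("suspect_frame.png").convert("RGB"))  # hypothetical input
for q in (95, 70, 40):
    score = detector_score(recompress(frame, q))
    print(f"JPEG quality {q:3d}: fakeness score {score:.3f}")
# The score drifts with quality even though the content is unchanged, which is
# why a raw percentage should be treated as evidence, not a verdict.
```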

“Users tend to trust the output of decision-making tools too much,” said Sohrawardi. “You shouldn’t make a judgment based on percentage alone.”

That’s why the DeFake Project is so important, said Andrea Hickerson, dean and professor of the School of Journalism and New Media at The University of Mississippi and a member of the project. The goal is to make a tool that journalists can actually use.

“If a trusted journalist accidentally shares a deepfake, it would reach a wide audience and undermine trust in the individual and the profession as a whole,” said Hickerson, the former director of RIT’s School of Communication.

“Journalists have important contextual expertise that can be paired with a deepfake detection tool to make informed judgments on the authenticity of a video and its newsworthiness.”

To better understand the journalistic process, the DeFake researchers interviewed 24 reporters, ranging from national broadcast networks to local print media. Taking inspiration from a popular tabletop game, the team created a role-playing exercise called Dungeons & Deepfakes. The journalists were placed in a high-stakes newsroom scenario and asked to verify videos using traditional methods and deep-learning-based detection tools.

The team observed that journalists diligently verify information, but they, too, have the potential to over-rely on detection tools, just as in the Myanmar incident.

Most of all, journalists viewed the overall fakeness score with healthy skepticism; they wanted insight into how it was calculated. Unfortunately, AI is not inherently good at explaining the rationale behind its decisions.

Unboxing the Black Box

When Pamposh Raina is asked to investigate a potential deepfake, she checks with multiple sources and often reaches out to RIT’s experts.

She is an experienced reporter who has worked with The New York Times, written for international publications, and currently heads the Deepfakes Analysis Unit at the Misinformation Combat Alliance, which is helping fight AI-generated misinformation in India.

One clip she questioned was being passed around social media in 2024. The video, in Hindi, apparently featured Yogi Adityanath, chief minister of India's most populous state, promoting a pilot gaming platform as a quick way to make money.

After running the video through detection tools from Hive AI and TrueMedia, and escalating it to ElevenLabs for audio analysis, the investigators wanted an expert view on possible AI tampering around Adityanath’s mouth in the video.

The DeFake team noted that the chief minister’s mouth animation looked disjointed and could be a result of the algorithm failing to extract proper facial landmarks. Ultimately, the Deepfakes Analysis Unit concluded that the video was fake and Adityanath did not utter the words attributed to him.

Creating meaningful tools like this is why Kelly Wu, a computing and information sciences Ph.D. student, came to RIT. After completing her undergraduate degrees in mathematics and economics at Georgetown University, Wu jumped at the chance to research deepfakes with the RIT team.

“Right now, there is a huge gap between the user and detection tools, and we need to collaborate to bring that together,” said Wu. “We care about how it will transition into people’s hands.”

Just like human brains, AI systems identify trends and make predictions. And just like in humans, it’s not always clear how a model comes to any particular conclusion.

Wu is figuring out how to unbox that AI black box. She aims to produce explanations that are both faithful to the AI model and interpretable by humans.

A lot of today’s detection tools use heatmaps to present explanations of results. A blob of dark red highlighting the eye region signifies that this area is more important for the model’s decision-making process.

“But, even to me, it just looks like a normal eye,” said Wu. “I need to know why the model thinks this is important.”
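The article does not say how these heatmaps are produced, but a common technique for this kind of saliency map is Grad-CAM. The sketch below applies it to a generic torchvision classifier as a stand-in for a detection model; it is an assumption about how such a heatmap could be generated, not the DeFake tool's actual explanation pipeline.

```python
# Minimal Grad-CAM sketch: the kind of saliency heatmap many detection tools
# show. Uses a generic torchvision ResNet as a stand-in for a deepfake detector.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # stand-in; a real detector is trained on faces
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["feat"] = output

def bwd_hook(module, grad_input, grad_output):
    gradients["feat"] = grad_output[0]

layer = model.layer4[-1]               # last convolutional block
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

image = torch.rand(1, 3, 224, 224)     # stand-in for a preprocessed face crop
logits = model(image)
logits[0, logits.argmax()].backward()  # gradient of the predicted class score

# Grad-CAM: weight each feature map by its average gradient, then combine.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)   # [1, C, 1, 1]
cam = F.relu((weights * activations["feat"]).sum(dim=1))     # [1, 7, 7]
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # normalize to [0, 1]
# `cam` is the heatmap: bright regions are the pixels the model leaned on,
# but the map alone does not say *why* those pixels mattered.
```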

The DeFake tool will highlight areas and provide detailed text explanations. The detector displays information on the processed content, including metadata, overall fakeness, top fake faces, and an estimation of the deepfake manipulation method used. It also incorporates provenance technology, extracting Content Credentials—a new kind of tamper-evident metadata. Due to the resource-intensive nature of AI, the tool allows people to assess specific snippets of a video.

Most recently, the DeFake Project, which now has nine members from three universities, is expanding to meet the needs of intelligence analysts.

In 2023, RIT earned a grant to work with the Department of Defense on bolstering national security and improving intelligence analysis.

RIT’s team is interviewing analysts and using their insights to help create a Digital Media Forensic Ontology that makes the terminology of manipulated media detection methods clearer and more consistent. Analysts can use the DeFake all-in-one platform along with the ontology to narrow down why content needs to be analyzed, where in the media analysts should focus their attention, and what artifacts they should look for.
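The Digital Media Forensic Ontology itself is not published in this article, so the Python dataclasses below are only a hypothetical sketch of the kind of structure such an ontology might impose. Field names like `manipulation_targets` and `artifacts` are illustrative assumptions, chosen to mirror the why, where, and what questions described above.

```python
# Hypothetical sketch of what a Digital Media Forensic Ontology entry might
# capture; the field names are illustrative, not the project's actual schema.
from dataclasses import dataclass, field

@dataclass
class ArtifactType:
    name: str                  # e.g., "blending boundary", "inconsistent eye specularity"
    region: str                # where in the media to look: "mouth", "eyes", "full frame"
    modality: str              # "video", "image", or "audio"

@dataclass
class DetectionMethod:
    name: str                               # e.g., "facial landmark consistency check"
    manipulation_targets: list[str]         # e.g., ["face swap", "lip sync"]
    artifacts: list[ArtifactType] = field(default_factory=list)
    notes: str = ""                         # known failure modes, e.g., "sensitive to compression"

lip_sync_check = DetectionMethod(
    name="lip-sync consistency analysis",
    manipulation_targets=["lip sync"],
    artifacts=[ArtifactType("disjointed mouth motion", "mouth", "video"),
               ArtifactType("audio-visual desynchronization", "full frame", "audio")],
    notes="Flags where to look; an analyst still judges the finding in context.",
)
```

A shared vocabulary like this is what lets an all-in-one platform and its human users refer to the same artifacts and methods consistently.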

Candice Gerstner, an applied research mathematician with the Department of Defense, is collaborating on the project. She said that when analysts write a report that will be passed up the chain, they need to be sure that information has integrity.

“I’m not satisfied with a single detector that says 99 percent—I want more,” said Gerstner. “Having tools that are easily adaptable to new techniques and that continue to strive for explainability and low error rates is extremely important.”

In the future, the DeFake Project plans to expand to law enforcement, who are worried about fake evidence getting into the court system. RIT students are also researching reinforcement learning to limit bias and make sure AI models are fair.

Akib Shahriyar, a computing and information sciences Ph.D. student, is taking it one step further. He’s attacking the underlying model that powers the DeFake tool to uncover its weaknesses.

“In the end, we’re not just creating a detector and throwing it out there, where it could be exploited by adversaries,” said Shahriyar. “We’re building trust with the users by taking a responsible approach to deepfake detection.”

How to Identify a Deepfake

Although RIT’s DeFake tool is not publicly available, here are some common ways to identify fake content.

Artifacts in the face: Look for inconsistencies in eye reflections and gaze patterns. Anomalies may appear in the face, such as unnatural smoothness, missing outlines of individual teeth, and irregular facial hair.
Body posture: Deepfakes prioritize altering facial features, so body movements could appear odd or jerky.
Audio discrepancies: Does the audio sync seamlessly with the speaker's mouth movements?
Contextual analysis: Consider the broader context, including the source, timestamps, and post history (a minimal metadata check is sketched after this list).
External verification: Do a reverse image search and try contacting the original sources.
Check the news: Look for reports about the content on reputable news sites.
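As referenced in the contextual-analysis item above, a first concrete step is simply reading whatever metadata a file still carries. The sketch below uses Pillow to dump common EXIF fields; the file name is hypothetical, and missing metadata proves nothing on its own, since most platforms strip it.

```python
# A small helper for the contextual-analysis step above: dump whatever
# metadata an image still carries. Fields like capture time and editing
# software can support or contradict a claimed origin.
from PIL import Image, ExifTags

def dump_image_metadata(path: str) -> dict:
    img = Image.open(path)
    exif = img.getexif()
    readable = {ExifTags.TAGS.get(tag_id, str(tag_id)): value
                for tag_id, value in exif.items()}
    return {
        "format": img.format,
        "size": img.size,
        "software": readable.get("Software"),     # editing tools often stamp this
        "captured": readable.get("DateTime"),     # compare against the claimed date
        "camera": (readable.get("Make"), readable.get("Model")),
        "all_exif": readable,
    }

if __name__ == "__main__":
    from pprint import pprint
    pprint(dump_image_metadata("questionable_post.jpg"))  # hypothetical file
```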

Media Contact

Scott Bureau, Rochester Institute of Technology, 585-475-2481, sbbcom@rit.edu, rit.edu

View original content to download multimedia:https://www.prweb.com/releases/real-or-fake-finding-the-best-ways-to-detect-digital-deception-302311795.html

SOURCE Rochester Institute of Technology


Specified Technologies Inc. Unveils Firestop Clash Management and Locator Updates


SOMERVILLE, N.J., Nov. 23, 2024 /PRNewswire/ — Specified Technologies Inc. has announced their latest Firestop Clash Management (FCM) and Firestop Locator (FSL) releases. FCM automates the process of locating and assigning firestop solutions to conditions within Autodesk® Revit®, enabling firestop novices to find firestop solutions like a firestop expert. In this latest release, STI has further expanded the capabilities of FCM by integrating it with their firestop documentation and compliance tool, Firestop Locator (FSL). FSL enables contractors and facilities’ teams to document the status, location, and products used for any and all fire life safety services across a building.

In the latest update for FSL, teams can now create custom items to track any service on their project beyond the base six (Penetration, Joint, Extinguisher, Door, Damper, and Barrier). Teams can also modify the base six items to include project specific inspection and maintenance requirements and any other details that they would like to be tracked.

With this new integration, decisions made during the design phase of a building using FCM are seamlessly passed into FSL during the construction phase, giving implementation teams a jump start on the work to be done. The integration also improves data integrity and eliminates guesswork in the field about which firestop systems and products are to be used where.

“We’re proud of the latest releases of FCM and FSL and look forward to continuing to support the fire life safety community,” says Justin Pine, Sr. Manager of Software & Services.

Specified Technologies Inc. promotes life and building safety by developing innovative fire protection systems and accompanying digital tools that help stop the spread of fire, smoke, and hot gases. Our SpecSeal® and EZ Path® product lines are engineered for easy installation and deliver powerful performance, often resulting in lower installed costs. Since firestopping is our only business, we concentrate all our resources on providing the highest quality, fully tested, innovative firestopping solutions.

Contact: Jess Bern; jbern@stifirestop.com

View original content to download multimedia:https://www.prnewswire.com/news-releases/specified-technologies-inc-unveils-firestop-clash-management-and-locator-updates-302314629.html

SOURCE Specified Technologies, Inc.


ZICC: Internet Experts Pay Attention to the Development of Artificial Intelligence


BEIJING, Nov. 23, 2024 /CNW/ — During the Wuzhen Summit of the World Internet Conference, ZICC interviewed Internet experts from all over the world.

Lampros Sterg, UNESCO Chair in AI & Data Science, said that AI can accelerate social progress, but it must be used properly so that citizens and society benefit while its negative effects are avoided. Latif Ladid, the president of the IPv6 Forum, called for the establishment of a global governance system so that artificial intelligence technology can serve humanity for good. South Korean computer scientist Kilnam Chon shared insights on AI’s positive role in healthcare and urged global efforts to ensure AI safety and prevent its misuse in weapons. Indian entrepreneur Bibin Babu said he believes that AI will not replace humans, but will create new jobs.

View original content to download multimedia:https://www.prnewswire.com/news-releases/zicc-internet-experts-pay-attention-to-the-development-of-artificial-intelligence-302314745.html

SOURCE ZICC


Hankyung.com introduces: MecKare, Leading the AI-powered Innovation in Health Monitoring Solution


– Leading efficient care management for the elderly with unobtrusive smart care


SEOUL, South Korea, Nov. 23, 2024 /PRNewswire/ — JCF Technology is a startup that independently developed ‘MecKare’, a radar sensor that measures vital signs without contact, together with a platform service that uses an artificial intelligence analysis system to automatically connect users and guardians in emergency situations. Founded in 2016, the company spent years refining a highly accurate non-contact multi-biosignal radar sensor and first commercialized the product in 2021.

MecKare uses microwave radar and micro-Doppler signal processing to measure the user’s heart rate, respiratory rate, and skin temperature in real time at a range of up to 16.4 ft. The sensor also tracks body movement patterns using precise, highly responsive thermal infrared sensing and can detect falls through pattern analysis of changes in movement. Because movement and thermal changes within the measurement range are monitored in real time, differential motion detection can pick up biomarker trends that appear before a person falls, allowing the system to raise an alarm in advance and respond to accidents quickly and accurately. As a result, it can catch emergencies in the elderly such as unattended deaths, cardiac arrest, breathing difficulties, and falls. And unlike existing wearable devices such as smart watches or bands, MecKare does not need to be worn or attached to the body, so it can be used remotely via Wi-Fi without burdening the user.
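JCF Technology's actual signal-processing pipeline is not described in detail here, so the NumPy sketch below is only a generic illustration of the underlying idea: chest motion modulates the reflected radar signal, and the dominant low frequencies of that motion correspond to respiration and heart rate. The waveform is synthetic and the band limits are common textbook values, not MecKare parameters.

```python
# Illustrative NumPy sketch of the general idea behind radar vital-sign
# sensing: chest motion modulates the reflected signal, and the dominant
# frequencies of that slow motion correspond to breathing and heart rate.
# The signal here is synthetic; MecKare's actual processing is not public.
import numpy as np

fs = 50.0                               # samples per second from the radar channel
t = np.arange(0, 40, 1 / fs)            # 40 seconds of observation

# Synthetic chest-displacement signal: breathing (0.25 Hz = 15 breaths/min),
# a smaller heartbeat component (1.2 Hz = 72 bpm), plus noise.
displacement = (1.0 * np.sin(2 * np.pi * 0.25 * t)
                + 0.1 * np.sin(2 * np.pi * 1.2 * t)
                + 0.05 * np.random.randn(t.size))

spectrum = np.abs(np.fft.rfft(displacement * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

def peak_in_band(lo, hi):
    """Return the frequency of the strongest spectral peak within [lo, hi] Hz."""
    band = (freqs >= lo) & (freqs <= hi)
    return freqs[band][np.argmax(spectrum[band])]

resp_hz = peak_in_band(0.1, 0.5)        # typical resting respiration band
heart_hz = peak_in_band(0.8, 2.0)       # typical resting heart-rate band
print(f"Respiration: {resp_hz * 60:.0f} breaths/min, heart rate: {heart_hz * 60:.0f} bpm")
```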


MecKare can be installed in the bedroom, bathroom, living room, or entrance of a home or care facility (assisted living, nursing home, etc.) to provide 24-hour monitoring without a camera. A biometric analysis algorithm detects abnormal signs in advance and delivers alerts to the guardian.

MecKare’s radar biometric sensor is recognized in the global market as a device that captures precise, personalized biometric information while overcoming spatial constraints and avoiding any risk of privacy infringement. It is currently supplied to senior care facilities in Australia, Germany, Poland, Saudi Arabia, and China. In 2025, MecKare plans to verify monitoring of presence, fall prevention, and asthma-related vital signs for elderly people living in hospitals or assisted living, in conjunction with local PPOs/HMOs in the United States.

In summary, MecKare reduces user inconvenience and enables management of multiple patients at once. By providing personalized health data analysis, it can help shift the market paradigm toward preventive smart care. We expect MecKare’s AI to act as an innovator that complements, rather than replaces, humans in care settings.

View original content:https://www.prnewswire.com/news-releases/hankyungcom-introduces-meckare-leading-the-ai-powered-innovation-in-health-monitoring-solution-302310743.html

SOURCE Hankyung.com
