FutureFive New Zealand - Consumer technology news & reviews from the future

Gen & Intel unveil on-device deepfake scam defence

Wed, 7th Jan 2026

Cyber safety company Gen has unveiled an early preview of an on-device deepfake detection system developed with Intel, as the firm releases new data suggesting that scam deepfakes concentrate in long-form online video.

The technology runs on users' devices, analysing both the audio and visual elements of video and carrying out simultaneous checks for manipulated voices and images. This approach reduces the need to send content to the cloud and shortens detection times.

The company is presenting the prototype with Intel at the CES technology show. The work forms part of wider industry efforts to identify synthetic media and limit fraud linked to generative AI tools.

On-device focus

Gen's system sits locally on PCs and other consumer devices, scanning video content as it plays and flagging suspected manipulation without relying solely on centralised content moderation or server-side filters.

The tool analyses spoken audio separately from video frames, checking for cloned or synthetic voices as well as altered or composited visuals.
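Gen has not published its detection API, so the following is only an illustrative sketch of the general pattern the article describes: running independent audio and visual checks on the same clip in parallel and combining their verdicts. All function names, scores, and byte-marker heuristics here are assumptions for demonstration.

```python
# Illustrative sketch only: placeholder detectors standing in for real
# audio and visual deepfake models, run concurrently on one clip.
from concurrent.futures import ThreadPoolExecutor

def check_audio(clip: bytes) -> float:
    """Placeholder voice-clone detector; returns a suspicion score in [0, 1]."""
    return 0.9 if b"synthetic-voice" in clip else 0.1

def check_visual(clip: bytes) -> float:
    """Placeholder frame-manipulation detector; returns a suspicion score."""
    return 0.8 if b"composited-frames" in clip else 0.1

def analyse(clip: bytes, threshold: float = 0.5) -> dict:
    """Run both checks concurrently and flag the clip if either trips."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        audio = pool.submit(check_audio, clip)
        visual = pool.submit(check_visual, clip)
        a, v = audio.result(), visual.result()
    return {"audio": a, "visual": v, "flagged": max(a, v) >= threshold}

print(analyse(b"...synthetic-voice...ordinary-frames..."))
```

Keeping the two detectors independent, as the article suggests Gen does, means a fabricated audio track can still trip the flag even when the video frames come from genuine footage.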

The companies are positioning the system as infrastructure that can sit underneath consumer security products and media apps. It is designed for continuous monitoring across recommendation-driven video feeds.

Long-form risk

Alongside the technical preview, Gen released data from its consumer security products that track scam activity linked to manipulated media. The company reported that deepfake-enabled scams appear more often in extended viewing sessions than in short clips.

Gen's analysis suggests that longer viewing windows give scammers more time to build trust with viewers. Persuasion typically develops over several minutes. The firm observed that the majority of intercepted deepfake scam activity occurred on YouTube, followed by Facebook and X.

These platforms support long-form video and recommendation-driven feeds. They also reach viewers on televisions and personal computers. This combination creates an environment where manipulated content can embed itself within normal viewing patterns.

Most of the flagged videos appeared during playback rather than as downloads, links or attachments, according to the company. The scams did not stand out as separate items. They sat inside standard video streams and looked like regular content until the fraudulent segments emerged.

Audio-led deception

Gen reported that audio manipulation dominated the detected scams. Many of the cases involved cloned or synthetic voices. These were paired with video that showed only minor changes from genuine footage.

The company likened the effect to dubbed foreign-language video. Visual frames could come from a real interview or broadcast. The audio track could be entirely fabricated.

Gen also found that the deepfake itself often served as a delivery channel rather than the complete threat. The company observed that risk rose sharply when manipulated videos included financial promises, time pressure, or instructions to move conversations to unregulated channels.

These signals included offers of unrealistically high returns, countdowns or expiring deals, and requests for payment through methods that sit outside the mainstream banking system. Such elements often appeared after a period of apparently benign content.
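The signal categories above can be sketched as a simple keyword heuristic. This is not Gen's method, which the company has not disclosed; the category names, keyword lists, and two-category threshold below are all hypothetical, chosen only to illustrate how co-occurring scam signals might raise a risk flag.

```python
# Hypothetical sketch of the scam signals described in the article:
# financial promises, time pressure, and off-channel payment requests.
# Keyword lists and the co-occurrence threshold are illustrative assumptions.
HIGH_RISK_SIGNALS = {
    "financial_promise": ["guaranteed returns", "double your money", "risk-free"],
    "time_pressure": ["only today", "offer expires", "act now"],
    "off_channel_payment": ["crypto wallet", "gift card", "wire transfer"],
}

def score_transcript(transcript: str) -> dict:
    """Report which scam-signal categories appear in a video transcript."""
    text = transcript.lower()
    matched = [
        category
        for category, keywords in HIGH_RISK_SIGNALS.items()
        if any(kw in text for kw in keywords)
    ]
    # Per Gen's findings, risk rises sharply when signals co-occur,
    # so flag only when two or more categories are present.
    return {"categories": matched, "high_risk": len(matched) >= 2}

print(score_transcript(
    "Guaranteed returns if you act now - send funds to my crypto wallet."
))
```

A real system would score the whole playback session rather than a single transcript, since the article notes these signals often surface only after a stretch of apparently benign content.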

Shifting scam tactics

The findings indicate a shift from crude, short-form deepfake clips to more elaborate narratives. Scammers now embed synthetic speech into videos that resemble familiar formats such as talk shows, tutorials, or investment explainers.

Gen believes this approach allows fraudsters to copy the style of legitimate creators and public figures. It also allows them to blend calls to action into content that viewers discover through normal recommendation algorithms rather than direct spam.

The companies plan further development of the on-device system. They expect to refine detection models and work with partners that run large consumer platforms and hardware lines.
