The cybersecurity landscape is evolving at an unprecedented pace. Rather than attempting to breach identity systems outright, fraudsters are now subtly manipulating the very signals those systems depend on to establish trust. Henry Patishman, executive VP of Identity Verification Solutions at Regula, describes this as a move from “artifact manipulation to now signal manipulation, and most worrying, it’s now going to system manipulation.” It is a critical shift from faking inputs to shaping the outcomes of verification processes.
The Rise of Identity Signal Manipulation
Attackers are now combining seemingly valid identity signals—drawn from IDs, biometrics, device data, and behavioral analytics during onboarding and authentication—to exploit how decision-making systems operate. The result is the creation of digital identities that appear legitimate but do not correspond to real individuals. This emerging threat, termed “identity signal manipulation” by industry insiders, is compelling a fundamental reevaluation of how digital identities are verified across payments, banking, and online platforms.
“In a world where everything can look real, trust will depend on how well you understand the integrity of identity signals, not just their appearance,” Patishman told PYMNTS. The focus is shifting from a one-time verification event to ensuring an identity remains consistent and trustworthy across multiple sessions, devices, and behaviors.
From Spoofing to Orchestration
What distinguishes this new wave of fraud is not merely its sophistication but its orchestrated nature. Fraudsters are no longer spoofing a single attribute; they are coordinating multiple signals simultaneously to construct a convincing, albeit false, digital identity. A recent case in the Netherlands, cited by Patishman, exemplifies this shift. A single attacker managed to open nearly 50 bank accounts using real stolen passports, a live participant on camera, and subtly altered biometric inputs. Each individual component appeared legitimate, and collectively they presented a false narrative that existing banking fraud systems failed to detect.
“The interesting part was that real documents, real selfies, and a real human were all part of the process, but manipulated just enough to pass the controls,” Patishman noted. This highlights the core of signal manipulation: not outright falsifying an identity, but bending reality just enough to bypass existing defenses.
Fragmented Defenses and the Chain of Signals
The effectiveness of these attacks is amplified by the structure of many identity verification systems, which often treat verification as a single event, such as a checkpoint during onboarding or login. Identity, however, is fluid, expressed through a series of interactions that generate distinct signals over time. “One of the biggest shifts is that identity verification is no longer a single decision,” Patishman explained. “It’s a chain of signals over time.”
Fraudsters have adapted by probing for weak links within this chain. This can involve injecting synthetic video into camera feeds, replaying biometric sessions, or subtly altering device data. While each individual signal might pass its respective check, the deception lies in how these manipulated signals are pieced together to form a cohesive, yet fraudulent, identity.
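The chain-of-signals idea can be illustrated with a toy example. The field names, the threshold, and the flagging rules below are hypothetical, a minimal sketch of correlating individually passing checks across sessions rather than any vendor's actual pipeline:

```python
from collections import Counter

# Hypothetical per-session signals; every individual check (document,
# face match) passed in isolation, as in the cases described above.
SESSIONS = [
    {"user": "alice", "doc_ok": True, "face_ok": True,
     "device_id": "dev-1", "capture_nonce": "n-100"},
    {"user": "bob",   "doc_ok": True, "face_ok": True,
     "device_id": "dev-9", "capture_nonce": "n-200"},
    {"user": "carol", "doc_ok": True, "face_ok": True,
     "device_id": "dev-9", "capture_nonce": "n-200"},  # replayed capture
    {"user": "dave",  "doc_ok": True, "face_ok": True,
     "device_id": "dev-9", "capture_nonce": "n-300"},
]

def correlate(sessions, max_identities_per_device=2):
    """Flag sessions whose signals conflict across the chain,
    even though each check succeeded on its own."""
    flags = []
    nonce_counts = Counter(s["capture_nonce"] for s in sessions)
    device_users = {}
    for s in sessions:
        device_users.setdefault(s["device_id"], set()).add(s["user"])
    for s in sessions:
        if nonce_counts[s["capture_nonce"]] > 1:
            flags.append((s["user"], "replayed capture nonce"))
        elif len(device_users[s["device_id"]]) > max_identities_per_device:
            flags.append((s["user"], "device shared across identities"))
    return flags
```

Run against the sample data, the correlation step flags the replayed biometric session and the device reused across identities while leaving the genuinely consistent session alone, which is exactly the kind of cross-signal inconsistency a per-check system never sees.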
Trust at the Source: Signal Provenance
The evolving fraud landscape is driving a conceptual shift in identity verification, moving from validating outcomes to validating origins. This emerging model emphasizes “signal provenance”—understanding where data originates, how it was captured, and whether it has been tampered with. This approach mirrors legal frameworks that require a clear chain of custody for evidence.
“Trust can’t be inferred from outcomes anymore. It has to be proven at the point of capture,” Patishman stated. “A biometric match proves similarity, not authenticity.” Traditional systems often focus on the results of checks, such as a document or face match. However, attackers are increasingly manipulating inputs before these checks occur, leading to outputs that appear valid but are fundamentally compromised. “Biometrics don’t fail,” Patishman added. “But the systems around them can fail to verify their authenticity.”
The challenge is compounded by the fact that many organizations employ strong individual verification tools that operate in isolation. “Fraud doesn’t happen in one signal, it happens in gaps between them,” Patishman observed. “The next generation of identity verification isn’t a better check with a better tool. It’s a coordinated system that understands how signals relate to each other.”
This necessitates systems that can correlate signals across different sources and over time, identify inconsistencies, and adapt dynamically to emerging risks. Identity verification is transitioning from a series of independent checks to an evidence-based process. Ultimately, as Patishman concluded, “security is now much more reliant on system and process design, not just tool accuracy.”

