Why U.S. Lenders Can’t Rely on PII Alone: The Case for Device Intelligence in Fighting Fraud

In 2024, American consumers reported more than $12.5 billion in fraud losses to the Federal Trade Commission. Actual losses are likely higher, given the chronic underreporting of digital fraud incidents.
By the end of 2024, U.S. lenders faced a record $3.3 billion in exposure to synthetic identity fraud across auto loans, bank and retail credit cards, and unsecured personal loans. Deloitte projects the problem will generate at least $23 billion in losses by 2030.
Against this backdrop, one question keeps surfacing in my conversations with lenders across the U.S.: how much further can we go in fraud prevention by relying more on device intelligence? My answer — a great deal further. Personal data remains important, but the real progress comes from strengthening risk assessment by adding privacy-focused, device-based insights.
Many U.S. lenders continue to view device and behavioral data as a secondary input, something optional or supplementary to the “real” information provided by credit bureaus, telcos, or third-party data vendors. The pressure is especially acute among mid-tier and digital-first lenders — institutions too large for manual review yet too small to invest in fully customized fraud infrastructure.
But this misses the core issue. Fraudsters can piece together enough personally identifiable information (PII) to look convincing — even to the point of holding a full set of documents or verified accounts. The critical question is not whether a customer appears “real” on paper, but whether the digital traces behind the application belong to a consistent, authentic individual — and that’s exactly where device data provides a simple, effective answer.
Synthetic identity fraud is particularly well-suited to exploit systems built around PII. By blending authentic identifiers with fabricated attributes, synthetic profiles can bypass traditional verification.
In practice, synthetic applicants look stable on the surface. They may pass bureau checks, receive approvals, and even build a positive repayment record before “busting out” with high-value defaults. Because the fraud involves identities that are partly real, the losses are often harder to trace and harder to dispute.
Traditional scoring models, even those augmented with third-party PII-based data, are not designed to see through this type of deception. What they miss is the layer of digital behavior and device-level consistency: fraudsters can mimic documents, but not digital DNA.
In emerging markets, lending philosophies evolved differently, shaped by necessity rather than abundance. In India and Brazil, for example, where credit bureau coverage remains partial, lenders increasingly rely on device-based, behavioral, and network data as a core layer of digital trust. Decisioning models there were forced to evolve around non-PII signals from the very beginning, so device intelligence, behavioral analytics, and contextual risk indicators are not "nice to have" features; they are central to risk assessment.
Device-level intelligence provides that missing layer: a source of signal that is independent of personal identifiers and far harder to manipulate. Every interaction, from the configuration of a browser to the pattern of login attempts to subtle differences in connection attributes, creates a footprint that is difficult to forge and remains stable over time.
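To make this concrete, here is a minimal sketch, under illustrative assumptions, of how a stable, non-PII device footprint could be derived from technical attributes. The field names and hashing scheme are hypothetical and are not JuicyScore's actual method.

```python
import hashlib
import json

def device_footprint(signals: dict) -> str:
    """Derive a stable, non-PII device identifier from technical signals.

    `signals` is an illustrative set of attributes (user agent, screen size,
    timezone, connection type, etc.); no name, email, or other personal
    identifier is involved.
    """
    # Normalize and serialize the attributes deterministically so the same
    # device configuration always yields the same footprint.
    canonical = json.dumps(
        {k: str(v).strip().lower() for k, v in sorted(signals.items())},
        separators=(",", ":"),
    )
    # Hash the canonical form; the result cannot be reversed into the
    # original attributes and contains no PII.
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Example: two applications submitted from the same browser and device setup
# produce the same footprint even if the stated applicant details differ.
fp = device_footprint({
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen": "1920x1080",
    "timezone": "America/New_York",
    "languages": "en-US,en",
    "connection_type": "wifi",
})
print(fp)
```

Because the footprint is computed only from technical attributes, the same device setup yields the same identifier regardless of which name or address appears on the application.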
Regulation is accelerating the move toward privacy-first fraud prevention. Global frameworks like the EU’s GDPR, India’s DPDP Act, and Brazil’s LGPD all restrict the use of personal identifiers. The U.S. is following the same path: while no federal privacy law exists yet, state-level acts — including the CCPA, CPRA, and similar regulations in Colorado, Virginia, and Connecticut — are setting new standards. Federal proposals are in progress, and regulators are increasing scrutiny of how lenders, fintechs, and data brokers handle consumer data.
Industry standards are evolving in parallel. The OpenID Foundation’s Shared Signals Framework (SSF) defines methods for sharing security events between organizations without exposing PII — part of a wider shift toward telemetry-driven intelligence, which underpins modern, device-based fraud detection.
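The framework conveys these events as Security Event Tokens (SETs, RFC 8417). As a rough illustration only (the endpoints, identifiers, and values below are assumptions, and the authoritative claim layout is in the SSF and CAEP specifications), such an event might reference its subject by an opaque, non-PII identifier:

```python
# Simplified, illustrative shape of a Security Event Token (SET) payload as
# used by the Shared Signals Framework. The exact claim layout is defined by
# the SSF/CAEP specs; the endpoints and values below are made up.
shared_signal_event = {
    "iss": "https://transmitter.example",        # hypothetical event transmitter
    "aud": "https://receiver.example",           # hypothetical event receiver
    "iat": 1735689600,                           # issued-at (Unix time)
    "jti": "3d0c3cf797584bd193bd0fb1bd4e7d30",   # unique token identifier
    # The subject is referenced by an opaque identifier, not by PII.
    "sub_id": {"format": "opaque", "id": "device-7f3a9c1e"},
    "events": {
        # CAEP event type indicating that a session was revoked.
        "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
            "event_timestamp": 1735689500
        }
    },
}
# In practice, the payload is signed as a JWT and delivered to subscribed
# receivers through the framework's push or poll transmission methods.
```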
For financial institutions, this intersection of regulation and innovation makes data protection a strategic necessity. Privacy-first device intelligence provides a scalable path forward, allowing risk assessment without processing personal identifiers. JuicyScore’s models use over 65,000 technical and behavioral signals — from device configuration and connection quality to behavioral consistency — to build a stable device ID and dynamic risk profile.
None of these signals identify users personally, yet they deliver precise, compliant, and future-proof insights aligned with global privacy laws.
As part of a global shift toward privacy-by-design risk assessment, JuicyScore’s models operationalize non-PII intelligence at scale — across more than 45 markets.
Clients who integrate our device and behavioral scoring report sharper fraud detection and lower false declines, without compromising compliance. They achieve this by correlating device and behavioral patterns — without collecting or storing sensitive identifiers.
Let's take a concrete case. A mid-tier U.S. digital lender notices a sharp rise in charge-offs linked to synthetic profiles, yet its bureau-based models show no anomaly. Applying JuicyScore's device intelligence changes the picture: clusters of applications turn out to originate from the same device infrastructure, share similar internet connection anomalies, and display randomized yet clearly artificial behavior patterns. These signals reveal the scale of the fraud ring, and the lender can adjust its decisioning rules accordingly.
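A minimal sketch of the kind of clustering check described above, assuming applications have already been enriched with a non-PII device footprint (the field names and thresholds are hypothetical, not tuned production values):

```python
from collections import defaultdict

def flag_device_clusters(applications, min_cluster_size=5):
    """Group loan applications by device footprint and flag suspicious clusters.

    `applications` is a list of dicts with hypothetical fields: 'app_id',
    'device_footprint', and 'connection_anomaly' (a boolean produced by
    upstream network checks).
    """
    clusters = defaultdict(list)
    for app in applications:
        clusters[app["device_footprint"]].append(app)

    flagged = []
    for footprint, apps in clusters.items():
        anomalies = sum(1 for a in apps if a["connection_anomaly"])
        # Many distinct applications from one device, most of them with
        # connection anomalies, hints at a coordinated synthetic-fraud ring.
        if len(apps) >= min_cluster_size and anomalies / len(apps) > 0.5:
            flagged.append({
                "device_footprint": footprint,
                "application_ids": [a["app_id"] for a in apps],
                "anomaly_share": round(anomalies / len(apps), 2),
            })
    return flagged
```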
The result is not just reduced losses. Approval rates for genuine customers also improve, because risk segmentation now draws on authentic device behavior in addition to PII.
The U.S. lending industry is at a crossroads. Loan delinquencies are growing — with a significant share directly linked to fraudulent activity. Synthetic identities continue to rise, and consumer trust is steadily eroding. Treating device and behavioral signals as secondary data is no longer viable — they must become a central element of modern risk assessment.
The stakes are too high to keep device data on the sidelines.
Fraud in the U.S. is not merely a financial loss — it’s a systemic erosion of trust. Synthetic identity fraud illustrates the limits of PII-based defenses — and the need for a deeper, more resilient layer of intelligence.
For lenders, the next step is clear: make device and behavioral intelligence a foundation, not a footnote. The shift is both possible and necessary — and those who embrace it will be better prepared for the digital reality of the decade ahead.
Book a demo with JuicyScore’s team to explore how our solutions can strengthen fraud prevention, improve decisioning, and protect your portfolio.