Naomi Lariviere is a senior technology and product leader at ADP who specializes in integrating AI ethics and data governance into responsible product development. She leads initiatives that embed telemetry, data quality, and ethical guardrails directly into the product lifecycle. At ADP, she plays a central role in shaping how advanced analytics and AI are deployed safely and transparently for more than one million clients, with a strong focus on trust, compliance, and human-centered decision support.
In this interview, Naomi explains how data quality, telemetry, and ethics are embedded into every stage of product development. She emphasizes that high-quality, contextual data is a prerequisite for any AI or LLM deployment, and that missing data is treated as a signal rather than a flaw. ADP relies on deterministic, rules-based AI for high-risk domains such as payroll, avoiding probabilistic AI wherever accuracy must be absolute. Clients retain decision authority, supported by transparency into how outputs are generated. Lariviere stresses caution in sensitive areas such as performance reviews, and highlights bias testing, independent audits, experimentation, and human curiosity as essential safeguards for responsible AI adoption.
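To make these deployment principles concrete, the following is a minimal, hypothetical sketch, not ADP's actual architecture or code: a data-fitness gate that surfaces missing fields as signals to investigate rather than defects to paper over, and a router that sends high-risk domains such as payroll to a deterministic rules engine while reserving probabilistic (LLM) assistance for lower-stakes drafts that clients still approve. All names, fields, and domains here are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical illustration only -- not ADP's implementation.
REQUIRED_FIELDS = {"employee_id", "pay_period", "gross_pay", "jurisdiction"}  # assumed fields
HIGH_RISK_DOMAINS = {"payroll", "tax_filing"}  # assumed example domains


@dataclass
class FitnessReport:
    complete: bool
    missing_fields: set[str] = field(default_factory=set)


def check_data_fitness(record: dict) -> FitnessReport:
    """Treat missing fields as a signal to investigate, not a flaw to hide."""
    missing = REQUIRED_FIELDS - record.keys()
    return FitnessReport(complete=not missing, missing_fields=missing)


def route(domain: str, record: dict,
          rules_engine: Callable[[dict], dict],
          llm_assistant: Callable[[dict], dict]) -> dict:
    """Deterministic path for high-risk domains; LLM only after data fitness passes."""
    report = check_data_fitness(record)
    if not report.complete:
        # Incomplete data is routed to human review, never guessed at.
        return {"status": "needs_review", "missing": sorted(report.missing_fields)}
    if domain in HIGH_RISK_DOMAINS:
        return rules_engine(record)  # exact, auditable, rules-based result
    draft = llm_assistant(record)    # probabilistic output for lower-stakes work
    draft["status"] = "draft_pending_client_approval"  # client keeps decision authority
    return draft
```

In this sketch the LLM never produces a final answer on its own; even when the data passes the fitness gate, its output is marked as a draft awaiting client approval, reflecting the point that clients retain decision authority.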
I. Core Philosophy
Data quality is the foundation of AI effectiveness
Telemetry and continuous feedback guide product evolution
AI must preserve human agency and transparency
II. AI Deployment Principles at ADP
Deterministic vs. probabilistic AI pathways
High-risk areas (e.g., payroll) require deterministic certainty
LLMs used only after data fitness validation
III. Governance & Trust
Control towers / orchestration layers manage AI behavior
Clients always review, modify, and approve AI outputs
Trust built through explainability and gradual automation
IV. Sensitive Use Cases & Regulation
Cautious stance on performance reviews and career decisions
Heavy emphasis on compliance, regulation, and experimentation boundaries
V. Bias & Responsible AI
Bias mitigation through diverse teams, data audits, and third-party testing (a minimal bias-check sketch follows this outline)
Responsible AI as ongoing discipline, not a one-time fix
VI. Innovation & Problem Reframing
Organizations must relearn how to define problems
Curiosity, experimentation, and human creativity remain irreplaceable
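As one concrete illustration of the kind of bias testing referenced in section V, the sketch below computes per-group selection rates and applies the widely used four-fifths (80%) rule of thumb for adverse impact. The data, group labels, and threshold are hypothetical and are not drawn from ADP's audits or tooling.

```python
from collections import defaultdict

# Hypothetical bias check -- illustrative data and groups, not ADP audit results.
# Flags groups whose selection rate falls below 80% of the highest group's rate
# (the common "four-fifths rule" heuristic for adverse impact).

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}


def adverse_impact_flags(outcomes: list[tuple[str, bool]],
                         threshold: float = 0.8) -> dict[str, float]:
    rates = selection_rates(outcomes)
    best = max(rates.values())
    if best == 0:
        return {}  # no group was selected; nothing to compare against
    # Ratio of each group's rate to the best-performing group's rate.
    return {g: r / best for g, r in rates.items() if r / best < threshold}


if __name__ == "__main__":
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    print(adverse_impact_flags(sample))  # {'B': 0.5} -> group B flagged for review
```

A check like this is only a starting point; as the outline notes, it would sit alongside diverse teams, data audits, and independent third-party testing rather than replace them.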