Dataiku has expanded its commitment to transparent and governed AI by announcing support for the NVIDIA Nemotron open model family within Kiji Inspector™, its newly introduced open‑source explainability framework for enterprise‑grade AI agents. The announcement, made through Dataiku’s 575 Lab open‑source initiative, positions the company at the forefront of solving one of the toughest enterprise AI challenges: making autonomous agents trustworthy, understandable, and auditable.
As organizations rush to deploy AI agents across mission‑critical workflows spanning revenue operations, compliance, safety, and customer engagement, the need for visibility into how these agents arrive at decisions has become paramount. Kiji Inspector directly addresses this problem by offering explainability capabilities purpose‑built for enterprise use, rather than relying on generic or opaque LLM reasoning.
“Without explainability, scaling AI means scaling uncertainty. With Kiji Inspector for NVIDIA Nemotron, enterprises can finally understand and trust how their AI agents reason.”
— Hannes Hapke, Director, 575 Lab at Dataiku
Kiji Inspector uses a sparse autoencoder to analyze an AI model at the moment it decides to use a specific tool or action, surfacing the internal signals that drove that choice. This allows teams to trace, validate, and trust AI‑driven decisions without degrading system performance, a critical factor as enterprises shift toward sovereign, self‑hosted AI architectures.
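The press release does not detail Kiji Inspector's internals, but the general sparse‑autoencoder technique it names can be sketched in a few lines. The idea is to project a model's hidden activation into a much larger, sparsely activated feature basis; the handful of non‑zero features at the moment of a tool choice are the "internal signals" a reviewer inspects. Everything below (weights, dimensions, function names) is an illustrative assumption, not Dataiku's implementation.

```python
import numpy as np

# Illustrative sparse-autoencoder sketch (NOT Dataiku's actual code).
# A trained SAE reconstructs a hidden activation h as a sparse combination
# of interpretable features:
#   features = ReLU(W_enc @ h + b_enc)
#   h_hat    = W_dec @ features
# Here the weights are random stand-ins for a trained dictionary.

rng = np.random.default_rng(0)

d_model, d_features = 16, 64              # hidden size, feature dictionary size (assumed)
W_enc = rng.normal(0, 0.1, (d_features, d_model))
b_enc = np.zeros(d_features)
W_dec = rng.normal(0, 0.1, (d_model, d_features))

def encode(h):
    """Project a hidden activation into the sparse feature basis."""
    return np.maximum(W_enc @ h + b_enc, 0.0)   # ReLU keeps most features at zero

def top_features(h, k=3):
    """Indices of the k strongest features active for activation h."""
    f = encode(h)
    return np.argsort(f)[::-1][:k]

# Stand-in for the agent's hidden state at the moment it picks a tool:
h = rng.normal(size=d_model)
print(top_features(h))   # the few features that "explain" the decision
```

In a real deployment, each feature index would be mapped (via prior analysis of what inputs activate it) to a human‑readable concept, so a reviewer can audit why the agent chose a particular tool.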
“Enterprises are embedding AI agents into decisions that influence revenue, safety, compliance, and customer trust, yet most still lack structural visibility into how those systems reason,” said Hannes Hapke, Director of 575 Lab at Dataiku. “Bringing Kiji Inspector to NVIDIA Nemotron open models changes that equation. It enables organizations to inspect and refine AI explainability before risk becomes reality.”
The launch builds on the growing alignment between Dataiku and NVIDIA as both companies expand support for production‑grade generative and agentic AI. NVIDIA highlighted that open‑source models like Nemotron give enterprises deeper control, auditability, and transparency.
“Scaling autonomous AI agents across the enterprise demands trust rooted in transparency and accountability,” said Amanda Saunders, Director of Generative AI at NVIDIA.
Early adopters such as SLB emphasized that explainability is becoming a prerequisite for bringing AI into real engineering and operational workflows.
Kiji Inspector for NVIDIA Nemotron is available starting today, offering enterprises a new pathway to deploy high‑performance, governed, explainable AI agents at scale.
