A March 2026 paper (https://arxiv.org/pdf/2603.15381) from Meta FAIR (Dupoux, LeCun, and Malik) identifies the core limitation of current AI: once deployed, models learn nothing. Learning is outsourced to human experts through rigid MLOps pipelines. The authors propose a fix: systems that act, observe feedback, and improve autonomously. They estimate full implementation is decades away.

KMod already works this way within its domain.

The paper describes “learning from action”: an agent acts, receives a feedback signal, and adjusts. KMod does exactly this. It suggests a field mapping, the payroll specialist accepts or corrects, the correction becomes a persistent rule, the next suggestion improves. A correction in Munich at 9:00 AM improves suggestions in Vienna at 9:01. No retraining. No batch processing.
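The loop above can be sketched in a few lines. This is a hypothetical illustration, not KMod's actual API: the names `RuleStore`, `suggest`, and `record_correction`, and the example field `Grundgehalt`, are all invented for this sketch. The key property it shows is that a correction becomes a rule in a shared store immediately, so the next suggestion anywhere improves without retraining.

```python
class RuleStore:
    """Shared store of validated mapping rules, visible to all sessions.
    Illustrative sketch only; names do not reflect KMod's real API."""

    def __init__(self):
        self.rules = {}  # source field name -> validated target field name

    def suggest(self, source_field, default):
        # Prefer a validated rule; fall back to the model's default guess.
        return self.rules.get(source_field, default)

    def record_correction(self, source_field, corrected_target):
        # A specialist's correction becomes a persistent rule at once:
        # no retraining, no batch job.
        self.rules[source_field] = corrected_target


store = RuleStore()

# Munich, 9:00 AM: the system guesses wrong; the specialist corrects it.
store.suggest("Grundgehalt", default="bonus")          # -> "bonus" (wrong guess)
store.record_correction("Grundgehalt", "base_salary")

# Vienna, 9:01 AM: the same rule now drives the suggestion.
store.suggest("Grundgehalt", default="bonus")          # -> "base_salary"
```

In a real system the store would be a database and the rules would carry context (client, country, schema version), but the shape of the loop is the same.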

The paper identifies “domain mismatch” as a critical failure: models trained on internet data break in specific environments. KMod was built for this problem. Every new client brings different schemas, field names, and country rules. The system adapts through use.

The paper calls for “active data selection.” KMod requires human validation for every learning event. Nothing enters the knowledge model unsupervised. The paper warns about “alignment hacking” in autonomous systems. KMod’s constraint is the same one the authors recommend: keep the human in the loop.
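The human-in-the-loop constraint can also be made concrete. The sketch below is hypothetical (the names `propose`, `review`, and the structures involved are invented for illustration): the system may stage a candidate rule, but nothing reaches the knowledge model without an explicit human decision.

```python
knowledge_model = {}   # committed, human-validated rules
pending = []           # candidate rules awaiting specialist review

def propose(source_field, target_field):
    # The system may propose, but it can never commit on its own.
    pending.append((source_field, target_field))

def review(specialist_approves):
    # Every learning event passes through a human decision;
    # rejected candidates are simply discarded.
    while pending:
        source_field, target_field = pending.pop()
        if specialist_approves(source_field, target_field):
            knowledge_model[source_field] = target_field

propose("Grundgehalt", "base_salary")
review(lambda src, tgt: True)   # specialist accepts this candidate
# knowledge_model now contains the validated rule; an unreviewed
# proposal would still be sitting in `pending`.
```

The design point is that the commit path has exactly one entrance, and a human sits at it.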

The result: 1.5 million validated mapping decisions across 7,000+ schemas in 150+ countries. 88.9% deterministic accuracy. Implementations in days, not months. The paper describes a theoretical architecture. KMod is that architecture in production, for payroll data.