Inner Speech as Behavior Guides:
Steerable Imitation of Diverse Behaviors for Human-AI Coordination
1Massachusetts Institute of Technology
2Georgia Institute of Technology
3Harvard University
(a) Behaviorist framework: Direct stimulus-response mapping
(b) Cognitive framework: Linguistically-mediated action selection
Figure: Contrasting theoretical frameworks for IL. (a) The behaviorist approach models human behavior as a direct mapping from environmental states to actions (st ↦ at), treating cognitive processes as opaque transformations. (b) The cognitive approach instantiated by MIMIC introduces inner speech as a mediational layer (st → mt → at), where mt represents linguistically-structured internal deliberation that enables behavioral diversity and contextual adaptation.
Abstract
Effective human-AI coordination requires artificial agents that can exhibit and respond to human-like behaviors while adapting to changing contexts. Imitation learning has emerged as a prominent approach for building such agents by training them to mimic human-demonstrated behaviors. However, current methods struggle to capture the inherent diversity and non-Markovian nature of human behavior, and they lack the ability to steer behavior at inference time. Drawing inspiration from cognitive theories of human behavior, in which inner speech guides action selection before execution, we propose MIMIC (Modeling Inner Motivations for Imitation and Control), a framework that uses language as an internal representation of behavioral intent.
MIMIC makes novel use of vision-language models as linguistic scaffolding to train a conditional variational autoencoder that generates inner speech from observations. A diffusion-based behavior cloning policy then selects actions conditioned on the current observation and the generated inner speech. By conditioning the agent on behavior-specific speech, MIMIC enables fine-grained steering of behavior at inference time. Experiments across robotic manipulation tasks and human-AI collaboration games demonstrate that MIMIC significantly enhances both behavior diversity and fidelity to human demonstrations, while enabling nuanced behavioral steering without training on additional demonstrations.
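The pipeline described above (observation → inner speech → action, i.e. st → mt → at) can be sketched with toy stand-ins. Everything below is illustrative, not the paper's implementation: the dimensions, the frozen linear maps (standing in for the trained CVAE and diffusion policy), and the function names are all assumptions made for the sketch. The key point is the interface: at inference time the same policy can be conditioned either on generated inner speech (default behavior) or on a behavior-specific speech embedding (steering).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not from the paper.
OBS_DIM, SPEECH_DIM, ACT_DIM = 8, 4, 2

# Frozen toy weights standing in for trained networks.
W_speech = rng.standard_normal((SPEECH_DIM, OBS_DIM))            # CVAE stand-in
W_policy = rng.standard_normal((ACT_DIM, OBS_DIM + SPEECH_DIM))  # diffusion-policy stand-in

def generate_inner_speech(obs):
    """Stand-in for MIMIC's CVAE: observation -> inner-speech embedding (st -> mt)."""
    return np.tanh(W_speech @ obs)

def select_action(obs, speech):
    """Stand-in for the diffusion policy: (st, mt) -> at."""
    return W_policy @ np.concatenate([obs, speech])

obs = np.ones(OBS_DIM)

# Default behavior: condition on inner speech generated from the observation.
a_default = select_action(obs, generate_inner_speech(obs))

# Steered behavior: swap in a behavior-specific speech embedding at inference time.
steering_speech = np.array([1.0, -1.0, 1.0, -1.0])
a_steered = select_action(obs, steering_speech)
```

Only the speech input changes between the two calls; the observation and policy stay fixed, which is what makes the steering training-free.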
Method
Coming Soon
Results
Ablation Studies
Ablation Studies: Analysis of key components in the MIMIC framework.
Generation and Steering Examples
Overcooked
Here, I balance onion placement and soup retrieval to maintain workflow. The agent exhibits strategic placement of onions in the pot while communicating with the other agent to reduce overlap in actions.
The agent adapts its movement to minimize collisions in the cramped space. The agent coordinates movements to minimize delays and optimize task completion.
Carefully navigate around the green-hatted agent to avoid collisions.
D3IL Benchmark
Use a zigzag pattern to correct initial misalignment.
Adopt a systematic approach, sorting one color completely before switching.
I focus on speed over precision, rapidly stacking blocks.
BibTeX
@inproceedings{
trivedi2025inner,
title={Inner Speech as Behavior Guides: Steerable Imitation of Diverse Behaviors for Human-{AI} Coordination},
author={Rakshit Trivedi and Kartik Sharma and David C. Parkes},
booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
year={2025},
url={https://openreview.net/forum?id=AwLRF1lZvI}
}