Capacity-Adaptive UI
A dynamic system that responds to human capacity through distributed intelligence
How it works: Four capacity inputs (cognitive, temporal, emotional, valence) are combined into a single coherent interface mode. Components adapt their density, content length, motion, and tone based on that mode, not on the individual slider values.
Live Demo
Adjust the capacity controls (bottom-right) to see how this card adapts in real time.
Mode Derivation
See exactly how your inputs become a coherent interface mode.
Derivation Rules
Cognitive controls density:
- cognitive < 0.35 → density: low
- cognitive > 0.75 → density: high
- else → density: medium
Temporal controls choices:
- temporal < 0.35 → choiceLoad: minimal
- else → choiceLoad: normal
Emotional controls motion:
- emotional < 0.35 → motion: subtle
- else if valence > 0.25 → motion: expressive
- else → motion: subtle
Valence controls contrast:
- valence < -0.25 → contrast: boosted
- else → contrast: standard
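The rules above can be sketched as a single pure function. This is a minimal TypeScript sketch of the derivation, not the project's actual implementation; the type and function names (`CapacityInputs`, `InterfaceMode`, `deriveMode`) are illustrative assumptions.

```typescript
// Discrete output values a component can branch on.
type Density = "low" | "medium" | "high";
type ChoiceLoad = "minimal" | "normal";
type Motion = "subtle" | "expressive";
type Contrast = "standard" | "boosted";

// The four capacity inputs. Cognitive/temporal/emotional are assumed to be
// in 0..1; valence is assumed to be in -1..1 (it can go negative).
interface CapacityInputs {
  cognitive: number;
  temporal: number;
  emotional: number;
  valence: number;
}

interface InterfaceMode {
  density: Density;
  choiceLoad: ChoiceLoad;
  motion: Motion;
  contrast: Contrast;
}

// Direct transcription of the derivation rules listed above.
function deriveMode(c: CapacityInputs): InterfaceMode {
  return {
    density: c.cognitive < 0.35 ? "low" : c.cognitive > 0.75 ? "high" : "medium",
    choiceLoad: c.temporal < 0.35 ? "minimal" : "normal",
    motion: c.emotional < 0.35 ? "subtle" : c.valence > 0.25 ? "expressive" : "subtle",
    contrast: c.valence < -0.25 ? "boosted" : "standard",
  };
}
```

Because the function is pure and total, components can subscribe to the derived `InterfaceMode` object and ignore the raw slider values entirely.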
Roadmap
Phase 1: Manual 4-input controls with mode derivation
Phase 2: Automatic signals (scroll velocity, time-on-page, interaction patterns) to modulate inputs without manual controls
Phase 3: Arousal dimension, multimodal feedback, proportional scaling systems
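As one way Phase 2's automatic signals could modulate an input, here is a hypothetical sketch in which scroll velocity and time-on-page nudge the temporal input. Everything here (the `modulateTemporal` function, the saturation constants, the ±0.3 nudge range) is an assumption for illustration, not a committed design.

```typescript
// Keep modulated inputs in the expected 0..1 range.
const clamp01 = (x: number): number => Math.min(1, Math.max(0, x));

// Hypothetical Phase 2 signal fusion: fast scrolling suggests scarce time,
// while long dwell on a page suggests available time. Signals nudge the
// manual baseline by at most +/-0.3 rather than replacing it.
function modulateTemporal(
  manualTemporal: number, // 0..1 baseline from the manual control
  scrollVelocity: number, // px/s, assumed smoothed upstream
  dwellSeconds: number    // time on the current page
): number {
  const rush = Math.min(1, scrollVelocity / 2000); // saturates at 2000 px/s
  const settle = Math.min(1, dwellSeconds / 60);   // saturates at 60 s
  return clamp01(manualTemporal - 0.3 * rush + 0.3 * settle);
}
```

A design like this keeps the manual controls meaningful during Phase 2: signals bias the inputs rather than override them, so the derived mode degrades gracefully when signals are noisy.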