Capacity-Adaptive UI

A dynamic system that responds to human capacity through distributed intelligence

How it works: Four capacity inputs (cognitive, temporal, emotional, valence) are combined into a single coherent interface mode. Components adapt density, content length, motion, and tone based on that mode, not on individual slider values.

Live Demo

Adjust the capacity controls (bottom-right) to see how this card adapts in real-time.

Exploratory Mode
You're doing great!
Adaptive Interface
This card demonstrates how the capacity system adapts UI in real-time based on your current state.
  • Cognitive capacity controls visual density
  • Temporal capacity controls content length

Live State

  • Cognitive: 0.7 → density
  • Temporal: 0.7 → length
  • Emotional: 0.7 → motion
  • Valence: +0.3 → tone

Mode Derivation

See exactly how your inputs become a coherent interface mode.

1. Your Inputs
  • Cognitive (mental bandwidth): 70%
  • Temporal (time available): 70%
  • Emotional (resilience): 70%
  • Valence (mood): +0.3

2. Derived Mode: Exploratory
  • density: medium
  • guidance: low
  • choices: normal
  • motion: expressive
  • contrast: standard
  • focus: gentle

3. UI Effects (→ marks effects active in the current mode)
  • Fewer items shown, simpler layouts
  • Full feature display, dense grids
  • More labels, helper text visible
  • Reduced options, smart defaults
  • No animations, fully static UI
  • Slow rhythmic motion: breathe, float
  • Calm animations, no surprises
  → Playful micro-interactions
  • Higher contrast for accessibility
  → Soft highlight on important elements
  • Strong beacon glow on key elements

Derivation Rules

Cognitive controls density:

  • cognitive < 0.4 → density: low
  • cognitive > 0.7 → density: high
  • else → density: medium

Temporal controls choices:

  • temporal < 0.4 → choiceLoad: minimal
  • else → choiceLoad: normal

Emotional controls motion:

  • emotional < 0.15 → motion: off
  • emotional < 0.4 → motion: soothing
  • emotional > 0.6 and valence > 0.15 → motion: expressive
  • else → motion: subtle

Valence controls contrast:

  • valence < -0.15 → contrast: boosted
  • else → contrast: standard

Cognitive controls focus:

  • motion == off → focus: default
  • cognitive < 0.4 → focus: guided
  • cognitive < 0.7 → focus: gentle
  • else → focus: default
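The rules above can be sketched as a single pure function. The names (`deriveMode`, `CapacityInputs`, `InterfaceMode`) are assumptions for illustration, not the real FieldManager API; note also that the sketch follows the strict inequalities as written, while the live demo shows focus: gentle at cognitive = 0.7, which suggests the actual implementation treats that boundary inclusively.

```typescript
// Hypothetical sketch of the derivation rules; names are assumptions.
interface CapacityInputs {
  cognitive: number; // 0..1, mental bandwidth
  temporal: number;  // 0..1, time available
  emotional: number; // 0..1, resilience
  valence: number;   // -1..1, mood
}

interface InterfaceMode {
  density: "low" | "medium" | "high";
  choiceLoad: "minimal" | "normal";
  motion: "off" | "soothing" | "subtle" | "expressive";
  contrast: "standard" | "boosted";
  focus: "default" | "guided" | "gentle";
}

function deriveMode(c: CapacityInputs): InterfaceMode {
  // Cognitive controls density.
  const density = c.cognitive < 0.4 ? "low" : c.cognitive > 0.7 ? "high" : "medium";
  // Temporal controls choices.
  const choiceLoad = c.temporal < 0.4 ? "minimal" : "normal";
  // Emotional (plus valence) controls motion.
  const motion =
    c.emotional < 0.15 ? "off" :
    c.emotional < 0.4 ? "soothing" :
    c.emotional > 0.6 && c.valence > 0.15 ? "expressive" : "subtle";
  // Valence controls contrast.
  const contrast = c.valence < -0.15 ? "boosted" : "standard";
  // Cognitive controls focus, unless motion is off.
  const focus =
    motion === "off" ? "default" :
    c.cognitive < 0.4 ? "guided" :
    c.cognitive < 0.7 ? "gentle" : "default";
  return { density, choiceLoad, motion, contrast, focus };
}

const mode = deriveMode({ cognitive: 0.7, temporal: 0.7, emotional: 0.7, valence: 0.3 });
// mode.density === "medium", mode.motion === "expressive"
```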

Roadmap

  • Phase 1 (done): Manual 4-input controls with mode derivation
  • Phase 2 (done): Automatic signals (scroll velocity, time-on-page, interaction patterns) modulate inputs passively, plus pattern-based prediction from past sessions
  • Phase 3 (done): Arousal dimension, multimodal feedback, proportional scaling systems

Development Phases

Framework implementation status.

Phase 1 (Complete)

Manual Inputs

  • 4-input capacity controls
  • FieldManager with derived fields
  • Mode derivation (4 modes)
  • Active token system
  • 4-tier motion system
  • prefers-reduced-motion override
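Phase 1's prefers-reduced-motion override can be sketched as a final clamp on the derived motion tier. `resolveMotion` is a hypothetical name; in the browser the flag would come from `window.matchMedia("(prefers-reduced-motion: reduce)").matches`, passed in here so the function stays testable.

```typescript
type MotionTier = "off" | "soothing" | "subtle" | "expressive";

// Hypothetical sketch: the OS-level reduced-motion preference always wins,
// forcing the tier to "off" regardless of emotional capacity.
function resolveMotion(derived: MotionTier, prefersReduced: boolean): MotionTier {
  return prefersReduced ? "off" : derived;
}
```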
Phase 2 (Complete)

Automatic Signals

  • SignalAggregator (6 detectors)
  • Time, Session, Scroll detectors
  • Interaction, Input, Environment detectors
  • Auto-mode with manual override
  • PatternStore + PatternExtractor
  • PredictionEngine + usePredictedCapacity()
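Phase 2's passive modulation could look roughly like the following, where each detector reports small deltas and the aggregator blends them into a nudge on the manual inputs. This is a hedged sketch; `SignalDelta`, `aggregateSignals`, and the 0.1 weight are assumptions, not the real SignalAggregator API.

```typescript
// Hypothetical sketch: each detector emits per-dimension deltas in [-1, 1].
type SignalDelta = { cognitive?: number; temporal?: number; emotional?: number };

function aggregateSignals(deltas: SignalDelta[], weight = 0.1): Required<SignalDelta> {
  const sum: Required<SignalDelta> = { cognitive: 0, temporal: 0, emotional: 0 };
  for (const d of deltas) {
    sum.cognitive += d.cognitive ?? 0;
    sum.temporal += d.temporal ?? 0;
    sum.emotional += d.emotional ?? 0;
  }
  // Scale down so passive signals modulate the manual inputs
  // rather than override them.
  return {
    cognitive: sum.cognitive * weight,
    temporal: sum.temporal * weight,
    emotional: sum.emotional * weight,
  };
}

// e.g. fast scrolling and short dwell time both suggest reduced bandwidth:
const nudge = aggregateSignals([{ cognitive: -1 }, { cognitive: -0.5, temporal: 0.5 }]);
// nudge.cognitive ≈ -0.15, nudge.temporal ≈ 0.05
```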
Phase 3 (Complete)

Extended Dimensions

  • Arousal dimension → pace token
  • Haptic feedback (Vibration API)
  • Sonic feedback (Web Audio API)
  • Fibonacci spacing scale
  • Golden ratio utilities
  • guidance + choiceLoad consumed
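The Phase 3 proportional scaling systems (Fibonacci spacing, golden ratio) might be sketched as below. Function names and the 4px base unit are assumptions; the real utilities may expose a different shape.

```typescript
const PHI = (1 + Math.sqrt(5)) / 2; // golden ratio, ≈ 1.618

// First n Fibonacci numbers multiplied by a base unit (px),
// e.g. for a spacing token scale.
function fibonacciScale(n: number, base = 4): number[] {
  const fib = [1, 1];
  while (fib.length < n) fib.push(fib[fib.length - 1] + fib[fib.length - 2]);
  return fib.slice(0, n).map((f) => f * base);
}

// Scale a size up (positive steps) or down (negative steps)
// by golden-ratio increments.
function goldenStep(size: number, steps = 1): number {
  return size * Math.pow(PHI, steps);
}

fibonacciScale(6); // → [4, 4, 8, 12, 20, 32]
```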