// GESTURE_CLASSIFIER_SYSTEM_V2.0

> SYSTEM_INITIALIZING...

00_MODE_SELECTION

Both models train on the same preset-gesture pairs. Click the mode toggle to switch between Classification and Regression.
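
Switching modes only changes which model's output drives the synth; the training data is shared. A minimal sketch of that routing, with hypothetical names (neither interface is this app's actual API):

```ts
type Params = { alpha: number; beta: number; gamma: number; delta: number };

interface GestureModel {
  // Both models consume the same flattened hand-landmark vector.
  predict(landmarks: number[]): Params;
}

type Mode = "classification" | "regression";

class ModeSwitch {
  constructor(
    private classifier: GestureModel,
    private regressor: GestureModel,
    private mode: Mode = "classification",
  ) {}

  toggle(): void {
    this.mode = this.mode === "classification" ? "regression" : "classification";
  }

  predict(landmarks: number[]): Params {
    // Same input either way; only the model (and thus the output behavior) changes.
    return this.mode === "classification"
      ? this.classifier.predict(landmarks)
      : this.regressor.predict(landmarks);
  }
}
```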

01_DATA_COLLECTION

Save 4 presets below, then hold a preset's button while making gestures to collect samples for that preset.

ALPHA: 0 samples
BETA: 0 samples
GAMMA: 0 samples
DELTA: 0 samples

Total: 0 samples (need at least 5 per preset, 20 total)
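
A rough sketch of how the collection loop and the sample-count gate above might look; all names here are illustrative, not this app's actual code:

```ts
const PRESETS = ["ALPHA", "BETA", "GAMMA", "DELTA"] as const;
const MIN_PER_PRESET = 5;
const MIN_TOTAL = 20;

type Sample = { landmarks: number[]; presetIndex: number };
const samples: Sample[] = [];

// Called every video frame while a preset button is held down.
function collectSample(landmarks: number[], presetIndex: number): void {
  samples.push({ landmarks, presetIndex });
}

// Gate the training step on the same thresholds the counters report.
function readyToTrain(): boolean {
  const counts: number[] = new Array(PRESETS.length).fill(0);
  for (const s of samples) counts[s.presetIndex]++;
  return counts.every((c) => c >= MIN_PER_PRESET) && samples.length >= MIN_TOTAL;
}
```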

02_MODEL_TRAINING

03_REALTIME_PREDICTION

ALPHA: 00%
BETA: 00%
GAMMA: 00%
DELTA: 00%
WAITING_FOR_INPUT...

The regression model predicts parameter values directly from gestures; parameters update in real time below.
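
A sketch of the real-time regression path, assuming a hand tracker that yields a flattened landmark vector per frame; `estimateLandmarks` and `regressor` are stand-ins, not real APIs:

```ts
declare function estimateLandmarks(video: HTMLVideoElement): number[] | null;
declare const regressor: { predict(landmarks: number[]): number[] };

function predictionLoop(video: HTMLVideoElement, onParams: (p: number[]) => void): void {
  const step = () => {
    const landmarks = estimateLandmarks(video);
    if (landmarks) {
      // Regression maps the landmark vector straight to the four parameter
      // values, so outputs are not confined to the saved presets.
      onParams(regressor.predict(landmarks));
    }
    requestAnimationFrame(step); // re-run every display frame
  };
  requestAnimationFrame(step);
}
```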

WAITING_FOR_MODEL...

04_AUDIO_SYNTHESIS_CORE

STATUS: STANDBY

// PRESET_MATRIX (4-WAY)

[INFO] SAVE STATES TO ENABLE COLLECTION & INTERPOLATION
ALPHA: EMPTY
BETA: EMPTY
GAMMA: EMPTY
DELTA: EMPTY

Save 4 presets with the desired parameter values, then collect gestures for each preset. Both models learn from the same data.
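
In classification mode, the saved presets are blended, presumably with the class probabilities as weights. A sketch of that blend, assuming the probabilities sum to 1 (types and names are illustrative):

```ts
type Preset = { alpha: number; beta: number; gamma: number; delta: number };

function interpolate(presets: Preset[], probs: number[]): Preset {
  // Output is a convex combination of the saved presets, weighted by the
  // classifier's probability for each preset class.
  const out: Preset = { alpha: 0, beta: 0, gamma: 0, delta: 0 };
  probs.forEach((p, i) => {
    out.alpha += p * presets[i].alpha;
    out.beta += p * presets[i].beta;
    out.gamma += p * presets[i].gamma;
    out.delta += p * presets[i].delta;
  });
  return out;
}
```

Because the weights are non-negative and sum to 1, the result can never leave the region spanned by the saved presets, which is why the manual calls classification "bounded".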

// OPERATIONAL_MANUAL

  1. Enable camera access when prompted
  2. AUDIO_CORE: Initialize engine for sound synthesis
  3. DESIGN_SOUNDS: Adjust parameters and save 4 presets (Alpha, Beta, Gamma, Delta)
  4. ACQUIRE_DATA: For each preset, hold its button and make gestures to collect samples
  5. COLLECT_SAMPLES: Collect at least 5 samples per preset (20 total minimum)
  6. INITIATE_TRAINING: Train both models simultaneously on the same data
  7. SWITCH_MODE: Toggle between Classification (interpolate presets) and Regression (direct prediction)
  8. PERFORM: Move and rotate your hand; classification gives bounded interpolation between presets, regression allows extrapolation beyond them (see the sketch below)
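
To make step 8 concrete, here is the difference between the two modes with made-up numbers for a single parameter:

```ts
// One parameter (say, filter cutoff normalized to 0..1) saved in two presets.
const presetCutoffs = [0.2, 0.8];

// Classification mode: the output is a probability-weighted blend of the saved
// values, so it always lands inside [0.2, 0.8].
const probs = [0.25, 0.75];
const blended = probs[0] * presetCutoffs[0] + probs[1] * presetCutoffs[1];
console.log(blended); // 0.65, strictly between the presets

// Regression mode: the model emits the value directly, so an extreme gesture
// can land outside any saved preset (e.g. 0.95), which is what "extrapolation"
// buys you.
```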