
An on-device SDK that makes apps acoustically intelligent: real-time pitch detection, singing evaluation, vocal-quality and melody analysis, a multi-track audio engine, and low-latency offline operation.
Voice AI today understands what you say: the words, the language, the text. But it's deaf to how you actually sound — pitch, timbre, emotion, vocal quality, rhythm, melody. Everything that makes a voice a voice, not just a transcript.
VoxaTrace is an on-device SDK that makes any application acoustically intelligent.
Eight years of R&D. Five million users in production. All running natively on Android and iOS, without a single server call.
| Speech AI | VoxaTrace |
|---|---|
| "The user said 'hello'" | "The user sang A4 at 440 Hz with 92% confidence" |
| Words and language | Pitch, melody, rhythm, vocal quality |
| Transcription | Acoustic analysis |
| Cloud-dependent | On-device, real-time |

| Application | What VoxaTrace Enables |
|---|---|
| Singing apps | Pitch detection, real-time scoring, performance feedback |
| Vocal training | Intonation analysis, progress tracking, guided exercises |
| Music education | Ear training, sight-singing evaluation, pitch matching |
| Voice games | Pitch as input — sing to jump, hum to control |
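
The "pitch as input" idea from the table above can be sketched with plain equal-temperament math: map a detected frequency to a discrete action by its semitone distance from a reference note. This is an illustrative sketch — `GameAction`, `REFERENCE_HZ`, and the thresholds are assumptions, not part of the VoxaTrace API.

```kotlin
import kotlin.math.ln
import kotlin.math.roundToInt

// Hypothetical mapping from a detected pitch (Hz) plus a confidence value
// to a game action. Frames below the confidence floor are treated as silence.
enum class GameAction { IDLE, JUMP, DUCK }

const val REFERENCE_HZ = 220.0 // A3, an assumed "neutral" reference

// Signed semitone distance of pitchHz from referenceHz (12 semitones = 1 octave).
fun semitonesFrom(referenceHz: Double, pitchHz: Double): Int =
    (12.0 * ln(pitchHz / referenceHz) / ln(2.0)).roundToInt()

fun actionFor(pitchHz: Double, confidence: Float): GameAction {
    if (confidence < 0.6) return GameAction.IDLE // ignore noisy/unvoiced frames
    return when {
        semitonesFrom(REFERENCE_HZ, pitchHz) >= 7 -> GameAction.JUMP  // a fifth or more above A3
        semitonesFrom(REFERENCE_HZ, pitchHz) <= -5 -> GameAction.DUCK // well below A3
        else -> GameAction.IDLE
    }
}
```

A real integration would feed each detected pitch point from the detector into `actionFor` per audio frame.
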

```
┌──────────────────────────────┬──────────────────────────────────┐
│ Sonix                        │ Calibra                          │
│ (Audio Engine)               │ (Acoustic Analysis)              │
├──────────────────────────────┼──────────────────────────────────┤
│ • Multi-track playback (8)   │ • Pitch detection (YIN/SwiftF0)  │
│ • Recording (M4A/MP3)        │ • Voice activity detection       │
│ • Pitch shifting ±12 semi    │ • Singing evaluation & scoring   │
│ • Tempo control 0.5x–2x      │ • Vocal range detection          │
│ • MIDI synthesis (SoundFont) │ • Audio effects chain            │
└──────────────────────────────┴──────────────────────────────────┘
```
Android (Gradle):

```kotlin
dependencies {
    implementation("com.musicmuni:voxatrace:0.9.2")
}
```

iOS (Swift Package Manager):

```swift
dependencies: [
    .package(url: "https://github.com/musicmuni/voxatrace", from: "0.9.2")
]
```

iOS (CocoaPods):

```ruby
pod 'VoxaTrace', :podspec => 'https://raw.githubusercontent.com/musicmuni/voxatrace/main/VoxaTrace.podspec'
```

Quick start in Kotlin:

```kotlin
val detector = CalibraPitch.createDetector()
val point = detector.detect(audioSamples, sampleRate = 16000)
println("${point.pitch} Hz @ ${(point.confidence * 100).toInt()}% confidence")
detector.close()
```

And in Swift:

```swift
let detector = CalibraPitch.createDetector()
let point = detector.detect(samples: audioSamples, sampleRate: 16000)
print("\(point.pitch) Hz @ \(Int(point.confidence * 100))% confidence")
detector.close()
```

Output:

```
440.0 Hz @ 92% confidence
```
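
The "A4 at 440 Hz" labeling seen earlier is standard equal-temperament arithmetic on top of a raw frequency like the one printed above. A minimal sketch of that conversion — plain math, not a VoxaTrace API call:

```kotlin
import kotlin.math.ln
import kotlin.math.roundToInt

// Pitch-class names in MIDI order, starting from C.
val NOTE_NAMES = listOf("C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B")

// MIDI note number: A4 (440 Hz) is defined as note 69,
// and each doubling of frequency adds 12 semitones.
fun midiNote(freqHz: Double): Int =
    (69 + 12.0 * ln(freqHz / 440.0) / ln(2.0)).roundToInt()

// Scientific pitch name, e.g. 440.0 -> "A4", 261.63 -> "C4".
fun noteName(freqHz: Double): String {
    val midi = midiNote(freqHz)
    return "${NOTE_NAMES[midi % 12]}${midi / 12 - 1}"
}
```

This is how a detected `point.pitch` value can be rendered as a note name for display.
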
| Metric | Specification |
|---|---|
| Pitch detection latency | ~50 ms |
| Frequency range | 50 Hz – 2000 Hz |
| Simultaneous tracks | Up to 8 synchronized |
| Minimum Android | API 24 (Android 7.0) |
| Minimum iOS | iOS 14 |
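
For intuition on the latency figure: at a 16 kHz sample rate (as in the quick-start snippet), ~50 ms of audio is about 800 samples — a rough guide to per-window buffer sizing. The helper below is an illustrative assumption, not an SDK function:

```kotlin
// Number of audio samples covered by a given latency window.
fun samplesForLatency(sampleRateHz: Int, latencyMs: Int): Int =
    sampleRateHz * latencyMs / 1000
```
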
Commercial. See LICENSE.