CalibraLiveEval

Real-time singing evaluation session that scores a singer's performance by comparing their pitch to a reference melody. Supports segment-based progression with automatic advancement, retry logic, and both singalong and singafter practice modes.

Quick Start

Kotlin

// 1. Create detector and session
val detector = CalibraPitch.createDetector()
val session = CalibraLiveEval.create(lessonMaterial, detector = detector)

// 2. Prepare (loads reference, creates evaluator)
session.prepareSession()

// 3. Start segment and feed audio
session.startPracticingSegment(0)
recorder.audioBuffers.collect { buffer ->
    session.feedAudioSamples(buffer.toFloatArray(), sampleRate = 48000)
}

// 4. Get result
val result = session.finishPracticingSegment()
println("Score: ${result?.score}")

// 5. Cleanup
session.closeSession()

Swift

// 1. Create detector and session
let detector = CalibraPitch.createDetector()
let session = CalibraLiveEval.create(
    reference: lessonMaterial,
    detector: detector
)

// 2. Prepare (loads reference, creates evaluator)
try await session.prepareSession()

// 3. Start segment and feed audio
session.startPracticingSegment(index: 0)
for await buffer in recorder.audioBuffers {
    session.feedAudioSamples(buffer.toFloatArray(), sampleRate: 48000)
}

// 4. Get result
if let result = session.finishPracticingSegment() {
    print("Score: \(result.score)")
}

// 5. Cleanup
session.closeSession()

When to Use

| Scenario | Use This? | Why |
|---|---|---|
| Score singing against reference in real time | Yes | Core use case |
| Karaoke apps with segment scoring | Yes | Segment-based with auto-advance |
| Music education with retry logic | Yes | Practice mode with score thresholds |
| Just detect pitch (no scoring) | No | Use CalibraPitch.createDetector() |
| Analyze pre-recorded audio (not live) | No | Use CalibraMelodyEval |
| Voice activity detection only | No | Use CalibraVAD |

Configuration

Presets

| Preset | Kotlin | Swift | Description |
|---|---|---|---|
| Default | SessionConfig.DEFAULT | .default | Auto-advancing, no score threshold |
| Practice | SessionConfig.PRACTICE | .practice | Score threshold 70%, max 3 attempts, best-of aggregation |
| Karaoke | SessionConfig.KARAOKE | .karaoke | Always advances, one attempt per segment |
| Performance | SessionConfig.PERFORMANCE | .performance | One attempt, no repetition |

Builder

Kotlin

val config = SessionConfig.Builder()
    .preset(SessionConfig.PRACTICE)
    .scoreThreshold(0.6f)
    .maxAttempts(5)
    .resultAggregation(ResultAggregation.BEST)
    .build()

val session = CalibraLiveEval.create(
    reference = lessonMaterial,
    session = config,
    detector = CalibraPitch.createDetector()
)

Swift

let config = SessionConfig.Builder()
    .preset(.practice)
    .scoreThreshold(0.6)
    .maxAttempts(5)
    .resultAggregation(.best)
    .build()

let session = CalibraLiveEval.create(
    reference: lessonMaterial,
    session: config,
    detector: CalibraPitch.createDetector()
)

Config Properties

| Property | Type | Default | Description |
|---|---|---|---|
| autoAdvance | Boolean | true | Automatically advance to next segment when current ends |
| scoreThreshold | Float | 0 | Minimum score to auto-advance (0 = advances regardless of score) |
| maxAttempts | Int | 0 | Maximum attempts before forced advance (0 = unlimited) |
| resultAggregation | ResultAggregation | LATEST | How to aggregate multiple attempts (LATEST, BEST, AVERAGE) |
| hopSize | Int | 160 | Hop size between frames in samples (160 = 10ms at 16kHz) |
| autoPhaseTransition | Boolean | true | Automatically transition LISTENING to SINGING in singafter mode |
| autoSegmentDetection | Boolean | true | Automatically detect segment end from player time |
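The hopSize default corresponds to a 10 ms frame interval at the internal 16 kHz rate (160 = 16000 × 0.010). A quick sketch of that arithmetic (the helper name is ours, not part of the SDK):

```kotlin
// Convert a desired frame interval in milliseconds to a hop size in samples.
// The session analyzes audio at 16 kHz internally, so 10 ms -> 160 samples.
fun hopSizeForIntervalMs(intervalMs: Int, sampleRateHz: Int = 16_000): Int =
    intervalMs * sampleRateHz / 1000
```

For example, passing `hopSizeForIntervalMs(20)` to the builder's `hopSize(...)` would give roughly 20 ms between pitch frames.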

Builder Methods

| Method | Description |
|---|---|
| preset(config) | Start from a preset configuration |
| autoAdvance(enabled) | Enable or disable auto-advance to next segment |
| scoreThreshold(threshold) | Set minimum score to auto-advance (0 = disabled) |
| maxAttempts(max) | Set maximum attempts before forced advance (0 = unlimited) |
| resultAggregation(agg) | Set how to aggregate multiple attempts |
| hopSize(samples) | Set hop size between frames in samples |
| autoPhaseTransition(enabled) | Enable or disable automatic LISTENING to SINGING transition |
| autoSegmentDetection(enabled) | Enable or disable automatic segment end detection |

Creating a Session

Factory Method

Kotlin

val session = CalibraLiveEval.create(
    reference = lessonMaterial,               // LessonMaterial (required)
    session = SessionConfig.PRACTICE,         // SessionConfig (default: DEFAULT)
    detector = CalibraPitch.createDetector(), // Detector (required)
    player = player,                          // SonixPlayer? (optional)
    recorder = recorder                       // SonixRecorder? (optional)
)

Swift

let session = CalibraLiveEval.create(
    reference: lessonMaterial,
    session: .practice,
    detector: CalibraPitch.createDetector(),
    player: player,
    recorder: recorder
)

Factory Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| reference | LessonMaterial | Yes | Reference audio, segments, and key |
| session | SessionConfig | No | Session configuration (default: DEFAULT) |
| detector | CalibraPitch.Detector | Yes | Pitch detector. Session takes ownership and closes it. |
| player | SonixPlayer? | No | Audio player for convenience API. Caller manages lifecycle. |
| recorder | SonixRecorder? | No | Audio recorder for convenience API. Caller manages lifecycle. |

Usage Tiers

Tier 1: Convenience API

Pass player and recorder handles; the session coordinates seeking, playback, recording, and scoring automatically.

val session = CalibraLiveEval.create(
    reference = lessonMaterial,
    detector = CalibraPitch.createDetector(),
    player = player,
    recorder = recorder
)
session.prepareSession()
session.onSegmentComplete { result -> showScore(result) }
session.startPracticingSegment(0) // Seeks, plays, records, scores automatically

Tier 2: Low-Level API

Omit player and recorder; manually manage audio and feed samples directly.

val session = CalibraLiveEval.create(
    reference = lessonMaterial,
    detector = CalibraPitch.createDetector()
)
session.prepareSession()
session.startPracticingSegment(0)
recorder.audioBuffers.collect { buffer ->
    session.feedAudioSamples(buffer.toFloatArray(), sampleRate = 48000)
}
val result = session.finishPracticingSegment()

Ownership Model

| Dependency | Ownership | Rationale |
|---|---|---|
| detector | Owned -- session closes it | Created specifically for this session |
| player | Borrowed -- caller manages | Shared resource, UI may need direct access |
| recorder | Borrowed -- caller manages | Shared resource, may be reused |

Core Features

Session Lifecycle

Kotlin

// Prepare (suspend, runs on background dispatcher)
session.prepareSession()

// Finish and get aggregated results
val result: SingingResult = session.finishSession()

// Close and release all resources
session.closeSession()
// Alternatively, use close() (implements AutoCloseable)
session.close()

// Restart for "Practice Again"
session.restartSession(fromSegment = 0)

Swift

// Prepare
try await session.prepareSession()

// Finish and get aggregated results
let result = session.finishSession()

// Close and release all resources
session.closeSession()

// Restart for "Practice Again"
session.restartSession(fromSegment: 0)

Segment Control

Kotlin

// Start practicing a specific segment
session.startPracticingSegment(0)

// Finish current segment and get result
val result: SegmentResult? = session.finishPracticingSegment()

// Discard current segment without scoring
session.discardCurrentSegment()

// Retry the same segment (increments attempt number)
session.retryCurrentSegment()

// Jump to a specific segment (discards current attempt if practicing)
session.seekToSegment(3)

// Advance to next segment (returns false if at end)
val advanced: Boolean = session.advanceToNextSegment()

// Seek to a time position
session.seekToTime(seconds = 15.0f)

// Pause and resume playback
session.pausePlayback()
session.resumePlayback()

// Manual LISTENING → SINGING transition (when autoPhaseTransition = false)
session.beginSingingPhase()

Swift

session.startPracticingSegment(index: 0)
let result = session.finishPracticingSegment()
session.discardCurrentSegment()
session.retryCurrentSegment()
session.seekToSegment(index: 3)
let advanced = session.advanceToNextSegment()
session.seekToTime(seconds: 15.0)
session.pausePlayback()
session.resumePlayback()
session.beginSingingPhase()

Feeding Audio

The feedAudioSamples method accepts audio at any sample rate and resamples internally to 16kHz. This means you do not need to pre-process audio before passing it to the session.

Kotlin

// Feed from recorder (any sample rate)
recorder.audioBuffers.collect { buffer ->
    session.feedAudioSamples(buffer.toFloatArray(), sampleRate = 48000)
}

// Default sample rate is 16000 if omitted
session.feedAudioSamples(samples)

Swift

// Feed from recorder (any sample rate)
session.feedAudioSamples(buffer, sampleRate: 48000)

// Default sample rate is 16000 if omitted
session.feedAudioSamples(buffer)

| Parameter | Type | Default | Description |
|---|---|---|---|
| samples | FloatArray / [Float] | -- | Mono audio samples, normalized -1.0 to 1.0 |
| sampleRate | Int | 16000 | Sample rate of the input audio in Hz |

If the input sample rate differs from 16kHz, the session uses SonixResampler to convert the audio before processing. This is handled transparently on every call.
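feedAudioSamples expects floats normalized to -1.0..1.0, so if your capture pipeline produces 16-bit PCM you must normalize before feeding. A minimal sketch (the helper is ours, not part of the SDK):

```kotlin
// Normalize 16-bit PCM samples to floats in [-1.0, 1.0] before feeding them
// to feedAudioSamples. Dividing by 32768 maps Short.MIN_VALUE to exactly -1.0.
fun pcm16ToFloats(pcm: ShortArray): FloatArray =
    FloatArray(pcm.size) { i -> pcm[i] / 32768.0f }
```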

Runtime Configuration

Kotlin

// Set student key for transposition (0 = same as reference)
session.setStudentKeyHz(220.0f)

// Enable or disable pitch processing (smoothing + octave correction)
session.setPitchProcessingEnabled(true)
val isEnabled: Boolean = session.pitchProcessingEnabled

Swift

session.setStudentKeyHz(keyHz: 220.0)
session.setPitchProcessingEnabled(enabled: true)
let isEnabled = session.pitchProcessingEnabled
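setStudentKeyHz takes an absolute frequency. If your UI tracks transposition in semitones relative to the reference key, the standard equal-temperament conversion is keyHz × 2^(n/12); a sketch (the helper name is ours):

```kotlin
import kotlin.math.pow

// Transpose a reference key by a number of semitones (equal temperament).
// +12 semitones doubles the frequency; -12 halves it.
fun transposedKeyHz(referenceKeyHz: Float, semitones: Int): Float =
    referenceKeyHz * 2.0f.pow(semitones / 12.0f)
```

For example, `session.setStudentKeyHz(transposedKeyHz(session.referenceKeyHz, 3))` would shift the student key up three semitones.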

Query Methods

Kotlin

// Get all results for a specific segment
val attempts: List<SegmentResult>? = session.getResultsForSegment(0)

// Check if a segment has been completed at least once
val completed: Boolean = session.hasCompletedSegment(0)

Swift

let attempts = session.getResultsForSegment(index: 0)
let completed = session.hasCompletedSegment(index: 0)

State Machine

IDLE ──prepareSession()──► READY
READY ──startPracticingSegment()──► PRACTICING
PRACTICING ──finishPracticingSegment()──► BETWEEN_SEGMENTS (or COMPLETED if last)
PRACTICING ──discardCurrentSegment()──► BETWEEN_SEGMENTS
PRACTICING ──seekToSegment()──► PRACTICING (new segment)
BETWEEN_SEGMENTS ──startPracticingSegment()──► PRACTICING
BETWEEN_SEGMENTS ──advanceToNextSegment()──► PRACTICING (or COMPLETED if last)
BETWEEN_SEGMENTS ──finishSession()──► COMPLETED
* ──closeSession()──► (released)

SessionPhase

| Phase | Description |
|---|---|
| IDLE | Session created but not started |
| READY | Reference loaded, ready to begin practicing |
| PRACTICING | Actively capturing and evaluating audio for a segment |
| BETWEEN_SEGMENTS | Finished one segment, waiting before next |
| COMPLETED | All segments completed or session manually finished |
| CANCELLED | Session was cancelled via closeSession() |
| ERROR | An error occurred during preparation |
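These phases typically drive which UI controls are enabled. A self-contained sketch using a local enum that mirrors the cases above (the real SessionPhase lives in the SDK; the helper logic matches the documented canBeginSegment and isFinished flags on SessionState):

```kotlin
// Local mirror of the SDK's SessionPhase cases, for illustration only.
enum class Phase { IDLE, READY, PRACTICING, BETWEEN_SEGMENTS, COMPLETED, CANCELLED, ERROR }

// A new segment can be started when ready or between segments
// (this is what SessionState.canBeginSegment reports).
fun canStartSegment(phase: Phase): Boolean =
    phase == Phase.READY || phase == Phase.BETWEEN_SEGMENTS

// The session is finished in any terminal phase
// (this is what SessionState.isFinished reports).
fun isFinished(phase: Phase): Boolean =
    phase == Phase.COMPLETED || phase == Phase.CANCELLED || phase == Phase.ERROR
```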

PracticePhase

Tracks the student's activity within a single segment. The progression depends on mode:

  • Singalong: IDLE -> SINGING -> EVALUATED
  • Singafter: IDLE -> LISTENING -> SINGING -> EVALUATED

| Phase | Description |
|---|---|
| IDLE | Not practicing -- waiting to start |
| LISTENING | Reference playing, student not recording yet (singafter only) |
| SINGING | Student is being recorded and evaluated |
| EVALUATED | Segment complete, score available |

Observing State

Kotlin (StateFlow)

// Session state (phase, active segment, pitch, progress)
session.state.collect { state ->
updateUI(state.phase, state.currentPitch, state.segmentProgress)
}

// Active segment details (null when not practicing)
session.activeSegment.collect { active ->
active?.let { showProgress(it.elapsedSeconds, it.remainingSeconds) }
}

// Completed segments map: segment index → list of attempts
session.completedSegments.collect { results ->
updateScoreboard(results)
}

// Practice phase (IDLE, LISTENING, SINGING, EVALUATED)
session.phase.collect { phase ->
updatePhaseIndicator(phase)
}

// Live pitch contour for scrolling visualization
session.livePitchContour.collect { contour ->
drawPitchCanvas(contour)
}

// Live pitch point (real-time, includes time and confidence)
session.livePitch.collect { pitchPoint ->
updatePitchIndicator(pitchPoint)
}

// Playback time, playing/recording status
session.currentTime.collect { seconds -> updateSeekBar(seconds) }
session.isPlaying.collect { playing -> updatePlayButton(playing) }
session.isRecording.collect { recording -> updateRecordingIndicator(recording) }

Swift (Observers)

Each observer method returns a cancellable Task that dispatches updates on MainActor.

let stateTask = session.observeState { state in
    self.sessionPhase = state.phase
    self.currentPitch = state.currentPitch
    self.segmentProgress = state.segmentProgress
}

let activeTask = session.observeActiveSegment { active in
    self.activeSegment = active
}

let completedTask = session.observeCompletedSegments { results in
    self.completedResults = results // [Int: [SegmentResult]]
}

let phaseTask = session.observePhase { phase in
    self.practicePhase = phase
}

let contourTask = session.observeLivePitchContour { contour in
    self.pitchContour = contour
}

let pitchTask = session.observeLivePitch { pitchPoint in
    self.livePitch = pitchPoint
}

let timeTask = session.observeCurrentTime { seconds in
    self.currentTime = seconds
}

let playingTask = session.observeIsPlaying { isPlaying in
    self.isPlaying = isPlaying
}

let recordingTask = session.observeIsRecording { isRecording in
    self.isRecording = isRecording
}

// Cancel when done
stateTask.cancel()

// Cancel when done
stateTask.cancel()

StateFlows

| StateFlow | Type | Description |
|---|---|---|
| state | StateFlow&lt;SessionState&gt; | Session state (phase, pitch, amplitude, progress, completed segments) |
| activeSegment | StateFlow&lt;ActiveSegmentState?&gt; | Active segment details, or null if not practicing |
| completedSegments | StateFlow&lt;Map&lt;Int, List&lt;SegmentResult&gt;&gt;&gt; | Map of segment index to list of attempts |
| phase | StateFlow&lt;PracticePhase&gt; | Current practice phase (IDLE, LISTENING, SINGING, EVALUATED) |
| livePitchContour | StateFlow&lt;PitchContour&gt; | Accumulated pitch contour for scrolling visualization |
| livePitch | StateFlow&lt;PitchPoint&gt; | Real-time pitch point (includes time and confidence) |
| currentTime | StateFlow&lt;Float&gt; | Playback position in seconds |
| isPlaying | StateFlow&lt;Boolean&gt; | Whether player is currently playing |
| isRecording | StateFlow&lt;Boolean&gt; | Whether recording is active |

Swift Observer Methods

| Method | Callback Type | Description |
|---|---|---|
| observeState(_:) | (SessionState) -> Void | Session state changes |
| observeActiveSegment(_:) | (ActiveSegmentState?) -> Void | Active segment changes |
| observeCompletedSegments(_:) | ([Int: [SegmentResult]]) -> Void | Completed segments with native Swift Int keys |
| observePhase(_:) | (PracticePhase) -> Void | Practice phase changes |
| observeLivePitchContour(_:) | (PitchContour) -> Void | Live pitch contour updates |
| observeLivePitch(_:) | (PitchPoint) -> Void | Real-time pitch point updates |
| observeCurrentTime(_:) | (Float) -> Void | Playback time updates |
| observeIsPlaying(_:) | (Bool) -> Void | Playing state changes |
| observeIsRecording(_:) | (Bool) -> Void | Recording state changes |

Properties

| Property | Type | Description |
|---|---|---|
| segments | List&lt;Segment&gt; | All segments from the reference |
| referenceKeyHz | Float | Reference key in Hz from LessonMaterial |
| studentKeyHz | Float | Current student key in Hz (0 = same as reference) |
| pitchProcessingEnabled | Boolean | Whether pitch processing is currently enabled |

Callbacks

Alternative to StateFlow observation. Callbacks are dispatched on MainActor in Swift.

Kotlin

session.onPhaseChanged { phase ->
    println("Phase: $phase")
}

session.onReferenceEnd { segment ->
    println("Reference ended for: ${segment.lyrics}")
}

session.onSegmentComplete { result ->
    println("Score: ${result.score}")
}

session.onSessionComplete { result ->
    println("Overall: ${result.overallScore}")
}

Swift

session.onPhaseChanged { phase in
    print("Phase: \(phase)")
}

session.onReferenceEnd { segment in
    print("Reference ended for: \(segment.lyrics)")
}

session.onSegmentComplete { result in
    print("Score: \(result.score)")
}

session.onSessionComplete { result in
    print("Overall: \(result.overallScore)")
}

Callback Reference

| Method | Callback Signature | Description |
|---|---|---|
| onPhaseChanged | (PracticePhase) -> Unit | Practice phase transitions (e.g., LISTENING to SINGING) |
| onReferenceEnd | (Segment) -> Unit | Reference audio finished playing (singafter mode) |
| onSegmentComplete | (SegmentResult) -> Unit | Segment finished with its result |
| onSessionComplete | (SingingResult) -> Unit | All segments finished |

Model Types

SessionState

| Property | Type | Description |
|---|---|---|
| phase | SessionPhase | Current session phase |
| activeSegmentIndex | Int? | Index of segment being practiced, or null |
| activeSegment | Segment? | The segment being practiced, or null |
| currentPitch | Float | Detected pitch in Hz (-1 for unvoiced) |
| currentAmplitude | Float | Audio amplitude (0.0 - 1.0) |
| segmentProgress | Float | Progress through current segment (0.0 - 1.0) |
| completedSegments | Set&lt;Int&gt; | Indices of completed segments |
| error | String? | Error message if phase is ERROR |
| isPracticing | Boolean | True if session is actively practicing |
| canBeginSegment | Boolean | True if a new segment can be started |
| isFinished | Boolean | True if session is finished (COMPLETED, CANCELLED, or ERROR) |
| completedCount | Int | Number of completed segments |

ActiveSegmentState

| Property | Type | Description |
|---|---|---|
| segmentIndex | Int | Index of the segment |
| segment | Segment | The segment being practiced |
| currentPitch | Float | Detected pitch in Hz (-1 for unvoiced) |
| currentAmplitude | Float | Audio amplitude (0.0 - 1.0) |
| elapsedSeconds | Float | Time elapsed since segment started |
| isCapturing | Boolean | Whether audio is currently being captured |
| progress | Float | Progress through the segment (0.0 - 1.0) |
| remainingSeconds | Float | Time remaining in seconds |
| hasVoice | Boolean | True if detected pitch is valid |

SegmentResult

| Property | Type | Description |
|---|---|---|
| segment | Segment | The segment that was evaluated |
| score | Float | Overall score (0.0 - 1.0) |
| pitchAccuracy | Float | Pitch accuracy component (0.0 - 1.0) |
| level | PerformanceLevel | Performance level classification |
| attemptNumber | Int | Which attempt this is (1-based) |
| referencePitch | PitchContour | Reference pitch contour for visualization |
| studentPitch | PitchContour | Student pitch contour for visualization |
| isPassing | Boolean | True if score >= 0.5 |
| isGood | Boolean | True if score >= 0.7 |
| isExcellent | Boolean | True if score >= 0.9 |
| scorePercent | Int | Score as percentage (0-100) |
| feedbackMessage | String | Human-readable feedback based on performance level |
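The convenience flags are simple threshold checks on score. A sketch of the documented cutoffs (helper names are ours; the exact rounding used for scorePercent is an assumption):

```kotlin
import kotlin.math.roundToInt

// Threshold checks matching the documented SegmentResult flags.
fun isPassing(score: Float) = score >= 0.5f
fun isGood(score: Float) = score >= 0.7f
fun isExcellent(score: Float) = score >= 0.9f

// Score as a 0-100 percentage (rounding behavior is an assumption).
fun scorePercent(score: Float) = (score * 100).roundToInt()
```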

SingingResult

Returned by finishSession() with aggregated results across all segments.

| Property | Type | Description |
|---|---|---|
| overallScore | Float | Aggregate score across all segments (0.0 - 1.0) |
| segmentResults | Map&lt;Int, List&lt;SegmentResult&gt;&gt; | Map of segment index to list of attempts |
| aggregation | ResultAggregation | How the overall score was calculated |
| overallScorePercent | Int | Overall score as percentage (0-100) |
| segmentCount | Int | Number of segments evaluated |
| totalAttempts | Int | Total attempts across all segments |
| allPassing | Boolean | True if all segments have a passing score |

| Method | Description |
|---|---|
| latestScorePerSegment() | Latest score for each segment |
| bestScorePerSegment() | Best score for each segment |
| averageScorePerSegment() | Average score for each segment |
| latestResultPerSegment() | Latest SegmentResult for each segment |
| getAllFeedback() | Feedback messages for all segments |
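The per-segment helpers reduce the attempts map. Assuming attempts are stored in chronological order, latest/best/average scores can be computed like this (a sketch over plain score lists, not the SDK types):

```kotlin
// segmentResults maps segment index -> scores for each attempt, in order.
fun latestScorePerSegment(results: Map<Int, List<Float>>): Map<Int, Float> =
    results.mapValues { (_, attempts) -> attempts.last() }

fun bestScorePerSegment(results: Map<Int, List<Float>>): Map<Int, Float> =
    results.mapValues { (_, attempts) -> attempts.max() }

fun averageScorePerSegment(results: Map<Int, List<Float>>): Map<Int, Float> =
    results.mapValues { (_, attempts) -> attempts.average().toFloat() }
```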

PerformanceLevel

| Level | Score Range | Display Name |
|---|---|---|
| NEEDS_WORK | < 0.3 | "Needs Work" |
| FAIR | 0.3 - 0.6 | "Fair" |
| GOOD | 0.6 - 0.8 | "Good" |
| VERY_GOOD | 0.8 - 0.95 | "Very Good" |
| EXCELLENT | >= 0.95 | "Excellent" |
| NOT_DETECTED | negative | "No Voice" |
| NOT_EVALUATED | -- | "Not Evaluated" |
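The boundaries above reduce to a simple classifier. A sketch with a local enum mirroring the score-based cases (NOT_EVALUATED has no score range, so it is omitted; handling of scores falling exactly on 0.3/0.6/0.8/0.95 is an assumption from the table's ranges):

```kotlin
// Local mirror of the SDK's score-based PerformanceLevel cases, for illustration.
enum class Level { NEEDS_WORK, FAIR, GOOD, VERY_GOOD, EXCELLENT, NOT_DETECTED }

// Classify a score into a performance level per the documented ranges.
fun levelFor(score: Float): Level = when {
    score < 0f -> Level.NOT_DETECTED   // negative = no voice detected
    score < 0.3f -> Level.NEEDS_WORK
    score < 0.6f -> Level.FAIR
    score < 0.8f -> Level.GOOD
    score < 0.95f -> Level.VERY_GOOD
    else -> Level.EXCELLENT
}
```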

Segment

| Property | Type | Description |
|---|---|---|
| index | Int | Zero-based segment index |
| startSeconds | Float | Reference audio start time |
| endSeconds | Float | Reference audio end time |
| lyrics | String | Text/lyrics for this segment (optional) |
| studentStartSeconds | Float? | When student recording starts (null = same as startSeconds) |
| studentEndSeconds | Float? | When student recording ends (null = same as endSeconds) |
| duration | Float | Duration in seconds |
| isSingafter | Boolean | True if student starts after reference |
| effectiveStudentStart | Float | Student start time (falls back to startSeconds) |
| effectiveStudentEnd | Float | Student end time (falls back to endSeconds) |
| studentDuration | Float | Duration of the student recording portion |
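The effective student times are plain null-coalescing over the optional student bounds. A sketch with a minimal local stand-in for Segment's timing fields (the isSingafter condition shown here is our reading of "student starts after reference", not a confirmed implementation):

```kotlin
// Minimal local stand-in for Segment's timing fields, for illustration only.
data class SegmentTiming(
    val startSeconds: Float,
    val endSeconds: Float,
    val studentStartSeconds: Float? = null,
    val studentEndSeconds: Float? = null,
) {
    // Fall back to the reference bounds when student bounds are not set.
    val effectiveStudentStart: Float get() = studentStartSeconds ?: startSeconds
    val effectiveStudentEnd: Float get() = studentEndSeconds ?: endSeconds
    val studentDuration: Float get() = effectiveStudentEnd - effectiveStudentStart
    // Assumption: singafter means the student starts later than the reference.
    val isSingafter: Boolean get() = effectiveStudentStart > startSeconds
}
```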

LessonMaterial

| Property | Type | Description |
|---|---|---|
| audioSource | AudioSource | Source of the reference audio |
| segments | List&lt;Segment&gt; | Segment boundaries and lyrics |
| keyHz | Float | Musical key frequency in Hz (e.g., 261.63 for middle C) |
| pitchContour | PitchContour? | Pre-computed pitch (enables fast initialization) |
| hpcpFrames | List&lt;FloatArray&gt;? | Pre-computed HPCP frames for DTW alignment |
| duration | Float | Total duration based on the last segment's end time |
| segmentCount | Int | Number of segments |

Creating LessonMaterial

// From audio file
val material = LessonMaterial.fromFile(
audioPath = "/path/to/reference.m4a",
segments = segments,
keyHz = 196.0f
)

// From raw audio samples (with optional pre-computed pitch)
val material = LessonMaterial.fromAudio(
samples = audioData.samples,
sampleRate = audioData.sampleRate,
segments = segments,
keyHz = 196.0f,
pitchContour = preComputedContour // optional, speeds up prepareSession()
)

Creating Segments

// Individual segments
val segments = listOf(
Segment(index = 0, startSeconds = 0.0f, endSeconds = 4.5f, lyrics = "Sa Re Ga Ma"),
Segment(index = 1, startSeconds = 4.5f, endSeconds = 9.0f, lyrics = "Pa Da Ni Sa")
)

// From parallel arrays
val segments = Segment.fromArrays(
starts = floatArrayOf(0.0f, 4.5f, 9.0f),
ends = floatArrayOf(4.5f, 9.0f, 13.5f),
lyrics = listOf("Sa Re Ga Ma", "Pa Da Ni Sa", "Sa Ni Da Pa")
)

// Singafter segments (student sings after reference)
val segments = Segment.fromArrays(
starts = floatArrayOf(0.0f, 8.0f),
ends = floatArrayOf(8.0f, 16.0f),
studentStarts = floatArrayOf(4.0f, 12.0f), // Student starts halfway
studentEnds = floatArrayOf(8.0f, 16.0f)
)

Common Patterns

Singalong ViewModel (Kotlin)

class SingalongViewModel : ViewModel() {
    private var session: CalibraLiveEval? = null
    private var player: SonixPlayer? = null
    private var recorder: SonixRecorder? = null

    val practicePhase = MutableStateFlow(PracticePhase.IDLE)
    val currentPitch = MutableStateFlow(-1f)
    val lastResult = MutableStateFlow<SegmentResult?>(null)

    fun loadSession(reference: LessonMaterial, config: SessionConfig) {
        viewModelScope.launch {
            player = SonixPlayer.create(audioPath, SonixPlayerConfig.DEFAULT)
            recorder = SonixRecorder.create(tempPath, SonixRecorderConfig.VOICE)

            session = CalibraLiveEval.create(
                reference = reference,
                session = config,
                detector = CalibraPitch.createDetector(),
                player = player,
                recorder = recorder
            )

            session?.onPhaseChanged { phase -> practicePhase.value = phase }
            session?.onSegmentComplete { result -> lastResult.value = result }

            session?.prepareSession()

            // Observe session state
            session?.state?.collect { state ->
                currentPitch.value = state.currentPitch
            }
        }
    }

    fun play(segmentIndex: Int) {
        session?.startPracticingSegment(segmentIndex)
    }

    fun pause() {
        session?.pausePlayback()
    }

    fun seekTo(segmentIndex: Int) {
        session?.seekToSegment(segmentIndex)
    }

    fun retry() {
        session?.retryCurrentSegment()
    }

    fun finish(): SingingResult? {
        return session?.finishSession()
    }

    override fun onCleared() {
        session?.close()
        player?.release()
        recorder?.release()
    }
}

SwiftUI View Model (Swift)

@MainActor
class SingalongViewModel: ObservableObject {
    @Published var phase: PracticePhase = .idle
    @Published var lastResult: SegmentResult?
    @Published var currentPitch: Float = -1

    private var session: CalibraLiveEval?
    private var observerTasks: [Task<Void, Never>] = []

    func loadSession(reference: LessonMaterial) async {
        let detector = CalibraPitch.createDetector()
        let session = CalibraLiveEval.create(
            reference: reference,
            session: .practice,
            detector: detector,
            player: player,
            recorder: recorder
        )
        self.session = session

        session.onPhaseChanged { [weak self] phase in
            self?.phase = phase
        }
        session.onSegmentComplete { [weak self] result in
            self?.lastResult = result
        }

        try? await session.prepareSession()

        observerTasks.append(session.observeState { [weak self] state in
            self?.currentPitch = state.currentPitch
        })
    }

    func play(segmentIndex: Int) {
        session?.startPracticingSegment(index: segmentIndex)
    }

    func pause() {
        session?.pausePlayback()
    }

    func seekTo(segmentIndex: Int) {
        session?.seekToSegment(index: segmentIndex)
    }

    func cleanup() {
        observerTasks.forEach { $0.cancel() }
        session?.closeSession()
    }
}

Low-Level Manual Audio (Kotlin)

val session = CalibraLiveEval.create(
    reference = lessonMaterial,
    detector = CalibraPitch.createDetector()
)
session.prepareSession()

for (segmentIndex in session.segments.indices) {
    session.startPracticingSegment(segmentIndex)

    // Feed audio from your own source
    recorder.audioBuffers.collect { buffer ->
        session.feedAudioSamples(buffer.toFloatArray(), sampleRate = 48000)
    }

    val result = session.finishPracticingSegment()
    println("Segment $segmentIndex: ${result?.scorePercent}%")
}

val finalResult = session.finishSession()
println("Overall: ${finalResult.overallScorePercent}%")
session.closeSession()

Pitch Visualization (Swift)

let contourTask = session.observeLivePitchContour { contour in
    let anchorX: CGFloat = 200 // "Now" position on screen
    let currentTime = contour.samples.last?.timeSeconds ?? 0

    for sample in contour.samples {
        let x = anchorX - CGFloat(currentTime - sample.timeSeconds) * pixelsPerSecond
        let y = midiToScreenY(sample.midiNote)
        drawPoint(x, y)
    }
}
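The midiToScreenY call above assumes contour samples carry MIDI note numbers. If your contour data is in Hz, the standard equal-temperament conversion is 69 + 12·log2(f/440); a sketch of both helpers (names and the screen-mapping signature are ours, not part of the SDK):

```kotlin
import kotlin.math.ln

// Convert a frequency in Hz to a (fractional) MIDI note number.
// A4 = 440 Hz = MIDI 69; each semitone is a factor of 2^(1/12).
fun hzToMidi(hz: Float): Float =
    69f + 12f * (ln(hz / 440f) / ln(2f))

// Map a MIDI note into a vertical pixel position within a visible note range,
// with higher notes drawn nearer the top of the canvas.
fun midiToScreenY(midi: Float, minMidi: Float, maxMidi: Float, heightPx: Float): Float =
    heightPx * (1f - (midi - minMidi) / (maxMidi - minMidi))
```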

Next Steps