Android quickstart

Get face liveness detection running in your Android app in under five minutes.

What you'll build

A single button that launches the iLive liveness verification flow — camera preview, face detection, on-device analysis — and returns a verdict (PASS / FAIL / RETRY) with confidence scores.

Requirements

| Requirement | Minimum |
|---|---|
| Android Studio | Hedgehog 2023.1.1+ |
| JDK | 17 |
| Min SDK | 26 (Android 8.0) |
| Target / Compile SDK | 35 |
| Kotlin | 2.1.0 |

Step 1: Add dependencies

Add the iLive SDK modules to your project. For local development, include the modules directly:

settings.gradle.kts:

// Include iLive SDK modules
include(":ilive-core")
project(":ilive-core").projectDir = file("path/to/ilive-android-sdk/ilive-core")
 
include(":ilive-ui")
project(":ilive-ui").projectDir = file("path/to/ilive-android-sdk/ilive-ui")

app/build.gradle.kts:

dependencies {
    implementation(project(":ilive-core"))
    implementation(project(":ilive-ui"))
}

Also ensure you have the shared version catalog. Copy ilive-android-sdk/gradle/libs.versions.toml to your project's gradle/ directory, or add the iLive entries to your existing catalog.
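
If you fold the iLive entries into an existing catalog instead of copying the file, they follow Gradle's standard TOML layout. A hypothetical sketch only; the real entry names and versions come from ilive-android-sdk/gradle/libs.versions.toml:

```toml
# Hypothetical sketch -- copy the actual entries from the SDK's catalog.
[versions]
kotlin = "2.1.0"   # matches the quickstart requirements table

[libraries]
# iLive's shared dependencies are declared here, e.g.:
# some-dependency = { group = "...", name = "...", version.ref = "..." }
```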

Step 2: Add camera permission

AndroidManifest.xml:

<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" android:required="true" />

No runtime permission handling needed — the SDK handles it internally.

Step 3: Add the SDK model assets

The SDK ships with a set of model files it loads at runtime. The models are distributed separately from the SDK source to keep the git repository small: your SDK distribution bundle includes a models/ directory. Copy its contents into ilive-core/src/main/assets/models/ before your first build. The SDK's README lists the exact files and expected sizes.

Step 4: Launch the liveness flow

The SDK ships two entry points — a drop-in activity for the fastest integration, and a lower-level engine if you want to build your own UI.

Launch the pre-built LivenessActivity with LivenessContract — a type-safe ActivityResultContract that returns a sealed LivenessOutcome:

import com.ilive.sdk.ui.LivenessContract
import com.ilive.sdk.ui.LivenessOutcome
 
class MainActivity : ComponentActivity() {
 
    private val livenessLauncher = registerForActivityResult(
        LivenessContract()
    ) { outcome ->
        when (outcome) {
            is LivenessOutcome.Success -> {
                val livenessResult = outcome.result
                val verdict = livenessResult.verdict       // PASS, FAIL, or RETRY
                val confidence = livenessResult.confidence  // 0.0 - 1.0
                val scores = livenessResult.layerScores    // per-layer breakdown
                val photo = livenessResult.icaoPhoto       // JPEG bytes (on PASS)
 
                Log.d("iLive", "Verdict: $verdict, Confidence: ${(confidence * 100).toInt()}%")
                scores.forEach { layer ->
                    Log.d("iLive", "  ${layer.layer}: ${(layer.score * 100).toInt()}%")
                }
            }
            is LivenessOutcome.Error -> Log.w("iLive", "Error: ${outcome.message}")
            LivenessOutcome.Cancelled -> Log.d("iLive", "User cancelled")
        }
    }
 
    fun startLivenessCheck() {
        livenessLauncher.launch(
            LivenessContract.Input(
                // Optional: passiveMode = false for challenge-based verification
                passiveMode = true,
            )
        )
    }
}

The legacy launch path that reads LivenessActivity.pendingResult after StartActivityForResult is deprecated and will be removed in a future release. Prefer LivenessContract.

Step 5: Handle the result

when (livenessResult.verdict) {
    Verdict.PASS -> {
        // Verification successful
        // livenessResult.icaoPhoto contains the JPEG passport photo
        // livenessResult.attestation contains the signed payload
        // livenessResult.confidence is the overall score (0.0-1.0)
        proceedToNextStep(livenessResult)
    }
    Verdict.RETRY -> {
        // Marginal result — ask the user to try again
        // livenessResult.retryHint has a user-facing suggestion
        showRetryDialog(livenessResult.retryHint ?: "Please try again")
    }
    Verdict.FAIL -> {
        // Verification failed
        // livenessResult.failureReason has the internal reason
        showFailureMessage()
    }
}

Configuration

val config = ILiveConfig.Builder()
    // Challenge settings
    .challengeCount(4)                    // 3-6 challenges per session
    .challengeTimeoutSeconds(10)          // seconds per challenge
    .challengeTypes(setOf(                // which challenges to use
        ChallengeType.BLINK,
        ChallengeType.SMILE,
        ChallengeType.TURN_LEFT,
        ChallengeType.NOD
    ))
 
    // Verdict thresholds
    .passThreshold(0.75f)                 // minimum confidence for PASS
    .retryThreshold(0.45f)                // minimum confidence for RETRY (below = FAIL)
    .antispoofFloor(0.30f)                // anti-spoof veto threshold
    .deepfakeFloor(0.20f)                 // deepfake veto threshold
 
    // Voice prompts
    .voicePromptsEnabled(true)
    .voiceLanguage("en-US")
 
    // Photo extraction
    .photoExtractionEnabled(true)
    .photoWidth(480)
    .photoHeight(600)
    .photoJpegQuality(95)
 
    // Security
    .attestationKey(myHmacKey)            // HMAC-SHA256 key (32+ bytes)
    .frameBundleEncryptionKey(myAesKey)   // AES-256-GCM key (32 bytes)
 
    // Timeouts
    .totalSessionTimeoutSeconds(120)
    .build()
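
Putting the thresholds together: the weighted aggregate confidence selects PASS, RETRY, or FAIL, while the two veto floors force a FAIL whenever an individual layer scores too low. A self-contained sketch of that decision (the Verdict enum here is a stand-in for the SDK's, and the exact veto behavior inside the SDK may differ):

```kotlin
enum class Verdict { PASS, RETRY, FAIL }

// Illustrative decision logic for the documented thresholds. The veto floors
// force FAIL when an individual layer scores too low, even if the weighted
// aggregate would otherwise pass.
fun decideVerdict(
    confidence: Float,        // weighted aggregate, 0.0-1.0
    antispoofScore: Float,    // anti-spoof layer score
    deepfakeScore: Float,     // deepfake layer score
    passThreshold: Float = 0.75f,
    retryThreshold: Float = 0.45f,
    antispoofFloor: Float = 0.30f,
    deepfakeFloor: Float = 0.20f,
): Verdict = when {
    antispoofScore < antispoofFloor -> Verdict.FAIL   // anti-spoof veto
    deepfakeScore < deepfakeFloor -> Verdict.FAIL     // deepfake veto
    confidence >= passThreshold -> Verdict.PASS
    confidence >= retryThreshold -> Verdict.RETRY
    else -> Verdict.FAIL
}
```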

Passive mode vs challenge mode

| Mode | How it works | When to use |
|---|---|---|
| Passive (passiveMode = true) | Camera captures frames silently for about 3 seconds, then runs anti-spoof, deepfake, face-consistency, and motion analysis. No user interaction needed. | Low-friction onboarding, background verification |
| Challenge (passiveMode = false) | User completes 3–6 randomized challenges (blink, turn head, smile, etc.), then analysis runs on the captured frames. The challenge score is an additional verification layer. | High-security scenarios, regulatory compliance |

Both modes use the same underlying analysis pipeline. Passive mode redistributes the challenge weight across the other four layers.
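
That redistribution can be pictured as dropping the challenge layer and renormalizing the remaining weights so they still sum to 1. A sketch with hypothetical weights (the SDK's actual per-layer weights are not documented here):

```kotlin
// Hypothetical example: drop one layer's weight and renormalize the rest.
// The weight values below are illustrative, not the SDK's real ones.
fun redistributeWeights(
    weights: Map<String, Float>,
    droppedLayer: String,
): Map<String, Float> {
    val remaining = weights.filterKeys { it != droppedLayer }
    val total = remaining.values.sum()
    return remaining.mapValues { (_, w) -> w / total }
}
```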

LivenessResult fields

| Field | Type | Description |
|---|---|---|
| sessionId | String | Unique session identifier (UUID) |
| verdict | Verdict | PASS, FAIL, or RETRY |
| confidence | Float | Weighted aggregate score (0.0–1.0) |
| layerScores | List<LayerScore> | Per-layer breakdown (anti-spoof, deepfake, consistency, motion) |
| retryHint | String? | User-facing suggestion on RETRY |
| failureReason | String? | Internal reason on FAIL |
| icaoPhoto | ByteArray? | JPEG passport photo (480×600) on PASS |
| photoQualityScore | Float? | Photo quality (0.0–1.0) |
| attestation | Attestation? | Signed payload (if key configured) |
| encryptedFrameBundle | FrameBundle? | Encrypted frames (if key configured) |
| metadata | SessionMetadata | Duration, device, SDK version |
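
If you configure attestationKey, your backend can verify the signed payload independently. A minimal server-side sketch, assuming the attestation is an HMAC-SHA256 tag over the payload bytes (the SDK's exact payload layout is defined in its own docs; verifyAttestation is an illustrative helper, not an SDK API):

```kotlin
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Illustrative backend check: recompute the HMAC-SHA256 tag over the payload
// with the shared key and compare it to the received tag in constant time.
fun verifyAttestation(payload: ByteArray, tag: ByteArray, key: ByteArray): Boolean {
    val mac = Mac.getInstance("HmacSHA256")
    mac.init(SecretKeySpec(key, "HmacSHA256"))
    val expected = mac.doFinal(payload)
    if (expected.size != tag.size) return false
    // Constant-time comparison to avoid timing side channels
    var diff = 0
    for (i in expected.indices) diff = diff or (expected[i].toInt() xor tag[i].toInt())
    return diff == 0
}
```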

Face comparison

Compare the verified face against a reference photo (for example, an ID document):

// Compare the liveness photo against an ID document photo
val match = ILive.compareFaces(
    context = this,
    referencePhoto = idDocumentJpeg,
    probePhoto = livenessResult.icaoPhoto!!
)
 
if (match.isMatch) {
    println("Same person: ${(match.similarity * 100).toInt()}% match")
} else {
    println("Face mismatch")
}

For faster repeat comparisons, store the face embedding from the liveness result:

// Store the embedding after the first verification
val embedding = livenessResult.faceEmbedding  // 512-dimensional
 
// Later, compare against a new photo without reloading the model
val match = ILive.compareFaceWithEmbedding(
    context = this,
    referenceEmbedding = embedding!!,
    probePhoto = newPhotoJpeg
)

| Parameter | Default | Description |
|---|---|---|
| threshold | 0.45 | Minimum similarity for a match (0.0–1.0) |
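
If you persist embeddings yourself, similarity between two 512-dimensional vectors is commonly computed as cosine similarity. A self-contained sketch; this metric is an assumption for illustration, not necessarily what the SDK uses internally:

```kotlin
import kotlin.math.sqrt

// Cosine similarity between two embeddings of equal dimension.
// Returns a value in [-1, 1]; face embeddings typically land in [0, 1].
fun cosineSimilarity(a: FloatArray, b: FloatArray): Float {
    require(a.size == b.size) { "embedding dimensions must match" }
    var dot = 0f
    var normA = 0f
    var normB = 0f
    for (i in a.indices) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (sqrt(normA) * sqrt(normB))
}
```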

Troubleshooting

App crashes on launch: Ensure the SDK model files are in assets/models/ — the SDK README lists the full file inventory.

Face not detected: Camera frames are rotated to portrait orientation automatically. Ensure the phone is held upright.

All scores show around 50%: The analyzers may be failing silently. Check logcat for SDK errors — usually a missing model file or an asset path mismatch.

"Move closer" even when close: The face must occupy at least 2% of the frame area. At arm's length this is typically met.
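
That check is a simple area ratio; an illustrative version (not the SDK's actual code):

```kotlin
// Illustrative check: the detected face bounding box must cover at least 2%
// of the frame area before analysis proceeds.
fun faceLargeEnough(
    faceWidth: Int,
    faceHeight: Int,
    frameWidth: Int,
    frameHeight: Int,
    minFraction: Float = 0.02f,
): Boolean {
    val faceArea = faceWidth.toLong() * faceHeight
    val frameArea = frameWidth.toLong() * frameHeight
    return faceArea.toFloat() / frameArea.toFloat() >= minFraction
}
```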

Build error "Duplicate class org.tensorflow.lite": Add the following to your app's build.gradle.kts:

configurations.all {
    exclude(group = "org.tensorflow", module = "tensorflow-lite-api")
}
