Flutter SDK Reference
Complete public API surface for the iLive Flutter plugin.
This page documents the public Dart API of the ilive_flutter plugin: entry points, configuration, result types, and face comparison helpers. For a step-by-step integration walkthrough, see the Flutter quickstart.
The plugin wraps the Android and iOS SDKs through a platform channel and exposes the same surface to Dart.
Installation
The native SDKs must already be linked into your Android (ilive-core / ilive-ui) and iOS (ILiveCore / ILiveUI) host projects. The quickstart walks through platform setup.
Entry points
Two ways to run a liveness session:
- Drop-in UI — ILive.start(...). Launches the native drop-in activity/view controller and returns a LivenessResult. Fastest path.
- Custom UI — LivenessEngine.create(...). Exposes platform-channel streams for face tracking and challenge events so you can build your own Flutter UI on top.
Top-level helpers live on the ILive class:
| Method | Signature | Purpose |
|---|---|---|
| start | static Future&lt;LivenessResult&gt; start({ILiveConfig? config}) | Launch the native drop-in liveness flow. |
| isDeviceSupported | static Future&lt;DeviceSupportResult&gt; isDeviceSupported() | Query device support. |
| version | static Future&lt;String&gt; get version | Native SDK version string. |
| compareFaces | static Future&lt;FaceMatchResult?&gt; compareFaces({required Uint8List referencePhoto, required Uint8List probePhoto, double threshold = 0.45}) | 1:1 comparison of two JPEGs. |
| compareFaceWithEmbedding | static Future&lt;FaceMatchResult?&gt; compareFaceWithEmbedding({required Float64List referenceEmbedding, required Uint8List probePhoto, double threshold = 0.45}) | Compare a stored embedding against a new JPEG. |
Drop-in UI: ILive.start()
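A minimal invocation might look like the sketch below. The import path and the use of challengeCount are assumptions based on the tables on this page; adapt to what the package actually exports.

```dart
import 'package:ilive_flutter/ilive_flutter.dart'; // assumed import path

Future<void> runDropInLiveness() async {
  // Launches the native drop-in activity / view controller and awaits the verdict.
  final LivenessResult result = await ILive.start(
    config: ILiveConfig(challengeCount: 4),
  );
  switch (result.verdict) {
    case Verdict.pass:
      print('pass, confidence ${result.confidence}');
    case Verdict.retry:
      print('retry: ${result.retryHint}');
    case Verdict.fail:
      print('fail: ${result.failureReason}');
  }
}
```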
Custom UI: LivenessEngine
| Method | Signature | Notes |
|---|---|---|
| Create | static Future<LivenessEngine> create({ILiveConfig? config, bool autoInitialize = true}) | Async factory. Cheap — does not load models unless autoInitialize is true (the default, which transparently calls initialize for you). |
| Initialize | Future<void> initialize() | Load detection / analysis models on the native engine. Takes roughly 100–500 ms on first call. Must be invoked once, after create, and before evaluateVerdict. Called automatically when autoInitialize is true. |
| Face tracking | Stream<FaceTrackingUpdate> get faceTrackingUpdates | Broadcast stream of tracking state (face presence, bounds, landmarks). |
| Challenges | Stream<ChallengeEvent> startChallenges() | Broadcast stream of challenge prompts, progress, completion. |
| Evaluate | Future<LivenessResult> evaluateVerdict() | Compute the final verdict. |
| Dispose | Future<void> dispose() | Release native resources. |
Lifecycle: create + initialize
The engine has a two-step lifecycle that matches the native Android / iOS
SDKs: create allocates the engine (cheap) and initialize loads the
on-device models (heavier). By default the two steps are fused for you.
Shorthand (default — autoInitialize: true):
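A sketch of the fused path, using only identifiers from the table above:

```dart
// create() with the default autoInitialize: true also runs initialize() for you.
final engine = await LivenessEngine.create(config: ILiveConfig());
// Models are already loaded here; the engine is ready for startChallenges().
```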
Explicit (autoInitialize: false) — useful if you want to show a
"loading models" indicator while models load:
End-to-end custom UI flow
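One possible shape for the full flow, composed from the methods in the table above (how you render tracking updates and challenge events is up to your UI):

```dart
Future<LivenessResult> runCustomFlow() async {
  final engine = await LivenessEngine.create(); // auto-initializes by default
  final trackingSub = engine.faceTrackingUpdates.listen((update) {
    // Drive your own camera overlay from face presence / bounds / landmarks.
  });
  try {
    await for (final event in engine.startChallenges()) {
      // Render each ChallengeEvent: prompt, progress, completion.
    }
    // Challenge stream is done; compute the final verdict.
    return await engine.evaluateVerdict();
  } finally {
    await trackingSub.cancel();
    await engine.dispose(); // always release native resources
  }
}
```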
Configuration: ILiveConfig
ILiveConfig is an immutable Dart value class. All fields are optional named parameters.
Challenges and verdict thresholds
| Field | Type | Default | Description |
|---|---|---|---|
| challengeCount | int | 3 | Number of challenges in active mode (3–6). |
| challengeTypes | Set&lt;ChallengeType&gt; | all 8 | Pool of challenges. |
| challengeTimeoutSeconds | int | 8 | Per-challenge timeout. |
| transitionDelayMs | int | 500 | Pause between challenges. |
| maxRetriesPerChallenge | int | 1 | In-session retries per challenge. |
| passThreshold | double | 0.70 | Minimum confidence for Verdict.pass. |
| retryThreshold | double | 0.45 | Minimum confidence for Verdict.retry. |
| antispoofFloor | double | 0.30 | Anti-spoof veto floor. |
| deepfakeFloor | double | 0.20 | Deepfake veto floor. |
| layerWeights | LayerWeights? | balanced | Per-layer weight vector. |
ChallengeType values: blink, turnLeft, turnRight, nod, smile, mouthOpen, eyebrowRaise, eyeFollow.
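For example, a stricter-than-default configuration could be built like this (the specific values are illustrative, not recommendations):

```dart
final config = ILiveConfig(
  challengeCount: 4,
  challengeTypes: {
    ChallengeType.blink,
    ChallengeType.turnLeft,
    ChallengeType.turnRight,
    ChallengeType.smile,
  },
  challengeTimeoutSeconds: 10,
  passThreshold: 0.80,  // stricter than the 0.70 default
  retryThreshold: 0.50,
);
```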
Voice prompts
| Field | Type | Default | Description |
|---|---|---|---|
| voicePromptsEnabled | bool | false | Speak challenge instructions. |
| voiceRate | double | 1.0 | Speech rate multiplier. |
| voiceLanguage | String | 'en-US' | BCP-47 locale tag. |
Photo extraction
| Field | Type | Default | Description |
|---|---|---|---|
| photoExtractionEnabled | bool | true | Produce an ICAO-style still on pass. |
| photoWidth | int | 480 | Output width. |
| photoHeight | int | 600 | Output height. |
| photoJpegQuality | int | 95 | JPEG quality (0–100). |
Security
| Field | Type | Default | Description |
|---|---|---|---|
| attestationKeyBase64 | String? | null | Base64-encoded HMAC key used to sign the result payload. Canonical form across all SDKs. |
| attestationKey | String? (deprecated) | null | Deprecated alias for attestationKeyBase64, retained for one release. Prefer the new name. |
| frameBundleEncryptionKey | String? | null | Base64-encoded AES key (32 bytes) used to encrypt captured frames. |
| frameBundleFrameCount | int | 8 | Number of frames in the encrypted bundle. |
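Both keys are passed base64-encoded. A sketch, assuming you already hold the raw key bytes (hmacKeyBytes and aesKeyBytes are placeholders for your own key material):

```dart
import 'dart:convert';

final config = ILiveConfig(
  attestationKeyBase64: base64Encode(hmacKeyBytes),     // HMAC signing key
  frameBundleEncryptionKey: base64Encode(aesKeyBytes),  // 32-byte AES key
  frameBundleFrameCount: 8,
);
```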
Theming and timeouts
| Field | Type | Default | Description |
|---|---|---|---|
| theme | ILiveTheme? | null | Native UI color / typography overrides. |
| modelLoadTimeoutSeconds | int | 10 | Model-load stage timeout. |
| cameraInitTimeoutSeconds | int | 5 | Camera-init stage timeout. |
| totalSessionTimeoutSeconds | int | 120 | Whole-session timeout. |
Results: LivenessResult
| Field | Type | Description |
|---|---|---|
| sessionId | String | UUID for the session. |
| verdict | Verdict | pass, fail, or retry. |
| confidence | double | Weighted aggregate confidence (0.0–1.0). |
| layerScores | List&lt;LayerScore&gt; | Per-layer breakdown. |
| retryHint | String? | User-facing guidance when verdict == retry. |
| failureReason | String? | Diagnostic reason when verdict == fail. |
| icaoPhoto | Uint8List? | JPEG still of the subject on pass. |
| photoQualityScore | double? | Quality rating for icaoPhoto (0.0–1.0). |
| faceEmbedding | Float64List? | 512-dimensional face embedding from the best frame. |
| attestation | Attestation? | Signed payload (payload, signature, algorithm). |
| encryptedFrameBundle | FrameBundle? | Encrypted frames (ciphertext, iv, frameCount, keyId). |
| metadata | SessionMetadata | Duration, per-challenge timings, device model, OS version, SDK version. |
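On a pass, the optional artifacts can be consumed along these lines (sendToServer is a placeholder for your own upload logic):

```dart
void handleResult(LivenessResult result) {
  if (result.verdict == Verdict.pass) {
    final photo = result.icaoPhoto;         // null if photoExtractionEnabled: false
    final attestation = result.attestation; // null if no attestation key was configured
    if (photo != null && attestation != null) {
      sendToServer(photo, attestation.payload, attestation.signature);
    }
  }
}
```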
SessionMetadata
| Field | Type | Description |
|---|---|---|
| durationMs | int | Total session duration in milliseconds. |
| challengeTimings | List&lt;ChallengeTiming&gt; | Per-challenge timing breakdown. One entry per challenge the user attempted, in the order presented. |
| deviceModel | String | Device model string. |
| osVersion | String | OS version. |
| sdkVersion | String | Native SDK version. |
| delegateUsed | String | Inference delegate selected on this device. |
ChallengeTiming
| Field | Type | Description |
|---|---|---|
| type | String | Challenge identifier, matching the native SDK's lowercase name — e.g. "blink", "turn_left", "turn_right", "nod", "smile", "mouth_open", "eyebrow_raise", "eye_follow". |
| durationMs | int | Wall-clock time spent on the challenge, in milliseconds. |
| passed | bool | Whether the challenge was completed successfully. |
Face recognition
FaceMatchResult fields: similarity (double), isMatch (bool), threshold (double).
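For example, matching the session still against a reference document photo (referenceJpeg is a placeholder Uint8List holding JPEG bytes):

```dart
final match = await ILive.compareFaces(
  referencePhoto: referenceJpeg,
  probePhoto: result.icaoPhoto!,
  threshold: 0.5, // overrides the 0.45 default
);
if (match != null) {
  print('similarity ${match.similarity}, isMatch ${match.isMatch}');
}
```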
Error handling
Platform-channel failures surface as PlatformException with the codes defined in ilive_error.dart. The more common path is a successful call with verdict == Verdict.fail and a populated failureReason — for example "No face detected", "Session timeout", or a hard-floor veto from the anti-spoof or deepfake layer.
The native drop-in UI handles the camera permission prompt itself; with a LivenessEngine custom UI, request camera permission yourself (for example via permission_handler) before starting.
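A defensive call site separates hard platform-channel failures from soft fails (error codes per ilive_error.dart):

```dart
import 'package:flutter/services.dart' show PlatformException;

Future<void> startSafely() async {
  try {
    final result = await ILive.start();
    if (result.verdict == Verdict.fail) {
      // Soft failure: surface failureReason to the user.
      print('Liveness failed: ${result.failureReason}');
    }
  } on PlatformException catch (e) {
    // Hard failure in the platform channel or native SDK.
    print('Platform error ${e.code}: ${e.message}');
  }
}
```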