
Flutter SDK Reference

Complete public API surface for the iLive Flutter plugin.


This page documents the public Dart API of the ilive_flutter plugin: entry points, configuration, result types, and face comparison helpers. For a step-by-step integration walkthrough, see the Flutter quickstart.

The plugin wraps the Android and iOS SDKs through a platform channel and exposes the same surface to Dart.

Installation

```yaml
# pubspec.yaml
dependencies:
  ilive_flutter:
    path: path/to/ilive_flutter
```

```dart
import 'package:ilive_flutter/ilive_flutter.dart';
```

The native SDKs must already be linked into your Android (ilive-core / ilive-ui) and iOS (ILiveCore / ILiveUI) host projects. The quickstart walks through platform setup.

Entry points

Two ways to run a liveness session:

  1. Drop-in UI: ILive.start(...). Launches the native drop-in activity/view controller and returns a LivenessResult. Fastest path.
  2. Custom UI: LivenessEngine.create(...). Exposes platform-channel streams for face tracking and challenge events so you can build your own Flutter UI on top.

Top-level helpers live on the ILive class:

| Method | Signature | Purpose |
| --- | --- | --- |
| `start` | `static Future<LivenessResult> start({ILiveConfig? config})` | Launch the native drop-in liveness flow. |
| `isDeviceSupported` | `static Future<DeviceSupportResult> isDeviceSupported()` | Query device support. |
| `version` | `static Future<String> get version` | Native SDK version string. |
| `compareFaces` | `static Future<FaceMatchResult?> compareFaces({required Uint8List referencePhoto, required Uint8List probePhoto, double threshold = 0.45})` | 1:1 comparison of two JPEGs. |
| `compareFaceWithEmbedding` | `static Future<FaceMatchResult?> compareFaceWithEmbedding({required Float64List referenceEmbedding, required Uint8List probePhoto, double threshold = 0.45})` | Compare a stored embedding against a new JPEG. |
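Before launching a session, it is worth gating on device support and logging the native SDK version. A minimal sketch; note that the field read from `DeviceSupportResult` is an assumption, since this page does not enumerate its members:

```dart
import 'package:ilive_flutter/ilive_flutter.dart';

Future<bool> canRunLiveness() async {
  final version = await ILive.version;
  print('iLive native SDK $version');

  // `supported` is an assumed field name on DeviceSupportResult;
  // check the plugin's result types for the actual member.
  final support = await ILive.isDeviceSupported();
  return support.supported;
}
```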

Drop-in UI: ILive.start()

```dart
final result = await ILive.start(
  config: const ILiveConfig(challengeCount: 3, passThreshold: 0.75),
);

if (result.verdict == Verdict.pass) {
  // result.icaoPhoto, result.faceEmbedding, result.attestation
}
```

Custom UI: LivenessEngine

| Method | Signature | Notes |
| --- | --- | --- |
| Create | `static Future<LivenessEngine> create({ILiveConfig? config, bool autoInitialize = true})` | Async factory. Cheap; does not load models unless `autoInitialize` is true (the default, which transparently calls `initialize` for you). |
| Initialize | `Future<void> initialize()` | Load detection / analysis models on the native engine. Takes roughly 100–500 ms on first call. Must be invoked once, after `create` and before `evaluateVerdict`. Called automatically when `autoInitialize` is true. |
| Face tracking | `Stream<FaceTrackingUpdate> get faceTrackingUpdates` | Broadcast stream of tracking state (face presence, bounds, landmarks). |
| Challenges | `Stream<ChallengeEvent> startChallenges()` | Broadcast stream of challenge prompts, progress, completion. |
| Evaluate | `Future<LivenessResult> evaluateVerdict()` | Compute the final verdict. |
| Dispose | `Future<void> dispose()` | Release native resources. |

Lifecycle: create + initialize

The engine has a two-step lifecycle that matches the native Android / iOS SDKs: create allocates the engine (cheap) and initialize loads the on-device models (heavier). By default the two steps are fused for you.

Shorthand (default — autoInitialize: true):

```dart
final engine = await LivenessEngine.create(config: const ILiveConfig());
// Engine is ready to use — models have been loaded.
```

Explicit (autoInitialize: false) — useful if you want to show a "loading models" indicator while models load:

```dart
final engine = await LivenessEngine.create(
  config: const ILiveConfig(),
  autoInitialize: false,
);

// Show your own loading UI here…
await engine.initialize();
// …hide it. Engine is now ready to use.
```

End-to-end custom UI flow

```dart
final engine = await LivenessEngine.create(config: const ILiveConfig());

engine.faceTrackingUpdates.listen((update) {
  // update.faceDetected, update.confidence, update.faceBounds
});

engine.startChallenges().listen((event) {
  // event instances: prompt, progress, pass, fail, allComplete
});

final result = await engine.evaluateVerdict();
await engine.dispose();
```

Configuration: ILiveConfig

ILiveConfig is an immutable Dart value class. All fields are optional named parameters.

Challenges and verdict thresholds

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `challengeCount` | `int` | `3` | Number of challenges in active mode (3–6). |
| `challengeTypes` | `Set<ChallengeType>` | all 8 | Pool of challenges. |
| `challengeTimeoutSeconds` | `int` | `8` | Per-challenge timeout. |
| `transitionDelayMs` | `int` | `500` | Pause between challenges. |
| `maxRetriesPerChallenge` | `int` | `1` | In-session retries per challenge. |
| `passThreshold` | `double` | `0.70` | Minimum confidence for `Verdict.pass`. |
| `retryThreshold` | `double` | `0.45` | Minimum confidence for `Verdict.retry`. |
| `antispoofFloor` | `double` | `0.30` | Anti-spoof veto floor. |
| `deepfakeFloor` | `double` | `0.20` | Deepfake veto floor. |
| `layerWeights` | `LayerWeights?` | balanced | Per-layer weight vector. |

ChallengeType values: blink, turnLeft, turnRight, nod, smile, mouthOpen, eyebrowRaise, eyeFollow.
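For example, a sketch of a head-movement-only challenge pool with a longer per-challenge timeout, using only the fields documented above:

```dart
const config = ILiveConfig(
  challengeCount: 3,
  // Restrict the pool to head-movement challenges.
  challengeTypes: {
    ChallengeType.turnLeft,
    ChallengeType.turnRight,
    ChallengeType.nod,
  },
  // Give users a little more time than the 8 s default.
  challengeTimeoutSeconds: 10,
);
```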

Voice prompts

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `voicePromptsEnabled` | `bool` | `false` | Speak challenge instructions. |
| `voiceRate` | `double` | `1.0` | Speech rate multiplier. |
| `voiceLanguage` | `String` | `'en-US'` | BCP-47 locale tag. |

Photo extraction

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `photoExtractionEnabled` | `bool` | `true` | Produce an ICAO-style still on pass. |
| `photoWidth` | `int` | `480` | Output width. |
| `photoHeight` | `int` | `600` | Output height. |
| `photoJpegQuality` | `int` | `95` | JPEG quality (0–100). |

Security

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `attestationKeyBase64` | `String?` | `null` | Base64-encoded HMAC key used to sign the result payload. Canonical form across all SDKs. |
| `attestationKey` | `String?` (deprecated) | `null` | Deprecated alias for `attestationKeyBase64`, retained for one release. Prefer the new name. |
| `frameBundleEncryptionKey` | `String?` | `null` | Base64-encoded AES key (32 bytes) used to encrypt captured frames. |
| `frameBundleFrameCount` | `int` | `8` | Number of frames in the encrypted bundle. |
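For instance, enabling both result attestation and frame-bundle encryption. The key strings below are illustrative placeholders, not real key material:

```dart
const config = ILiveConfig(
  // Placeholder values; supply your own base64-encoded keys in production.
  attestationKeyBase64: 'bXktaG1hYy1rZXk=',
  frameBundleEncryptionKey: '<base64-encoded 32-byte AES key>',
  frameBundleFrameCount: 8,
);
```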

Theming and timeouts

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `theme` | `ILiveTheme?` | `null` | Native UI color / typography overrides. |
| `modelLoadTimeoutSeconds` | `int` | `10` | Model-load stage timeout. |
| `cameraInitTimeoutSeconds` | `int` | `5` | Camera-init stage timeout. |
| `totalSessionTimeoutSeconds` | `int` | `120` | Whole-session timeout. |

```dart
const config = ILiveConfig(
  challengeCount: 4,
  passThreshold: 0.75,
  voicePromptsEnabled: true,
  attestationKeyBase64: 'MTIzNDU2Nzg5MDEyMzQ1Ng==',
);
```

Results: LivenessResult

| Field | Type | Description |
| --- | --- | --- |
| `sessionId` | `String` | UUID for the session. |
| `verdict` | `Verdict` | `pass`, `fail`, or `retry`. |
| `confidence` | `double` | Weighted aggregate confidence (0.0–1.0). |
| `layerScores` | `List<LayerScore>` | Per-layer breakdown. |
| `retryHint` | `String?` | User-facing guidance when `verdict == retry`. |
| `failureReason` | `String?` | Diagnostic reason when `verdict == fail`. |
| `icaoPhoto` | `Uint8List?` | JPEG still of the subject on pass. |
| `photoQualityScore` | `double?` | Quality rating for `icaoPhoto` (0.0–1.0). |
| `faceEmbedding` | `Float64List?` | 512-dimensional face embedding from the best frame. |
| `attestation` | `Attestation?` | Signed payload (payload, signature, algorithm). |
| `encryptedFrameBundle` | `FrameBundle?` | Encrypted frames (ciphertext, iv, frameCount, keyId). |
| `metadata` | `SessionMetadata` | Duration, per-challenge timings, device model, OS version, SDK version. |
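A typical consumer branches on the verdict. In this sketch, `uploadForVerification` and `showRetryHint` are hypothetical app-side functions, not part of the plugin:

```dart
Future<void> handleResult(LivenessResult result) async {
  switch (result.verdict) {
    case Verdict.pass:
      // icaoPhoto is populated on pass when photoExtractionEnabled is true.
      await uploadForVerification(result.icaoPhoto!, result.attestation);
      break;
    case Verdict.retry:
      showRetryHint(result.retryHint ?? 'Please try again.');
      break;
    case Verdict.fail:
      print('Liveness failed: ${result.failureReason}');
      break;
  }
}
```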

SessionMetadata

| Field | Type | Description |
| --- | --- | --- |
| `durationMs` | `int` | Total session duration in milliseconds. |
| `challengeTimings` | `List<ChallengeTiming>` | Per-challenge timing breakdown. One entry per challenge the user attempted, in the order presented. |
| `deviceModel` | `String` | Device model string. |
| `osVersion` | `String` | OS version. |
| `sdkVersion` | `String` | Native SDK version. |
| `delegateUsed` | `String` | Inference delegate selected on this device. |

ChallengeTiming

| Field | Type | Description |
| --- | --- | --- |
| `type` | `String` | Challenge identifier, matching the native SDK's lowercase name, e.g. `"blink"`, `"turn_left"`, `"turn_right"`, `"nod"`, `"smile"`, `"mouth_open"`, `"eyebrow_raise"`, `"eye_follow"`. |
| `durationMs` | `int` | Wall-clock time spent on the challenge, in milliseconds. |
| `passed` | `bool` | Whether the challenge was completed successfully. |

```dart
for (final timing in result.metadata.challengeTimings) {
  print('${timing.type}: ${timing.durationMs} ms (passed=${timing.passed})');
}
```

Face recognition

```dart
// Compare two JPEGs directly.
final match = await ILive.compareFaces(
  referencePhoto: idDocumentBytes,
  probePhoto: result.icaoPhoto!,
);
if (match != null && match.isMatch) {
  // same person; match.similarity ∈ [0, 1]
}

// Compare a stored embedding against a new JPEG — faster for repeated checks.
final match2 = await ILive.compareFaceWithEmbedding(
  referenceEmbedding: result.faceEmbedding!,
  probePhoto: newSelfieBytes,
);
```

FaceMatchResult fields: similarity (double), isMatch (bool), threshold (double).
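Both comparison helpers accept an optional threshold (default 0.45); raising it trades false accepts for false rejects. A sketch, reusing the placeholder byte buffers from the example above:

```dart
final strict = await ILive.compareFaces(
  referencePhoto: idDocumentBytes,
  probePhoto: newSelfieBytes,
  threshold: 0.60, // stricter than the 0.45 default
);
```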

Error handling

Platform-channel failures surface as PlatformException with the codes defined in ilive_error.dart. The more common path is a successful call with verdict == Verdict.fail and a populated failureReason — for example "No face detected", "Session timeout", or a hard-floor veto from the anti-spoof or deepfake layer.

The native drop-in UI handles camera permission itself; for a custom LivenessEngine UI, ensure camera permission is granted (for example via permission_handler) before starting.
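Wrapping the call in a try/catch covers the channel-level failures. A sketch; the specific error codes live in ilive_error.dart and are not enumerated on this page:

```dart
import 'package:flutter/services.dart';
import 'package:ilive_flutter/ilive_flutter.dart';

Future<void> runLiveness() async {
  try {
    final result = await ILive.start(config: const ILiveConfig());
    if (result.verdict == Verdict.fail) {
      // e.g. "No face detected", "Session timeout", or a hard-floor veto
      // from the anti-spoof or deepfake layer.
      print('Failed: ${result.failureReason}');
    }
  } on PlatformException catch (e) {
    // Codes are defined in ilive_error.dart.
    print('Platform error ${e.code}: ${e.message}');
  }
}
```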

See also