iOS SDK Reference

Complete public API surface for the iLive iOS SDK.

This page documents the public Swift API of the iLive iOS SDK: entry points, configuration, result types, and face comparison helpers. For a step-by-step integration walkthrough, see the iOS quickstart.

Installation

The SDK ships as two Swift targets: ILiveCore (engine, configuration, results) and ILiveUI (drop-in LivenessViewController). Both are distributed via Swift Package Manager and CocoaPods. See the quickstart for full setup.

Minimum deployment target: iOS 14.0.

import ILiveCore
import ILiveUI

Entry points

There are two ways to run a liveness session:

  1. Drop-in LivenessViewController from ILiveUI. A full-screen UIViewController that handles camera, guidance, challenges, and the result screen. SwiftUI callers can wrap it with UIViewControllerRepresentable.
  2. Custom LivenessEngine from ILiveCore. You drive the camera and UI; the engine consumes CVPixelBuffer frames and returns a verdict.

Top-level helpers live on the ILive class:

| Member | Signature | Purpose |
| --- | --- | --- |
| version | static let version: String | SDK version string. |
| isDeviceSupported | static func isDeviceSupported() -> DeviceSupportResult | Checks that the device meets the minimum OS version. |
| compareFaces | static func compareFaces(reference: Data, probe: Data, threshold: Float = 0.45) -> FaceMatchResult? | 1:1 comparison of two JPEGs. |
| compareFaceWithEmbedding | static func compareFaceWithEmbedding(referenceEmbedding: [Float], probe: Data, threshold: Float = 0.45) -> FaceMatchResult? | Compare a stored embedding against a new JPEG. |
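A typical pre-flight check before starting a session might look like the sketch below. Note that the shape of DeviceSupportResult is not documented here; the isSupported property is an assumption, so check the type's actual fields.

```swift
import ILiveCore

// Log the SDK version, e.g. for support tickets.
print("iLive SDK \(ILive.version)")

// Gate the flow on device support before presenting any liveness UI.
// NOTE: `isSupported` is an assumed property of DeviceSupportResult;
// consult the type definition for the real field names.
let support = ILive.isDeviceSupported()
if support.isSupported {
    // Safe to start a liveness session.
} else {
    // Fall back, e.g. to a manual review flow.
}
```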

Drop-in UI: LivenessViewController

LivenessViewController is a plain UIViewController. Create it, set result callbacks, then present it modally.

| Initializer / property | Signature | Notes |
| --- | --- | --- |
| init(config:passiveMode:) | convenience init(config: ILiveConfig = ILiveConfig(), passiveMode: Bool = true) | Passive mode runs without user challenges. |
| onResult | ((LivenessResult) -> Void)? | Fired with the final LivenessResult. |
| onDismiss | (() -> Void)? | Fired after the controller dismisses itself. |

import ILiveCore
import ILiveUI
 
let vc = LivenessViewController(config: ILiveConfig(), passiveMode: true)
vc.onResult = { result in
    if result.verdict == .pass {
        // result.icaoPhoto, result.faceEmbedding, result.attestation
    }
}
vc.modalPresentationStyle = .fullScreen
present(vc, animated: true)

For SwiftUI, wrap it with UIViewControllerRepresentable:

struct LivenessFlow: UIViewControllerRepresentable {
    let onResult: (LivenessResult) -> Void
    func makeUIViewController(context: Context) -> LivenessViewController {
        let vc = LivenessViewController()
        vc.onResult = onResult
        return vc
    }
    func updateUIViewController(_ vc: LivenessViewController, context: Context) {}
}
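The wrapper can then be presented like any SwiftUI view, for example via fullScreenCover. This is a minimal sketch; the @State flag and view name are illustrative only.

```swift
import SwiftUI

struct VerifyButton: View {
    @State private var showLiveness = false

    var body: some View {
        Button("Verify identity") { showLiveness = true }
            .fullScreenCover(isPresented: $showLiveness) {
                // LivenessFlow is the UIViewControllerRepresentable wrapper above.
                LivenessFlow { result in
                    showLiveness = false
                    // Handle result.verdict here.
                }
            }
    }
}
```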

Custom UI: LivenessEngine

LivenessEngine exposes the capture pipeline without a bundled UI.

| Member | Signature | Notes |
| --- | --- | --- |
| Create | static func create(config: ILiveConfig = ILiveConfig()) async -> LivenessEngine | Async factory. |
| Initialize | func initialize() async | Loads on-device models. Call once before processFrame. |
| Process frame | func processFrame(pixelBuffer: CVPixelBuffer) -> TrackingResult | Feed every camera frame. |
| Start challenges | func startChallenges(delegate: ChallengeEventDelegate) | Begin the interactive challenge sequence. |
| Evaluate (active) | func evaluateVerdict() async -> LivenessResult | Call after all challenges complete. |
| Evaluate (passive) | func evaluatePassiveVerdict() async -> LivenessResult | Skips challenges; scores purely from captured frames. |
| Release | func release() | Free resources. Idempotent. |
| sessionId | String | UUID for this session. |
| currentStage | LivenessStage | .initializing, .ready, .challenges, .processing, .result. |

let engine = await LivenessEngine.create(config: ILiveConfig())
await engine.initialize()
 
// For each CVPixelBuffer from AVCaptureVideoDataOutput:
let tracking = engine.processFrame(pixelBuffer: pixelBuffer)
 
// Passive flow:
let result = await engine.evaluatePassiveVerdict()
engine.release()
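In a custom UI, frames typically come from an AVCaptureVideoDataOutput delegate. A minimal sketch of wiring that delegate to the engine (capture-session and preview setup omitted; what you do with TrackingResult is up to your overlay):

```swift
import AVFoundation
import ILiveCore

// Feeds camera frames into a LivenessEngine. Create this object, assign it
// as the AVCaptureVideoDataOutput sample buffer delegate, and start the session.
final class FrameFeeder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let engine: LivenessEngine
    init(engine: LivenessEngine) { self.engine = engine }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // TrackingResult can drive your own overlay (face box, guidance text).
        _ = engine.processFrame(pixelBuffer: pixelBuffer)
    }
}
```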

Configuration: ILiveConfig

ILiveConfig is a value type with a memberwise initializer — pass only the fields you want to override.

Challenges and verdict thresholds

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| challengeCount | Int | 3 | Number of challenges in active mode. |
| challengeTypes | Set<ChallengeType> | ILiveConfig.defaultChallengeTypes (7 of 8) | Pool of challenges to draw from. Excludes .eyeFollow by default; see note below. |
| challengeTimeoutSeconds | Int | 8 | Per-challenge timeout. |
| transitionDelayMs | Int | 500 | Pause between challenges. |
| maxRetriesPerChallenge | Int | 1 | In-session retry count per challenge. |
| passThreshold | Float | 0.70 | Minimum confidence for .pass. |
| retryThreshold | Float | 0.45 | Minimum confidence for .retry. |
| antispoofFloor | Float | 0.30 | Anti-spoof layer veto floor. |
| deepfakeFloor | Float | 0.20 | Deepfake layer veto floor. |
| layerWeights | LayerWeights | balanced | Per-layer weight vector. |

ChallengeType values: .blink, .turnLeft, .turnRight, .nod, .smile, .mouthOpen, .eyebrowRaise, .eyeFollow.

eyeFollow on iOS. The drop-in LivenessViewController does not currently render the moving-dot overlay that this challenge needs. To avoid showing users a prompt with no visible target, .eyeFollow is omitted from ILiveConfig.defaultChallengeTypes. The enum case is still public so sessions that ran with earlier defaults continue to compile, and callers who supply a custom UI can opt in by passing it explicitly in challengeTypes — the SDK emits an os_log warning when they do. eyeFollow remains fully supported on Android and Flutter; iOS parity will land when the overlay ships.
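Callers whose custom UI renders its own moving-dot target can opt back in by listing .eyeFollow explicitly. A sketch (the other challenge types shown are an arbitrary selection):

```swift
import ILiveCore

// Explicit opt-in to .eyeFollow for a custom UI that draws the moving-dot
// target itself. The SDK emits an os_log warning when this is set.
var config = ILiveConfig()
config.challengeTypes = [.blink, .turnLeft, .turnRight, .eyeFollow]
```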

Voice prompts

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| voicePromptsEnabled | Bool | false | Speak challenge instructions. |
| voiceRate | Float | 1.0 | Speech rate multiplier. |
| voiceLanguage | String | "en-US" | BCP-47 locale tag. |

Photo extraction

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| photoExtractionEnabled | Bool | true | Produce an ICAO-style still on .pass. |
| photoWidth | Int | 480 | Output width. |
| photoHeight | Int | 600 | Output height. |
| photoJpegQuality | Int | 95 | JPEG quality (0–100). |

Security

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| attestationKeyBase64 | String? | nil | Base64-encoded HMAC key used to sign the result payload. Canonical form across all SDKs. |
| attestationKey | Data? (deprecated) | nil | Deprecated raw-bytes field, retained for one release. Prefer attestationKeyBase64. |
| frameBundleEncryptionKey | Data? | nil | AES key (32 bytes) used to encrypt captured frames. |
| frameBundleFrameCount | Int | 8 | Number of frames in the encrypted bundle. |
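A sketch of wiring up both security fields. Key management is shown only schematically; the environment-variable name is made up, and in production both keys should come from your backend or the keychain rather than being generated on device.

```swift
import Foundation
import Security
import ILiveCore

var config = ILiveConfig()

// HMAC attestation key, delivered to the app as base64.
// "ILIVE_ATTESTATION_KEY" is a hypothetical name for this example.
config.attestationKeyBase64 = ProcessInfo.processInfo.environment["ILIVE_ATTESTATION_KEY"]

// 32-byte AES key for the encrypted frame bundle, generated here only
// for illustration via the system CSPRNG.
var keyBytes = [UInt8](repeating: 0, count: 32)
if SecRandomCopyBytes(kSecRandomDefault, keyBytes.count, &keyBytes) == errSecSuccess {
    config.frameBundleEncryptionKey = Data(keyBytes)
}
```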

Timeouts

| Field | Type | Default |
| --- | --- | --- |
| modelLoadTimeoutSeconds | Int | 10 |
| cameraInitTimeoutSeconds | Int | 5 |
| totalSessionTimeoutSeconds | Int | 120 |

var config = ILiveConfig(
    challengeCount: 4,
    passThreshold: 0.75,
    voicePromptsEnabled: true
)
config.attestationKeyBase64 = "MTIzNDU2Nzg5MDEyMzQ1Ng=="

Results: LivenessResult

Returned by evaluateVerdict() and evaluatePassiveVerdict(), and delivered via LivenessViewController.onResult.

| Field | Type | Description |
| --- | --- | --- |
| sessionId | String | UUID assigned when the engine was created. |
| verdict | Verdict | .pass, .fail, or .retry. |
| confidence | Float | Weighted aggregate confidence (0.0–1.0). |
| layerScores | [LayerScore] | Per-layer breakdown. |
| retryHint | String? | User-facing guidance when verdict == .retry. |
| failureReason | String? | Diagnostic reason when verdict == .fail. |
| icaoPhoto | Data? | JPEG still of the subject on .pass. |
| photoQualityScore | Float? | Quality rating for icaoPhoto (0.0–1.0). |
| faceEmbedding | [Float]? | 512-dimensional face embedding from the best frame. |
| attestation | Attestation? | Signed payload (payload, signature, algorithm) when attestationKeyBase64 was set. |
| encryptedFrameBundle | FrameBundle? | Encrypted captured frames (ciphertext, iv, frameCount, keyId). |
| metadata | SessionMetadata | Duration, challenge timings, device model, OS version, SDK version. |
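A typical result handler, assuming Verdict has exactly the three cases listed above:

```swift
import ILiveCore

func handle(_ result: LivenessResult) {
    switch result.verdict {
    case .pass:
        // Keep the still for your records and the embedding for later 1:1 checks.
        let photo = result.icaoPhoto          // set when photo extraction is enabled
        let embedding = result.faceEmbedding  // 512 floats from the best frame
        _ = (photo, embedding)
    case .retry:
        // Show the hint and let the user try again.
        print(result.retryHint ?? "Please try again.")
    case .fail:
        // Log the diagnostic reason.
        print(result.failureReason ?? "Liveness check failed.")
    }
}
```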

Face recognition: ILive.compareFaces()

One-shot 1:1 face comparison. Returns nil if an image cannot be decoded or no face is found.

static func compareFaces(
    reference: Data,   // JPEG data
    probe: Data,       // JPEG data
    threshold: Float = FaceRecognition.defaultThreshold  // 0.45
) -> FaceMatchResult?
 
static func compareFaceWithEmbedding(
    referenceEmbedding: [Float],   // 512 floats from LivenessResult.faceEmbedding
    probe: Data,
    threshold: Float = 0.45
) -> FaceMatchResult?

FaceMatchResult: similarity: Float (cosine similarity, 0.0–1.0), isMatch: Bool (similarity >= threshold), threshold: Float.

// Compare an ID document photo against the captured liveness photo.
if let match = ILive.compareFaces(
    reference: idDocumentJpegData,
    probe: result.icaoPhoto!
), match.isMatch {
    // Same person, match.similarity
}
 
// Later: compare the stored embedding against a new selfie.
if let match = ILive.compareFaceWithEmbedding(
    referenceEmbedding: result.faceEmbedding!,
    probe: newSelfieJpegData
) {
    // ...
}

Error handling

The engine does not throw on evaluation — instead, failures surface via LivenessResult.verdict == .fail with failureReason set. Common reasons:

| Reason | Meaning |
| --- | --- |
| "Engine released" | evaluate* was called after release(). |
| "No face detected" | No face was found across captured frames. |
| "Session timeout" | The session exceeded totalSessionTimeoutSeconds. |
| "Antispoof floor" / "Deepfake floor" | A hard-floor layer vetoed the verdict. |

Camera permission is the caller's responsibility — ensure NSCameraUsageDescription is set and permission is granted before presenting LivenessViewController.
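A sketch of requesting camera access before presenting the controller (the surrounding function is illustrative; NSCameraUsageDescription must already be present in Info.plist):

```swift
import AVFoundation
import UIKit
import ILiveUI

// Ask for camera permission, then present the drop-in liveness UI.
func startLiveness(from presenter: UIViewController) {
    AVCaptureDevice.requestAccess(for: .video) { granted in
        DispatchQueue.main.async {
            guard granted else {
                // Route the user to Settings or show an explainer instead.
                return
            }
            let vc = LivenessViewController()
            vc.modalPresentationStyle = .fullScreen
            presenter.present(vc, animated: true)
        }
    }
}
```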

See also