
Installation

Swift Package Manager

Add the package dependency in Xcode:
  1. File → Add Package Dependencies
  2. Enter: https://github.com/decartai/decart-ios
  3. Select version and add to your target
Or add to your Package.swift:
dependencies: [
    .package(url: "https://github.com/decartai/decart-ios.git", from: "0.1.0")
]
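When adding the dependency manually, the target also needs to reference the library product. A minimal sketch, assuming the product is named DecartSDK (matching the module you import) and a hypothetical target name:

```swift
targets: [
    .target(
        name: "MyApp",  // placeholder — use your own target name
        dependencies: [
            .product(name: "DecartSDK", package: "decart-ios")
        ]
    )
]
```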

Quick Start

import Foundation
import DecartSDK

let config = DecartConfiguration(apiKey: "your-api-key-here")
let client = DecartClient(decartConfiguration: config)

// Edit a video using the Queue API
let videoData = try Data(contentsOf: videoURL)
let input = try VideoEditInput(
    prompt: "Transform into anime style",
    data: .video(data: videoData)
)

let result = try await client.queue.submitAndPoll(
    model: .lucy_2_v2v,
    input: input
) { status in
    print("Status: \(status.status)")
}

switch result {
case .completed(_, let data):
    try data.write(to: outputURL)
    print("Video saved!")
case .failed(_, let error):
    print("Failed: \(error)")
}

What can you build?

The SDK provides three main APIs for different use cases:

If you need to…                              Use           Main class
Transform live camera streams over WebRTC    Realtime API  DecartRealtimeManager
Generate/edit videos asynchronously          Queue API     QueueClient
Edit images synchronously                    Process API    ProcessClient

Client Setup

Initialize the Decart client with your API key:
import DecartSDK

let config = DecartConfiguration(
    baseURL: "https://api.decart.ai",  // optional, this is the default
    apiKey: "your-api-key-here"
)

let client = DecartClient(decartConfiguration: config)
Parameters:
  • baseURL (optional) - Custom API endpoint (defaults to https://api.decart.ai)
  • apiKey (required) - Your Decart API key from the platform
Store your API key securely. Never commit API keys to version control. Use environment variables or secure storage like Keychain.
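As one option, the key can be read from the Keychain at launch. A minimal sketch using the Security framework; the service and account strings below are placeholders, not values the SDK defines:

```swift
import Foundation
import Security

// Read an API key previously stored in the Keychain.
// "ai.decart.sdk" and "api-key" are illustrative identifiers — use your own.
func loadAPIKey() -> String? {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "ai.decart.sdk",
        kSecAttrAccount as String: "api-key",
        kSecReturnData as String: true,
        kSecMatchLimit as String: kSecMatchLimitOne
    ]
    var item: CFTypeRef?
    guard SecItemCopyMatching(query as CFDictionary, &item) == errSecSuccess,
          let data = item as? Data else { return nil }
    return String(data: data, encoding: .utf8)
}
```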

Client Tokens

For production iOS and macOS apps using the Realtime API, fetch short-lived client tokens from your backend instead of embedding your permanent API key in the app bundle:
// Fetch ephemeral key from your backend
let ephemeralKey = try await fetchTokenFromBackend()

// Use it to create the client
let config = DecartConfiguration(apiKey: ephemeralKey)
let client = DecartClient(decartConfiguration: config)
See Client Tokens for details on secure client-side authentication.
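The fetchTokenFromBackend helper above is something you implement against your own server. A hedged sketch using URLSession; the endpoint URL and JSON shape here are assumptions about your backend, not part of the SDK:

```swift
import Foundation

// Hypothetical response shape from your own token endpoint.
struct TokenResponse: Decodable {
    let apiKey: String
}

// Fetch a short-lived client token from your backend.
func fetchTokenFromBackend() async throws -> String {
    let url = URL(string: "https://example.com/api/decart-token")!  // placeholder URL
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode(TokenResponse.self, from: data).apiKey
}
```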

Available Models

Import models from the SDK to use with any API:
import DecartSDK

// Realtime models
Models.realtime(.lucy_2_rt)            // Realtime video editing
Models.realtime(.mirage_v2)            // Realtime video restyling
Models.realtime(.lucy_v2v_720p_rt)     // Realtime video editing (720p)
Models.realtime(.mirage)               // Realtime video transformation

// Video models (Queue API)
Models.video(.lucy_2_v2v)             // Video editing
Models.video(.lucy_restyle_v2v)       // Video restyling
Models.video(.lucy_motion)            // Trajectory-based motion control
Models.video(.lucy_pro_v2v)           // Lucy Edit v1

// Image models (Process API)
Models.image(.lucy_pro_i2i)           // Image editing
Each model exposes properties for optimal configuration:
let model = Models.realtime(.mirage_v2)
print(model.fps)     // 22
print(model.width)   // 1280
print(model.height)  // 704

Platform Requirements

The Swift SDK requires:
  • iOS 17.0+ or macOS 12.0+
  • Swift 6.2+
  • Xcode 16.0+
  • A real device for camera access (simulator not supported for WebRTC camera features)

Swift Concurrency

The SDK uses modern Swift concurrency with async/await and AsyncStream for reactive state:
// All connection methods are async
let remoteStream = try await manager.connect(localStream: localStream)

// Monitor state changes with AsyncStream
for await state in manager.events {
    print("Connection: \(state.connectionState)")
    print("Service: \(state.serviceStatus)")
}

Type Safety

The SDK uses typed input classes for each model, providing compile-time guarantees:
// Video editing — requires prompt + video data
let videoEditInput = try VideoEditInput(
    prompt: "Transform into anime style",
    data: .video(data: videoData)
)

// Image editing — requires prompt + image data
let imageInput = try ImageToImageInput(
    prompt: "Change the background to a beach",
    data: .image(data: imageData)
)

// Video restyling — requires prompt OR reference image (not both)
let restyleInput = try VideoRestyleInput(
    prompt: "Studio Ghibli style",
    data: .video(data: videoData)
)

// Motion video — requires image + trajectory points
let motionInput = try MotionVideoInput(
    data: .image(data: imageData),
    trajectory: [
        try TrajectoryPoint(frame: 0, x: 0.5, y: 0.5),
        try TrajectoryPoint(frame: 30, x: 0.7, y: 0.3),
    ]
)
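These typed inputs plug into the same queue call shown in Quick Start. For example, the motion input above could be submitted like this (result handling elided; assumes the client from Client Setup):

```swift
let result = try await client.queue.submitAndPoll(
    model: .lucy_motion,
    input: motionInput
) { status in
    print("Status: \(status.status)")
}
```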

Ready to start building?

Realtime API Guide

Build realtime iOS experiences: camera handling, video transformation, and interactive applications with WebRTC.

Queue API Guide

Video processing: edit videos, control motion, and transform videos with style transfer.

Process API Guide

Image editing: edit and transform images on-demand with synchronous processing.