Lucy 2 is our most advanced realtime video editing model. Upload a reference image and watch yourself transform into that character live. It builds on everything in our original Lucy realtime model and adds character reference: provide any face, and Lucy 2 maps your movements and expressions onto that identity in real time.
Use Lucy 2 as your default realtime editing model. It supports character reference and text-only editing in the same integration.

Quick start

Installation

npm install @decartai/sdk

Transform into a character

import { createDecartClient, models } from "@decartai/sdk";

const model = models.realtime("lucy_2_rt");

// Get your camera
const stream = await navigator.mediaDevices.getUserMedia({
  video: {
    frameRate: model.fps,
    width: model.width,
    height: model.height,
  },
});

const client = createDecartClient({
  apiKey: "your-api-key-here",
});

// Connect to Lucy 2
const realtimeClient = await client.realtime.connect(stream, {
  model,
  onRemoteStream: (transformedStream) => {
    document.getElementById("output").srcObject = transformedStream;
  },
});

// Upload a reference image and transform
await realtimeClient.set({
  prompt: "Substitute the character in the video with the person in the reference image.",
  image: characterImage, // File, Blob, or URL string
  enhance: true,
});

Portrait mode (9:16)

Lucy 2 supports both landscape (16:9) and portrait (9:16) video input. On mobile devices (iOS/Android), portrait mode works automatically — the OS maps the front camera to a vertical stream regardless of the constraints you pass. For desktop browsers or external webcams, swap width and height when calling getUserMedia to request a portrait stream:
const model = models.realtime("lucy_2_rt");

const stream = await navigator.mediaDevices.getUserMedia({
  video: {
    frameRate: { ideal: model.fps },
    width: { ideal: model.height },  // swap: use height as width
    height: { ideal: model.width },  // swap: use width as height
    facingMode: "user",
  },
});

const client = createDecartClient({ apiKey: "your-api-key-here" });

const realtimeClient = await client.realtime.connect(stream, {
  model,
  onRemoteStream: (transformedStream) => {
    document.getElementById("output").srcObject = transformedStream;
  },
});
Portrait mode is ideal for mobile webcams and vertical video content.
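If one code path has to serve both desktop and mobile capture, the dimension swap described above can be wrapped in a small helper. This is an illustrative sketch, not SDK code: portraitConstraints is a made-up name, and the user-agent check is only a heuristic for the mobile/desktop distinction.

```typescript
// Sketch: build getUserMedia video constraints for portrait capture.
// Mobile OSes map the front camera to a vertical stream automatically,
// so the width/height swap is only needed on desktop.
function portraitConstraints(
  model: { fps: number; width: number; height: number },
  userAgent: string,
) {
  const isMobile = /Android|iPhone|iPad/i.test(userAgent); // heuristic
  return {
    frameRate: { ideal: model.fps },
    width: { ideal: isMobile ? model.width : model.height },   // swap on desktop
    height: { ideal: isMobile ? model.height : model.width },  // swap on desktop
    facingMode: "user",
  };
}
```

The result can be passed directly as the `video` member of the getUserMedia constraints object.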

Use cases

Character cosplay

Become any character live on camera — no costume needed. Stream as your favorite game, anime, or movie character.

Virtual try-on

Let customers see products on themselves in realtime. Upload a model photo and map it onto the customer’s live feed.

Live streaming

Transform your appearance for Twitch, YouTube, or TikTok Live. Switch characters on the fly without interrupting your stream.

Content creation

Produce character-driven video content without actors or costumes. Change your look between shots with a single API call.

Video conferencing

Appear as a custom avatar in meetings. Your facial expressions and head movements transfer naturally to the reference character.

Interactive experiences

Build apps where users transform into characters in realtime — photo booths, AR filters, virtual events.

How character reference works

Lucy 2 uses your reference image as a visual identity target. Your live video provides the motion, expressions, and pose — the model blends the two so the output looks like the reference character performing your movements.
1. Connect with your camera

Establish a WebRTC connection with your camera stream, just like any other realtime model.

2. Upload a reference image

Provide any portrait photo — a character, a celebrity, or a generated face. The model extracts the visual identity from this image.

3. See yourself transformed

Your movements, expressions, and gestures are mapped onto the reference character in realtime. The output stream shows the character performing your actions.

Updating the reference image

You can change the character at any time without reconnecting. Use the set() method to atomically replace the session state — include all fields you want to keep:
// Change to a new character
await realtimeClient.set({
  prompt: "Substitute the character in the video with the person in the reference image.",
  image: newCharacterImage,
  enhance: true,
});

// Set a new image only (clears any previous prompt)
await realtimeClient.set({ image: newCharacterImage });

// Set a new prompt only (clears any previous image)
await realtimeClient.set({ prompt: "Add dark sunglasses to the person's face." });

// Clear the reference image (fall back to text-only editing)
await realtimeClient.set({ image: null });
set() replaces the entire state — fields you omit are cleared. Always include every field you want to keep. This avoids intermediate states and ensures prompt and image stay in sync.
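Because set() replaces the whole state, a thin wrapper that remembers the last payload can make partial updates safer. This is a sketch, not part of the SDK: createStatefulSetter and SessionState are made-up names, and the client is only assumed to expose set() as shown above.

```typescript
// Sketch: merge each update into the last-sent state before calling set(),
// so omitting a field in a partial update doesn't accidentally clear it.
type SessionState = { prompt?: string; image?: unknown; enhance?: boolean };

function createStatefulSetter(client: { set: (s: SessionState) => Promise<void> }) {
  let last: SessionState = {};
  return async (patch: SessionState) => {
    last = { ...last, ...patch }; // merge instead of replace
    await client.set(last);       // always send the full state
    return last;
  };
}
```

With this wrapper, `setState({ image: newCharacterImage })` keeps the previously set prompt; to clear a field you pass it explicitly (e.g. `{ image: null }`), matching the atomic-replace semantics described above.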

Text-only editing

Lucy 2 works without a reference image too. Use text prompts to add, modify, or remove elements in your live video:
// No reference image needed for text-only edits
await realtimeClient.set({ prompt: "Add a small dog running around in the background." });
await realtimeClient.set({ prompt: "Change the background to a sandy beach with clear blue water." });
await realtimeClient.set({ prompt: "Change the person's hair color to bright blonde." });
See the prompting guide below for the best prompt structures for each edit type.

Prompting guide

Lucy 2 responds best when your prompts follow specific patterns for each edit type. There are four supported operations, each with its own structure.
  • Character transformation: “Substitute the character in the video with <description>.”
  • Add: “Add <description of object in reference image> to <where to add it>.”
  • Replace: “Change <object to change> to <description of the object in the reference image>.”
  • Change attribute: “Change <object to change attribute of> to <description of new attribute>.”
These templates produce the best results. The enhance option can improve short prompts, but starting with a well-structured prompt gives you more control over the output.
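The four templates can be captured in a small helper so every prompt your app sends stays consistently structured. This is illustrative only: buildPrompt, EditType, and the parameter names are not part of the SDK.

```typescript
// Sketch: format prompts according to the documented templates.
type EditType = "transform" | "add" | "replace" | "attribute";

function buildPrompt(edit: EditType, subject: string, detail?: string): string {
  switch (edit) {
    case "transform": // subject describes the character in the reference image
      return `Substitute the character in the video with ${subject}.`;
    case "add":       // subject is the object, detail is the placement
      return `Add ${subject} to ${detail}.`;
    case "replace":   // subject is the existing object, detail the replacement
      return `Change ${subject} to ${detail}.`;
    case "attribute": // subject is the object, detail the new attribute
      return `Change ${subject} to ${detail}.`;
    default:
      throw new Error(`unknown edit type: ${edit}`);
  }
}
```

For example, `buildPrompt("add", "a red conical hat", "the person's head")` yields a prompt in the documented Add shape.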

Character transformation

When using a reference image, describe the character’s appearance in the prompt. The more detail you provide, the closer the output matches the reference. “Substitute the character in the video with <description of the character in reference image>.” Examples:
  • Substitute the character with an older man, who has pale, wrinkled skin, light blue eyes, a powdered white wig with side curls, and wears a dark formal coat with a white ruffled neckpiece.
  • Substitute the character with a young person wearing a short-sleeved pink top with white ribbon ties on the back, loose pink pants, and short brown hair tied in a side ponytail.
  • Substitute the character with a furry creature, which has soft brown and orange fur, a light face with dark eye markings, a dark nose, and long claws.
await realtimeClient.set({
  prompt: "Substitute the character with an older man, who has pale, wrinkled skin, light blue eyes, a powdered white wig with side curls, and wears a dark formal coat with a white ruffled neckpiece.",
  image: referenceImage,
  enhance: true,
});
Describe what you see in the reference image — skin tone, hair, clothing, distinctive features. Generic prompts like “Transform into this character” still work but produce less precise results.

Adding objects

Add new elements to the scene by specifying what to add and where to place it. “Add <description of object in reference image> to <where to add it>.” Examples:
  • Add a red conical hat, covered in sequins, with a white fluffy trim and a matching pompom to the person’s head.
  • Add a purple knit headband which features a black embroidered athletic jumping figure to the person’s head.
await realtimeClient.set({
  prompt: "Add a red conical hat, covered in sequins, with a white fluffy trim and a matching pompom to the person's head.",
  image: hatReference,
  enhance: true,
});
Always specify placement (“to the person’s head”, “in the background”, “on the table”). Without a location, the model places the object unpredictably.

Replacing objects

Swap an existing element in the scene with something different. “Change <object to change> to <description of the object in the reference image>.” Examples:
  • Change the person’s sweater to a red knit sweater, which has a white-outlined, gold and white striped rectangular emblem on the chest.
  • Change the shirt to a black t-shirt, which features a large, stylized white text graphic across the chest and has a round neck.
await realtimeClient.set({
  prompt: "Change the person's sweater to a red knit sweater, which has a white-outlined, gold and white striped rectangular emblem on the chest.",
  image: sweaterReference,
  enhance: true,
});

Changing attributes

Modify a property of an existing object — color, texture, material — without replacing the object itself. “Change <object to change attribute of> to <description of new attribute>.” Examples:
  • Change the wall’s color to light blue, natural consistent paint finish.
  • Change the shirt’s texture to knitted, woven fabric.
// No reference image needed for attribute changes
await realtimeClient.set({
  prompt: "Change the wall's color to light blue, natural consistent paint finish.",
});
Attribute changes work without a reference image. When you do provide one, the model pulls the attribute (color, texture) from the reference.

Prompt tips

  • Be specific — “a red knit sweater with a white emblem on the chest” outperforms “a red sweater”
  • Describe the reference image — when using character transformation, describe what you see in the reference photo (skin, hair, clothing, features)
  • One edit per prompt — combining multiple edits in a single prompt can produce unpredictable results
  • Use enhance: true — prompt enhancement auto-expands short prompts, but explicit detail always wins

Reference image best practices

For the best character transformation results, follow these guidelines. See also the character transformation prompting pattern for how to describe your reference image in the prompt.
  • Use a clear, well-lit portrait — front-facing photos with neutral expressions work best
  • Match the framing — head-and-shoulders crops produce more consistent results than full-body shots
  • Supported formats — JPEG, PNG, and WebP
  • Resolution — at least 512×512 pixels recommended
  • Avoid heavy occlusion — images where the face is partially hidden may reduce quality
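Some of these guidelines can be checked client-side before uploading. A minimal sketch: the MIME types and the 512-pixel minimum come from the list above, while checkReferenceImage is a made-up helper, not SDK API.

```typescript
// Sketch: validate a reference image against the documented guidelines.
// Returns a list of problems; an empty list means the image looks acceptable.
const SUPPORTED_TYPES = new Set(["image/jpeg", "image/png", "image/webp"]);

function checkReferenceImage(mimeType: string, width: number, height: number): string[] {
  const problems: string[] = [];
  if (!SUPPORTED_TYPES.has(mimeType)) {
    problems.push(`unsupported format: ${mimeType} (use JPEG, PNG, or WebP)`);
  }
  if (width < 512 || height < 512) {
    problems.push(`resolution ${width}x${height} is below the recommended 512x512`);
  }
  return problems;
}
```

In a browser you would obtain the dimensions by decoding the File into an image first; lighting and occlusion, of course, cannot be checked this way.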

Connection lifecycle

Lucy 2 shares the same connection lifecycle as all realtime models. See the JavaScript SDK, Python SDK, or Android SDK for details on:
  • Connection states (connecting, connected, generating, reconnecting, disconnected)
  • Auto-reconnect with exponential backoff
  • Error handling with DecartSDKError
  • Session tracking with generationTick events
  • Session viewing with subscribe tokens
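The SDK reconnects automatically, but for intuition, the documented policy (exponential backoff, up to 5 retries) produces a delay schedule like the one below. The base delay and cap here are assumptions for illustration; the SDK's actual values may differ.

```typescript
// Sketch: delay schedule for exponential backoff with a retry limit.
// baseMs and capMs are illustrative, not the SDK's real parameters.
function backoffDelays(baseMs = 500, maxRetries = 5, capMs = 8000): number[] {
  return Array.from({ length: maxRetries }, (_, i) => Math.min(baseMs * 2 ** i, capMs));
}

// backoffDelays() → [500, 1000, 2000, 4000, 8000]
```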

Complete example

A full application with character switching, connection management, and error handling:
import { createDecartClient, models, type DecartSDKError } from "@decartai/sdk";

async function setupLucy2() {
  const model = models.realtime("lucy_2_rt");

  const stream = await navigator.mediaDevices.getUserMedia({
    video: {
      frameRate: model.fps,
      width: model.width,
      height: model.height,
    },
  });

  // Show the local camera feed
  document.getElementById("input-video").srcObject = stream;

  const client = createDecartClient({
    apiKey: process.env.DECART_API_KEY,
  });

  const realtimeClient = await client.realtime.connect(stream, {
    model,
    onRemoteStream: (transformedStream) => {
      document.getElementById("output-video").srcObject = transformedStream;
    },
  });

  // Track connection state
  realtimeClient.on("connectionChange", (state) => {
    document.getElementById("status").textContent = state;
  });

  // Track billing
  realtimeClient.on("generationTick", ({ seconds }) => {
    document.getElementById("usage").textContent = `${seconds}s`;
  });

  // Handle errors
  realtimeClient.on("error", (error: DecartSDKError) => {
    console.error("Lucy 2 error:", error.code, error.message);
  });

  // Character selection
  document.getElementById("character-input").addEventListener("change", async (e) => {
    const file = (e.target as HTMLInputElement).files?.[0];
    if (file) {
      await realtimeClient.set({
        prompt: "Substitute the character in the video with the person in the reference image.",
        image: file,
        enhance: true,
      });
    }
  });

  // Cleanup
  window.addEventListener("beforeunload", () => {
    realtimeClient.disconnect();
    stream.getTracks().forEach((track) => track.stop());
  });

  return realtimeClient;
}

setupLucy2();

Client-side authentication

For browser and mobile apps, use client tokens instead of your permanent API key.
// Backend: generate a short-lived token
const token = await client.tokens.create();

// Frontend: connect with the token
const frontendClient = createDecartClient({ apiKey: token.apiKey });
See the full pattern for JavaScript or Android.

Technical specifications

  • Model ID: lucy_2_rt
  • Resolution: 1280×720
  • Orientation: Landscape (16:9) and Portrait (9:16)
  • Transport: WebRTC
  • Character reference: Yes (JPEG, PNG, WebP)
  • Prompt enhancement: Yes (default: enabled)
  • Auto-reconnect: Yes (exponential backoff, up to 5 retries)

Next steps

JavaScript SDK

Full JavaScript SDK reference for realtime features

Python SDK

Full Python SDK reference for realtime features

Android SDK

Full Android SDK reference for realtime features

All Models

Compare all Decart models side by side