Transform yourself into any character in realtime with AI-powered video editing at 720p
Lucy 2 is our most advanced realtime video editing model. Upload a reference image and watch yourself transform into that character live. It builds on everything in our original Lucy realtime model and adds character reference: provide any face, and Lucy 2 maps your movements and expressions onto that identity in real time.
Use Lucy 2 as your default realtime editing model. It supports character reference and text-only editing in the same integration.
```typescript
import { createDecartClient, models } from "@decartai/sdk";

const model = models.realtime("lucy_2_rt");

// Get your camera
const stream = await navigator.mediaDevices.getUserMedia({
  video: {
    frameRate: model.fps,
    width: model.width,
    height: model.height,
  },
});

const client = createDecartClient({
  apiKey: "your-api-key-here",
});

// Connect to Lucy 2
const realtimeClient = await client.realtime.connect(stream, {
  model,
  onRemoteStream: (transformedStream) => {
    document.getElementById("output").srcObject = transformedStream;
  },
});

// Upload a reference image and transform
await realtimeClient.set({
  prompt: "Substitute the character in the video with the person in the reference image.",
  image: characterImage, // File, Blob, or URL string
  enhance: true,
});
```
```python
import asyncio

from decart import DecartClient, models


async def main():
    async with DecartClient(api_key="your-api-key-here") as client:
        model = models.realtime("lucy_2_rt")
        realtime = await client.realtime.connect(
            stream=media_stream,
            model=model,
            on_remote_stream=lambda s: display(s),
        )

        # Upload a reference image and transform
        with open("character.jpg", "rb") as f:
            await realtime.set(
                prompt="Substitute the character in the video with the person in the reference image.",
                image=f,
                enhance=True,
            )


asyncio.run(main())
```
```kotlin
import ai.decart.sdk.DecartClient
import ai.decart.sdk.DecartClientConfig
import ai.decart.sdk.RealtimeModels
import ai.decart.sdk.realtime.ConnectOptions

val model = RealtimeModels.LUCY_2_RT
val client = DecartClient(context, DecartClientConfig(apiKey = "your-api-key"))
client.realtime.initialize(eglBase)

// Create camera track using model dimensions
val videoSource = client.realtime.createVideoSource(isScreencast = false)!!
val videoTrack = client.realtime.createVideoTrack("camera", videoSource)!!
val enumerator = Camera2Enumerator(context)
val cameraName = enumerator.deviceNames.first { enumerator.isFrontFacing(it) }
val capturer = enumerator.createCapturer(cameraName, null)
capturer.initialize(
    SurfaceTextureHelper.create("CaptureThread", client.realtime.getEglBaseContext()),
    context,
    videoSource.capturerObserver
)
capturer.startCapture(model.width, model.height, model.fps)

// Connect to Lucy 2
client.realtime.connect(
    localVideoTrack = videoTrack,
    options = ConnectOptions(
        model = model,
        onRemoteVideoTrack = { track -> remoteRenderer.addSink(track) }
    )
)

// Upload a reference image and transform
val characterBase64 = Base64.encodeToString(characterBytes, Base64.NO_WRAP)
client.realtime.setImage(
    imageBase64 = characterBase64,
    prompt = "Substitute the character in the video with the person in the reference image.",
    enhance = true
)
```
Lucy 2 supports both landscape (16:9) and portrait (9:16) video input. On mobile devices (iOS/Android), portrait mode works automatically — the OS maps the front camera to a vertical stream regardless of the constraints you pass.

For desktop browsers or external webcams, swap width and height when calling `getUserMedia` to request a portrait stream:
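A minimal sketch of that swap, assuming the `model` object from the snippets above exposes `width`, `height`, and `fps` (the values in the usage comment are illustrative):

```typescript
// Build portrait constraints by swapping the model's landscape dimensions.
function portraitConstraints(model: { width: number; height: number; fps: number }) {
  return {
    video: {
      frameRate: model.fps,
      width: model.height, // swapped
      height: model.width, // swapped
    },
  };
}

// const stream = await navigator.mediaDevices.getUserMedia(portraitConstraints(model));
```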
Portrait mode is ideal for mobile webcams and vertical video content.
Lucy 2 uses your reference image as a visual identity target. Your live video provides the motion, expressions, and pose — the model blends the two so the output looks like the reference character performing your movements.
1. **Connect with your camera.** Establish a WebRTC connection with your camera stream, just like any other realtime model.
2. **Upload a reference image.** Provide any portrait photo — a character, a celebrity, or a generated face. The model extracts the visual identity from this image.
3. **See yourself transformed.** Your movements, expressions, and gestures are mapped onto the reference character in realtime. The output stream shows the character performing your actions.
You can change the character at any time without reconnecting. Use the `set()` method to atomically replace the session state — include all fields you want to keep:
```typescript
// Change to a new character
await realtimeClient.set({
  prompt: "Substitute the character in the video with the person in the reference image.",
  image: newCharacterImage,
  enhance: true,
});

// Set a new image only (clears any previous prompt)
await realtimeClient.set({ image: newCharacterImage });

// Set a new prompt only (clears any previous image)
await realtimeClient.set({ prompt: "Add dark sunglasses to the person's face." });

// Clear the reference image (fall back to text-only editing)
await realtimeClient.set({ image: null });
```
`set()` replaces the entire state — fields you omit are cleared. Always include every field you want to keep. This avoids intermediate states and ensures prompt and image stay in sync.
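One way to avoid accidentally clearing fields is to keep the full session state in your app and merge each update locally before calling `set()`. This is a hypothetical helper, not part of the SDK:

```typescript
type SessionState = {
  prompt?: string;
  image?: Blob | string | null;
  enhance?: boolean;
};

// set() replaces the whole state, so merge updates locally and always
// send the complete merged state.
function mergeState(current: SessionState, update: SessionState): SessionState {
  return { ...current, ...update };
}

// let state: SessionState = {};
// state = mergeState(state, { prompt: "...", image: characterImage, enhance: true });
// await realtimeClient.set(state);
```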
Lucy 2 works without a reference image too. Use text prompts to add, modify, or remove elements in your live video:
```typescript
// No reference image needed for text-only edits
await realtimeClient.set({ prompt: "Add a small dog running around in the background." });

await realtimeClient.set({ prompt: "Change the background to a sandy beach with clear blue water." });

await realtimeClient.set({ prompt: "Change the person's hair color to bright blonde." });
```
See the prompting guide below for the best prompt structures for each edit type.
Lucy 2 responds best when your prompts follow specific patterns for each edit type. There are four supported operations, each with its own structure.
| Edit type | Prompt template |
| --- | --- |
| Character transformation | "Substitute the character in the video with `<description>`." |
| Add | "Add `<description of object in reference image>` to `<where to add it>`." |
| Replace | "Change `<object to change>` with `<description of the object in the reference image>`." |
| Change attribute | "Change `<object to change attribute of>` to `<description of new attribute>`." |
These templates produce the best results. The enhance option can improve short prompts, but starting with a well-structured prompt gives you more control over the output.
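If you build prompts programmatically, the four templates can be captured in a small helper. This is an illustrative sketch, not part of the SDK:

```typescript
// Fill the four prompt templates from the table above.
const templates = {
  substitute: (desc: string) =>
    `Substitute the character in the video with ${desc}.`,
  add: (obj: string, where: string) => `Add ${obj} to ${where}.`,
  replace: (target: string, desc: string) => `Change ${target} with ${desc}.`,
  attribute: (target: string, attr: string) => `Change ${target} to ${attr}.`,
};

// templates.add("a red hat", "the person's head")
// → "Add a red hat to the person's head."
```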
When using a reference image, describe the character's appearance in the prompt. The more detail you provide, the closer the output matches the reference.

"Substitute the character in the video with `<description of the character in reference image>`."

Examples:
Substitute the character with an older man who has pale, wrinkled skin, light blue eyes, a powdered white wig with side curls, and wears a dark formal coat with a white ruffled neckpiece.
Substitute the character with a young person wearing a short-sleeved pink top with white ribbon ties on the back, loose pink pants, and short brown hair tied in a side ponytail.
Substitute the character with a furry creature, which has soft brown and orange fur, a light face with dark eye markings, a dark nose, and long claws.
```typescript
await realtimeClient.set({
  prompt:
    "Substitute the character with an older man who has pale, wrinkled skin, light blue eyes, a powdered white wig with side curls, and wears a dark formal coat with a white ruffled neckpiece.",
  image: referenceImage,
  enhance: true,
});
```
Describe what you see in the reference image — skin tone, hair, clothing, distinctive features. Generic prompts like “Transform into this character” still work but produce less precise results.
Add new elements to the scene by specifying what to add and where to place it.

"Add `<description of object in reference image>` to `<where to add it>`."

Examples:
Add a red conical hat, covered in sequins, with a white fluffy trim and a matching pompom to the person’s head.
Add a purple knit headband, which features a black embroidered athletic jumping figure, to the person's head.
```typescript
await realtimeClient.set({
  prompt:
    "Add a red conical hat, covered in sequins, with a white fluffy trim and a matching pompom to the person's head.",
  image: hatReference,
  enhance: true,
});
```
Always specify placement (“to the person’s head”, “in the background”, “on the table”). Without a location, the model places the object unpredictably.
Swap an existing element in the scene with something different.

"Change `<object to change>` with `<description of the object in the reference image>`."

Examples:
Change the person’s sweater to a red knit sweater, which has a white-outlined, gold and white striped rectangular emblem on the chest.
Change the shirt to a black t-shirt, which features a large, stylized white text graphic across the chest and has a round neck.
```typescript
await realtimeClient.set({
  prompt:
    "Change the person's sweater to a red knit sweater, which has a white-outlined, gold and white striped rectangular emblem on the chest.",
  image: sweaterReference,
  enhance: true,
});
```
Modify a property of an existing object — color, texture, material — without replacing the object itself.

"Change `<object to change attribute of>` to `<description of new attribute>`."

Examples:
Change the wall’s color to light blue, natural consistent paint finish.
Change the shirt’s texture to knitted, woven fabric.
```typescript
// No reference image needed for attribute changes
await realtimeClient.set({
  prompt: "Change the wall's color to light blue, natural consistent paint finish.",
});
```
Attribute changes work without a reference image. When you do provide one, the model pulls the attribute (color, texture) from the reference.
Follow these guidelines for the best character transformation results. See also the character transformation prompting pattern for how to describe your reference image in the prompt.
- **Use a clear, well-lit portrait** — front-facing photos with neutral expressions work best
- **Match the framing** — head-and-shoulders crops produce more consistent results than full-body shots
- **Supported formats** — JPEG, PNG, and WebP
- **Resolution** — at least 512×512 pixels recommended
- **Avoid heavy occlusion** — images where the face is partially hidden may reduce quality
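The format and resolution recommendations can be checked client-side before uploading. The sketch below validates image metadata against the guidelines above; `ImageInfo` and `checkReferenceImage` are hypothetical names, and the service itself may accept images outside these bounds:

```typescript
interface ImageInfo {
  mimeType: string;
  width: number;
  height: number;
}

// Return a list of issues; an empty list means the image meets the guidelines.
function checkReferenceImage(info: ImageInfo): string[] {
  const issues: string[] = [];
  const supported = ["image/jpeg", "image/png", "image/webp"];
  if (!supported.includes(info.mimeType)) {
    issues.push(`Unsupported format: ${info.mimeType}`);
  }
  if (info.width < 512 || info.height < 512) {
    issues.push(
      `Image is ${info.width}x${info.height}; at least 512x512 is recommended`
    );
  }
  return issues;
}
```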