VPApp is a lightweight virtual production system that enables two core capabilities: producing Tracked Video and fast lens calibration.

VPApp simplifies the virtual production pipeline by encoding tracking data directly into the video output — enabling frame-accurate synchronization and faster calibration, while remaining compatible with standard video tools.

Tracked Video refers to any video signal — live (HDMI/SDI) or recorded — that includes synchronized tracking data encoded in the audio channel.
Because the tracking is encoded as audio, the video signal remains 100% standard and fully compatible with existing video equipment and workflows.
No proprietary formats. Just standard video, enhanced with tracking.
Tracked Video becomes a self-contained virtual production (VP) asset, equally useful for live rendering and offline compositing.
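Because the tracking data rides in the audio track, it stays time-aligned with the video by construction: locating the tracking packet for any frame is plain sample arithmetic. A minimal sketch of that mapping, assuming a common 48 kHz audio rate and 25 fps video (the actual rates and packet layout used by VPApp are not specified here):

```python
# Assumed rates for illustration only; VPApp's real values may differ.
SAMPLE_RATE = 48_000   # audio samples per second
FRAME_RATE = 25        # video frames per second

def audio_span_for_frame(frame_index: int) -> tuple[int, int]:
    """Return the half-open [start, end) audio-sample range that is
    time-aligned with the given video frame."""
    start = frame_index * SAMPLE_RATE // FRAME_RATE
    end = (frame_index + 1) * SAMPLE_RATE // FRAME_RATE
    return start, end
```

At these rates each frame owns exactly 1920 audio samples, which is why frame-accurate synchronization falls out of the format itself.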
The Antilatency tracker sends real-time positional data via the VP Socket to VPApp, which runs on an Android device connected to that socket.
The VPApp encodes tracking data into an audio signal.
The encoded audio is sent back through the VP Socket into the camera’s audio input, producing a Tracked Video.
The camera outputs a standard video signal — either live via HDMI/SDI, or recorded with tracking audio on the SD card.
Traditionally, lens calibration is complex and time-consuming: it can take hours per lens, and ideally should be repeated after every lens change.
VPApp introduces a fast calibration workflow based on Tracked Video.
Each Antilatency marker emits visible light, so markers appear as glowing green dots in the recorded video — providing precise visual reference points for calibration.
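The green marker dots can be located in a frame with simple color thresholding. A hypothetical sketch (VPApp's actual detector is not described here, and the threshold values are assumptions):

```python
def find_green_markers(frame):
    """frame: 2D list of (r, g, b) tuples with values 0-255.
    Returns (x, y) pixel coordinates where green strongly dominates,
    i.e. candidate marker dots."""
    hits = []
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            # Bright pixel whose green channel dwarfs red and blue.
            if g > 200 and g > 2 * max(r, b, 1):
                hits.append((x, y))
    return hits
```

A production detector would also group adjacent hits into blobs and take each blob's centroid for sub-pixel accuracy.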

Record a short Tracked Video in the studio.
Using the known tracking data and the visible marker positions in the video, the application computes the calibration parameters.
This workflow significantly reduces manual calibration time and effort, streamlining the setup for virtual production.
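As a toy illustration of the kind of solve this workflow enables, the sketch below estimates a single pinhole focal length by least squares from marker positions known in camera coordinates (from the tracking data) and their observed pixel positions (from the green dots). The real calibration recovers more parameters, such as distortion and the camera-to-tracker offset; this is a simplified stand-in:

```python
def estimate_focal_length(points_3d, points_2d, cx, cy):
    """Least-squares focal length (in pixels) for an ideal pinhole camera.

    points_3d: marker positions (X, Y, Z) in camera coordinates
    points_2d: observed pixel positions (u, v) of the same markers
    cx, cy:    assumed principal point (image center)
    """
    num = den = 0.0
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        xn, yn = X / Z, Y / Z  # normalized image coordinates
        # Pinhole model: u = cx + f * xn, v = cy + f * yn
        num += (u - cx) * xn + (v - cy) * yn
        den += xn * xn + yn * yn
    return num / den
```

With enough markers spread across the frame, a few seconds of Tracked Video provide all the correspondences such a solve needs, which is what replaces hours of manual calibration.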
