OmniShow supports multimodal input to generate high-definition videos with human-object interaction.
Founded year: 2026
Country: United States of America
Funding rounds: Not set
Total funding amount: Not set
Description
OmniShow is an open-source human-object interaction video generation model released by ByteDance. It integrates four control modes: text, reference image, audio, and pose. It can generate long videos with natural motion, physical plausibility, and stable facial identity, addressing common failure modes such as object penetration and distorted interactions. It is widely applied to virtual live streaming, AI short dramas, digital human broadcasting, and character animation.