Get quick answers to common queries in our FAQs.
Wan2.2 I2V A14B improves on Wan 2.1 by introducing a two-expert design (the "A14B" denotes 14B active parameters) that stabilizes denoising, producing cleaner motion and finer visual detail in generated videos. The result is smoother frame-to-frame transitions and more natural visuals in both 480p and 720p outputs.
Pricing starts as low as $0.08 per second of generated video, and users are billed on total video duration. This makes it practical at any scale, from quick prototypes to long-form generation on GMI Cloud’s serverless platform.
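Since billing is by duration, estimating a job's cost is simple multiplication. A minimal sketch, using the $0.08/second rate quoted above (actual billing granularity and rounding are assumptions, not documented behavior):

```python
# Estimate the charge for a Wan2.2 I2V A14B generation job on GMI Cloud.
# Rate taken from the pricing note above; rounding to cents is an assumption.
PRICE_PER_SECOND = 0.08  # USD per second of generated video

def estimate_cost(duration_seconds: float) -> float:
    """Return the estimated charge in USD for a video of the given length."""
    return round(duration_seconds * PRICE_PER_SECOND, 2)

print(estimate_cost(5))   # 5-second clip
print(estimate_cost(10))  # 10-second clip
```

So a 5-second clip comes to about $0.40 and a 10-second clip to about $0.80 at the starting rate.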
The model currently supports 5-second and 10-second video outputs at 480p or 720p resolutions. These limits ensure that inference remains fast, cost-efficient, and optimized for real-time or near-real-time generation through the GMI Cloud API.
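Because only a few duration/resolution combinations are supported, it can be worth validating requests client-side before submitting a job. A hypothetical helper (the function and parameter names are illustrative, not the actual GMI Cloud API schema):

```python
# Check request parameters against the currently supported Wan2.2 I2V A14B
# output options (5 s or 10 s, at 480p or 720p), per the FAQ answer above.
SUPPORTED_DURATIONS = {5, 10}          # seconds
SUPPORTED_RESOLUTIONS = {"480p", "720p"}

def validate_request(duration: int, resolution: str) -> None:
    """Raise ValueError if the requested output is not supported."""
    if duration not in SUPPORTED_DURATIONS:
        raise ValueError(f"duration must be one of {sorted(SUPPORTED_DURATIONS)} seconds")
    if resolution not in SUPPORTED_RESOLUTIONS:
        raise ValueError(f"resolution must be one of {sorted(SUPPORTED_RESOLUTIONS)}")

validate_request(5, "720p")  # passes silently for a supported combination
```

Failing fast on unsupported combinations avoids wasted round trips to the API.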
Yes — GMI Cloud offers serverless deployment, letting you launch and scale models on demand. You can access the model through the Python SDK, the REST API, or any OpenAI-compatible client, all with automatic scaling and high uptime.
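As one illustration, a job could be assembled for the REST API with a plain HTTP client. The base URL, endpoint path, payload fields, and model identifier below are placeholders, not the documented GMI Cloud schema — consult the official API reference for the real values:

```python
import json
import urllib.request

API_BASE = "https://api.gmicloud.ai"   # placeholder base URL
API_KEY = "YOUR_API_KEY"               # placeholder credential

def build_i2v_request(image_url: str, prompt: str,
                      duration: int = 5, resolution: str = "720p"):
    """Assemble a hypothetical image-to-video generation request."""
    payload = {
        "model": "wan2.2-i2v-a14b",    # illustrative model identifier
        "image_url": image_url,
        "prompt": prompt,
        "duration": duration,          # seconds: 5 or 10
        "resolution": resolution,      # "480p" or "720p"
    }
    return urllib.request.Request(
        f"{API_BASE}/v1/videos/generations",   # placeholder path
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

# To submit: urllib.request.urlopen(build_i2v_request(image, prompt))
```

The same request shape works from any HTTP client; the SDK and OpenAI-compatible paths wrap equivalent calls with retries and auth handling.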
GMI Cloud’s serving architecture scales GPU resources dynamically in real time. This sustains model throughput during large-scale or highly concurrent request loads while keeping costs proportional to actual usage.
Yes — GMI Cloud supports dedicated deployments for users needing consistent performance or enterprise-grade availability. Models run on reserved GPUs with auto-scaling, low latency, and guaranteed compute isolation for production-grade workloads.