---
title: FramePack Image-to-Video Generation (5-Second Limit)
emoji: 🎬
colorFrom: indigo
colorTo: purple
sdk: gradio
sdk_version: 5.23.0
app_file: app.py
pinned: false
license: mit
---

# FramePack - Image to Video Generation

This is a modified version of the FramePack model that caps generated videos at a maximum length of 5 seconds.

## Features

- Generate realistic videos from still images
- Simple and intuitive interface
- Bilingual support (English/Chinese)
- Maximum video length of 5 seconds to ensure quick generation times

## Usage

1. Upload an image
2. Enter a prompt describing the desired motion
3. Adjust parameters if needed (seed, video length, etc.)
4. Click "Generate" and wait for the result
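The 5-second cap applied to the video-length parameter in step 3 amounts to a simple clamp. A minimal sketch (the constant names and the 30 fps frame rate are assumptions for illustration, not taken from the app's code):

```python
MAX_SECONDS = 5.0  # hard cap enforced by this Space
FPS = 30           # assumed output frame rate

def clamp_length(requested_seconds: float) -> float:
    """Clamp a requested video length to the 5-second maximum."""
    return max(0.0, min(requested_seconds, MAX_SECONDS))

def total_frames(seconds: float, fps: int = FPS) -> int:
    """Number of output frames for a (clamped) length in seconds."""
    return int(round(clamp_length(seconds) * fps))
```

Under these assumptions, a request for 8 seconds is quietly reduced to 5 seconds, i.e. 150 frames at 30 fps.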

## Technical Details

This application uses the HunyuanVideo transformer model for image-to-video generation. The model has been optimized to work efficiently with videos up to 5 seconds in length.
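The key idea behind FramePack is that the conditioning context stays a fixed size no matter how many frames have already been generated. The toy sketch below illustrates that property only; the function name and the simple average-pooling scheme are illustrative stand-ins, not the model's actual packing:

```python
import numpy as np

def pack_context(history, recent=4, pooled_slots=4):
    """Toy fixed-length packing: keep the newest `recent` frames verbatim and
    average-pool all older frames into `pooled_slots` buckets, so the returned
    context always has `pooled_slots + recent` entries regardless of how long
    the frame history grows."""
    keep = list(history[-recent:])
    old = list(history[:-recent])
    if old:
        buckets = np.array_split(np.stack(old), min(pooled_slots, len(old)))
        pooled = [b.mean(axis=0) for b in buckets]
    else:
        pooled = []
    shape = keep[0].shape  # assumes at least one frame of history
    while len(pooled) < pooled_slots:  # pad while history is still short
        pooled.insert(0, np.zeros(shape))
    while len(keep) < recent:
        keep.insert(0, np.zeros(shape))
    return np.stack(pooled + keep)
```

Because the context size is constant, the per-step generation cost does not grow with video length, which is what makes long histories tractable on modest GPUs.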

## Credits

Based on the original FramePack model by lllyasviel.

## Features

- Generates smooth motion videos from a single image
- Built on the HunyuanVideo and FramePack architectures
- Runs on low-VRAM GPUs (6 GB minimum)
- Generates videos up to 5 seconds long
- Uses TeaCache to accelerate generation
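TeaCache speeds up sampling by reusing work between denoising steps whose inputs barely differ. A minimal, hypothetical sketch of that caching idea (class and threshold are illustrative, not TeaCache's actual implementation):

```python
import numpy as np

class ToyStepCache:
    """Illustrative cache in the spirit of TeaCache: reuse the previous
    step's output when the new input changed less than a threshold."""

    def __init__(self, threshold=0.05):
        self.threshold = threshold
        self.last_input = None
        self.last_output = None
        self.skipped = 0  # how many expensive calls were avoided

    def __call__(self, x, expensive_fn):
        if self.last_input is not None:
            # relative mean change versus the cached input
            rel_change = np.abs(x - self.last_input).mean() / (
                np.abs(self.last_input).mean() + 1e-8)
            if rel_change < self.threshold:
                self.skipped += 1
                return self.last_output
        out = expensive_fn(x)
        self.last_input, self.last_output = x, out
        return out
```

The trade-off noted below follows directly from this: skipped steps save time but reuse slightly stale outputs, which is why disabling the cache can improve quality.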

## Usage

1. Upload an image of a person
2. Enter a prompt describing the desired motion
3. Set the desired video length (5 seconds maximum)
4. Click the "Start Generation" button
5. Wait for the video to generate (generation is progressive, extending the video step by step)

## Example Prompts

- "The girl dances gracefully, with clear movements, full of charm."
- "A character doing some simple body movements."
- "The man dances energetically, leaping mid-air with fluid arm swings and quick footwork."

## Notes

- Video generation runs in reverse order: the ending motion is generated before the beginning
- For higher-quality results, consider disabling the TeaCache option
- If you run into out-of-memory errors, increase the "GPU inference preserved memory" value
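The reverse-order behaviour noted above can be sketched as a loop that generates the final section first and prepends each earlier section, so every intermediate result already ends with the video's final motion (`generate_section` is a stand-in for a real diffusion sampling pass):

```python
def generate_section(index: int) -> str:
    """Stand-in for one sampling pass; a real pass would return frames."""
    return f"clip_{index}"

def generate_video(num_sections: int = 4) -> list[str]:
    """Generate sections end-first, prepending each new section so the
    partial video grows backwards toward the start."""
    video: list[str] = []
    for index in reversed(range(num_sections)):  # last section first
        video.insert(0, generate_section(index))
    return video
```

Despite the reversed generation order, the returned list is in chronological order, which matches how the app progressively extends the preview.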

## Technical Details

This application is based on the FramePack project and uses the Hunyuan Video model together with the FramePack technique for video generation. FramePack compresses the input context to a fixed length, making the generation workload independent of video length, so even a laptop GPU can process a large number of frames.


Original project: FramePack GitHub