Generative TV and Showrunner Agents are fascinating concepts that blend artificial intelligence with storytelling. Let’s dive into each of them:
Generative TV:
Generative TV refers to the use of artificial intelligence (AI) to create TV show content. Instead of relying solely on human writers, directors, and actors, generative TV leverages algorithms to generate scripts, scenes, and even entire episodes.
Fable Studio is at the forefront of generative TV. They combine large language models (LLMs), diffusion models, and multi-agent simulations to produce realistic and engaging narratives.
LLMs, such as GPT-4, learn from a vast corpus of TV-show data and generate coherent, varied text; diffusion models then turn that text into realistic images and animations.
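In the abstract, the two-stage pipeline described above (language model writes, diffusion model renders) can be sketched like this. The function names and return values are purely illustrative placeholders, not Fable Studio's actual API:

```python
# Illustrative sketch only: these functions are stand-ins, not a real API.
def generate_script(prompt: str) -> str:
    """Stand-in for an LLM (e.g. GPT-4) turning a prompt into a scene script."""
    return f"SCENE: {prompt}"

def render_scene(script: str) -> list[str]:
    """Stand-in for a diffusion model rendering frames from script text."""
    return [f"frame 0 <- {script}"]

# A "showrunner" loop would chain these per scene, per episode.
frames = render_scene(generate_script("the crew debates a new mission"))
```

The key design point is the chaining: text generation and visual generation are separate models, coupled only through the intermediate script.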
The goal is to allow users to rewrite entire seasons of their favorite shows or create new ones with minimal effort and maximum creativity.
Showrunner Agents:
Showrunner Agents are individual AI instances within generative TV systems. They perceive themselves as characters within the show.
Developed on the SHOW-1 model, these agents can generate every component of a TV episode in real-time. They write, animate, direct, voice, and edit complete episodes.
The unique feature here is that the actors are played by autonomous AI bots, and the scripts are generated dynamically based on an initial text prompt.
Fable Studio’s Vision:
Fable Studio aims to create immersive simulations with infinite story possibilities.
Their goal is to give every AI character in their simulations a unique and engaging weekly story that adapts to their actions and choices.
Fable doesn’t intend to compete with traditional media companies but believes that the showrunner approach offers endless entertainment and exploration.
Driving AI Technology:
Fable encourages people in the games, movie, and TV industry to actively shape AI technology rather than passively receive it.
They envision a future where creative companies use AI to make games, movies, and TV shows, driving the next wave of entertainment.
In summary, generative TV and showrunner agents represent a fascinating intersection of AI, creativity, and storytelling—a glimpse into a future where imagination meets code.
About generative tv and showrunner agents...
Re: About generative tv and showrunner agents...
Where can I download the SHOW-1 model from Fable Studio?
Re: About generative tv and showrunner agents...
A quick clarification first: the SHOW-1 weights available on HuggingFace come from Show Lab (NUS), whose text-to-video model shares the name; Fable Studio's SHOW-1 showrunner model is not publicly downloadable. To get Show Lab's SHOW-1, here's how:
Install Dependencies:
First, make sure you have the necessary dependencies installed. From the root of the SHOW-1 code repository, run:
pip install -r requirements.txt
Note that PyTorch 2.0+ is highly recommended for better efficiency and speed on GPUs.
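To sanity-check the PyTorch requirement before running anything heavy, a small helper like the sketch below works against a pip-installed torch (the helper itself is just an illustration, not part of the SHOW-1 repo):

```python
import importlib.util

def torch_meets_minimum(min_major: int = 2) -> bool:
    """Return True if PyTorch is importable and its major version >= min_major."""
    if importlib.util.find_spec("torch") is None:
        return False  # torch not installed at all
    import torch
    # Version strings can look like "2.1.0+cu118"; take the major component.
    major = int(torch.__version__.split(".")[0])
    return major >= min_major

print(torch_meets_minimum())
```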
Download Model Weights:
All the model weights for SHOW-1 are available on Show Lab’s HuggingFace page:
Base Model: show-1-base
Interpolation Model: show-1-interpolation
Super-Resolution Models: show-1-sr1, show-1-sr2
To obtain the respective weights, follow their official instructions.
Run Inference:
To generate a video from a text prompt using SHOW-1, run the following command:
python run_inference.py
By default, the videos generated from each stage will be saved to the outputs folder in GIF format.
The script will automatically fetch the necessary model weights from HuggingFace.
If you prefer, you can manually download the weights using git lfs and then update the pretrained_model_path to point to your local directory.
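If you go the manual route, the local layout might look like the sketch below. The directory scheme here is an assumption (one root with a `git lfs` clone per stage), so verify the repo links on the HuggingFace page before relying on it:

```python
from pathlib import Path

# Stage checkpoints named in the SHOW-1 release; the local layout mirrors
# what cloning each weights repo under a single root directory would create.
STAGES = ["show-1-base", "show-1-interpolation", "show-1-sr1", "show-1-sr2"]

def local_weight_paths(root: str) -> dict[str, str]:
    """Map each SHOW-1 stage to its expected local checkpoint directory."""
    return {stage: str(Path(root) / stage) for stage in STAGES}

paths = local_weight_paths("./checkpoints")
# Then set pretrained_model_path to e.g. paths["show-1-base"] for inference.
```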
Demo:
A demo is also available on the SHOW-1 Space.
You can use the Gradio demo locally by running:
python app.py
Citation:
If you make use of SHOW-1, please cite their paper:
@article{zhang2023show,
  title={Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation},
  author={Zhang, David Junhao and Wu, Jay Zhangjie and Liu, Jia-Wei and Zhao, Rui and Ran, Lingmin and Gu, Yuchao and Gao, Difei and Shou, Mike Zheng},
  journal={arXiv preprint arXiv:2309.15818},
  year={2023}
}
Commercial Use:
The SHOW-1 team is working on approving commercial use requests. If you’re interested, you can email them at mike.zheng.shou@gmail.com and junhao.zhang@u.nus.edu with details about your product/application and how SHOW-1 can benefit it.
Remember to explore the fascinating world of generative TV and storytelling with SHOW-1!