A Content Generation Bot that creates audio-visual clips on any topic (but we like to clickbait AI/ML) ready to be posted on social media. The bot chains together several APIs and services to produce a one-minute clip with subtitles. Honestly, I just couldn't find a job and was considering becoming an influencer at this point.
- Generate any topic's content script using OpenAI GPT API
- Convert generated scripts to audio using Azure TTS
- Fetch relevant images using OpenAI DALL-E API
- Create a video with audio, images, and subtitles
- Organized folder structure for outputs
- CLI options for specific tasks (images, audio, video, all)
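Under the hood, the video step boils down to stitching the generated images and the narration track together with FFmpeg. Below is a minimal sketch of the kind of command involved; the paths, the 12-seconds-per-image rate, and the exact flags are illustrative assumptions, not the bot's actual invocation:

```python
# Build (but do not run) an ffmpeg argv list that turns a folder of
# still images plus one narration track into an MP4 slideshow.
# All paths and the seconds-per-image rate are illustrative assumptions.

def build_ffmpeg_cmd(image_pattern, audio_path, out_path, seconds_per_image=12):
    """Return an ffmpeg command list for an image-sequence slideshow."""
    return [
        "ffmpeg",
        "-framerate", f"1/{seconds_per_image}",  # one image every N seconds
        "-i", image_pattern,                     # e.g. assets_images/image%d.png
        "-i", audio_path,                        # narration from Azure TTS
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",                   # widest player compatibility
        "-shortest",                             # stop when the audio ends
        out_path,
    ]

cmd = build_ffmpeg_cmd("assets_images/image%d.png",
                       "assets_audio/audio.mp3", "video.mp4")
print(" ".join(cmd))
```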
- Python 3.7+
- FFmpeg
- ImageMagick
- Virtual environment (optional but recommended)
- Clone the repository

```bash
git clone https://github.com/prashanth-up/content-generation-bot.git
cd content-generation-bot
```

- Set up a virtual environment (optional but recommended)

```bash
python -m venv contentbot_env
source contentbot_env/bin/activate  # On Windows use `contentbot_env\Scripts\activate`
```

- Install the required dependencies

```bash
pip install -r requirements.txt
```
- Create a `config.json` file in the root directory

```json
{
  "openai_api_key": "YOUR_OPENAI_API_KEY",
  "azure_api_key": "YOUR_AZURE_API_KEY",
  "azure_region": "YOUR_AZURE_REGION"
}
```
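A few lines of stdlib Python are enough to load this file and fail fast if a key is missing before any API call is made. This is a hedged sketch, not code from the repo; the key names mirror the `config.json` example above:

```python
import json
from pathlib import Path

# Keys the example config.json above is expected to contain.
REQUIRED_KEYS = {"openai_api_key", "azure_api_key", "azure_region"}

def load_config(path="config.json"):
    """Load config.json and raise early if a required key is missing."""
    config = json.loads(Path(path).read_text(encoding="utf-8"))
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise KeyError(f"config.json is missing keys: {sorted(missing)}")
    return config

# config = load_config()  # -> dict with the three keys above
```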
- Create a `topics.json` file in the root directory with a few clickbait topics that doomscrollers just love to stare at and forget

```json
{
  "topics": [
    "Introduction to Machine Learning",
    "Supervised vs Unsupervised Learning",
    "Neural Networks Explained"
  ]
}
```
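Each topic title is reused as a folder name under `output_videos/`, so it helps to normalize it first. A minimal sketch (the exact sanitization rules here are an assumption, not the repo's implementation):

```python
import json
import re

def topic_to_folder(topic):
    """Turn a topic title into a filesystem-safe folder name."""
    safe = re.sub(r"[^\w\s-]", "", topic)      # drop punctuation
    return re.sub(r"\s+", "_", safe.strip())   # spaces -> underscores

raw = '{"topics": ["Supervised vs Unsupervised Learning"]}'
topics = json.loads(raw)["topics"]
print(topic_to_folder(topics[0]))  # Supervised_vs_Unsupervised_Learning
```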
FFmpeg is required for processing video and audio. You can install it using Homebrew on macOS, Chocolatey on Windows, or via the package manager on Linux.
- macOS

```bash
brew install ffmpeg
```

- Windows

```bash
choco install ffmpeg
```

- Linux

```bash
sudo apt-get update
sudo apt-get install ffmpeg
```
ImageMagick is used for handling image transformations.
- macOS

```bash
brew install imagemagick
```

- Windows

Download and install from the ImageMagick website.

- Linux

```bash
sudo apt-get install imagemagick
```
Ensure the FFmpeg and ImageMagick binaries are accessible. You may need to set the paths in your script:

```python
import os

# Set the path to the FFmpeg binary
os.environ["FFMPEG_BINARY"] = "/path/to/ffmpeg"  # Update this path based on your system

# Set the path to the ImageMagick binary
os.environ["IMAGEMAGICK_BINARY"] = "/path/to/magick"  # Update this path based on your system
```
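If you are not sure where the binaries live, the stdlib `shutil.which` can locate them on your `PATH` (or confirm they are missing) instead of hard-coding a path. A small hedged helper, not part of the repo:

```python
import os
import shutil

def point_at_binary(env_var, name):
    """Find `name` on PATH, export its location via `env_var`, return path or None."""
    path = shutil.which(name)
    if path is not None:
        os.environ[env_var] = path
    return path

ffmpeg = point_at_binary("FFMPEG_BINARY", "ffmpeg")
magick = point_at_binary("IMAGEMAGICK_BINARY", "magick")
print(f"ffmpeg: {ffmpeg or 'NOT FOUND'}")
print(f"magick: {magick or 'NOT FOUND'}")
```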
The main script is `main.py`. You can run it with different options to test specific parts of the project:

```bash
python main.py -i   # images only
python main.py -a   # audio only
python main.py -v   # video only
python main.py -c   # complete run (all tasks)
```
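The flag-to-task mapping can be reproduced with a short `argparse` sketch. The long option names and help strings below are assumptions inferred from the feature list (images, audio, video, all), not the script's actual definitions:

```python
import argparse

def build_parser():
    """Mirror the CLI flags described above (long names are assumed)."""
    parser = argparse.ArgumentParser(description="Content generation bot")
    parser.add_argument("-i", "--images", action="store_true",
                        help="fetch DALL-E images only")
    parser.add_argument("-a", "--audio", action="store_true",
                        help="generate Azure TTS audio only")
    parser.add_argument("-v", "--video", action="store_true",
                        help="assemble the video only")
    parser.add_argument("-c", "--complete", action="store_true",
                        help="run the full pipeline")
    return parser

args = build_parser().parse_args(["-i"])
print(args.images)  # True
```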
The output folders will be organized as follows:

```
output_videos/
│
├── Introduction_to_Machine_Learning/
│   ├── assets_audio/
│   │   └── audio.mp3
│   ├── assets_images/
│   │   ├── image1.png
│   │   ├── image2.png
│   │   ├── image3.png
│   │   ├── image4.png
│   │   └── image5.png
│   ├── script.txt
│   └── video.mp4
│
├── Supervised_vs_Unsupervised_Learning/
│   └── ... (similar structure as above)
│
└── Neural_Networks_Explained/
    └── ... (similar structure as above)
```
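Recreating that layout takes only `pathlib`; a sketch assuming the folder names from the tree above:

```python
from pathlib import Path

def make_topic_dirs(root, topic_folder):
    """Create <root>/<topic>/assets_audio and <root>/<topic>/assets_images."""
    base = Path(root) / topic_folder
    for sub in ("assets_audio", "assets_images"):
        (base / sub).mkdir(parents=True, exist_ok=True)
    return base

base = make_topic_dirs("output_videos", "Introduction_to_Machine_Learning")
print(base)
```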
The generated images are honestly pretty bad, but DALL-E is pure RNG.
- No Audio in Video: Ensure FFmpeg is correctly installed and accessible. Verify the audio path and codec settings.
- Text Positioning: Adjust the `width` parameter in `textwrap.fill` to fit the text horizontally.
- Dependencies: Ensure all dependencies are installed using the `requirements.txt` file.
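For the text-positioning fix, this is what tuning `width` looks like in practice (the subtitle string is just an example):

```python
import textwrap

subtitle = ("Neural networks are layered functions that learn patterns "
            "from data by adjusting millions of tiny weights.")

# A smaller width wraps sooner, keeping subtitles inside the video frame.
wrapped = textwrap.fill(subtitle, width=40)
print(wrapped)
```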
- Enhanced Scheduling: Implement a scheduling feature to automatically post the generated content on social media platforms at specified times.
- Improved Text-to-Speech: Integrate more advanced text-to-speech options with different voices and emotions.
- Customization Options: Allow users to customize the video output, including fonts, colors, and background music.
- Multi-language Support: Add support for generating content in multiple languages.
- Analytics Integration: Track the performance of the posted content using analytics APIs.
- Modular Codebase: Further divide the code into more modular components for better maintainability and scalability.
- Error Handling: Improve error handling to cover more edge cases and provide more informative error messages.
- Optimization: Optimize the video generation process for speed and efficiency.
- Testing: Implement unit tests and integration tests to ensure the code's reliability and correctness.
- Documentation: Expand the documentation to include detailed usage examples and advanced configurations.
This project is licensed under the MIT License. See the LICENSE file for details.
Contributions are very, very welcome! It took a lot of my sanity to build this in one night, so please fork the repository and submit pull requests to make it less ugly.