A versatile command line tool for automated verification and transcoding of all your torrents.
Most Gazelle-based indexers/trackers are supported:
- RED
- [new] OPS.
Tested on Linux, theoretically works on Windows.
Fully configurable; if there's something hard-coded that you think should be configurable, open a discussion on GitHub.
Each source is verified to ensure:
- A lossless FLAC
- Not a scene, lossy, unconfirmed, or trumpable release
- Files match the torrent hash
- Audio tags for artist, album, title and track number are set
- [fixed] Classical sources have a composer tag.
- [fixed] Vinyl track numbering is converted to numeric
- Sample rate and channels are suitable
- Full and zoomed spectrograms generated for review
- [fixed] Multi-threaded transcoding with optional CPU limit
- FLAC and FLAC 24 bit sources are supported
- FLAC, MP3 320 (CBR) and MP3 V0 (VBR) target formats
- Existing formats are skipped
- [fixed] Nested sub-directories are fully supported (e.g. CD1, CD2, etc.)
- [fixed] Automatic naming following established conventions, with decoding of HTML entities.
- [fixed] Shorter file names.
- Automatic torrent file creation
- [new] Images in the root and first nested directory are included; all other files are ignored.
- [new] Images larger than 750 KB are reduced to less than 1280 px, converted to JPG and compressed.
- Copy transcodes to content directory
- Copy torrent file to client auto-add directory
- [new] Verify, transcode and upload with one command for every torrent file in a directory.
- [new] Source torrents are added to a queue to track their progress, reducing duplicate work and speeding up subsequent runs.
The application will crunch through your torrent directory and automatically determine which are FLAC sources suitable for transcoding.
Docker is the recommended way to run the application across all platforms.
- All dependencies are built into the image
- Runs in an isolated environment reducing risks to your system
Install Docker Engine for your OS.
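For example, one common approach on Linux is Docker's convenience script (see the Docker documentation for other platforms and installation methods):

```bash
# Install Docker Engine using Docker's official convenience script (Linux only)
curl -fsSL https://get.docker.com | sh
```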
Run the help command to see the available commands and options.
```bash
docker run ghcr.io/rogueoneecho/caesura --help
```

Tip
You can append --help to any command to see the available options.
```bash
docker run ghcr.io/rogueoneecho/caesura verify --help
```

Create a config.yml file with the following content:
- `announce_url`: Your personal announce URL. Find it on the upload page.
- `api_key`: Create an API key with `Torrents` permission (Settings > Access Settings > Create an API Key).
Refer to CONFIG.md for full documentation of options.
```yaml
announce_url: https://flacsfor.me/YOUR_ANNOUNCE_KEY/announce
api_key: "YOUR_API_KEY"
```

You can then run the config command to see the full configuration, including default values, that the application will use:
Note
Because the application is running in a Docker container, you need to mount the config file as a volume.
```bash
docker run -v ./config.yml:/config.yml ghcr.io/rogueoneecho/caesura config
```

Tip
The following fields are optional; if not set, they are derived from the announce_url:
- `indexer`: the id of the indexer: `red`, `pth`, `ops`.
- `indexer_url`: the URL of the indexer: `https://redacted.sh`, `https://orpheus.network`.
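For example, to set them explicitly you could append something like the following to config.yml (a sketch; the values shown match the RED announce URL used in this guide):

```bash
# Append explicit indexer settings to config.yml
# (optional; normally derived from announce_url)
cat >> config.yml <<'EOF'
indexer: red
indexer_url: https://redacted.sh
EOF
```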
Create a directory for the application to output files to:
```bash
mkdir ./output
```

Create a directory for the application to cache files to:
```bash
mkdir ./cache
```

Tip
Refer to the directory structure section for documentation on the purpose and structure of these directories.
Run the verify command with the source as an argument.
Note
Because the application is running in a Docker container, you need to mount the config file, content directory, output directory and cache directory.
Tip
For the source you can use a permalink, the numeric torrent id or a path to a torrent file:
Each step of this guide will use a different source to demonstrate, but feel free to use whichever suits you best.
```bash
docker run \
  -v ./config.yml:/config.yml \
  -v /path/to/your/content:/content \
  -v ./output:/output \
  -v ./cache:/cache \
  ghcr.io/rogueoneecho/caesura \
  verify "https://redacted.sh/torrents.php?id=80518&torrentid=142659#torrent142659"
```

If it looks good you can proceed to the next step, otherwise try another source.
Docker is great, but specifying the volumes every time is tedious and error-prone.
Using Docker Compose simplifies this by storing the configuration in a docker-compose.yml file.
Create a docker-compose.yml file with the following content:
```yaml
services:
  caesura:
    container_name: caesura
    image: ghcr.io/rogueoneecho/caesura
    volumes:
      - ./config.yml:/config.yml:ro
      - /path/to/your/content:/content:ro
      - ./output:/output
      - ./cache:/cache
```

Note
The :ro suffix makes the volume read-only which is a good security practice.
If you intend to use the --copy-transcode-to-content-dir option then you must remove the :ro suffix from the content volume.
If you intend to use the --hard-link option then the content and output paths must be inside the same volume and you will need to update the config.yml accordingly.
Now run the verify command again but this time using Docker Compose:
```bash
docker compose run --rm caesura verify 142659
```

Run the spectrogram command with the source as an argument.
```bash
docker compose run --rm caesura spectrogram 142659
```

Inspect the spectrograms in the output directory.
Run the transcode command with the source as an argument.
```bash
docker compose run --rm caesura transcode "Khotin - Hello World [2014].torrent"
```

Inspect the transcodes in the output directory.
Tip
Things to check:
- Folder structure
- File names
- Tags
- Audio quality
- Image size and compression quality
Two .torrent files are created for each transcode, one with the indexer as a suffix (*.red.torrent or *.ops.torrent) and one without. For now these files are identical, but if you subsequently transcode the same source for a different indexer they may differ.
Warning
You are responsible for everything you upload.
Misuse of this application can result in the loss of your upload privileges.
Run the upload command with the source as an argument.
Tip
Ideally you've already checked everything and nothing will go wrong, but just in case, there is a grace period after uploading during which you can remove the upload from your indexer.
Tip
If you're unsure, you can append --dry-run to the command; instead of uploading, it will print the data that would be submitted.
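For example, a dry run of the upload in the next step might look like this (the flag is appended to the command as described above; the source id is the one used throughout this guide):

```bash
# Print the data that would be submitted without actually uploading
docker compose run --rm caesura upload 142659 --dry-run
```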
```bash
docker compose run --rm caesura upload "https://redacted.sh/torrents.php?id=80518&torrentid=142659#torrent142659"
```

If you haven't already, add the *.red.torrent or *.ops.torrent file to your torrent client.
Tip
caesura can automatically add the .torrent to your torrent client if it supports an autoadd directory.
Either use the --copy-torrent-to path/to/autoadd/directory CLI option or add the following to config.yml
```yaml
copy_torrent_to: path/to/autoadd/directory
```

Don't forget to ensure the path is mounted as a volume in docker-compose.yml.
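For example, a minimal sketch assuming you mount the host auto-add directory at /autoadd inside the container (the paths are illustrative):

```bash
# docker-compose.yml would need a volume mapping such as:
#   - /path/to/autoadd/directory:/autoadd
# config.yml should then point at the in-container path:
echo "copy_torrent_to: /autoadd" >> config.yml
```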
In qBittorrent you can configure auto add under: Options > Downloads > Automatically add torrents from
Monitored Folder: path/to/autoadd/directory
Override save location: path/to/caesura/output
Go to your indexer and check your uploads to make sure everything has gone to plan.
Warning
You are responsible for everything you upload.
Misuse of this application, especially the batch command, can result in the loss of your upload privileges or a ban.
Now that you have the hang of the application we can speed things up with the queue and batch commands.
The batch command handles verify, spectrogram, transcode and upload in a single command.
Run the queue add command to search through a directory of torrents and queue them for batch processing:
Note
The batch and queue commands use the cache directory to store progress helping speed up subsequent runs.
Make sure the cache directory is in a mounted volume so it's not deleted between runs.
```bash
docker compose run --rm caesura queue add /path/to/your/torrents
```

Run the queue list command to see what is next in the queue for the current indexer:
```bash
docker compose run --rm caesura queue list
```

By default the batch command will process just three transcodes, and it won't create spectrograms or upload unless explicitly instructed. These safeguards are in place to prevent mistakenly uploading a bunch of sources that you haven't checked.
Run the batch command to verify and transcode the three sources in the queue:
```bash
docker compose run --rm caesura batch --transcode
```

Tip
Add the --spectrogram flag to generate spectrograms.
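For example:

```bash
# Verify, generate spectrograms and transcode the next three sources in the queue
docker compose run --rm caesura batch --transcode --spectrogram
```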
If everything goes to plan three sources should have transcoded to your output directory.
Use the queue summary command to see the progress:
```bash
docker compose run --rm caesura queue summary
```

Tip
Refer to the analyzing the queue section to inspect the queue in greater detail.
Nothing was uploaded in the previous run of the batch command giving you a chance to check the transcodes and spectrograms. Once you're satisfied run again but with the --upload flag.
```bash
docker compose run --rm caesura batch --transcode --upload
```

Check the uploads on your indexer to make sure everything has gone to plan.
Now, we can set the batch command loose with the --no-limit option to transcode (but not upload) every source in the directory:
```bash
docker compose run --rm caesura batch --transcode --no-limit
```

Once you've checked the transcodes you can start to upload them in batches. The --wait-before-upload 30s option will add a 30 second wait interval between uploads to give you time to check everything looks good and spread out the load on your indexer:
```bash
docker compose run --rm caesura batch --upload --limit 10 --wait-before-upload 30s
```

Warning
In theory you can execute with both --upload --no-limit but that is probably a bad idea and a very fast way to lose your upload privileges.
If you are going to do so then you should definitely use a long wait interval:
--upload --no-limit --wait-before-upload 2m
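Spelled out in full, that would be something like:

```bash
# Upload everything in the queue, pausing two minutes between uploads
docker compose run --rm caesura batch --upload --no-limit --wait-before-upload 2m
```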
Check out the full documentation of configuration options in CONFIG.md; in particular, you may want to use --copy-transcode-to-content-dir and --copy-torrent-to to suit your preferred setup.
caesura is designed to work with both RED and OPS. There's no need for separate cache or output directories; however, you will need a separate configuration for each, and the commands must be run separately.
Make a copy of config.yml for each indexer. For clarity I recommend naming them config.red.yml and config.ops.yml.

```bash
cp config.yml config.red.yml
cp config.yml config.ops.yml
```

Edit each config file to include the API key and announce URL for that indexer.
```yaml
announce_url: https://home.opsfet.ch/YOUR_ANNOUNCE_KEY/announce
api_key: "YOUR_API_KEY"
```

Edit docker-compose.yml to include separate services for each indexer. The only difference between them is the mapping of the config file.
```yaml
services:
  caesura-red:
    container_name: caesura-red
    image: ghcr.io/rogueoneecho/caesura
    volumes:
      - ./config.red.yml:/config.yml:ro
      - /path/to/your/content:/content:ro
      - ./output:/output
      - ./cache:/cache
  caesura-ops:
    container_name: caesura-ops
    image: ghcr.io/rogueoneecho/caesura
    volumes:
      - ./config.ops.yml:/config.yml:ro
      - /path/to/your/content:/content:ro
      - ./output:/output
      - ./cache:/cache
```

Run the config command to verify the config is loaded correctly for each:
```bash
docker compose run --rm caesura-red config
docker compose run --rm caesura-ops config
```

The queue command is indexer-agnostic, so as long as both configurations use the same cache it only needs to be run once.
```bash
docker compose run --rm caesura-red queue add /path/to/your/torrents
```

Then run the batch command for each indexer:
```bash
docker compose run --rm caesura-red batch --transcode --upload
docker compose run --rm caesura-ops batch --transcode --upload
```

Note
If you start a transcode for a source on OPS that you've already transcoded for RED, caesura will detect this automatically. Instead of re-transcoding, it simply creates a *.ops.torrent file from the existing transcode, so there's no duplication of effort and the existing files are re-used without taking up additional space.
Therefore the first time you run the batch command for the new indexer you will likely see a few messages along the lines of:
```
Found existing 320 transcode
Found existing V0 transcode
```
The application requires two writable directories.
The verify command will download .torrent files for each source to {CACHE}/torrents/{ID}.{INDEXER}.torrent
Tip
You can delete the cached .torrent files at any time. The application will just download them again if required.
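For example, assuming the ./cache directory from this guide:

```bash
# Remove cached .torrent files; the application re-downloads them when needed
rm ./cache/torrents/*.torrent
```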
The queue and batch commands will read and write the source statuses to {CACHE}/queue/{FIRST_BYTE_OF_HASH}.yml
Warning
In theory you can delete the cache/queue files as they can be re-created using queue add, however:
- subsequent batch runs will be slow as everything will need to be re-processed from scratch, making an unnecessary number of I/O and API calls
- queue summary will no longer include your uploads; instead verify will just see them as having all formats transcoded already

It's therefore recommended to leave these files alone.
Tip
The cache/queue can be checked into version control. It uses a flat file format so changes can easily be tracked, backed up, and even reverted using git.
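A minimal sketch of tracking the queue with git, assuming the ./cache directory from this guide:

```bash
# Initialise a repository in the cache directory and snapshot the queue
cd ./cache
git init
git add queue
git commit -m "Snapshot caesura queue"
```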
The spectrogram command will generate spectrograms to
{OUTPUT}/{ARTIST} - {ALBUM} [{YEAR}] [{MEDIA} SPECTROGRAMS]/
Tip
Once you've reviewed the spectrograms you can freely delete each spectrograms directory (it can always be re-generated).
The transcode command will transcode to
{OUTPUT}/{ARTIST} - {ALBUM} [{YEAR}] [{MEDIA} {FORMAT}]/
Tip
You can delete each transcode directory if you:
- Store the transcode elsewhere for seeding
- Don't intend to produce transcodes or cross seed to another indexer.
The transcode command will also create two .torrent files:
- `{OUTPUT}/{ARTIST} - {ALBUM} [{YEAR}] [{MEDIA} {FORMAT}].{INDEXER}.torrent`
- `{OUTPUT}/{ARTIST} - {ALBUM} [{YEAR}] [{MEDIA} {FORMAT}].torrent`
Tip
You can delete the .torrent files if you:
- Have already uploaded to the indexer
- Don't intend to produce transcodes or cross seed to another indexer.
Configuration options are sourced first from the command line arguments, then from a configuration file.
By default the application loads config.yml from the current working directory, but this can be overridden with the --config <CONFIG_PATH> CLI argument.
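For example, a sketch of loading an alternative configuration file; note that the mount path and the flag's position relative to the subcommand are assumptions here:

```bash
# Mount an alternative config and point caesura at it explicitly
docker run -v ./config.ops.yml:/config.ops.yml ghcr.io/rogueoneecho/caesura --config /config.ops.yml config
```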
Most options have sensible defaults so the minimum required configuration is:
```yaml
announce_url: https://flacsfor.me/YOUR_ANNOUNCE_KEY/announce
api_key: "YOUR_API_KEY"
```

This is based around the setup in this guide: how to set up Deluge via Proton VPN with port forwarding.
- `/srv/shared` is shared between multiple containers; by mounting it as a single volume, hard linking is possible.
- `/srv/deluge/state` is the Deluge state directory, containing all `.torrent` files loaded in Deluge.
- `/srv/shared/deluge` is the Deluge download directory, containing all the content.
- `source: /srv/deluge/state` in `config.yml` means the source can be omitted from the command.
```yaml
announce_url: https://flacsfor.me/YOUR_ANNOUNCE_KEY/announce
api_key: YOUR_API_KEY
content:
  - /srv/shared/deluge
limit: 2
output: /srv/shared/caesura
source: /srv/deluge/state
verbosity: debug
```

- `user: "1000:1001"` ensures files have the same ownership as the host user (use the `id` command to find your user and group id).
- Only `/srv/shared` has write permissions; the other directories are read-only.
- `command: batch` runs the batch command by default.
- `/` is the working directory of the container, so mounting the config to `/config.yml` means it's read by default.
```yaml
services:
  caesura:
    container_name: caesura
    image: ghcr.io/rogueoneecho/caesura
    user: "1000:1001"
    volumes:
      - /srv/caesura/config.yml:/config.yml:ro
      - /srv/deluge/state:/srv/deluge/state:ro
      - /srv/shared:/srv/shared
```

The cache/queue uses a YAML file format that can be analyzed with yq.
Filter to see what has been transcoded:
```bash
cat ./cache/queue/*.yml | yq 'map(select(.transcode != null))'
```

Or to see what has been skipped and why:
```bash
cat ./cache/queue/*.yml | yq 'map(select(.verify.verified == false))'
```

If you're working with a lot of files then less can be helpful:
```bash
cat ./cache/queue/*.yml | yq --colors 'map(select(.verify.verified == false))' | less -R
```

If you encounter any issues:
- Check the logs for errors
The logging verbosity can be adjusted with the --verbosity <LOG-LEVEL> option. The available log levels are:
- `warn`: only shows warnings and errors
- `info`: gives an overview of what's happening
- `debug`: provides insight into each step
- `trace`: detailed logging to see exactly what's happening
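For example (the option is shown appended to the command; its exact placement is an assumption):

```bash
# Re-run a verify with more detailed logging
docker compose run --rm caesura verify 142659 --verbosity debug
```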
- Ask ChatGPT
You might be surprised how often just copying and pasting the command and error message into ChatGPT can provide an instant solution.
- Re-read the getting started guide
- Ask for help in a support discussion
- If it's an idea or request for a new feature, search for an existing idea discussion or create a new one
- If it's a bug report, search for an existing issue or create a new one
Tip
If you manage to resolve your issue it's always worth creating a new discussion anyway because it might help someone else in the future, or identify an area where the documentation could be improved.
The build process is documented in BUILD.md
Releases and a full changelog are available via GitHub Releases.
Release versions follow the Semantic Versioning 2.0.0 specification.
Commit messages follow the Conventional Commits specification.
DevYukine completed the initial work and released it as red_oxide under an MIT license.
RogueOneEcho forked the project to complete a major refactor, fix some issues, add new features and improve logging and error handling. The fork is released as caesura under an AGPL license.
The main difference between the former MIT license and the present AGPL license is that if you intend to distribute a modified version of the code - even to run it on a server - you must also provide the modified source code under an AGPL license.
This is often known as copyleft. The intent is to ensure that anyone taking advantage of this open source work also contributes back to the open source community.
The code base has now adopted object oriented patterns with SOLID principles and dependency injection.
See also the list of contributors who participated in this project.