The very first run needs to wait for the machine translation model to download, so it won't be quick. On subsequent runs, the cached model will be reused.
After installation, simply passing in the source and target language codes should do the job. E.g.,
$ subaligner -m dual -v video.mp4 -s subtitle.srt -t eng,spa
Or just translate without synchronisation:
$ subaligner_convert -i subtitle_en.srt -o subtitle_es.srt -t eng,spa
Wiki2SSML eases the burden on voice editors preparing scripts in SSML, which is widely understood by modern speech synthesizers such as Amazon Polly, Google TTS, IBM Watson TTS and Microsoft Azure TTS. It is powered by WikiVoice, which provides an unobtrusive way of blending voice-tuning markup with free text, creating a seamless experience of editing scripts and voices in one go.
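To make the idea of blending voice-tuning markup with free text concrete, here is a minimal sketch. The `{{rate:...|...}}` syntax below is purely hypothetical (it is not WikiVoice's actual markup); it only illustrates how inline annotations in otherwise plain text can be expanded into SSML tags that synthesizers understand.

```python
import re

def wiki_to_ssml(text: str) -> str:
    """Expand hypothetical {{rate:VALUE|TEXT}} spans into SSML <prosody> tags."""
    body = re.sub(
        r"\{\{rate:(\w+)\|([^}]*)\}\}",
        r'<prosody rate="\1">\2</prosody>',
        text,
    )
    # Wrap the whole script in the SSML root element.
    return f"<speak>{body}</speak>"

print(wiki_to_ssml("Hello, {{rate:slow|take it easy}} there."))
# → <speak>Hello, <prosody rate="slow">take it easy</prosody> there.</speak>
```

The plain text stays readable while annotated, which is the middle ground discussed below: an editor could toggle between seeing the raw markup and the rendered script.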
Nice plug. Your digital marketing is on point. I didn't know this was a thing, and now I do, and I'm adjacent to someone who would be in the market. Well done.
Thanks, and your select&annotate&render approach is definitely a cool solution. Some editors, such as wiki authors, may prefer plain-text editing, so I feel there could be a middle ground where users can toggle the raw markup on or off.
Just realised another user reported that it did not work well for a Russian movie with Polish subtitles. Nonetheless, that doesn't stop you from training your own subaligner with the media assets you possess.
Oh, good to know! I've never tried that combination before. Maybe this was because the model was pre-trained on English speech. Nonetheless, have you tried switching off the stretch with "-so"?
The model was trained on features of the human voice bound to a frequency range, so it may work for "cross-language" sync. Why not give it a go and check the quality? It won't change the content of the original segments; it will only shift them along the timeline if there are gaps.
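A toy illustration of why a purely timeline-based shift can work across languages (this is not subaligner's actual model, just the underlying intuition): treat detected voice activity in the audio and the subtitle cues' on/off times as two binary signals, then search for the global shift that best aligns them. No subtitle text is read at all, which is why the language of the speech versus the subtitles need not match.

```python
def best_shift(voice, subs, max_shift):
    """Return the frame shift that best aligns two 0/1 activity signals."""
    def overlap(shift):
        # Count frames where voice activity and the shifted subtitle
        # cues are both "on".
        return sum(
            voice[i] * subs[i - shift]
            for i in range(len(voice))
            if 0 <= i - shift < len(subs)
        )
    return max(range(-max_shift, max_shift + 1), key=overlap)

# Voice activity detected in the audio vs. subtitle cue on/off times;
# the cues follow the same pattern but start two frames too early.
voice = [0, 0, 1, 1, 0, 1, 1, 0, 0, 0]
subs  = [1, 1, 0, 1, 1, 0, 0, 0, 0, 0]
print(best_shift(voice, subs, 4))  # → 2 (shift the cues two frames later)
```

A brute-force search is fine here for clarity; real aligners would work with per-millisecond features and a model rather than raw overlap counts.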