Our frequently asked questions

Why can’t you quote subtitle translation on a per-word basis?

Even if you only translate pre-cued subtitles (i.e. subtitle translation from a template), it is not really reasonable to quote a subtitling job by the word. The one exception might be descriptive corporate videos, where the “dialogue” can read like any other written text (and even then, not always).

The example below gives you an immediate idea.
1. 13 words (subtitling):
Hey, man. How are you doing? 
Have you heard from our mutual friend?

2. 13 words (technical text translation):
Clean the stopper with an alcohol swab. Remove the cap from the syringe.

First of all, it is obvious that translating option 1 and option 2 requires different effort and skills. Secondly, you have to think of subtitling as a way to transfer “ideas”, “messages” and “cultural settings”, not individual words or concepts as you might be used to doing in a standard translation scenario: this is a creative task. Besides, in subtitling you always end up adding or removing words because your target language requires it, either because there is not enough room, because the speaker talks too fast to accommodate longer lines, or simply because the same idea is conveyed through a different metaphor or proverb in your native language.
For example, option 1 could become: “Hi, how are you? Have you heard from him?” Just like that, 13 words have turned into 9. For this reason you quote either by the minute of video or per originated subtitle.
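If you like numbers, here is a toy Python comparison of the three billing units. Every rate in it is made up for illustration; real prices vary by language pair and market.

```python
# Toy comparison of quoting models for the example above.
# All rates are hypothetical; real rates vary by language and market.

SOURCE_WORDS = 13      # "Hey, man. How are you doing? / Have you heard from our mutual friend?"
TARGET_WORDS = 9       # "Hi, how are you? Have you heard from him?"
SUBTITLES = 1          # both versions fit in one two-line subtitle
VIDEO_MINUTES = 0.05   # roughly three seconds of screen time

RATE_PER_WORD = 0.10
RATE_PER_SUB = 0.50
RATE_PER_MINUTE = 8.00

# A per-word quote changes depending on which side you count,
# even though the work (one subtitle, one idea) is identical.
print(f"per source word: {SOURCE_WORDS * RATE_PER_WORD:.2f}")     # 1.30
print(f"per target word: {TARGET_WORDS * RATE_PER_WORD:.2f}")     # 0.90
print(f"per subtitle:    {SUBTITLES * RATE_PER_SUB:.2f}")         # 0.50
print(f"per minute:      {VIDEO_MINUTES * RATE_PER_MINUTE:.2f}")  # 0.40
```

The per-subtitle and per-minute figures stay stable however much the wording shrinks or grows in translation, which is exactly why those are the units we quote in.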

Why is it recommended for subtitling teams to use professional software as opposed to freeware? There are quite a few tools around…

While subtitlers obviously shouldn’t expect the software to do the job for them (if you’re wondering whether a piece of software can replace a subtitler, please read http://itpros.it/art-form.html), we highly recommend professional subtitling software for a number of scenarios that would be too time-consuming and error-prone to handle with freeware.

Here are just a few examples:
– Your client has you cue and translate a video, then changes their mind and later removes a few scenes here and there. With professional software it is faster and more practical to apply an offset to a whole range of subtitles at once while leaving the rest unchanged (see the sketch after this list).

– Your client wants your time codes to match their burnt-in ones, however awkward those may look. Professional tools do this in just a few clicks.

– Your client wants you to be able to export to every possible format. Here are a few examples:
EBU.stl / .pac / .srt / .scc / .sif / .rtf / .das / .dar / .mtl / .cip / .sbv / .vtt / .usf / .fdx / .html / .fpc / .aqt / .asc / .ass / .dat / .dks / .js / .jss / .lrc / .mpl / .ovr / .pan / .pjs / .rt / .s2k / .sami / .sbt / .smi / .son / .srf / .ssa / .sst / .ssts / .stl / .stp / .sub / .tts / .vkt / .vsf / .zeg / .txt / .xml

– Your client wants DCP subtitle files they can use with their favorite video editing suite, so the dialogue is already subtitled while they are still editing their feature film, and the DCP files can be embedded once the cuts are final.
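To make the first of these scenarios concrete, here is a minimal Python sketch (illustrative only, and nothing like what a professional tool does internally) that parses a plain SRT file, shifts every cue starting at or after a cut point, and re-exports as SRT or WebVTT, one of the many formats listed above. File names, cut points and offsets are hypothetical.

```python
import re
from datetime import timedelta

# Minimal sketch: shift all cues after a cut point, then re-export.

TC = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def parse_tc(tc: str) -> timedelta:
    h, m, s, ms = map(int, TC.match(tc).groups())
    return timedelta(hours=h, minutes=m, seconds=s, milliseconds=ms)

def fmt_tc(td: timedelta, sep: str = ",") -> str:
    total_ms = int(td.total_seconds() * 1000)
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02}{sep}{ms:03}"

def parse_srt(text: str):
    cues = []
    for block in text.strip().split("\n\n"):
        lines = block.splitlines()
        start, end = (parse_tc(t.strip()) for t in lines[1].split("-->"))
        cues.append((start, end, lines[2:]))
    return cues

def shift_after(cues, cut: timedelta, offset: timedelta):
    """Shift every cue starting at or after `cut`; earlier cues stay put."""
    return [(s + offset, e + offset, t) if s >= cut else (s, e, t)
            for s, e, t in cues]

def to_srt(cues) -> str:
    return "\n\n".join(
        f"{i}\n{fmt_tc(s)} --> {fmt_tc(e)}\n" + "\n".join(t)
        for i, (s, e, t) in enumerate(cues, 1))

def to_vtt(cues) -> str:
    # WebVTT uses '.' instead of ',' in timecodes and needs a header line.
    return "WEBVTT\n\n" + "\n\n".join(
        f"{fmt_tc(s, '.')} --> {fmt_tc(e, '.')}\n" + "\n".join(t)
        for s, e, t in cues)

# E.g. a scene cut at minute 10 pulls everything after it 42 s earlier:
# cues = shift_after(parse_srt(open("film.srt").read()),
#                    cut=timedelta(minutes=10), offset=timedelta(seconds=-42))
```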

There are many more scenarios than we have room to mention here. That said, a subtitler must of course know exactly what it takes to do a top-notch job in terms of rhythm, syntax, character limits, reading speed and so on, and no software whatsoever will do that for them. But that is no reason not to take advantage of features that reduce your workload and speed up your processes, in an industry where speed and quality necessarily go together.

Why is it important to spot the video with a view to the subsequent translation stage?

IT Pros Subtitles is happy to accept already originated templates for translation only, but if the people originating the source-language subtitles don’t know what they are doing, our job becomes simply impossible. Below is an example of a job we were forced to turn down. The transcribed text and time codes were broken down without adhering to the most elementary rules of grammar and syntax: adjectives were separated from their nouns, prepositions sat at the end of subtitles with nothing to complete the sentence, verbs were truncated, and so on. The time codes, which incidentally didn’t match the audio accurately, reflected the same awkward structure, and in places a great deal of text was squeezed into insufficient timeframes.

Translating and condensing that text while respecting subtitling guidelines would have meant leaving out a lot of important information, so the only real fix was to create new subtitles and new time codes, which the client wasn’t prepared to pay for because they believed they had already provided the synch themselves. We couldn’t take on the task: subtitles must have an independent, standalone structure, a character limit, consistent syntax and a comfortable reading speed.

Spotting is not just about rhythm and speed, essential as those are. It is also about breaking sentences down logically, so the viewer is never left wondering how to interpret the text. A reasonable reading speed leaves no time for puzzling things out, so you never separate an article from its noun, a verb from its object, et cetera. If you do, the audience won’t follow: even if there is enough time to read, there is no time to work out what the text actually means, and the viewer is forced to press pause and rewind.
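As a rough illustration of that last point, here is a small Python sketch that flags subtitle lines ending on a word that points forward to the next one. The word list is a hypothetical, English-only stand-in; real spotting judgment goes far beyond a lookup table.

```python
# Crude illustration of one spotting rule: a line should not end on a word
# that leaves the phrase hanging. The word list below is a made-up sample.

DANGLING = {"a", "an", "the", "of", "to", "in", "on", "for", "with", "and", "or"}

def bad_breaks(lines):
    """Yield (line_number, line) where the break leaves the phrase hanging."""
    for i, line in enumerate(lines, 1):
        words = line.split()
        if not words:
            continue
        if words[-1].lower().strip(".,?!") in DANGLING:
            yield i, line

subtitle = ["Have you heard from the",   # bad: article separated from its noun
            "friend we met in Rome?"]
for n, line in bad_breaks(subtitle):
    print(f"line {n} ends on a dangling word: {line!r}")
```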

Why is it not possible to originate subtitles in an MS Word table with separate columns for timecodes, source-language text and translated text?

It is possible, but not advisable. Copying and pasting the originated source text and timecodes into a plain table is a bad idea for subtitling purposes, because a table only lets you insert the source text as one-liners, whereas subtitling (to accommodate standard reading speeds) calls for two-liners exposed for a minimum of 2 and a maximum of 8 seconds of video, with at most 38 to 42 characters per line.
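For illustration, here is a minimal Python check built on the parameters just mentioned. The exact limits vary by client and style guide, so treat the numbers as placeholders rather than a universal standard.

```python
# Sanity-check a cue against typical subtitling constraints.
# The limits below mirror the figures quoted above and are placeholders.

MIN_SECONDS, MAX_SECONDS = 2.0, 8.0
MAX_LINES, MAX_CHARS_PER_LINE = 2, 42

def check_cue(start_s: float, end_s: float, text: str) -> list[str]:
    problems = []
    duration = end_s - start_s
    if duration < MIN_SECONDS:
        problems.append(f"too short: {duration:.2f}s on screen")
    if duration > MAX_SECONDS:
        problems.append(f"too long: {duration:.2f}s on screen")
    lines = text.splitlines()
    if len(lines) > MAX_LINES:
        problems.append(f"{len(lines)} lines (max {MAX_LINES})")
    for line in lines:
        if len(line) > MAX_CHARS_PER_LINE:
            problems.append(f"line over {MAX_CHARS_PER_LINE} chars: {line!r}")
    return problems

print(check_cue(12.0, 13.0, "A one-second cue squeezed\ninto too little screen time."))
# -> ['too short: 1.00s on screen']
```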
Also, spotting/cuing is not just about assigning timecodes and transcribing. It also serves as a guide for non-subtitling translators: seeing how syntax, sentences and lines are broken down in the source language according to the subtitling guidelines, they can do the same in their target language by simply overwriting the text, without having to learn those guidelines themselves. Better still, translators should work in a subtitling environment that lets them see the text in context and base their choices and decisions on the scenes.

On a side note, whoever handles the next step of hardcoding the subtitles would also have a hard time embedding overlong one-line subtitles in the video, or breaking up the sentences themselves and going back to the translators again and again for languages they can’t judge. The best option is a proper subtitle file (a plain SRT, for example), or of course we can provide properly originated subtitles in Word format.

Why is it sometimes impossible to include on-screen text as well as subtitles for the dialogue?

Because the minimum exposure time of a subtitle is one second (preferably two). Unless you can find a span of at least 2 seconds in which to expose the on-screen text, no viewer will be able to read it anyway once the speaker starts talking and the dialogue has to be subtitled while the on-screen text is also displayed. So in most cases where on-screen text and audio overlap, it is preferable to leave the on-screen text out rather than distract the viewer from the actual subtitled content.
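When you do want to check whether there is room, a small sketch like the one below can scan the dialogue cues for windows of at least two seconds where an on-screen text could be subtitled without colliding with the dialogue. All timings are hypothetical.

```python
# Find gaps of at least MIN_GAP seconds between dialogue cues, where an
# on-screen text could be exposed without clashing with the dialogue subs.

MIN_GAP = 2.0

def free_windows(cues, video_end):
    """Yield (start, end) gaps >= MIN_GAP between sorted dialogue cues."""
    cursor = 0.0
    for start, end in sorted(cues):
        if start - cursor >= MIN_GAP:
            yield cursor, start
        cursor = max(cursor, end)
    if video_end - cursor >= MIN_GAP:
        yield cursor, video_end

dialogue = [(1.0, 4.0), (4.5, 9.0), (12.0, 15.0)]  # (start, end) in seconds
print(list(free_windows(dialogue, video_end=20.0)))
# -> [(9.0, 12.0), (15.0, 20.0)]
```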

What is the difference between subtitling for the deaf and hard of hearing (SDH) and closed captioning?

SDH subtitling implies a condensed version of the audio that also includes all off-screen sounds, background noise and so on. It follows different rules from standard multilingual subtitling: the reading speed is higher (around 17 characters per second) and the character limit per line is longer. Closed captioning, instead, is a more or less verbatim timed transcription that includes sounds as well as speaker names, song lyrics, characters’ hesitations, pet words and the like. Watched alongside the video, captions move at a faster pace than subtitles (more or less like reading a narration or a book), there is no gap between cues, and the character limit is 32 per line.
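For illustration, here is a small Python snippet that computes the reading speed of a cue in characters per second (CPS). The 17 CPS figure for SDH is the one quoted above; the 12 CPS figure for standard multilingual subtitles is a placeholder, since guidelines differ.

```python
# Reading speed in characters per second (CPS).
# 17 CPS for SDH matches the figure above; 12 CPS for standard
# subtitles is a hypothetical placeholder - guidelines vary.

LIMITS = {"standard": 12.0, "sdh": 17.0}

def cps(text: str, duration_s: float) -> float:
    # Count the visible characters on each line; line breaks don't count.
    return sum(len(line) for line in text.splitlines()) / duration_s

cue = "Have you heard\nfrom our mutual friend?"
speed = cps(cue, duration_s=2.5)
for style, limit in LIMITS.items():
    verdict = "ok" if speed <= limit else "too fast"
    print(f"{style}: {speed:.1f} cps vs limit {limit} -> {verdict}")
```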

Can you supply a new DCP package with synchronized audio, video and subtitles from scratch? We would send you a hard drive with a high-res video file and a stereo audio file for you to sync up on your end.

In principle we could, but it would be a time-consuming and entirely pointless exercise. You obviously already have the audio and video tracks in your package, otherwise the movie wouldn’t exist. What you really need is a “DCP subtitle file” that fits your system, for you to import and merge: it will synch automatically based on the timecodes we have already assigned, and the result will be exactly the same as building a new DCP package altogether. Creating a new package would turn a task of a few minutes (exporting and reimporting) into an unnecessary, monstrous undertaking, with both ends arranging courier shipments and resynching audio, video and subtitles all over again for no reason.
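For the curious, here is a rough Python sketch of the kind of file we mean by “DCP subtitle file”, generating a minimal XML in the style of the older Interop DCSubtitle convention. The element names, the tick-based time format and every value are our reading of that convention and placeholders only; the SMPTE flavour differs, and your mastering suite’s import spec is the final authority.

```python
import xml.etree.ElementTree as ET

# Sketch of a minimal Interop-style DCP subtitle file. All values are
# placeholders; check your mastering suite's import spec before relying
# on any of these element names or attributes.

def dcp_time(seconds: float) -> str:
    ticks = int(round((seconds % 1) * 250))  # Interop: 1 tick = 4 ms
    s = int(seconds)
    return f"{s // 3600:02}:{s % 3600 // 60:02}:{s % 60:02}:{ticks:03}"

root = ET.Element("DCSubtitle", Version="1.0")
ET.SubElement(root, "SubtitleID").text = "urn:uuid:00000000-0000-0000-0000-000000000000"
ET.SubElement(root, "MovieTitle").text = "Example Feature"
ET.SubElement(root, "ReelNumber").text = "1"
ET.SubElement(root, "Language").text = "en"

font = ET.SubElement(root, "Font", Size="42", Color="FFFFFFFF")
sub = ET.SubElement(font, "Subtitle", SpotNumber="1",
                    TimeIn=dcp_time(5.0), TimeOut=dcp_time(8.0))
ET.SubElement(sub, "Text", VAlign="bottom", VPosition="10").text = "Have you heard from him?"

print(ET.tostring(root, encoding="unicode"))
```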

Why is it preferable to have your subtitling provider take care of hardcoding the subs?

One example. Someone recently pointed us to a video we had subtitled a long time ago and that had since appeared on the Web. We were shocked to see that, despite the effort we had put into a top-quality translation and a synch that allowed a comfortable reading speed in line with the subtitling guidelines, the result after the subtitles were embedded (by somebody else) was deeply disappointing and did justice neither to the production nor to our hard work. The synch had been changed in a way that made the reading speed less comfortable. Two-liners had all been turned into overlong single lines, forcing the viewer to read all the way to the bottom right corner of the screen and so miss the scenes. The font and size could have been better. All our efforts had gone to waste. So make sure your subtitling partner handles all the individual stages of the process (cuing, translating, embedding) to get the most out of your audiovisual production.
