OpenNMT, which uses the Lua language to interface with Torch, works like other products in its class. The user prepares a body of data that represents the two language pairs to be translated, typically the same text in both languages as produced by a human translator. After training OpenNMT on this data, the user can then deploy the resulting model and use it to translate texts. That said, the training process can take a long time, sometimes many weeks, but it can be snapshotted and resumed on demand if needed. Torch can also take advantage of GPU acceleration, which means training OpenNMT models can be sped up a great deal on any GPU-equipped system. But the algorithms aren't the hard part; it's coming up with good sources of data to support the translation process, which is where Google and the other cloud giants that provide machine translation as a service have the edge.
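As a rough sketch of that workflow, the Lua version of OpenNMT is driven from the command line through Torch's `th` runner in three stages: preprocess the parallel corpus, train, and translate. The file names and paths below are placeholders, and the exact checkpoint filename produced by training varies by epoch and perplexity.

```shell
# Prepare the parallel corpus: tokenized source/target text files where
# line N of each file is the same sentence in the two languages.
th preprocess.lua -train_src data/src-train.txt -train_tgt data/tgt-train.txt \
                  -valid_src data/src-val.txt   -valid_tgt data/tgt-val.txt \
                  -save_data data/demo

# Train the model; -gpuid 1 runs on the first GPU. A checkpoint is
# saved each epoch, so a long run can be snapshotted and later resumed
# with: th train.lua ... -train_from <checkpoint.t7> -continue
th train.lua -data data/demo-train.t7 -save_model demo-model -gpuid 1

# Translate new text with a trained checkpoint (epoch and perplexity
# appear in the saved filename).
th translate.lua -model demo-model_epochX_PPL.t7 \
                 -src data/src-test.txt -output pred.txt
```

The `-train_from`/`-continue` options are what make the snapshot-and-resume behavior described above practical for multi-week training runs.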