A research paper titled "One Model To Learn Them All" presents a template for building a single learning system that performs well across several different tasks.
MultiModel, as the Google researchers call it, was trained on a variety of tasks, including translation, sentence parsing, speech recognition, image recognition, and object detection. Although the results show no dramatic improvement over existing specialized systems, they demonstrate that training a single self-learning machine on several tasks at once can improve its overall performance.
For example, MultiModel achieved higher accuracy in translation, speech recognition, and sentence parsing when trained on all of the tasks it can perform than when it was trained on only one task.
The work could provide a template for future self-learning machines that are both more broadly applicable and more accurate than the narrow solutions available on most machines today. In addition, the techniques it demonstrates may help reduce the amount of training data needed to produce a well-functioning, useful algorithm.
How is this possible? The Google team's results show that when MultiModel is trained on all of its tasks at the same time, its accuracy improves on the tasks for which less training data is available. This matters because assembling a complete training dataset can be a real challenge in some domains.
It should be emphasized that Google does not claim to have discovered a perfect algorithm that can learn everything at once. As its name implies, the MultiModel network simply combines subsystems tailored to different kinds of challenges with components that route the input to the appropriate expert algorithm. Still, this approach may prove useful in the further development of similar systems that span multiple domains.
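The routing idea described above can be sketched in a few lines. The following is a toy illustration, not Google's actual code: modality-specific "expert" encoders turn different kinds of input into a common fixed-size representation, a shared body then processes that representation, and a simple lookup routes each input to the right encoder. All function names and sizes here are illustrative assumptions.

```python
import numpy as np

D = 8  # size of the shared internal representation (illustrative)

def text_encoder(tokens):
    # Toy stand-in for a text modality net: bucket token ids into a vector.
    vec = np.zeros(D)
    for t in tokens:
        vec[t % D] += 1.0
    return vec

def image_encoder(pixels):
    # Toy stand-in for an image modality net: flatten and rescale pixels.
    arr = np.asarray(pixels, dtype=float)
    return np.resize(arr, D) / (arr.max() + 1e-9)

ENCODERS = {"text": text_encoder, "image": image_encoder}

def shared_body(rep):
    # The shared trunk every task passes through; here a fixed linear map.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((D, D))
    return np.tanh(W @ rep)

def multimodel(modality, raw_input):
    # Route the input to the matching expert, then run the shared body.
    rep = ENCODERS[modality](raw_input)
    return shared_body(rep)

out_text = multimodel("text", [3, 5, 7])
out_image = multimodel("image", [[0.1, 0.9], [0.4, 0.2]])
```

The point of the sketch is that both a token list and a pixel grid end up as vectors of the same shape, so one shared component can serve every task.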
But this is not the end of the road: more research lies ahead for Google. The results have not yet been independently verified, and it is hard to predict whether these findings will carry over to other domains. The Google Brain team has released the MultiModel code as part of the open-source TensorFlow project so that others can experiment with the system and see for themselves.
Google sees a number of ways to improve the system. The team has admitted that it did not spend much time optimizing some of the system's settings (known as hyperparameters in machine-learning jargon), and that a more extensive, more expensive tuning process could further improve the system's accuracy.
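To make the hyperparameter remark concrete, here is a minimal sketch of the kind of tuning process involved: try combinations of settings and keep the one that scores best on validation data. The score function below is a toy surrogate, not a real model; the parameter names are illustrative assumptions.

```python
from itertools import product

def validation_score(learning_rate, dropout):
    # Toy surrogate for a real validation metric; peaks at lr=0.01, dropout=0.2.
    return -((learning_rate - 0.01) ** 2) - ((dropout - 0.2) ** 2)

# A small grid of candidate hyperparameter values.
grid = {
    "learning_rate": [0.1, 0.01, 0.001],
    "dropout": [0.0, 0.2, 0.5],
}

# Evaluate every combination and keep the best-scoring one.
best = max(
    (dict(zip(grid, values)) for values in product(*grid.values())),
    key=lambda params: validation_score(**params),
)
```

In practice each evaluation means training the full model once, which is why a thorough sweep is expensive.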