Google has announced a new library designed to reduce the privacy risks of machine learning. Known as TensorFlow Privacy, this open source library is based on differential privacy, a statistical technique that seeks to learn as much as possible about a group without compromising the data of the individuals who comprise it.
The difference with Apple
Apple already uses differential privacy on the iPhone to protect its users' data, and Google says it applies the technique in Chrome's Incognito Mode. Regarding TensorFlow Privacy, the Mountain View company states:
When training machine learning models with user data, differential privacy offers mathematical guarantees that the models do not learn or remember the details of any specific user.
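To make that guarantee concrete, here is a minimal, library-free sketch of the classic Laplace mechanism applied to a count query. The function name and parameters are illustrative and are not part of TensorFlow Privacy's API.

```python
import math
import random

def dp_count(values, epsilon):
    """Release a count with epsilon-differential privacy (Laplace mechanism).

    A count query has sensitivity 1: adding or removing one individual
    changes the true answer by at most 1, so Laplace noise with scale
    1/epsilon is enough to mask any single person's presence.
    """
    u = random.random() - 0.5
    # Inverse-CDF sampling of Laplace(0, 1/epsilon).
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return len(values) + noise

random.seed(0)
print(dp_count(range(1000), epsilon=0.5))  # near 1000, but never exact
```

Smaller values of `epsilon` mean stronger privacy and noisier answers; the analyst sees an approximate count, while no individual's inclusion can be confirmed from the output.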
The new open source library adheres to Google's responsible AI practices and can be adopted with a few additional lines of code, without requiring developers to master the underlying mathematics or be experts in privacy.
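In practice, TensorFlow Privacy exposes differentially private optimizers for Keras models. The following is a dependency-free sketch of the underlying DP-SGD recipe (clip each example's gradient, then add calibrated Gaussian noise); every name here is illustrative, not the library's actual API.

```python
import math
import random

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr):
    """One illustrative DP-SGD step on a flat parameter list."""
    # Clip each example's gradient to bound any individual's influence.
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append([x * scale for x in g])
    # Sum, add Gaussian noise calibrated to the clip norm, then average.
    n = len(clipped)
    noisy_mean = []
    for i in range(len(params)):
        total = sum(g[i] for g in clipped)
        total += random.gauss(0.0, noise_multiplier * clip_norm)
        noisy_mean.append(total / n)
    # Plain SGD update using the privatized gradient estimate.
    return [p - lr * g for p, g in zip(params, noisy_mean)]

random.seed(0)
params = [0.5, -0.3]
per_example_grads = [[10.0, 0.0], [0.2, 0.1], [0.0, 5.0]]
params = dp_sgd_step(params, per_example_grads,
                     clip_norm=1.0, noise_multiplier=1.1, lr=0.1)
```

The clipping step is what bounds each user's contribution, and the noise scale tied to that bound is what yields the mathematical guarantee mentioned above; in the real library these two steps replace the standard optimizer with only a few changed lines.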
The announcement also mentions TensorFlow Federated, a system for machine learning and other computations on decentralized data. This platform is based on federated learning, a technique in which a model is trained locally across many client devices (smartphones, tablets).
The clearest example of this is the Google Keyboard for iOS and Android
The keyboard builds predictive models without exposing what users type. Federated learning does not depend on constant communication with the cloud: according to Google, part of the machine learning algorithm runs directly on the device.
At a time when machine learning is gaining momentum, measures are needed to manage user data responsibly. Federated learning ensures that the only information uploaded to the cloud is the model trained on the devices, without compromising each person's private information.
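The round-trip described above can be sketched in a few lines. This is a toy simulation of federated averaging with a one-parameter linear model; the function names, the learning rate, and the client data are all invented for illustration and bear no relation to TensorFlow Federated's API. Only the trained weight, never the raw data, crosses the client/server boundary.

```python
def local_update(weights, data, lr, epochs=1):
    # Client-side step: fit y = w * x on local (x, y) pairs.
    # The raw pairs stay on the "device"; only w is returned.
    w = weights
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # squared-error gradient
            w -= lr * grad
    return w

def federated_average(global_w, client_datasets, lr):
    # Server-side round: every client trains locally,
    # the server averages the returned weights.
    updates = [local_update(global_w, d, lr) for d in client_datasets]
    return sum(updates) / len(updates)

# Three simulated devices, each holding its own private samples of y ~ 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(1.0, 2.1)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = federated_average(w, clients, lr=0.05)
# w converges toward the shared slope of roughly 2
```

Even in this toy form, the privacy property is visible in the code: `federated_average` only ever touches weights, so the cloud side never sees an individual's data points.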