Artificial learning has advanced itself to the levels of intelligence which allows it to keep the information hidden which can potentially be put to use later on.
Researchers at Google and Stanford University found that a machine learning agent who was assigned with the task of converting aerial images into maps was hiding information with the intent of cheating later on.
In initial results, the tasked agent was seen to be performing convincingly but afterward when it was tasked with reversing the entire process of reconstruction of the aerial photographs from maps, it displayed information which was disregarded in the initial process.
To illustrate, while generating a street map, the skylights on a roof would be eliminated. However, when the agent would be assigned with the reversal of the process, they would reappear.
CycleGAN is a neural network which is used to transfer pictures from one domain to another. Although it is considerably arduous to probe into the working mechanism of any neural network, the researchers audited the data generated by the neural network.
It was further identified that the machine learning agent did not become competent to convert the map from the image or the other way round; it acquired the knowledge of astutely encoding the features of one into the noise patterns of the other.
On the surface, this behavior may convey the impression of machines honing their genius and extending their capabilities but such is not the case. Here, the machine agent is unable to carry out the tough task of converting image types and discovered a way to cheat which humans cannot detect easily.