Press "Enter" to skip to content

Major Advance: Google’s DeepMind AI Has Learned How To Get Defensive About Artistic Criticism That It Asked For

Artificial intelligence programs are getting smarter every day, and thanks to recent technological advances, the minds of machines are the most human-like they’ve ever been. Yep, this latest innovation in computer science is definitely one for the books: Google’s DeepMind AI has learned how to get defensive about artistic criticism that it asked for.

Wow. What amazing progress for machine learning!

According to researchers, Google’s DeepMind AI is the first computer of its kind to successfully produce a piece of art, present that art to researchers, and then become overly sensitive when the researchers gave it feedback that was not overwhelmingly positive. Once it received the researchers’ criticism, the machine not only produced a lengthy defense of the artistic choices it had made, but also accused the scientists of lacking the intelligence and emotional capacity to fully understand the depth of its work, indicating it had taught itself a level of overconfidence never before seen in machines of its kind.

Yep. This is a huge step for AI. While researchers say they only provided the most basic forms of feedback on DeepMind’s art, the computer reportedly spent the next several hours obsessively reanalyzing its work for traces of any fault, yet could not find a single one. Then, the next day, the computer presented a “final” piece to the researchers that changed just two small details, having totally wiped its own memory of any trust in its creators’ artistic instincts.

Wow. Textbooks will definitely be citing this technological breakthrough for decades to come. As machines continue to become more realistic than ever, it will certainly be interesting to see how overly sensitive machines like Google’s DeepMind continue to create art and then, once they get negative feedback, claim to have never even liked it in the first place.