
Google's AI is being trained to tell if humans may like specific images

By Nickey Ross - on 21 Dec 2017, 12:26pm


Image source: Google

Google's AI scientists have shown a new way to teach computers to understand why some photos are easier on the eyes than others. The AI can now rate image quality across any category, whereas machines previously only sorted images into basic categories. The approach, called neural image assessment (NIMA), uses deep learning to train a convolutional neural network (CNN) to predict image ratings. 
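
To make this concrete, here is a minimal, hypothetical sketch of what a NIMA-style scorer could look like in PyTorch: a stock CNN backbone with its classifier head swapped for a 10-way output, read as a distribution over rating buckets from 1 to 10. The choice of MobileNetV2, the dropout value, and the 10-bucket scheme are illustrative assumptions, not Google's published implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

class NimaStyleScorer(nn.Module):
    """Hypothetical NIMA-style model: predicts a distribution over 10 rating buckets."""
    def __init__(self):
        super().__init__()
        backbone = models.mobilenet_v2(weights=None)   # assumed backbone; any image CNN would do
        in_features = backbone.classifier[1].in_features
        backbone.classifier = nn.Sequential(
            nn.Dropout(0.75),                          # illustrative value
            nn.Linear(in_features, 10),                # one logit per rating bucket 1..10
        )
        self.backbone = backbone

    def forward(self, images):
        logits = self.backbone(images)                                 # (batch, 10)
        probs = torch.softmax(logits, dim=1)                           # distribution over ratings
        buckets = torch.arange(1, 11, dtype=probs.dtype, device=probs.device)
        mean_score = (probs * buckets).sum(dim=1)                      # expected rating per image
        return probs, mean_score
```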

The NIMA model examines both the overall aesthetic of an image and its individual pixels, then predicts how much a person would like the photo. This opens up the possibility of computers doubling as photo curators. It would also be useful for anyone who takes a ton of photos at once, since Google's AI could help pick out the best shot from the lot. 
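
As a usage example, the snippet below ranks a batch of photos by their predicted mean score and returns the best one, reusing the hypothetical scorer sketched above. The preprocessing sizes and the pick_best_photo helper are assumptions made for illustration.

```python
import torch
from PIL import Image
from torchvision import transforms

# Assumed ImageNet-style preprocessing; the exact sizes are illustrative.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def pick_best_photo(paths, scorer):
    """Return the path whose predicted mean quality score is highest (hypothetical helper)."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    with torch.no_grad():
        _, mean_scores = scorer(batch)
    return paths[int(mean_scores.argmax())]
```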

NIMA may also be used to optimize image settings for the best results. According to Google, the NIMA model can guide a deep CNN filter toward near-optimal settings for parameters such as shadows, brightness and highlights. 
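
That idea of scoring candidate edits can be sketched the same way: apply a few filter settings, score each result with the model, and keep the highest-rated version. The brute-force grid over Pillow's brightness and contrast enhancers below is only a stand-in for Google's actual filter-tuning method (which also covers shadows and highlights); it reuses the scorer and preprocess defined in the earlier sketches.

```python
import torch
from itertools import product
from PIL import ImageEnhance

def tune_enhancements(image, scorer,
                      brightness_steps=(0.8, 1.0, 1.2),
                      contrast_steps=(0.8, 1.0, 1.2)):
    """Grid-search a couple of enhancement parameters and keep the version the
    scorer rates highest. Parameter choices and the search itself are illustrative."""
    best_image, best_score = None, float("-inf")
    for b, c in product(brightness_steps, contrast_steps):
        candidate = ImageEnhance.Brightness(image).enhance(b)
        candidate = ImageEnhance.Contrast(candidate).enhance(c)
        with torch.no_grad():
            _, score = scorer(preprocess(candidate).unsqueeze(0))
        if float(score) > best_score:
            best_image, best_score = candidate, float(score)
    return best_image, best_score
```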

Source: The Next Web.