Facebook's auto-tagging feature is already remarkable, and the social network has reportedly upgraded its image-processing capabilities so that it now recognizes not only objects in photos but also scenes and actions, even in posts shared without a single word of text.

As reported by DigitalTrends, Lumos is the artificial intelligence platform that allows the computer to 'see' what is inside a shared image, even without any text description. The machine learning system is reportedly also behind a number of Facebook's other image-recognition features, including detecting nudity and weeding out spam.

According to Engadget, this means that even if the date a photo was taken has been forgotten but its content is remembered, the photo can still be located with ease. The report further said that Facebook gives the example of searching for 'black shirt photo', with the system able to 'see' all photos containing a black shirt, even if they are not tagged as such. In other words, something can reportedly be found by searching the content of shared photos rather than relying on tags and text descriptions.
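As a rough illustration of the idea, and not Facebook's actual implementation, content-based search can be thought of as matching query terms against labels a classifier has predicted for each photo rather than against user-supplied tags. The photo names, labels and search function in this sketch are all hypothetical:

```python
# Hypothetical sketch: searching photos by machine-predicted labels
# instead of user-supplied tags or captions.

# In a real system these labels would come from an image classifier;
# here they are hard-coded for illustration.
predicted_labels = {
    "IMG_001.jpg": {"person", "black shirt", "outdoor"},
    "IMG_002.jpg": {"dog", "beach", "ocean"},
    "IMG_003.jpg": {"person", "black shirt", "concert"},
}

def search_photos(query: str, labels_by_photo: dict[str, set[str]]) -> list[str]:
    """Return photos whose predicted labels cover every query term."""
    terms = query.lower().split()
    return [
        photo
        for photo, labels in labels_by_photo.items()
        # A photo matches if each query term appears in one of its labels.
        if all(any(term in label for label in labels) for term in terms)
    ]

print(search_photos("black shirt", predicted_labels))
# ['IMG_001.jpg', 'IMG_003.jpg']
```

The point of the sketch is simply that the query never touches tags or captions; it is answered entirely from what the model believes is in each picture.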

It is further reported that Facebook developers used deep learning and a neural network, trained on several million properly annotated photos, to teach the system to identify objects. As a result, image search can pick out scenes, objects, animals, places, attractions and items of clothing. The system reportedly also factors in a degree of variety so that search results do not simply return near-identical images.
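A minimal sketch of that kind of training, assuming PyTorch/torchvision and a hypothetical 'photos' directory of annotated images arranged one folder per label, might look like the following; it illustrates fine-tuning a pretrained network on labelled photos, not Lumos itself:

```python
# Hypothetical sketch: training an image classifier on annotated photos,
# far simpler than what the article describes but the same basic idea.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed directory layout: photos/<label_name>/<image>.jpg
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("photos", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a network pretrained on ImageNet and replace its final layer
# so it predicts this dataset's labels (scenes, objects, clothing, ...).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass over the annotated photos
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```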

Apparently, this will not only make search more convenient, it will also help Facebook describe images and video to users who are visually impaired. Facebook's automatic alt text, designed for blind users who rely on screen readers to speak aloud text-based descriptions of pictures, will now recognize 12 different actions that can be described by a verb with a noun attached at the end.
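Purely as an illustration of that 'verb plus noun' phrasing, a description generator sitting on top of such predictions might assemble alt text along these lines; the function and inputs below are hypothetical, not Facebook's code:

```python
# Hypothetical sketch: turning a model's predicted concepts and actions
# into a sentence for screen readers.

def compose_alt_text(concepts: list[str], actions: list[str]) -> str:
    """Build a spoken description from detected concepts and
    verb-plus-noun action phrases such as 'riding horses'."""
    parts = []
    if concepts:
        parts.append("Image may contain: " + ", ".join(concepts))
    if actions:
        parts.append("People " + " and ".join(actions))
    if not parts:
        return "No description available."
    return ". ".join(parts) + "."

print(compose_alt_text(["2 people", "horse", "outdoor"], ["riding horses"]))
# Image may contain: 2 people, horse, outdoor. People riding horses.
```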