Graymatics is a cognitive multimedia analytics company headquartered in Singapore, with bases in Silicon Valley, Bangalore and Chile. It has developed a massively scalable AI-based platform for deep multimedia analytics. Its CCTV-based video analytics solution was created to solve challenges across smart cities, smart transportation, telecommunications, manufacturing, building management, retail, banks and factories.
What sort of companies require video analytics?
We provide a wide range of AI-powered analytics for CCTV. Our three areas of focus for retail banks are safety and security, streamlining operations, and customer experience. Our video AI solution provides specific functions for retail banks, including estimating experience metrics from a customer’s journey, detecting ATM vandalism, flagging anomalies such as the bank vault door being left open at unscheduled hours, staff attendance management, and much more.
For media companies, the focus is on analysing media content such as videos and images to provide contextual recommendations and customer insights for better monetization and brand positioning.
Machine learning, and deep learning in particular, relies on large amounts of data to make decisions. How is that data collected?
Data is the backbone of machine learning and deep learning. To harness the right kind of data, we tap into the internet and passive CCTV infrastructure to collate and leverage the videos captured. We have developed extensive tools to annotate data automatically with minimal manual intervention, as well as tools to augment content through style transfer. The video feeds captured from the CCTV cameras installed at a client’s premises are used for different types of analytics corresponding to both the industry and the client’s needs.
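As a rough illustration of what automatic annotation with minimal manual intervention can look like, here is a minimal sketch in which a pretrained detector generates pseudo-labels and only confident detections are kept. The `detector` callable, thresholds and data are assumptions for illustration, not Graymatics' actual tooling.

```python
# Minimal sketch of automatic annotation: run a pretrained detector over video
# frames and keep only confident detections as pseudo-labels for later training.
import json

def auto_annotate(frames, detector, min_confidence=0.8):
    """frames: iterable of (frame_id, image); detector(image) -> list of
    (label, confidence, bounding_box) tuples."""
    annotations = []
    for frame_id, image in frames:
        for label, confidence, box in detector(image):
            if confidence >= min_confidence:  # keep only confident pseudo-labels
                annotations.append({"frame": frame_id, "label": label,
                                    "confidence": confidence, "box": box})
    return annotations

# Example with a stub detector standing in for a real pretrained model.
def stub_detector(image):
    return [("person", 0.93, [10, 20, 50, 120]), ("bag", 0.40, [5, 5, 20, 20])]

frames = [(0, "frame0.jpg"), (1, "frame1.jpg")]
print(json.dumps(auto_annotate(frames, stub_detector), indent=2))
```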
How is that data stored?
Data from publicly available sources, including the internet and publicly accessible CCTV, may be stored in our cloud and used to optimize our AI. A client's CCTV content is always stored on the client's premises with consent, and the client's own IT team handles its security.
Is the data governed under GDPR?
Our applications, as deployed by our clients, operate in full compliance with the UK GDPR. Most of our analytics produce either statistical (not individual-specific) insights or real-time alerts for anomalies and signs of customer discomfort. Clients such as banks never enable our face recognition to identify specific customers, unless the customer is a high-value customer who has opted in.
If you are using data to identify people through facial recognition, how does that work with civil liberties?
Facial recognition is used only for bank staff and high-value customers who have opted in, to streamline their access within the bank premises. Facial authentication is a common application to streamline a customer’s ability to access their private data. All CCTV data is sensitive, and this data, along with all the analytics, is stored securely on the client’s premises or in the client’s cloud account. It is never accessible to third parties.
How does the AI identify emotion/nuance? Knowing a smile from a grimace, for example?
Different emotions are detected by integrating visual information from facial expressions, body movement and gestures with speech, using temporal sequential models that capture how these signals evolve over time. This allows our AI to detect various emotions, such as distinguishing a smile from a grimace, with higher accuracy.
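A minimal sketch of this kind of multimodal, temporal fusion is shown below, assuming per-frame face, body and audio feature vectors from upstream extractors; the dimensions, label set and architecture are illustrative assumptions, not the production model.

```python
# Minimal sketch: fuse per-frame face, body and audio features with an LSTM
# (a temporal sequential model), then classify the sequence into an emotion.
import torch
import torch.nn as nn

EMOTIONS = ["neutral", "happy", "grimace", "angry", "surprised"]  # hypothetical labels

class MultimodalEmotionModel(nn.Module):
    def __init__(self, face_dim=128, body_dim=64, audio_dim=40, hidden=256):
        super().__init__()
        # Temporal model over the concatenated per-frame features.
        self.lstm = nn.LSTM(face_dim + body_dim + audio_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, len(EMOTIONS))

    def forward(self, face, body, audio):
        # face: (batch, time, face_dim), body: (batch, time, body_dim), audio: (batch, time, audio_dim)
        fused = torch.cat([face, body, audio], dim=-1)
        _, (h_n, _) = self.lstm(fused)    # h_n: (1, batch, hidden)
        return self.classifier(h_n[-1])   # per-sequence emotion logits

# Example: one 3-second clip sampled at 10 frames per second.
model = MultimodalEmotionModel()
logits = model(torch.randn(1, 30, 128), torch.randn(1, 30, 64), torch.randn(1, 30, 40))
print(EMOTIONS[logits.argmax(dim=-1).item()])
```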
Are we approaching a time when the Hollywood film “Minority Report” (2002 Steven Spielberg sci-fi film) becomes a reality?
In a Minority Report-style scenario, police divisions can use facial recognition, map the results against previous criminal history, and apply movement detection to monitor irregular behaviour. They can scan for licence plates on cars, run facial recognition to search for potential criminals or missing people, and automatically detect suspicious anomalies such as unattended bags in crowded venues. By using historical data and observing where recent crimes took place, they can predict, to some extent, where future crimes are more likely to happen.
In summary, the tools exist today to make some of the Minority Report-style scenarios that rely purely on visual symptoms a possibility. The limitations, however, lie in the limited coverage of cameras, regulatory provisions around privacy and the handling of the data, and of course the risk of careless interpretation of certain combinations of analytics.
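To make the "historical data" point above concrete, here is a minimal sketch that scores map cells by their proximity to past incident locations using a Gaussian kernel; the coordinates, grid and bandwidth are illustrative assumptions, not a description of any deployed system.

```python
# Minimal sketch of hotspot estimation: score grid cells by how close they
# are to historical incident locations (a kernel-density style score).
import numpy as np

def hotspot_scores(incidents, grid_x, grid_y, bandwidth=0.5):
    """incidents: (n, 2) array of past incident (x, y) locations."""
    xx, yy = np.meshgrid(grid_x, grid_y)
    cells = np.stack([xx.ravel(), yy.ravel()], axis=1)                # (cells, 2)
    d2 = ((cells[:, None, :] - incidents[None, :, :]) ** 2).sum(-1)   # squared distances
    scores = np.exp(-d2 / (2 * bandwidth ** 2)).sum(axis=1)           # density score
    return scores.reshape(xx.shape)

# Example: a few past incidents on a 10x10 unit grid.
past = np.array([[2.0, 3.0], [2.5, 3.5], [8.0, 8.0]])
grid = np.linspace(0, 10, 11)
heat = hotspot_scores(past, grid, grid)
print(np.unravel_index(heat.argmax(), heat.shape))  # grid cell with the densest history
```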
Can you truly predict crime to prevent it?
There are too many parameters, many of them non-visual and many rooted in human and cultural idiosyncrasies, all bound up with the probabilities of different events happening over time. While we may be able to detect certain symptoms that can possibly lead to crime, we don't believe it will ever be feasible to predict a specific crime in order to prevent it, unless we are able to freeze the time axis.
Is there a risk our movements, buying habits, friendships and so on will be analysed and sold to large tech companies?
Unlike traditional centralized machine learning techniques, which require data sets to reside on a single server, we also provide a federated learning platform that reduces data security and privacy concerns by keeping data stored locally; only model updates, rather than raw footage, leave the client's premises. Our platform today ensures that solutions can provide optimum results without compromising the safety and privacy of individuals.
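The following is a minimal sketch of the federated averaging idea, where each client trains on its own private data and the server only ever sees model weights; the linear model and synthetic data are illustrative assumptions, not the platform itself.

```python
# Minimal sketch of federated averaging: clients train locally and share only
# weights; the server averages them without ever pooling the raw data.
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=50):
    """One client's training on data that never leaves its premises."""
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):                           # three clients with private data
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(10):                          # federated rounds
    local_weights = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_weights, axis=0)  # server sees weights only, never data
print(global_w)                              # approaches true_w without pooling data
```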
Is this ‘tech for good’?
This technology is very much focused on improving wellbeing, promoting community safety and security, enhancing productivity and improving people's overall experiences. In concrete terms, video analytics can be used to enhance security and surveillance at banks, protecting bank staff and customers while improving the customer experience and streamlining operational chores that would otherwise take time. Particularly in these times of the global pandemic, video analytics can be used to monitor social distancing and adherence to COVID-19 lockdown rules.
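As one small example of the social-distancing use case, here is a minimal sketch that flags pairs of detected people standing closer than a threshold; it assumes an upstream detector and camera calibration have already mapped people to ground-plane coordinates in metres, which is an assumption for illustration.

```python
# Minimal sketch of a social-distancing check for a single frame: flag pairs
# of detected people whose ground-plane distance is below a threshold.
from itertools import combinations
import math

def distancing_violations(positions, min_distance_m=2.0):
    """positions: list of (x, y) ground-plane coordinates in metres."""
    violations = []
    for (i, a), (j, b) in combinations(enumerate(positions), 2):
        if math.dist(a, b) < min_distance_m:
            violations.append((i, j))
    return violations

# Example frame with three detected people.
people = [(0.0, 0.0), (1.2, 0.5), (6.0, 4.0)]
print(distancing_violations(people))  # [(0, 1)] -> persons 0 and 1 are too close
```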