October 18, 2021 12:22 pm
Dr Shiri Gordon is the director of the team behind the algorithms that power viisights’ advanced video analytics technology, which is used by cities, transport hub operators, education campuses, and more.
She shares her insights on algorithm accuracy, privacy and what drives her in the role.
You have an interesting job title: Director of Algorithm at viisights. Can you explain what the role entails?
Algorithms are the step-by-step procedures behind the computer programs that run the world.
Algorithm developers create code by applying logic and reasoning, or by building machine learning models that translate data inputs into the desired outputs without human intervention.
Algorithm developers are responsible for the functional code that allows for the “intelligence” of a piece of technology. Their primary role is to solve complex computational problems by researching, designing and testing their models and making sure that the desired output is efficiently and accurately achieved.
As the Director of Algorithms at viisights, I manage and mentor a group of algorithm developers that are responsible for the development and improvement of viisights’ behavioural recognition technology.
We constantly improve existing models, research and test new technologies, and support new features. I’m also a focal point for collaboration with other R&D teams, and I oversee the processes around the company’s data, from which we develop our models.
What is your background that brought you to this role, and what attracted you to it?
I have been working in computer vision and machine learning for more than 20 years now.
This field attracted me right from the start of my professional life as a software engineer because I was eager to develop a technology that would enable the computer to ‘see’ and ‘understand’ the world around us.
During my PhD studies, I decided to research the field of medical image analysis as I wanted to develop algorithms that would help humanity cope with cancer. I later worked in a medical device company, trying to apply my research to the real world.
I began working at viisights two years ago because I wanted to sharpen my skills in deep learning technology. Viisights is the perfect place for that, as we work with some of the most advanced algorithms for video analysis in near real-time.
In addition to that, viisights’ mission – to identify suspicious and abnormal behaviour in various scenarios to alert the required personnel and provide help in real-time – is aligned with my desire to develop products that will improve our wellbeing and safety.
Can you explain more about how viisights’ systems can understand what is happening in a situation in order to flag events of interest?
Viisights’ Wise solution is a video analytics system that goes beyond conventional imaging and analytics solutions. It detects not only objects, such as people and vehicles, and their basic attributes, like clothing colour or vehicle type, but also their behaviour and interactions over time.
Through the power of AI and deep learning, Wise extends the notion of attributes to dynamic features, such as a person running, falling or throwing an object, and provides a holistic understanding of the scene with respect to objects, their behaviour and mutual interactions in near real-time.
The Wise platform can process multiple video streams on a single graphics processing unit (GPU) and detect events of interest in real-time. This gives command and control operators a deeper understanding of unfolding events: it enables them to recognise behavioural patterns that predict an event which may need to be escalated, and it alerts the appropriate personnel so they can investigate and decide whether immediate action is needed.
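To make the idea concrete, the sketch below shows the general shape of such a multi-stream analysis loop: frames from several cameras are batched into a single inference call on one GPU, and detections above a confidence threshold are surfaced as alerts. The stream URLs, the BehaviourModel class and the threshold value are illustrative assumptions, not viisights’ actual Wise API.

```python
# Minimal sketch of a multi-stream behaviour-analytics loop. The model class,
# stream URLs and threshold are hypothetical placeholders for illustration.
import cv2  # OpenCV, used here only to read video streams

STREAM_URLS = ["rtsp://camera-1/stream", "rtsp://camera-2/stream"]  # placeholder URLs
ALERT_THRESHOLD = 0.8  # assumed confidence level for raising an alert


class BehaviourModel:
    """Stand-in for a GPU-batched behaviour-recognition model."""

    def infer(self, frames):
        # A real model would return, per frame, a list of
        # (event_label, confidence) pairs such as ("person_running", 0.92).
        return [[] for _ in frames]


def main():
    model = BehaviourModel()
    captures = [cv2.VideoCapture(url) for url in STREAM_URLS]
    while True:
        frames, sources = [], []
        for idx, cap in enumerate(captures):
            ok, frame = cap.read()
            if ok:
                frames.append(frame)
                sources.append(idx)
        if not frames:
            break
        # Batch frames from all streams into a single inference call on one GPU.
        for src, detections in zip(sources, model.infer(frames)):
            for label, confidence in detections:
                if confidence >= ALERT_THRESHOLD:
                    print(f"stream {src}: possible '{label}' event ({confidence:.2f})")


if __name__ == "__main__":
    main()
```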
How can cities be assured of a high level of accuracy?
The combination of detection over time, deep learning, and the use of NVIDIA GPUs and software frameworks enables viisights’ Wise to achieve competitive accuracy with an industry-leading low false-positive rate (FPR). This performance makes Wise an operationally friendly platform for control centres.
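For context, the false-positive rate is simply the fraction of non-events that the system wrongly flags as events. A tiny worked sketch, using made-up counts rather than viisights’ benchmark figures:

```python
# Illustration of how a false-positive rate (FPR) is computed when evaluating
# an alerting system; the counts below are invented for the example.

def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """FPR = FP / (FP + TN): the share of non-events wrongly flagged as events."""
    return false_positives / (false_positives + true_negatives)

# Example: 3 false alerts out of 1,000 clips that contain no event of interest.
print(false_positive_rate(false_positives=3, true_negatives=997))  # 0.003, i.e. 0.3%
```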
What is the role of artificial intelligence (AI) and machine learning in viisights’ technology? Can you give an example?
Viisights uses advanced machine learning algorithms based on unique deep learning technology for video analysis. Models that capture different behaviours and the dynamics of a scene are trained not only on images, but on real-world videos and augmented examples that capture behaviour over time.
One of our most important features, for example, is the detection of people fighting. The model in that case is trained to differentiate between fighting and non-fighting people.
Learning over time makes it easier for the model to distinguish between people who are wrestling and people who are simply hugging each other. The model learns from example videos collected from different public sources or filmed during staged simulations with actors.
For each object in the video, the model produces a numeric signature that roughly describes the object’s movements and attributes over time and contains no personal details or biometric information. The system then uses these signatures to decide whether or not a fight is occurring in a given scene.
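A minimal sketch of that idea is shown below, assuming hypothetical signature features (speeds, motion energy, proximity) and an off-the-shelf scikit-learn classifier standing in for viisights’ deep learning model:

```python
# Sketch: each tracked object is summarised as an anonymous numeric signature
# over a time window, and a classifier decides whether the scene looks like a
# fight. Feature names and the classifier are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [mean speed, speed variance, limb-motion energy, distance to nearest person]
# No identity or biometric information is encoded in the signature.
X_train = np.array([
    [0.5, 0.1, 0.2, 1.5],   # calm walking
    [0.8, 0.2, 0.3, 0.4],   # hugging: close together, little fast limb motion
    [2.5, 1.8, 2.9, 0.3],   # fighting: close together, erratic fast motion
    [2.2, 1.5, 2.7, 0.2],   # fighting
])
y_train = np.array([0, 0, 1, 1])  # 1 = fighting, 0 = not fighting

clf = LogisticRegression().fit(X_train, y_train)

new_signature = np.array([[2.4, 1.6, 2.8, 0.25]])  # signature from a new video track
print("fight probability:", clf.predict_proba(new_signature)[0, 1])
```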
Surveillance systems often raise privacy concerns. What would you say to cities concerned about this?
Viisights’ technology protects public privacy: it only analyses general behaviour patterns of individuals and groups. It does not identify faces or licence plates.
What else should cities know?
Viisights’ behavioural recognition engine requires a significantly lower hardware footprint relative to other machine vision applications, which drives low total cost of ownership.
The system can be easily scaled to hundreds and thousands of real-time video streams per deployment. New behaviours or features can be easily added, and we are constantly expanding the set of actions and events supported by our product.
Dr Shiri Gordon has over 20 years of experience in the research and development of image processing and computer vision algorithms. She holds BSc, MSc and PhD degrees in engineering from Tel Aviv University and was a postdoctoral fellow at the Department of Electrical and Computer Engineering at Ben-Gurion University of the Negev. Shiri worked as a senior algorithm developer at Superfish, and as the head of the algorithm team at Biomedical. She has published numerous articles in leading conferences and journals in the fields of computer vision and medical image analysis.