Video technology is evolving at a furious pace, and artificial intelligence (AI) and machine learning are the major drivers behind it.
Both have enormous potential to transform video technology and, with it, to fundamentally change the way we live, work and even shop.
One example will illustrate this. In 2018, Thailand’s biggest convenience store chain 7-Eleven implemented facial recognition video technology in its 11,000 stores across Thailand. The technology is used to identify the chain’s loyalty programme members, analyse in-store traffic, suggest purchases and even measure shoppers’ emotions.
The chain is also using the technology to allow managers to single out loyalty programme members for promotions. Essentially, the chain views video as not merely a surveillance tool but also a marketing tool, a tool to better understand customers, a tool to improve staff efficiency and, potentially, a tool to customise shoppers’ in-store experience.
In the healthcare industry, institutions are beginning to use video analytics to improve patient care. Video systems, for example, are being programmed to alert staff when a patient has gone too long without being checked, or to signal that a patient has fallen and needs assistance.
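A rule of this kind can be sketched in a few lines. The threshold and function names below are illustrative assumptions for the sake of the sketch, not any particular vendor’s API or a clinical standard:

```python
from datetime import datetime, timedelta

# Illustrative threshold: how long a patient may go unchecked
# before staff are alerted (an assumption, not a clinical standard).
CHECK_INTERVAL = timedelta(minutes=30)

def needs_check(last_seen: datetime, now: datetime) -> bool:
    """Return True if the patient has gone too long without a check."""
    return now - last_seen > CHECK_INTERVAL

# Usage: the analytics pipeline records when staff last appeared
# in the room's camera feed, then polls this rule periodically.
last_staff_visit = datetime(2024, 1, 1, 8, 0)
print(needs_check(last_staff_visit, datetime(2024, 1, 1, 8, 45)))  # True
```

In a real deployment the “last seen” timestamp would come from the video analytics itself, for instance a person-detection event in the room’s camera feed.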
In almost every sector, AI-powered video technology is already being used to simplify everyday processes, from enabling easier security checks at the airport to allowing passengers to pay for purchases with a smile.
From here on, the tide will only rise, with one billion video cameras projected to be connected to artificial intelligence platforms by 2020.
AI is the ability of a machine or a computer programme to think, act and learn like humans. Previously, limitations in hardware processing power meant that machine learning – an application of AI – could only apply shallow learning to very large data sets, looking at data in just three dimensions.
With recent, significant advances in the processing power of graphics processing units (GPUs), combined with a coding technique known as parallelisation, we can now take a deep learning approach and look at data in many more levels or dimensions. Hence the word “deep”.
Software parallelisation is a coding technique for breaking a single problem into hundreds or thousands of smaller ones. The software can then run those 1,000 smaller tasks across 1,000 processing cores, instead of waiting for a single core to process the data 1,000 times in sequence.
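The idea above can be sketched in a few lines of Python. Summing a large list is the stand-in “single problem” here; the chunk count and function names are illustrative assumptions, not a reference to any particular system:

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """One of the many smaller problems: sum a slice of the data."""
    return sum(chunk)

def parallel_sum(data, n_chunks=4):
    """Split one big problem into n_chunks smaller ones and hand
    them to separate cores, instead of one core working in sequence."""
    size = max(1, len(data) // n_chunks)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor() as pool:
        # The partial results are combined back into one answer.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    numbers = list(range(1_000_000))
    print(parallel_sum(numbers))  # same result as sum(numbers), computed in parallel
```

The split-process-combine shape is the essence of the technique; real workloads (such as training neural networks on GPUs) split work across thousands of cores rather than four.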
Parallelisation delivers a quantum leap in how fast we can solve a problem. Solving problems faster, in turn, lets us go deeper into a problem and process larger, more complex data sets. With the world’s data set to grow tenfold by 2020, being able to process data faster and deeper will become a defining factor in whether businesses can stay ahead of the curve.
One of the greatest applications of AI and machine learning is in performing low-cognitive functions. AI-enabled devices and machines are able to master and perform tasks that humans cannot do very well. With proper aggregation of information, machines can be better at low-cognitive tasks and often deliver a better quality of service than humans.
For example, humans cannot sit and watch two or more video cameras simultaneously. Our attention span simply does not work that way. Machines, however, excel at this. While we see objects, the machine sees the smallest details – each and every pixel. Within each pixel, the machine can see even more: the precise shade of colour at that point in the image.
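At its simplest, a machine “watching” a feed amounts to comparing pixel values frame by frame. The sketch below shows per-pixel change detection on grayscale frames represented as lists of lists; the threshold and minimum pixel count are illustrative assumptions, not values from any real surveillance product:

```python
# Minimal sketch of per-pixel change detection between two
# consecutive grayscale frames (brightness values 0-255).
THRESHOLD = 25  # illustrative: how much a pixel must change to count

def changed_pixels(prev_frame, curr_frame):
    """Count pixels whose brightness changed by more than THRESHOLD."""
    return sum(
        1
        for prev_row, curr_row in zip(prev_frame, curr_frame)
        for p, c in zip(prev_row, curr_row)
        if abs(p - c) > THRESHOLD
    )

def motion_detected(prev_frame, curr_frame, min_pixels=10):
    """Flag motion when enough pixels changed between frames."""
    return changed_pixels(prev_frame, curr_frame) >= min_pixels
```

A machine applies a check like this to every frame of every feed, around the clock – exactly the kind of tireless, pixel-level attention no human operator can sustain.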
By aggregating data and by allowing machines to automate responses and solutions, we can boost humans’ interaction with their environment.
Here is one example. Imagine a scenario where a law enforcement officer is viewing a large surveillance screen. Between the officer and the large screen is an additional clear screen onto which video and data can also be cast. Finally, the officer is wearing smart glasses that can project information onto their lenses.
In a surveillance situation, a video feed can be shown on the main screen, while the screen in the medium distance augments it by appearing to layer extra visuals on top, for instance the face of a suspect. The smart glasses will then show detailed text data, for instance car licence plates or descriptions of suspects. The live video, augmented visuals and text data all work in concert. Data on the main viewing screen can even change according to what the officer is seeing through the smart glasses’ lenses. While this may sound rather futuristic, it is actually possible today.
With AI, there will be massive advancements in how we review and utilise video and data.
The City of Hartford in Connecticut, United States of America, is a great case study of how technology can be turned into a force multiplier. Working in tandem with local law enforcement and partners BriefCam and Axis, Milestone Systems was able to enhance the City’s C4 Crime Center and significantly upgrade the Hartford Police department’s ability to prevent and effectively respond to incidents throughout the city.
The Crime Center features the Milestone XProtect Smart Wall, which has thirty 55-inch, 4K video monitors connected to a high-powered workstation, which runs the Milestone XProtect Smart Client on every screen. The Center is staffed almost round-the-clock by civilian crime analysts who monitor the 450 PTZ video network cameras from Axis Communications that are located throughout the city.
With this centre, instead of spending 30 hours on low-cognitive, manual tasks such as freezing on a rooftop monitoring a drug house all day and night, the City of Hartford’s officers can now sit at their desks and, within just a few minutes, know exactly where a drug house is by viewing an augmented visualisation of foot traffic over time.
With the enhanced system, officers can simply go into the data and pinpoint the problem with precision and efficiency. Not only are more crimes now solvable; the video technology is also changing how police work will be done in the future.
Having machines take over low-cognitive tasks will be a significant game changer for years to come.
Take Amazon. The online retail giant is applying this to its retail stores where the concept of a checkout is being replaced by customers simply walking out. By using data from smartphones, cameras, sensors, purchase histories and other data points, Amazon is making it possible for its customers to walk into a store, pick up what they need and walk out. Everything else is taken care of by machines. This type of thinking and tool creation is in its earliest infancy but will continue to address areas where value can be added to our lives.
In the book The Inevitable, author Kevin Kelly says the next 10,000 startups will be based on finding applications for AI, similar to what happened with electrification during the second industrial revolution. The intelligent industrial revolution is beginning to happen all around us. It will be very disruptive within the security and surveillance industry. But it will also be insightful and liberating, as it will free humans to perform higher cognitive processes.