Separating AI and ML From The Marketing BS Trickling Into DS
October 2, 2017 by Dave Haynes
It’s no surprise that digital signage solutions companies are starting to weave artificial intelligence into marketing descriptions of what they do and offer. But there is a giant distinction between companies that are developing and delivering AI capabilities, and those that are just using AI-based tools.
There’s also a big distinction between content management and playback functions that are based on data and pre-defined rules, and actual AI.
If you really want to stretch things out, I could say Sixteen:Nine is using AI to tell readers about the digital signage market, because I use an AI service to auto-transcribe recorded interviews (such a time-saver!) and I sometimes ask my Google Home to tell me the weather forecast when I wander downstairs from my office, foraging for food.
16:9 was started back in the Mesozoic Era to filter the industry’s bullshit, and it’s obvious that an industrial-grade BS filter is going to be needed to sort out the rising use and abuse of AI in digital signage sales and marketing.
The first way to develop that filter: talk to someone who runs a company that’s genuinely IN the AI business. So I rang up Rodolfo Saccoman, who runs the computer vision company AdMobilize, which is based in Miami, FL.
Saccoman’s company is heavily populated by brainiacs writing all the code that ingests and analyzes video streams to do pattern detection and recognition, generating analytic reports and data triggers about everything from Digital OOH viewing audiences to traffic counts and characteristics. Much of it is AI work. He kindly gave me an hour of his time to give me a layman’s tour of AI, and separate it from rules-based software.
The AI Family
There’s a hierarchy, he says. “At the top you have artificial intelligence. Then you go to machine learning. Then you go to deep learning, and within deep learning, you have neural networks, and then you have applications of these neural networks.”
“So artificial intelligence – AI – is a very, very general term, and it’s the broadest way you can think about advanced computer intelligence. Think of it like a feature of intelligence that you can train, so a machine can simulate that feature.”
Looked at another way, artificial intelligence is the software that enables machines to do things that have, until now, required human intelligence to do.
There are two types of AI:
- Applied AI, or what Saccoman calls Narrow AI. That’s software and machines designed to do very specific things – like Google DeepMind’s AlphaGo program, which beat the world’s top Go player (Go is arguably the world’s most sophisticated board game). Computer vision, what AdMobilize does, is Narrow AI.
- General AI – The somewhat more “out there” aspiration that machines can be designed to complete a wide variety of tasks that would normally be performed by humans. General AI requires that machines learn and evolve their skills and understanding as new and different tasks get put in front of them. That’s called machine learning.
“So AI is really an overall name for a collection of, let’s say, techniques. And what that can mean, because the terminology is so popular now, is anything from a computer being trained to play a chess game, to voice recognition to things like computer vision.”
Narrow or Applied AI involves a huge set of these techniques. “If you look at computer vision for face detection and emotional analysis, you’re counting and you’re evaluating distinguishing kinds of facial features, and then if you were to build that kind of category with different techniques, you might do computer vision for people, for crowds, for vehicles, objects and so on.”
“So that’s kind of like the over-arching set of categories. And then underneath it is a subset of AI, in which you have machine-learning – and that really refers to a group of reinforcement learning algorithms, which teach themselves over time. And what that means is that the main principle is that you have data – a large amount of data – that is input, and then you create these models, for the models to learn by themselves.”
The system then learns to recognize patterns, and to make predictions from them.
“So as an example, let’s say that we create this database of millions of Asian faces, which has its own kind of particular traits. And you put all of that in a database and then create a neural network, which is basically a network of coding that tries to duplicate how the mind operates and makes decisions. When you are putting in all of these different data, you are classifying the data.”
“So I’m saying, ‘OK, this is all data from a male person, within the ages of 20 and 25. And you know this person in that picture has an emotion that equals happy.’ And then you do that for millions of images. And the system starts training and identifying these patterns, and the system is trying to do that by itself, so it doesn’t mean that you have to manually be inputting every picture with its own kind of classification. And that’s where that kind of intelligence starts to come in.”
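To make that concrete, here’s a minimal sketch of that supervised training loop in Python, using scikit-learn. Everything in it is invented for illustration – the feature numbers, the labels, the model choice. A real computer vision pipeline would extract thousands of measurements per face, from millions of labelled images, not four numbers from four rows.

```python
# A toy version of the classification training Saccoman describes.
# Hypothetical data: each row stands in for measurements extracted
# from one face image; real systems use far richer features.
from sklearn.ensemble import RandomForestClassifier

features = [
    [0.62, 0.31, 0.88, 0.12],
    [0.58, 0.35, 0.79, 0.22],
    [0.41, 0.52, 0.33, 0.71],
    [0.44, 0.49, 0.30, 0.68],
]
# The human-supplied classifications, e.g. "male, 20-25, happy".
labels = [
    "male_20-25_happy", "male_20-25_happy",
    "female_20-25_neutral", "female_20-25_neutral",
]

model = RandomForestClassifier(random_state=0)
model.fit(features, labels)          # the system "trains" on the examples

# A new, never-before-seen face: the model predicts its classification.
print(model.predict([[0.60, 0.33, 0.85, 0.15]]))
```

Once the model generalizes from enough labelled examples, nobody has to manually classify every new picture – which is exactly the point Saccoman is making.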
Deep Learning Is The Heady Stuff
The really heady stuff involves deep learning, which uses neural networks to simulate human decision-making or human classification of things. Deep learning runs massive datasets through graphics processors to generate whatever the outputs may be – the graphics side of computing gets used because GPUs chew through the highly parallel math involved far faster than conventional CPUs.
In these artificial brains, so to speak, the neurons in the network have specific questions to ask, and get binary yes or no answers. So for a job like using computer vision to determine, through pattern detection, the characteristics of shoppers in a store, the neurons get their answers, and those answers compile into broader answers about male vs. female, age ranges, total numbers, dwell time and so on.
There are relatively simple neural networks, “kind of like building blocks for many AI approaches,” says Saccoman, “or you go into deep learning, and you have deep neural networks. Basically it’s like a stack of neural networks, built in several layers, which gives that kind of description of being deep.”
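For the curious, here’s a bare-bones illustration of that stacked-layers idea, in plain Python with NumPy. The layer sizes are arbitrary and the weights are random; a real deep network learns its weights from training data rather than rolling dice.

```python
# A minimal "stack of layers": each layer is a weighted sum of its
# inputs pushed through a nonlinearity. Stacking layers is what makes
# a network "deep". All numbers here are random and illustrative.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, n_out):
    weights = rng.standard_normal((inputs.shape[0], n_out))
    return np.maximum(0, inputs @ weights)   # ReLU: a rough yes/no gate

x = rng.standard_normal(128)   # stand-in for features from a camera frame
h1 = layer(x, 64)              # first layer of "neurons"
h2 = layer(h1, 32)             # second layer, stacked on the first
out = layer(h2, 2)             # e.g. two scores, male vs. female
print(out)
```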
If you have used Siri on an iPhone or Alexa at home, you’re working with AI that’s built around a neural network. But there is a whole lot more possible than answering consumer questions and ordering stuff. Think about AI being used for things like reviewing medical imaging to assist in diagnosing illnesses, and pulling relevant data to recommend the best treatment – based on mountains of data, and not just one specialist’s experience and insight.
I’ve seen AI being slipped into marketing materials in much the same way, a few years back, that companies were all jumping on the buzz surrounding cloud services. Abruptly, companies that used co-located hosting facilities to cut costs on infrastructure, and never talked about that because few cared, were In The Cloud and Cloud-based!
Now unquestionably, AI is slick technology to be touting, and more interesting than trying to hype new drag-and-drop functions in a CMS. But when a company says its scheduling and targeting is being done using AI, it’s very unlikely the platform doing that work has anything to do with AI – other than getting data from a third-party system that really does do some form of AI – like AdMobilize and Quividi and a short list of other companies working around the edges of the industry ecosystem.
Real AI Isn’t Easy, Or Cheap
“I cringe all the time,” says Saccoman, “when I hear somebody say, ‘Oh yeah we’re going to develop the roadmap. So we’re going to develop our own AI.’ I kind of smile at the ingenuity of the comment, and cringe because they have no clue, really, what it is, and what it takes.”
“We have invested $7.5 million in this company, and have some of the best talent in the world, I think, or we at least compete for the best talent in the world. It’s been a really hard journey to get to where we are.”
Where AI will tend to get appropriated is when it comes to things like content triggers used for dynamic scheduling in digital signage platforms. The more sophisticated systems out there use data rules to help determine and trigger what files play and when – so that a data trigger such as a low inventory threshold for a food item might cause that item to dynamically disappear from the digital menu board.
That happens based on a rule that lays out how IF the stock for an item drops to a certain low number, THEN make that item disappear and do X instead. That’s slick, but it’s not AI.
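In code, that menu board logic is nothing more than a conditional. A hypothetical sketch – the item names and the threshold are invented:

```python
# Rules-based scheduling: plain IF/THEN logic, no learning involved.
LOW_STOCK_THRESHOLD = 5  # hypothetical cut-off

def visible_menu_items(menu_items, inventory):
    """Hide any item whose stock has dropped to the threshold or below."""
    return [
        item for item in menu_items
        if inventory.get(item, 0) > LOW_STOCK_THRESHOLD
    ]

inventory = {"fish sandwich": 2, "burger": 40, "salad": 18}
print(visible_menu_items(["fish sandwich", "burger", "salad"], inventory))
# -> ['burger', 'salad']
```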
However, an AI-driven parallel system – like computer vision – could provide real-time data that shapes and triggers that sort of dynamic scheduling, based on what the AI is “seeing” and reporting. For example: TSA screening lineups are running 20+ minutes, so flash messages saying Gate D’s screening area is open and its lines are five minutes.
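Here’s a hedged sketch of how that hand-off could look. The feed URL, field names and thresholds are all hypothetical; the point is that the scheduling rule itself stays simple, while the input comes from a computer vision system doing the actual AI work upstream.

```python
# Consume (hypothetical) real-time queue data from a computer vision
# service, and pick the message a rules engine should display.
import json
from urllib.request import urlopen

CV_FEED_URL = "http://cv-analytics.example/api/queues"  # hypothetical

def pick_message():
    with urlopen(CV_FEED_URL) as resp:
        queues = json.load(resp)   # e.g. {"gate_c_wait_min": 22, ...}
    if queues.get("gate_c_wait_min", 0) >= 20:
        return "Gate D screening is open - lines are about 5 minutes"
    return "Standard screening wait times at all gates"

# A sign player would call pick_message() on each content refresh.
```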
Detroit’s WaitTime, which uses computer vision to load-balance concession and restroom lines at sports venues, is a rare example of a digital signage display platform that expressly built AI into its platform.
Figuring Out What’s Real
So … as a buyer or solutions provider, how can you filter the noise and get your head clear on what’s being offered and whether it’s AI or not?
Well first, maybe it doesn’t matter all that much. End-users need capabilities and deliverables, and I’ve worked with plenty of end-users who couldn’t care less about the code-base, or about who laid claim to the intellectual property.
But, you do want to know what the technology is, particularly if you are being up-sold on something that’s shiny and special, when it’s really not.
You want to know who created the technology, because that’s the company that will support it when or if something isn’t right.
And you want to understand that it does, as the Brits say, what it says on the tin.
Two Main Use-Cases For AI In Signage
So how to think of AI in the context of digital signage? Saccoman breaks the use-cases down to advertising/marketing, and corporate.
Advertising/Marketing
- Real-time analytics (who’s watching, how long, etc.)
- Content triggers (targeted messages instead of broadcast)
- Interactive (messaging based on gestures and sensors)
- Accountability (media fees based on actual faces counted, not gross audience estimates)
Corporate
The first three from the list above, plus:
- Facial recognition – Using employee ID photos as the database to trigger messages, ranging from something as simple as a birthday greeting when a worker arrives in the morning, to targeted messages based on things like the department they work in (the people in the warehouse don’t need reminders about the quarterly sales meeting).
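For illustration, here’s a hypothetical sketch of that lookup. The face recognition itself would be done by a dedicated computer vision system; all this code does is turn the employee ID that system returns into a message.

```python
# Map a recognized employee ID to a targeted on-screen message.
# The IDs, names and departments are invented for illustration.
from datetime import date

EMPLOYEES = {
    "emp-0042": {"name": "Pat", "dept": "sales", "birthday": (10, 2)},
    "emp-0137": {"name": "Lee", "dept": "warehouse", "birthday": (3, 14)},
}

def message_for(employee_id, today=None):
    today = today or date.today()
    emp = EMPLOYEES.get(employee_id)
    if emp is None:
        return None                          # unrecognized: show nothing
    if emp["birthday"] == (today.month, today.day):
        return f"Happy birthday, {emp['name']}!"
    if emp["dept"] == "sales":
        return "Reminder: quarterly sales meeting at 2pm"
    return None                              # warehouse staff skip that one
```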
Where this all goes, in the context of digital signage, is anyone’s guess. What’s clear is that computer vision, in particular, is providing a far deeper understanding of the characteristics and behaviors of the audiences for screens, whether ad-based, retail or corporate. Some of those insights are immediately actionable, while others establish patterns that may force re-thinks on how things are done.
Real-time data can shape messaging – whether that’s traffic billboards telling jammed-up commuters about commuter rail, or screens telling hungry fans at an NBA game how to avoid the longest lines.
We’re now increasingly accustomed to talking to personal assistants like Alexa, Siri and Google Home, and giving them orders. It’s no greater a leap – and it has very likely already been done – to trigger content on a digital sign through voice commands than it is to ask an AI assistant to put a meeting in your calendar.
The biggest thing to take on board with AI: it’s not just buzz. It’s very real, and it’s coming fast.
Go Deeper
Need to know more? A quick Google search could put you deep in the weeds, with more information than you want or can understand. But searching “AI primer” or “AI simplified” will generate some good stuff.
Here’s a video primer I found:
https://vimeo.com/170189199