Focus on APAC
August 28, 2024
Delve into the last ten years of technology, focusing on AI's need for quality input data. Learn why prioritizing data relevance is crucial for better insights.
Sydney, Australia, 2014: at the inaugural IIEX APAC event, I spoke about how technology is increasingly binding itself to human behavior.
Back then, suggesting that our daily lives would soon be so deeply intertwined with technology might have seemed futuristic.
Yet here we are. 2024. Ten years later.
We’re surrounded by wearables and the Internet of Things (IoT) monitoring everything from our social behavior, to sleep patterns, to cat litter boxes. We have more quantitative data points that mirror human behavior than ever before.
The abundance of data as we infuse AI into market research brings a critical challenge: ensuring that the global work we build on AI-infused apps and services is reliable, accurate, and, most importantly, relevant.
Artificial Intelligence is a combination of three things: data, math, and technology.
We take data, apply math, and use technology to deliver output that resembles the output of intelligence, and that output is a prediction of what will happen next. That's what every human with intelligence does millions of times per day: predict the next thing. So let's break it down:
Math (calculus and statistical models) hasn't fundamentally changed over the years. It's been around for a while, right?
Technology has evolved rapidly, but in categories that have largely remained static: more computational speed, more memory, more bandwidth, more storage and more distribution. We’re delivering faster.
Five years ago, today's AI capabilities were already possible, but predictions took days or even weeks to deliver. That's not practical for the general consumer, so the market was small. Today, with predictions ready in fractions of a second, AI activates an economy worthy of the across-the-board investments in education, tools, and careers that we've seen.
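To make that data-math-technology split concrete, here is a minimal, purely illustrative sketch; the toy series, the "next value" task, and every number in it are my own assumptions, not anything from a real system. The data is a short behavioral series, the math is a decades-old least-squares fit, and the technology is simply whatever runs it in milliseconds instead of weeks.

```python
import numpy as np

# Data: a short, made-up series of observed behavior (e.g., weekly purchases).
observed = np.array([3, 4, 4, 5, 6, 6, 7], dtype=float)

# Math: an ordinary least-squares trend line -- statistics that predate
# today's AI tooling by a long way.
weeks = np.arange(len(observed))
slope, intercept = np.polyfit(weeks, observed, deg=1)

# Technology: the hardware and runtime that make this prediction effectively
# instant instead of a batch job.
next_week = len(observed)
prediction = slope * next_week + intercept
print(f"Predicted next value: {prediction:.1f}")
```

Swap in a bigger dataset or a fancier model and the shape of the exercise doesn't change: the math and the machinery are interchangeable, and the data decides whether the prediction means anything.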
AI is prime time now. AI is faster now. And it’s cheaper.
But is it better now? Are we even on the path to make it better?
Here's a concrete example: my facial data has no business training a facial coding AI deployed in Japan, where the faces the model will actually analyze look nothing like mine. My inputs for that model don't make sense at all. Skin tone, face shape, and the impact of lighting would be almost irrelevant.
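To make the relevance problem concrete, here is a hedged sketch of the kind of check that conclusion implies; the field names, records, and threshold below are invented for illustration, not a real audit or dataset.

```python
from collections import Counter

# Hypothetical training records for a facial-coding model.
training_records = [
    {"region": "north_america", "skin_tone": "light"},
    {"region": "north_america", "skin_tone": "light"},
    {"region": "europe", "skin_tone": "light"},
    {"region": "east_asia", "skin_tone": "medium"},
]

# Assumed make-up of the deployment population (e.g., a Japan rollout).
deployment_share_east_asia = 0.97

# How much of the training data actually matches that population?
region_counts = Counter(r["region"] for r in training_records)
training_share_east_asia = region_counts["east_asia"] / len(training_records)

gap = deployment_share_east_asia - training_share_east_asia
if gap > 0.25:  # arbitrary threshold; a real evaluation would be richer
    print(f"Training data is {training_share_east_asia:.0%} East Asian faces "
          f"vs. {deployment_share_east_asia:.0%} in deployment: "
          "this source data is largely irrelevant for that market.")
```

The point isn't the arithmetic; it's that a relevance check like this has to happen before the model is trained, not after the predictions start looking wrong.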
That's a pretty obvious conclusion. But AI is like any other tech: it won't stay this obvious to us for long.
Who remembers the iPod? One device with one purpose: copy your music onto it and play your music. Now, that capability is embedded inside streaming media and software buried in every smartphone, feathered into hundreds of other purposes.
That’s what happens. New tech becomes old tech by falling to the back, embedded among other tech, away from the surface. Away from our attention. AI will do the same.
So while we have it at the forefront and can test simpler AI implementations than what’s to come, let’s not lose sight of the only thing that extends or limits an AI’s capabilities: Its source data.
The math doesn’t really change. Any algorithm run on the same input data will eventually hit a ceiling… it can only be so good.
And technology will only make the same math run faster and cheaper.
If source data is the barometer of AI success, how are we making it easier to acquire, evaluate, and infuse into increasingly powerful and increasingly embedded AI capabilities?
Nobody’s cornered that market yet. I’d say it’s not seriously being looked at. Most things aren’t until it’s too late.
Having spent my life in technology, I've seen firsthand how easy it is to be enamored with fast, shiny outputs. I get paid for fast, shiny outputs.
And it’s important to keep in mind another staple from my 2014 IIEX APAC talk:
“Garbage in, garbage out.”
We’re drawn to quick results and impressive-looking AI predictions. However, it's the inputs—the data—that make all the difference. Without high-quality, relevant data, even the most advanced AI models will produce misleading insights, inaccurate results and compromise the livelihood of every industry researcher, decision-maker and technologist reading this.
We are at a critical point. If we prioritize the importance of input data now, we can avoid creating an industry of AI, apps, and insights that are misled by superficial output results for years.
We can keep the industry growing in the right direction, focused on where the true value lies: in the data that fuels the systems through which we deliver value to the world.
By placing AI input data quality at the forefront of the value behind what we build and sell, and letting the outputs be the byproduct of that proper focus, we retain and grow our value, our accuracy, and the return on clients' investment in market research as AI layers itself into our everyday research tasks.
Successfully focusing on AI input data requires a deliberate choice: systematic evaluation and quality-checking of the data that fuels the AI models that make market research better.
The first step: make the right data easily accessible in the first place. What’s not readily accessible cannot possibly be evaluated for relevance.
If accessibility barriers remain high, data stays behind doors that can't afford to be opened. It sits in the back of the warehouse, so to speak, inaccessible to the brains and algorithms that need it if AI is to evolve without us stepping off the path.
It's crucial that we don't settle for making AI only as good as whatever source data happens to be readily available; everyone needs to be able to quickly choose the right data for their specific needs. Market research is not monolithic, and we have to treat the inputs to our future success accordingly, or we limit that success to the limits of those inputs.
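As one hedged sketch of what that systematic evaluation could look like in practice, here is a minimal relevance-and-quality pass over a candidate dataset; the record fields, segments, and checks are my own illustrative assumptions, not a standard or a product.

```python
from collections import Counter

def evaluate_source(records, required_fields, required_segments, segment_field="segment"):
    """Return simple quality and relevance signals for a candidate dataset."""
    report = {}

    # Completeness: share of records with every required field populated.
    complete = sum(
        all(r.get(f) not in (None, "") for f in required_fields) for r in records
    )
    report["completeness"] = complete / len(records)

    # Duplication: share of exact duplicate records.
    keys = [tuple(sorted(r.items())) for r in records]
    report["duplicate_rate"] = 1 - len(set(keys)) / len(records)

    # Relevance: which segments this particular study needs but the data lacks.
    covered = Counter(r.get(segment_field) for r in records)
    report["missing_segments"] = [s for s in required_segments if covered[s] == 0]

    return report

# Hypothetical usage for one study's specific needs.
sample = [
    {"respondent_id": 1, "segment": "apac_urban", "age": 34},
    {"respondent_id": 2, "segment": "apac_urban", "age": 29},
    {"respondent_id": 2, "segment": "apac_urban", "age": 29},
]
print(evaluate_source(sample, ["respondent_id", "age"], ["apac_urban", "apac_rural"]))
```

Different studies would plug in different fields and segments; the constant is that the evaluation happens against the study's needs, not against whatever data happens to be on hand.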
The democratization of anything brings a greater demand for transparency and accountability among the masses, and organically drives higher standards and quality as the process and outcomes are subject to broad social and economic scrutiny and feedback.
This applies, above all, to data. The democratization of data is what can lock in the inputs for the next wave of AI innovations, activate revolutions for market research, and maximize relevance for the AI-infused insights we can deliver.
Reflecting on the last ten years, technology has indeed bound itself to human behavior in ways we never could have imagined back in 2014. Wearables, IoT devices, and smart technologies have become layered into our daily lives.
We’re generating quantitative data that activates insights into everything from our health to our homes to our unique habits that we don’t even detect.
However, as we look ahead to the next AI-infused decade, the focus shifts from data ubiquity to data accessibility: data that is relevant, accessible, and accurately representative of the huge breadth of diverse needs AI-infused market research will have to serve.
It’s now not a data supply challenge. It’s a data accessibility challenge.
We can solve it, if we look past the shiny objects.
Get your shades on, stay cool, and get moving before poor AI source data moves in on us.