Partner Content

GRIT

May 28, 2024

The Automation Era: AI Tooling and the Magnification of Consequences

Discover how AI tools enhance fraud detection, emphasizing transparency, fairness, and privacy. Navigate the automation era with trust and efficiency.

Monica Bush

VP of Information Security at Tango, a division of Blackhawk Network (BHN)

One statistic stood out to me in the 2024 GRIT Insights Practice Report: a 21% increase in the use of sample fraud detection services, greater than any other sample quality factor. This highlights our growing reliance on AI to perform tasks at scale. However, insights professionals need to examine key considerations when employing fraud detection or any other AI tools for insights.

We appreciate the quantum leap these tools enable, but how often do we consider their potential to magnify objectionable traits? What responsibility do we have to minimize or avoid unintended consequences? Similar to flying a plane or handling a gun, improper use or lack of oversight of AI tools can have significant consequences, and we as professionals must thoughtfully examine the risks. 

Transparency and Explainability in AI Tools

Start with understanding how an AI tool’s predictions work. This requires transparency, and tool providers should share how their AI systems make decisions and which factors they consider. Ask for a transparency report, which should provide detailed statistics about the AI system’s performance and effectiveness.

Also, consider obtaining certification or license verification for any AI-enabled tool under consideration. Certification programs, such as the OECD’s Algorithmic Transparency Certification for Artificial Intelligence Systems, use comprehensive questionnaires to evaluate the explainability, fairness, and level of consumer protection offered by AI systems.

Creators of AI tools (or insights suppliers using them) must be willing and able to explain how the models are created, tuned, and trained, as well as to perform common-sense tests to evaluate their predictions. An ideal fraud detection and remediation service should prioritize transparency and accountability so customers can fully evaluate the benefits and risks.
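The common-sense tests mentioned above can be as simple as a small harness of expectations any reasonable fraud model should satisfy. The sketch below is illustrative only: `score_fraud` is a hypothetical stand-in for a vendor's scoring function, and the response fields and thresholds are assumptions, not features of any real tool.

```python
# Hedged sketch: common-sense tests against a black-box fraud scorer.
# `score_fraud` is a toy stand-in for a vendor API, included only so
# the harness runs; in practice you would call the real service.

def score_fraud(response: dict) -> float:
    """Toy scorer: flag sub-second completions and straight-lining."""
    if response["seconds_to_complete"] < 1:
        return 0.95  # implausibly fast -> likely a bot
    if len(set(response["answers"])) == 1:
        return 0.80  # identical answers to every question
    return 0.10

def sanity_checks() -> None:
    """Assert the scorer behaves sensibly on obvious cases."""
    fast_bot = {"seconds_to_complete": 0.4, "answers": [3, 1, 4, 1]}
    straightliner = {"seconds_to_complete": 300, "answers": [5, 5, 5, 5]}
    thoughtful = {"seconds_to_complete": 300, "answers": [4, 2, 5, 1]}
    assert score_fraud(fast_bot) > 0.5, "obvious bot should be flagged"
    assert score_fraud(straightliner) > 0.5, "straight-lining should be flagged"
    assert score_fraud(thoughtful) < 0.5, "plausible response should pass"

sanity_checks()
```

If a supplier cannot support this kind of basic probing of their model's behavior, that itself is a transparency finding.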

Bias Awareness in AI Tools

The root magic of AI is the human-generated data it is trained on. In 2024, the UNESCO International Research Center on Artificial Intelligence published a report on gender bias in Large Language Models (LLMs). They found pervasive bias that reinforces stereotypes against women in areas such as loan approvals, psychiatric diagnoses, and education.

If you want your research insights to represent diverse populations accurately, it is your responsibility to ensure that models are transparent and capable of self-examination. They should be able to identify and measure biases so they provide fair and accurate results for all population cohorts.
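Measuring bias across cohorts can start with something very plain: compare how often each cohort is flagged. A minimal sketch, assuming hypothetical record fields ("cohort", "flagged") and using the common "four-fifths rule" from fairness audits as a warning threshold:

```python
# Hedged sketch: per-cohort flag-rate comparison on illustrative data.
# Field names and the 0.8 threshold are assumptions for illustration.
from collections import defaultdict

def flag_rates_by_cohort(records):
    """Return the fraction of records flagged as fraud in each cohort."""
    counts = defaultdict(lambda: [0, 0])  # cohort -> [flagged, total]
    for r in records:
        counts[r["cohort"]][0] += int(r["flagged"])
        counts[r["cohort"]][1] += 1
    return {c: flagged / total for c, (flagged, total) in counts.items()}

def disparity_ratio(rates):
    """Lowest flag rate divided by highest; values below 0.8 are a
    common 'four-fifths rule' warning sign in fairness audits."""
    return min(rates.values()) / max(rates.values())

sample = [
    {"cohort": "A", "flagged": 1}, {"cohort": "A", "flagged": 0},
    {"cohort": "A", "flagged": 0}, {"cohort": "A", "flagged": 0},
    {"cohort": "B", "flagged": 1}, {"cohort": "B", "flagged": 1},
    {"cohort": "B", "flagged": 0}, {"cohort": "B", "flagged": 0},
]
rates = flag_rates_by_cohort(sample)  # {"A": 0.25, "B": 0.5}
print(disparity_ratio(rates))         # 0.5 -> worth investigating
```

A large gap does not prove the model is unfair, but it is exactly the kind of number a transparent provider should be able to report and explain.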

Privacy and AI Tools

With data protection regulations like GDPR in Europe and CCPA in California in place, researchers must ensure strict compliance to avoid fines and reputational damage. These regulations dictate how personal data may be processed and govern complex, varied international data transfers.

It's not just about safeguarding insights at one point of use but about ensuring confidentiality throughout the process. For any AI tool that captures personal data, it’s mandatory to obtain consent from each participant for each new analysis. Provide a clear privacy notice and an easy-to-access way for participants to opt out at any time.

Ask how the tool provider anonymizes data and ensures confidentiality. Have them explain their protocol in the event of a material breach and how you would be notified. AI tools should be continuously monitored, retrained, and improved to ensure ongoing compliance and user trust, and participants must be able to manage their settings or opt out.
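When a provider explains their anonymization approach, one common technique they might describe is salted (keyed) hashing of direct identifiers. The sketch below is a minimal illustration, not any specific vendor's method; the field names are assumptions. Note that under GDPR this is pseudonymization, not full anonymization, because whoever holds the key can still re-link records.

```python
# Hedged sketch: keyed-hash pseudonymization of a participant ID.
# SECRET_SALT would in practice be a managed, rotated secret, not an
# in-process random value.
import hashlib
import hmac
import os

SECRET_SALT = os.urandom(32)

def pseudonymize(participant_id: str) -> str:
    """Replace a direct identifier with a keyed SHA-256 hash so records
    can be linked across analyses without exposing the raw ID."""
    return hmac.new(SECRET_SALT, participant_id.encode(), hashlib.sha256).hexdigest()

record = {"participant_id": "p-1029", "response": "satisfied"}
safe = {**record, "participant_id": pseudonymize(record["participant_id"])}
```

Questions worth asking a provider: where the key lives, who can reverse the mapping, and how deletion requests propagate to pseudonymized records.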

Conclusion

As AI tools rapidly take over routine tasks, insights professionals celebrate the power to shape future decisions on an unimaginable scale. However, we must also examine the risks. Perform due diligence to ensure proper transparency, eliminate biases, and honor legal and regulatory privacy mandates. Ultimately, we must ground our appreciation of the benefits in a solid understanding of the risks.



Disclaimer

The views, opinions, data, and methodologies expressed above are those of the contributor(s) and do not necessarily reflect or represent the official policies, positions, or beliefs of Greenbook.

2024 GRIT Insights Practice Report

The annual GRIT Insights Practice Report focuses on trends in methodology adoption, how methodologies and suppliers are chosen, and how organizations invest in insights.

Data collected Q1 2024

May 2024

About partner

Tango’s mission is simple: We make gift cards easy to send and awesome to receive. By bundling easy-to-use technology, desirable incentives, and expert service, we help companies get the most out of their reward and incentive programs—from customer acquisition to employee engagement. With our leading reward-delivery technology, customers can instantly deliver digital gift cards to their target audiences, maximizing impact and driving real business results.
