
AI Consumer Trends, Regulation, and Business, from 2020 to Today

This morning, I reflected on my 2020 interview with the Marketing AI Institute, in which I emphasized the importance of maintaining humanity in the development of AI technologies. I discussed AI’s role in daily use cases and its potential risks, urged companies to give consumers the information they need to make informed choices, and advocated for educational programs that combine AI theory with practical experience (much like the ones we’ve now developed at Emory University). I also offered some advice on ethical considerations.

Since that 2020 interview, we’ve seen a sea change in consumer awareness around data privacy, leading to more informed choices and a demand for transparency.

In fact, AI was a focus for many of Georgia’s state legislators this year, many of whom view it as a salient consumer issue. I was asked to provide feedback on several bills relevant to AI in Georgia via referrals from Sen. Jon Albers, the legislature’s Public Safety and Science and Technology committees, and the GA Chamber’s Georgia Business Action Network. The themes I noted were:

  1. Data and Privacy: There’s apprehension that AI could lead to unprecedented breaches of privacy, as it can process vast amounts of personal data to identify patterns and make predictions.
  2. Impersonation and Fraud: People are worried that bad actors will use their image, voice, or some other likeness to defraud loved ones. The generic story I heard repeatedly was of an elderly woman bilked out of her savings by a fraudster who called her on FaceTime with a deepfaked likeness of her grandchild, claiming to be in serious trouble and in urgent need of her money.
  3. Electoral Misinformation: Relatedly, legislators are concerned about AI’s role in spreading misinformation during elections. Ensuring the accuracy and reliability of information during election cycles is a significant challenge in the AI era.
  4. Transparency and Accountability: AI systems can perpetuate or even exacerbate biases if they’re trained on biased data, leading to discriminatory outcomes and legal transgressions. There’s a fear that AI decision-making processes can be opaque, making it hard to ascertain how decisions are made or to hold anyone accountable for those decisions. (Even a simple statistical audit, like the sketch after this list, can surface skewed outcomes early.)
  5. Security: AI systems can be susceptible to hacking and other forms of cyberattacks, posing risks to consumer data and safety.
  6. Copyright: Legislators are exploring how to adapt copyright law to the unique challenges posed by generative AI, balancing the rights of original content creators against the new dynamics of AI-driven content generation so that creators are protected, innovation continues, and consumer interests are safeguarded. I’ll add here that folks often seem to conflate the issues of copyright and privacy.
  7. Job Displacement: Legislators are also concerned about the potential for AI to automate jobs, which many believe will create unemployment and economic disparity. I view AI as a productivity enhancer that will cause our next giant economic leap forward—but I think that this leap will only be possible if we equip folks with the skills to use these technologies.
  8. Health: Many legislators (and audience members I’ve had in the past few months) actually consider AI and issues related to social media, like mental health for teens, to be part and parcel of the same problem. I consider these to be separate issues, but I found it interesting that perspectives at times fused the two. I imagine it’s because people view “The Algorithm,” the code that makes social media content recommendations, as a subspecies of AI generally.
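
To make the transparency-and-accountability point concrete: skewed outcomes are often detectable with very simple arithmetic. Below is a minimal sketch, assuming fabricated loan-approval decisions for two hypothetical groups; it computes a disparate impact ratio and flags results under the common “four-fifths” heuristic. The data, groups, and threshold are all illustrative assumptions, not a compliance standard.

```python
# Minimal outcome audit on hypothetical loan-approval decisions.
# All data is fabricated for illustration; a real audit would use actual
# model outputs and protected attributes. The 0.8 cutoff is the common
# "four-fifths rule" heuristic, not a universal legal standard.

decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group):
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate("A")  # 0.75
rate_b = approval_rate("B")  # 0.25

# Disparate impact ratio: the disadvantaged group's rate over the
# advantaged group's rate.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Group A: {rate_a:.2f}  Group B: {rate_b:.2f}  ratio: {ratio:.2f}")

if ratio < 0.8:
    print("Potential disparate impact -- flag for human review.")
```

The point isn’t the arithmetic; it’s that an organization unable to run even this check on its own AI-driven decisions has no credible answer to the accountability concern legislators keep raising.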

From business owners and executives, I’m hearing a different concern (although all of the items above are salient to them and carry implications for how they orient their enterprises):

Leaders are worried that the pace of governmental innovation has been too slow to reduce their downside risk.

They’re building technologies now that may need to be reworked at great expense once the new rules are written. In other words, to keep pace with this technological transformation, businesses are making judgment calls that could turn out to be wrong in a year or so.

Addressing these concerns through policy requires a framework that ensures AI technologies are developed and deployed in a way that protects consumers. I have yet to see a perfect framework, although some offer suitable, if incomplete, approaches (e.g. NIST’s AI Risk Management Framework). As a principal investigator for NIST’s US AI Safety Institute Consortium, I hope we can continue to improve this framework.
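
For teams wondering what adopting a framework like the AI RMF looks like day to day, one lightweight practice is a risk register keyed to the RMF’s four core functions (Govern, Map, Measure, Manage). The sketch below is my own illustrative structure, not anything NIST publishes as code; the example system, risk, and mitigations are hypothetical.

```python
# Hypothetical risk-register entry organized around the four core functions
# of NIST AI RMF 1.0 (Govern, Map, Measure, Manage). The framework itself is
# prose guidance, not code; this is just one way a team might track its own
# alignment with it.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    system: str   # the AI system or use case under review
    risk: str     # the identified risk, in plain language
    govern: str   # policy and ownership: who is accountable
    map: str      # context: where and how the risk arises
    measure: str  # metric or test used to quantify it
    manage: str   # mitigation and monitoring plan

register = [
    RiskEntry(
        system="customer-support chatbot",
        risk="hallucinated answers presented as company policy",
        govern="VP of Support signs off on all deployed prompts",
        map="risk arises on out-of-scope billing questions",
        measure="weekly accuracy review of sampled transcripts",
        manage="fallback to a human agent below a confidence threshold",
    ),
]

for entry in register:
    print(f"{entry.system}: {entry.risk} -> managed via {entry.manage}")
```

Even a register this simple forces the conversation the framework is after: a named owner, a defined context, a measurement, and a plan.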

I believe a monolithic approach is impossible: any attempt to solve these problems with a single answer is futile, because business use cases are diverse and highly domain specific. I recommend that leaders:

  1. Stay Informed: Regularly update your knowledge on AI trends and their implications through research, attending conferences, and engaging with thought leaders. Understanding the landscape is crucial for strategic decision-making.
  2. Assess Impact: Evaluate how AI trends affect the specific use cases and technologies on which your business is built (or on which it will eventually be built). Consider conducting impact assessments to identify opportunities and risks, ensuring that your strategy is responsive to the evolving landscape.
  3. Consult Experts: Engage with AI consultants or advisory services to gain deeper insights and tailor strategies to your business needs. Their expertise can provide a competitive edge, helping you navigate complexities and make informed decisions.

Don’t let fast-moving trends lead to costly mistakes. Reach out today to ensure you’re making informed choices in this dynamic landscape.
