Accenture Report Explores the ‘Unreal’ World of Synthetic Data and Generative AI
Accenture has released its Accenture Technology Vision 2022, a report examining key technologies under the theme “Meet Me in the Metaverse: The Continuum of Technology and Experience Reshaping Business.”
The report combines input from Accenture’s Technology Vision External Advisory Board (composed of public- and private-sector experts from academia, venture capital, and business) with interviews with industry experts and “a global survey of 24,000 consumers and 4,650 C-level executives and directors across 35 countries and 23 industries.” It examines concurrent technologies such as artificial intelligence, Web 3.0, digital twins, edge computing, and quantum computing, exploring how they are changing business and the human experience in general, all within the context of building the metaverse.
The term “metaverse” may be an instant turn-off for some, given its nebulous reputation: either the next technological revolution, akin to the IBM mainframe or the Internet, or an overhyped marketing fad destined to go the way of Second Life. But Accenture defines it as “an evolution of the internet that enables a user to move beyond browsing to inhabiting and/or participating in a persistent shared experience that spans the spectrum of our real world to the fully virtual and in between,” and argues that the development of metaverse technology is accelerating the convergence of the physical and digital worlds, the “real” and the “unreal.”
The report’s authors explore the concept of the “unreal” through a discussion of artificial intelligence and how “businesses and environments are increasingly supported by AI-generated data that convincingly reflects the physical world.” This compelling mimesis, powered by deepfakes and other generative AI technologies, forces us to ask what is real, and in which circumstances we actually care. The authors give the example of a video of the President: its authenticity matters more when it appears on the news, and less when it is a deepfaked Doritos commercial. This cloudy perception of reality is termed “synthetic realness,” about which the report says, “As synthetic realness progresses, conversations about AI that align good and bad with real and fake will shift to focus instead on authenticity.”
Artificial intelligence is driving that synthetic realness via synthetic data. Accenture says the use of AI was once a competitive advantage for businesses staying ahead of tech trends, but it is now a necessity in a data-soaked world where unlocking data’s insights is key to streamlining business processes, smoothing out the customer experience, and driving better outcomes. To accomplish these goals, many companies are training AI models on a blend of real and synthetic data.
In a June 2021 report, Gartner defines synthetic data as data that is generated by simple rules, statistical modeling, or simulation, as opposed to real-world data gathered from direct measurement of business processes. Accenture’s report cites Gartner’s prediction that most data used in AI modeling will be synthetic by 2030. It attributes this to the fact that “synthetic data is being used to train AI models in ways that real-world data practically cannot or should not. This realistic yet unreal data can be shared, maintaining the same statistical properties while protecting confidentiality and privacy, and it can also be made to have increased diversity and to counter bias, thus overcoming the pitfalls of real-world data.” The report also discusses how synthetic data is being made more “humanlike” for the purposes of creation and interaction, which can help users save time and work, especially in development or customer service contexts.
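The “statistical modeling” route Gartner describes can be illustrated with a minimal sketch: fit a distribution to sensitive real-world records, then sample fresh records that preserve the aggregate statistics without reproducing any individual. (The age/income columns and the Gaussian assumption below are illustrative choices, not anything from the report.)

```python
# Minimal sketch of statistically modeled synthetic data (illustrative only):
# fit a multivariate Gaussian to "real" records, then sample new records that
# share the same means and covariances without copying any individual row.
import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-in for sensitive real-world data: (age, income) pairs.
real = rng.multivariate_normal(
    mean=[40.0, 55_000.0],
    cov=[[100.0, 20_000.0], [20_000.0, 25_000_000.0]],
    size=5_000,
)

# "Train" the generator: estimate the distribution's parameters.
mean_est = real.mean(axis=0)
cov_est = np.cov(real, rowvar=False)

# Sample synthetic records with the same statistical properties.
synthetic = rng.multivariate_normal(mean_est, cov_est, size=5_000)

# Aggregates match closely, yet no synthetic row is a real person's record.
print(np.allclose(real.mean(axis=0), synthetic.mean(axis=0), rtol=0.05))
```

Real synthetic-data tooling layers far more on top (differential privacy, deep generative models, bias rebalancing), but the core bargain is the one shown here: keep the statistics, drop the individuals.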
Given the existence of troll farms, deepfakes, and phishing scams, it is unsurprising that technological advancement attracts opportunistic and malicious users who exploit new tools. The report acknowledges there will be downsides to the rise of synthetic data, some of which are already in play: a lack of trust in information sources (only 35% of respondents to the Edelman Trust Barometer trust what they read on social media) and severe distrust of the technology sector in general. AI-powered bots and troll farms are reaching millions with propaganda and false information; a Brown University study found that 25% of climate crisis-related tweets and 38% of general “fake science” tweets came from bots, and a Carnegie Mellon study found that between 45% and 60% of accounts tweeting about COVID-19 were bots. Accenture calls this state of affairs an “infodemic” and says it could worsen with AI’s evolution and the spread of “disinformation-as-a-service.”
“As there is more and more convincing and alluring disinformation, what is real will become increasingly murky,” Accenture says. “Not only will threat actors be able to cause direct damage to businesses and their reputations, but they could undermine trust in the AI ecosystem on which businesses are now built.”
Regardless of potential pitfalls, which are too numerous to discuss in this short article, AI, especially generative AI, is here to stay. The report says that “73% of global consumers think that over the next three years, the number of times they interact with AI or AI-generated content will increase.” So, what can be done to make those interactions less harmful?
Rather than fixating on what is “real,” Accenture proposes authenticity as the “new compass,” defined as being true to oneself and genuine in a way that others can attest to. More concretely, using generative AI authentically means “taking heed of provenance, policy, people, and purpose.” Distributed ledger technology can help establish the provenance of digital content; Project Origin, a Microsoft-led collaboration, is using it to curb the spread of disinformation. Policy surrounding generative AI can help as well, such as California’s BOT Disclosure Law, “which states that one must disclose the use of a bot when they are used in communication to sell goods or services or influence a vote in an election,” according to the report. Enabling the right people also supports authenticity: organizations should have governance structures that provide accountability and expertise should a disinformation or phishing campaign occur. Finally, purpose comes into play in deciding the best uses for generative AI. The report says that using a bot instead of a person in customer service roles solely to save money most likely lacks authenticity. However, where customers may be embarrassed or hesitant to talk to a live person, such as in the healthcare or beauty industries, an AI “person” might be preferable, which Accenture says would be an authentic way to increase value for the consumer.
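The provenance idea can be sketched without a full ledger: publish a cryptographic fingerprint of a piece of content at creation time, and any later copy can be checked against it. This is a toy illustration of the principle only; systems like Project Origin actually build on signed manifests attached to the media, not a bare hash registry.

```python
# Toy content-provenance check (illustrative; not Project Origin's protocol):
# record a fingerprint when content is published, then verify copies later.
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a SHA-256 digest identifying this exact content."""
    return hashlib.sha256(content).hexdigest()

# The publisher records the fingerprint (e.g., on a distributed ledger).
original = b"President's address, official broadcast"
ledger = {fingerprint(original)}

# A consumer later verifies a copy: any alteration changes the digest.
tampered = b"President's address, edited clip"
print(fingerprint(original) in ledger)   # matches  -> provenance verified
print(fingerprint(tampered) in ledger)   # no match -> content was altered
```

The design point is that verification requires no trust in whoever delivered the copy, only in the ledger entry made at publication time.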
Accenture concludes its report’s section on the “unreal” by admitting that the rising use of synthetic data for AI models could either improve the world or leave it vulnerable to malicious actors, but reality will probably see it land somewhere in between. The company reiterates authenticity as a “compass and framework that will guide your company to use AI in a genuine way – across industries, use cases, and time – by considering provenance, policy, people, and purpose.”
Read the full report here.