A Wary Look at AI from an IT Implementation Veteran
If the AI hype cycle were a washing machine, it would now be deep in the spin cycle. It’s intense. The talk and hopes surrounding AI are exciting, the potential is enormous, and we wait breathlessly for an incredible new age of smart systems that will revolutionize not just IT but, seemingly, the whole world.
Sorry to be an AI buzzkill. Hey, I love vaporware as much as the next guy. But for those of us in the IT trenches, excitement about AI is tempered by that most plodding and prosaic of realities: implementation. You know, things like dependencies, cybersecurity, workflows, governance and all that stuff.
On the front lines, the job is getting machine learning and deep learning to actually work within the existing IT environment, and that's no mean feat. The difficulties and risks of implementation are major reasons why shiny new tech objects, particularly those with the far-reaching impact of AI, can see disappointing market uptake once they emerge into the light of day.
Here's why AI at scale will be no picnic to implement:
A full-scale enterprise contains a multitude of business units, departments, operational support groups and so forth, each with objectives that compete for priority and attention from the CIO and other senior IT managers. Handling this complexity cost-efficiently is a major challenge, particularly for large organizations that may have re-aligned or outsourced a number of times over the years. Those changes affect controls, governance and risk management.
So, when considering the implications of next-generation AI systems, the consistent application of good practices, controls and security standards throughout the organization is a critical success factor; mishandle or neglect it, and the potential benefits of AI will degrade.
Operational Complexity
When a business (a financial services firm in particular) invests in a process automation initiative, the point is to eliminate tedious manual filing and data analysis and to automate standard underwriting decisions and communications between teams. The goal is to remove human effort from repetitive, formulaic, non-"value-add" tasks.
At the enterprise level, these projects can consume hundreds of person-years in designing, developing and testing the decision rules and their outcomes, relying especially on the knowledge of operational subject matter experts. The number of structured data elements can reach hundreds of thousands, and the matrix of possible scenarios by possible data values is incalculable. And that is just one enterprise-class, integrated, workflow-based system purposed for one major line of business.
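To make the scale problem concrete, here is a minimal sketch of what a rules-based underwriting decision looks like in code. Everything in it — the rule names, field names, thresholds and outcomes — is invented for illustration; a real enterprise system encodes thousands of expert-authored rules like these against hundreds of thousands of data elements, which is where the combinatorial explosion comes from.

```python
# Toy rules-based underwriting decision. All rule names, fields and
# thresholds are hypothetical; real systems hold thousands of rules.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    applies: Callable[[dict], bool]  # predicate over the applicant record
    decision: str                    # outcome if the predicate matches

# Two illustrative rules out of the thousands a real system would carry.
RULES = [
    Rule("high_debt_ratio", lambda r: r["debt"] / r["income"] > 0.45, "refer"),
    Rule("clean_low_risk", lambda r: r["credit_score"] >= 720, "approve"),
]

def decide(record: dict) -> str:
    """Return the first matching rule's decision, else route to a human."""
    for rule in RULES:
        if rule.applies(record):
            return rule.decision
    return "manual_review"

print(decide({"debt": 50_000, "income": 100_000, "credit_score": 740}))
# -> refer
```

Even in this two-rule toy, rule order changes outcomes (the debt-ratio rule fires before the credit-score rule), which hints at why testing the full scenario-by-data-value matrix of a production system is so expensive.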
So using artificial intelligence software to automate or streamline such a process, or a legacy system consolidation, is a complex and time-consuming challenge. Configuring the software, handling the masses of data and producing accurate results add up to an all-encompassing, expensive undertaking with limited ROI. There are many more ways to fail or fall short than there are ways to succeed.
In the face of these complexities – or in ignorance of them, in the case of new leaders taking over an organization – a natural optimism can take hold, along with a tendency to dismiss or obscure potential landmines.
Understand, IT leadership overconfidence is prevalent and frequently rewarded in the marketplace. Of course, vendors feed into and off of overconfidence; they sell consulting services and software regardless of their fitness for an organization. Disregarding the unknowns and dismissing the complexity of implementing a major AI system can result in executives taking risks they should avoid.
Operational Governance & Security Risk Escalation
"It's becoming harder than ever for chief information officers and chief information security officers to track and manage the vast network of computer systems in an enterprise." – "Scourge of Outdated Software Creates Massive Cyber Risk," Wall Street Journal, 9/18/17
The recent Equifax breach exposed security management shortcomings that prevail in many large organizations. The breach was enabled by a years-old vulnerability in a globally used web development framework for enterprise-scale systems. Even in enterprises good at flaw detection and remediation, it can take months for patches to be made in all critical, exposed systems – and that’s after the flaw is found.
Each complex, rules-based system requires technical and business staff for monitoring, maintenance, patching and reduction of technical debt. Enterprise data governance and security practices require constant attention, adequate staffing, regular maintenance and regular audits. For business and internal political reasons, stretched IT budgets often prioritize the "new initiative" over continued investment in existing systems.
As a result, lean IT hosting teams can fall behind on applying security patches. Another problem: server and system tracking can be flawed and siloed by department. In new initiatives, or in organizations going through "right-sizing," governance practices are frequently de-prioritized or under-staffed.
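The tracking problem above is, at bottom, a reconciliation task: compare what inventory says is running against minimum safe versions. Here is a toy sketch of that check; the inventory schema, package names and version numbers are all invented for illustration and do not come from any real asset-management tool.

```python
# Toy patch-backlog check. The inventory records, package names and
# version tuples below are hypothetical, invented for this example.
required = {"web-framework": (2, 3, 32)}  # minimum safe version, illustrative

inventory = [
    {"host": "app01", "package": "web-framework", "version": (2, 3, 5)},
    {"host": "app02", "package": "web-framework", "version": (2, 3, 33)},
    {"host": "db01",  "package": "database",      "version": (9, 6, 0)},
]

def patch_backlog(inventory, required):
    """Hosts running a tracked package below its minimum safe version."""
    return [item["host"]
            for item in inventory
            if item["package"] in required
            and item["version"] < required[item["package"]]]

print(patch_backlog(inventory, required))  # -> ['app01']
```

The check itself is trivial; the hard, expensive part in a real enterprise is keeping the inventory accurate across departments, which is exactly where siloed tracking breaks down.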
Now imagine a company that has invested in a new predictive modeling implementation, meant to generate new marketing insights, geo-location risk analysis or intelligent product pricing, and has taken shortcuts on security or governance. The havoc a WannaCry-type malware infection could wreak on an enterprise AI decision-making system staggers the imagination.
Before businesses begin implementing truly advanced AI systems, they first need to manage and prepare their existing ones. Budgets will remain under scrutiny. Reducing or managing several types of risk simultaneously will be mission-critical. The next generation of AI technologies has great potential, but that potential must be smartly focused and managed, and it will involve much human effort.
Matthew Wynne’s 30+ years in IT have spanned roles from developer to consultant to program manager at major financial services companies and not-for-profit organizations.