The End of Prompt Engineering? AI Learns to Speak Our Language
For years, the rise of large language models (LLMs) has required users to develop a new skill: prompt engineering. To get useful responses, people have had to carefully craft their queries, learning the nuances of how AI interprets language. But that dynamic may be shifting. With advances in natural language processing (NLP) and multi-modal AI, systems are evolving to interact with humans more naturally, eliminating the need for users to shape their inputs with forced precision.
Brett Barton, Vice President and Global AI Practice Leader at Unisys, recently discussed this transition in an interview with AIwire, pointing to findings from the company’s recent report, “Top IT Insights for 2025: Navigating the Future of Technology and Business.” The report highlights a trend in which AI is being trained to adapt to humans, rather than humans being trained to adapt to AI. According to Barton, this shift could signal the decline of prompt engineering as a critical skill.
Prompt Engineering: Becoming Obsolete?
For many enterprise applications, in industries ranging from manufacturing to healthcare, users need AI to function seamlessly in their environment, especially in unpredictable conditions. Crafting long, detailed prompts may not be possible.
“In the manufacturing realm, you have employees that are in a crazy, noisy, often poorly lit environment, and they're trying to create a prompt that will generate a usable, or even useful response,” Barton said. “And what we found is that, as you interact with ChatGPT, or Claude, etc., if you don't give it enough where it can specifically relate and pull some data back, you're sort of in this foggy middle, and it's tough to get anything valuable out of it.”
A lot hinges on an LLM’s ability to retrieve relevant and contextual information from its training data or external sources. Without sufficient context, AI struggles to generate meaningful responses, often producing vague or inaccurate results. Prompt engineering focuses on crafting prompts to be detailed and precise, allowing the model to match a query to existing knowledge, extract relevant details, and generate a response that aligns with the user’s needs.
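The difference between an under-specified and a well-engineered prompt can be sketched in a few lines. The helper functions below are purely illustrative (they build prompt strings, not calls to any vendor's API), but they show the kind of context a detailed prompt supplies that a vague one omits.

```python
# Illustrative sketch: a vague prompt gives the model little to anchor on,
# while an engineered prompt pins down role, audience, and output format.
# These are hypothetical helpers, not any real library's API.

def vague_prompt(topic: str) -> str:
    # Under-specified: no role, format, or constraints for the model to use.
    return f"Tell me about {topic}."

def engineered_prompt(topic: str, role: str, audience: str, fmt: str) -> str:
    # Detailed: the extra fields let the model match the query to relevant
    # knowledge and shape a response aligned with the user's needs.
    return (
        f"You are a {role}. Explain {topic} to {audience}. "
        f"Respond as {fmt}, citing concrete examples."
    )

if __name__ == "__main__":
    print(vague_prompt("vibration alarms"))
    print(engineered_prompt(
        "vibration alarms",
        role="maintenance engineer",
        audience="a new line operator",
        fmt="three numbered steps",
    ))
```

The point the article makes is that newer systems increasingly fill in those missing fields (role, intent, format) from conversational context, so users no longer have to supply them explicitly.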
With advancements in NLP, however, AI systems are becoming more adept at handling imprecise input, allowing users to engage in more natural conversations with AI. These developments are enabling AI to infer context from less structured user input, reducing the burden on users to phrase their requests in such a highly specific way.
How NLP and Multi-modal AI are Driving This Evolution
As NLP continues to advance, its integration with other AI modalities is creating more intuitive user experiences. Instead of relying solely on text-based interactions, AI is evolving to interpret and respond to multiple forms of input, from voice commands to visual cues. According to Barton, these developments are setting the stage for a future where AI interactions feel more natural and adaptive.
“I see NLP interacting with and playing well with multi-modal AI and GenAI,” he says. “We're seeing advancements in not only the voice recognition component, but we're also seeing it in real time for language translation. In addition, we're starting to see cameras that are more accurate, processing becoming faster, and we've got gesture detection. We're on the cusp of seeing some relatively accurate predictive AI to enable even more intuitive, natural user interaction.”
“No one is going to be upset about not having to hammer out a query,” he adds, mentioning how these improvements will allow us to live beyond our keyboards, interacting with AI on our mobile devices, autonomous vehicles, or anywhere else generative AI has yet to be deployed.
According to Barton, the maturation of NLP that is leading to more intuitive AI applications will also enable higher user satisfaction. This is particularly relevant for industries where traditional text-based interactions are not practical. In healthcare, for example, physicians often dictate notes rather than typing them. AI systems using NLP can listen, extract key details, and initiate workflows like automatically calling in prescriptions, scheduling follow-ups, and flagging potential health concerns. The goal is for AI to operate as a behind-the-scenes assistant, anticipating needs rather than requiring constant refinement from users.
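The dictation-driven workflow described above can be sketched as a simple pipeline: transcribed speech comes in, key phrases are extracted, and downstream workflows fire automatically. Everything here is a hypothetical illustration under that assumption, not a real clinical system or a method Barton describes in detail.

```python
# Hedged sketch of a behind-the-scenes dictation pipeline: scan a
# transcribed note for trigger phrases and return the workflows to run.
# All phrase and workflow names are hypothetical examples.

TRIGGERS = {
    "prescribe": "call_in_prescription",
    "follow up": "schedule_follow_up",
    "elevated": "flag_health_concern",
}

def extract_actions(dictation: str) -> list[str]:
    """Return the workflow names a dictated note should trigger."""
    text = dictation.lower()
    return [action for phrase, action in TRIGGERS.items() if phrase in text]

note = ("Patient reports improvement. Prescribe amoxicillin 500 mg. "
        "Follow up in two weeks; blood pressure remains elevated.")
print(extract_actions(note))
# → ['call_in_prescription', 'schedule_follow_up', 'flag_health_concern']
```

A production system would replace the keyword table with a trained extraction model and add the compliance controls discussed later in the article, but the shape (listen, extract, act) is the same.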
If Prompt Engineers Disappear, Who Takes Their Place?
With AI taking on more of the responsibility for effective communication, the skill sets required to work with these systems will undoubtedly change. Barton predicts a rise in demand for linguists and other specialists who can refine AI’s ability to interpret and generate natural human language. “I think you're going to see people from the linguistic space that are really helping these systems achieve a higher level of efficiency and efficacy based upon the requests or the prompts that are given verbally,” Barton says.
Another critical shift will be in AI architecture, which will need to evolve to support rapid back-and-forth interaction, rather than the slower process of refining a written prompt. “You're also going to have to look at architecture because people are able to speak much more rapidly and more quickly turn around another request if the system doesn't generate the information that they sought the first time around,” he explains.
Ensuring AI can keep pace with natural speech and real-time interactions will require collaboration. AI and cloud architects will need to design scalable infrastructure capable of handling the increased data flow and computational demands of voice-driven AI. Software engineers and NLP specialists will focus on optimizing models for faster response times, reducing latency, and improving context retention across interactions.
Challenges in Deploying Voice-Driven AI
As AI moves toward voice-driven interactions, organizations will need to address several challenges, including security and governance concerns. AI models trained to recognize speech must function reliably in diverse environments with background noise, while also encrypting and securely handling the data they capture. Device compatibility is another concern: many workplaces allow employees to use personal devices, and AI-driven interactions on those devices raise questions about security and data ownership.
As AI-driven voice interactions become more prevalent, governance and compliance frameworks will also be crucial in ensuring these systems operate within legal and ethical boundaries. Regulations like HIPAA in healthcare impose strict requirements on how sensitive data like patient records can be collected, processed, and stored. Organizations deploying voice-based AI will need to implement robust security measures, Barton says, such as encryption and access controls, to prevent unauthorized access or breaches.
Additionally, industries handling financial or personal data must comply with regulations like the E.U.’s GDPR, which mandates transparency in AI decision-making and in the handling of user data. Beyond legal compliance, organizations will also need to develop internal governance policies to address AI biases and ensure that these AI interactions align with ethical standards.
AI is an Evolving Program, Not a One-time Project
The move toward AI that understands humans, rather than the other way around, represents a major shift in the evolution of NLP and generative AI. As these systems become more intuitive, the rigid world of prompt engineering could become a relic of the past.
Barton says successful AI deployment relies on three critical pillars: data quality, security, and organizational change management. High-quality, structured data is the foundation, as AI systems can only be as effective as the information they process. Organizations struggling with poor data quality often turn to generative AI use cases instead of traditional AI, but emerging techniques like diffusion strategy are helping clean and validate legacy data.
The second pillar, security, ensures that AI-driven queries return results quickly and securely, requiring robust infrastructure to handle high-speed data flow without latency issues. Finally, organizational change management plays a crucial role in adoption. Without proper training and user guidance, even the most advanced AI solutions will fail to deliver ROI.
“This is not a project. It does not have an end. This is a program. Like a child, it requires constant care and feeding, or the value that it delivers begins to drop,” Barton says, adding that organizations should not be deterred by the challenges ahead.
“Let's just be sure we know what needs to be done to build a fully functional AI application that meets your needs, that's flexible enough to scale with you, but also helps your people understand how best to use it so they can get the benefits,” he concludes.