HP: How smarter systems are making IT more complex
AI is reshaping enterprise IT, but smarter systems are increasing complexity as organisations struggle to scale, integrate, and govern AI at work.
Enterprises are heading into 2026 with better AI tools, faster infrastructure, and bigger ambitions than they had a year ago. Yet many CIOs are finding that day-to-day technology management feels harder, not smoother. The problem is not a lack of innovation. It is the way new capabilities are landing on top of estates that were already crowded, inconsistent, and difficult to govern.
Table of Contents
- AI adoption is rising, but scaling remains the missing link
- AI agents introduce power and complexity in equal measure
- Why simplifying the tech stack will matter more than adding new tools
- AI-enabled devices and hybrid platforms will reshape employee expectations
- Human-centric IT design will become the real differentiator
That tension is showing up in a simple pattern across the region. AI appears everywhere in the organisation, but it rarely changes the organisation. Teams run pilots, publish internal demos, and deploy assistants in pockets of work. Then progress stalls. The operational questions arrive first, and they are less forgiving than the early excitement. Who owns the data the model relies on? Which systems are allowed to call which tools? What does “good” look like when an automated decision affects a customer, a clinician, or a trader?
This is why the current moment is best understood as an integration challenge. AI adoption is rising, but integration is where programmes either become durable or drift into a permanent trial state. CIOs are being pushed towards a more disciplined agenda, where the primary task is to reduce friction across systems, simplify workflows, and create environments that employees can trust.

Jennifer Baile, Global Head of Portfolio and Solutions at HP, frames the obstacle in operational terms rather than technical ones. “There’s no shortage of ambition around AI, but scale takes more than enthusiasm,” she says. “The biggest blockers aren’t technical capability; they’re trust, data readiness, and operating maturity.”
AI adoption is rising, but scaling remains the missing link
Most enterprises now have some form of AI in production, even if it is limited to a single function or team. The more important fact is what happens after that first deployment. In many organisations, AI becomes an additional layer to manage rather than a force that simplifies work. Leaders approve new tools, but the underlying processes, data ownership, and decision rights remain unchanged. The result is predictable. AI creates pockets of improvement but fails to reshape the operating model.
The gap between usage and impact is often blamed on model quality or tooling choices. That is a distraction. The more common cause is workflow design. When AI is embedded in a fragmented process, it inherits that fragmentation. When AI is asked to work with inconsistent data, it amplifies inconsistency. When teams deploy AI without clear ownership, every change becomes a negotiation. Scaling requires more than rolling out licences. It requires a redesign of how work moves through the organisation.
This redesign is not a one-time programme. It is a series of leadership decisions that establish what the organisation is prepared to standardise. That includes a common approach to data quality, a consistent way to manage prompts and policies, and clear accountability for outcomes. These are governance choices, and they become more important as AI tools begin to touch more of the organisation’s critical work.
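As a purely illustrative sketch (the names and thresholds here are hypothetical, not an HP product or API), those standardisation choices can be expressed as a single policy object that every proposed AI deployment is reviewed against, so data quality, prompt policy versions, and accountable owners are checked the same way in every market:

```python
from dataclasses import dataclass

# Hypothetical illustration: one shared policy object that every AI
# deployment in the estate is validated against before go-live.
@dataclass
class AIUsagePolicy:
    owner: str                      # named accountability for outcomes
    approved_models: set[str]       # models cleared by governance
    min_data_quality_score: float   # shared data-quality bar (0 to 1)
    prompt_policy_version: str      # consistent prompt and policy management

@dataclass
class DeploymentRequest:
    team: str
    model: str
    data_quality_score: float
    prompt_policy_version: str

def review(request: DeploymentRequest, policy: AIUsagePolicy) -> list[str]:
    """Return the list of governance gaps blocking this deployment."""
    gaps = []
    if request.model not in policy.approved_models:
        gaps.append(f"model '{request.model}' is not on the approved list")
    if request.data_quality_score < policy.min_data_quality_score:
        gaps.append("source data does not meet the shared quality bar")
    if request.prompt_policy_version != policy.prompt_policy_version:
        gaps.append("deployment uses an out-of-date prompt policy")
    return gaps

if __name__ == "__main__":
    policy = AIUsagePolicy(
        owner="group-cio-office",
        approved_models={"internal-llm-v2"},
        min_data_quality_score=0.9,
        prompt_policy_version="2026-01",
    )
    request = DeploymentRequest(
        team="apac-service-desk",
        model="internal-llm-v2",
        data_quality_score=0.85,
        prompt_policy_version="2026-01",
    )
    for gap in review(request, policy):
        print("blocked:", gap)  # here: data quality sits below the shared bar
```

The specifics are invented, but the design choice is the point: when the standard lives in one place, a new region or team inherits the same checks instead of negotiating them from scratch.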
In the Asia-Pacific, the scaling challenge is sharpened by structural realities. Many enterprises operate across multiple markets, each with distinct regulatory expectations, talent pools, and levels of digital maturity. Even within a single country, employees may be spread across headquarters, satellite offices, frontline sites, and hybrid roles. AI can support distributed work, but it also raises the cost of inconsistency. Without shared standards, every region becomes a custom integration project.

Baile argues that the barrier is confidence rather than intent. In her view, organisations hesitate because they cannot yet prove that AI will integrate cleanly, deliver outcomes reliably, and avoid introducing new risk. That confidence gap is often rooted at the endpoint. AI systems may run in the cloud, but they are used on devices that sit at the edge of the enterprise, where data is accessed, prompts are created, and actions are initiated. Baile points to security foundations as one of the quiet enablers of scale. Platforms such as HP Wolf Security are designed to protect endpoints against both known and unknown threats, reducing the risk that AI introduces new vulnerabilities into everyday work. In this context, security underpins AI deployment from the outset and creates the conditions for moving from pilots to production with confidence.
AI agents introduce power and complexity in equal measure
AI agents make this operational test far more demanding. Unlike a chatbot that answers questions, an agent can plan steps, call tools, and execute tasks across systems. This creates a new promise: work can be orchestrated rather than manually coordinated. It also creates a new burden: the organisation must now govern automated action, not just automated output.
The first impact of agents is a shift in where complexity lives. The employee experience can look simpler because the user interacts with a single interface. Behind that interface, orchestration becomes the work. Multiple systems must be connected, permissions must be managed, and the agent’s behaviour must be monitored. This is why agent adoption quickly becomes a leadership question. CIOs have to decide how much autonomy is acceptable, where escalation points sit, and which parts of the business are ready for automated action.
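A minimal sketch of what that orchestration-level governance can look like, assuming a hypothetical in-house tool-permission layer rather than any specific vendor framework: each requested action passes a permission check, high-impact tools are escalated to a human, and every decision is written to an audit log so the agent's behaviour stays observable over time:

```python
import logging

# Hypothetical guardrails an orchestration layer might apply before an
# agent acts: a per-tool permission check, forced escalation for
# high-impact actions, and an audit trail for every decision.
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("agent-audit")

ALLOWED_TOOLS = {"reset_password", "clear_cache"}     # low-risk, pre-approved
ESCALATE_TOOLS = {"grant_access", "change_firewall"}  # always needs a human

def authorise(agent_id: str, tool: str) -> str:
    """Decide whether the agent may run the tool, and record the decision."""
    if tool in ALLOWED_TOOLS:
        decision = "allow"
    elif tool in ESCALATE_TOOLS:
        decision = "escalate"   # route to a human approver
    else:
        decision = "deny"       # unknown tools are blocked by default
    audit.info("agent=%s tool=%s decision=%s", agent_id, tool, decision)
    return decision

if __name__ == "__main__":
    print(authorise("helpdesk-agent-01", "reset_password"))   # allow
    print(authorise("helpdesk-agent-01", "change_firewall"))  # escalate
    print(authorise("helpdesk-agent-01", "delete_backups"))   # deny
```

The tool names and rules are placeholders; what matters is that autonomy limits, escalation points, and logging are decided once, centrally, rather than rediscovered by every team that builds an agent.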
In many organisations, the service desk is an early proving ground. Agents can triage, suggest fixes, and carry out routine remediation. Knowledge management is another early use case, where agents can retrieve and summarise information across large internal repositories. These use cases matter because they touch core IT functions and quickly influence employee trust. If the agent reliably resolves common issues, it builds confidence. If it fails unpredictably, it creates a new layer of friction.
Baile describes how IT roles evolve under this model. She sees teams moving away from manual execution and towards orchestration, where the job is to set intent, create guardrails, and manage the agent lifecycle. This change increases the need for observability. The organisation must be able to see what an agent is doing, why it is doing it, and whether it remains aligned with policy over time. Midway through her explanation, she makes the key point in plain terms: “While agent-driven experiences feel simpler on the surface, the real challenge moves to orchestration and governance.”
The second impact is that workflows become more proactive. Agents can use device and application telemetry to surface issues earlier. They can also learn about friction patterns and intervene before a problem turns into a ticket. This changes the rhythm of IT operations. Instead of reacting to demand, IT begins to manage the conditions that create demand. That shift can reduce burnout in support teams and improve employee experience, but it also raises expectations. When systems start to anticipate problems, the tolerance for failure drops.
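To illustrate that proactive pattern in the simplest possible terms, the sketch below assumes hypothetical telemetry signals and an arbitrary repeat threshold; the point is the shape of the logic, not a real monitoring product:

```python
from collections import Counter

# Hypothetical sketch: scan device telemetry for recurring friction
# signals and raise an intervention before anyone files a ticket.
FRICTION_THRESHOLD = 3  # assumed policy: act after three repeats in a week

# Simplified telemetry events: (device_id, signal)
telemetry = [
    ("laptop-014", "slow_boot"),
    ("laptop-014", "slow_boot"),
    ("laptop-014", "slow_boot"),
    ("laptop-022", "vpn_drop"),
]

def proactive_actions(events):
    """Yield remediation candidates based on recurring friction signals."""
    counts = Counter(events)
    for (device, signal), seen in counts.items():
        if seen >= FRICTION_THRESHOLD:
            yield f"schedule remediation on {device}: recurring {signal}"

for action in proactive_actions(telemetry):
    print(action)  # schedule remediation on laptop-014: recurring slow_boot
```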
This is where agent adoption and trust become inseparable. A traditional automation failure is annoying. An agent failure can feel unsettling because it involves delegated action. CIOs therefore need frameworks that support controlled experimentation while preventing uncontrolled sprawl. Agents can multiply quickly. If every team builds its own, the estate becomes a patchwork of autonomous tools with uneven security controls and unclear ownership. That is complexity of a different kind, and it is harder to unwind.
Why simplifying the tech stack will matter more than adding new tools
The agent discussion leads to a larger point about the modern enterprise stack. Over the last decade, organisations accumulated overlapping applications, multi-cloud environments, and fragmented data infrastructure. Much of this growth was rational at the time. Teams needed speed, business units wanted flexibility, and cloud adoption encouraged decentralised buying. The long-term effect is an estate where many systems solve similar problems, and few are designed to work together cleanly.
AI lands on this stack like a stress test. It depends on data access, consistent identity, clean integrations, and reliable performance. When those foundations are weak, AI programmes spend their time on plumbing rather than outcomes. This is why simplification is moving from a hygiene task to a strategic priority. Simplification reduces integration work, lowers the surface area for security risk, and makes performance more predictable. It also makes governance easier, which matters when AI systems begin to affect sensitive workflows.
This does not mean stripping tools aggressively or forcing standardisation without context. It means making architectural decisions that reduce overlap and clarify how the organisation wants work to run. Consolidation is one part of that. Clear standards for deployment, integration, and lifecycle management are another. The most important element is visibility across the estate. Without unified visibility, leaders cannot see where complexity is coming from, which issues recur, or how changes in one layer affect another.

Baile’s view is that simplification and innovation belong to the same agenda, because both depend on disciplined architecture. She emphasises the need to consolidate around platforms with guardrails, build unified visibility across devices and applications, and treat security as a baseline requirement. For CIOs, the implication is practical. New AI capability should be assessed through the lens of operational cost. If it adds a tool that requires another dashboard, another policy layer, and another integration project, the hidden price may outweigh the headline benefit.
Automation and self-healing systems are often presented as remedies for the workload. The deeper benefit is that they can restore capacity. When IT teams spend less time on routine fixes, they can focus on higher-value work such as resilience, experience design, and alignment with business priorities. That is how simplification turns into momentum. It creates space for better decisions, rather than pushing teams to keep layering tools on problems they do not have time to solve.
AI-enabled devices and hybrid platforms will reshape employee expectations
Simplification is also becoming an employee expectation. Hybrid work has matured into a long-term operating model, and employees now judge technology by whether it supports focus, collaboration, and wellbeing across contexts. In 2026, the workplace experience will be shaped by how effectively systems remove friction from everyday work, not by the number of features available.
AI-enabled devices are central to this shift because they push capability closer to where work happens. When intelligence runs at the edge, employees can benefit from real-time performance without relying entirely on centralised infrastructure. For creative and technical roles, this can reduce delays in workflow, improve responsiveness, and support more fluid collaboration. For knowledge workers, it can reduce the effort required to find information, capture context, and move between tasks.
Hybrid experience platforms are also changing what organisations measure. Traditional IT metrics focus on uptime and incident response. Employee experience metrics focus on time lost to friction, recurring disruptions, and the predictability of tools across locations. Data from HP’s 2025 Work Relationship Index confirms that employees are ready for this transition. Sixty-nine percent of workers report excitement about how technology can improve their work life. The stakes for CIOs are high. When organisations invest in tools that genuinely improve the daily experience, employees are five times more likely to have a healthy relationship with work. By 2026, success will be measured by how effectively AI-enabled devices move from reacting to problems to anticipating them, resolving friction before it disrupts focus or collaboration.

This pushes CIOs towards a broader view of performance. If an employee loses 20 minutes each day to re-logins, inconsistent collaboration tools, or slow device performance, the cost is organisational rather than technical. It shows up in lower attention, weaker collaboration, and reduced pace of execution.
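As a rough, back-of-the-envelope illustration (the 230 working days and 5,000-person workforce are assumptions for the arithmetic, not figures from HP), 20 minutes a day compounds into an organisational number:

```python
# Illustrative arithmetic only: the assumptions are arbitrary.
minutes_lost_per_day = 20
working_days_per_year = 230
hours_per_employee = minutes_lost_per_day * working_days_per_year / 60
print(f"{hours_per_employee:.0f} hours lost per employee per year")   # ~77

employees = 5_000
print(f"{employees * hours_per_employee:,.0f} hours across the workforce")  # ~383,333
```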
Baile ties this to a reframing of productivity and engagement. She argues that productivity should be understood in terms of regained time and reduced friction, while engagement should reflect whether employees feel supported by the tools they use. After describing the expectation shift, she reinforces it with a concrete claim about what success will look like. “By 2026, productivity will be defined less by output and more by how effectively AI removes friction from everyday work, freeing people to focus on judgment, creativity, and connection.”
The implication for leadership is direct. Employee experience has become a competitive factor in retention and performance. Technology decisions shape how work feels each day. That is why device strategy, collaboration platforms, security posture, and support models are converging into one leadership question. Does the organisation design technology around human work, or does it expect humans to accommodate technology?
Human-centric IT design will become the real differentiator
As AI spreads, the most consequential decisions will sit at the intersection of trust, usability, and organisational culture. Human-centric design is often discussed as a soft concern, but it has hard consequences. If employees do not trust AI systems, adoption will stall. If systems are difficult to use, they will create friction and workarounds. If automation removes too much agency, it will damage morale and increase resistance to change.
In the Asia-Pacific, human-centric design is more complex because workforces are diverse and change arrives unevenly across roles and markets. A single AI deployment can touch office workers, frontline staff, remote teams, and regional functions with different constraints. This diversity increases the importance of clear outcomes. AI should reduce low-value work, support better decisions, and strengthen collaboration. It should also be explainable enough that employees can understand what it is doing and why. That is where observability and privacy-first thinking become essential operational requirements rather than optional features.
Baile describes human-centric leadership as a balance of trust by design, experience management, and investment in people. Upskilling matters because confidence is a scaling factor. If employees do not feel capable of using AI tools, they will avoid, misuse, or distrust them. Change management matters because it prevents AI deployment from becoming a shock to the organisation’s routines.
Baile stresses that upskilling needs to be practical rather than abstract. Programmes such as HP Amplify AI reflect an approach focused on role-based learning, where employees build confidence through applied use rather than high-level awareness. This matters because AI adoption does not fail at deployment. It fails when capable systems are placed into environments where people do not yet trust themselves to use them well. The aim is for AI to become part of the working environment, rather than an additional layer employees must learn to tolerate.
This is also where the conversation returns to the topic of simplification. Human-centric design fails when the tool environment is chaotic. If employees face multiple overlapping systems, inconsistent security prompts, and unpredictable performance, AI will feel like yet another moving part. When organisations simplify platforms, unify visibility, and reduce friction, AI can fit into work more naturally.
That shift in emphasis is where 2026 will separate leaders from followers. The winners will be those who treat AI as an operating discipline. They will build architectures that can absorb change, governance that can support autonomy safely, and workplaces that are designed around how people actually work. Baile sharpens the transition into this final point with a clear principle: “AI should amplify people, not sideline them.”