
Dissolving Viscosity. Delivering Flow.

Strategic advisory to transform organizational friction into flow

The Impact of Organizational Flow

Accelerate Strategic Velocity

Eliminate the “Viscosity Tax”

Operational Anti-fragility

Case Studies

IT Infrastructure Modernization

+ Reduced operating costs
+ Neutralized systemic security risks
+ Accelerated ideation to launch

Service Reliability

+ Reduced application downtime
+ Increased employee productivity
+ Improved patient satisfaction

Secure-by-design

+ Lowered security risks
+ Improved compliance posture
+ Full visibility to risk landscape

Our Services

Fractional CTO & Advisory

Providing high-level technical leadership and CxO-level strategy without the overhead of a full-time executive.

  • Diagnosis & Assessment of IT Strategy
  • Streamline Application Development & Delivery
  • Automate IT Operations

Change Management

Dissolving the organizational viscosity that traps enterprise value and converting it into seamless organizational flow.

  • Friction Audit of Current State
  • Flow-based Architecture of Future State
  • Lead the Change Implementation

AI Strategy

Operationalizing Intelligence: Moving beyond AI speculation to deliver architectural certainty.

  • Strategic Value Alignment
  • Active Governance & Trust
  • Sovereign AI Architectures

Testimonials

Vice President, Government Services: “Your leadership in bringing a large team together and aggressively driving the timeline enabled us to resolve a major contractual and compliance issue.”
Vice President, Architecture Services: “You have always challenged the status quo in a positive way and brought innovation to drive change.”
Director, Integration Services: “You were brought into a challenging situation. You analyzed the complex requirements and provided reasonable solutions.”

Insights

Application Portfolio Rationalization, Modernization, and Migration

By Jaswant Singh and Naresh Nayar

Organizations are under increasing pressure to reduce IT costs, enhance agility, and deliver business value faster. Yet many enterprises struggle with the “obsolescence tax”, spending up to 80% of their resources on fragmented application landscapes composed of legacy systems and redundant solutions that constrain innovation and increase operational risk. 

The Problem: The compounding cost of the “Status Quo”

Most enterprises struggle with:

  • Overlapping legacy applications and redundant technology stacks that inflate cost and complexity.
  • Outdated platforms that increase technical debt, compliance exposure and operational risk.
  • Shadow IT environments driven by gaps in scalability and availability.
  • High licensing and operating costs, compounded by skill shortages and resistance to change.

Collectively, these issues hinder growth, weaken security posture, and erode competitiveness.

The Opportunity

Enterprises that embrace Application Portfolio Rationalization, Modernization & Migration (APRMM) can:

  • Fund the Future: Optimize costs by rationalizing redundant applications and standardizing platforms.
  • Build an AI-Ready Foundation: Modernize for growth with architectures that support AI, automation, and advanced analytics.
  • Close the Agility Gap: Connect IT to business goals by making systems easier to adapt, faster to update, and more operationally reliable as business needs evolve.

Many transformation initiatives stall because application migration is treated as a one-time infrastructure exercise rather than a strategic redesign of the application portfolio.

Migration is not a tactical move; it is a strategic inflection point.  We help organizations rethink, streamline, and transform their application landscape. The objective is not simply to move workloads, but to align applications, platforms, and operating models with long-term business and compliance priorities.

Together, we explore this topic more thoroughly in the full Substack article, including Point of View, Approach, and Proof Points. Read the full article here.

Agentic AI for the Enterprise

By Naresh Nayar, Rick Hamilton and Jaswant Singh

The Problem

Enterprises are rapidly moving beyond prompt-driven generative AI toward agentic AI systems that can plan, reason, use tools, and take actions on behalf of users or teams. These systems can chain multiple steps together without explicit instructions at each step. They can also invoke APIs, workflows, and enterprise tools to change system state, and even maintain context over long tasks and across interactions.

This shift creates new governance, accountability, and safety challenges. Traditional automation models (e.g., RPA, workflow tools) assume deterministic flows; predefined logic and branching; and limited (or no) autonomy to take actions without explicit human command.

Agentic systems break those assumptions. They behave less like “smart macros” and more like semi-autonomous digital workers in business processes. Existing risk frameworks, monitoring, and access controls were not designed for systems that can:

  • Decide which tools to call in what order
  • Generate and execute their own plans
  • Escalate (or fail to escalate) when uncertain

Without a clear operating model, agentic AI can quickly become ungovernable.
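
To make the operating-model question concrete, the minimal Python sketch below shows one way a policy gate could sit between an agent's proposed action and any change to system state, allowing low-risk actions, pausing for human approval, or escalating. The tool names, risk scores, and thresholds are illustrative assumptions, not a prescribed implementation.

    # Minimal sketch of a governed agent step: every proposed action passes
    # through a policy gate before it is allowed to change system state.
    # Tool names, risk scores, and thresholds are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class ProposedAction:
        tool: str          # e.g., "lookup_order", "issue_refund" (hypothetical tools)
        arguments: dict
        risk_score: float  # 0.0 (benign) to 1.0 (high impact), estimated upstream

    APPROVAL_THRESHOLD = 0.4   # above this, a human must approve the action
    BLOCK_THRESHOLD = 0.8      # above this, the agent may not act at all

    def policy_gate(action: ProposedAction) -> str:
        """Decide whether the agent may act, must ask a human, or must stop."""
        if action.risk_score >= BLOCK_THRESHOLD:
            return "escalate"          # hand the task back to a person
        if action.risk_score >= APPROVAL_THRESHOLD:
            return "needs_approval"    # pause and request explicit sign-off
        return "allow"

    def run_step(action: ProposedAction, approved_by_human: bool = False) -> str:
        decision = policy_gate(action)
        if decision == "allow" or (decision == "needs_approval" and approved_by_human):
            # The tool call would execute here and be logged for audit (omitted).
            return f"executed {action.tool}"
        return f"halted: {decision}"

    # A low-risk lookup proceeds; a higher-risk refund waits for explicit approval.
    print(run_step(ProposedAction("lookup_order", {"order_id": 123}, risk_score=0.1)))
    print(run_step(ProposedAction("issue_refund", {"order_id": 123, "amount": 500}, risk_score=0.6)))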

The Opportunity

Despite the risk, agentic AI represents a meaningful step-change in what enterprises can automate and augment:

  • Throughput & Efficiency Multi-step tasks (e.g., onboarding, claims triage, procurement, support workflows) can be orchestrated end-to-end, with humans inserted only where judgment or approval is needed.
  • Decision Quality & Consistency Agents can systematically retrieve relevant data, policies, and historical decisions, and enforce decision rules more consistently than fragmented, manual processes.
  • Complex Workflow Automation Instead of manual handoffs between teams and systems, agents coordinate across tools, queue tasks, and track state, reducing coordination overhead and delays.
  • Customer & Employee Experience Journeys that currently feel fragmented can be unified by agents that “remember” context across channels and episodes.
  • Operational Resilience Well-governed agents can act as an additional layer of resilience – detecting anomalies, handling routine incidents, and escalating appropriately.

Importantly, agentic AI is practical today in constrained, low-to-moderate risk workflows. The largest business impact will likely arrive over the next 12–24 months as enterprises:

  • Learn where agents work well and where they fail
  • Mature governance and platform foundations
  • Gradually increase agent autonomy in carefully controlled domains

Early movers who start now will accumulate know-how, patterns, and guardrails that will pay dividends as complexity increases.

Together, we explore this topic more thoroughly in the full Substack article, including Enterprise Use Cases; Governance and Operating Models; Platform Foundations; and Risk, Safety, and Compliance. Read the full article here.

Digital Transformation

By Jaswant Singh and Naresh Nayar

Digital transformation is no longer a choice—it is a business mandate to stay competitive, resilient, and relevant. It is not a one-time project or a “big bang” change, but a continuous journey of improvement—evolving step by step to meet changing customer needs, market realities, and new opportunities. While technology is a critical enabler, the real focus is on creating measurable business value—improving customer outcomes, operational efficiency, and the ability to respond to new opportunities and disruptions over time.

The Problem

  • Customers expect personalized, frictionless digital experiences.
  • Competitors that adopt cloud and AI are gaining speed, efficiency, and insight—making it harder for slower movers to keep up.
  • Disruption is constant, whether through emerging technology, regulatory changes, or new market entrants.

Many organizations are held back by outdated systems, slow processes, and resistance to change. In addition, tightly coupled architectures, fragmented data, and project-centric delivery models make even small changes complex, risky, and slow.

As a result, it becomes harder to improve, reduce costs, and respond quickly. Without transformation, companies risk losing relevance, market share, and long-term viability.

The Opportunity

Digital transformation creates real benefits when done right:

  • Better customer experiences: Make every interaction simple, helpful, and consistent across digital and human-assisted channels.
  • Faster operations: Simplify systems and automate routine work so enhancements can be delivered more quickly and with less effort and risk.
  • More resilient operations: Build platforms and processes that can absorb disruption, scale reliably, and support regulatory and security needs.
  • Empowered employees: Give teams the tools, skills, and confidence to work smarter and improve continuously.

  • New ways to grow: Use digital products, services, and partnerships to reach new markets and evolve business models over time.

Together, we explore this topic more thoroughly in the full Substack article, including Point of View, Approach, and Proof Points. Read the full article here.

AI Governance Is Broken

By Naresh Nayar, Rick Hamilton and Jaswant Singh

The Problem: AI Governance as It Exists Today Is Failing

Organizations are deploying AI faster than they are learning to govern it, and the cracks are showing. With the last few years’ explosion of generative AI solutions, what began as organizational experimentation has increasingly become operational dependence. We see this as AI now shapes underwriting decisions, clinical workflows, hiring pipelines, customer interactions, and strategic planning, across a variety of industries. Despite this, governance practices have not evolved at the same pace.

From our vantage point, many organizations still treat AI governance like traditional IT governance, with centralized control, technical oversight, and compliance checklists. Policies are drafted by senior committees, implemented by technical teams, and reviewed periodically for regulatory alignment. But this is not enough.

Our perspective is direct: this approach is fundamentally misaligned to how AI actually works, and how AI fails in operational scenarios.

AI systems, particularly agentic systems, are probabilistic and adaptive, and they are increasingly embedded across diverse business workflows. Their risks arise not only from code, but from context: how outputs are interpreted, which exceptions are ignored, where incentives distort behavior, and how small failures quietly accumulate; all of these shape real-world outcomes. Traditional enterprise governance models assume predictability and linear cause-and-effect and, as a result, they systematically overlook the risks that matter most in AI-driven systems.

A further distinction between AI governance and prior IT governance lies in decision authority. Organizations must explicitly define which decisions AI may inform, which it may recommend, and which it may execute autonomously. These boundaries are not merely technical, but are organizational, ethical, and operational choices that evolve over time.

Effective AI governance must move at the same cadence as AI itself. Annual policy cycles and episodic reviews are misaligned with systems that learn, adapt, and act continuously. For agentic systems in particular, governance must extend into runtime operation, incorporating continuous supervision, real-time escalation signals, and the ability to pause, constrain, or override agents as conditions change.

Why Current Approaches Fall Short

1. Top-down governance is blind governance

Executive committees and centralized policy bodies operate far from where AI meets reality. They approve principles and frameworks, but they rarely see what matters most, including edge cases that only appear under real-world pressure; workarounds employees invent to “make the system work”; and those quiet failures that don’t trigger alerts but erode trust over time.

Eventually, when those problems surface to the top, the damage has often already been done.

2. Technical oversight alone misses the point

Accuracy, precision, drift detection, and model documentation are necessary, but not sufficient on their own. AI is not just a technical system; its successes and shortcomings have a strong behavioral element. Data scientists can tell you whether a model performs well on a test set. But they cannot always tell you:

  • Whether an AI model’s outputs are appropriate in a sensitive context.
  • Whether its users are over-trusting or under-trusting the AI model.
  • Whether its use subtly shifts responsibility or accountability, and if so, in what way.

Thus, governance that focuses exclusively on technical control confuses correctness with business suitability. This suitability to accomplish business objectives is the foundational capability that must be kept top-of-mind.

3. Compliance-driven governance is reactive and shallow

Regulatory compliance is essential, but it typically represents the bare minimum, and not the requirements of a successful and advanced business operation. Laws lag AI’s capabilities, so checklists reflect yesterday’s risks, not tomorrow’s needs.

Organizations that equate compliance with governance tend to react after public failures, employee backlash, and in some cases, after regulators intervene. Thus, this approach reduces governance to damage control, not stewardship of business processes.

The Cost of Getting This Wrong

Regardless of the failure mechanism, when AI governance falls short, the consequences can be significant. These include:

  • Reputational damage when AI misbehaves publicly.
  • Employee distrust that slows adoption and encourages “shadow AI.”
  • Regulatory exposure, particularly as global AI laws tighten.
  • Most pervasively, wasted investment when promising AI initiatives stall or collapse.

AI governance setbacks are rarely catastrophic all at once. More often, they are cumulative, as small misalignments compound until the organization loses control of its own systems.

Our Point of View: The Three-Pillar Framework

Our core thesis is that effective AI governance requires distributed accountability across three interconnected pillars:

  1. First-line employee involvement in project selection, and in defining and monitoring proper AI behavior.
  2. A cross-functional oversight committee that reviews KPIs, outcomes, and risks.
  3. An independent audit function that red-teams AI use and challenges assumptions.

No single pillar is sufficient on its own. Together, these three functions form a system of checks and balances that reflects how AI operates inside organizations. In this context, governance defines decision rights, accountability, and escalation paths, while risk management implements controls and mitigations within the structure which governance establishes. For agentic AI, this system must also define bounded autonomy: clear thresholds for when agents may act independently, when human approval is required, and when authority must automatically revert to human control.
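
As one illustration of what bounded autonomy could look like when made explicit, the hypothetical Python sketch below maps decision types to the highest authority an agent may exercise and reverts control to a human when confidence drops. The decision names, tiers, and threshold are assumptions made for the example, not a recommended catalog.

    # Hypothetical sketch of bounded-autonomy thresholds for an agentic system.
    # Decision names, authority tiers, and the confidence threshold are
    # illustrative assumptions.
    AUTONOMY_POLICY = {
        # decision type        : highest authority the agent holds on its own
        "summarize_document"   : "execute",    # may act autonomously
        "draft_customer_reply" : "recommend",  # proposes; a human sends it
        "adjust_credit_limit"  : "inform",     # may only surface information
    }

    def allowed_mode(decision_type: str) -> str:
        """Return the agent's authority for a decision; default to the safest tier."""
        return AUTONOMY_POLICY.get(decision_type, "inform")

    def revert_to_human(confidence: float, threshold: float = 0.7) -> bool:
        """Authority reverts to a human when the agent's confidence drops too low."""
        return confidence < threshold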

Why This Works

This framework deliberately combines:

  • Ground truth from the people closest to AI use
  • Strategic alignment from cross-functional leadership
  • Independent scrutiny from those empowered to question assumptions

It avoids the two most common governance failures: concentrating authority where visibility is weakest, and delegating responsibility without accountability.

This approach is not about slowing AI adoption; rather, it is about making AI adoption durable. Importantly, the parties entrusted with these multilayered responsibilities should each be action-minded and accountable; each pillar earns its place. Finally, this framework complements – rather than replaces – technical AI safety practices, and should not be treated as a substitute for pre-deployment evaluation, sufficient observability, or strong data security and privacy controls.

Together, we explore this important topic more thoroughly in the full Substack article, including pillar definitions, conditions for framework success, and implications for leadership. Read the full article here.

AI Risks Don’t Wait for Committees

By Naresh Nayar and Rick Hamilton

The Problem: AI Governance as It Exists Today Is Failing

In a previous piece, Point-of-View: AI Governance is Broken, we described a three-pillar approach to AI governance – the policies, principles, and accountability structures that define an organization’s intent. Yet across the enterprise, a familiar pattern persists: policies get written; principles are endorsed; and committees are formed. And when an AI system degrades quietly or creates unintended downstream consequences, leaders discover that governance stopped at the point of good intentions.

Imagine a demand-forecast model whose error rate drifts after a quiet upstream data change; revenue leakage accumulates for weeks before anyone can prove where the shift began. The postmortem is not about ‘AI ethics’ in the abstract, but rather, it is about telemetry, ownership, and escalation. The reality is that AI risk doesn’t live in policy documents. Instead, risk emerges through day-to-day decisions, unexpected system behavior, and operational tradeoffs, the very areas where AI risk management matters most.
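
The sketch below, assuming a simple rolling error metric, illustrates the kind of runtime telemetry and escalation that could surface such drift in days rather than weeks; the metric (mean absolute percentage error), window size, threshold, and notification hook are illustrative choices rather than a prescribed design.

    # Minimal sketch: rolling forecast-error monitoring with an explicit
    # escalation path to a named owner. Metric, window, and threshold are
    # illustrative assumptions.
    from collections import deque

    WINDOW = 28             # days of recent error to track
    DRIFT_THRESHOLD = 0.15  # escalate if mean absolute % error exceeds 15%

    recent_errors = deque(maxlen=WINDOW)

    def record_forecast(actual: float, predicted: float) -> None:
        """Record one day's forecast error as an absolute percentage."""
        if actual != 0:
            recent_errors.append(abs(actual - predicted) / abs(actual))

    def check_drift(notify) -> None:
        """Run daily; 'notify' routes the alert to the model's named owner."""
        if len(recent_errors) == WINDOW:
            mape = sum(recent_errors) / WINDOW
            if mape > DRIFT_THRESHOLD:
                notify(f"Demand-forecast error {mape:.1%} exceeds {DRIFT_THRESHOLD:.0%}; "
                       "check upstream data changes before revenue impact compounds.")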

In a mature AI program, governance sets direction and intent, while operational risk management determines how those intentions translate into real outcomes. Because risk manifests unevenly, not all AI systems require the same level of operational rigor. Controls must scale with business impact, ensuring speed for low-risk experimentation while demanding stronger discipline for systems that influence customers, critical decisions, or regulated outcomes.

Together, we explore this important topic more thoroughly in the full Substack article, including operational risk management, the five domains of operational AI risk management, and the feedback loop that makes governance real. Read the full article here.

Data Governance for AI Must Be Executable

By Naresh Nayar, Rick Hamilton and Jaswant Singh

Why AI models stall between proof of concept and production, and what technology leaders can do about it.

The Problem Isn’t the Model

A customer-facing AI agent confidently answers a benefits question. The answer is wrong because it retrieved a superseded policy document from an unversioned, access-uncontrolled corpus. The business now has three concurrent problems: customer harm, regulatory exposure, and an internal investigation that cannot reproduce the retrieval context that produced the response. No one can say which version of which document the model saw, because provenance was never captured. The investigation drags on for weeks; the root cause, ungoverned data, remains in place for the next incident. In AI-enabled enterprises, this is not an edge case. It is the predictable outcome of deploying AI on a data substrate built for reporting, not autonomous action. Models are only as trustworthy as the data they ingest; in most organizations, that data is neither traceable enough to explain nor governed well enough to defend.

In 2024, one of the authors was serving as CTO of a healthcare research organization when the team deployed its first RAG solution. We anticipated the usual technical challenges: problematic chunking strategies, improper ranking algorithms, nonfunctional requirements like system performance. The biggest problem, however, was none of these. It was out-of-date and contradictory data sources, resulting in the system misrepresenting current scientific thinking and organizational policy. The technical architecture worked, but the data substrate beneath it was ungoverned. Now amplify this lesson across an enterprise deploying autonomous agents that depend on data sources spanning dozens of systems and domains, whose immediate responses and downstream decisions may never be reviewed by a human. The problems we expected in 2024 were largely architectural. The problem that actually mattered was upstream: ungoverned, conflicting data quietly degraded output quality even as the AI became more relied upon and essential to our business.

In this context, data governance and modern data management together form the trust architecture for AI-driven analytics and automation. Governance defines decision rights, standards, and accountability for data meaning, quality, provenance, and access. These policies and standards do not enforce themselves; they must be translated into technical controls that operate within the data ecosystem. Modern data management enforces those standards as controls across pipelines, catalogs, and APIs, and ideally, it produces an evidence trail that makes outputs explainable and defensible. When governance and management are disconnected, AI scales faster than trust, and small data defects become systemic failures.

Governance structures and operational risk frameworks define who is accountable and what must be monitored. This paper addresses an important question previously raised: whether the data infrastructure beneath those frameworks is engineered to make accountability and monitoring possible. For related concepts, see AI Governance is Broken: Here’s How to Fix It and AI Risks Don’t Wait for Committees.

The central problem for most enterprises is not a lack of data, but rather the lack of executable governance: policies that are enforced automatically at the point of data movement and model use, rather than documented in frameworks that nobody operationalizes. Until governance is implemented as enforceable controls within systems, AI will continue to scale faster than trust. In working with organizations across healthcare, financial services, and insurance, we have found three structural failures that recur with striking consistency, and they persist not because leaders are unaware of data governance, but because their governance programs are designed to produce documentation rather than controls.
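
As a sketch of what "executable" can mean in practice, the hypothetical Python gate below refuses to admit documents to a retrieval index when they lack version or provenance metadata, or have been superseded, and records an evidence trail for everything it does admit. The field names and statuses are assumptions made for illustration, not a reference schema.

    # Hypothetical sketch of an executable governance control: a pipeline gate
    # that enforces version and provenance requirements at the point of data
    # movement and leaves an audit trail. Field names and statuses are
    # illustrative assumptions.
    import datetime

    REQUIRED_FIELDS = {"doc_id", "version", "status", "source_system"}

    def admit_to_index(document: dict, audit_log: list) -> bool:
        """Admit a document only if it carries provenance and is not superseded."""
        missing = REQUIRED_FIELDS - document.keys()
        if missing or document.get("status") == "superseded":
            audit_log.append({
                "doc_id": document.get("doc_id", "unknown"),
                "action": "rejected",
                "reason": f"missing {sorted(missing)}" if missing else "superseded version",
                "at": datetime.datetime.utcnow().isoformat(),
            })
            return False
        audit_log.append({
            "doc_id": document["doc_id"],
            "action": "indexed",
            "version": document["version"],
            "source_system": document["source_system"],
            "at": datetime.datetime.utcnow().isoformat(),
        })
        return True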

Together, we explore this important topic more thoroughly in the full Substack article, including why this matters now, the executable governance model, how to start, and the metrics that matter. Read the full article here.

Contact Us

Your transition from friction to momentum starts here.


Rajesh Jaluka

Founder & Principal Advisor


Dr. Naresh Nayar

Principal Advisor


Jaswant Singh

Senior Advisor