Predicting The Inevitable Future: Your Next User Is An Agent

April 30, 2026 · 22 min read

The loudest AI conversation right now is about jobs.

AI agents acting as the next user between human intent and digital business systems

Agents replacing analysts. Agents replacing product people and software developers. Agents replacing support reps. Agents replacing the white-collar middle layer that spent the last 20 years turning meetings, spreadsheets, tickets, decks, and workflows into economic activity.

That conversation matters.

It misses the more dangerous disruption.

Agents will change how companies produce work. They’ll also change how customers deal with companies.

Most digital businesses were built around users doing the work.

Search the site. Open the app. Compare the offer. Read the policy. Find the ticket. Check the schedule. Look up the player. Fill the form. Contact support. Sit inside the funnel. Generate the session data. Consume the ad inventory. Give the business something to measure.

A lot of companies have been monetizing user labor and calling it engagement.

Agents are going to expose that.

AI agents taking over user work across search, forms, support, tickets, and payments

An AI agent, in plain English, is software that can understand what someone wants, use tools, make decisions within limits, and take action across systems on behalf of a person, team, or company.
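That definition can be sketched as a loop. The code below is a toy illustration of the idea, not any vendor's implementation; every name in it is hypothetical.

```python
# Minimal sketch of the agent loop described above: understand intent,
# pick a tool, act only within explicit limits. All names are hypothetical.

def run_agent(intent: str, tools: dict, limits: dict) -> str:
    """Route an intent to a tool and act only within configured limits."""
    tool_name = "book" if "book" in intent else "search"
    tool = tools[tool_name]
    if tool_name in limits.get("requires_confirmation", []):
        return f"paused: '{tool_name}' needs human approval"
    return tool(intent)

tools = {
    "search": lambda intent: f"searched for: {intent}",
    "book":   lambda intent: f"booked: {intent}",
}
limits = {"requires_confirmation": ["book"]}

print(run_agent("search cheap flights", tools, limits))
print(run_agent("book a flight to Lisbon", tools, limits))
```

The important part is the `limits` check: the agent acts freely on low-stakes intents and pauses for human approval on the rest.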

That means the next customer of your digital product may never open your app.

They may never visit your website. They may never check your schedule page, browse ticket options, compare memberships, search your archive, read your FAQ, open your support center, or sit inside the funnel your team spent six months optimizing.

A person may still want “the thing.”

An agent may do the work.

The operating model is changing.

The internet is moving from browsing to delegation. For 30 years, digital strategy was built around humans doing the work: searching, tapping, scrolling, filtering, comparing, deciding, buying, renewing, subscribing, canceling, escalating, and filling out the same miserable forms across a dozen products.

Agents are taking over that work.

That changes the economics of digital business. It changes discovery. It changes loyalty. It changes commerce. It changes media. It changes B2B subscription software. It changes sports. It changes the role of design, engineering, product, legal, privacy, and security.

The companies that understand this will rebuild around trust, action, permission, interoperability, and control.

The companies that miss it will keep polishing the wrapper while the leverage moves underneath them.

That’s where the money is going.

The user interface is losing its position as the center of gravity

The current generation of the internet was built around human effort.

The web was built to be browsed. The app era was built to be tapped. B2B subscription software was built around dashboards, workflows, seats, and admin panels. Media was built around traffic and eyeballs. Commerce was built around conversion. The sports digital utility layer was built around fixtures, scores, tickets, highlights, memberships, content, sponsor inventory, merch, and fan identity.

All of it assumed the same thing: the human would show up and engage with it.

That assumption is breaking.

Here’s what leads me to that conclusion.

OpenAI introduced Operator as an agent that can perform browser tasks such as filling out forms and ordering groceries, while its Computer-Using Agent is trained to interact with graphical interfaces including buttons, menus, and text fields.

Anthropic’s Model Context Protocol gives AI systems a standard way to connect with tools and data sources. That matters because real agentic systems need clean ways to connect with business systems. Clicking through screens like a human works for demos. It’s a weak foundation for enterprise-grade work.

Google’s Agent2Agent protocol is designed so agents can communicate, securely exchange information, and coordinate actions across enterprise platforms and applications. Google’s Universal Commerce Protocol and Agent Payments Protocol point in the same direction for commerce and payment authorization.
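The core idea behind these protocols can be sketched simply: instead of an agent clicking through screens, the business publishes tools with machine-readable schemas that an agent can call directly. The snippet below is an illustrative toy, not the actual MCP or A2A wire format, and every name in it is hypothetical.

```python
# Hypothetical sketch of a protocol-style tool: a machine-readable
# manifest plus a callable action. Not the real MCP/A2A format.
import json

tool_manifest = {
    "name": "check_ticket_availability",
    "description": "Return available seats for a given event.",
    "input_schema": {
        "type": "object",
        "properties": {"event_id": {"type": "string"}},
        "required": ["event_id"],
    },
}

def call_tool(manifest: dict, args: dict) -> dict:
    # A real protocol layer would validate args against the full schema;
    # here we only check required fields before acting.
    missing = [k for k in manifest["input_schema"]["required"] if k not in args]
    if missing:
        return {"error": f"missing required fields: {missing}"}
    return {"result": f"3 seats available for {args['event_id']}"}

print(json.dumps(call_tool(tool_manifest, {"event_id": "match-2026-05-01"})))
```

The schema is what makes the connection "clean": the agent knows what the tool does, what it needs, and what a valid call looks like before it ever acts.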

The plumbing is already being laid.

Moving value from user interfaces to business systems and APIs

Agents are learning to use today’s interfaces. Protocols are emerging so they can work through cleaner connections. Commerce standards are being built so agents can transact. Enterprise platforms are being rebuilt so agents can coordinate across tools, data, identity, workflows, and approvals.

Chatbots were the appetizer.

Agentic delegation is the meal.

The click economy is already bleeding

The first place this shows up is discovery.

We’ve already started to see this trend grow over the last year. Google’s AI summaries and answer boxes reduce the effort it takes to get information. That’s useful for users. It’s brutal for businesses that depend on people clicking through to the source.

Bain found that

~80% of consumers rely on zero-click results for at least 40% of their searches, reducing organic web traffic by an estimated 15% to 25%.

Bain-Dynata: Frequency at which searches result in zero clicks
Source: Bain-Dynata

The Reuters Institute reported that publishers expect search traffic to fall by 43% over the next three years after recent declines in referrals from Google Search and Google Discover.

That should make every traffic-dependent business uncomfortable.

AI agents and AI summaries reducing clicks to websites and digital destinations

When answers come before visits, traffic loses power.

Agents take that further. They compare, decide, book, buy, renew, reorder, cancel, summarize, escalate, negotiate, and coordinate. They compress the journey between intent and action.

If a customer’s agent can find the best product, compare options, read policies, check reputation, calculate total cost, avoid bad terms, and complete the transaction, your beautifully designed digital destination becomes one possible input among many. That destination may be your app, your website, your marketing email, your landing page, your store, your help center, or your checkout flow.

The question shifts from “how do we get the user to click?” to “why would an agent trust us enough to act?”

That question is more useful than most redesign briefs.

If your digital strategy depends on users doing unpaid administrative work for your business, agents are coming for your margin.

Apple is the adoption catalyst

The agentic web will become mainstream when personal user agents become native to the laptops and phones people already use.

That’s where Apple matters.

Apple controls the consumer stack in a way very few companies can match: hardware, operating system, identity, payments, privacy posture, app distribution, notifications, location, health, messaging, calendar, photos, and user trust at global scale.

Personal agents need that stack.

AI agents becoming normal through the device layer on phones and laptops

Apple has announced the promise of Apple Intelligence: a personal intelligence system built into the products people already use. The ambition is clear. Apple wants intelligence to operate closer to the user, closer to the device, and closer to personal context. That includes on-device models, App Intents, privacy-centered APIs, and Private Cloud Compute for more complex requests.

We’re still waiting to see the full proof of that ambition.

I think it’s coming.

Late, yes. Strong, also yes.

Apple is known for not being first to a new technology, but for being the company that perfects it, packages it, and popularizes it. Digital music, smartphones, biometric authentication, wireless audio, and contactless payments all became more useful once Apple made them coherent enough for normal people to use every day.

Apple Intelligence GTM framing / promise
Apple Intelligence: still waiting to realize its potential.

Personal agents will get the same treatment.

Most consumers won’t install open-source agents, configure servers and APIs, manage keys, run local models, maintain automation stacks, or think about orchestration. They’ll use the agent that ships inside the laptop and phone they already trust.

The mainstream version will feel like the operating system finally understands intent, context, privacy, permissions, and action. It will coordinate apps, messages, calendars, payments, location, health, media, commerce, and services through one personal layer.

That’s when behavior changes.

Samsung and Google will push the Android device layer. Huawei will push the Chinese ecosystem through devices, chips, cloud, and local AI infrastructure. The details will differ by market, but the direction is the same: the agent moves from app to operating layer.

Defaults win markets.

The agentic web needs infrastructure before enthusiasm

Consumer adoption gets the headlines. Enterprise adoption changes the operating model.

Stay with me here.

Google’s Gemini Enterprise Agent Platform is a strong signal. Google describes it as a platform to build, scale, govern, and optimize agents, bringing together model selection, model building, agent building, integration, DevOps, orchestration, and security.

That list matters because enterprise agents need controls before enthusiasm.

Enterprise AI agents requiring identity permissions governance audit logs and security controls

They need identity. They need permissions. They need runtime environments. They need gateways. They need registries. They need evaluation. They need observability. They need simulation. They need governance. They need to know what they can do, what they can access, what they touched, what failed, and who approved the action.

The future of B2B software will be shaped by systems that agents can operate safely.

A sales agent should understand CRM context, pricing rules, customer history, compliance boundaries, and approval thresholds.

A support agent should understand entitlement, warranty, account state, product telemetry, policy, escalation rules, and refund authority.

A finance agent should understand spend controls, approval rules, audit requirements, risk exposure, and regulatory constraints.

A media operations agent should understand rights windows, asset metadata, sponsorship obligations, distribution rules, archive permissions, and editorial context.

That’s where enterprise software is going.

The front end becomes one control point. The deeper value sits in the workflows, rules, records, integrations, and permissions that agents can safely operate.

B2B subscription software companies that adapt will package their value as governed capability. Companies that keep worshipping seat-based dashboard usage will watch agents compress large parts of the user journey.

The dashboard is becoming a control room.

The work is moving into the system.

OpenClaw and Hermes autonomous agents highlight the disruption pattern

OpenClaw is an open-source personal AI assistant that runs on your own devices and connects to the channels people already use. Its GitHub page describes it as an assistant that works across macOS, iOS, Android, and messaging channels including WhatsApp, Telegram, Slack, Discord, iMessage, WeChat, QQ, and others. In simple terms: OpenClaw turns a chat surface into a control layer for personal automation.

Hermes Agent, from Nous Research, is an open-source self-improving agent. That's right. It’s a new breed of agent designed to learn from completed tasks, create reusable skills, persist knowledge, search prior conversations, install and use its own tools, and build a deeper model of the user across sessions.

Those two projects matter because they point to the same future from different angles.

OpenClaw and Hermes showing AI agent adoption and persistent agent capability

OpenClaw shows adoption.

Hermes shows persistence.

Agents don’t need permission from your roadmap.

They spread when people find them useful.

That’s the uncomfortable part.

Open-source agents can spread faster than enterprise software because individuals can adopt them directly. China is showing what that looks like.

Reuters reported that China’s Ministry of Industry and Information Technology warned about security risks linked to OpenClaw, while also noting that the open-source agent had rapidly gained global popularity, passed 100,000 GitHub stars, and attracted 2 million visitors in a single week. Reuters also reported that major Chinese cloud providers including Alibaba’s Alicloud, Tencent Cloud, and Baidu were offering hosting services for it.

The velocity kept getting attention. Public GitHub tracking sites reported OpenClaw passing the Linux kernel in stars shortly after launch, although exact counts move quickly and should be treated as a snapshot rather than a permanent ranking. Reuters reported that NVIDIA CEO Jensen Huang compared OpenClaw’s rapid rise to Linux, saying it had achieved in weeks the kind of popularity Linux earned over 30 years.

Jensen's NemoClaw
NVIDIA + OpenClaw = NemoClaw

In NVIDIA’s own GTC announcement, Huang called OpenClaw “the operating system for personal AI” and announced NemoClaw, a stack meant to add privacy and security controls for OpenClaw agents.

The product matters.

The behavior matters more.

Chinese users started referring to OpenClaw adoption as “raising lobsters,” tied to the project’s mascot and the idea of tending to agents that can evolve and act. Reuters described OpenClaw enthusiasm spreading across schoolchildren, retirees, and users drawn to agent-enabled one-person businesses.

That’s how platform shifts spread: through behavior, language, status, and practical utility.

The risk arrived at the same speed. China’s industry ministry warned that improper configuration of OpenClaw could lead to cyberattacks and data breaches, and urged organizations to audit network exposure, strengthen identity authentication, and improve access controls.

That’s the whole problem in one frame: capability, adoption, platform ambition, and security risk arriving together.

OpenClaw is the adoption curve.

Hermes is the capability curve.

A chatbot answers.

An agent acts.

A persistent agent compounds.

That’s the strategic difference.

Frontier behavior becomes product behavior

The next signal is smaller than China’s OpenClaw moment, but it matters because it shows what happens before a platform shift becomes polished.

It starts in niche technical circles.

IT professionals, infrastructure builders, local AI enthusiasts, and power users are already trying to turn agents into something more useful than a chat window. They’re experimenting with systems that can remember, research, summarize, review their own output, and keep working between requests.

The point isn’t that every user will build one of these systems.

They won’t.

The point is that the desire is already visible before the market has packaged it cleanly.

From strange builds to products

Local LLM builders and tinkerers are trying to make open-source models useful enough for real work because they want privacy, control, lower platform dependency, persistent memory, and agents shaped around personal workflows, all running locally on their machines.

One Reddit LocalLLM thread made that demand concrete: a technical user described building a persistent local agent that can reason and is, in the builder's words, self-aware.

It runs continuously between my messages. It has a wonder queue, basically its own list of questions it’s chewing on. It seeds them, explores them, and stores what it finds. Nothing prompted by me. Tonight it was sitting with questions like whether introspecting on its own reasoning counts as self-awareness, what the actual difference is between simulating empathy and experiencing it, and what makes a conversation feel meaningful to a human.

The implementation is technical. The ambition is simple: create a personal agent that operates with continuity.

That’s the signal.

Power users are already trying to build the personal agent the market hasn’t packaged cleanly yet.

First it looks like a strange build.

Then it becomes a product category.

Then it becomes an expectation.

Apple, Google, Samsung, Huawei, Perplexity, OpenAI, Anthropic, Tencent, Baidu, Nous Research, and the open-source ecosystem are all pushing from different directions toward the same behavioral end state.

A user asks for the thing.

The agent builds, retrieves, configures, recommends, books, or buys the thing.

The user interface becomes temporary, contextual, and sometimes invisible.

The “user” is changing because the work around the user is changing first.

The “user” experience is changing

This is where UX needs to expand.

The next wave of user experience will be shaped around agents that guide humans through decisions, exceptions, confirmations, and moments of trust. The “user” may be a person. It may be an agent acting for a person. It may be a team agent acting for a department. It may be a company agent interacting with another company’s agent.

That changes the design problem.

AI agents changing UX by creating two users the human and the agent

Designers have spent years optimizing screens for direct human interaction and conversion: navigation, information architecture, calls to action, forms, dashboards, error states, onboarding, search, checkout, support flows, and accessibility. Those skills still matter when humans need to understand, approve, correct, or override something.

The harder work is designing how agents understand intent, explain options, ask for permission, route exceptions, and bring humans into the loop at the right moment.

Agentic UX has two audiences: the agent that needs to act, and the human who needs to trust what’s happening.

The agent needs structured information, clear rules, accessible policies, reliable actions, machine-readable context, and confidence signals. The human needs clarity, control, trust, reversibility, and a sane way to intervene when the agent reaches a decision point.

That’s where service design becomes critical.

Service design maps the full experience across people, systems, policies, operations, support, and business rules. In an agentic-first ecosystem, that map becomes the difference between automation that creates leverage and automation that creates liability.

A sports ticketing example makes this obvious.

A fan’s agent may compare seats, check membership benefits, apply loyalty credits, consider weather, coordinate with friends, validate resale restrictions, and recommend a purchase. The user may only need to step in when price crosses a threshold, resale rules are unclear, identity verification is required, or the agent detects a better but riskier option.
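The fan's-agent flow above reduces to a small decision rule: act autonomously inside the fan's limits, escalate at the edges. A minimal sketch, with hypothetical fields and thresholds:

```python
# Hypothetical sketch of the ticketing decision above: the agent buys
# freely under the fan's price threshold and escalates everything else.

def decide(ticket: dict, prefs: dict) -> str:
    if ticket["price"] > prefs["max_price"]:
        return "ask_human: price exceeds threshold"
    if ticket.get("resale_rules_unclear"):
        return "ask_human: resale rules need review"
    return f"buy: seat {ticket['seat']} at {ticket['price']}"

prefs = {"max_price": 120}
print(decide({"seat": "A12", "price": 95}, prefs))
print(decide({"seat": "B3", "price": 180}, prefs))
```

The design question is not the rule itself but who owns it: the fan sets the thresholds, and the business earns the right to be inside the loop.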

That’s UX.

The interface may be a full app, a confirmation card, a voice prompt, a generated workflow, a service transcript, or an audit trail. The design work is deciding when the human needs control, what the agent is allowed to do, how trust is earned, and how failure is recovered.

The companies that get this wrong will create bad automation: wrong purchases, bad recommendations, broken support loops, unauthorized actions, and users who no longer trust the agent or the brand behind it.

The experience is no longer just what the human sees.

It’s what the agent can safely do.

That’s why design leaders who understand engineering, product strategy, service design, and business outcomes will matter more. They’ll be the ones shaping how intent becomes action without destroying trust, margin, safety, or the customer relationship.

Business models get repriced

The agentic web will reprice digital business.

Traffic, search, loyalty, customer support, commerce, media rights, B2B subscription seats, advertising, and sponsorship all become less stable when agents sit between intent and action.

The current digital playbook made sense when humans did the navigation. Bring the user into the app. Capture the session. Sell the impression. Convert the click. Trigger the upsell. Retarget the abandonment. Report the funnel.

Agents compress that journey.

If an agent compares five services and chooses one in seconds, the persuasive space shrinks. If an agent summarizes generic content before the visit, commodity pages lose leverage. If an agent completes a support task without entering the help center, ticket deflection becomes less about chatbot containment and more about system quality. If an agent can negotiate, reorder, renew, or cancel, companies with weak terms and sloppy data will get punished faster.

The economics move toward systems that can be trusted.

AI agents shifting business value from the app wrapper to the business engine

Trust becomes margin protection.

Clean data becomes conversion infrastructure.

APIs become distribution.

Policy clarity becomes UX.

Identity becomes strategy.

Auditability becomes revenue protection.

Rights become leverage.

This is why headless architecture matters.

In plain English, headless means separating the engine from the user interface. A headless CMS (content management system) is a simple example. Instead of writing content inside one website template, a company creates content once in a structured model, then sends it to a website, mobile app, smart glasses, in-venue screen, VR headset, voice assistant, or a channel that hasn’t been invented yet.

That optimizes the authoring flow. Humans can create content and data once, govern it properly, and ship it to many places with efficiency and consistency.

Headless architecture helping AI agents access content and data across channels

The same logic now applies to business capability.

A headless business separates customer records, pricing, payments, rights, inventory, policy, loyalty, and service rules from any single user interface. That allows value to travel across websites, apps, partner channels, voice, agents, commerce channels, and generated interfaces while the company keeps control over margin, identity, permissions, and trust.

A headless business lets value travel while control stays intact.
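The headless pattern is easy to sketch: one governed engine owns the rules, and each surface is just a renderer over it. This is a toy illustration with hypothetical names, not a reference architecture.

```python
# Sketch of the headless idea: one pricing engine, many surfaces.
# The engine owns the rules; web, app, and agent surfaces only render.

def price_quote(member: bool) -> dict:
    """The governed engine: pricing logic lives here, nowhere else."""
    base = 100.0
    discount = 0.15 if member else 0.0
    return {"base": base, "discount": discount, "total": base * (1 - discount)}

def render_web(q: dict) -> str:
    # Human-facing surface: prose and formatting.
    return f"Your price: ${q['total']:.2f}"

def render_agent(q: dict) -> dict:
    # Agent-facing surface: structured, machine-readable.
    return {"total": q["total"], "currency": "USD", "discount": q["discount"]}

quote = price_quote(member=True)
print(render_web(quote))
print(render_agent(quote))
```

Because the discount rule lives only in the engine, a new channel, including an agent, gets the same governed answer without the business losing control of margin.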

That’s the economic point.

Media and sports are the pressure tests

Media: the middle gets repriced first

Media companies are already feeling the pressure.

AI summaries and answer boxes are changing how people consume information. Many users can get the basic answer without clicking through to the publisher at all. That reduces traffic, weakens the old page view model, and puts pressure on media companies that built too much of their economics around generic content and search dependency.

AI also compresses commodity content. It makes generic summaries, rewrites, explainers, recaps, and basic aggregation cheaper.

That raises the premium on work with real differentiation: access, reporting, judgment, trust, original analysis, distinctive voice, archives, rights, data, community, events, and brand authority.

Commodity media gets compressed. Trusted access gets more valuable. The mushy middle (rewritten summaries, generic recaps, low-differentiation explainers) gets repriced first.

AI agents compressing commodity media while increasing the value of trusted access

Media companies need to package their value for agentic distribution without turning themselves into raw material for someone else’s product. That means clear licensing, attribution, provenance, usage rights, archive access rules, subscriber identity, commercial terms, and agent-readable access models.

AI licensing should be treated like rights licensing.

If a publisher, league, studio, or rights holder prices AI access as side revenue, it’ll regret the terms. The asset being licensed may become AI training material, retrieval source, or trusted data layer inside the product that sits between the brand and its audience.

That’s leverage.

Price it like leverage.

The companies that survive will package trust and rights for agentic distribution.

Sports: live scarcity buys time, not immunity

Sports are different.

Sports has live scarcity. It has emotional users. It has rights constraints, sponsors, betting adjacency, ticketing, memberships, loyalty, media workflows, commerce, personalization, venue operations, and short value windows.

The game still happens. Athletes still compete. Fans still care. People still go to venues. Broadcasts still matter. Rights still matter. Sponsors still pay for cultural attention.

That gives sports time.

Time also creates complacency.

The pressure starts around the live event. Fans don’t want to manage twelve apps, six logins, two ticket wallets, three streaming packages, fantasy tools, betting information, sponsor offers, parking, transit, merchandise, loyalty points, venue rules, and postgame content.

Agents will coordinate that mess.

AI agents coordinating the sports fan journey across tickets parking loyalty and support

A fan’s agent could find the best ticket, check membership benefits, confirm resale restrictions, book parking, plan arrival time, surface food offers, coordinate friends, retrieve highlights, compare merchandise, manage loyalty rewards, and handle support.

The sports organization can participate through governed systems or get routed around by platforms sitting closer to the user.

That’s the strategic issue.

As the internet moves from browsing to delegation, value shifts away from the visit and toward ticketing, membership, identity, rights, trust, and the business systems underneath the user interface.

That’s where sports organizations need to focus.

Customer truth. Ticketing logic. Membership rules. Payments. Entitlements. Rights. Archive access. Sponsor activation. Venue operations. Support workflows. Loyalty. Real-time commerce.

The website is the wrapper.

The engine is the business.

Full stop.

Loyalty: points theater will get exposed

Loyalty becomes part of this shift.

Useful loyalty compounds while superficial engagement is exposed

A fan’s agent could know seat preferences, price thresholds, preferred game times, player affinity, merchandise behavior, parking preferences, and sponsor tolerance. That turns loyalty from points theater into a permissioned relationship that compounds.

Useful loyalty has business value.

Fake engagement has reporting value.

Leaders should know the difference.

Betting, finance, and regulated industries need sharper rules

Regulated industries should treat agentic systems as both economic leverage and risk acceleration.

In betting, an agent can monitor injuries, odds, weather, line movement, player data, historical trends, bankroll rules, market changes, and user risk appetite. It can also accelerate harmful behavior, automate poor decisions, or create a compliance problem if the platform can’t prove authorization, suitability, disclosure, and consent.

In investing, an agent can monitor portfolios, tax exposure, market news, rebalancing thresholds, cash needs, risk limits, and product suitability. It can also execute a bad trade faster than a human can understand the consequence.

In insurance, healthcare, lending, wealth management, travel, ticketing, and enterprise procurement, the same pattern applies. Agents can reduce friction and lower operating cost. They can also create unacceptable risk if authority is vague.

The rule is simple: risk determines autonomy.

AI agents in regulated industries where risk determines autonomy and governance

Low-risk actions can be automated with logs. Medium-risk actions need constraints, summaries, and review. High-risk actions need explicit authorization, audit trails, disclosures, and human confirmation. Critical actions need governance before execution.
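Those tiers can be expressed as a simple policy table: each risk level maps to the controls that must be in place before an action runs. A minimal sketch, with hypothetical tier names and control labels:

```python
# Sketch of "risk determines autonomy": each tier maps to required
# controls, and an action runs only when those controls are satisfied.

POLICY = {
    "low":      {"autonomy": "auto",     "requires": {"log"}},
    "medium":   {"autonomy": "review",   "requires": {"log", "summary"}},
    "high":     {"autonomy": "confirm",  "requires": {"log", "audit", "human_ok"}},
    "critical": {"autonomy": "governed", "requires": {"log", "audit", "human_ok", "governance"}},
}

def authorize(risk: str, controls: set) -> bool:
    """Allow the action only if every required control is present."""
    return POLICY[risk]["requires"] <= controls

print(authorize("low", {"log"}))             # True: logging is enough
print(authorize("high", {"log", "audit"}))   # False: no human confirmation
```

The value of making the table explicit is auditability: when a regulator asks why an agent acted, the answer is a policy lookup, not a shrug.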

This is where the economics get interesting. Regulated companies often move slower because control costs money. Agentic systems can reduce operating cost, support cost, manual review cost, administrative drag, and decision latency. They can also create new costs in compliance, monitoring, incident response, model governance, and legal exposure.

The winners will use agents to reduce low-value human labor while strengthening controls around high-consequence decisions.

That’s the regulated-industry prize: speed where the risk is low, judgment where the risk is high, auditability everywhere.

In regulated markets, the agent is only as valuable as the control model around it.

Reality will need proof

Agents will act on behalf of users. They’ll also consume, evaluate, summarize, and distribute media on behalf of users.

That makes provenance critical.

AI agents verifying content provenance before sharing or acting on media

Synthetic media means AI-generated or AI-manipulated images, video, audio, documents, and other content that can look or sound real.

As generated video, images, voice, and documents become more convincing, the premium moves toward proof. Who created this? When was it captured? Was it edited? What system touched it? What rights apply? Can it be trusted?

C2PA describes Content Credentials as a global content provenance and authenticity standard. Its FAQ says Content Credentials provide a cryptographically secure way to capture and express the recorded history of digital content, including how content was created, what tools were used, when and where it was made, and how it changed over time.

That matters for news, sports, entertainment, politics, finance, insurance, education, legal workflows, and public institutions.

If agents can’t tell what’s real, licensed, edited, synthetic, or manipulated, they’ll amplify garbage at scale.

This is where some ideas from the Web3 era may become useful in a less ridiculous form.

Blockchain doesn’t need to save the internet. Verifiable provenance, ownership, licensing, entitlement, and transaction history may become more useful as agents, synthetic media, and automated commerce spread.

The useful question is practical: where does a tamper-resistant record reduce risk, increase trust, or support a transaction?
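A tamper-resistant record can be sketched in a few lines. The toy below chains content-history entries with hashes so any later edit to the record is detectable; it is an illustration of the idea, not the actual C2PA Content Credentials format.

```python
# Toy tamper-evident provenance chain. Illustrative only: not the
# C2PA manifest format, just the hash-chaining idea behind it.
import hashlib
import json

def add_entry(chain: list, event: dict) -> list:
    """Append an event, linking it to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "genesis"
    payload = json.dumps({"prev": prev, "event": event}, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    return chain + [{"event": event, "prev": prev, "hash": digest}]

def verify(chain: list) -> bool:
    """Recompute every hash; any edited entry breaks the chain."""
    prev = "genesis"
    for entry in chain:
        payload = json.dumps({"prev": prev, "event": entry["event"]}, sort_keys=True)
        if entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

chain = add_entry([], {"action": "captured", "tool": "camera"})
chain = add_entry(chain, {"action": "edited", "tool": "crop"})
print(verify(chain))                          # True: history intact
chain[0]["event"]["tool"] = "synthetic"
print(verify(chain))                          # False: tampering detected
```

An agent deciding whether to trust a piece of media would run exactly this kind of check before acting on it or passing it along.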

That’s where the real value sits.

The leadership agenda is clear

This belongs in one room.

Boards should care because distribution power is moving.

CEOs should care because business models are changing.

Commercial teams should care because discovery, loyalty, licensing, sponsorship, and conversion are being repriced.

Product teams should care because the user journey is becoming agent-mediated.

Design teams should care because UX is moving into intent, trust, confirmation, and service design.

Engineering teams should care because the system layer becomes the experience.

Legal, privacy, and security teams should care because autonomous action expands the risk surface.

Operations teams should care because agents will expose broken workflows.

AI agents requiring product design engineering legal security and operations in one room

The agenda is simple

  • Identify which customer jobs agents will take first
  • Separate the engine from the user interface
  • Make customer, product, policy, pricing, rights, and inventory data machine-readable
  • Define permissions and confirmation rules
  • Build auditability into every agent-facing action
  • Treat AI access like commercial rights
  • Redesign UX around trust, service design, and human intervention
  • Stop measuring digital health only through visits and sessions

Then map the systems behind those jobs. Locate the customer record. Find the permission model. Identify which actions an agent can take, which actions require confirmation, which policies are machine-readable, which workflows still depend on a human clicking through a screen, which systems lack APIs, where audit logs are missing, where bad data can create harm, where automation can reduce cost, and where automation can create liability.

That map becomes the roadmap.

Then do the hard work.

Clean the data. Expose governed APIs. Define permission models. Create confirmation rules. Build audit trails. Instrument observability. Harden identity. Package rights and data with commercial discipline. Build design systems that describe behavior, not just components. Train product, design, engineering, security, legal, commercial, and operations leaders to work together.

The agentic future rewards companies with clean systems and clear rules.

That’s the work.

The inevitable future

The next digital advantage will go to companies whose systems agents can understand, trust, and act on.

Apple may be the catalyst that makes personal agents normal. Google, Samsung, Huawei, OpenAI, Anthropic, Perplexity, Tencent, Baidu, Nous Research, and the open-source ecosystem are already building the surrounding machinery.

The direction is visible.

The user interface is no longer the whole game.

The agent is becoming the operating layer between intent and action.

The companies that win will own the engine: identity, data, permissions, rights, pricing, payments, workflows, trust, and governance.

AI agents operating against the business engine instead of the website wrapper

Leaders need to decide now whether their products are ready for the user that increasingly stands between their business and the human customer.

What are you doing about it?

And if you’re not thinking about it yet, start now.

The window is already closing.


©Bora Nikolic 2026

Make something great.
