Members of the Leap Metrics team had the opportunity to attend Google Cloud Next this year—and the message was loud and clear: the future of AI isn’t just about bigger models. It’s about making those models useful by connecting them with your own data, your own systems, and your own workflows.

Hosted in a space buzzing with developers, founders, and cloud architects, the event delivered a front-row seat to major product launches, demos, and deep discussions around what’s next in generative AI. We saw firsthand how AI is evolving from isolated tools into interconnected agents that can reason, automate, and collaborate.

Context Is Key to Smarter AI

A consistent theme across sessions was the importance of contextual grounding. Today’s foundation models, no matter how advanced, don’t inherently know what’s on your systems. They operate based on training data—not your CRM, EHR, or case notes.

That’s where Model Context Protocol (MCP) comes in.

Introduced in late 2024 by Anthropic, the AI company behind the Claude models, MCP provides a standardized way for large language models (LLMs) to securely access and interact with external APIs and data sources. Think of it as a bridge that lets models retrieve real-time, personalized data reliably, without requiring retraining. Recent advancements in Claude have made MCP adoption more tangible, and both Anthropic and Google are actively working to support it across their ecosystems.

For Leap Metrics and other applied AI teams, the implications are huge. MCP enables AI models to deliver context-aware responses, tailored to the specific member, patient, or user in question.
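
To make that concrete, here is a minimal sketch of an MCP server built with the official Python SDK's FastMCP helper. The get_member_summary tool and its in-memory records are hypothetical stand-ins for a real CRM or EHR lookup behind proper authentication:

```python
# Minimal MCP server sketch using the MCP Python SDK's FastMCP helper.
# The get_member_summary tool and the RECORDS dict are hypothetical,
# standing in for a real CRM/EHR query behind proper authentication.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("care-context")

# Placeholder data; in practice this would query your system of record.
RECORDS = {
    "M-1001": "Type 2 diabetes; last A1c 7.2; care plan updated 2025-03-14.",
}

@mcp.tool()
def get_member_summary(member_id: str) -> str:
    """Return a brief, human-readable care summary for a member."""
    return RECORDS.get(member_id, "No record found for that member ID.")

if __name__ == "__main__":
    # Serves over stdio by default so an MCP-aware client can call the tool
    # at inference time, with no retraining required.
    mcp.run()
```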

Building with Agents, Not Just Prompts

With better models and deeper context, the next question is: how do you automate? At Google Next, the answer was clear—AI agents.

Google launched its Agent Development Kit (ADK), giving developers tools to build structured, goal-driven agents that can loop, reason, and interact with other systems. This goes beyond simple prompting. These agents are built with rules, workflows, and clear termination conditions—making them safe and scalable.
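
As a rough illustration, here is what defining a simple tool-using agent looks like with ADK's Python package (google-adk). The lookup_care_gaps tool is hypothetical, and parameter names or model IDs may differ between releases:

```python
# Agent definition sketched against the ADK Python quickstart pattern
# (pip install google-adk). The lookup_care_gaps tool is hypothetical,
# and exact parameters/model IDs may differ across ADK releases.
from google.adk.agents import Agent

def lookup_care_gaps(member_id: str) -> dict:
    """Hypothetical tool: return open care gaps for a member."""
    return {"member_id": member_id, "open_gaps": ["annual wellness visit"]}

root_agent = Agent(
    name="care_gap_agent",
    model="gemini-2.0-flash",  # illustrative model ID
    description="Answers questions about a member's open care gaps.",
    instruction="Call lookup_care_gaps before answering and cite the member ID.",
    tools=[lookup_care_gaps],
)
```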

Google also debuted multi-agent support and the Agent2Agent (A2A) protocol, which let different AI agents collaborate across tasks. Imagine a summarizer agent handing off results to an email writer, which then triggers a scheduling agent, all without manual input.
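
The sketch below captures that handoff pattern in plain Python rather than the A2A wire protocol itself; each "agent" here is just a placeholder callable standing in for an LLM or API call:

```python
# Conceptual handoff sketch, not the A2A wire protocol. Each agent is a
# hypothetical callable standing in for an LLM or external API call.
from typing import Callable

def summarizer_agent(thread: str) -> str:
    return f"Summary: {thread[:80]}..."            # stand-in for an LLM call

def email_agent(summary: str) -> str:
    return f"Hi team,\n\n{summary}\n\nBest,\nOps"  # stand-in for an LLM call

def scheduler_agent(email_body: str) -> str:
    return "Follow-up scheduled for Friday 10:00"  # stand-in for a calendar API

def run_pipeline(payload: str, steps: list[Callable[[str], str]]) -> str:
    """Pass each agent's output to the next, with no manual input in between."""
    for step in steps:
        payload = step(payload)
    return payload

print(run_pipeline("Long Slack thread about Q3 planning...",
                   [summarizer_agent, email_agent, scheduler_agent]))
```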

Agent Space and Applied Use Cases

Google’s new Agent Space was another highlight. Designed as a productivity suite for internal teams, it gives agents a place to automate daily tasks like:

  • Summarizing Slack threads
  • Pulling data from Google Sheets
  • Drafting and sending emails
  • Extracting descriptions from websites

By interpreting natural language and taking action across platforms, they help teams reclaim time for higher-impact work.

AI Telemetry: Debugging for Intelligent Systems

Another standout feature introduced was AI Telemetry—a toolset that plugs into your code to track agent behavior and diagnose errors. It helps answer the inevitable question in autonomous workflows: what went wrong, and why?

This kind of telemetry is essential for maintaining trust and control in systems where agents are acting with increasing autonomy. With AI taking the wheel more often, having a dashboard that visualizes failures, misfires, and context mismatches is critical.
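
To give a rough sense of what that instrumentation can look like, here is a generic OpenTelemetry-style sketch rather than Google's specific tooling; run_agent_step is a hypothetical placeholder for a real agent invocation:

```python
# Generic OpenTelemetry-style sketch (not Google's specific AI telemetry tooling):
# wrap each agent step in a span so failures and context mismatches show up in
# a trace. run_agent_step is a hypothetical placeholder for the real agent call.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
from opentelemetry.trace import Status, StatusCode

trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
tracer = trace.get_tracer("agent-telemetry-demo")

def run_agent_step(task: str) -> str:
    return f"done: {task}"  # placeholder for the real agent invocation

def traced_step(agent_name: str, task: str) -> str:
    with tracer.start_as_current_span("agent.step") as span:
        span.set_attribute("agent.name", agent_name)
        span.set_attribute("agent.task", task)
        try:
            result = run_agent_step(task)
            span.set_attribute("agent.result_chars", len(result))
            return result
        except Exception as exc:
            # Record what went wrong, and why, directly on the trace.
            span.record_exception(exc)
            span.set_status(Status(StatusCode.ERROR, str(exc)))
            raise

traced_step("summarizer", "Summarize today's Slack thread")
```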

What This Means for Healthcare AI

Google Cloud Next 2025 made it clear: the future of AI is not just about smarter models—it’s about context, control, and collaboration. From the rise of MCP to the introduction of agent frameworks and AI telemetry, the focus is shifting toward making AI systems truly actionable and aligned with real-world needs.

For Leap Metrics, these developments affirm the path we’re on. As agentic AI continues to evolve, we’re closely evaluating where these capabilities can bring the greatest impact—both for our team and for the healthcare organizations we serve.