Groundcover enhances agent-based AI tracing... adds native support for Google Vertex AI
Groundcover, a startup specializing in application observability, has significantly expanded its “AI observability” features. The update enhances tracing capabilities for agent-based AI systems and introduces native support for Google Cloud’s managed AI platform, Google Vertex AI.
The focus of this expansion is to fill the “visibility gap” that arises when enterprises rapidly deploy large language models (LLMs) into real-world service environments. Existing observability tools are mostly designed for traditional software operating under fixed rules, and they have clear limitations when applied to AI systems that respond to prompts in real time. As a result, development teams and platform operators find it difficult to understand which inputs lead to which outputs, why responses change, and where costs are incurred and how much.
To address these issues, Groundcover has enhanced its capabilities to capture the full context of LLM interactions and trace the result generation process within increasingly complex multi-stage AI systems. The company emphasizes that its core advantage lies in enabling quick application of observability without requiring additional detection work in the operational environment.
Groundcover Vice President Or Benjamin explained, “Customers have consistently reported that LLM calls fall outside the scope of operational observability teams. They want a systematic way to understand prompts, responses, and costs. To meet large-scale, mission-critical AI monitoring needs beyond current observability, we developed AI observability.”
Agent Tracking Visibility
The most notable change in this update is “Agent Tracking Visibility.” This feature lets users see not only each model invocation but also the tool executions along the way, including their parameters, results, and the reasoning path that connects them. For enterprises operating multi-step agent AI workflows, this makes troubleshooting issues and improving performance much easier.
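To make the idea concrete, the kind of data such a trace captures can be sketched as a tree of spans, where each span records one model call or tool execution with its inputs, output, and nested children. This is a minimal illustrative schema, not Groundcover's actual data model; the `AgentSpan` class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class AgentSpan:
    """One step in a multi-step agent workflow (hypothetical schema)."""
    name: str                      # e.g. "llm_call" or "tool:get_weather"
    inputs: dict[str, Any]         # prompt text or tool parameters
    output: Any                    # model response or tool result
    children: list["AgentSpan"] = field(default_factory=list)

    def walk(self, depth: int = 0):
        """Yield (depth, span) pairs along the whole reasoning path."""
        yield depth, self
        for child in self.children:
            yield from child.walk(depth + 1)

# A two-step agent run: the model decides to call a tool, then answers.
root = AgentSpan(
    name="llm_call",
    inputs={"prompt": "What's the weather in Berlin?"},
    output="<tool_use: get_weather>",
    children=[
        AgentSpan(name="tool:get_weather",
                  inputs={"city": "Berlin"},
                  output={"temp_c": 11}),
    ],
)
for depth, span in root.walk():
    print("  " * depth + span.name)
```

Walking the tree reproduces the reasoning path that a trace viewer would render, which is exactly the kind of step-by-step visibility the feature described above aims to provide.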
Cost Management Enhancements
Cost management features have also been strengthened. The newly added precise cost attribution function considers prompt caching to track token costs at a detailed execution unit level. It differentiates between regular input tokens, cache creation tokens, and cache read tokens, providing a more accurate reflection of the complex billing structure of the latest LLM APIs. This enables teams to better understand the actual costs incurred by specific agent executions or sessions.
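The arithmetic behind this kind of token-level cost attribution can be sketched as follows. The per-million-token prices here are placeholder values chosen only to show the pattern (cache reads heavily discounted, cache writes priced slightly above regular input); real rates vary by model and provider.

```python
# Hypothetical per-million-token prices (USD); real rates vary by provider.
PRICE_PER_MTOK = {
    "input": 3.00,        # regular input tokens
    "cache_write": 3.75,  # cache creation tokens
    "cache_read": 0.30,   # cache read tokens (heavily discounted)
    "output": 15.00,
}

def attribute_cost(usage: dict[str, int]) -> float:
    """Return the USD cost of one execution unit, split by token type."""
    return sum(PRICE_PER_MTOK[kind] * count / 1_000_000
               for kind, count in usage.items())

# One agent step: most of the prompt is served from the cache.
step_usage = {"input": 1_200, "cache_write": 0,
              "cache_read": 40_000, "output": 600}
print(f"${attribute_cost(step_usage):.6f}")  # → $0.024600
```

Distinguishing cache reads from regular input matters: billing the 40,000 cached tokens at the full input rate would overstate this step's cost roughly fivefold.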
Support for Google Vertex AI
Additionally, support for Google Vertex AI has been introduced. Enterprises building AI services on Google Cloud can now automatically collect relevant observability data without additional detection work. The company states that its design ensures all observability data remains within the customer’s environment, balancing security and data control.
Groundcover’s AI observability features are currently being deployed automatically and universally to all customers. The company announced that these new features were demonstrated at the “Google Cloud Next” event held from April 22 to 24.
As AI services rapidly move from experimental phases into real-world deployment, “AI observability” is evolving beyond simple monitoring to become a core area for quality, cost, and reliability management. This expansion of features is seen as a step toward providing enterprises with the necessary infrastructure for more stable operation of agent-based AI.
TP AI Notice: This article is summarized using the TokenPost.ai language model. Content may be incomplete or differ from the original details.