The Q1 2026 updates for the Amplify AI Gateway center on the secure management and governance of AI agents through the Model Context Protocol (MCP). This release introduces a redesigned management interface for selective tool exposure, implements critical rate-limiting controls to protect back-end stability, and integrates pre-built guardrails to automate content safety and compliance. These enhancements provide the necessary infrastructure to scale AI initiatives while maintaining strict oversight of model interactions and resource consumption.
The redesigned MCP proxy server interface allows providers to selectively expose specific Tools from back-end servers. This moves away from "all-or-nothing" exposure, ensuring that only necessary capabilities are accessible to AI agents.
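Selective tool exposure can be pictured as an allow-list filter sitting between agents and the back-end server. The sketch below is illustrative only: the names (`ALLOWED_TOOLS`, `filter_tool_list`, `authorize_call`) and the dict-shaped responses are assumptions, not the product's actual API.

```python
# Hypothetical sketch: the proxy filters the back end's tool list through a
# provider-authorized allow-list, so agents only see approved capabilities.

ALLOWED_TOOLS = {"search_orders", "get_invoice"}  # provider-authorized subset

def filter_tool_list(backend_response: dict) -> dict:
    """Return a copy of a tools/list result exposing only allowed tools."""
    tools = backend_response.get("tools", [])
    visible = [t for t in tools if t["name"] in ALLOWED_TOOLS]
    return {**backend_response, "tools": visible}

def authorize_call(tool_name: str) -> None:
    """Reject tools/call requests for tools outside the allow-list."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{tool_name}' is not exposed by this proxy")

backend = {"tools": [
    {"name": "search_orders"},
    {"name": "delete_account"},   # exists on the back end, but never exposed
    {"name": "get_invoice"},
]}
exposed = filter_tool_list(backend)
print([t["name"] for t in exposed["tools"]])  # ['search_orders', 'get_invoice']
```

Enforcing the allow-list on both the listing path and the call path matters: filtering only the list would still let an agent invoke a hidden tool by name.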
Updated MCP server protocol support to the latest specification version (transitioning from 2025-03-26 to 2025-11-25), ensuring compatibility with the evolving AI ecosystem.
For new projects, providers now have full control to manually configure and authorize only the specific Tools intended for exposure via the MCP proxy.
Comprehensive rate-limiting support is now available for both MCP servers and MCP proxies.
These limits apply both to overall call volume and to individual tool execution requests, preventing back-end exhaustion and ensuring predictable service levels for AI-driven applications.
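One common way a gateway enforces such limits is a token bucket per scope: one bucket for the server as a whole and one per tool. The class and the rates below are assumptions for illustration, not the product's configuration surface.

```python
# Illustrative token-bucket limiter: a request is admitted only if both the
# server-wide bucket and the per-tool bucket (when one exists) have capacity.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Hypothetical limits: 50 req/s overall, 5 req/s for one expensive tool.
server_bucket = TokenBucket(rate=50, capacity=100)
tool_buckets = {"search_orders": TokenBucket(rate=5, capacity=10)}

def admit(tool_name: str) -> bool:
    bucket = tool_buckets.get(tool_name)
    return server_bucket.allow() and (bucket is None or bucket.allow())
```

Layering the two scopes is what lets a single chatty tool be throttled without starving traffic to the rest of the server.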
Pre-built guardrail connectors enforce safe and compliant LLM usage. They can be inserted directly into integration flows to provide reusable security logic.
Input Validation: Blocks prompt injection and prevents unsafe or non-compliant queries from reaching the model.
Output Redaction: Automatically redacts PII (Personally Identifiable Information) and sensitive content from model responses.
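As a rough sketch of what output redaction involves, the snippet below masks a few common PII patterns in a model response with regular expressions. The patterns, labels, and mask format are illustrative assumptions; a production guardrail would use far richer detection than three regexes.

```python
# Minimal, assumed sketch of output redaction: regex-based masking of
# common PII patterns in a model response before it reaches the caller.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a labeled mask token."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or 555-123-4567."))
# Contact [REDACTED:email] or [REDACTED:phone].
```

Keeping the label in the mask token preserves enough context for downstream consumers (and auditors) to see what kind of data was removed without seeing the data itself.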
Both guardrails include automated audit logging, giving organizations a clear trail of AI interactions for regulatory requirements.
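An audit trail of this kind is typically a stream of structured records, one per guardrail decision. The field names and JSON-lines shape below are assumptions chosen for illustration, not the product's actual log schema.

```python
# Hypothetical audit record emitted when a guardrail acts on a request or
# response; each record is one JSON line, suitable for shipping to a SIEM.
import datetime
import json

def audit_log(event: str, tool: str, decision: str, detail: str = "") -> str:
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event": event,        # e.g. "input_validation", "output_redaction"
        "tool": tool,          # which exposed tool the interaction targeted
        "decision": decision,  # e.g. "allowed", "blocked", "redacted"
        "detail": detail,      # human-readable context, never raw PII
    }
    line = json.dumps(record)
    print(line)  # in practice, written to a durable log sink
    return line
```

Logging the decision and a PII-free summary, rather than the offending content itself, keeps the audit trail useful for regulators without turning the logs into a second copy of the sensitive data.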