vertex-engine-inspector

Inspect and validate Vertex AI Agent Engine deployments including Code Execution Sandbox, Memory Bank, A2A protocol compliance, and security posture. Generates production readiness scores. Use when asked to inspect, validate, or audit an Agent Engine deployment. Trigger with "inspect agent engine", "validate agent engine deployment", "check agent engine config", "audit agent engine security", "agent engine readiness check", "vertex engine health", or "reasoning engine status".

25 stars

Best use case

vertex-engine-inspector is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using vertex-engine-inspector should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/vertex-engine-inspector/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/jeremylongshore/claude-code-plugins-plus-skills/vertex-engine-inspector/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/vertex-engine-inspector/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How vertex-engine-inspector Compares

| Feature / Agent | vertex-engine-inspector | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

It inspects and validates Vertex AI Agent Engine deployments across runtime configuration, Code Execution Sandbox, Memory Bank, A2A protocol compliance, security posture, performance, and monitoring, then generates a weighted production-readiness score with prioritized recommendations. The full description and trigger phrases are listed at the top of this page.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Vertex Engine Inspector

## Overview

Inspect and validate Vertex AI Agent Engine deployments across seven categories: runtime configuration, Code Execution Sandbox, Memory Bank, A2A protocol compliance, security posture, performance metrics, and monitoring observability. This skill generates weighted production-readiness scores (0-100%) with actionable recommendations for each deployment.

## Prerequisites

- `google-cloud-aiplatform[agent_engines]>=1.120.0` Python SDK installed
- `gcloud` CLI authenticated (for IAM and monitoring queries — **not** for Agent Engine CRUD)
- IAM roles: `roles/aiplatform.user` and `roles/monitoring.viewer` granted on the target project
- Access to the target Google Cloud project hosting the Agent Engine deployment
- `curl` for A2A protocol endpoint testing (AgentCard, Task API, Status API)
- Cloud Monitoring API enabled for performance metrics retrieval
- Familiarity with Vertex AI Agent Engine concepts: Code Execution Sandbox, Memory Bank, Model Armor

**Important**: There is no `gcloud` CLI surface for Agent Engine (no `gcloud ai agents`, `gcloud ai reasoning-engines`, or `gcloud alpha ai agent-engines` commands exist). All Agent Engine operations use the Python SDK via `vertexai.Client()` or `vertexai.preview.reasoning_engines`.
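Because all Agent Engine operations go through the Python SDK, a minimal connection sketch looks like the following. The project, location, and engine ID are placeholder values, and the commented SDK calls assume the `google-cloud-aiplatform` client surface described above; verify the exact signatures against your installed SDK version.

```python
# Sketch of connecting to an Agent Engine deployment via the Python SDK.
# The resource-name format and the commented client calls are assumptions
# based on the google-cloud-aiplatform docs; check your SDK version.

def agent_engine_resource_name(project: str, location: str, engine_id: str) -> str:
    """Build the fully qualified Agent Engine (Reasoning Engine) resource name."""
    return f"projects/{project}/locations/{location}/reasoningEngines/{engine_id}"

# Placeholder identifiers, not a real deployment:
name = agent_engine_resource_name("my-project", "us-central1", "1234567890")

# With the SDK installed and gcloud auth configured, retrieval would look like:
#   import vertexai
#   client = vertexai.Client(project="my-project", location="us-central1")
#   engine = client.agent_engines.get(name=name)
print(name)
```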

## Instructions

1. Connect to the Agent Engine deployment by retrieving agent metadata via the Python SDK (`client.agent_engines.get(name=...)`)
2. Parse the runtime configuration: model selection (Gemini 2.5 Pro/Flash), tools enabled, VPC settings, and scaling policies
3. Validate Code Execution Sandbox settings: confirm state TTL is 7-14 days, sandbox type is `SECURE_ISOLATED`, and IAM permissions are scoped to required GCP services only
4. Check Memory Bank configuration: verify enabled status, retention policy (min 100 memories), Firestore encryption, indexing enabled, and auto-cleanup active
5. Test A2A protocol compliance by probing `/.well-known/agent-card`, `POST /v1/tasks:send`, and `GET /v1/tasks/<task-id>` endpoints for correct responses
6. Audit security posture: validate IAM least-privilege roles, VPC Service Controls perimeter, Model Armor activation, encryption at rest and in transit, and absence of hardcoded credentials
7. Query Cloud Monitoring for performance metrics: request count, error rate (target < 5%), latency percentiles (p50/p95/p99), token usage, and cost estimates over the last 24 hours
8. Assess monitoring and observability: confirm Cloud Monitoring dashboards, alerting policies, structured logging, OpenTelemetry tracing, and Cloud Error Reporting are configured
9. Calculate weighted scores across all categories and determine overall production readiness status
10. Generate a prioritized list of recommendations with estimated score improvement per remediation
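The weighted scoring in steps 9-10 can be sketched as follows. The per-category weights below are illustrative assumptions (the skill's actual weights live in its reference files), chosen only to show how a security-heavy weighting rolls up into an overall 0-100% readiness score.

```python
# Hedged sketch of the weighted scoring in step 9. The weights are
# illustrative assumptions, not the skill's actual configuration.

WEIGHTS = {
    "runtime": 0.10,
    "sandbox": 0.15,
    "memory_bank": 0.10,
    "a2a": 0.15,
    "security": 0.25,   # security weighted heaviest in this sketch
    "performance": 0.15,
    "monitoring": 0.10,
}

def overall_readiness(category_scores: dict[str, float]) -> float:
    """Combine per-category scores (0-100) into one weighted overall score."""
    total = sum(WEIGHTS[c] * category_scores[c] for c in WEIGHTS)
    return round(total, 1)

scores = {
    "runtime": 90, "sandbox": 85, "memory_bank": 80, "a2a": 100,
    "security": 70, "performance": 88, "monitoring": 60,
}
print(overall_readiness(scores))  # weighted mean across the seven categories
```

A deployment scoring below the pre-production target (85% in Scenario 1 below) would then be flagged as not ready, with the lowest-scoring categories driving the recommendation list.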

See `${CLAUDE_SKILL_DIR}/references/inspection-workflow.md` for the phased inspection process and `${CLAUDE_SKILL_DIR}/references/inspection-categories.md` for detailed check criteria.

## Output

- Inspection report in YAML format with per-category scores and overall readiness percentage
- Runtime configuration summary: model, tools, VPC, scaling settings
- A2A protocol compliance matrix: pass/fail for AgentCard, Task API, Status API
- Security posture score with breakdown: IAM, VPC-SC, Model Armor, encryption, secrets
- Performance metrics dashboard: error rate, latency percentiles, token usage, daily cost estimate
- Prioritized recommendations with estimated score improvement per item

See `${CLAUDE_SKILL_DIR}/references/example-inspection-report.md` for a complete sample report.
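The A2A compliance matrix above can be assembled from raw probe results along these lines. The endpoint paths come from the skill text; the status codes are hard-coded stand-ins for real HTTP responses.

```python
# Illustrative sketch: turning A2A endpoint probe results into the
# pass/fail compliance matrix reported by the inspector. Status codes
# below are fake sample data, not real probe output.

A2A_CHECKS = {
    "AgentCard": "GET /.well-known/agent-card",
    "Task API": "POST /v1/tasks:send",
    "Status API": "GET /v1/tasks/<task-id>",
}

def compliance_matrix(status_codes: dict[str, int]) -> dict[str, str]:
    """Map each probe's HTTP status to PASS (any 2xx) or FAIL."""
    return {
        check: "PASS" if 200 <= status_codes.get(check, 0) < 300 else "FAIL"
        for check in A2A_CHECKS
    }

# Example: AgentCard reachable, Task API rejected, Status API reachable.
matrix = compliance_matrix({"AgentCard": 200, "Task API": 403, "Status API": 200})
for check, verdict in matrix.items():
    print(f"{check:>10}: {verdict}  ({A2A_CHECKS[check]})")
```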

## Error Handling

| Error | Cause | Solution |
|-------|-------|----------|
| Agent metadata not accessible | Insufficient IAM permissions or incorrect agent ID | Verify `roles/aiplatform.user` granted; confirm agent ID with `client.agent_engines.list()` via Python SDK |
| A2A AgentCard endpoint 404 | Agent not configured for A2A protocol or endpoint path incorrect | Check agent configuration for A2A enablement; verify `/.well-known/agent-card` path |
| Cloud Monitoring metrics empty | Monitoring API not enabled or no recent traffic | Run `gcloud services enable monitoring.googleapis.com`; generate test traffic first |
| VPC-SC perimeter blocking access | Inspector running outside VPC Service Controls perimeter | Add inspector service account to access level; use VPC-SC bridge or access policy |
| Code Execution TTL out of range | State TTL set below 1 day or above 14 days | Adjust TTL to 7-14 days for production; values above 14 days are rejected by Agent Engine |
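The TTL row above can be checked mechanically. This sketch encodes the 7-14 day production window and the 14-day hard cap stated in the skill text; treat those exact limits as assumptions to verify against current Agent Engine documentation.

```python
# Minimal check for the "Code Execution TTL out of range" error row.
# The 7-14 day window and 14-day cap are taken from this skill's text.

def check_sandbox_ttl(ttl_days: int) -> str:
    """Classify a Code Execution Sandbox state TTL in whole days."""
    if ttl_days > 14:
        return "REJECTED"   # above the stated Agent Engine hard limit
    if 7 <= ttl_days <= 14:
        return "OK"         # recommended production range
    return "WARN"           # accepted, but below the 7-day production floor

for ttl in (3, 10, 20):
    print(f"TTL {ttl}d -> {check_sandbox_ttl(ttl)}")
```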

See `${CLAUDE_SKILL_DIR}/references/errors.md` for additional error scenarios.

## Examples

**Scenario 1: Pre-Production Readiness Check** -- Inspect a newly deployed ADK agent before production launch. Run all 28 checklist items across security, performance, monitoring, compliance, and reliability. Target: overall score above 85% before approving production traffic.

**Scenario 2: Security Audit After IAM Change** -- Re-inspect security posture after modifying service account roles. Validate that least-privilege is maintained (target: IAM score 95%+), VPC-SC perimeter is intact, and Model Armor remains active.

**Scenario 3: Performance Degradation Investigation** -- Inspect an agent showing elevated error rates. Query 24-hour performance metrics, identify latency spikes at p95/p99, check auto-scaling behavior, and correlate with token usage patterns to isolate the root cause.
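The latency analysis in Scenario 3 reduces to computing percentiles and an error rate over a metrics window. This sketch uses the nearest-rank percentile method on a hard-coded latency sample; a real inspection would pull the same numbers from Cloud Monitoring instead.

```python
# Sketch of the Scenario 3 latency/error analysis. The latency sample
# and request counts below are fabricated illustration data.
import math

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile: smallest value covering pct% of samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [120, 130, 125, 140, 900, 135, 128, 132, 1100, 127]
for pct in (50, 95, 99):
    print(f"p{pct}: {percentile(latencies_ms, pct)} ms")

errors, requests = 12, 480
error_rate = errors / requests * 100
print(f"error rate: {error_rate:.1f}% (target < 5%)")
```

Here the p95/p99 values sit far above the median, which is the spike pattern Scenario 3 is looking for, while the error rate itself is still inside the 5% target.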

## Resources

- [Vertex AI Agent Engine Documentation](https://cloud.google.com/vertex-ai/docs/agents) -- deployment and configuration
- [A2A Protocol Specification](https://google.github.io/A2A/) -- AgentCard, Task API, protocol compliance
- [Cloud Monitoring API](https://cloud.google.com/monitoring/api/v3) -- metrics queries and dashboard configuration
- [VPC Service Controls](https://cloud.google.com/vpc-service-controls/docs) -- perimeter setup and access policies
- [Model Armor](https://cloud.google.com/vertex-ai/docs/generative-ai/model-armor) -- prompt injection protection configuration

Related Skills

Socratic Method: The Dialectic Engine

25
from ComeOnOliver/skillshub

This skill transforms Claude into a Socratic agent — a cognitive partner who guides

vertex-ai-media-master

25
from ComeOnOliver/skillshub

Automatic activation for all Google Vertex AI multimodal operations: video processing, audio generation, image creation, and marketing campaigns. Trigger phrases: "vertex ai", "gemini multimodal", "process video", "generate audio", "create images", "marketing campaign", "imagen", "video understanding", "multimodal", "content generation", "media assets". Auto-invokes for video processing and understanding (up to 6 hours), audio generation and transcription, image generation with Imagen 4, marketing campaign automation, social media content creation, ad creative generation, and multimodal content workflows.

vertex-infra-expert

25
from ComeOnOliver/skillshub

Terraform infrastructure specialist for Vertex AI services and Gemini deployments. Provisions Model Garden, endpoints, vector search, pipelines, and enterprise AI infrastructure. Triggers: "vertex ai terraform", "gemini deployment terraform", "model garden infrastructure", "vertex ai endpoints"

vertex-ai-pipeline-creator

25
from ComeOnOliver/skillshub

Vertex AI Pipeline Creator - Auto-activating skill. Triggers on: "vertex ai pipeline creator". Part of the GCP Skills skill category.

vertex-ai-endpoint-config

25
from ComeOnOliver/skillshub

Vertex AI Endpoint Config - Auto-activating skill. Triggers on: "vertex ai endpoint config". Part of the GCP Skills skill category.

vertex-ai-deployer

25
from ComeOnOliver/skillshub

Vertex AI Deployer - Auto-activating skill. Triggers on: "vertex ai deployer". Part of the ML Deployment skill category.

vertex-agent-builder

25
from ComeOnOliver/skillshub

Build and deploy production-ready generative AI agents using Vertex AI, Gemini models, and Google Cloud infrastructure with RAG, function calling, and multi-modal capabilities

firebase-vertex-ai

25
from ComeOnOliver/skillshub

Execute firebase platform expert with Vertex AI Gemini integration for Authentication, Firestore, Storage, Functions, Hosting, and AI-powered features. Use when asked to "setup firebase", "deploy to firebase", or "integrate vertex ai with firebase". Trigger with relevant phrases based on skill purpose.

engineering-features-for-machine-learning

25
from ComeOnOliver/skillshub

This skill empowers Claude to perform feature engineering tasks for machine learning. It creates, selects, and transforms features to improve model performance. Use this skill when the user requests feature creation, feature selection, feature transformation, or any request that involves improving the features used in a machine learning model. Trigger terms include "feature engineering", "feature selection", "feature transformation", "create features", "select features", "transform features", "improve model performance", and similar phrases related to feature manipulation.

feature-engineering-helper

25
from ComeOnOliver/skillshub

Feature Engineering Helper - Auto-activating skill for ML Training. Triggers on: feature engineering helper, feature engineering helper Part of the ML Training skill category.

conducting-chaos-engineering

25
from ComeOnOliver/skillshub

This skill enables Claude to design and execute chaos engineering experiments to test system resilience. It is used when the user requests help with failure injection, latency simulation, resource exhaustion testing, or resilience validation. The skill is triggered by discussions of chaos experiments (GameDays), failure injection strategies, resilience testing, and validation of recovery mechanisms like circuit breakers and retry logic. It leverages tools like Chaos Mesh, Gremlin, Toxiproxy, and AWS FIS to simulate real-world failures and assess system behavior.

adk-engineer

25
from ComeOnOliver/skillshub

Execute software engineer specializing in creating production-ready ADK agents with best practices, code structure, testing, and deployment automation. Use when asked to "build ADK agent", "create agent code", or "engineer ADK application". Trigger with relevant phrases based on skill purpose.