monitoring-database-transactions
Use this skill when you need to work with monitoring and observability. It provides health monitoring and alerting with comprehensive guidance and automation. Trigger with phrases like "monitor system health", "set up alerts", or "track metrics".
Best use case
monitoring-database-transactions is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using monitoring-database-transactions should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/monitoring-database-transactions/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How monitoring-database-transactions Compares
| Feature / Agent | monitoring-database-transactions | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Use this skill when you need to work with monitoring and observability. It provides health monitoring and alerting with comprehensive guidance and automation. Trigger with phrases like "monitor system health", "set up alerts", or "track metrics".
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Database Transaction Monitor

## Overview

Monitor active database transactions in real time to detect long-running queries, lock contention, uncommitted transactions, and transaction throughput anomalies across PostgreSQL, MySQL, and MongoDB.

## Prerequisites

- Database credentials with access to system catalogs (`pg_stat_activity`, `information_schema.PROCESSLIST`, or MongoDB `currentOp`)
- `psql`, `mysql`, or `mongosh` CLI installed
- Permissions to view other sessions' transactions (PostgreSQL: `pg_monitor` role; MySQL: `PROCESS` privilege)
- Baseline metrics for normal transaction duration and throughput
- Alerting infrastructure (email, Slack webhook, or PagerDuty) for notifications

## Instructions

1. Query the active transaction view to establish a baseline. For PostgreSQL: `SELECT pid, state, query_start, now() - query_start AS duration, query FROM pg_stat_activity WHERE state != 'idle' ORDER BY duration DESC`. For MySQL: `SELECT id, user, host, db, command, time, state, info FROM information_schema.PROCESSLIST WHERE command != 'Sleep'`.
2. Identify long-running transactions by filtering for duration exceeding the application's expected transaction time. Set initial thresholds at 30 seconds for OLTP workloads or 5 minutes for batch/reporting workloads.
3. Detect idle-in-transaction sessions that hold locks without executing queries. For PostgreSQL: `SELECT pid, state, query_start, now() - state_change AS idle_duration FROM pg_stat_activity WHERE state = 'idle in transaction' AND now() - state_change > interval '5 minutes'`.
4. Monitor lock contention by querying the lock manager. For PostgreSQL: `SELECT blocked_locks.pid AS blocked_pid, blocking_locks.pid AS blocking_pid, blocked_activity.query AS blocked_query FROM pg_catalog.pg_locks blocked_locks JOIN pg_catalog.pg_locks blocking_locks ON blocking_locks.locktype = blocked_locks.locktype`. For MySQL: `SELECT * FROM information_schema.INNODB_LOCK_WAITS`.
5. Track transaction throughput by sampling `pg_stat_database` (`xact_commit`, `xact_rollback`) or MySQL `Com_commit` / `Com_rollback` status variables at regular intervals. Calculate commits/second and rollback ratio.
6. Create monitoring scripts that run on a cron schedule (every 30-60 seconds) to capture transaction metrics and write them to a time-series store or log file.
7. Configure alerting thresholds: transactions exceeding 60 seconds, idle-in-transaction sessions exceeding 5 minutes, lock wait queues exceeding 10 waiters, and rollback ratio exceeding 5%.
8. Build a transaction summary dashboard query that shows: active transaction count, average duration, longest running transaction, lock wait count, and commits-per-second over the last hour.
9. Implement automatic remediation for known-safe scenarios: terminate idle-in-transaction sessions older than 30 minutes using `SELECT pg_terminate_backend(pid)` (PostgreSQL) or `KILL connection_id` (MySQL), with logging of terminated sessions.
10. Generate weekly transaction health reports summarizing peak transaction counts, P95/P99 duration percentiles, deadlock occurrences, and long-running transaction incidents.
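The throughput math from step 5 can be sketched in Python. This is a minimal illustration, not part of the skill itself: the `TxnSample` type and `throughput` helper are hypothetical names, and the counter fields mirror PostgreSQL's cumulative `xact_commit` / `xact_rollback` columns, which only ever increase, so consecutive samples are diffed.

```python
from dataclasses import dataclass


@dataclass
class TxnSample:
    """One sample of cumulative transaction counters (e.g. from pg_stat_database)."""
    timestamp: float   # seconds since epoch
    xact_commit: int   # cumulative commit count
    xact_rollback: int # cumulative rollback count


def throughput(prev: TxnSample, curr: TxnSample) -> tuple[float, float]:
    """Return (commits_per_second, rollback_ratio) for the interval between samples."""
    elapsed = curr.timestamp - prev.timestamp
    commits = curr.xact_commit - prev.xact_commit
    rollbacks = curr.xact_rollback - prev.xact_rollback
    total = commits + rollbacks
    commits_per_sec = commits / elapsed if elapsed > 0 else 0.0
    rollback_ratio = rollbacks / total if total > 0 else 0.0
    return commits_per_sec, rollback_ratio


# Example: two samples taken 60 seconds apart
a = TxnSample(timestamp=0.0, xact_commit=10_000, xact_rollback=100)
b = TxnSample(timestamp=60.0, xact_commit=13_000, xact_rollback=250)
cps, ratio = throughput(a, b)
print(f"{cps:.1f} commits/s, rollback ratio {ratio:.1%}")  # 50.0 commits/s, rollback ratio 4.8%
```

A cron job (step 6) would take one sample per run, persist it, and compare it with the previous run's sample.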
## Output

- **Transaction monitoring queries** tailored to the specific database engine in use
- **Monitoring scripts** (shell or Python) for scheduled transaction health checks
- **Alert configuration** with threshold definitions and notification channel setup
- **Dashboard queries** showing transaction throughput, duration distribution, and lock metrics
- **Weekly health report template** with transaction performance trends and anomaly highlights

## Error Handling

| Error | Cause | Solution |
|-------|-------|----------|
| `pg_stat_activity` returns no rows for other sessions | Missing `pg_monitor` role or `track_activities` disabled | Grant `pg_monitor` role; set `track_activities = on` in postgresql.conf |
| Lock monitoring query times out | Massive lock table during contention storm | Query `pg_locks` with a `statement_timeout`; reduce monitoring frequency during incidents |
| False positive alerts for long-running transactions | Batch jobs or maintenance operations trigger duration alerts | Create an exclusion list for known batch job PIDs or application users; use separate thresholds for batch vs OLTP |
| Transaction throughput drops to zero | Connection pool exhaustion or database crash | Check `max_connections` usage; verify the database process is running; check for full disk or OOM conditions |
| Monitoring queries add overhead | High-frequency polling of system catalogs | Reduce the polling interval to every 60 seconds; use `pg_stat_statements` for aggregated stats instead of per-query monitoring |

## Examples

**Detecting a connection leak in a web application**: Transaction count steadily increases over hours while the commit rate remains flat. Monitoring reveals hundreds of `idle in transaction` sessions from the application server. Root cause: missing `connection.close()` in error handling paths. Resolution: terminate stale sessions and fix application connection management.
**Identifying lock contention during peak hours**: The dashboard shows lock wait count spiking from 0 to 50+ between 2-4 PM daily. Lock analysis reveals a scheduled reporting query overlapping with high-volume order processing. Resolution: reschedule reporting queries to off-peak hours and add `NOWAIT` hints to critical transaction paths.

**Tracking a transaction rollback ratio spike**: Rollback ratio jumps from 1% to 15% after a deployment. Transaction monitor logs show serialization failures on a frequently updated inventory table. Resolution: reduce the transaction isolation level from SERIALIZABLE to READ COMMITTED for non-critical paths and add retry logic for serialization failures.

## Resources

- PostgreSQL monitoring views: https://www.postgresql.org/docs/current/monitoring-stats.html
- MySQL performance schema: https://dev.mysql.com/doc/refman/8.0/en/performance-schema.html
- MongoDB currentOp: https://www.mongodb.com/docs/manual/reference/method/db.currentOp/
- pg_stat_statements extension: https://www.postgresql.org/docs/current/pgstatstatements.html
- Lock monitoring best practices: https://wiki.postgresql.org/wiki/Lock_Monitoring
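The alerting thresholds the skill configures in step 7 can be illustrated as a small check function. This is a sketch under assumptions: the metric keys and the `evaluate_alerts` helper are hypothetical names, and in practice the dictionary of metrics would be filled from the monitoring queries above.

```python
# Thresholds mirroring step 7 of the skill; the metric key names are
# illustrative and should match whatever your collector actually emits.
THRESHOLDS = {
    "longest_txn_seconds": 60,    # transactions exceeding 60 seconds
    "idle_in_txn_seconds": 300,   # idle-in-transaction beyond 5 minutes
    "lock_waiters": 10,           # lock wait queue depth
    "rollback_ratio": 0.05,       # rollback ratio above 5%
}


def evaluate_alerts(metrics: dict[str, float]) -> list[str]:
    """Return an alert message for every metric that exceeds its threshold."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"{name}={value} exceeds threshold {limit}")
    return alerts


# A sample of metrics captured by one monitoring run
sample = {"longest_txn_seconds": 95, "lock_waiters": 3, "rollback_ratio": 0.12}
for msg in evaluate_alerts(sample):
    print(msg)
```

Each returned message would then be routed to the notification channel (email, Slack webhook, or PagerDuty) named in the prerequisites.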
Related Skills
validating-database-integrity
Use when you need to ensure database integrity through comprehensive data validation. This skill validates data types, ranges, formats, referential integrity, and business rules. Trigger with phrases like "validate database data", "implement data validation rules", "enforce data integrity constraints", or "validate data formats".
setting-up-synthetic-monitoring
This skill automates the setup of synthetic monitoring for applications. It allows Claude to proactively track performance and availability by configuring uptime, transaction, and API monitoring. Use this skill when the user requests to "set up synthetic monitoring", "configure uptime monitoring", "track application performance", or needs help with "proactive performance tracking". The skill helps to identify critical endpoints and user journeys, design monitoring scenarios, and configure alerts and dashboards.
scanning-database-security
Use when you need to work with security and compliance. This skill provides security scanning and vulnerability detection with comprehensive guidance and automation. Trigger with phrases like "scan for vulnerabilities", "implement security controls", or "audit security".
implementing-real-user-monitoring
This skill assists in implementing Real User Monitoring (RUM) to capture and analyze actual user performance data. It helps set up tracking for key metrics like Core Web Vitals, page load times, and custom performance events. Use this skill when the user asks to "setup RUM", "implement real user monitoring", "track user experience", or needs assistance with "performance monitoring". It guides the user through choosing a RUM platform, designing an instrumentation strategy, and implementing the necessary tracking code.
pipeline-monitoring-setup
Pipeline Monitoring Setup - Auto-activating skill for data pipelines. Triggers on: "pipeline monitoring setup". Part of the Data Pipelines skill category.
optimizing-database-connection-pooling
Use when you need to work with connection management. This skill provides connection pooling and management with comprehensive guidance and automation. Trigger with phrases like "manage connections", "configure pooling", or "optimize connection usage".
monitoring-whale-activity
Track large cryptocurrency transactions and whale wallet movements in real-time. Use when tracking large holder movements, exchange flows, or wallet activity. Trigger with phrases like "track whales", "monitor large transfers", "check whale activity", "exchange inflows", or "watch wallet".
deploying-monitoring-stacks
This skill deploys monitoring stacks, including Prometheus, Grafana, and Datadog. It is used when the user needs to set up or configure monitoring infrastructure for applications or systems. The skill generates production-ready configurations, implements best practices, and supports multi-platform deployments. Use this when the user explicitly requests to deploy a monitoring stack, or mentions Prometheus, Grafana, or Datadog in the context of infrastructure setup.
monitoring-error-rates
Monitor and analyze application error rates to improve reliability. Use when tracking errors in applications including HTTP errors, exceptions, and database issues. Trigger with phrases like "monitor error rates", "track application errors", or "analyze error patterns".
monitoring-database-health
Use when you need to work with monitoring and observability. This skill provides health monitoring and alerting with comprehensive guidance and automation. Trigger with phrases like "monitor system health", "set up alerts", or "track metrics".
monitoring-cross-chain-bridges
Monitor cross-chain bridge TVL, volume, fees, and transaction status across networks. Use when researching bridges, comparing routes, or tracking bridge transactions. Trigger with phrases like "monitor bridges", "compare bridge fees", "track bridge tx", "bridge TVL", or "cross-chain transfer status".
monitoring-cpu-usage
This skill enables an AI assistant to monitor and analyze CPU usage patterns within applications. It helps identify CPU hotspots, analyze algorithmic complexity, and detect blocking operations. Use this skill when the user asks to "monitor cpu usage", "opt... Use when setting up monitoring or observability. Trigger with phrases like "monitor", "metrics", or "alerts".