databricks-webhooks-events

Configure Databricks job notifications, webhooks, and event handling. Use when setting up Slack/Teams notifications, configuring alerts, or integrating Databricks events with external systems. Trigger with phrases like "databricks webhook", "databricks notifications", "databricks alerts", "job failure notification", "databricks slack".

Best use case

databricks-webhooks-events is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using databricks-webhooks-events should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/databricks-webhooks-events/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/jeremylongshore/claude-code-plugins-plus-skills/databricks-webhooks-events/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/databricks-webhooks-events/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How databricks-webhooks-events Compares

| Feature / Agent | databricks-webhooks-events | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

Configure Databricks job notifications, webhooks, and event handling. Use when setting up Slack/Teams notifications, configuring alerts, or integrating Databricks events with external systems. Trigger with phrases like "databricks webhook", "databricks notifications", "databricks alerts", "job failure notification", "databricks slack".

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Databricks Webhooks & Events

## Overview
Configure notifications and event-driven workflows for Databricks jobs. Covers notification destinations (Slack, Teams, PagerDuty, email, generic webhooks), job lifecycle events, SQL alerts with automated triggers, and system table queries for event auditing.

## Prerequisites
- Databricks workspace admin access (for notification destinations)
- Webhook endpoint URL (Slack incoming webhook, Teams connector, etc.)
- Job permissions for notification configuration
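
To confirm the SDK can reach the workspace before wiring anything up, run a minimal sanity check (a sketch; assumes credentials via environment variables or a default `~/.databrickscfg` profile):

```python
# Sanity check: verify SDK authentication before creating destinations.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
me = w.current_user.me()
print(f"Authenticated as {me.user_name}")
```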

## Instructions

### Step 1: Create Notification Destinations
```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.settings import (
    Config, SlackConfig, EmailConfig, GenericWebhookConfig,
)

w = WorkspaceClient()

# Slack destination (type-specific configs are wrapped in the Config envelope)
slack = w.notification_destinations.create(
    display_name="Engineering Slack",
    config=Config(slack=SlackConfig(url="https://hooks.slack.com/services/T00/B00/xxxx")),
)

# Email destination
email = w.notification_destinations.create(
    display_name="Oncall Email",
    config=Config(email=EmailConfig(addresses=["oncall@company.com", "data-team@company.com"])),
)

# Generic webhook (PagerDuty, custom endpoint)
pagerduty = w.notification_destinations.create(
    display_name="PagerDuty",
    config=Config(generic_webhook=GenericWebhookConfig(
        url="https://events.pagerduty.com/integration/YOUR_KEY/enqueue",
    )),
)

print(f"Slack: {slack.id}, Email: {email.id}, PD: {pagerduty.id}")
```
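
Microsoft Teams follows the same pattern; a sketch assuming the SDK's `MicrosoftTeamsConfig` and a Teams incoming-webhook URL:

```python
# Microsoft Teams destination (same create() call, different Config slot)
from databricks.sdk.service.settings import MicrosoftTeamsConfig

teams = w.notification_destinations.create(
    display_name="Engineering Teams",
    config=Config(microsoft_teams=MicrosoftTeamsConfig(
        url="https://outlook.office.com/webhook/...",  # Teams incoming webhook
    )),
)
```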

### Step 2: Attach Notifications to Jobs
```python
from databricks.sdk.service.jobs import (
    JobSettings, JobEmailNotifications, WebhookNotifications, Webhook,
)

# Update existing job with notifications (new_settings takes a JobSettings object)
w.jobs.update(
    job_id=123,
    new_settings=JobSettings(
        email_notifications=JobEmailNotifications(
            on_start=["team@company.com"],
            on_success=["team@company.com"],
            on_failure=["oncall@company.com", "team@company.com"],
            no_alert_for_skipped_runs=True,
        ),
        webhook_notifications=WebhookNotifications(
            on_start=[Webhook(id=slack.id)],
            on_success=[Webhook(id=slack.id)],
            on_failure=[Webhook(id=slack.id), Webhook(id=pagerduty.id)],
        ),
    ),
)
```

Or declaratively in Asset Bundles:
```yaml
# resources/jobs.yml
resources:
  jobs:
    daily_etl:
      email_notifications:
        on_failure: ["oncall@company.com"]
      webhook_notifications:
        on_failure:
          - id: "<notification-destination-id>"
```
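
Rather than hard-coding the placeholder ID, it can be resolved by display name; a minimal sketch reusing the client and the "Engineering Slack" destination from Step 1:

```python
# Resolve a destination ID by display name to fill the bundle placeholder
dest_id = next(
    d.id for d in w.notification_destinations.list()
    if d.display_name == "Engineering Slack"
)
print(dest_id)
```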

### Step 3: Build Custom Webhook Handler
Receive Databricks job events at your own endpoint.

```python
# webhook_handler.py — FastAPI endpoint
from fastapi import FastAPI, Request
import httpx

app = FastAPI()

@app.post("/databricks/webhook")
async def handle_event(request: Request):
    payload = await request.json()

    event_type = payload.get("event_type")        # "jobs.on_failure", "jobs.on_success"
    run_id = payload.get("run", {}).get("run_id")
    job_name = payload.get("job", {}).get("name")
    result = payload.get("run", {}).get("result_state")  # SUCCESS, FAILED, TIMED_OUT
    error_msg = payload.get("run", {}).get("state_message", "")

    if result == "FAILED":
        # Route to PagerDuty Events API v2 (use a context manager so the
        # connection pool is closed cleanly)
        async with httpx.AsyncClient() as client:
            await client.post(
                "https://events.pagerduty.com/v2/enqueue",
                json={
                    "routing_key": "YOUR_INTEGRATION_KEY",
                    "event_action": "trigger",
                    "payload": {
                        "summary": f"Databricks job failed: {job_name}",
                        "severity": "critical",
                        "source": f"databricks-run-{run_id}",
                        "custom_details": {"error": error_msg, "run_id": run_id},
                    },
                },
            )

    return {"status": "ok"}
```
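
To exercise the handler locally (e.g. `uvicorn webhook_handler:app --port 8000`), post a sample payload. The shape below covers only the fields the handler reads; it is illustrative, not the full Databricks event schema:

```python
# test_webhook.py -- post a sample failure event to the local handler
import httpx

sample = {
    "event_type": "jobs.on_failure",
    "job": {"name": "daily_etl"},
    "run": {"run_id": 42, "result_state": "FAILED", "state_message": "Task X failed"},
}
resp = httpx.post("http://localhost:8000/databricks/webhook", json=sample)
print(resp.status_code, resp.json())
```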

### Step 4: Monitor Events via System Tables
Query `system.access.audit` for event monitoring without webhooks.

```sql
-- Recent job events (last 6 hours)
SELECT event_time, user_identity.email AS actor,
       action_name, request_params.job_id, request_params.run_id,
       response.status_code, response.error_message
FROM system.access.audit
WHERE service_name = 'jobs'
  AND action_name IN ('runNow', 'submitRun', 'cancelRun', 'repairRun')
  AND event_date >= current_date() - 1  -- partition filter; covers runs just before midnight
  AND event_time > current_timestamp() - INTERVAL 6 HOURS
ORDER BY event_time DESC;

-- Permission changes (security audit)
SELECT event_time, user_identity.email, action_name, request_params
FROM system.access.audit
WHERE action_name IN ('changeJobPermissions', 'changeClusterPermissions',
                       'updatePermissions', 'grantPermission')
  AND event_date >= current_date() - 7
ORDER BY event_time DESC;
```
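
The same queries can be run programmatically through the SQL Statement Execution API; a sketch, where the warehouse ID is a placeholder you must supply:

```python
# Run an audit query via the Statement Execution API.
# "<WAREHOUSE_ID>" is a placeholder -- substitute a real SQL warehouse ID.
resp = w.statement_execution.execute_statement(
    warehouse_id="<WAREHOUSE_ID>",
    statement="""
        SELECT event_time, action_name, request_params.job_id
        FROM system.access.audit
        WHERE service_name = 'jobs' AND event_date >= current_date()
        ORDER BY event_time DESC LIMIT 50
    """,
)
for row in (resp.result.data_array if resp.result else None) or []:
    print(row)
```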

### Step 5: SQL Alerts with Automated Triggers
Create alerts that fire when query conditions are met.

```sql
-- Alert query: detect excessive failures
-- Create in SQL Editor > Alerts > New Alert
-- Trigger: failure_count > 3
-- Schedule: every 15 minutes
-- Destination: Slack notification destination

SELECT COUNT(*) AS failure_count,
       COLLECT_SET(job_name) AS failed_jobs
FROM (
    SELECT j.name AS job_name
    FROM system.lakeflow.job_run_timeline r
    JOIN system.lakeflow.jobs j ON r.job_id = j.job_id
    WHERE r.result_state = 'FAILED'
      AND r.period_start_time > current_timestamp() - INTERVAL 1 HOUR
) recent_failures;
```

```python
# Create alert programmatically (legacy SQL alerts API;
# recent SDK versions expose it as w.alerts_legacy)
from databricks.sdk.service.sql import AlertOptions

alert = w.alerts_legacy.create(
    name="High Job Failure Rate",
    query_id="<saved-query-id>",
    options=AlertOptions(column="failure_count", op=">", value="3"),
    rearm=900,  # Re-alert after 15 min if still triggered
)
```

### Step 6: Slack Message Formatter
```python
def format_slack_message(payload: dict) -> dict:
    """Format Databricks job event as a rich Slack Block Kit message."""
    run = payload.get("run", {})
    job = payload.get("job", {})
    status = run.get("result_state", "UNKNOWN")
    emoji = {"SUCCESS": ":white_check_mark:", "FAILED": ":x:", "TIMED_OUT": ":hourglass:"}.get(status, ":question:")
    duration_sec = run.get("execution_duration", 0) // 1000

    return {
        "blocks": [
            {"type": "header", "text": {"type": "plain_text", "text": f"{emoji} {job.get('name', 'Unknown')}"}},
            {"type": "section", "fields": [
                {"type": "mrkdwn", "text": f"*Status:* {status}"},
                {"type": "mrkdwn", "text": f"*Run ID:* {run.get('run_id')}"},
                {"type": "mrkdwn", "text": f"*Duration:* {duration_sec}s"},
                {"type": "mrkdwn", "text": f"*Error:* {run.get('state_message', 'none')[:200]}"},
            ]},
        ]
    }
```
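
Usage sketch: the returned blocks post directly to a Slack incoming webhook (URL is a placeholder):

```python
import httpx

# Post the formatted message to a Slack incoming webhook
message = format_slack_message(payload)  # payload: the Databricks event body
httpx.post("https://hooks.slack.com/services/T00/B00/xxxx", json=message)
```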

## Output
- Notification destinations registered (Slack, email, PagerDuty)
- Job lifecycle notifications (on_start, on_success, on_failure)
- Custom webhook handler for advanced routing
- System table queries for event auditing
- SQL alerts with automated triggers and destinations

## Error Handling
| Error | Cause | Solution |
|-------|-------|----------|
| `RESOURCE_DOES_NOT_EXIST` for destination | Destination deleted or wrong workspace | `w.notification_destinations.list()` to verify |
| Webhook not triggered | URL unreachable from Databricks network | Check firewall; Databricks needs outbound access to webhook URL |
| Duplicate notifications | Same destination on job AND task level | Configure at job level only |
| Alert never fires | Query returns 0 rows or wrong column | Test query in SQL Editor first |
| System tables empty | Unity Catalog not enabled | Enable system tables in Account Console |
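
For the first row, a quick cross-check that every webhook ID a job references still exists (a sketch; the job ID is illustrative):

```python
# Cross-check a job's on_failure webhooks against existing destinations
known_ids = {d.id for d in w.notification_destinations.list()}
job = w.jobs.get(job_id=123)  # illustrative job ID
hooks = job.settings.webhook_notifications
for hook in (hooks.on_failure or []) if hooks else []:
    print(hook.id, "ok" if hook.id in known_ids else "MISSING")
```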

## Examples

### List All Notification Destinations
```bash
databricks notification-destinations list --output json | \
  jq '.[] | {name: .display_name, type: .destination_type, id: .id}'
```
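
The SDK equivalent, if Python is preferred over the CLI:

```python
# List destinations via the SDK instead of the CLI
for d in w.notification_destinations.list():
    print(d.display_name, d.destination_type, d.id)
```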

## Resources
- [Job Notifications](https://docs.databricks.com/aws/en/jobs/monitor)
- [Notification Destinations](https://docs.databricks.com/aws/en/admin/notification-destinations)
- [System Tables](https://docs.databricks.com/aws/en/admin/system-tables/)
- [SQL Alerts](https://docs.databricks.com/aws/en/sql/user/alerts/)

## Next Steps
For performance tuning, see `databricks-performance-tuning`.

Related Skills

server-sent-events-setup

Server Sent Events Setup - Auto-activating skill for API Integration. Triggers on: "server sent events setup". Part of the API Integration skill category.

exa-webhooks-events

Build event-driven integrations with Exa using scheduled monitors and content alerts. Use when building content monitoring, competitive intelligence pipelines, or scheduled search automation with Exa. Trigger with phrases like "exa monitor", "exa content alerts", "exa scheduled search", "exa event-driven", "exa notifications".

evernote-webhooks-events

Implement Evernote webhook notifications and sync events. Use when handling note changes, implementing real-time sync, or processing Evernote notifications. Trigger with phrases like "evernote webhook", "evernote events", "evernote sync", "evernote notifications".

emitting-api-events

Build event-driven APIs with webhooks, Server-Sent Events, and real-time notifications. Use when building event-driven API architectures. Trigger with phrases like "add webhooks", "implement events", or "create event-driven API".

elevenlabs-webhooks-events

Implement ElevenLabs webhook HMAC signature verification and event handling. Use when setting up webhook endpoints for transcription completion, call recording, or agent conversation events from ElevenLabs. Trigger: "elevenlabs webhook", "elevenlabs events", "elevenlabs webhook signature", "handle elevenlabs notifications", "elevenlabs post-call webhook", "elevenlabs transcription webhook".

documenso-webhooks-events

Implement Documenso webhook configuration and event handling. Use when setting up webhook endpoints, handling document events, or implementing real-time notifications for document signing. Trigger with phrases like "documenso webhook", "documenso events", "document completed webhook", "signing notification".

deepgram-webhooks-events

Implement Deepgram callback and webhook handling for async transcription. Use when implementing callback URLs, processing async transcription results, or handling Deepgram event notifications. Trigger: "deepgram callback", "deepgram webhook", "async transcription", "deepgram events", "deepgram notifications", "deepgram async".

databricks-upgrade-migration

Upgrade Databricks runtime versions and migrate between features. Use when upgrading DBR versions, migrating to Unity Catalog, or updating deprecated APIs and features. Trigger with phrases like "databricks upgrade", "DBR upgrade", "databricks migration", "unity catalog migration", "hive to unity".

databricks-security-basics

Apply Databricks security best practices for secrets and access control. Use when securing API tokens, implementing least privilege access, or auditing Databricks security configuration. Trigger with phrases like "databricks security", "databricks secrets", "secure databricks", "databricks token security", "databricks scopes".

databricks-sdk-patterns

Apply production-ready Databricks SDK patterns for Python and REST API. Use when implementing Databricks integrations, refactoring SDK usage, or establishing team coding standards for Databricks. Trigger with phrases like "databricks SDK patterns", "databricks best practices", "databricks code patterns", "idiomatic databricks".

databricks-reference-architecture

Implement Databricks reference architecture with best-practice project layout. Use when designing new Databricks projects, reviewing architecture, or establishing standards for Databricks applications. Trigger with phrases like "databricks architecture", "databricks best practices", "databricks project structure", "how to organize databricks", "databricks layout".

databricks-rate-limits

Implement Databricks API rate limiting, backoff, and idempotency patterns. Use when handling rate limit errors, implementing retry logic, or optimizing API request throughput for Databricks. Trigger with phrases like "databricks rate limit", "databricks throttling", "databricks 429", "databricks retry", "databricks backoff".