Overview

Prometheus is a leading open-source monitoring and alerting toolkit. EasyContact integrates with Prometheus via Alertmanager webhooks, receiving alert notifications when your alerting rules fire.
This integration receives webhooks from Alertmanager, not directly from Prometheus. Make sure you have Alertmanager configured.

Setup Instructions

Step 1: Create Integration in EasyContact

  1. Go to Configuration → Integrations
  2. Click Add Integration
  3. Select Prometheus as the type
  4. Enter a name (e.g., “Production Alertmanager”)
  5. Save and copy the webhook URL

Step 2: Configure Alertmanager

Add a webhook receiver to your alertmanager.yml:
receivers:
  - name: 'easycontact'
    webhook_configs:
      - url: 'YOUR_WEBHOOK_URL'
        send_resolved: true

Step 3: Create Route

Route alerts to the EasyContact receiver:
route:
  receiver: 'easycontact'
  # Or use as a sub-route for specific alerts
  routes:
    - match:
        severity: critical
      receiver: 'easycontact'

Step 4: Reload Alertmanager

Apply the configuration:
curl -X POST http://alertmanager:9093/-/reload
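
If you run Alertmanager as a system service, sending it a SIGHUP reloads the configuration as well (the process lookup below is a sketch; adjust it to your deployment):
# Reload alertmanager.yml without restarting the process
kill -HUP $(pidof alertmanager)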

Step 5: Test the Integration

Trigger a test alert and verify it appears in EasyContact.
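
One quick way to trigger a test alert is with amtool against your Alertmanager (a sketch; the alert name, labels, and URL are placeholders, and the alert must match a route that points at the easycontact receiver):
amtool alert add alertname=EasyContactTest severity=critical \
  --annotation=summary="Test alert for the EasyContact integration" \
  --alertmanager.url=http://alertmanager:9093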

Alertmanager Configuration

Basic Configuration

global:
  resolve_timeout: 5m

route:
  group_by: ['alertname', 'job']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
  receiver: 'easycontact'

receivers:
  - name: 'easycontact'
    webhook_configs:
      - url: 'https://api.easycontact.ai/api/v1/webhooks/ingest/YOUR_TOKEN'
        send_resolved: true
        http_config:
          follow_redirects: true

With Multiple Receivers

route:
  receiver: 'default'
  routes:
    - match:
        severity: critical
      receiver: 'easycontact-critical'
    - match:
        severity: warning
      receiver: 'easycontact-warning'

receivers:
  - name: 'easycontact-critical'
    webhook_configs:
      - url: 'YOUR_CRITICAL_WEBHOOK_URL'
        send_resolved: true

  - name: 'easycontact-warning'
    webhook_configs:
      - url: 'YOUR_WARNING_WEBHOOK_URL'
        send_resolved: true
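
Alertmanager stops at the first matching sibling route unless continue: true is set. If critical alerts should reach another receiver in addition to EasyContact, let the route fall through (a sketch; the second receiver name is illustrative):
route:
  receiver: 'default'
  routes:
    - match:
        severity: critical
      receiver: 'easycontact-critical'
      continue: true
    - match:
        severity: critical
      receiver: 'oncall-pager'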

Field Mapping

EasyContact automatically maps Alertmanager fields:
  • labels.alertname → Title
  • annotations.summary → Title (fallback)
  • annotations.description → Description
  • status → Status (firing → problem, resolved → ok)
  • labels.severity → Severity
  • labels.instance → Host
  • labels.job → Service
  • fingerprint → Event ID
  • All labels → Tags

Severity Mapping

Standard Prometheus/Alertmanager severity labels:
  • critical → Critical
  • error → High
  • warning → Warning
  • info → Info

Custom Mapping

If your alerts use different severity labels:
{
  "severityMapping": {
    "sourceField": "severity",
    "mappings": {
      "page": "critical",
      "ticket": "warning",
      "notify": "info"
    },
    "default": "warning"
  }
}
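
With the mapping above, an alert rule that labels its severity as page arrives in EasyContact as critical. The rule below is only illustrative:
- alert: APIDown
  expr: up{job="api"} == 0
  for: 1m
  labels:
    severity: page   # translated to "critical" by the custom mapping
  annotations:
    summary: "API target {{ $labels.instance }} is down"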

Status Handling

  • firing → Problem (creates or updates an incident)
  • resolved → OK (resolves the incident)
Set send_resolved: true in your webhook config to automatically resolve incidents when alerts clear.

Example Payload

Alertmanager sends payloads in this format:
{
  "version": "4",
  "groupKey": "{}:{alertname=\"HighCPU\"}",
  "status": "firing",
  "receiver": "easycontact",
  "alerts": [
    {
      "status": "firing",
      "labels": {
        "alertname": "HighCPU",
        "severity": "critical",
        "instance": "web-01:9090",
        "job": "node-exporter",
        "environment": "production"
      },
      "annotations": {
        "summary": "High CPU usage detected",
        "description": "CPU usage is above 90% for more than 5 minutes"
      },
      "startsAt": "2024-01-15T10:30:00.000Z",
      "endsAt": "0001-01-01T00:00:00Z",
      "fingerprint": "abc123def456"
    }
  ],
  "commonLabels": {
    "alertname": "HighCPU"
  }
}
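
Run through the field mapping above, this payload would surface in EasyContact roughly as follows (illustrative; the exact presentation depends on your configuration):
Title:       HighCPU
Description: CPU usage is above 90% for more than 5 minutes
Status:      Problem (status: firing)
Severity:    Critical (labels.severity)
Host:        web-01:9090 (labels.instance)
Service:     node-exporter (labels.job)
Event ID:    abc123def456 (fingerprint)
Tags:        alertname, severity, instance, job, environment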

Labels as Tags

All Prometheus labels (except internal __ prefixed ones) are captured as tags:
labels:
  alertname: HighCPU
  severity: critical
  environment: production
  team: platform
  service: api
Results in tags:
  • alertname: HighCPU
  • severity: critical
  • environment: production
  • team: platform
  • service: api
These tags can be used in escalation routing rules.

Host Extraction

EasyContact looks for host information in these labels (in order):
  1. instance
  2. host
  3. hostname
  4. node
  5. pod
  6. container
The first non-empty value is used as the host.

Service Extraction

Service/application is extracted from:
  1. job
  2. service
  3. app
  4. application
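
For example, a Kubernetes-style alert that carries neither an instance nor a job label still yields a host and a service through the fallbacks (the labels below are hypothetical):
labels:
  alertname: PodCrashLooping
  severity: warning
  namespace: payments
  pod: checkout-6d4f8b7c9-x2k1q
  app: checkout
# Host: checkout-6d4f8b7c9-x2k1q  (from pod; instance, host, hostname, and node are absent)
# Service: checkout               (from app; job and service are absent)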

Enrichment Examples

Add context to all Prometheus alerts:
{
  "enrichment": {
    "tags.monitoring_tool": "prometheus",
    "tags.cluster": "production-k8s",
    "tags.region": "eu-west-1"
  }
}

Alert Rule Examples

CPU Alert

- alert: HighCPU
  expr: 100 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 90
  for: 5m
  labels:
    severity: critical
  annotations:
    summary: "High CPU usage on {{ $labels.instance }}"
    description: "CPU usage is {{ $value }}%"

Memory Alert

- alert: HighMemory
  expr: (node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes) / node_memory_MemTotal_bytes * 100 > 90
  for: 5m
  labels:
    severity: warning
  annotations:
    summary: "High memory usage on {{ $labels.instance }}"
    description: "Memory usage is {{ $value }}%"

Disk Alert

- alert: DiskSpaceLow
  expr: node_filesystem_avail_bytes / node_filesystem_size_bytes * 100 < 10
  for: 5m
  labels:
    severity: critical
  annotations:
    summary: "Low disk space on {{ $labels.instance }}"
    description: "Only {{ $value }}% disk space remaining"
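
These rules belong in a Prometheus rules file loaded via rule_files in prometheus.yml; a minimal sketch, with the file and group names as placeholders:
# alerts.yml
groups:
  - name: node-alerts
    rules:
      - alert: HighCPU
        expr: 100 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 90
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "High CPU usage on {{ $labels.instance }}"

# prometheus.yml
rule_files:
  - 'alerts.yml'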

Troubleshooting

Alerts not reaching EasyContact

  1. Check the Alertmanager logs for webhook errors
  2. Verify the webhook URL is reachable from Alertmanager
  3. Test connectivity: curl -X POST YOUR_WEBHOOK_URL -d '{}'
  4. Check firewall rules between Alertmanager and the internet

Incidents not resolving

  1. Verify send_resolved: true is set in the webhook config
  2. Check that the fingerprint matches between the firing and resolved alerts
  3. Review Alertmanager routing to ensure resolved alerts reach the webhook

Missing labels or fields

  1. Verify alert rules include the required labels
  2. Check that labels aren’t being dropped by Alertmanager routes
  3. Review group_by settings

Duplicate incidents

  1. Check the group_by configuration
  2. Verify the fingerprint is consistent
  3. Review the group_interval and repeat_interval settings

Best Practices

Standardize on severity: critical|warning|info across all alert rules for consistent mapping.
Add meaningful summary and description annotations to help responders understand the alert.
Always set send_resolved: true to automatically resolve incidents.