Umbrella IP Logs

Overview

Cisco Umbrella offers flexible, cloud-delivered security. It combines multiple security functions into one solution, extending protection to devices, remote users, and distributed locations anywhere. Cisco Umbrella is a leading provider of network security and recursive DNS services.

  • Vendor: Cisco
  • Supported environment: SaaS
  • Detection based on: Telemetry
  • Supported application or feature: Host network interface, Netflow/Enclave netflow, Network device logs, Network protocol analysis

Configure

This section will guide you through configuring the forwarding of Cisco Umbrella logs to Sekoia.io by means of AWS S3 buckets.

Prerequisites

  • Administrator access to the Cisco Umbrella console
  • Access to Sekoia.io Intakes and Playbook pages with write permissions
  • Access to AWS S3 and AWS SQS

Create an AWS S3 Bucket

To create a new AWS S3 Bucket, please refer to this guide.

  1. In the AWS S3 console, go to Buckets and select your bucket.
  2. Select the Permissions tab and go to the Bucket Policy section.
  3. Click Edit and paste the JSON bucket policy provided by Cisco Umbrella (an illustrative sketch follows the note below).
  4. In the policy, replace the bucketname placeholder with the name of your bucket.
  5. Click Save changes.

Important

Remember to keep the /* suffix when defining the resource in the policy.
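
For illustration only, the policy could also be applied programmatically with boto3, as in the sketch below. The principal, bucket name, and statement are placeholders: always paste the exact JSON shown in the Cisco Umbrella console, which references Cisco's own AWS principal.

    import json

    import boto3

    # Hypothetical placeholder: replace with the name of your bucket.
    BUCKET_NAME = "bucketname"

    # Illustrative structure only; use the exact policy JSON provided
    # by the Cisco Umbrella console.
    policy = {
        "Version": "2008-10-17",
        "Statement": [
            {
                "Sid": "AllowUmbrellaUploads",
                "Effect": "Allow",
                # Placeholder principal: the real policy names Cisco's account.
                "Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:root"},
                "Action": ["s3:PutObject", "s3:GetBucketLocation"],
                # Keep the /* suffix so the policy covers the objects,
                # not only the bucket itself.
                "Resource": [
                    f"arn:aws:s3:::{BUCKET_NAME}",
                    f"arn:aws:s3:::{BUCKET_NAME}/*",
                ],
            }
        ],
    }

    s3 = boto3.client("s3")
    s3.put_bucket_policy(Bucket=BUCKET_NAME, Policy=json.dumps(policy))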

Configure Cisco Umbrella

  1. Log in to the Cisco Umbrella console.
  2. Go to Admin > Log Management.
  3. In the Amazon S3 section, select Use your company-managed Amazon S3 bucket.
  4. In Amazon S3 bucket, type the name of your bucket and click Verify.
  5. In your AWS console, open your bucket.
  6. In the Objects tab, click README_FROM_UMBRELLA.txt, then click Open.
  7. Copy the token from the readme.
  8. On the Cisco Umbrella console, paste the token in the Token Number field and click Save.

Note

After clicking Verify, the message Great! We successfully verified your Amazon S3 bucket should be displayed.

Note

After clicking Save, the message We’re sending data to your S3 storage should be displayed.

Important

Depending on the type of logs, the objects will be prefixed with dnslogs/ for DNS logs, proxylogs/ for proxy logs, iplogs/ for IP logs, and so on.

Create a SQS queue

The collection relies on S3 Event Notifications delivered to an SQS queue to detect new S3 objects.

  1. Create a queue in the SQS service by following this guide
  2. In the Access Policy step, choose the advanced configuration and adapt this configuration sample with your own SQS Amazon Resource Name (ARN) (the main change is the Service directive allowing S3 bucket access):
    {
      "Version": "2008-10-17",
      "Id": "__default_policy_ID",
      "Statement": [
        {
          "Sid": "__owner_statement",
          "Effect": "Allow",
          "Principal": {
            "Service": "s3.amazonaws.com"
          },
          "Action": "SQS:SendMessage",
          "Resource": "arn:aws:sqs:XXX:XXX"
        }
      ]
    }

Important

Keep in mind that you have to create the SQS queue in the same region as the S3 bucket you want to watch.
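
As a minimal sketch, assuming boto3 and a hypothetical queue name, steps 1 and 2 could also be performed in a single call; replace the region and the placeholder ARN with your own values:

    import json

    import boto3

    # Placeholder region: the queue must live in the same region as
    # the S3 bucket you want to watch.
    sqs = boto3.client("sqs", region_name="eu-west-1")

    access_policy = {
        "Version": "2008-10-17",
        "Id": "__default_policy_ID",
        "Statement": [
            {
                "Sid": "__owner_statement",
                "Effect": "Allow",
                # Allow the S3 service to publish event notifications
                # to the queue.
                "Principal": {"Service": "s3.amazonaws.com"},
                "Action": "SQS:SendMessage",
                # Placeholder ARN: replace with your queue's
                # region, account, and name.
                "Resource": "arn:aws:sqs:XXX:XXX",
            }
        ],
    }

    response = sqs.create_queue(
        QueueName="umbrella-ip-logs",  # hypothetical queue name
        Attributes={"Policy": json.dumps(access_policy)},
    )
    print(response["QueueUrl"])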

Create a S3 Event notification

Use the following guide to create an S3 Event Notification. While creating it:

  1. In the General configuration, type iplogs/ as the Prefix
  2. Select the notification for object creation in the Event type section
  3. As the destination, choose the SQS service
  4. Select the queue you created in the previous section (an equivalent configuration sketch is shown below)
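
For reference, here is a minimal boto3 sketch of the same notification; the bucket name and queue ARN are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Placeholders: your bucket and the ARN of the queue created above.
    s3.put_bucket_notification_configuration(
        Bucket="bucketname",
        NotificationConfiguration={
            "QueueConfigurations": [
                {
                    "QueueArn": "arn:aws:sqs:XXX:XXX:umbrella-ip-logs",
                    # Notify on any object creation...
                    "Events": ["s3:ObjectCreated:*"],
                    # ...but only for objects under the IP logs prefix.
                    "Filter": {
                        "Key": {
                            "FilterRules": [
                                {"Name": "prefix", "Value": "iplogs/"}
                            ]
                        }
                    },
                }
            ]
        },
    )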

Configure Your Intake

This section will guide you through creating the intake object in Sekoia, which provides a unique identifier called the "Intake key." The Intake key is essential for later configuration, as it references the Community, Entity, and Parser (Intake Format) used when receiving raw events on Sekoia.

  1. Go to the Sekoia Intake page.
  2. Click on the + New Intake button at the top right of the page.
  3. Search for your Intake by the product name in the search bar.
  4. Give it a Name and associate it with an Entity (and a Community if using multi-tenant mode).
  5. Click on Create.
  6. You will be redirected to the Intake listing page, where you will find a new line with the name you gave to the Intake.

Note

For more details on how to use the Intake page and to find the Intake key you just created, refer to this documentation.

Configure Your Playbook

This section will assist you in pulling remote logs from Sekoia and sending them to the intake you previously created.

  1. Go to the Sekoia playbook page.
  2. Click on the + New playbook button at the top right of the page.
  3. Select Create a playbook from scratch, and click Next.
  4. Give it a Name and a Description, and click Next.
  5. Choose a trigger from the list by searching for the name of the product, and click Create.
  6. A new Playbook page will be displayed. Click on the module in the center of the page, then click on the Configure icon.
  7. On the right panel, click on the Configuration tab.
  8. Select an existing Trigger Configuration (from the account menu) or create a new one by clicking on + Create new configuration.
  9. Configure the Trigger based on the Actions Library (for instance, see here for AWS modules), then click Save.
  10. Click on Save at the top right of the playbook page.
  11. Activate the playbook by clicking on the "On / Off" toggle button at the top right corner of the page.

Info

Please find here the official documentation related to AWS Access Key.

Raw Events Samples

In this section, you will find examples of raw logs as generated natively by the source. These examples are provided to help integrators understand the data format before ingestion into Sekoia.io. This understanding is crucial for setting up the correct parsing stages and ensuring that all relevant information is captured.

 "2020-06-12 14:31:52","FR123","1.1.1.1","54128","2.2.2.2","443","","Roaming Computers"

Detection section

The following section provides information for those who wish to learn more about the detection capabilities enabled by collecting this intake. It includes details about the built-in rule catalog, event categories, and ECS fields extracted from raw events. This is essential for users aiming to create custom detection rules, perform hunting activities, or pivot in the events page.

The following Sekoia.io built-in rules match the intake Cisco Umbrella IP. This documentation is updated automatically and is based solely on the fields used by the intake, which are checked against our rules. This means that some rules will be listed but might not be relevant to this intake.

SEKOIA.IO x Cisco Umbrella IP on ATT&CK Navigator

Cryptomining

Detection of domain names potentially related to cryptomining activities.

  • Effort: master

Dynamic DNS Contacted

Detects communication with dynamic DNS domains. This kind of domain is often used by attackers. This rule can trigger false positives in non-controlled environments because dynamic DNS is not always malicious.

  • Effort: master

Exfiltration Domain

Detects traffic toward a domain flagged as a possible exfiltration vector.

  • Effort: master

Remote Access Tool Domain

Detects traffic toward a domain flagged as a Remote Administration Tool (RAT).

  • Effort: master

SEKOIA.IO Intelligence Feed

Detects threats based on indicators of compromise (IOCs) collected by SEKOIA's Threat and Detection Research team.

  • Effort: elementary

Sekoia.io EICAR Detection

Detects observables in Sekoia.io CTI tagged as EICAR, which are fake samples meant to test detection.

  • Effort: master

TOR Usage Generic Rule

Detects TOR usage globally, whether the IP is a destination or source. TOR is short for The Onion Router, and it gets its name from how it works. TOR intercepts the network traffic from one or more apps on the user’s computer, usually the web browser, and shuffles it through a number of randomly chosen computers before passing it on to its destination. This disguises the user’s location and makes it harder for servers to pick them out on repeat visits, or to tie together separate visits to different sites, thus making tracking and surveillance more difficult. Before a network packet starts its journey, the user’s computer chooses a random list of relays and repeatedly encrypts the data in multiple layers, like an onion. Each relay knows only enough to strip off the outermost layer of encryption before passing what’s left on to the next relay in the list.

  • Effort: master

Event Categories

The following table lists the data sources offered by this integration.

Data Source                 Description
Host network interface      Every packet is logged
Netflow/Enclave netflow     Umbrella IP logs are Netflow-like
Network device logs         Packets logged by Umbrella IP
Network protocol analysis   Traffic analysis at OSI layers 2/3/4

Transformed Events Samples after Ingestion

This section demonstrates how the raw logs will be transformed by our parsers. It shows the extracted fields that will be available for use in the built-in detection rules and hunting activities in the events page. Understanding these transformations is essential for analysts to create effective detection mechanisms with custom detection rules and to leverage the full potential of the collected data.

{
    "message": " \"2020-06-12 14:31:52\",\"FR123\",\"1.1.1.1\",\"54128\",\"2.2.2.2\",\"443\",\"\",\"Roaming Computers\"",
    "event": {
        "outcome": "success"
    },
    "@timestamp": "2020-06-12T14:31:52Z",
    "action": {
        "name": "block",
        "outcome": "success",
        "target": "network-traffic"
    },
    "destination": {
        "address": "2.2.2.2",
        "ip": "2.2.2.2",
        "port": 443
    },
    "host": {
        "hostname": "FR123",
        "name": "FR123"
    },
    "related": {
        "hosts": [
            "FR123"
        ],
        "ip": [
            "1.1.1.1",
            "2.2.2.2"
        ]
    },
    "source": {
        "address": "1.1.1.1",
        "ip": "1.1.1.1",
        "port": 54128
    }
}

Extracted Fields

The following table lists the fields that are extracted, normalized under the ECS format, analyzed and indexed by the parser. It should be noted that inferred fields are not listed.

Name               Type      Description
@timestamp         date      Date/time when the event originated.
action.target      keyword   Target of the action.
destination.ip     ip        IP address of the destination.
destination.port   long      Port of the destination.
host.hostname      keyword   Hostname of the host.
source.ip          ip        IP address of the source.
source.port        long      Port of the source.

For more information on the Intake Format, please find the code of the Parser, Smart Descriptions, and Supported Events here.

Further Readings