Events, and Lists, and Rules, oh my!

DanDye
Staff

As a “Noogler” (new Google employee) on the Cloud Security team, I have configured a lab environment with a fresh instance of Google Security Operations for learning and experimenting. This blank slate is an opportunity to clearly observe the effects of creating my first Reference List, Unified Data Model (UDM) Events, Entities, and Detections via the SecOps REST API. I’m still learning about Google SecOps, and this hands-on exercise helped me understand how the pieces fit together. I hope you enjoy following along on my learning journey!

The examples in this blog use the new v1alpha version of the Google SecOps REST API, so I need to provide this notice:

Note: This feature is covered by Pre-GA Offerings Terms of the Chronicle Service Specific Terms. Pre-GA features might have limited support, and changes to pre-GA features might not be compatible with other pre-GA versions. For more information, see the SecOps Technical Support Service guidelines and the SecOps Service Specific Terms.

Prerequisites

For authentication with the new REST APIs, you may need to migrate from the original feature Role Based Access Control (RBAC) implementation (not to be confused with the new data RBAC) to IAM access control.

Follow these steps to migrate existing access control permissions if needed.

After you create a Service Account on the GCP Project associated with your Google SecOps instance, you generate JSON credentials for it and assign the required IAM Roles to the service account. I’ll note which IAM Role is required for each of the three APIs as I provide the sample Python code, but at the time of writing, the only built-in role that can perform all three functions is roles/chronicle.admin.

 

admin_@cloudshell:~ (dandye-0324-chronicle)$ gcloud iam roles list | grep chronicle
name: roles/chronicle.admin
name: roles/chronicle.editor
name: roles/chronicle.limitedViewer
name: roles/chronicle.restrictedDataAccess
name: roles/chronicle.restrictedDataAccessViewer
name: roles/chronicle.serviceAgent
name: roles/chronicle.soarAdmin
name: roles/chronicle.soarServiceAgent
name: roles/chronicle.soarThreatManager
name: roles/chronicle.soarVulnerabilityManager
name: roles/chronicle.viewer
name: roles/chroniclesm.admin
name: roles/chroniclesm.viewer

 

To see what roles are in effect for your Service Account, run the following in Cloud Shell:

 

PROJECT_ID=dandye-0324-chronicle  # change to your project id
SA_USERNAME=chronicle-api-admin  # change to your sa username
SA_EMAIL="serviceAccount:${SA_USERNAME}@${PROJECT_ID}.iam.gserviceaccount.com"

gcloud projects get-iam-policy $PROJECT_ID --format=json | jq --arg SA_EMAIL "$SA_EMAIL" '.bindings[] | select(.members[] | contains($SA_EMAIL))'
{
  "members": [
    ...
    "serviceAccount:chronicle-api-admin@dandye-0324-chronicle.iam.gserviceaccount.com",
    ...
  ],
  "role": "roles/chronicle.admin"
}

 

If we want to follow the Principle of Least Privilege, we’ll need to create a custom role. To do that, first create a YAML file with the required permissions:

 

cat << EOF > EditEventsRefListsRulesInstances.yaml
title: "EditEventsRefListsRulesInstances"
description: "Role to Edit Events, Ref Lists, and Rules."
stage: "ALPHA"
includedPermissions:
- chronicle.entities.import
- chronicle.events.import
- chronicle.instances.get
- chronicle.referenceLists.create
- chronicle.referenceLists.get
- chronicle.referenceLists.list
- chronicle.referenceLists.update
- chronicle.rules.create
EOF

 

Then use that file to create the new role:

 

YAML_FILE_NAME=EditEventsRefListsRulesInstances.yaml
ROLE_NAME=EditEventsRefListsRulesInstances

gcloud iam roles create $ROLE_NAME \
--project=$PROJECT_ID \
--file=$YAML_FILE_NAME

 

The Cloud Shell output confirms the creation of the Custom Role:

 

Created role [EditEventsRefListsRulesInstances].
description: Role to Edit Events, Ref Lists, and Rules.
etag: BwYb81W27s0=
includedPermissions:
- chronicle.entities.import
- chronicle.events.import
- chronicle.instances.get
- chronicle.referenceLists.create
- chronicle.referenceLists.get
- chronicle.referenceLists.list
- chronicle.referenceLists.update
- chronicle.rules.create
name: projects/dandye-0324-chronicle/roles/EditEventsRefListsRulesInstances
stage: ALPHA
title: EditEventsRefListsRulesInstances

 

You can also verify the custom role was created with:

 

admin_@cloudshell:~ (dandye-0324-chronicle)$ gcloud iam roles list --project $PROJECT_ID 
...
---
description: Role to Edit Events, Ref Lists, and Rules.
etag: BwYb81W27s0=
name: projects/dandye-0324-chronicle/roles/EditEventsRefListsRulesInstances
title: EditEventsRefListsRulesInstances
---
...

 

Next, create a new Service Account and assign the custom role to it:

 

NAME=edit-events-reflists-rules
gcloud iam service-accounts create $NAME

Created service account [edit-events-reflists-rules].
SERVICE_ACCOUNT="${NAME}@${PROJECT_ID}.iam.gserviceaccount.com"

gcloud projects add-iam-policy-binding $PROJECT_ID \
 --member="serviceAccount:${SERVICE_ACCOUNT}" \
 --role="projects/$PROJECT_ID/roles/$ROLE_NAME"

 

Then verify the new service account has the custom role:

 

gcloud projects get-iam-policy $PROJECT_ID --format=json | jq --arg SA_EMAIL "$SERVICE_ACCOUNT" '.bindings[] | select(.members[] | contains($SA_EMAIL))'
{
  "members": [
    "serviceAccount:edit-events-reflists-rules@dandye-0324-chronicle.iam.gserviceaccount.com"
  ],
  "role": "projects/dandye-0324-chronicle/roles/EditEventsRefListsRulesInstances"
}

 

Of course, you can also view the custom role in the GCP Cloud Console.


Lastly, you generate JSON credentials for that service account and save them to disk. When you make API calls, you’ll supply the path to that JSON file as an option (--credentials_file).


Now that we’ve got a Service Account with the required privileges, let’s set up the Python environment.

Setup

(Optional) Create a Conda environment (or Python virtual environment) for this work.

 

conda create --name chronicle-cli python=3.11
conda activate chronicle-cli

 

The API Samples project on GitHub has Python client code for the SecOps APIs we’ll be using. It also has example_input files for the Ingestion API. The samples, like lists/v1alpha/create_list.py, are executed from the command line as modules using the -m flag.  For example, python3 -m lists.v1alpha.create_list. There isn’t a Python package to pip install—you simply run the samples from their parent directory after installing the prerequisites defined in requirements.txt (as shown in the following snippet).

 

git clone \
  https://github.com/chronicle/api-samples-python.git  \
  chronicle-api-samples-python
cd chronicle-api-samples-python/
pip install -r requirements.txt

 

If you’d like a more detailed introduction to these API Samples, I have a Security Spotlight video on the subject and also an introductory blog post: Update Reference Lists with Python and the new Chronicle REST API.

Creating our first Reference List in Google SecOps

Google SecOps supports three types of Reference Lists: Classless Inter-Domain Routing (CIDR) ranges, Regular Expressions (REGEX), and plain-text strings (STRING). Reference Lists can be used both in YARA-L detection rules and in UDM Search, and they are referred to by the list name with a % prepended (e.g., %my_ref_list). Let’s use the Reference Lists API to create a new STRING Reference List of IP addresses in SecOps.

First, we’ll create a plain-text file with one IP Address per line. We can get a list of known-malicious IP Addresses from Abuse IPDB.

Download the Abuse IPDB Blocklist into a local JSON file. If you don’t have an API Key for this service, you can manually create a line-delimited text file with a few IPs from the “Recently Reported” section on the front page of AbuseIPDB.

 

ABUSE_IPDB_API_KEY=<api-key>  # replace with your API Key
curl -G https://api.abuseipdb.com/api/v2/blacklist \
  -d confidenceMinimum=90 \
  -H "Key: ${ABUSE_IPDB_API_KEY}" \
  -H "Accept: application/json" \
 > abuse_ipdb_blocklist.json

 

Use jq to extract the IP addresses from the JSON file and emit into a line-delimited text file.

 

jq -r  '.data[].ipAddress' < abuse_ipdb_blocklist.json > abuse_ipdb_blocklist.txt
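
If you’d rather stay in Python than use jq, the extract step can be sketched as follows. The payload below is a stand-in for the parsed abuse_ipdb_blocklist.json (the real response nests records under data[].ipAddress, as the jq filter above shows), and the addresses are documentation-range examples, not actual AbuseIPDB entries:

```python
import ipaddress

# Stand-in for the parsed abuse_ipdb_blocklist.json payload; the real
# response nests records under "data" with an "ipAddress" key.
payload = {
    "data": [
        {"ipAddress": "203.0.113.7", "abuseConfidenceScore": 100},
        {"ipAddress": "198.51.100.23", "abuseConfidenceScore": 95},
    ]
}

lines = []
for row in payload["data"]:
    ipaddress.ip_address(row["ipAddress"])  # raises ValueError on junk input
    lines.append(row["ipAddress"])

# One IP per line, ready to write out as abuse_ipdb_blocklist.txt.
blocklist_text = "\n".join(lines)
print(blocklist_text)
```

Validating with the ipaddress module up front catches any malformed rows before they ever reach the Reference List.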

 

Use the chronicle-api-samples-python module lists.v1alpha.create_list to create a new STRING Reference List with the contents of that text file.

 

PROJECT_ID=<your-project-id>
PROJECT_INSTANCE=<your-instance-id>  # your Google SecOps instance ID
python -m lists.v1alpha.create_list \
 --project_instance=$PROJECT_INSTANCE \
 --project_id=$PROJECT_ID \
 --credentials_file=$HOME/.ssh/edit-events-reflists-rules.json \
 --name="AbuseIPDB_Blocklist" \
 --description='confidenceMinimum=90'  \
 --syntax_type=REFERENCE_LIST_SYNTAX_TYPE_PLAIN_TEXT_STRING \
 --list_file=./abuse_ipdb_blocklist.txt

 

NOTE: The following IAM permission is required on the parent resource:
chronicle.referenceLists.create

For more information, see the IAM documentation.

If all went well, you can verify in the Google SecOps Web UI that the new Reference List contains all of the same details.

Reviewing the new Reference List in Chronicle’s Reference List Manager.

Detection Rule

Next we’ll add a detection rule. Here is a simple YARA-L Rule that triggers on Events where an IP appears in that %AbuseIPDB_Blocklist STRING Reference List.

 

rule ip_in_abuseipdb_blocklist {

  meta:
    author = "Dan Dye"
    description = "IP matches AbuseIPDB blocklist"
    severity = "Medium"

  events:
    $event.principal.ip in %AbuseIPDB_Blocklist

  condition:
    $event
}

 

You may get that Rule from my GitHub fork and create it in Google SecOps using the following commands:

 

curl -O https://raw.githubusercontent.com/dandye/detection-rules/geoip_user_login_from_multiple_states_or_countries_no_product/community/threat_intel/ip_in_abuseipdb_blocklist.yaral

python3 -m detect.v1alpha.create_rule \
  --project_instance=$PROJECT_INSTANCE \
  --project_id=$PROJECT_ID \
  --rule_file=./ip_in_abuseipdb_blocklist.yaral \
  --credentials_file=$HOME/ingestion-api.json

 

Alternatively, you can create the rule in the Rules Editor within Chronicle. We’ll see that Rule in action a little later.

NOTE: The following IAM permission is required on the parent resource: 
chronicle.rules.create

For more information, see the IAM documentation.

Ingesting a User Login UDM Event in Google SecOps

We’ve now used APIs to create a Reference List and a Detection Rule. Our next task is to add an “event” that includes one of the known-malicious IP addresses in order to trigger that rule. Events are typically created when security telemetry (i.e., logs) is sent to SecOps and then parsed into a common model, the Unified Data Model (UDM), as “UDM Events.” For the purposes of this experiment, we are going to bypass those parsers and directly add a USER_LOGIN UDM Event using the Ingestion API.

If you aren’t familiar with the Unified Data Model (UDM), John Stoner has written an excellent introduction in his blog post, New to Chronicle: Unified data model.

The api-samples-python GitHub repo has an example input file with USER_LOGIN UDM Event keys and sample values.

We’ll make two small changes to that example USER_LOGIN UDM Event: the eventTimestamp will be updated to the current time (in UTC) and the principal.ip will be changed to one of the known-malicious IP addresses from the Reference List we previously created (%AbuseIPDB_Blocklist).

The following three commands copy that example input file and update the values for the eventTimestamp and principal.ip fields.

 

cp ./ingestion/example_input/sample_udm_events{,2}.json

jq --arg now "$(date -u +"%Y-%m-%dT%H:%M:%S.000000Z")" \
  '.[].metadata.eventTimestamp = $now' \
  ingestion/example_input/sample_udm_events2.json > tmp.json \
  && mv tmp.json ingestion/example_input/sample_udm_events2.json

jq --arg ip "$(head -n 1 abuse_ipdb_blocklist.txt)" \
  '.[].principal.ip = [$ip]' \
  ingestion/example_input/sample_udm_events2.json > tmp.json \
  && mv tmp.json ingestion/example_input/sample_udm_events2.json

 

Those jq commands may seem cryptic, but they allow scripting and automation, which could come in handy later (hint, hint).
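
For readers who’d rather script these edits in Python, here is a minimal sketch of the same two changes. The events list stands in for the contents of sample_udm_events2.json, and the IP shown is the first entry from my blocklist run:

```python
import json
from datetime import datetime, timezone

# Stand-in for the JSON array loaded from sample_udm_events2.json.
events = [
    {
        "metadata": {
            "eventTimestamp": "2024-01-01T00:00:00.000000Z",
            "eventType": "USER_LOGIN",
        },
        "principal": {"hostname": "userhost", "ip": ["10.0.0.1"]},
    }
]

# Same edits as the jq commands above: current UTC time and a blocklist IP.
now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000000Z")
blocklist_ip = "80.94.95.209"  # first line of abuse_ipdb_blocklist.txt in my run

for event in events:
    event["metadata"]["eventTimestamp"] = now
    event["principal"]["ip"] = [blocklist_ip]

print(json.dumps(events, indent=2))
```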

The updated contents of my modified file are shown below; 80.94.95.209 is the updated IP address:

 

[
  {
    "metadata": {
      "eventTimestamp": "2024-06-27T20:51:21.000000Z",
      "eventType": "USER_LOGIN"
    },
    "additional": {
      "id": "9876-54321"
    },
    "principal": {
      "hostname": "userhost",
      "ip": ["80.94.95.209"]
    },
    "target": {
      "hostname": "systemhost",
      "user": {
        "userid": "employee"
      },
      "ip": ["10.0.0.1"]
    },
    "securityResult": [
      {
        "action": ["ALLOW"]
      }
    ],
    "extensions": {
      "auth": {
        "type": "MACHINE",
        "mechanism": ["USERNAME_PASSWORD"]
      }
    }
  }
]

 

Send the modified USER_LOGIN UDM Event to the Ingestion API with the ingestion.v1alpha.create_udm_events Python sample:

 

python3 -m ingestion.v1alpha.create_udm_events \
 --project_instance=$PROJECT_INSTANCE \
 --project_id=$PROJECT_ID  \
 --json_events_file=./ingestion/example_input/sample_udm_events2.json \
 --credentials_file=$HOME/ingestion-api.json

 

NOTE: The following IAM permission is required on the parent resource: 
chronicle.events.import

For more information, see the IAM documentation.

To verify that the USER_LOGIN UDM Event was ingested, execute a Raw Log Search for that IP address with a Start and End Time (UTC) range that includes the eventTimestamp we used. Note that this is NOT the ingestion time but rather the timestamp for when the event occurred.
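
Time zone mistakes are the usual reason a search comes back empty, so a small sketch like this (a hypothetical helper, not part of the API samples) prints a UTC window around the event time to paste into the Start and End fields:

```python
from datetime import datetime, timedelta, timezone

# Build a +/- 15 minute UTC window around the eventTimestamp we injected.
event_time = datetime.now(timezone.utc)  # we stamped the event with "now"
start = event_time - timedelta(minutes=15)
end = event_time + timedelta(minutes=15)

fmt = "%Y-%m-%dT%H:%M:%SZ"
print("Start:", start.strftime(fmt))
print("End:  ", end.strftime(fmt))
```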

Performing a Raw Log Search for an IP Address.

Here is our USER_LOGIN UDM Event shown in the Raw Log Search results.

Reviewing Raw Log Search Results in Chronicle.

By clicking the expand icon shown with the red arrow in the screenshot below, we load a table with the USER_LOGIN UDM Event’s fields and their values.

Expanded results for the Raw Log Search.

The unenriched fields at the top hold the metadata from the JSON, like additional.fields["id"] and principal.hostname. Since we sent the Ingestion API a UDM Event directly (bypassing parsing), the resulting UDM Event in SecOps has all of the data from the uploaded JSON file.

In the red box in the screenshot above, I’ve highlighted many fields that were NOT in the uploaded JSON file, such as principal.location.country_or_region. These demonstrate a powerful feature: the IP address has already been contextually enriched by Google SecOps:

To enable a security investigation, Google Security Operations ingests contextual data from different sources, performs analysis on the data, and provides additional context about artifacts in a customer environment. Analysts can use contextually enriched data in Detection Engine rules, investigative searches, or reports.

How Google Security Operations enriches event and entity data | Google Cloud

Performing contextual enrichment on demand during an investigation is a common feature in Security Information and Event Management (SIEM) platforms but doing it at the time of ingestion is exceedingly uncommon because it is so computationally expensive. The fact that Google SecOps provides this contextual enrichment automatically as events are ingested is a killer feature because it enables Search and Detection Rules that filter on that added context. My colleague David French has an excellent example of this in: Detecting Suspicious Domains in Google SecOps Using Entity Enrichment Data.

We have now used the API Samples to create a Reference List, a Detection Rule, and ingested a USER_LOGIN UDM Event from a known-malicious IP Address. Let’s now explore a Detection created from that Rule.

Testing the Detection Rule

In the Google Security Operations UI, use “Run Test” to test the rule over a date range that includes the eventTimestamp we previously sent, and verify that one Detection is returned:

Test Rule Results in the Detection Editor

Click the expand event icon to see the Raw Logs and resulting UDM key:value pairs for the USER_LOGIN UDM Event.

Detection details table with Raw Log and UDM Event information

I’ve previously covered the UDM Event details, so I won’t dwell on those. Instead, let’s recap: we used the api-samples-python modules to interact with the SecOps Ingestion, Reference List, and Detection APIs. We created a Reference List, a Detection Rule, and a USER_LOGIN UDM Event. Even though we haven’t configured any log forwarding to this Google SecOps instance, we were able to see a good deal of functionality, like Raw Log Search results and Detection Results. We saw that the USER_LOGIN UDM Event was contextually enriched at ingest, which I did dwell on, because it is awesome. 🙂

We covered:

  • Creating a new reference list and populating it with content using the Reference List API
  • Detection/Create Rule API
  • Ingestion API
    • USER_LOGIN UDM Event
  • Raw Log Search Results
  • Detection Results

There is a big subject missing from that list that I plan to cover in the next installment: a completely different data model, this time for representing “Entities” such as Users, Assets, Domains, and Files.

So, stay tuned!

_______________________________________________________________________________________________________________________

Postscript: 100 User Logins From Around the World

Remember my hints about using jq to script updates to the USER_LOGIN UDM Event JSON file? Let’s do that! The script below creates 100 distinct login events from different IP addresses within a short period of time. These IP addresses will almost certainly have different geolocation data, making this a classic User and Entity Behavior Analytics (UEBA) red flag (i.e., “geo-infeasible logins”).

 

#!/bin/bash

# Ensure PROJECT_INSTANCE is set
if [ -z "$PROJECT_INSTANCE" ]; then
  echo "PROJECT_INSTANCE environment variable is not set."
  exit 1
fi
# Ensure PROJECT_ID is set
if [ -z "$PROJECT_ID" ]; then
  echo "PROJECT_ID environment variable is not set."
  exit 1
fi

for N in {1..100}
do
  jq --arg now "$(date -u +"%Y-%m-%dT%H:%M:%S.000000Z")" \
    '.[].metadata.eventTimestamp = $now' \
    ingestion/example_input/udm_login_event.json > tmp.json \
    && mv tmp.json ingestion/example_input/udm_login_event.json

  # print to verify the change
  jq '.[].metadata.eventTimestamp' ./ingestion/example_input/udm_login_event.json

  jq --arg ip "$(head -n $N abuse_ipdb_blocklist.txt | tail -n 1)" \
    '.[].principal.ip = [$ip]' \
    ingestion/example_input/udm_login_event.json > tmp.json \
    && mv tmp.json ingestion/example_input/udm_login_event.json

  # print to verify the change
  jq '.[].principal.ip' ./ingestion/example_input/udm_login_event.json

  python -m ingestion.v1alpha.create_udm_events \
    --project_instance=$PROJECT_INSTANCE \
    --project_id=$PROJECT_ID \
    --credentials_file=$HOME/.ssh/ingestion-api.json \
    --json_events_file=./ingestion/example_input/udm_login_event.json
done

 

This geoip_user_login_from_multiple_states_or_countries rule from our collection of community rules on GitHub will detect that anomalous activity:

 

/*
 * Copyright 2024  Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

rule geoip_user_login_from_multiple_states_or_countries {

  meta:
    author = "Google Cloud Security"
    description = "Detect multiple user logins from multiple states or countries using Google SecOps GeoIP enrichment."
    type = "alert"
    data_source = "microsoft ad, azure ad, okta, aws cloudtrail, google scc"
    tags = "geoip enrichment"
    severity = "Low"
    priority = "Low"

  events:
    $login.metadata.event_type = "USER_LOGIN"
    $login.security_result.action = "ALLOW"
    $login.principal.ip_geo_artifact.location.country_or_region != ""

    $login.principal.ip_geo_artifact.location.country_or_region = $country
    $login.principal.ip_geo_artifact.location.state = $state
    $login.target.user.userid = $user

  match:
    $user over 1h

  outcome:
    $risk_score = max(35)
    $event_count = count_distinct($login.metadata.id)
    $state_login_threshold = max(2)
    $dc_state = count_distinct($login.principal.ip_geo_artifact.location.state)
    $array_state = array_distinct($login.principal.ip_geo_artifact.location.state)
    $dc_country_or_region = count_distinct($login.principal.ip_geo_artifact.location.country_or_region)
    $array_country_or_region = array_distinct($login.principal.ip_geo_artifact.location.country_or_region)
    $array_asn = array_distinct($login.principal.ip_geo_artifact.network.asn)
    $array_carrier_name = array_distinct($login.principal.ip_geo_artifact.network.carrier_name)
    //added to populate alert graph with additional context
    $principal_hostname = array_distinct($login.principal.hostname)
    $principal_ip = array_distinct($login.principal.ip)
    $target_hostname = array_distinct($login.target.hostname)
    $target_ip = array_distinct($login.target.ip)
    $principal_user_userid = array_distinct($login.principal.user.userid)
    $target_user_userid = array_distinct($login.target.user.userid)
    $principal_resource_name = array_distinct($login.principal.resource.name)
    $target_resource_name = array_distinct($login.target.resource.name)
    $target_url = array_distinct($login.target.url)

  condition:
    #country >= 1 and #state >= 2
}

 

And finally, the screenshot below shows an example detection from the geoip_user_login_from_multiple_states_or_countries rule:

Detection results for the geoip_user_login_from_multiple_states_or_countries YARA-L Rule.

This little exercise again showcases the power of on-ingest contextual enrichment: the location data wasn’t included in the UDM Events, yet it was added at the time of ingestion and was immediately available to the detection rule, which examines Events over a one-hour window. If the YARA-L Rule piques your curiosity, I highly recommend reading John Stoner’s “New to Chronicle” blog series, which does a fantastic job introducing features of the YARA-L detection language.
