{"__v":0,"_id":"582e287f2752920f00b5d66c","category":{"__v":0,"_id":"573b7ea9ef164e2900a2b8ff","project":"5615790c0f5ed00d00483dd1","version":"5615790d0f5ed00d00483dd4","sync":{"url":"","isSync":false},"reference":false,"createdAt":"2016-05-17T20:27:21.560Z","from_sync":false,"order":9999,"slug":"threat-grid-investigate-integration","title":"Splunk integration with the Investigate API"},"parentDoc":null,"project":"5615790c0f5ed00d00483dd1","user":"560b40145148ba0d009bd0b5","version":{"__v":6,"_id":"5615790d0f5ed00d00483dd4","project":"5615790c0f5ed00d00483dd1","createdAt":"2015-10-07T19:57:01.307Z","releaseDate":"2015-10-07T19:57:01.307Z","categories":["5615790d0f5ed00d00483dd5","56157b2af432910d0000f9fe","56157cfb0f5ed00d00483ddb","562684d95db46b1700fd4f48","573b7ea9ef164e2900a2b8ff","582e285d8373c20f00810608"],"is_deprecated":false,"is_hidden":false,"is_beta":false,"is_stable":true,"codename":"","version_clean":"1.0.0","version":"1.0"},"updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-11-17T22:00:31.796Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":0,"body":"The Cisco Umbrella Investigate add-on for Splunk leverages the Investigate API to enrich events within Splunk.  The add-on can be obtained from Splunkbase at no additional cost. A license for Investigate API is required in order to use it and as such, this is plugin is only available with the API in their Umbrella subscription package. \n\nThe Splunk add-on for Investigate combines your data sources from other security tools such as SIEM, IDS/IPS and firewall with the power of Investigate's API, allowing you to feed data against Investigate quickly and easily within Splunk.\n\nFor more information, please see our datasheet: [https://learn-umbrella.cisco.com/datasheets/splunk-add-on-for-investigate](https://learn-umbrella.cisco.com/datasheets/splunk-add-on-for-investigate)\n\nThe Splunk add-on allows for three types of data to be queried:  domain names, IP addresses and file hashes.  There are three key steps: installing the add-on with the appropriate settings, then creating a scheduled search to pull data and then reviewing the data with Investigate.\n\n### System Requirements\n\n* Cisco Investigate API key\n* A running Splunk instance\n\nThe instructions here have been tested most recently with Splunk 6.4.2  and 6.5.2 in a Linux environment and Splunk 6.4.2 in a Windows environment. \n\n*IMPORTANT:* The scheduled search should only be for domains that are alerting on your security events in a SIEM or other system. You cannot use the scheduled search for your entire traffic logs due to API throttling restrictions.  Please expect a maximum throughput of 5,000 unique domains per hour. Your saved search should not contain more than this number of domains. 
\n\nThe add-on can be found on Splunkbase: [https://splunkbase.splunk.com/app/3324/](https://splunkbase.splunk.com/app/3324/)\n\n\n### Third Party Dependencies\n\nThese are the external dependencies used to aid in this add-on's functionality.These dependencies are packaged with the add-on, so there's no need to perform any installation, but it is noted here so that you can make informed decisions about licensing, etc.\n* [dateutil](https://dateutil.readthedocs.io/en/stable/)\n* [splunklib](https://github.com/splunk/splunk-sdk-python/tree/master/splunklib)\n* [pyinvestigate](https://github.com/opendns/pyinvestigate)\n* [IPy](https://github.com/autocracy/python-ipy)\n\n\n### Installation\n\nInstall into Splunk with your method of choice:\n\n* Splunk Web: go into the Manage Apps page and click the “Install app from file” option, then follow the instructions.\n\n* Splunk CLI: download the opendns_investigate.tgz file to your Splunk node of choice and install with the following command:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"$SPLUNK_HOME/splunk/bin/splunk install app cisco-umbrella-investigate-add-on_040.tgz -auth <username>:<password>\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\nBoth methods will require a restart of the Splunk node.\n\nAfter starting the node, navigate back to the Manage Apps page, find the listing for the Umbrella Investigate add-on and click the “Set up” option. This will load the standard setup page for the add-on.   \n\nTypically the Managed Apps page is found under the gear icon from the main launch page, next to Apps:\n\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/2fa2ff0-apps-splunk.png\",\n        \"apps-splunk.png\",\n        250,\n        93,\n        \"#68a13e\"\n      ]\n    }\n  ]\n}\n[/block]\nIf you wish to change the name of the add-on, click View Properties.  \n\n### Where to put your API key and and proxy username/password\n\nThe Investigate API key is needed to authenticate the API requests.  This can be obtained from the Investigate UI dashboard (note: not all Investigate customers have access to the API).   Once you have the key, you will need to enter your Cisco Umbrella Investigate API key in data inputs. This is to ensure your API key is stored in an encrypted format. Go to **Settings > Data Inputs > Cisco Investigate Credentials.** Click 'new'. Enter any name you like, and then enter the API key gathered from the Investigate dashboard.  Click next, and your API key will be encrypted and saved. 
If you are ever issued a new API key, you can update it here.\n\nAny proxy authentication is also set here.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/024fefc-api-key-input.png\",\n        \"api-key-input.png\",\n        2194,\n        1020,\n        \"#2b342e\"\n      ]\n    }\n  ]\n}\n[/block]\nNOTE: some earlier versions of the add-on had the API key in the setup screens shown below and this may still be the case if you have not yet updated your add-on to the newest version.\n\nOnce you've done that, pick the Set up and this screen is shown (the screenshot is in two parts for ease of reading):\n\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/e0d2743-Screen_Shot_2017-03-28_at_11.05.40_AM.png\",\n        \"Screen Shot 2017-03-28 at 11.05.40 AM.png\",\n        1892,\n        966,\n        \"#dde1df\"\n      ]\n    }\n  ]\n}\n[/block]\n### Configuration\n\nYou will be prompted to enter information into the setup page when you first start the add-on. These include:\n\n* Request destination fields: The add-on expects defined fields.  These are the same fields you’ll define in the search regex (e.g. domain, host, destination, etc), typically with the domain, IP or hash information.  These should be entered in comma separated format. \n* Scheduled search name: The name of the saved search you want us to pull domain information from.  More information on creating the saved search is below; if the search has not been created yet, this can be skipped. \n\nThe next two fields define the pruning of data to ensure the data store does not exceed a certain size.\n\n* Set how far back in time you want to save data: to limit the size of the data store, here you define how long you would like to save data.  The format should be saved as a Splunk time modifier for search.  For example, if you wish to save data for a week, you would enter -7d:::at:::d.  Leaving this blank will disable timestamp pruning\n\n* Set how much data you want to save: to prune data not to exceed a specific number of rows, set the max number of rows here.  Anything excess will be deleted in time-ascending order (i.e.: oldest first). Leaving this blank will disable size pruning. \n\nThe final three settings are tied to support for proxy servers and nonstandard hosts and ports:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/a95984a-Screen_Shot_2017-03-06_at_10.57.00_AM.png\",\n        \"Screen Shot 2017-03-06 at 10.57.00 AM.png\",\n        1896,\n        830,\n        \"#476346\"\n      ]\n    }\n  ]\n}\n[/block]\n* Proxies: Set the IP and port of your proxy server. requests to the investigate API. Make sure to use the following format: ip:port. Only IP and port are required, not protocol.  If this is blank, the add-on will make direct connections to the Investigate API.  Proxy authentication is handled  Note: We currently only support http/https proxying (and not SOCKS proxies).  \n* Host name: use this to set the hostname of the Splunk management server, if different from this host.\n\n* Port : set the Splunk Management port. \n\n\n### Create a Scheduled Search\n\nNext, we need to create a scheduled search. You will need to create one with the ability to get any kind of destination (domain, IP or hash) from log files. 
\n\nFor instance, the field itself may be called “dest”, and in the examples here, we're using example firewall logs, which use the field name “dest_host_blocked”.\n\nThe scheduled search should only be for domains that are alerting on your security events in a SIEM or other system.  You cannot use the scheduled search for your entire traffic logs  due to API throttling restrictions.\n\nA new scheduled search can be created from within Splunk Web by going to **Settings > Searches, reports, and alerts** and once in that section, clicking the New button.  Pick the app context first:\n\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/8cb0932-context-for-app.png\",\n        \"context-for-app.png\",\n        402,\n        128,\n        \"#e6e6e6\"\n      ]\n    }\n  ]\n}\n[/block]\nThis scheduled search should query for certain time ranges. For instance, it may poll every hour for the data from two hours before it is run. So it may run every 5 minutes after the hour (e.g. 11:05AM) and look for data within a one hour segment, beginning two hours before (9AM-10AM if the current time is 11AM).\n\nThe schedule should be made carefully to ensure one search is not still running while another one kicks off as this could lead to significant performance issues within Splunk.\n[block:callout]\n{\n  \"type\": \"info\",\n  \"title\": \"NOTE:\",\n  \"body\": \"Make sure permissions are set correctly so the add-on and user have permissions to view the search report.  Make sure that the scheduled search name matches the exact name, as this is case-sensitive.\"\n}\n[/block]\nAn example saved search query would be:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"index=\\\"firewall_logs\\\" earliest=-2h latest=-1h | fields dest_host_blocked\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\nA more complex example, specific to a single host:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"index=\\\"firewall_events\\\" earliest=-2h latest=-1h cs_host=adobe.com | fields cs_host, cs_hash\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\nThis way, the dest_host_blocked field for requests in firewall logs index will be filtered in a simple to parse way for the add-on to process.  If you want to do a more logical conditional, like >, then you can use 'where'.\n\nSaved search queries and how they're constructed ultimately depend on your data sources and Umbrella support may not be able to recommend which fields are appropriate for your indexes.  However you pull data into the Splunk add-on as an index will depend entirely on your datasources.\n\nThe scheduled search should only be for domains that are alerting on your security events in a SIEM or other system. You cannot use the scheduled search for your entire traffic logs due to API throttling restrictions and you may find that volume of data exceeds the natural limits of your Splunk instance. Beyond that however the volume of data from non-security-related internet traffic will simply be of no use to your team. \nPlease expect a maximum throughput of 5,000 unique domains per hour. Your saved search should not contain more than this number of domains.\n\n#### Enable Scripted Input\n\nNext, be sure to enable the scripted input for the add-on. You will need to:\n\n1. Go to the Data Inputs settings under \"Settings\".\n2. Under \"Local inputs\", click \"Scripts\".\n3. 
Click to enable the add-on's scripted input: $SPLUNK_HOME/etc/apps/opendns_investigate/bin/investigate_input.py\n4.Configure the schedule it will run on by clicking its link and modifying the interval value\n\n\nOnce you've created your scheduled search, go back to the 'Set up' section and add the scheduled search name in the appropriate field.\n\n### Distributed System Installation\n\nWhen installing on a distributed cluster, the add-on (scripted input) must be installed on the search head (or one of the search heads). That node will run the add-on process.\n\n### App Usage\n\nThe basics of the Splunk App are three key collectors, each matching a particular set of API results:  one for domains, one for IP addresses and one for file hashes.\n\nTo view contents of the store containing your Investigate data, create a Splunk search with the following command for domains:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"| inputlookup investigate_domains\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\nFor IP addresses use:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"| inputlookup investigate_ips\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\nFor file hashes, use:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"| inputlookup investigate_hashes\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\nYou can use the contents of the store to enrich event data within Splunk.\n\nEach of the three stores of data (domains, IP addresses and hashes) is treated as a separate set of keys for data.  This is because the data types are fundamentally different.  The output matches closely, but not exactly, to what you would typically see using the API to query.\n\nUse standard data sorting techniques to build queries, such as:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"| inputlookup investigate_domains | where not isnull('cooccurrences.0') | fields dest, cooccurrences.0, status_label, last_queried | sort -last_queried \",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\n### Additional information for each store type:\n\nThe 'investigate_domains' query broadly covers the same fields as the API would for any given domain, such as the ASN of the domain, the content categories it matches, any cooccurrences or related domains, DGA score, whether it is in fast flux, general status (known bad or unknown) and WHOIS data. 
Information about all of these fields can be found earlier in the API documentation.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/cc70602-Screen_Shot_2016-12-09_at_10.14.44_AM.png\",\n        \"Screen Shot 2016-12-09 at 10.14.44 AM.png\",\n        2114,\n        430,\n        \"#eeeeee\"\n      ]\n    }\n  ]\n}\n[/block]\nThe 'investigate_ips' query covers the destination (the IP itself), the last queried time, the resource record history for that IP (DNS RR History for an IP), as well as the labels for the domains that resolved to this IP at one point:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/c0c8160-Screen_Shot_2016-12-09_at_10.00.24_AM.png\",\n        \"Screen Shot 2016-12-09 at 10.00.24 AM.png\",\n        2718,\n        654,\n        \"#eeeeef\"\n      ]\n    }\n  ]\n}\n[/block]\nThe 'investigate_hashes' query covers AV results, as well as network connections, file type (magic type) and security categories:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/8696569-Screen_Shot_2016-12-09_at_10.52.16_AM.png\",\n        \"Screen Shot 2016-12-09 at 10.52.16 AM.png\",\n        2392,\n        738,\n        \"#f2f2f2\"\n      ]\n    }\n  ]\n}\n[/block]\nFor more information about some of the above data, as well as information about any additional fields, see the Investigate API documentation above.\n\n###`investigatefilter` search command\n\nThere is a custom search command which can filter out search results to only contain hosts with a certain status from the Investigate API—e.g., you can filter out only search results that have a malicious host. \nYou must be in the Cisco Investigate app context to use this command. \n\nFor example, if you have an index named `proxy_logs` which stores hosts in a field named `host`, then you can run this command in the search box to filter out indices to only include those whose `host` field is a malicious host, according to the Investigate API:\n\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"index=\\\"proxy_logs\\\" | investigatefilter host_field=host\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\n\nBy default, the `status` parameter is assigned an argument of -1 (i.e. malicious). However, you can search for any supported status code (-1, 0, or 1). For example, to filter out indices to only include hosts that are deemed benign, you can run:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"index=\\\"proxy_logs\\\" | investigatefilter host_field=host status=1\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\n\nIf you like, you can make this your saved search for the Investigate add-on so that it only enriches data with malicious hosts.\n\n### Pruning Data\n\n#### KV Store Pruning ####\n\nA script has been provided for pruning of KV Store collections used by this add-on.\nThe following two methods can be configured and enabled.  This can also be done in the user interface as options in the set up. \n\n* **time-based**: Entries older than a user-supplied time modifier, e.g. \"-7d@d\" would\n  delete everything older than 7 days.\n* **size-based**: A limit can be set on the max number of rows in a collection.\n  When run, the pruning script will delete rows in time-ascending (i.e. oldest first)\n  order until the number of rows is equal to the maximum.\n  \nBoth of these options can be set in the add-on setup page.\n\n1. 
Go to the Data Inputs settings under \"Settings\".\n2. Under \"Local inputs\", click \"Scripts\".\n3. Click to enable the add-on's scripted input:\n`$SPLUNK_HOME/etc/apps/opendns_investigate/bin/investigate_prune_kv.py`  \n4. Configure the schedule it will run on by clicking its link and modifying the interval value\n\n#### Support \n\nSupport can be reached at: [umbrella-support@cisco.com](mailto:umbrella-support@cisco.com)","excerpt":"","slug":"splunk-plugin-for-investigate","type":"basic","title":"Splunk Add-on for Investigate: Installation and Use"}

# Splunk Add-on for Investigate: Installation and Use


The Cisco Umbrella Investigate add-on for Splunk leverages the Investigate API to enrich events within Splunk. The add-on can be obtained from Splunkbase at no additional cost. A license for the Investigate API is required in order to use it; as such, the add-on is only available to customers whose Umbrella subscription package includes the API.

The Splunk add-on for Investigate combines your data sources from other security tools such as SIEM, IDS/IPS and firewall with the power of Investigate's API, allowing you to check data against Investigate quickly and easily within Splunk.

For more information, please see our datasheet: [https://learn-umbrella.cisco.com/datasheets/splunk-add-on-for-investigate](https://learn-umbrella.cisco.com/datasheets/splunk-add-on-for-investigate)

The Splunk add-on allows three types of data to be queried: domain names, IP addresses and file hashes. There are three key steps: installing the add-on with the appropriate settings, creating a scheduled search to pull data, and then reviewing the data with Investigate.

### System Requirements

* Cisco Investigate API key
* A running Splunk instance

The instructions here have been tested most recently with Splunk 6.4.2 and 6.5.2 in a Linux environment and Splunk 6.4.2 in a Windows environment.

*IMPORTANT:* The scheduled search should only be for domains that are alerting on your security events in a SIEM or other system. You cannot use the scheduled search for your entire traffic logs due to API throttling restrictions. Please expect a maximum throughput of 5,000 unique domains per hour. Your saved search should not contain more than this number of domains.

The add-on can be found on Splunkbase: [https://splunkbase.splunk.com/app/3324/](https://splunkbase.splunk.com/app/3324/)

### Third Party Dependencies

These are the external dependencies used to aid in this add-on's functionality. These dependencies are packaged with the add-on, so there's no need to perform any installation, but they are noted here so that you can make informed decisions about licensing, etc.

* [dateutil](https://dateutil.readthedocs.io/en/stable/)
* [splunklib](https://github.com/splunk/splunk-sdk-python/tree/master/splunklib)
* [pyinvestigate](https://github.com/opendns/pyinvestigate)
* [IPy](https://github.com/autocracy/python-ipy)

### Installation

Install into Splunk with your method of choice:

* Splunk Web: go to the Manage Apps page, click the “Install app from file” option, then follow the instructions.

* Splunk CLI: download the opendns_investigate.tgz file to your Splunk node of choice and install with the following command:
[block:code]
{
  "codes": [
    {
      "code": "$SPLUNK_HOME/splunk/bin/splunk install app cisco-umbrella-investigate-add-on_040.tgz -auth <username>:<password>",
      "language": "text"
    }
  ]
}
[/block]
Both methods will require a restart of the Splunk node.

After starting the node, navigate back to the Manage Apps page, find the listing for the Umbrella Investigate add-on and click the “Set up” option. This will load the standard setup page for the add-on.

Typically the Manage Apps page is found under the gear icon on the main launch page, next to Apps:

[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/2fa2ff0-apps-splunk.png",
        "apps-splunk.png",
        250,
        93,
        "#68a13e"
      ]
    }
  ]
}
[/block]
If you wish to change the name of the add-on, click View Properties.

### Where to put your API key and proxy username/password

The Investigate API key is needed to authenticate the API requests. This can be obtained from the Investigate UI dashboard (note: not all Investigate customers have access to the API). Once you have the key, you will need to enter your Cisco Umbrella Investigate API key in data inputs. This ensures your API key is stored in an encrypted format. Go to **Settings > Data Inputs > Cisco Investigate Credentials.** Click 'New'. Enter any name you like, and then enter the API key gathered from the Investigate dashboard. Click Next, and your API key will be encrypted and saved. If you are ever issued a new API key, you can update it here.

Any proxy authentication is also set here.
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/024fefc-api-key-input.png",
        "api-key-input.png",
        2194,
        1020,
        "#2b342e"
      ]
    }
  ]
}
[/block]
NOTE: some earlier versions of the add-on had the API key in the setup screens shown below, and this may still be the case if you have not yet updated your add-on to the newest version.

Once you've done that, click the Set up option and this screen is shown (the screenshot is in two parts for ease of reading):

[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/e0d2743-Screen_Shot_2017-03-28_at_11.05.40_AM.png",
        "Screen Shot 2017-03-28 at 11.05.40 AM.png",
        1892,
        966,
        "#dde1df"
      ]
    }
  ]
}
[/block]
### Configuration

You will be prompted to enter information into the setup page when you first start the add-on. These settings include:

* Request destination fields: The add-on expects defined fields. These are the same fields you’ll define in the search regex (e.g. domain, host, destination, etc.), typically containing the domain, IP or hash information. These should be entered in comma-separated format.
* Scheduled search name: The name of the saved search you want the add-on to pull domain information from. More information on creating the saved search is below; if the search has not been created yet, this can be skipped.

The next two fields define the pruning of data to ensure the data store does not exceed a certain size.

* Set how far back in time you want to save data: to limit the size of the data store, define how long you would like to save data. The value should be a Splunk time modifier for search. For example, if you wish to save data for a week, you would enter -7d@d. Leaving this blank will disable timestamp pruning.

* Set how much data you want to save: to prune data so it does not exceed a specific number of rows, set the maximum number of rows here. Anything in excess will be deleted in time-ascending order (i.e. oldest first). Leaving this blank will disable size pruning.

The final three settings are tied to support for proxy servers and nonstandard hosts and ports:
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/a95984a-Screen_Shot_2017-03-06_at_10.57.00_AM.png",
        "Screen Shot 2017-03-06 at 10.57.00 AM.png",
        1896,
        830,
        "#476346"
      ]
    }
  ]
}
[/block]
* Proxies: set the IP and port of the proxy server used for requests to the Investigate API. Make sure to use the following format: ip:port. Only IP and port are required, not protocol. If this is blank, the add-on will make direct connections to the Investigate API. Proxy authentication is handled in the Cisco Investigate Credentials data input described above. Note: we currently only support http/https proxying (not SOCKS proxies).
* Host name: use this to set the hostname of the Splunk management server, if different from this host.

* Port: set the Splunk management port.


### Create a Scheduled Search

Next, we need to create a scheduled search. You will need to create one with the ability to get any kind of destination (domain, IP or hash) from log files.

For instance, the field itself may be called “dest”; in the examples here, we're using example firewall logs, which use the field name “dest_host_blocked”.

The scheduled search should only be for domains that are alerting on your security events in a SIEM or other system. You cannot use the scheduled search for your entire traffic logs due to API throttling restrictions.

A new scheduled search can be created from within Splunk Web by going to **Settings > Searches, reports, and alerts** and, once in that section, clicking the New button. Pick the app context first:

[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/8cb0932-context-for-app.png",
        "context-for-app.png",
        402,
        128,
        "#e6e6e6"
      ]
    }
  ]
}
[/block]
This scheduled search should query for specific time ranges. For instance, it may poll every hour for the data from two hours before it is run. So it may run at 5 minutes past each hour (e.g. 11:05 AM) and look for data within a one-hour segment beginning two hours before (9AM-10AM if the current time is 11AM).

The schedule should be made carefully to ensure one search is not still running while another one kicks off, as this could lead to significant performance issues within Splunk.
[block:callout]
{
  "type": "info",
  "title": "NOTE:",
  "body": "Make sure permissions are set correctly so the add-on and user have permissions to view the search report. Make sure that the scheduled search name matches the exact name, as this is case-sensitive."
}
[/block]
An example saved search query would be:
[block:code]
{
  "codes": [
    {
      "code": "index=\"firewall_logs\" earliest=-2h latest=-1h | fields dest_host_blocked",
      "language": "text"
    }
  ]
}
[/block]
A more complex example, specific to a single host:
[block:code]
{
  "codes": [
    {
      "code": "index=\"firewall_events\" earliest=-2h latest=-1h cs_host=adobe.com | fields cs_host, cs_hash",
      "language": "text"
    }
  ]
}
[/block]
This way, the dest_host_blocked field for requests in the firewall logs index is passed along in a simple-to-parse form for the add-on to process. If you want to apply a more logical condition, such as a comparison with >, you can use the 'where' command.
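For example, the following sketch reuses the example firewall_logs index and dest_host_blocked field from above and keeps only destinations blocked more than five times in the window; the threshold of 5 is purely illustrative:
[block:code]
{
  "codes": [
    {
      "code": "index=\"firewall_logs\" earliest=-2h latest=-1h | stats count by dest_host_blocked | where count > 5 | fields dest_host_blocked",
      "language": "text"
    }
  ]
}
[/block]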
Saved search queries and how they're constructed ultimately depend on your data sources, and Umbrella support may not be able to recommend which fields are appropriate for your indexes. How you pull data into the Splunk add-on from an index will depend entirely on your data sources.

The scheduled search should only be for domains that are alerting on your security events in a SIEM or other system. You cannot use the scheduled search for your entire traffic logs due to API throttling restrictions, and you may find that the volume of data exceeds the natural limits of your Splunk instance. Beyond that, however, the volume of data from non-security-related internet traffic will simply be of no use to your team.
Please expect a maximum throughput of 5,000 unique domains per hour. Your saved search should not contain more than this number of domains.
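One way to stay under that limit, sketched here with the same example firewall_logs index and dest_host_blocked field, is to deduplicate destinations and cap the number of rows the saved search returns:
[block:code]
{
  "codes": [
    {
      "code": "index=\"firewall_logs\" earliest=-2h latest=-1h | dedup dest_host_blocked | fields dest_host_blocked | head 5000",
      "language": "text"
    }
  ]
}
[/block]
If the capped search regularly returns 5,000 rows, narrow the query to alert-related events rather than raising the cap.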
#### Enable Scripted Input

Next, be sure to enable the scripted input for the add-on. You will need to:

1. Go to the Data Inputs settings under "Settings".
2. Under "Local inputs", click "Scripts".
3. Click to enable the add-on's scripted input: `$SPLUNK_HOME/etc/apps/opendns_investigate/bin/investigate_input.py`
4. Configure the schedule it will run on by clicking its link and modifying the interval value.

Once you've created your scheduled search, go back to the 'Set up' section and add the scheduled search name in the appropriate field.

### Distributed System Installation

When installing on a distributed cluster, the add-on (scripted input) must be installed on the search head (or one of the search heads). That node will run the add-on process.

### App Usage

The basics of the Splunk add-on are three key collections, each matching a particular set of API results: one for domains, one for IP addresses and one for file hashes.

To view the contents of the store containing your Investigate data, create a Splunk search with the following command for domains:
[block:code]
{
  "codes": [
    {
      "code": "| inputlookup investigate_domains",
      "language": "text"
    }
  ]
}
[/block]
For IP addresses, use:
[block:code]
{
  "codes": [
    {
      "code": "| inputlookup investigate_ips",
      "language": "text"
    }
  ]
}
[/block]
For file hashes, use:
[block:code]
{
  "codes": [
    {
      "code": "| inputlookup investigate_hashes",
      "language": "text"
    }
  ]
}
[/block]
You can use the contents of the store to enrich event data within Splunk; a sketch of one such enrichment follows the sorting example below.

Each of the three stores of data (domains, IP addresses and hashes) is treated as a separate set of keys, because the data types are fundamentally different. The output matches closely, but not exactly, what you would typically see when querying the API directly.

Use standard data sorting techniques to build queries, such as:
[block:code]
{
  "codes": [
    {
      "code": "| inputlookup investigate_domains | where not isnull('cooccurrences.0') | fields dest, cooccurrences.0, status_label, last_queried | sort -last_queried",
      "language": "text"
    }
  ]
}
[/block]
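Here is one possible enrichment, a sketch that again assumes the example firewall_logs index and dest_host_blocked field used earlier; the subsearch renames the store's dest field so it matches the event field before joining:
[block:code]
{
  "codes": [
    {
      "code": "index=\"firewall_logs\" earliest=-2h latest=-1h | join type=left dest_host_blocked [| inputlookup investigate_domains | rename dest AS dest_host_blocked | fields dest_host_blocked, status_label, last_queried] | table _time, dest_host_blocked, status_label, last_queried",
      "language": "text"
    }
  ]
}
[/block]
A left join keeps events whose destination has not yet been looked up by the add-on.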
### Additional information for each store type

The 'investigate_domains' query broadly covers the same fields as the API would for any given domain, such as the ASN of the domain, the content categories it matches, any co-occurrences or related domains, DGA score, whether it is in fast flux, general status (known bad or unknown) and WHOIS data. Information about all of these fields can be found earlier in the API documentation.
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/cc70602-Screen_Shot_2016-12-09_at_10.14.44_AM.png",
        "Screen Shot 2016-12-09 at 10.14.44 AM.png",
        2114,
        430,
        "#eeeeee"
      ]
    }
  ]
}
[/block]
The 'investigate_ips' query covers the destination (the IP itself), the last queried time, the resource record history for that IP (DNS RR history), as well as the labels for the domains that resolved to this IP at one point:
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/c0c8160-Screen_Shot_2016-12-09_at_10.00.24_AM.png",
        "Screen Shot 2016-12-09 at 10.00.24 AM.png",
        2718,
        654,
        "#eeeeef"
      ]
    }
  ]
}
[/block]
The 'investigate_hashes' query covers AV results, as well as network connections, file type (magic type) and security categories:
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/8696569-Screen_Shot_2016-12-09_at_10.52.16_AM.png",
        "Screen Shot 2016-12-09 at 10.52.16 AM.png",
        2392,
        738,
        "#f2f2f2"
      ]
    }
  ]
}
[/block]
For more information about some of the above data, as well as information about any additional fields, see the Investigate API documentation above.

### `investigatefilter` search command

There is a custom search command which filters search results down to only those hosts with a certain status from the Investigate API; for example, you can keep only search results whose host is malicious. You must be in the Cisco Investigate app context to use this command.

For example, if you have an index named `proxy_logs` which stores hosts in a field named `host`, then you can run this command in the search box to keep only events whose `host` field is a malicious host, according to the Investigate API:

[block:code]
{
  "codes": [
    {
      "code": "index=\"proxy_logs\" | investigatefilter host_field=host",
      "language": "text"
    }
  ]
}
[/block]

By default, the `status` parameter is assigned an argument of -1 (i.e. malicious). However, you can filter on any supported status code (-1, 0, or 1). For example, to keep only events whose hosts are deemed benign, you can run:
[block:code]
{
  "codes": [
    {
      "code": "index=\"proxy_logs\" | investigatefilter host_field=host status=1",
      "language": "text"
    }
  ]
}
[/block]

If you like, you can make this your saved search for the Investigate add-on so that it only enriches data with malicious hosts.
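For instance, a sketch of such a saved search, reusing the example `proxy_logs` index and `host` field from above together with the hourly window pattern described earlier:
[block:code]
{
  "codes": [
    {
      "code": "index=\"proxy_logs\" earliest=-2h latest=-1h | investigatefilter host_field=host | fields host",
      "language": "text"
    }
  ]
}
[/block]
With this as the scheduled search, `host` is the field you would list under the request destination fields on the setup page.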
### Pruning Data

#### KV Store Pruning

A script has been provided for pruning the KV Store collections used by this add-on. The following two methods can be configured and enabled. This can also be done in the user interface as options in the set up.

* **time-based**: entries older than a user-supplied time modifier are deleted; e.g. "-7d@d" would delete everything older than 7 days.
* **size-based**: a limit can be set on the maximum number of rows in a collection. When run, the pruning script will delete rows in time-ascending (i.e. oldest first) order until the number of rows is equal to the maximum.

Both of these options can be set in the add-on setup page. To enable the pruning script:

1. Go to the Data Inputs settings under "Settings".
2. Under "Local inputs", click "Scripts".
3. Click to enable the add-on's scripted input: `$SPLUNK_HOME/etc/apps/opendns_investigate/bin/investigate_prune_kv.py`
4. Configure the schedule it will run on by clicking its link and modifying the interval value.

#### Support

Support can be reached at: [umbrella-support@cisco.com](mailto:umbrella-support@cisco.com)