Configure Splunk to index the data

Now it’s time to hook the output of CallManager up to Splunk and the Cisco CDR Reporting and Analytics app.

Standalone Installs – Use the Data Inputs Wizard to Index the Data

Go to the next section if you are using a separate indexer or a separate forwarder.  Use this section only if your indexer and search head are both on this one server.

  1. Log into the Splunk Server as an “admin” user.
  2. Navigate to the “Splunk for Cisco CDR” app.
  3. Give the app’s landing page a few seconds to load; it will tell you that you have not yet indexed any CDR records into Splunk.
  4. Under that should be a large blue link that says “I am ready to proceed with indexing my CDR data”. Click that link and the wizard will guide you from there.
  5. When the wizard finishes, your data inputs are configured and you can move on to What’s Next!

Note: if you do not see any such link, it’s possible someone was here before you and partially completed setup. Don’t worry, you can get to the wizard manually by clicking Setup > Set up Data Inputs in the app’s navigation bar.

After you have successfully added your data inputs for your standalone install, scroll all the way to the bottom of this page and see a few recommended but optional steps!
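Once data is flowing, a quick way to confirm that events are arriving is a search like the following (this assumes the default “cisco_cdr” index; adjust if you chose another):

```
index=cisco_cdr sourcetype=cucm_cdr OR sourcetype=cucm_cmr
| stats count by sourcetype
```

You should see non-zero counts for both sourcetypes within a few minutes of CallManager writing its first files.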

Distributed Installs – Creating an index for our data

It is recommended to create and use the default “cisco_cdr” index on the indexers and to index the data there. You can of course use any index name you’d like, though.

To use the default index name, there is only a single step:

  1. Creating an index “cisco_cdr” (performed on/for the indexer).

Using a custom index name involves just a few more easy steps.

  1. Create the index that you’d like to use (performed on/for the indexer)
  2. Change the “custom_index” macro in the Cisco CDR Reporting and Analytics app (performed on the Search Head).  The contents should be index="custom_index_name".
    • To edit this, in Splunk’s menu click “Settings”, then under the “Knowledge” topic click “Advanced search”.  On the page that opens, click “Search macros” and find the “custom_index” macro.
  3. Make sure the index in the “custom_index” macro matches the index named in the “index = cisco_cdr” lines in your inputs.conf (performed in TA_cisco_cdr on the UF).
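As a sketch, if your custom index were named “telephony” (a hypothetical name used only for illustration), the resulting macro override on the Search Head would look like this; the app folder name shown is a placeholder for wherever the app is installed:

```ini
# $SPLUNK_HOME/etc/apps/<app folder>/local/macros.conf on the Search Head
# "telephony" is a hypothetical index name -- substitute your own.
[custom_index]
definition = index="telephony"
```

Editing the macro through the Settings UI described above produces the same result and is usually the safer route.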

In any case, creating and deploying the index itself should be done according to your own architecture and best practices.  Some examples:

  • In a simple, non-clustered environment not using Deployment Server –
    • use the UI or indexes.conf to add a new index to your indexer.
  • In non-clustered environments using Deployment Server –
    • add this new index to the indexes.conf you deploy to your indexers.
  • In an Indexer Cluster –
    • add this new index to the indexes.conf on the Cluster Master and deploy it to your cluster members.
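Whichever method fits your architecture, the indexes.conf stanza itself is minimal. A sketch, assuming the default “cisco_cdr” name and standard $SPLUNK_DB paths:

```ini
# indexes.conf -- place per your architecture (indexer, Deployment Server,
# or Cluster Master, as described above)
[cisco_cdr]
homePath   = $SPLUNK_DB/cisco_cdr/db
coldPath   = $SPLUNK_DB/cisco_cdr/colddb
thawedPath = $SPLUNK_DB/cisco_cdr/thaweddb
```

Retention and sizing settings are omitted here; apply whatever your site standards dictate.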

Distributed Installs – Prepare your Forwarder

If you are using an existing Universal Forwarder (UF) or Heavy Forwarder (HF)

  1. Confirm that your existing forwarder is forwarding into your indexing tier.
  2. Continue with the next section below.

If you are setting up a new Universal Forwarder (UF)

  1. Download and install the Universal Forwarder instance. You can download the UF from here.
  2. Configure the UF to send to your indexers.  Read about that in Splunk’s documentation on configuring the UF.
  3. Once configured, please continue below with the next section.
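For reference, a minimal outputs.conf on the UF might look like the following sketch. The group name and indexer host are placeholders; 9997 is the conventional Splunk receiving port, but use whatever your indexers listen on:

```ini
# $SPLUNK_HOME/etc/system/local/outputs.conf on the UF
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer1.example.com:9997
```

Splunk’s UF documentation linked above covers load-balanced and SSL variations of this.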

Distributed Installs – Download and Extract TA_cisco_cdr

We now need to download, extract, configure and redeploy the TA_cisco_cdr.

  1. On your desktop, log in to download the app as TA_cisco_cdr.tar.gz
    • This will end up on your UF after you configure it in the steps below.
    • For now, we just need it in an easy-to-edit location.
  2. Extract the tar.gz file into a temporary location.
    • If you are running Windows and can’t open that file, try using 7-Zip.
  3. Once you have it downloaded and editable, continue to the next section below.
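On Linux or macOS, the extraction is a one-liner. The snippet below is illustrative only: it first stages a dummy TA_cisco_cdr.tar.gz under /tmp (so the example is self-contained), then extracts it exactly the way you would the real download. The /tmp paths are placeholders; use any convenient working directory.

```shell
# Illustration: stage a dummy archive mimicking the TA layout.
mkdir -p /tmp/stage/TA_cisco_cdr/default
touch /tmp/stage/TA_cisco_cdr/default/app.conf
tar -czf /tmp/TA_cisco_cdr.tar.gz -C /tmp/stage TA_cisco_cdr

# Extract into a temporary, easy-to-edit location.
mkdir -p /tmp/ta_work
tar -xzf /tmp/TA_cisco_cdr.tar.gz -C /tmp/ta_work

# The TA's folder is now ready for editing.
ls /tmp/ta_work/TA_cisco_cdr
```

With the real download you would skip the staging lines and run only the mkdir/tar -xzf pair against the file you fetched.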

Distributed Installs – Configuring TA_cisco_cdr

We now need to configure the TA_cisco_cdr we just extracted.

Before we make any changes to the TA’s set of files and folders, the directory structure looks like this:
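(If the screenshot is not visible, the as-shipped layout resembles the sketch below; the exact file names under default/ vary by app version.)

```
TA_cisco_cdr/
└── default/
    ├── app.conf
    └── (other shipped .conf files)
```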

The below steps create a “batch” data input, often referred to as a “sinkhole” data input, on the Universal Forwarder.

  1. Locate your temporary extracted version of the TA_cisco_cdr
  2. Inside there, create a “local” folder so that you have a directory “TA_cisco_cdr/local/”.  If you already have that folder, then continue with the next step.
  3. Create a new file inside the “TA_cisco_cdr/local/” folder called “inputs.conf”, so you have a “TA_cisco_cdr/local/inputs.conf” file.
  4. To that file, add the following contents depending on your UF’s Operating System.  The stanza paths below are placeholders: replace “path\to\files” / “path/to/files” with the directory where CallManager drops its CDR and CMR files.
    • For Windows, the contents of inputs.conf will look like these:

[batch://C:\path\to\files\cdr_*]
move_policy = sinkhole
index = cisco_cdr
sourcetype = cucm_cdr

[batch://C:\path\to\files\cmr_*]
move_policy = sinkhole
index = cisco_cdr
sourcetype = cucm_cmr

    • For Linux or Unix, the contents of inputs.conf will look like these:

[batch:///path/to/files/cdr_*]
move_policy = sinkhole
index = cisco_cdr
sourcetype = cucm_cdr

[batch:///path/to/files/cmr_*]
move_policy = sinkhole
index = cisco_cdr
sourcetype = cucm_cmr

Important Notes:

  • It is critical that no mistakes be made here. Edit only the path portion of each stanza; leave everything else exactly as it is written above.
  • Windows users double-check your permissions on the created file and folder!
  • Use appropriate slashes for your host’s Operating System, i.e. “/foo/bar/cdr_*” vs. “C:\foo\bar\cdr_*”.
  • Make sure to match the format of the paths
    • Linux – Note the triple slashes at the front of the path – it’s “batch://” then the path starting with the leading slash, “/path/to/files/” hence three slashes.
    • Windows – Full path goes here, it’s “batch://” then your path, including drive letter, like “E:\SFTP”, for “batch://E:\SFTP\”.
  • Make sure the index specified in both stanzas exactly matches the single index specified in the “custom_index” macro in the app on the Search Head.
    • Index names in Splunk are case-sensitive.  “index = cisco_CDR” is not the same as “index = cisco_cdr”.
    • If you used the default “cisco_cdr” index, the above file snippets should work correctly as-is.
  • Make sure “cdr_*” and “cmr_*” are present on the end of their respective paths, and that they correspond to the “cucm_cdr” and “cucm_cmr” sourcetypes in the same stanza.
  • Make sure both stanzas specify the exact same index. “cisco_cdr” is assumed here, although you may have chosen a different index name.

When finished, you should have a directory structure like this one:

With the contents of the inputs.conf file, of course, being as described above.
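(If the screenshot is not visible, the resulting layout resembles the sketch below; file names under default/ vary by app version.)

```
TA_cisco_cdr/
├── default/
│   └── (as shipped)
└── local/
    └── inputs.conf
```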

NOTE: As mentioned above this is a sinkhole input and it will delete each file as it indexes it. Any existing Call Manager files that exist in this directory will be indexed and deleted almost immediately, and any new files Call Manager writes will be indexed and deleted as they arrive. If you have other intentions for these files besides putting them in Splunk, please see our documentation regarding Sinkhole vs. Monitor Inputs.

Distributed Installs – Deploy the TA to the UF

This step takes the TA, which we just configured in the preceding step above, and “installs” it into the Splunk UF.

This may be as simple as copying the TA’s folder (TA_cisco_cdr) in its entirety to the UF’s $SPLUNK_HOME/etc/apps folder, then restarting Splunk.  This would mean that if you installed the Splunk UF in the default locations, you’d end up with a folder:

  • Windows – perhaps C:\Program Files\Splunk\etc\apps\TA_cisco_cdr\
  • Linux – perhaps /opt/splunk/etc/apps/TA_cisco_cdr/

If you are using a Deployment Server, copy the TA to your Deployment Server’s $SPLUNK_HOME/etc/deployment-apps folder and add a serverclass to deploy it to your UF.  Make sure the restart flag is set so the UF restarts after the app is deployed.
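A sketch of such a serverclass follows; the class name and whitelisted host are placeholders for your own:

```ini
# $SPLUNK_HOME/etc/system/local/serverclass.conf on the Deployment Server
[serverClass:cisco_cdr_forwarders]
whitelist.0 = my-cdr-uf.example.com

# restartSplunkd = true is the "restart flag" mentioned above: the UF
# restarts itself after receiving the app, activating the batch input.
[serverClass:cisco_cdr_forwarders:app:TA_cisco_cdr]
restartSplunkd = true
```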

If you are using some other tool, you are mostly on your own here, but we assume you have experience with it.

You should have data soon, and if you don’t, be sure to let us know!

What’s next?

Once you have data coming in, proceed to the last “required” section, that of setting the cluster and locale.

If you have any comments at all about the documentation, please send them in to