Cisco Unified Border Element (CUBE) File Accounting
Note that this setup differs from Cisco's configuration for similar features on other services. We apologize for Cisco's inconsistency.
Please upgrade to the latest versions of the Cisco CDR app, Sideview Utils, Canary, and the TA first! You may need to upgrade Splunk to do this.
To perform these steps, you will need to set up an FTP server. CUBE and vCUBE cannot use SSH or SFTP, so the SFTP server you may have set up to collect CallManager's CDR data cannot be used for this data.
On your FTP server, create a user and a new folder that user can write files to. For our example setup, we will use the server 10.0.0.100 and a user named user with the password splunk. As our filename prefix, we will use cube_.
After this is set up, you should confirm with a manual test that this user can upload a file to the configured directory. Remember to delete the test file when you are done.
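One quick way to run this manual test from any machine with curl installed is shown below. This is a sketch using our example server, user, and password, and a hypothetical test.txt file; substitute your own values.

```
# Create a small test file and upload it to the FTP server (example credentials)
echo "test" > test.txt
curl -T test.txt ftp://10.0.0.100/ --user user:splunk

# List the directory to confirm the upload arrived
curl ftp://10.0.0.100/ --user user:splunk

# Delete the test file when done
curl ftp://10.0.0.100/ --user user:splunk -Q "DELE test.txt"
```

If the upload or delete fails, fix the FTP user's permissions before continuing — CUBE will need both.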
Steps to configure the file accounting server
Log into the CUBE that will perform file accounting, using an account with administrative permissions. Then run the commands listed below to set gw-accounting to file, change the cdr-format to "detailed", configure the FTP server information, and tell the system to flush new data to file once per minute. Note that the bold italic portions are the ones you'll change.
Be sure to change the server information in step 5 as appropriate.
Also in step 5: be SURE either to use a filename prefix different from your CDR data, like "cube_", so these files have names distinct from the cdr_* and cmr_* files, or to use an entirely different folder structure, so the cdr_*, cmr_*, and cube_* files all live in different places. If you don't, your inputs will fight over these files and ingest them incorrectly.
Note especially that this configuration accepts many of the default settings for buffer sizes and retry counts. We expect these to work in most moderately sized installations, but please check and confirm them for your own environment.
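For reference, the configuration sequence described above might look roughly like the following on the CUBE. This is a sketch only: the server address (10.0.0.100), username (user), password (splunk), and prefix (cube_) come from our example setup, and exact command syntax and timer ranges can vary by IOS/IOS-XE version, so verify each command against the documentation for your platform.

```
conf t
! Send accounting records to a file instead of RADIUS/syslog
gw-accounting file
 ! Primary destination: FTP server, path/prefix, and credentials (example values)
 primary ftp 10.0.0.100/cube_ username user password splunk
 ! Use the detailed CDR format
 cdr-format detailed
 ! Flush buffered records to the file once per minute (assumed timer value)
 maximum cdrflush-timer 1
end
```

The remaining buffer-size and retry settings are left at their defaults, as noted above.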
Create a new Splunk input
We will now create a new "batch" input for these new data files, similar to the ones in "Configuring Splunk to index the data".
Important note: THIS INPUT will be set up to DELETE the files as they're read in. If you do not want this behavior, please see the notes at the end of this section.
All these steps happen in your FTP server’s Splunk Universal Forwarder’s configuration files:
1) Create the batch input by adding this config to an inputs.conf file located at "$SPLUNK_HOME/etc/apps/TA_cisco_cdr/local/inputs.conf". This file should exist already, but if it does not, you may need to create the "local" folder and the file itself. Make sure the user Splunk runs as has permissions to this folder and file.
If your Universal Forwarder is on Windows, the contents of your inputs.conf will look like this:
[batch://D:\path\to\files\file_accounting\cube_*]
index = cisco_cdr
sourcetype = cube_cdr
move_policy = sinkhole
If your Universal Forwarder is on Linux or Unix, the input will look like this:
[batch:///path/to/files/file_accounting/cube_*]
index = cisco_cdr
sourcetype = cube_cdr
move_policy = sinkhole
NOTE: It is critical that no mistakes be made here. Only the path (shown in bold) should be edited. Leave everything else exactly as it is written above.
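After saving the file, you can ask Splunk to show you the stanza it actually loaded. Assuming a standard Universal Forwarder install with $SPLUNK_HOME set, something like the following should echo back your batch stanza and its settings:

```
# On the forwarder, show the effective batch input configuration
$SPLUNK_HOME/bin/splunk btool inputs list batch --debug
```

If your stanza does not appear, check the file location and the permissions on the "local" folder before restarting the forwarder.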
NOTE: As mentioned above, this is a sinkhole input, and it will delete each file as it indexes it. Any CSV files that already exist in this directory will be indexed and deleted almost immediately, and any new files written here will be indexed and deleted as they arrive. If you have other intentions for these files besides putting them in Splunk, please contact us and we can help you come up with another solution.
Contact us to set up a Webex! We can help confirm everything is working properly and help you start using this data.
If you have any comments at all about the documentation, please send them to email@example.com.