Cisco CDR Reporting & Analytics | Administration
In our testing, 1,000,000 calls typically consumes between 1 and 2 GB of index space in Splunk. The exact size depends on several factors, such as FQDN length and the number of legs per call, but about 2 GB per 1,000,000 calls is a reasonable upper bound. For easier math, figure 500,000 calls = 1 GB.
Given that ratio, predicting how much space you need per year is simple: estimate how many calls you expect per year and divide by 500,000. At 50,000 calls per month, that is 50,000 × 12 = 600,000 calls per year, and 600,000 ÷ 500,000 = 1.2 GB.
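If you want to script the estimate (for capacity-planning spreadsheets, say), the arithmetic above is trivial to capture. This is just a sketch of the ratio described here; the constant is an approximation, not a guarantee:

```python
# Rough storage estimate for the cisco_cdr index, using the
# ~500,000 calls per GB ratio from our testing. Your actual
# ratio varies with FQDN length, legs per call, and so on.

CALLS_PER_GB = 500_000

def yearly_storage_gb(calls_per_month: float) -> float:
    """Estimate GB of index storage consumed per year."""
    calls_per_year = calls_per_month * 12
    return calls_per_year / CALLS_PER_GB

print(yearly_storage_gb(50_000))  # 50k calls/month -> 1.2 GB/year
```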
To see your current size and consumption, if you are an admin we recommend Splunk's Monitoring Console, especially the Indexing: Indexes and Volumes sections. From there you can get usage and size statistics for the cisco_cdr index.
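If you prefer a search over the Monitoring Console, one quick way to check the index's on-disk footprint is Splunk's dbinspect command (this is a generic Splunk search, not something specific to our app, and it requires sufficient privileges to run):

```
| dbinspect index=cisco_cdr
| stats sum(sizeOnDiskMB) AS totalMB
```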
Our data retention policies are simply Splunk's data retention policies. Splunk documents these under the topic Set a retirement and archiving policy, and that page applies to our data just as it does to any other Splunk data.
If you left everything at the defaults when you created the cisco_cdr index, Splunk will keep data until the index grows to 500 GB or until events are roughly six years old. Whichever threshold is met first triggers deletion of the oldest data. In many cases the defaults work out fine for this index; just check disk space consumption from time to time.
If not, you can carefully adjust the settings described in Splunk's retirement and archiving documentation to meet your needs.
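As an illustration, a retention policy for the cisco_cdr index lives in indexes.conf. The stanza below is a sketch with example values, not a recommendation; the paths and thresholds are placeholders you should adapt to your environment:

```ini
[cisco_cdr]
homePath   = $SPLUNK_DB/cisco_cdr/db
coldPath   = $SPLUNK_DB/cisco_cdr/colddb
thawedPath = $SPLUNK_DB/cisco_cdr/thaweddb
# Keep roughly 7 years of CDRs (value is in seconds)
frozenTimePeriodInSecs = 220752000
# Cap the index at about 20 GB on disk (value is in MB)
maxTotalDataSizeMB = 20000
```

Whichever of the two limits is reached first causes the oldest buckets to roll to frozen, which by default means deletion.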
If you need some help with these, or need some specific advice, contact us!