We occasionally get asked questions about data retention policies or disk consumption.


Our data retention policies are actually just Splunk data retention policies.  On individual indexes (like the one where our data is stored) you can set either size or time limits on how much data to keep.

Cisco CDR data is pretty small as Splunk data goes.  Most of our customers accumulate ten to perhaps the low hundreds of megabytes per day.  “Megabytes” was not a typo there!  With that little data, keeping several years of history isn’t a hard task.

Disk Space Consumption

The topic of disk consumption worries a lot of people, but for CDR records this is usually not much of an issue.

Around 1,000,000 calls in my test system ends up at a bit more than 1 GB.  This varies with factors like whether your data is full of long FQDNs or shorter names, how many legs usually comprise each call (counting call legs would probably be more accurate, but that seems like too much effort), how you use calling party normalization, and things like that.  From what I’ve seen, a reasonable upper bound with ample cushion is about 2 GB per million calls.  We’ll use that below.

Therefore, however long it takes you to reach 1 million calls is about how long it’ll take to accumulate 2 GB of data in your index.  If you do 1,000 calls per day, that’s roughly 3 years to get to 2 GB.  If you do a million calls per day, you’ll be at 2 GB/day or less.  Probably – see below on how to check!

If you are an admin, we recommend checking out the following screens.

  • Click Settings, then click Monitoring Console on the left of that menu.
  • Click Indexing, then Indexes and Volumes, then Index Detail: Instance.
  • Set your Group and Instance correctly, then set the Index to cisco_cdr.

That page is a whole topic on its own!  Hopefully you’ll find what you need there.

If you aren’t an admin you can’t see this information.  And in that case, well, this really isn’t your problem, is it?  Make sure someone’s keeping an eye on it and move on.

One last tip on checking disk space – if you know you have 6 months of data sitting in your index, and your index and all its related files (the folder cisco_cdr under $SPLUNK_HOME/var/lib/splunk/) total 500 MB, a little simple multiplication is all it takes to arrive at a 1 GB/year disk space requirement (plus some cushion – call it 2 GB/year).  You know this number is right because you aren’t guessing; you just measured it.  🙂
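That measurement-based estimate is just simple arithmetic.  As a sketch (the 6-months / 500 MB figures are the example numbers from above, not anything universal):

```python
# Extrapolate annual disk usage from a measured sample of the index.
measured_months = 6   # how much history is currently in the index
measured_mb = 500     # measured size of the index directory on disk

mb_per_year = measured_mb * (12 / measured_months)
with_cushion = mb_per_year * 2  # double it for a lazy safety margin

print(f"~{mb_per_year:.0f} MB/year, plan for ~{with_cushion:.0f} MB/year")
```

Swap in your own measured numbers; the point is that you’re extrapolating from real data rather than guessing.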

Data Retention

There are several aspects to this answer, depending on your actual needs.

Terminology and how Splunk handles data

Splunk stores data in what it calls “buckets”.  You can think of them as files.  These buckets “roll” (as in “roll over to…”) through a sequence of logical states – Hot, Warm, Cold and Frozen.

Hot buckets are those that Splunk’s actively writing to.

Warm and Cold buckets are searchable but not being written to.  In larger environments, Warm holds the data you search often and is sometimes kept on faster storage, while Cold can be on slower disks because it’s older and searched less often.  In smaller environments (and a lot of medium-sized environments, and sometimes even in big ones) there’s no real distinction between Warm and Cold.

There are no Frozen buckets by default, because by default rolling from Cold to Frozen deletes the data.  Luckily, if you want to keep the data in a non-searchable “archive” format that’s easy to make searchable again, setting a directory to copy the files into with coldToFrozenDir will do the trick.  (There are options – see the docs – for using custom scripts to handle this too, but that’s well beyond the scope of this simple docs page.)

The settings we’ll need

There are adjustments to the length of time buckets stay in each stage, but most of those settings are for large environments with specific needs.  For the matter of retention, we only care about a couple of settings.

  • frozenTimePeriodInSecs is the number of seconds a bucket’s data may age before the bucket rolls from Cold to Frozen.  Frozen data is deleted by default.
  • maxTotalDataSizeMB is the maximum total index size in MB; exceeding it also rolls the oldest buckets from Cold to Frozen.  Again, frozen data is deleted by default.
  • coldToFrozenDir is the simplest setting that makes the Cold-to-Frozen transition save data instead of deleting it.  The data becomes non-searchable once frozen, but it won’t be deleted and it’s easy to restore.

All of these are more fully explained in the Splunk Index Storage docs.
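In indexes.conf form, the three settings sit together in the index’s stanza, something like this (all values here are just placeholders to show the shape, not recommendations):

```ini
[cisco_cdr]
# Age limit: seconds before a bucket rolls from Cold to Frozen
frozenTimePeriodInSecs = 188697600
# Size limit: total index size in MB that also triggers Cold-to-Frozen rolls
maxTotalDataSizeMB = 500000
# Optional: archive frozen buckets here instead of deleting them
#coldToFrozenDir = /path/to/frozen/archive
```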

So, let’s explore two common scenarios!

I’m required to keep the data searchable for at least X days/months/years

In this case, your first step is to determine how much data you have over what time period.  Use the topic “Disk Space Consumption” above.

Let’s say you have 3 months of data right now, and the cisco_cdr index takes up 250 MB.  A little math shows that a year’s worth of data would be 1 GB.  I’d add some cushion – with sizes this small and with my laziness I’d just double it and be done with it, so that’s 2 GB/year.  (On bigger indexes I’d use 25%-40% cushion).  Note that in all cases, you still need to redo these numbers every now and then to account for changes in the environment!

So let’s say you need to keep the data for 5 years.  5 years of data at 2 GB/year is 10 GB. All right, I think we just determined that the amount of data isn’t going to be hard to store.

5 years is … some really large number of seconds.  The internet says it’s 157,680,000 seconds.  I’ll take their word for that.  (Also note, as per the Splunk Index Storage docs the default frozenTimePeriodInSecs setting is about 6 years, so maybe this is good enough already?  Do you care if there’s an extra year available if the storage for that data is cheap enough?)
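If you’d rather not take the internet’s word for it, the arithmetic is easy to check.  This uses 365-day years and ignores leap days, which is how that 157,680,000 figure comes out:

```python
# 5 years expressed in seconds, using 365-day years (no leap days).
seconds_per_year = 365 * 24 * 60 * 60   # 31,536,000
five_years = 5 * seconds_per_year
print(five_years)  # 157680000
```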

Now that we have all our information…

  • In that index’s stanza in local/indexes.conf (or in the UI), set frozenTimePeriodInSecs = 157680000
    • Or leave it at the default of about 6 years, if that was good enough.
  • Check that maxTotalDataSizeMB is at least 10000
    • That’s 10,000 MB, which is close enough to 10 GB for our needs
    • Also note the default is 500000, or 500 GB, so if it’s set at that level that’s perfectly fine.
    • Since we’re controlling everything with the frozenTimePeriodInSecs setting, we just need to make sure this is big enough!
  • Leave coldToFrozenDir alone
    • (Well, it’s optional – you can do what you want.  I just mean we don’t NEED it to fulfill the requirements here).
  • Then make sure you don’t run out of disk space, which shouldn’t be hard!
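Pulled together, the bullet points above might look like this in the local/indexes.conf stanza for the cisco_cdr index (the values are the ones worked out above, not universal recommendations):

```ini
[cisco_cdr]
# Roll buckets from Cold to Frozen (i.e. delete them) after ~5 years
frozenTimePeriodInSecs = 157680000
# Plenty of headroom above our ~10 GB estimate; the default of 500000 works too
maxTotalDataSizeMB = 500000
# coldToFrozenDir deliberately left unset -- frozen data is simply deleted
```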

As a safety factor, you could also set coldToFrozenDir to make Splunk archive off buckets it would otherwise have deleted when they expired.  That way, even if the data does expire too early, it’s easily recovered.  The Splunk Index Storage docs talk about this, and they link to their page on Archiving Data.

I’m required to keep data for at least X days/months/years, but it doesn’t have to be searchable.

Generally, when folks say “but it doesn’t have to be searchable,” they mean “it’s OK for it to be unsearchable, as long as I can ‘restore’ it or something if we DO need to search it.”  I’ll be using that definition below.

The answer above could be the answer here too.  Just because it doesn’t *have* to be searchable doesn’t mean you have to actually make it *not* searchable, and with the fairly small data sizes we’re talking about, why not build the solution as simply as you can?  What’s 20 GB between friends?

But, if you’d prefer to keep only, say, the last 90 days searchable and all the older data can just be archived off, then you could set a coldToFrozenDir.  That tells Splunk to save those buckets off into that location instead of deleting them when they’re due to expire.  From that archive location you could back them up and save them for as long as you’d like, or leave them there (assuming your filesystem can handle the size and number of files).  It’s relatively easy to restore them to being searchable.

The answer for 90 days searchable, but still keep 5 years around in a form where we could restore it later and search…

  • In that index’s stanza in indexes.conf (or in the UI), set frozenTimePeriodInSecs = 7890000
    • That’s 3 months
  • Check that maxTotalDataSizeMB is at least 1000
    • That’s 1 GB, which is actually twice as big as we need it but since we’re controlling this with frozenTimePeriodInSecs and not by size, all we need to do is make sure this setting is bigger than we need.
    • Also note the default is 500000, or 500 GB, so if it’s set at that level that’s perfectly fine.
  • Set the coldToFrozenDir to some disk location
    • Make sure the folder EXISTS
    • Make sure the folder IS WRITABLE BY SPLUNK
    • Make sure the folder HAS ENOUGH ROOM
    • Make sure the folder gets backed up (if you are going to save the data, then make sure you save the data, right?)
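As a sketch, those steps could land in local/indexes.conf like this (the archive path is purely illustrative – use whatever location you’ve created and verified):

```ini
[cisco_cdr]
# Keep ~90 days searchable (7,890,000 seconds), then roll Cold to Frozen
frozenTimePeriodInSecs = 7890000
# Far larger than the ~1 GB we need; size won't be the trigger here
maxTotalDataSizeMB = 500000
# Archive frozen buckets instead of deleting them (hypothetical path --
# it must exist, be writable by Splunk, and have enough room)
coldToFrozenDir = /opt/splunk_frozen/cisco_cdr
```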

Much like the answer above, the Splunk Index Storage docs talk about this, and they link to their page on Archiving Data.

Another option

Also note that CallManager can keep the original records around for a long time.  I’m not sure what the default retention is, but most customers can go back years inside CM.  What that lets you do is simply export CDR records and re-ingest them into Splunk.  This process is actually pretty easy and documented.

Sometimes the ability to recover that data from CallManager back into Splunk is all that’s required for retention.

(Of course, confirm that CallManager is saving data for long enough, that it has enough redundancy, that it’s being backed up, etc.)


I hope this foray into retention helps!  Know that if you have difficulties, this is all documented in the Splunk docs.  If you have more issues or need more help, we recommend the following process:

a) Search using your favorite internet search engine for “Splunk retention”, perhaps adding a few more keywords if appropriate.

b) Read through any Splunk docs links returned.

c) Read through those answers, especially ones from answers.splunk.com.  If you’d like to limit the search results to *only* answers.splunk.com, use a search string like “Splunk retention site:answers.splunk.com”.

d) If none of that answers your question, you can ask it on answers.splunk.com – just create an account and ask away!  Be sure to follow their guidelines on asking a good question!

e) Or you could also join the Splunk slack channel – several thousand people hang out on there helping each other with questions like this.  Google will tell you how to get on it!

If you have any comments at all about the documentation, please send them to docs@sideviewapps.com.