Cisco CDR Reporting & Analytics | Administration
Here’s a high-level overview of the pipeline that sends data from CallManager to our app, along with the most common things that go wrong at each stage. Sometimes with additional commentary, because that’s just the way we roll around here.
Do note that we can’t cover every possible way to configure this! Fortunately, though, most installs are similar enough to the steps below – give or take a minor variation or two – that this should still be helpful.
Also note this is NOT “setup troubleshooting”! Instead, this assumes that you had data coming in properly yesterday, but none today. I’m not saying you can’t use it to help set up the flow too, but that’s not this document’s primary purpose. First, for all troubleshooting …
Run our health check pages (or get an admin to run them if you aren’t one).
When they’re done, if you get warnings about no recent data, that’s what this will help you solve!
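If you’d also like a quick gut check from the search bar, a search along these lines will show whether anything has arrived recently. The index name below is only a placeholder (cisco_cdr is a common choice, but yours may differ), so substitute whatever your install actually uses:

    index=cisco_cdr earliest=-24h
    | stats count by sourcetype, host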
Feel free to jump ahead to whichever step you suspect. (Which, of course, is silly, especially since you’ve already done step 0, right? But I have a hankering for those old “choose your own adventure” books. They were such fun!)
The first pipe in the pipeline is the billing server you set up in CUCM. Once per minute, CUCM sends data via SFTP for all calls that ended in that past minute. If no calls ended in the past minute, no SFTP transfer is done.
Every once in a long while, we find that CallManager just “stops” sending that data – usually, but not always, coincident with a restart of the SFTP server.
Luckily, there’s an easy fix!
This doesn’t interrupt anything important that we know of, but if the checks below didn’t get things working again, then this is likely going to fix it.
CUCM needs to be able to talk to your SFTP server over the network.
Double-check both the firewall settings and the firewall logs.* Check the change log and talk to the network admins. If anyone made any changes to the network or the firewall since the last time you had data coming in, be suspicious. Coincidences like that almost never happen – it’s almost a sure bet those changes caused this outage.
*You are putting those logs into Splunk, aren’t you? If you aren’t, it’s a fantastic and common use case!
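If those firewall logs are in Splunk, a rough search like the one below can show whether traffic from CUCM to your SFTP server is being blocked. Treat it as a sketch: the index and field names depend entirely on your firewall and its add-on, and SFTP normally rides on TCP port 22 unless you’ve moved it.

    index=firewall dest_port=22 earliest=-24h
    | stats count by action, src_ip, dest_ip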
So CUCM sends data no more than once per minute, via SFTP over the network, to this SFTP server. The SFTP server receives that connection, accepts the transferred data, and writes a file to the filesystem (wherever the SFTP server’s configuration says to put it). If you look in the folder on the SFTP server where the files should be written, you’ll see either recent files, only old files, or no files at all. If that sounds confusing, I apologize. Just read it again a few times and hopefully it’ll become clear!
If you set it up the way we have suggested, with a batch sinkhole input, then the Splunk software – either the UF or the “local server” depending on how you have things arranged – deletes the files immediately after reading them. There’s a fairly easy way to confirm this, though: temporarily stop the UF (or the Splunk server, whichever one reads this folder) and watch the folder for a few minutes.
If all is working up to this step, you’ll see one or more files show up in this folder while it’s stopped. If you now start the UF/server back up and those files disappear within a couple of minutes, then before moving on to step 4 you’ll probably want to rerun the Health Checks and see if everything is fine now (e.g. restarting the UF or server “fixed” it).
If no files show up while the server/UF is stopped, then we have a problem.
If there’s nothing that seems amiss in the SFTP server software, and you can send it a test file fine, then work backwards through steps 2 and 1, because your problem is ‘before’ the SFTP server.
Otherwise, if this system is receiving files, be sure to turn the UF or server back on, then proceed to Step 4.
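As an extra confirmation on the pickup side, you can search the forwarder’s internal logs for activity on that folder. This is only a sketch – the folder path is a placeholder, and the exact component names can vary by Splunk version – but batch and monitor inputs generally log under BatchReader and TailReader:

    index=_internal sourcetype=splunkd (component=BatchReader OR component=TailReader) "/path/to/your/sftp/folder"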
A Splunk Universal Forwarder walks into a bar… Ha, no. Your SFTP files are making it in, but they’re just sitting there and not getting sent to Splunk. This is actually a moderately easy step to check – there are only a couple of moving pieces. The biggest question is whether your files are sitting on the Splunk server itself, or on a Universal Forwarder that sends them into your indexing tier.
Wherever they sit, check the input stanza that monitors that folder and make sure it has the correct index=... and sourcetype=... settings. There are actually more possibilities for breakage here, but those are the most common. If you are unsure about any of them, feel free to shoot us an email at [email protected]sideviewapps.com!
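For reference, a batch sinkhole stanza in inputs.conf looks roughly like the sketch below. The path, index, and sourcetype are made-up placeholders, so compare it against what your install actually uses rather than copying it verbatim:

    # hypothetical example only - your path, index, and sourcetype will differ
    [batch:///opt/sftp/cdr_landing]
    move_policy = sinkhole
    index = cisco_cdr
    sourcetype = cucm_cdr
    disabled = false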
If everything up through step 4 is working, then we should be getting data *somewhere.* Hopefully it’s the right place.
There’s one remaining piece to check, which is that all the searches in our app rely on one macro, custom_index, to know where to look for the data. Make sure custom_index points to the right index. How can you tell? Well, first off, in step 4 you confirmed the input stanza has the correct index=... setting, so this macro should point to that same index. And that’s really all there is to that.
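If you want to double-check, you can view the macro under Settings > Advanced search > Search macros, or in macros.conf directly. Assuming, purely as an illustration, that your data lands in an index called cisco_cdr, the definition would look something like:

    [custom_index]
    definition = index=cisco_cdr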
If you have any comments at all about the documentation, please send them to [email protected]sideviewapps.com.