Here's a high-level overview of the pipeline that sends data from CallManager to our app, along with the most common things that go wrong at each stage.  Sometimes with additional commentary, because that's just the way we roll around here.

Do note that we can't cover every possible way to configure this!  Fortunately, give or take a minor variation or two, most installs look similar enough to what's described below that this should still be helpful.

Also note this is NOT "setup troubleshooting"!  Instead, this assumes that you had data coming in properly yesterday, but none today.  I'm not saying you can't use it to help set up the flow too, but that's not this document's primary purpose.

First, for all troubleshooting …

0) Run our Health Checks!

Run our health check pages (or get an admin to do so if you aren't one):
  • Open our app
  • Click Setup, then Run health checks.
  • Let those finish.

When they’re done, if you get warnings about no recent data… that’s what this will help you solve!

DO NOT START AT STEP 1.  It is better, usually, to start at step 3.

(Which of course is silly, especially since you’ve already done step 0, right?  But I have a hankering for those old “choose your own adventure” books.  They were such fun!)

1) The Billing Server inside CUCM

(You aren’t starting here, are you?  You really should start at step 3…)

The first pipe in the pipeline: a billing server is set up in CUCM.  Once per minute, it sends the data via SFTP for all calls that ended during that minute.  If no calls ended in the past minute, no SFTP transfer happens.

There’s not too much that goes wrong here.  (ASSUMING that it was working yesterday or whatever…)

But "not much" and "nothing" aren't exactly the same thing.  Every once in a long while we find that CallManager just "stops" sending that data.  I put that in quotes because there's never an obvious reason – it just stops, usually (but not always) around the time of a restart of the SFTP server.  This is NOT common; it's happened perhaps half a dozen times across all our customers, so it's unlikely to be your problem.  But if you started at step 3 and worked your way back to here with nothing apparently wrong, then try this:

  • Restart the “Cisco CDR Repository Manager” service inside CUCM

As far as we know, restarting this service does not interrupt anything important, but it sometimes kicks the transfers back into gear.

2) The network between CUCM and the SFTP server

CUCM needs to be able to talk to your SFTP server over the network.

Double-check both the firewall settings and the firewall logs*.  Check the change log and talk to the network admins.  If anyone made any changes to the network or the firewall since the last time you had data, be suspicious.  Coincidences like that almost never happen – it's almost a sure bet the change caused this outage.

*You are putting those logs into Splunk, aren't you?  If you aren't, it's a fantastic and common use case!
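If those logs are in Splunk, a quick sanity check might look something like the search below.  This is only a sketch – the index name, the source address, and the field names are placeholders, so substitute however your firewall data is actually onboarded, your CUCM publisher's real address, and the real port if your SFTP server doesn't listen on the default SSH port 22:

  index=your_firewall_index src_ip=10.1.2.3 dest_port=22 earliest=-24h
  | stats count by action, dest_ip

If that turns up blocked or dropped traffic from CUCM toward your SFTP server starting right around when the data stopped, you've very likely found your culprit.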

3) The SFTP server receives those files and writes them to disk.

So CUCM sends data no more than once per minute, via SFTP over the network, to this SFTP server.  This SFTP server receives that connection, accepts the transferred information and writes a file on the filesystem (as per the SFTP server’s configuration).
Look in the folder on the SFTP server where the files should be written.  You'll see recent files, only old files, or no files at all – and each of those three possibilities tells us something different:
  • If you see files, and they’re recent
    • Everything’s working up to this point but nothing’s working after this point, so start at the next step, Step 4.
  • If you see files, but they’re NOT recent,
    • … well, you have more than one problem, my friend.
    • FIRST double check that you are looking in the right place!  Don’t skip this confirmation – it’s happened before!  But if you are …
    • Send us an email at support@sideviewapps.com with a subject line “Help I’m being held prisoner in a film developing shop”.
  • If you see no files, you may be totally fine.  Or maybe not.
    • What did you say?  Fine, with no files?  Yep, exactly that.  Read on below.

If you set it up the way we have suggested, with a batch "sinkhole" input, then the Splunk software – either the UF or the "local server", depending on how you have things arranged – deletes the files immediately after reading them.  There's a fairly easy way to confirm this, though:

  • Stop your UF/server for a few minutes (long enough for some calls to complete, plus at least a minute or two after that to let CUCM send the file).

If all is working up to this step, you'll see one or more files show up in this folder while the forwarder is stopped.  If you then start the UF/server back up and those files disappear within a couple of minutes, we know step 4 is working too.  In that case, rerun the Health Checks and see whether everything is fine now (e.g. restarting the UF or server "fixed" it).
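As a cross-check from the Splunk side, you can also search whatever index your inputs.conf sends this data to and see how recently anything arrived.  This is just a sketch – your_cdr_index is a placeholder for your actual index name:

  index=your_cdr_index earliest=-24h
  | stats count max(_time) as most_recent
  | eval most_recent=strftime(most_recent, "%Y-%m-%d %H:%M:%S")

If that count climbs again after you restart the UF/server, you're back in business.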

If no files show up while the UF/server is stopped, then we have a problem.

  • Start by double-checking your SFTP server – is it running?
  • Can you SFTP a file to it using that username and password?
  • Does it show up in the right place?
  • What do its logs show?
  • ALSO check your local system/firewall logs to see if the OS itself is blocking the connection.

If nothing seems amiss in the SFTP server software, and you can send it a test file just fine, then work backwards through steps 2 and 1, because your problem is 'before' the SFTP server.

Otherwise, if this system is receiving files, be sure to turn the UF or server back on, then proceed to Step 4.

4) The Pickup

A Splunk Universal Forwarder walks into a bar…  Ha, no.
Your SFTP files are making it in, but they’re just sitting there and not getting sent to Splunk.
This is actually a moderately easy step to check – there are only a couple of moving pieces.  The biggest question is whether your files are sitting on the Splunk server itself, or on a Universal Forwarder that sends them along to your indexing tier.
  1. If you are on a UF, check
    • The UF is running.
    • There is a valid outputs.conf file that sends the data to your indexers (there's an example sketch below).
    • That the indexers – as specified in the outputs.conf file! – exist.
    • That there's a valid inputs.conf file set up per our documentation.
    • That the input file specification points to where your data lives.
    • That the input stanza has the correct index=... and sourcetype=... settings.
    • That the account the UF is running as has permissions to read *and delete* the files.
    • That there’s no broken networks or misconfigured/reconfigured firewalls between the UF and the Indexer.
  2. If your SFTP server saves files on the Splunk server itself, check
    • That there's a valid inputs.conf file set up per our documentation (again, see the example below).
    • That the input file specification points to where your data lives.
    • That the account Splunk is running under has permissions to read *and delete* the files.
There are actually more possibilities for breakage here, but the above are the most common.  If you are unsure about any of them, feel free to shoot us an email at support@sideviewapps.com!
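For comparison, here's roughly what a batch "sinkhole" input stanza looks like in inputs.conf.  This is a sketch, not a copy-paste answer – the path, index, and sourcetype are placeholders, and your real values should match whatever our setup documentation and your environment actually use:

  [batch:///path/to/your/cdr_dropbox]
  move_policy = sinkhole
  index = your_cdr_index
  sourcetype = your_cdr_sourcetype
  disabled = false

And on a UF, a minimal outputs.conf pointing at your indexers usually looks something like this – again, the group name, hostnames, and port are placeholders:

  [tcpout]
  defaultGroup = primary_indexers

  [tcpout:primary_indexers]
  server = indexer1.example.com:9997, indexer2.example.com:9997

As a quick connectivity check, a search on the indexers like index=_internal host=your_uf_hostname over the last hour should return events if the forwarder is connected and sending anything at all.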

5) The data is in the same index our app's macro thinks it is.

If everything up through step 4 is working, then we should be getting data *somewhere*.  Hopefully it’s the right place.

There's one remaining piece to check: all the searches in our app rely on one macro, custom_index, to know where to look for the data.

  • Confirm the macro custom_index points to the right index.

How can you tell?  Well, back in step 4 you confirmed the input stanza has the correct index=... setting; this macro should point to that same index.  You can view (and edit) the macro under Settings, then Advanced search, then Search macros.  And that's really all there is to that.
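If you'd rather check it with a couple of searches (and you have permission to search broadly), here's a sketch.  The sourcetype below is a placeholder – use whatever your inputs.conf actually sets.  Run both over, say, the last 24 hours:

  `custom_index` earliest=-24h | stats count

  index=* sourcetype=your_cdr_sourcetype earliest=-24h | stats count by index

If the first search returns nothing but the second finds your events sitting in some other index, you've found the mismatch – point the macro (or the input) at the right place and you should be back in business.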

If you have any comments at all about the documentation, please send them in to docs@sideviewapps.com.