Creating new device types

The setup

We provide reasonable coverage of device types, but we can’t know them all.  Did you know that you can edit the existing names or add new ones?

Let’s jump right into an example of creating a new device type.


First Step – Find your devices

Suppose you have a situation where there are some Alpine Horns that you have connected to CallManager.  (I’ll bet you can hear me now!)

Also suppose they are all showing up with a device name that starts with AN.

Unfortunately, they do not show  up with a dest_device_type of “alpenhorn.”  Whatever are we to do?

(Note, please see the disclaimers at the end of this blog entry!)


Second step – Create a new field transformation

Our first task is to create the field transformation.

  1. Click Settings -> Fields -> Field transformations
  2. Search for device to make it easier to find our existing device transforms
  3. Let’s pick cisco-cdr-destjabberphone as our sample to clone to our new one
    • So find that line, and click Clone on the right.
  4. Give it a good name
    • I’d recommend sticking with a naming convention not too far from our own to keep it consistent
    • cisco-cdr-custom-dest-alphorn
  5. Leave the type as regex-based
  6. Adjust the regular expression to match your new string
    • You can see that it’s currently ^(CSF\w+), which matches items like CSFblahblah123blah
    • Change it to ^(AN\w+) so that it will match items like ANblahblah123blah
  7. Adjust the format to be the new name you want
    • Again, if you followed along precisely, it should already say destJabberDevice::$1 dest_device_type::jabber
    • That sets TWO fields.  destJabberDevice is set to the entire original name ($1).  dest_device_type is set to the string jabber
    • Change only the second to dest_device_type::alpenhorn
    • Also note if you just HAVE to use spaces, surround it with quotes.  But please don’t do this, it’ll work better with underscores!
  8. Leave the source key alone.
  9. Compare the screenshot just below, and if it looks OK, click Save.
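For reference, the stanza these steps create in transforms.conf looks roughly like this (a sketch – Splunk writes it for you when you click Save, and whatever SOURCE_KEY and other settings the cloned transform had carry over unchanged):

    [cisco-cdr-custom-dest-alphorn]
    REGEX = ^(AN\w+)
    FORMAT = destJabberDevice::$1 dest_device_type::alpenhorn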

When finished, you’ll have something like this:

After that you should have a new field transform called cisco-cdr-custom-dest-alphorn.  Find it and, in order to let everyone partake of alphorn naming goodness, change its permissions so everyone who is using the CDR app can read it:
  • Click Permissions
  • Change the sharing so the object is shared in the app (rather than kept private)
  • Give everyone read permission
  • Click Save.

Special note

You will very, very likely want to repeat the above steps, starting with cisco-cdr-origjabberphone, to make the originating side transform, too!


Third step – Create a new field extraction

Now that we have the transform, we need to apply it to the right data by creating a field extraction that references it.

  • Click Settings -> Fields -> Field extractions
  • Click Add New
  • Name it cisco-cdr-custom-dest-alphorn
  • Set the Apply To to a sourcetype of cucm_cdr
    • This tells Splunk to apply this to anything of that sourcetype
  • Change the type to Uses transform
  • Paste the name of the transform we created earlier into the Extraction/Transform field (we’ve kept the same name throughout, so this is easy)
    • cisco-cdr-custom-dest-alphorn
  • Confirm it looks like the screenshot below, then click Save.
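For reference, the underlying configuration this creates is a small props.conf entry tying the sourcetype to the transform – roughly like this (a sketch; Splunk writes it for you when you click Save):

    [cucm_cdr]
    REPORT-cisco-cdr-custom-dest-alphorn = cisco-cdr-custom-dest-alphorn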

The result:

After that you should have a new field extraction called cucm_cdr : REPORT-cisco-cdr-custom-dest-alphorn.  Find it, because we need to fix its permissions again, just like last time:
  • Click Permissions
  • Change the sharing so the object is shared in the app (rather than kept private)
  • Give everyone read permission
  • Click Save.

Special note

You will very, very likely want to repeat the above steps and create a cisco-cdr-custom-orig-alphorn to make the originating side transform, too!


Test, and enjoy the alphorns!
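A quick way to confirm it all works is a search from the core Splunk search bar – a sketch, assuming your CDR index is among your default search indexes (add an index= clause if it isn’t):

    sourcetype=cucm_cdr dest_device_type=alpenhorn

If events come back with dest_device_type set to alpenhorn over a window when your “AN…” devices made calls, both the transform and the extraction are doing their jobs.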

Disclaimer ….

I am pretty sure that Alpine Horns are NOT registered under Cisco CallManager as devices that start with “AN”, so you should only take this as an example, not as truth.

Also note that I cannot find a canonical way to spell Alphorn/Alpenhorn/Alpine Horn so I mixed it up a little to be inclusive!

 


Hunt pilots, groups, and browsing extensions – Making your own call center!

Oh no, you’ve just been given control of the IT group.  You are trying to build your own miniature call center with hunt groups, but you have no way to accurately report on it.  Who’s taking calls?  Who’s not?  How much time are your agents spending on the phone?  How would you ever reward your top performers if you don’t know who they are?

Did you know that The Cisco CDR Reporting and Analytics app has functionality that will let you track exactly this sort of information?

The Simple Setup

The setup is pretty simple – it only involves setting up groups and subgroups.  Click Setup then Define Groups (optional).  You’ll see a few options – you can import and export data as CSV so you can use your favorite spreadsheet program to edit it more efficiently. Feel free to do that, but for testing purposes we really just need to add a few manually.  Click over to “Add New Extension/Group”, fill out the fields, and click Add.  Repeat until you have either all your needed extensions or at least enough that you can start playing with the reporting to see how it works.
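If you do go the CSV route, the file is simply one row per extension with its group assignment. The column headers below are hypothetical – do an export first and reuse the exact headers it gives you – but the shape is something like:

    extension,name,group,subgroup
    2126,Jack,Call Center,Tier 1
    2127,Jill,Call Center,Tier 1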

You may only need groups defined, but in many cases subgroups can help a lot as an extra way to slice and dice your fledgling call center.

Browsing Extensions

Click Browse then Extensions.

By default you’ll see that all groups and all subgroups are selected.  Of course this isn’t what we need, so click the group dropdown and select the group you just created.

And that’s it!  Welcome to the world of reporting on your agents!

Well, I lied there.  That’s all you have to do, but as you look around that screen you’ll notice you DO have more options.

I recommend just playing with them – most are pretty straightforward.  Subgroups, for example, work exactly like you’d expect – it only shows you subgroups that are defined under the group you have selected (assuming you have a group selected).  Name is just a searchable field, as is Number.

If you want to search/report on hunt pilots instead of, or in addition to, your recently defined groups, you can do that by using the hunt pilot field.  Sometimes this is useful to weed out calls unrelated to the hunt group, like personal phone calls.

The time dropdown behaves as you would expect too.

There is one field with two options that may not be immediately clear: the show dropdown lets you pick to show two different (and separate) things.

  • Selecting numbers with zero calls includes all the defined group and subgroup entries.  Suppose you have 14 people’s numbers in the group “Call Center” but only 11 of those people were working today.  Without this selected, you would only see the 11 people with records today.  With this selected, you’ll see all 14 (so it includes the people/numbers with “zero” calls).
  • Including calls with zero duration adds to the counts any calls that, for whatever reason, have a zero duration.  Obviously it shouldn’t affect the total durations, only the counts.

And one last field to point out – there’s a field selector near the top right of the data pane (it reads something like “fields 12 of 14 selected”).  Drop that down to enable or disable the available fields.

But Jack doesn’t believe Jill was on the phone more than he was?

All of the information contained in that report can be checked and confirmed on other screens in the  Cisco CDR Reporting and Analytics app.

The most powerful – and easiest! – way to get to the most important of these screens is to just click on any single row in the results.  This takes you directly to the Extension Detail page, which shows the details of those calls, including the other calls from or to that number.

You could also just point your browser to Browse > Calls, change to the timeframes and numbers in question, and see what’s really up.  “Nope, Jack, see?  Here’s who she talked to, when she talked to them and how long she spent.  Face it, she’s just better than you.”

Final Thoughts

There are a lot of useful capabilities tucked away in the various screens in our menus, and the impact they can have isn’t always obvious at a glance.

I hope this is enough to get you using groups, subgroups and the entire Browse Extensions page to their full advantage.



Performance notes

Performance is a very big topic.  Here we break down a few app-specific items on the Browse Calls page to help make sure your searches don’t take any longer than necessary. The tools we show you should also let you perform your own analysis on other screens in the Cisco CDR Reporting and Analytics app.

Prologue – how do I *see* performance information?

On the right, over by the app’s Save button, is a link to the Job Inspector.

When you click that, a new page opens with a whole world of interesting and arcane stuff, but for the most part we only care about one tiny little piece – the top line of returned information.

    This search has completed and has returned 19,273 results by scanning 45,510 events in 40.235 seconds

Most of that is pretty self-explanatory.  The search returned a little less than 20,000 results in about 40 seconds.  The middle piece – “by scanning 45,510 events” – tells you how many events it read off disk to do what it did.  This is the important part – every disk read costs time because it’s the slowest operation in most Splunk installations.  Generally, minimizing the number of disk reads will increase performance the most.

The second most important piece is how many results we have to handle.  So in this case, 45,510 events were read off disk, but only 19,273 ended up having to be displayed.  This has a smaller effect than the number read off disk, but is probably our second most important result.

Those numbers came from a search that was basically “wide open” over the last 7 days – note specifically that the get only dropdown was set to get only all records, so it retrieved *everything* in that time frame.  We’ll use this as our baseline search and see which of the fields below can improve that time.

For more information on the Job Inspector and how to read all the other information that it contains, see the Splunk documentation on Viewing search job properties.

1) Browse Calls, and general app performance

As with many of the pages in the Cisco CDR app, the fields can be divided into three broad categories: those whose use should improve performance, those that may or may not affect performance, and those that have no effect on performance.

I’ve outlined the above three categories with colors and letters, so let’s start with the ones that make the biggest difference in performance.

Category A (Green):

Using these three fields will generally give the largest speed improvements.

get only “the X most recent call legs”

This is a very “blunt object” optimization.  Many of the other fields are like a scalpel – sharp and precise.  This is more like using a sledgehammer to knock off the pieces that don’t fit.  It literally just “stops” retrieving records at X amount, then does its work on what did get returned.  The idea is that if you are in here browsing around calls, the recent results are all that you are going to manually look at, so only snagging the first 100 or 1000 legs is often plenty for your needs, but reduces the load on the system significantly over retrieving all the call records.

How significantly?

Remember my baseline – showing get only all records over the past 7 days:

    returned 19,273 results by scanning 45,510 events in 40.235 seconds

With it set to get only 1000 most recent call legs, the search:

    returned 425 results by scanning 1,702 events in 9.843 seconds

That’s more than a 4x improvement in speed!

Remember what I said about reading events off disk being the slowest operation, and the number of results returned being the second most important?  You can see that the number of events read went from 45,510 to 1,702, the returned results went from 19,273 to 425, and the overall time went from 40 seconds to under 10 seconds.

There are some minor things to note about using this field, though.  With the drop-down set to a size smaller than the expected number of results, there are two main drawbacks. First, some older records won’t be returned. This is probably fine if you are just poking around at recent calls, but could be an issue if you are looking for a sparse number’s calls – if you are searching for extension 567 and that extension only makes one call per week in your 10,000 call/week system, well, setting that too low will obviously trim out most of the results so your information will not be accurate in aggregate. Similarly, the totals line (425 calls returned…) may not be accurate because it can’t count the records it didn’t retrieve.

These limits are all removed if you flip over to the “graph calls over time” General Report tab.  Or just start your searching with get only … set to something small for speedy results, then when your results seem to match what you want, change it to show all call legs to get a better, more definitive overall answer.  If you even need that.  Again, the definitive answers are probably best from the General Report tab.

Enter Number(s) (formerly the number/ext field).

Unlike the sledgehammer of how many recent call legs to retrieve, this is a surgical instrument, slicing into the data in a very precise way.  This is probably the finest-grained performance improvement you can make to a search and whenever appropriate you should include it.  How much faster does this make the search?

Remember my baseline – showing get only all records over the past 7 days:

    returned 19,273 results by scanning 45,510 events in 40.235 seconds

With everything set exactly the same except for adding a number to filter on:

    returned 42 results by scanning 84 events in 5.408 seconds

More than a 7x speed improvement!  This varies – a number that’s in a *large* portion of your events, say 30%, may only give you a smaller speed improvement.

As with the previous field, there IS a small caveat to using this, but it’s one that’s unlikely to cause problems. This field triggers the use of a subsearch inside Splunk, and Splunk has limits on subsearches to keep them from gobbling up all the RAM.  Because of that, somewhere around or above 10,000 returned results might trigger it to truncate results.  Of course, it’ll be the oldest records that get truncated and we don’t expect anyone to page through the 500 pages (at 20 per page) of results to get to where the truncation happened, but it’s good to know that its behavior is goofy if you try to go out that far.

The timeframe

I’m not going to belabor this point – reducing the time frame reduces the amount of searching Splunk has to do and the overall number of results it has to retrieve, and thus reduces the search time correspondingly.  Hopefully that is obvious now that I say it.

Category B (Yellow), The “it depends” fields

There are times when these fields reduce run-time fairly significantly, and other times they do not.  There are a lot of reasons why this is, and they’re too lengthy to go into even with my propensity for voluminous meanderings in blogs.  An example may illustrate one common reason why such a field may or may not improve performance:

The Clusters lookup – for the many people who have a single cluster, this setting won’t make any difference.  But suppose you have 10 clusters and the searches you are doing should be contained within “Cluster07”.  In that case, if the CDR data comes approximately equally from all 10 clusters, then changing the clusters dropdown to only show data from “Cluster07” should get you up to about a 10x speed improvement, because the search only has to look at 10% of the data.  Sensible, isn’t it?

The other fields in this category are like that.  If it reduces the number of events having to be read off disk, it’ll improve performance.  If it won’t reduce them – generally because the search is already specific enough – then there will be no overall performance improvement by using it.

But it won’t slow them down, either!  So our suggestion is that when using them would make sense, do so because it can only improve search times.

And finally, Category C (Red) “search filters”

This field doesn’t improve search times.  It’s a filter that happens far later in the search, well after most optimizations can be done.  It is the ultimate in flexibility, though, and can search fields that are provided in the raw data or fields that were created by our app dynamically.

This doesn’t mean that there aren’t ways to optimize these searches, though!

Let’s suppose you are investigating a claim that calls being placed from your call center are failing.  Assuming your call center number is 7118, you may have ended up with an “other search terms”/”search filters” entry of callingPartyNumber=7118 NOT call_connected=1

    This search has completed and has returned 42 results by scanning 44,811 events in 36.645 seconds

Slow slow slow.

But think about this for a second. If the callingPartyNumber is 7118, then we know that at least ONE of the numbers involved was 7118, so we should be able to add it to the number/ext (Enter Number(s)) field too. If we do that, …

    This search has completed and has returned 42 results by scanning 84 events in 5.312 seconds

By adding the number “7118” as a search optimization, we cut our time by nearly 7x.  This makes for a much more pleasant experience for only a little extra cognitive load.

Final Words

The key takeaways are summed up thusly:

  • Whenever possible, use a number in the Enter Number(s) (formerly the number/ext field).  Or several.
  • Keep an eye on your timeframe
  • Know how the get only … dropdown works and use it wisely.
  • Trim results using clusters and other fields if they’re applicable to your search and environment.

Doing those things will make many, if not most, of your searches many times faster and make for a happier you.


Field Gallery – or, “What does this field even mean?”

Have you ever wondered what all those “other” fields mean in your CDR data?  What exactly is an “OutgoingProtocolID”, and why should you care?

Or maybe the call center people keep telling you that 1 in 4 calls end up garbled and low quality.  How do you find out what to even look for?

Your solution, in both cases, is the Cisco CDR Reporting and Analytics Field Gallery.

Field Gallery Basics

The field gallery is a searchable list of fields, with explanations, that’s on the home page of our app.  Many of them even include example searches!

To get to them, just click the Cisco CDR Reporting and Analytics Home button:

Then in the bottom half you’ll find the field gallery.

Obviously, we wouldn’t just throw you out in the cold without a blanket, so there’s a way to filter and make sense out of the fields.

Filtering the display

Most of these are fairly self-explanatory, so I’m only going to give a quick summary.

A – The default option is to display fields from all sources.  The other two are for displaying fields from the raw CDR and CMR and ones created by the app.

B – Slices and dices the field gallery based on fields that are or are not in your actual data.  Useful to see what you are missing, and to confirm you *do* have a certain field available to search on.

C – Searching – try typing a field name, or even something found in the description like “quality”.

D – The neatest option when you aren’t sure what you are looking for: a drop-down with categories like “Call quality and qos” or “Devices”, giving you a way to filter the list to the fields important for a particular line of inquiry.

Finding Sample Reports

Be sure to keep an eye on the right-hand side of the list of fields.  Many of the fields have sample reports available to illustrate how they can be used.  Obviously, not all the samples will make sense for your data, but the ones that do will give you a leg up on how to build reports showing that data.

Just give them a click and see both what they say and also how they’re built!

Final notes

We’ve committed to updating the descriptions to be more friendly and useful over the next few releases. If you have suggestions for some of them, we’d love to hear them.  Send them in to feedback@sideviewapps.com!


4.4 interface changes

New in 4.4 we have rearranged a few key pieces of the Browse Calls interface.

Overall, the flow through these fields wasn’t as good as it could have been.  In our extensive customer testing (i.e. “we pay attention on Webexes with customers”) we realized that nearly everyone used the fields in an order that didn’t fit how they were laid out on screen.  So we moved the fields around a bit to better reflect the workflow most people use.  We also changed some wording to make it clearer what each portion does, and in some cases how it relates to the other fields.

The new look

Compare to the old look:

1) We renamed “Show all activity for” to find calls to/from and moved it to the front of the fields.  This makes it clear what the field is used for.

2) When it’s empty, the faded enter number(s) just *begs* to have something entered.  This was usually the first field most folks wanted, but it was previously stuck in the middle of the flow.

3) Again improving the flow, the time picker got moved to be next so you won’t miss it.

4) If you needed the cluster, it’s right there.

5) The call types picker never seems to be where people want to start, even though it used to be the left-most field.  Now it’s more logically placed.

6) The old location for “Scan only the X most recent call legs” did not convey its importance.  Changing the wording a little to get only the X most recent call legs and moving it to be the left-most item in the second row helps clarify what you need it for.

7) Finally, search filters.  This incredibly powerful, full-featured search field was also not given the emphasis it deserved.  It’s the last field because it operates *after* the data has all been pulled off the disks, so everything else comes before it.

Like it?  Love it?  Or even hate it?

Let us know what you think of the new look and flow!


Health checks for the Cisco CDR Reporting and Analytics app

Did you know that the Cisco CDR Reporting and Analytics app has a series of health checks built right into it?

These check for several of the most common configuration issues we’ve seen, like

  • CDR data going into the wrong index
  • CDR data not timestamped or extracted properly
  • Lookups that are missing or broken

And quite a few more.

How do I run them?

Under the Setup menu the very last item is Run health checks.

What do I do then?

If all your health checks are green and start with  “OK – …” then your job is done.  Mission accomplished.  Feel free to take the rest of the day off!

Otherwise, read the information provided closely.  We tried to use very specific words and generally be clear about what’s going on.  In many cases we try to give a hint where to look to correct the problem, for instance:

  • ERROR: The Clusters lookup has 1 Cluster(s) that have no valid locale set. Visit “Setup” > “Define clusters” to set them.

Obviously your first place to start would be Setup > Define Clusters and look for a field called locale that’s empty or invalid.

Sometimes we just hint at things that may or may not be a problem, like in this example:

  • WARNING – There were 0 calls during the last full hour. This is not the first time, perhaps it’s even normal.

That’s telling you that the previous hour had no calls, but that looking backwards in time shows this may not really be unusual.  Perhaps your 6-7 AM Monday morning call volume is *typically* zero, so this isn’t unusual at all. Compare that with a different “failure” of that same health check:

  • ERROR – There were 0 calls during the last full hour. Is your forwarder still forwarding?

That’s actually the same final condition – in this case 6-7 AM Monday morning had zero calls – but with a different history, one where 6-7 AM had more than zero calls in each of the last few weeks.  That changes this from a WARNING that something may or may not be wrong to an ERROR because it’s likely something *is* wrong.

Wrapping up

The health check page can often help pinpoint things going wrong – forwarders that went missing, data that stopped coming in and so on.  Knowing that we have checks built into the product for many of the most common issues can save a lot of time in diagnosing why your reports suddenly turned blank.

Of course if you have any questions on any failed (or passed) health checks, if you found them especially useful in solving a problem, or even if you *didn’t* find them useful, please drop us a line and let us know!

 


Using Leg Types to make your life easier

First, a quick chat about call legs vs. calls.  We all know what a call is – that period of time from when you pick up a phone until you put it back down again.  In modern systems each call is composed of one or more call legs, with each leg being a single source/destination combination.  So if you try to call Sally, you could see the following legs:

  • Leg 1: You dialed Sally’s extension.  It rings.  No answer.
  • Leg 2: Your call gets forwarded to voicemail.  You leave a message.  You hang up.

That’s a two-leg call.  Simple, right?

Introducing Leg Types

Leg types are a way to add a reusable, human readable “tag” to the individual legs to make both searching and seeing call flow easier and better.

For instance, you could define a leg type for “Abandoned at voicemail” to catch those legs where the caller went to voicemail and just hung up on it.  Another could be “Left voicemail”, which is just like the previous one except that the caller actually left a voicemail.  Maybe even “Jumped out of voicemail” to tag legs that went to voicemail but then transferred themselves out again.  This one is a little more tricky – these legs should be ones that went to voicemail, but for which there is no “on hook” party – i.e. no one actually hung up at that leg.

There are two things to be aware of –

  1. Make sure that each call leg is matched to only one leg type.
  2. These are leg types, not call types.  They operate on individual legs, not on what you or I think of as “calls”.

For the former: there should be no overlap on an individual leg – that gets weird, and may very well behave incorrectly or break something.  We do try to test for such overlaps in the Health Check page (under “Setup”), but it’s easier to just make sure they don’t overlap in the first place.  This means you shouldn’t have a generic “went to voicemail” leg type and also a “went to voicemail and left message” leg type – because a leg that went to voicemail and left a message would match *both*.  In that case, just redefine the more general one to be “not what that more specific one is”, so “went to voicemail” becomes “went to voicemail and didn’t leave a message”.  (This is in the example below, so if that sounds confusing perhaps continuing on will clear it up!)

For the latter: we hope to implement a call types feature on top of leg types and other functionality in the future, but baby steps… so stay tuned.

Now that we understand what they’re about, let’s jump into a short example!

Voicemail calls example

Let’s build a leg type for calls that ended in voice mail, and another for calls that ended up getting hung up on when they went to voice mail.

Step 1: Find your voicemail calls

Step 1.1: Define what a voicemail call is.

The first task is to define what it means “to end in voice mail”.  This depends on how your calls get routed and thus varies from place to place, but we’ve found several common threads in most people’s environments, which I’ve outlined below:

device_type=”unityvm”

If you used the default voicemail naming scheme, you should be able to find voicemail legs by looking for a device_type of “unityvm”.  This is the most commonly needed search for finding voicemails.

on_hook_party=”caller”

A leg which has the on_hook_party set as the caller means this is the leg where the caller hung up and thus terminated the call.  This helps determine if they *ended* at voicemail or if they then hopped elsewhere via some menu option.

deviceName=X

Underneath our created “device_type” is an assumption that the default voicemail naming convention was followed, specifically that voicemail devices are named like “CiscoUM-VI*.”  If “device_type” never says “unityvm” even though it should, this is probably the issue.  The resolution is to just use the deviceName(s) that you configured, perhaps with wildcards, like deviceName = MyVoiceMailSystem*

Step 1.2: Construct and validate the search

In our example, let’s assume that when the device type is the default “unityvm”  and the calling party hangs up, the call is one we want tagged as voicemail_left_message or voicemail_abandoned.

To test this, let’s open up Browse Calls, then in the field “other search terms” type in device_type=”unityvm” on_hook_party=”caller”

Review that list, make sure it looks right!

Now we have to decide on a duration to use – I’m going to pretend 5 seconds is our cutoff.  If the person was in voicemail for more than 5 seconds, they left a message.  If 5 seconds or under, they hung up right away.  Your own threshold may be different, but it seems between 5 and 10 seconds is the most common range used.

This gives us two non-overlapping cases:
  • When they “hung up” on the voicemail – device_type=”unityvm” on_hook_party=”caller” duration<=5
  • When they left a voicemail – device_type=”unityvm” on_hook_party=”caller” duration>5

Again, test both of those using search, make sure they look correct.  You might have to leave yourself a few voicemails – and abandon a few – to see how the duration should be set.  I recommend testing with some coworker who happens to not be at their desk.  Most folks feel loved when they come back from a break and find someone left them a few voicemails!

Step 2: Building leg types

Now that we have our searches, let’s build some leg types!

  • Click Settings, then Event Types.
    • We are going to assume you have no leg types already – if you do, search for them by typing “leg_type” in the search box and pressing enter, then review what you have to make sure we aren’t duplicating leg types.
  • Click the green “New Event Type” in the upper right.  Fill it in as such:
    • Name: leg_type_voicemail_abandoned
    • Search: device_type=”unityvm” on_hook_party=”caller” duration<=5
    • (Note the initial “leg_type_…” in the name is important, it’s how our app knows these are for it!)
  • Click Save

Now, build the “left_voicemail” version.

  • Click Settings, then Event Types.
  • Click the green “New Event Type” in the upper right.  Fill it in as such:
    • Name: leg_type_voicemail_left_message
    • Search: device_type=”unityvm” on_hook_party=”caller” duration>5
    • (Note the initial “leg_type_…” in the name is important, it’s how our app knows these are for it!)
  • Click Save
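For reference, the two saved event types correspond to eventtypes.conf stanzas roughly like the ones below (a sketch – Splunk writes these for you when you click Save; it’s just useful to know what to look for if you ever manage them as configuration files):

    [leg_type_voicemail_abandoned]
    search = device_type="unityvm" on_hook_party="caller" duration<=5

    [leg_type_voicemail_left_message]
    search = device_type="unityvm" on_hook_party="caller" duration>5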

Step 3: Testing.

Click back in your browser a few times to get back to your search, or reselect the Cisco CDR Reporting and Analytics app then go to Browse Calls.

If you do not see the field “leg_type”, click “Fields” in the upper right, search for “leg_type” on the left and add it into the right pane.  Then obviously click Save in the field selector.

As long as any calls matching the search we built above are in the results, you should see leg_type populated.  Fiddle with your timeframe a bit if you need to.

Step 4: Using Leg Types.

Now that we have leg types defined for a few cases, we can search for those using the ‘other search terms’ field.

Rather than bore you with prose, how about I just make a little table with some examples and see how that looks?

 

  • To see all calls that have any leg_type defined, search: leg_type="*"
  • To see all calls that have a leg_type of “voicemail_left_message”, search: leg_type="voicemail_left_message"
  • To see all calls that have a leg_type starting with “voicemail…”, search: leg_type="voicemail_*"

The neat part is that though these are defined on individual call legs, those legs roll up into calls.  Searching for a leg type means the app returns the *calls* that include legs of that type.
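You can also combine a leg type with any other filter in that same box. For example – using the purely hypothetical extension 2126 from our other posts – to see only the voicemail-related calls involving that extension:

    leg_type="voicemail_*" finalCalledPartyNumber=2126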

 Additional possibilities:

This is by no means complete, but some random thoughts on leg types:

  • tagging legs that were placed to your call center as perhaps “call_center_received”
    • split that into “call_center_received”, “call_center_abandoned” and “call_center_no_answer”
  • tagging a certain DID block with “incoming_sales_took_call”
    • NOTE that “OR” is OK, but put parentheses around it, like “(finalCalledPartyNumber=7344 OR finalCalledPartyNumber=7345) duration>=15 on_hook_party=*”, which would match legs that were to either 7344 or 7345, lasted at least 15 seconds, and where someone hung up (so the call wasn’t transferred away).

There are many possibilities for building and using these and we’d love to see the system you come up with!

 


Choropleth Maps!

If you read our last installment on Maps, you’ll know we can put calls on a map.

There are even more cool maps to display calls on!  In addition to Cluster maps, Splunk also bundles Choropleth maps for both countries and US states.

A refresher

Before starting, you may want to go review our post on building Cluster Maps.  Come on back when you are done there and let’s get our hands dirty.

We assume you can find your data.

So we won’t tell you how to do it beyond Browse > Browse Calls.

Adding Required Fields

  • Way over on the right click the green Edit Fields button.
  • For users with a lot of international calls, search for and add the fields callingPartyCountry and finalCalledPartyCountry
  • Or if your calls are mostly just US, try adding callingPartyState and finalCalledPartyState
  • In either case, when you have your fields selected click Save

Change to showing raw data

Let’s now show this in the core Splunk UI to do the custom visualizations we need.

  • Click the link to >> see full search syntax in the upper right.
  • A New Search window will open with a big long search already populated.

Add the magic commands

This is where things diverge from the previous article. For one thing, we’re going to use Countries here; if you are in the US and want to use States, it’s the same process with a slightly different command.  We will do US States as a second example below (but read through this one – the second example is an abbreviated version of it, so you need to be familiar with it anyway).

Last time we built a cluster map by adding one command, “geostats”.  To build a Choropleth map we need to add two commands, one (stats) to “sum” up the counts by country, another (geom) to tell Splunk how to display that “place”.

  • To the end of that search, paste in one of the two below commands, depending if you want the *calling* parties or the *called* parties to display.  (Calling is inbound, finalCalled is outbound).
    | stats count BY finalCalledPartyCountry | geom geo_countries featureIdField="finalCalledPartyCountry"
        -- OR --
    | stats count BY callingPartyCountry | geom geo_countries featureIdField="callingPartyCountry"
    
  • Click the Search button (or just press enter while your cursor is in the search text field).
  • Change to the Statistics tab and let’s take a quick look there to confirm.

Notice that I added the search from above and that I’m currently looking at the Statistics tab.  The stats part is responsible for coming up with the “count” of 53 for Australia.  The “geom” command is what came up with that big pile of numbers on the right, which, if you squint really hard, is a polygon shaped just like Australia.  I promise.  You might have to squint *really* hard to see that, or maybe let’s just have Splunk show us!

Make it pretty

  • Change to the Visualization tab.

Splunk *should* pre-select the map type, because we’ve sent the data through the geom command. If so, there’s nothing else you need to do except wait a few moments for the data to populate.

If on the other hand you do not have a Choropleth Map showing,

  • Click the Visualization tab, then the Visualization type.
  • Change it to Choropleth. This should be under the Recommended section.  If not, look farther down.

Give that a little while to load…

For U.S. States

As promised, here is how to do U.S. States.  This relies on the process above, so if you have any questions on how to do a particular thing, refer to the Countries sections above.

  • Go to Browse Calls
  • Optionally filter/find certain calls.
  • Click >> see full search syntax
  • After getting your New Search window, paste into the end of it
    | stats count BY finalCalledPartyState | geom geo_us_states featureIdField="finalCalledPartyState"
        -- OR --
    | stats count BY callingPartyState | geom geo_us_states featureIdField="callingPartyState"
    
  • Click the Search button (or just press enter while your cursor is in the search text field).
  • Change to the Visualization tab
  • Change to the Choropleth map (if it doesn’t automatically load it).

Wrapping up

We hope to have given you the tools to create some nice visualizations using your CDR data.  Now maybe those dashboards of incoming calls won’t look so plain!


Maps!

The question

Have you ever wondered where your inbound calls come from?  Do you suspect agents are placing a lot of calls on the company dime to Loja, Ecuador to find out if the high temp there is supposed to be 74F again today?

Well, you are in luck!  Today we’ll show you how to display the call counts in a Cluster Map!

Finding some data

First, let’s find the data you want to display.  This could be a lot of things, but for now let’s use your own main extension, let’s say it’s “2126”.

  • Browse > Browse Calls.
  • In the number/ext field, type in 2126.
  • Change the “scan only the last 1000 records” to “all records”.
  • Click the search icon.

There’s no reason you have to use your main extension – you could leave all these options blank and see all the calls that end up with location information in them. The sky is the limit here.

Adding latitude/longitude fields

  • Once you have calls showing up, way over on the right click the green “Edit Fields” button.
  • Search for keyword “lat” and in the resulting list, click on the green arrow to add the fields “callingPartyLat” and “finalCalledPartyLat” to the right side.
  • Do the same for “long”, adding “callingPartyLong” and “finalCalledPartyLong”.
  • Once you have all four fields added, click the Save button.

Change to showing raw data

Now that you have some useful, specific data, we need to display this data in the core Splunk UI to do some custom visualizations.

  • Click the link to “>> see full search syntax” in the upper right.
  • A “New Search” window will open with a big long search already populated.

Don’t fret if it just looks like a bunch of  gobbledygook – we already did the hard work for you so you just have to add a few small commands to the very end of it.

Add the magic commands

  • To the end of that search, paste in
    | geostats latfield=callingPartyLat longfield=callingPartyLong count
  • The result should look like this:
  • Then click the search button (or just press enter while your cursor is in the search text field).

This runs the geostats command, telling it to plot the ‘count’ for each latitude and longitude.  We have to tell the command which fields in our data contain the latitude and longitude, hence the “latfield=<my latitude field name> longfield=<my longitude field name>” in the middle.

Make it pretty

  • Change to the “Visualization” tab.

If Splunk is already displaying a Cluster Map, there’s nothing else you need to do except wait a few moments for the data to populate.

If on the other hand you do not have a Cluster Map showing,

  • Click the Visualization tab, then the Visualization type.
  • Change it to Cluster Map. This should be under the “Recommended” section.  If not, look farther down.

Note there are two “Maps” style visualizations.  The other one (with shaded countries instead of dots) is called a Choropleth Map.  We don’t have the right data in this example for the Choropleth map, so be sure not to pick that one.  We will do a Choropleth map in a future blog, so stay tuned!

And that’s it, you should now have a map populated with the call counts.

Some minor variations

Display outbound call destinations instead of inbound call sources

To change from plotting the incoming calls’ location to the location of the outgoing, use fields ‘finalCalledPartyLat’ and ‘finalCalledPartyLong’.

| geostats latfield=finalCalledPartyLat longfield=finalCalledPartyLong count

Counting by the final disposition of the call

If you want your little dots to be something other than one single color, an option may be to count BY something.  One of the more popular ‘by’ clauses is by the field “cause_description”.  The field “cause_description” contains values like “Normal call clearing” (which is a call that ended normally), “Call split” (which is when a call gets transferred), “No answer from user (user notified)” which should be self explanatory, or maybe even the dreaded “No circuit/channel available” which means that you have filled your pipes and couldn’t get a free line to place a call with.

Anyway, enough description – adding the BY clause is easy.  To the end of either one of the above, simply add ‘ BY cause_description’.  So if you were doing the final called party version, it would now be

| geostats latfield=finalCalledPartyLat longfield=finalCalledPartyLong count BY cause_description

Now when you click search, your little blue dots should now be divided up into little slices for different cause descriptions.  Hold your mouse over them to see more detail.


Enabling CUBE or vCUBE data

Cisco Contact Center gives you great visibility into Contact Center, and products like ours give you great visibility into CallManager…

…but have you noticed there’s a CUBE-sized blind spot in your picture of overall call flow?

Lucky for you, we can make sense of this data now. All the H.323 and SIP traffic, media streams (both RTP and RTCP), all the handoffs to DTMF and all the other things that CUBE and vCUBE can do – we shine a flashlight into that darkness and let you start using that data as part of the overall picture you can get from our Cisco CDR Reporting and Analytics app.

Prerequisite information and notes

We are going to assume that:

  • you have set up our product already following our install docs and you have an SFTP server running on a Splunk Universal Forwarder (UF).
  • that this UF is on a Linux box of some sort and that you have some basic comfort with a *nix command line,
  • that your existing UF configuration is indexing the CallManager CDR and CMR using a sinkhole input,
  • that you can install software on that system,
  • that you will use your existing SFTP user account on that system for the new CUBE CDR data
  • and that you have admin access to your CUBE system or can find someone who does to run a half dozen commands for you.

The steps that we will perform to enable ingesting CUBE CDR data are:

  • Install an FTP server on the existing *nix UF
  • Configure vCUBE/CUBE to save CDR data to that FTP server
  • Reconfigure the existing UF to pick up that new data.

Step 1, Setting up an FTP server

CUBE/vCUBE (from now on I’m just going to write CUBE since it covers both products) only supports FTP as far as we can tell. This means that the standard and recommended method we use for collecting CDR data from CallManager – SFTP – can’t be used with CUBE.

There are many FTP packages that you could use and practically any of them should work fine.  If you don’t have one installed already, then follow along below to get some guidance on getting FTP up and running.

Find which distribution you are using:

If you already know the Linux distribution installed on your UF (Red Hat Enterprise Linux, Ubuntu, Slackware, etc…) you can skip this step.

  1. Log into an SSH session on the existing UF.
  2. Run the one line command cat /etc/*-release
  3. In the output, you’ll see either a release name like “Red Hat Enterprise Linux”, or somewhere in the output may be a “Description” field that says “Ubuntu 16.04 LTS”.  Yours may say something completely different, like Debian or Slackware.  Just note what it says.

Install the FTP server software

This step is distribution specific, so if you don’t know which distribution you are using please see the section immediately above this one, then come back here.

For Ubuntu, follow setting up an FTP server on Ubuntu.  You’ll only want the steps vsftpd – FTP Server Installation and User Authenticated FTP Configuration – DO NOT set up Anonymous FTP!  Also be very careful to not accidentally set the anon_upload_enable=YES flag, which for some reason is stuck in the middle of the Authenticated FTP configuration section.
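As a very rough sketch, the Ubuntu route boils down to something like the following (vsftpd is the package that guide uses; follow the linked instructions for the full configuration details):

    sudo apt-get update
    sudo apt-get install vsftpd
    # in /etc/vsftpd.conf: keep anonymous_enable=NO, and set
    # local_enable=YES and write_enable=YES so your existing user can upload
    sudo systemctl restart vsftpd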

For Red Hat and its various versions you can follow these instructions on setting up an FTP server on Red Hat.

Other Linuxes (Linuxen?  Linuxii?) – just search the internet for “<my distribution> FTP server” and try to find the most “official” looking instructions you can to enable non-anonymous FTP.  If you check out the two sets of directions linked above you can get a feel for what that might look like.

Also, if you have a preference for an FTP server you are comfortable with, by all means use it instead of our instructions.  It won’t hurt our feelings.

Confirm the FTP server works

You can use any FTP client you have available to confirm this – preferably one on a different system, so you can also confirm there’s no firewall on the local system blocking you.

We recommend creating a temporary file with any content you want and confirming

  1. You can upload that file using the username and password for the existing SFTP user
  2. That the file ends up where you expect it to be
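For example, from another machine a quick command-line session might look like this (10.0.0.100 stands in for your UF’s address, matching the example used in the CUBE configuration below):

    ftp 10.0.0.100
    (log in with the existing SFTP user's name and password when prompted)
    put test.txt
    bye

Then check that test.txt landed in the directory you expect – the same place CUBE will be dropping its files shortly.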

If you have any problems at this point, review the installation steps you followed, and also confirm there’s no firewall either between you and the FTP server or on the FTP server itself.  If there is, adjust the firewall settings to allow FTP traffic.

Step 2, Configuring CUBE to save CDR data to our FTP server

Log into the server used for file accounting (e.g. your CUBE server) with an account that has administrative permissions.  Then run the commands listed below to set gw-accounting to file, change the cdr-format to “detailed”, configure the FTP server information, and tell the system to flush new data to file once per minute.  Finally, we make sure this configuration gets saved.

  1. enable
  2. configure terminal
  3. gw-accounting file
  4. cdr-format detailed
  5. primary ftp 10.0.0.100/cube_ username cdr_user password cdr_user_passwd
  6. maximum cdrflush-timer 1
  7. end
  8. copy running-config startup-config

Step 5 is the one to pay attention to!  Be sure to change the server IP, username and password to match your environment. Also notice that the cube_ in 10.0.0.100/cube_ is a file prefix.  The FTP software will put the file into the right place in the directory structure; the cube_ piece tells CUBE to prepend “cube_” to the front of the filename it creates.  This is how we’ll later tell the UF to pick up that data specifically.

To confirm, from that same SSH session to your CUBE server run the command file-acct flush with-close.  You should see a new file appear in your FTP folder almost immediately.  This file might be nearly empty, with only a timestamp in it, if there were no phone calls in the short period involved, but in any case it should be there.

Step 3, Tell the UF to index this data

The UF needs only a few tiny pieces of configuration.  There should already be a working configuration for indexing the Cisco CDR data via the TA_cisco_cdr app and its inputs.conf file.  We will now edit that so our new data files get sent in as well.

  1. Edit your $SPLUNK_HOME/etc/apps/TA_cisco_cdr/local/inputs.conf file
  2. You’ll see two stanzas already for your existing CallManager CDR and CMR data.  (If you do not see those two stanzas, you are in the wrong place.  Check other inputs.conf files on that system.)
  3. Go to the end of the file and add a third entry that looks like:
    [batch:///path/to/files/cube_*] 
    index = cisco_cdr 
    sourcetype = cube_cdr 
    move_policy = sinkhole 
    
  4. Save the file
  5. Restart the UF.
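If you’re doing that from the command line, restarting the UF is typically just:

    $SPLUNK_HOME/bin/splunk restart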

Finalizing.

Now that you have this data coming in, for all CallManager calls where we recognize the matching CUBE record(s), the fields from those CUBE events will be available in the field picker popup in “Browse Calls”. To talk about desired functionality in other parts of the app (notably General Report), and about your needs in general, give us a call. We can help in the short term even if it’s a bit manual for now, and we’ll be very interested to hear all the messy details to help guide our next few releases as we flesh this out.  It’ll be fun!