In the Splunk search language there is almost always a better way, and someone on answers.splunk.com willing to teach you about it. Less commonly advertised, though, is the fact that there is ALWAYS a worse way…
So let’s drive the wrong way down a one-way street. Bear with me.
First, a warning. Driving the wrong way down a one-way street is not something you should do, and likewise there are some searches here that you should NOT RUN EVER.
Challenge #1: Let’s make 1 empty row with foo=1!
No problem. You’ve probably seen someone do this:
| stats count | fields - count | eval foo="1"
Ooh neat. What if we need 7 empty rows with foo=1?
| stats count | fields - count | eval foo=mvrange(0,7) | mvexpand foo | eval foo="1"
Can we optimize this slightly to make sure it only runs on our search head?
| noop | stats count | fields - count | eval foo=mvrange(0,7) | mvexpand foo | eval foo="1"
or if you prefer
| localop | stats count | fields - count | eval foo=mvrange(0,7) | mvexpand foo | eval foo="1"
These are getting pretty clunky though. And on 6.4 there’s a much better way!
| makeresults count=7
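One small wrinkle: makeresults gives you rows with a _time field, not foo. To match the original challenge exactly, a sketch along these lines should do it:
| makeresults count=7 | eval foo="1" | fields - _time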
So…. we made it better. What if we went the other way and made it…. WORSE.
Well, let’s do some unnecessary things, AND let’s make it break sometimes, randomly! And let’s force it to talk to every one of the indexers and ask them each to give us one event!!
index=* OR index=_* | head 1 | fields - * | eval foo="1" | table *
Ooh, that’s horrible. If there’s nothing in the main index, or if a user can only see a subset of indexes that happen to be empty during a given timerange, it’ll produce no row at all. But it still hits every indexer.
Let’s keep going though. There’s a lot more Horrible down here.
Let’s get all the lookups we can reach from our current app context and smash them together into one giant result set. Then throw it all away and keep only one row.
| rest /services/data/lookup-table-files | fields title | map search="| inputlookup $title$" | head 1 | fields - * | eval foo="1"
Oh, that’s marvelous. We’re probably generating a ton of errors somewhere, since not all lookups can be loaded from the app context we happen to be in. And depending on who we’re running this as, we might get no rows at all. So much fail.
To Be Continued…..
Challenge #2: Get the list of fields.
Well you might find the transpose command first and thus find yourself doing this:
* | table * | transpose | rename column as field | fields field
Which is pretty evil. We’re getting every event off every disk only to throw all the values away. The better way is:
* | fieldsummary | fields field
neat. Is there a worse way though?
You betcha. This isn’t quite as evil as our starting “* | table * | transpose” search, but it’s pretty evil.
* | stats dc(*) as * | transpose | rename column as field | fields field
Wildcards in stats, chart, and timechart are fantastic, as long as they’re used sparingly, which here they are NOT. We’re forcing the pipeline to keep track of every single distinct value of every field. If you have 100 or 200 fields, this can get pretty ugly.
More Horrible! OK, let’s make it keep track of the actual distinct values themselves and give them back to the search head, where…. we just throw them away. Mwa ha ha!
* | stats values(*) as * | transpose | rename column as field | fields field
Even More Horrible. Let’s make it keep track of ALL values of all fields as huge multivalue fields and send them all back to us, so we can throw them away.
* | stats list(*) as * | transpose | rename column as field | fields field
Except wait! That was a trick! list() actually truncates at 100 values, whereas values() just keeps on going…. so that search is slightly less evil than the earlier values(*) one.
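If you want to see the truncation for yourself, here’s a quick sketch (assuming you’re on 6.4+ so makeresults is available): list() should cap out at 100 values, while values() keeps all 200 distinct ones.
| makeresults count=200 | streamstats count as n | stats list(n) as l values(n) as v | eval list_count=mvcount(l), values_count=mvcount(v)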
OK sorry. Let’s make up for lost ground. Can we make a horrible search that technically uses fieldsummary itself?
* | fieldsummary | fields field | map search="search index=* $field$=* | head 1 " | fieldsummary | fields field
And we didn’t even use join or append once!!
If this makes you think of any good “Evil SPL”, please email it to email@example.com.