
Find top 10 requests returning 404 errors

I had a website where I was curious which URLs were returning the most 404 errors, along with how many hits each of those URLs got. This was right after a huge site redesign, so I wanted to know which old links were still being requested.

Getting a report on this can be accomplished with nothing more than the Linux command line and the log file you’re interested in. It involves chaining together the grep, sed, awk, sort, uniq, and head commands. I enjoyed how well these tools work together, so I thought I’d share. Thanks to this site for the inspiration.

This is the command I used to get the information I wanted:

grep '404' _log_file_ | sed 's/, /,/g' | awk '{print $7}' | sort | uniq -c | sort -n -r | head -10

Here is a rundown of each command and why it was used:

  • grep '404' _log_file_ (replace _log_file_ with the filename of your Apache, Tomcat, or Varnish access log). grep reads a file and returns every line containing the pattern you give it – in this case 404, the HTTP Not Found status – anywhere in the line.
  • sed 's/, /,/g' sed edits a stream of text in any way you specify. The command s/, /,/g tells sed to find every comma followed by a space and replace it with just a comma, eliminating the space. This was necessary in my case because the source IP address field sometimes contains multiple IP addresses, which shifts the columns and messes up the results (see the example after this list). This step may be optional if your server isn’t sitting behind any type of reverse proxy.
  • awk '{print $7}' awk does many of the same kinds of things as sed – it lets you manipulate text in all sorts of ways. Here we’re telling awk to print only the 7th column, which in Apache and Varnish logs is the requested URL.
  • sort Without arguments, this sorts the results alphabetically, which the next command needs in order to work properly.
  • uniq -c This collapses duplicate lines – but only adjacent ones, which is why the input has to be sorted first. The -c argument prepends a count of how many times each unique line was found.
  • sort -n -r Sorts the results in descending numerical order. The -n argument sorts numerically, so that 2 follows 1 instead of 10; -r reverses the order so the highest count is at the top instead of the default, which puts the lowest number first.
  • head -10 Outputs only the first 10 results. This command is optional if you want to see all the results instead of the top 10. Its counterpart tail shows the last results instead.
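
Two refinements are worth mentioning. First, the sed step matters because awk splits columns on whitespace: a client field like 10.0.0.1, 192.168.1.5 counts as two columns and shifts the URL out of $7, while 10.0.0.1,192.168.1.5 counts as one. Second, grep '404' matches 404 anywhere in the line, including inside a URL or a response size, so it can overcount. If your log follows the common Apache combined format, where the status code is column 9, you can have awk match on the status column itself. A sketch of a stricter variant (adjust the column numbers to your log format):

# Collapse "ip1, ip2" into one column first, then match on the status column exactly.
sed 's/, /,/g' _log_file_ | awk '$9 == 404 {print $7}' | sort | uniq -c | sort -n -r | head -10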

This was my output from the original command – exactly what I was looking for. Perfect.

2186 http://<sitename>/source/quicken/index.ini
2171 http://<sitename>/img/_sig.png
1947 http://<sitename>/img/email/email1.aspx
1133 http://<sitename>/source/quicken/index.ini
830 http://<sitename>/img/_sig1.png
709 https://<sitename>/img/email/email1.aspx
370 http://<sitename>/apple-touch-icon.png
204 http://<sitename>/apple-touch-icon-precomposed.png
193 http://<sitename>/About-/Plan.aspx
191 http://<sitename>/Contact-Us.aspx

Extract multiple Active Directory fields in Splunk

I had posted here about how to extract account names with a specific modifier (excluding account names ending in a dollar sign). That worked for one specific instance, but I found I needed something better. Active Directory logs contain several copies of the same field (Account_Name, Group_Name, etc.) whose meaning depends entirely on context, namely the section header a couple of lines above it.

For example,

Message=A member was added to a security-enabled universal group.

Subject:
 Security ID: <Random long SID>
 Account Name: Administrator
 Account Domain: ExampleDomain
 Logon ID: <random hex value>

Member:
 Security ID: <Another random long SID>
 Account Name: CN=George Clooney,OU=ExampleDomain,OU=Hollywood,OU=California,DC=USA,DC=NA,DC=Terra

Group:
 Security ID: <Yet another long SID>
 Account Name: Old Actors
 Account Domain: ExampleDomain

You can see that there are three different Security ID fields, three different Account Name fields, and two different Account Domain fields. The key is the context: Subject account name, member account name, or group account name.

I wrestled for some time to find a regex for Splunk that would keep matching past the end of a line. After much searching I came across this post, which explained the regex modifier I needed to do what I wanted.

In my case I needed to use the (?s) modifier to include newline characters in my extraction. My new and improved AD regex extraction is as follows:

(?s)(Group:.+Account Name:\s+)(?P<real_group_name>[^\n]+)
  • (?s)  Regex modifier that makes the match include newlines
  • Group:  The section I am interested in. You can replace this with Member: if you’re interested in member account names instead (see the note on greediness below)
  • .+ Match one or more of any character, including newlines thanks to the modifier above. Note that this is greedy, so it runs to the last Account Name: in the event. That’s fine when the section you want is the last one, as Group: is here, but for an earlier section such as Member: you’d want the lazy version, .+?, so the match stops at the first Account Name after the section header
  • Account Name:\s+ Together with the previous two items, this matches everything from the section name through the whitespace after Account Name:
  • [^\n]+ Match one or more characters that are not a newline (you can’t use \S+ here because an account name might contain spaces)
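
If you want to sanity-check a pattern like this outside of Splunk, GNU grep’s PCRE mode can approximate it. A minimal sketch, assuming your grep was built with PCRE support and the sample event above is saved as event.txt; \K is a PCRE feature that discards everything matched so far, so only the account name prints:

# -P enables PCRE, -z reads the whole file as one record so the match can
# cross line breaks, and -o prints only the matching text.
grep -Pzo '(?s)Group:.+Account Name:\s+\K[^\n]+' event.txt
# prints: Old Actors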

Finally! This is the regex I’ve been looking for.


Extract Active Directory Account Names in Splunk

I don’t really understand Microsoft’s rationale when it comes to log verbosity. I suppose too much information is better than not enough, but it comes at a cost: the logs are genuinely difficult to read.

I’ve been trying to extract usernames from Active Directory controller logs and it turned out to be quite a pain. Why do the logs have more than one field with the same name? It confuses Splunk and seems to fly in the face of common sense and decency. I will stop ranting now.

In my specific case, AD lockout logs have two Account Name fields, one for the controller and one for the user being locked out. I am interested only in the username and not the AD controller account name.  How do you tell Splunk to only include the second instance of Account Name?

The answer is to create a field extraction using negative lookahead (thanks to this article, which gave me the guidance I needed). I had to tweak the regex to look for and exclude any matches ending in a dollar sign, as opposed to excluding dashes as in the article’s example. My fine-tuned regex statement is below:

Account Name:\s+(?!.+\$)(?P<FIELDNAME>\S+)

It looks for Account Name: followed by one or more spaces (there is excess spacing in the logs for some reason.) The real magic happens in the next bit – (?!.+\$)

  • The parentheses group the expression together
  • ?! means negative lookahead: the match fails if the text ahead matches the following pattern
  • .+ – one or more characters
  • \$ – a literal dollar sign (escaped, because a bare $ normally means end of line)

Put together, the lookahead rejects any candidate followed by a dollar sign on the same line. The actual capture is simply \S+ (one or more non-whitespace characters).
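
To see the lookahead in action outside of Splunk, here is a quick sketch using GNU grep’s PCRE mode (-P) against two fabricated lines; the machine account ends in a dollar sign and gets skipped, while the user account matches (\K discards the Account Name: prefix so only the name prints):

# First line mimics the controller's machine account, second the locked-out user.
printf 'Account Name:  EXAMPLEDC$\nAccount Name:  gclooney\n' | grep -Po 'Account Name:\s+(?!.+\$)\K\S+'
# prints: gclooney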

Note this doesn’t cover all AD logs, just the ones I’m interested in (account lockouts, where the first Account Name always ends in a dollar sign).

The result of all this jargon and gnashing of teeth: clean Splunk logs revealing only what I want without excess information. Neat.

Update: I found an even better way to do this. The key is to use the regex modifier (?s) to include new lines. The better query is now this:

(?s)(<section name of the field you're interested in>:.+Account Name:\s+)(?P<real_group_name>[^\n]+)
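
For instance, to pull the member’s account name from the event in that post, the section name would be Member: (the field name below is just my illustrative choice). Note the lazy .+? instead of the greedy .+: Member: isn’t the last section in the event, so a greedy match would run past it to the final Account Name in the Group: section.

(?s)(Member:.+?Account Name:\s+)(?P<member_account_name>[^\n]+)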

A detailed explanation is located here.