Varnish-Cache: Scripts & Helpers

varnishhunt.pl – An ‘occasional’ bug-hunt Perl script

The following small Perl script sometimes helps to hunt down an issue with varnish-cache. It continuously calls ‘varnishstat’, greps for a given pattern in the output, and, whenever the selected counter (the one matched by grep) increases, dumps the contents of the shared memory log to disk using ‘varnishlog -d’ and ‘varnishncsa -d’. You can also cap the number of dumps in the script to limit the disk space used up during nightly runs.
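For reference, the check the script automates boils down to a shell pipeline like the following; counter names and the exact column layout of ‘varnishstat -1’ differ between Varnish versions, so treat this as illustrative:

# Second column of 'varnishstat -1' holds the counter value;
# 'port80' is just the example instance name used in the script below
varnishstat -1 -n port80 | grep backend_fail | awk '{ print $2 }'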

#!/usr/bin/perl -w
################################################################################
## Q(uick)&D(irty) Varnish-Cache Bug Hunt Script
## -> Version 1.0 (2011-05-27)
## ----------------------------------------------------------------------------
##  Copyright (C) 2011, Stefan Pommerening <pom@dmsp.de>, https://www.dmsp.de
##
##  This program is free software: you can redistribute it and/or modify
##  it under the terms of the GNU General Public License as published by
##  the Free Software Foundation, either version 3 of the License, or
##  (at your option) any later version.
##
##  This program is distributed in the hope that it will be useful,
##  but WITHOUT ANY WARRANTY; without even the implied warranty of
##  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
##  GNU General Public License for more details.
##
##  You should have received a copy of the GNU General Public License
##  along with this program.  If not, see <http://www.gnu.org/licenses/>.
##
## ----------------------------------------------------------------------------
## Parameters to change in source code:
## ----------------------------------------------------------------------------
## $dumps   : Maximum number of file dumps (to limit disk space on nightly runs)
## $pattern : Pattern to grep for in varnishstat output (e.g. 'backend_fail')
##            Use 'varnishstat -1' for a complete list (counter ids in the first column)
## $instance: Varnish-Cache instance name (if applicable, otherwise leave empty)
################################################################################
use strict;

## Parameters to change ########################################################
#  (see above) 

my $dumps     = 10;
my $pattern   = "backend_fail";
my $instance  = "port80";

################################################################################

my $last_be_fail = &Fetch_Backend_Fail($pattern, $instance);
my $curr_be_fail;

while (1) {
   print "\n";

   sleep 10;
   $curr_be_fail = &Fetch_Backend_Fail($pattern, $instance);

   if ($curr_be_fail > $last_be_fail) {
      if ($dumps > 0) {
         &Dump_Varnishlog($instance);
         $dumps--;
      }
      $last_be_fail = $curr_be_fail;
   }
}

sub Fetch_Backend_Fail {
   my $pattern  = shift;
   my $instance = shift;

   # Build the stat pipeline; the counter value is in the second column
   my $VARNISHSTAT = "/usr/bin/varnishstat -1 ";
   $VARNISHSTAT .= "-n $instance " if ($instance);
   $VARNISHSTAT .= "| /usr/bin/grep $pattern | /usr/bin/awk '{ print \$2 }'";
   my $be_fail = `$VARNISHSTAT`;
   chomp $be_fail;
   my $date    = `/bin/date +"%Y-%m-%d %H:%M:%S"`;
   chomp $date;

   print "$date: backend_fail:[$be_fail] ";
   return $be_fail;
}

sub Dump_Varnishlog {
   my $instance = shift;

   my $date = `/bin/date +"%Y%m%d%H%M%S"`;
   chomp $date;

   my $VARNISHLOG  = "/usr/bin/varnishlog -d ";
   $VARNISHLOG .= "-n $instance " if ($instance);
   $VARNISHLOG .= " > /var/tmp/varnishlog.$date.log";
   my $VARNISHNCSA = "/usr/bin/varnishncsa -d ";
   $VARNISHNCSA .= "-n $instance " if ($instance);
   $VARNISHNCSA .= " > /var/tmp/varnishncsa.$date.log";

   print "Writing varnishlog ...\n$VARNISHLOG\n";
   `$VARNISHLOG`;

   print "Writing varnishncsa ...\n$VARNISHNCSA";
   `$VARNISHNCSA`;
}

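To let the script run unattended overnight, an invocation along these lines will do (hypothetical file name, adjust paths to taste); the dumps end up in /var/tmp as varnishlog.<timestamp>.log and varnishncsa.<timestamp>.log:

nohup perl varnishhunt.pl > /var/tmp/varnishhunt.out 2>&1 &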

tophits.sh – A little stats script

Provided by Simon Lyall (Web: http://www.darkmere.gen.nz/) via the Varnish-Misc Mailing List (2011-05-27). Here is what he wrote:

  • In case anyone finds this useful. It is a little script that outputs the URLs that are doing the most hits and most bandwidth. It’s a bit of a hack (I see bits I could tidy just now) but works okay for me. Main bug is that URLs with different sizes (gzipped/non-gzipped mainly) are totalled separately.
#!/bin/bash
# Dump the complete shared memory log in NCSA format to a scratch file
varnishncsa -d > /tmp/vlog
#
# Work out the time window covered by the log: pull the timestamp out of the
# first and last entries and convert it to epoch seconds. (Note: the sed/awk
# mangling drops the year, so date(1) silently assumes the current year.)
START1=`head -1 /tmp/vlog | cut -f4 -d" " | cut -f2 -d"[" | sed "s/\/[0-9]*\:/\//" | awk -F/ ' { print $2" "$1" "$3 } ' `
START=`date +%s --date="$START1"`
FIN1=`tail -1 /tmp/vlog | cut -f4 -d" " | cut -f2 -d"[" | sed "s/\/[0-9]*\:/\//" | awk -F/ ' { print $2" "$1" "$3 } ' `
FIN=`date +%s --date="$FIN1"`
DIFF=` echo " $FIN - $START " | bc `

echo "Data for the last $DIFF seconds "

# Undo two common percent-encodings (%5F -> _, %2E -> .) so identical URLs
# aggregate under a single key
cat /tmp/vlog | sed "s/\%5F/_/g"  | sed "s/\%2E/\./g" > /tmp/tophits.tmp
echo ""
echo "Top Hits per second URLs"
echo ""
# Overall request rate across the whole window
cat /tmp/tophits.tmp | awk -v interval=$DIFF ' { COUNT += 1 } END { OFMT = "%f" ; printf "Total Hits/second: %i\n" , COUNT/interval }'
echo ""
# Top 20 URLs by request count (the URL is field 7), scaled to hits per second
cat /tmp/tophits.tmp | awk ' { print $7 }' | sort | uniq -c | sort -rn | head -20  | awk -v interval=$DIFF ' { printf "%4.1f Hits/s %s\n" , $1/interval , $2 } '
echo ""
echo ""
echo "URLs using the most bandwidth"
echo ""
# Total throughput: field 10 is the response size in bytes, *8/1000 -> kilobits/s
cat /tmp/tophits.tmp | awk -v interval=$DIFF ' { SUM += $10} END { OFMT = "%f" ; printf "Total Bits/second: %6.1f Kb/s \n", SUM*8/interval/1000 }'
echo ""
# Top 20 URLs by bandwidth: 'uniq -c' yields "count size url", so
# kilobits/s = count*size*8/(interval*1000), hits/min = count*60/interval
cat /tmp/tophits.tmp | awk  ' { print $10 " " $7 }' | sort | uniq -c | awk -v interval=$DIFF ' { printf "%6.1f Kb/s  %i h/min  %i KB  %s\n" , $1*$2/interval*8/1000,$1*60/interval,$2/1000,$3}' | sort -rn | head -20
echo ""
echo ""