
Mark is a graph advocate and field engineer for Neo Technology, the company behind the Neo4j graph database. As a field engineer, Mark helps customers embrace graph data and Neo4j, building sophisticated solutions to challenging data problems. When he's not with customers, Mark is a developer on Neo4j and writes about his experiences of being a graphista on his blog at http://markhneedham.com/blog. He tweets at @markhneedham. Mark is a DZone MVB.

Embracing the Logs

04.03.2013

Despite having worked full time in software for almost eight years, every now and then I still need a reminder of how useful reading logs can be in helping solve problems.

I had a couple of such instances recently which I thought I’d document.

The first was a couple of weeks ago when Tim and I were pairing on moving some applications from Passenger to Unicorn and were testing whether or not we’d done so successfully.

We were doing this by creating an /etc/hosts entry pointing our top level domain at an nginx proxy node, which would forward the request on to the application server.
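Concretely, that meant an entry along these lines on the machine we were testing from (the IP address and domain here are made up for illustration):

```
# /etc/hosts on the machine driving the test:
# point the top level domain at the nginx proxy node
10.0.0.5    myapp.example.com
```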

Request -> nginx on proxy node -> nginx on app server node -> unicorn on app server node

This didn't work and we got a 404 response code, so I logged onto the server hosting the application server and started writing out a cURL command to simulate what the proxy should be doing, to see if the problem was there.
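The cURL command looked something like this: hit the app server's nginx directly, sending the Host header that the proxy would have forwarded. The hostname and domain are hypothetical stand-ins, and the exact headers you need depend on your proxy configuration.

```shell
# Simulate the proxy's forwarded request: talk to the app server's
# nginx directly, but present the Host header the proxy would send.
# -i prints the status line and response headers alongside the body.
curl -i -H "Host: myapp.example.com" http://app-server/

# Or, if you only care about the response code:
curl -s -o /dev/null -w "%{http_code}\n" \
     -H "Host: myapp.example.com" http://app-server/
```

If this request succeeds, the app server side of the chain is fine and the problem is upstream on the proxy node.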

After watching me do this a couple of times, Tim suggested that we might be more successful if we opened a bunch of shell tabs, each tailing one of the log files that the request should pass through.

We set up tail commands against the following files:

  • nginx access log on proxy node
  • nginx error log on proxy node
  • nginx access log on the app server node
  • unicorn log on the app server node

Immediately it became clear that we actually had a problem on the proxy node because we’d configured one of the nginx directives incorrectly.

Once we’d fixed this the request flowed through smoothly.
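I haven't recorded which directive it was, but for illustration, the kind of proxy-side location block where a misconfiguration produces exactly this symptom might look like the following (server names and upstream are hypothetical):

```nginx
server {
    listen 80;
    server_name myapp.example.com;

    location / {
        # If this directive is missing or points at the wrong place,
        # nginx tries to serve content locally instead, the client
        # sees a 404, and the evidence shows up in THIS node's logs,
        # not on the app server.
        proxy_pass http://app-server;
        proxy_set_header Host $host;
    }
}
```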

We extended this tailing of files idea when testing multiple nodes through a load balancer, except this time we made use of Fabric to make things a bit easier.
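At its core, Fabric automates running the same command on a list of hosts over SSH. A rough shell equivalent of what it was doing for us, with hypothetical hostnames, would be:

```shell
# Pull the last few access log lines from every app server behind
# the load balancer (assumes SSH access to each host).
for host in app1 app2 app3; do
  echo "== $host =="
  ssh "$host" tail -n 20 /var/log/nginx/access.log
done
```

Fabric adds niceties on top of this, such as parallel execution and reusable tasks, which is what made it worth reaching for here.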

The second was earlier this week when Baris and I were trying to configure Puppet so that we could install different Ruby versions on different machines.

We were having trouble figuring out why the wrong version was getting installed, so eventually we chatted to Andy who, amongst other things, had a look at the apt history log at /var/log/apt/history.log and was able to figure out how this was happening.
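That log records every apt transaction: its start time, the exact command line that triggered it, and which packages were installed, upgraded, or removed, which is exactly what you need to work out where an unexpected package came from. Something like this (the search term is just an example) narrows it down quickly:

```shell
# Find transactions involving ruby, with a couple of lines of
# context so the Commandline: and Start-Date: fields are visible.
grep -B 2 -A 2 'ruby' /var/log/apt/history.log

# Older transactions get rotated and gzipped; zgrep searches those.
zgrep 'ruby' /var/log/apt/history.log.*.gz
```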

Lesson of the last two weeks: embrace the logs!

Published at DZone with permission of Mark Needham, author and DZone MVB. (source)

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)