Monitoring 101 – Are You Running a Modern Application?

I promised that I would write about the complexity of modern applications and how it applies to your monitoring decisions. Consider this an initial dive into that subject. For many of you, it will be enough to get you going in the right direction and help you avoid costly mistakes.

Applications are becoming increasingly complex, with more client-side processing and less server-side interaction. This makes it sound as if the application is moving into the browser – a statement that can cause a bit of panic for application owners who want to monitor that application. What is really going on, however, is that the pendulum is simply swinging back to a well-understood client-server paradigm in which the server is responsible for data processing and business logic while the client takes care of manipulating the display and making the application highly interactive. Web applications had a tough time following that model for a long time, largely because they were caught up in how to produce the display itself. The evolution of common web technologies – HTML5, CSS3, and JavaScript – has allowed web applications to begin fulfilling the promise of a true client-server architecture, in which a highly interactive UI makes more limited and focused use of potentially expensive server interactions. Applets and Flex were both attempts to arrive at this point much sooner, but both suffered from being technologies that were not native to the browser itself.

So, we understand that modern applications split their work between the display and the data-fetching and control calls that support it. What does this mean for your monitoring strategy? The first important step is to realize that there are two questions, not one:

  • How well is my back-end supporting my UI?
  • How efficiently is my UI supporting my end user’s experience?

The first question can be answered using standard monitoring methodologies – passive tapping, agents, and so on. It addresses the most vexing issue for the Operations team: whether the current infrastructure investment is having a positive or negative effect on the end user. If each server interaction is free from availability and performance issues, then any problems with the end-user experience will be the result of issues with the client-side display – i.e., it is not a data center issue. That is a critically important distinction for an Operations team to be able to make.

The second question can only really be answered by having a monitoring presence on the client. This can take the form of a desktop agent or some form of instrumentation, such as JavaScript. Ideally, the solution should not be intrusive, but that is not always possible. Desktop agents don’t work if you don’t control the client’s platform, as is the case with public-facing applications. Browser vendors are starting to build monitoring directly into the browsers themselves, but the resulting data still needs to be gathered by injecting some JavaScript into the UI (which makes it intrusive).
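
To make that concrete, here is a minimal sketch of the kind of JavaScript instrumentation involved, using the Navigation Timing data that newer browsers expose. The split between "backend" and "frontend" time and the /rum/beacon collection URL are assumptions for illustration; substitute whatever your monitoring tool expects.

```javascript
// Minimal sketch: read the browser's built-in Navigation Timing numbers
// once the page has finished loading and beacon them to a collection
// endpoint. "/rum/beacon" is a hypothetical URL for illustration.
(function () {
  window.addEventListener("load", function () {
    // loadEventEnd is only populated after the load handlers finish,
    // so defer the read with a zero-delay timeout.
    setTimeout(function () {
      var t = window.performance && window.performance.timing;
      if (!t) { return; } // older browsers expose no timing data

      var backend  = t.responseStart - t.navigationStart;  // network + server
      var frontend = t.loadEventEnd  - t.responseStart;    // parse + render
      var total    = t.loadEventEnd  - t.navigationStart;

      // A 1x1 image request is a simple, widely compatible beacon.
      new Image().src = "/rum/beacon?backend=" + backend +
                        "&frontend=" + frontend + "&total=" + total;
    }, 0);
  });
})();
```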

In short, for public-facing applications you are likely going to need to instrument the application yourself. Keep in mind that modern applications work in terms of “application events,” not page views. An application event is a single interaction between the user and the application: it always starts with a user-initiated click on the UI and ends when the UI is ready for another interaction. The intervening gap can be filled with all kinds of activity, on both the client side and the server side, and all of it is part of that application event. You can see how the instrumentation points for an event can vary enormously from one application to another, or even within the same application. Consequently, it is extremely hard to automate this kind of monitoring; applications simply do not follow enough standards yet to allow for consistent automation. Tool vendors, however, often suggest automated instrumentation without discussing its risks or deficiencies.
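
To illustrate, here is a minimal sketch of hand-placed application-event instrumentation, assuming the developer decides where each event begins and ends. AppEvents, fetchResults, renderResults, and the /rum/event URL are hypothetical names used only for illustration.

```javascript
// Minimal sketch of developer-injected application-event timing.
// The event begins at the user's click and ends when the developer
// declares the UI ready for the next interaction.
var AppEvents = {
  _name: null,
  _start: null,

  // Call at the start of the user-initiated interaction.
  begin: function (name) {
    this._name = name;
    this._start = new Date().getTime();
  },

  // Call once the UI is ready again, typically at the end of the
  // callback that renders the server's response.
  end: function () {
    if (this._start === null) { return; }
    var ms = new Date().getTime() - this._start;
    new Image().src = "/rum/event?name=" + encodeURIComponent(this._name) +
                      "&ms=" + ms;
    this._start = null;
  }
};

// Example: a "search" event spans the click, the data-fetching call,
// and the client-side work needed to display the results.
document.getElementById("searchButton").onclick = function () {
  AppEvents.begin("search");
  fetchResults(function (results) {   // hypothetical AJAX helper
    renderResults(results);           // hypothetical display update
    AppEvents.end();                  // UI ready again: the event is over
  });
};
```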

To avoid the negative consequences that automated instrumentation can bring, such as performance slowdowns or even breakage, you should strongly consider having the developers inject the instrumentation themselves as part of a larger culture of “Design for Monitoring” – a discipline that is sorely lacking in software development (and the subject of a future blog entry). Although there is a growing collection of libraries to help with instrumentation-based monitoring of web applications (e.g., Boomerang and Jiffy), they are mostly focused on automating the instrumentation of page views. I have not seen any that are very effective at finding and defining application events. Hopefully, that will change in the near future. Browser vendors are once again in the best position to address this, since they can easily determine the start and end points of any application event, and I hope to see them step up. In the meantime, make sure that your tool set gives you a way to address this need in some fashion.

These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.
