Franco R. Negri

J2EE Journal: Article

A Service-Oriented Management Approach for Service-Oriented Architecture

Creating the process application

Much has been written about service-oriented architecture (SOA) and the many technology and business benefits of adopting this approach. Poised to change the computing landscape once again, SOA has progressive IT departments, software vendors, and service providers all eager to embrace its concepts - concepts familiar to anyone acquainted with the many past attempts to represent applications and IT infrastructure as modular, reusable services.

Contrary to the views of some, SOA is not about .NET or J2EE or any specific platform or standards, although the continued adoption and early successes of Web services implementations will likely galvanize the industry around those standards. Rather, SOA is an architectural approach to building distributed systems that deliver application functionality as services, either to end-user applications or to other services.

Because SOA not only represents a philosophical shift for developers but also has great implications for IT operations, understanding the operational aspects of managing and monitoring SOA with what I will call a "service-centric, end-to-end approach" is critical. It offers IT professionals a great opportunity to finally get it right throughout the application life cycle, from the development process through to operational management.

Service-oriented architecture is an approach to loosely coupled, standards-based, and protocol-independent distributed computing, where coarse-grained software resources and functions are made accessible via the network. In an SOA, these software resources are considered "services," which are well defined, self-contained, and ideally do not depend on the state or context of other services. Services have a published interface and communicate with each other. Services that utilize Web services standards (WSDL, SOAP, UDDI) are the most popular type of services available today.

Many believe that SOA, by leveraging new and existing applications and abstracting them as modular, coarse-grained services that map to discrete business functions, represents the future enterprise technology solution that can deliver the flexibility and agility that business users want. These coarse-grained services can be organized/orchestrated and reused to facilitate the ongoing and changing needs of business.

Advantages of an SOA
Implementing SOA provides both technical and business advantages. From a technical point of view, the task of building business processes is faster and cheaper with SOA, because existing services can more easily be reused and combined to define the business processes themselves. Applications can expose their services in a standard way and, hence, to more diverse clients and consumers. From a business perspective, IT staff can communicate more easily with business people, who understand services. Because business processes become explicit, they can be understood and improved with greater ease. Additionally, applications or business processes can be managed internally more easily or outsourced, because they're well-defined and discrete. As business changes and new requirements are generated, IT can reuse services to meet new demands in a much more efficient and timely manner.

The value and ultimate success of SOA is based on the assumption that everything enterprise IT does is ultimately manifested in the service of some business process. Given this assumption, SOA is about making business processes better, cheaper to create, and easier to change and manage.

New Operational Challenges for Managing SOA
Ironically, IT operations tasked with managing and monitoring SOA face the same major challenge as developers: the fundamental philosophical shift that SOA represents. Operations staff currently manage IT assets from a technology perspective; with SOAs in place, the focus needs to shift to service centricity. Managing technology from the perspective of services has always been difficult for IT operations, which struggle to understand and define services in general. In the absence of clear definitions of business services, IT operations have traditionally focused on managing and monitoring each individual tier of technology separately, without understanding their interactions and interdependencies or how they impact the services provided.

However, when you consider the adoption of SOA many obvious operational questions arise:

  • Who is going to own the management of business services?
  • How will the health, performance, and capacity of these services be monitored?
  • When a problem arises, how will operations personnel be able to relate coarse-grained business service degradation to infrastructure bottlenecks?
  • What enabling technologies or techniques need to be made available to enable personnel across multiple departments (development, QA/Test, operations support, etc.) to work together in real time to prevent service failures or performance degradations?
  • Do the current technology-segregated IT processes work in an SOA-enabled environment?
The answers lie in a best-practice approach that manages the interaction between the services and the underlying infrastructure as one cohesive and integrated solution. This management is done from an end-to-end perspective, using measurements of capacity, availability, and performance (CAP) to integrate and simplify management functions.

Best Practices for Managing SOA
Using such an end-to-end approach offers IT operations far more flexibility and adaptability in an SOA environment than traditional, more piecemeal management of underlying systems or of services and their interfaces.

Measurements of CAP at the services layer should act as a trigger for all other management functions and actions so that the proper focus on services and service quality can be maintained throughout an organization. The advent of clearly articulated business services via SOA can and should drive all operational management functions from the perspective of service quality, expressed as measurements of service capacity, service availability, and service performance. This could eliminate once and for all the finger-pointing and ambiguities we all encounter in operations when finding and fixing problems during runtime. For the purpose of this best practice approach to service-oriented management, Web services standards are implied.

Figure 1 depicts a simple service. It is presumed that Web services standards are used to abstract and integrate the functions of two existing applications on different platforms, written in different languages and in different locations. Both the service producer and consumer's services become interoperable via Web services standards, including SOAP for messaging, XML for message and data format, WSDL for description of services, and UDDI for service discovery. With the application's services clearly articulated and defined, the opportunity exists to coherently "instrument" it and make its measurements available for the runtime management of the service.

Applying CAP metrics and measurements to the Web Service enables operations to clearly understand the behavior of the service and its interactions. For example:

  • Capacity/load metrics: Is the number of connections, sessions, and requests/responses within the intended design limits? Is the number of connections/requests within the defined service-levels for capacity?
  • Availability metrics: Is the service accessible and functioning? Is it returning the expected results? Is it operating within the defined service-levels for availability?
  • Performance metrics: Is the response time within an acceptable range? Is response being impacted by load? Is the response time within the defined service levels for performance?
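The three sets of questions above amount to comparing observed measurements against defined service levels. A minimal sketch of such a check follows; the metric names and limit values are hypothetical illustrations, not drawn from any particular product.

```python
from dataclasses import dataclass

@dataclass
class ServiceLevels:
    """Hypothetical service-level limits for one Web service."""
    max_concurrent_requests: int   # capacity
    min_availability_pct: float    # availability
    max_response_time_ms: float    # performance

def evaluate_cap(levels, concurrent_requests, availability_pct, response_time_ms):
    """Map each CAP dimension to True (within service levels) or False."""
    return {
        "capacity": concurrent_requests <= levels.max_concurrent_requests,
        "availability": availability_pct >= levels.min_availability_pct,
        "performance": response_time_ms <= levels.max_response_time_ms,
    }

levels = ServiceLevels(max_concurrent_requests=200,
                       min_availability_pct=99.5,
                       max_response_time_ms=800.0)
status = evaluate_cap(levels, concurrent_requests=150,
                      availability_pct=99.9, response_time_ms=1200.0)
# Capacity and availability are within limits; performance breaches its level.
```

The point of the sketch is that each CAP dimension yields a simple, coarse-grained pass/fail signal that can drive downstream management actions.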
Instrumentation Techniques: Getting the Service Measurements
Two fundamental principles can be applied to accurately and proactively measuring and monitoring services: active and passive monitoring. Active monitoring implies creating "synthetic transactions" that actively test a service by executing at specified intervals. Passive monitoring looks at transactions and interactions as they occur. Active monitoring is inherently proactive in that it doesn't wait for an error or degradation to occur before detecting it, although it does not reflect the actual interactions between services. Experience shows that using both techniques together produces the best results.
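A rough sketch of the two techniques, with a stub standing in for a real SOAP invocation (all names here are illustrative assumptions):

```python
import time

def call_service(payload):
    # Stand-in for a real Web service invocation.
    return {"echo": payload}

def active_probe(expected):
    """Active monitoring: fire a synthetic transaction on a schedule and
    verify the response, without waiting for a real user to hit an error."""
    start = time.perf_counter()
    response = call_service("synthetic-ping")
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return {"ok": response == expected, "elapsed_ms": elapsed_ms}

observed = []

def passive_tap(payload):
    """Passive monitoring: record real transactions as they occur."""
    start = time.perf_counter()
    response = call_service(payload)
    observed.append((payload, (time.perf_counter() - start) * 1000.0))
    return response

probe = active_probe(expected={"echo": "synthetic-ping"})
passive_tap("real-user-request")
```

Used together, the synthetic probe gives early warning while the passive record reflects what real consumers actually experienced.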

Web services enable powerful instrumentation without the need to modify applications. Definitions exist for inserting instrumentation to reveal characterization of the transaction, its start time, its stop time, its transaction type, and the service with which it is communicating.

Instrumentation techniques fall into two categories: proxy and native instrumentation. The proxy method involves modifying the IP address to intercept messages between service providers and service consumers. The native approach requires you to use available exits in SOAP processors contained in both the service providers and consumers. Of course, both techniques involve tradeoffs. Basically, the proxy method enables you to be SOAP-processor neutral but you take a performance hit by being in-line with all messages. The native method doesn't entail these consequences, but it does require specificity to a particular SOAP processor.
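The proxy method can be sketched as a wrapper that sits in line with every message, recording timings before forwarding to the real SOAP processor (stubbed here; this in-line position is exactly where the performance cost comes from):

```python
import time

def soap_processor(message):
    # Stand-in for the provider's real SOAP processor.
    return "response-to:" + message

class InstrumentingProxy:
    """Intercepts every message, records start/stop times and the
    transaction type, then forwards to the wrapped processor."""
    def __init__(self, target):
        self.target = target
        self.records = []

    def handle(self, message, transaction_type="request/response"):
        start = time.perf_counter()
        response = self.target(message)
        stop = time.perf_counter()
        self.records.append({"type": transaction_type,
                             "start": start, "stop": stop,
                             "elapsed_ms": (stop - start) * 1000.0})
        return response

proxy = InstrumentingProxy(soap_processor)
reply = proxy.handle("getQuote")
```

The native method achieves the same record via exits inside the SOAP processor itself, trading the in-line overhead for processor-specific code.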

Managing Interactions Between Services
Because Web services applications are likely to have many producers and consumers active at any time, the interactions between them must be managed. With this increased complexity (see Figure 2) comes increased concern about service availability and performance. Being able to perform both active and passive monitoring of Web services and the interactions between them becomes paramount. From an operational perspective, managing interactions with external services (across enterprises) adds further complexity. Service-level agreements (SLAs) need to be in place in order to clearly define and monitor the expected performance characteristics of the service.

Applying SOA Concepts to Infrastructure
As we have seen, a best practice approach for managing SOA-enabled business services will require the management of the interaction between the services and the underlying system from the perspective of capacity, availability, and performance and as one integrated solution. Using the right technology and approach, it is possible to manage applications stacks end-to-end and provide "coarse-grained" representations of CAP at each infrastructure tier instead of monitoring capacity, availability, and performance of IT assets in a piecemeal fashion. Unlike current methods, this approach would enable IT operations to quickly locate emerging problems.

However, from a service-support perspective, operations must be able to make sense of the torrent of information and events that they receive from a myriad of monitoring and analysis tools that neither abstract low-level measurements into cohesive and easy to understand information nor provide a contextual reference for interpretation. A solution to this problem is to limit the number of monitoring and analysis tools and insist that they minimally automate the analysis process out-of-the-box. Such tools should not require operations staff to set thousands of static thresholds to manually define how an alarm/event is generated. Modern management solutions should come configured to automatically detect any abnormal conditions in the environment. They should also provide a mechanism to aggregate and convert low-level events/alarms into coarse-grained and humanly understandable measurements of CAP.
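One common way to detect abnormal conditions without thousands of hand-set static thresholds is to learn a baseline from recent history and flag deviations from it. The following is a deliberately simplistic mean-plus-k-standard-deviations sketch; the window contents and the choice of k are arbitrary assumptions:

```python
import statistics

def is_abnormal(history, value, k=3.0):
    """Flag `value` as abnormal if it deviates from the recent baseline
    by more than k standard deviations (self-tuning: no static threshold
    for an operator to configure)."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return value != mean
    return abs(value - mean) > k * stdev

baseline = [100, 102, 98, 101, 99, 100, 103, 97]  # e.g., response times (ms)
normal = is_abnormal(baseline, 104)    # within normal variation
abnormal = is_abnormal(baseline, 250)  # clear deviation from the baseline
```

Real products use far more sophisticated baselining, but the principle is the same: the alarm condition is derived from observed behavior, not configured by hand.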

In order to manage IT assets in the context of business services, all underlying infrastructure measurement and monitoring technology related to the service needs to be standardized into a unified taxonomy. Figure 3 depicts how individual infrastructure elements can be monitored and analyzed across tiers in real time. Individual metrics (informational events, alarms, etc.) from each managed element must be abstracted into overall measures of capacity, availability, and performance in order for them to be humanly consumable and then to enable automated and "standardized" monitoring across tiers of infrastructure.
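Abstracting individual element metrics into per-tier CAP measures might look like the following roll-up, where each raw event is tagged with a tier and a CAP dimension. The taxonomy and severity scale shown are illustrative assumptions, not a standard:

```python
from collections import defaultdict

# Raw events from element-level monitors, tagged into a unified taxonomy:
# (tier, CAP dimension, severity from 0.0 to 1.0).
events = [
    ("web_server", "performance", 0.2),
    ("app_server", "performance", 0.9),
    ("app_server", "capacity", 0.4),
    ("database", "availability", 0.1),
]

def rollup(events):
    """Abstract fine-grained events into a coarse per-tier CAP score,
    keeping the worst (highest) severity seen for each dimension."""
    tiers = defaultdict(dict)
    for tier, dimension, severity in events:
        current = tiers[tier].get(dimension, 0.0)
        tiers[tier][dimension] = max(current, severity)
    return dict(tiers)

summary = rollup(events)
# summary["app_server"]["performance"] is the worst score seen, pointing
# at the application server tier as the likely bottleneck.
```

The resulting summary is humanly consumable: one CAP score per tier rather than a stream of element-level alarms.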

Figure 3 is not meant to be "anatomically correct," but rather to illustrate that a cohesive, end-to-end CAP monitoring strategy is not only possible but also necessary as infrastructure stacks become more complex and dynamic. Like the SOA concept of abstracting fine-grained application functions into coarse-grained services, end-to-end application and infrastructure stack component measurements (Web server, application server, database, etc.) could be abstracted into higher-level measurements of CAP. New service-oriented management systems will leverage these standardized measurements and provide a means to aggregate and correlate them to the services that they provision. Imagine being able to categorize and find a capacity or performance bottleneck down to at least the level of an element or component. How are application server performance measurements impacted by network performance measurements, and what impact do they have on the service layer? In my experience, most IT shops do not have this down to a science, but it is possible - and in an SOA-enabled enterprise, where services are providing business differentiation, service quality will be increasingly important.

Bridging the Gap Between Web Services and Application and Infrastructure Management
Having standardized measurements of both Web services and the supporting application stack makes it possible to tie downstream infrastructure stack analysis and alerting to the measurements of Web service quality, as described earlier. If the Web service layer measurements and monitors detect a performance problem, they can automatically trigger downstream analysis in order to determine like-kind (performance) infrastructure problems that may have been occurring at the time the service degraded. Figure 4, a high-level diagram of an actual project to provide end-to-end proactive monitoring for a BEA WebLogic Integration 8.1 SOA platform, depicts a Web services-centric operational management diagram along with a simple SLA-based analysis workflow for correlated problem detection. BEA made the task easier than usual by publishing Web services statistics via JMX, and by publishing performance statistics for Web services organized into business processes via its Workshop product.
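This trigger-and-correlate workflow can be sketched as a time-window search: when the service layer reports a degradation in one CAP dimension, look for infrastructure events of the same dimension that occurred around the same time. The tier names, timestamps, and window size below are made-up illustrations:

```python
def correlate(service_alarm, infra_events, window_s=60.0):
    """Given a service-layer alarm, return infrastructure events of the
    same CAP dimension that fall within `window_s` seconds of it."""
    t, dimension = service_alarm["time"], service_alarm["dimension"]
    return [e for e in infra_events
            if e["dimension"] == dimension and abs(e["time"] - t) <= window_s]

alarm = {"time": 1000.0, "dimension": "performance"}
infra_events = [
    {"tier": "app_server", "dimension": "performance", "time": 990.0},
    {"tier": "network",    "dimension": "capacity",    "time": 995.0},
    {"tier": "database",   "dimension": "performance", "time": 1500.0},
]
suspects = correlate(alarm, infra_events)
# Only the app_server performance event is like-kind and in the window.
```

Filtering to like-kind events in a bounded window is what turns a flood of infrastructure alarms into a short list of plausible root causes.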

The SOA trend - already pronounced across the industry - will, in my view, only accelerate over the next several years. The promise of enhanced flexibility, adaptability, and agility in the context of "everything services" will win in the end. However, the full value of SOA will be realized only when all parties involved in IT service delivery and service support, across the entire application life cycle, work together with the shared goal of designing, coding, testing, deploying, and managing services around the common objectives of the business.

This is an exciting time for IT developers and operations. In an SOA world, they are both seated at the head table as trusted advisors to the business and as critical partners for any key revenue-generating or cost-reduction objectives. By taking a unified, service-oriented approach to designing, deploying, and managing business services, they have a wonderful opportunity to get it right.

More Stories By Franco R. Negri

Franco Negri is the founder, CTO, and chief strategist of PANACYA. In a 23-year career with leading suppliers and consumers of advanced management technology, Franco developed a keen understanding of market needs and a strong vision for the next generation. He was most recently VP, Product Marketing and VP, Research & Development at Computer Associates, where he was responsible for Unicenter TNG - CA's flagship Enterprise Systems Management product line.
