Friday, May 8, 2009

SOA - A Developer's Perspective

SOA - Step by Step

There is no dearth of literature on SOA concepts, practices and design. But surprisingly, there seems to be no step-by-step walk-through of how the various SOA concepts can be incrementally evolved. This article is an attempt at exactly that.

For SOA novices, one of the most easily understood concepts in SOA is the functional decomposition hierarchy, which is just a fancy name for the practice of top-down decomposition of a business process into services at the lowest level. So let's start with such a top-down approach to defining services in an SOA.

In a typical SOA, services are classified as

  • Basic services - data-centric and logic-centric services
  • Composite / intermediary services
  • Process-centric services

The top-down functional decomposition approach proceeds as follows:

  • A functional business process is documented as a process definition in a process modelling language
  • The high-level process is broken down into numerous sub-processes
  • Each sub-process is further decomposed into process-centric services
  • Each process-centric service is further decomposed into composite services
  • Each composite service is decomposed into many basic data-centric / logic-centric services

The following diagram depicts this decomposition hierarchy.
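As a minimal sketch, the decomposition levels above could map onto plain Java types like this. All class and method names (OrderFulfillmentProcess, BillingService, and so on) are illustrative, not from any real codebase:

```java
// Basic, data-centric service: fetches domain data.
class CustomerDataService {
    String findCustomerName(long customerId) {
        return "customer-" + customerId; // stand-in for a repository lookup
    }
}

// Basic, logic-centric service: pure computation.
class TaxCalculationService {
    double applyTax(double amount) {
        return amount * 1.10; // flat 10% tax, purely for illustration
    }
}

// Composite / intermediary service: orchestrates basic services.
class BillingService {
    private final CustomerDataService customers = new CustomerDataService();
    private final TaxCalculationService tax = new TaxCalculationService();

    String bill(long customerId, double amount) {
        return customers.findCustomerName(customerId) + " owes " + tax.applyTax(amount);
    }
}

// Process-centric service: drives one sub-process of the business process.
class OrderFulfillmentProcess {
    private final BillingService billing = new BillingService();

    String fulfill(long customerId, double orderAmount) {
        return billing.bill(customerId, orderAmount);
    }
}
```

Note how each level depends only on the level directly beneath it, which is exactly what the decomposition hierarchy prescribes.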

We have so far identified services. Is this what SOA is all about? How does the UI interact with the processes and services? How are the services deployed? Where does a service container fit in? Where and when does an ESB get used? All these questions, seemingly orthogonal to the decomposition hierarchy, crop up in our minds.

Well, let's look at them in a logically incremental view.

To begin with, let's assume we build services at the basic, intermediary and process-centric levels, following SOA guidelines for designing and developing services; that is, all our services are decoupled, cohesive and so on. All these services are unit tested and faceless for the time being. All are implemented, say, as Java classes which lie in one big codebase!

We already have a web application which acts as the primary UI client. In the simplest deployment scenario, we ship all our service classes to the WEB-INF/classes folder of the client's web application. In this case all the service interfaces are Java-based.

But are the services reusable? Should another project come to us and ask about reusing the billing service, can we say yes? Not right now. But let's improve our deployment units. Each intermediary service and its related classes can go into a separate jar file. In the above example we would have two jar files, shipping-services.jar and billing-services.jar. This way we can put up the billing service as a reusable service, though only as a library. But reusing services in this fashion across the enterprise can be quite tedious. The service configuration needs to be explained through documents to other projects, and every time there is an upgrade or a defect fix, you need to notify every other project which reuses your service. Projects may not want such an integration at the API level. Instead, reuse would be much easier if you could expose your service to other projects through a variety of integration options.

A service can be exposed for integration using a variety of protocols and technologies.

For example, our billing service can be exposed via HTTP as a simple JSP/servlet, a SOAP web service or a RESTful web service. Implementing this is quite trivial: all we need to do is wrap our billing-services.jar in a war file, throw in the third-party jars required for the technology stack (such as SOAP-WS) and we can easily host our billing service on a simple web application server. If we needed to expose our services as EJBs, we could just as well wrap our war inside an EAR and host it on an EJB-compliant application server.
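To make the HTTP option concrete, here is a hedged sketch of exposing a billing service over HTTP. It uses the JDK's built-in com.sun.net.httpserver.HttpServer rather than a servlet container or war file, purely so the example is self-contained; the BillingService class, the /billing path and the JSON shape are all hypothetical stand-ins for whatever lives in billing-services.jar:

```java
import com.sun.net.httpserver.HttpServer;

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Hypothetical stand-in for a service class shipped in billing-services.jar.
class BillingService {
    String invoiceFor(String customerId) {
        return "{\"customer\":\"" + customerId + "\",\"amount\":42.0}";
    }
}

public class BillingHttpEndpoint {
    // Starts an HTTP endpoint exposing BillingService at GET /billing?customer=...
    public static HttpServer start(int port) throws Exception {
        BillingService billing = new BillingService();
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/billing", exchange -> {
            String query = exchange.getRequestURI().getQuery(); // e.g. "customer=1001"
            String customerId = query.substring(query.indexOf('=') + 1);
            byte[] body = billing.invoiceFor(customerId).getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

The point is only that the service class itself stays untouched; the HTTP exposure is a thin wrapper that any web stack could provide.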

Case for the light-weight Service container
If we are ready to forgo the EJB interface option, we can choose thin, modular service containers like Mule and ServiceMix for hosting our billing-services.jar. Most service containers support hosting services based on POJOs, as well as common integration technologies like HTTP, SOAP, JMS, etc.

Simpler service consumers with standard access to services
The projects intending to reuse our service will act as service consumers and can have very simple, standard client code for accessing the service. For example, to access a RESTful web service, all that a service consumer requires is the ability to frame HTTP requests.
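A sketch of such a consumer, using the JDK's java.net.http.HttpClient; the base URL, path and query parameter are assumptions matching no particular deployment:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical consumer of a RESTful billing endpoint. All it needs is the
// ability to frame HTTP requests -- no provider jars on its classpath.
public class BillingServiceConsumer {
    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl;

    public BillingServiceConsumer(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    // Frames a GET request against the assumed /billing resource and
    // returns the raw response body.
    public String fetchInvoice(String customerId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(
                URI.create(baseUrl + "/billing?customer=" + customerId)).GET().build();
        HttpResponse<String> response =
                http.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```

Because the consumer speaks only HTTP, upgrades to the provider's internals no longer require re-shipping a library to every consuming project.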

In fact, our own web application UI can act as a service consumer for the billing service hosted separately in a service container like Mule!

Which services to expose?
Though the billing service was a very good example of an independent service that can be reused by other projects in the enterprise, often you will be faced with services which you are tempted to expose, but which don't really have any takers. The reason is that not all intermediary services make sense when used outside the context of a strong business process. What you might want to do in such a case is move up the decomposition hierarchy and expose the process-centric service instead, if that makes business sense. Finally, only those services should be exposed which are actually being asked for; there is no need to expose services that are not in demand in the larger context of the enterprise's service requirements.

Case for an ESB
Our billing service being network addressable no doubt helps in making it readily reusable across the enterprise, but there is still coupling between the service consumers and our service in terms of its location and the interfaces that our consumers are aware of. If we were to change the location of the service or any of its interfaces, this would impact our consumers.

An elegant way out is provided by the introduction of an ESB. The ESB can act as an intermediary between our physical service and our service consumers. The consumers need be aware only of a logical endpoint on the ESB; the ESB resolves this endpoint to the physical address of the service. The ESB can also provide routing and load balancing should traffic to our service increase. More importantly, the interface between the ESB and the consumer can be abstract, with the transformation between the abstract ESB interface and the concrete service interface provided declaratively through ESB configuration. An ESB can also provide features such as logging, QoS add-ons, service call auditing, SLA monitoring, security, etc.
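The location-transparency part of this can be boiled down to a tiny sketch. A real ESB does this declaratively through configuration (plus transformation, routing and the other features mentioned above); the registry class and names below are purely illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of ESB-style location transparency: consumers address a
// logical endpoint name; the bus resolves it to a physical address.
public class LogicalEndpointRegistry {
    private final Map<String, String> endpoints = new HashMap<>();

    public void register(String logicalName, String physicalAddress) {
        endpoints.put(logicalName, physicalAddress);
    }

    // Consumers call resolve("billing") and never hard-code a host or path,
    // so the service can be relocated by changing only the registration.
    public String resolve(String logicalName) {
        String address = endpoints.get(logicalName);
        if (address == null) {
            throw new IllegalArgumentException("No endpoint registered: " + logicalName);
        }
        return address;
    }
}
```

Relocating the billing service then means re-registering one entry on the bus, with no change to any consumer.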

No doubt SOA needs to be seen in the light of architecture and service reuse at the enterprise level, but the implications of SOA for a single, simple web application architecture are important as well. Hopefully our walk-through of applying SOA to a traditional web application has helped bridge the gap between lofty SOA principles and guidelines and the ground reality we face as developers every day :)

Monday, April 27, 2009

Domain driven design in SOA application stack

A typical application design stack consists of the following:
  • User Interface Controllers / Integration Layer / Process Layer
  • High Level Services Layer (Facades, Composite Services, etc)
  • Low Level Services Layer
  • Application Persistence Layer (custom DAOs or ORM framework)
  • Database Server

The domain layer cuts across all the above layers; domain objects are passed across the interfaces spanning the layers.
Cross-cutting concerns like security and transactionality are ideally applied at the high-level services layer.

Ideally, where should the business logic related to an application reside?

The domain-driven design camp seems perfectly convinced it should reside in the domain classes. They even go on to criticize applications that do not have business logic in the domain classes. A domain model which does not contain business logic but only data members and getters/setters is often called an anaemic domain model.

The service-driven design camp seems to think the best place for the business logic is in the services, especially the low-level services. One argument in their favour stems from the increasing usage of ORM frameworks like Hibernate and JPA, which require persistent classes to be mapped to relational tables. In such cases, having a light domain model that can easily be recreated again and again from a tunable underlying database seems advantageous.

One of the reasons the domain-driven camp wants business logic in domain objects is that in earlier frameworks like EJB 1.x and 2.x, the service layer had a coupling to technology. But with POJO-based frameworks like Spring, EJB 3.1, etc., business logic is easily reusable whether it resides in the service layer or the domain layer, because either way it lives in POJOs.

The more important principle that is getting obscured in this domain-driven versus service-driven debate is that the design needs to be able to capture business requirements in appropriately designed object-oriented abstractions. When going down the path of service-based business logic, there is a much greater danger of falling into the procedural trap and designing business logic as a sequence of linearly executing steps. The low-level service layer should have OO abstractions that deal with domain-specific concepts, entities and operations.
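The two camps' positions can be contrasted in a small sketch. The Order class, the OrderPricingService and the 10%-over-1000 discount rule are all made up for illustration; the point is only where the rule lives:

```java
// Domain-driven style: behaviour lives with the data it operates on.
class Order {
    private final double amount;

    Order(double amount) { this.amount = amount; }

    double amount() { return amount; }

    // Business rule inside the domain class -- not an anaemic model.
    double discountedTotal() {
        return amount > 1000 ? amount * 0.9 : amount;
    }
}

// Service-driven style: the domain class stays a data holder and a
// low-level service applies the same rule.
class OrderPricingService {
    double discountedTotal(Order order) {
        return order.amount() > 1000 ? order.amount() * 0.9 : order.amount();
    }
}
```

Both versions are plain POJOs and equally reusable; the real question, as argued above, is which placement keeps the rule expressed in a sound OO abstraction rather than a procedural step sequence.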


Business logic can reside in the domain layer or, alternatively, in the service layer. But in either case, special care needs to be taken in design to ensure that business logic is modelled keeping sound object-oriented principles and best practices in mind.

Tuesday, April 14, 2009

Cloud Computing - relevance to enterprise applications

Some considerations for moving applications to the cloud:
  • Different applications in the enterprise have different scalability and load requirements
  • From an ROI perspective, cloud-based computing offers a huge cost advantage only to applications which exhibit extremely variable loads.
  • Specifically, applications for which the difference between peak load and average load, or between average load and minimum load, is very large offer the best ROI when considered for cloud computing.
  • With cloud computing's pay-as-you-use economics, you don't incur huge up-front capital costs, nor do you have to buy hardware and software capacity sized for rare peak loads, say holiday-season or year-end processing peaks
  • Cloud computing is also suitable for applications with extreme scalability requirements
  • Applications moved to the cloud should be ready to contend with less-than-perfect reliability, say 99.5% availability, which may not be acceptable for some mission-critical applications


  • Not every enterprise application should be considered for being moved to the external cloud. For some applications, good old virtualization within the realms of the organization should deliver enough value. This type of infrastructure is referred to by some vendors as an internal cloud
  • New or existing enterprise applications can be architected for cloud computing depending on the application's suitability
  • Cloud computing can be used very effectively to complement existing intranet based enterprise applications

For a more graphical explanation, please view the SlideShare presentation hosted on LinkedIn: Cloud computing relevance to enterprise applications

Thursday, March 19, 2009

Desktop Virtualization Adoption issues

I think there is one major hurdle in the adoption of desktop virtualization on a massive scale. Even with desktop virtualization, thin clients will be needed for connecting to the virtual machines. Hardware cost of thin clients is far from negligible.

Consider what contemporary hardware could fit in as an affordable thin client. The best I can come up with is either netbooks, which are starting to cost under $400, or perhaps the most scaled-down of desktops with, say, 512 MB RAM and a 60 GB hard disk, but even these are not very cheap. Add to that full-blown operating systems like Windows Vista, which cost a bundle, and the thin-client cost can easily balloon to $500+ a piece. This can eat into the benefits of desktop virtualization. The OS problem is easily solved with Linux, but hopefully in the near future the industry will make a serious effort to provide scaled-down thin-client hardware that is very affordable (say under $100).

If the thin-client cost can be brought down, adoption of desktop virtualization will sky-rocket proportionally.

The near future: towards reducing the cost of thin-client hardware, most reports tell us that RAM, storage and chips are getting cheaper by the day, and increasingly the bottleneck in reducing thin-client cost is the display technology. I dream of the day when my laptop is as thin as can be and its LCD screen has been replaced by a small outlet for optical projection, so that any flat surface can make do as a screen. Ah, but that's just wishful thinking for now...

Wednesday, January 14, 2009

Service Coupling - Service Interface evolution impact

Service providers should consider potential service consumer needs when coming up with service interfaces. However, evolution of service interfaces is a fact of life we all have to contend with eventually. In an enterprise SOA, service providers and consumers are very much under enterprise SOA governance, and there are specific things which can be done to minimize the pain of service consumers having to change with evolving service interfaces. Let us consider this problem through the example of the following Employee Information Service, which has the following interface:
public interface EmployeeInformationService {
    List<Employee> getEmployeeInformation(Employee request);
}

class Employee {
    private long id;
    private String name;
    // getters and setters omitted
}

1. A key issue with the above interface is that the service's domain class Employee is exposed to consumers. To avoid this, it is desirable to wrap the class in service-integration abstractions like EmployeeInformationRequest and EmployeeInformationResponse:

public interface EmployeeInformationService {
    EmployeeInformationResponse getEmployeeInformation(EmployeeInformationRequest request);
}

class EmployeeInformationRequest {
    private long requestId;
    private String requestStatus;
    private Employee employee;
}

class EmployeeInformationResponse {
    private long responseId;
    private String responseStatus;
    private List<Employee> employees;
}

2. On the service consumer's end, coupling between the service interface and the consumer's application logic can be greatly minimized using a consumer-specific ServiceClient interface. In addition, an object mapping tool such as Dozer can be used to map attributes between consumer classes and service classes declaratively. More about Dozer

3. The key to reducing coupling is not the mere use of the above abstractions, but rather the conscious use of only the bare minimum, functionally necessary service interface attributes in the consumer. This judicious neglect of unrequired service object attributes is what increases the consumer's resilience to changes in the service interface.

For example, if the Employee class in the service interface contains 20 attributes but our consumer needs to know only about 4 of them, then the consumer-side classes should be aware of only those 4 Employee attributes, along with mappings to translate to and from the equivalent Employee attributes.

The whole idea being that if existing Employee attributes were changed or deleted in the provider, our consumer would be unaffected unless it actually used those attributes.
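A sketch of this judicious neglect with hand-written mapping (a tool like Dozer would do the same declaratively). The provider and consumer class names and the specific attributes chosen are illustrative:

```java
// Provider-side class: imagine ~20 attributes; only a few shown here.
class ProviderEmployee {
    long id; String name; String department; String email;
    String phone; String address; String manager; // ...and more
}

// Consumer-side view: only the four attributes this consumer actually needs.
class ConsumerEmployeeView {
    final long id;
    final String name;
    final String email;
    final String department;

    ConsumerEmployeeView(long id, String name, String email, String department) {
        this.id = id; this.name = name; this.email = email; this.department = department;
    }

    // Mapping from the provider class; unused provider attributes
    // (phone, address, manager, ...) can change or disappear without
    // touching this consumer.
    static ConsumerEmployeeView from(ProviderEmployee e) {
        return new ConsumerEmployeeView(e.id, e.name, e.email, e.department);
    }
}
```

The mapping is the only place that knows both sides, so interface evolution in the provider is absorbed at a single, narrow point.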

Summary: Often in SOA implementations, providers communicate through XML messages and expose XSD schemas for the message structures. This applies equally to SOAP-based and Plain Old XML (POX) message exchanges. Here, consumers often import the entire XSDs exposed by the provider and also do XSD-based validations.

These consumers needlessly couple themselves to the provider interface. Instead, consumers should do "just enough" validation and be aware of only the minimum information sent by the provider. This will go a long way in reducing coupling between providers and consumers, and will allow provider interfaces to evolve with minimal impact on consumers.