Saturday, August 27, 2005

Distributed Computing and SOA

Distributed Computing and Service Oriented Architectures

A few days ago I had a discussion with Jostein from Markus Data Norway about distributed computing architectures. These days he is involved in designing the system architecture for the next-generation Procasso system. We discussed inter-component and B2B communication technologies such as RMI, Web Services, Remoting and ASPX, and their usage in different scenarios. So I thought of making a blog post on distributed computing and where it is heading today.

We know the OOP approach has caused a major revolution in programming models. It has been a great success, and therefore people tend to look at every problem in an object-oriented way. So people came up with the concept of distributed objects to solve the distributed computing (DC) problem, and we started to hear about technologies like Java RMI, DCOM, CORBA and .NET Remoting being used to solve distributed computing problems.

But in practice organizations started facing lots of problems with the OO approach to DC, and messaging systems such as IBM MQSeries, MSMQ and Web Services became much more popular due to the flexibility they provide.

So what is the problem with distributed objects? In my opinion the concept itself has some problems. Think of it this way: the interface my organization provides to the outside world should present a service, not raw data. For example, if my organization specializes in credit card processing then I should provide a credit card processing service to the outside world. But objects are the representation of my business entities, and they should be manipulated within the organization boundary to provide a service. That is the theory side; now on to practical issues.

Distributed Objects (DO) are very convenient for developers to work with, as they can think about and work on remote objects in much the same way as they work with ordinary local objects. But the problem is tight coupling between the communicating parties, and in the DO approach this coupling comes in several forms.

a) Code Coupling: In the distributed-objects approach both parties share a common interface, i.e. actual code. DOs work well in the lab, but once you deploy them in production it is very difficult to evolve one side independently of the other because of this code sharing. Assume I want to add a new method to my DO interface because a new business partner wants a new service from me. Adding this method will break all my contracts with existing partners, as I have to change my interface code.

b) Time Coupling: DOs work in a synchronous manner, i.e. the caller waits for the callee to finish the operation. This is obviously not the way to perform B2B operations.

c) Location Coupling: Assume you call a business method of another company but the response needs to be directed to a third party, not back to you. This kind of addressing issue cannot be handled with the DO model.

On the other hand, Web Services, being an XML (text) based messaging technology, provide loosely coupled communication between two parties. We do not share code but messaging metadata (WSDL). We can choose our transport mechanism, making asynchronous invocations possible; we see Web Services running on HTTP, SMTP, JMS and even plain TCP. The WS-Addressing specification provides a convenient way to loosen the location coupling between parties.
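To make the contrast concrete, here is a tiny sketch of document-style interaction: the two sides share only the shape of a text message, not any Java interface, so either side can evolve independently. All names and the XML shape here are made up for illustration (a real service would use a proper XML parser).

```java
public class MessageDemo {

    // the "service" consumes a text message rather than a typed remote object
    static String processPayment(String requestXml) {
        // crude extraction, purely for illustration; real code would parse the XML
        String card = requestXml.replaceAll(".*<card>(.*)</card>.*", "$1");
        return "<response><status>approved</status><card>" + card + "</card></response>";
    }

    public static void main(String[] args) {
        // the caller builds a document; it never links against the service's code
        String request = "<payment><card>4111-1111</card><amount>100</amount></payment>";
        System.out.println(processPayment(request));
    }
}
```

Because the contract is the message format rather than compiled code, adding a new element to the document does not break existing callers the way adding a method to a shared interface does.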

Due to these reasons we see Service Oriented Architectures based on loosely coupled messaging systems taking over the interconnection domain. But I think DOs will still have a major role to play in inter-process and inter-component communication within a software system.


Sunday, August 14, 2005

Make your design testable - Test driven design


Every day we hear a great deal about things like "test driven development", "unit tests", "test-first programming" etc. I know every developer would love to cover 100% of their code with unit tests. But most of the time the complaint is that they do not have enough time for writing unit tests.

In my view the root cause of the problem is not actually time, but the design of our components. Most developers can't find enough time to write tests for their components simply because it is almost impossible to write test cases against the component's design. Most of the time, the way our component is integrated with its environment makes it impossible to simulate its dependencies in a test environment.

The next obvious question is "How do we write our components to be testing friendly?". One good answer is "Use dependency injection". Two of the most popular injection mechanisms are "constructor injection" and "setter injection". Despite their big names, the concepts are pretty simple :) . Let me take an example from a situation generally considered fairly difficult to test: UI workflows.

In our example the business flow will be as follows;
-- fetch all customers from the database via a DAO
-- show customer list to the user on a GUI View
-- user will pick a customer from the list
-- send a billing mail to the customer using MailSender service

Our simple workflow depends on a DAO, a GUI view and a MailSender service for its operations. The business workflow under test will look like the following:

class BusinessProcess {

    private ICustomerDAO dao;
    private IView view;
    private IMailSender mailSender;

    public void perform() {
        // get all customers from the database
        Customer[] customers = dao.getAllCustomers();

        // ask the UI view to display the customer list;
        // this call blocks until the user selects a customer
        view.setInput(customers);

        // get the customer the user selected
        Customer selectedCustomer = view.getSelected();

        // send the billing mail
        mailSender.sendBillingMail(selectedCustomer);
    }
}



One important thing to notice is that inside the business process method we should not perform any dependency lookups. It just uses the instance variables dao, view and mailSender representing the dependent services. These instance variables are initialized either by constructor injection or setter injection. In the case of constructor injection, the constructor of BusinessProcess will look like the following:

class BusinessProcess {

    private ICustomerDAO dao;
    private IView view;
    private IMailSender mailSender;

    public BusinessProcess(ICustomerDAO dao, IView view, IMailSender mailSender) {
        this.dao = dao;
        this.view = view;
        this.mailSender = mailSender;
    }
}


The next important point is that our BusinessProcess does not depend on any implementation of the services. It depends only on the interfaces ICustomerDAO, IView and IMailSender. This allows you to write mock services and inject them into the business process inside your test environment.

With the above design it is easy to write a test which simulates the complete business process, even including user interactions. You will need to write mock objects for the ICustomerDAO, IView and IMailSender services and inject them into the BusinessProcess in your test case.
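As a concrete sketch of such a test (the interface and class bodies are restated here to keep the example self-contained, and the mock wiring is illustrative):

```java
interface ICustomerDAO { Customer[] getAllCustomers(); }
interface IView { void setInput(Customer[] customers); Customer getSelected(); }
interface IMailSender { void sendBillingMail(Customer customer); }

class Customer {
    final String name;
    Customer(String name) { = name; }

class BusinessProcess {
    private final ICustomerDAO dao;
    private final IView view;
    private final IMailSender mailSender;

    BusinessProcess(ICustomerDAO dao, IView view, IMailSender mailSender) {
        this.dao = dao;
        this.view = view;
        this.mailSender = mailSender;

    void perform() {
        Customer[] customers = dao.getAllCustomers();
        view.setInput(customers);
        mailSender.sendBillingMail(view.getSelected());

public class BusinessProcessTest {
    public static void main(String[] args) {
        final Customer alice = new Customer("Alice");
        final Customer[] sent = new Customer[1];

        // mock DAO returns a canned list; mock view "selects" Alice;
        // mock mail sender just records what it was asked to send
        ICustomerDAO mockDao = () -> new Customer[] { alice };
        IView mockView = new IView() {
            public void setInput(Customer[] customers) { /* would render the list */ }
            public Customer getSelected() { return alice; }
        IMailSender mockSender = c -> sent[0] = c;

        new BusinessProcess(mockDao, mockView, mockSender).perform();

        // the workflow should have mailed exactly the selected customer
        if (sent[0] != alice) throw new AssertionError("billing mail not sent to selected customer");
        System.out.println("test passed");
```

No GUI, database or mail server is needed: the whole workflow, including the "user interaction", runs in plain code.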

By now you have probably got the basics of a testable design. Once you get your design right it becomes fun to write tests for your component. Always think about testability when you design: it will not only make your components testable but also improve their extensibility. It is also a good starting point for a test-first development practice :) ....

Thursday, August 11, 2005

Intermediate Representations vs Software Performance


During the last few days some of my friends at Scali-Norway were writing Data Access Objects (DAOs) to access system data. They were writing their DAOs in C++ and used a third-party driver to access a PostgreSQL database. In the simplest scenario a DAO reads the rows in a table, converts them to an intermediate object representation and returns the collection of objects. The problem they are facing now is poor performance and high memory usage. Going through their code I found they could have avoided these problems had they designed the DAOs correctly in the first place. Anyway, in this post I will discuss how to represent your data in an intermediate representation without compromising performance or memory usage.

To demonstrate the concept I will take a simple example. Assume you have a "customer_data" table. You need to send mails to all the customers who have not paid their bills for the last month. You need to interact with an external billing service to check whether a particular customer has paid or not. Also, for extensibility, you need to wrap each customer's data in a "Customer" object, which is your intermediate representation. Think for a while about a design you would propose to solve this problem....

The approach my friends have taken is as follows:

They have a DAO class which iterates through the database record set and creates a collection of Customer objects, which is returned to the upper layer.

public Customer[] getAllCustomers() {
    List<Customer> customers = new ArrayList<Customer>();
    RecordSet rs = getSQLRecordSet();
    while (rs.hasnext()) { // one full pass just to build the object collection
        customers.add(convertRecordToCustomer(;
    }
    return customers.toArray(new Customer[0]);
}


Then in the business method they iterate through the returned Customer collection and send a mail if a particular customer has not paid his bill.

public void informCustomers() {
    Customer[] customers = dao.getAllCustomers();
    for (Customer customer : customers) { // a second pass over the same data
        if (!billingService.hasPaid(customer)) {
            mailSender.sendBillingMail(customer);
        }
    }
}



If we analyze the above solution, we can see that we go through two loops to perform a single operation. Say there are 1000 records in the result set: first they run a loop of 1000 iterations to convert the record-set rows into the intermediate representation, and then another 1000-iteration loop to perform the actual business process on those records. The worst part is that all the data is loaded into a collection of objects which is then passed to the upper layer. Imagine you have 10M records in your database: you will probably run out of memory trying to perform the above operation.

But the good news is that you can easily find a better solution by applying a simple design pattern. The important point is that the business process doesn't actually need all the Customer objects in memory at once; it needs one at a time. So in our new implementation we write our business logic in a listener class implementing an interface called "CustomerListener", as follows.

interface CustomerListener {
    void onRecord(Customer cust);
}

class BusinessProcessAction implements CustomerListener {
    public void onRecord(Customer cust) {
        if (!billingService.hasPaid(cust)) {
            mailSender.sendBillingMail(cust);
        }
    }
}



Now we call the business process in the following manner. We create an instance of BusinessProcessAction, which deals with one Customer instance at a time and runs the business logic on it. Then we ask the DAO to notify the BusinessProcessAction instance as and when it reads a data record from the database. The code will look like this:

BusinessProcessAction action = new BusinessProcessAction();
dao.process(action); // the DAO calls back into the action for every record

Our DAO acts as a producer of Customer instances. Once we call the DAO's process() method it starts fetching records from the database, converts each record to a Customer instance and then asks our BusinessProcessAction to perform the business logic on that customer.

class DAO {
    public void process(CustomerListener listener) {
        RecordSet rs = getSQLRecordSet();
        while (rs.hasnext()) {
            Customer customer = convertRecordToCustomer(;
            listener.onRecord(customer); // hand each record straight to the listener
        }
    }
}




This will do the job for you. If you look closely at this implementation you can see we have achieved all our objectives:
1) Altogether we run only a single loop of 1000 iterations for 1000 records.
2) We never load more than one record into memory at once.
3) Most importantly, our business method still uses the intermediate representation and is independent of database formats.
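The whole pattern can be sketched end-to-end like this, with a simple in-memory list standing in for the database record set (the class names and the "paid" flag are illustrative):

```java
import java.util.Arrays;
import java.util.List;

class Customer {
    final String name;
    final boolean paid;
    Customer(String name, boolean paid) { = name; this.paid = paid; }

interface CustomerListener {
    void onRecord(Customer cust);

class ReminderAction implements CustomerListener {
    int mailsSent = 0;
    public void onRecord(Customer cust) {
        if (!cust.paid) {
            // a real implementation would call the mail service here
            System.out.println("reminder mail -> " +;
            mailsSent++;

class CustomerDAO {
    private final List<Customer> rows; // stands in for the SQL record set
    CustomerDAO(List<Customer> rows) { this.rows = rows; }

    // single pass: convert each row and hand it straight to the listener,
    // so only one Customer is "live" at a time
    void process(CustomerListener listener) {
        for (Customer row : rows) {

public class ListenerDemo {
    public static void main(String[] args) {
        CustomerDAO dao = new CustomerDAO(Arrays.asList(
                new Customer("Alice", true),
                new Customer("Bob", false),
                new Customer("Carol", false)));
        ReminderAction action = new ReminderAction();
        dao.process(action);
        System.out.println("mails sent: " + action.mailsSent);
```

Running it sends reminders only to Bob and Carol, in one pass, without ever holding the full customer list in a business-layer collection.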

This pattern gives a definite edge especially when record production is asynchronous. Assume in the above example that fetching data from the data source takes 1 hour per record. With the earlier implementation it would take 1000 hours before control is transferred to the business layer. With the second approach the processing is real time: as and when a record is fetched from the database, a mail is sent to that customer.

This example just demonstrates the concept of the pattern. You can extend it to achieve much more flexibility if you think more about a specific problem. If you start looking at problems in relation to this pattern you will be able to save much of the processing time and memory usage of your programs....

Wednesday, August 10, 2005

Extensibility layers of a "SW Product"


A few days ago my friend Senthoor and I were discussing the use of facades for session beans to hide the complexity of EJB 2.x. That discussion made me think more about the architecture of a "product" based software system. Here is a little thought on software product design..

Broadly we can categorize software projects into two categories:
1) Product development
2) Custom service software development

In my opinion, compared to a "service software architecture", we need to pay special attention to some design aspects when designing a "product architecture". During the continuous development stage of a product-based software system,
a) developers need to customize the product for different customers, and
b) the product needs to withstand the technological changes that happen over time.
By paying a little more attention to the following factors in the design phase, we can avoid many extensibility problems in the long run.

The layered aspect of a product architecture needs to be designed to support a high level of extensibility. In a product we should distinguish and identify the kernel layer, the customization layer (business process layer), the service interfaces and the presentation layer for better software management.

The kernel consists of fine-grained operations and will include the "data access layer" and "utility services" (in a J2EE project, entity beans and utility session beans performing atomic operations make up the kernel of the product). For example, "sendSMS(to, message)" and "insertStudent(student)" could be two operations in the kernel components. These operations should be atomic and should not compose business processes.

It is really important to understand the difference between a fine-grained operation and a business process. For example, "insertStudent(student)" is a fine-grained operation whereas "registerStudent(studentData)" is a business process. Your customization layer should compose your business processes. In the above example, the "registerStudent" business process may look like the following:

registerStudent(studentData) {
    Student student = createStudent(studentData);  // build the entity
    insertStudent(student);                        // kernel operation
    sendSMS(, registrationMessage); // kernel operation
}

You can define your transactions and security constraints on your business processes. This is also the layer which may be customized for different customers of your product (for example, a customer may request that a mail be sent to the registered student instead of an SMS). Ideally the kernel should not be touched in customizations and should be managed as a separate sub-project; it can be associated with customizations as binary libraries. This also forces developers to document the components and will improve project documentation :). It is not recommended to build the kernel and your customizations in a single build script. The kernel should still evolve, in its own sub-project, by adding new features required by new customizations.

Another important thing is to hide the technology used in your business layer from the presentation layer. This can be achieved through simple facades. For example, you may write POJO-model facades to hide your session beans, letting developers instantiate the facade with the "new" operator and call business methods. Your facade will hide JNDI lookups, EJB-specific exceptions, etc. This allows developers to concentrate on business logic rather than configuring framework-specific stuff. On the other hand it allows you to switch the technology managing your business components without impacting your presentation layer (e.g. switching from EJB 2.x to EJB3).
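Here is a minimal sketch of the facade idea. To keep it self-contained, a simple map-based "naming registry" stands in for JNDI, and all names (RegistrationService, RegistrationFacade, the binding name) are illustrative. The point is only the shape: the presentation layer uses "new" and plain methods, and never sees lookups or checked exceptions.

```java
import java.util.HashMap;
import java.util.Map;

class LookupException extends Exception {
    LookupException(String msg) { super(msg); }

interface RegistrationService {
    String register(String studentData);

// stands in for JNDI: a name-to-object registry with a checked exception
class Naming {
    static final Map<String, Object> registry = new HashMap<>();
    static Object lookup(String name) throws LookupException {
        Object o = registry.get(name);
        if (o == null) throw new LookupException("not bound: " + name);
        return o;

// the facade: presentation code never sees Naming or LookupException
class RegistrationFacade {
    private final RegistrationService service;

    RegistrationFacade() {
        try {
            service = (RegistrationService) Naming.lookup("ejb/RegistrationService");
        } catch (LookupException e) {
            // wrap the checked lookup exception in an unchecked one
            throw new IllegalStateException(e);

    String registerStudent(String studentData) {
        return service.register(studentData);

public class FacadeDemo {
    public static void main(String[] args) {
        // the container would normally bind the bean; here we do it by hand
        Naming.registry.put("ejb/RegistrationService",
                (RegistrationService) data -> "registered: " + data);

        RegistrationFacade facade = new RegistrationFacade();
        System.out.println(facade.registerStudent("new student"));
```

Swapping the lookup inside the constructor (say, from JNDI to direct instantiation of an EJB3 bean) would not touch any presentation-layer code.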

In this blog posting I tried to discuss some design concepts related to the layered view of a "product" software architecture. In my opinion, even in a "service software" architecture extensibility should be provided to a certain extent, but there you may have to trade it off against cost and performance.....

Monday, August 08, 2005

Creating a remote file system browser, SWT vs Swing


A few days ago some of my friends at Scali AS, Norway were wondering how to write a file-select dialog in Java to browse the file system of a remote machine. They were writing an RCP application based on the Eclipse framework, working with SWT/JFace widgets. The remote machine was a Linux server acting as the front node of a Linux cluster.

Given the problem, my first thought was to extend and override the SWT FileDialog class, or the related content-provider classes, with my own implementation that fetches the remote file structure over SSH. So I went through the SWT source code to find out whether this solution was feasible. As many of us know, SWT uses high-level OS-dependent graphical components which are rendered by the operating system. Therefore I didn't find any way to solve my problem with the SWT libraries: the "open" method of org.eclipse.swt.widgets.FileDialog just calls the native OS method, so I was not able to plug my own file system model into the SWT FileDialog.

So obviously the next choice was the Swing JFileChooser component. As the file browse dialog only needs to be a popup, it is possible to call the Swing file chooser from within SWT view classes with an ordinary Java call.

During the analysis I found that the Swing classes are well designed around the MVC model. Swing components are designed to be cross-platform, so the model hides the platform differences from the view. In the case of the Swing file-select dialog, the view is implemented by the javax.swing.JFileChooser class and the javax.swing.filechooser.FileSystemView class provides the file system model to be used. Swing has different FileSystemView subclasses for different OSes (for example WindowsFileSystemView and UnixFileSystemView).

So it seemed ideal for my requirement. All I need to do is provide my own subclass of FileSystemView (the model) and pass an instance to the JFileChooser (the view) constructor. My subclass of FileSystemView will fetch the remote file structure over SSH and present it according to the FileSystemView interface contract.
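A minimal sketch of that idea looks like the following. The SSH fetching is replaced by a canned directory listing (all file names are made up), and the chooser itself is not shown on screen; the point is only where the custom model plugs in.

```java
import java.io.File;
import javax.swing.filechooser.FileSystemView;

class RemoteFileSystemView extends FileSystemView {

    // in the real dialog this would run a directory listing over SSH
    @Override
    public File[] getFiles(File dir, boolean useFileHiding) {
        return new File[] {
            new File(dir, "data"),
            new File(dir, "results.txt")
        };
    }

    @Override
    public File[] getRoots() {
        return new File[] { new File("/") };
    }

    @Override
    public File createNewFolder(File containingDir) {
        // would issue a remote "mkdir" in a real implementation
        return new File(containingDir, "NewFolder");
    }
}

public class RemoteViewDemo {
    public static void main(String[] args) {
        FileSystemView view = new RemoteFileSystemView();
        // the dialog would be created as: new JFileChooser(view)
        for (File f : view.getFiles(new File("/"), false)) {
            System.out.println(f.getName());
        }
    }
}
```

JFileChooser has a constructor taking a FileSystemView, which is exactly the seam that makes this model swap possible.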

Further analysis showed that the FileSystemView (model) interface relies heavily on instances to model file objects. In my case the File objects would not represent real files on the hosting machine but a logical view of files on the remote machine, so I wanted the File objects to behave in a manner disconnected from the hosting OS.

But unfortunately some methods of, like exists(), are bound to the hosting OS via security constraints. The problem is that these methods are designed to represent only the real file entities of the hosting OS; the class is not designed to work in a disconnected fashion. If the OS-dependent operations had been moved to a utility class and File could be used in a disconnected manner, problems like mine would be solved perfectly. (Someone may argue that is not the OOP way, but it would still make the Java file framework much more extensible.)

Anyway, even with some constraints, it is possible to implement my idea with the Swing JFileChooser where SWT fails to do so. This simply shows the extensibility Swing provides compared to SWT. But SWT has many pros of its own: in my experience SWT provides a better user experience in the sense of performance and look and feel, and its APIs are more natural for simple GUI programming.

Thus, in my opinion, these two technologies can (and should) be used to complement each other, not to substitute for each other....

Sunday, August 07, 2005

J2EE seems to be on the right track with EJB3 architecture


J2EE at its early stages was designed to cater to a world with a component marketplace. The extent to which that objective has materialized is doubtful. Anyway, it is clear that most of the time J2EE is used to solve a complex business problem rather than to write coarse-grained components that can be plugged seamlessly between systems.

If we look at the EJB 2.0 and EJB 2.1 frameworks, they seem to be designed to favour the big J2EE vendors, not the poor J2EE developers :) . The complexity of the platform required complex tools to support development. Furthermore, developers were spending most of their time solving complex J2EE configuration issues rather than solving the business problem at hand.

A few of the most significant drawbacks of the EJB 2.x architecture are:
- the inability to test business components outside the container;
- the dependency-lookup nature of JNDI, which requires a lot of code just to refer to a resource;
- entity beans are not serializable, so DTOs have to be used, causing more coding and worse performance;
- the deployment descriptors are hugely complex and have a steep learning curve;
- making the business component classes completely independent of the EJB framework classes is impossible even with complex patterns, due to the framework abstractions and its weird checked exceptions (for example, you could have written a POJI as a framework-independent interface for your business object and let the EJBObject interface extend it, if RemoteException were an unchecked exception);
- EJB-QL is unable to express many complex querying requirements;
... and more... and more....

Sometimes I wondered whether Sun had any people actually doing projects in J2EE, even among those designing the J2EE architecture. The EJB 2.x design is that cumbersome, and developer frustration was that high, especially when your company doesn't want to spend on complex tools for J2EE development.

Anyway, as time passed, open-source products like Spring for the business tier and Hibernate for persistence came to seem the right tools compared to EJB 2.x. Assembling your own container from Tomcat, Spring and Hibernate seemed the better way for data-centric web projects. Both Hibernate and Spring use the POJO model, making outside-the-container testing much easier (to be more precise, they don't run within the context of a container).

Spring uses the dependency injection pattern to inject any resources required by the components. The setter injection model has proved very useful, especially in unit testing; the concept behind the Spring framework can be demonstrated with a very simple example. The lightweight nature of Spring and Hibernate also attracted most of the frustrated EJB 2.x developers.
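A minimal sketch of setter injection, hand-wired here (Spring would normally do this wiring from configuration; all class names are illustrative):

```java
interface MailSender {
    void send(String to, String body);

class ConsoleMailSender implements MailSender {
    public void send(String to, String body) {
        System.out.println("mail to " + to + ": " + body);

class BillingService {
    private MailSender mailSender;

    // the container (or a test) injects the dependency through this setter
    public void setMailSender(MailSender mailSender) {
        this.mailSender = mailSender;

    public void billCustomer(String customer) {
        mailSender.send(customer, "your bill is ready");

public class SetterInjectionDemo {
    public static void main(String[] args) {
        BillingService service = new BillingService();
        service.setMailSender(new ConsoleMailSender()); // the wiring step
        service.billCustomer("alice@example.com");
```

In a unit test you would call setMailSender with a mock instead of ConsoleMailSender, which is exactly why this style proved so useful for testing.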

But today EJB 3.0 is well designed to solve most of the drawbacks EJB had in its early versions. It has learnt a great deal from Spring and Hibernate. The persistence model is completely redesigned following the Hibernate model. In EJB3 your enterprise beans are POJOs and do not depend on any framework classes. It makes heavy use of annotations and avoids abstractions as much as possible.

In my opinion this trend brings hope of a much better form of framework. If I remember right, a framework is defined as a set of abstractions on which developers can extend. Hibernate and Spring use reflection to avoid framework-specific abstract classes and interfaces; EJB 3.0 uses annotations and avoids framework classes as much as possible. This allows developers to concentrate more on the actual business logic.

In future blogs I will describe how EJB 3.0 integrates the most powerful features of Hibernate and Spring to make our lives easier. To me it seems that with EJB 3.0 the Java community is well armed to face the M$ challenge...