Wednesday, November 30, 2005

Add Business Intelligence to Your System

Recently I was involved in designing a reporting system, and hence have done a fair bit of research on the available reporting mechanisms and frameworks. Although the requirement was to use .NET and Windows technologies, I thought of looking into some other reporting platforms (in the Java and open-source world) as well.

The first thing that came to my mind was the ‘Eclipse BIRT’ framework. I remembered looking at the BIRT proposal and architecture about a year back, and that it proposed a complete reporting life-cycle management system. So I visited the Eclipse site and saw that they have released the first stable version of the system, and it seems very promising. You can access the Eclipse BIRT home page here and a very cool Flash demo here. (Java programmers must see this demo if Microsoft was able to surprise you with their Visual Studio productivity features :D)

The Eclipse BIRT architecture provided me with a very good understanding of the functionality of a reporting system. Then I had a look at ‘Jasper Reports’. Jasper also seems to be a rich reporting framework, and I got a good understanding of reporting internals by looking at the report samples and tutorials of Jasper Reports.

So after looking into a couple of open-source reporting frameworks, I was ready to get to my real job, which was to explore .NET-related reporting technologies. Visual Studio 2005 comes with two major reporting approaches. One approach is to use the ‘Crystal Reports’ version shipped with Visual Studio. This is a limited version compared to ‘Crystal Reports XI’, but good enough for most reporting requirements. The ‘CrystalReportViewer’ component provides easy embedding of reports into web forms or win forms. Basically, if you have good ‘Crystal Reports’ experience and working knowledge of Visual Studio, you will be able to generate quality reports and embed them into your web or Windows application. One great thing I noticed was that the Crystal Reports engine is able to generate quality HTML reports with cross-browser support even for complex report navigation requirements (which ASP.NET 2.0 fails to do in some cases).

The other approach is to use the ‘Microsoft Visual Studio 2005 Reporting’ features with or without ‘SQL Server Reporting Services’. In this approach Visual Studio provides a ‘ReportViewer’ component which can be used either in local mode (no reporting server involved) or in remote mode (with SQL Server Reporting Services running as a back-end report server).

Local mode seems to overlap with ‘Visual Studio 2005 Crystal Reports’ and is good for simple reporting architectures. Local mode gives you more control over the data, as you can bind datasets and custom objects into your reports. If you want a more complex/extensible system with report servers running, you need to look into ‘SQL Server Reporting Services’ and use the ‘ReportViewer’ remote mode. In this mode the data binding and report rendering are done within the report server, and the report is exposed as a web service consumable by ‘ReportViewer’ components embedded in web forms or win forms. SQL Server Reporting Services can also directly expose HTML reports without a separate UI layer in place. Microsoft has made many improvements over the previous ‘SQL Server 2000 Reporting Services’ with the new releases of ‘SQL Server 2005’ and ‘Visual Studio 2005’. Even though these technologies are not as mature as ‘Crystal Reports’, they seem very powerful and robust, given the good industry impression ‘SQL Server 2000 Reporting Services’ has made.

One thing I noticed after doing this research is that reporting technologies have become much more mature today, with all sorts of user/developer-friendly tools and complex/extensible architectural styles. So it seems the true sense of the phrase ‘Business Intelligence’ is not so far from today...

Sunday, November 06, 2005

Automate Your Builds and Tests

This is going to be an ice-breaking post on my blog after a long silence. I couldn't write a post for some time, as I was a bit busy with a lot of work after coming back to Sri Lanka.

Over the last few days I was involved in automating the build process of a J2EE project at Eurocenter. The project basically has a Struts-based web front end, an EJB/JDBC-based db/business tier, and a web service (XML over HTTP) interface for mobile devices. We have come up with an automation plan which will be implemented within the next month, and it seems to be progressing well currently.

So I thought of sharing some details on the automation implementation of our project, as it might help someone else in automating their products. As the first step we have to have a rough plan of our automation process. The plan used in our project goes as follows:

Write a one-step build script: Use Ant or Ant variants to come up with a build file which will generate the complete build in one command. You may even consider using Ruby build scripts (Rake) for this purpose. (But we should be careful in selecting an immature technology for a core part like the build system of a project.)
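A minimal one-step Ant build file might look like the following sketch; the project, directory and jar names here are just illustrative:

```xml
<!-- build.xml: hypothetical one-step build; run with a single "ant" command -->
<project name="myapp" default="dist" basedir=".">
  <property name="src" location="src"/>
  <property name="build" location="build"/>
  <property name="dist" location="dist"/>

  <target name="clean">
    <delete dir="${build}"/>
    <delete dir="${dist}"/>
  </target>

  <target name="compile" depends="clean">
    <mkdir dir="${build}"/>
    <javac srcdir="${src}" destdir="${build}"/>
  </target>

  <target name="dist" depends="compile">
    <mkdir dir="${dist}"/>
    <jar destfile="${dist}/myapp.jar" basedir="${build}"/>
  </target>
</project>
```

Typing `ant` in the project directory then cleans, compiles and packages everything in one step.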

Add unit-test execution to the script: You should carefully select the unit tests which are properly written to run as stand-alone tests. If you have followed my previous postings, I have discussed some techniques explaining how to make your code testable. We can have an Ant task which will run the unit test cases/suites on the compiled code.
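Such a test target could be sketched as below, assuming a build file that already defines a ‘compile’ target and a ${build} property, and that junit.jar is available to Ant; the naming convention for test classes is my own:

```xml
<!-- hypothetical "test" target using Ant's <junit> task -->
<target name="test" depends="compile">
  <mkdir dir="reports"/>
  <junit printsummary="on" haltonfailure="true">
    <classpath>
      <pathelement location="${build}"/>
    </classpath>
    <formatter type="plain"/>
    <batchtest todir="reports">
      <fileset dir="${build}" includes="**/*Test.class"/>
    </batchtest>
  </junit>
</target>
```

With haltonfailure="true" a failing unit test stops the build, which is exactly what we want in an automated cycle.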

Come up with a push-button release: Once we have the build script we can extend it to produce single-command releases. Basically this involves checking out the code from a CVS tag and running the build task. This extended build script can be used by the build manager in producing builds for QA testing as well as production builds.
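The checkout-and-build step can be scripted with Ant's built-in <cvs> task; the repository location, tag and target names below are all made up for illustration, and the checked-out build file is assumed to have a ‘dist’ target:

```xml
<!-- hypothetical "release" target: check out a tagged build and run it -->
<target name="release">
  <cvs cvsRoot=":pserver:build@cvsserver:/cvsroot"
       package="myapp" tag="REL_1_0" dest="checkout"/>
  <ant dir="checkout/myapp" target="dist"/>
</target>
```

The build manager then produces a QA or production build with one command against a known tag, instead of building from whatever happens to be in a working copy.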

Schedule builds with an automation engine: Here we go with automation. First we need to decide on a build engine. In the open-source world there are several good automation engines such as ‘Cruise Control’, ‘AntHill’ and ‘BuildBot’. We have decided to go with ‘Cruise Control’, as it seems to be the most popular choice in the Java community. In this step we configure our build engine to check out all the code from CVS periodically (only if any modification has been done after the last build), build the product and run the unit tests. Once we have completed this step we can track any build failures (such as compile errors) and unit test failures. In our project we have scheduled builds to run every hour, and this helps catch newly introduced bugs within a period of one hour.
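The shape of such a setup, as a fragment of a hypothetical ‘Cruise Control’ config.xml (paths and the project name are illustrative, and element details vary between Cruise Control versions):

```xml
<!-- hourly build: only fires when CVS shows changes since the last build -->
<cruisecontrol>
  <project name="myapp">
    <modificationset quietperiod="60">
      <cvs localWorkingCopy="checkout/myapp"/>
    </modificationset>
    <schedule interval="3600">
      <ant buildfile="checkout/myapp/build.xml" target="test"/>
    </schedule>
  </project>
</cruisecontrol>
```

The modificationset is what keeps the engine from rebuilding an unchanged tree every hour; the quiet period avoids building in the middle of a multi-file commit.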

Establish reporting mechanisms: Throughout the process we should have basic notification mechanisms like email and a web console for reporting build status and failure notifications. All the build engines support these basic reporting mechanisms, and all we have to do is configure those services to suit the need. For example, if a build fails we can configure ‘Cruise Control’ to send notification emails to the developers who have committed code to CVS after the last successful build.
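In ‘Cruise Control’ this is a publishers block in config.xml; the mail host, addresses and URL below are placeholders, and the exact attributes depend on the Cruise Control version:

```xml
<!-- hypothetical publishers block: mail the team when a build fails -->
<publishers>
  <htmlemail mailhost="smtp.example.com"
             returnaddress="build@example.com"
             buildresultsurl="http://buildserver/buildresults/myapp">
    <failure address="dev-team@example.com"/>
  </htmlemail>
</publishers>
```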

Automate integration testing: Here we attempt sub-component integration in a test environment. In our project we wanted to make sure that the product components integrate and are deployable on the server. The integration testing will ensure that all the components are properly deployable (e.g. EJB testing) and that the component communication channels (e.g. JMS queue testing, database connections) are established correctly. Most of the tests in this stage can be considered in-container testing, and test frameworks like ‘Cactus’ are good candidates to support the testing in this stage. In this stage we run ‘JBoss’ as our application server, then deploy our application there and run the integration test scripts.
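Before the heavier in-container tests run, it is worth a cheap smoke check that the server's communication channels are even reachable. This sketch uses nothing but the JDK; the class name, host and port are my own choices, not part of our project:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Minimal "is the component up?" check run before the integration suite;
// returns true only if a TCP connection to the given port succeeds.
public class PortCheck {
    public static boolean isPortOpen(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // e.g. verify the app server's JNDI port before running EJB tests
        System.out.println(isPortOpen("localhost", 1099, 500));
    }
}
```

Failing fast here gives a clear "server is not up" message instead of a pile of cryptic lookup errors from every integration test.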

Automate functional/acceptance testing: In this stage we add end-to-end testing to our automation. In our case this basically means performing tests on the web UI and web service interfaces. These test cases are written based on end-user actions (based on use cases). An example test case is ‘a user logs in to the system and changes his password’. In our product, ‘HttpUnit’ test cases are written to simulate and validate user actions. Wherever response time is critical we may add performance test scripts written using ‘JMeter’, ‘JUnitPerf’, etc. to ensure system response time.
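‘HttpUnit’ drives the real web tier, but the overall shape of such a test can be sketched with only the JDK's built-in HTTP pieces. Everything here is made up for illustration (the /login path, the page content, and the class name); a stand-in server plays the web tier so the sketch is self-contained:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

// Sketch of an end-to-end web test: stand up a fake "web tier",
// fetch the login page the way a browser would, assert on the response.
public class LoginPageTest {

    static boolean loginPageShowsWelcome() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/login", ex -> {
            byte[] body = "Welcome back!".getBytes("UTF-8");
            ex.sendResponseHeaders(200, body.length);
            try (OutputStream os = ex.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        try {
            URL url = new URL("http://localhost:"
                    + server.getAddress().getPort() + "/login");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            try (InputStream in = conn.getInputStream()) {
                String page = new String(in.readAllBytes(), "UTF-8");
                return page.contains("Welcome back!");
            }
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loginPageShowsWelcome()); // prints "true"
    }
}
```

An HttpUnit version of the same idea would use its WebConversation API against the deployed application instead of a stand-in server, but the test structure (request, response, assert on content) is the same.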

Add code analyzers: Here, with each build, a code analyzer will inspect the code for “smelly” code pieces. With each build the team lead will get a code analyzer report on any added/changed code fragments.
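As one concrete option, ‘Checkstyle’ ships an Ant task that slots straight into the build file; the jar name, config file and report path below are placeholders that depend on the Checkstyle version in use:

```xml
<!-- hypothetical "analyze" target using the Checkstyle Ant task -->
<taskdef resource="checkstyletask.properties"
         classpath="lib/checkstyle-all.jar"/>

<target name="analyze">
  <mkdir dir="reports"/>
  <checkstyle config="sun_checks.xml" failOnViolation="false">
    <fileset dir="${src}" includes="**/*.java"/>
    <formatter type="xml" toFile="reports/checkstyle_report.xml"/>
  </checkstyle>
</target>
```

With failOnViolation="false" the analyzer reports problems without breaking the build, which suits a report-to-the-team-lead workflow.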

After coming up with an automation plan we can start implementing it. Once implemented, we get a set of actions which run in sequence in every scheduled build cycle. To give you a better understanding, I will summarize the action steps we have in our project build cycle.

1. Check out sources from CVS
2. Build the application from scratch
3. Execute build verification tests (unit tests)
4. Run code analyzer tool scripts
5. Install a ‘JBoss’ server instance
6. Configure the ‘JBoss’ instance
7. Create the test database
8. Populate the test database with test data
9. Deploy the product over JBoss/the file system
10. Start up the server
11. Monitor log files for successful server startup
12. Execute integration test scripts
13. Execute acceptance test scripts
14. Shut down the server
15. Clean up the build resources
16. Report build/test results
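The log-monitoring step of the cycle above can be sketched as a tiny JDK-only poller. The log file name and the ‘Started in’ marker are assumptions about the server's startup message, and the class name is my own:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Blocks the build cycle until a log file shows the server has started,
// or gives up after a fixed number of attempts.
public class LogMonitor {

    // Pure check: does this log line signal a completed server start?
    static boolean isStartupLine(String line) {
        return line != null && line.contains("Started in");
    }

    static boolean waitForStartup(String logFile, int attempts, long sleepMillis)
            throws InterruptedException {
        for (int i = 0; i < attempts; i++) {
            try (BufferedReader reader = new BufferedReader(new FileReader(logFile))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    if (isStartupLine(line)) {
                        return true;
                    }
                }
            } catch (IOException e) {
                // log file may not exist yet; keep polling
            }
            Thread.sleep(sleepMillis);
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        // e.g. poll the server log every 5 seconds for up to 5 minutes
        System.out.println(waitForStartup("server.log", 60, 5000));
    }
}
```

In the real build this sits between the server-startup and integration-test steps, so the test scripts never run against a half-started server.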

Once we have all the basics in place, it is time to play around with extreme automation experiments. For example, I have heard of some project teams having flashing red bulbs in the development environment if a build fails. Trying these kinds of extreme practices can bring team morale up towards testing/automation, as well as attract the attention of the rest of the project teams (in a good sense).