Sunday, December 28, 2008

What is AJAX?

Traditionally, web development relied on server-side technologies like servlets and JSP pages and client-side technologies like JavaScript. The common characteristic of these technologies was that the web page had to be redisplayed in its entirety whenever the server needed to communicate any change. Asynchronous JavaScript and XML (AJAX) extends these web technologies to facilitate client-server communication without reloading the current page. The XMLHttpRequest object enables communication with server-side scripts from within JavaScript, allowing portions of a page to be updated in response to user events. A good example of AJAX is Google search, which offers a list of context-specific suggestions as we type in the search bar, without waiting for a page reload. Under the AJAX umbrella, web technologies like XHTML, CSS, the DOM, XML, XSLT, JavaScript and XMLHttpRequest work together to enhance the user experience through dynamic updates.
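
As a minimal sketch of the first step in practice, the snippet below creates the XMLHttpRequest object in a cross-browser fashion; older versions of Internet Explorer expose it as an ActiveX control rather than a native object. The function name createXHR is just a label of my choosing.

    function createXHR() {
        if (window.XMLHttpRequest) {
            // Standards-compliant browsers (and IE7 onwards) provide a native object
            return new XMLHttpRequest();
        }
        if (window.ActiveXObject) {
            // Older Internet Explorer exposes it as an ActiveX control
            return new ActiveXObject("Microsoft.XMLHTTP");
        }
        return null; // the browser does not support AJAX
    }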

The classic web application versus AJAX application comparison diagram makes the approach clearer. The key addition is the AJAX engine, which sits between the user interface and the server and governs asynchronous communication, eliminating the waits of the traditional model while the server responds to an HTTP request. The linchpin of the AJAX engine is the XMLHttpRequest object, which now exists in all major browsers. As far as the server is concerned, it is not even aware that it is dealing with a browser sending asynchronous requests. It is normally expertise in handling the methods and properties of the XMLHttpRequest object that sets AJAX developers apart from traditional developers versed in DHTML. The steps in an AJAX operation are: a client event occurs; an XMLHttpRequest object is created and configured; it sends an asynchronous request; the server responds; the callback function of the XMLHttpRequest object processes the server response; and, finally, the HTML DOM is updated.
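
The sketch below walks through those steps for a Google-suggest-style search box. It assumes a hypothetical server-side script, suggest.jsp, that returns a fragment of text, and page elements with the ids "searchBox" and "suggestions"; all three names are placeholders for illustration, and createXHR comes from the earlier snippet.

    // Step 1: a client event occurs (a keystroke in the search box)
    document.getElementById("searchBox").onkeyup = function () {
        // Steps 2 and 3: create and configure the XMLHttpRequest object
        var xhr = createXHR();
        xhr.open("GET", "suggest.jsp?q=" + encodeURIComponent(this.value), true); // true = asynchronous
        // Steps 5 to 7: the callback processes the server response and updates the DOM
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                document.getElementById("suggestions").innerHTML = xhr.responseText;
            }
        };
        // Step 4: send the asynchronous request; control returns to the user immediately
        xhr.send(null);
    };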

Saturday, December 27, 2008

Heroic programming is inadequate

Heroic programming has connotations of a Herculean effort by a single developer who assumes all the roles of analyst, designer, programmer, tester and so on to deliver a piece of software. The feasibility of achieving this feat declines rapidly as the project acquires complexity or size. Whatever his desire to deliver and undoubted expertise, it is not humanly possible to understand and absorb the problem domain of a relatively large and complex project in a limited time. In fact, it is inconceivable that a project requiring many man-years could ever meet the quality metrics of useful, usable, reliable, flexible, available and affordable software without a proven development methodology, as the potential for errors increases with size and complexity. Even for a small system, where the approach can work, it is questionable whether maintenance would not become an issue in the absence of documentation, which is a by-product of a proper development approach in which the deliverables of one phase feed into the next, producing traceability.

Risk management through iteration

Risk manifests itself in various ways: the risk of missing a business deadline or delivery; risks at all stages of the development cycle, such as poor understanding of requirements, inflexible and unresponsive design, developers' unfamiliarity with the problem domain or with new technology, unavailability of required resources and attritional team dynamics; risks attached to budgetary, organisational and time constraints; and risks stemming from size, complexity and change. Mitigating these risks and delivering a system to budget, on time and to specification is the essence of project management, so, unsurprisingly, risk management should be a core concept in development methodologies.

Iteration refines understanding through feedback and reduces risk. The completeness and accuracy of requirements capture can be checked through prototyping, reducing the risk of project failure, and early discovery of problems is cheap. Algorithms, workflows, the human-computer interface, alternative designs, stress testing, package suitability and so on can all be prototyped and checked against objectives until they meet the need. Complexity and size never become overwhelming, because the iterative approach spreads the work across multiple development cycles. Nor should the final solution come as a surprise to users, as it will have been implemented in stages and with user approval.

Friday, December 26, 2008

Challenges of cloud computing

The readiness to veer in the direction of cloud computing in the current economic climate is understandable. It makes sense to trade fixed cost for variable cost and shift from capital expenditure to operational expenditure. There are a number of CRM and other application players online which can readily meet our functional needs. So opting for cloud computing looks like a no-brainer, but it is not a panacea. Apart from the normal operational concerns about training, regulatory compliance, security, connectivity, the fragmented nature of offerings and so on, a big question remains about populating these cloud offerings with the data from our existing systems. We need to think through clearly how we are going to perform bulk data updates and, eventually, extract the data from these cloud offerings for analysis in other systems. The unavailability of an API can further complicate integration with other systems. The problem is compounded if we have to move data from one cloud-based offering to another.

A useful chart of the cloud computing landscape gives a clear picture of the players and the utilities, services and applications they offer.

There are many similarities between the SaaS offerings of today and the bureau services of yore, but we should appreciate the differences. Computer bureaux date back to the era when computers were expensive, and batch processing and dumb terminals multiplexed into a central mainframe to maximise usage were the norm. A discrete service like payroll or accounting was offered on a centralised server, on a time-sharing basis, to multiple clients, and client data was transferred on magnetic tapes and disks for batch processing. Charging was usage-based, for the computer time consumed. SaaS offerings, by contrast, provide web-based, installation-free access to managed services on centralised hosts, delivering integrated applications such as enterprise resource planning or customer relationship management systems (SAP, Salesforce etc). These are truly distributed offerings, whereby data from the central repository can be manipulated on the local PC, and charging is normally based on user population and concurrent users. The motivation in the 1960s and 70s was the sharing of expensive resources, but nowadays concerns like availability, scalability, reliability and security are paramount. The disjointed, slow, batch-oriented and cumbersome approach of the bureaux has acquired 24x7 availability, responsiveness and seamless integration in the SaaS world. The whole burden of license management, version control, resilient configuration, secure access, disaster recovery and so on devolves on the ASP, and is more complex nowadays. We have moved from the pseudo-parallelism of the bureau to the distributed, concurrent environment of SaaS.

The complexity involved in deploying and upgrading software in a distributed environment, the consequent difficulties in negotiating licenses, interoperability issues, the ubiquity of the browser-based client, fast and cheap communication, affordable scalability, the trend towards outsourcing and so on have all aided the drive towards SaaS. Hardware and software technology is seen as a purchasable commodity, and organisations prefer to concentrate on their core competencies, expecting a secure and resilient service from the experts. The ASPs, for their part, feel confident that benchmarks exist to deliver the requisite concurrency and performance from their server farms, allowing them to focus on their domain expertise. This approach is cheaper than an in-house solution, but careful operational planning is required to make it a success.