There are many suppliers, including household names such as Google, Amazon and Microsoft, offering instant accessibility as well as scalability. These services are offered on a pay-as-you-go basis to enable cost-effective delivery. With all this on offer, why would companies continue to deliver infrastructure in house? And with these changes to how key business applications can be delivered, what does this mean for the test manager who is accountable for upholding the quality of such deliverables?
Cloud computing services can be classified as public, private, or hybrid. As mentioned, a number of providers have entered the market with service offerings on which to build business solutions, including the following:
- Infrastructure as a Service (IaaS)
- Software as a Service (SaaS)
- Platform as a Service (PaaS)
These services are already being embraced by many companies including Domino’s and Mazda to name just two.
So what does this mean? In reality, this may not be a huge step away from solutions that have been hosted in dedicated third-party data centres. The reason is the increased capability for hardware virtualisation, or platform virtualisation: the creation of a virtual machine that acts like a real computer with an operating system. Software deployed on virtual machines is separated from the underlying hardware resources. It is not just the cloud that can deliver this architecture; however, cloud customers benefit from "pay as you go" pricing models based on usage rather than a constant service charge.
With the infrastructure headache reduced, organisations can focus on the delivery of quality business driven applications. For example, Transport for London have taken advantage of this opportunity to make the daily commute easier for many people with the introduction of the contactless payment system.
Testing applications deployed across the cloud should be no different from a functional perspective. However, there are other aspects of testing that need to be taken into consideration for a complete release. Some of these are outlined below.
1. Performance Testing
Performance testing over a known infrastructure that is a representation of production will give a clear indication as to how the solution will perform based on volume, load and stress. By conducting analysis of the target infrastructure and the non-functional requirements a set of tests can be devised and executed. Where necessary, load injectors create the load and the system performance is monitored during the testing.
Deploying cloud based solutions introduces a new set of variables into the mix. Some of the considerations to be included are:
- As the solution is deployed on a shared infrastructure there may be other resources impacting performance.
- Where would the load injectors be located to give the most representative view of the performance profile?
- The results will need analysis, and full fault diagnostics must be completed over an infrastructure that is not completely isolated. However, this may well be representative of the target operating model, and the overall performance mix is valid as long as it is known.
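The load-generation step described above can be sketched in a few lines of Python. This is a minimal illustration, not a real load-test harness: `send_request` is a hypothetical stub standing in for an HTTP call to the system under test, and the concurrency and request counts are arbitrary.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import mean, quantiles

def send_request() -> float:
    """Hypothetical call to the system under test; here a stub that
    simulates a round trip and returns its latency in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for the real request/response cycle
    return time.perf_counter() - start

def run_load_test(concurrent_users: int, requests_per_user: int) -> dict:
    """Drive the endpoint at a fixed level of concurrency (the 'load
    injector' role) and summarise the observed latencies."""
    total = concurrent_users * requests_per_user
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(send_request) for _ in range(total)]
        latencies = [f.result() for f in futures]
    return {
        "requests": len(latencies),
        "mean_s": mean(latencies),
        "p95_s": quantiles(latencies, n=20)[18],  # 95th percentile
    }
```

In a cloud context, the interesting variable is where this injector runs: the same sketch executed from different regions, or from inside and outside the provider's network, will produce different latency profiles for the same solution.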
2. Security Testing
Architectural compliance with security controls is most readily defined on a closed infrastructure where there is physical security. Across a shared, cloud-based architecture there are unknowns, just as there are with performance testing. For the solution under test there is a need to ensure full compliance with the security policy and that no loopholes expose vulnerabilities. Even so, many finance and commerce solutions are cloud hosted, and it is very much in the interest of the cloud provider to work with clients to ensure that the desired levels of security compliance are achieved.
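One small, automatable slice of security-policy compliance is checking that responses carry the HTTP security headers the policy mandates. The sketch below assumes a hypothetical baseline of three common headers; a real policy would define its own list and values.

```python
# Hypothetical security baseline: headers the policy requires, and the
# value each must carry (None means any value is accepted if present).
REQUIRED_HEADERS = {
    "Strict-Transport-Security": None,
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
}

def audit_headers(response_headers: dict) -> list:
    """Return a list of policy violations found in a response's headers.
    Header names are compared case-insensitively, as in HTTP."""
    violations = []
    lowered = {k.lower(): v for k, v in response_headers.items()}
    for name, required in REQUIRED_HEADERS.items():
        actual = lowered.get(name.lower())
        if actual is None:
            violations.append(f"missing header: {name}")
        elif required is not None and actual != required:
            violations.append(f"{name}: expected {required!r}, got {actual!r}")
    return violations
```

A check like this runs identically against an internal deployment or a cloud one, which is precisely the point: the policy is the fixed reference even when the hosting is not.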
3. Configuration / Replication
When operating on an internal infrastructure there are a number of known quantities that contribute to the configuration of the solution: the bare metal, the versions of the operating systems and commercial off-the-shelf (COTS) software, and the version of the software under test. Even with all these knowns, there have been cases where the application behaves differently from one environment to another. This challenge can be addressed with tools such as Jenkins that enable hands-off deployment. When considering application deployment in the cloud, a full understanding of the entire configuration is required to ensure consistent builds and application integrity.
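One simple way to make "consistent build" checkable is to fingerprint each environment's configuration and compare the fingerprints. The sketch below hashes a configuration dictionary; the keys shown (OS, runtime, application version) are illustrative, not a prescribed schema.

```python
import hashlib
import json

def config_fingerprint(environment: dict) -> str:
    """Produce a stable hash of an environment's configuration so two
    deployments can be compared for drift. Keys are sorted so the
    fingerprint does not depend on dictionary ordering."""
    canonical = json.dumps(environment, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def environments_match(env_a: dict, env_b: dict) -> bool:
    """True when two environments have identical configuration."""
    return config_fingerprint(env_a) == config_fingerprint(env_b)
```

Run against the test and production environments, a mismatch immediately flags the "works in one environment, not the other" class of problem before the application is even exercised.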
4. System connectivity
As with understanding the details of a known infrastructure, the connectivity between applications requires validation. Enterprise applications can be hosted across different cloud providers, or a hybrid of external and internal infrastructure, based purely on decisions about where to deploy made at the time of go-live. In all cases the connectivity and integration will need to be validated for data integrity and completeness, as well as resilience and performance.
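Data integrity and completeness across an integration boundary can be checked by summarising each side's dataset and comparing the summaries. This is a minimal sketch, assuming the records from each system have already been fetched and are compared by their textual representation:

```python
import hashlib

def dataset_digest(records: list) -> tuple:
    """Summarise a dataset as (record count, order-independent checksum).
    Each record is hashed via its repr, and the per-record hashes are
    sorted so delivery order does not affect the result."""
    digests = sorted(
        hashlib.sha256(repr(r).encode("utf-8")).hexdigest() for r in records
    )
    combined = hashlib.sha256("".join(digests).encode("utf-8")).hexdigest()
    return len(records), combined

def transfer_is_complete(source_records: list, target_records: list) -> bool:
    """True when the target system holds exactly the source's records."""
    return dataset_digest(source_records) == dataset_digest(target_records)
```

The same comparison works whether the two sides are two cloud providers, or an internal system and an external one, which matches the hybrid deployments described above.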
5. Compatibility Testing
This is not strictly a challenge specific to cloud-based delivery; however, it is amplified by the increase in cloud delivery. Internal infrastructure deployments are usually limited to a small number of device and browser types, so only a finite set of validation is required. Increasing the availability of applications to a wider customer base requires further testing to ensure complete compatibility with the multitude of desktop and mobile operating systems and browsers. This new problem has seen the evolution of new solutions, and a number of tools are available to incorporate browser-compatibility checks into testing.
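Before reaching for a compatibility tool, it helps to enumerate the matrix of combinations the testing must cover. The sketch below builds such a matrix from illustrative browser and platform lists; the exclusions are hypothetical examples of pairs that are either unavailable or out of scope for a given project.

```python
from itertools import product

BROWSERS = ["Chrome", "Firefox", "Safari", "Edge"]    # illustrative only
PLATFORMS = ["Windows", "macOS", "Android", "iOS"]    # illustrative only

# Hypothetical exclusions: combinations not shipped or out of scope.
EXCLUDED = {
    ("Safari", "Windows"),
    ("Safari", "Android"),
    ("Edge", "iOS"),
}

def build_matrix() -> list:
    """Enumerate the browser/platform pairs a compatibility run must
    cover, skipping the excluded combinations."""
    return [
        (browser, platform)
        for browser, platform in product(BROWSERS, PLATFORMS)
        if (browser, platform) not in EXCLUDED
    ]
```

Even this toy matrix yields thirteen combinations from four browsers and four platforms, which illustrates why the wider reach of cloud-delivered applications pushes teams towards tooling rather than manual cross-browser checks.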
6. Change management
Hosting agreements with suppliers are based on relationships, and sometimes those relationships change. When this occurs there may be a migration of the application base between cloud service providers, or even back in house. When it does, the level of involvement required in planning and testing should not be underestimated.
So what does this mean?
There is a lot of focus on cloud computing, and not without reason: there are many advantages to be gained by businesses adopting this model. Is it really new? It is more an extension of what business has already been doing, further enabled by continuing technology advances. From a testing perspective we are in a similar position, as the testing is an extension of what we already do in terms of understanding the risks around the overall delivery. The accountability of testing is to ensure that identified risks are mitigated through the delivery of a quality testing service.