Master application scalability in cloud computing alongside devs

Just because the cloud can scale almost without limit doesn't mean every component in an application should. Teams waste money and degrade app performance when ops isn't involved in design and testing.

By the time an application reaches production, it's too late to fix scalability issues. Instead, focus on a communicative development-operations partnership and on integration testing.

Scalability in cloud computing enables users to expand resource consumption as load increases, but generalized resource growth isn't enough. Not all components of an application need to multiply equally, and there are negative consequences for scaling components that aren't strained. Intelligent scaling only grows the resources that support overloaded application components. Operations teams need to engage with developers early in the design process about application scalability and nail down when and how components spin up. These teams should work together through integration tests to ensure an app retains performance and reliability as it scales to meet demand.

Application scalability is tricky business

This example shows what can go wrong with scaled resources: Two workers in different branch offices start a transaction at about the same time to sell the same product. The transaction service checks inventory, discovers one unit on hand and enters the order. In the cloud, two instances of the same application support these two workers. Each instance checks inventory, finds one item and places an order, but in the second case, the inventory level doesn't yet reflect that the first order is in process. Now, the second order can't be fulfilled.
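
The collision is easy to reproduce, and one common guard against it is to make the inventory check and decrement a single atomic operation. The sketch below illustrates that idea with an assumed table layout and a hypothetical place_order() helper; it is an illustration of the race, not a prescription from this article.

```python
import sqlite3

# Minimal sketch of the oversell collision and a conditional-update guard.
# Table and column names (inventory, product_id, on_hand) are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (product_id TEXT PRIMARY KEY, on_hand INTEGER)")
conn.execute("INSERT INTO inventory VALUES ('widget', 1)")
conn.commit()

def place_order(conn, product_id, qty=1):
    # The WHERE clause makes check-and-decrement one atomic statement,
    # so a second instance that races this one simply matches zero rows.
    cur = conn.execute(
        "UPDATE inventory SET on_hand = on_hand - ? "
        "WHERE product_id = ? AND on_hand >= ?",
        (qty, product_id, qty),
    )
    conn.commit()
    return cur.rowcount == 1   # True if the order can be fulfilled

print(place_order(conn, "widget"))  # True: the first branch office gets the item
print(place_order(conn, "widget"))  # False: the second order is rejected, not oversold
```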

Elastic scaling creates an unpredictable number of application component instances, and these instances are not necessarily aware of each other. Independent component scaling poses major challenges with this kind of collision, but traditional practices typically address only duplexed or N+1-redundant components. Cloud bursting is achieved via container hosting, virtualization and private cloud tools, and cloud scaling can come from public cloud autoscaling functions and hybrid cloud managers. There are likely hundreds of tools to achieve bursting and scaling, and these tools typically do not expect application components to be aware of the bursting process.
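
To see why, consider the shape of a typical autoscaling loop. The sketch below is not any specific vendor's API; the metric source and instance-management functions are hypothetical. Its point is that scaling decisions are made outside the application, from utilization metrics alone, and new instances are never told about their siblings.

```python
import time

TARGET_CPU = 0.70           # assumed threshold: scale out above this average utilization
CHECK_INTERVAL_SECONDS = 60

def autoscale_loop(get_avg_cpu, launch_instance, terminate_instance, instances):
    """Hypothetical external scaler: get_avg_cpu, launch_instance and
    terminate_instance are placeholders supplied by the hosting platform."""
    while True:
        cpu = get_avg_cpu(instances)
        if cpu > TARGET_CPU:
            # The new copy starts with no knowledge of its siblings; nothing
            # here coordinates inventory checks or any other shared state.
            instances.append(launch_instance())
        elif cpu < TARGET_CPU / 2 and len(instances) > 1:
            terminate_instance(instances.pop())
        time.sleep(CHECK_INTERVAL_SECONDS)
```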

The IT operations team tracks which workloads are heavily used or overutilized and where hosting resources should be scaled up to accommodate demand, but operations can't ensure a scalable architecture for the application or fix application components that don't scale efficiently. One tenet of DevOps is to translate developer requirements for application deployment and management into operations terms. What translates operations requirements -- namely, scalability in cloud computing environments -- back to developers? And what specifics on application scalability and infrastructure flexibility should operations give developers?

The developer's role in app scaling

Application developers must understand the context in which the software is used. Not every transaction is a collision risk -- only those services that attempt to update related database elements. Some applications require stateful load balancing to ensure that all the messages associated with a given transaction go to the same processing component. Others require state control so that components can behave like stateless functional components or microservices. Context awareness for scaled components can address performance and functionality concerns as well.
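
A rough sketch of that kind of transaction affinity, assuming a message format with a transaction_id field and illustrative instance names: every message in a transaction is routed to the same instance. Note that plain hash-modulo routing reshuffles affinity whenever the instance count changes, which is exactly why elastic scaling makes state handling harder.

```python
import hashlib

# Sketch of transaction-affinity routing. The message format and the
# instance names below are illustrative assumptions.
INSTANCES = ["order-svc-0", "order-svc-1", "order-svc-2"]

def route(message: dict) -> str:
    # All messages that carry the same transaction_id hash to the same
    # backend instance, so per-transaction state stays in one place.
    digest = hashlib.sha256(message["transaction_id"].encode()).hexdigest()
    return INSTANCES[int(digest, 16) % len(INSTANCES)]

msg_a = {"transaction_id": "txn-42", "step": "check-inventory"}
msg_b = {"transaction_id": "txn-42", "step": "confirm-order"}
assert route(msg_a) == route(msg_b)  # both steps land on the same instance
```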

These are issues that only developers can address. IT operations can scale up available cloud resources to underpin a software component, but there's no assurance the application will perform better. Developers must know how to design for application scalability and which components require it. Adding support for scaling where none is expected -- or useful -- will increase development cost and time and also potentially degrade application performance. The problem is especially acute when components are shared across multiple applications; one development team is not necessarily aware of the others who use the same component.

Deployment ranges and integration tests

Any attempt to resolve application scalability issues after development, when the software is in production, is inefficient at best and totally impractical in many cases. Instead, give developers operations feedback on scalability assumptions early in development, then validate those assumptions before production. The most convenient stage in the application development cycle for this meeting of the minds is integration testing.

Developers create scalable architectures in various ways. For example, a microservices and container-based architecture naturally encourages independent scaling. Once developers have an idea of what to scale and how, it's appropriate to talk with IT operations to establish components' likely deployment parameters: within a data center, between data center and cloud, across cloud providers or strictly in one cloud provider.
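
One lightweight way to capture that agreement is a shared record of each component's scaling behavior and deployment range. The sketch below is purely illustrative; the component names, fields and range categories are assumptions, not a standard format.

```python
from dataclasses import dataclass
from enum import Enum

class DeploymentRange(Enum):
    SINGLE_DATA_CENTER = "within a data center"
    DATA_CENTER_AND_CLOUD = "between data center and cloud"
    SINGLE_CLOUD = "strictly in one cloud provider"
    MULTI_CLOUD = "across cloud providers"

@dataclass
class ScalingProfile:
    component: str
    scales_independently: bool
    deployment_range: DeploymentRange
    max_instances: int

# Hypothetical components, recorded jointly by dev and ops.
profiles = [
    ScalingProfile("order-frontend", True, DeploymentRange.DATA_CENTER_AND_CLOUD, 20),
    ScalingProfile("inventory-db", False, DeploymentRange.SINGLE_DATA_CENTER, 1),
]
```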

The infrastructure range over which an app must scale establishes how sensitive the component is to transit delay in network connections, spin-up delay for new instances and other practical performance factors. If the development goal for scalability can't be met operationally, then the development plan or deployment plans must adapt. Network connectivity, compliance and governance for the deployment, or even cloud provider choice could change.
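
A back-of-the-envelope check makes the point. All numbers below are hypothetical; the question is whether a component bursting to a distant site can still meet its response-time target once transit delay is added.

```python
# Hypothetical figures for one request through a scaled component.
TARGET_RESPONSE_MS = 200          # assumed service-level target
IN_DC_TRANSIT_MS = 2              # assumed LAN round trip inside the data center
CROSS_CLOUD_TRANSIT_MS = 45       # assumed WAN round trip to the burst site
PROCESSING_MS = 120               # assumed per-request compute time

def response_time(transit_ms: float) -> float:
    # Two network legs (request in, response out) plus processing;
    # instance spin-up delay would count against the budget the same way.
    return 2 * transit_ms + PROCESSING_MS

print(response_time(IN_DC_TRANSIT_MS))        # 124 ms: within the target
print(response_time(CROSS_CLOUD_TRANSIT_MS))  # 210 ms: the burst instance misses it
```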

Integration testing is the first opportunity for developers and operations specialists to view the information flows associated with componentized applications and examine how scalability influences app performance and stability. Testers combine individual application components to gauge how they work in actual workflows. Integration tests can expose scaling issues in isolated application components, as well as higher-level problems. Integration testing must mimic actual production deployment as closely as possible.
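
Part of mimicking production is exercising more than one instance of a component against shared state. A minimal pytest-style sketch, reusing the hypothetical conditional-update guard from the earlier example, might look like this:

```python
import sqlite3

def make_inventory(on_hand: int) -> sqlite3.Connection:
    # Assumed schema shared by every simulated instance.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE inventory (product_id TEXT PRIMARY KEY, on_hand INTEGER)")
    conn.execute("INSERT INTO inventory VALUES ('widget', ?)", (on_hand,))
    conn.commit()
    return conn

def place_order(conn, product_id, qty=1):
    # Same hypothetical atomic check-and-decrement guard as earlier.
    cur = conn.execute(
        "UPDATE inventory SET on_hand = on_hand - ? "
        "WHERE product_id = ? AND on_hand >= ?",
        (qty, product_id, qty),
    )
    conn.commit()
    return cur.rowcount == 1

def test_two_instances_cannot_oversell():
    shared_inventory = make_inventory(on_hand=1)
    # Both "instances" are callers sharing one backing store, mimicking
    # two scaled copies of the transaction service.
    instance_a_result = place_order(shared_inventory, "widget")
    instance_b_result = place_order(shared_inventory, "widget")
    assert instance_a_result is True
    assert instance_b_result is False  # second order rejected, not double-sold
```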

Feature developers and app owners tend to forget that a deployed component has to be load balanced and connected into workflows. Operations aims to deploy applications in ways that optimize hosting resources, network connectivity and other considerations, but how application components connect, maintain state and avoid collisions when updating databases is out of operations control. There's little difference a management tool can make once applications are already built. And if the best deployment scenario isn't good enough, there's no way to retool it to mask an unsuitable architecture. It's too late.

Make integration testing a proving ground for future production systems, and from that test environment, foster a development-operations partnership to handle the challenges that arise when applications scale on their own.
