Real-time app development helps minimize delays
The differences between real-time and near real-time application development are invisible to the naked eye, but everyone suffers when application execution is delayed.
With the advent of microservices, the most common approach to building cloud applications involves splitting an application into components that run in separate environments. This approach is ideal from a maintenance, scalability and development perspective, but it can slow down a single transaction.
Developers can build real-time applications, which respond within 100 milliseconds, and near real-time applications, which respond within a few seconds, for workloads that need to be processed expediently. Near real-time is fast enough for many applications, but it may be too slow for apps running in financial institutions or media companies. In this tip, we examine how to approach real-time app development for a blogging platform running on AWS.
Getting input from the user
When a user creates a new post, they submit it through an HTML form, which notifies the back-end application that there is new content to process and display. That application might also need to run some text analysis, identify what type of image to associate with the post and then automatically post it to social media.
It makes sense to divide the application into separate parts so it can scale to millions of users writing posts simultaneously. When the user hits Submit, the platform immediately saves the content and notifies multiple services. In real-time app development, this notification is known as an event.
The listener or event pattern
The use of event-driven applications is becoming more prevalent. With the rise in popularity of Node.js, more developers are learning the concept of event emitters. In Node.js, many objects immediately notify blocks of code after completing a task. These blocks of code are listeners; the pattern that developers use with listeners is very similar to the one used by applications built on Amazon Simple Queue Service (SQS), Amazon Simple Notification Service (SNS) or Amazon Kinesis. For example:
const EventEmitter = require('events');
const event = new EventEmitter();

// Each listener runs when the 'ready' event is emitted
event.on('ready', postToTwitter);
event.on('ready', postToFacebook);
event.on('ready', postToSnapchat);
event.on('ready', addImage);

event.emit('ready', articleData);
Batch processing with Amazon SQS
Most developers handle events asynchronously, which helps make apps scalable but creates additional management challenges. On the blog platform, if an asynchronous process is set up to read the story and post it, the whole process needs to happen nearly instantly. Bloggers who write and submit a post don't want to wait to see it published. There is some room for a delay as the browser reloads, but many users won't tolerate anything that takes longer than about 500 milliseconds (ms).
IT teams can encounter delays with real-time app development. For example, if an event is delivered to SQS, there is an automatic delay -- anywhere from 100 ms to several seconds -- before it is read. SQS is designed for batch processing, which helps when dealing with asynchronous operations but struggles to handle real-time events. Even worse, if the back-end system isn't keeping up with the SQS requests, there can be an extended delay until it handles that message.
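As a rough sketch of that flow, assuming the AWS SDK for JavaScript and a hypothetical queue URL, the producer side enqueues the post and a worker polls for batches; the long-polling wait on the receive side is one place where the extra latency comes from.

const AWS = require('aws-sdk');
const sqs = new AWS.SQS({ region: 'us-east-1' });

// Hypothetical queue for the blog platform's post events
const QueueUrl = 'https://sqs.us-east-1.amazonaws.com/123456789012/post-events';

// Producer: save the post, then enqueue it for asynchronous processing
async function enqueuePost(articleData) {
  await sqs.sendMessage({
    QueueUrl,
    MessageBody: JSON.stringify(articleData)
  }).promise();
}

// Worker: read messages in batches; long polling can wait up to 20 seconds
async function pollForPosts() {
  const result = await sqs.receiveMessage({
    QueueUrl,
    MaxNumberOfMessages: 10,
    WaitTimeSeconds: 20
  }).promise();
  return result.Messages || [];
}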
Use Amazon SNS to trigger multiple actions
Similar to Amazon Kinesis, Amazon SNS is designed for near real-time event processing. It decouples the input from the individual processing nodes. But, unlike Kinesis, SNS is designed to trigger multiple actions based on one event. With the blog platform, a developer could have one SNS topic, Post Created. Multiple AWS Lambda functions could subscribe to that topic: one that posts to Twitter, one that posts to Facebook and one that processes the document to automatically identify the right image to associate with it. Decoupling happens within SNS, so when the application also needs to post to Snapchat, no code changes on the posting side; another listener attaches to the SNS topic to notify the Snapchat microservice when an event occurs.
SNS doesn't build up a queue, and it executes all events in parallel. Therefore, Twitter, Facebook, Snapchat and image microservices all receive the notification at the same time. SNS doesn't wait for Twitter to post before moving on to Facebook, for example.
SNS also allows developers to configure application endpoints to retry immediately and periodically until the event processes successfully -- up to a specified number of attempts. But the initial delay with SNS can be upwards of a few seconds, so it is not completely real time.
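As an illustration, publishing the Post Created event could look like the sketch below; the topic ARN is hypothetical, and every subscriber, whether it posts to Twitter, Facebook or Snapchat or picks an image, receives the same message in parallel.

const AWS = require('aws-sdk');
const sns = new AWS.SNS({ region: 'us-east-1' });

// Hypothetical ARN for the Post Created topic
const TopicArn = 'arn:aws:sns:us-east-1:123456789012:post-created';

// One publish call; SNS fans the event out to all subscribed Lambda functions at once
async function announcePost(articleData) {
  await sns.publish({
    TopicArn,
    Message: JSON.stringify(articleData)
  }).promise();
}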
Queues vs. pipes
Amazon SQS and Amazon Simple Workflow Service both provide queues to decouple requesting and processing services. It's a simple pattern to build, and it's very scalable. Unfortunately, it's also the approach that delays the pipeline the most. Queues are not built for speed; they are built for scale. They add at least 100 ms of delay even when they are empty, and they don't scale quickly. Queues hold a backlog of events that wait for resources to become available. This can be useful if the application doesn't need to run in real time, especially when coupled with Spot Instances.
But for real-time app development, pipes such as Amazon Kinesis are more appropriate. Unlike queues, pipes are designed for real-time stream processing. Instead of adding items to a list, they're sent in a stream. While items can still be backlogged in a stream, that only happens when problems occur. Both queues and pipes allow for fault tolerance and retries, but pipes run immediately.
Unlike SQS, which can delay a message by hundreds of milliseconds, Kinesis delays are only tens of milliseconds. The most important reason to use a pipe service like Kinesis is because it integrates directly with AWS Lambda. It's not just a faster queue service; it's a different design pattern.
Real-time processing with Amazon Kinesis
Amazon Kinesis provides a better option for real-time processing. While Kinesis is similar to SQS, it doesn't queue directly and acts more like a pipe, allowing developers to direct requests into one or more handling services. While these pipes can have a backlog, the starting delay is usually significantly less than a queue service, and events can be received and ready in 10 ms.
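A minimal producer sketch, assuming a hypothetical stream name and that each post carries a postId, shows how the platform pushes an event into the pipe instead of onto a queue.

const AWS = require('aws-sdk');
const kinesis = new AWS.Kinesis({ region: 'us-east-1' });

// Push each new post into the stream; consumers can pick it up within tens of milliseconds
async function streamPost(articleData) {
  await kinesis.putRecord({
    StreamName: 'post-events',            // hypothetical stream name
    PartitionKey: articleData.postId,     // spreads records across shards
    Data: JSON.stringify(articleData)
  }).promise();
}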
Additionally, Kinesis connects directly with Lambda, which allows services to dynamically scale in real time. When an Auto Scaling group identifies an SQS backlog, it can take up to an hour before additional resources become available to reduce the delay. With Kinesis and Lambda, the pipe just triggers a Lambda function with each event piped into it; there are no resources to provision, which means no time is wasted in spinning up additional Elastic Compute Cloud instances.
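On the consuming side, a sketch of a Lambda function wired to the stream as an event source might look like this; publishPost is a hypothetical downstream function, and Kinesis delivers record payloads base64-encoded.

// Invoked automatically with a batch of stream records; no instances to provision
exports.handler = async (event) => {
  for (const record of event.Records) {
    const articleData = JSON.parse(
      Buffer.from(record.kinesis.data, 'base64').toString('utf8')
    );
    await publishPost(articleData);  // hypothetical function that publishes the post
  }
};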
Other events can also trigger this kind of stream processing, such as Amazon DynamoDB write operations. In the blog example, when an end user adds a new post, it would be written directly to DynamoDB, and the table's stream would trigger a Lambda function in the same way a Kinesis stream does.
Tying together services and features
The connections between events and workers can be the difference between delays of 10 milliseconds and 10 minutes. For heavily compute-intensive tasks, SQS lets applications process for longer, handle timeout events properly and use custom workers that may have a long startup time. These tasks take a while to process, so an additional second or two of delay from SQS is tolerable. They may also be expensive to run, so if delays for these types of events are acceptable, developers should consider Spot Instances for the worker processes.
Developers can use SNS in conjunction with SQS for processes that need to handle multiple events. SNS can send other notifications -- email or SMS -- that don't need to happen in real time. Remember, the delay for carriers or email providers will always be longer than the delay SNS imposes.
For the blog, a developer would use Amazon Kinesis for real-time delivery to the blogging platform. For the Twitter, Facebook and Snapchat messages, SNS would deliver notifications to Lambda functions. For the image processing system, which may be far more compute-intensive, a developer can add another SNS listener but deliver it to an SQS queue, where custom, long-running processes handle the heavy image workloads. If other clients need notifications through custom channels, set up additional Kinesis streams for real-time delivery or add listeners to the SNS topic for email, SMS or other near real-time notifications.
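As a sketch of that last piece, with hypothetical ARNs, the image-processing queue attaches to the same Post Created topic as one more subscriber, leaving the Lambda subscribers untouched; the queue's access policy also has to allow SNS to deliver to it.

const AWS = require('aws-sdk');
const sns = new AWS.SNS({ region: 'us-east-1' });

// Hypothetical ARNs for the Post Created topic and the image-processing queue
const TopicArn = 'arn:aws:sns:us-east-1:123456789012:post-created';
const imageQueueArn = 'arn:aws:sqs:us-east-1:123456789012:image-processing';

// Subscribe the SQS queue to the topic; existing Lambda subscribers are unaffected
async function attachImageQueue() {
  await sns.subscribe({
    TopicArn,
    Protocol: 'sqs',
    Endpoint: imageQueueArn
  }).promise();
}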
In all cases, the transport medium supports either real-time or near real-time processing; it just depends on how quickly an IT team needs to transfer information.