Implementing Microservices on AWS
Copyright © 2024 Amazon Web Services, Inc. and/or its affiliates. All rights reserved.
Implementing Microservices on AWS AWS Whitepaper
Amazon's trademarks and trade dress may not be used in connection with any product or service
that is not Amazon's, in any manner that is likely to cause confusion among customers, or in any
manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are
the property of their respective owners, who may or may not be affiliated with, connected to, or
sponsored by Amazon.
Table of Contents
Abstract and introduction
    Introduction
    Are you Well-Architected?
    Modernizing to microservices
Simple microservices architecture on AWS
    User interface
    Microservices
        Microservices implementations
    Continuous integration and continuous deployment (CI/CD)
    Private networking
    Data store
    Simplifying operations
        Deploying Lambda-based applications
        Abstracting multi-tenancy complexities
        API management
Microservices on serverless technologies
Resilient, efficient, and cost-optimized systems
    Disaster recovery (DR)
    High availability (HA)
Distributed systems components
Distributed data management
Configuration management
    Secrets management
Cost optimization and sustainability
Communication mechanisms
    REST-based communication
    GraphQL-based communication
    gRPC-based communication
    Asynchronous messaging and event passing
    Orchestration and state management
Observability
    Monitoring
    Centralizing logs
    Distributed tracing
This whitepaper explores three popular microservices patterns: API-driven, event-driven, and data streaming. We provide an overview of each approach, outline microservices' key features, address the challenges in their development, and illustrate how Amazon Web Services (AWS) can help application teams tackle these obstacles.
Considering the complex nature of topics like data store, asynchronous communication, and service
discovery, you are encouraged to weigh your application's specific needs and use cases alongside
the guidance provided when making architectural decisions.
Introduction
Microservices architectures combine successful and proven concepts from various fields, such as:
While microservices offer many benefits, it's vital to assess your use case's unique requirements and associated costs. A monolithic architecture or an alternative approach may be more appropriate in some cases. The decision between microservices and a monolith should be made on a case-by-case basis, considering factors like scale, complexity, and specific use cases.
Lastly, we examine the overall system and discuss cross-service aspects of a microservices
architecture, such as distributed monitoring, logging, tracing, auditing, data consistency, and
asynchronous communication.
This document focuses on workloads running in the AWS Cloud, excluding hybrid scenarios and
migration strategies. For information on migration strategies, refer to the Container Migration
Methodology whitepaper.
In the Serverless Application Lens, we focus on best practices for architecting your serverless
applications on AWS.
For more expert guidance and best practices for your cloud architecture—reference architecture
deployments, diagrams, and whitepapers—refer to the AWS Architecture Center.
Modernizing to microservices
Microservices are essentially small, independent units that make up an application. Transitioning
from traditional monolithic structures to microservices can follow various strategies.
User interface
Modern web applications often use JavaScript frameworks to develop single-page applications that communicate with backend APIs. These APIs are typically built as Representational State Transfer (REST or RESTful) APIs or GraphQL APIs. Static web content can be served using Amazon Simple Storage Service (Amazon S3) and Amazon CloudFront.
Microservices
APIs are considered the front door of microservices, as they are the entry point for application logic.
Typically, RESTful web service APIs or GraphQL APIs are used. These APIs manage and process
client calls, handling functions such as traffic management, request filtering, routing, caching,
authentication, and authorization.
Microservices implementations
AWS offers building blocks for developing microservices, including Amazon ECS and Amazon EKS as container orchestration engines, with AWS Fargate and Amazon EC2 as hosting options. AWS Lambda is a serverless alternative for building microservices on AWS. The choice between these options depends on how much of the underlying infrastructure the customer wants to manage.
AWS Lambda allows you to upload your code, automatically scaling and managing its execution
with high availability. This eliminates the need for infrastructure management, so you can move
quickly and focus on your business logic. Lambda supports multiple programming languages and
can be triggered by other AWS services or called directly from web or mobile applications.
• AWS App2Container (A2C), a command line tool for migrating and modernizing Java and .NET web applications into container format. App2Container analyzes and builds an inventory of applications running on bare metal, virtual machines, Amazon Elastic Compute Cloud (Amazon EC2) instances, or in the cloud.
• Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service
(Amazon EKS) manage your container infrastructure, making it easier to launch and maintain
containerized applications.
• Amazon EKS is a managed Kubernetes service for running Kubernetes in the AWS Cloud and in on-premises data centers (Amazon EKS Anywhere). This extends cloud services into on-premises environments for use cases with low-latency requirements, local data processing, high data transfer costs, or data residency needs (see the whitepaper "Running Hybrid Container Workloads With Amazon EKS Anywhere"). You can use all the existing plug-ins and tooling from the Kubernetes community with EKS.
• Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that simplifies deploying, managing, and scaling containerized applications. Customers choose ECS for its simplicity and deep integration with AWS services. For further reading, see the blog post Amazon ECS vs Amazon EKS: making sense of AWS container services.
• AWS App Runner is a fully managed container application service that lets you build, deploy, and
run containerized web applications and API services without prior infrastructure or container
experience.
• AWS Fargate, a serverless compute engine, works with both Amazon ECS and Amazon EKS to
automatically manage compute resources for container applications.
• Amazon Elastic Container Registry (Amazon ECR) is a fully managed container registry offering high-performance hosting, so you can reliably deploy application images and artifacts anywhere.
Private networking
AWS PrivateLink is a technology that enhances the security of microservices by allowing private
connections between your Virtual Private Cloud (VPC) and supported AWS services. It helps isolate
and secure microservices traffic, ensuring it never crosses the public internet. This is particularly
useful for complying with regulations like PCI or HIPAA.
Data store
The data store is used to persist data needed by the microservices. Popular stores for session data
are in-memory caches such as Memcached or Redis. AWS offers both technologies as part of the
managed Amazon ElastiCache service.
Putting a cache between application servers and a database is a common mechanism for reducing
the read load on the database, which, in turn, may allow resources to be used to support more
writes. Caches can also improve latency.
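As an illustration of the pattern, here is a minimal, in-process read-through cache sketch in Python. In production this role is played by ElastiCache (Memcached or Redis); the class and parameter names here are invented for the example.

```python
import time
from typing import Any, Callable, Dict, Tuple

class ReadThroughCache:
    """Serve reads from memory; fall back to the backing store on a miss."""

    def __init__(self, loader: Callable[[str], Any], ttl_seconds: float = 60.0):
        self._loader = loader          # e.g. a database query function
        self._ttl = ttl_seconds
        self._entries: Dict[str, Tuple[float, Any]] = {}
        self.hits = 0
        self.misses = 0

    def get(self, key: str) -> Any:
        entry = self._entries.get(key)
        if entry is not None and time.monotonic() - entry[0] < self._ttl:
            self.hits += 1             # fresh cached value: no database read
            return entry[1]
        self.misses += 1
        value = self._loader(key)      # read from the database on a miss
        self._entries[key] = (time.monotonic(), value)
        return value
```

A short TTL bounds staleness while still absorbing most repeated reads, which is the same trade-off you tune when placing ElastiCache in front of a database.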
Relational databases are still very popular to store structured data and business objects. AWS offers
six database engines (Microsoft SQL Server, Oracle, MySQL, MariaDB, PostgreSQL, and Amazon
Aurora) as managed services through Amazon Relational Database Service (Amazon RDS).
Relational databases, however, are not designed for endless scale, which can make it difficult and time-intensive to apply techniques that support a high number of queries.
NoSQL databases have been designed to favor scalability, performance, and availability over the
consistency of relational databases. One important element of NoSQL databases is that they
typically don’t enforce a strict schema. Data is distributed over partitions that can be scaled
horizontally and is retrieved using partition keys.
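A sketch of how such stores assign items to partitions: hash the partition key and take a modulus. This is illustrative only (real databases use consistent hashing and their own hash functions), and `partition_for` is a made-up helper.

```python
import hashlib

def partition_for(key: str, num_partitions: int = 8) -> int:
    """Map a partition key to a shard, as NoSQL stores do internally."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

# Items sharing a partition key (e.g. "user#1001") always land on the same
# shard, so lookups by key touch exactly one partition.
```

The practical consequence is the one described above: choose partition keys that spread traffic evenly, because a hot key concentrates load on a single partition.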
Because individual microservices are designed to do one thing well, they typically have a simplified
data model that might be well suited to NoSQL persistence. It is important to understand
that NoSQL databases have different access patterns than relational databases. For example,
it's not possible to join tables. If this is necessary, the logic has to be implemented in the
application. You can use Amazon DynamoDB to create a database table that can store and retrieve any amount of data and serve any level of request traffic. DynamoDB delivers single-digit millisecond performance; for use cases that require microsecond response times, DynamoDB Accelerator (DAX) provides a fully managed in-memory cache.
DynamoDB also offers an automatic scaling feature to dynamically adjust throughput capacity
in response to actual traffic. However, there are cases where capacity planning is difficult or
not possible because of large activity spikes of short duration in your application. For such
situations, DynamoDB provides an on-demand option, which offers simple pay-per-request pricing.
DynamoDB on-demand is capable of serving thousands of requests per second instantly without
capacity planning.
For more information, see Distributed data management and How to Choose a Database.
Simplifying operations
To further simplify the operational efforts needed to run, maintain, and monitor microservices, we
can use a fully serverless architecture.
Topics
• Deploying Lambda-based applications
• API management
Deploying Lambda-based applications
You can deploy your Lambda code by uploading a .zip file archive or by creating and uploading a container image through the console UI using a valid Amazon ECR image URI. However, when a Lambda function becomes complex (with layers, dependencies, and permissions), uploading through the UI can become unwieldy for code changes.
Using AWS CloudFormation and the AWS Serverless Application Model (AWS SAM), AWS
Cloud Development Kit (AWS CDK), or Terraform streamlines the process of defining serverless
applications. AWS SAM, natively supported by CloudFormation, offers a simplified syntax for
specifying serverless resources. AWS Lambda Layers help manage shared libraries across multiple
Lambda functions, minimizing function footprint, centralizing tenant-aware libraries, and
improving the developer experience. Lambda SnapStart for Java enhances startup performance for
latency-sensitive applications.
Integration with tools like AWS Cloud9 IDE, AWS CodeBuild, AWS CodeDeploy, and AWS
CodePipeline streamlines authoring, testing, debugging, and deploying SAM-based applications.
The following diagram shows deploying AWS Serverless Application Model resources using
CloudFormation and AWS CI/CD tools.
However, shared libraries such as Lambda layers should not extend to encapsulating business logic, due to the complexity and risk they may introduce. A fundamental issue with shared libraries is the increased complexity surrounding updates, which makes them more challenging to manage than standard code duplication. It is therefore essential to strike a balance between shared libraries and duplication in the quest for the most effective abstraction.
API management
Managing APIs can be time-consuming, especially when considering multiple versions, stages of
the development cycle, authorization, and other features like throttling and caching. Apart from
API Gateway, some customers also use ALB (Application Load Balancer) or NLB (Network Load
Balancer) for API management. Amazon API Gateway helps reduce the operational complexity of creating and maintaining RESTful APIs. It allows you to create APIs programmatically, serves as a "front door" for accessing data, business logic, or functionality from your backend services, provides authorization and access control, rate limiting, caching, monitoring, and traffic management, and runs APIs without managing servers.
Figure 3 illustrates how API Gateway handles API calls and interacts with other components.
Requests from mobile devices, websites, or other backend services are routed to the closest
CloudFront Point of Presence (PoP) to reduce latency and provide an optimal user experience.
Microservices on serverless technologies
Figure 4 demonstrates a serverless microservice architecture using AWS Lambda and managed services. This serverless architecture mitigates the need to design for scale and high availability, and reduces the effort needed to run and monitor the underlying infrastructure.
Figure 5 displays a similar serverless implementation using containers with AWS Fargate, removing
concerns about underlying infrastructure. It also features Amazon Aurora Serverless, an on-
demand, auto-scaling database that automatically adjusts capacity based on your application's
requirements.
Disaster recovery (DR)
Disaster recovery strategies for microservices should focus on downstream services that maintain the application's state, such as file systems, databases, or queues. Organizations should plan for recovery time objective (RTO) and recovery point objective (RPO). RTO is the maximum acceptable delay between service interruption and restoration, while RPO is the maximum acceptable amount of time since the last data recovery point, which determines how much data loss is tolerable.
For more on disaster recovery strategies, refer to the Disaster Recovery of Workloads on AWS:
Recovery in the Cloud whitepaper.
High availability (HA)
Amazon EKS helps ensure high availability by running the Kubernetes control plane across multiple Availability Zones. It automatically detects and replaces unhealthy control plane instances and provides automated version upgrades and patching.
Amazon ECR uses Amazon Simple Storage Service (Amazon S3) for storage to make your container images highly available and accessible. It works with Amazon EKS, Amazon ECS, and AWS Lambda, simplifying the development-to-production workflow.
Amazon ECS is a regional service that simplifies running containers in a highly available manner across multiple Availability Zones within a Region, offering multiple scheduling strategies that place containers according to your resource needs and availability requirements.
AWS Lambda operates in multiple Availability Zones, ensuring availability during service
interruptions in a single zone. If connecting your function to a VPC, specify subnets in multiple
Availability Zones for high availability.
Distributed systems components
When evaluating service discovery and service mesh options for your architecture, consider the following factors:
• Code modification: Can you get the benefits without modifying code?
• Cross-VPC or cross-account traffic: If required, does your system need efficient management of
communication across different VPCs or AWS accounts?
• Deployment strategies: Does your system use or plan to use advanced deployment strategies
such as blue-green or canary deployments?
• Performance considerations: If your architecture frequently communicates with external
services, what will be the impact on overall performance?
AWS offers several methods for implementing service discovery in your microservices architecture:
• Amazon ECS Service Discovery: Amazon ECS supports service discovery using its DNS-based
method or by integrating with AWS Cloud Map (see ECS Service discovery). ECS Service Connect
further improves connection management, which can be especially beneficial for larger
applications with multiple interacting services.
• Amazon Route 53: Route 53 integrates with ECS and other AWS services, such as EKS, to
facilitate service discovery. In an ECS context, Route 53 can use the ECS Service Discovery
feature, which leverages the Auto Naming API to automatically register and deregister services.
• AWS Cloud Map: This option offers dynamic, API-based service discovery that propagates changes across your services.
For more advanced communication needs, AWS provides two service mesh options:
• Amazon VPC Lattice is an application networking service that consistently connects, monitors,
and secures communications between your services, helping to improve productivity so that
your developers can focus on building features that matter to your business. You can define
policies for network traffic management, access, and monitoring to connect compute services in
a simplified and consistent way across instances, containers, and serverless applications.
• AWS App Mesh: Based on the open-source Envoy proxy, App Mesh caters to advanced needs with
sophisticated routing, load balancing, and comprehensive reporting. Unlike Amazon VPC Lattice,
App Mesh supports the TCP protocol.
If you're already using third-party software such as HashiCorp Consul or Netflix Eureka for service discovery, you might prefer to continue using it as you migrate to AWS, enabling a smoother transition.
The choice between these options should align with your specific needs. For simpler requirements, DNS-based solutions like Amazon ECS Service Discovery or AWS Cloud Map might be sufficient. For more complex or larger systems, service meshes like Amazon VPC Lattice or AWS App Mesh might be more suitable.
In conclusion, designing a microservices architecture on AWS is all about selecting the right tools
to meet your specific needs. By keeping in mind the considerations discussed, you can ensure
you're making informed decisions to optimize your system's service discovery and inter-service
communication.
Distributed data management
A key challenge in distributed data management is the trade-off between consistency and performance. It's often more practical to accept slight delays in data updates (eventual consistency) than to insist on instant updates (immediate consistency).
Sometimes, business operations require multiple microservices to work together. If one part
fails, you might have to undo some completed tasks. The Saga pattern helps manage this by
coordinating a series of compensating actions.
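The Saga coordination described above can be sketched in a few lines of Python. This is a simplified, in-process illustration rather than a distributed implementation (on AWS you would typically coordinate a saga with AWS Step Functions); `run_saga` and the step shapes are invented for the example.

```python
from typing import Callable, List, Tuple

# Each saga step pairs a forward action with a compensating action.
Step = Tuple[Callable[[], None], Callable[[], None]]

def run_saga(steps: List[Step]) -> bool:
    """Run steps in order; on failure, undo completed steps in reverse."""
    completed: List[Callable[[], None]] = []
    for action, compensate in steps:
        try:
            action()
            completed.append(compensate)
        except Exception:
            # Roll back everything that already succeeded, newest first.
            for undo in reversed(completed):
                undo()
            return False
    return True
```

For example, if "charge payment" fails after "reserve inventory" succeeded, only the inventory reservation's compensation runs, leaving the system consistent.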
To help microservices stay in sync, a centralized data store can be used. This store, managed with
tools like AWS Lambda, AWS Step Functions, and Amazon EventBridge, can assist in cleaning up
and deduplicating data.
A common approach in managing changes across microservices is event sourcing. Every change in
the application is recorded as an event, creating a timeline of the system's state. This approach not
only helps debug and audit but also allows different parts of an application to react to the same
events.
Event sourcing often works hand-in-hand with the Command Query Responsibility Segregation
(CQRS) pattern, which separates data modification and data querying into different modules for
better performance and security.
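A minimal sketch of event sourcing with a CQRS-style read projection, assuming a toy banking domain; the class and event names are illustrative, not from any AWS API.

```python
from collections import defaultdict
from typing import Dict, List

class EventStore:
    """Append-only event log: the write side of a CQRS split."""
    def __init__(self):
        self.events: List[dict] = []

    def append(self, event: dict) -> None:
        self.events.append(event)      # every change is recorded, never updated

def project_balances(events: List[dict]) -> Dict[str, int]:
    """Read-side projection: fold the event log into current balances."""
    balances: Dict[str, int] = defaultdict(int)
    for e in events:
        if e["type"] == "deposited":
            balances[e["account"]] += e["amount"]
        elif e["type"] == "withdrawn":
            balances[e["account"]] -= e["amount"]
    return dict(balances)
```

Because the log is the source of truth, you can rebuild the read model at any time, add new projections later, and replay history for debugging and auditing.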
On AWS, you can implement these patterns using a combination of services. As you can see in
Figure 7, Amazon Kinesis Data Streams can serve as your central event store, while Amazon S3
provides a durable storage for all event records. AWS Lambda, Amazon DynamoDB, and Amazon
API Gateway work together to handle and process these events.
Remember, in distributed systems, events might be delivered multiple times due to retries, so it's
important to design your applications to handle this.
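One common defense is an idempotent consumer that records processed event IDs. A minimal sketch, assuming events carry a unique `id` field; in production the seen-ID set would live in a durable store such as DynamoDB rather than in memory.

```python
from typing import Callable, Set

class IdempotentConsumer:
    """Skip events whose IDs were already processed (at-least-once delivery)."""

    def __init__(self, handler: Callable[[dict], None]):
        self._handler = handler
        self._seen: Set[str] = set()   # in production: a persistent store

    def process(self, event: dict) -> bool:
        if event["id"] in self._seen:
            return False               # duplicate delivery: ignore safely
        self._handler(event)
        self._seen.add(event["id"])
        return True
```

With this in place, a retry that redelivers the same event becomes a harmless no-op instead of, say, a double charge.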
Configuration management
In a microservices architecture, each service interacts with various resources like databases,
queues, and other services. A consistent way to configure each service's connections and operating
environment is vital. Ideally, an application should adapt to new configurations without needing
a restart. This approach is part of the Twelve-Factor App principles, which recommend storing
configurations in environment variables.
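Following that principle, configuration can be read from environment variables with sensible defaults. A minimal sketch; the variable names (`DATABASE_URL`, `QUEUE_NAME`, `LOG_LEVEL`) are examples, not a prescribed convention.

```python
import os

def load_config(env=os.environ) -> dict:
    """Read service configuration from the environment (Twelve-Factor style)."""
    return {
        # Connection strings and names come from the deployment environment,
        # so the same build runs unchanged in dev, staging, and production.
        "db_url": env.get("DATABASE_URL", "postgres://localhost:5432/app"),
        "queue_name": env.get("QUEUE_NAME", "orders"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
    }
```

Injecting the mapping (instead of reading `os.environ` directly inside the function) also makes the configuration logic trivial to unit test.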
A different approach is to use AWS AppConfig, a capability of AWS Systems Manager that makes it easy for customers to quickly and safely configure, validate, and deploy feature flags and application configuration. Your feature flag and configuration data can be validated syntactically or semantically in the pre-deployment phase, and can be monitored and automatically rolled back if an alarm that you have configured is triggered. AWS AppConfig can be integrated with Amazon ECS
and Amazon EKS by using the AWS AppConfig agent. The agent functions as a sidecar container
running alongside your Amazon ECS and Amazon EKS container applications. If you use AWS
AppConfig feature flags or other dynamic configuration data in a Lambda function, then we
recommend that you add the AWS AppConfig Lambda extension as a layer to your Lambda
function.
GitOps is an innovative approach to configuration management that uses Git as the source of truth
for all configuration changes. This means that any changes made to your configuration files are
automatically tracked, versioned, and audited through Git.
Secrets management
Security is paramount, so credentials should not be passed in plain text. AWS offers secure services
for this, like AWS Systems Manager Parameter Store and AWS Secrets Manager. These tools can
send secrets to containers in Amazon EKS as volumes, or to Amazon ECS as environment variables.
In AWS Lambda, environment variables are made available to your code automatically. For
Kubernetes workflows, the External Secrets Operator fetches secrets directly from services like
AWS Secrets Manager, creating corresponding Kubernetes Secrets. This enables a seamless
integration with Kubernetes-native configurations.
Cost optimization and sustainability
Stateless components (services that store state in an external data store instead of a local data store) in your architecture can make use of Amazon EC2 Spot Instances, which offer unused EC2 capacity in the AWS Cloud. These instances are more cost-efficient than On-Demand Instances and are well suited to workloads that can tolerate interruptions. This can further cut costs while maintaining high availability.
With isolated services, you can use cost-optimized compute options for each Auto Scaling group. For example, AWS Graviton offers cost-effective, high-performance compute for workloads that suit Arm-based instances.
Optimizing costs and resource usage also helps minimize environmental impact, aligning with the
Sustainability pillar of the Well-Architected Framework. You can monitor your progress in reducing
carbon emissions using the AWS Customer Carbon Footprint Tool. This tool provides insights into
the environmental impact of your AWS usage.
Communication mechanisms
In the microservices paradigm, various components of an application need to communicate over a network. Common approaches include REST-based, GraphQL-based, and gRPC-based communication, as well as asynchronous messaging.
Topics
• REST-based communication
• GraphQL-based communication
• gRPC-based communication
• Asynchronous messaging and event passing
• Orchestration and state management
REST-based communication
The HTTP/S protocol, used broadly for synchronous communication between microservices, often operates through RESTful APIs. Amazon API Gateway offers a streamlined way to build an API that serves as a centralized access point to backend services, handling tasks like traffic management, authorization, monitoring, and version control.
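For illustration, a Lambda function behind API Gateway's Lambda proxy integration returns a response object with `statusCode`, `headers`, and `body` fields. A minimal sketch of such a handler follows; the greeting logic is invented for the example.

```python
import json

def handler(event, context=None):
    """Minimal handler shaped for API Gateway's Lambda proxy integration."""
    # Query parameters arrive under "queryStringParameters" (None if absent).
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

API Gateway forwards the HTTP request as the `event` dict and translates the returned dict back into an HTTP response, so the handler can be unit tested by calling it with a hand-built event.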
GraphQL-based communication
Similarly, GraphQL is a widespread method for synchronous communication, using the same
protocols as REST but limiting exposure to a single endpoint. With AWS AppSync, you can create
and publish GraphQL applications that interact with AWS services and datastores directly, or
incorporate Lambda functions for business logic.
gRPC-based communication
gRPC is a synchronous, lightweight, high-performance, open-source RPC communication protocol. gRPC improves upon its underlying protocols by using HTTP/2 and enabling features such as compression and stream prioritization. It uses the Protobuf interface definition language (IDL), which is binary-encoded and thus takes advantage of HTTP/2 binary framing.
Asynchronous messaging and event passing
• Message queues: A message queue acts as a buffer that decouples senders (producers) and receivers (consumers) of messages. Producers enqueue messages into the queue, and consumers dequeue and process them. This pattern is useful for asynchronous communication, load leveling, and handling bursts of traffic.
• Event-driven messaging: Event-driven messaging involves capturing and reacting to events that occur in the system. Events are published to a message broker, and interested services subscribe to specific event types. This pattern enables loose coupling and allows services to react to events without direct dependencies.
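The queue pattern above can be sketched in-process with Python's standard library, with a `queue.Queue` standing in for an SQS queue between two services; `run_pipeline` and the sentinel-based shutdown are illustrative choices, not part of any AWS API.

```python
import queue
import threading

def run_pipeline(messages):
    """Decouple a producer and a consumer with a buffer queue."""
    buf: "queue.Queue" = queue.Queue(maxsize=100)
    processed = []

    def consumer():
        while True:
            msg = buf.get()
            if msg is None:            # sentinel: no more work
                break
            processed.append(msg.upper())  # stand-in for real processing

    worker = threading.Thread(target=consumer)
    worker.start()
    for m in messages:
        buf.put(m)                     # producer enqueues and moves on
    buf.put(None)                      # signal shutdown
    worker.join()
    return processed
```

The producer never waits for processing to finish, which is exactly the load-leveling benefit described above: bursts pile up in the buffer instead of overwhelming the consumer.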
To implement each of these messaging patterns, AWS offers various managed services, such as Amazon SQS, Amazon SNS, Amazon EventBridge, Amazon MQ, and Amazon MSK. These services have unique features tailored to specific needs:
• Amazon Simple Queue Service (Amazon SQS) and Amazon Simple Notification Service (Amazon SNS): As shown in Figure 8, these two services complement each other: Amazon SQS provides durable storage for messages, and Amazon SNS enables their delivery to multiple subscribers. Combined, they are effective when the same message needs to be delivered to multiple destinations (fan-out).
Remember, the best service for you depends on your specific needs, so it's important to understand
what each one offers and how they align with your requirements.
Orchestration and state management
AWS Step Functions provides a workflow engine to manage service orchestration complexities, such as error handling and serialization. This allows you to scale and change applications quickly without adding coordination code. Step Functions is part of the AWS serverless platform and supports Lambda functions, Amazon EC2, Amazon EKS, Amazon ECS, SageMaker, AWS Glue, and more.
Figure 9: An example of a microservices workflow with parallel and sequential steps invoked by AWS
Step Functions
Amazon Managed Workflows for Apache Airflow (Amazon MWAA) is an alternative to Step Functions. You should use Amazon MWAA if you prioritize open source and portability. Airflow has a large and active open-source community that contributes new functionality and integrations regularly.
Observability
Since microservices architectures are inherently made up of many distributed components,
observability across all those components becomes critical. Amazon CloudWatch enables this,
collecting and tracking metrics, monitoring log files, and reacting to changes in your AWS
environment. It can monitor AWS resources and custom metrics generated by your applications and
services.
Topics
• Monitoring
• Centralizing logs
• Distributed tracing
• Log analysis on AWS
Monitoring
CloudWatch offers system-wide visibility into resource utilization, application performance, and
operational health. In a microservices architecture, custom metrics monitoring via CloudWatch is
beneficial, as developers can choose which metrics to collect. Dynamic scaling can also be based on
these custom metrics.
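As a sketch of what publishing a custom metric involves, the helper below builds a datum in the shape CloudWatch's `PutMetricData` API expects. The metric name, namespace, and dimension values are hypothetical; in a real service you would hand the result to an AWS SDK client rather than just printing it:

```python
from datetime import datetime, timezone

def build_metric_datum(name, value, unit="Count", dimensions=None):
    """Build a datum in the shape CloudWatch's PutMetricData API expects.

    In a real service you would pass this to the SDK, e.g. with boto3:
        boto3.client("cloudwatch").put_metric_data(
            Namespace="OrderService", MetricData=[datum])
    """
    return {
        "MetricName": name,
        "Value": float(value),
        "Unit": unit,
        "Timestamp": datetime.now(timezone.utc),
        "Dimensions": [{"Name": k, "Value": v}
                       for k, v in (dimensions or {}).items()],
    }

datum = build_metric_datum("OrdersProcessed", 17,
                           dimensions={"Service": "order-service"})
print(datum["MetricName"], datum["Value"])
```

Dimensions let you slice the same metric per service or per environment, which is what makes custom-metric-based dynamic scaling practical in a microservices fleet.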
CloudWatch Container Insights extends this functionality, automatically collecting metrics for
many resources like CPU, memory, disk, and network. It helps in diagnosing container-related
issues, streamlining resolution.
Centralizing logs
Logging is key to pinpointing and resolving issues. With microservices, you can release more
frequently and experiment with new features, and centralized logging helps you understand the
effect of each change. AWS provides services like Amazon S3, CloudWatch Logs, and Amazon
OpenSearch Service to centralize log files. On Amazon EC2, an agent (daemon) sends logs to
CloudWatch Logs, while Lambda and Amazon ECS send their log output there natively. For Amazon
EKS, either Fluent Bit or Fluentd can forward logs to CloudWatch, where they can be reported on
using OpenSearch Service and Kibana. However, due to its smaller footprint and performance
advantages, Fluent Bit is recommended over Fluentd.
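Centralized log tooling works best when each service emits structured records. The sketch below, using only the Python standard library, shows a JSON-lines log formatter; the `order-service` name is a hypothetical example, and CloudWatch Logs Insights can automatically discover fields from JSON log lines like these:

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per line so fields such as `service` and
    `level` can be queried directly in CloudWatch Logs Insights."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "service": "order-service",  # hypothetical service name
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("order-service")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("order accepted")
```

Because every service writes the same shape, queries across the centralized log store can filter and aggregate by field instead of parsing free-form text.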
Figure 12 illustrates how logs from various AWS services are directed to Amazon S3 and
CloudWatch. These centralized logs can be further analyzed using Amazon OpenSearch Service,
inclusive of Kibana for data visualization. Also, Amazon Athena can be employed for ad hoc queries
against the logs stored in Amazon S3.
Distributed tracing
Microservices often work together to handle a single request. AWS X-Ray uses correlation IDs
(unique trace IDs) to track requests as they flow across these services. X-Ray works with Amazon
EC2, Amazon ECS, Lambda, and Elastic Beanstalk.
AWS Distro for OpenTelemetry is part of the OpenTelemetry project and provides open-source
APIs and agents to gather distributed traces and metrics, improving your application monitoring. It
sends metrics and traces to multiple AWS and partner monitoring solutions. By collecting metadata
from your AWS resources, it aligns application performance with the underlying infrastructure data,
accelerating problem-solving. Plus, it's compatible with a variety of AWS services and can be used
on-premises.
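The core mechanic behind distributed tracing is minting a correlation ID at the edge and propagating it unchanged on every downstream call. The stdlib-only sketch below imitates the shape of X-Ray's `X-Amzn-Trace-Id` header; in practice the X-Ray SDK or AWS Distro for OpenTelemetry does this for you, so treat the helper names as illustrative:

```python
import os
import time

def new_trace_header():
    """Mint an X-Ray-style root trace ID: Root=1-<epoch hex>-<random hex>.
    (Normally generated by the tracing SDK, shown here for illustration.)"""
    epoch = format(int(time.time()), "08x")
    unique = os.urandom(12).hex()
    return f"Root=1-{epoch}-{unique}"

def outgoing_headers(incoming=None):
    # Propagate the caller's trace header if present; otherwise start a trace.
    trace = (incoming or {}).get("X-Amzn-Trace-Id") or new_trace_header()
    return {"X-Amzn-Trace-Id": trace}

# Service A starts a trace; service B forwards the same ID downstream,
# so all spans of the request can be stitched together later.
a = outgoing_headers()
b = outgoing_headers(incoming=a)
assert a["X-Amzn-Trace-Id"] == b["X-Amzn-Trace-Id"]
```

Because every hop carries the same ID, the tracing backend can reassemble one end-to-end timeline from the spans each service reports independently.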
Log analysis on AWS
CloudWatch Logs can stream log entries to Amazon Data Firehose, a service for delivering real-time
streaming data, which in turn can load them into Amazon Redshift. QuickSight then uses the data
stored in Redshift for comprehensive analysis, reporting, and visualization.
Figure 15: Log analysis with Amazon Redshift and Amazon QuickSight
Moreover, when logs are stored in Amazon S3 buckets, the data can be loaded into services like
Amazon Redshift or Amazon EMR, a cloud-based big data platform, allowing for thorough analysis
of the stored log data.
Some key tools for managing chattiness between microservices are REST APIs, HTTP APIs, and
gRPC APIs. REST APIs in Amazon API Gateway offer a range of advanced features such as API keys,
per-client throttling, request validation, AWS WAF integration, and private API endpoints. HTTP
APIs are designed with a minimal feature set and hence come at a lower price. For more details on
this topic and a list of the core features available in REST APIs and HTTP APIs, see Choosing
between REST APIs and HTTP APIs.
Often, microservices use REST over HTTP for communication due to its widespread adoption. But
in high-volume situations, REST's overhead can cause performance issues, because a new TCP
connection, with its handshake, is typically required for each request. In such cases, a gRPC
API is a better choice. gRPC reduces latency by multiplexing multiple requests over a single
long-lived TCP connection. gRPC also supports bi-directional streaming, allowing clients and
servers to send and receive messages at the same time. This leads to more efficient
communication, especially for large or real-time data transfers.
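A rough back-of-envelope model makes the connection-reuse advantage concrete. The sketch below assumes a hypothetical 20 ms round-trip time and charges one extra round trip per new TCP connection for the handshake, ignoring TLS and server processing time:

```python
def total_latency_ms(requests, rtt_ms, reuse_connection):
    """Rough model: each new TCP connection costs one extra round trip
    (the handshake) on top of the request/response round trip itself."""
    handshakes = 1 if reuse_connection else requests
    return handshakes * rtt_ms + requests * rtt_ms

rtt = 20  # hypothetical 20 ms round-trip time between services

# New connection per request (naive REST over HTTP/1.0-style usage):
print(total_latency_ms(100, rtt, reuse_connection=False))  # 4000 ms

# All requests multiplexed over one connection (gRPC-style):
print(total_latency_ms(100, rtt, reuse_connection=True))   # 2020 ms
```

Even this simplified model shows handshake overhead roughly doubling total latency for chatty, small-payload traffic, which is exactly the situation where gRPC's single multiplexed connection pays off.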
If chattiness persists despite choosing the right API type, it may be necessary to reevaluate your
microservices architecture. Consolidating services or revising your domain model could reduce
chattiness and improve efficiency.
Auditing
In a microservices architecture, it's crucial to have visibility into user actions across all services. AWS
provides tools like AWS CloudTrail, which logs all API calls made in AWS, and Amazon CloudWatch,
which captures application logs. These allow you to track changes and analyze behavior
across your microservices. Amazon EventBridge can react to system changes quickly, notifying the
right people or even automatically starting workflows to resolve issues.
For instance, if an API Gateway configuration in a microservice is altered to accept inbound HTTP
traffic instead of only HTTPS requests, a predefined AWS Config rule can detect this security
violation. It logs the change for auditing and triggers an SNS notification, so that the compliant
state can be restored, either manually or through automated remediation.
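The evaluation logic at the heart of such a check can be sketched as a plain function, in the style of the handler an AWS Config custom rule would invoke. The function name, the `protocols` field, and the sample configuration items are all hypothetical simplifications of a real configuration item:

```python
def evaluate_api_protocols(configuration_item):
    """Hypothetical AWS Config custom-rule check: flag an API endpoint
    that accepts plain HTTP instead of HTTPS only."""
    protocols = configuration_item.get("protocols", [])
    if protocols == ["HTTPS"]:
        return "COMPLIANT"
    return "NON_COMPLIANT"

# HTTPS-only endpoint passes; one that also accepts HTTP is flagged.
print(evaluate_api_protocols({"protocols": ["HTTPS"]}))          # COMPLIANT
print(evaluate_api_protocols({"protocols": ["HTTP", "HTTPS"]}))  # NON_COMPLIANT
```

In a real deployment, the rule would report this verdict back to AWS Config, which records it for auditing and can trigger the SNS notification and remediation workflow described above.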
Conclusion
Microservices architecture, a versatile design approach that provides an alternative to traditional
monolithic systems, assists in scaling applications, boosting development speed, and fostering
organizational growth. With its adaptability, it can be implemented using containers, serverless
approaches, or a blend of the two, tailoring to specific needs.
However, it's not a one-size-fits-all solution. Each use case requires meticulous evaluation given
the potential increase in architectural complexity and operational demands. But when approached
strategically, the benefits of microservices can significantly outweigh these challenges. The key is in
proactive planning, especially in areas of observability, security, and change management.
It's also important to note that beyond microservices, there are entirely different architectural
approaches, such as generative AI architectures like Retrieval Augmented Generation (RAG),
providing a range of options to best fit your needs.
AWS, with its robust suite of managed services, empowers teams to build efficient microservices
architectures and effectively minimize complexity. This whitepaper has aimed to guide you through
the relevant AWS services and the implementation of key patterns. The goal is to equip you with
the knowledge to harness the power of microservices on AWS, enabling you to capitalize on their
benefits and transform your application development journey.
Contributors
The following individuals and organizations contributed to this document:
Document history
To be notified about updates to this whitepaper, subscribe to the RSS feed.
Note
To subscribe to RSS updates, you must have an RSS plug-in enabled for the browser you are
using.
Notices
Customers are responsible for making their own independent assessment of the information in
this document. This document: (a) is for informational purposes only, (b) represents current AWS
product offerings and practices, which are subject to change without notice, and (c) does not create
any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS products or
services are provided “as is” without warranties, representations, or conditions of any kind, whether
express or implied. The responsibilities and liabilities of AWS to its customers are controlled by
AWS agreements, and this document is not part of, nor does it modify, any agreement between
AWS and its customers.
AWS Glossary
For the latest AWS terminology, see the AWS glossary in the AWS Glossary Reference.