Distribute an event in CQRS with .NET on AWS

In the last post we talked about CQRS and how it is possible to implement it on AWS by using .NET and taking advantage of all the power of the Cloud Native approach.

The proposed solution comes with a big limitation: it is only possible to subscribe 2 different Lambda functions to one DynamoDB Stream. Obviously, this works fine if we don’t need to build more than two different read models, but if we do, how can we overcome the limit?

The idea here is to use another service to propagate the event to the different subscribers. We have a couple of different options:

  • Simple Notification Service (SNS): a fully managed publish/subscribe messaging service, designed for asynchronous fan-out of notifications;
  • EventBridge: a serverless event bus that routes events to targets based on rules, useful for near real-time integrations;

One service to rule them all

In our simple use case, we need to asynchronously update all the consumers that want to be notified when something in the Product table changes. We don’t have complicated rules, and the application doesn’t require complex processing. This means that SNS fits our needs perfectly. So, let’s see how to integrate it into our application.

First of all, we need to update our schema to include the new service. At the moment, it is not possible to subscribe SNS directly to a DynamoDB Stream (which is possible with EventBridge), so we need something else to process the message coming from DynamoDB and send it to SNS. As already done, we can still use a Lambda function, which will be responsible for receiving the notification from the DynamoDB Stream and sending it to SNS. On the other side, SNS will distribute the notification to the different consumers, and we can again use a set of Lambda functions to process it according to the needs of each read model. This set of Lambda functions will adapt the incoming message to the different read models.

Below, you can see the updated schema:

Just to give you a more complete idea of the possibilities offered by this approach, in the architecture schema above I’ve also included different types of Query Handlers and Storage. You can build a handler or a storage using the technology that best fits the specific domain. As an example, the schema shows a Fargate solution for both the Cart Query Service and the Store Query Service, with two different storage engines: Aurora and RDS.

Connect your Simple Notification Service

In the previous post we saw how to connect a Lambda function to the DynamoDB Stream. By using the same approach, we can configure an SNS topic as the destination of the Lambda function. So, let’s go to our Lambda and update it according to the following code (full code in the previous post):

// Build the product from the DynamoDB Stream record (see the previous post for the full handler).
Product product = new Product
{
    ProductId = Convert.ToInt32(newImage["ProductId"].N),
    Name = newImage["Name"].S,
    Price = Convert.ToDecimal(newImage["Price"].N)
};
string productJson = JsonSerializer.Serialize(product);

// Publish the serialized product to the SNS topic.
var snsClient = new AmazonSimpleNotificationServiceClient();
var publishRequest = new PublishRequest
{
    TopicArn = "arn:aws:sns:...:ProductUpdated",
    Message = productJson
};
await snsClient.PublishAsync(publishRequest);

But, first of all, we need to create the SNS topic and then configure the AWS Lambda function that will receive the published messages. These steps can be done from the AWS console, or scripted as in the sketch below.
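A minimal sketch with the AWS SDK for .NET (the AWSSDK.SimpleNotificationService and AWSSDK.Lambda packages) could look like the following; the consumer function ARN is a placeholder to replace with your own:

using Amazon.Lambda;
using Amazon.Lambda.Model;
using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;

var snsClient = new AmazonSimpleNotificationServiceClient();
var lambdaClient = new AmazonLambdaClient();

// 1. Create the ProductUpdated topic.
var topic = await snsClient.CreateTopicAsync(new CreateTopicRequest { Name = "ProductUpdated" });

// 2. Allow SNS to invoke the consumer Lambda function (placeholder ARN).
string functionArn = "arn:aws:lambda:...:function:ProductReadModelFunction";
await lambdaClient.AddPermissionAsync(new AddPermissionRequest
{
    FunctionName = functionArn,
    StatementId = "AllowSnsInvoke",
    Action = "lambda:InvokeFunction",
    Principal = "sns.amazonaws.com",
    SourceArn = topic.TopicArn
});

// 3. Subscribe the Lambda function to the topic.
await snsClient.SubscribeAsync(new SubscribeRequest
{
    TopicArn = topic.TopicArn,
    Protocol = "lambda",
    Endpoint = functionArn
});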

Finally, we can write the Lambda function that receives the message from SNS and then writes it into the target database:

[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]
public class Function
{
    private readonly string connectionString;

    public Function()
    {
        // Replace these values with your Aurora database connection details
        string serverName = "your-aurora-server-name";
        string dbName = "your-aurora-database-name";
        string username = "your-aurora-username";
        string password = "your-aurora-password";

        connectionString = $"Server={serverName};Database={dbName};User ID={username};Password={password};";
    }

    public async Task FunctionHandler(SNSEvent snsEvent, ILambdaContext context)
    {
        foreach (var record in snsEvent.Records)
        {
            var snsRecord = record.Sns;

            // Deserialize the product published by the upstream Lambda function.
            Product product = JsonSerializer.Deserialize<Product>(snsRecord.Message);

            using (MySqlConnection connection = new MySqlConnection(connectionString))
            {
                await connection.OpenAsync();

                using (MySqlCommand cmd = connection.CreateCommand())
                {
                    cmd.CommandText = "INSERT INTO Product (ProductId, Name, Price) VALUES (@ProductId, @Name, @Price)";
                    cmd.Parameters.AddWithValue("@ProductId", product.ProductId);
                    cmd.Parameters.AddWithValue("@Name", product.Name);
                    cmd.Parameters.AddWithValue("@Price", product.Price);

                    await cmd.ExecuteNonQueryAsync();
                }
            }
        }
    }
}

Conclusion

As shown in this post, it is really easy to apply this approach with the different services available in AWS. In the next post, we’ll see another way to achieve the same result.

Enjoy!

CQRS with .NET and AWS

One of the most talked about and appreciated patterns in recent years is undoubtedly CQRS: Command-Query Responsibility Segregation. Its potential is offset by implementation complexity, which is too often underestimated in the design phase. Moreover, we too often approach the pattern without a real need, lured by the promise of potential scalability but underestimating its implementation and maintenance costs.

But let’s take a step back and first understand what scenario CQRS proposes to solve and why we decided to talk about it today.

What is CQRS

CQRS is an architectural pattern suitable for developing applications that must provide a high level of scalability while still handling high volumes of traffic and data. The primary use case involves applications that need to handle significantly more read requests than write requests.

The underlying concept is simple: the separation, logical but potentially also physical, of write requests from read requests. Basically, instead of the usual single model for all CRUD-type operations, two separate models are created, each optimized for the different purpose it must serve.

This split allows for a streamlined, simplified model tailored to each specific read operation, leaving the complexity of the business logic on the write side. The end result is a flexible application that allows the two models to scale independently. In this way the two worlds are completely parallel, each perfectly designed for its purpose, even going so far as to use two completely separate databases.
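To make the separation more concrete, here is a minimal, purely illustrative sketch in C#; the interfaces and type names are hypothetical and not tied to any specific framework:

using System.Threading.Tasks;

// Write side: commands express an intent to change the state of an aggregate.
public record CreateProductCommand(int ProductId, string Name, decimal Price);

public interface ICommandHandler<TCommand>
{
    Task HandleAsync(TCommand command);
}

// Read side: queries only return data, shaped for a specific view.
public record GetProductsQuery(int PageNumber, int PageSize);

public record ProductListItem(int ProductId, string Name, decimal Price);

public interface IQueryHandler<TQuery, TResult>
{
    Task<TResult> HandleAsync(TQuery query);
}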

In this first article, therefore, we are going to explore the possibilities offered by the AWS cloud to develop a simple solution by applying the CQRS pattern, focusing mainly on the data storage part.

CQRS in Cloud-Native mode with AWS

Let us start by defining the ideal Cloud-Native technology, in our context, to implement the write model. Our requirement is to save an entity or, rather, the state of an aggregate in our storage, and to notify that this operation was successful and that the state of our application has changed. AWS offers several storage technologies, but to take full advantage of the ability to store the entire payload as-is, the best choice can only be DynamoDB.

DynamoDB, in fact, is the leading NoSQL storage solution offered by AWS. It is designed to offer high performance, allows for fully automatic scaling, effectively handling large volumes of data while offering low latency. Most importantly, it is relatively inexpensive.

Having chosen the database for the write model, let us now focus on the read model. In the introduction, we described the possibility of using different databases for the two models. For the read model, we need an efficient database that can support the different queries to be performed and, most importantly, scale horizontally where needed, without requiring changes at the application level. Amazon Aurora responds worthily to these requirements: it is a MySQL- and PostgreSQL-compatible database with an attractive and easy replication mechanism.

Finally, we need to define how to transfer data from the write model to the read model. The strategies can be different, but to take full advantage of the Cloud’s capabilities, AWS provides us with an interesting service: DynamoDB Stream. Basically, by enabling DynamoDB Stream on a DynamoDB table, we can generate an event stream to which a Lambda function can then be hooked. The idea then is that, with each write to the table, an event is generated that is in turn captured by a Lambda function.

It will then be the job of the Lambda function to extract the necessary information to build the read model and populate the corresponding database.

The diagram below describes the desired architecture of our solution:

Our write model and DynamoDB

A prerequisite of our solution, as shown in the diagram above, is the creation of a DynamoDB table and an Amazon Aurora database, as well as the development of the necessary AWS Lambda functions. Let us start with the creation of the table: we access the AWS console, open the DynamoDB service management page, and click on Create Table:

Now we can set the table name to Products and the partition key to ProductId, and finally click on Create table:
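If you prefer to script the table creation instead of clicking through the console, a minimal sketch with the AWS SDK for .NET (AWSSDK.DynamoDBv2 package) could look like this; on-demand billing is just an assumption for the example:

using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

var client = new AmazonDynamoDBClient();

// Create the Products table with ProductId as the partition key.
await client.CreateTableAsync(new CreateTableRequest
{
    TableName = "Products",
    AttributeDefinitions = new List<AttributeDefinition>
    {
        new AttributeDefinition("ProductId", ScalarAttributeType.N)
    },
    KeySchema = new List<KeySchemaElement>
    {
        new KeySchemaElement("ProductId", KeyType.HASH)
    },
    BillingMode = BillingMode.PAY_PER_REQUEST
});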

Let us now proceed with writing our AWS Lambda function. If we have not already installed the Amazon Lambda Templates, we use the dotnet command to proceed with the installation:

dotnet new install Amazon.Lambda.Templates

In order to create Lambda functions, we must also proceed with the installation of the .NET Global CLI Tools:

dotnet tool install -g Amazon.Lambda.Tools

Finally we proceed with the creation of the project:

dotnet new lambda.EmptyFunction --name ProductCommandFunction

Within the project, we define our simple model for saving a product:

[DynamoDBTable("Products")]
public class Product
{
    [DynamoDBHashKey]
    public int ProductId { get; set; }

    [DynamoDBProperty]
    public string Name { get; set; }

    [DynamoDBProperty]
    public decimal Price { get; set; }

    [DynamoDBProperty]
    public bool OutOfStock { get; set; }
}

As a result, the Lambda function that does the saving appears to be very simple indeed:

using Amazon.Lambda.Core;
using Amazon.Lambda.Serialization.SystemTextJson;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;
using System;
using System.Threading.Tasks;

[assembly: LambdaSerializer(typeof(DefaultLambdaJsonSerializer))]

public class Function
{
    private static AmazonDynamoDBClient _dynamoDbClient;
    private static DynamoDBContext _dbContext;

    static Function()
    {
        AmazonDynamoDBConfig clientConfig = new AmazonDynamoDBConfig
        {
            RegionEndpoint = Amazon.RegionEndpoint.YOUR_REGION
        };

        // For local testing only: on AWS, prefer the default constructor so the Lambda execution role is used.
        _dynamoDbClient = new AmazonDynamoDBClient("YourAccessKey", "YourSecretKey", clientConfig);
        _dbContext = new DynamoDBContext(_dynamoDbClient);
    }

    public async Task<string> FunctionHandler(Product inputProduct)
    {
        // Persist the aggregate state to the Products table.
        await _dbContext.SaveAsync(inputProduct);
        return $"Product {inputProduct.ProductId} saved";
    }
}

Our read model and Amazon Aurora

Having created and prepared the write model (Command), we now need to create the Amazon Aurora database that we will use as the basis for the read model (Query). We have already anticipated that Amazon Aurora would be the choice. To create it, we access the AWS console and select Amazon RDS, for example by simply using the search box, as shown below:

From the Create database section, we click on the dedicated button to proceed with the creation of our database:

We choose Amazon Aurora as the database engine and select the edition we want to use; in our case, we opt for MySQL compatibility:

For the purpose of this article, we can leave the default settings and possibly choose a more suitable name to assign to our database.

We then click on Create database to generate our storage. When finished, we will have a situation like the following:

Now our read database, which will simply contain the product list, is ready to be used. The next step is to connect the two databases to generate the read table from the write table. For this, we can take advantage of a native feature of DynamoDB called DynamoDB Stream.
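One practical note before moving on: the Product table must exist on the Aurora database before the Lambda function can insert rows into it. A minimal sketch, using the same MySQL client used later by the Lambda function and an assumed schema that simply mirrors the write model, could be:

using MySql.Data.MySqlClient;

// Connection string and schema are assumptions: adapt them to your Aurora setup.
string connectionString = "Server=your-aurora-endpoint;Database=your-db-name;User ID=your-user;Password=your-password;";

using (var connection = new MySqlConnection(connectionString))
{
    connection.Open();

    using (var cmd = connection.CreateCommand())
    {
        cmd.CommandText = @"CREATE TABLE IF NOT EXISTS Product (
            ProductId INT PRIMARY KEY,
            Name VARCHAR(255) NOT NULL,
            Price DECIMAL(10,2) NOT NULL
        )";
        cmd.ExecuteNonQuery();
    }
}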

Let’s create our DynamoDB Stream

Having created our table in DynamoDB, we must now enable the stream that will then allow us to capture write events and eventually redirect them to a dedicated Lambda function.

But what is a DynamoDB stream? We can define it as a stream of events, sorted by their natural order, generated from operations performed on the table such as insert, modify or delete.

This allows us to intercept what is happening on the DynamoDB table and act accordingly.

We then create our DynamoDB Stream. From the management console, we access our previously created Products table and, in the Exports and streams section, click on the Turn on button at DynamoDB stream details:

We then select New and old images in order to get the new record directly, as a result of the change made on the database:
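The same setting can also be applied programmatically; a minimal sketch with the AWS SDK for .NET, assuming the Products table already exists, could be:

using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

var client = new AmazonDynamoDBClient();

// Enable the stream on the Products table, emitting both new and old images.
await client.UpdateTableAsync(new UpdateTableRequest
{
    TableName = "Products",
    StreamSpecification = new StreamSpecification
    {
        StreamEnabled = true,
        StreamViewType = StreamViewType.NEW_AND_OLD_IMAGES
    }
});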

At the same time that the DynamoDB stream is enabled, an ARN will also be created: a unique identifier, valid within AWS, that is needed to locate resources and connect them to each other. It will come in handy in the next step, when we create the Lambda function that will take care of generating the data in the read tables.

A Lambda function to generate the read model

Having set everything up for writing, we now proceed to generate our read model. We then create a new AWS Lambda project:

dotnet new lambda.EmptyFunction --name ProductHandlerFunction

The task of this function is relatively simple: handle the insert, edit, and delete events of an item in the DynamoDB Products table and write it to an Amazon Aurora table. To do this, we modify the Function class in the Function.cs file as shown below:

[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

public class Function
{
    private readonly string connectionString;

    public Function()
    {
        // Replace these values with your Aurora database connection details
        string serverName = "your-aurora-server-name";
        string dbName = "your-aurora-database-name";
        string username = "your-aurora-username";
        string password = "your-aurora-password";

        connectionString = $"Server={serverName};Database={dbName};User ID={username};Password={password};";
    }

    public void FunctionHandler(DynamoDBEvent dynamoEvent, ILambdaContext context)
    {
        foreach (var record in dynamoEvent.Records)
        {
            Dictionary<string, AttributeValue> newImage = record.Dynamodb.NewImage;
            if (newImage != null)
            {
                Product product = new Product
                {
                    ProductId = Convert.ToInt32(newImage["ProductId"].N),
                    Name = newImage["Name"].S,
                    Price = Convert.ToDecimal(newImage["Price"].N)
                };

                using (MySqlConnection connection = new MySqlConnection(connectionString))
                {
                    connection.Open();

                    using (MySqlCommand cmd = connection.CreateCommand())
                    {
                        cmd.CommandText = "INSERT INTO Product (ProductId, Name, Price) VALUES (@ProductId, @Name, @Price)";
                        cmd.Parameters.AddWithValue("@ProductId", product.ProductId);
                        cmd.Parameters.AddWithValue("@Name", product.Name);
                        cmd.Parameters.AddWithValue("@Price", product.Price);

                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }
    }
}

For the correct execution of the Lambda function, the following packages must be installed:

dotnet add package AWSSDK.Extensions.NETCore.Setup
dotnet add package Newtonsoft.Json
dotnet add package MySql.Data
dotnet add package AWSSDK.DynamoDBv2
dotnet add package Amazon.Lambda.DynamoDBEvents

Having finished the development of our Lambda function, we can now publish it to AWS, as we saw earlier, with the command:

dotnet lambda deploy-function ProductHandlerFunction

After choosing an appropriate name for the IAM role, we select option 2 AWSLambdaInvocation-DynamoDB as the policy to apply for our purpose:

Upon completion we will find our Lambda function in the relevant section on AWS Console:

We can now click on Add trigger to add DynamoDB stream as the source of our Lambda function:

The trigger, as shown above, will allow us to intercept any write event on the Products table in DynamoDB. Finally, the Lambda function will take care of transforming the received information in the most appropriate way and save it to the Amazon Aurora database, ready to be used later by the read model.

Maximizing performance with Amazon Aurora replication

The choice of Amazon Aurora as the repository for read operations is not accidental. As anticipated, Amazon Aurora allows the configuration of read replicas, mirror copies of the primary database whose purpose is to improve system performance by allowing multiple users to access data simultaneously.

When a read operation is performed on a replica, the load is distributed between the primary database and the read replicas, thus reducing the load on the primary database and allowing more read requests to be handled.

The Amazon Aurora read replicas are kept synchronized with the primary database to ensure that the data is always up to date. This also means that, in the event of a failure, one of the read replicas can be promoted to primary database to ensure continuity of service.

Configuring Amazon Aurora and the read replicas

The replication capability of Amazon Aurora can thus be an additional weapon to enable highly scalable and performant services.

Configuration of Amazon Aurora and read replicas can be done by directly accessing the Amazon RDS service and selecting the previously created Aurora database:

If not already done, add a reader node by clicking on Actions and then on Add reader:

We assign it a name, check all the creation parameters, and confirm the creation. With this action we can create several read replicas and increase the scalability of our solution. In addition, we can also take advantage of the auto scaling capability by configuring, again from the Actions menu, auto scaling for read nodes based on CPU utilization or number of connections, also setting the minimum and maximum nodes to be used.
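From the application point of view, taking advantage of the replicas is mostly a matter of connection strings: writes go to the cluster (writer) endpoint, while queries can target the reader endpoint, which balances the load across the read replicas. A minimal sketch, with a placeholder endpoint to replace with the one shown in the RDS console, could be:

using System;
using MySql.Data.MySqlClient;

// Placeholder reader endpoint: it load-balances across the read replicas,
// while the writer endpoint remains dedicated to the write side.
string readerConnectionString =
    "Server=my-cluster.cluster-ro-xxxxxxxx.eu-west-1.rds.amazonaws.com;Database=your-db-name;User ID=your-user;Password=your-password;";

// Queries issued by the read model use the reader endpoint.
using (var connection = new MySqlConnection(readerConnectionString))
{
    connection.Open();

    using (var cmd = connection.CreateCommand())
    {
        cmd.CommandText = "SELECT ProductId, Name, Price FROM Product";
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                Console.WriteLine($"{reader.GetInt32(0)} - {reader.GetString(1)}: {reader.GetDecimal(2)}");
            }
        }
    }
}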

Conclusion

In this first article, we have seen how we can manage two different storage systems to support an architecture based on the CQRS pattern, with the ability to replicate and adapt the data appropriately based on the different needs of the read and write models.

AWS offers important solutions that can be leveraged to achieve the desired performance and scalability. Of course, it is always a good idea to keep a close eye on the costs arising from each and every choice we place in our architectural solutions.

In the next articles we will see how to improve the solution for which we have just laid the foundation.

As always, any feedback is welcome!

Enjoy!

Il tuo Secondo Cervello - Tiago Forte

Il tuo Secondo Cervello

First book, first review. Or rather, my honest impressions. First of all, I chose the Italian edition of the book to make the reading simpler and smoother, without the cognitive overhead of the necessary “contextual translation”. A personal choice, of course, and for me exactly the right one. It is a pity that the Italian edition, unlike the other international translations, opted for a completely different cover, a choice I find questionable. In any case, what we care about here is the content.

The Book

The book explains and shares the author’s approach, born from a personal experience, to building a method for organizing one’s own memory. The basic idea is to capture the information “that sparks some emotion”, or that you think might be useful in the future, and collect it in a digital space whose function is precisely that of a second brain, accessible at any time. Two methods are mainly illustrated:

  • the CODE method (Capture, Organize, Distill, Express): the sequence of states each single note goes through, so that it is ready to be easily consumed;
  • the PARA method (Projects, Areas, Resources, Archives): used to organize notes into dedicated “categories”, making them easier to find;

How has it been useful to me?

The approach is really interesting. I am constantly looking for a method to manage my notes and, more generally, my activities, from work to personal life. Having to remember everything is not easy. As reported in the book, a New York Times article estimates that we consume the equivalent of about 34 gigabytes of information every day. An enormous amount of data that definitely complicates its qualitative use. We struggle to untangle the huge pile of notes, bookmarks, and articles filed under “I’ll read it later”, and we regularly end up losing track of them completely (and never reading them). How many of you does this happen to? To me, often. So I started applying the method in my daily practice. Far from perfect, it has allowed me to find the information I needed in very little time, even a few minutes before a meeting. Really useful, but it takes a lot of discipline and consistency over time.

Conclusions

A smooth and very well-crafted book. The Italian translation did not bother me much, so I recommend it without hesitation, especially if you are looking for a method, or simply some inspiration, for organizing your life.

12 Books in 12 Months

Books

I have always loved reading. Unfortunately, I have been very inconsistent lately, so I decided to start a personal challenge and read “at least” 12 books in 12 months. The goal is also to share a short review of each book I read. Below is the roadmap of the project (yes, it really is a project, with its own deadlines):

  • January: Il tuo Secondo Cervello - Tiago Forte
  • February: Team Topologies - Matthew Skelton, Manuel Pais
  • March: Le Organizzazioni Agili - Carmela Flaminio
  • April: Fundamentals of Software Architecture - Neal Ford, Mark Richards
  • May: Pensieri Lenti e Veloci - Daniel Kahneman
  • June: Platform Strategy - Gregor Hohpe
  • July: Accelerate, The Science Behind Devops - Nicole Forsgren, Jez Humble, Gene Kim
  • August: Detto, fatto! L’arte di fare bene le cose - David Allen
  • September: Building Micro-Frontends - Luca Mezzalira
  • October: Learning Domain-Driven Design - Vlad Khononov
  • November: Atomic habits. Piccole abitudini per grandi cambiamenti - James Clear
  • December: Flow Architectures - James Urquhart

Bonus books (that is, other things I would like to read):

  • Monolith to Microservices - Sam Newman
  • Software Architecture: The Hard Parts - Neal Ford, Mark Richards, Pramod Sadalage & Zhamak Dehghani
  • Building Event-Driven Microservices - Adam Bellemare

And the list could be even longer. Of course, like any self-respecting roadmap, it may change along the way.

Let’s start reading them!

Post of the year 2023

It has been a really complicated year. New work challenges and the desire to live every possible moment with my family have left me little time to dedicate to the communities. It’s a matter of priorities, someone would say. Nothing could be truer. I still managed to carve out small spaces and take part in Agile O’Day 2023 in Naples with the session “I don’t need an Architect, I’m Agile!”, later revised and expanded during a UGIdotNET SmallTalk in June.

In both cases, it was an opportunity to talk about a topic very close to my heart: software architecture in Agile contexts.

Next year will be even more intense and full of new opportunities to improve, grow, and experiment, with the promise of getting back to writing on my blog, or elsewhere, and continuing the community work of the last few years with renewed commitment and dedication.

Happy 2024 to everyone!

Conversation with ChatGPT


Disclaimer! This post is generated from a conversation with ChatGPT. I asked a couple of things and waited for the answers. Let’s see what happened!

Question 1: What is the better way to create a realtime distributed software architecture?

There are many different ways to create a real-time distributed software architecture, and the “best” way will depend on your specific requirements and use case. Some common approaches to creating real-time distributed systems include using message brokers, event-driven architectures, and distributed databases.

One way to create a real-time distributed system is to use a message broker, such as Apache Kafka or Amazon Simple Queue Service (SQS), to enable communication and data sharing between different components of the system. This allows each component to operate independently and asynchronously, while still being able to share data and communicate with other components in real time.

Another approach is to use an event-driven architecture, where components of the system communicate and coordinate their actions through the publication and subscription of events. This allows the system to respond to changes in data or user input in real time, without the need for continuous polling or synchronization between components.

Finally, you can use a distributed database, such as Apache Cassandra or Amazon DynamoDB, to store and manage data in a distributed and highly-available manner. This allows different components of the system to access and update the data in real time, without the need for a central point of coordination or control.

Ultimately, the best approach for your system will depend on your specific requirements and constraints, and may involve a combination of these and other techniques. It’s important to carefully evaluate your options and choose the approach that best fits your needs.

Question 2: How I can create a multi cloud realtime distributed software architecture?

To create a multi-cloud real-time distributed software architecture, you can use a combination of the techniques discussed above, such as message brokers and event-driven architectures, along with cloud-specific services and tools.

One way to create a multi-cloud real-time distributed system is to use a cloud-agnostic message broker, such as Apache Kafka, which can be deployed across multiple cloud platforms and support communication and data sharing between different components of the system. This allows you to take advantage of the benefits of multiple cloud platforms, such as flexibility, scalability, and cost-effectiveness, while still maintaining a consistent and cohesive architecture for your system.

Another approach is to use cloud-specific event-driven architectures, such as AWS Lambda and Azure Functions, to enable real-time data processing and coordination across different cloud platforms. This allows you to leverage the unique capabilities and features of each cloud platform, while still maintaining a consistent event-driven approach to your architecture.

Additionally, you can use cloud-agnostic or cloud-specific distributed databases, such as Apache Cassandra and Amazon DynamoDB, to store and manage data in a distributed and highly-available manner across multiple cloud platforms. This allows you to take advantage of the scalability and reliability of distributed databases, while still maintaining the flexibility and portability of a multi-cloud architecture.

Overall, creating a multi-cloud real-time distributed architecture involves carefully balancing the benefits and challenges of using multiple cloud platforms, and choosing the right mix of tools and techniques to support your specific requirements and use case.

Question 3: Let me know how I can use Apache Kafka from C#

To use Apache Kafka from C#, you will need to have the Apache Kafka and the .NET client for Apache Kafka, known as Confluent.Kafka, installed on your system. Once you have these, you can use the Producer and Consumer classes in the Confluent.Kafka namespace to send and receive messages from Kafka topics.

Here’s an example of how you can use these classes to send a message to a Kafka topic:

using Confluent.Kafka;

namespace ApacheKafkaExample
{
    class Program
    {
        static void Main(string[] args)
        {
            // Create a new Kafka producer.
            var producerConfig = new ProducerConfig
            {
                BootstrapServers = "localhost:9092"
            };
            using (var producer = new ProducerBuilder<string, string>(producerConfig).Build())
            {
                // Send a message to the topic.
                producer.Produce("my-topic", new Message<string, string>
                {
                    Key = "my-key",
                    Value = "Hello, World!"
                });
            }
        }
    }
}

This code creates a new Producer object and uses it to send a message with the key “my-key” and the value “Hello, World!” to the topic “my-topic”. The Kafka server to connect to is specified in the BootstrapServers property of the ProducerConfig object that is passed to the ProducerBuilder when creating the Producer object.

To receive messages from a Kafka topic, you can use the Consumer class in the same way. Here’s an example of how you can use this class to receive and process messages from a Kafka topic:

using Confluent.Kafka;

namespace ApacheKafkaExample
{
    class Program
    {
        static void Main(string[] args)
        {
            // Create a new Kafka consumer.
            var consumerConfig = new ConsumerConfig
            {
                BootstrapServers = "localhost:9092",
                GroupId = "my-group"
            };
            using (var consumer = new ConsumerBuilder<string, string>(consumerConfig).Build())
            {
                // Subscribe to the topic.
                consumer.Subscribe("my-topic");

                // Poll for messages.
                while (true)
                {
                    var message = consumer.Consume();
                    Console.WriteLine($"Received message: {message.Value}");
                }
            }
        }
    }
}

This code creates a new Consumer object and uses it to subscribe to the topic “my-topic”. It then enters an infinite loop and uses the Consume method of the Consumer object to poll for new messages from the topic. Whenever a new message is received, it is printed to the console.

To learn more about how to use the Producer and Consumer classes in the Confluent.Kafka namespace to interact with Apache Kafka from C#, please see the Confluent.Kafka documentation.

Conclusion

Really, really awesome.

Enjoy it at https://chat.openai.com/chat

Deploy .NET 6 Minimal API to AWS Lambda via GitHub


Serverless is certainly one of the most popular words of the last few years, at least in the cloud computing world. But what does it mean? This is one of the simplest definitions I found:

Serverless is a cloud-native development model that allows developers to build and run applications without having to manage servers (RedHat).

So, this means that servers are still there (phew…).

Serverless doesn’t mean “without servers”: it is more about the ownership of resources, load balancing, scalability, and all the other server concerns that a developer doesn’t need to manage. Server technologies are abstracted away from development.

AWS currently has a lot of services useful for implementing serverless applications. One of the best known is AWS Lambda. An AWS Lambda function is composed of two parts: the function itself, the code and runtime that process events, and a trigger, the AWS service or application that causes the function to execute. In this post we are going to see how to deploy an AWS Lambda function developed with the new .NET 6 Minimal API, using GitHub as the source repository.

Toolbox

Before we get into the steps, a few words about the AWS services and tools we are going to use:

  • AWS Lambda: a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. One of the most important points is that you only pay for what you use, which in this case means that you only pay for the execution time. More info at the official page.
  • AWS CodePipeline: a fully managed continuous delivery service. With CodePipeline you can automate the build and deploy service. Check the official page.
  • AWS Lambda Tools for .NET Core: a set of commands to create and deploy .NET-based Lambda applications. See here for more info.
  • GitHub: the Git repository that we are going to use for our source code. You can use whatever you prefer, like Bitbucket.
  • Visual Studio Code: the cross-platform editor to write our .NET AWS Lambda application.

So, now let’s start and see what happens.

Create the .NET Lambda serverless project

The .NET Core CLI is the easiest way to create a .NET Lambda project. As always, you need to use the predefined AWS Lambda .NET Project templates with the dotnet new command. First, if you need to install the templates, open your command line tool, or terminal on macOS, and use the dotnet new -i command to install the Lambda Project Templates:

dotnet new -i Amazon.Lambda.Templates

After the installation completes, you can proceed with creating the project. In your command line tool, go to the base directory of your repository and use the serverless.AspNetCoreMinimalAPI template as shown here:

dotnet new serverless.AspNetCoreMinimalAPI -n myAwesomeLambda

The project is now ready. Under the src/myAwesomeLambda folder, in Program.cs, you’ll find all the code needed to run your Minimal API. You can change the code and implement the APIs based on your needs, as in the sketch below.
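As a reference, here is a minimal sketch of what the generated Program.cs looks like; the exact content may differ slightly between template versions, and the endpoint is just an example:

var builder = WebApplication.CreateBuilder(args);

// Register the Lambda hosting support so the Minimal API can be invoked
// through API Gateway (REST API events in this case) when running on Lambda.
builder.Services.AddAWSLambdaHosting(LambdaEventSource.RestApi);

var app = builder.Build();

app.MapGet("/", () => "Hello from myAwesomeLambda!");

app.Run();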

Setup your project for AWS CodePipeline

For this post, it is useful to look at these two specific files created by the serverless template:

  • aws-lambda-tools-defaults.json
  • serverless.template

The aws-lambda-tools-defaults.json file contains all the deployment settings that the command line uses to deploy the Lambda function; we’ll see the command later. The serverless.template, instead, is the JSON template that allows the creation of the serverless service by using AWS CloudFormation. You can find more info here. In this article we are going to use only the AWS Lambda Tools for .NET Core.

As seen in the previous post, we need to use the buildspec.yml file to build our solution with AWS CodePipeline. Before we can run the build and deploy commands, we need to be sure that all the CLI tools are correctly installed on the build machine. To do that, we first install the latest .NET version and then install, or update, Amazon.Lambda.Tools by using the dotnet tool update command, as you can see in the following buildspec.yml file:

version: 0.2

phases:
    install:
        commands:
            - /usr/local/bin/dotnet-install.sh --channel LTS
            - dotnet tool update -g Amazon.Lambda.Tools
            
    build:
        commands:
            - dotnet lambda deploy-function myAwesomeLambda --project-location ./src/myAwesomeLambda/ --function-role myAwesomeLambdaRole --config-file aws-lambda-tools-defaults.json

The dotnet lambda deploy-function command builds, packages, and deploys your AWS Lambda function written in .NET. As written above, all the options specified here can also be set in the aws-lambda-tools-defaults.json file. Here is an example:

{
  "profile": "default",                                                            
  "region": "eu-west-1",                                                           
  "configuration": "Release",                                                      
  "function-runtime": "dotnet6",                                               
  "function-memory-size": 256,                                                     
  "function-timeout": 30,                                                          
  "function-handler": "myAwesomeLambda", 
  "s3-prefix": "myAwesomeLambda/"
}

Now we can push our first repository version and then start configuring our AWS CodePipeline.

Configuring and building the AWS CodePipeline

In a very similar way to what we have done in the previous post, we are going to create our AWS CodePipeline to build and publish the .NET AWS Lambda function. The AWS CodePipeline will pull the source code from GitHub and then pass the artifact to the build server.

So, first of all, we need to go to CodePipeline section on our AWS Console, and click the Create Pipeline button:

Create the pipeline

Define the Pipeline name and click on Next:

Select the source

Select GitHub (Version 2) as Source provider. Then create a new GitHub connection by clicking on Connect to GitHub button:

Connect to GitHub

Configure the GitHub connection

To use GitHub as a source, we first need to connect to our account. So on this page, set a name for the new connection and click Connect to GitHub:

Set the connection name

Click Install a new app to install AWS Connector for GitHub on your GitHub account and follow the instructions on the page. Then click Connect to complete this step:

Install AWS Connector for GitHub

Now you can complete the GitHub source configuration by selecting the Repository and the Branch name and clicking to Next:

Configure GitHub repository and branch

Setting up the build stage

After defining the source code repository, we need to create the build stage. In our project, we select AWS CodeBuild as the engine for the build. Specify your preferred region and create a new build project by clicking Create project:

Create the build project

Here, after setting the Project name, go to the Environment section and choose Ubuntu as the operating system, as you can see in the image below:

Select the operating system for build

Make sure that the Use a buildspec file option is already selected in the Buildspec section. As specified above, we are going to use the buildspec.yml:

Use a buildspec file

Define the Deploy stage

Since the deployment is made by the build server, we don’t need to set the Deploy stage, so we can skip this step.

Run the pipeline

We are ready. Now we can push our code to the remote repository and start the pipeline. At this time, you could encounter this error message:

Error creating Lambda function: User: arn:aws:sts::assumed-role/build-role/AWSCodeBuild-xxx is not authorized to perform: iam:PassRole on resource: arn:aws:iam::xxx:role/myAwesomeLambdaRole because no identity-based policy allows the iam:PassRole action

To solve the issue, we need to assign the iam:PassRole permission to the CodeBuild service role. So, go to IAM > Roles, select the role created for the AWS CodeBuild service, then create a specific policy by clicking on Add permission > Create inline policy:

Add permission

and then select the rules as in the following image (be sure to have the target Lambda service role ARN):

.NET on Lambda inline policy

After a few minutes, you can go to the AWS Lambda console section and test your running code.

Test your Lambda function

Now everything is ready. Based on our configuration, the pipeline runs after each change in the GitHub source repository. In the end, you can go to the Lambda section, select your Lambda function instance, and check that it is running fine. In the AWS console, you can also test your Lambda function: simply click on the Test tab and select your preferred template from the list:

Lambda test template selection

The simplest way to test the Lambda function is by using the API Gateway AWS Proxy template. Our Lambda function is built to reply to HTTP requests. An HTTP request can be made internally in your private network or can come from an external client through an API Gateway; we’ll see this alternative way in the next post. For the sake of this post, to test the call, we can use the following JSON document and set the attributes needed to execute the request by invoking the HTTP GET method:

{
  "body": "",
  "resource": "/{proxy+}",
  "path": "/",
  "httpMethod": "GET"
}

Now you can push the Test button in the upper right corner and see the result:

Lambda test result

As always, any feedback is welcome!

Enjoy!

Cost usage of Blazor on AWS Elastic Beanstalk

In the latest post we explored a possible way to deploy a Blazor WebAssembly application. Now, I would like to check how much the solution will cost by looking at the AWS services involved.

The solution requires the usage of different services:

  • CodeBuild
  • Elastic Beanstalk
    • Elastic Load Balancer
    • EC2 Instance
    • EC2 Others (Volume Usage, in my case)

As said in my previous post, AWS Elastic Beanstalk gives you the ability to have an application running without having to think about the services needed behind it, like the load balancer. This is why you see Elastic Load Balancer in the cost list. By using AWS Cost Explorer, part of AWS Cost Management, we can create reports and analyze the costs of our AWS solution.

CodeBuild costs

The CodeBuild cost for a Linux build machine is about € 0,01 per minute of execution. So if you run 3 release pipelines in a day, and each pipeline takes 4 minutes to complete, you’ll have a cost of € 0,12 per day, an average of about € 2,14 in a month (considering 20 working days). Not bad.

Elastic Beanstalk

My Elastic Beanstalk solution, very simple, is composed of one Elastic Compute Cloud (EC2) instance, with auto-scaling enabled, and one Elastic Load Balancer. To calculate the costs, we must keep in mind two main values: execution time and volume usage. Respectively, we will have an EC2 running cost and an EBS (Elastic Block Store) consumption cost.

So, each Elastic Beanstalk instance will cost:

  • [EC2] € 0,061 per On Demand Windows t3.medium Instance Hour: € 1,46/day
  • [EBS] € 0,10 per GB-month of General Purpose SSD (gp2) provisioned storage - EU (Ireland): in my case 2,629 GB/Month

Each Elastic Load Balancer will cost:

  • $0.008 per used Application load balancer capacity unit-hour (or partial hour): € 0,192/day
  • $0.0252 per Application LoadBalancer-hour (or partial hour): € 0,60/day

OK, so how much will I pay per month?

Let’s look at the costs in detail. Assuming a month of 30 days, here is the sum of the costs:

Service         Hypothetical cost per day   Hypothetical cost per month
CodeBuild       € 0,12                      € 2,14
EC2             € 1,46                      € 43,80
EBS                                         € 0,26
ALB capacity    € 0,192                     € 5,76
ALB hour        € 0,60                      € 18,00

Total                                       € 69,96

So, the final cost will be about € 70 per month. This is a small forecast of the app running with very little traffic, and the app is not yet optimized. In the next posts we will see how we can maintain an acceptable cost, or reduce it, by introducing new services.

As always, any feedback is welcome!

Deploy .NET 6 Blazor WebAssembly on AWS Elastic Beanstalk

This post is the first in a new series looking at .NET on AWS. Why AWS? The answer is … why not! Perhaps not everyone knows that .NET is a first-class citizen on AWS: right after the Java SDK, the .NET SDK was one of the first SDKs released, back in early 2010.

In this post, we will explore one of the alternative ways to deploy a Blazor WebAssembly application on AWS Elastic Beanstalk. We will use GitHub as the repository and AWS CodePipeline to retrieve the source code, build the project, generate the required artifacts, and then deploy it to the AWS Elastic Beanstalk instance. And the target framework for the project is .NET 6.

Before we get into the steps, a few words about the AWS services we will be using:

  • AWS Elastic Beanstalk: an easy-to-use service for deploying and scaling web applications and services. This means that we can simply work on our code while the engine automatically handles the environment concerns needed to successfully run the application, like deployment, capacity provisioning, load balancing, and auto-scaling. If you prefer, you can also modify all the environment settings to better fit your needs. More info at the official page.
  • AWS CodePipeline: a fully managed continuous delivery service. With CodePipeline you can automate the build and deploy steps. More info at the official page.

Create the AWS Elastic Beanstalk instance

First, we create the Beanstalk project that will host the application. In the AWS console, we can search for Beanstalk and select the appropriate scope:

Select the AWS Elastic Beanstalk

To run our Blazor application on AWS, we need to create a new Windows environment in the Elastic Beanstalk section. To do this, click on Create a new environment:

Create a new environment

then select Web server environment:

Select environment tier

and finally, after setting the name, we need to set the .NET on Windows Server platform:

Select the environment platform

Be sure to leave Sample application selected under Application code; this is a good starting point to have a preconfigured environment. Then click the Create environment button at the bottom of the page. After a few minutes the environment is ready and we can start the next step: building the pipeline.

Build with AWS CodePipeline

With CodePipeline, you can create your build pipeline on AWS, pull source code from GitHub, and deploy all artifacts to Elastic Beanstalk. Now go to CodePipeline and click the Create Pipeline button:

Create the pipeline

Define the Pipeline name and click on Next:

Select the source

Select GitHub (Version 2) as Source provider. Then create a new GitHub connection by clicking on Connect to GitHub button:

Connect to GitHub

Configure the GitHub connection

To use GitHub as a source, we first need to connect to our account. So on this page, set a name for the new connection and click Connect to GitHub:

Set the connection name

Click Install a new app to install AWS Connector for GitHub on your GitHub account and follow the instructions on the page. Then click Connect to complete this step:

Install AWS Connector for GitHub

Now you can complete the GitHub source configuration by selecting the Repository and the Branch name and clicking to Next:

Configure GitHub repository and branch

Setting up the build stage

After defining the source code repository, we need to create the build stage. In our project, we select AWS CodeBuild as the engine for the build. Specify your preferred region and create a new build project by clicking Create project:

Create the build project

Here, after setting the Project name, go to the Environment section and choose Ubuntu as the operating system, as you can see in the image below:

Select the operating system for build

Make sure that the Use a buildspec file option is already selected in the Buildspec section. This file is needed for configuring the build phase in the Blazor project. We’ll talk about the buildspec.yml file later:

Use a buildspec file

Define the Deploy stage

At the end of the pipeline, all artifacts in our environment must be deployed. So, configure the deployment phase for AWS Elastic Beanstalk, as you can see in the figure below:

Create the deploy stage

Configure the Blazor project

The final step is project configuration. AWS CodeBuild, which is used by AWS CodePipeline, requires a set of specific instructions to build your project. All of these instructions must be written in the buildspec.yml, a build specification file. This file must be located in the root directory of your source code. For more information about this file, see the following page.

To build a Blazor project, I found the following ‘buildspec.yml’ very useful and simple:

version: 0.2

phases:
    install:
        commands:
            - /usr/local/bin/dotnet-install.sh --channel LTS
            
    build:
        commands:
            - dotnet build -c Release ./BlazorOnAWS.csproj
            - dotnet publish -o dist
            
artifacts:
    files:
        - dist/**/*
        - aws-windows-deployment-manifest.json

The above file consists of two main parts: the phase definition and the artifact output. In the phase definition, we first need to be sure that the latest .NET version is installed. Unfortunately, the images used in AWS CodeBuild don’t currently support .NET 6, so we need to use the dotnet-install.sh script to install it just before the build commands. For more information about the script, see this page. After the installation phase is complete, the build phase runs the dotnet build and dotnet publish commands and copies the output to dist, the custom output folder. The final step is to create a package with the output from the dist/**/* directory and the aws-windows-deployment-manifest.json file, which the Elastic Beanstalk Windows container reads to determine how to deploy the application. Here’s the content of the file I used in my example:

{
    "manifestVersion": 1,
    "deployments": {
        "aspNetCoreWeb": [
        {
            "name": "test-dotnet-core",
            "parameters": {
                "appBundle": "dist",
                "iisPath": "/",
                "iisWebSite": "Default Web Site"
            }
        }
        ]
    }
}

The manifest file, stored in the generated artifact (a zip file), indicates the dist folder as the appBundle, instructing AWS Elastic Beanstalk on how to deploy the application. More info about the file specification is available here.

Run the app

Now everything is ready. Based on our configuration, the pipeline runs after each change in the GitHub source repository. At the end, you can go to the Elastic Beanstalk instance, click on the instance URL, and enjoy your Blazor WASM app:

Blazor app running on AWS

As always, any feedback is welcome!

Enjoy!

Blog Resurrection and priorities

It’s been a really long time since my last post. Many things have changed in my personal life: my son was born in March (WOW!).

As you know, it is a life-changing event. No sleep for months and less free time for passions (sports, writing, etc.). Priorities have changed. It’s always a matter of priorities. When you say “I have no time for…”, you’re actually saying that activity has a lower priority. You have other priorities.

So, now, let’s restart my blog and community activities. 2022 is going to be a great year for developers.

Enjoy!