Conversation with ChatGPT

Disclaimer! This post is generated from a conversation with ChatGPT. I asked a few questions and waited for the answers. Let's see what happened!

Question 1: What is the best way to create a real-time distributed software architecture?

There are many different ways to create a real-time distributed software architecture, and the “best” way will depend on your specific requirements and use case. Some common approaches to creating real-time distributed systems include using message brokers, event-driven architectures, and distributed databases.

One way to create a real-time distributed system is to use a message broker, such as Apache Kafka or Amazon Simple Queue Service (SQS), to enable communication and data sharing between different components of the system. This allows each component to operate independently and asynchronously, while still being able to share data and communicate with other components in real time.

Another approach is to use an event-driven architecture, where components of the system communicate and coordinate their actions through the publication and subscription of events. This allows the system to respond to changes in data or user input in real time, without the need for continuous polling or synchronization between components.

Finally, you can use a distributed database, such as Apache Cassandra or Amazon DynamoDB, to store and manage data in a distributed and highly-available manner. This allows different components of the system to access and update the data in real time, without the need for a central point of coordination or control.

Ultimately, the best approach for your system will depend on your specific requirements and constraints, and may involve a combination of these and other techniques. It’s important to carefully evaluate your options and choose the approach that best fits your needs.

Question 2: How can I create a multi-cloud real-time distributed software architecture?

To create a multi-cloud real-time distributed software architecture, you can use a combination of the techniques discussed above, such as message brokers and event-driven architectures, along with cloud-specific services and tools.

One way to create a multi-cloud real-time distributed system is to use a cloud-agnostic message broker, such as Apache Kafka, which can be deployed across multiple cloud platforms and support communication and data sharing between different components of the system. This allows you to take advantage of the benefits of multiple cloud platforms, such as flexibility, scalability, and cost-effectiveness, while still maintaining a consistent and cohesive architecture for your system.

Another approach is to use cloud-specific event-driven architectures, such as AWS Lambda and Azure Functions, to enable real-time data processing and coordination across different cloud platforms. This allows you to leverage the unique capabilities and features of each cloud platform, while still maintaining a consistent event-driven approach to your architecture.

Additionally, you can use cloud-agnostic or cloud-specific distributed databases, such as Apache Cassandra and Amazon DynamoDB, to store and manage data in a distributed and highly-available manner across multiple cloud platforms. This allows you to take advantage of the scalability and reliability of distributed databases, while still maintaining the flexibility and portability of a multi-cloud architecture.

Overall, creating a multi-cloud real-time distributed architecture involves carefully balancing the benefits and challenges of using multiple cloud platforms, and choosing the right mix of tools and techniques to support your specific requirements and use case.

Question 3: Let me know how I can use Apache Kafka from C#

To use Apache Kafka from C#, you will need a running Apache Kafka broker and the .NET client for Apache Kafka, known as Confluent.Kafka, installed in your project. Once you have these, you can use the producer and consumer classes in the Confluent.Kafka namespace to send and receive messages from Kafka topics.

Here’s an example of how you can use these classes to send a message to a Kafka topic:

using System;
using Confluent.Kafka;

namespace ApacheKafkaExample
{
    class Program
    {
        static void Main(string[] args)
        {
            // Create a new Kafka producer.
            var producerConfig = new ProducerConfig
            {
                BootstrapServers = "localhost:9092"
            };
            using (var producer = new ProducerBuilder<string, string>(producerConfig).Build())
            {
                // Send a message to the topic.
                producer.Produce("my-topic", new Message<string, string>
                {
                    Key = "my-key",
                    Value = "Hello, World!"
                });

                // Produce is asynchronous and buffers internally: wait for
                // delivery before the producer is disposed, or the message may be lost.
                producer.Flush(TimeSpan.FromSeconds(10));
            }
        }
    }
}

This code builds a producer and uses it to send a message with the key “my-key” and the value “Hello, World!” to the topic “my-topic”. The Kafka server to connect to is specified in the BootstrapServers property of the ProducerConfig object that is passed to the ProducerBuilder. Since Produce buffers messages internally, the final Flush call makes sure everything is delivered before the producer is disposed.
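If you prefer to await confirmation for each message, a sketch using ProduceAsync (inside an async method, with the same producer as above) could look like this; the DeliveryResult it returns carries the partition and offset the message was written to:

// Send a message and await the broker's acknowledgment.
var deliveryResult = await producer.ProduceAsync("my-topic", new Message<string, string>
{
    Key = "my-key",
    Value = "Hello, World!"
});
Console.WriteLine($"Delivered to {deliveryResult.TopicPartitionOffset}");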

To receive messages from a Kafka topic, you can use the Consumer class in the same way. Here’s an example of how you can use this class to receive and process messages from a Kafka topic:

using System;
using Confluent.Kafka;

namespace ApacheKafkaExample
{
    class Program
    {
        static void Main(string[] args)
        {
            // Create a new Kafka consumer.
            var consumerConfig = new ConsumerConfig
            {
                BootstrapServers = "localhost:9092",
                GroupId = "my-group",
                // Start from the beginning of the topic when no committed offset exists.
                AutoOffsetReset = AutoOffsetReset.Earliest
            };
            using (var consumer = new ConsumerBuilder<string, string>(consumerConfig).Build())
            {
                // Subscribe to the topic.
                consumer.Subscribe("my-topic");

                // Poll for messages.
                while (true)
                {
                    var result = consumer.Consume();
                    Console.WriteLine($"Received message: {result.Message.Value}");
                }
            }
        }
    }
}

This code builds a consumer with ConsumerBuilder and subscribes it to the topic “my-topic”. It then enters an infinite loop, using the Consume method to poll for new messages. Whenever a new message is received, its value is printed to the console.
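In a real application you would normally wire up a cancellation token and call Close() so the consumer commits its offsets and leaves the group cleanly. A minimal sketch of the polling loop, assuming the same consumer as above:

var cts = new System.Threading.CancellationTokenSource();
Console.CancelKeyPress += (_, e) =>
{
    e.Cancel = true;  // keep the process alive so we can shut down cleanly
    cts.Cancel();
};

try
{
    while (true)
    {
        var result = consumer.Consume(cts.Token);
        Console.WriteLine($"Received message: {result.Message.Value}");
    }
}
catch (OperationCanceledException)
{
    // Ctrl+C was pressed: fall through to Close().
}
finally
{
    consumer.Close();  // commit offsets and leave the consumer group
}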

To learn more about how to use the Producer and Consumer classes in the Confluent.Kafka namespace to interact with Apache Kafka from C#, please see the Confluent.Kafka documentation.

Conclusion

Really, really awesome.

Enjoy it at https://chat.openai.com/chat

Deploy .NET 6 Minimal API to AWS Lambda via GitHub

Serverless is surely one of the most popular buzzwords of the last few years, at least in the cloud computing world. But what does it mean? This is one of the simplest definitions I found:

Serverless is a cloud-native development model that allows developers to build and run applications without having to manage servers (RedHat).

So, this means that servers are still there (phew…).

Serverless doesn’t mean “without servers”; it is more about the ownership of resources, load balancing, scalability, and all the other server concerns that a developer doesn’t need to manage. Server technologies are abstracted away from development.

AWS currently has a lot of services useful for implementing serverless applications. One of the most well known is AWS Lambda. An AWS Lambda is composed of two parts: a function, the code and runtime that process events, and a trigger, the AWS service or application that causes the function to execute. In this post we are going to see how to deploy an AWS Lambda function developed with the new .NET 6 Minimal API, using GitHub as the source repository.

Toolbox

Before we get into the steps, a few words about the AWS services and tools we are going to use:

  • AWS Lambda: a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. One of the most important points is that you only pay for what you use, which in this case means that you only pay for the execution time. More info at the official page.
  • AWS CodePipeline: a fully managed continuous delivery service. With CodePipeline you can automate the build and deploy service. Check the official page.
  • AWS Lambda Tools for .NET Core: a set of commands to create and deploy .NET-based Lambda applications. See here for more info.
  • GitHub: the git repository that we are going to use for our source code. You can use what you prefer, like Bitbucket.
  • Visual Studio Code: the cross-platform editor to write our .NET AWS Lambda application.

So, now let’s start and see what happens.

Create the .NET Lambda serverless project

The .NET Core CLI is the easiest way to create a .NET Lambda project. As always, you need to use the predefined AWS Lambda .NET Project templates with the dotnet new command. First, if you need to install the templates, open your command line tool, or terminal on macOS, and use the dotnet new -i command to install the Lambda Project Templates:

dotnet new -i Amazon.Lambda.Templates

After the installation completes, you can proceed with creating the project. In your command line tool, go to the base directory of your repository and use the serverless.AspNetCoreMinimalAPI template as shown here:

dotnet new serverless.AspNetCoreMinimalAPI -n myAwesomeLambda

The project is now ready. Under the folder src/myAwesomeLambda, in Program.cs, you’ll find all the code needed to run your Minimal API. You can change the code and implement the APIs based on your needs.
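For reference, the heart of the generated Program.cs looks roughly like the sketch below (names may differ slightly between template versions): a standard Minimal API bootstrap plus the AddAWSLambdaHosting call, which lets the same code run behind Lambda in production and as a normal ASP.NET Core app locally:

using Amazon.Lambda.AspNetCoreServer.Hosting;

var builder = WebApplication.CreateBuilder(args);

// Run as a Lambda function when deployed; when running locally this is
// a no-op and the app starts on Kestrel as usual.
builder.Services.AddAWSLambdaHosting(LambdaEventSource.RestApi);

var app = builder.Build();

app.MapGet("/", () => "Hello from .NET 6 Minimal API on AWS Lambda!");

app.Run();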

Setup your project for AWS CodePipeline

For this post, it is useful to look at these two specific files created by the serverless template:

  • aws-lambda-tools-defaults.json
  • serverless.template

The aws-lambda-tools-defaults.json contains all the deployment settings that the command line tool uses to deploy the Lambda function; we’ll see the command line instruction later. The serverless.template, instead, is the JSON template that allows the creation of the serverless service by using AWS CloudFormation (you can find more info here). In this article we are going to use only the AWS Lambda Tools for .NET Core.

As seen in the previous post, we need a buildspec.yml file to build our solution with AWS CodePipeline. Before we can run the build and deploy commands, we need to be sure that all the CLI tools are correctly installed on the build machine. To do that, we first install the latest dotnet version and then install, or update, Amazon.Lambda.Tools by using the dotnet tool update command, as you can see in the following buildspec.yml file:

version: 0.2

phases:
    install:
        commands:
            - /usr/local/bin/dotnet-install.sh --channel LTS
            - dotnet tool update -g Amazon.Lambda.Tools
            
    build:
        commands:
            - dotnet lambda deploy-function myAwesomeLambda --project-location ./src/myAwesomeLambda/ --function-role myAwesomeLambdaRole --config-file aws-lambda-tools-defaults.json

The dotnet lambda deploy-function command builds, packages, and deploys your AWS Lambda function written in .NET. As written above, all the options specified here can also be set in the aws-lambda-tools-defaults.json file. Here is an example:

{
  "profile": "default",
  "region": "eu-west-1",
  "configuration": "Release",
  "function-runtime": "dotnet6",
  "function-memory-size": 256,
  "function-timeout": 30,
  "function-handler": "myAwesomeLambda",
  "s3-prefix": "myAwesomeLambda/"
}

Now we can push our first repository version and then start configuring our AWS CodePipeline.

Configuring and building the AWS CodePipeline

In a very similar way to what we have done in the previous post, we are going to create our AWS CodePipeline to build and publish the .NET AWS Lambda function. The AWS CodePipeline will pull the source code from GitHub and then pass the artifact to the build server.

So, first of all, we need to go to the CodePipeline section of our AWS Console and click the Create Pipeline button:

Create the pipeline

Define the Pipeline name and click on Next:

Select the source

Select GitHub (Version 2) as the Source provider. Then create a new GitHub connection by clicking the Connect to GitHub button:

Connect to GitHub

Configure the GitHub connection

To use GitHub as a source, we first need to connect to our account. So on this page, set a name for the new connection and click Connect to GitHub:

Set the connection name

Click Install a new app to install AWS Connector for GitHub on your GitHub account and follow the instructions on the page. Then click Connect to complete this step:

Install AWS Connector for GitHub

Now you can complete the GitHub source configuration by selecting the Repository and the Branch name and clicking Next:

Configure GitHub repository and branch

Setting up the build stage

After defining the source code repository, we need to create the build stage. In our project, we select AWS CodeBuild as the engine for the build. Specify your preferred region and create a new build project by clicking Create project:

Create the build project

Here, after setting the Project name, go to the Environment section and choose Ubuntu as the operating system, as you can see in the image below:

Select the operating system for build

Make sure that the Use a buildspec file option is selected in the Buildspec section. As specified above, we are going to use the buildspec.yml:

Use a buildspec file

Define the Deploy stage

Since the deployment is performed by the build server, we don’t need a Deploy stage, so we can skip this step.

Run the pipeline

We are ready. Now we can push our code to the remote repository and start the pipeline. At this point, you might encounter this error message:

Error creating Lambda function: User: arn:aws:sts::assumed-role/build-role/AWSCodeBuild-xxx is not authorized to perform: iam:PassRole on resource: arn:aws:iam::xxx:role/myAwesomeLambdaRole because no identity-based policy allows the iam:PassRole action

To solve the issue, we need to grant the iam:PassRole permission to the CodeBuild service role. So, go to IAM > Roles, select the role created for the AWS CodeBuild service, then create a specific policy by clicking Add permission > Create inline policy:

Add permission

and then select the rules as in the following image (be sure to use the target Lambda service role ARN):

.NET on Lambda inline policy
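If you prefer the JSON editor, an inline policy like the following sketch should work; the account ID and role name below are placeholders, so be sure to use your own Lambda service role ARN:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::123456789012:role/myAwesomeLambdaRole"
    }
  ]
}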

After a few minutes, you can go to the AWS Lambda console section and test your running code.

Test your Lambda function

Now everything is ready. Based on our configuration, the pipeline runs after each change to the GitHub source repository. At the end, you can go to the Lambda section, select your Lambda function instance and check whether it is running fine. In the AWS console, you can also test your Lambda function: simply click the Test tab and select your preferred template from the list:

Lambda test template selection

The simplest way to test the Lambda function is by using the API Gateway AWS Proxy template. Our Lambda function is built to reply to HTTP requests; a request can be made internally in your private network or can come from an external client through an API Gateway. We’ll see this alternative in the next post. To test the call, we can use the following JSON document, which sets the attributes needed to invoke the HTTP GET method:

{
  "body": "",
  "resource": "/{proxy+}",
  "path": "/",
  "httpMethod": "GET"
}

Now you can push the Test button in the upper right corner and see the result:

Lambda test result

As always, any feedback is welcome!

Enjoy!

Cost usage of Blazor on AWS Elastic Beanstalk

In the latest post we explored a possible way to deploy a Blazor WebAssembly application. Now I would like to check how much the solution will cost and look at the AWS services involved.

The solution requires the usage of different services:

  • CodeBuild
  • Elastic Beanstalk
    • Elastic Load Balancer
    • EC2 Instance
    • EC2 Others (Volume Usage, in my case)

As said in my previous post, AWS Elastic Beanstalk gives you the ability to have an application running without having to think about the services it needs, like the load balancer. This is why you see Elastic Load Balancer in the cost list. By using AWS Cost Explorer, part of AWS Cost Management, we can create reports and analyze the costs of our AWS solution.

CodeBuild costs

The CodeBuild cost for a Linux build machine is about €0,01 per minute of execution. So if you run 3 release pipelines in a day, and each pipeline takes 4 minutes to complete, you’ll have a cost of about €0,12 per day, an average of €2,14 per month (considering 20 working days). Not bad.

Elastic Beanstalk

My Elastic Beanstalk solution is very simple: one Elastic Compute Cloud (EC2) instance, with auto-scaling enabled, and one Elastic Load Balancer. To calculate the costs, we must keep in mind two main values: execution time and volume usage. Respectively, we will have an EC2 running cost and an EBS (Elastic Block Store) consumption cost.

So, each Elastic Beanstalk instance will cost:

  • [EC2] € 0,061 per On Demand Windows t3.medium Instance Hour: € 1,46/day
  • [EBS] € 0,10 per GB-month of General Purpose SSD (gp2) provisioned storage - EU (Ireland): in my case 2,629 GB/Month

Each Elastic Load Balancer will cost:

  • $0.008 per used Application load balancer capacity unit-hour (or partial hour): € 0,192/day
  • $0.0252 per Application LoadBalancer-hour (or partial hour): € 0,60/day

OK, so how much will I pay per month?

Let’s see the costs in detail. Assuming a month of 30 days, here is the sum of the costs:

| Service | Hypothetical cost per day | Hypothetical cost per month |
| --- | --- | --- |
| CodePipeline | € 0,12 | € 2,14 |
| EC2 | € 1,46 | € 43,80 |
| EBS | | € 0,26 |
| ALB capacity | € 0,192 | € 5,76 |
| ALB hour | € 0,60 | € 18,00 |
| Total | | € 69,96 |

So, the final cost will be about €70 per month. This is a rough forecast for the app running with very little traffic, and the app is not well optimized. In the next posts we will see how we can keep the cost acceptable, or reduce it, by introducing new services.

As always, any feedback is welcome!

Deploy .NET 6 Blazor WebAssembly on AWS Elastic Beanstalk

This post is the first in a new series looking at .NET on AWS. Why AWS? The answer is … why not! Perhaps not everyone knows that .NET is a first-class citizen on AWS: right after the Java SDK, the .NET SDK was one of the first SDKs released, back in early 2010.

In this post, we will explore one of the alternative ways to deploy a Blazor WebAssembly application on AWS Elastic Beanstalk. We will use GitHub as the repository and AWS CodePipeline to retrieve the source code, build the project, generate the required artifacts, and then deploy it to the AWS Elastic Beanstalk instance. And the target framework for the project is .NET 6.

Before we get into the steps, a few words about the AWS services we will be using:

  • AWS Elastic Beanstalk: an easy-to-use service for deploying and scaling web applications and services. This means that we can simply work on our code and the engine automatically handles the environment stuff needed to successfully execute the application, like deployment, capacity, load balancing, auto-scaling, and things like that. If you prefer, you can also modify all the environment settings to better fit your needs. More info at the official page.
  • AWS CodePipeline: a fully managed continuous delivery service. With CodePipeline you can automate the build and deploy service. More info at the official page.

Create the AWS Elastic Beanstalk instance

First, we create the Beanstalk project that will host the application. In the AWS console, we can search for Beanstalk and select the appropriate scope:

Select the AWS Elastic Beanstalk

To run our Blazor application on AWS, we need to create a new Windows environment in the Elastic Beanstalk section. To do this, click on Create a new environment:

Create a new environment

then select Web server environment:

Select environment tier

and finally, after setting the name, we need to set the .NET on Windows Server platform:

Select the environment platform

Be sure to leave Sample application selected under Application code; this is a good starting point for a preconfigured environment. Then click the Create environment button at the bottom of the page. After a few minutes the environment is ready and we can start the next step: building the pipeline.

Build with AWS CodePipeline

With CodePipeline, you can create your build pipeline on AWS, pull source code from GitHub, and deploy all artifacts to Elastic Beanstalk. Now go to CodePipeline and click the Create Pipeline button:

Create the pipeline

Define the Pipeline name and click on Next:

Select the source

Select GitHub (Version 2) as the Source provider. Then create a new GitHub connection by clicking the Connect to GitHub button:

Connect to GitHub

Configure the GitHub connection

To use GitHub as a source, we first need to connect to our account. So on this page, set a name for the new connection and click Connect to GitHub:

Set the connection name

Click Install a new app to install AWS Connector for GitHub on your GitHub account and follow the instructions on the page. Then click Connect to complete this step:

Install AWS Connector for GitHub

Now you can complete the GitHub source configuration by selecting the Repository and the Branch name and clicking Next:

Configure GitHub repository and branch

Setting up the build stage

After defining the source code repository, we need to create the build stage. In our project, we select AWS CodeBuild as the engine for the build. Specify your preferred region and create a new build project by clicking Create project:

Create the build project

Here, after setting the Project name, go to the Environment section and choose Ubuntu as the operating system, as you can see in the image below:

Select the operating system for build

Make sure that the Use a buildspec file option is already selected in the Buildspec section. This file is needed for configuring the build phase in the Blazor project. We’ll talk about the buildspec.yml file later:

Use a buildspec file

Define the Deploy stage

At the end of the pipeline, all artifacts in our environment must be deployed. So, configure the deployment phase for AWS Elastic Beanstalk, as you can see in the figure below:

Create the deploy stage

Configure the Blazor project

The final step is project configuration. AWS CodeBuild, which is used by AWS CodePipeline, requires a set of specific instructions to build your project. All of these instructions must be written in the buildspec.yml, a build specification file. This file must be located in the root directory of your source code. For more information about this file, see the following page.

To build a Blazor project, I found the following ‘buildspec.yml’ very useful and simple:

version: 0.2

phases:
    install:
        commands:
            - /usr/local/bin/dotnet-install.sh --channel LTS
            
    build:
        commands:
            - dotnet build -c Release ./BlazorOnAWS.csproj
            - dotnet publish -c Release -o dist
            
artifacts:
    files:
        - dist/**/*
        - aws-windows-deployment-manifest.json

The above file consists of two main parts: the phase definition and the artifact output. In the phase definition, we first need to be sure that the latest .NET version is installed. Unfortunately, the images used in AWS CodeBuild don’t currently support .NET 6, so we use the dotnet-install.sh script to install it just before the build commands (for more information about the script, see this page). After the installation phase is complete, the build phase runs the dotnet build and dotnet publish commands and copies the output to dist, the custom output folder. The final step creates a package with the output from the dist/**/* directory and the aws-windows-deployment-manifest.json file, which the Elastic Beanstalk Windows container reads to determine how to deploy the application. Here’s the content of the file I used in my example:

{
    "manifestVersion": 1,
    "deployments": {
        "aspNetCoreWeb": [
        {
            "name": "test-dotnet-core",
            "parameters": {
                "appBundle": "dist",
                "iisPath": "/",
                "iisWebSite": "Default Web Site"
            }
        }
        ]
    }
}

The manifest file, stored in the generated zip artifact, indicates the dist folder as the appBundle, instructing AWS Elastic Beanstalk on how to deploy the application. More info about the file specification is available here.

Run the app

Now everything is ready. Based on our configuration, the pipeline runs after each change to the GitHub source repository. At the end, you can go to the Elastic Beanstalk instance, click on the instance URL, and enjoy your Blazor WASM app:

Blazor app running on AWS

As always, any feedback is welcome!

Enjoy!

Blog Resurrection and priorities

It’s been a really long time since my last post. Many things have changed in my personal life: my son was born in March (WOW!).

As you know, it is a life-changing event. No sleep for months and less free time for passions (sports, writing, etc.). Priorities have changed. It’s always a matter of priorities: when you say “I have no time for…”, you’re actually saying that the activity has a lower priority. You have other priorities.

So, now, let’s restart my blog and community activities. 2022 is going to be a great year for developers.

Enjoy!

A FileSystem subscriptions manager, your first Rebus extension

Today I would like to talk about Rebus, a simple and lean message bus implementation for .NET. Originally developed by Mogens Heller Grabe and supported by the community, Rebus is robust and works well with a minimum level of configuration, but its main strength is extensibility. With this in mind, Rebus offers many ways to customize things like:

  • transport
  • subscriptions
  • logging
  • serialization
  • encryption
  • and more…

If you want to read the basics of Rebus, please check the official documentation wiki. The main concept in Rebus is the Transport: basically, the transport is the mechanism used to transfer your messages. You can choose from a long list of transports already developed, like InMemory, RabbitMQ or Azure Service Bus, or you can develop your own transport; it depends on your architectural model. Another main point, necessary in some contexts like Publish/Subscribe implementations, is the Subscription Storage. Every time a subscription is added to a specific topic, Rebus needs to keep track of it, and it later uses that storage to get the list of subscribers and dispatch the published messages to them. In this post, we’ll see how to implement a simple subscription storage that stores subscriptions on the file system.

Extending Rebus: Implements ISubscriptionStorage interface

The first thing we need to do is implement the ISubscriptionStorage interface:

public interface ISubscriptionStorage
{
    /// <summary>
    /// Gets all destination addresses for the given topic
    /// </summary>
    Task<string[]> GetSubscriberAddresses(string topic);

    /// <summary>
    /// Registers the given <paramref name="subscriberAddress"/> as a subscriber of the given topic
    /// </summary>
    Task RegisterSubscriber(string topic, string subscriberAddress);

    /// <summary>
    /// Unregisters the given <paramref name="subscriberAddress"/> as a subscriber of the given topic
    /// </summary>
    Task UnregisterSubscriber(string topic, string subscriberAddress);

    /// <summary>
    /// Gets whether the subscription storage is centralized and thus supports bypassing the usual subscription request
    /// (in a fully distributed architecture, a subscription is established by sending a <see cref="SubscribeRequest"/>
    /// to the owner of a given topic, who then remembers the subscriber somehow - if the subscription storage is
    /// centralized, the message exchange can be bypassed, and the subscription can be established directly by
    /// having the subscriber register itself)
    /// </summary>
    bool IsCentralized { get; }
}

So, now we proceed by creating the FileSystemSubscriptionStorage that implements the ISubscriptionStorage:

internal class FileSystemSubscriptionStorage : ISubscriptionStorage
{
    private readonly string folderPath;

    public FileSystemSubscriptionStorage(string folderPath)
    {
        this.folderPath = folderPath;
    }
    ...
}

We need to know the root folder where subscribers will be stored, so the constructor accepts the full path as a parameter. Now, the first method we’ll implement is RegisterSubscriber:

public Task RegisterSubscriber(string topic, string subscriberAddress)
{
    return Task.Run(() =>
    {
        var topicPath = Path.Combine(folderPath, Hash(topic));
        if (!Directory.Exists(topicPath))
        {
            Directory.CreateDirectory(topicPath);
        }

        var subscriberAddressFile = Path.Combine(topicPath, Hash(subscriberAddress) + ".subscriber");
        if (!File.Exists(subscriberAddressFile))
        {
            File.WriteAllText(subscriberAddressFile, subscriberAddress);
        }
    });
}

The RegisterSubscriber method accepts two parameters: topic and subscriberAddress. In our implementation, we create a folder for each topic and then a file for each subscriber. Both names are generated with a simple hash, so we can easily get a valid path name and avoid invalid characters. The file is a simple text file containing the plain subscriberAddress.
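The Hash helper isn’t shown above; a possible implementation (an assumption on my part, any stable hash works) is a hex-encoded MD5 of the input string. The class also has to implement the IsCentralized property required by the interface; returning false keeps the standard subscribe-request message exchange:

private static string Hash(string input)
{
    // Map arbitrary topic and address strings to file-system-safe names.
    using (var md5 = System.Security.Cryptography.MD5.Create())
    {
        var bytes = md5.ComputeHash(System.Text.Encoding.UTF8.GetBytes(input));
        return BitConverter.ToString(bytes).Replace("-", string.Empty);
    }
}

// If every endpoint can reach the same folder (e.g. a network share),
// this could return true; false is the safe default.
public bool IsCentralized => false;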

The GetSubscriberAddresses method, instead, retrieves the list of subscribers for the input topic. We can simply read all the files in the topic folder to get the full list:

public Task<string[]> GetSubscriberAddresses(string topic)
{
    return Task.Run(() =>
    {
        var topicPath = Path.Combine(folderPath, Hash(topic));
        if (!Directory.Exists(topicPath))
        {
            return new string[0];
        }
        return Directory.GetFiles(topicPath, "*.subscriber").Select(f => File.ReadAllText(f)).ToArray();
    });
}

Last but not least, UnregisterSubscriber deletes the given subscriberAddress from the input topic:

public Task UnregisterSubscriber(string topic, string subscriberAddress)
{
    return Task.Run(() =>
    {
        var topicPath = Path.Combine(folderPath, Hash(topic));
        if (!Directory.Exists(topicPath))
        {
            // Nothing to unregister for an unknown topic.
            return;
        }

        var subscriberAddressFile = Path.Combine(topicPath, Hash(subscriberAddress) + ".subscriber");
        if (File.Exists(subscriberAddressFile))
        {
            File.Delete(subscriberAddressFile);
        }
    });
}

Using the FileSystemSubscriptionStorage

Following the Configuration API pattern, we’ll develop an extension method to configure the FileSystemSubscriptionStorage:

public static class FileSystemSubscriptionStorageConfigurationExtensions
{
    public static void UseFileSystem(this StandardConfigurer<ISubscriptionStorage> configurer, string folderPath)
    {
        configurer.Register(context =>
        {
            return new FileSystemSubscriptionStorage(folderPath);
        });
    }
}

Then, in the configuration section, we’ll use it in this way:

adapter = new BuiltinHandlerActivator();

Configure.With(adapter)
    .Subscriptions(s => s.UseFileSystem(subscriptionsPath))
    .Start();
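With the storage plugged in, publish/subscribe works as usual: Rebus calls our storage to register subscribers and to resolve them at publish time. A small sketch (inside an async method), assuming a hypothetical BookAdded event type and a transport already configured on the bus:

// Hypothetical event type, used only for illustration.
public class BookAdded
{
    public string Title { get; set; }
}

// After Start(): subscribe to the event (this writes a .subscriber file)...
await adapter.Bus.Subscribe<BookAdded>();

// ...and publish it; Rebus reads the topic folder to find the subscribers.
await adapter.Bus.Publish(new BookAdded { Title = "1984" });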

Conclusion

In this post we have explored one way to extend Rebus. A framework designed with extensibility in mind is a great starting point: you can use it as is, or you can join the wonderful community and extend it.

Enjoy!

How to evolve your .NET gRPC service

Evolving is a necessary step to survive, and software architecture is no exception. Designing a gRPC service also means that something may change in the future. So, what happens if we change the ProtoBuf definition? Evolving a contract definition means that we can add a new field, for example, or remove an existing one. Or we could introduce a new service and deprecate an existing one. And obviously we’d like the existing clients to keep working. Let’s see what happens.

Break the ProtoBuf definition

We can start with the previously seen .proto file:

// The bookshelf service definition
service BookService {
  // Save a book
  rpc SaveBook (BookRequest) returns (BookReply);
}

// The BookRequest message represents the book to save
message BookRequest {
  string title = 1;
  string description = 2;
}

// The BookReply message represents the saved book
message BookReply {
  int32 bookId = 1;
  string title = 2;
  string description = 3;
}

What happens if we change a message? Let’s explore the different ways we can break the contract!

Adding new fields

In the brand new version of our service we need to carry author information in the BookRequest message. To do that, we add a new message called Author and a new author field:

message BookRequest {
  string title = 1;
  string description = 2;
  Author author = 3;
}

message Author {
  string firstName = 1;
  string lastName = 2;
}

Adding new fields will not break the contract, so all the previously generated clients will still work fine! The new fields will simply have their default values. Note that in proto3 all fields are optional; the required keyword existed only in proto2 and has been removed. The most important thing is not the field name but the field number: preserve it, don’t change the field types, and your contract will not be broken.

NOTE: The message field names and their order are not important. Each field in the message definition has a unique field number, used to identify the field in the message binary format. Don’t change it in a live environment: it will break the contract!

Remove a field

Can we remove a field from a message? Obviously we can, but all the old clients will continue to send the now-unnecessary data. Note that if a client sends an unexpected field, the server will ignore it without throwing an exception.

You need to establish a plan to softly replace the property with the new one:

  1. Introduce the new field in the message contract and leave the old field in place
  2. In the next release, introduce a warning when an old client still doesn’t send the new field
  3. Finally, two releases after the new field’s introduction, remove the old field and accept the value only from the new field

Obviously you can adapt the plan as you wish! And note that if you want to rename a field without changing its type or number, go ahead: no one will notice, since only field numbers travel on the wire.
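When you finally drop the old field, it’s good practice to reserve its number (and optionally its name) so it can’t be accidentally reused later with a different type. A sketch, assuming the author field added above was the one removed:

message BookRequest {
  reserved 3;          // field number of the removed author field
  reserved "author";   // optional: reserve the old name too
  string title = 1;
  string description = 2;
}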

Conclusion

Things can change and your gRPC service must evolve. Don’t worry, do it carefully.

Enjoy!


check full code on github

In gRPC, g stands for?

If your answer is Google, you are not wrong. But actually, the gRPC team changes the meaning of ‘g’ with every release. In short, ‘g’ stands for:

  • 1.0 ‘g’ stands for ‘gRPC’
  • 1.1 ‘g’ stands for ‘good’
  • 1.2 ‘g’ stands for ‘green’
  • 1.3 ‘g’ stands for ‘gentle’
  • 1.4 ‘g’ stands for ‘gregarious’
  • 1.6 ‘g’ stands for ‘garcia’
  • 1.7 ‘g’ stands for ‘gambit’
  • 1.8 ‘g’ stands for ‘generous’
  • 1.9 ‘g’ stands for ‘glossy’
  • 1.10 ‘g’ stands for ‘glamorous’
  • 1.11 ‘g’ stands for ‘gorgeous’
  • 1.12 ‘g’ stands for ‘glorious’
  • 1.13 ‘g’ stands for ‘gloriosa’
  • 1.14 ‘g’ stands for ‘gladiolus’
  • 1.15 ‘g’ stands for ‘glider’
  • 1.16 ‘g’ stands for ‘gao’
  • 1.17 ‘g’ stands for ‘gizmo’
  • 1.18 ‘g’ stands for ‘goose’
  • 1.19 ‘g’ stands for ‘gold’
  • 1.20 ‘g’ stands for ‘godric’
  • 1.21 ‘g’ stands for ‘gandalf’
  • 1.22 ‘g’ stands for ‘gale’
  • 1.23 ‘g’ stands for ‘gangnam’
  • 1.24 ‘g’ stands for ‘ganges’
  • 1.25 ‘g’ stands for ‘game’
  • 1.26 ‘g’ stands for ‘gon’
  • 1.27 ‘g’ stands for ‘guantao’
  • 1.28 ‘g’ stands for ‘galactic’

check full info on github

Enjoy!

Build gRPC Client with .NET Core

In the previous posts of this series about gRPC, we saw how to build a simple gRPC request/reply service and a gRPC server streaming service by using .NET Core and the new grpc-dotnet, the managed library entirely written in C#. Now it’s time to create and build a .NET gRPC client. And it’s really easy to do.

Create and configure our client project

First of all, we need to create a client project. For the purpose of this article, a simple console project will be enough. So, you can open the terminal, go to your preferred folder and execute the following command:

dotnet new console -o GrpcClient

Then go to the folder just created and add the necessary reference with the following commands:

dotnet add package Google.Protobuf
dotnet add package Grpc.Net.Client
dotnet add package Grpc.Tools

Now, we can create the bookshelf.proto file (full code available on my github repository):

syntax = "proto3";

option csharp_namespace = "BookshelfService";

package BookshelfService;

// The bookshelf service definition.
service BookService {
  // Get full list of books
  rpc GetAllBooks (AllBooksRequest) returns (stream AllBooksReply);
  // Save a Book
  rpc Save (NewBookRequest) returns (NewBookReply);
}

// The request message containing the paging parameters.
message AllBooksRequest {
  int32 itemsPerPage = 1;
}

// The reply message containing a page of books.
message AllBooksReply {
  repeated Book Books = 1;
}

message Book {
  string title = 1;
  string description = 2;
}

// The request message containing the book's title and description.
message NewBookRequest {
  string title = 1;
  string description = 2;
}

// The response message containing the book id.
message NewBookReply {
  string id = 1;
}

We can then add the newly created file to the project by using the dotnet-grpc CLI tool. If you haven’t installed it yet, execute the following command:

dotnet tool install -g dotnet-grpc

then add the bookshelf.proto to the client project:

dotnet grpc add-file bookshelf.proto --services Client

Finally, be sure to set the right GrpcServices value on the Protobuf element in your .csproj file. This attribute decides the kind of gRPC code that gets generated; the accepted values are: Both, Client, Default, None, Server.

<ItemGroup>
  <Protobuf Include="..\Protos\bookshelf.proto" GrpcServices="Client" />
</ItemGroup>

Let’s start coding

Calling a gRPC service is a very simple operation. Just create a channel to the service endpoint and pass it to the generated client as a constructor parameter. Then you can use the client instance to invoke the service methods:

using (var channel = GrpcChannel.ForAddress("http://localhost:5000"))
{
    var request = new NewBookRequest();
    request.Title = "1984";
    request.Description = "A George Orwell novel";

    var client = new BookService.BookServiceClient(channel);
    client.Save(request);
}

NOTE: if you are on macOS, HTTP/2 over TLS is still not supported, so you need to use unencrypted HTTP/2 by adding the following instruction before connecting to the service: AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);
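The proto file above also defines the server streaming method GetAllBooks. Consuming it from the client is just as easy; here is a sketch (inside an async method, with the same channel and client as above, and a using Grpc.Core; directive for the ReadAllAsync extension):

// Open the streaming call and read replies until the server completes it.
using var call = client.GetAllBooks(new AllBooksRequest { ItemsPerPage = 10 });
await foreach (var reply in call.ResponseStream.ReadAllAsync())
{
    foreach (var book in reply.Books)
    {
        Console.WriteLine($"{book.Title}: {book.Description}");
    }
}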

Enjoy!


check full code on github

Server streaming with .NET Core gRPC service

In the first post of this series on .NET Core gRPC services, we saw how to build a simple request/reply service by using .NET Core 3 and the brand new grpc-dotnet library, entirely written in C#.

Now, it’s time to extend our scenario by exploring the next kind of service: server streaming.

NOTE: Remember that gRPC offers four kinds of service: request/reply, server streaming, client streaming, and bidirectional streaming. We’ll see the others in dedicated posts.

Server Streaming Scenarios

First of all, what is server streaming? This is an excerpt from the gRPC site:

Server streaming RPCs where the client sends a request to the server and gets a stream to read a sequence of messages back. The client reads from the returned stream until there are no more messages. gRPC guarantees message ordering within an individual RPC call.

Typically, server streaming may be useful when you have a set of data that needs to be continuously sent to the client while the server is still working on it. Let me explain with an example: imagine you need to send back a list of items. Instead of sending the full list in a single reply, with poor performance, you can send back a block of n items per message, allowing the client to start its processing asynchronously. This is a very basic usage of server streaming.

Ok, now we can start coding

Based on the BookshelfService implemented in the previous post and available on my github repository, we must update the bookshelf.proto by adding a new rpc method called GetAllBooks and the related AllBooksRequest and AllBooksReply messages. That method will return the full list of books from our shelf:

// The bookshelf service definition
service BookService {
  // Get full list of books
  rpc GetAllBooks (AllBooksRequest) returns (stream AllBooksReply);
}

// The Request message containing specific parameters
message AllBooksRequest {
  int32 itemsPerPage = 1;
}

// The Reply message containing the book list
message AllBooksReply {
  repeated Book Books = 1;
}

// The Book message represents a book instance
message Book {
  string title = 1;
  string description = 2;
}

After changing the .proto file, you’ll be able to override the GetAllBooks method in the BookshelfService class to implement the server-side logic:

public override async Task GetAllBooks(AllBooksRequest request, IServerStreamWriter<AllBooksReply> responseStream, ServerCallContext context)
{
    var pageIndex = 0;
    // Keep streaming pages until the client cancels or we run out of books.
    while (!context.CancellationToken.IsCancellationRequested)
    {
        var books = BooksManager.ReadAll(++pageIndex, request.ItemsPerPage);
        if (!books.Any())
        {
            break;
        }

        // Write one page of books to the response stream.
        var reply = new AllBooksReply();
        reply.Books.AddRange(books);
        await responseStream.WriteAsync(reply);
    }
}

Finally, we can run the service with the dotnet run command and test it with BloomRPC:

gRPC Server Streaming

In the next post we’ll see how to create the client for the server streaming service type.

Enjoy!


check full code on github