What is gRPC?
gRPC is an RPC framework whose default Interface Definition Language (IDL) is Protocol Buffers. The IDL is pluggable, however, which means that other serialization formats such as Thrift or Avro can also be used with gRPC instead of Protocol Buffers.
If we supply gRPC with a service definition of our application in any IDL, it generates the client and server interfaces for us in any of the several languages that it currently supports. You can read more about gRPC here.
The purpose of this article is to briefly explain how we can set up gRPC on AWS, develop a C++ server application with the help of gRPC, and then deploy this application inside a Docker container on AWS. A client application running on a different machine can then directly call the methods of this server application.
Setup gRPC on AWS
Our first task is to get an EC2 instance from AWS and install gRPC on it. Login to your AWS console and do the following.
Step 1) Set up a key pair on AWS.
This is required to SSH into the EC2 instance that we will create in the next step. This article will help you set it up.
Step 2) Create an EC2 Instance
I used a free-tier eligible Linux AMI to create the instance. You can use any AMI you want; other AMIs should also work just fine, but I have not tried them out, so if you run into issues in the subsequent steps, I am not going to take the blame. I wash my hands of it. Be sure to associate the key pair we created in the previous step with your EC2 instance. You can read the detailed steps here.
Step 3) Set up SSH to access this EC2 instance remotely.
There are several methods to access the EC2 instance remotely. SSH is just one of them. Follow this tutorial to learn more.
Step 4) SSH into your EC2 instance
ssh -i file_name.pem ec2-user@public_dns_of_your_ec2_instance
Here ‘file_name.pem’ is the file containing the private key that you downloaded in Step 1. You can get the public DNS address of your EC2 instance from the AWS console. The default user in an Amazon Linux AMI is ec2-user.
Step 5) Install gRPC
Run the following commands to install gRPC on your instance. If you want further details on these steps, check out the official gRPC documentation on GitHub; it is well documented there. Note that Amazon Linux uses yum, so the Debian-style package names from the official instructions (build-essential, libgflags-dev and so on) will not resolve; the commands below use the rough yum equivalents (the -devel packages may require the EPEL repository to be enabled).
sudo yum groupinstall "Development Tools"
sudo yum install autoconf libtool pkgconfig
sudo yum install gflags-devel gtest-devel
sudo yum install clang
sudo yum install git
git clone -b $(curl -L https://grpc.io/release) https://github.com/grpc/grpc
cd grpc
git submodule update --init
make
sudo make install
cd third_party/protobuf
sudo make install
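As a quick sanity check, you can confirm that the protocol buffer compiler and the gRPC C++ plugin are now available on your path:
which protoc && protoc --version
which grpc_cpp_plugin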
That’s it! We have successfully set up gRPC on our AWS EC2 instance. Now, let us proceed with the creation of our C++ application.
Develop the Application
Step 1) Create a .proto file (e.g., chat.proto) for the application.
This file is used by the gRPC framework to generate the client-side and server-side interfaces which we will need to implement our client and server applications. You can create this file locally and then move it to the EC2 instance (via the scp command on Linux or using WinSCP on Windows), or you can create it directly within the EC2 instance. This article describes how to write a .proto file.
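For illustration, here is a minimal sketch of what chat.proto might look like. The service, method and message names are only assumptions for this example; use whatever your application actually needs.

syntax = "proto3";

package chat;

// A minimal chat service with a single unary RPC (names are illustrative).
service Chat {
  rpc SendMessage (ChatMessage) returns (ChatReply);
}

message ChatMessage {
  string user = 1;
  string text = 2;
}

message ChatReply {
  string text = 1;
}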
Step 2) Generate the client-side and server-side interfaces from the .proto file.
Compiling the .proto file with the protocol buffer compiler, ‘protoc’, generates the interfaces. We already installed the ‘protoc’ compiler, so navigate to the directory within the EC2 instance where you have placed the .proto file and run the following commands to compile it.
protoc -I=. --grpc_out=. --plugin=protoc-gen-grpc=`which grpc_cpp_plugin` chat.proto
protoc -I=. --cpp_out=. chat.proto
Now if you check the directory, you should find the following files.
chat.grpc.pb.h
chat.grpc.pb.cc
chat.pb.h
chat.pb.cc
The classes, methods, variables, etc. generated within these files depend on the contents of the .proto file we defined. This link gives a detailed explanation of the same.
Step 3) Implement the server application with the help of the generated code.
We need to create a new class that inherits from the ‘Service’ class defined in ‘chat.grpc.pb.h’ and implements the RPC methods declared in the .proto file, and then start a server that exposes this service on a port. A sketch is given below.
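This is only a minimal sketch, assuming the illustrative chat.proto above (a Chat service with a single SendMessage RPC); the class name, message fields and port number are placeholders to adapt to your own definitions.

// chat_server.cc — minimal gRPC C++ server sketch (illustrative names).
// Older gRPC releases use <grpc++/grpc++.h> instead of <grpcpp/grpcpp.h>.
#include <iostream>
#include <memory>
#include <string>

#include <grpcpp/grpcpp.h>
#include "chat.grpc.pb.h"

// Implements the generated chat::Chat::Service interface.
class ChatServiceImpl final : public chat::Chat::Service {
  grpc::Status SendMessage(grpc::ServerContext* context,
                           const chat::ChatMessage* request,
                           chat::ChatReply* reply) override {
    // Echo the incoming message back to the caller.
    reply->set_text("Received: " + request->text());
    return grpc::Status::OK;
  }
};

int main() {
  // Listen on all interfaces so the server is reachable from outside the
  // instance; 5300 is the illustrative port we open on AWS in Step 5.
  std::string server_address("0.0.0.0:5300");
  ChatServiceImpl service;

  grpc::ServerBuilder builder;
  builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());
  builder.RegisterService(&service);

  std::unique_ptr<grpc::Server> server(builder.BuildAndStart());
  std::cout << "Server listening on " << server_address << std::endl;
  server->Wait();  // Block until the server shuts down.
  return 0;
}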
Step 4) Make sure that PKG_CONFIG_PATH and LD_LIBRARY_PATH are set up properly. If they are not, run the following commands to make them point to the correct locations.
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
export LD_LIBRARY_PATH=/home/ec2-user/grpc/libs/opt/
Step 5) Open the port on AWS to access the application from outside. This is done by adding an inbound rule for the port to the instance’s security group; if you are not sure how to do it, follow this article. Running the commands below from your local system can help you determine whether the server port has indeed been opened.
nc -zv ip_address port_number
nmap ip_address
Step 6) Build and Run the Application.
I used ‘make’ to build the application. Put your makefile containing the build commands in the same directory as your source files and run it, as shown below. Check out my GitHub link to find example code.
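If you do not have a makefile handy, here is a rough sketch of one, assuming the illustrative file names used above (chat_server.cc plus the generated chat.pb.cc and chat.grpc.pb.cc) and a gRPC installation under /usr/local; note that the recipe lines must be indented with a tab character.

# Makefile sketch (illustrative file names; adapt to your own sources).
CXX      = g++
CXXFLAGS = -std=c++11 `pkg-config --cflags protobuf grpc++`
LDFLAGS  = -L/usr/local/lib `pkg-config --libs protobuf grpc++` -ldl

chat_server: chat.pb.o chat.grpc.pb.o chat_server.o
	$(CXX) $^ $(LDFLAGS) -o $@

%.o: %.cc
	$(CXX) $(CXXFLAGS) -c $< -o $@

clean:
	rm -f *.o chat_server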
make
If you don’t encounter any errors, you are good to go. Start the application by running
./file_name
If you are not keen to run the server within a Docker container, this is it. Your server application is up and running. Otherwise, stop the application and follow the subsequent steps to deploy the application within a Docker container.
Deploy the Application inside Docker
Now let’s deploy our application inside a Docker container.
Step 1) Install Docker. With certain AMIs, Docker is available as a package by default.
sudo yum install docker
Step 2) Create a Dockerfile.
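The exact contents will depend on how you choose to package the server. As a rough sketch, one option is to copy the pre-built binary together with the shared libraries it needs into a base image; the paths and names below are assumptions for illustration only.

# Dockerfile sketch (illustrative paths and names).
# Assumes the server binary and the gRPC/protobuf shared libraries it needs
# have been copied into the build context (e.g. into a local libs/ folder).
FROM amazonlinux:latest

COPY chat_server /usr/local/bin/chat_server
COPY libs/ /usr/local/lib/
ENV LD_LIBRARY_PATH=/usr/local/lib

# The port the server listens on (illustrative).
EXPOSE 5300

CMD ["/usr/local/bin/chat_server"]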
Step 3) Build and Run a Docker image.
sudo docker build -t grpc-server .
sudo docker images
sudo docker run -it -p 5300:5300 grpc-server
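To confirm that the container is up, you can list the running containers:
sudo docker ps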
Yep, that’s it! Give yourself a pat on the back. We have successfully created our gRPC application on AWS.
Load Balancing
To manage the load and ensure availability of our service, we can run multiple instances of the application in separate Docker containers. In real production environments, these instances are run on different physical machines (possibly in different physical locations) to ensure availability.
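For example, on a single host you could start two containers from the same image and map them to different host ports (a sketch; in production the instances would typically sit on separate machines behind a load balancer):
sudo docker run -d -p 5300:5300 grpc-server
sudo docker run -d -p 5301:5300 grpc-server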
Once we have these application instances up and running, we can distribute client requests equally between them. This can be done either at the client end or at the server end. To understand the different approaches to perform load balancing, please go through this link.