Demystifying gRPC: A Brief History of Remote Execution

It has been a year and a half since I started exploring the gRPC framework. Most of that time, I have written services and glued them together with their implementations without exploring why and how gRPC came into the picture. I find people relating the gRPC framework to the “Microservices” world: if you dig around on the Internet, you can find written material like “Microservices at Scale with the gRPC Framework” or “Building Scalable Microservices with gRPC”.

If you are trying to answer questions like what “gRPC” is, why it fundamentally exists, and how the request/response format matters to web service performance, you should continue reading this blog. We will also walk through a brief history of distributed computation, arrive at the definition of the remote procedure call, and finally turn to why “gRPC” exists.

What is gRPC?

Let me be a little lazy and refer you to the wiki’s definition of what gRPC is:

gRPC (gRPC Remote Procedure Calls[1]) is an open source remote procedure call (RPC) system initially developed at Google in 2015[2]. It uses HTTP/2 for transport, Protocol Buffers as the interface description language, and provides features such as authentication, bidirectional streaming and flow control, blocking or non-blocking bindings, and cancellation and timeouts.

Before we describe gRPC in detail, let us understand what a remote procedure call is, along with a little history of distributed programming, before indulging in what Google has done with gRPC.

A Flashback of Remote Procedure Call

It was a protocol for distributed programming over computer networks developed by Bruce Jay Nelson, who coined the idea of invoking methods or programs residing on a remote machine. Although the theoretical ideas behind distributed programming had been around since the 1960s, one of the first practical implementations of RPC came to light in the 1980s with a paper named “Implementing Remote Procedure Calls”. When a procedure call is invoked in the normal world (the same-machine scenario), control and the associated data are transferred to the called procedure, and a return with some computed result is expected. In RPC, the same process is extended to the network. With the popularity of object-oriented programming in the 1990s, there were multiple implementations of the idea. Having said that, let us quickly recap how some of the famous implementations of RPC worked. If you are eagerly waiting to know more about gRPC, you can simply jump to the Protobuf section.

Distributed Computing Environment/Remote Procedure Call (DCE/RPC)

The earliest standard example I could find of an RPC implementation is DCE/RPC. You can dig into the link for implementation details.

DCE/RPC has the familiar client/server architecture in which a client invokes a procedure that executes on the server. Arguments can be passed from the client to the server and return values can be passed from the server to the client. The framework is platform- and language-neutral in principle, although strongly tilted toward C in practice. DCE/RPC includes utilities for generating client and server artifacts (stubs and skeletons, respectively). DCE/RPC also provides software libraries that hide the transport details.

It uses an interface definition language (IDL) as the agreement for request-response patterns between client and server, e.g.

/* echo.idl */
[uuid(2d6ead46-05e3-11ca-7dd1-426909beabcd), version(1.0)]
interface echo {
    const long int ECHO_SIZE = 512;
    void echo(
        [in] handle_t h,
        [in, string] idl_char from_client[ ],
        [out, string] idl_char from_server[ECHO_SIZE]
    );
}

As you can notice in the above example, the interface is defined in a C-like syntax in which the remote procedure “echo” returns nothing but accepts three parameters. The first two parameters are inputs to the procedure, while the “out” parameter is the output returned by the server. The payload type of the DCE/RPC protocol is binary, and it can run on top of the TCP, UDP, and SMB protocols. The protocol has been used for many remote-procedure-call-based applications, including many applications from Microsoft; MS-RPCE is one of the RPCs based on it.


XML-RPC

In the late 1990s, Dave Winer created a lightweight, language-neutral, XML-based RPC which requires marshalling and unmarshalling at the client and server side in order to convert native objects to and from XML. This protocol supports request/response patterns for distributed programming. As a wire protocol, it uses HTTP; therefore, any system that uses XML-RPC requires an HTTP library to generate, parse, and transform requests and responses. For example, a service contract for a sqr method that accepts a 4-byte int passes the value of the argument as text.
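As a sketch of what such a payload looks like, Python’s standard xmlrpc.client library can marshal a call to a sqr method (the method name is hypothetical, used purely for illustration):

```python
import xmlrpc.client

# Marshal a call to a hypothetical "sqr" method with one 4-byte int argument.
# dumps() builds the exact XML payload an XML-RPC client would POST over HTTP.
payload = xmlrpc.client.dumps((4,), methodname="sqr")
print(payload)
```

The integer travels as the text `<int>4</int>` inside a `<methodCall>` envelope, which is why every XML-RPC peer needs an XML parser in addition to an HTTP library.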


Java RMI

Java released its implementation of RPC named Java RMI (Remote Method Invocation), which allowed distributed programming in Java. The simplest diagram for this I could find on the Internet is on JavaTpoint. Here we want to call a method on a remote object residing on Machine 2 from a client residing on Machine 1.

There are two objects here that broker the request between the client and the remote object, called the stub and the skeleton.

Here is what the stub does:

  • It acts as a gateway on the client side.
  • The client invokes the remote method through the stub.
  • The stub initiates a connection with the remote JVM, marshals the request, transmits it, and waits for the result.
  • It reads (unmarshals) the return value or exception, and
  • finally returns the value to the caller.


Here is what the skeleton does:

  1. It reads the parameters for the remote method,
  2. invokes the method on the actual remote object, and
  3. writes and transmits (marshals) the result back to the caller.
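The stub/skeleton dance above is the heart of every RPC system. Here is a toy sketch of it in Python, using pickle in place of Java serialization and a direct method call in place of a real network connection, purely for illustration:

```python
import pickle

class Skeleton:
    """Server side: unmarshals the call, invokes the real object, marshals the result."""
    def __init__(self, target):
        self.target = target

    def handle(self, wire_bytes: bytes) -> bytes:
        method, args = pickle.loads(wire_bytes)       # 1. read the parameters
        result = getattr(self.target, method)(*args)  # 2. invoke the actual object
        return pickle.dumps(result)                   # 3. marshal the result back

class Stub:
    """Client side: marshals the call, waits, and unmarshals the result."""
    def __init__(self, skeleton):
        self.skeleton = skeleton  # stands in for a connection to the remote JVM

    def call(self, method, *args):
        wire = pickle.dumps((method, args))              # marshal the request
        return pickle.loads(self.skeleton.handle(wire))  # unmarshal the result

class Calculator:
    def add(self, a, b):
        return a + b

stub = Stub(Skeleton(Calculator()))
print(stub.call("add", 2, 3))  # 5
```

The caller never touches serialization or transport details; that is exactly the transparency RMI (and later gRPC) aims for.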

In the Java 2 SDK, a stub protocol was introduced that eliminates the need for skeletons.

Java RMI wire Protocol

The RMI protocol makes use of two other protocols for its on-the-wire format: Java Object Serialization and HTTP. The Object Serialization protocol is used to marshal call and return data. The HTTP protocol is used to “POST” a remote method invocation and obtain return data.

  • The wire format is binary: serialized Java objects.
  • The interface definition language (IDL) is Java interfaces.

gRPC Framework:

Now let us revisit the definition of gRPC from the first part of the blog. Because we are now familiar with terms like the IDL and wire format of an RPC protocol, you will be able to relate.

Concept Diagram

Protocol Buffers:

Protocol buffers are the Interface Definition Language of gRPC. 

  • Protocol buffers are a flexible, efficient, automated mechanism for serialising structured data – think XML, but smaller, faster, and simpler.
  • They are written with a .proto extension. e.g.
syntax = "proto3";
package awake.packet_analysis;

message Person {
    string name = 1;
    int32 id = 2;
    string email = 3;
}
Protobuf supports a flexible type system that maps onto the type systems available in most languages. We will go through Protobuf in detail in the next blog and see how we generate and use access classes from it. For now, let us look at some of the benefits of using protocol buffers: compared to XML, they


  • are simpler
  • are 3 to 10 times smaller
  • are 20 to 100 times faster
  • are less ambiguous
  • generate data access classes that are easier to use programmatically
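Much of that size advantage comes from how protobuf encodes integers as varints: 7 payload bits per byte, with the high bit marking that more bytes follow. A minimal sketch of the encoder, written here for illustration rather than taken from the official library:

```python
def encode_varint(n: int) -> bytes:
    """Encode a non-negative integer as a protobuf-style varint:
    7 payload bits per byte, MSB set on every byte except the last."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)         # final byte
            return bytes(out)

# A small field value like id = 2 takes a single byte on the wire,
# and even a value like 300 that needs a full int32 in memory takes only two.
print(encode_varint(2).hex())    # 02
print(encode_varint(300).hex())  # ac02
```

Compare those one or two bytes with the same values spelled out as text surrounded by XML tags, and the “3 to 10 times smaller” claim becomes plausible.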

Having adopted Protobuf as its wire format, the gRPC framework gains a lot of application performance and encourages a ubiquitous language for microservices to communicate with each other.


HTTP/2:

The gRPC framework communicates over an HTTP/2 connection. The HTTP/2 protocol improves on its previous version in several ways. Here are a few benefits, in layman’s terms, which I found on this Quora thread.

Constant Connection: HTTP/2 maintains a persistent connection between the client (web/mobile browser) and the server, which decreases page load time and reduces the amount of data being transferred.

Binary Protocol: it transfers data in binary rather than textual format, so computers don’t need to waste time translating text into binary.

Multiplexing: HTTP/2 can send and receive multiple messages at the same time over one connection. It also provides the following features:

  • Prioritization: priority-based data transmission; important data is transferred first.
  • Compression: it compresses data (notably headers) into a smaller size.
  • Server Push: the server makes an educated guess about the next request and sends that data ahead of time.

If you would like to go over a little bit of the history of how HTTP/2 came into the picture, you can visit the Google developers page.

How is gRPC different?

  • The ability to break free from the call-and-response architecture: gRPC is built on HTTP/2, which supports the traditional request/response model as well as bidirectional streams.
  • A switch from JSON to protocol buffers.
  • Multiplexing. (See details here)
  • Duplex streaming. (See how)
  • Because of the binary data format, payloads get much lighter.
  • Polyglot (the option to generate access classes out of the box).
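To make the streaming point concrete, here is a hypothetical service definition (the names are my own illustration) that declares both a traditional unary call and a bidirectional stream:

```protobuf
syntax = "proto3";

package demo;

service EchoService {
  // traditional request/response (unary) call
  rpc Echo (EchoRequest) returns (EchoReply);
  // bidirectional streaming: both sides send a sequence of messages
  rpc EchoStream (stream EchoRequest) returns (stream EchoReply);
}

message EchoRequest { string message = 1; }
message EchoReply { string message = 1; }
```

From this single contract, gRPC tooling can generate client stubs and server skeletons in any supported language, which is the polyglot point above.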

Having gone through the basics of what an RPC framework is and what gRPC is made of, in the next blog of this series we will create a very basic web service using the gRPC framework. Hope you enjoyed reading. Stay tuned!!


Java Web Services: Up and Running: A Quick, Practical, and Thorough Introduction.

Blue Green Deployments: Reducing the downtime of apps

Knoldus Blogs

Ever heard of an “application outage”? As part of agile practice we release our work frequently, and often when a newer version of an application is released to production we get application outages due to issues like unexpected traffic, a bug introduced in the newer version, or other unknown PITA issues. This causes some (actually a lot!) of chaos in terms of the time and effort needed to recover from failures and have a hassle-free release. In this blog, we will talk about “blue-green” deployment, which overcomes such problems in a great way and minimizes the downtime of applications.

What is Blue Green deployment?

The origin of the term blue-green deployment is unknown to me, but I came across it while reading Martin Fowler’s blog. Blue-green deployment is a pattern used to reduce downtime by using two identical production environments such that only one of them is live at a time with stable production…


Error handling in Scala: What, where and how?


The honest way of handling an error is to surface it to the end user and tell them exactly what happened on failure. Sometimes we can’t afford longer debugging sessions in the case of mission-critical things. The following phrases won’t help much in debugging when seen by users:

“Blah” went wrong!! (“What?”)
Your transaction can’t be completed!! (“Why?”)
Not possible… (“Why?”)
Engine did not start… (“Why?”)
Blah… blah…

Shouldn’t the above phrases have been informative? E.g. “Engine did not start because the pressure tolerance is 0.05, exceeding the safe pressure tolerance of 0.0005”.

In this blog, we will talk about some monadic constructs in Scala and other Scala-based libraries which save a lot of debugging sessions when it comes to failures, and how we would choose between them according to our needs.

An Easy Example?

Let us take the example of…


AMPS: Empowering real time message driven applications.



In this blog, we will talk about AMPS, a pub-sub engine which delivers messages in real time based on a subject of interest. AMPS is mainly used by financial institutions as an enterprise message bus. We will also demonstrate how we can publish and subscribe to messages with AMPS with an example. So, let’s start by introducing AMPS.

What is AMPS?

Advanced Message Processing System (AMPS) is a publish-and-subscribe engine developed by 60East Technologies. It is highly scalable and allows publishing and subscribing to messages in real time. It is equipped with built-in support for multiple messaging protocols such as FIX, NVFIX, JSON, and XML, which are mainly used in financial services such as trade processing. It empowers applications to deliver messages in real time with flexible topic- and content-based routing options.

How does it work?

The above diagram describes how messaging looks like in AMPS. AMPS provides…


Why I love foldLeft :)


foldLeft is one of my favourite functions in Scala. In this blog, I will explain the capabilities of foldLeft; after reading it, foldLeft may become your favourite function too if you like Scala. I am taking the example of List’s foldLeft, but of course it is also available on many Scala collections like Vector, Set, Map, and Option.

Let’s see the foldLeft definition from the Scala doc:
According to the definition, foldLeft can do everything that requires iterating over all elements of a list. Really?

Yes. Let’s understand by examples.

  1. Reverse the list.
  2. Remove duplicate elements from the list.
  3. Split the list into two lists: the first contains all elements which satisfy the predicate, and the rest go into the second.
  4. Splitting into two lists is no big deal 🙂 My use case is different: I have 4 predicates, which means splitting the input list into four lists according to the predicates, where the first list satisfies the first predicate and the second list satisfies the second…


Object Oriented JavaScript: Polymorphism with examples


Again, this is not an advanced topic of JavaScript, but it falls under object-oriented JavaScript, and polymorphism is one of the tenets of object-oriented programming (OOP). We all know what polymorphism is from other languages (like C#, Java, etc.), but we always wonder when to use it, why to use it, and how to use it, and most of us are still confused about whether we should really use it at all!

JavaScript is a dynamically typed language [a big issue to discuss], but for understanding purposes:

Statically typed programming languages do type checking (the process of verifying and enforcing the constraints of types) at compile time as opposed to run time (Java, C, etc.).

Dynamically typed programming languages do type checking at run time as opposed to compile time (JavaScript, etc.).

Though in JavaScript it is a bit more difficult to see the effects of polymorphism because the more classical types of polymorphism…


Knoldus Bags the Prestigious Huawei Partner of the Year Award


Knoldus was humbled to receive the prestigious partner of the year award from Huawei at a recently held ceremony in Bangalore, India.


It means a lot for us and is a validation of the quality and focus that we put on the Scala and Spark Ecosystem. Huawei recognized Knoldus for the expertise in Scala and Spark along with the excellent software development process practices under the Knolway™ umbrella. Knolway™ is the Knoldus Way of developing software which we have curated and refined over the past 6 years of developing Reactive and Big Data products.

Our heartiest thanks to Mr. V.Gupta, Mr. Vadiraj and Mr. Raghunandan for this honor.


About Huawei

Huawei is a leading global information and communications technology (ICT) solutions provider. Driven by responsible operations, ongoing innovation, and open collaboration, we have established a competitive ICT portfolio of end-to-end solutions in telecom and enterprise networks, devices, and cloud computing…


Blending Cucumber, Cassandra and Akka-Http



Knoldus has always pioneered deep dives into the best ways to use cutting-edge technologies. In the past few days, one of our teams carried this forward by integrating Cucumber with Akka-Http, Cassandra and, of course, Scala. In this blog, we reach out to you to explain and show how this can be done.


Cucumber is for Behavior Driven Development (BDD). The approach of Cucumber is to write the behavior of the application and then run it for acceptance testing.


Akka-Http is a general toolkit provided by Akka to implement HTTP services. It supports both client and server side services.


Cassandra is a database that provides high scalability and availability with best performance.


Business Intelligence-Data Visualization: Tableau


Spark, Big Data, NoSQL, and Hadoop are some of the most used and chart-topping technologies that we frequently work with at Knoldus. When these terms are used, one thing that comes into the picture is ‘huge data, millions/billions of records’. Managing such an amount of data (and managing here means storing data, rectifying data, normalizing it, cleaning it, and much more) is really not an easy task at all.

But users do not understand what we are talking about; they just need to know the real essence of the whole matter/data/story/facts. From here the term ‘visualization’ comes into the picture, so data visualization/intelligence is as important and vast as handling the data itself.

Data visualization brings business intelligence tools for accomplishing visualization goals, and the market for BI tools is really huge; there are a number of tools with different features, pricing, capabilities, etc. If we start comparing them, there is no…
