
Analyzing Memory Usage of an Application

Introduction

The memory usage of an application is a key factor to monitor, especially in production systems, where you need alarms to make sure the system stays stable. Memory usage can therefore be treated as a probe for measuring the health of the system. Production systems are usually installed on Linux servers, and the OS itself helps in many ways to provide a clear view of an application's memory usage.


In this post, I am going to discuss different commands and tools which can be used to measure the memory usage of applications, especially Java applications. This post will guide you from the higher level to the lower level under the following topics.

  1. Monitoring Overall System Memory Usage
  2. Monitoring Application’s Memory Usage
  3. Analyzing Java Application’s Memory Usage

Further information on the commands and tools can be found by following the external links provided.

Monitoring Overall System Memory Usage

For this purpose I am going to discuss the Linux free command. The free command gives an overview of the complete system memory usage. Note that this is therefore not a precise way of measuring a single application's memory consumption, because the system can host many applications and each application has its own boundaries. However, let's look into some usages of the free command. (Hint: use free -h to get human-readable output.)

[root@operation_node ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          7873       7360        512          0         70        920
-/+ buffers/cache:       6369       1503
Swap:        11143        431      10712

In the above output, the first line shows that the total physical memory is 7873 MB, of which 7360 MB is used, so only 512 MB remains. However, this does not imply that the memory is almost exhausted. Linux is good at using memory effectively: it uses caching to make memory access efficient, and that cached memory is counted as used in the first line.
What you should look at is the second line, which removes the cache and buffer usage from the physical memory figures. The used column of the second line shows the actual memory use without cache and buffers, and the free column shows 1503 MB, obtained by adding the buffers and cache back to the free figure. So you actually have 1503 MB of physical memory available for use. In addition, according to the third line, you have around 10 GB of swap space ready for use. Please refer to [1] for further information.
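To make the second line concrete, here is the arithmetic behind it (the 1 MB differences come from rounding in free -m):

-/+ used = used - buffers - cached = 7360 - 70 - 920 = 6370  (shown as 6369)
-/+ free = free + buffers + cached =  512 + 70 + 920 = 1502  (shown as 1503)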

In modern versions of the Linux kernel, the output of the free command has changed. It looks something like the below.

me@my-pc ~ $ free -m
              total        used        free      shared  buff/cache   available
Mem:           7882        3483         299         122        4100        3922
Swap:          9535           0        9535

In this case, the formula for the calculation is as below [2] [3]:

total = used + free + buffers + cache

available : is the amount of memory which is available for allocation to a new process or to existing processes.

free : is the amount of memory which is currently not used for anything. This number should be small, because memory which is not used is simply wasted.
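In fact, available comes from the kernel's MemAvailable estimate (exposed in /proc/meminfo since kernel 3.14), so you can also read it directly:

grep -E 'MemTotal|MemFree|MemAvailable' /proc/meminfo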

Monitoring Application's Memory Usage

In many cases we want to monitor a single application rather than the overall system; overall memory usage reflects everything, including OS-level operations. For a per-process view we can use the top command in Linux. Following is a sample output of the top command.

[Figure: a sample result of the top command]

What you should actually focus on are the RES and %MEM values of the application (for this, first identify the process id of the application using the ps aux | grep "application_name" command). You can press "e" to toggle the unit in which memory is displayed.

RES -- Resident Memory Size : The non-swapped physical memory a task has used.

%MEM -- Memory Usage (RES) : A task's currently used share of available physical memory (RAM).

As discussed above, the top command directly reveals the memory consumption of an application. For further information on the top command, you may refer to [4] [5].
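If you already know the process ids, you can also restrict top to just those processes. For example, assuming the application runs as a Java process, a quick sketch using pgrep to collect the matching pids:

top -p $(pgrep -d',' java)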

Analyzing Java Application's Memory Usage

If your application is a Java application, then you might need to look at which objects consume the largest amounts of memory. One option for that is taking a heap dump of the Java application. You may use the following command to take a heap dump; prior to that you should know the process id of the running Java application.

jmap -dump:format=b,file=heap_dump.hprof <process_id>
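To find the process id you can use the JDK's jps tool, and if you want the dump to contain only live objects (jmap then forces a full GC first), add the live option:

jps -l
jmap -dump:live,format=b,file=heap_dump.hprof <process_id>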

Once you execute the command, you will get the file heap_dump.hprof containing the heap usage of the Java program. Since the file is in binary format, you need a special tool to analyze it. A commonly used tool for inspecting heap dumps is the Eclipse Memory Analyzer Tool (MAT) [6], which is built on top of the Eclipse platform. You just need to download the pack and extract it. Executing MemoryAnalyzer opens a GUI application for analyzing the heap dump. When you open the heap dump with MAT, the tool will prompt you to generate reports based on it. You may be interested in the Leak Suspects Report, which shows the large objects that take up a large portion of the memory.

[Figure: Memory Analyzer Tool with the Leak Suspects Report]

Another interesting view of this tool is the Dominator Tree, which shows large objects along with the objects that keep them alive. According to the definition [7]:

An object x dominates an object y if every path in the object graph from the start (or the root) node to y must go through x.

In the dominator tree view, you will see a list of objects and the amount of memory they held at the moment the heap dump was taken.

[Figure: Dominator Tree view of Memory Analyzer]

In the dominator tree view, you can expand each entry and see how the objects are composed. The two columns shown in this view are Shallow Heap and Retained Heap; by default the list is sorted by the Retained Heap value in descending order. The following definitions [8] clearly explain the meaning of those two values.

Shallow heap is the memory consumed by one object. An object needs 32 or 64 bits (depending on the OS architecture) per reference, 4 bytes per Integer, 8 bytes per Long, etc. Depending on the heap dump format the size may be adjusted (e.g. aligned to 8, etc...) to model better the real consumption of the VM.

Retained set of X is the set of objects which would be removed by GC when X is garbage collected.

Retained heap of X is the sum of shallow sizes of all objects in the retained set of X, i.e. memory kept alive by X.

Generally speaking, shallow heap of an object is its size in the heap and retained size of the same object is the amount of heap memory that will be freed when the object is garbage collected.

Therefore, when you see out-of-memory errors or other indications of high memory usage, you should definitely focus on the Retained Heap values in the dominator tree view.

Conclusion

In this post I wanted to give a clear idea of how to use several tools to analyze the memory usage of an application running on Linux. I have used commands which come with Linux itself; however, you may also find tools which can be installed to analyze memory. The last section discussed how Eclipse Memory Analyzer can be used to examine the heap usage of a Java program. I hope this helps you as well.

References

[1] Understanding Linux free memory : https://thecodecave.com/understanding-free-memory-in-linux/

[2] Usage of free memory : https://stackoverflow.com/questions/30772369/linux-free-m-total-used-and-free-memory-values-dont-add-up

[3] Ask Ubuntu clarification on free command : https://askubuntu.com/questions/867068/what-is-available-memory-while-using-free-command

[4] Super-user forum top command explanation : https://superuser.com/questions/575202/understanding-top-command-in-unix

[5] Linuxarea blog top command explanation : https://linuxaria.com/howto/understanding-the-top-command-on-linux

[6] Eclipse Memory Analyzer : https://www.eclipse.org/mat/

[7] MAT Dominator Tree : https://help.eclipse.org/mars/index.jsp?topic=%2Forg.eclipse.mat.ui.help%2Fconcepts%2Fdominatortree.html

[8] MAT Shallow Heap and Retained Heap explanation : https://help.eclipse.org/mars/index.jsp?topic=%2Forg.eclipse.mat.ui.help%2Fconcepts%2Fdominatortree.html


Inspecting Solr Index in WSO2 API Manager

Introduction

The Apache Solr project [1] lets you run a full-featured search server, and you can also integrate Solr with your own project to make searching faster. In WSO2 API Manager, Solr is used to speed up searching in the store and publisher. The Solr index keeps the frequently used metadata of APIs; to retrieve complete information about an API, API Manager then uses its database. This mechanism makes searching faster and puts less burden on the databases.


However, in some situations things may go wrong. We have seen several cases where the Solr index does not tally with the information in the database. Because of that, you may see inconsistent information when the complete details of an API are displayed. In such situations you may want to inspect the Solr index of API Manager.

Setting Up Solr Server

Setting up the Solr server is quite easy, as it is a matter of downloading the binary distribution from the project page [1]. The important thing here is to make sure you download the proper version. WSO2 API Manager 2.0.0 uses Solr 5.2.1; I figured that out by going through the API Manager release tag pom, identifying the registry version, and then searching the registry pom file.

Once you download the binary package, extract it. You can start the Solr server by going to the solr-5.2.1/bin directory and executing "./solr start". The Solr server will start as a background process. Then access its admin UI at http://localhost:8983/solr in your browser.

Inspecting WSO2 API Manager Index

Before doing so, you must stop both WSO2 API Manager and the Solr server. To stop the Solr server, execute the command "./solr stop" inside the bin directory. Then you need to copy the Solr indexing configs and the index itself from API Manager.

  • To copy the configs, go to "APIM_HOME/repository/conf/solr" and copy the "registry-indexing" folder to "solr-5.2.1/server/solr".
  • To copy the indexed data, go to "APIM_HOME" and copy the "solr" folder to the same folder in which "solr-5.2.1" resides. This is done to comply with the "dataDir" value in the "solr-5.2.1/server/solr/registry-indexing/core.properties" file.

Now start the Solr server and go to the admin UI. You should see a drop-down on the left pane with a "registry-indexing" menu item. Select "registry-indexing" and you will be able to query the indexed data from the Query section. To query the Solr index you need to use its query language, which is not actually difficult to understand. I'm not going to discuss the query language in detail here; it's up to you to refer to [2] and learn it. You can try out queries from the admin UI directly.

[Figure: registry-indexing in the Solr admin UI]
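The same queries can also be issued over HTTP against the core's select handler. For example, a sketch of a query URL (the field name and value are taken from the Java client below, with the + character URL-encoded as %2B; adjust them to your own index):

http://localhost:8983/solr/registry-indexing/select?q=*:*&fq=mediaType_s:application/vnd.wso2-api%2Bxml&rows=10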

Writing a Java client to query information

In some cases, you may need to write a client application which can talk to a Solr server and retrieve results. So here I am giving an example Java client which you can use to retrieve results from a Solr server [3]: first the Maven pom file, then the client code. I am not going to explain the code in detail, because I believe it's self-explanatory.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.buddhima.solr</groupId>
    <artifactId>solr-testing</artifactId>
    <version>1.0-SNAPSHOT</version>
    <dependencies>
        <!-- https://mvnrepository.com/artifact/org.apache.solr/solr-solrj -->
        <dependency>
            <groupId>org.apache.solr</groupId>
            <artifactId>solr-solrj</artifactId>
            <version>7.1.0</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.solr/solr-common -->
        <dependency>
            <groupId>org.apache.solr</groupId>
            <artifactId>solr-common</artifactId>
            <version>1.3.0</version>
        </dependency>
    </dependencies>
</project>
package com.solr.testing;

import java.io.IOException;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocumentList;

/**
 * Created by buddhima.
 */
public class SolrTesting {

    public static void main(String[] args) throws IOException, SolrServerException {
        // Default Solr port: 8983, and APIM using 'registry-indexing'
        SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/registry-indexing").build();

        SolrQuery query = new SolrQuery();
        query.setQuery("*:*");

        // Fields use for filtering as a list of key:value pairs
        query.addFilterQuery("allowedRoles:internal/everyone", "mediaType_s:application/vnd.wso2-api+xml");

        // Fields to show in the result
        query.setFields("overview_name_s", "overview_status_s", "updater_s", "overview_version_s", "overview_context_s");

        // Limit the query search space
        query.setStart(0);
        query.setRows(500);

        // Execute the query and print results
        QueryResponse response = client.query(query);
        SolrDocumentList results = response.getResults();
        for (int i = 0; i < results.size(); ++i) {
            System.out.println(results.get(i));
        }

        // Close the client to release the underlying HTTP connection resources
        client.close();
    }
}

In addition to that you can refer [4] [5] for further learning on Solr syntax.

Conclusion

In this post I have discussed the use of Solr in WSO2 API Manager and how to investigate the existing Solr index. In addition, I have shown how to construct a Java client which can talk to a Solr server. I hope the above explanation helps you to solve issues with Solr indexing.

Special thanks go to WSO2 support for providing guidance.

References

[1] Apache Solr project : http://lucene.apache.org/solr/

[2] Solr query syntax : http://www.solrtutorial.com/solr-query-syntax.html

[3] Using SolrJ : http://www.solrtutorial.com/solrj-tutorial.html

[4] Solr Query Syntax : http://yonik.com/solr/query-syntax/

[5] Solr df and qf explanation : https://stackoverflow.com/questions/17363677/solr-df-and-qf-explanation

JSON Split Aggregate with WSO2 ESB

Introduction

Split-Aggregate (Scatter-Gather) is a common messaging pattern [1] used in the enterprise world. In the split-aggregate pattern, a client's request is sent to multiple endpoints simultaneously. The responses from those endpoints are aggregated and sent back to the client as a single response. You will find plenty of use-cases for this pattern when you try to integrate enterprise systems.

[Figure: the scatter-gather pattern]

WSO2 ESB (currently a part of WSO2 EI) is a well-known middleware product used to integrate enterprise systems, with a comprehensive middleware stack that covers the functionality needed for such integrations. WSO2 ESB provides an in-built set of mediators for achieving this commonly used Split-Aggregate pattern: the Iterate Mediator, the Clone Mediator and the Aggregate Mediator. You will find a sample use-case of those mediators in this documentation [2].

Existing Problem

The existing mediators provide good support for Split-Aggregate scenarios when you are working with XML payloads. However, the current trend is more towards using JSON payloads for message exchanges. Although the existing mediators can still be used with JSON payloads, they do not support them conveniently: you need to map your JSON payload to an XML payload, and that conversion usually adds extra burden to the mediation logic.

In this post I am discussing two mediators which are optimized for JSON payload handling in Split-Aggregate scenarios: the Json Iterate Mediator and the Json Aggregate Mediator. These mediators handle JSON payloads natively and do not convert them to XML. Please note that these mediators do not come with WSO2 ESB out of the box; you can find the relevant documentation at this location [3].

Configuring Mediators

To use these mediators, you need to build the source code at [3] and get the resultant jar files. Put the Json (Iterate/Aggregate) Mediator-1.0.0.jar files into the ESB_HOME/repository/components/dropins folder. Along with those, add json-path-2.1.0.jar [4], json-smart-2.2.1.jar [5] and accessors-smart-1.1.jar [9] to the same location. Then start WSO2 ESB (sh bin/wso2server.sh).

Sample Scenario

Once you add those artifacts to the ESB, you can refer to the two new mediators just like the in-built ones. The respective XML tags are <jsonIterate> and <jsonAggregate>. In this post I'm showing a sample configuration using those mediators and describing it briefly. The same scenario is discussed in a more descriptive manner at [3].

<api xmlns="http://ws.apache.org/ns/synapse" name="sampleApi" context="/sample">
<resource methods="POST" uri-template="/*">
<inSequence>
    <log level="full"/>
    <jsonIterate continueParent="false" preservePayload="true" expression="$.messages" attachPath="$.messages">
        <target>
            <sequence>
                <log level="full"/>
                <header name="To" value="http://www.mocky.io/v2/58d6459b100000e601949cb7"/>
                <call/>
            </sequence>
        </target>
    </jsonIterate>
    <log level="full"/>
    <jsonAggregate>
        <completeCondition>
            <messageCount min="-1" max="-1"/>
        </completeCondition>
        <onComplete expression="$.message" enclosingElementProperty="responses">
            <log level="full"/>
            <send/>
        </onComplete>
    </jsonAggregate>
</inSequence>
</resource>
</api>

The above example shows how to do split-aggregate on a message received by an API. You can send the following request payload to the API created by the ESB at http://localhost:8280/sample:

curl -X POST \
  http://localhost:8280/sample \
  -H 'content-type: application/json' \
  -d '{"originator":"my-company","messages":[{"country":"Sri Lanka","code":"94"},{"country":"America","code":"01"},{"country":"Australia","code":"61"}]}'

The expression on the jsonIterate mediator decides where to split the message payload, and it should be written as a JSON Path [6]. Within the sequence inside the jsonIterate mediator, you will find the split message payloads; they are sent to the backend URL given in the config. You can find additional configuration options of the JSON Iterate mediator at [7].
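To make the split point concrete: applied to the request payload above, the JSON Path expression $.messages selects the following array, whose elements become the individual split messages:

[
  {"country":"Sri Lanka","code":"94"},
  {"country":"America","code":"01"},
  {"country":"Australia","code":"61"}
]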

Each response comes into the jsonAggregate mediator. In the expression of the JSON Aggregate mediator, you specify which part of each response should be taken for aggregation; this expression is again a JSON Path. Once the completion condition is satisfied, the aggregated message goes into the onComplete sequence, where you can do further processing on it. If you are more interested, the documentation [8] gives a complete guide to configuring the JSON Aggregate mediator.

Conclusion

Split-Aggregate is a very common message exchange pattern in the enterprise world, and JSON is becoming ever more popular as a message format. However, WSO2 ESB lacks convenient support for split-aggregate scenarios over JSON. To cater to that requirement I have built two custom mediators which make life easier: they can be configured to do split-aggregate on JSON payloads without converting them to XML (native JSON support).

References

[1] EIP patterns reference : http://www.enterpriseintegrationpatterns.com/patterns/messaging/BroadcastAggregate.html

[2] WSO2 ESB Doc : https://docs.wso2.com/display/ESB500/Split-Aggregate+Pattern

[3] GitHub repository : https://github.com/Buddhima/Json-EIP-Mediators

[4] json-path-2.1.0 : https://mvnrepository.com/artifact/com.jayway.jsonpath/json-path/2.1.0

[5] json-smart-2.2.1 : https://mvnrepository.com/artifact/net.minidev/json-smart/2.2.1

[6] JSON Path documentation : https://github.com/jayway/JsonPath/blob/json-path-2.1.0/README.md

[7] JSON Iterate Mediator documentation : https://github.com/Buddhima/Json-EIP-Mediators/tree/master/JsonIterateMediator

[8] JSON Aggregate Mediator documentation : https://github.com/Buddhima/Json-EIP-Mediators/tree/master/JsonAggregateMediator

[9] accessors-smart-1.1.jar : https://mvnrepository.com/artifact/net.minidev/accessors-smart/1.1

Kubernetes and related technologies

Introduction

For this experiment I have used an Ubuntu 16.04 machine; I believe an Ubuntu machine is still the most convenient place to play around with these kinds of technologies. I am not going to go deep into any of them, and I am focusing mostly on the Kubernetes commands I got familiar with recently.

Install Docker

First you need to set up a Docker environment on your machine to develop the Docker image. For this post, I'm using a NodeJS server which responds with a simple text message. I created the NodeJS application locally and used the following Dockerfile to create a Docker image of it. You need to put the Dockerfile in the same directory where the NodeJS project resides.

The sample Dockerfile I used is as follows:

FROM node:boron
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]

You can refer to this article for installing and getting familiar with Docker (https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-16-04).

Once you create the Docker image, push it to Docker Hub, so that Kubernetes can pick it up later.
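A minimal sketch of those two steps, assuming the image name used later in the kubectl run command below:

docker build -t buddhima/node-web-app:v1 .
docker push buddhima/node-web-app:v1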

Install Virtualbox

You can use the Ubuntu Software Center to install VirtualBox, or install it from the command line (https://askubuntu.com/questions/367248/how-to-install-virtualbox-from-command-line). I personally recommend VirtualBox over other hypervisors.

Install minikube and kubectl

You may wonder why I asked you to install VirtualBox. The reason is minikube. At the moment, the recommended way of testing Kubernetes locally is minikube with VirtualBox. To install minikube and kubectl, please follow the instructions given in this document (https://kubernetes.io/docs/tasks/tools/install-minikube/). While doing that, make sure to install minikube first and then a kubectl version that supports it.

Minikube gives you a single-node cluster in which you can create a new Kubernetes deployment. Docker is a dependency when using Kubernetes (rkt can be used too). Once minikube is set up, you need to start it using the minikube start command. Then you can interact with the Kubernetes cluster through kubectl commands. Here I have listed some important kubectl commands.

For the Docker image I created, I used the following command to create a Kubernetes deployment:

kubectl run my-test-app --image=docker.io/buddhima/node-web-app:v1 --port=3000

Other useful commands:

  • kubectl get <resource_type> – list information about a resource type; the resource type can be nodes/deployments/pods/services etc.
  • kubectl describe <resource_type> – get descriptive information about a resource type
  • kubectl describe <resource_type>/ID – get descriptive information about a single resource given by its ID
  • kubectl logs – print the logs from a container in a pod
  • kubectl exec – execute a command on a container (e.g. kubectl exec -it POD_NAME -c CONTAINER_NAME bash opens a bash shell in a container of a pod)

Deployment – a deployment is a configuration which instructs Kubernetes how to create and update instances of an app.

Pod – a pod is a collection of one or more tightly coupled application containers. A pod shares one IP and port space, and each pod in a Kubernetes cluster has a unique IP.

Service – a service is a logical set of pods defined by YAML/JSON, selected by a LabelSelector. The service types are ClusterIP, NodePort, LoadBalancer and ExternalName. This abstraction allows pods to die and be replaced while the service keeps routing traffic to the matching set of pods via their labels and selectors.

  • kubectl expose deployment/<deployment_name> --type="NodePort" --port 8080 – create a new service and expose it to external traffic
  • kubectl label pod POD_NAME app=foo – add a new label to a pod
  • kubectl delete <resource_type> <id> – delete the resource given by id
  • kubectl set image deployments/<deployment_id> <deployment_name>=docker.io/buddhima/node-web-app:v2 – set the container image to the given Docker Hub image
  • kubectl rollout status deployment/<deployment_name> – check the status of a rolling update
  • kubectl rollout undo deployment/<deployment_name> – undo a rollout update
  • kubectl scale deployment/<deployment_name> --replicas=4 – scale the deployment to 4 replicas; after scaling, use kubectl get pods -o wide to view the pods' status

Conclusion

The objective of this post is to give you a summarized set of important Docker and Kubernetes commands. I have talked mostly about the Kubernetes commands, which might help you in the future.

Simple Process Using Activiti

Introduction

Recently I came across the interesting area of business processes and workflows. Going beyond simple workflows, this is a vast area to explore. There are a number of workflow engines which provide generic functionality for developing workflows for almost any use-case. In this article, I am discussing my first experimental work done with Activiti [1]. For this experiment I used version 5.18.0 (the latest at the moment is 6.0.0).

Prerequisites

First you need to install the Java JDK and Tomcat on your machine. Then download Activiti from the official website [1] or GitHub [2]. Copy the activiti-explorer.war and activiti-rest.war files into the webapps folder of Tomcat. Finally, start Tomcat and go to the URL http://localhost:8080/activiti-explorer. To log in, use the demo user Kermit (username: kermit, password: kermit).

Use-case

The use-case I'm going to discuss is the software feature development process of a small company. For this example, let's assume there are 3 roles in the company: developers, who write the code; tech leads, who review the code; and QAs, who test the code. The team members are as follows:

Developers: Mike, Jack

Tech-leads: Chris, Brian

QAs: Sandy, Alice

To cater to this requirement, go to "Manage" and use the "Groups" and "Users" tabs to set up user accounts and assign users to groups.

Implementation

Let's develop this scenario with Activiti. Go to the "Processes" tab, then "Model Workspace", and start creating a new model. Activiti Explorer provides a convenient web UI for developing models: you can drag and drop elements from the panel to create the desired workflow. You can get an explanation of each element from the User Guide [3].

The final workflow looks as follows:

[Figure: the final workflow model]

In the process I have used 3 User Tasks:

Develop features: assigned to “developers” group

Code Review: assigned to “tech-leads” group

Quality Checking: assigned to “qas” group

In addition to the assignments, I have added form properties to Code Review and Quality Checking to capture the reviewer's and tester's opinions.

Once you save the model, you can deploy it using Activiti Explorer, and "Deployed Process Definitions" shows the deployed model. You can start a process by clicking the "Start process" button on the top right.

[Figure: the Activiti Explorer UI]

Now the process proceeds as described in the workflow model. You can open a new web browser, log in as one of the developers and claim the task. Once a developer completes the task, the tech leads and QAs can complete their tasks respectively. The figure below depicts the view of a developer.

[Figure: Jack's (a developer's) view]

So far I have discussed only a small portion of what you can do with Activiti, but there's much more:

Activiti access with Java: https://www.activiti.org/userguide/#_rest_support
(You need to sync the activiti-explorer and activiti-rest web apps to view results in activiti-explorer)

Explore history: https://www.activiti.org/userguide/#history

Eclipse designer: https://www.activiti.org/userguide/#activitiDesigner

Activiti REST API: https://www.activiti.org/userguide/#_rest_api
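As a quick taste of the REST API, a new process instance can be started with a curl call; a sketch, assuming the model above was deployed with the (hypothetical) process definition key featureDevelopment:

curl -u kermit:kermit -H 'content-type: application/json' \
  -d '{"processDefinitionKey":"featureDevelopment"}' \
  http://localhost:8080/activiti-rest/service/runtime/process-instances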

Conclusion

This post is meant to give you a head start with the Activiti workflow engine. You can associate Activiti with your application and design flexible workflows according to your needs.

References

[1] https://www.activiti.org/

[2] https://github.com/Activiti/Activiti/releases

[3] https://www.activiti.org/userguide/#bpmnConstructs

Start Using Java 8 Lambda Expressions

Introduction

Java 8 comes with a bunch of new features for its developers. One such improvement is lambda expressions. Lambda expressions allow Java developers to treat functions as values and pass them as arguments to methods. This might be familiar to people with a functional programming background, but a bit difficult for people who tend to think with an object-oriented mindset. In this article, I go through a series of examples to show how we can use lambda expressions in code.

Prerequisites

Before you start, you need to install Java 8 on your machine and create a project using Java 8. I'm going to describe things at a fairly abstract level, so you should know how to use your IDE for basic tasks such as creating classes and executing Java code.

Example Scenario

For this article I have selected a scenario where you need to go through a set of books and select books based on different criteria. The selection criteria are: list all the books, list the books which are novels, and list the titles of the books written in the 20th century. So let's start coding.

Creating entity class

The first step in solving such a problem is to create an entity class called Book which can represent a single instance of a book. So here it is.


public class Book {

    private String name;
    private String author;
    private int year;
    private String language;
    private String category;

    public Book(String name, String author, int year, String language, String category) {
        this.name = name;
        this.author = author;
        this.year = year;
        this.language = language;
        this.category = category;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getAuthor() {
        return author;
    }

    public void setAuthor(String author) {
        this.author = author;
    }

    public int getYear() {
        return year;
    }

    public void setYear(int year) {
        this.year = year;
    }

    public String getLanguage() {
        return language;
    }

    public void setLanguage(String language) {
        this.language = language;
    }

    public String getCategory() {
        return category;
    }

    public void setCategory(String category) {
        this.category = category;
    }

    @Override
    public String toString() {
        return "Book{" +
                "name='" + name + '\'' +
                ", author='" + author + '\'' +
                ", year=" + year +
                ", language='" + language + '\'' +
                ", category='" + category + '\'' +
                '}';
    }
}

I don't think the above simple Java class needs an explanation, as it should be familiar to you.

Conventional Method

This is what we usually do to solve this type of problem.

import java.util.Arrays;
import java.util.List;

public class BookFinderExample1 {

    public static void main(String[] args) {
        List<Book> books = Arrays.asList(
                new Book("Moby-Dick", "Herman Melville", 1851, "EN", "Novel"),
                new Book("War and Peace", "Leo Tolstoy", 1869, "RU", "Novel"),
                new Book("The Three Musketeers", "Alexandre Dumas", 1844, "FR", "Novel"),
                new Book("Les Miserables", "Victor Hugo", 1862, "FR", "Fiction"),
                new Book("Journey to the West", "Wu Cheng'en", 1592, "ZH", "Fiction"),
                new Book("Wild Swans", "Jung Chang", 1991, "ZH", "Biography"),
                new Book("The Reader", "Bernhard Schlink", 1995, "DE", "Novel"),
                new Book("Perfume", "Patrick Suskind", 1985, "DE", "Fiction")
        );

        // 1. print all books
        System.out.println("Print all books");
        printAllBooks(books);

        // 2. print all novels
        System.out.println("Print all novels");
        printAllNovels(books);

        // 3. print all books in 20th century
        System.out.println("Print all books in 20th century");
        printAllIn20thCentury(books);
    }

    private static void printAllBooks(List<Book> books) {
        for (Book book : books) {
            System.out.println(book.toString());
        }
    }

    private static void printAllNovels(List<Book> books) {
        for (Book book : books) {
            if (book.getCategory().equals("Novel"))
                System.out.println(book.toString());
        }
    }

    private static void printAllIn20thCentury(List<Book> books) {
        for (Book book : books) {
            if (book.getYear() > 1900 && book.getYear() < 2001)
                System.out.println(book.getName());
        }
    }
}

First we created a list of books (I won't repeat this step in the next examples). Then we created 3 methods which serve our purposes, and we call those methods one by one. Though this fulfills the requirement, it is not a scalable solution: every time a new requirement comes in, we need to create a new method and call it.

Using Generic Solution

If we look carefully into the methods we have implemented, they all do a common thing: they iterate through a given list of books, check a condition and perform an action (e.g. print the book). So we can capture the condition and the action as interfaces and stick with just one method. Let's see how.

import java.util.Arrays;
import java.util.List;

public class BookFinderExample2 {

    public static void main(String[] args) {
        List<Book> books = Arrays.asList(
                .......
        );

        // 1. print all books
        System.out.println("Print all books");
        printBooks(books, new Checker() {
            public boolean check(Book book) {
                return true;
            }
        }, new Action() {
            public void perform(Book book) {
                System.out.println(book.toString());
            }
        });

        // 2. print all novels
        System.out.println("Print all novels");
        printBooks(books, new Checker() {
            public boolean check(Book book) {
                return book.getCategory().equals("Novel");
            }
        }, new Action() {
            public void perform(Book book) {
                System.out.println(book.toString());
            }
        });

        // 3. print all books in 20th century
        System.out.println("Print all books in 20th century");
        printBooks(books, new Checker() {
            public boolean check(Book book) {
                return (book.getYear() > 1900 && book.getYear() < 2001);
            }
        }, new Action() {
            public void perform(Book book) {
                System.out.println(book.getName());
            }
        });
    }

    private static void printBooks(List<Book> books, Checker checker, Action action) {
        for (Book book : books) {
            if (checker.check(book)) {
                action.perform(book);
            }
        }
    }

    interface Checker {
        boolean check(Book book);
    }

    interface Action {
        void perform(Book book);
    }
}

In the above solution I have introduced two interfaces, each of which exposes a single method. With just one printBooks method and objects implementing those two interfaces, we have achieved the same results as before. We have extracted the check and the action and injected them into the printBooks method, creating the instances of Checker and Action as anonymous inner classes.

This generalizes the solution nicely, but the syntax is rather tedious. So far we have not used anything new from Java 8, so let's use its new features to ease our coding.

Using Lambda Expressions

Let's have a look at one of our anonymous inner classes.

new Action() {
    public void perform(Book book) {
        System.out.println(book.toString());
    }
}

The above is an anonymous inner class created for the Action interface. The Action interface has only a single method, called perform, which takes one argument and prints it. Such an anonymous inner class can be written as follows.

(Book book) -> System.out.println(book.toString())

Since the Action interface has only one method, we don't need to state its name. Since the body has only one statement, we don't need curly brackets. And with a single argument, we can drop the parentheses and the argument type as well:

book -> System.out.println(book.toString())

That’s a lambda expression. So let’s substitute lambda expressions.

import java.util.Arrays;
import java.util.List;

public class BookFinderExample3 {

    public static void main(String[] args) {
        List<Book> books = Arrays.asList(
                ....
        );

        // 1. print all books
        System.out.println("Print all books");
        printBooks(books, book ->  true, book -> System.out.println(book.toString()));

        // 2. print all novels
        System.out.println("Print all novels");
        printBooks(books, book -> book.getCategory().equals("Novel"), book -> System.out.println(book.toString()));

        // 3. print all books in 20th century
        System.out.println("Print all books in 20th century");
        printBooks(books, book ->  (book.getYear() > 1900 && book.getYear() < 2001), book -> System.out.println(book.getName()));
    }

    private static void printBooks(List<Book> books, Checker checker, Action action) {
        for (Book book : books) {
            if (checker.check(book)) {
                action.perform(book);
            }
        }
    }

    @FunctionalInterface
    interface Checker {
        boolean check(Book book);
    }

    @FunctionalInterface
    interface Action {
        void perform(Book book);
    }
}

We introduced 2 interfaces for our work, but is that necessary? No. The JDK developers identified this need and provide a set of predefined functional interfaces in the java.util.function package. All we have to do is reuse them!

import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

public class BookFinderExample4 {

    public static void main(String[] args) {
        List<Book> books = Arrays.asList(
                ...
        );

        // 1. print all books
        System.out.println("Print all books");
        printBooks(books, book ->  true, book -> System.out.println(book.toString()));

        // 2. print all novels
        System.out.println("Print all novels");
        printBooks(books, book -> book.getCategory().equals("Novel"), book -> System.out.println(book.toString()));

        // 3. print all books in 20th century
        System.out.println("Print all books in 20th century");
        printBooks(books, book ->  (book.getYear() > 1900 && book.getYear() < 2001), book -> System.out.println(book.getName()));
    }

    private static void printBooks(List<Book> books, Predicate<Book> checker, Consumer<Book> action) {
        for (Book book : books) {
            if (checker.test(book)) {
                action.accept(book);
            }
        }
    }
}

In the above example we have used the two interfaces Predicate and Consumer. You can find more information about them on Oracle's official website, along with plenty of other such interfaces that can help your coding.

Using streams

We are not going to stop our simplification there. If we look into the printBooks method, it iterates through the list of books and performs the things specified by the interface implementations. This kind of iteration is called external iteration, because our for-loop iterates the list. In Java 8, streams provide internal iteration, which we get by converting the list into a stream. A stream can be thought of as a conveyor belt which brings the list items one by one: we can use filters to pick out the required elements and then perform whatever action we like. Using streams, our code looks as follows.

import java.util.Arrays;
import java.util.List;

public class BookFinderExample5 {

    public static void main(String[] args) {
        List<Book> books = Arrays.asList(
                ...
        );

        // 1. print all books
        System.out.println("Print all books");
        books.stream()
                .filter(book ->  true)
                .forEach(book -> System.out.println(book.toString()));

        // 2. print all novels
        System.out.println("Print all novels");
        books.stream()
                .filter(book -> book.getCategory().equals("Novel"))
                .forEach(book -> System.out.println(book.toString()));

        // 3. print all books in 20th century
        System.out.println("Print all books in 20th century");
        books.stream()
                .filter(book ->  (book.getYear() > 1900 && book.getYear() < 2001))
                .forEach(book -> System.out.println(book.getName()));
    }

}
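One last polish: a lambda such as book -> System.out.println(book.toString()) can be replaced with the method reference System.out::println, since println accepts any Object and calls toString() on it internally. For example, the second query becomes:

        books.stream()
                .filter(book -> book.getCategory().equals("Novel"))
                .forEach(System.out::println);   // method reference instead of a lambda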

Conclusion

We started from conventional code and brought in the new Java 8 features step by step: lambda expressions, streams and forEach. You can apply these Java 8 concepts in your own development similarly. The resources in the references section below will help you find more information about Java 8 lambda expressions.

References

[1] Oracle’s Java 8 website : http://docs.oracle.com/javase/8/docs/

[2] Oracle’s website on util-functions : https://docs.oracle.com/javase/8/docs/api/java/util/function/package-summary.html

[3] Java Brains Tutorial on Lambda Expressions : https://www.youtube.com/watch?v=gpIUfj3KaOc&list=PLqq-6Pq4lTTa9YGfyhyW2CqdtW9RtY-I3

Microservices with Spring Boot

Introduction

Currently, enterprise application development is leaning strongly towards microservices. This trend started about 2 years back, and some organizations have taken it as an opportunity to do a complete re-write of their products. To help with developing microservices, several organizations have implemented frameworks. Here I am talking about using Spring Boot to create a very basic microservice.

Use-case

This system is about handling patient records, so it is essentially a CRUD application. To persist data, I am using MongoDB (the embedded version). First, let's see the structure of this project.

[Figure: project structure]

First you need to create a project with the above structure; you may find Maven archetypes which help with that. Next, the pom file should be created properly. Here I'm showing the important sections of the pom file.

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.4.1.RELEASE</version>
</parent>
...
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-mongodb</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-jersey</artifactId>
    </dependency>
    <dependency>
        <groupId>de.flapdoodle.embed</groupId>
        <artifactId>de.flapdoodle.embed.mongo</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

The Application.java file contains the main method which starts the microservice. It looks as follows:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

The ApplicationConfig.java file provides configuration to the Spring framework. Here we register the location of the JAX-RS service classes and a REST template. It looks as follows:

import org.glassfish.jersey.server.ResourceConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

import javax.inject.Named;

@Configuration
public class ApplicationConfig {
    @Named
    static class JerseyConfig extends ResourceConfig {
        public JerseyConfig() {
            this.packages("com.project.capsule.rest");
        }
    }

    @Bean
    public RestTemplate restTemplate() {
        RestTemplate restTemplate = new RestTemplate();
        return restTemplate;
    }
}

Next we can extend MongoRepository to create PatientReportRepository. This is a very interesting capability of Spring Data, as it derives the queries directly from the method names.

import com.project.capsule.bean.PatientReport;
import org.springframework.data.mongodb.repository.MongoRepository;
import java.util.List;

public interface PatientReportRepository extends MongoRepository<PatientReport, String> {

    public List<PatientReport> findByName(String name);

    public List<PatientReport> findByNameLike(String name);

    public List<PatientReport> findByTimeBetween(long from, long to);

}

Now let's create the bean class, PatientReport:

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import org.springframework.data.annotation.Id;
import java.util.Map;

@JsonIgnoreProperties(ignoreUnknown = true)
public class PatientReport {

    @Id
    public String id;

    public String name;
    public int age;
    public String sex;
    public String doctorName;
    public long time;
    public String reportType;
    public Map<String, Object> reportData;
}

Finally, the service class, PatientReportService. You can define any number of methods here and implement custom logic.

import com.project.capsule.PatientReportRepository;
import com.project.capsule.bean.PatientReport;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestParam;

import javax.inject.Named;
import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import java.util.*;

@Named
@Path("/report")
public class PatientReportService {

    @Autowired
    private PatientReportRepository repository;

    @POST
    @Path("")
    @Consumes(MediaType.APPLICATION_JSON)
    public Response storePatientReport(@RequestBody PatientReport patientReport) {
        repository.save(patientReport);
        return Response.status(201).build();
    }

    @GET
    @Path("{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public PatientReport retrievePatientReport(@PathParam("id") int id) {
        PatientReport patientReport = repository.findOne(String.valueOf(id));
        return patientReport;
    }

    @POST
    @Path("find")
    public List<PatientReport> findReports(@RequestParam Map<String, Object> map) {
        List<PatientReport> patientReports = new ArrayList<PatientReport>();
        Map<String, PatientReport> resultantMap = new HashMap<String, PatientReport>();
        List<PatientReport> resultantReports;

        if (map.containsKey("name") && map.get("name") != null) {
            String patientName = (String) map.get("name");
            if (!patientName.trim().equalsIgnoreCase("")) {
                resultantReports = repository.findByNameLike(patientName);

                for (PatientReport report : resultantReports)
                    resultantMap.put(report.id, report);
            }
        }
        // Collect the de-duplicated results; without this the method always returned an empty list
        patientReports.addAll(resultantMap.values());
        return patientReports;
    }

}

Once you run the Application.java file, the microservice will start on port 8080. You can change the port by passing an argument such as "-Dserver.port=8090". Thereafter you can use a REST client to send HTTP requests and see how it works!
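For a quick test you can exercise the service with curl; a sketch, assuming the default Jersey servlet mapping at the server root (so the resource is reachable at /report) and made-up sample values:

# Store a patient report
curl -X POST http://localhost:8080/report \
  -H 'content-type: application/json' \
  -d '{"id":"1","name":"John Doe","age":34,"sex":"M","doctorName":"Dr. Smith","time":1480000000000,"reportType":"blood","reportData":{"wbc":"5.5"}}'

# Retrieve it back by id
curl http://localhost:8080/report/1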
