Analyzing Memory Usage of an Application


Memory usage is a key factor to monitor in any application. Especially in production systems, you need to set alarms to make sure the system stays stable; memory usage can therefore be treated as a probe for measuring the health of the system. Production systems are usually installed on Linux servers, and the OS itself helps in many ways to provide a clear view of an application's memory usage.


In this post, I am going to discuss different commands and tools which can be used to measure the memory usage of applications, especially Java applications. This post will guide you from higher level to lower level under the following topics.

  1. Monitoring Overall System Memory Usage
  2. Monitoring Application’s Memory Usage
  3. Analyzing Java Application’s Memory Usage

Further information on the commands and tools can be found by following the external links provided.

Monitoring Overall System Memory Usage

For this purpose I am going to discuss the "free" command in Linux. The free command gives an overview of complete system memory usage. Note that this is therefore not a precise way of measuring a single application's memory footprint, because the system can host many applications and each application has its own boundaries. However, let's look into some usages of the free command. (Hint: use free -h to get human-readable output.)

[root@operation_node ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          7873       7360        512          0         70        920
-/+ buffers/cache:       6369       1503
Swap:        11143        431      10712

In the first line of the above output, you can see that total physical memory is 7873 MB and 7360 MB is used, leaving only 512 MB free. However, this does not imply that the memory is almost exhausted. Linux is good at using memory effectively: it uses caching to make memory access efficient, and that cached memory is counted as used in the first line.
What you should look at is the second line, which removes the cache and buffer usage from the physical memory figures. The used column of the second line shows the actual memory in use without caches and buffers, and the free column shows 1503 MB of free memory, obtained by adding free + buffers + cached (512 + 70 + 920; the 1 MB discrepancy comes from rounding to MB). So you actually have about 1503 MB of physical memory available for use. In addition, according to the third line, you have around 10 GB of swap space ready for use. Please refer to [1] for further information.
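The second line's arithmetic can be checked with a short shell snippet. The numbers below are taken from the sample output above (on a real system you would parse `free -m` directly); note the 1 MB differences versus the displayed 1503/6369 are just rounding from the -m option.

```shell
# Recompute the "-/+ buffers/cache" line from the sample "Mem:" row shown above:
# usable memory = free + buffers + cached, real use = used - buffers - cached
free_output='Mem:          7873       7360        512          0         70        920'

set -- $free_output          # split the Mem: row into positional parameters
total=$2; used=$3; free=$4; buffers=$6; cached=$7

usable=$((free + buffers + cached))      # memory actually available to applications
real_used=$((used - buffers - cached))   # memory used excluding cache/buffers

echo "usable=${usable}MB real_used=${real_used}MB"
```

Running this prints roughly the same values as the second line of the free output.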

In modern versions of the Linux kernel, the output of the free command has changed. It looks something like this:

me@my-pc ~ $ free -m
              total        used        free      shared  buff/cache   available
Mem:           7882        3483         299         122        4100        3922
Swap:          9535           0        9535

In the above case, the values relate as follows [2] [3]:

total = used + free + buff/cache

available : is the amount of memory which is available for allocation to a new process or to existing processes.

free : is the amount of memory which is currently not used for anything. This number should be small, because memory which is not used is simply wasted.
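You can verify the formula against the sample output above with simple shell arithmetic (the numbers are copied from the Mem: row of the newer free output):

```shell
# total = used + free + buff/cache, using the sample values above
total=7882; used=3483; free=299; buff_cache=4100; available=3922

sum=$((used + free + buff_cache))
echo "total=${total} sum=${sum}"   # the two should match

# "available" is larger than "free" because reclaimable cache counts toward it
echo "free=${free} available=${available}"
```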

Monitoring Application's Memory Usage

In many cases we want to monitor a single application rather than the overall system, since overall memory usage includes OS-level operations as well. We can use the top command in Linux for this purpose. The following is a sample output of the top command.

A result of a top command

What you should actually focus on are the RES and %MEM values of the application (for this, first identify the process id of the application using the ps aux | grep "application_name" command). You can press "e" to toggle the unit used for displaying memory.

RES -- Resident Memory Size : The non-swapped physical memory a task has used.

%MEM -- Memory Usage (RES) : A task's currently used share of available physical memory (RAM).

As discussed above, the top command directly reveals the memory consumption of an application. For further information on the top command, you may refer to [4] [5].
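If you prefer a one-shot, scriptable check instead of the interactive top screen, ps can report the same RSS (RES) and %MEM columns for a single process. A small sketch, using the current shell's pid as a stand-in for an application pid you would normally find with something like pgrep -f "my-app" ("my-app" being a placeholder name):

```shell
# Look up resident set size (RSS, in KB) and %MEM for one process.
pid=$$                                            # stand-in for your application's pid
rss_kb=$(ps -o rss= -p "$pid" | tr -d ' ')        # non-swapped physical memory, KB
mem_pct=$(ps -o pmem= -p "$pid" | tr -d ' ')      # share of physical RAM, percent
echo "pid=${pid} rss=${rss_kb}KB mem=${mem_pct}%"
```

This is handy for cron-based monitoring, where top's interactive display is not usable.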

Analyzing Java Application's Memory Usage

If your application is a Java application, you might need to look at which objects consume the largest amounts of memory. One option for that is taking a heap dump of the Java application. You may use the following command to take a heap dump; before that, you need to know the process id of the running Java application.

jmap -dump:format=b,file=heap_dump.hprof <process_id>

Once you execute the command, you will get the file heap_dump.hprof containing the heap usage of the Java program. Since the file is in binary format, you need a special tool to analyze it. A commonly used tool for inspecting heap dumps is the Eclipse Memory Analyzer Tool (MAT) [6], which is built on top of the Eclipse platform. You just need to download the package and extract it. Executing MemoryAnalyzer opens a GUI application for analyzing the heap dump. When you open the heap dump with MAT, the tool will prompt you to generate reports based on it. You may be interested in the Leak Suspects Report, which shows the large objects that take up a significant portion of the memory.

Memory Analyzer Tool with Leak Suspects Report

Another interesting view of this tool is the Dominator Tree, which shows large objects along with the objects that keep them alive. According to the definition [7]:

An object x dominates an object y if every path in the object graph from the start (or the root) node to y must go through x.

In the dominator tree view, you will see a list of objects and the amount of memory they occupied at the time the heap dump was taken.

Dominator Tree view of Memory Analyzer

In the dominator tree view, you can expand each entry and see how the objects are composed. The two columns shown in this view are Shallow Heap and Retained Heap; by default the list is sorted by Retained Heap in descending order. The following definitions [8] clearly explain the meaning of those two values.

Shallow heap is the memory consumed by one object. An object needs 32 or 64 bits (depending on the OS architecture) per reference, 4 bytes per Integer, 8 bytes per Long, etc. Depending on the heap dump format the size may be adjusted (e.g. aligned to 8, etc...) to model better the real consumption of the VM.

Retained set of X is the set of objects which would be removed by GC when X is garbage collected.

Retained heap of X is the sum of shallow sizes of all objects in the retained set of X, i.e. memory kept alive by X.

Generally speaking, shallow heap of an object is its size in the heap and retained size of the same object is the amount of heap memory that will be freed when the object is garbage collected.
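To make the definitions concrete, here is a tiny made-up object graph (all sizes invented for illustration): A holds the only references to B and C, so A dominates both.

```
A (shallow 32 B)
├── B (shallow 16 B)   ← reachable only through A
└── C (shallow 24 B)   ← reachable only through A

retained set of A   = {A, B, C}
shallow heap of A   = 32 B
retained heap of A  = 32 + 16 + 24 = 72 B
```

If A were garbage collected, B and C would go with it, which is why 72 B (not 32 B) is the memory you would get back.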

Therefore, in case of out-of-memory errors or high-memory alerts, you should definitely focus on the Retained Heap values in the dominator tree view.


In this post I wanted to give a clear idea of using several tools to analyze the memory usage of an application running on Linux. I have mostly used commands which come with Linux itself, though you can also find tools which can be installed for memory analysis. The last section discussed how the Eclipse Memory Analyzer can be used to examine the heap usage of a Java program. I hope this helps you as well.


[1] Understanding Linux free memory :

[2] Usage of free memory :

[3] Ask Ubuntu clarification on free command :

[4] Super-user forum top command explanation :

[5] Linuxarea blog top command explanation :

[6] Eclipse Memory Analyzer :

[7] MAT Dominator Tree :

[8] MAT Shallow Heap and Retained Heap explanation :


Inspecting Solr Index in WSO2 API Manager


The Apache Solr project [1] lets you run a full-featured, standalone search server. You can also integrate Solr with your project to make searching faster. In WSO2 API Manager, Solr is used to make searching faster in the store and publisher. Solr indexing keeps the frequently used metadata of APIs; to retrieve complete information about an API, API Manager then uses its database. This mechanism makes searching faster and reduces the burden on the databases.


However, in some situations things may go wrong. We have seen several cases where the Solr index does not tally with the information in the database. When that happens, you may see inconsistent information when complete API details are displayed. In such situations you may want to inspect the Solr index of API Manager.

Setting Up Solr Server

Setting up the Solr server is quite easy, as it is just a matter of downloading the binary package from the project page [1]. The important thing is to make sure you download the proper version: WSO2 API Manager 2.0.0 uses Solr 5.2.1. I figured this out by going through the API Manager release tag pom, identifying the registry version, and searching the registry pom file.

Once you download the binary package, extract it. You can start the Solr server by going to the solr-5.2.1/bin directory and executing ./solr start. The Solr server will start as a background process. Then access its admin UI at http://localhost:8983/solr in your browser.

Inspecting WSO2 API Manager Index

Before doing so, you must stop both WSO2 API Manager and the Solr server. To stop the Solr server, execute ./solr stop inside the bin directory. Then you need to copy the Solr indexing configs and index from API Manager.

  • To copy the configs, go to "APIM_HOME/repository/conf/solr" and copy the "registry-indexing" folder to the "solr-5.2.1/server/solr" folder.
  • To copy the indexed data, go to "APIM_HOME" and copy the "solr" folder to the same folder in which "solr-5.2.1" resides. This is done to comply with the "dataDir" value in the "solr-5.2.1/server/solr/registry-indexing/" file.

Now start the Solr server and go to the admin UI. You should see a drop-down on the left pane with a "registry-indexing" menu item. Select "registry-indexing" and you will be able to query the indexed data from the Query section. To query the Solr index you need to use its specific query language, which is not difficult to understand. I'm not going to discuss the query language in depth here; please refer to [2] to learn it. You can try out queries directly from the admin UI.
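As a starting point, a query like the following in the admin UI's Query section lists API artifacts in the index (the field names are examples taken from the API Manager registry index; adjust them to whatever your index actually contains):

```
q    = *:*
fq   = mediaType_s:application/vnd.wso2-api+xml
fl   = overview_name_s, overview_status_s, overview_version_s
rows = 10
```

q selects everything, fq narrows the results to API artifacts, fl limits which stored fields are returned, and rows caps the result count.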

registry-indexing in Solr admin UI

Writing a Java client to query information

In some cases, you may need to write a client which can talk to a Solr server and retrieve results. So here is an example Java code snippet which you can use to query a Solr server [3]. I am not going to explain the code in detail, because I believe it's self-explanatory.

The client uses the SolrJ library, so you need the solr-solrj dependency (matching your Solr version) in your pom file.

package com.solr.testing;

import java.io.IOException;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocumentList;

/**
 * Created by buddhima.
 */
public class SolrTesting {

    public static void main(String[] args) throws IOException, SolrServerException {
        // Default Solr port: 8983, and APIM using 'registry-indexing'
        SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/registry-indexing").build();

        SolrQuery query = new SolrQuery();
        query.setQuery("*:*");

        // Fields used for filtering, as a list of key:value pairs
        query.addFilterQuery("allowedRoles:internal/everyone", "mediaType_s:application/vnd.wso2-api+xml");

        // Fields to show in the result
        query.setFields("overview_name_s", "overview_status_s", "updater_s", "overview_version_s", "overview_context_s");

        // Limit the query search space
        query.setStart(0);
        query.setRows(500);

        // Execute the query and print results
        QueryResponse response = client.query(query);
        SolrDocumentList results = response.getResults();
        for (int i = 0; i < results.size(); ++i) {
            System.out.println(results.get(i));
        }
    }
}
In addition, you can refer to [4] [5] for further learning on Solr query syntax.


In this post I have discussed the use of Solr in WSO2 API Manager and how to inspect the existing Solr index. In addition, I have shown how to construct a Java client which can talk to a Solr server. I hope the above explanation helps you solve issues with Solr indexing.

Special thanks go to WSO2 support for providing guidance.


[1] Apache Solr project :

[2] Solr query syntax :

[3] Using SolrJ :

[4] Solr Query Syntax :

[5] Solr df and qf explanation :

JSON Split Aggregate with WSO2 ESB


Split-Aggregate (Scatter-Gather) is a common messaging pattern [1] used in the enterprise world. In the split-aggregate pattern, a client's request is sent to multiple endpoints simultaneously. The responses from those endpoints are aggregated and sent back to the client as a single response. You will find plenty of use-cases where this pattern comes into play when you try to integrate enterprise systems.


WSO2 ESB (currently a part of WSO2 EI) is a well-known middleware product used to integrate enterprise systems. It is also known for its comprehensive middleware stack, which comprises all the functionality you need to integrate enterprise systems. WSO2 ESB provides an in-built set of mediators to achieve this commonly used Split-Aggregate pattern: the Iterate Mediator, Clone Mediator and Aggregate Mediator. You will find a sample use-case of those mediators in the documentation [2].

Existing Problem

The existing mediators provide good support for split-aggregate scenarios when you are working with XML payloads. However, the current trend is towards using JSON payloads for message exchange. Although the existing mediators can still be used with JSON payloads, they do not provide convenient support: you need to map your JSON payload to an XML payload, and this conversion usually adds extra burden to the mediation logic.

In this post I discuss two mediators which are optimized for JSON payload handling in split-aggregate scenarios: the JSON Iterate Mediator and the JSON Aggregate Mediator. These mediators handle JSON payloads natively and do not convert them to XML. Please note that they do not come with WSO2 ESB out of the box; you can find the relevant documentation at [3].

Configuring Mediators

To use these mediators, you need to build the source code at [3] and get the resultant jar files. Put the JSON Iterate and JSON Aggregate mediator 1.0.0 jar files into the ESB_HOME/repository/components/dropins folder. Along with those, add json-path-2.1.0.jar [4], json-smart-2.2.jar [5] and accessors-smart-1.1.jar [9] to the same location. Then start WSO2 ESB using the startup script in the bin directory.

Sample Scenario

Once you add those artifacts to the ESB, you can refer to the two new mediators just like the in-built ones. The respective XML tags are <jsonIterate> and <jsonAggregate>. In this post I'm showing a sample configuration using those mediators and describing it briefly. The same scenario is discussed in a more descriptive manner at [3].

<api xmlns="http://ws.apache.org/ns/synapse" name="sampleApi" context="/sample">
    <resource methods="POST" uri-template="/*">
        <inSequence>
            <log level="full"/>
            <jsonIterate continueParent="false" preservePayload="true" expression="$.messages" attachPath="$.messages">
                <target>
                    <sequence>
                        <log level="full"/>
                        <header name="To" value=""/>
                        <send/>
                    </sequence>
                </target>
            </jsonIterate>
        </inSequence>
        <outSequence>
            <log level="full"/>
            <jsonAggregate>
                <completeCondition>
                    <messageCount min="-1" max="-1"/>
                </completeCondition>
                <onComplete expression="$.message" enclosingElementProperty="responses">
                    <log level="full"/>
                    <send/>
                </onComplete>
            </jsonAggregate>
        </outSequence>
    </resource>
</api>

The above example shows how to do split-aggregate on a message received by an API. You can send the following request payload to the API created by the ESB at http://localhost:8280/sample

curl -X POST \
  http://localhost:8280/sample \
  -H 'content-type: application/json' \
  -d '{"originator":"my-company","messages":[{"country":"Sri Lanka","code":"94"},{"country":"America","code":"01"},{"country":"Australia","code":"61"}]}'

The expression on the jsonIterate mediator decides where to split the message payload; it should be written as a JSONPath [6]. Within the sequence inside the jsonIterate mediator, you will find the split message payloads, which are sent to the backend URL given in the config. You can refer to additional configuration options of the JSON Iterate mediator at [7].
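For the sample request payload above, the JSONPath $.messages selects the array to split, so the iterate mediator produces one message per element (with preservePayload=true each split message also keeps the enclosing payload; only the selected elements are shown here):

```
$.messages  →  [ {"country":"Sri Lanka","code":"94"},
                 {"country":"America","code":"01"},
                 {"country":"Australia","code":"61"} ]

split message 1: {"country":"Sri Lanka","code":"94"}
split message 2: {"country":"America","code":"01"}
split message 3: {"country":"Australia","code":"61"}
```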

Each response comes into the jsonAggregate mediator. In the expression of the JSON Aggregate mediator, you specify which part of the response should be taken for aggregation; this is again a JSONPath expression. Once the completion condition is satisfied, the aggregated message goes into the onComplete sequence, where you can do further processing on it. If you are interested, the documentation [8] gives a complete guide to configuring the JSON Aggregate mediator.


Split-Aggregate is a very common message-exchange pattern in the enterprise world, and JSON is becoming ever more popular as a message format. However, WSO2 ESB lacks convenient support for JSON in split-aggregate scenarios. To cater to that requirement I have built two custom mediators which make life easier: they can be configured to do split-aggregate on JSON payloads without converting them to XML (native JSON support).


[1] EIP patterns reference :

[2] WSO2 ESB Doc :

[3] GitHub repository :

[4] json-path-2.1.0 :

[5] json-smart-2.2.1 :

[6] JSON Path documentation :

[7] JSON Iterate Mediator documentation :

[8] JSON Aggregate Mediator documentation :

[9] accessors-smart-1.1.jar :

Kubernetes and related technologies


For this experiment I have used an Ubuntu 16.04 machine; I believe an Ubuntu machine is still the most convenient way to play around with these kinds of technologies. I am not going to go deep into any of these technologies. Instead, I am focusing on the Kubernetes commands I got familiar with recently.


Install Docker

First you need to set up a Docker environment on your machine to build the Docker image. For this post, I'm using a NodeJS server which responds with a simple text message. I created the NodeJS application locally and used the following Dockerfile to create a Docker image of it. The Dockerfile must be placed in the same directory as the NodeJS project.

Sample docker file I used is as follow:

FROM node:boron

WORKDIR /usr/src/app

COPY package.json .
RUN npm install

COPY . .

CMD [ "npm", "start" ]

You can refer to this article for installing and getting familiar with Docker.

Once you have created the Docker image, push it to Docker Hub so that Kubernetes can pick it up.

Install Virtualbox

You can use the Ubuntu Software Center to install VirtualBox. Alternatively, you can install it from the command line. I personally recommend VirtualBox over other hypervisors.

Install minikube and kubectl

You may wonder why I asked you to install VirtualBox. The reason is minikube: at the moment, the Kubernetes-recommended way of testing locally is minikube with VirtualBox. To install minikube and kubectl, please follow the instructions in the official documentation. While doing that, make sure to install minikube first and then a kubectl version that supports it.

Minikube gives you a single-node cluster in which you can create a new Kubernetes deployment. Docker is a dependency when using Kubernetes (rkt can be used too). Once minikube is set up, start it with the minikube start command. Then you can interact with the Kubernetes cluster through kubectl commands. Here I have listed some important kubectl commands.

For the Docker image I created, I used the following command to create a Kubernetes deployment:

kubectl run my-test-app --image=<docker-hub-image> --port=3000

Other useful commands;

kubectl get <resource_type> – get listed information about a resource type; resource types include nodes, deployments, pods, services, etc.

kubectl describe <resource_type> – get descriptive information about a resource type

kubectl describe <resource_type>/ID – get descriptive information about a single resource given by its ID

kubectl logs – print the logs from a container in a pod

kubectl exec – execute a command on a container (e.g. kubectl exec -it POD_NAME -c CONTAINER_NAME bash – to open a bash shell in a container of a pod)


Deployment – a deployment is a configuration which instructs Kubernetes how to create/update instances of an app.

Pod – a pod is a collection of one or more tightly coupled application containers. A pod shares the same IP and port space, and each pod in a Kubernetes cluster has a unique IP.

Service – a service is a logical set of pods defined by YAML/JSON; the pods are selected by a LabelSelector. Service types are ClusterIP, NodePort, LoadBalancer and ExternalName. This abstraction allows pods to die and be replicated while the service keeps matching its set of pods using labels and selectors.
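As an illustration, a minimal NodePort Service manifest for the sample NodeJS deployment might look like the following. The name and the label in the selector are assumptions based on the deployment created earlier; kubectl run may label pods differently, so check with kubectl get pods --show-labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-test-app
spec:
  type: NodePort          # expose the service on a port of each node
  selector:
    run: my-test-app      # match the pods created by "kubectl run my-test-app"
  ports:
    - port: 3000          # service port
      targetPort: 3000    # container port of the NodeJS app
```

Applying it with kubectl apply -f service.yaml is equivalent in spirit to the kubectl expose command below.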


kubectl expose deployment/<deployment_name> --type="NodePort" --port 8080 – create a new service and expose it to external traffic

kubectl label pod POD_NAME app=foo – add a new label to a pod

kubectl delete <resource_type> <id> – delete the resource given by id

kubectl set image deployments/<deployment_id> <container_name>=<image> – set the container image to the given Docker Hub image

kubectl rollout status deployment/<deployment_name> – check the status of a rolling update

kubectl rollout undo deployment/<deployment_name> – undo a rollout update

kubectl scale deployment/<deployment_name> --replicas=4 – scale the deployment to 4 replicas. After scaling, use kubectl get pods -o wide to view the pods' status


The objective of this post is to give you a summarized set of important Docker and Kubernetes commands. I have mostly covered kubectl commands which might help you in the future.

Setting up XDebug with Joomla


Joomla! CMS is a platform for website development used by quite a large number of people today. I started using it a few years back and have contributed in many ways. During development we often come across situations where we need to look into a variable's instance values. In simple situations I just print the variable's value, but in complex situations the help of a debugger is essential. There are several methods to debug Joomla during development, and most of them are mentioned in this reference article [1]. In this article I describe one of those methods in detail.


In my case I had the following setup;

  • Joomla CMS installed
  • PhpStorm on Windows
  • XAMPP, which runs Apache and MySQL

Now it’s time to start setting up.

Configuring XDebug on XAMPP

  • Download the Xdebug library matching your PHP version and OS
  • Put the library in the php/ext folder
  • Add the Xdebug configuration to the end of the php.ini file
  • Add the desired host and port
  • Restart the XAMPP server and check phpinfo to verify the xdebug section

A sample Xdebug configuration for the php.ini file is given below [2]. "zend_extension" is the path to the downloaded library.

zend_extension = "c:\xampp\php\ext\php_xdebug-2.5.5-7.1-vc14.dll"
xdebug.remote_autostart = 1
xdebug.profiler_append = 0
xdebug.profiler_enable = 0
xdebug.profiler_enable_trigger = 0
xdebug.profiler_output_dir = "c:\xampp\tmp"
;xdebug.profiler_output_name = "cachegrind.out.%t-%s"
xdebug.remote_enable = 1
xdebug.remote_handler = "dbgp"
xdebug.remote_host = "localhost"
xdebug.remote_port = 9000
xdebug.trace_output_dir = "c:\xampp\tmp"
; 3600 (1 hour), 36000 = 10h
xdebug.remote_cookie_expire_time = 36000

Configuring Web Browser

  • Install the xdebug-helper extension in the Chrome browser [3] (there are similar extensions for other browsers such as Firefox)
  • Go to the extension options and change the IDE key to PHPSTORM
  • Once you are logged into Joomla, switch the plugin to Debug mode

Configuring Joomla to debug

Go to Global Configuration -> System and enable Debug System

Configuring PhpStorm

There are several ways of configuring remote debugging from the IDE side. Here I discuss setting it up as a remote debugger.

  • Go to debug configuration
  • Click + sign and click remote debugger
  • Click servers and add server’s debug configuration (xdebug host and port)
  • Add the correct IDE key and start debugger
  • Put some breakpoints and perform some actions in Joomla!


There are several other ways of debugging Joomla, as mentioned in [1]. Depending on the operating system, IDE and other factors, you may need to use different options. I believe this article will help you quickly set up debugging for Joomla! development.


[1] How to debug your code –

[2] Edit php.ini for XDebug –

[3] xdebug-helper –


Simple Process Using Activiti


Recently I came across the interesting area of business processes and workflows. Going beyond simple workflows, this is a vast area to explore. There exist a number of workflow engines which provide generic functionality for developing workflows for any use-case. In this article, I discuss my first experimental work with Activiti [1]. For this experiment I used version 5.18.0 (the latest at the moment is 6.0.0).


First you need to install the Java JDK and Tomcat on your machine. Then download Activiti from the official website [1] or GitHub [2]. Copy the activiti-explorer.war and activiti-rest.war files into the webapps folder of Tomcat. Finally, start Tomcat and go to http://localhost:8080/activiti-explorer. To log in, use the demo user Kermit (username: kermit, password: kermit).


The use-case I'm going to discuss is a software feature development process at a small company. For this example, let's assume there are 3 roles in the company: developers, who write the code; tech leads, who review the code; and QAs, who test the code. The team members are as follows:

Developers: Mike, Jack

Tech-leads: Chris, Brian

QAs: Sandy, Alice

To cater to this requirement, go to "Manage" and use the "Groups" and "Users" tabs to set up user accounts and assign users to groups.


Let's develop this scenario with Activiti. Go to the "Processes" tab, then "Model Workspace", and start creating a new model. Activiti Explorer provides a convenient web UI for developing models: you can drag and drop elements from the panel to create the desired workflow. An explanation of each element can be found in the User Guide [3].

The final workflow looks as follows:


In the process I have used 3 User Tasks;

Develop features: assigned to “developers” group

Code Review: assigned to “tech-leads” group

Quality Checking: assigned to “qas” group

In addition to the assignments, I have added form properties to Code Review and Quality Checking to capture the reviewer's opinion.

Once you save the model, you can deploy it from Activiti Explorer, and "Deployed Process Definitions" will show the deployed model. You can start a process by clicking the "Start process" button at the top right.


Now the process runs as described in the workflow model. You can open a new web browser, log in as one of the developers and claim the task. Once a developer completes the task, the tech leads and QAs can complete their tasks respectively. The figure below depicts a developer's view.

jack's view

So, I have discussed a very brief amount of things that you can do with Activiti, but there’s much more.

Activiti access with Java:
(You need to sync the activiti-explorer and activiti-rest web apps to view results in activiti-explorer)

Explore history:

Eclipse designer:

Activiti REST API:


This post is meant to give you a start with the Activiti workflow engine. You can associate Activiti with your application and design flexible workflows according to your needs.





WebSocket with Spring Boot


In this post I am going to talk briefly about developing a WebSocket-based application using the Spring Boot framework. WebSocket [1] is a full-duplex protocol which allows bi-directional communication. Widely used protocols such as HTTP are uni-directional and need long-polling mechanisms to emulate bi-directional behavior, whereas WebSocket allows the server to send requests to the client side. Unlike HTTP, however, WebSocket does not define a rich application-level protocol: it sits as a thin layer on top of TCP and lets application developers come up with their own high-level messaging protocol design.


Simple Text-Oriented Messaging Protocol (STOMP) is the sub-protocol selected by the Spring framework for its WebSocket support. STOMP uses a message broker to provide bi-directional communication.
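To get a feel for how thin STOMP is, here is roughly what a raw SEND frame carrying a greeting to the sample app below would look like on the wire (the destination matches this post's controller mapping; "Bob" is just an example name, and ^@ denotes the NUL byte that terminates every STOMP frame):

```
SEND
destination:/app/hello
content-type:application/json

{"name":"Bob"}^@
```

A frame is just a command line, header lines, a blank line, and a body, which is why STOMP is easy to layer over WebSocket.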

Server-side Code

Server-side code of a websocket server is as follows:

import org.springframework.messaging.handler.annotation.MessageMapping;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.stereotype.Controller;

@Controller
public class GreetingController {

    @MessageMapping("/hello")
    @SendTo("/topic/greetings")
    public Greeting greeting(HelloMessage helloMessage) throws Exception {
        return new Greeting("Hello, " + helloMessage.getName() + "!");
    }
}
In the above code, the @MessageMapping annotation maps messages whose destination is "/hello", and @SendTo specifies the destination of the result. Here, HelloMessage and Greeting are bean classes which can be found in [2].

The server-side configuration is as follows:

@Configuration
@EnableWebSocketMessageBroker
public class WebsocketConfigurer extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.enableSimpleBroker("/topic");
        registry.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry stompEndpointRegistry) {
        stompEndpointRegistry.addEndpoint("/gs-guide-websocket").withSockJS();
    }
}

The WebsocketConfigurer class is annotated as the configuration class of the application. In configureMessageBroker it enables a simple message broker and registers "/topic" as the topic prefix it handles; "/app" identifies messages which need to be routed to the controller.

In the second method, "/gs-guide-websocket" is registered as the endpoint to which clients can connect. So let's move on to the client-side code. It contains two files: a static HTML file and a JS file.

<!DOCTYPE html>
<html>
<head>
    <title>Hello WebSocket</title>
    <link href="/webjars/bootstrap/css/bootstrap.min.css" rel="stylesheet">
    <!-- <link href="/main.css" rel="stylesheet"> -->
    <script src="/webjars/jquery/jquery.min.js"></script>
    <script src="/webjars/sockjs-client/sockjs.min.js"></script>
    <script src="/webjars/stomp-websocket/stomp.min.js"></script>
    <script src="/app.js"></script>
</head>
<body>
<noscript><h2 style="color: #ff0000">Seems your browser doesn't support Javascript! Websocket relies on Javascript being
    enabled. Please enable Javascript and reload this page!</h2></noscript>
<div id="main-content" class="container">
    <div class="row">
        <div class="col-md-6">
            <form class="form-inline">
                <div class="form-group">
                    <label for="connect">WebSocket connection:</label>
                    <button id="connect" class="btn btn-default" type="submit">Connect</button>
                    <button id="disconnect" class="btn btn-default" type="submit" disabled="disabled">Disconnect</button>
                </div>
            </form>
        </div>
        <div class="col-md-6">
            <form class="form-inline">
                <div class="form-group">
                    <label for="name">What is your name?</label>
                    <input type="text" id="name" class="form-control" placeholder="Your name here...">
                </div>
                <button id="send" class="btn btn-default" type="submit">Send</button>
            </form>
        </div>
    </div>
    <div class="row">
        <div class="col-md-12">
            <table id="conversation" class="table table-striped">
                <tbody id="greetings"></tbody>
            </table>
        </div>
    </div>
</div>
</body>
</html>

var stompClient = null;

function setConnected(connected) {
    $("#connect").prop("disabled", connected);
    $("#disconnect").prop("disabled", !connected);
    if (connected) {
        $("#conversation").show();
    }
    else {
        $("#conversation").hide();
    }
    $("#greetings").html("");
}

function connect() {
    var socket = new SockJS('/gs-guide-websocket');
    stompClient = Stomp.over(socket);
    stompClient.connect({}, function (frame) {
        setConnected(true);
        console.log('Connected: ' + frame);
        stompClient.subscribe('/topic/greetings', function (greeting) {
            showGreeting(JSON.parse(greeting.body).content);
        });
    });
}

function disconnect() {
    if (stompClient != null) {
        stompClient.disconnect();
    }
    setConnected(false);
}

function sendName() {
    stompClient.send("/app/hello", {}, JSON.stringify({'name': $("#name").val()}));
}

function showGreeting(message) {
    $("#greetings").append("<tr><td>" + message + "</td></tr>");
}

$(function () {
    $("form").on('submit', function (e) {
        e.preventDefault();
    });
    $( "#connect" ).click(function() { connect(); });
    $( "#disconnect" ).click(function() { disconnect(); });
    $( "#send" ).click(function() { sendName(); });
});

Finally, the pom file of the project needs the spring-boot-starter-websocket dependency, along with the webjars used by the client (sockjs-client, stomp-websocket, bootstrap and jquery). The complete pom file can be found in [2].

[1] WebSocket protocol RFC –

[2] Using WebSocket to build an interactive web application –

[3] WebSocket Support –