
Microservices with Netflix stack and Spring Boot


In my previous post on microservices [1], I discussed how a microservice can be implemented using Spring Boot. In this post I am going to discuss how a microservice implementation can be built using the Netflix OSS stack. Netflix can be considered one of the early adopters of this trend, and they have released several projects that they use in their own implementation. Some of those are Eureka, Zuul, and Feign, which I am going to use for this implementation.

Use case

For this implementation, I have chosen a simple use case where a payment-service uses a customer-info-service to retrieve customer information. Apart from that, the customer-info-service can be invoked directly to retrieve data. Those two services are exposed through a gateway service. There is also a registry service which manages the registrations of those services.

Plan for the implementation


For this implementation I have used Spring Boot version 1.5.2.RELEASE. Before starting, make sure you have set up Java and configured it with your favorite IDE (in my case it's IntelliJ). I have used Maven as the build tool for this project. So let's move to the implementation of the use case discussed above.

Implementing the Registry Service

As the first step, create a Maven project with your IDE and create a sub-module called "eureka-service" in it. In the src/java folder, create a package as you prefer (I used "com.buddhima.example.service.eureka"). Inside that package, create a new Java class called Application and add the following code.

package com.buddhima.example.service.eureka;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class Application {

    public static void main(String[] args) {
        new SpringApplication(Application.class).run(args);
    }
}

The @EnableEurekaServer annotation declares that this microservice is going to be the registry service. It requires the Eureka server starter dependency in your pom file (a typical declaration is shown below). Eureka is a project in the Netflix stack which easily transforms this microservice into the registry for the other services. Other microservices can register with it as clients, and the registry service helps resolve the registered services' locations when requested. You will find more information on the Eureka project in its wiki page [2].
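A typical declaration of the Eureka server starter, for the Spring Cloud release trains that match Spring Boot 1.5.x, looks like this (the coordinates shown are the standard ones rather than copied from the original pom):

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-eureka-server</artifactId>
</dependency>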

Implementing CustomerInfo service

This will be a typical microservice, which provides information about a customer (hardcoded for this sample). Without further explanation, following are the relevant Application.java, CustomerInfoController.java, application.yml and pom.xml files.

package com.buddhima.example.service.customerinfo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

@SpringBootApplication
@EnableEurekaClient
public class Application {

    public static void main(String[] args) {
        new SpringApplication(Application.class).run(args);
    }
}

package com.buddhima.example.service.customerinfo;

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping(value = "/customerinfo")
public class CustomerInfoController {

    @RequestMapping(method = RequestMethod.GET, value = "/name")
    public String getName() {
        System.out.println("Name requested");
        return "foo";
    }

    @RequestMapping(method = RequestMethod.GET, value = "/age")
    public int getAge() {
        System.out.println("Age requested");
        return 28;
    }
}

server:
  port: 8000

spring:
  application:
    name: customerinfo-service

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
  instance:
    preferIpAddress: true

customer-info service pom file :
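The pom content is not reproduced here. A minimal dependencies section for the customer-info service would look roughly as follows, assuming the standard web starter and the Eureka client starter of that Spring Cloud generation:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-eureka</artifactId>
    </dependency>
</dependencies>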

A special thing to note in the above code is the @EnableEurekaClient annotation, which enables this microservice to register itself with the Eureka registry.

Implementing Payment service with Feign client

The payment service is almost similar to the customer-info service, but it has a Feign client which calls the customer-info service. For that purpose, the CustomerInfoClient and PaymentController classes are as follows.

package com.buddhima.example.service.payment.clients;

import org.springframework.cloud.netflix.feign.FeignClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@FeignClient(value = "customerinfo-service", path = "/customerinfo")
public interface CustomerInfoClient {

    @RequestMapping(method = RequestMethod.GET, value = "/name")
    String getName();

    @RequestMapping(method = RequestMethod.GET, value = "/age")
    int getAge();
}

package com.buddhima.example.service.payment;

import com.buddhima.example.service.payment.clients.CustomerInfoClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping(value = "/payment")
public class PaymentController {

    @Autowired
    private CustomerInfoClient customerInfoClient;

    @RequestMapping(value = "/name", method = RequestMethod.GET)
    public String getName() {
        return customerInfoClient.getName();
    }

    @RequestMapping(value = "/age", method = RequestMethod.GET)
    public int getAge() {
        return customerInfoClient.getAge();
    }
}

To activate the Feign client, you need to add the @EnableFeignClients annotation to the Application class and add the Feign starter dependency to the pom file (a sketch of the Application class is shown below).
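A sketch of what that Application class could look like is given below; the package name is assumed to follow the same convention as the other services, and the Feign annotations are imported from org.springframework.cloud.netflix.feign as in Spring Cloud releases compatible with Boot 1.5.x (the matching pom dependency is spring-cloud-starter-feign):

package com.buddhima.example.service.payment;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
import org.springframework.cloud.netflix.feign.EnableFeignClients;

@SpringBootApplication
@EnableEurekaClient
@EnableFeignClients   // scans for @FeignClient interfaces such as CustomerInfoClient
public class Application {

    public static void main(String[] args) {
        new SpringApplication(Application.class).run(args);
    }
}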

Implementing Edge Service with Zuul

The purpose of the edge service is very similar to a software load-balancer. The Zuul project [3] focuses on dynamic routing of messages. It uses the Eureka service to resolve the addresses for incoming requests and routes them to the proper microservice. So microservices at the backend can be scaled up or down without changing a single configuration in the rest of the environment.
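A minimal sketch of the edge service's Application class would look like the following; the package name is an assumption, and @EnableZuulProxy is what turns the service into a routing gateway:

package com.buddhima.example.service.edge;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@SpringBootApplication
@EnableZuulProxy   // enables dynamic routing based on the zuul.routes configuration below
public class Application {

    public static void main(String[] args) {
        new SpringApplication(Application.class).run(args);
    }
}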

server:
  port: 6000

spring:
  application:
    name: edge-service

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
  instance:
    preferIpAddress: true

zuul:
  routes:
    payment-service:
      path: /payment-service/**
      serviceId: payment-service
      stripPrefix: true
    customerinfo-service:
      path: /customerinfo-service/**
      serviceId: customerinfo-service
      stripPrefix: true

OK, that's it! I have quickly gone through the important points. I have added the complete source code to GitHub [4] for you to refer to.

What you can do more

Well, this is not the end. There are many paths you can take from here. I have highlighted a few of those in the following list.

  1. Config-Service : creating a centralized place to manage all the configurations belonging to the microservices
  2. Circuit Breaker implementation : the circuit breaker pattern avoids propagating backend failures to clients. Another project by Netflix called Hystrix [5] is very popular for this purpose. You may use the Turbine project to aggregate information from multiple microservices into a single dashboard.
  3. Docker and Kubernetes : microservice deployments can be packaged with Docker and orchestrated with Kubernetes to make them work in a fault-tolerant manner.
  4. Analytics using the ELK stack : you may have heard of the ELK stack [6], which provides various forms of support for analyzing data.

Where you can learn more

While doing this experiment, I came across a number of resources written as tutorials for microservices. Some interesting ones are listed below.

  1. Fernando Barbeiro Campos’s Blog :
  2. Quimten De Swaef’s Blog :
  3. Piotr’s TechBlog :
  4. Piggy Metrics, a POC app :



[1] Microservices with Spring Boot :

[2] Eureka project :

[3] Zuul project :

[4] Github project :

[5] Hystrix :

[6] ELK stack :


Simple Fault Statistic Dashboard for WSO2 API Manager with ELK


WSO2 API Manager (WSO2 AM) [1] is a product in the WSO2 stack which is fully open source and provides a complete API management solution. It covers API creation, publishing and managing all aspects of an API and its lifecycle, and is ready for massively scalable deployments. Several products, including WSO2 AM, provide analytics support out of the box via the WSO2 analytics components.

On the other hand, the ELK stack is a very powerful and popular toolset in the analytics domain. Especially Elasticsearch, the heart of the ELK stack, provides a convenient way to integrate with other systems with the help of the other components in the stack. One key thing to remember when using the ELK stack is that you need to stick with the same version across all products [2]. For this article I am using version 6.0.1 of the ELK stack.


Use case

I have done this as my first experiment with the ELK stack. For this experiment I used the following products from the ELK stack.

  1. Filebeat
  2. Logstash
  3. Elasticsearch
  4. Kibana

WSO2 API Manager logs almost all errors in its main log (a.k.a. the carbon log). Therefore this experiment is to identify faulty situations in the carbon log and use the ELK stack to analyse them. I decided to make it work with the following frequent errors.

  1. Deployment failures of APIs : when there is an incorrect syntax in the synapse configurations
  2. Message building failures : when the incoming message payload does not match the given content-type
  3. Backend connection failures : when the API fails to establish a connection with the given backend service
  4. Authentication failures : when the access token is missing or invalid when calling the API

In the upcoming sections, I am going to show how to use each component in the ELK stack to implement this use case.



The APIM server keeps appending to the carbon log. On the same server, Filebeat is set up to read the carbon log. Filebeat pushes the logs to Logstash for filtering. After filtering, Logstash pushes the logs to Elasticsearch for indexing. For visualization, Kibana is configured to retrieve data from Elasticsearch. That is the plan for how each component in the ELK stack fits together to provide a complete solution.

In the next sections, I will describe the relevant configuration for each component and what it means, so that you can customize it according to your environment (here I am using the same host for everything). I assume you have already set up WSO2 APIM and have some understanding of the carbon logs.

Setting up filebeat

The responsibility of Filebeat is to read logs from the file and start the pipeline. You can download Filebeat from the website [3], either as a standalone archive or set up as a service. Depending on your setup, you need to place the configuration file accordingly. Following are the segments of the filebeat.yml file that need to be modified. You will find a template file with the name "filebeat.reference.yml".


filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /home/buddhima/telco/expreiments/elk/latest/wso2am-2.0.0/repository/logs/wso2carbon.log

. . .

#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

. . .

output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

In the above configuration, the log-reading prospector is enabled and points to the carbon log file of WSO2 APIM. By default, the Elasticsearch output is enabled, but according to the plan we need to send the logs through Logstash for filtering. Therefore I commented out output.elasticsearch and the hosts line below it, and uncommented output.logstash and the hosts line below that. I left the hostnames and ports at their defaults, as I have done everything on the same machine. Once you have completed this, you can start Filebeat with the following command (for the standalone version). It will pick up filebeat.yml as the default configuration (which I have modified).

./filebeat run

Setting up logstash

Logstash plays a major role in this setup: it is capable of filtering logs according to given criteria. You can download Logstash from the website [4]. Logstash uses an input -> filter -> output pipeline to process the log entries coming from Filebeat. I created a logstash-beat.conf file to hold the configuration. Following is the Logstash configuration to detect the above-mentioned failures.
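A minimal Logstash pipeline along the lines described in the next paragraphs would look roughly like this; the grok pattern and the log markers used to classify the four error types are illustrative assumptions rather than the original values:

input {
  beats {
    port => 5044
  }
}

filter {
  # extract the log level (ERROR/WARN/...) from the carbon log line
  grok {
    match => { "message" => "%{LOGLEVEL:level}" }
  }

  if [level] == "ERROR" {
    if "Error while deploying" in [message] {
      mutate { add_field => { "error_type" => "api_deployment_failure" } }
    } else if "Error while building" in [message] {
      mutate { add_field => { "error_type" => "message_build_failure" } }
    } else {
      drop { }    # ignore other errors
    }
  } else if [level] == "WARN" {
    if "Connection refused" in [message] {
      mutate { add_field => { "error_type" => "backend_connection_failure" } }
    } else if "Access failure" in [message] {
      mutate { add_field => { "error_type" => "authentication_failure" } }
    } else {
      drop { }    # ignore other warnings
    }
  } else {
    drop { }      # forward only faulty logs
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}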



In the above config I have configured Filebeat as the input and Elasticsearch as the output. The filter section specifies how input logs should be handled before being sent to the output. If you observe carbon logs closely, you will find that deployment failures and message building failures are recorded as ERROR logs, whereas backend connection failures and authentication failures are recorded as WARN logs.

Within the filter, logs are first separated according to log level and then filtered into subcategories. The grok filter is used to map some fields of the log message, the mutate filter is used to add a new field "error_type" to the logs sent by Filebeat, and the drop filter is used to avoid forwarding unnecessary logs.

To start logstash with the configuration, execute the following command:

./logstash -f ../logstash-beat.conf

Setting up elasticsearch

Now you are getting closer to the final part. Elasticsearch is the most crucial component of this setup; however, you do not need to configure it for this experiment. Just download it from the website [5], go to its bin directory and execute the following to start it.

./elasticsearch
Setting up kibana

Kibana is a powerful visualization tool in the ELK stack. You can point Kibana to different indexes in Elasticsearch and visualize the data. Kibana refreshes its visualizations periodically, with a minimum interval of 5 seconds. Similar to Elasticsearch, you do not need to make any configuration changes to Kibana for this experiment. Just download it from the website [6], go to its bin directory and execute the following command to start it:

./kibana
Once it starts, you can access the web UI at http://localhost:5601/ in your browser. In the web UI, select logstash-* as the index. Then you can go to the visualization section, add the graphs you like, and aggregate them into a dashboard. I'm going to stop here and let you compose a dashboard as you wish. Remember that Elasticsearch receives only the faulty logs, according to the Logstash configuration. Following is a sample dashboard I have created based on the 4 types of errors in WSO2 API Manager.

Configuring refresh time in Kibana
Kibana dashboard view


This was my first experiment with the ELK stack, and I have shown how to compose a fault-analysis dashboard with the ELK stack for WSO2 API Manager (here I used WSO2 APIM 2.0.0). I have discussed the reason for using each component in the ELK stack and the configuration relevant to each of them. You will find the configuration files at the following link [7]. I hope this will be a starting point that helps you build a useful monitoring dashboard with the ELK stack.


[1] WSO2 API Manager Documentation :

[2] ELK product compatibility :

[3] Filebeat website :

[4] Logstash website :

[5] Elasticsearch :

[6] Kibana :

[7] Resources :

Inspecting Solr Index in WSO2 API Manager


The Apache Solr project [1] lets you run a full-featured search server, and you can also integrate Solr with your own project to make searching faster. In WSO2 API Manager, Solr is used to make searching faster in the Store and Publisher. The Solr index keeps the frequently used metadata of APIs; to retrieve complete information about an API, API Manager then uses its database. This mechanism makes searching faster and puts less burden on the database.


However, in some situations things may go wrong. We have seen several cases where the Solr index does not tally with the information in the database. Because of that, when displaying complete information about an API, you may see inconsistent information. In such situations you may want to inspect the Solr index of API Manager.

Setting Up Solr Server

Setting up the Solr server is quite easy, as it's a matter of downloading the binary from the project page [1]. The important thing here is to make sure you download the proper version. WSO2 API Manager 2.0.0 uses Solr 5.2.1. I figured this out by going through the API Manager release tag pom, identifying the registry version, and searching the registry pom file.

Once you have downloaded the binary package, extract it. You can start the Solr server by going to the solr-5.2.1/bin directory and executing "./solr start". The Solr server will start as a background process. Then access its admin UI at "http://localhost:8983/solr" in your browser.

Inspecting WSO2 API Manager Index

Before doing so, you must stop both WSO2 API Manager and the Solr server. To stop the Solr server, execute the command "./solr stop" inside the bin directory. Then you need to copy the Solr indexing configuration and index from API Manager.

  • To copy the configuration, go to "APIM_HOME/repository/conf/solr" and copy the "registry-indexing" folder to the "solr-5.2.1/server/solr" folder.
  • To copy the indexed data, go to "APIM_HOME" and copy the "solr" folder to the same folder in which "solr-5.2.1" resides. This is done to comply with the "dataDir" value in the "solr-5.2.1/server/solr/registry-indexing/" file.

Now start the Solr server and go to the admin UI. You should see a drop-down on the left pane with a "registry-indexing" menu item in it. Select "registry-indexing", and you will be able to query the indexed data in the Query section. To query the Solr index you need to use a specific query language, which is not actually difficult to understand. I'm not going to discuss the query language in depth here; it's up to you to refer to [2] and learn it. You can try out queries directly from the admin UI.
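For example, in the Query section you could try a combination like the one below, which mirrors the filter used by the Java client later in this post (the field names are taken from that client; the query itself is just an illustration):

q  = *:*
fq = mediaType_s:application/vnd.wso2-api+xml
fl = overview_name_s, overview_status_s, overview_version_s, overview_context_s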

registry-indexing in Solr admin UI

Writing a Java client to query information

In some cases, you may need to write a client which can talk to a Solr server and retrieve results. So here I am giving an example Java client which you can use to retrieve results from a Solr server [3]. I am not going to explain the code in detail, because I believe it's self-explanatory.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
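The dependency links from the original pom are not preserved here. A minimal dependencies section for this client would look roughly as follows, with the SolrJ version assumed to match the Solr server (5.2.1):

    <dependencies>
        <!-- SolrJ client library used by the code below -->
        <dependency>
            <groupId>org.apache.solr</groupId>
            <artifactId>solr-solrj</artifactId>
            <version>5.2.1</version>
        </dependency>
    </dependencies>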
package com.solr.testing;

import java.io.IOException;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocumentList;

/**
 * Created by buddhima.
 */
public class SolrTesting {

    public static void main(String[] args) throws IOException, SolrServerException {
        // Default Solr port: 8983, and APIM using 'registry-indexing'
        SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/registry-indexing").build();

        SolrQuery query = new SolrQuery();

        // Match all documents, then narrow down with the filter queries below
        query.setQuery("*:*");

        // Fields used for filtering as a list of key:value pairs
        query.addFilterQuery("allowedRoles:internal/everyone", "mediaType_s:application/vnd.wso2-api+xml");

        // Fields to show in the result
        query.setFields("overview_name_s", "overview_status_s", "updater_s", "overview_version_s", "overview_context_s");

        // Limit the query search space
        query.setStart(0);
        query.setRows(500);

        // Execute the query and print results
        QueryResponse response = client.query(query);
        SolrDocumentList results = response.getResults();
        for (int i = 0; i < results.size(); ++i) {
            System.out.println(results.get(i));
        }
    }
}

In addition, you can refer to [4] and [5] for further learning on Solr query syntax.


In this post I have discussed the use of Solr in WSO2 API Manager and how to inspect the existing Solr index. In addition, I have shown how to construct a Java client which can talk to a Solr server. I hope the above explanation will help you to solve issues with Solr indexing.

Special thanks go to WSO2 support for providing guidance.


[1] Apache Solr project :

[2] Solr query syntax :

[3] Using SolrJ :

[4] Solr Query Syntax :

[5] Solr df and qf explanation :


JSON Split Aggregate with WSO2 ESB


Split-Aggregate (Scatter-Gather) is a common messaging pattern [1] used in the enterprise world. In the split-aggregate pattern, the client's request is sent to multiple endpoints simultaneously. The responses from those endpoints are aggregated and sent back to the client as a single response. You will find plenty of use cases where this pattern comes into play when you try to integrate enterprise systems.


WSO2 ESB (currently a part of WSO2 EI) is a well-known middleware product used to integrate enterprise systems. It is also known for its comprehensive middleware stack, which comprises all the functionality you need to integrate enterprise systems. WSO2 ESB provides an in-built set of mediators for implementing this commonly used Split-Aggregate pattern: the Iterate Mediator, Clone Mediator and Aggregate Mediator. You will find a sample use case of those mediators in this documentation [2].

Existing Problem

The existing mediators provide good support for Split-Aggregate scenarios when you are working with XML payloads. However, the current trend is more towards using JSON payloads for message exchange. Although the existing mediators can still be used with JSON payloads, they do not provide convenient support: you need to map your JSON payload to an XML payload, and this conversion most of the time adds an extra burden to the mediation logic.

In this post I am discussing two mediators which are optimized for JSON payload handling in Split-Aggregate scenarios: the JSON Iterate Mediator and the JSON Aggregate Mediator. These mediators handle JSON payloads natively and do not convert them to XML. Please note that these mediators do not come with WSO2 ESB out of the box. You can find the relevant documentation at this location [3].

Configuring Mediators

To use these mediators, you need to build the source code here [3] and get the resultant jar files. Put the Json (Iterate/Aggregate) Mediator-1.0.0.jar files into the ESB_HOME/repository/components/dropins folder. Along with those, add json-path-2.1.0.jar [4], json-smart-2.2.jar [5] and accessors-smart-1.1.jar [9] to the same location. Then start WSO2 ESB (sh bin/wso2server.sh).

Sample Scenario

Once you add those artifacts to the ESB, you can refer to the two new mediators just like the in-built mediators. The respective XML tags are <jsonIterate> and <jsonAggregate>. In this post I'm showing a sample configuration using those mediators and describing it briefly. The same scenario is discussed in a more descriptive manner here [3].

<api xmlns="http://ws.apache.org/ns/synapse" name="sampleApi" context="/sample">
   <resource methods="POST" uri-template="/*">
      <inSequence>
         <log level="full"/>
         <jsonIterate continueParent="false" preservePayload="true" expression="$.messages" attachPath="$.messages">
            <target>
               <sequence>
                  <log level="full"/>
                  <header name="To" value=""/>
                  <send/>
               </sequence>
            </target>
         </jsonIterate>
      </inSequence>
      <outSequence>
         <log level="full"/>
         <jsonAggregate>
            <completeCondition>
               <messageCount min="-1" max="-1"/>
            </completeCondition>
            <onComplete expression="$.message" enclosingElementProperty="responses">
               <log level="full"/>
               <send/>
            </onComplete>
         </jsonAggregate>
      </outSequence>
   </resource>
</api>


The above example shows how to do split-aggregate on a message received by an API. You can send the following request payload to the API created by the ESB at http://localhost:8280/sample

curl -X POST \
  http://localhost:8280/sample \
  -H 'content-type: application/json' \
  -d '{"originator":"my-company","messages":[{"country":"Sri Lanka","code":"94"},{"country":"America","code":"01"},{"country":"Australia","code":"61"}]}'

The expression in the jsonIterate mediator takes responsibility for deciding where to split the message payload. It should be written as a JSONPath [6]. Within the sequence inside the jsonIterate mediator, you will find the split message payloads. They are sent to the backend URL given in the config. You can refer to additional configuration options for the JSON Iterate mediator here [7].
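For instance, with the request payload shown above, the "$.messages" expression selects the three-element array, so each iteration carries one of the following sub-payloads to the backend (standard JSONPath splitting, shown here only for illustration):

{"country":"Sri Lanka","code":"94"}
{"country":"America","code":"01"}
{"country":"Australia","code":"61"}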

Each response comes into the jsonAggregate mediator. In the expression of the JSON Aggregate mediator, you specify which part of the response should be taken for aggregation. This expression is again a JSONPath expression. Once the completion condition is satisfied, the aggregated message is passed to the onComplete sequence, where you can do further processing. If you are interested, the documentation [8] gives a complete guide to configuring the JSON Aggregate mediator.


Split-Aggregate is a very common message exchange pattern in the enterprise world, and JSON is becoming a more popular message format across the enterprise world too. However, WSO2 ESB lacks convenient support for JSON message exchange in split-aggregate scenarios. To cater to that requirement I have built two custom mediators which make life easier. They can be configured to do split-aggregate with JSON payloads without converting them to XML (native JSON support).


[1] EIP patterns reference :

[2] WSO2 ESB Doc :

[3] GitHub repository :

[4] json-path-2.1.0 :

[5] json-smart-2.2.1 :

[6] JSON Path documentation :

[7] JSON Iterate Mediator documentation :

[8] JSON Aggregate Mediator documentation :

[9] accessors-smart-1.1.jar :


JSON Enrich Mediator for WSO2 ESB


JSON support for WSO2 ESB [1] was introduced some time back, but only a small number of mediators support manipulating JSON payloads. In this article I am going to introduce a new mediator called JsonEnrichMediator [2], which works quite similarly to the existing Enrich mediator [3], but targets JSON payloads. The specialty of this mediator is that, since it works with the native JSON payload, the JSON payload will not be converted to an XML representation. Hence there won't be any data loss due to transformations.

Please note that this is a custom mediator I have created, and it will not ship with the WSO2 ESB pack.

Configuring Mediator

  1. Clone the GitHub repository:
  2. Build the repository using Maven (mvn clean install)
  3. Copy the built artifact in the target folder to ESB_HOME/repository/components/dropins
  4. Download json-path-2.1.0.jar [5] and json-smart-2.2.jar [6] and put them in the same folder (dropins)
  5. Start WSO2 ESB (sh bin/wso2server.sh)

Sample Scenario

For this article I am using a sample scenario which moves a JSON property within the payload. For that you need to add the following API to WSO2 ESB.

<api xmlns="http://ws.apache.org/ns/synapse" name="sampleApi" context="/sample">
   <resource methods="POST" uri-template="/*">
      <inSequence>
         <log level="full"/>
         <jsonEnrich>
            <source type="custom" clone="false" JSONPath="$"/>
            <target type="custom" action="put" JSONPath="$" property="country"/>
         </jsonEnrich>
         <!-- return the enriched payload to the client -->
         <respond/>
      </inSequence>
   </resource>
</api>

The above configuration takes the value pointed to by the source JSONPath and moves it to the main body. You can find further details about JSONPath at [4].

Once the API is deployed, you need to send the following message to the ESB.

curl -H "Content-Type: application/json" \
-X POST -d '{
"country": "Sri Lanka",
"language" : "Sinhala"
}' http://localhost:8280/sample

The output from the ESB should look like below

{
  "me": {
    "language": "Sinhala"
  },
  "country": "Sri Lanka"
}


I have shown a simple use case of the JSON Enrich Mediator. You can see the comprehensive documentation at the code repository [2].


[1] WSO2 ESB JSON support :

[2] Code Repository for JSON Enrich Mediator :

[3] WSO2 ESB Enrich Mediator :

[4] JSON Path documentation :

[5] json-path-2.1.0 :

[6] json-smart-2.2.1 :


WSO2 ESB Endpoint Error Handling


WSO2 ESB can be used as an intermediary component to connect different systems. When connecting those systems, their availability is a common concern, so the ESB has to handle such undesirable situations carefully and take relevant actions. To cater to that requirement, the outbound endpoints of WSO2 ESB can be configured appropriately. In this article I discuss two common ways of configuring endpoints.

Two common approaches to configure endpoints are;

  1. Configure with just a timeout (without suspending endpoint)
  2. Configure with a suspend state

Configure with just a timeout

This would be suitable if endpoint failures are not very frequent.

Sample Configuration:

<endpoint name="SimpleTimeoutEP">
    <address uri="http://localhost:9000/StockquoteService">


In this case we only focus on the timeout of the endpoint. The endpoint will stay Active forever. If a response is not received within the duration, the responseAction is triggered.

duration – in milliseconds

responseAction – when a response arrives for a timed-out request, one of the following actions is triggered.

  • fault – calls the associated fault sequence
  • discard – discards the response
  • none – takes no specific action on the response (default)

The rest of the configuration prevents the endpoint from going into the suspend state.

If you specify responseAction as "fault", you can define a customized way of informing the client about the failure in the fault-handling sequence, or store the message and retry later.
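For example, a minimal fault-handling sequence that reports the failure back to the client could look like the following; the sequence name and fault reason are illustrative, not taken from the original article:

<sequence name="fault">
    <makefault version="soap11">
        <code xmlns:soap11Env="http://schemas.xmlsoap.org/soap/envelope/" value="soap11Env:Server"/>
        <reason value="Backend endpoint did not respond in time"/>
    </makefault>
    <send/>
</sequence>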

Configure with a suspend state

This approach is useful when connection failures happen often. By suspending the endpoint, the ESB can save resources instead of unnecessarily waiting for responses.

In this case the endpoint goes through state transitions. The theory behind this behavior is the circuit-breaker pattern. Following are the three states:

  1. Active – the endpoint sends all requests to the backend service
  2. Timeout – the endpoint starts counting failures
  3. Suspend – the endpoint limits sending requests to the backend service

Sample Configuration:

<endpoint name="Suspending_EP">
    <address uri="http://localhost:9000/StockquoteServicet">
        <errorCodes>101504, 101505</errorCodes>
        <errorCodes>101500, 101501, 101506, 101507, 101508</errorCodes>


In the above configuration:

If the endpoint receives error codes 101504 or 101505, it is moved from the active state to the timeout state.

When the endpoint is in the timeout state, it makes 3 retry attempts with a 1 millisecond delay between them.

If all those retry attempts fail, the endpoint moves to the suspend state. If a retry succeeds, the endpoint moves back to the active state.

If an active endpoint receives error codes 101500, 101501, 101506, 101507 or 101508, it moves directly to the suspend state.

After the endpoint moves to the suspend state (by either route), it waits for initialDuration before attempting any further requests. Thereafter it determines the time period between attempts according to the following equation.

Min(current suspension duration * progressionFactor, maximumDuration)

In the equation, "current suspension duration" gets updated on each reattempt.

Once the endpoint succeeds in getting a response to a request, it goes back to the active state.

If the endpoint receives any other error code (e.g. 101503), it does not make any state transition and remains in the active state.
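For example, with the illustrative values used in the configuration above (initialDuration = 1000 ms, progressionFactor = 2, maximumDuration = 60000 ms), the gaps between reattempts would grow as 1000 ms, 2000 ms, 4000 ms, 8000 ms and so on, doubling each time until they are capped at 60000 ms.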


In this article I have shown two basic configurations that are useful when configuring endpoints of WSO2 ESB. You can refer to the WSO2 ESB documentation for implementing more complex patterns with endpoints.


WSO2 ESB Documentation:

Timeout and Circuit Breaker Pattern in WSO2 Way:

Endpoint Error Codes:

Endpoint Error Handling:


Reliable Messaging with WSO2 ESB


Web-Service Reliable Messaging (WS-ReliableMessaging) is a standard which describes a protocol for delivering messages reliably between distributed applications. Message failures due to software, system or network failures can be overcome through this protocol. The protocol is transport-independent, so messages can be exchanged between heterogeneous systems. For further information, please go through the WS-RM specification [1], which describes this topic completely. For WSO2 ESB, WS-RM is not a novel concept, as it has been there in previous releases. But with the new release, WSO2 ESB 4.9.0, WSO2 has separated QoS features from the fresh pack; instead, you can install WS-RM as a feature from the p2-repo. The other major changes are that WS-RM now operates on top of CXF WS-RM [2] and acts as an inbound endpoint [3].

In this post I'm not going to describe WS-RM comprehensively, but to show how it can be configured in the ESB. If you need to read more on the WS-RM protocol, I recommend the WS-RM specification [1], which is a good source for that. Now, let's move on step by step with a sample use case.

Setting up

First you need to understand that the WS-RM inbound is designed to reliably exchange messages between the client and WSO2 ESB. So the message flow can be shown as follows:

Sample Setup Diagram

For this example, I'm using the SimpleStockQuote service which comes with WSO2 ESB. You can read more on configuring and starting the service on the default port in the documentation. If you have configured it properly, you should be able to access its WSDL via "http://localhost:9000/services/SimpleStockQuoteService?wsdl".

Next, you need to install the "CXF WS Reliable Messaging" feature from the p2-repo. For installing features, please go through the Installing Features documentation. With this step, you have completed setting up the infrastructure for the use case. Please note that this feature requires the cxf-bundle and jetty-bundle; make sure you have no conflicts when installing those bundles.

In order to configure the CXF server, we need to provide a configuration file. A sample configuration can be found in the CXF Inbound Endpoint documentation. In that configuration file, you may need to configure the paths to the key stores; a sample can be found at [5]. For this sample, configure its key store paths and place it in the "<ESB_HOME>/repository/conf/cxf" folder.

Now, I'm going to create a new WS-RM inbound endpoint. For that, select "Inbound Endpoints" from the left panel, then click "Add Inbound Endpoint". You will get a page for creating an inbound endpoint. At this stage you need to give a name to the WS-RM inbound endpoint and select the type as "Custom". You have to do that because, as mentioned earlier, WS-RM does not come with the fresh ESB pack. In the next step, you get the chance to do the rest of the configuration. The following image depicts the configuration of a sample WS-RM inbound endpoint.


At this point, you may already have some idea about the inbound endpoint. I have configured it to listen on port 20940 on localhost. The class of the custom inbound should be "org.wso2.carbon.inbound.endpoint.ext.wsrm.InboundRMHttpListener" (without quotes). The configuration "inbound.cxf.rm.config-file" describes where you have placed the CXF server configuration file.

Messages coming in on the specified port will go to the specified "sequence" (in this case the RMIn sequence), and faulty messages will go to the "fault" sequence. Other configuration details are described in the official documentation [4].

You can also do the above step directly by adding the inbound endpoint configuration to the synapse configuration.

Inbound Endpoint:

<inboundEndpoint xmlns="http://ws.apache.org/ns/synapse" name="RM_INBOUND_NEW_EXT" sequence="RMIn" onError="fault" class="org.wso2.carbon.inbound.endpoint.ext.wsrm.InboundRMHttpListener" suspend="false">
   <parameters>
      <parameter name="inbound.cxf.rm.port">20940</parameter>
      <parameter name="inbound.cxf.rm.config-file">repository/conf/cxf/server.xml</parameter>
      <parameter name="coordination">true</parameter>
      <parameter name=""></parameter>
      <parameter name="inbound.behavior">listening</parameter>
      <parameter name="sequential">true</parameter>
   </parameters>
</inboundEndpoint>

RMIn sequence:

<sequence xmlns="http://ws.apache.org/ns/synapse" name="RMIn" onError="fault">
   <property name="PRESERVE_WS_ADDRESSING" value="true"/>
   <header xmlns:wsrm="http://docs.oasis-open.org/ws-rx/wsrm/200702" name="wsrm:Sequence" action="remove"/>
   <header xmlns:wsa="http://www.w3.org/2005/08/addressing" name="wsa:To" action="remove"/>
   <header xmlns:wsa="http://www.w3.org/2005/08/addressing" name="wsa:FaultTo" action="remove"/>
   <log level="full"/>
   <send>
      <endpoint>
         <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
      </endpoint>
   </send>
</sequence>

Now you have completed setting up the sample. One more step to go: let's test it.

Running the sample

For that, the ESB provides a client which can send reliable messages. Go to the <ESB_HOME>/samples/axis2Client folder in a terminal and run the following command:

ant stockquote -Dsymbol=IBM -Dmode=quote -Daddurl=http://localhost:20940 -Dwsrm=true

The command will send a getQuote request to the ESB using WS-RM and print the expected result.

Message flow

As specified in the WS-RM spec [1], several messages are exchanged between the client and the ESB in this scenario. If you use a packet-capturing tool like Wireshark, you'll see those messages. I have attached the message flow I observed [6] to make it clearer. In brief, the following messages are exchanged; you can follow them in the text file using these points:

  1. “CreateSequence” message to initiate reliable messaging
  2. “CreateSequenceResponse” from ESB to client
  3. The actual message with data from the client to the ESB. This is the first and the last message in this case. The ESB sends this message to the backend server and gets the response.
  4. A "SequenceAcknowledgement" message, along with the response from the backend server, sent from the ESB to the client
  5. A "TerminateSequence" message from the client to the ESB


Through this post, I wanted to introduce you to the new approach to implementing WS-ReliableMessaging. This implementation comes with the WSO2 ESB 4.9.0 release; prior releases took a different approach. Therefore this post should help anyone who is interested in doing WS-RM with newer ESB versions.


[1] WS-ReliableMessaging spec. –

[2] CXF WS-RM –

[3] WSO2 ESB, Inbound Endpoint –

[4] CXF WS-RM Inbound Endpoint –

[5] Sample CXF configuration –

[6] Message flow – link-to-file