
Inspecting Solr Index in WSO2 API Manager

Introduction

The Apache Solr project [1] lets you run a full-featured search server, and you can also integrate Solr into your own project to make searching faster. In WSO2 API Manager, Solr is used to speed up searching in the Store and the Publisher. The Solr index keeps the frequently used meta-data of APIs; to retrieve the complete information about an API, API Manager then uses its database. This mechanism makes searching faster and places less load on the databases.


However, in some situations things may go wrong. We have seen several cases where the Solr index does not tally with the information in the database. When that happens, you may see inconsistent information when the complete details of an API are displayed. In such situations you may want to inspect the Solr index of API Manager.

Setting Up Solr Server

Setting up the Solr server is quite easy, as it is just a matter of downloading the binary distribution from the project page [1]. The important thing is to make sure you download the proper version. WSO2 API Manager 2.0.0 uses Solr 5.2.1. I figured that out by going through the API Manager release tag POM, identifying the registry version, and then searching the registry POM file.

Once you download the binary package, extract it. You can start the Solr server by going to the solr-5.2.1/bin directory and executing “./solr start”. The Solr server will start as a background process. You can then access its admin UI by browsing to “http://localhost:8983/solr”.

Inspecting WSO2 API Manager Index

Before doing so, you must stop both WSO2 API Manager and the Solr server. To stop the Solr server, execute “./solr stop” inside the bin directory. Then you need to copy the Solr indexing configuration and the index from API Manager.

  • To copy the configuration, go to “APIM_HOME/repository/conf/solr” and copy the “registry-indexing” folder to the “solr-5.2.1/server/solr” folder.
  • To copy the indexed data, go to “APIM_HOME” and copy the “solr” folder to the same folder in which “solr-5.2.1” resides. This is done to comply with the “dataDir” value in the “solr-5.2.1/server/solr/registry-indexing/core.properties” file.

Now start the Solr server and go to the admin UI. You should see a drop-down on the left pane with a “registry-indexing” menu item in it. Select “registry-indexing”, and you will be able to query the indexed data from the Query section. To query the Solr index you need to use its query language, which is not difficult to understand. I am not going to discuss the query language in depth here; you can refer to [2] and learn it. You can try out queries directly from the admin UI.

registry-indexing in Solr admin UI
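
For example, to list the API artifacts in the index you could try a query like the following in the Query section (the field names are the ones used by the registry-indexing core, the same ones used in the Java client later in this post; special characters in the values may need escaping in the UI):

q  = *:*
fq = mediaType_s:application/vnd.wso2-api+xml
fq = allowedRoles:internal/everyone
fl = overview_name_s, overview_version_s, overview_status_s

This should return the indexed API artifacts with only the selected fields shown.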

Writing a Java client to query information

In some cases, you may need to write a client which can talk to a Solr server and retrieve results. So here I am giving an example Java program which you can use to retrieve results from a Solr server [3]. I am not going to explain the code in detail, because I believe it is self-explanatory.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.buddhima.solr</groupId>
<artifactId>solr-testing</artifactId>
<version>1.0-SNAPSHOT</version>
<dependencies>
    <!-- https://mvnrepository.com/artifact/org.apache.solr/solr-solrj -->
    <dependency>
        <groupId>org.apache.solr</groupId>
        <artifactId>solr-solrj</artifactId>
        <version>7.1.0</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.solr/solr-common -->
    <dependency>
        <groupId>org.apache.solr</groupId>
        <artifactId>solr-common</artifactId>
        <version>1.3.0</version>
    </dependency>
</dependencies>
</project>
package com.solr.testing;

import java.io.IOException;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocumentList;

/**
 * Created by buddhima.
 */
public class SolrTesting {

    public static void main(String[] args) throws IOException, SolrServerException {
        // Default Solr port: 8983, and APIM using 'registry-indexing'
        SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/registry-indexing").build();

        SolrQuery query = new SolrQuery();
        query.setQuery("*:*");

        // Fields used for filtering, given as a list of key:value pairs
        query.addFilterQuery("allowedRoles:internal/everyone", "mediaType_s:application/vnd.wso2-api+xml");

        // Fields to show in the result
        query.setFields("overview_name_s", "overview_status_s", "updater_s", "overview_version_s", "overview_context_s");

        // Limit the query search space
        query.setStart(0);
        query.setRows(500);

        // Execute the query and print results
        QueryResponse response = client.query(query);
        SolrDocumentList results = response.getResults();
        for (int i = 0; i < results.size(); ++i) {
            System.out.println(results.get(i));
        }
    }
}
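
If the index contains API artifacts, each printed SolrDocument should look roughly like the following (the field values here are made up for illustration):

SolrDocument{overview_name_s=SampleAPI, overview_version_s=1.0.0, overview_status_s=PUBLISHED, updater_s=admin, overview_context_s=/sample}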

In addition, you can refer to [4] and [5] to learn more about the Solr query syntax.

Conclusion

In this post I have discussed how Solr is used in WSO2 API Manager and how to inspect an existing Solr index. In addition, I have shown how to write a Java client which can talk to a Solr server. I hope the above explanation helps you to solve issues related to Solr indexing.

Special thanks go to WSO2 support for providing guidance.

References

[1] Apache Solr project : http://lucene.apache.org/solr/

[2] Solr query syntax : http://www.solrtutorial.com/solr-query-syntax.html

[3] Using SolrJ : http://www.solrtutorial.com/solrj-tutorial.html

[4] Solr Query Syntax : http://yonik.com/solr/query-syntax/

[5] Solr df and qf explanation : https://stackoverflow.com/questions/17363677/solr-df-and-qf-explanation


JSON Split Aggregate with WSO2 ESB

Introduction

Split-Aggregate (Scatter-Gather) is a common messaging pattern [1] used in the enterprise world. In the split-aggregate pattern, a client’s request is sent to multiple endpoints simultaneously. The responses from those endpoints are aggregated and sent back to the client as a single response. You will find plenty of use-cases for this pattern when you try to integrate enterprise systems.

[Figure: Scatter-Gather pattern]

WSO2 ESB (currently part of WSO2 EI) is a well-known middleware product used to integrate enterprise systems, with a comprehensive stack that covers the functionality you need for such integrations. WSO2 ESB provides a set of built-in mediators for implementing the commonly used Split-Aggregate pattern: the Iterate Mediator, the Clone Mediator and the Aggregate Mediator. You will find a sample use-case of those mediators in the documentation [2].

Existing Problem

The existing mediators provide good support for Split-Aggregate scenarios when you are working with XML payloads. However, the current trend is towards using JSON payloads for message exchanges. Although the existing mediators can still be used with JSON payloads, they do not support them conveniently: you need to map your JSON payload to an XML payload, and this conversion usually adds an extra burden to the mediation logic.

In this post I discuss two mediators which are optimized for handling JSON payloads in Split-Aggregate scenarios: the JSON Iterate Mediator and the JSON Aggregate Mediator. These mediators handle JSON payloads natively and do not convert them to XML. Please note that they do not come with WSO2 ESB out of the box; you can find the relevant documentation at [3].

Configuring Mediators

To use these mediators, you need to build the source code at [3] and get the resulting jar files. Put the JSON Iterate/Aggregate Mediator-1.0.0.jar files into the ESB_HOME/repository/components/dropins folder. Along with those, add json-path-2.1.0.jar [4], json-smart-2.2.jar [5] and accessors-smart-1.1.jar [9] to the same location. Then start WSO2 ESB (sh bin/wso2server.sh).

Sample Scenario

Once you add those artifacts to the ESB, you can use the two new mediators just like the built-in mediators. The respective XML tags are <jsonIterate> and <jsonAggregate>. In this post I show a sample configuration using those mediators and describe it briefly. The same scenario is discussed in a more descriptive manner at [3].

<api xmlns="http://ws.apache.org/ns/synapse" name="sampleApi" context="/sample">
<resource methods="POST" uri-template="/*">
<inSequence>
    <log level="full"/>
    <jsonIterate continueParent="false" preservePayload="true" expression="$.messages" attachPath="$.messages">
        <target>
            <sequence>
                <log level="full"/>
                <header name="To" value="http://www.mocky.io/v2/58d6459b100000e601949cb7"/>
                <call/>
            </sequence>
        </target>
    </jsonIterate>
    <log level="full"/>
    <jsonAggregate>
        <completeCondition>
            <messageCount min="-1" max="-1"/>
        </completeCondition>
        <onComplete expression="$.message" enclosingElementProperty="responses">
            <log level="full"/>
            <send/>
        </onComplete>
    </jsonAggregate>
</inSequence>
</resource>
</api>

 

The above example shows how to do split-aggregate on a message received by an API. You can send the following request payload to the API created by the ESB at http://localhost:8280/sample

curl -X POST \
  http://localhost:8280/sample \
  -H 'content-type: application/json' \
  -d '{"originator":"my-company","messages":[{"country":"Sri Lanka","code":"94"},{"country":"America","code":"01"},{"country":"Australia","code":"61"}]}'

The expression on the jsonIterate mediator decides where to split the message payload, and it should be written as a JSON Path [6]. Within the sequence inside the jsonIterate mediator, you will find the split message payloads, which are sent to the backend URL given in the configuration. You can find additional configuration options of the JSON Iterate mediator at [7].
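
For instance, with the request above, the JSONPath $.messages selects the array of three country objects, and each iteration works on one element of that array, for example:

{"country":"Sri Lanka","code":"94"}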

Each response comes back into the jsonAggregate mediator. In the expression of the JSON Aggregate mediator, you specify which part of each response should be taken for aggregation; this expression is again a JSONPath expression. Once the completion condition is satisfied, the aggregated message is passed into the onComplete sequence, where you can do further processing. If you are interested, the documentation [8] gives a complete guide to configuring the JSON Aggregate mediator.
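
As a rough sketch, if each backend response had the form {"message": {...}} (hypothetical, since it depends on the mock backend), the aggregated payload entering the onComplete sequence would look something like the following, given the enclosingElementProperty of "responses":

{"responses":[{...}, {...}, {...}]}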

Conclusion

Split-Aggregate is a very common message exchange pattern in the enterprise world, and JSON is becoming an increasingly popular message format there too. However, WSO2 ESB lacks convenient support for split-aggregate scenarios with JSON messages. To cater for that requirement I have built two custom mediators which make life easier: they can be configured to do split-aggregate on JSON payloads without converting them to XML (native JSON support).

References

[1] EIP patterns reference : http://www.enterpriseintegrationpatterns.com/patterns/messaging/BroadcastAggregate.html

[2] WSO2 ESB Doc : https://docs.wso2.com/display/ESB500/Split-Aggregate+Pattern

[3] GitHub repository : https://github.com/Buddhima/Json-EIP-Mediators

[4] json-path-2.1.0 : https://mvnrepository.com/artifact/com.jayway.jsonpath/json-path/2.1.0

[5] json-smart-2.2.1 : https://mvnrepository.com/artifact/net.minidev/json-smart/2.2.1

[6] JSON Path documentation : https://github.com/jayway/JsonPath/blob/json-path-2.1.0/README.md

[7] JSON Iterate Mediator documentation : https://github.com/Buddhima/Json-EIP-Mediators/tree/master/JsonIterateMediator

[8] JSON Aggregate Mediator documentation : https://github.com/Buddhima/Json-EIP-Mediators/tree/master/JsonAggregateMediator

[9] accessors-smart-1.1.jar : https://mvnrepository.com/artifact/net.minidev/accessors-smart/1.1

JSON Enrich Mediator for WSO2 ESB

Introduction

JSON support for WSO2 ESB [1] was introduced some time back, but only a small number of mediators support manipulating JSON payloads. In this article I introduce a new mediator called the JSON Enrich Mediator [2], which works quite similarly to the existing Enrich mediator [3] but targets JSON payloads. The specialty of this mediator is that it works on the native JSON payload, so the payload is not converted to an XML representation and there is no data loss due to transformations.

Please note that this is a custom mediator I have created, and it does not ship with the WSO2 ESB pack.

Configuring Mediator

  1. Clone the GitHub repository: https://github.com/Buddhima/JsonEnrichMediator
  2. Build the repository using maven (mvn clean install)
  3. Copy the built artifact in the target folder to ESB_HOME/repository/components/dropins
  4. Download json-path-2.1.0.jar [5] and json-smart-2.2.jar [6] and put them into the same folder (dropins).
  5. Start WSO2 ESB (sh bin/wso2server.sh)

Sample Scenario

For this article I am using a sample scenario which moves a JSON property within the payload. For that you need to add the following API to WSO2 ESB.

<api xmlns="http://ws.apache.org/ns/synapse" name="sampleApi" context="/sample">
   <resource methods="POST" uri-template="/*">
      <inSequence>
         <log level="full"/>
         <jsonEnrich>
            <source type="custom" clone="false" JSONPath="$.me.country"/>
            <target type="custom" action="put" JSONPath="$" property="country"/>
         </jsonEnrich>
         <respond/>
      </inSequence>
   </resource>
</api>

The above configuration takes the value pointed to by the JSONPath “$.me.country” and moves it to the main body. You can find further details about JSONPath at [4].

Once the API is deployed, you need to send the following message to the ESB.


curl -H "Content-Type: application/json"
-X POST -d '{
"me":{
"country": "Sri Lanka",
"language" : "Sinhala"
}
}'
http://127.0.0.1:8280/sample

The output from the ESB should look like the following:


{
"me": {
"language": "Sinhala"
},
"country": "Sri Lanka"
}

Conclusion

I have shown a simple use-case of the JSON Enrich Mediator. You can find the comprehensive documentation in the code repository [2].

References

[1] WSO2 ESB JSON support : https://docs.wso2.com/display/ESB500/JSON+Support

[2] Code Repository for JSON Enrich Mediator : https://github.com/Buddhima/JsonEnrichMediator

[3] WSO2 ESB Enrich Mediator : https://docs.wso2.com/display/ESB500/Enrich+Mediator

[4] JSON Path documentation : https://github.com/jayway/JsonPath/blob/json-path-2.1.0/README.md

[5] json-path-2.1.0 : https://mvnrepository.com/artifact/com.jayway.jsonpath/json-path/2.1.0

[6] json-smart-2.2.1 : https://mvnrepository.com/artifact/net.minidev/json-smart/2.2.1

WSO2 ESB Endpoint Error Handling

Introduction

WSO2 ESB can be used as an intermediary component to connect different systems. When connecting those systems, their availability is a common concern, so the ESB has to handle such undesirable situations carefully and take appropriate actions. To cater for that requirement, the outbound endpoints of WSO2 ESB can be configured accordingly. In this article I discuss two common ways of configuring endpoints.

Two common approaches to configure endpoints are;

  1. Configure with just a timeout (without suspending endpoint)
  2. Configure with a suspend state

Configure with just a timeout

This would be suitable if endpoint failures are not very frequent.

Sample Configuration:

<endpoint name="SimpleTimeoutEP">
    <address uri="http://localhost:9000/StockquoteService">
    <timeout>
        <duration>2000</duration>
        <responseAction>fault</responseAction>
    </timeout>
    <suspendOnFailure>
        <errorCodes>-1</errorCodes>
        <initialDuration>0</initialDuration>
        <progressionFactor>1.0</progressionFactor>
        <maximumDuration>0</maximumDuration>
    </suspendOnFailure>
    <markForSuspension>
        <errorCodes>-1</errorCodes>
    </markForSuspension>
</address>
</endpoint>

 

In this case we only focus on the timeout of the endpoint. The endpoint stays Active forever. If a response is not received within the given duration, the responseAction is triggered.

duration – in milliseconds

responseAction – when a response comes in for a timed-out message, one of the following actions is triggered.

  • fault – calls the associated fault sequence
  • discard – discards the response
  • none – takes no specific action on the response (default)

The rest of the configuration prevents the endpoint from going into the suspend state.

If you specify responseAction as “fault”, you can define a customized way of informing the client of the failure in the fault-handling sequence, or store the message and retry it later.

Configure with a suspend state

This approach is useful when connection failures occur frequently. By suspending the endpoint, the ESB can save resources instead of unnecessarily waiting for responses.

In this case the endpoint goes through state transitions. The theory behind this behavior is the circuit-breaker pattern. The three states are:

  1. Active – Endpoint sends all requests to backend service
  2. Timeout – Endpoint starts counting failures
  3. Suspend – Endpoint limits sending requests to backend service

Sample Configuration:

<endpoint name="Suspending_EP">
    <address uri="http://localhost:9000/StockquoteServicet">
    <timeout>
        <duration>6000</duration>
    </timeout>
    <markForSuspension>
        <errorCodes>101504, 101505</errorCodes>
        <retriesBeforeSuspension>3</retriesBeforeSuspension>
        <retryDelay>1</retryDelay>
    </markForSuspension>
    <suspendOnFailure>
        <errorCodes>101500, 101501, 101506, 101507, 101508</errorCodes>
        <initialDuration>1000</initialDuration>
        <progressionFactor>2</progressionFactor>
        <maximumDuration>60000</maximumDuration>
    </suspendOnFailure>
</address>
</endpoint>

 

In the above configuration:

If the endpoint receives error codes 101504 or 101505, it is moved from the Active state to the Timeout state.

While in the Timeout state, the endpoint retries up to 3 times, with a 1 millisecond delay between attempts.

If all those retry attempts fail, the endpoint moves to the Suspend state. If a retry succeeds, the endpoint moves back to the Active state.

If an Active endpoint receives error code 101500, 101501, 101506, 101507 or 101508, it moves directly to the Suspend state.

After the endpoint moves to the Suspend state, it waits for initialDuration before attempting any further requests. Thereafter it determines the time period between attempts according to the following equation.

Min(current suspension duration * progressionFactor, maximumDuration)

In the equation, the “current suspension duration” is updated on each reattempt.
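
As a quick illustration of this back-off behaviour (plain Java, just to show the arithmetic, not ESB code), the reattempt intervals for the sample configuration above work out as follows:

public class SuspensionBackoff {
    public static void main(String[] args) {
        long duration = 1000L;        // initialDuration from the sample configuration
        final double factor = 2.0;    // progressionFactor
        final long maximum = 60000L;  // maximumDuration
        // Wait time before each of the first 8 reattempts after suspension
        for (int attempt = 1; attempt <= 8; attempt++) {
            System.out.println("Reattempt " + attempt + " after " + duration + " ms");
            duration = Math.min((long) (duration * factor), maximum);
        }
        // Prints: 1000, 2000, 4000, 8000, 16000, 32000, 60000, 60000
    }
}

So the wait time doubles on each failed reattempt until it is capped at maximumDuration (60 seconds in this example).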

Once the endpoint succeeds in getting a response to a request, it goes back to the Active state.

If the endpoint receives any other error code (e.g. 101503), it does not make a state transition and remains in the Active state.

Conclusion

In this article I have shown two basic configurations that are useful when configuring endpoints of WSO2 ESB. You can refer to the WSO2 ESB documentation for implementing more complex patterns with endpoints.

References

WSO2 ESB Documentation: https://docs.wso2.com/display/ESB500/Endpoint+Error+Handling#EndpointErrorHandling-timeoutSettings

Timeout and Circuit Breaker Pattern in WSO2 Way: http://ssagara.blogspot.com/2015/05/timeout-and-circuit-breaker-pattern-in.html

Endpoint Error Codes: https://docs.wso2.com/display/ESB500/Error+Handling#ErrorHandling-codes

Endpoint Error Handling: http://wso2.com/library/articles/wso2-enterprise-service-bus-endpoint-error-handling/

Reliable Messaging with WSO2 ESB

Introduction

Web-Service Reliable Messaging (WS-ReliableMessaging) is a standard which describes a protocol for delivering messages reliably between distributed applications. Message failures due to software component, system or network failures can be overcome through this protocol. It is transport-independent, so messages can be exchanged between systems over any transport. For further information, please go through the WS-RM specification [1], which describes this topic in full. WS-RM is not a novel concept for WSO2 ESB, as it was available in previous releases. But with the new release, WSO2 ESB 4.9.0, WSO2 has separated QoS features from the fresh pack; instead, you can install WS-RM as a feature from the p2 repository. Another major change is that WS-RM now operates on top of CXF WS-RM [2], acting as an inbound endpoint [3].

In this post I am not going to describe WS-RM comprehensively, but rather show how it can be configured in the ESB. If you need to read more about the WS-RM protocol, I recommend the WS-RM specification [1], which is a good source for that. Now, let’s move step-by-step through a sample use-case.

Setting up

First you need to understand that the WS-RM inbound endpoint is designed to reliably exchange messages between a client and WSO2 ESB. The message flow can be depicted as follows:

Sample Setup Diagram

For this example, I am using the SimpleStockQuote service which comes with WSO2 ESB. You can read more about configuring and starting the service on the default port in the documentation. If you have configured it properly, you should be able to access its WSDL via “http://localhost:9000/services/SimpleStockQuoteService?wsdl”.

Next, you need to install the “CXF WS Reliable Messaging” feature from the p2 repository. For installing features, please go through the Installing Features documentation. With this step, you have completed setting up the infrastructure for the use-case. Also note that this feature requires the cxf-bundle and jetty-bundle; make sure you have no conflicts when installing those bundles.

In order to configure the CXF server, we need to provide a configuration file. A sample configuration can be found in the CXF Inbound Endpoint documentation. In that configuration file, you may need to configure the paths to the key stores; a sample can be found at [5]. For this sample, configure its key store paths and place it in the “<ESB_HOME>/repository/conf/cxf” folder.

Now, I am going to create a new WS-RM inbound endpoint. For that, select “Inbound Endpoints” from the left panel and click “Add Inbound Endpoint”. You will then get a page to create an inbound endpoint. At this stage you need to give a name to the WS-RM inbound endpoint and select the type as “Custom”. You have to do that because, as mentioned earlier, WS-RM does not come along with the fresh ESB pack. In the next step, you can do the rest of the configuration. The following image depicts the configuration of a sample WS-RM inbound endpoint.

[Figure: Sample WS-RM inbound endpoint configuration]

At this point, you may already have some idea about the inbound endpoint. I have configured it to listen on port 20940 on localhost. The class of the custom inbound endpoint should be “org.wso2.carbon.inbound.endpoint.ext.wsrm.InboundRMHttpListener” (without quotes). The “inbound.cxf.rm.config-file” parameter describes where you have placed the CXF server configuration file.

Messages coming in on the specified port will go to the specified “sequence”, in this case the RMIn sequence, and faulty messages will go to the “fault” sequence. Other configuration details are described in the official documentation [4].

You can also do the above step directly by adding the inbound endpoint configuration to the synapse configuration.

Inbound Endpoint:


<inboundEndpoint xmlns="http://ws.apache.org/ns/synapse" name="RM_INBOUND_NEW_EXT" sequence="RMIn" onError="fault" class="org.wso2.carbon.inbound.endpoint.ext.wsrm.InboundRMHttpListener" suspend="false">
   <parameters>
      <parameter name="inbound.cxf.rm.port">20940</parameter>
      <parameter name="inbound.cxf.rm.config-file">repository/conf/cxf/server.xml</parameter>
      <parameter name="coordination">true</parameter>
      <parameter name="inbound.cxf.rm.host">127.0.0.1</parameter>
      <parameter name="inbound.behavior">listening</parameter>
      <parameter name="sequential">true</parameter>
   </parameters>
</inboundEndpoint>

RMIn sequence:


<sequence xmlns="http://ws.apache.org/ns/synapse" name="RMIn" onError="fault">
   <in>
      <property name="PRESERVE_WS_ADDRESSING" value="true"/>
      <header xmlns:wsrm="http://schemas.xmlsoap.org/ws/2005/02/rm" name="wsrm:Sequence" action="remove"/>
      <header xmlns:wsa="http://www.w3.org/2005/08/addressing" name="wsa:To" action="remove"/>
      <header xmlns:wsa="http://www.w3.org/2005/08/addressing" name="wsa:FaultTo" action="remove"/>
      <log level="full"/>
      <send>
         <endpoint>
            <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
         </endpoint>
      </send>
   </in>
   <out>
      <send/>
   </out>
</sequence>

Now you have completed setting up the sample. One more step to go: let’s test it.

Running the sample

For that, the ESB provides a client which can send reliable messages. Go to the <ESB_HOME>/samples/axis2Client folder in a terminal and run the following command:

ant stockquote -Dsymbol=IBM -Dmode=quote -Daddurl=http://localhost:20940 -Dwsrm=true

The command sends a getQuote request to the ESB using WS-RM and prints the expected result.

Message flow

As specified in the WS-RM spec [1], several messages are exchanged between the client and the ESB in this scenario. If you use a packet capturing tool like Wireshark, you will see those messages. I have attached the message flow I observed [6] to make it clearer. In brief, the following messages are exchanged; you can follow them in the attached text file with these points:

  1. “CreateSequence” message from the client to initiate reliable messaging
  2. “CreateSequenceResponse” from the ESB to the client
  3. The actual message with data, from the client to the ESB. This is both the first and the last such message in this case. The ESB sends this message to the backend server and gets the response.
  4. “SequenceAcknowledgement” message, sent from the ESB to the client along with the response from the backend server
  5. “TerminateSequence” message from the client to the ESB

Conclusion

Through this post, I wanted to introduce the new approach to implementing WS-ReliableMessaging. This implementation comes with the WSO2 ESB 4.9.0 release, and prior releases had a different approach. Therefore this post should help anyone who is interested in doing WS-RM with newer ESB versions.

References

[1] WS-ReliableMessaging spec. – http://specs.xmlsoap.org/ws/2005/02/rm/ws-reliablemessaging.pdf

[2] CXF WS-RM – http://cxf.apache.org/docs/ws-reliablemessaging.html

[3] WSO2 ESB, Inbound Endpoint – https://docs.wso2.com/display/ESB490/Working+with+Inbound+Endpoints

[4] CXF WS-RM Inbound Endpoint – https://docs.wso2.com/display/ESB490/CXF+WS-RM+Inbound+Protocol

[5] Sample CXF configuration – http://www.filedropper.com/server_23

[6] Message flow – link-to-file

JDBC Message Store for WSO2 ESB

Introduction

Message stores in WSO2 ESB are important for implementing scenarios such as store-and-forward and reliable delivery. They can be used to persist messages during mediation or even afterwards. Synapse provides an in-memory message store, which cannot persist messages beyond the server’s execution and consumes memory as well. JMS message stores can persist messages, but they need additional resources and are slower. A good alternative to both is a JDBC message store, which has been added to WSO2 ESB from 4.9.0 onwards.

JDBC message store

This implementation uses the same message store abstraction that already exists in the ESB, but in a different form. It is designed very similarly to the JMS message store implementation in WSO2 ESB, and it uses a JDBC connector to connect to external relational databases (MySQL and H2 have been tested).

[Figure: JDBC Message Store for the Store and Forward pattern]

From the very beginning of the design phase, the JDBC message store focused on eliminating the difficulties faced when using JMS queues as message stores. The following aims are achieved by the JDBC store:

  • Easy to connect – it is much easier to connect to databases than to JMS queues.
  • More operations on data – JMS queues are not supposed to support operations like randomly selecting messages, but with JDBC this becomes a reality, with operations very close to those of in-memory stores.
  • Fast transactions – in tests I have seen JDBC stores handle about 2300 transactions per second, which is 10 times faster than the existing system.
  • Works with high capacity over long periods – since the JDBC store uses a database as the storage medium, it can rely on terabytes of storage, and databases are generally accepted to hold data for longer periods than JMS queues.

After tests in different environments with different configurations, I have seen that the JDBC message store achieved more than was expected at the initial stages.

In this store implementation, the message is converted to a serializable Java object so that it can be stored as a blob. Constructing the persisted message has two basic parts, the JDBC Axis2 message and the JDBC Synapse message; combining the two produces the storable message, which is sent to the database.

[Figure: Constructing a Storable Message]
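
To make the idea concrete, here is a rough, self-contained sketch (not the actual JDBCMessageConverter or JDBCMessageStore code) of how a serializable message object could be written as a blob into the jdbc_store_table schema given later in this post; the connection details match the sample configuration, and the message id is hypothetical:

import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BlobStoreSketch {

    // Serialize any serializable message object into a byte array
    private static byte[] toBytes(Serializable message) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(message);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the storable message built from the Axis2 and Synapse parts
        Serializable message = "storable-message-placeholder";

        // Requires the MySQL driver on the classpath; URL and credentials taken from the sample config
        try (Connection con = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/mystore", "root", "");
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO jdbc_store_table (msg_id, message) VALUES (?, ?)")) {
            ps.setString(1, "urn:uuid:example-message-id");  // hypothetical message id
            ps.setBytes(2, toBytes(message));                 // message stored as a blob
            ps.executeUpdate();
        }
    }
}

The real store builds the storable message from the Axis2 and Synapse message contexts as described above, but the one-row-per-message, blob-column idea is the same.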

Apart from the message-constructing classes, the following is a brief explanation of what the rest of the classes do:

  • JDBCMessageStore – provides the fundamental interface to external parties by encapsulating the underlying implementation. This class exposes store, poll, get, clear, peek and the other generic message store operations.
  • JDBCMessageStoreConstants – defines the constant values related to the JDBC message store. Gathering all the constants in a single place makes the implementation easier to maintain.
  • JDBCConfiguration – provides the utility functionality needed for JDBC operations: creating and terminating connections, querying database tables and so on.
  • JDBCMessageConverter – converts SOAP messages into serializable Java objects and back again after querying; it works as an adapter between the database and the ESB.
  • JDBCProducer – produces messages into the store; used by the store mediator.
  • JDBCConsumer – consumes messages from the store; used by message processors.

Those classes, along with the classes mentioned previously, make up the JDBC message store.

Configuration of JDBC Message Store

To use the JDBC message store, you have to add the required JDBC driver. Thereafter, the following configuration allows any message processor to use the JDBC message store just like any other message store. The configuration can either specify the connection details inline or point to a datasource (which gives you additional control over the database).

<store messageStore="MyStore"/>

<messageStore class="org.apache.synapse.message.store.impl.jdbc.JDBCMessageStore" name="MyStore">

   <!-- Either: in-lined data source -->
   <parameter name="store.jdbc.driver">com.mysql.jdbc.Driver</parameter>
   <parameter name="store.jdbc.connection.url">jdbc:mysql://localhost:3306/mystore</parameter>
   <parameter name="store.jdbc.username">root</parameter>
   <parameter name="store.jdbc.password"></parameter>
   <parameter name="store.jdbc.table">store_table</parameter>

   <!-- Or: external data source -->
   <parameter name="store.jdbc.dsName">reportDB</parameter>
   <parameter name="store.jdbc.table">store_table</parameter>

</messageStore>

In-lined Data Source

store.jdbc.driver – Database driver class name
store.jdbc.connection.url – Database URL
store.jdbc.username – User name to access the database
store.jdbc.password – Password to access the database
store.jdbc.table – Table name in the database

External Data Source

store.jdbc.dsName – The name of the datasource to be looked up
store.jdbc.table – Table name in the database
Optionally:
store.jdbc.icClass – Initial context factory class. The corresponding Java environment property is java.naming.factory.initial
store.jdbc.connection.url – The naming service provider URL. The corresponding Java environment property is java.naming.provider.url
store.jdbc.username – This corresponds to the Java environment property java.naming.security.principal
store.jdbc.password – This corresponds to the Java environment property java.naming.security.credentials

Database script

To create the database table, you can use the following scripts.

MySQL :
CREATE TABLE jdbc_store_table(
indexId BIGINT( 20 ) NOT NULL AUTO_INCREMENT ,
msg_id VARCHAR( 200 ) NOT NULL ,
message BLOB NOT NULL ,
PRIMARY KEY ( indexId )
)

H2 :
CREATE TABLE jdbc_store_table(
indexId BIGINT( 20 ) NOT NULL AUTO_INCREMENT ,
msg_id VARCHAR( 200 ) NOT NULL ,
message BLOB NOT NULL ,
PRIMARY KEY ( indexId )
)

You can create similar SQL script according to your database.

Sample

First you need to put the relevant database driver into the repository/components/lib folder [2].
The following sample configuration is based on a MySQL database named “mystore” with the table “store_table”.

<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="MessageStoreProxy"
       transports="https http"
       startOnLoad="true"
       trace="disable">
   <description/>
   <target>
      <inSequence>
         <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
         <property name="OUT_ONLY" value="true"/>
         <property name="target.endpoint" value="StockQuoteServiceEp"/>
         <store messageStore="MyStore"/>
      </inSequence>
   </target>
   <publishWSDL uri="http://localhost:9000/services/SimpleStockQuoteService?wsdl"/>
</proxy>

<messageStore xmlns="http://ws.apache.org/ns/synapse"
              class="org.apache.synapse.message.store.impl.jdbc.JDBCMessageStore"
              name="MyStore">
   <parameter name="store.jdbc.password"/>
   <parameter name="store.jdbc.username">root</parameter>
   <parameter name="store.jdbc.driver">com.mysql.jdbc.Driver</parameter>
   <parameter name="store.jdbc.table">store_table</parameter>
   <parameter name="store.jdbc.connection.url">jdbc:mysql://localhost:3306/mystore</parameter>
</messageStore>

<messageProcessor xmlns="http://ws.apache.org/ns/synapse"
                  class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
                  name="ScheduledProcessor"
                  messageStore="MyStore">
   <parameter name="max.delivery.attempts">5</parameter>
   <parameter name="interval">10</parameter>
   <parameter name="is.active">true</parameter>
</messageProcessor>

A message processor can also be added via the management console through the following section:

[Figure: JDBC Message Processor]

Use the axis2 client as follows to send messages to the proxy:

ant stockquote -Daddurl=http://localhost:8280/services/MessageStoreProxy

Rough comparison between JMS and JDBC message store performance

The test environment details are as follows (the JMS store is included just for comparison):

Type : MySQL on XAMPP server
Storage Engine : MyISAM
Machine Details : Ubuntu 14.04 on Intel® Core™ i7-4800MQ CPU @ 2.70GHz × 8 with 16 GB RAM
Messages sent via : JMeter client

# of Threads | Messages/Thread | Total Messages | JMS Store Producer (msg/sec) | JDBC Store Producer (msg/sec)
1 | 100 | 100 | 63.9 | 552.5
1 | 500 | 500 | 62.8 | 566.3
1 | 1000 | 1000 | 63.5 | 577.4
1 | 2000 | 2000 | 62.3 | 629.3
1 | 3000 | 3000 | 62.8 | 649.9
10 | 10 | 100 | 73.7 | 108.8
10 | 50 | 500 | 70.4 | 511.8
10 | 100 | 1000 | 70.7 | 928.5
10 | 200 | 2000 | 71.3 | 1537.3
10 | 300 | 3000 | 72.3 | 1827
50 | 10 | 500 | 71.6 | 494.1
50 | 100 | 5000 | 72.5 | 3494.1
100 | 250 | 25000 | 73.4 | 5529.8
500 | 100 | 50000 | – | 7055.2

(Note: at small message counts, the JDBC figures have not yet reached a stable throughput)

References:

[1] Pull request containing the implementation – https://github.com/wso2/wso2-synapse/pull/91

[2] MySQL connector – https://dev.mysql.com/downloads/connector/j/

[3] WSO2 ESB 4.9.0-ALPHA – https://github.com/wso2/product-esb/releases/tag/esb-parent-4.9.0-ALPHA

 

Local Transport in WSO2 ESB

Introduction

WSO2 ESB [1] is considered the fastest 100% open-source enterprise service bus on the planet. It has a number of features that cater for different enterprise integration scenarios, and it supports a number of transports including NHTTP, JMS, VFS, Local, SMS, Mail and domain-specific transports such as FIX and HL7. Among those transports, I recently got an opportunity to work a bit with the Local Transport, so this post is based on what I have experienced with it.

The Local Transport [2] was first introduced to WSO2 ESB in version 4.0. It helps proxy services communicate with each other in an efficient manner; the gain is obtained by using in-JVM calls when calling proxy services. On the technical side, the sender of the Local Transport is implemented based on org.apache.axis2.transport.local.NonBlockingLocalTransportSender, and there is no receiver implementation.

Enabling Local Transport

To enable the Local Transport, follow these steps:
1. Go to /repository/conf/carbon.xml
2. Replace local://services/ with https://${carbon.local.ip}:${carbon.management.port}${carbon.context}/services/

3. Go to /repository/conf/axis2/axis2.xml
4. Comment out the following two lines:

<transportReceiver name="local" class="org.wso2.carbon.core.transports.local.CarbonLocalTransportReceiver"/>
<transportSender name="local" class="org.wso2.carbon.core.transports.local.CarbonLocalTransportSender"/>

5. Add the following line:

<transportSender name="local" class="org.apache.axis2.transport.local.NonBlockingLocalTransportSender"/>

If you need to use the local transport with the Callout mediator, you do not need to perform the configuration mentioned in this section, because the Callout mediator requires the blocking local transport, which is configured by default in the WSO2 ESB distribution.

Scenarios

A sample scenario [3] can be found in the WSO2 documentation.
Sample URL: https://docs.wso2.com/display/ESB481/Sample+268%3A+Proxy+Services+with+the+Local+Transport

In the sample you can see there are 3 proxy services. Once the client sends a message to the ESB, LocalTransportProxy, SecondProxy and StockQuoteProxy are called sequentially. In that scenario the communication between the proxy services is handled by in-JVM calls, which reduces the overhead on network traffic.

You can try the same execution chain by reverting the settings from the Enabling Local Transport section and replacing the “local://localhost” prefixes with “http://localhost:8280”. Then the communication between the proxies happens across the network.

If you capture the network traffic with Wireshark for the above two scenarios (with TCP traffic filtered):

With Local Transport:

[Screenshot: Wireshark capture with the Local Transport]

Without Local Transport:

[Screenshot: Wireshark capture without the Local Transport]

As depicted in the first image, with the Local Transport you only observe the requests and responses exchanged with external parties (the client and the backend service). Without the Local Transport, you can also see the proxy service calls happening over the network.

Things to Consider with Local Transport

Though the Local Transport seems to be an efficient way to communicate, it comes with several limitations which may mean it is not the right choice:

  1. Local Transport cannot be used to send REST API calls, which require the HTTP/S transports.
  2. WS-Security cannot be used with the local transport. Since the local transport is mainly used to make calls within the same JVM, WS-Security is generally not required in the scenarios where it is used.
  3. If you want to make calls across tenants, you should use a transport other than the local transport, even if the tenants run in the same JVM.

Conclusion

This post is based on the facts I have found about the Local Transport so far. It is your integration scenario that decides whether the Local Transport is suitable or not, but it is always worth going for the most efficient option to get the full benefit of your service integration software.

References

[1] WSO2 ESB – http://wso2.com/products/enterprise-service-bus/

[2] Local transport – https://docs.wso2.com/display/ESB481/Local+Transport

[3] Local Transport Sample – https://docs.wso2.com/display/ESB481/Sample+268%3A+Proxy+Services+with+the+Local+Transport