
What is SMPP Protocol

Introduction

What is the SMPP protocol? The simplest answer you can give is “Short Message Peer to Peer” protocol. This protocol defines how you can communicate with Message Centres such as a Short Message Service Centre (SMSC), a GSM Unstructured Supplementary Services Data (USSD) server, etc.

The client which communicates with a Message Centre is known as an External Short Message Entity (ESME). The ESME needs to comply with the same protocol version as the SMSC in order to communicate. The currently established SMPP protocol version is 3.4 [1].


Binding Types

To start communicating with a Message Centre, the ESME needs to initiate a session by sending a bind request. The SMPP protocol defines 3 binding types.

  1. Transmitter: Messages are sent from the ESME to the SMSC
  2. Receiver: Messages are sent from the SMSC to the ESME
  3. Transceiver: Messages are sent from the SMSC to the ESME and vice versa

If an ESME wants to transfer messages in both directions, it has to use either Transmitter + Receiver binds or a single Transceiver bind.

Exchanging Data

The elements of the SMPP protocol are request and response Protocol Data Units (PDUs). Data exchange in SMPP is defined using these PDU types over an underlying TCP/IP or X.25 network connection. Here I will discuss some important PDUs.

bind_

A bind PDU is used to register an ESME with a Message Centre. The bind_transmitter, bind_receiver and bind_transceiver PDUs are used to initiate Transmitter, Receiver and Transceiver bindings respectively. In response, the Message Centre sends a bind_***_resp PDU indicating the status of the binding.

unbind

The unbind PDU is used by the ESME to terminate the session and unregister from the Message Centre.

submit_sm

This PDU is used to submit a short message to the SMSC for onward transmission.

submit_multi

This PDU is used to submit an SMPP message for delivery to one or more recipients.

deliver_sm

This PDU is issued by the SMSC to send a message to an ESME. The SMSC uses this PDU for two main purposes:

  1. The SMSC may route a short message to the ESME for delivery.
  2. SMSC Delivery Receipt: a message from the SMSC indicating the delivery status of a submitted message. If the ESME wishes to receive these, it has to set the registered_delivery field accordingly during submit_sm.

query_sm

This PDU is issued by the ESME to query the delivery state of a submitted short message. The ESME must supply the message_id generated by the SMSC, which was received with submit_sm_resp.

enquire_link

This PDU is issued by either the SMSC or the ESME to provide a confidence check of the communication path between an ESME and an SMSC.

Implementing an ESME

An ESME can be implemented in any programming language. You can either implement the SMPP specification from scratch or use an existing library. I found the jsmpp library [2] for implementing an ESME using Java. You may refer to the Java SMPP library comparison discussion at [3]. You will also find SMPP libraries for a wide range of languages including C#, NodeJS, PHP, etc.
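As an illustration of the PDUs discussed above, here is a minimal sketch of an ESME written against the jsmpp 2.x API [2]. The host, port, credentials and phone numbers are placeholders, and error handling and the receiver side are omitted; treat it as a starting point rather than a complete client.

import org.jsmpp.bean.BindType;
import org.jsmpp.bean.ESMClass;
import org.jsmpp.bean.GeneralDataCoding;
import org.jsmpp.bean.NumberingPlanIndicator;
import org.jsmpp.bean.RegisteredDelivery;
import org.jsmpp.bean.SMSCDeliveryReceipt;
import org.jsmpp.bean.TypeOfNumber;
import org.jsmpp.session.BindParameter;
import org.jsmpp.session.SMPPSession;

public class SimpleEsme {

    public static void main(String[] args) throws Exception {
        SMPPSession session = new SMPPSession();

        // bind_transceiver: system_id, password and system_type are placeholders
        session.connectAndBind("localhost", 2775,
                new BindParameter(BindType.BIND_TRX, "esme-system-id", "password", "cp",
                        TypeOfNumber.UNKNOWN, NumberingPlanIndicator.UNKNOWN, null));

        // submit_sm: request a delivery receipt through the registered_delivery field
        String messageId = session.submitShortMessage("CMT",
                TypeOfNumber.INTERNATIONAL, NumberingPlanIndicator.UNKNOWN, "1616",
                TypeOfNumber.INTERNATIONAL, NumberingPlanIndicator.UNKNOWN, "94771234567",
                new ESMClass(), (byte) 0, (byte) 1, null, null,
                new RegisteredDelivery(SMSCDeliveryReceipt.SUCCESS_FAILURE),
                (byte) 0, new GeneralDataCoding(), (byte) 0,
                "Hello from an ESME".getBytes());

        System.out.println("Submitted, message_id = " + messageId);

        // unbind and terminate the session
        session.unbindAndClose();
    }
}

The message_id returned here is the same identifier you would later pass in a query_sm request.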

Conclusion

In this post I wanted to cover the basics of the SMPP protocol and its use. I also discussed some important PDUs you should know. Implementing an ESME has now become simpler, as different libraries expose convenient methods for it.

References

[1] SMPP specification version 3.4: http://opensmpp.org/specs/SMPP_v3_4_Issue1_2.pdf

[2] JSMPP library: https://jsmpp.org/

[3] Java SMPP library comparison: https://stackoverflow.com/questions/14368611/java-smpp-library-comparison

 


Microservices with Netflix stack and Spring Boot

Introduction

In my previous post on microservices [1], I discussed how a microservice can be implemented using Spring Boot. In this post I am going to discuss how a microservice implementation can be leveraged using the Netflix OSS stack. Netflix can be considered one of the early adopters of this trend, and they have released several projects that they have used in their own implementation. Some of those outcomes are Eureka, Zuul, and Feign, which I am going to use for this implementation.

Use case

For this implementation, I have chosen a simple use case where a payment-service uses customer-info-service to retrieve customer information. Apart from that, customer-info-service can be directly invoked to retrieve data. Those two services are exposed through a gateway service. There is also a registry service which manages the registrations of those services.

Plan for the implementation

Implementation

For this implementation I have used Spring Boot version 1.5.2.RELEASE. Before starting, make sure you have set up Java and configured it with your favorite IDE (in my case it's IntelliJ). I have also used Maven as the build tool for this project. So let's move to the implementation of the use case discussed above.

Implementing the Registry Service

As the first step, create a Maven project with your IDE and create a sub-module called “eureka-service” in it. In the src/java folder, create a package as you prefer (I used “com.buddhima.example.service.eureka”). Inside that package, create a new Java file called Application.java and add the following code.


package com.buddhima.example.service.eureka;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class Application {
    public static void main(String[] args) {
        new SpringApplication(Application.class).run(args);
    }
}

The @EnableEurekaServer annotation expresses that this microservice is going to be the registry service. It requires the “org.springframework.cloud:spring-cloud-starter-eureka-server” dependency in your pom file. Eureka is a project in the Netflix stack which easily transforms this microservice into the registry for the other services. Other microservices can register with it as clients, and the registry service will help to figure out registered services' locations when requested. You will find more information on the Eureka project in its wiki page [2].
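For completeness, a typical application.yml for the registry itself could look like the sketch below. The port matches the defaultZone URL used by the other services later in this post; the two client flags stop the registry from trying to register with itself, and the service name is my own choice.

server:
  port: 8761

spring:
  application:
    name: eureka-service

eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false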

Implementing CustomerInfo service

This will be a typical microservice, which provides information about a customer (hardcoded for this sample). Without going into further explanations, the following are the relevant Application.java, CustomerInfoController.java, application.yml and pom.xml files.


package com.buddhima.example.service.customerinfo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

@SpringBootApplication
@EnableEurekaClient
public class Application {

    public static void main(String[] args) {
        new SpringApplication(Application.class).run(args);
    }
}


package com.buddhima.example.service.customerinfo;

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping(value = "/customerinfo")
public class CustomerInfoController {

    @RequestMapping(method = RequestMethod.GET, value = "/name")
    public String getName() {
        System.out.println("Name requested");

        return "foo";
    }

    @RequestMapping(method = RequestMethod.GET, value = "/age")
    public int getAge() {
        System.out.println("Age requested");

        return 28;
    }
}

server:
  port: 8000

spring:
  application:
    name: customerinfo-service

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
  instance:
    preferIpAddress: true

customer-info service pom file : https://github.com/Buddhima/project-mss/blob/step-1/customerinfo-service/pom.xml

The special thing to note in the above code is the @EnableEurekaClient annotation, which enables this microservice to register itself as a service in the Eureka registry.

Implementing Payment service with Feign client

The payment service is almost identical to the customer-info service, but it has a Feign client which calls the customer-info service. For that purpose, the CustomerInfoClient and PaymentController classes are as follows.


package com.buddhima.example.service.payment.clients;

import org.springframework.cloud.netflix.feign.FeignClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@FeignClient(value = "customerinfo-service", path = "/customerinfo")
public interface CustomerInfoClient {

    @RequestMapping(method = RequestMethod.GET, value = "/name")
    public String getName();

    @RequestMapping(method = RequestMethod.GET, value = "/age")
    public int getAge();
}


package com.buddhima.example.service.payment;

import com.buddhima.example.service.payment.clients.CustomerInfoClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.bind.annotation.RequestMethod;

@RestController
@RequestMapping(value = "/payment")
public class PaymentController {

    @Autowired
    private CustomerInfoClient customerInfoClient;

    @RequestMapping(value = "/name", method = RequestMethod.GET)
    public String getName() {
        return customerInfoClient.getName();
    }

    @RequestMapping(value = "/age", method = RequestMethod.GET)
    public int getAge() {
        return customerInfoClient.getAge();
    }
}

To activate the Feign client, you need to add the @EnableFeignClients annotation to the Application.java class and add the “org.springframework.cloud:spring-cloud-starter-feign” dependency.
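For reference, the payment service's Application.java would then look roughly like this (a sketch consistent with the annotations mentioned above and the Spring Cloud Netflix packages used elsewhere in this post):

package com.buddhima.example.service.payment;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
import org.springframework.cloud.netflix.feign.EnableFeignClients;

@SpringBootApplication
@EnableEurekaClient
@EnableFeignClients
public class Application {

    public static void main(String[] args) {
        new SpringApplication(Application.class).run(args);
    }
}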

Implementing Edge Service with Zuul

The purpose of the edge service is very similar to that of a software load-balancer. The Zuul project [3] focuses on dynamic routing of messages. It uses the Eureka service to find and resolve the addresses for incoming requests and routes them to the proper microservice. So microservices at the backend can be scaled up/down without changing a single configuration in the rest of the environment. The application.yml of the edge service is shown below.

server:
  port: 6000

spring:
  application:
    name: edge-service

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
  instance:
    preferIpAddress: true

zuul:
  debug:
    request: true
  routes:
    payment-service:
        path: /payment-service/**
        serviceId: payment-service
        stripPrefix: true
    customerinfo-service:
        path: /customerinfo-service/**
        serviceId: customerinfo-service
        stripPrefix: true
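In addition to the routing configuration above, the edge service needs its own Application.java annotated with @EnableZuulProxy. The following is a sketch, assuming the spring-cloud-starter-zuul dependency is in the pom; the package name is my own choice.

package com.buddhima.example.service.edge;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@SpringBootApplication
@EnableZuulProxy
public class Application {

    public static void main(String[] args) {
        new SpringApplication(Application.class).run(args);
    }
}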

OK, that’s it! I have quickly gone through the important points. I have added the complete source code to GitHub [4] for you to refer to.

What you can do more

Well, this is not the end. There are many paths you can take from here. I have highlighted a few of them in the following list.

  1. Config Service : creating a centralized place to manage all the configurations belonging to the microservices.
  2. Circuit breaker implementation : the circuit breaker pattern avoids propagating backend failures to clients. Another project by Netflix called Hystrix [5] is very popular for this purpose. You may use the Turbine project to aggregate information from multiple microservices into a single dashboard.
  3. Docker and Kubernetes : microservice deployments can be leveraged using Docker and Kubernetes to make them work in a fault-tolerant manner.
  4. Analytics using the ELK stack : you may have heard of the ELK stack [6], which provides various forms of support for analyzing data.

Where you can learn more

While doing this experiment, I came across a number of resources which are written as tutorials for microservices. Some interesting ones are listed below.

  1. Fernando Barbeiro Campos’s Blog : https://fernandoabcampos.wordpress.com/2016/02/04/microservice-architecture-step-by-step-tutorial/
  2. Quimten De Swaef’s Blog : https://blog.de-swaef.eu/the-netflix-stack-using-spring-boot/
  3. Piotr’s TechBlog : https://piotrminkowski.wordpress.com/2017/02/05/part-1-creating-microservice-using-spring-cloud-eureka-and-zuul/
  4. Piggy Metrics, a POC app : https://github.com/sqshq/PiggyMetrics

 

References

[1] Microservices with Spring Boot : https://buddhimawijeweera.wordpress.com/2017/05/04/microservices-with-spring-boot/

[2] Eureka project : https://github.com/Netflix/eureka/wiki/Eureka-at-a-glance

[3] Zuul project : https://github.com/Netflix/zuul/wiki

[4] Github project : https://github.com/Buddhima/project-mss/tree/step-1

[5] Hystrix : https://github.com/Netflix/Hystrix/wiki

[6] ELK stack : https://www.elastic.co/

Analyzing Memory Usage of an Application

Introduction

The memory usage of an application is a key factor to monitor. Especially in production systems, you need to set alarms to make sure that the system is stable, so memory usage can be considered a probe to measure the health of the system. Usually production systems are installed on Linux servers, and the OS itself helps in many ways to provide a clear view of an application's memory usage.


In this post, I am going to discuss different commands and tools which can be used to measure the memory usage of applications, especially Java applications. This post will guide you from higher level to lower level under the following topics.

  1. Monitoring Overall System Memory Usage
  2. Monitoring Application’s Memory Usage
  3. Analyzing Java Application’s Memory Usage

Further information on commands and tools can be gained by going through the external links provided.

Monitoring Overall System Memory Usage

For this purpose I am going to discuss the “free” command in Linux. The free command gives an overview of complete system memory usage. Please note that this is not a precise way of measuring a single application's memory consumption, because the system can host many applications and each application has its own boundaries. However, let's look into some usages of the free command. (hint: use free -h to get a human-readable output)

[root@operation_node ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          7873       7360        512          0         70        920
-/+ buffers/cache:       6369       1503
Swap:        11143        431      10712

In the above output, in the first line, you can see that the total physical memory is 7873 MB and 7360 MB is used, so only 512 MB remains. However, this does not imply that the memory is completely used up. Linux is good at using memory effectively: it uses caching to make memory access efficient, and that cached memory is shown as used in the first line.
What you should look at is the second line, which removes the cache & buffer usage from the physical memory. The used column of the second line shows the actual use of memory without cache & buffers. In the free column of the second line you can see 1503 MB of free memory, which is obtained by adding free + buffers + cached. So you actually have 1503 MB of physical memory available for use. In addition, according to the third line, you have around 10 GB of swap memory ready for use. Please refer to [1] for further information.

In modern versions of the Linux kernel, the output of the free command has changed. It will look something like the following.

me@my-pc ~ $ free -m
               total        used        free      shared  buff/cache   available
Mem:            7882        3483         299         122        4100        3922
Swap:           9535           0        9535

In the above case, the formula for the calculation is as below [2] [3]:

total = used + free + buffers + cache

available : the amount of memory which is available for allocation to a new process or to existing processes.

free : the amount of memory which is currently not used for anything. This number should be small, because memory which is not used is simply wasted.
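As a quick sanity check against the sample output above (values in MB; small differences come from rounding):

3483 (used) + 299 (free) + 4100 (buff/cache) = 7882 (total)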

Monitoring Application's Memory Usage

In many cases we want to monitor a single application rather than the overall system, since overall memory usage also reflects OS-level operations. We can use the top command in Linux for this purpose. Following is a sample output of the top command.

A result of the top command

What you should actually focus on is the RES value and the %MEM value of the application (for this, first you need to identify the process id of the application using the ps -aux | grep "application_name" command). You can simply press "e" to toggle the unit used to display memory.

RES -- Resident Memory Size : The non-swapped physical memory a task has used.

%MEM -- Memory Usage (RES) : A task's currently used share of available physical memory (RAM).

According to the above discussion, the top command directly reveals the memory consumption of an application. For further information on the top command, you may refer to [4] [5].

Analyzing Java Application's Memory Usage

If your application is a Java application, then you might need to look at which objects consume the largest amount of memory. For that, one option is taking a heap dump of the Java application. You may use the following command to take a heap dump; prior to that, you should know the process id of the running Java application.
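If you do not already know the process id, the jps tool that ships with the JDK lists running Java processes along with their ids (this assumes the JDK tools are on your PATH):

jps -l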

jmap -dump:format=b,file=heap_dump.hprof <process_id>

Once you execute the command, you will get the file heap_dump.hprof containing the heap usage of the Java program. Since the file is in a binary format, you need a special tool to analyze it. A commonly used tool to inspect heap dumps is the Eclipse Memory Analyzer Tool (MAT) [6], which is built on top of the Eclipse platform. You just need to download the pack and extract it. Executing MemoryAnalyzer will open up a GUI application to analyze the heap dump. When you open the heap dump using MAT, the tool will prompt you to generate reports based on it. You may be interested in the Leak Suspects Report, which shows the large objects that take up a large portion of the memory.

Memory Analyzer Tool with Leak Suspects Report

Another interesting view of this tool is the Dominator Tree, which shows large objects along with the objects that keep them alive. According to the definition [7]:

An object x dominates an object y if every path in the object graph from the start (or the root) node to y must go through x.

In the dominator tree view, you will see a list of objects and the amount of memory they held when the heap dump was taken.

Dominator Tree view of Memory Analyzer

In the dominator tree view, you can expand each entry and see how it is composed. The two columns shown in this view are Shallow Heap and Retained Heap. By default the list is sorted by the Retained Heap value in descending order. The following definitions [8] clearly explain the meaning of those two values.

Shallow heap is the memory consumed by one object. An object needs 32 or 64 bits (depending on the OS architecture) per reference, 4 bytes per Integer, 8 bytes per Long, etc. Depending on the heap dump format the size may be adjusted (e.g. aligned to 8, etc...) to model better the real consumption of the VM.

Retained set of X is the set of objects which would be removed by GC when X is garbage collected.

Retained heap of X is the sum of shallow sizes of all objects in the retained set of X, i.e. memory kept alive by X.

Generally speaking, shallow heap of an object is its size in the heap and retained size of the same object is the amount of heap memory that will be freed when the object is garbage collected.

Therefore, in case of out-of-memory errors or high memory usage indications, you should definitely focus on the Retained Heap values in the dominator tree view.

Conclusion

In this post I wanted to give a clear idea of using several tools to analyze the memory usage of an application running on a Linux OS. I have used commands which come with Linux itself; however, you may also find tools which can be installed to analyze memory. In the last section I discussed how the Eclipse Memory Analyzer can be used to examine the heap usage of a Java program. I hope this will help you as well.

References

[1] Understanding Linux free memory : https://thecodecave.com/understanding-free-memory-in-linux/

[2] Usage of free memory : https://stackoverflow.com/questions/30772369/linux-free-m-total-used-and-free-memory-values-dont-add-up

[3] Ask Ubuntu clarification on free command : https://askubuntu.com/questions/867068/what-is-available-memory-while-using-free-command

[4] Super-user forum top command explanation : https://superuser.com/questions/575202/understanding-top-command-in-unix

[5] Linuxarea blog top command explanation : https://linuxaria.com/howto/understanding-the-top-command-on-linux

[6] Eclipse Memory Analyzer : https://www.eclipse.org/mat/

[7] MAT Dominator Tree : https://help.eclipse.org/mars/index.jsp?topic=%2Forg.eclipse.mat.ui.help%2Fconcepts%2Fdominatortree.html

[8] MAT Shallow Heap and Retained Heap explanation : https://help.eclipse.org/mars/index.jsp?topic=%2Forg.eclipse.mat.ui.help%2Fconcepts%2Fdominatortree.html

Inspecting Solr Index in WSO2 API Manager

Introduction

The Apache Solr project [1] helps you to run a full-featured search server. You can also integrate Solr with your project to make searching faster. In WSO2 API Manager, Solr is used to make searching faster in the Store and Publisher. The Solr index keeps the frequently used metadata of APIs; to retrieve complete information about an API, API Manager then uses its database. This mechanism makes searching faster and puts less burden on the database.


However, in some situations things may go wrong. We have seen several cases where the Solr index does not tally with the information in the database. Due to that, when displaying complete information about an API, you may see inconsistent information. In such situations you may want to inspect the Solr index of API Manager.

Setting Up Solr Server

Setting up the Solr server is quite easy, as it's a matter of downloading the binary distribution from the project page [1]. The important thing here is to make sure you download the proper version. WSO2 API Manager 2.0.0 uses Solr version 5.2.1. I figured that out by going through the API Manager release tag pom, identifying the registry version and searching the registry pom file.

Once you download the binary package, extract it. You can start the Solr server by going to the solr-5.2.1/bin directory and executing “./solr start”. The Solr server will start as a background process. Then access its admin UI at “http://localhost:8983/solr” in your browser.

Inspecting WSO2 API Manager Index

Before doing so, you must stop both WSO2 API Manager and the Solr server. To stop the Solr server, execute the command “./solr stop” inside the bin directory. Then you need to copy the Solr indexing configs and the index from API Manager.

  • To copy the configs, go to the location “APIM_HOME/repository/conf/solr” and copy “registry-indexing” to the “solr-5.2.1/server/solr” folder.
  • To copy the indexed data, go to the location “APIM_HOME” and copy the “solr” folder to the same folder where “solr-5.2.1” resides. This is done to comply with the “dataDir” value in the “solr-5.2.1/server/solr/registry-indexing/core.properties” file.

Now start the Solr server and go to the admin UI. You should see a drop-down on the left pane with a “registry-indexing” menu item. Select “registry-indexing” and you will now be able to query the indexed data by going to the Query section. To query the Solr index you need to use a specific query language, which is not actually difficult to understand. I'm not going to discuss the query language too much here; it's up to you to refer to [2] and learn it. You can try out those queries from the admin UI directly.
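For example, based on the fields used by the Java client later in this post, a query such as the following should list API artifacts in the index (the field name and value are assumptions drawn from that client code, not from the Solr documentation):

mediaType_s:"application/vnd.wso2-api+xml"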

registry-indexing in Solr admin UI

Writing a Java client to query information

In some cases, you may need to write a client which can talk to a Solr server and retrieve results. So here I am giving an example Java client which you can use to retrieve results from a Solr server [3]. However, I am not going to explain the code in detail, because I believe it's self-explanatory.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.buddhima.solr</groupId>
<artifactId>solr-testing</artifactId>
<version>1.0-SNAPSHOT</version>
<dependencies>
    <!-- https://mvnrepository.com/artifact/org.apache.solr/solr-solrj -->
    <dependency>
        <groupId>org.apache.solr</groupId>
        <artifactId>solr-solrj</artifactId>
        <version>7.1.0</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.solr/solr-common -->
    <dependency>
        <groupId>org.apache.solr</groupId>
        <artifactId>solr-common</artifactId>
        <version>1.3.0</version>
    </dependency>
</dependencies>
</project>

package com.solr.testing;

import java.io.IOException;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocumentList;

/**
 * Created by buddhima.
 */
public class SolrTesting {

    public static void main(String[] args) throws IOException, SolrServerException {
        // Default Solr port: 8983, and APIM using 'registry-indexing'
        SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/registry-indexing").build();

        SolrQuery query = new SolrQuery();
        query.setQuery("*:*");

        // Fields use for filtering as a list of key:value pairs
        query.addFilterQuery("allowedRoles:internal/everyone", "mediaType_s:application/vnd.wso2-api+xml");

        // Fields to show in the result
        query.setFields("overview_name_s", "overview_status_s", "updater_s", "overview_version_s", "overview_context_s");

        // Limit the query search space
        query.setStart(0);
        query.setRows(500);

        // Execute the query and print results
        QueryResponse response = client.query(query);
        SolrDocumentList results = response.getResults();
        for (int i = 0; i < results.size(); ++i) {
            System.out.println(results.get(i));
        }
    }
}

In addition to that, you can refer to [4] [5] for further learning on Solr syntax.

Conclusion

In this post I have discussed the use of Solr in WSO2 API Manager and how to inspect the existing Solr index. In addition, I have shown how to construct a Java client which can talk to a Solr server. I hope that the above explanation will help you to solve issues with Solr indexing.

Special thanks go to WSO2 support for providing guidance.

References

[1] Apache Solr project : http://lucene.apache.org/solr/

[2] Solr query syntax : http://www.solrtutorial.com/solr-query-syntax.html

[3] Using SolrJ : http://www.solrtutorial.com/solrj-tutorial.html

[4] Solr Query Syntax : http://yonik.com/solr/query-syntax/

[5] Solr df and qf explanation : https://stackoverflow.com/questions/17363677/solr-df-and-qf-explanation

JSON Split Aggregate with WSO2 ESB

Introduction

Split-Aggregate (Scatter-Gather) is a common messaging pattern [1] used in the enterprise world. In the split-aggregate pattern, a client's request is sent to multiple endpoints simultaneously. The responses from those endpoints are aggregated and sent back as a single response to the client. You will find plenty of use cases where this scenario comes into play when you try to integrate enterprise systems.


WSO2 ESB (currently a part of WSO2 EI) is a well-known middleware product used to integrate enterprise systems. It is also known for its comprehensive middleware stack, which comprises all the functionality you need for enterprise integration. WSO2 ESB provides an in-built set of mediators for you to achieve this commonly used Split-Aggregate pattern: the Iterate Mediator, the Clone Mediator and the Aggregate Mediator. You will find a sample use case of those mediators in this documentation [2].

Existing Problem

The existing mediators provide good support for Split-Aggregate scenarios when you are working with XML payloads. However, the current trend is more towards using JSON payloads for message exchange. Although the existing mediators can still be used with JSON payloads, they do not provide convenient support: you need to map your JSON payload to an XML payload, and this conversion most of the time adds extra burden to the mediation logic.

In this post I am discussing two mediators which are optimized for JSON payload handling in Split-Aggregate scenarios: the Json Iterate Mediator and the Json Aggregate Mediator. These mediators handle JSON payloads natively and do not convert them to XML (native JSON support). Please note that these mediators do not come with WSO2 ESB out of the box. You can find the relevant documentation at this location [3].

Configuring Mediators

To use these mediators, you need to build the source code at [3] and get the resultant jar files. Put the Json (Iterate/Aggregate) Mediator-1.0.0.jar files into the ESB_HOME/repository/components/dropins folder. Along with those, add json-path-2.1.0.jar [4], json-smart-2.2.1.jar [5] and accessors-smart-1.1.jar [9] to the same location. Then start WSO2 ESB (sh bin/wso2server.sh).

Sample Scenario

Once you add those artifacts to the ESB, you can use the two new mediators just like the in-built mediators. The respective XML tags are <jsonIterate> and <jsonAggregate>. In this post I'm showing a sample configuration using those mediators and describing it briefly. The same scenario is discussed in a more descriptive manner at [3].

<api xmlns="http://ws.apache.org/ns/synapse" name="sampleApi" context="/sample">
<resource methods="POST" uri-template="/*">
<inSequence>
    <log level="full"/>
    <jsonIterate continueParent="false" preservePayload="true" expression="$.messages" attachPath="$.messages">
        <target>
            <sequence>
                <log level="full"/>
                <header name="To" value="http://www.mocky.io/v2/58d6459b100000e601949cb7"/>
                <call/>
            </sequence>
        </target>
    </jsonIterate>
    <log level="full"/>
    <jsonAggregate>
        <completeCondition>
            <messageCount min="-1" max="-1"/>
        </completeCondition>
        <onComplete expression="$.message" enclosingElementProperty="responses">
            <log level="full"/>
            <send/>
        </onComplete>
    </jsonAggregate>
</inSequence>
</resource>
</api>

 

The above example shows how to do split-aggregate on a message received by an API. You can send the following request payload to the API created by the ESB at http://localhost:8280/sample

curl -X POST \
  http://localhost:8280/sample \
  -H 'content-type: application/json' \
  -d '{"originator":"my-company","messages":[{"country":"Sri Lanka","code":"94"},{"country":"America","code":"01"},{"country":"Australia","code":"61"}]}'

The expression on the jsonIterate mediator decides where to split the message payload. It should be written as a JSON Path [6]. Within the sequence inside the jsonIterate mediator, you will find the split message payloads. They are sent to the backend URL given in the config. You can refer to additional configuration options for the JSON Iterate mediator at [7].

Each response comes into the jsonAggregate mediator. In the expression of the JSON Aggregate mediator, you need to specify which part of the response should be taken for aggregation. This expression is again a JSONPath expression. Once the completion condition is satisfied, the aggregated message comes into the onComplete sequence, where you can do further processing on the message. If you are more interested, you can look into the documentation [8], which gives a complete guide on configuring the JSON Aggregate mediator.

Conclusion

Split-Aggregate is a very common message exchange pattern in the enterprise world, and JSON is becoming a more popular message format across the enterprise world too. However, WSO2 ESB lacks convenient support for JSON message exchange in split-aggregate scenarios. To cater to that requirement I have built two custom mediators which make life easier. Those two mediators can be configured to do split-aggregate with JSON payloads without converting them to XML (native JSON support).

References

[1] EIP patterns reference : http://www.enterpriseintegrationpatterns.com/patterns/messaging/BroadcastAggregate.html

[2] WSO2 ESB Doc : https://docs.wso2.com/display/ESB500/Split-Aggregate+Pattern

[3] GitHub repository : https://github.com/Buddhima/Json-EIP-Mediators

[4] json-path-2.1.0 : https://mvnrepository.com/artifact/com.jayway.jsonpath/json-path/2.1.0

[5] json-smart-2.2.1 : https://mvnrepository.com/artifact/net.minidev/json-smart/2.2.1

[6] JSON Path documentation : https://github.com/jayway/JsonPath/blob/json-path-2.1.0/README.md

[7] JSON Iterate Mediator documentation : https://github.com/Buddhima/Json-EIP-Mediators/tree/master/JsonIterateMediator

[8] JSON Aggregate Mediator documentation : https://github.com/Buddhima/Json-EIP-Mediators/tree/master/JsonAggregateMediator

[9] accessors-smart-1.1.jar : https://mvnrepository.com/artifact/net.minidev/accessors-smart/1.1

Kubernetes and related technologies

Introduction

For this experiment I have used an Ubuntu 16.04 machine. I believe an Ubuntu machine is still the most convenient environment to play around with these kinds of technologies. I am not going to go deep into any of these technologies; I am focusing mostly on the Kubernetes commands which I got familiar with recently.

 

Install Docker

First you need to set up the Docker environment on your machine to build the Docker image. For this post, I'm using a NodeJS server which responds with a simple text message. I first created the NodeJS application locally and used the following Dockerfile to create a Docker image of it. You need to put the Dockerfile in the same directory where the NodeJS project resides.

The sample Dockerfile I used is as follows:

FROM node:boron

WORKDIR /usr/src/app

COPY package.json .

RUN npm install

COPY . .

EXPOSE 3000

CMD [ "npm", "start" ]

You can refer to this article for installing and getting familiar with Docker (https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-16-04).

Once you have created the Docker image, push it to Docker Hub so that Kubernetes can pick it up.
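The commands I used for that are roughly as follows (the image name matches the one referenced later with kubectl run; the buddhima Docker Hub account is a placeholder for your own):

docker build -t buddhima/node-web-app:v1 .
docker login
docker push buddhima/node-web-app:v1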

Install Virtualbox

You can use the Ubuntu Software Center to install VirtualBox. Alternatively, you can install VirtualBox from the command line (https://askubuntu.com/questions/367248/how-to-install-virtualbox-from-command-line). I personally recommend VirtualBox compared to other hypervisors.

Install minikube and kubectl

You may wonder why I asked you to install VirtualBox. The reason is Minikube. At the moment, the recommended way of trying out Kubernetes is using Minikube with VirtualBox. To install minikube and kubectl, please follow the instructions given in this document (https://kubernetes.io/docs/tasks/tools/install-minikube/). While doing that, please make sure to install minikube first and then install a kubectl version which supports it.

Minikube gives you a single-node cluster in which you can create a new Kubernetes deployment. Docker is a dependency when using Kubernetes (rkt can be used too). Once Minikube is set up, you need to start it using the minikube start command. Then you can interact with the Kubernetes cluster using kubectl commands. Here I have listed some important kubectl commands.

For the Docker image I created, I used the following command to create a Kubernetes deployment:

kubectl run my-test-app --image=docker.io/buddhima/node-web-app:v1 --port=3000

Other useful commands;

kubectl get <resource_type> : get information about resources of a type. The resource type can be nodes/deployments/pods/services etc.

kubectl describe <resource_type> : get descriptive information about a resource type

kubectl describe <resource_type>/ID : get descriptive information about a single resource given by the ID

kubectl logs : print the logs from a container in a pod

kubectl exec : execute a command on a container (e.g. kubectl exec -it POD_NAME -c CONTAINER_NAME bash to open a bash shell in a container of a pod)

 

Deployment : a deployment is a configuration which instructs Kubernetes how to create/update instances of your app.

Pod : a pod is a collection of one or more application containers which are tightly coupled. The containers in a pod share the same IP address and port space, and a pod in a Kubernetes cluster has a unique IP.

Service : a service is a logical set of pods, defined by YAML/JSON. Pods are selected by a LabelSelector. The service types are ClusterIP, NodePort, LoadBalancer and ExternalName. This abstraction allows pods to die and be replaced, while the service keeps matching the set of pods using labels and selectors (see the sketch below).
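As an illustration of such a service definition, a minimal NodePort service selecting pods by a label might look like this (the name, label and ports are placeholders, not taken from the deployment created earlier):

apiVersion: v1
kind: Service
metadata:
  name: my-test-app
spec:
  type: NodePort
  selector:
    app: my-test-app
  ports:
    - port: 3000
      targetPort: 3000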

 

kubectl expose deployment/<deployment_name> --type="NodePort" --port 8080 : create a new service and expose it to external traffic

kubectl label pod POD_NAME app=foo : add a new label to a pod

kubectl delete <resource_type> <id> : delete a resource given by its id

kubectl set image deployments/<deployment_name> <container_name>=docker.io/buddhima/node-web-app:v2 : set the container image to the given Docker Hub image

kubectl rollout status deployment/<deployment_name> : confirm the update status

kubectl rollout undo deployment/<deployment_name> : undo the rollout update

kubectl scale deployment/<deployment_name> --replicas=4 : scale the deployment to 4 replicas. After scaling, use kubectl get pods -o wide to view the pods' status

Conclusion

The objective of this post is to give you a summarized set of important Docker and Kubernetes related commands. I have talked mostly about the Kubernetes commands, which might help you in the future.

Setting up XDebug with Joomla

Introduction

Joomla! CMS is a platform for website development which is used by quite a large number of people today. I started using it a few years back and have contributed in many ways. During development, we come across situations where we need to look into variables' values. In some situations I just print the variable's value; however, in complex situations the help of a debugger is essential. There are several methods to debug Joomla during the development process, and most of them are mentioned in this reference article [1]. In this article I am describing one of those methods in detail.

Pre-requisites

In my case I had the following setup;

  • Joomla CMS installed
  • PhpStorm on Windows
  • XAMPP, which runs Apache and MySQL

Now it’s time to start setting up.

Configuring XDebug on XAMPP

  • Download the Xdebug library that suits your PHP version and OS (https://xdebug.org/download.php)
  • Put the library in the php/ext folder
  • Add the Xdebug configuration to the end of the php.ini file
  • Add the desired host and port
  • Restart the XAMPP server and check phpinfo to verify the xdebug section

A sample Xdebug configuration for the php.ini file is shown below [2]. “zend_extension” is the path to the downloaded library.

[XDebug]
zend_extension = "c:\xampp\php\ext\php_xdebug-2.5.5-7.1-vc14.dll"
xdebug.remote_autostart = 1
xdebug.profiler_append = 0
xdebug.profiler_enable = 0
xdebug.profiler_enable_trigger = 0
xdebug.profiler_output_dir = "c:\xampp\tmp"
;xdebug.profiler_output_name = "cachegrind.out.%t-%s"
xdebug.remote_enable = 1
xdebug.remote_handler = "dbgp"
xdebug.remote_host = "127.0.0.1"
xdebug.remote_log = "c:\xampp\tmp\xdebug.txt"
xdebug.remote_port = 9000
xdebug.trace_output_dir = "c:\xampp\tmp"
; 3600 (1 hour), 36000 = 10h
xdebug.remote_cookie_expire_time = 36000

Configuring Web Browser

  • Install the xdebug-helper extension in the Chrome browser [3] (there are similar extensions for other browsers such as Firefox)
  • Go to the extension options and change the IDE key to PHPSTORM
  • Once you are logged in to Joomla, switch the extension to Debug mode

Configuring Joomla to debug

Go to Global Configuration -> System and enable Debug System

Configuring PhpStorm

There are several ways of configuring remote debugging from the IDE side. Here I discuss setting it up as a remote debugger.

  • Go to the debug configuration
  • Click the + sign and select remote debugger
  • Click servers and add the server's debug configuration (Xdebug host and port)
  • Add the correct IDE key and start the debugger
  • Put some breakpoints and perform some actions in Joomla!

Conclusion

There are several other ways of debugging Joomla, as mentioned in [1]. Depending on the operating system, IDE and other factors, you may need to use different options. I believe this article will help you to quickly set up debugging for Joomla! development.

Reference

[1] How to debug your code – https://docs.joomla.org/How_to_debug_your_code

[2] Edit php.ini for XDebug – https://docs.joomla.org/Edit_PHP.INI_File_for_XDebug

[3] xdebug-helper – https://chrome.google.com/webstore/detail/xdebug-helper/eadndfjplgieldjbigjakmdgkmoaaaoc