Simple Fault Statistic Dashboard for WSO2 API Manager with ELK

Introduction

WSO2 API Manager (WSO2 AM) [1] is a fully open-source product in the WSO2 stack that provides a complete API management solution. It covers API creation, publishing, and managing all aspects of an API and its lifecycle, and is ready for massively scalable deployments. Several WSO2 products, including WSO2 AM, provide analytics support out of the box via WSO2 analytics components.

On the other hand, the ELK stack is a very powerful and popular product suite in the analytics domain. In particular Elasticsearch, the heart of the ELK stack, provides a convenient way to integrate with other systems with the help of the other components in the stack. One key thing to remember when using the ELK stack is that you need to stick with the same version across all products [2]. For this article I am using version 6.0.1 of the ELK stack.


Use case

This was my first experiment with the ELK stack. For this experiment I used the following products from the ELK stack:

  1. Filebeat
  2. Logstash
  3. Elasticsearch
  4. Kibana

WSO2 API Manager logs almost all errors in its main log (a.k.a. the carbon log). The goal of this experiment is therefore to identify faulty situations in the carbon log and use the ELK stack to analyse them. I decided to make it work with the following frequent errors:

  1. Deployment failures of APIs : when there is incorrect syntax in the synapse configuration.
  2. Message building failures : when the incoming message payload does not match the given content type.
  3. Backend connection failures : when the API fails to establish a connection with the given backend service.
  4. Authentication failures : when the access token is missing or invalid when calling the API.

In the upcoming topics, I am going to show how to use each component of the ELK stack to implement this use case.

Plan


The carbon log keeps growing as the APIM server runs. On the same server, Filebeat is set up to read the carbon log. Filebeat pushes the log entries to Logstash for filtering. After filtering, Logstash pushes the logs to Elasticsearch for indexing. For visualization, Kibana is set up to retrieve data from Elasticsearch. This plan shows how each component of the ELK stack fits together to provide a complete solution.

In the next sections, I will describe the relevant configuration for each component and what it means, so that you can customize it for your environment (here I am running everything on the same host). I assume you have already set up WSO2 APIM and have some understanding of carbon logs.

Setting up filebeat

The responsibility of Filebeat is to read logs from the file and start the pipeline. You can download Filebeat from the website [3], and either run it manually or set it up as a service. Depending on your setup, you need to place the configuration file accordingly. The following are the segments of the filebeat.yml file that need to be modified. You will find a template file with the name “filebeat.reference.yml”.


filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /home/buddhima/telco/expreiments/elk/latest/wso2am-2.0.0/repository/logs/wso2carbon.log

. . .

#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

. . .

output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

In the above configuration, the log-reading prospector is enabled and pointed at the carbon log file of the WSO2 APIM. By default the direct output to Elasticsearch is enabled, but according to the plan we need to send logs through Logstash for filtering. Therefore I commented out output.elasticsearch and the hosts line that follows it, and uncommented output.logstash and its hosts line. I left the hostnames and ports at their defaults, as I have done this on the same machine. Once you complete this, you can start Filebeat with the following command (for the standalone version). It picks up filebeat.yml as the default configuration (which I have modified).


./filebeat run
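
Before starting Filebeat, you can optionally verify the setup. The following test subcommands are available in the Filebeat 6.x CLI; this is just a quick sanity check.

# Validate the filebeat.yml syntax
./filebeat test config

# Check connectivity to the configured Logstash output
./filebeat test output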

Setting up logstash

Logstash plays a major role in this setup: it is capable of filtering logs according to given criteria. You can download Logstash from the website [4]. Logstash processes the log input from Filebeat in input -> filter -> output order. I created a logstash-beat.conf file to hold the configuration. The following is the Logstash configuration that detects the failures mentioned above.

 

You can download the complete logstash-beat.conf file together with the other resources [7].
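
Here is a minimal sketch of the shape of logstash-beat.conf, reconstructed from the description below. The grok pattern and the match strings for each error subcategory are illustrative assumptions; check your own carbon log for the exact wording.

input {
  # Receive events from Filebeat on the default beats port
  beats {
    port => 5044
  }
}

filter {
  # Extract the timestamp and log level from the carbon log line
  grok {
    match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\]\s+%{LOGLEVEL:level}" }
  }

  if [level] == "ERROR" {
    # Deployment and message-building failures are recorded as ERROR logs
    if [message] =~ /deploying/ {                 # illustrative match string
      mutate { add_field => { "error_type" => "deployment_failure" } }
    } else if [message] =~ /building/ {           # illustrative match string
      mutate { add_field => { "error_type" => "message_build_failure" } }
    } else {
      drop { }    # discard all other ERROR logs
    }
  } else if [level] == "WARN" {
    # Backend connection and authentication failures are recorded as WARN logs
    if [message] =~ /connection/ {                # illustrative match string
      mutate { add_field => { "error_type" => "backend_connection_failure" } }
    } else if [message] =~ /authentication/ {     # illustrative match string
      mutate { add_field => { "error_type" => "auth_failure" } }
    } else {
      drop { }
    }
  } else {
    drop { }      # forward only the faulty logs to Elasticsearch
  }
}

output {
  # Index the filtered events in the local Elasticsearch instance
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}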

In the above config I have configured Filebeat as the input and Elasticsearch as the output. The filter section specifies how the input logs should be handled before being sent to the output. If you observe carbon logs closely, you will find that deployment failures and message-building failures are recorded as ERROR logs, whereas backend connection failures and authentication failures are recorded as WARN logs.

Within the filter section, logs are first separated by log level and then filtered into the subcategories. The grok filter maps some fields out of the log message, the mutate filter adds a new field, “error_type”, to the logs sent by Filebeat, and the drop filter discards logs that should not be forwarded.

To start logstash with the configuration, execute the following command:


./logstash -f ../logstash-beat.conf
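
If you want to check the configuration for syntax errors before starting, Logstash can validate it and exit (the --config.test_and_exit flag is available in Logstash 5.x and later):

./logstash -f ../logstash-beat.conf --config.test_and_exit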

Setting up elasticsearch

Now you are getting closer to the final part. Elasticsearch is the most crucial component of this setup; however, you do not need to configure it for this experiment. Just download it from the website [5] and execute the following to start it.


./elasticsearch
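
Once Elasticsearch is up, you can confirm that events are flowing through the pipeline by querying its REST API. The index name below assumes the default Logstash index pattern (logstash-YYYY.MM.DD), which my configuration does not override.

# Check that the node is up
curl 'http://localhost:9200/?pretty'

# List indices; a logstash-* index should appear once events arrive
curl 'http://localhost:9200/_cat/indices?v'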

Setting up kibana

Kibana is a powerful visualization tool in the ELK stack. You can point Kibana at different indices in Elasticsearch and visualize the data. Kibana shows a view of the processed data and can refresh its visualizations periodically, with a minimum interval of 5 seconds. As with Elasticsearch, you do not need to make any configuration changes to Kibana for this experiment. Just download it from the website [6] and execute the following command to start it:


./kibana

Once it starts, you can access the web UI at http://localhost:5601/ via a web browser. In the web UI, select logstash-* as the index pattern. Then you can go to the Visualize section, add the graphs you like, and aggregate them into a dashboard. I am going to stop here and let you compose the dashboard you wish. Remember that, according to the Logstash configuration, Elasticsearch receives only the faulty logs. The following is a sample dashboard I created based on the four types of errors in WSO2 API Manager; a query that mirrors its main aggregation is shown after the screenshots.

[Screenshot: configuring the refresh time in Kibana]

[Screenshot: the Kibana dashboard view]
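
If you prefer to check the underlying numbers directly, the breakdown shown on the dashboard can be fetched from Elasticsearch with a terms aggregation. The field name assumes the error_type field added by the Logstash filter; the .keyword subfield comes from the default Logstash index template.

curl -H 'Content-Type: application/json' 'http://localhost:9200/logstash-*/_search?pretty' -d '
{
  "size": 0,
  "aggs": {
    "errors_by_type": {
      "terms": { "field": "error_type.keyword" }
    }
  }
}'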

Conclusion

This was my first experiment with the ELK stack, and I have shown how to compose a fault analysis dashboard with it for WSO2 API Manager (here I used WSO2 APIM version 2.0.0). I have discussed the reason for using each component of the ELK stack and the configuration relevant to each. You will find the configuration files at the following link [7]. I hope this will be your starting point and will help you build a useful monitoring dashboard with the ELK stack.

References

[1] WSO2 API Manager Documentation : https://docs.wso2.com/display/AM200/WSO2+API+Manager+Documentation

[2] ELK product compatibility : https://www.elastic.co/support/matrix#matrix_compatibility

[3] Filebeat website : https://www.elastic.co/products/beats/filebeat

[4] Logstash website : https://www.elastic.co/products/logstash

[5] Elasticsearch website : https://www.elastic.co/products/elasticsearch

[6] Kibana website : https://www.elastic.co/products/kibana

[7] Resources : https://www.dropbox.com/s/k8b7js9jbrobxjj/elk-resources.zip?dl=0
