APIs for IoT and Fog Computing

Rajnish Kumar

Enabling API, IoT (Internet of Things), and Artificial Intelligence ecosystems for our customers, following the latest technology trends. Worked with clients for over 20 years in project management, enterprise application architecture, and application development. Extensive experience with IoT, microservices, APIs, SOA applications, Cloud, Amazon AWS, Big Data, analytics, artificial intelligence, and security.

IoT (Internet of Things) is transforming businesses of all kinds and driving a new revolution across industries. IoT devices generate terabytes of data, and handling this unprecedented volume, variety, and velocity of data requires a new kind of infrastructure to support the whole IoT ecosystem. Fog computing is the part of that ecosystem that supports large volumes of data with quick response times. As I explained in my previous blog, fog computing now plays a major role in IoT deployments. Fog is the intermediate platform that coordinates data transfer between cloud computing and edge computing (IoT). A fog node holds a small amount of data and has limited computing power; large data sets are stored in the cloud, and heavy computation is done there as well.

APIs (Application Programming Interfaces) play a major role in transferring data from edge devices (IoT) to fog nodes, and from fog nodes to the cloud (internet). APIs enable collaboration between these layers and help manage the volume, variety, and velocity of data across the IoT infrastructure.

APIs typically work over the HTTP/HTTPS protocol. They are lightweight and simple, and enabling an API consumes very few resources, so APIs can be exposed and consumed even on small systems without losing much capacity. This property makes APIs well suited to transferring data from edge devices (IoT) to fog nodes and from fog nodes to the cloud. The API's role is not merely mechanical; it is responsible for optimizing data transfer. Properly enabling APIs between these nodes increases the efficiency and effective computational power of all the IoT devices. The fog node is the intermediate node between the IoT device and the cloud, so it is responsible for receiving data from edge (IoT) devices and forwarding it to the cloud. Communication between edge (IoT) devices and the fog node is very frequent, and the data delivered by APIs drives all the intermediate, low-latency computation on the fog node.

The cloud is still the biggest stakeholder, holding all the data and performing the large computations for IoT devices. APIs deliver data from the fog node to the cloud at regular intervals for heavy computation. As edge (IoT) systems grow more complex, the fog layer's computational responsibility will increase, and APIs will be needed to move more data to the fog node and from the fog node to the cloud.
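The edge-to-fog-to-cloud relay described above can be sketched in a few lines. This is a minimal illustration under assumptions of my own, not a real deployment: the FogNode class, its batch size, and the cloud uplink callback are all hypothetical names chosen for the example.

```python
# Sketch of a fog node that accepts frequent edge readings and forwards
# them to the cloud in batches. All names here are hypothetical.

class FogNode:
    def __init__(self, cloud_uplink, batch_size=3):
        self.cloud_uplink = cloud_uplink  # callable that ships a batch to the cloud API
        self.batch_size = batch_size      # fog holds only a small amount of data
        self.buffer = []

    def ingest(self, reading):
        """Receive one reading from an edge (IoT) device via the fog API."""
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Send the buffered batch to the cloud for heavy computation."""
        if self.buffer:
            self.cloud_uplink(list(self.buffer))
            self.buffer.clear()

# Usage: collect what the "cloud" receives.
received = []
node = FogNode(cloud_uplink=received.append, batch_size=3)
for temp in [21.0, 21.5, 22.1, 40.9]:
    node.ingest({"sensor": "temp", "value": temp})
node.flush()  # push any remaining readings at the end of the interval
```

The design point is the one made above: the fog node buffers only a small working set and hands the bulk of the data to the cloud at intervals.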

API integration of IoT with fog and cloud computing.

Here are a few benefits of enabling APIs for IoT devices and fog nodes:

  • APIs provide the flexibility to connect any IoT device to a fog node, and any fog node to the cloud network.
  • APIs provide seamless connectivity between these systems.
  • APIs bring the whole IoT system into one seamless environment, which makes these systems much easier to debug.
  • APIs are easy to develop and deploy, so these systems are easy to maintain.
  • Provisioning IoT devices also becomes much easier by enabling APIs.
  • According to a Gartner study, security is one of the big concerns for IoT. APIs provide one seamless system and network, which helps mitigate this risk.

Fog Computing and Edge Computing


IoT, connected cars, and automated cars are getting a lot of traction in today's world, and all the big companies want to be part of this movement. All kinds of sensors are installed in these vehicles; the sensors generate terabytes of data, and that data must be computed for the vehicle to run smoothly. Connected cars and other IoT-based devices depend entirely on computing power and quick response times.

Sending data to the cloud and computing it there can be catastrophic: any network latency or processing delay might end in a bad result. For example, suppose your automated car is traversing a busy street and a person suddenly steps in front of it. In this scenario, any network latency or slowness in computation and analysis affects the decision and the subsequent action (applying the car's brakes).

For an IoT-based device, computing near the device reduces this risk. So how can we make this happen if all your computing power and all your data are in the cloud?

Fog Computing & Edge Computing

Fog Computing – In the context of IoT, pushing intelligence down to the local area network (LAN) and computing the data in an IoT gateway or fog node reduces the network latency risk. Fogging, or a fog network, is decentralized computing that stores data in the most logical and efficient place between the IoT device and the cloud.

In fog computing, data transported from the IoT device to the cloud passes through several steps:

  1. Signals from the IoT device are carried over wires to the I/O points of a programmable automation controller (PAC/PLC), which executes a control system program to automate the system.
  2. The control system program sends the data to a protocol gateway, which converts it into a protocol that internet systems understand, such as MQTT or HTTP.
  3. Finally, the data is sent to a fog node or IoT gateway on the LAN, which collects it and performs analysis and computation. The fog node may also store the data and transfer it to the cloud network for later processing and intelligence.

Edge Computing – Edge computing refers to any computing infrastructure close to the source of the data (i.e., the IoT device): making the IoT device smart and intelligent enough to make decisions near the data gateway. The role of edge computing is to process data, store it on the local device, and transfer it to the fog or cloud network. All of these processes are automated through a PAC (programmable automation controller) executing an on-board control system program. In edge computing, intelligence is literally pushed to the edge of the network, where the IoT device and the outside network first connect to each other.
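As a rough sketch of this division of labor, an edge device can run a simple local rule and forward only the readings that need deeper analysis upstream. The threshold and function names below are hypothetical, chosen purely for illustration of the idea, not taken from any real edge platform.

```python
def edge_filter(readings, threshold=100.0):
    """Process data locally on the edge device: keep normal readings in
    local storage and return only anomalous ones for the fog/cloud network."""
    local_store, forward_upstream = [], []
    for r in readings:
        # Local decision made right at the edge, with no network round trip.
        (forward_upstream if r > threshold else local_store).append(r)
    return local_store, forward_upstream

# Usage: only the two out-of-range readings travel upstream.
local, upstream = edge_filter([42.0, 250.5, 99.9, 101.2])
```

This is the latency win the post describes: the decision happens where the data is produced, and the fog/cloud layers receive only what they actually need.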

Mulesoft: Twilio API Integration


[Image: MuleSoft and Twilio logos]

Twilio is a cloud-based communication company that enables users to use standard web languages to build voice, VoIP, and SMS apps via a web API. Twilio provides a simple hosted API and markup language for businesses to quickly build scalable, reliable, and advanced voice and SMS communication applications. Twilio's telephony infrastructure enables web programmers to integrate real-time phone calls, SMS, or VoIP into their applications.

MuleSoft provides a cloud connector to integrate the Twilio API within MuleSoft. The cloud connector offers a simple and easy way to call these Twilio APIs and then use them as services within MuleSoft. The MuleSoft-Twilio connector gives developers a platform to develop and integrate their applications with Twilio easily and quickly.

Before starting the MuleSoft-Twilio integration, create your Twilio account and get your "ACCOUNT SID" and "AUTH TOKEN".

[Image: twilio-account]

Now download and install the Twilio connector into Anypoint Studio.

Anypoint Studio –> Help –> Install New Software

[Image: twilio-connector]

Configure pom.xml to pull the Twilio jar dependency in a Maven-based project.

Add the plugin in the plugin section and the dependency in the pom.xml file. This section is also added to pom.xml automatically when you drag the Twilio connector onto the Anypoint Studio canvas and use it in a flow.

<plugin>
    <groupId>org.mule.tools.maven</groupId>
    <artifactId>mule-app-maven-plugin</artifactId>
    <version>${mule.tools.version}</version>
    <extensions>true</extensions>
    <configuration>
        <copyToAppsDirectory>true</copyToAppsDirectory>
        <inclusions>
            <inclusion>
                <groupId>org.mule.modules</groupId>
                <artifactId>mule-module-apikit</artifactId>
            </inclusion>
            <inclusion>
                <groupId>org.mule.modules</groupId>
                <artifactId>mule-module-twilio</artifactId>
            </inclusion>
        </inclusions>
    </configuration>
</plugin>

Dependency tag:

<dependency>
    <groupId>org.mule.modules</groupId>
    <artifactId>mule-module-twilio</artifactId>
    <version>1.4</version>
</dependency>

Now configure the Twilio global elements in the mule-config.xml file to connect your application with Twilio:

<twilio:config name="Twilio" accountSid="${TwilioSID}" authToken="${TwilioAuthToken}" doc:name="Twilio">
<twilio:http-callback-config />
</twilio:config>

In the above code, TwilioSID and TwilioAuthToken come from your Twilio account.

The MuleSoft-Twilio connector provides a number of methods to integrate with your application. The image below shows some of the methods exposed by the connector.

[Image: twilio-method]

I am using the "send SMS message" method from the MuleSoft-Twilio connector in my example.

Now you can integrate Twilio into your application to send an SMS. Here is example code:

<logger message="#[payload.recipientPhoneNumber]" level="INFO" doc:name="Logger"/>
<twilio:send-sms-message config-ref="Twilio" accountSid="${TwilioSID}" body="Hello World Sending SMS from Twilio" from="+15555555555" to="#[payload.recipientPhoneNumber]" doc:name="Twilio"/>

The Twilio API does not support bulk SMS to multiple recipients, so to send messages to a list of recipients you must make one request per phone number. The best way to do this is to build an array of the recipients and iterate through each phone number.
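The per-recipient loop can be sketched like this. Note that `send_sms` is a stand-in for whatever sending mechanism you use (the MuleSoft connector call, or Twilio's own client library); it is not a real Twilio API, and the phone numbers are placeholders.

```python
def send_bulk(recipients, body, send_sms):
    """Twilio sends one message per request, so iterate over the
    recipient array and make one call per phone number."""
    results = []
    for number in recipients:
        results.append(send_sms(to=number, body=body))
    return results

# Usage with a stub sender that just records each call.
sent = []
send_bulk(["+12025550101", "+12025550102"],
          "Hello World Sending SMS from Twilio",
          lambda to, body: sent.append(to) or to)
```

In a Mule flow the same shape is typically achieved with a for-each scope around the Twilio connector call.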

Here is a small flow for the Twilio integration.

[Image: twilio-flow]

Here is the code for this flow:

<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns:twilio="http://www.mulesoft.org/schema/mule/twilio" xmlns:http="http://www.mulesoft.org/schema/mule/http" xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation"
xmlns:spring="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
http://www.mulesoft.org/schema/mule/twilio http://www.mulesoft.org/schema/mule/twilio/current/mule-twilio.xsd">

<http:listener-config name="HTTP_Listener_Configuration" host="0.0.0.0" port="8081" doc:name="HTTP Listener Configuration"/>
<twilio:config name="Twilio" accountSid="${TwilioSID}" authToken="${TwilioAuthToken}" doc:name="Twilio">
    <twilio:http-callback-config />
</twilio:config>
  <flow name="twilio-mulesoftFlow">
     <http:listener config-ref="HTTP_Listener_Configuration" path="/twilio" doc:name="HTTP"/>
     <set-payload value="" doc:name="Set Payload"/>
     <logger message="#[payload.recipientPhoneNumber]" level="INFO" doc:name="Logger"/>
     <twilio:send-sms-message config-ref="Twilio" accountSid="${TwilioSID}" body="#[payload]" from="+15555555555" to="+12222222222" doc:name="Twilio"/>
  </flow>
</mule>

If you are getting an exception, make sure the twilio-mulesoft jar is in the classpath and properly configured.

RAML: Schema Validation for APIs


When REST was first introduced, there was always a challenge in validating a request against prerequisite requirements. SOAP web services had XSD schema validation, but REST web services did not, and architects and developers had to face the challenge of implementing some kind of schema to validate their requests.

YAML-based RAML (RESTful API Modeling Language) was introduced in 2013. RAML gives you the flexibility to define schemas that validate requests and responses. This breakthrough helps architects and developers define schemas for REST APIs to validate requests and responses.

RAML schema validation can be defined in two formats:
1) XSD based
2) JSON based

Schemas can be declared in two ways inside RAML:
1) Inline schema definition
2) XSD or JSON schema definition file

A schema definition goes in the schema tag within the RAML file. The !include tag is used to pull in a schema file for a file-based schema definition within RAML.

JSON-based schema definition:

/car:
  post:
    description: Getting car info from Car Application
    body:
      application/json:
        schema: !include schemas/cars-schema-request.json 

XSD-based schema definition:

responses: 
      200:
        body:
          application/xml: 
              schema: !include schemas/vanrish-car-response.xsd

Inline schema definition:

/car:
  post:
    description: Getting car info from Car Application
    body:
      application/json:
        schema: |
	     {
             "type": "object",
             "$schema": "http://json-schema.org/draft-03/schema",
             "required": true,
             "properties": {              
              "vin": {
                  "type": "string",
                  "required": true
              },
              "model": {
                  "type": "string",
                  "required": true
              },
              "make": {
                  "type": "string",
                  "required": true              
              }
            }
          } 
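To see what the inline schema above enforces at runtime, here is a tiny hand-rolled check in Python. This is not a real JSON Schema library, just the draft-03 style "required string" rule from the schema above, and the sample VIN/model values are invented for the example.

```python
def validate_car(payload):
    """Check the rules from the inline schema above: an object with
    required string fields vin, model, and make."""
    errors = []
    if not isinstance(payload, dict):
        return ["payload must be an object"]
    for field in ("vin", "model", "make"):
        if field not in payload:
            errors.append(f"{field} is required")
        elif not isinstance(payload[field], str):
            errors.append(f"{field} must be a string")
    return errors

# Usage: a complete payload passes, an incomplete one reports each gap.
ok = validate_car({"vin": "1HGCM82633A004352", "model": "Accord", "make": "Honda"})
bad = validate_car({"vin": "1HGCM82633A004352"})
```

In a real API gateway this check is what the RAML runtime performs for you before your flow ever sees the request.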

Here are a few tips for using JSON-based schema validation.

1) If the request/response is object based, then in the schema it is defined as type: object.

"$schema": "http://json-schema.org/draft-03/schema",
"type": "object"

2) If the request/response is list based, then it is defined as:

"$schema": "http://json-schema.org/draft-03/schema",
"type": "array",
"items": {

3) If the request/response is a list containing objects, it is defined as:

"items": {
  "type": "object",
  "properties": {
    "vin": {
      "type": "string"
    }
  }
}

4) The type of an object's field can be defined as string, integer, or boolean:

"vin": {
  "type": "integer"
}

5) A field can be restricted to known values by defining an enum:

"isCdl": {
  "description": "State",
  "type": "string",
  "enum": [ "true", "false" ]
}

6) A field can be made mandatory by introducing the required attribute:

"name": {
  "type": "string",
  "required": true
}

7) Any field can be validated against a regular expression by defining a regex pattern:

"deviceName": {
  "type": "string",
  "pattern": "^/dev/[^/]+(/[^/]+)*$"
}

8) Maximum and minimum field lengths can be validated by defining the maxLength and minLength attributes:

"id": {
  "description": "A three-letter id",
  "type": "string",
  "maxLength": 3,
  "minLength": 3
}

9) Validation logic can be reused by importing a schema from a different file, referencing it like this:

"credentials": { "$ref": "app-credential.json#/definitions/credential" }

10) anyOf, allOf, oneOf, not: these four keywords bring logical processing primitives to schema validation.

{
  "anyOf": [
   { "type": [ "string", "boolean" ] },
   { "schema1": "#/definitions/nfs" },
   { "schema2": "#/definitions/pfs" }
 ]
}

Mulesoft Connector DevKit: Coding & Deployment


In my previous blog I explained the configuration and setup for the MuleSoft connector DevKit. In this blog I am going to explain how to write and deploy your connector. As I mentioned previously, DevKit is the platform for developing MuleSoft connectors, and it is powerful enough to build anything from a simple connector to an extremely complex one.

Here are the steps to develop a MuleSoft connector.

1) Create a project from Anypoint Studio.

[Image: connector-project-create]

2) Select "SDK Based" connector. This option supports standalone Java as well as REST-based APIs. Once you make this selection, the window below appears. Name your connector project, select the working directory, and then click Next.

[Image: connector-project-create-box]

3) In the next step, select the Maven Group Id and Artifact Id, then click Next.

4) Next, select an icon and logo for your connector, then click Finish.
[Image: connector-project-icon-select]

After you click Finish, the connector project is generated.

Two Java files are generated in your connector project. My project name is Vanrish, so it generated VanrishConnector.java and ConnectorConfig.java.

Generated VanrishConnector.java 


@Connector(name="vanrish", friendlyName="Vanrish")
public class VanrishConnector {

    @Config
    ConnectorConfig config;
 

In this code snippet, the annotation defines your connector's name and display name: "name" is the connector name, and "friendlyName" is the name displayed once you install the connector in Anypoint Studio. This annotated class is the main class for creating the connector.

The @Config line injects the config class that holds all configuration related to this connector.

Any method you want the connector to execute must be annotated with @Processor.


@Processor
public String getVehicleInfo(String deviceId) throws Exception {
return "Hello World"+deviceId;
}

Here is the full code snippet for this class:


package org.mule.modules.vanrish;

import org.mule.api.annotations.Config;
import org.mule.api.annotations.Connector;
import org.mule.api.annotations.Processor;
import org.mule.api.annotations.lifecycle.Start;
import org.mule.api.annotations.oauth.OAuthProtected;
import org.mule.modules.vanrish.config.ConnectorConfig;

@Connector(name = "vanrish", friendlyName = "Vanrish")
public class VanrishConnector {

 @Config
 ConnectorConfig config;

 @Start
 public void init() {
 }

 public ConnectorConfig getConfig() {
  return config;
 }

 public void setConfig(ConnectorConfig config) {
  this.config = config;
 }

 @Processor
 public String getVehicleInfo(String deviceId) throws Exception {
  return "Hello World" + deviceId;
 }
}

The second class defines the connector configuration. This class is annotated with @Configuration.

In this class I defined a couple of fields for accessing an external REST API from this connector.

I define the API URL and its version for use inside my annotated methods:


@Configurable
@Optional
@Default("https://platform.vanrish.com/api")
private String apiUrl;

@Configurable
@Optional
@Default("v1")
private String apiVersion;

Here are the annotation definitions for the connector:
@Configurable – allows the field to be configured
@Optional – the field is not mandatory
@Default – provides a default value for the field

Here is the full code snippet:


package org.mule.modules.vanrish.config;

import org.mule.api.annotations.components.Configuration;
import org.mule.api.annotations.Configurable;
import org.mule.api.annotations.param.Default;
import org.mule.api.annotations.param.Optional;

@Configuration(friendlyName = "Configuration")
public class ConnectorConfig {

/**
 * Vanrish API Url
 */

 @Configurable
 @Optional
 @Default("https://platform.vanrish.com/api")
 private String apiUrl;

 @Configurable
 @Optional
 @Default("v1")
 private String apiVersion;

 public String getApiUrl() {
  return apiUrl;
 }

 public void setApiUrl(String apiUrl) {
  this.apiUrl = apiUrl;
 }

 public String getApiVersion() {
  return apiVersion;
 }

 public void setApiVersion(String apiVersion) {
  this.apiVersion = apiVersion;
 }
}

For a more advanced connector, you can create a client Java class that uses the API URL and version above to call API methods and return results.

To build this project in Anypoint Studio, select the project and right-click. In the pop-up menu, select Anypoint Connector, then click Build Connector.

Steps: Right-click on project –> Anypoint Connector –> Build Connector
This is shown in the picture below.
[Image: connector-project-build]

This action builds your connector.

Follow the same steps to install your connector into Anypoint Studio.
Steps: Right-click on project –> Anypoint Connector –> Install or Update
This action installs your connector into Anypoint Studio.

After installing your connector, you can search for it by name in Anypoint Studio.

[Image: vanrish-connector]

Connector Testing
You can create a small flow in Anypoint Studio and test your connector.

Here is an example testing my connector:
[Image: vanrish-connector-flow]

DataWeave: A New Era in Mulesoft


DataWeave is a new data mapping tool that comes with the Mule 3.7 runtime. Before Mule 3.7, DataMapper was the tool for data mapping. DataWeave inherits some functionality from DataMapper, but because of DataMapper's limitations with complex mappings, DataWeave emerged with the Mule 3.7 runtime.
A DataWeave transformation has three sections:
1) Input
2) Output
3) Data transformation language

The data transformation language is based on a JSON-like syntax. If you are familiar with JSON, it is very easy to write DataWeave transformation logic and maintain it.

Here are some tips for writing and maintaining DataWeave transformation logic.

1) The default output of a DataWeave transformation is set into the payload, but you can easily change this from the dropdown and direct the output into a flow variable or session variable.
[Image: dataweave]

2) The output format of the transformed data is easy to define in DataWeave. If you are transforming data into XML, you just declare "%output application/xml" without touching the underlying transformation logic. In the same way, to produce JSON or any other format you just change the declaration, e.g. "%output application/json", "%output application/java", or "%output application/csv".

%dw 1.0
%output application/xml
%namespace ns0 http://schemas.vanrish.com/2010/01/ldaa

3) DataWeave transformation logic lets you skip null fields during data transformation, and it is purely declarative. Here is the declaration that skips null fields everywhere during the transformation: skipNullOn="everywhere".

%dw 1.0
%output application/xml skipNullOn="everywhere"
%namespace ns0 http://schemas.vanrish.com/2010/01/ldaa


4) DataWeave transformations can access flowVars and sessionVars directly in a transformation field.

orderParameter: {
    name: "MINOR_LI_ID",
    value: flowVars.payloadVar.minorLiId
}

5) DataWeave transformations can read property values directly from a properties file, accessed just as they are elsewhere in a flow.

teamName: "$($.teameName)${teamNameSuffice}",

Here teamNameSuffice is defined in the properties file.

6) DataWeave transformations allow conditional logic for each field, and it is very easy to transform your data based on these conditions. Here is an example of conditional logic for the field partnershipType.

partnershipType: "NEW" when $.partnership-type == "NEW"
                 otherwise "USED" when $.partnership-type == "USED"
                 otherwise "EM" when $.partnership-type == "EM"
                 otherwise "PU" when $.partnership-type == "PU"
                 otherwise "PN" when $.partnership-type == "PN"
                 otherwise $.partnership-type,

Here is another example:

creativeType: "GRAPHIC" when ($.creative-type == "GRAPHIC" or $.creative-type == null)
              otherwise "TEMPLATED_AD" when $.creative-type == "TEMP_AD"
              otherwise "AD_TAG" when $.creative-type == "AD_TAG"
              otherwise "",
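The when/otherwise chain behaves like a cascade of checks with a default. Purely as an illustration of that control flow (this is not DataWeave, and the function name is my own), the creativeType mapping above reads like:

```python
def map_creative_type(creative_type):
    """Mirror the DataWeave when/otherwise chain for creativeType."""
    if creative_type == "GRAPHIC" or creative_type is None:
        return "GRAPHIC"
    if creative_type == "TEMP_AD":
        return "TEMPLATED_AD"
    if creative_type == "AD_TAG":
        return "AD_TAG"
    return ""  # the final otherwise: default to an empty string
```

Each `when` is tried in order, and the trailing `otherwise` catches everything that fell through.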

7) In DataWeave transformation logic you can call global functions and transform a field based on the function's output. This is one way to call Java code from a DataWeave transformation.

Here is an example. global-functions is a tag in the mule-config file where you can define functions for the entire Mule application:

<global-functions>

def toUUID() {
return java.util.UUID.randomUUID().toString()
}

def getImageType(imageName) {
return imageName.substring(imageName.lastIndexOf('.')+1)
}

</global-functions>

Now you can call these functions inside DataWeave transformation logic:

imageType: getImageType($.origin-name as :string) when $.origin-name != null otherwise "",
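The two global functions above are plain UUID/string helpers. As an illustration only, the same logic in Python (the function names are direct translations, not part of any Mule API) looks like:

```python
import uuid

def to_uuid():
    """Equivalent of the toUUID() global function: a random UUID string."""
    return str(uuid.uuid4())

def get_image_type(image_name):
    """Equivalent of getImageType(): everything after the last dot,
    matching Java's substring(lastIndexOf('.') + 1)."""
    return image_name[image_name.rfind(".") + 1:]
```

Like the Java version, `get_image_type` returns the whole name unchanged when there is no dot, since `rfind` returns -1 in that case.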

Mulesoft: Flow Execution Time


A MuleSoft application is built from flows, and every flow has its own execution time. There are a couple of ways to calculate it, but MuleSoft provides one of the easiest: an interceptor. The timer interceptor (<timer-interceptor/>) is a Mule interceptor that calculates a flow's execution time.

Here is the flow diagram for the timer interceptor:

[Image: muleTimerInterceptorFlow]

Here is the code to implement the timer interceptor in your application:


<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns:http="http://www.mulesoft.org/schema/mule/http" xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:doc="http://www.mulesoft.org/schema/mule/documentation"
      xmlns:spring="http://www.springframework.org/schema/beans"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd">

    <http:listener-config name="HTTP_Listener_Configuration" host="0.0.0.0" port="8081" basePath="/demo" doc:name="HTTP Listener Configuration"/>

    <flow name="muleTimerInterceptorFlow">
        <http:listener config-ref="HTTP_Listener_Configuration" path="/" doc:name="HTTP"/>
        <timer-interceptor/>
        <set-payload doc:name="Set Payload" value="Hello World"/>
        <logger message="#[payload]" level="INFO" doc:name="Logger"/>
    </flow>
</mule>

The <timer-interceptor/> tag displays the time in milliseconds.

To customize the flow execution time report, replace <timer-interceptor/> with <custom-interceptor>, in which you specify your custom interceptor Java class:

<custom-interceptor class="com.vanrish.interceptor.TimerInterceptor" />

Here is the flow diagram for the custom timer interceptor:

[Image: muleTimerCustomInterceptorFlow]

Here is the mule-config.xml code for the custom timer interceptor:


<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns:http="http://www.mulesoft.org/schema/mule/http" xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:doc="http://www.mulesoft.org/schema/mule/documentation"
      xmlns:spring="http://www.springframework.org/schema/beans"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd">

    <http:listener-config name="HTTP_Listener_Configuration" host="0.0.0.0" port="8081" basePath="/demo" doc:name="HTTP Listener Configuration"/>

    <flow name="muleTimerInterceptorFlow">
        <http:listener config-ref="HTTP_Listener_Configuration" path="/" doc:name="HTTP"/>
        <custom-interceptor class="com.vanrish.interceptor.TimerInterceptor" />
        <set-payload doc:name="Set Payload" value="Hello World"/>
        <logger message="#[payload]" level="INFO" doc:name="Logger"/>
    </flow>
</mule>

Here is the Java TimerInterceptor code for the custom interceptor tag:


package com.vanrish.interceptor;

import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;

import org.mule.api.MuleEvent;
import org.mule.api.MuleException;
import org.mule.api.interceptor.Interceptor;
import org.mule.processor.AbstractInterceptingMessageProcessor;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

/**
 * <code>TimerInterceptor</code> simply times and displays the time taken to
 * process an event.
 */
public class TimerInterceptor extends AbstractInterceptingMessageProcessor
        implements Interceptor {

    /** Logger used by this class. */
    private static Log logger = LogFactory.getLog(TimerInterceptor.class);

    public MuleEvent process(MuleEvent event) throws MuleException {
        long startTime = System.currentTimeMillis();
        DateFormat dateFormat = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss");
        String start = dateFormat.format(new Date());
        System.out.println(start);

        MuleEvent resultEvent = processNext(event);

        String end = dateFormat.format(new Date());

        if (logger.isInfoEnabled()) {
            long executionTime = System.currentTimeMillis() - startTime;
            logger.info("Custom Timer : " + resultEvent.getFlowConstruct().getName()
                    + " started at " + start + " and ended at " + end
                    + "; it took " + executionTime + "ms to process event ["
                    + resultEvent.getId() + "]");
        }
        return resultEvent;
    }
}
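The Java interceptor boils down to "record the start, run the event, log the elapsed time". As a language-neutral illustration of that pattern (purely a sketch, not Mule code; the decorator and attribute names are my own), the same idea in Python:

```python
import time

def timed(fn):
    """Wrap a callable and record how long it took, the way the timer
    interceptor wraps a flow step. Illustrative only."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)          # the "processNext(event)" step
        elapsed_ms = (time.perf_counter() - start) * 1000
        wrapper.last_elapsed_ms = elapsed_ms  # stash for logging/inspection
        return result
    return wrapper

@timed
def set_payload():
    # Stand-in for the flow's work, like the Set Payload step above.
    return "Hello World"
```

The interceptor is just this wrapper inserted into the flow's processing chain, with the measurement logged instead of stored.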

Cloud Security


Most companies now want to embrace cloud computing, but security remains one of their main concerns. Many CEOs and CTOs still feel uncomfortable using cloud computing. As an architect, I also feel this is one of the main areas a cloud-based application should focus on. According to Gartner, there are seven risk factors for cloud computing.

1. Privileged user access
2. Regulatory compliance
3. Data location
4. Data segregation
5. Recovery
6. Investigative support
7. Long-term viability

Different types of cloud carry different levels of risk; the public cloud is the riskiest of them all.
Despite these risks, companies are still pursuing cloud implementations in their organizations. Their reasoning: risk is everywhere, and you should mitigate or overcome it.
The cloud is an on-demand service from a provider to a consumer, so there should be a clear shared understanding of cloud security between the two, captured in a good service level agreement and contract.
Here are a few ways to mitigate risks in the cloud.

1. Secure logon – Make sure every user in the cloud has a unique user ID with proper authorization. Accounts should be managed properly, ideally against a directory service that provides access control.

2. Encrypted data – When you access data in the cloud, particularly SAAS on a public cloud, the data should be properly encrypted and should comply with government privacy laws (GLBA, DPPA, FCRA, HIPAA, etc.).

3. Secure data backup – Data backup is one of the key areas where provider and subscriber should focus on security. The SLA (Service Level Agreement) should spell out clearly how backup data is secured, and secure tools should be used to transfer, back up, and restore data in the cloud.

4. Virtualization security – Virtualization is the backbone of cloud computing. There are multiple risks associated with hardware or software virtualization, such as VM (virtual machine) isolation, hypervisor vulnerabilities, and multi-tenancy. To mitigate these risks there should be strong, clearly defined isolation between VMs, controlled administrative access to them, and good reporting and logging tools for each VM and for administration.

5. Application security – Application security is a big challenge in every layer of the cloud (SAAS, PAAS, or IAAS), since application vulnerabilities exist at almost every level. To mitigate them, focus on the points below.

a) Secure the communication between the application host machine and the consumer.

b) Audit and review the application's security in the cloud at each stage of the SDLC (Software
    Development Life Cycle).

c) Agree on a clear application security SLA (Service Level Agreement) between the
    cloud provider and the consumer for each cloud layer (SAAS, PAAS, and IAAS).

d) Encrypt application data in transit over the network.
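Point 2 above can be sketched in code. The following is a minimal Java example using AES-256-GCM via the standard javax.crypto API; the class and method names are hypothetical, and in a real deployment the key would come from a KMS or HSM rather than being generated in-process:

```java
import java.security.SecureRandom;
import java.util.Arrays;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class CloudDataEncryption {

    private static final int IV_BYTES = 12;   // recommended IV size for GCM
    private static final int TAG_BITS = 128;  // authentication tag length

    // Encrypts plaintext with AES-GCM and returns IV || ciphertext,
    // so the blob can be stored in the public cloud as-is.
    public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[IV_BYTES];
        new SecureRandom().nextBytes(iv);            // fresh random IV per message
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        byte[] blob = new byte[IV_BYTES + ciphertext.length];
        System.arraycopy(iv, 0, blob, 0, IV_BYTES);
        System.arraycopy(ciphertext, 0, blob, IV_BYTES, ciphertext.length);
        return blob;
    }

    // Decrypts a blob produced by encrypt() after it is fetched back.
    public static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        byte[] iv = Arrays.copyOfRange(blob, 0, IV_BYTES);
        byte[] ciphertext = Arrays.copyOfRange(blob, IV_BYTES, blob.length);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        return cipher.doFinal(ciphertext);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);                            // AES-256
        SecretKey key = keyGen.generateKey();

        byte[] record = "sensitive customer record".getBytes("UTF-8");
        byte[] stored = encrypt(key, record);        // what the cloud provider sees
        byte[] restored = decrypt(key, stored);      // what the consumer reads back

        System.out.println(Arrays.equals(record, restored)); // prints "true"
    }
}
```

GCM also authenticates the data, so tampering with the stored blob makes decryption fail rather than silently return corrupted plaintext.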

What is virtualization?


The concept of virtualization dates back to the 1960s. IBM introduced it on mainframe servers to fully utilize hardware resources by logically partitioning them into virtual machines (VMs). In the 1980s and 1990s the technology was almost forgotten due to the rise of desktop and client-server computing.
After this era we moved on to distributed computing. Companies started to use multiple servers to run their applications. Each server took extra space and used more power and cooling, which added extra expenditure to the cost of running applications.
To cut this extra expenditure, companies started to explore virtualization. VMware is one of the leading companies providing virtualization. Virtualization is an old technology in a new box, with more powerful resources and options.
Virtualization is the partitioning of not only a mainframe but any physical server into multiple virtual servers. It gives an organization maximum utilization of its hardware for the same CAPEX (capital expenditure) and OPEX (operational expenditure). Each virtual server acts like a real physical server and can run its own operating system, just like a physical server. Companies now partition their physical servers into multiple virtual servers and run their applications on those virtual servers, with the same resources and less expenditure.

There are three different types of virtualization:
  1. Hardware virtualization – Hardware virtualization allows us to run different operating systems and different servers simultaneously on the same hardware.
  2. Desktop virtualization – Desktop virtualization allows us to run different desktops for different users simultaneously on the same hardware.
  3. Storage virtualization – Storage virtualization pools physical storage from multiple network devices so that it appears as a single storage resource.

Types of Cloud Computing


In my earlier post I explained what the cloud is. Now I am going to explain the different types of cloud computing and the layers of cloud computing.
Based on an organization's business, economic, and technical needs, we divide the cloud into different categories.
Cloud computing is defined in three major technology layers: SAAS (Software As A Service), PAAS (Platform As A Service), and IAAS (Infrastructure As A Service).

      1. SAAS (Software As A Service) – This is the top technology layer of cloud computing and the oldest of the three. In this layer an organization gets fully functional applications on demand for specific services such as email management, CRM, ERP, web conferencing, and an increasingly wide range of other applications. The software licenses are managed by the cloud computing company.
      2. PAAS (Platform As A Service) – The second layer of cloud computing is PAAS. In this layer an organization mostly gets an operating environment in which to develop, run, or deploy applications. PAAS provides operating environments such as Java, J2EE, .NET, Windows, and Linux.
      3. IAAS (Infrastructure As A Service) – This layer provides all the basic physical and virtual resources an organization's applications use. It includes a virtual platform (space on a server) on which the required operating environment and applications are deployed, as well as storage and datacenter capacity.
In another dimension, there are four types of cloud computing service available: public, private, community, and hybrid.
      1. Public cloud (external cloud) – A public cloud is a service offered by a third-party vendor over the internet. If a vendor provides infrastructure, a data center, search, or any other service to an organization, it falls under the public cloud type. Its benefits include efficiency, high availability, elastic capacity, low upfront cost, and little or no hardware setup or system management. This type of service is provided by Amazon EC2, Microsoft Azure, Sun Microsystems' cloud, Salesforce, etc.
      2. Private cloud (internal cloud) – A private cloud is set up and managed by an enterprise's own IT department and runs inside the organization's firewall. An organization with a large number of users and resources may host cloud computing within its own firewall. This cloud is dedicated to that organization and does not share any resources outside it. When a big organization like AT&T, Verizon, or Bank of America opens its infrastructure or data center in a low-cost area and makes the service available through its own cloud, that is private cloud computing. Its benefits include efficiency, high availability, elastic capacity, lower cost over time, full access and flexibility, and direct control over quality, service, and security.
      3. Community cloud (semi-private cloud) – A community cloud offers service to companies in a similar type of business. It is a public cloud, but it focuses on companies in the same vertical domain. For example, a cloud dedicated to government or banking organizations, serving only those kinds of organizations, is a community cloud. Its benefits include efficiency, high availability, elastic capacity, expertise in the domain's knowledge, and less cost over time compared to a public cloud.
      4. Hybrid cloud (integrated cloud) – A hybrid cloud is a combination of any or all of the other types of cloud, and it is gaining a lot of popularity among organizations. It gives an organization more flexibility to manage and share resources between private and public clouds. For example, an organization can host its application in a private cloud and, when it needs more servers and space during peak sales periods, burst into the public cloud to handle the extra requests. In this model the company keeps all sensitive data (transaction or credit card data) in the private cloud and less sensitive data in the public cloud. Its benefits include efficiency, high availability, elastic capacity, more control over quality, service, and security, and less cost over time compared to public and community clouds.