Kubernetes for Web Developers

Kubernetes, also known as K8s, is a very popular topic among developers these days, and many companies are adopting it. The primary reason is the rapid shift from monolithic architecture to microservices and micro-frontend architectures. This blog post is for web developers who are curious about K8s. We will cover,

  • Why a web developer should know about k8s
  • Fundamentals of K8S
  • Getting started with K8S

Why would a Web Developer learn K8S?

Adopting a microservice architecture has many benefits: it makes developing, deploying, and scaling backend applications very effective. The same goes for micro-frontend architecture. You can read more about micro-frontends in my previous blog, Frontend Development Trends of 2022.

K8s plays a vital role for both microservices and micro-frontends. It allows developers to build containerized applications that are highly scalable. K8s helps developers treat infrastructure as code and manage environment configurations as code. Hence, as a web developer, it's good to know the fundamentals of K8s.

Let’s now understand some of the fundamentals of k8s.

What is K8S?

Kubernetes is an open-source container orchestration system for automating software deployment, scaling, and management. Google originally designed K8s, but the Cloud Native Computing Foundation now maintains the project. It handles container deployment, scaling, and load balancing.

What is Docker?

Docker is an open-source containerization platform. It is a set of platform-as-a-service products that use OS-level virtualization to deliver software in packages called containers. For a backend or frontend application, you write a Dockerfile, which defines the build process for the application. When you build it with the docker build command, Docker installs all the necessary dependencies and creates an immutable image, which can be used to run the application.
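As an illustration, a minimal Dockerfile for a Node.js app might look like the sketch below (the base image tag, file layout, and entry point are assumptions for the example, not from a real project):

```
# Start from a Node.js base image (version is an example)
FROM node:16-alpine
WORKDIR /app

# Copy manifests first so dependency installation is cached as a layer
COPY package*.json ./
RUN npm install

# Copy the application source and define the start command
COPY . .
CMD ["node", "index.js"]
```

Building it with `docker build -t my-app .` produces the immutable image described above.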

How do Docker and K8s work together?

As mentioned above, Docker is used to containerize your application, i.e. it helps create the container. Kubernetes then helps you manage these containers.

Let's first understand a few K8s terms.

K8S containers

Containers are standardized, self-contained execution enclosures for applications. For each application or microservice, a container is created with all the dependencies required to run the application, such as node modules.

K8S PODs

Pods are the smallest execution unit in Kubernetes. Containers do not run directly on nodes; one or more containers are encased in a pod, and all containers in a pod share the same resources. For each application or microservice, a pod is created which encases its container.
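As a sketch, a single-container pod can be described with a manifest like the following (the names, image tag, and port are hypothetical examples):

```
apiVersion: v1
kind: Pod
metadata:
  name: my-web-app            # hypothetical pod name
spec:
  containers:
    - name: web               # the container encased in this pod
      image: my-web-app:1.0.0 # hypothetical image built with Docker
      ports:
        - containerPort: 3000
```

Applying it with `kubectl apply -f pod.yaml` asks the cluster to schedule this pod onto a node.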

K8S Nodes

A node is the smallest unit of compute hardware in a Kubernetes cluster. Nodes can be physical on-premises servers or VMs from a cloud provider. Together, nodes form a Kubernetes cluster. Pods are scheduled onto nodes in the cluster, which means your application may be running on different nodes as part of scaling and load balancing with Kubernetes. Every node runs an agent called the kubelet, which communicates with the cluster control plane.

Easy way to start with Managed K8S

From all the terminology above, it may look difficult to get started with Kubernetes. That's why it's recommended to start with managed Kubernetes, where third-party providers take over responsibility for some or all of the work necessary to successfully set up and operate K8s. By opting for a managed Kubernetes solution, you don't have to deal with the complexity of deploying and operating your containerized applications. Many vendors provide a managed Kubernetes platform.

Google K8S Engine (GKE)


GKE is one of the most advanced managed platforms available, designed for use on Google Cloud. GKE gives you complete control over every aspect of container orchestration.

Learn more about GKE from official GKE website

Amazon EKS


It is a managed Kubernetes service that makes it easy for you to run Kubernetes on AWS.

Learn more about it from official Website

In the next blog post we will learn how to containerize your frontend and backend applications.

OpenFGA – Open Source Authorization by Auth0

A few days back I came to know about OpenFGA, an open-source authorization system by Auth0, and I have been exploring it since. In this blog post we will understand what OpenFGA is.

OpenFGA – Open Source Authorization by Auth0

Whenever we discuss authorization, there is always confusion about the difference between authentication and authorization. Let's clear that up first.

Difference between Authorization and Authentication

Authentication is all about the identity of the user. Authorization is all about what the user can do within your system. As application developers or system architects, we always face challenges with authorization.

There is a lot involved in authorization. It includes,

  • Access to system entities
  • Which URLs a user can access
  • Types of actions, like create / read / update / delete
  • Granting and revoking permissions, etc.

Considering the above factors, it's always a challenge to implement a robust architecture for authorization.

OpenFGA – Fine Grained Authorization Solution

It's a fast, scalable, and flexible system inspired by Google's Zanzibar project. The Zanzibar paper presents the design, implementation, and deployment of a global system for storing and evaluating access control lists.

Auth0 / Okta developed OpenFGA on top of Google's Zanzibar and made it open source. It makes it easy for application developers to build their access control layer and to add and integrate authorization that is consistent across all of their applications.

Why is it important?

As mentioned earlier in the post, authorization is a very critical and complex aspect of modern applications and platforms. With increasing demands from users, it's crucial to build a robust system without compromising security.

Let's understand the building blocks of OpenFGA one by one.

OpenFGA Server

This acts as the permission engine. It uses an expressive language to define your authorization model. The server exposes HTTP APIs with which one can,

  • Define permission models
  • Query and modify them
  • Check and grant permissions

It has modular data storage and supports an in-memory database and PostgreSQL. It also has a graph-querying implementation inspired by Google Zanzibar. It is available as a Docker image as well; you can check the quick start guide here.

OpenFGA Client

There are a number of SDKs available to set up a client in your application: a Go SDK, a .NET SDK, and a JS SDK. The client allows you to interact with the server API in an idiomatic way.

The client works with a store. A store is an OpenFGA entity that contains your authorization data. You will need to create a store in OpenFGA before adding an authorization model and relationship tuples to it.

How does it work?

Let's first get a high-level overview of how it works before we see the code example. We'll take a project management system as an example.

Define the model

  • A project can be created by an employee.
  • It can be approved by the employee's manager.
  • Team members can view the project.

To model this, we will define the authorization model.

type project
  relations
    define creator as self
    define approver as manager from creator
    define team-member as self or creator or approver

type employee
  relations
    define approver as self or manager

Define the tuples

Once the model is defined, you will pass data using OpenFGA's Write API and a few "tuples".

First, define the employee-manager relation. Here John is the manager of Smith, hence he becomes the approver.

{ 
    "user": "employee:john",  
    "relation": "approver", 
    "object": "employee:smith" 
}

The employee creates a project.

{
    "user": "employee:smith",
    "relation": "creator",
    "object": "smith-project"
}

A team member is added.

{
    "user": "employee:jane",
    "relation": "team-member",
    "object": "smith-project"
}

Now we can use OpenFGA's Check API, which returns true or false based on the user, relation, and object.
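To make that concrete, here is a small JavaScript sketch that builds the tuple payload a Check call works with. The helper name is illustrative and not part of OpenFGA's SDK; it only shows the shape of the data.

```javascript
// Build the body for a Check request from a (user, relation, object) triple,
// mirroring the tuples shown above.
function buildCheckRequest(user, relation, object) {
  return { tuple_key: { user: user, relation: relation, object: object } };
}

// Example: can Jane, as a team member, view Smith's project?
const body = buildCheckRequest('employee:jane', 'team-member', 'smith-project');
console.log(JSON.stringify(body));
```

The resulting JSON is what you would send to the server's Check endpoint, which answers with true or false.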

In the next post, we will look at the js-sdk for OpenFGA and see how we can use it.

Better Development Experience with Vite

A few days back I came to know about an amazing frontend tool, Vite, and I have been exploring it since. In this blog post we will go through some of the features of Vite and see how it can make your web development experience faster and better.

Vite – Next generation frontend tooling

There is already a discussion going on about Vite as a replacement for Webpack. In this post we will stay out of that discussion and focus on its features.

What's the need for bundling tools like Vite in your web app?

Since ES6 modules were introduced, it has been good practice to write JS code in a modular fashion. But in the early days, many browsers could not load ES6 modules natively, which gave rise to the concept of bundling our code with tools like Webpack. Tools like Webpack and Vite convert source modules into code that can run in the browser: they crawl through your code, find the dependencies, and process them. Once processed, they generate a bundle of all your modules that can run in the browser.

What are the problems with existing tools?

As mentioned above, tools like Webpack crawl through the source code and process it. This works fine when you have a small project with few modules. Once the project and the number of modules grow, we start to experience the following issues.

  • Very slow dev server start
  • Slow file updates
  • Slow HMR
  • Longer build times

The reason is that transpiling and concatenating code takes time as your project grows. This affects developer productivity: for every small change, we have to wait for the build and for the dev server to restart.
So how does Vite solve these issues? There are two primary techniques.

Pre-bundling of Dependencies

When we build web apps we use lots of vendor packages. These are external dependencies that are not going to change during development. Vite pre-bundles these dependencies with esbuild, an extremely fast JavaScript bundler.

Dependency resolving and on-demand serving

For the application source code, Vite resolves the dependencies and only transforms and serves source code on demand. That means no time is spent bundling your source code before the server starts; the browser takes care of requesting each module as and when it's required.

This leads to a great developer experience, as we don't have to wait for the server to start. There is also blazing fast HMR.

Here is how Vite works.

How Vite works

Getting Started with Vite in your React Project

Let's see an example of how to use Vite in your React application. First, add the necessary dependencies to your app.

"devDependencies": {
    "@vitejs/plugin-react": "^1.0.7",
    "vite": "^2.7.2"
  }

You also have to add a config file. The following is the bare minimum configuration needed; for more options, check out the Vite config documentation.

import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

// https://vitejs.dev/config/
export default defineConfig({
  plugins: [react()]
})

To start the dev server with Vite, you just have to use the vite command or npm run dev. Once this command is executed, Vite looks for an index.html file, where you should specify your entry point.

<script type="module" src="/src/main.jsx"></script>

That's it; the dev server starts in a fraction of a second. If you check your browser's network tab, you can see that main.jsx is loaded as a native ES module.
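For reference, the npm run dev command assumes your package.json defines the standard Vite scripts, along these lines:

```
"scripts": {
  "dev": "vite",
  "build": "vite build",
  "preview": "vite preview"
}
```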


Fast HMR

HMR (Hot Module Replacement) is a strategy that allows a module to hot-replace itself without reloading the page, so we can see our changes instantly on the screen. This works fine with a small codebase, but as your project grows, HMR in traditional bundlers takes a lot of time. This hurts productivity because we have to wait for bundling on every change.

In Vite, when a file is edited and saved, it just invalidates the edited module and its closest HMR boundary. Also, if you look at the network tab, Vite requests the source code with specific HTTP headers. For dependency module requests, it uses the Cache-Control header to cache the requests so they don't hit the server. For source code module requests, it uses 304 Not Modified to load them conditionally.

Because of all these features Vite is becoming popular among developers and its adoption is increasing day by day.

I will soon share my experience of replacing Webpack with Vite in a larger project.

TensorFlow.js Image Classification with MobileNet Model

TensorFlow.js is a library for machine learning in JavaScript. Using TensorFlow.js one can,

  • Run existing model
  • Develop ML models in JavaScript
  • Use existing model and retrain it

Here in this blog post we will see the first use case, which is to run an existing model. We will use the MobileNet model to predict objects through the webcam.


TensorFlow.js Image Classification HTML

Let's start with basic HTML and add the necessary scripts to it. First of all, create an index.html file and add the following code to it. Also create a css folder and a tfjs folder; this is where we will put our style.css and the JS file that starts image classification.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Image Classification</title>
    <link rel="stylesheet" href="css/style.css">
  </head>
  <body>
    <button id="webCam" class="btn webCam-button" onclick="openWebCam()">Start Web Cam</button>
    <div class="webcam-popup" id="webCamDisplay">
      <video autoplay playsinline muted id="webcam" width="224" height="224"></video>
      <button type="button" class="btn" id="predict" onclick="predictObject()">Predict</button>
      <button type="button" class="btn cancel" onclick="closeWebCamDisplay()">Close</button>
    </div>
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs/dist/tf.min.js"></script>
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/mobilenet"></script>
    <script src="tfjs/imageClassification.js"></script>
  </body>
</html>

As you can see, the HTML above is very simple. We have a button to start the webcam, a video element to show the webcam preview, and a couple of buttons to start the prediction and close the popup. Now let's add the CSS. Create style.css in the css folder and add the following styles to it.

CSS Part

html {
  font-size: 60%;
  overflow-x: hidden;
  scroll-padding-top: 5rem;
  scroll-behavior: smooth;
}

header {
  margin-bottom: 2rem;
  position: relative;
  margin-left: 70px;
  margin-top: 30px;
  font-weight: bold;
}
header h1 {
  font-size: 30px;
  margin-left: 20px;
  font-weight: bold;
}

.btn {
  margin-top: 1rem;
  display: inline-block;
  padding: 1rem 2rem !important;
  border-radius: 1rem;
  color: #fff;
  background: #189ab4;
  font-size: 1.5rem !important;
  cursor: pointer;
  font-weight: 600 !important;
  margin-right: 10px;
}

.webCam-button {
  color: white;
  padding: 16px 20px;
  border: none;
  cursor: pointer;
  position: fixed;
  bottom: 140px;
  right: 28px;
}

.webcam-popup {
  display: none;
  position: fixed;
  bottom: 0;
  right: 15px;
  border: 3px solid #f1f1f1;
  z-index: 9;
  background-color: black;
}

JavaScript Part

Now that we have the HTML and CSS ready, it's time for some JavaScript logic. Create a file named imageClassification.js in the tfjs folder and add the following code to it.

const webcamElement = document.getElementById('webcam');
let net; // MobileNet model, loaded once in app()

async function app() {
  console.log('Loading mobilenet..');

  net = await mobilenet.load();
  console.log('Successfully loaded model');

  const webcam = await tf.data.webcam(webcamElement);
  while (true) {
    const img = await webcam.capture();
    const result = await net.classify(img);

    for (let i = 0; i < result.length; i++) {
      const res = result[i];
      if (res.probability > 0.5) {
        alert(res.className);
        location.reload();
        break;
      }
    }
    // Dispose the captured frame tensor to free memory
    img.dispose();
    await tf.nextFrame();
  }
}

function openWebCam() {
  document.getElementById("webCamDisplay").style.display = "block";
}

function closeWebCamDisplay() {
  document.getElementById("webCamDisplay").style.display = "none";
}

function predictObject() {
  app();
}

Here we have functions to open and close the popup. The Predict button starts the webcam preview and loads the MobileNet model. We use async/await syntax to wait until the MobileNet model is loaded and then start the prediction. The following line loads the model.

net = await mobilenet.load();

Then we set the data source, which is our webcam preview, with the following line.

const webcam = await tf.data.webcam(webcamElement);

Then we have a while loop where we capture an image frame and try to classify it with the MobileNet model.

const img = await webcam.capture();
const result = await net.classify(img);

We check for results with a probability over 50% to decide whether the classification is confident enough. If the probability is good, we show the class name in an alert. This class name is the name of the detected object, coming from MobileNet's training data.
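The threshold check in the loop above can be expressed as a small helper. This is just a restatement of that logic, with the 0.5 cutoff from the post; the helper name is mine, not from the TensorFlow.js API.

```javascript
// Return the first prediction whose probability exceeds the threshold,
// or null when no prediction is confident enough.
function firstConfidentPrediction(predictions, threshold = 0.5) {
  for (const p of predictions) {
    if (p.probability > threshold) {
      return p;
    }
  }
  return null;
}

// Example with a MobileNet-style result array
const sample = [
  { className: 'mouse', probability: 0.82 },
  { className: 'keyboard', probability: 0.11 }
];
console.log(firstConfidentPrediction(sample).className); // → mouse
```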

When you run the code, it may take a while to predict the object. You can try it out by placing different objects in front of the webcam. The following are a few examples.

As you can see, in a few cases the prediction is correct, for example the mobile phone and the mouse. But in the first case it's wrong; I am not wearing a bulletproof vest.

You can read more about TensorFlow.js on the official website.

Here is the link to MobileNet Model GitHub

React Developer Tools for Better Development

In this blog post we will go through some useful React developer tools that can help you develop and debug your React apps. Using these tools, one can write better and cleaner code.


React Developer Tools

Some of the tools we are going to cover here are debugging tools, libraries, test utilities, etc. Using these tools you can enhance the code quality and performance of your React app.

The React developer tools I have mentioned here are based on my experience using them in the projects I am working on. If you know any other React developer tools, feel free to mention them in the comments.

React Developer Tool

This tool is available as a browser extension for major browsers like Chrome, Firefox, and Edge. It is also available as a separate NPM package if you want to use it for debugging in a mobile browser. Here is the link: react-devtools.

This is a very popular tool, useful for both developing and debugging React apps, used by developers worldwide. Also known as React dev tools, it gives you a way to inspect the rendered components through a tab in the browser dev tools. You can view all the props and their values for a particular component.

The feature I like most is the visual representation of component re-renders with timestamps. This is most useful for improving the performance of a React app.

Storybook

This is a tool for developing your React UI components in isolation. It helps with the development, testing, and documentation of UI components. Imagine that you want to showcase a UI component before using it in a project; with Storybook this is possible.

As developers, we can create a component and share it with teams to play with. It works outside your React project, so you don't need to make any changes to the project. You can simply create components and share the Storybook with your teams. Check out the official GitHub of Storybook.

Why Did You Render

While React Developer Tools can visualize re-renders, WDYR can notify you about potentially avoidable re-renders. In short, it can show you the possible root cause, and then you can debug further and fix it. It works with React Native as well.

This is a nice tool for performance tuning of your React apps. It is available as an NPM package; you can install and use it in your React app. For documentation, please check the official GitHub page of WDYR.

Webpack Bundle Analyzer

This is a very useful tool for analysing the application bundle to find out which modules are heavy and which modules can be removed. It uses the Webpack stats JSON file to provide an interactive treemap visualization of the contents of your bundle.

It is available as an npm plugin, so you can install and configure it in your React application. Check out the official GitHub of Bundle Analyzer.

JEST

Jest was created by Facebook and is a testing framework for JavaScript and React apps. You can do unit testing of your React app with Jest. Unit testing is of the utmost importance to reduce bugs in your production app.

Jest is pretty simple to configure and use with your React app: add it as an npm package and set up the tests. Writing unit test cases is considered good development practice. You can refer to the official GitHub of Jest.

Frontend Development trends to Look For in 2022

As we move into the new year 2022, it's good to keep an eye on frontend development trends. You should keep up with the latest trends so you don't miss any opportunity that could be a turning point in your career.

In this blog I am going to mention some frontend development trends you should not miss out on. This is based on what I have learned from the industry. If you have a different opinion, feel free to share your feedback in the comments.

1) Progressive Web Apps

Progressive Web Applications

PWAs have been trending for the last few years thanks to their native-app-like capabilities. A PWA can be installed like a native app, works offline, and is blazing fast. Major mobile browsers now support it. With a PWA, you can access hardware like the camera, microphone, etc.

These capabilities have made PWAs a favourite among developers and online businesses. Lots of businesses are using them and many more are betting on them in the upcoming year. Hence, if you are a PWA developer, you have great chances to work with major online businesses.
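Offline support in a PWA hinges on a service worker. Here is a minimal, hedged sketch of registering one; the /sw.js path is an assumption, and the helper takes the navigator object as a parameter so the feature check is explicit.

```javascript
// Register a service worker if the environment supports it.
// Returns the registration promise, or null when unsupported.
function registerServiceWorker(nav, path = '/sw.js') {
  if (!nav || !('serviceWorker' in nav)) {
    return null; // old browser, or a non-secure context
  }
  return nav.serviceWorker.register(path);
}

// In a real page you would call: registerServiceWorker(navigator);
console.log(registerServiceWorker({}) === null); // true outside a browser
```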

2) Micro Frontend Architecture

Micro Frontend Architecture

In simple words, it's about breaking a monolithic frontend into smaller, independent applications that together work as one single application. This gives you more control over development and CI/CD. Many teams can work in parallel, and combining their work makes up your entire frontend.

Many companies have already adopted this architecture and increased their productivity. This trend is going to dominate in the upcoming year. If you are a frontend developer or architect, it will be good to learn about it.

3) Rise of Voice User Interface (VUI)

Voice User Interface (VUI)

Accessibility is a prime thing to look for in web applications nowadays, and this gives rise to the voice user interface. For humans, voice is the most natural way of communicating, which is another major reason behind the rise of VUI. Major companies like Amazon, Google, and Apple have introduced platforms like Alexa, Assistant, and Siri, which are open for developers to build applications on top of.

The power of text-to-speech and speech-to-text technologies has increased exponentially. These give your web apps good accessibility: users can interact with apps by voice, which is also a blessing for visually impaired people. There are still major challenges in this area, like understanding regional languages, but it's worth adding VUI to a web app.

4) Rise of JAMstack

JAMstack ==> JavaScript + API + Markup


The core principles of JAMstack are pre-rendering and decoupling. JAMstack makes web apps faster, more secure, and easily scalable. You can convert your entire frontend into highly optimised, prebuilt static pages that can be delivered from a CDN. Let's understand each part of JAMstack.

JavaScript

As we know, JavaScript is the most popular language when it comes to building web applications. Using modern frameworks like React, Angular, Vue, and Svelte, we can develop web apps that are modularised and easy to maintain.

API

APIs play a vital role in JAMstack. As a developer, it's up to you how you make your JAMstack app dynamic by utilizing APIs properly. From JavaScript you can make API calls to multiple hosts that deliver the content.

Markup

This is the most critical part of JAMstack. To be considered a JAMstack app, your app should serve the HTML statically, meaning it is not dynamically rendered by a server. Markup is your prebuilt, pre-rendered content, with dynamic data requested by JavaScript through APIs. Frontend developers can use frameworks like Gatsby and Next.js for this.

5) Rise of GraphQL


Facebook open-sourced GraphQL in 2015 to address issues with REST APIs. The issue with REST APIs is that you have to make multiple network requests to get all the data, which degrades performance on both the frontend and the backend. Using GraphQL we can obtain all the data with one single request from one endpoint.

A few advantages of GraphQL: it's much faster, strongly typed, and follows a hierarchical structure where relationships between objects are defined in a graph. Many big companies are using GraphQL. As a frontend developer, it's good to learn about GraphQL as more and more businesses are moving towards it.
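As an illustration of "one request, one endpoint", a single GraphQL query can fetch an object and its related objects in one round trip. The schema below (post, author, comments) is hypothetical, purely for the example.

```javascript
// One query that would otherwise need several REST calls:
// the post, its author, and its comments.
const query = `
  query {
    post(id: "1") {
      title
      author { name }
      comments { text }
    }
  }
`;

// This string is typically POSTed to a single /graphql endpoint.
console.log(query.includes('author')); // true
```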

6) Web Assembly


Most of the time we use JavaScript for web development. With WebAssembly, developers can use code written in languages like C and C++ in the web browser, which lets client apps run on the web that previously could not. Sometimes we need to build apps in lower-level languages like C and C++, for example games or streaming apps. These couldn't run on the web before, but with WebAssembly we can write apps in C, C++, Rust, or Go and run them on the web with near-native performance. WebAssembly works alongside JavaScript in the browser.
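A tiny taste of WebAssembly from JavaScript: every .wasm binary starts with the magic bytes "\0asm" plus a version number, and WebAssembly.validate can check a byte buffer. The bytes below form the smallest valid (empty) module.

```javascript
// The smallest valid WebAssembly module: magic number "\0asm" + version 1.
const emptyModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // "\0asm"
  0x01, 0x00, 0x00, 0x00  // version 1
]);

// validate() returns true when the bytes are a well-formed module.
console.log(WebAssembly.validate(emptyModule)); // true

// In a real app you would compile code from C/C++/Rust to .wasm and run it:
// const { instance } = await WebAssembly.instantiate(wasmBytes, imports);
```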

7) Machine Learning / Augmented Reality / Virtual Reality

How can you leverage the power of ML, AR, and VR in your web app? To know more about this, check out my session at Google DevFest India 2021 Day 3.

I hope you liked this blog about frontend development trends. Keep exploring them!

Laravel Copy Tables To Other Server

Recently, in one of my Laravel projects, I had a situation where some tables needed to be copied from one DB server to another DB server in real time.


First of all I thought about using binary logging, but my hosting provider refused to enable it. Another option was the federated storage engine, but the hosting provider refused to enable that too. In fact, the tech support team asked me what the federated storage engine is, so I gave up hope on them. Finally, I implemented the strategy I am going to explain in this blog.

Using Laravel Observer and Eloquent

In Laravel we can use model observers to listen to events on models and take actions. Here are the steps you have to perform.

Step 1 : Create Observer

Create a new class in the App\Http\Observers folder and name it ModelObserver.

Step 2 : Assign Observer to Model

In your model file, add a boot function as follows.

public static function boot()
{
    parent::boot();
    YourModelName::observe(new \App\Http\Observers\ModelObserver);
}

Step 3 : Add Methods to Your Observer

In your ModelObserver class add following methods.

public function created($model)
{

}

public function updated($model)
{

}

Now, when you insert or update a record with Eloquent, it will invoke the created or updated method and pass the model instance as a parameter.

Replication to Other Server

The next step is to implement the replication strategy. For this we are going to create a separate model that uses the remote server connection.

Step 1 : Save Remote Host Connection

For this, first create a separate MySQL connection in your config/database.php file.

'replica' => [
    'driver' => 'mysql',
    'host' => env('REPLICA_DB_HOST', 'remotehost'),
    'port' => env('REPLICA_DB_PORT', '3306'),
    'database' => env('REPLICA_DB_NAME', 'your_db'),
    'username' => env('REPLICA_DB_USERNAME', 'your_username'),
    'password' => env('REPLICA_DB_PASSWORD', 'your_password'),
    'charset' => 'utf8',
    'collation' => 'utf8_unicode_ci',
    'prefix' => '',
    'strict' => false,
    'engine' => null,
],

Step 2 : Create Model to Connect to Remote Host

Now create a separate model and assign remote server connection to it.

<?php

namespace App\Http\Models;

use Illuminate\Database\Eloquent\Model;

class RemoteModel extends Model
{
    protected $connection = 'replica';
    protected $table = 'remote_table';
}

Step 3 : Replicate Model

Now inside your observer’s created method add following code.

if ($model instanceof \App\Http\Models\Model) {
    $modelArray = $model->toArray();
    $newModel = new \App\Http\Models\RemoteModel;
    foreach ($modelArray as $key => $value) {
        // Skip nested relations; copy only scalar attributes
        if (gettype($value) != 'array') {
            $newModel[$key] = $value;
        }
    }
    $newModel->save();
}

As you can see in the code above, we first check whether the created model is an instance of our model class, then create a RemoteModel object, copy the model's values into it, and finally call the save method. This creates the row in the remote table on the remote server.

Access Amazon RDS from AWS Lambda

Recently I was working on a serverless application written in Node.js and deployed to AWS Lambda. The app connects to a MySQL database deployed on RDS. While I was testing on my local machine, it worked very well.


After deployment, it stopped working. After struggling for half a day, I finally got it working. In this blog I am going to explain what it takes.

Different Security Group is the First Solution

This is really strange: if your Amazon RDS instance and AWS Lambda function share the same security group, they will never connect. I'm not sure why it is this way, but to solve the problem:

  1. Create a new security group and assign it to your Lambda function.
  2. Create another security group and assign it to your RDS instance.
  3. Allow MySQL connections in the RDS security group.

To allow MySQL connections, add an inbound rule: select MYSQL/Aurora from Type and allow it from 0.0.0.0/0.

Allow VPC Access to Lambda Function

  1. Go to the Roles page in the IAM console.
  2. Create a new role and add the policy AWSLambdaVPCAccessExecutionRole.
  3. Assign this role to your Lambda function.

Use Connection Pool in Your Code

When using serverless, you have to be careful about asynchronous operations. Connecting to MySQL is an asynchronous operation. If you do not handle it properly, the Lambda function will time out: it expects the callback to return a result, but because the operation is async, it takes time. So you have to explicitly wait for the async operation.

So for that create connection pool.

var mysql = require('mysql');
var pool = mysql.createPool({
    user: 'username',
    password: 'password',
    database: 'database',
    host: 'rds_end_point'
});

Now wait to get connection from the pool.

pool.getConnection((err, connection) => {
    connection.query('SQL', function (err, rows, fields) {
        const response = {
            statusCode: 200,
            headers: {
                "Access-Control-Allow-Origin": "*" // Required for CORS support to work
            },
            body: JSON.stringify({
                success: true,
                rows: rows
            })
        };
        connection.release();
        // `callback` is the Lambda handler's callback parameter
        callback(null, response);
    });
});

As you can see in the above code, we wait to get a connection from the pool. Once we have the connection, we query the database and get the result. After that we invoke the callback and release the connection so it can be reused for subsequent requests.
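If you prefer promises over nested callbacks, the same flow can be wrapped in a promise and consumed with async/await. This is a minimal sketch; the `fakePool` stub below is hypothetical and only stands in for the real mysql pool so the pattern can run on its own:

```javascript
// Wrap the callback-style pool in a promise so handlers can use async/await.
function queryAsync(pool, sql) {
  return new Promise((resolve, reject) => {
    pool.getConnection((err, connection) => {
      if (err) return reject(err);
      connection.query(sql, (err, rows) => {
        connection.release(); // always return the connection to the pool
        if (err) return reject(err);
        resolve(rows);
      });
    });
  });
}

// Hypothetical stub pool, used only to illustrate the flow without a database.
const fakePool = {
  getConnection(cb) {
    cb(null, {
      query(sql, cb2) { cb2(null, [{ id: 1 }]); },
      release() {}
    });
  }
};

queryAsync(fakePool, 'SELECT 1').then((rows) => {
  console.log(JSON.stringify(rows)); // prints [{"id":1}]
});
```

With the real pool from above in place of the stub, the handler body reduces to `const rows = await queryAsync(pool, 'SQL');` followed by building the response.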

DialogFlow API – Use in Laravel

DialogFlow is one of the best tools available for building machine-learning-powered conversational interfaces. It is developed and maintained by Google. You can use DialogFlow for various applications like chat bots, customer support services, etc. In this blog I am going to explain how you can connect DialogFlow with Laravel using its APIs.

DialogFlow Logo

DialogFlow

Generate Connection File

To use the API, you first have to generate a credentials file, which will be used for authentication. Here are the steps.

  1. Log in to the DialogFlow Console
  2. Select the project
  3. Go to Settings
  4. Click on the Service Account link
  5. You will be redirected to the Google Cloud Console.
  6. Select the Service Accounts menu from the left-hand side.
  7. Click on Add New Service Account
  8. Follow the steps, and at the last step create and download the JSON key file.

This file will be used as connection credentials.

Integrate SDK in the Laravel App

First of all, you have to download the SDK using Composer. Execute this command in the terminal:

composer require google/cloud-dialogflow

This will download the SDK into the vendor folder. Now set the path of the credentials file in your ENV file.

GOOGLE_APPLICATION_CREDENTIALS=/path/to/your/file

Now you can use the API to connect to DialogFlow. Here are a few examples.

First of all, add the following use statements to your file to use the SDK.

use Google\Cloud\Dialogflow\V2\IntentsClient;
use Google\Cloud\Dialogflow\V2\Intent;
use Google\Cloud\Dialogflow\V2\Intent\TrainingPhrase\Part;
use Google\Cloud\Dialogflow\V2\Intent\TrainingPhrase;
use Google\Cloud\Dialogflow\V2\Intent\Message;
use Google\Cloud\Dialogflow\V2\Intent\Message\Text;
use Google\Cloud\Dialogflow\V2\SessionsClient;
use Google\Cloud\Dialogflow\V2\TextInput;
use Google\Cloud\Dialogflow\V2\QueryInput;

Get List of All Intents

Now use the following code to get a list of all the intents.

$intentsClient = new IntentsClient();
$parent = $intentsClient->projectAgentName('YOUR DIALOGFLOW PROJECT NAME');
$intents = $intentsClient->listIntents($parent, array('intentView' => 1));
$allIntents = array();
$iterator = $intents->getPage()->getResponseObject()->getIntents()->getIterator();
while ($iterator->valid()) {
    $intent = $iterator->current();
    $allIntents[] = array('id' => $intent->getName(), 'name' => $intent->getDisplayName());
    $iterator->next();
}
return Response::json(array('success' => true, 'allIntents' => $allIntents));

Get Text Prediction

Use the following code to get a text prediction.

$sessionsClient = new SessionsClient($credentials);
$session = $sessionsClient->sessionName($projectName, uniqid());
$languageCode = 'en';

// create text input
$textInput = new TextInput();
$textInput->setText($text);
$textInput->setLanguageCode($languageCode);

// create query input
$queryInput = new QueryInput();
$queryInput->setText($textInput);

// get response and relevant info
$response = $sessionsClient->detectIntent($session, $queryInput);
$queryResult = $response->getQueryResult();
$queryText = $queryResult->getQueryText();
$intent = $queryResult->getIntent();
$displayName = $intent->getDisplayName();
$confidence = $queryResult->getIntentDetectionConfidence();
$fulfilmentText = $queryResult->getFulfillmentText();

Lumen Microservice Architecture

I posted a blog on implementing microservice architecture. Here is the link. This blog is the second part of it. In the first post I explained microservices and the proposed architecture. In this post I will give a practical example of a Lumen microservice.

Microservice Using Laravel

Lumen Microservice


You can either install Lumen in the public folder of the main Laravel app or have a completely separate installation.

composer global require "laravel/lumen-installer"

After it is installed, let's create an API.

$router->post('getDetails', 'APIController@getDetails');

Now define the API in a controller.

public function getDetails()
{
    return response()->json(array(
        'key1' => 'value1',
        'key2' => 'value2',
        'key3' => 'value3',
        'key4' => 'value4',
        'key5' => 'value5'
    ));
}
Now we can call this API either from our JavaScript app as an Ajax request, or from the Laravel app using GuzzleHttp.
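The Ajax side of that call can be sketched in plain JavaScript. This is a minimal sketch; the endpoint path matches the Lumen example above, and the `getDetails` helper (a name chosen here, not part of any library) takes a fetch-compatible function as a parameter so it works in the browser with `window.fetch` or with a stub in tests:

```javascript
// Call the Lumen endpoint and parse its JSON body.
// Passing the fetch function in keeps the helper environment-agnostic.
function getDetails(fetchFn) {
  return fetchFn('/lumenapi/public/getDetails', { method: 'POST' })
    .then((response) => response.json());
}

// In the browser:
// getDetails(window.fetch).then((data) => console.log(data.key1));
```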

Benefits of GuzzleHttp, as mentioned on their website:

  • Simple interface for building query strings, POST requests, streaming large uploads, streaming large downloads, using HTTP cookies, uploading JSON data, etc…
  • Can send both synchronous and asynchronous requests using the same interface.
  • Uses PSR-7 interfaces for requests, responses, and streams. This allows you to utilize other PSR-7 compatible libraries with Guzzle.
  • Abstracts away the underlying HTTP transport, allowing you to write environment and transport agnostic code; i.e., no hard dependency on cURL, PHP streams, sockets, or non-blocking event loops.
  • Middleware system allows you to augment and compose client behavior.

Install GuzzleHttp

Install it using Composer.

composer require guzzlehttp/guzzle

Once it is installed, we can use it in a controller via its namespace.

use GuzzleHttp\Client;

Now call the API using this client.

$client = new Client();

$response = $client->post('/lumenapi/public/getDetails', [
    'form_params' => []
]);

$data = json_decode($response->getBody()->getContents(), true);

In the above code, if you want to pass any params in the POST request, you can do so inside form_params.

The above code makes a synchronous request. We can also make an async request. For that, also add use GuzzleHttp\Psr7\Request; at the top of the controller.

$request = new Request('GET', '/lumenapi/public/getDetails');
$client->sendAsync($request)->then(function ($response) {
    $data = json_decode($response->getBody()->getContents(), true);
})->wait();
So this way we can build a Lumen microservice and consume it from a Laravel app.