Why software enterprises should start using Node.js as a backend



Scalability, security, flexibility, and speed are some of the watchwords for enterprises today. When we specifically talk about application development, JavaScript has proven to be one of the most versatile technologies, and the tools, frameworks, and libraries built on it are gaining ground among developers for several reasons.
One of the popular JavaScript platforms that we have today is Node.js. Backend development (including APIs), full-stack development, and frontend development are three prime reasons why Node.js is so popular in the developer community worldwide.
Apart from this, improved developer productivity, increased application performance, faster development, and reduced cost are some of the factors that are in favor of Node.js for application development.
In the next segment, we will discuss some of the significant factors that make Node.js a strong development platform for enterprise organizations.
1. Improved Application Performance 
In general, a JavaScript engine interprets or compiles JS code into machine code. JavaScript, being a high-level dynamic language, does not interact with low-level machine logic directly.
Node.js is built on Google's V8 engine, which uses just-in-time (JIT) compilation. Unlike engines that purely interpret JS code, V8 first compiles it to bytecode and then JIT-compiles the hot paths into native machine code at runtime.
Since the generated machine code runs directly on the hardware instead of going through an interpreter, performance is faster compared to purely interpreting JS engines. Since Node.js uses this engine, there is an assurance of improved application performance.
2. Scalability and Low Latency 
One of the major problems with synchronous applications is that one function must finish executing before the next one can run. The best thing about Node.js is that it is event-driven, single-threaded, and asynchronous. Let's understand the benefits of this architecture.
Imagine you're in a long queue at a coffeehouse, waiting to place your order. There is only one cashier at the counter, who takes the order and the money and also prepares the coffee and food in the kitchen. Only once an order is prepared and served does the cashier take the next one. Working this way, delays add up and customers get frustrated, because the cashier cannot take more orders until the current one is completely processed.
The above scenario exemplifies a blocking system. To resolve this problem, one might think of adding more cashiers (additional resources). However, this is not a feasible solution, as the number of customers can increase at any point in time.
Usually, in a thread-based model, when a server receives a connection, it keeps it open until the request completes, whether that is serving a page or writing a transaction to the database. While a request is being processed, the server thread stays blocked on that I/O operation. To scale such a server, additional copies are required, each with its own OS thread. While such a multi-threaded system works fine at first, it becomes resource-intensive as the application scales and client requests grow, requiring more memory and related resources.
Now, let's make a small change to the coffeehouse model. Along with the cashier, there is a chef who is responsible for preparing the coffee and food in the kitchen. As the cashier accepts an order, he passes the request to the chef to prepare it. Instead of waiting at the counter, customers are given a token number and asked to wait while their order is being prepared.
This coffeehouse model is non-blocking and single-threaded, and it resembles Node.js. Its prime component is the event loop, which is responsible for continuously listening for requests (the cashier in the coffeehouse model). As requests come in, the event loop assigns each task to another entity (the worker thread pool) to perform the actual operation. When a task is done, a response is sent to the client via a callback (which, in the coffeehouse model, is the token number the chef calls for the customer to collect the order).
Node.js can handle many connections simultaneously. Most application development platforms create a new thread for every request, taking up RAM to process each one. Node.js, by contrast, operates on a single thread, making use of the event loop and delegating blocking work to the worker threads. This allows Node.js to handle hundreds of tasks concurrently.
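The coffeehouse model can be sketched in a few lines. This is a minimal illustration (the labels are only for the analogy) of how the single-threaded event loop keeps accepting work while a timer callback, standing in for the chef, finishes an earlier order:

```javascript
// Single-threaded, non-blocking: order #2 is accepted before order #1 is served.
const events = [];

events.push('order #1 accepted');          // cashier takes the first order

setTimeout(() => {
  events.push('order #1 served');          // chef's callback: the token is called
  console.log(events.join(' -> '));
}, 20);

events.push('order #2 accepted');          // cashier is free to take more orders
```

Running this under Node prints `order #1 accepted -> order #2 accepted -> order #1 served`, showing that the loop never blocked while the first order was pending.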
3. Problem Traceability is Simplified 
To avoid blocking on I/O, Node.js relies upon callbacks. These callbacks are nothing but functions that run when a request is finished. There are times when a callback function sits inside another callback function, which in turn sits inside yet another. Such nested callbacks make the code difficult to read, and that is when the need for promises, async/await, and try/catch statements comes in.
A promise in Node is an action that will either be fulfilled or rejected. A promise is kept if a request is completed, and otherwise the promise is broken. And just like callbacks, promises can be chained.
And to further simplify this problem of chained promises, there are async/await and try/catch statements. These statements help with code understanding and maintenance, error handling, tracing the root cause of a problem, and more.
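As a minimal sketch of that point, assuming a hypothetical promise-returning fetchUser helper, async/await with try/catch reads top-to-bottom and funnels every failure into one place:

```javascript
// fetchUser is a stand-in for any promise-returning operation (DB call, HTTP request).
function fetchUser(id) {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      if (id > 0) resolve({ id, name: 'Ada' });
      else reject(new Error('invalid id'));
    }, 10);
  });
}

async function showUser(id) {
  try {
    const user = await fetchUser(id);   // pauses here without blocking the loop
    return 'Hello, ' + user.name;
  } catch (err) {
    return 'Failed: ' + err.message;    // single place to catch and trace errors
  }
}

showUser(1).then(console.log);   // Hello, Ada
showUser(-1).then(console.log);  // Failed: invalid id
```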
4. API Development Using Frameworks
Node.js is one of the preferred choices for developing RESTful APIs. Using Node.js frameworks such as hapi, Express, and others, developers can build APIs in very little time. Since Node.js uses the JavaScript language, it is a versatile option for JS programmers to build a complete application (frontend, backend, and APIs).
5. Security Modules to Protect Data 
The crypto module in Node.js offers algorithms that perform data encryption and decryption. This is specifically used for data-security purposes like user authentication, where the requirement is to store a password in the database in an encrypted or hashed format. For such functionality, the crypto module has a set of classes like Hmac, Cipher, Decipher, Hash, Sign, and Verify.
Choosing Nodejs for Application Development 
Apart from the functionalities mentioned above, Node.js has an amazing community of contributors, offers features like clustering for maximum utilization of resources and zlib for compressing files and data, helps avoid communication latency between client and server, and more.
Moreover, Node.js is a suitable choice for building dynamic applications. If you have chosen Node.js as the technology for your next app, then hire Node.js developers who can help you make the most of this technology.

Async/await vs promises

In this post, we will understand the difference between async/await and promises. async/await is a special syntax for working with promises in a more comfortable fashion, and it is surprisingly easy to understand and use.

Async/await in JavaScript

Async/await is a new way to write asynchronous code. It is built on top of promises, therefore, it is also non-blocking.
The big difference is that asynchronous code looks and behaves a little more like synchronous code. This is where all its power lies.
The async function declaration defines an asynchronous function, which returns an AsyncFunction object. An asynchronous function is a function that operates asynchronously via the event loop, using an implicit Promise to return its result. But the syntax and structure of your code using async functions is much more like using standard synchronous functions:
function resolveAfter2Seconds() {
  return new Promise(resolve => {
    setTimeout(() => {
      resolve('resolved');
    }, 2000);
  });
}

async function asyncCall() {
  console.log('calling');
  var result = await resolveAfter2Seconds();
  console.log(result);
  // expected output: 'resolved'
}

asyncCall();
Asynchronous functions can be paused with await, a keyword that can only be used inside an async function. await returns whatever the awaited promise resolves to once it is done.

Promises in JavaScript

The Promise object represents the eventual completion (or failure) of an asynchronous operation and its resulting value.

A promise is an object that may produce a single value some time in the future: either a resolved value or a reason that it’s not resolved (e.g., a network error occurred). A promise may be in one of 3 possible states: fulfilled, rejected, or pending.

Fulfilled: onFulfilled() will be called (e.g., resolve() was called)
Rejected: onRejected() will be called (e.g., reject() was called)
Pending: not yet fulfilled or rejected

Promise users can attach callbacks to handle the fulfilled value or the reason for rejection.

Promises following the spec must follow a specific set of rules:
A promise or “thenable” is an object that supplies a standard-compliant .then() method.
A pending promise may transition into a fulfilled or rejected state.
A fulfilled or rejected promise is settled, and must not transition into any other state.
Once a promise is settled, it must have a value (which may be undefined). That value must not change.

Syntax:
let promise = new Promise(function(resolve, reject) {
  // some stuff
});

Promise Code Example:
var promise1 = new Promise(function(resolve, reject) {
  setTimeout(function() {
    resolve('foo');
  }, 300);
});
promise1.then(function(value) {
  console.log(value);
  // expected output: "foo"
});

console.log(promise1);
// expected output: [object Promise]
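Since each .then() returns a new promise, chained promises compose without nesting, as mentioned above. A minimal sketch:

```javascript
// Each .then() receives the previous step's value and returns a new promise.
Promise.resolve(2)
  .then(n => n * 3)                    // 6
  .then(n => n + 1)                    // 7
  .then(n => console.log(n))           // logs 7
  .catch(err => console.error(err));   // one handler covers the whole chain
```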


NodeJS Best Practices

In this post, we will look into some Node.js best practices. Node.js, a platform built on Chrome's JavaScript engine, helps to develop fast, scalable network applications. It uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, and it has become one of the most popular platforms over the last couple of years.

Let us look at a few of the best Node.js practices that can protect you from the common Node.js traps:
1. There should be one config file for all the globals (environment variables) we use, like the SQL password, username, and mail account credentials. It should be kept out of version control or encrypted, so that no one can extract secrets from git or the file itself.
2. Separate server and routes(app.js) code.
3. Create a message (language) file and data models.
4. Error handling using try/catch, and process.on('uncaughtException') for errors that escape every handler.
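A minimal sketch of point 4's last-resort handler; since process state may be corrupted after an uncaught exception, the usual advice is to log and exit so a process manager can restart the app:

```javascript
// Catches errors that escaped every try/catch in the application.
process.on('uncaughtException', (err) => {
  console.error('uncaught exception:', err.message); // log it for later tracing
  process.exit(1); // state may be corrupt; let a process manager restart the app
});
```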
5. Errors: always return an error message together with a standard HTTP status code.
6. Keep the response structure consistent, like {status: true/false, data: {}, message: ''}.
7. Keep lines shorter than 130 characters.
8. Keep your functions short.
A good function fits on a slide that the people in the last row of a big room can comfortably read.
So don't count on perfect vision, and limit yourself to roughly 30 lines of code per function.

9. Curly braces belong on the same line as the thing that necessitates them.
/* Bad: */
function ()
{
 //Some stuff
}

/*Good: */
function () {
//Some stuff
}

10. Choosing Wisely Between Event Handlers And Callbacks. Developers are often unsure which is better to use: an event handler or a callback?
The usual dilemma is: if an event can execute a function on the success or failure of a process, why not just use a callback?
Let us take a closer look at callbacks and event handlers.

(a) Callbacks
Callbacks are used when you have an asynchronous operation that needs to notify its caller about its completion.
function greeting(name) {
  alert('Hello ' + name);
}
function processUserInput(callback) {
  var name = prompt('Please enter your name.');
  callback(name);
}
processUserInput(greeting);

The above example is a synchronous callback, as it is executed immediately.
Note, however, that callbacks are often used to continue code execution after an asynchronous operation has completed — these are called asynchronous callbacks. A good example is the callback functions executed inside a .then() block chained onto the end of a promise after that promise fulfills or rejects. This structure is used in many modern web APIs, such as fetch().

(b)- Event Handler
An event handler is a type of callback that is called whenever an event occurs. The term is normally used for user interfaces, where events are things like clicking something or moving the mouse.
A callback is a procedure that you pass as an argument to another procedure; an event handler is a procedure that is called when an event happens (and such a handler can itself be a callback). The following analogy makes the distinction easier to grasp.

Events – Think of the server as an employee and clients as bosses. One employee can have many bosses. Whenever a task is finished, the employee raises an event, and each boss may decide whether or not to listen for it. Here the employee is the publisher and the bosses are subscribers.

Callback – Here, a boss specifically asks an employee to do a task and wants to be notified once it is done. The employee must make sure that, when the task is complete, he notifies only the boss who requested it, not all the bosses. The employee does not notify the boss if the job is only partially done: the notification goes to that one boss, and only when the whole task is finished.
Therefore, after having a clear understanding of event and callback, developers must decide wisely which to use when.

11. Use named functions. They make stack traces a lot easier to read.
12. Remove trailing whitespace.
13. Quotes
Use single quotes, unless you are writing JSON
/* Right: */
var foo = 'bar';

/* Wrong: */
var foo = "bar";

14. Variable declarations
Declare one variable per var statement; it makes it easier to re-order the lines. Ignore Crockford on this, and put those declarations wherever they make sense.
/* Right: */
var keys = ['foo', 'bar'];
var values = [23, 42];

var object = {};
while (keys.length) {
  var key = keys.pop();
  object[key] = values.pop();
}

/* Wrong: */
var keys = ['foo', 'bar'],
    values = [23, 42],
    object = {},
    key;

while (keys.length) {
  key = keys.pop();
  object[key] = values.pop();
}

15. Case, naming, etc.
Use lowerCamelCase for multiword identifiers when they refer to functions, methods, properties, or anything not specified in this section.
Use UpperCamelCase for class names (things that you'd pass to "new").
Use all-lower-hyphen-css-case for multiword filenames and config keys.
Use CAPS_SNAKE_CASE for constants, things that should never change and are rarely used.

16. null, undefined, false, 0
Boolean variables and functions should always be either true or false. Don't set a boolean to 0 unless it's supposed to be a number.
When something is intentionally missing or removed, set it to null.
Don't set things to undefined. Reserve that value to mean "not yet set to anything."

17. Avoid console.log().
18. Conditions
Any non-trivial conditions should be assigned to a descriptive variable:
Right:
var isAuthorized = (user.isAdmin() || user.isModerator());
if (isAuthorized) {
 console.log('winning');
}

Wrong:
if (user.isAdmin() || user.isModerator()) {
 console.log('losing');
}

19. To avoid deep nesting of if-statements, always return a function's value as early as possible.
Right:

function isPercentage(val) {
 if (val < 0) {
  return false;
 }
 if (val > 100) {
  return false;
 }
 return true;
}

Wrong:
function isPercentage(val) {
  if (val >= 0) {
    if (val <= 100) {
      return true;
    } else {
      return false;
    }
  } else {
    return false;
  }
}

20. Named closures
Feel free to give your closures a name. It shows that you care about them, and will produce better stack traces:
Right:
req.on('end', function onEnd() {
 console.log('winning');
});

Wrong:
req.on('end', function() {
 console.log('losing');
});

21. Automatic Restart Of A Node.js Application.
Even after following all the best practices for handling errors, there is a chance some error may bring your application down. This is where it is important to use a process manager to make sure the application recovers gracefully from a runtime error.
The other scenario where you need a restart is when the entire server you are running on goes down. In that situation, you want minimal downtime, and for your Node.js application to restart as soon as the server is alive again!

To help with this, here are a few Node.js development tools:

Gulp
A streaming build system and task runner that automates repetitive development tasks. It is useful if you'd like to run several build steps with one command.

Nodemon
A hot-reload tool for Node.js. It automatically restarts your project after any code change is made, which is quite handy during Node.js development.

Forever,  pm2
These two process managers keep the app alive: they restart it after a crash and can launch it when the (OS) system starts.

Winston
A logging library that records the app's logs to a chosen destination (file or database). The package helps when the app runs remotely and you don't have full access to it.

Threads
A tool designed for better work with threads.


22. Monitoring Is Crucial
In production applications, it is critical to get notified if something goes wrong. You do not want to learn from thousands of angry users in your feeds that your server has been down and your Node.js application broken for the last few hours.
Hence, it is imperative to have tooling that alerts you to critical behavior. Some of the best performance-monitoring tools for Node.js are Loggly and New Relic. Developers must keep handy tools for monitoring the performance of Node.js applications.

What is the difference between JSON and BSON?


Behind the scenes, MongoDB represents JSON documents in a binary-encoded format called BSON.

BSON is a format specializing in efficient storage of JSON-like documents; besides supporting the traditional JSON data types, it also supports dates and binary data natively.

BSON extends the JSON model to provide additional data types, such as Date and binary, which are not supported in JSON, and it also provides ordered fields.

In other words, we can say BSON is just binary JSON (a superset of JSON with some more data types, most importantly a binary byte array).

MongoDB uses BSON as the serialization and encoding format for storing and accessing documents. Simply put, BSON is a binary-encoded format for JSON data.

Difference Between JSON and BSON

JavaScript Object Notation (JSON) is a standard file format that uses human-readable text to transmit data as attribute-value pairs and array data types. It is one of the most common data formats, mainly used for asynchronous browser-server communication.
JSON is a language-independent format. It originated in JavaScript, but many programming languages today include code to generate and parse JSON-formatted data.
JSON Data Format:
{"widget": {
    "debug": "on",
    "window": {
        "title": "Sample Konfabulator Widget",
        "name": "main_window",
        "width": 500,
        "height": 500
    },
    "image": { 
        "src": "Images/Sun.png",
        "name": "sun1",
        "hOffset": 250,
        "vOffset": 250,
        "alignment": "center"
    },
    "text": {
        "data": "Click Here",
        "size": 36,
        "style": "bold",
        "name": "text1",
        "hOffset": 250,
        "vOffset": 100,
        "alignment": "center",
        "onMouseUp": "sun1.opacity = (sun1.opacity / 100) * 90;"
    }
}} 

BSON, on the other hand, is a computer interchange format mainly used for data storage and as a network transfer format in the MongoDB database. It is a simple binary form used to represent data structures and associative arrays (often called documents or objects in MongoDB). BSON stands for binary JSON and consists of a list of ordered elements, each containing a field name, a type, and a value. Field names are typically strings.

BSON supports dates and binary data, and because of its binary nature it is not in a readable form, whereas a normal JSON file consists of key-value pairs. BSON files are not always smaller than JSON files, but BSON can skip over records that are irrelevant to a query, while with JSON you need to parse every byte. This is the main reason for using it inside MongoDB.

The BSON format is lightweight, highly traversable, and fast. BSON implementations support embedding objects and arrays within other objects. Indexes can be built on BSON objects, and objects are matched against query expressions on top-level and nested BSON keys. BSON is the binary-encoded JSON document format used to store documents in collections, with support for data types like binary and date that aren't supported in JSON.

In practice, much information about BSON is not needed. Using only the native types of the language and the supplied types, such as the driver's ObjectID, is enough, and the mapping to BSON types will be done on its own.
How BSON Data Is Formatted:
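As a minimal sketch of the layout (following the published BSON spec), here is the document {"hello": "world"} built by hand with Node's built-in Buffer; a real application would let the MongoDB driver do this mapping:

```javascript
// BSON document layout: int32 total length | elements | 0x00 terminator.
// A string element is: 0x02 | cstring field name | int32 value length | bytes | 0x00.
const key = 'hello';
const value = 'world';

const valueLen = Buffer.alloc(4);
valueLen.writeInt32LE(value.length + 1);        // length includes the trailing \0

const element = Buffer.concat([
  Buffer.from([0x02]),                          // type 0x02 = UTF-8 string
  Buffer.from(key + '\0', 'utf8'),              // null-terminated field name
  valueLen,
  Buffer.from(value + '\0', 'utf8'),            // null-terminated value
]);

const totalLen = Buffer.alloc(4);
totalLen.writeInt32LE(4 + element.length + 1);  // prefix counts the whole document

const doc = Buffer.concat([totalLen, element, Buffer.from([0x00])]);
console.log(doc.toString('hex'));
// 160000000268656c6c6f0006000000776f726c640000  (22 bytes)
```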






Implementing a node.js security checklist

In this post, we will look into some security checklist items to apply when implementing a node.js project for a large number of users.

1. Sensitive Data on the Client Side
When deploying front end applications make sure that you never expose API secrets and credentials in your source code, as it will be readable by anyone.

There is no good way to check this automatically, but you have a couple of options to mitigate the risk of accidentally exposing sensitive data on the client side:
a. use of pull requests
b. regular code reviews

2. Session Management
The importance of secure use of cookies cannot be overstated, especially within dynamic web applications, which need to maintain state across a stateless protocol such as HTTP.

Cookie Flags
The following is a list of the attributes that can be set for each cookie and what they mean:
Secure - this attribute tells the browser to only send the cookie if the request is being sent over HTTPS.
HttpOnly - this attribute is used to help prevent attacks such as cross-site scripting since it does not allow the cookie to be accessed via JavaScript.

Cookie Scope
domain - this attribute is used to compare against the domain of the server in which the URL is being requested. If the domain matches or if it is a sub-domain, then the path attribute will be checked next.

path - in addition to the domain, the URL path that the cookie is valid for can be specified. If the domain and path match, then the cookie will be sent in the request.
expires - this attribute is used to set persistent cookies, since the cookie does not expire until the set date is exceeded.

In Node.js you can easily create this cookie using the cookies package. Again, this is quite low-level, so you will probably end up using a wrapper, like cookie-session.
var cookieSession = require('cookie-session');
var express = require('express');
var app = express();
app.use(cookieSession({
  name: 'session',
  keys: [
    process.env.COOKIE_KEY1,
    process.env.COOKIE_KEY2
  ]
}));
app.use(function (req, res, next) {
  req.session.views = (req.session.views || 0) + 1;
  res.end(req.session.views + ' views');
});
app.listen(3000);


3. CSRF
Cross-Site Request Forgery is an attack that forces a user to execute unwanted actions on a web application in which they're currently logged in. These attacks specifically target state-changing requests, not theft of data, since the attacker has no way to see the response to the forged request.

In Node.js to mitigate this kind of attacks, you can use the csrf module. As it is quite low-level, there are wrappers for different frameworks as well. One example of this is the csurf module: an express middleware for CSRF protection.

On the route handler level you have to do something like this:
var cookieParser = require('cookie-parser');
var csrf = require('csurf');
var bodyParser = require('body-parser');
var express = require('express');

// setup route middlewares
var csrfProtection = csrf({ cookie: true });
var parseForm = bodyParser.urlencoded({ extended: false });

// create express app
var app = express();

// we need this because "cookie" is true in csrfProtection
app.use(cookieParser());

app.get('/form', csrfProtection, function(req, res) {
  // pass the csrfToken to the view
  res.render('send', { csrfToken: req.csrfToken() });
});

app.post('/process', parseForm, csrfProtection, function(req, res) {
  res.send('data is being processed');
});

While on the view layer you have to use the CSRF token like this:
<form action="/process" method="POST">
  <input type="hidden" name="_csrf" value="{{csrfToken}}">
  Favorite color: <input type="text" name="favoriteColor">
  <button type="submit">Submit</button>
</form>

4. SQL Injection

SQL injection is the injection of a partial or complete SQL query via user input. It can be used to read sensitive information, or it can be destructive as well.
Take the following example:
select title, author from books where id=$id
In this example, $id comes from the user. What if the user enters 2 or 1=1?
The query becomes the following:
select title, author from books where id=2 or 1=1
The easiest way to defend against this kind of attacks is to use parameterized queries or prepared statements.
If you are using PostgreSQL from Node.js, then you are probably using the node-postgres module. To create a parameterized query, all you need to do is:
var q = 'SELECT name FROM books WHERE id = $1';
client.query(q, ['3'], function(err, result) {});
sqlmap is an open-source penetration testing tool that automates the process of detecting and exploiting SQL injection flaws and taking over database servers. Use this tool to test your applications for SQL injection vulnerabilities.

5. Security HTTP Headers

There are some security-related HTTP headers that your site should set. These headers are:

Strict-Transport-Security enforces secure (HTTP over SSL/TLS) connections to the server
X-Frame-Options provides clickjacking protection
X-XSS-Protection enables the Cross-site scripting (XSS) filter built into most recent web browsers
X-Content-Type-Options prevents browsers from MIME-sniffing a response away from the declared content-type
Content-Security-Policy prevents a wide range of attacks, including Cross-site scripting and other cross-site injections

In Node.js it is easy to set these using the Helmet module:
var express = require('express');
var helmet = require('helmet');
var app = express();

app.use(helmet());
Helmet is available for Koa as well: koa-helmet.

Also, in most architectures, these headers can be set in web server configuration (Apache, nginx), without changing the actual application's code. In nginx it would look something like this:
# nginx.conf
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header Content-Security-Policy "default-src 'self'";
For a complete example take a look at this nginx configuration file.
If you quickly want to check if your site has all the necessary headers check out this online checker:
http://cyh.herokuapp.com/cyh.

6. Error Handling using Error Codes, Stack Traces

During different error scenarios, the application may leak sensitive details about the underlying infrastructure, like X-Powered-By: Express.

Stack traces are not treated as vulnerabilities by themselves, but they often reveal information that can be interesting to an attacker. Providing debugging information as a result of operations that generate errors is considered a bad practice. You should always log them, but do not show them to the users.

NPM
With great power comes great responsibility: npm has lots of packages that you can use instantly, but that comes with a cost. You should check what you are requiring into your applications, as packages may contain critical security issues.

The Node Security Project
Luckily the Node Security project has a great tool that can check your used modules for known vulnerabilities.

npm i nsp -g
# either audit the shrinkwrap
nsp audit-shrinkwrap
# or the package.json
nsp audit-package

You can also use requireSafe to help you with this.





What is the difference between directive and component in Angular?




In this post, we will see the difference between a directive and a component in Angular.
Directives
In short, you can say that directives define functionality, mostly to manipulate the HTML DOM, without defining any UI; you attach a directive to an existing view or template. Components, on the other hand, define a view or template as well as the logic to manipulate that template.
Basically, there are three types of directives in Angular, as per the documentation:
  • Component
  • Structural directives
  • Attribute directives

Component
Component decorator allows you to mark a class as an Angular component and provide additional metadata that determines how the component should be processed, instantiated and used at runtime.
Components are the most basic building block of a UI in an Angular application. An Angular application is a tree of Angular components. Angular components are a subset of directives. Unlike directives, components always have a template and only one component can be instantiated per an element in a template.
A component must belong to a NgModule in order for it to be usable by another component or application. To specify that a component is a member of a NgModule, you should list it in the declarations field of that NgModule.
A component is also a type of directive, with a template, styles, and a logic part, and it is the most widely used type of directive in Angular.
In this type of directive you can use other directives, whether custom or built-in, in the @Component annotation like the following:
@Component({
    selector: "my-app",
    directives: [custom_directive_here]
})
Use this directive in your view as:
<my-app></my-app>

Structural directives
Structural directives like *ngFor and *ngIf change the DOM layout by adding and removing DOM elements.
Attribute directives
Attribute directives are used to give custom behavior or styling to existing elements by applying some function or logic. For example, ngStyle is an attribute directive that applies styles to elements dynamically. We can also create our own directive and use it as an attribute of predefined or custom elements; here is an example of a simple directive:
First, we have to import Directive from angular2/core:
import {Directive, ElementRef, Renderer, Input} from 'angular2/core';

@Directive({
  selector: '[Icheck]',
})
export class RadioCheckbox {
   // custom logic here ...
}
And we have to use this in the view like below:
<span Icheck>HEllo Directive</span>


What is the difference between sessionStorage, localStorage and cookies?

Storage Types


This is an extremely broad scope question, and a lot of the pros/cons will be contextual to the situation.
In all cases, these storage mechanisms will be specific to an individual browser on an individual computer/device. Any requirement to store data on an ongoing basis across sessions will need to involve your application server side - most likely using a database, but possibly XML or a text/CSV file.
localStorage, sessionStorage, and cookies are all client storage solutions. Session data is held on the server where it remains under your direct control.

LocalStorage:

Web storage can be viewed simplistically as an improvement on cookies, providing much greater storage capacity. The available size is 5MB, which is considerably more space to work with than a typical 4KB cookie.
The data is not sent back to the server for every HTTP request (HTML, images, JavaScript, CSS, etc) - reducing the amount of traffic between client and server.
The data stored in localStorage persists until explicitly deleted. Changes made are saved and available for all current and future visits to the site.
It works on same-origin policy. So, data stored will only be available on the same origin.

Cookies:

We can set an expiration time for each cookie.
The 4K limit is for the entire cookie, including name, value, expiry date, etc. To support most browsers, keep the name under 4000 bytes, and the overall cookie size under 4093 bytes.
The data is sent back to the server for every HTTP request (HTML, images, JavaScript, CSS, etc) - increasing the amount of traffic between client and server.

SessionStorage:

It is similar to localStorage.
Changes are only available per window (or tab, in browsers like Chrome and Firefox). Changes made are saved and available for the current page as well as future visits to the site in the same window. Once the window is closed, the storage is deleted.
The data is available only inside the window/tab in which it was set.
The data is not persistent i.e. it will be lost once the window/tab is closed. Like localStorage, it works on same-origin policy. So, data stored will only be available on the same origin.
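These APIs are browser-only, so the snippet below sketches the Storage get/set/remove contract against a tiny in-memory stand-in object; in a real page, localStorage and sessionStorage are supplied by the window and behave as described above:

```javascript
// Minimal stand-in mimicking the Storage interface (browser-only in reality).
const localStorage = {
  store: {},
  setItem(key, value) { this.store[key] = String(value); },
  getItem(key) { return key in this.store ? this.store[key] : null; },
  removeItem(key) { delete this.store[key]; },
};

localStorage.setItem('theme', 'dark');       // persists until explicitly deleted
console.log(localStorage.getItem('theme'));  // 'dark'
localStorage.removeItem('theme');
console.log(localStorage.getItem('theme'));  // null
```

Note that, per the Storage spec, values are always stored as strings, which is why setItem coerces with String().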


IndexedDB

When you select an origin inside the Indexed DB storage type in the storage tree, the table lists the details of all the databases present for that origin. Databases have the following details:
  • Database Name — The name of the database
  • Storage — The storage type specified for the database (new in Firefox 53)
  • Origin — Its origin
  • Version — The database version
  • Object Stores — Number of object stores in the database
When an IndexedDB database is selected in the storage tree, details about all the object stores are listed in the table. Any object store has the following details:

  • Object Store Name — The name of the object store
  • Key — The keyPath property of the object store.
  • Auto Increment — Whether auto-increment is enabled
  • Indexes — Array of indexes present in the object store


Cache Storage

Under the Cache Storage type, you can see the contents of any DOM caches created using the Cache API. If you select a cache, you'll see a list of the resources it contains. For each resource, you'll see:

1. the URL for the resource

2. the status code for the request that was made to fetch it.



Copyright © 2012-2018 All Rights Reserved