Monday, September 8, 2014

Database multi-tenancy using Hibernate 4

With many business applications being written with the intent of being deployed on the cloud, there is an increasing need to design multi-tenancy into your application.


For the persistence layer, a multi-tenant design offers a few options:
http://msdn.microsoft.com/en-us/library/aa479086.aspx


  1. separate database per tenant
  2. separate schema in the same database per tenant
  3. same schema and same database for all tenants, with a discriminator column identifying each tenant's rows in the same table


In this article, I am going to talk about implementing options 1 and 2 using the multi-tenancy support in Hibernate 4.

Also, for the database connections, we have a choice between a common connection pool, from which connections are allocated to each tenant request, and a separate connection pool per tenant.
With the common connection pool approach, each connection needs to be "primed" before being used for data access against a tenant database or schema, using a statement like "use <tenant_schema>" or "use <tenant_database>".


Multi-tenancy in Hibernate


For the "separate connection pool per tenant approach"

First we need to implement a custom connection provider as follows

import java.sql.Connection;
import java.sql.SQLException;

import org.apache.commons.dbcp2.BasicDataSource;
import org.hibernate.engine.jdbc.connections.spi.ConnectionProvider;

public class ConnectionProviderImpl implements ConnectionProvider {
 
 private final BasicDataSource basicDataSource = new BasicDataSource();
 
 public ConnectionProviderImpl(String database){
                //this should be read from properties file
  basicDataSource.setDriverClassName("com.mysql.jdbc.Driver");
  basicDataSource.setUrl("jdbc:mysql://localhost:3306/"+database);
  basicDataSource.setUsername("myuser");
  basicDataSource.setPassword("mypassword");
  basicDataSource.setInitialSize(2);
  basicDataSource.setMaxTotal(10);
 }

 @Override
 public boolean isUnwrappableAs(Class unwrapType) {
  return false;
 }

 @Override
 public <T> T unwrap(Class<T> unwrapType) {
  return null;
 }

 @Override
 public void closeConnection(Connection arg0) throws SQLException {
  arg0.close();
 }

 @Override
 public Connection getConnection() throws SQLException {
  return basicDataSource.getConnection();
 }

 @Override
 public boolean supportsAggressiveRelease() {
  return false;
 }

}




Next, we need to implement Hibernate's AbstractMultiTenantConnectionProvider, as follows.
Here I am maintaining a map of database identifiers against connection providers.
When Hibernate invokes the selectConnectionProvider( ) method with the tenant identifier, we use that identifier to look up the appropriate connection provider in the map.
It is also necessary to implement getAnyConnectionProvider( ), which should return a sensible default connection provider.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

import org.hibernate.engine.jdbc.connections.spi.AbstractMultiTenantConnectionProvider;
import org.hibernate.engine.jdbc.connections.spi.ConnectionProvider;

public class MultiTenantConnectionProvider extends AbstractMultiTenantConnectionProvider {
 
 private final HashMap<String, ConnectionProvider> connProviderMap = new HashMap<String, ConnectionProvider>();
 
 public MultiTenantConnectionProvider(){
  

  List<String> providerNames = new ArrayList<String>();
  providerNames.add("default_db");
  providerNames.add("db1");
  providerNames.add("db2");
  //need to get above from properties file
  
    for (String providerName : providerNames) {
       connProviderMap.put(providerName, new ConnectionProviderImpl(providerName));
    }
  
 }
 
 @Override
 protected ConnectionProvider getAnyConnectionProvider() {
  System.out.println("inside MultiTenantConnectionProvider::getAnyConnectionProvider");
  return connProviderMap.get("default_db");
 }

 @Override
 protected ConnectionProvider selectConnectionProvider(String tenantId) {
  ConnectionProvider connectionProvider = connProviderMap.get(tenantId);
  if(connectionProvider == null)
   connectionProvider =  new ConnectionProviderImpl("default_db");
  
  return connectionProvider;
 } 

}



For a common connection pool for all tenants

We need to implement Hibernate's MultiTenantConnectionProvider interface, as follows.
Since we are using the same connection pool for all tenants, in getConnection() we need to prime the connection using a SQL statement like 'use <database>', so that the connection's further use runs against the database for the given tenant id.
Also, in releaseConnection( ), we issue a 'use <default_db>' as a fallback before returning the connection to the pool.



import java.sql.Connection;
import java.sql.SQLException;

import org.hibernate.HibernateException;
import org.hibernate.engine.jdbc.connections.spi.ConnectionProvider;
import org.hibernate.engine.jdbc.connections.spi.MultiTenantConnectionProvider;

public class MultiTenantConnectionProviderWithSingleDBPool implements
  MultiTenantConnectionProvider {
 
 private final ConnectionProvider connectionProvider = new ConnectionProviderImpl(TenantIdentifierResolverImpl.DEFAULT_TENANT_ID); 
 

 @Override
 public boolean isUnwrappableAs(Class unwrapType) {
  return false;
 }

 @Override
 public <T> T unwrap(Class<T> unwrapType) {
  return null;
 }

 @Override
 public Connection getAnyConnection() throws SQLException {
  System.out.println("inside MultiTenantConnectionProvider::getAnyConnection");
  return connectionProvider.getConnection();
 }

 @Override
 public void releaseAnyConnection(Connection connection) throws SQLException {
  connectionProvider.closeConnection( connection );
 }

 @Override
 public Connection getConnection(String tenantIdentifier) throws SQLException {
  final Connection connection = getAnyConnection();
  try {
     connection.createStatement().execute( "USE " + tenantIdentifier );
  }
  catch ( SQLException e ) {
     throw new HibernateException(
       "MultiTenantConnectionProvider::Could not alter JDBC connection to specified schema [" +tenantIdentifier + "]",e);
  }
  return connection;
 }

 @Override
 public void releaseConnection(String tenantIdentifier, Connection connection) throws SQLException {
  try {
   connection.createStatement().execute( "USE default_db" );
  }
  catch ( SQLException e ) {
     throw new HibernateException(
     "Could not alter JDBC connection to specified schema [" +
   tenantIdentifier + "]",e);
  }
  connectionProvider.closeConnection( connection );
 }

 @Override
 public boolean supportsAggressiveRelease() {
  return false;
 }

}





Refer to the Hibernate docs for more details:
http://docs.jboss.org/hibernate/orm/4.1/devguide/en-US/html/ch16.html



Now, to persuade Hibernate to use the tenant identifier before any database access, we can:
  1. manually set the tenant identifier on the Hibernate session factory
  2. implement a Hibernate tenant identifier resolver


Manually set the tenant identifier on the Hibernate session factory
This can be done, in a web application, in any suitable interceptor where the Hibernate session factory is available:
Session session = sessionFactory.withOptions()
        .tenantIdentifier( yourTenantIdentifier )
        ...
        .openSession();


A Hibernate tenant identifier resolver can be implemented as follows.
Here I am using a ThreadLocal to hold the tenant identifier. The ThreadLocal itself can be set early in the HTTP request thread in, say, a servlet filter (a sketch of such a filter follows the resolver below).


import org.hibernate.context.spi.CurrentTenantIdentifierResolver;

//named differently from the imported Hibernate interface, to avoid a name clash
public class TenantIdentifierResolverImpl implements
  CurrentTenantIdentifierResolver {
 
 public static final ThreadLocal<String> _tenantIdentifier = new ThreadLocal<String>();
 public static final String DEFAULT_TENANT_ID = "default_db";

 @Override
 public String resolveCurrentTenantIdentifier() {
  System.out.println("from inside resolveCurrentTenantIdentifier....");
  String tenantId = _tenantIdentifier.get();
  if(tenantId == null)
   tenantId = DEFAULT_TENANT_ID;
  
  System.out.println("threadlocal tenant id ="+tenantId);
  return tenantId;
 }

 @Override
 public boolean validateExistingCurrentSessions() {
  return true;
 }

}
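
For completeness, here is a minimal sketch of such a filter. The filter class and the "X-Tenant-Id" header are my own illustrative placeholders (the tenant could equally be derived from the request's sub-domain or from the logged-in user):

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

public class TenantFilter implements Filter {

 @Override
 public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
   throws IOException, ServletException {
  //resolve the tenant for this request; a custom header is used here purely for illustration
  String tenantId = ((HttpServletRequest) req).getHeader("X-Tenant-Id");
  try {
   TenantIdentifierResolverImpl._tenantIdentifier.set(tenantId);
   chain.doFilter(req, resp);
  } finally {
   //always clear the ThreadLocal, since container threads are pooled and reused
   TenantIdentifierResolverImpl._tenantIdentifier.remove();
  }
 }

 @Override
 public void init(FilterConfig config) throws ServletException {
 }

 @Override
 public void destroy() {
 }
}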



Finally, the Hibernate configuration for specifying that we are using Hibernate multi-tenancy, in hibernate.cfg.xml:
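
The relevant entries look like the following (a sketch, assuming the classes above live in a package called com.example; use SCHEMA instead of DATABASE for the separate-schema strategy, and point the connection provider property at whichever of the two provider implementations above you are using):

<property name="hibernate.multiTenancy">DATABASE</property>
<property name="hibernate.multi_tenant_connection_provider">com.example.MultiTenantConnectionProvider</property>
<property name="hibernate.tenant_identifier_resolver">com.example.TenantIdentifierResolverImpl</property>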





That's it people,
Happy multi-tenanting :-)

Cheers!

Thursday, April 17, 2014

Using free openshift java PAAS, to run a web application on an online version of tomcat

Many times you have a war file that you would like to deploy such that the application becomes available online. A classic use case is using a REST service as a stub for your mobile app or thin-client web app.

OpenShift Online PaaS is an easy solution for such free hosting of a Java web application.


Create a free account at OpenShift Online.
Download their rhc command line tool.
Fire the following command to create one of your three free apps, where the name of the new hosted web app is, for example, restgrails:

rhc create-app restgrails tomcat-7

Go to the OpenShift web console and note down the SSH URL of the git repository.
Set up SSH keys for the OpenShift git repo using the command:

rhc setup

If required, place the keys generated in <user-dir>\.ssh into <git-install-dir>\.ssh

Fire the following command on your local machine:

git clone <git_url> restgrails

On your local machine, the above git command will create a new directory called restgrails and check out the default web app from the OpenShift git repo.

cd restgrails

Fire the following commands to get rid of the default web app's src and pom.xml, since we will deploy a ready-made war file onto the OpenShift PaaS:

git rm -rf src/ pom.xml

git commit -am "deleted default source code"

Now prepare the ready-made war file for deployment onto OpenShift:

copy the war file to restgrails/webapps/
rename the war file to ROOT.war  --this is important if you want to map your webapp to the root of the OpenShift app URL

git add .
git commit -am "added new root.war to webapps"
git push --force

The above git push command will trigger a deployment of the war file into the remote tomcat on OpenShift, and soon the web app will become available online at, say

http://restgrails-ideafountain.rhcloud.com/
i.e.
http://<web-app-name>-<domain-name-reg-with-openshift-online>.rhcloud.com/

You can tail the tomcat server logs with the following command:
rhc tail restgrails

For databases, OpenShift Online supports databases like mongodb, mysql and postgresql; so if your app uses Hibernate, it can easily be configured to use an online database like mysql or postgresql while running online.
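
For example, a mysql cartridge can be added to an existing app with a command along the lines of the following (cartridge names and versions vary across OpenShift releases, so check the output of rhc cartridge list first):

rhc cartridge add mysql-5.5 -a restgrails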


To check logs:

Go to the OpenShift web console and click on "Remote Access" for your application.


Copy the ssh command shown in the textbox on the web console and fire it from your command prompt.

You will get logged in to the VM.
Here, give the following commands:


[plateletdonation-ideafountain.rhcloud.com ]\> find  . -name *.log
find: `./.tmp': Permission denied
find: `./.sandbox': Permission denied
find: `./.ssh': Permission denied
./app-root/logs/jbossews.log
./jbossews/tmp/stacktrace.log


[plateletdonation-ideafountain.rhcloud.com ]\> tail -f ./app-root/logs/jbossews.log

Wednesday, March 26, 2014

Why is enterprise application development different from just programming?

Any monkey can write code that works!
Writing code that fulfills a functionality is a very important first step in the process of developing an enterprise application, but it is only the beginning; the enterprise application is far from done...

So what else is involved in developing an enterprise application?
As it turns out there are several other aspects of development that have to be considered:

  • Is the code unit tested with sufficient test coverage?
  • Is the code thread safe in the face of multiple users accessing it?
  • Is the code transactional?
  • Is the code secure?
  • Is the code easy to debug? Is it instrumented with logging at appropriate levels?
  • Does the code handle exceptions and errors?
  • Is the code optimized for performance and for memory consumption?
  • Is the code resilient? Can it recover from a failure?
  • Is the code easily extensible and maintainable?
  • Is the code portable across deployment environments?
  • Are all the configurable properties externalized and easily changed during deployment?
  • Is the code able to inter-operate with legacy systems to be integrated?

Answering all the above questions to the satisfaction of the non-functional requirements is equally important. Many developers are aware of all the above aspects of development, but completely ignore them when asked for estimates of when they think they can complete a particular functionality or feature of the enterprise application under development :-(

On a side note, even when we as developers decide to write our own reusable component or framework, please take into consideration that all the above aspects need to be built into the component/framework, and hence many times it makes better sense to use an opensource component which is popularly used or has withstood the test of time. Don't re-invent the wheel :-)


Taking an application from the developer's machine to production
Taking a functionally complete application from a developer's machine up to production is a long walk. Let's see some of the common things that need to be done:

  • automated integration tests especially for regression testing
  • scripts for automated checkout, packaging and deployment
  • automated configuration of the application as per environment, e.g. integration, qc, uat and prod
  • scripts for application life cycle management - start, restart, stop the application
  • scripts/tools for application database cleanup and archival
  • scripts/tools for application log files cleanup and archival
  • scripts/tools for monitoring application health and sending alerts
  • scripts/tools for scraping application logs for exceptions, etc. and sending out alerts
  • scripts/tools for performance and memory monitoring of application
  • migrating existing database to new application database + testing app against migrated data
  • scripts for scheduling house keeping work around the application


As can easily be seen, there is a very substantial and important "operations" component which needs to be undertaken for the successful implementation of any application. All these activities can hardly be classified as typical coding. These need to be factored into time estimates as well as workforce planning.



Tuesday, March 25, 2014

When and why to use NODE.JS



Node.js!

The new kid on the block has definitely been turning mainstream. So, you must have heard buzzwords like async IO, server-side javascript, etc. getting associated with the node hype. There have been some remarks from the sceptics as well - you can't handle dynamic javascript with your immature development workforce, or node.js is fine for hello-world apps but not necessarily for primetime on the enterprise applications landscape.

Well, here's my attempt at clearing up the mist with regards to node.js and related frameworks...

When to use Node.js

As very rightly pointed out by the authors of node.js, it is an excellent fit for DIRT-y applications. DIRT stands for Data Intensive, Real Time.

How do Java EE web apps work

Let's start out with a traditional Java EE web application hosted on, say, a tomcat container. When 1000 users concurrently access the web application, the web container allocates a thread for each user request. Each thread does its processing, passing control through various application layers like view, controller, service and data access, to finally hit a database or backend, get the relevant data and bring it back to the presentation tier, from where it may be formatted as an html response and sent back to the client browser as an Http response. The hundreds or thousands of threads running in the web container ensure scalability of the application. But each thread consists of synchronous method calls, and the thread is held up while each sync call completes. The best way to emphasize this is through a database call such as:
ResultData data = personDao.updatePersonDetails(person);
 ( above call blocks till all DB rows are fetched )

It is not uncommon for many threads in such an application to be waiting for IO, either disk IO or network IO (as in a database call). As a result, one can clearly observe that hardly 10%-20% of cpu is utilized on the web tier; most latency is due to threads waiting on IO.

Can the above latency be reduced?

Enter Node.js with its async IO

Node.js at its simplest and most fundamental level prefers non-blocking function calls, through incessant use of callbacks. For example, the above blocking database call can be simplified logically to the following async call:
personDao.updatePersonDetails(person, function(err, numRowsAffected ){....} );
The above call issues a request to the database driver for the update but does not wait for the results; it continues to the next line of execution, and when the database update has been made, the callback, i.e. the anonymous function, gets executed, returning the error if any and the number of rows affected by the update.

Though the above difference between using sync blocking functions v/s async functions with callbacks may seem trivial, if we adopt async non-blocking function calls as the default way of programming, soon things start adding up.

Instead of having one thread dedicated to each http request and having thousands of such threads, in node.js we have async functions which do processing in bits and don't hold up the single event-loop thread. The node.js model starts exhibiting more scalability, due to the non-blocking nature of calls in the processing stack.

Where are the benchmarks?

Well, consider this: the apache webserver uses the multi-threaded model, whereas nginx uses non-blocking IO, and several benchmarks have shown that nginx, with its non-blocking IO, scales roughly 20% better than the apache webserver. So even for serving static content, we know async IO tends to be more scalable.

Can Node.js leverage multiple cores? Is it fault tolerant?

This was another valid criticism raised against the node.js platform in the early days of the framework. We need to understand that the node.js server has only a single thread available for the application's usage (there may be others for the server's internal housekeeping). Due to this limitation, node.js processing runs on a single core and is also susceptible to outage, since one thread crashing can bring processing down for all clients. Thankfully, multiple node.js servers can be run on the same machine with a load balancer up front. Alternatively, node.js modules like "cluster", "upstart" and "forever" ensure node.js servers can be horizontally scaled, with features like guaranteed uptime and automatic restart on crash.

Node.js for realtime apps

Out of the acronym DIRT, we already saw how node.js is a good fit for "data intensive" applications; now it's time to talk about one more category of applications, namely "realtime applications", for which node.js is also suitable.
Again, let's consider traditional web apps which need to show data that changes frequently at the backend. The traditional approach over Http is for browser-based javascript clients to "poll" the server using "auto-refresh" tags. Apart from loading up the server, this approach is also very inefficient; instead, newer full-duplex protocols like websocket can be used by the server to push content, messages and notifications to the client browsers "whenever" the server events occur.

This event-driven approach to pushing from server to client is very compatible with node.js's async, non-blocking APIs; hence there is excellent support for server push in node.js based modules like socket.io. This makes node.js very suitable for web applications that require high interactivity, server push and notifications.

What is node.js, and how useful is it for building real-life web apps from scratch?

Well, the node.js core is really an http server and a platform for hosting web apps. The core node.js hence does not include any features which are directly used in the making of a web application. To create a real-life web application, one needs a framework with many features, like the ability to host static content from the server's native file system, user authentication, session and cookie management, templatable views, a navigation and routing framework, binding of models and views, error handling, request body parsing, etc.
To provide all the above functionality, the core node.js needs to have relevant modules added on top of it.
Node.js provides a robust module system; npm, which stands for Node Package Manager, helps in installing such node.js modules.
The most critical module, which provides pluggable middleware, is a module called connect.
On top of connect is a module called express.js, which provides a full-fledged web application framework.
More custom requirements for the web app can be fulfilled by installing appropriate modules freely available via npm.
Similarly, templating engines like jade and ejs can be plugged into frameworks like express.js, thereby enriching the node.js ecosystem for your web application development.

In subsequent articles, I will walk through developing a basic node.js web application for doing CRUD (create, read, update, delete) against a mongo database. You will be pleasantly surprised at how easy and elegant it is to develop web applications in node.js based frameworks.

Cheers!


Friday, February 7, 2014

Securing your REST API



With developers exposing application functionality as REST services willy-nilly all over the enterprise, the onus of securing these REST services is on them as well. Here I will discuss the options for authenticating your REST services. For authenticating REST services, you have the following broad options:


  1. Use HTTP Basic Auth, Digest, Form based or Client Certificates
  2. Use custom token based authentication mechanisms
  3. Use OAuth 2.0 - Two legged or Three Legged Auth


Using HTTP Basic Auth, Digest, Form based or Client Certificates
Since HTTP is the most popular implementation of REST, Java EE based authentication mechanisms hold true for REST service calls as well. Here we have a few options, as described in the Java EE Security documentation.

To secure a REST API exposed at http://<hostname>:<portnumber>/web-app-context/webapi/rest-resource, we will use the following configuration in the web application's web.xml.
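
A sketch of such a configuration, using the BASIC scheme (the resource name, url-pattern, role and realm names below are placeholders to adapt to your application):

<security-constraint>
    <web-resource-collection>
        <web-resource-name>rest-resources</web-resource-name>
        <url-pattern>/webapi/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>rest-user</role-name>
    </auth-constraint>
</security-constraint>
<login-config>
    <auth-method>BASIC</auth-method>
    <realm-name>my-realm</realm-name>
</login-config>
<security-role>
    <role-name>rest-user</role-name>
</security-role>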




The above configuration uses the "Basic Auth" scheme; similarly, REST based (logical) resources can be secured using the DIGEST, Form Based and client-certificate schemes as well. Please refer to the Java EE security documentation for further details.

Also, a suitable realm can be used to refer to credential stores. If you want the actual authentication to be portable across application servers, you can use JAAS and write your own custom LoginModule and CallbackHandler.

So far so good, securing REST services is like any conventional Java EE web application. But....

Many times, the REST services are consumed not by conventional browser based UIs, but rather by mobile applications based on, say, android or iOS. In such cases, having a mobile app which needs a re-login after an HttpSession timeout of, say, 30 minutes (the default for tomcat) can be very tedious and undesirable. Instead, most mobile apps allow the user to enter the username and password once and then store it on the mobile device as an app preference.

In order to provide a feature such as above, with mobile clients in mind, we can use "custom tokens with expirations". The idea itself is quite simple.

  1. The first time a mobile user uses the app, he is asked for a username and password for authentication.
  2. The server backend provides a REST API such as /login, which takes an Http POST with the username and password as part of the Http request body.
  3. The password itself can be sent across using an encoding such as base-64 (over SSL, obviously!).
  4. The server validates the username and password against a credential store, such as an LDAP or ActiveDirectory based one. Once the user is authenticated, a unique token is generated by the server; the server saves this token against the username, along with, say, a timestamp and an expiry date-time, and the token is also sent back to the client.
  5. The client stores this token, say in HTML5 localStorage, and uses it in subsequent REST requests, in a custom http header.
  6. For subsequent requests, the server looks at the custom header, retrieves the token, and validates it by checking it against the username and ascertaining that it has not expired as per the expiry policy.
  7. If the token is valid, the request goes through; if the token is invalid or expired, the server throws back a 401 error, which makes the client go back to the login page for a re-login.
The above mechanism is depicted in representative code below, to clarify the explanation.

The server side authentication and token check can be done using a servlet filter interceptor, as depicted below:

package com.mastek.opensource.security.filters;

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SecurityTokenFilter implements Filter {
 
 @Override
 public void doFilter(ServletRequest req, ServletResponse resp, 
   FilterChain chain) throws IOException, ServletException {
  
  HttpServletRequest request = (HttpServletRequest)req;
  String token = request.getHeader("MyTokenField");
  
  //additionally check if the token is valid
  //valid can also check if token is expired as per policy
  if(token == null || token.length() == 0)  
     //send HTTP resp code 401: Unauthorized
     ((HttpServletResponse)resp).setStatus(HttpServletResponse.SC_UNAUTHORIZED); 
  else
     chain.doFilter(req, resp);
  
 }


 @Override
 public void destroy() {

 }
 
 @Override
 public void init(FilterConfig arg0) throws ServletException {

 }
 
}


The servlet filter entry in web.xml is shown below:
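
A sketch of the entries (the url-pattern is a placeholder for whatever path prefix your REST resources live under):

<filter>
    <filter-name>SecurityTokenFilter</filter-name>
    <filter-class>com.mastek.opensource.security.filters.SecurityTokenFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>SecurityTokenFilter</filter-name>
    <url-pattern>/webapi/*</url-pattern>
</filter-mapping>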

The ajax request, which sends out the token in a custom Http header and redirects to the login page if the server throws a 401, is depicted below:


$(document).ready(function() {
$.ajax({
    type: 'GET',
    url: "my-url",
    dataType: 'json',
    data: "{ 'name': 'somedata'}",
    beforeSend: function (request){
        request.setRequestHeader("MyTokenField", localStorage.userToken);
    },
    success: function(mydata) {
        console.log('data ret='+JSON.stringify(mydata)) 
    },
    error: function(mydata){
        if(mydata.status === 401){
            alert('Redirecting you back to login page....');
            window.location = "login.html";
        }
    }
});
 
});

I haven't put up the /login API, but it's easy to imagine it as a service which takes in a username and password, and returns a token.
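
For illustration, here is a minimal sketch of what such a /login resource could look like using JAX-RS; the CredentialStore and TokenStore helpers are hypothetical placeholders for your own credential store and token persistence:

import java.util.UUID;

import javax.ws.rs.FormParam;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("/login")
public class LoginResource {

 @POST
 public Response login(@FormParam("username") String username,
   @FormParam("password") String password) {
  //validate against your credential store (LDAP, ActiveDirectory, database...)
  //CredentialStore is a hypothetical helper, not a standard API
  if (!CredentialStore.isValid(username, password)) {
   //invalid credentials: send back 401, so the client shows the login page again
   return Response.status(Response.Status.UNAUTHORIZED).build();
  }
  //generate an opaque unique token and remember it against the username,
  //along with a creation timestamp, so expiry can be enforced later;
  //TokenStore is likewise a hypothetical helper
  String token = UUID.randomUUID().toString();
  TokenStore.save(username, token, System.currentTimeMillis());
  //the client stores this token (e.g. in localStorage) and sends it
  //back in the custom header on subsequent requests
  return Response.ok(token).build();
 }
}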

The above mechanism is still susceptible to man-in-the-middle attacks, for which the transport needs to be secured with SSL anyway. Libraries for generating a hash as the unique token are available in javascript as well as java.

The flexible thing about custom tokens is that you can put in code on your server side to implement any particular policy for token expiration. For example, tokens can be set to expire in a day or in a week. Also, keeping track of active users in your system is easier.

I will leave the OAuth 2.0 two-legged mechanism for a subsequent article, since this one is getting too lengthy as it is :-)

Hope you enjoyed the discussion above, related to basics of securing REST services.

Cheers!

Tuesday, February 4, 2014

Backbone.js - Getting Started Tutorial

When I started learning backbone.js, though there were several getting-started tutorials, all of them seemed a little aggressive, since they introduced all the backbone features in a single, supposedly simple application. In this tutorial I am going to introduce each of the basic backbone concepts incrementally, and provide a less steep introduction to backbone.js for beginners.

Let's jump right in...

We will start out with a bare-bones html file, which can be thought of as the anchor or home page for the backbone application. Let's name it bb-intro.html.

Initial listing of bb-intro.html
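
A minimal skeleton might look like the following; the script paths are placeholders, and note that backbone depends on underscore and jquery being loaded first:

<!DOCTYPE html>
<html>
<head>
  <title>Backbone.js - Getting Started</title>
</head>
<body>
  <div id="container"></div>

  <script src="js/jquery.min.js"></script>
  <script src="js/underscore-min.js"></script>
  <script src="js/backbone-min.js"></script>
  <script>
    // the router, views and templates from this tutorial go here
  </script>
</body>
</html>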




Next we will create a "router" in backbone, it is like a navigator component, which maps URLs to views/templates.

Let's have 3 pages (views) in our application; to be able to display these on demand, we will define the following routes:


 var AppRouter = Backbone.Router.extend({
   routes : {
    "" : "showPage1",
    "page1" : "showPage1",
    "page2" : "showPage2",
    "page3" : "showPage3"
   },
   showPage1 : function() {
    new AppViewPage1().render();
   },
   showPage2 : function() {
    new AppViewPage2().render();
   },
   showPage3 : function() {
    new AppViewPage3().render();
   }
});

  
var appRouter = new AppRouter();
Backbone.history.start();


The route entry "" implies the default html page and refers to the following URL
http://<host-name>:<port number>/<web-context>/<path-to-html>/bb-intro.html
or
http://<host-name>:<port number>/<web-context>/<path-to-html>/bb-intro.html#


The entry "page1" refers to the following URL:
http://<host-name>:<port number>/<web-context>/<path-to-html>/bb-intro.html#page1

The entry "page2" refers to the following URL:
http://<host-name>:<port number>/<web-context>/<path-to-html>/bb-intro.html#page2

and so on....I think you get the picture here :)


Next up are the views, like AppViewPage1, AppViewPage2 and AppViewPage3; these are, as you might have guessed, javascript objects defined as follows:



 var AppViewPage1 = Backbone.View.extend({
  template : _.template($('#page1').html()),
  render : function() {
   $('#container').html(this.template());
  }
 });
  
  
 var AppViewPage2 = Backbone.View.extend({
  template : _.template($('#page2').html()),
  render : function() {
   $('#container').html(this.template());
  }
 });
  
 var AppViewPage3 = Backbone.View.extend({
  template : _.template($('#page3').html()),
  render : function() {
          $('#container').html(this.template());
  }
 });


The code for each AppViewPage is quite simple and stereotypical, for illustration purposes.
We use the underscore library's _.template( ) api to load a template string.
And in the view's render method, we simply assign the html content of the div with id "container", using familiar jQuery syntax.

Next up are the definitions of the templates themselves. These are html templates containing valid html elements.
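
A sketch of such templates, placed in the body of bb-intro.html (the headings and link text are placeholders):

<script type="text/template" id="page1">
  <h2>This is Page 1</h2>
  <a href="#page2">Go to Page 2</a> | <a href="#page3">Go to Page 3</a>
</script>

<script type="text/template" id="page2">
  <h2>This is Page 2</h2>
  <a href="#page1">Go to Page 1</a> | <a href="#page3">Go to Page 3</a>
</script>

<script type="text/template" id="page3">
  <h2>This is Page 3</h2>
  <a href="#page1">Go to Page 1</a> | <a href="#page2">Go to Page 2</a>
</script>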





That's it! When you access the above html in a browser, it should show the following screens, with proper navigation when you click on the hyperlinks.

Notice that the hrefs in the templates refer to relative URLs, with values like "#page1", "#page2" and "#page3". These are the same as the values we specified in the mappings in our router definition.







Now that we are comfortable with navigating between views using backbone, let's turn our attention to the model and collection features provided by backbone.

Let's jump right in by defining a model and a collection, and try to use them in our view.


 MyModel = Backbone.Model.extend({
  defaults : {
   title : 'Default Title',
   status : 'Initiated at '+new Date(),
   attrib1 : 'Value1',
   attrib2 : 'Value2',
   attrib3 : 'Value3'
  }
});

 MyCollection = Backbone.Collection.extend({
  model : MyModel
 });
 
 var myCollection = new MyCollection();
 myCollection.add(new MyModel({title:'Page1'}));
 myCollection.add(new MyModel({title:'Page2'}));
 myCollection.add(new MyModel({title:'Page3'}));
 myCollection.add(new MyModel());

The above code defines MyModel and MyCollection. We then instantiate a MyCollection object and add 4 MyModel objects, three of them with custom titles.

When the template is attached to the view, we additionally need to pass the collection as a template parameter. This can be done as shown in the code below:
  
var AppViewPage1 = Backbone.View.extend({
template : _.template($('#page1').html()),
   render : function() {
 $('#container').html(this.template({mydata:myCollection.toJSON()}));
   }
});



The template itself needs to be tweaked to iterate over the collection and display the passed data. This can easily be done as seen in the snippet below:
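
A sketch of the tweaked template, using underscore's <% %> and <%= %> template syntax; mydata is the parameter name we passed to the template above:

<script type="text/template" id="page1">
  <h2>This is Page 1</h2>
  <% _.each(mydata, function(item) { %>
    <div><%= item.title %> - <%= item.status %></div>
  <% }); %>
  <a href="#page2">Go to Page 2</a> | <a href="#page3">Go to Page 3</a>
</script>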


Apart from the above-mentioned features, the backbone model also provides the following important features:

Ability to listen to changes in the model
In the model's initialize function, we can listen for changes to specific model attributes, as follows:
initialize: function(){
            this.on("change:myattrib1", function(model){
                var newvalue = model.get("myattrib1");
                alert("Changed myattrib to " + newvalue );
            });
}


Interact transparently with a REST service backend
On the model, we can invoke stock framework methods like save( ), destroy( ) and fetch( ), which result in the HTTP methods POST/PUT, DELETE and GET respectively getting fired under the hood by the framework, against a URL specified in the model's attribute called 'urlRoot'.

Thus there is no need to explicitly use jquery ajax to make server side requests; the backbone.js model does it for you transparently.

    var UserModel = Backbone.Model.extend({
        urlRoot: '/user',
        defaults: {
            name: '',
            email: ''
        }
    });

    var user = new UserModel();
    // Notice that we haven't set an `id`
    var userDetails = {
        name: 'Ganesh Ghag',
        email: 'ganeshghag@gmail.com'
    };

    // Because we have not set an `id`, the save() call below will
    // issue a POST to /user with a payload of the userDetails data above.
    // The server should save the data and return a response containing the new `id`
    user.save(userDetails, {
        success: function (user) {
            alert(user.toJSON());
        }
   })



Support for model data validation
The backbone model provides a standard validate callback to check model data before submission to the server, and an error( ) callback is raised in case validation fails. This is depicted below:


// If you return a string from the validate function,
// Backbone will throw an error
validate: function( attributes ){
            if( attributes.age < 0 ){
                return "Negative values are not allowed";
            }
},
initialize: function(){
            this.bind("error", function(model, error){
                alert( error );
            });
}

So far we have been able to use backbone to display views, navigate between views, and bind and show model data and collections in the views.

Lastly, attaching events to elements in our backbone views is as easy as specifying the events attribute of Backbone.View, as follows:


SearchView = Backbone.View.extend({

        events: {
            "click input[type=button]": "doBtnClick"
        },
        doBtnClick: function( event ){
            alert( "from inside search button click" );
        }

});


With this, we have covered most of the basics of the backbone framework. You should now be ready to head towards more advanced tutorials on backbone.

Cheers!

Thursday, January 16, 2014

Easy Bootstrap 3 based responsive web design using drag drop

Designing responsive web sites and user interfaces, which adjust their layouts and sizes depending on the device on which the website is being viewed, is all the rage now.

Twitter bootstrap is the most popular framework for designing responsive websites. Bootstrap primarily uses CSS3, javascript, jquery and most importantly media queries to make a website responsive.

But having to add all the bootstrap css classes by hand is quite painful and makes developers who are not designers cringe. Just when HTML/CSS builder tools were becoming easy to use, along comes this little twist of having to use bootstrap responsive css classes all over the html markup.

Well, I just found this little website, http://www.layoutit.com, which allows developers to quickly build basic skeletons of bootstrap 3 based components using drag and drop, instead of hand coding.

I use it to quickly build a basic responsive page markup, then paste it in my IDE and fine-tune it from there on.

This has definitely taken the pain out of remembering and putting in bootstrap class names by hand.

Do check it out.

This is a good convenience till more sophisticated and free tools for building responsive UIs come up in the market.

Modern Web UI - Options

Deciding on a modern Web UI framework can be a daunting task, with hype and buzzwords like responsive design, single page apps, full stack frameworks and the like. It helps to get a bird's eye view of the various options at our disposal and then see which one is most appropriate for the requirements at hand.

The following tries to structure the options in a hierarchical tree. The idea is to use it as a decision tree: navigate down the tree, keeping your project requirements in mind.