Thursday, November 19, 2015

A gentle introduction to Java-based electronic trading using FIX

I recently had the chance to work on a Java-based web application which provides electronic trading functionality using FIX (Financial Information eXchange). I wanted to share some of my experiences in this post :-)

Let's start out with the basics, though...

Electronic trading, the big picture: where does FIX fit in?
The trade life cycle can be divided into the following phases:
  1. Pre-Trade - analytics and price discovery
  2. Trade - order creation and execution
  3. Post Trade - clearing, allocation and settlement
  4. Post Settlement - risk management, profit/loss accounting, position monitoring
FIX is mostly used in phases 1 and 2, whereas SWIFT is used in phase 3 and beyond.

FIX protocol features like Quote Request (RFQ) and Indication of Interest (IOI) are used for price discovery, whereas features like Deal-at-Quote, Market Order and Limit Order are used for order placement.

How FIX is relevant | Who uses FIX
FIX® has become the way the world trades. Virtually every major stock exchange and investment bank uses FIX for electronic trading, alongside the world's largest mutual funds, money managers and thousands of smaller investment firms. Leading futures exchanges offer FIX connections and major bond dealers either have or are implementing them. Identifying an exact number of users is impossible, as FIX is a free and open standard, but it is very clear that the world’s financial community now speaks FIX.

Enterprise apps - using the open source QuickFIX/J, the Java library for FIX
Most enterprise applications act as the buy side of the FIX protocol, i.e. they place trade requests with the sell side, which is provided by vendors like Winterflood that have connectivity to stock markets, traders, etc. The enterprise application's job is to send FIX messages to the vendor's FIX engine.

To interface with the FIX engine of a provider like Winterflood from Java code, there is an excellent open source framework called QuickFIX/J. It is a Java library which can be used as a JVM-embedded FIX engine. Detailed documentation can be found on their website.

In the following sections, the Java code I will be showcasing uses the QuickFIX/J APIs.
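
To give a flavour of what embedding the engine looks like, below is a minimal bootstrap sketch. The config file name fix.cfg is an assumed placeholder; a real session config carries your comp IDs, host/port, FIX version and store/log directories.

import java.io.FileInputStream;
import quickfix.*;

public class FixEngineBootstrap {
    public static void main(String[] args) throws Exception {
        // session settings (FIX version, comp IDs, host/port, heartbeat) come from
        // a config file; "fix.cfg" is just a placeholder name for this sketch
        SessionSettings settings = new SessionSettings(new FileInputStream("fix.cfg"));

        // minimal Application: QuickFIX/J calls these back on session/message events
        Application application = new Application() {
            public void onCreate(SessionID sessionId) {}
            public void onLogon(SessionID sessionId) { System.out.println("logged on: " + sessionId); }
            public void onLogout(SessionID sessionId) { System.out.println("logged out: " + sessionId); }
            public void toAdmin(Message message, SessionID sessionId) {}
            public void fromAdmin(Message message, SessionID sessionId) {}
            public void toApp(Message message, SessionID sessionId) {}
            public void fromApp(Message message, SessionID sessionId) {
                // asynchronous business messages (quotes, execution reports) land here
                System.out.println("received: " + message);
            }
        };

        // the initiator is the BUY side client which connects out to the vendor's FIX engine
        Initiator initiator = new SocketInitiator(application, new FileStoreFactory(settings),
                settings, new FileLogFactory(settings), new DefaultMessageFactory());
        initiator.start();  //logs on and starts the FIX session(s)
    }
}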

Intro to common FIX flows
First, your application needs to communicate with a vendor who provides electronic trading services. Such vendors run a FIX server with electronic trading capabilities and DMA (direct market access).

The whole idea is that our application sends a FIX message, asking for a trade to be executed, to the vendor's FIX engine. The vendor's service executes the trade and sends back a response to our application. The responses are always asynchronous and can arrive delayed or not at all.

Such trade instructions between our application and the vendor's FIX engine, using the FIX message format and protocol, are logically grouped into what are called FIX trade flows.

There are typically a huge number of trade flows, sub-flows, combinations and varied responses, but the trick is to narrow down the number of trade flows by pre-agreeing with the vendor which flows will be supported, and also to clamp down on the possible requests and responses.

This logical ring-fencing of the possible trade flows ensures you can provide robust, deterministic electronic trading functionality in your own application.

Quote with Deal At Quote Order
The most common trade interaction during market hours is the Quote and Deal functionality.

As per this trade flow, the user sends out a quote request for a ticker/symbol in the form of a FIX message of type R. Asynchronously, but typically within a few milliseconds, a Quote response is received by the application. This response contains the market price of the ticker/symbol as a quote. The price in the quote is applicable only for a few seconds, say 30 seconds.

Within those 30 seconds, the user can send another FIX message asking to book an order for the above-mentioned stock, for a certain number of shares at the price agreed upon in the quote.

A more interactive and verbose narrative follows, along with the sample FIX messages. Good luck deciphering them :-)
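
To ease the deciphering, here is a quick legend of the standard FIX 4.2 tags that recur in the sample messages below:

  • 8=BeginString, 9=BodyLength, 10=CheckSum - message envelope
  • 35=MsgType (R=Quote Request, S=Quote, D=New Order Single, 8=Execution Report, b=Quote Acknowledgement)
  • 49=SenderCompID, 56=TargetCompID, 34=MsgSeqNum, 52=SendingTime - session-level fields
  • 55=Symbol, 48=SecurityID (here the ISIN), 22=IDSource (4=ISIN), 207=SecurityExchange - instrument identification
  • 54=Side (1=buy, 2=sell), 38=OrderQty, 40=OrdType (1=market, 2=limit, D=previously quoted), 44=Price
  • 131=QuoteReqID, 117=QuoteID, 11=ClOrdID, 37=OrderID - identifiers that tie a flow together
  • 39=OrdStatus and 150=ExecType (0=new, 2=filled) - order state on execution reports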

Quote and Deal
Quote (R) - give me a quote for the stock with symbol VOD
<20150923-07:47:20, FIX.4.2:APP->VEND, outgoing> (8=FIX.4.2 9=194 35=R 34=2 49=APP 50=D2C 52=20150923-07:47:20.027 56=VEND 115=12345 131=1442994439993 146=1 55=VOD.L 48=GB00BH4HKS39 22=4 207=XLON 54=2 38=10 64=20150925 60=20150923-07:47:20.026 15=GBP 120=GBP 10=009 )

Quote Response (S) - unit market price for VOD is 2.1325 pounds, valid for the next 30 seconds
<20150923-07:47:20, FIX.4.2:APP->VEND, incoming> (8=FIX.4.2 9=245 35=S 49=VEND 56=APP 34=2 52=20150923-07:47:20 131=1442994439993 117=X90FZI23H4719001 55=VOD.L 22=4 48=GB00BH4HKS39 167=CS 207=XLON 132=2.1325 134=3750 62=20150923-07:47:50 64=20150925 15=GBP 120=GBP 152=21.325 76=VENDOR 60=20150923-07:47:20 10=088 )

Deal At Quote (D) - book an order for VOD @ 2.1325 pounds for 10 shares
<20150923-07:47:20, FIX.4.2:APP->VEND, outgoing> (8=FIX.4.2 9=238 35=D 34=3 49=APP 50=D2C 52=20150923-07:47:20.575 56=VEND 115=12345 11=1442994440565 15=GBP 21=1 22=4 38=10 40=D 44=2.1325 48=GB00BH4HKS39 54=2 55=VOD.L 59=4 60=20150923-07:47:20.575 63=3 64=20150925 117=X90FZI23H4719001 120=GBP 207=XLON 10=153 )

Execution Report (8) for VOD @ 2.1325 pounds for 10 shares with order status=FILLED
<20150923-07:47:20, FIX.4.2:APP->VEND, incoming> (8=FIX.4.2 9=359 35=8 49=VEND 56=APP 34=3 52=20150923-07:47:20 57=D2C 128=12345 1=WINTERG0 6=2.1325 11=1442994440565 14=10 15=GBP 17=b90FDI23H4720002 20=0 22=4 29=4 30=XLON 31=2.1325 32=10 37=b90FDI23H4720002 38=10 39=2 40=D 44=2.1325 48=GB00BH4HKS39 54=2 55=VOD.L 59=4 60=20150923-07:47:20 63=6 64=20150925 76=VENDOR 110=0 119=21.33 120=GBP 150=2 151=0 167=CS 207=XLON 10=244 )

Other possible order statuses are: CANCELLED, REJECTED, PARTIALLY_FILLED, etc.

A quote can also be rejected, in which case a FIX message of type b (Quote Acknowledgement) is received by the application.
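
On the receiving side, the asynchronous Quote and Execution Report messages can be dispatched to typed handlers using QuickFIX/J's MessageCracker. Below is a minimal sketch of that approach; the class name TradeFlowHandler and the handle method are illustrative, only the QuickFIX/J API calls are real:

import quickfix.*;
import quickfix.fix42.ExecutionReport;
import quickfix.fix42.MessageCracker;
import quickfix.fix42.Quote;

//extends the FIX 4.2 MessageCracker so incoming messages get routed to the
//strongly typed onMessage(...) overloads below
public class TradeFlowHandler extends MessageCracker {

    //call this from your Application.fromApp(...) callback
    public void handle(Message message, SessionID sessionId)
            throws FieldNotFound, UnsupportedMessageType, IncorrectTagValue {
        crack(message, sessionId);
    }

    public void onMessage(Quote quote, SessionID sessionId) throws FieldNotFound {
        //quote response (35=S): the price is valid only until ValidUntilTime (tag 62)
        System.out.println("quote " + quote.getQuoteID().getValue()
                + " price=" + quote.getBidPx().getValue());
    }

    public void onMessage(ExecutionReport report, SessionID sessionId) throws FieldNotFound {
        //execution report (35=8): check OrdStatus (tag 39) for NEW/FILLED/REJECTED etc.
        System.out.println("order " + report.getClOrdID().getValue()
                + " status=" + report.getOrdStatus().getValue());
    }
}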

Market Order
A market order is an order request which asks to buy/sell a certain number of shares of a symbol at the best price available at the time of the request.

Market Order Request - 35=D order, 40=1 market, 54=2 sell, 55=VOD symbol, 38=110 number of shares
<20150923-08:21:39, FIX.4.2:APP->VEND, outgoing> (8=FIX.4.2 9=234 35=D 34=2 49=APP 50=D2C 52=20150923-08:21:39.383 56=VEND 115=12345 11=1442996499366 15=GBP 21=3 22=4 38=110 40=1 48=GB00BH4HKS39 54=2 55=VOD.L 59=6 60=20150923-08:21:39.380 63=3 64=20150925 120=GBP 126=20150925-08:21:39.381 207=XLON 10=110 )

Execution Report (8) with order status=FILLED
<20150923-08:21:39, FIX.4.2:APP->VEND, incoming> (8=FIX.4.2 9=357 35=8 49=VEND 56=APP 34=2 52=20150923-08:21:39 57=D2C 128=12345 1=WINTERG0 6=2.1325 11=1442996499366 14=110 15=GBP 17=0D0FDI23I2139010 20=0 22=4 29=4 30=XLON 31=2.1325 32=110 37=0D0FDI23I2139010 38=110 39=2 40=1 44=2.1325 48=GB00BH4HKS39 54=2 55=VOD.L 59=3 60=20150923-08:21:39 63=6 64=20150925 76=VENDOR 119=234.58 120=GBP 150=2 151=0 167=CS 207=XLON 10=120 )


Limit Order
A limit order is an order request which asks to buy/sell a certain number of shares of a symbol when the stock price of the symbol reaches a specified limit price.

Limit Order Request - 35=D order, 40=2 limit, 44=7.885 limit price, 54=1 buy, 55=STAN symbol, 38=110 number of shares
<20151009-09:05:22, FIX.4.2:APP->VEND, outgoing> (8=FIX.4.2 9=206 35=D 34=229 49=APP 50=D2C 52=20151009-09:05:22.933 56=VEND 115=12345 11=1444381522933 15=GBP 21=2 22=4 38=110 40=2 44=7.885 48=GB0004082847 54=1 55=STAN 59=0 60=20151009-09:05:22.933 63=3 120=GBP 207=XLON 10=112 )

Execution Report (8) with order status=NEW (early ACK for the limit order)
Limit orders are filled only when the limit price is reached
<20151009-09:05:23, FIX.4.2:APP->VEND, incoming> (8=FIX.4.2 9=296 35=8 49=VEND 56=APP 34=229 52=20151009-09:05:23 57=D2C 128=12345 6=0 11=1444381522933 14=0 15=GBP 17=y6u2743950 20=0 21=2 22=4 29=1 31=0 32=0 37=y6u2743950 38=110 39=0 40=2 44=7.885 48=GB0004082847 54=1 55=STAN 59=0 60=20151009-09:05:23 63=3 120=GBP 150=0 151=110 167=CS 198=y6u2743950 207=XLON 10=065 )

When the limit price is reached, an Execution Report (8) with order status=FILLED arrives
<20151009-09:22:41, FIX.4.2:APP->VEND, incoming> (8=FIX.4.2 9=342 35=8 49=VEND 56=APP 34=268 52=20151009-09:22:41 57=D2C 128=12345 6=7.4 11=1444381522933 14=110 15=GBP 17=1040601735 20=0 21=2 22=4 29=4 30=XLON 31=7.4 32=110 37=y6u2743950 38=110 39=2 40=2 44=7.4 48=GB0004082847 54=1 55=STAN 59=0 60=20151009-09:22:41 63=3 64=20151013 76=VENDOR 119=814 120=GBP 150=2 151=0 152=814 155=0 167=CS 207=XLON 10=208 )



The Java Code (finally!)

Quote request and deal-at-quote
Writing Java code to submit a quote request is as easy as seen below:

quickfix.fix42.QuoteRequest quote = new quickfix.fix42.QuoteRequest(new QuoteReqID(quoteReqId));

//optionally set header fields that your trade provider may require
quote.getHeader().setField(new OnBehalfOfCompID(ONBEHALF_COMP_ID));
quote.getHeader().setField(new SenderSubID(SENDER_SUB_ID));

//create a new 'repeating group' as per the FIX spec
quickfix.fix42.QuoteRequest.NoRelatedSym group = new quickfix.fix42.QuoteRequest.NoRelatedSym();

group.set(new Symbol("VOD"));
group.set(new SecurityID("GB00BH4HKS39"));  //value of ISIN: GB00BH4HKS39
group.set(new IDSource("4"));  //4=ISIN, 2=SEDOL
group.set(new SecurityExchange("XLON"));  //stock exchange ID
group.set(new Side(Side.BUY));  //this could just as well be Side.SELL

group.set(new OrderQty(10));  //buy 10 shares of ticker VOD
//group.setField(new CashOrderQty(4000.00));  //or buy shares worth 4000 instead of 10 shares

group.setField(new SettlCurrency("GBP"));
group.set(new Currency("GBP"));
group.set(new TransactTime(new Date()));

quote.addGroup(group);

System.out.println("getQuoteForVEND FIX42 message " + quote);
Session.sendToTarget(quote, sessionId);

Deal@Quote
Writing Java code to submit a Deal@Quote is as easy as seen below:

NewOrderSingle order = new NewOrderSingle(new ClOrdID(orderId),
new HandlInst(HandlInst.AUTOMATED_EXECUTION_ORDER_PUBLIC),
new Symbol("VOD"),
new Side(Side.SELL), new TransactTime(new Date()), new OrdType(OrdType.PREVIOUSLY_QUOTED));

//optionally set header fields that your trade provider requires
order.getHeader().setField(new OnBehalfOfCompID(ONBEHALF_COMP_ID));
order.getHeader().setField(new SenderSubID(SENDER_SUB_ID));

order.set(new SecurityID("GB00BH4HKS39"));   //ISIN for ticker VOD
order.set(new IDSource("4"));  //4=ISIN, 2=SEDOL
order.set(new SettlmntTyp(SettlmntTyp.T_PLUS_2));

order.set(new SecurityExchange("XLON"));  //stock exchange ID for London

order.set(new QuoteID(origQuoteId));  //ID of the original quote
order.set(new Price(request.getPrice()));  //price agreed in the previous quote

//order.setField(new OrderQty(10));  //buy 10 shares of ticker VOD
order.set(new CashOrderQty(4000.00));  //or buy shares worth 4000 instead of 10 shares

order.set(new TimeInForce(TimeInForce.DAY));  //can be other values like GOOD_TILL_CANCEL, FILL_OR_KILL
order.set(new SettlCurrency("GBP"));
order.set(new Currency("GBP"));

System.out.println("sending out DealAtQuote FIX42 message " + order);
Session.sendToTarget(order, sessionId);


Market Order
Writing Java code to send a market order is as easy as seen below:

NewOrderSingle order = new NewOrderSingle(new ClOrdID(orderId),
new HandlInst(HandlInst.AUTOMATED_EXECUTION_ORDER_PUBLIC),
new Symbol("VOD"),
new Side(Side.SELL), new TransactTime(new Date()), new OrdType(OrdType.MARKET));

//optionally set header fields that your trade provider requires
order.getHeader().setField(new OnBehalfOfCompID(ONBEHALF_COMP_ID));
order.getHeader().setField(new SenderSubID(SENDER_SUB_ID));

order.set(new SecurityID("GB00BH4HKS39"));   //ISIN for ticker VOD
order.set(new IDSource("4"));  //4=ISIN, 2=SEDOL
order.set(new SettlmntTyp(SettlmntTyp.T_PLUS_2));

order.set(new SecurityExchange("XLON"));  //stock exchange ID for London

order.set(new OrderQty(10));  //buy 10 shares of ticker VOD
//order.set(new CashOrderQty(4000.00));  //or buy shares worth 4000 instead of 10 shares

order.set(new TimeInForce(TimeInForce.DAY));  //can be other values like GOOD_TILL_CANCEL, FILL_OR_KILL
order.set(new SettlCurrency("GBP"));
order.set(new Currency("GBP"));

System.out.println("sending out MarketOrderForVEND FIX42 message " + order);
Session.sendToTarget(order, sessionId);


Limit Order
Writing Java code to send a limit order is as easy as seen below.
Besides using OrdType.LIMIT, the only other change from the above code is that you must also set the limit price (tag 44), as in the sample message earlier.

NewOrderSingle order = new NewOrderSingle(new ClOrdID(orderId),
new HandlInst(HandlInst.AUTOMATED_EXECUTION_ORDER_PUBLIC),
new Symbol("VOD"),
new Side(Side.SELL), new TransactTime(new Date()), new OrdType(OrdType.LIMIT));

order.set(new Price(7.885));  //the limit price at which the order should execute



Using the above concepts, frameworks and code should give you some idea of the electronic trading functionality that can be integrated into your Java-based enterprise application.

Cheers!

Friday, August 28, 2015

REST APIs for Simpler Payment Gateway Integration into your web and mobile apps

I recently had the chance to integrate the web application I am currently working on with a popular UK-based payment gateway for credit and debit cards, named Worldpay Online. The integration was surprisingly easy and the details are below.

Our requirement was that our end users should be able to pay for our application services using their debit/credit cards. I embarked on the payment gateway integration journey initially thinking of traditional ways of integrating. We did not wish to incur the additional overhead of PCI DSS compliance, which applies if you choose to store any card details on your site. Hence, I started by considering more traditional approaches, like redirecting payment requests from our site to "payment pages" hosted by the gateway.

These typically give the end user the ability to choose their payment method, input their card details and ask for their card to be charged the relevant amount.

Worldpay, as our preferred payment gateway provider, also had these options under the integration type "HTML Redirect". However, the main drawbacks of this kind of "payment page" based integration were as follows:

  1. No control and limited customisation of the payment pages
  2. Lack of a strong application context while the user was on the payment pages
  3. The mechanism by which the gateway provider lets us know that the card was charged successfully made the whole process asynchronous, as it was a post-back to our application servers
Well, it turns out there is a much simpler approach available in most modern payment gateways: using the payment gateway's REST APIs. I used the Worldpay Online REST APIs to quickly integrate the payment gateway into our application. Here I will talk a bit about my experience using the Worldpay Online payment APIs.

Though we use the payment REST APIs, we still do not need PCI DSS compliance. The reason is that we never capture or store card details in our application. The entire process is broken down into 2 steps:
Step 1: Our application displays, inline or as a pop-up, a Worldpay Online screen. This Worldpay screen captures the card details and gives back a temporary token to the application.

Step 2: The application uses this temporary token in the actual call to the REST payment API. The token acts as a proxy for the card details like card number, expiration date, CVV etc. The application never knows the card details; they are known only to Worldpay Online!



The best part is that:

  • Except for the small portion of the page which captures card details, the rest of the webpage is under the application's control and hence customizable
  • The actual REST API call is synchronous, so error handling effort is hugely reduced



The actual payment REST call is as simple as shown below

Example Request

curl https://api.worldpay.com/v1/orders \
    -H "Authorization:your-service-key" \
    -H "Content-type: application/json" \
    -X POST \
    -d \
'{
    "token" : "your-token",
    "orderDescription" : "order-description",
    "amount" : 500,
    "currencyCode" : "GBP"
}'

Example Response

{
    "orderCode": "worldpay-order-code",
    "token" : "your-token",
    "orderDescription" : "your-order-description",
    "amount" : 500,
    "currencyCode" : "GBP",
    "customerOrderCode":"my-customer-order-code",
    "paymentStatus": "SUCCESS",
    "paymentResponse": {
        "type": "ObfuscatedCard",
        "name": "name",
        "expiryMonth": 2,
        "expiryYear": 2015,
        "cardType": "VISA_CREDIT",
        "maskedCardNumber": "**** **** **** 1111",
        "billingAddress": {
            "address1": "18 Linver Road", 
            "postalCode": "SW6 3RB", 
            "city": "London", 
            "countryCode": "GB"
        },
        "cardSchemeType": "consumer",
        "cardSchemeName": "VISA CREDIT",
        "cardIssuer": "LLOYDS BANK PLC",
        "countryCode": "GB",
        "cardClass": "credit",
        "cardProductTypeDescNonContactless": "unknown",
        "cardProductTypeDescContactless": "unknown",
        "prepaid": "false"
    },
    "shopperEmailAddress": "name@domain.co.uk",
    "environment": "TEST",
    "riskScore": : {
        "value": "1" 
    }
}
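
Since our application is Java-based, here is what the same call might look like from plain Java using HttpURLConnection; a minimal sketch, assuming placeholder service key and token values, with error handling elided:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class WorldpayOrderClient {
    public static void main(String[] args) throws Exception {
        //placeholders: use your real service key and the token obtained in step 1
        String serviceKey = "your-service-key";
        String json = "{ \"token\": \"your-token\", \"orderDescription\": \"order-description\","
                + " \"amount\": 500, \"currencyCode\": \"GBP\" }";

        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://api.worldpay.com/v1/orders").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", serviceKey);
        conn.setRequestProperty("Content-type", "application/json");
        conn.setDoOutput(true);

        //write the JSON order body
        try (OutputStream out = conn.getOutputStream()) {
            out.write(json.getBytes("UTF-8"));
        }

        //the call is synchronous: the payment status is right there in the response
        System.out.println("HTTP " + conn.getResponseCode());
        try (Scanner scanner = new Scanner(conn.getInputStream(), "UTF-8")) {
            System.out.println(scanner.useDelimiter("\\A").next());
        }
    }
}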


For more information, please visit the Worldpay Online documentation.

Worldpay as a gateway provides various features (many of which I implemented successfully) like:

  • Recurring Payments
  • Card-on-File (for re-using previously entered card information)
  • 3d-Secure integration
  • Refunds
  • Dispute Defence
  • Telephone orders
  • .. and so on 

Happy integration, people :-)

Wednesday, April 8, 2015

Databases for HA and DR

During a recent participation in a bid, responding to an RFP, I had to wear my ops hat, and it was a refreshing change from my dev endeavours of the recent past.

One of the things I was required to do was to come up with a solution that includes HA and DR. When considering the database options for HA and DR, I had to brush up on my basics of database replication and its options, and also discuss and validate them with other senior tech architects who specialize in ops. Following is a brief of the learnings that came out of the experience.

For the basics of replication, the Wikipedia article on replication is very good. But first, some background.

Background: why is database synchronization required?
Scalability in the middle tier (for example, app servers) is easy to achieve through horizontal scaling; all you need is multiple app servers, all symmetrical, fronted by a load balancer. But in the database tier, achieving scalability is a lot tougher. Especially for data-centric applications like OLTP, the database is the single most stressed tier. Though scale-up is an option, it does not provide a good high availability option. Hence we have to think of 2 or more copies of the database, to provide scalability as well as high availability.

Multiple databases can be a tricky option. We can have a cluster of databases, all symmetric masters, fronted by a load balancer, but synchronizing data across the databases is a huge overhead and has potential for failures, such as deadlocks during synchronization.

But to consider other options, such as a single master with multiple read-only slaves, one must appreciate that not all data access is equal. If in our application we can visualize data access as being divided into read-only and read-write, we may find that only a small percentage of the application's data access, say 30%, is read-write. In such cases, we can resort to a single master database which caters to read-write access and multiple slave databases which cater to read-only access.

In any case, having multiple copies of the database and synchronizing data across them is something you will inevitably have to deal with, whether for database load balancing, high availability (HA) or, more remotely, for disaster recovery (DR).

So what are the options for database synchronization?
There are 3 broad options for replication:

  • Storage based replication
  • File based replication
  • Database replication: this is usually supported natively by the specific database


Storage based replication
Active (real-time) storage replication is usually implemented by distributing updates of a block device to several physical hard disks. This way, any file system supported by the operating system can be replicated without modification, as the file system code works on a level above the block device driver layer. It is implemented either in hardware (in a disk array controller) or in software (in a device driver).

When storage replication is done across locally connected disks, it is known as disk mirroring. Replication is extendable across a computer network, so the disks can be located in physically distant locations. For replication, latency is the key factor, because it determines either how far apart the sites can be or the type of replication that can be employed.

The basic options here are synchronous replication, which guarantees no data loss at the expense of reduced performance, and asynchronous replication, wherein the remote storage is updated asynchronously and hence zero data loss cannot be guaranteed.


File based replication
File-based replication replicates files at a logical level rather than at the storage block level. There are many different ways of performing this. Unlike storage-level replication, these solutions rely almost exclusively on software.

File-level replication solutions yield a few benefits. Firstly, because data is captured at the file level, the solution can make an informed decision on whether to replicate, based on the location and type of the file. Hence, unlike block-level storage replication, where a whole volume needs to be replicated, file replication products can exclude temporary files or parts of a filesystem that hold no business value. This can substantially reduce the amount of data sent from the source machine as well as decrease the storage burden on the destination machine.

On the negative side, as this is a software-only solution, it requires implementation and maintenance at the operating system level, and it uses some of the machine's processing power (CPU).

File-based replication can be done by a kernel driver that intercepts calls to the filesystem functions, by filesystem journal replication, or by batch replication, wherein the source and destination file systems are monitored for changes.

Database replication
Database replication can be used on many database management systems, usually with a master/slave relationship between the original and the copies. The master logs the updates, which then ripple through to the slaves. The slave acknowledges the updates, stating that it has received them successfully, thus allowing the master to send (and potentially re-send until successfully applied) subsequent updates.

Database replication becomes difficult when it scales up to support either a larger number of databases or increasing distances and latencies between remote slaves.

Some common aspects to consider while choosing database replication:
  • Do you wish to cater to HA only or you want load balancing as well?
  • What is the latency for replication? Are the servers participating in replication local or remote (another site)?
  • Are you ready to contend with multiple masters or is single master good enough?
  • Can you split your database access into read-write and read-only queries?

The actual process of database replication usually involves shipping database transaction logs from master to slave. This shipping can be done as a file-based transfer of logs, or the logs can be streamed from master to slaves. Databases can support "specialized transaction logs" which exist solely for replication purposes and are hence optimized.

If you wish only for HA or redundancy, then you can have slave(s) in warm standby mode. In this case, the master sends all updates to the slave, but the slave is not used even for read-only access; it is only an up-to-date standby. For HA, shared storage between an active master and an inactive standby can also be resorted to, but the shared storage becomes the SPOF (single point of failure).

If you intend to have a single master and multiple read-only slaves, your application must be smart enough to split database access into read-write connections and read-only connections. Some databases like PostgreSQL have third-party middleware libraries which bifurcate the client's database access, sending read-write requests to the master and balancing read-only requests across multiple slaves.
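
On the application side, one common way to do this split in Java is a routing DataSource. Below is a minimal sketch using Spring's AbstractRoutingDataSource; the "master"/"replica" key names and the create(...) wiring helper are illustrative assumptions, not a prescribed API:

import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

//routes each connection request to the master or a read-only replica,
//based on a per-thread flag set by the calling code
public class ReadWriteRoutingDataSource extends AbstractRoutingDataSource {

    private static final ThreadLocal<String> CURRENT =
            ThreadLocal.withInitial(() -> "master");

    public static void useReadOnly()  { CURRENT.set("replica"); }
    public static void useReadWrite() { CURRENT.set("master"); }

    @Override
    protected Object determineCurrentLookupKey() {
        return CURRENT.get();  //key into the targetDataSources map wired below
    }

    //wiring sketch: pass in the real master and replica connection pools
    public static DataSource create(DataSource master, DataSource replica) {
        ReadWriteRoutingDataSource ds = new ReadWriteRoutingDataSource();
        Map<Object, Object> targets = new HashMap<Object, Object>();
        targets.put("master", master);    //read-write traffic
        targets.put("replica", replica);  //read-only traffic
        ds.setTargetDataSources(targets);
        ds.setDefaultTargetDataSource(master);
        ds.afterPropertiesSet();  //initializes the resolved datasource map
        return ds;
    }
}

A service method would then call ReadWriteRoutingDataSource.useReadOnly() before running its queries (and switch back afterwards), so read-only traffic lands on a slave while writes keep going to the master.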

If your application needs to load-balance even the read-write requests, then you might think of going in for multiple masters. Many databases refer to this as database clustering or multi-master support. Here, read-write commands are sent to one of the masters, and the masters are synchronized through mechanisms like 2-phase commit. Multi-master replication has challenges such as reduced performance and an increased probability of transactional conflicts, and related issues like deadlocks.

In addition, databases can support features like the ability to load-balance read-only queries by allowing parallel execution of a single query across multiple read-only slaves.

Synchronous replication implies that synchronization across slaves is guaranteed, at the cost of commit performance on the master.

For DR, remoteness introduces high latency; hence log shipping or async streaming of logs to a remote warm standby is acceptable.


Wednesday, March 18, 2015

Agile - Tips From Live Projects

I recently had a chance to learn about a live agile project as part of an agile training programme.
Below are some of the tips I picked up while interacting with the key members of the team.

The Big Picture
The overall release is divided into several phases, as mentioned below.
Notice that SIT and UAT are run independently, towards the end of the release.
This is time consuming but does reduce the risk of bugs in the release.
This is more relevant for software which needs extremely thorough testing.

  1. Pre foundation Phase (1 week) 
  2. Foundation Phase (2 weeks) 
  3. Sprint 1 (2 weeks) 
  4. Sprint 2 
  5. Sprint 3 
  6. Sprint 4 
  7. Sprint 5 Final / Hardening Sprint
  8. SIT Test (1 month) 
  9. UAT Test (1 month) 
  10. Release 


During the foundation phase, high-level estimation is done using a technique called T-shirt sizing
(S, M, L, XL, XXL, XXXL). This helps in deciding the scope of the sprints.

Plan for sprints to progressively gain velocity.

Balancing RnD and delivery
Instead of running 2-week spikes like sprints, only spike tasks are run, to balance RnD efforts with delivering usable functionality.

1 Story = Spike Task1 + Task2 + Task3....

A sprint includes a sensible mix of technical stories and functional stories.
Stubs are used for technical components planned for future technical stories.


Estimation 
Stories are estimated in story points using estimation poker.
Tasks are always estimated and tracked in hours spent.
There is provision in the tool to estimate task hours and also record actual hours.
Over several successive sprints, a good average estimate of 1 story point = xyz hours crystallizes.


Sample Story Status: 
New
Analyzed
Approved
Dev WIP
Dev Blocked
Dev Done
Test WIP
Test Blocked
Test Done

Dev Done includes code review
Test Done includes a show-n-tell, given by the tester to convince the BA



Very succinct definition of done

Sample DoD for dev complete:

  • Impact Analysis document done
  • Code complete
  • Code reviewed
  • NFR, performance, security tests pass
  • Regression tests pass
  • All known bugs raised by test team resolved

About Code and CI

Release-wise branch in SVN (version control)
No branches per sprint
No parallel sprints
SVN checkin format:
<Release>_<Module>_<Story>_<Task/Defect>: Description
Automated regression testing in place


Other Takeaways: 
1.
Very detailed acceptance criteria: yes/no questions, fill-in-the-blanks, unambiguous answers.
Is the xxx panel visible, as seen in the logical diagram? Yes/No
Does the table/grid contain 13 rows? Yes/No
The quality of a story is determined by how detailed its acceptance criteria are.

 2.
For a story status like "Test Blocked": if the tester cannot test a story, they call up the dev immediately or write an email.
All blockers get mentioned in the standup.

 3.
The testing team is always lightly loaded at the start of a sprint and overloaded towards its end. To reduce this pain point:
Have stories at a very fine-grained level, e.g. each story should add up to only a few hours of dev tasks.
This way, dev always has something testable for the tester throughout the sprint, and idle time for testers (waiting for testable functionality) is reduced.


Friday, February 6, 2015

Rapid development of REST webservices with persistence

Development of modern business applications has to become faster and easier if we are to meet business delivery demands spanning ever shorter cycles. Also, we are now required to deliver software continuously and incrementally, in an agile manner.

Jump-starting the development process with the right tools and technologies is the need of the hour.

Rapid development using currently popular technology has always been in vogue. Right now, exposing middle-tier services as REST APIs is all the rage. So how do we develop REST services with persistence, in a rapid manner, with the least effort?

I had written an earlier post on Grails 2.3 providing out-of-the-box support for REST CRUD by writing only a simple Java bean (DTO), using the Grails framework. But that would mean adopting an entirely new framework like Grails only to exploit its REST CRUD functionality. The engineer in me thought that was an unoptimized use of Grails. Why shouldn't we have this out-of-the-box REST CRUD using Spring and Hibernate, the technologies we already know best :-)

Well, more recently, advancements in the Spring Data JPA framework make it very easy to develop CRUD REST services out of the box, writing only the entity classes with JPA annotations.
Well, here is how...

Effectively, we will see how to write only a simple Java bean annotated with JPA annotations for persistence, and then, using Spring Data JPA and Spring Boot, automatically generate REST CRUD services for our bean/entity. All this using very little code :-)



Create a typical Maven Java project (not a web project),
e.g. using maven-archetype-quickstart.

We will use Spring Boot based Spring Data JPA and Spring Data REST.

Use a pom.xml similar to the one below:

https://github.com/ganeshghag/RestSpringDataJpa/blob/master/pom.xml



As per Spring Data JPA, create a persistent entity class, say Person, as a POJO and annotate it with JPA annotations:

package com.ghag.rnd.rest.domain;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Person {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private long id;

    private String firstName;
    private String lastName;
    private String email;
    private String address;
    private String mobile;
    private Integer employeeId;

    //getters and setters go here
}

Again, as per the Spring Data JPA framework, create an interface which tells the framework to create Hibernate JPA based CRUD functionality for the entity class Person defined above:

package com.ghag.rnd.rest.repository;

import java.util.List;
import org.springframework.data.repository.PagingAndSortingRepository;
import org.springframework.data.repository.query.Param;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;
import com.ghag.rnd.rest.domain.Person;


@RepositoryRestResource(collectionResourceRel = "people", path = "people")
public interface PersonRepository extends PagingAndSortingRepository<Person, Long> {

    List<Person> findByLastName(@Param("name") String name);
}

The methods in the interface, like findByLastName, impart filtering capability on the entity queries, by entity attributes like lastName.


The annotation below
@RepositoryRestResource(collectionResourceRel = "people", path = "people")
ensures that the CRUD REST services for the Person entity are auto-magically generated for us. This is the main feature I wanted to publicize in this article.
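
Assuming the default Spring Data REST conventions, the endpoints generated for this repository include:

  • GET /people - a paged, sortable list (courtesy of PagingAndSortingRepository)
  • POST /people - create a new person
  • GET, PUT, PATCH, DELETE /people/{id} - read, update and delete a single person
  • GET /people/search/findByLastName?name=... - the derived query declared in the interface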


That's it. Now, using Spring Boot and its sensible default configuration, you can just write a class similar to the one below and you should be able to start up the application with an embedded Tomcat server.

package com.ghag.rnd.rest;


import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.data.rest.webmvc.config.RepositoryRestMvcConfiguration;

@Configuration
@EnableJpaRepositories
@Import(RepositoryRestMvcConfiguration.class)
@EnableAutoConfiguration
@ComponentScan  //important and required
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}


The application will expose the CRUD REST services for the persistent entity Person at
http://server:port/people

You can easily test the services using curl commands like the following:
curl -i -X POST -H "Content-Type:application/json" -d @insert.cmd http://localhost:8080/people
where the file insert.cmd contains the JSON for a Person object, like this:
{
"firstName" : "Ganesh",
"lastName" : "Ghag",
"email":"some@some.com",
"address":"flower valley, thane",
"mobile":"3662626262",
"employeeId":8373
}
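
You can then list the stored people back (Spring Data REST returns HAL-style JSON by default):

curl http://localhost:8080/people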

In order to customize the application, writing custom REST controllers is as easy as writing the POJO below:

package com.ghag.rnd.rest.controllers;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import com.ghag.rnd.rest.repository.PersonRepository;


@RestController
@RequestMapping("/custom")
public class MyCustomController {

    @Autowired
    PersonRepository personRepository;

    @RequestMapping(value="/sayHello/{input}", method=RequestMethod.GET)
    public String sayHello(@PathVariable String input){

        System.out.println("findall=" + personRepository.findAll());
        personRepository.deleteAll();
        return "Hello!" + input;
    }
}


To further enhance the CRUD application, we can easily
  • use the above application as a microservice, using Spring Boot as-is
  • export the above application as a war and deploy it on a traditional app server
  • use various kinds of logging for debugging
  • use Spring based validations
  • use Spring Security
  • switch the CRUD database from the current default H2 to one of our choice, using the Spring Boot config file application.properties


Please refer to the Spring Boot and Spring Data documentation for the above-mentioned enhancements:
http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/
http://docs.spring.io/spring-data/jpa/docs/current/reference/html/


By using Hibernate Tools reverse engineering, given an existing database, auto-generating JPA-annotated entities is a no-brainer, and with the above knowledge of generating REST CRUD services, it means you can expose a database as REST services with less than a day's work.