Monday, December 7, 2015

NoSQL with RDBMS fallback:

NoSQL adoption is becoming prominent across critical applications, as teams look to reap the benefits of performance, fault tolerance, and high availability for large data volumes. One risk architects weigh while migrating to NoSQL is: what happens if the application runs into unforeseen issues that take a long time to fix? NoSQL adoption is not yet battle tested across every domain and sector, so how do we design a fallback strategy?

A few concerns people raise while migrating to NoSQL:

  • What will happen if we hit unexpected errors in production and they take a long time to fix?
  • What if the product vendors themselves have not faced such scenarios?
  • What if reporting or other dependent systems are tightly integrated with the RDBMS and are difficult to migrate away from it in the current phase?
  • Architects would therefore like to keep the RDBMS as a fallback, so that the application can switch back to it on unrecoverable NoSQL issues. This raises a few questions on how to design such a setup:

  • How do I sync data to both NoSQL and the RDBMS at high volume without losing the order of updates?
  • How do I sync without adding much overhead to the application? Synchronous writes to both NoSQL and the RDBMS would be too expensive.
  • How do I reliably propagate the data between the two systems without any loss?
  • What if the RDBMS goes down? How do I design the sync to recover reliably from such failures?

The design depicted below addresses these concerns.

The key components in the design are Apache Kafka, which receives the updates, and Apache Storm, which processes the data and applies it to the RDBMS. Both systems are built for big data workloads and operate in a reliable, distributed manner.

Apache Kafka is a high-performance message queuing system. The application posts the change messages (insert / update / delete) to a Kafka topic. To allow parallel processing, the topic can be partitioned by table, region, or whatever logical grouping matches the NoSQL data model; a publisher sketch is shown below.
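To make the publishing step concrete, here is a minimal Java producer sketch. The topic name rdbms-sync, the JSON string payload, and the table:primaryKey keying scheme are illustrative assumptions, not part of the original design; keying by row keeps all changes for one row in a single partition, which preserves their order.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Publishes insert/update/delete change events to Kafka for the RDBMS sync pipeline.
public class ChangePublisher {

    private final Producer<String, String> producer;
    private final String topic;

    public ChangePublisher(String brokers, String topic) {
        Properties props = new Properties();
        props.put("bootstrap.servers", brokers);
        props.put("acks", "all"); // wait for the full commit so no change event is lost
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<String, String>(props);
        this.topic = topic;
    }

    // Keying by table + primary key sends all changes for one row to the same
    // partition, so their relative order is preserved.
    public void publish(String table, String primaryKey, String changeJson) {
        producer.send(new ProducerRecord<String, String>(topic, table + ":" + primaryKey, changeJson));
    }

    public void close() {
        producer.close();
    }
}

The application would call publish() right after (or as part of) its NoSQL write, for example publisher.publish("orders", "42", orderJson).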

Apache Storm is a real-time processing engine that consumes the messages through a spout, performs any required processing in bolts, and writes the data to the RDBMS. Storm offers guaranteed message processing and transactional topologies (Trident), which makes it well suited to handling partial failures during commits; a topology sketch follows.
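A rough sketch of the Storm side, using the storm-kafka spout. The ZooKeeper address, the topic name rdbms-sync, the parallelism hints, and the RdbmsWriterBolt (sketched after the benefits list below) are assumptions for illustration; in practice the spout parallelism would match the number of Kafka partitions.

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.topology.TopologyBuilder;

import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.StringScheme;
import storm.kafka.ZkHosts;

// Wires a Kafka spout to a bolt that applies the change events to the RDBMS.
public class RdbmsSyncTopology {

    public static void main(String[] args) throws Exception {
        // The spout reads the change events published by the application.
        SpoutConfig spoutConf = new SpoutConfig(
                new ZkHosts("zookeeper:2181"), "rdbms-sync", "/rdbms-sync", "rdbms-sync-consumer");
        spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("changes", new KafkaSpout(spoutConf), 4);
        // shuffleGrouping spreads messages across writer instances; a fields grouping
        // on the row key could be used instead if strict per-row ordering is required.
        builder.setBolt("rdbms-writer", new RdbmsWriterBolt(), 4)
               .shuffleGrouping("changes");

        StormSubmitter.submitTopology("nosql-rdbms-sync", new Config(), builder.createTopology());
    }
}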

    Benefits:

  • Updates are propagated to the RDBMS asynchronously and reliably.
  • Apache Kafka provides a highly available, reliable, partitioned queue that is well suited to huge data volumes.
  • Apache Storm processes the Kafka messages from the partitioned queue in real time and provides a reliable way to update the RDBMS.
  • On RDBMS failures, Kafka persists the messages and Storm can resume syncing them once the database comes back, as sketched below.
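To illustrate that replay behaviour, below is a sketch of the writer bolt referenced in the topology above. The JDBC URL, the credentials, and the placeholder insert statement are assumptions; the important part is that throwing FailedException makes Storm fail the tuple, so the Kafka spout replays it and nothing is lost while the RDBMS is down.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.FailedException;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Tuple;

// Applies each change event to the RDBMS; on failure the tuple is replayed from Kafka.
public class RdbmsWriterBolt extends BaseBasicBolt {

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        String changeJson = tuple.getString(0); // raw change event emitted by the Kafka spout
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://dbhost/appdb", "app", "secret");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO sync_events (payload) VALUES (?)")) {
            // Parsing changeJson into table/columns is omitted; a placeholder insert is shown.
            ps.setString(1, changeJson);
            ps.executeUpdate();
        } catch (Exception e) {
            // Failing the tuple causes the Kafka spout to replay it later,
            // so updates queue up in Kafka while the RDBMS is unavailable.
            throw new FailedException("RDBMS write failed, tuple will be replayed", e);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // Terminal bolt: nothing is emitted downstream.
    }
}

A production bolt would reuse a pooled connection and batch the updates rather than opening a connection per tuple.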