- Based on reading from various blogs and articles, RabbitMQ is good in terms of performance.
- In general it has the following main components:
- Broker : The RabbitMQ instance running under an Erlang node is called the Broker. It contains all the Exchanges, Queues and Bindings.
- Queue : The Queue is the endpoint where the Consumer of messages connects to the Broker. The Broker fills the Queue based on the Exchange to which the Queue is connected and the routing rules (Bindings) defined.
- Exchange : The Exchange is what Publishers connect to when sending a message out. Exchanges are attached to Queues using Bindings. Exchanges decouple Queues from Producers so that a single message can be routed to multiple places based on the Binding/Routing rules. There are multiple types of Exchanges available :
- Fanout : A message sent to the Exchange is delivered to all the Queues bound to that Exchange. It is a kind of broadcasting; it is like having no routing key, or "#" as the routing key.
- Direct : A message sent to the Exchange goes only to the Queue whose Binding Key exactly matches the message's Routing Key.
- Topic : A message sent to the Exchange goes only to those Queues whose Binding Key matches the Routing Key in a pattern-matched form, using the "*" (exactly one word) and "#" (zero or more words) wildcards rather than full regular expressions.
- Binding Key : It defines which messages will be put into which Queue when they arrive at the Exchange. It is defined on the server (Broker) end.
- Routing Key : It is set by the Publisher on each message and determines the destination Queue(s); it is pattern-matched or compared as-is against the Binding Keys, depending on the Exchange type.
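The matching rules above can be sketched in pure Python (this is my own illustration of the AMQP wildcard semantics, not RabbitMQ's actual Erlang implementation): a Direct exchange is exact string comparison, a Fanout exchange behaves like a "#" binding, and a Topic exchange matches word by word.

```python
def topic_matches(binding_key, routing_key):
    """True if routing_key matches binding_key under AMQP topic rules:
    '*' matches exactly one dot-separated word, '#' matches zero or more."""
    def match(b, r):
        if not b:
            return not r                     # both exhausted -> match
        if b[0] == "#":
            # '#' may absorb zero or more words of the routing key
            return any(match(b[1:], r[i:]) for i in range(len(r) + 1))
        if not r:
            return False
        if b[0] == "*" or b[0] == r[0]:      # wildcard word or exact word
            return match(b[1:], r[1:])
        return False
    return match(binding_key.split("."), routing_key.split("."))

# A Direct exchange is exact equality of the two keys;
# a Fanout exchange is equivalent to binding with "#".
print(topic_matches("*.stock.#", "usd.stock"))      # True
print(topic_matches("*.stock.#", "eur.stock.db"))   # True
print(topic_matches("*.stock.#", "stock.nasdaq"))   # False
print(topic_matches("#", "any.key.at.all"))         # True (fanout-like)
```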
- It supports in-memory and persistent message queues.
- Non-Durable : This is the default behavior. Upon a Broker crash/restart nothing is recovered; all Exchange, Queue and Binding information is lost.
- Durable : Exchange, Queue and Binding meta-data is stored, so that after a crash/restart they are recreated. But the message data is lost.
- Persistent : If the Broker node crashes, all Exchanges, Queues, Bindings and the data in the queues are recovered when the Broker restarts.
- RabbitMQ supports Clustering.
- But clustering does not provide data federation. It only federates/replicates the meta-data of Exchanges, Queues and Bindings; queue data is always local to a single node, i.e. data is never replicated across the cluster nodes for high availability. Strangely, but in short, we can say RabbitMQ focuses on scalability rather than availability. The following links highlight this point:
http://www.rabbitmq.com/faq.html#replication-scope
http://dev.rabbitmq.com/irclog/index.php?date=2008-08-27 (see the discussion around [16:39])
http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2009-July/004234.html
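A toy model (again my own sketch, not RabbitMQ code) of what "meta-data is replicated but queue data stays local" implies: after a node crash, the cluster still knows the queue's definition, but its messages are gone.

```python
class Cluster:
    def __init__(self, node_names):
        self.metadata = {}                        # replicated: queue -> owner node
        self.nodes = {n: {} for n in node_names}  # per-node local message stores

    def declare_queue(self, node, queue):
        self.metadata[queue] = node   # the declaration is visible cluster-wide
        self.nodes[node][queue] = []  # but message storage is local to 'node'

    def publish(self, queue, body):
        owner = self.metadata[queue]  # any node can accept the publish,
        self.nodes[owner][queue].append(body)  # the message lands on the owner

    def node_crash(self, node):
        # meta-data survives on the other nodes; the crashed node's
        # local queue contents do not
        self.nodes[node] = {}

cluster = Cluster(["rabbit1", "rabbit2"])
cluster.declare_queue("rabbit1", "orders")
cluster.publish("orders", "m1")
cluster.node_crash("rabbit1")
print("orders" in cluster.metadata)  # True -- definition known cluster-wide
print(cluster.nodes["rabbit1"])      # {}   -- but its messages are lost
```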
- There is a project called Pacemaker : http://www.rabbitmq.com/pacemaker.html It creates the infrastructure for an HA RabbitMQ cluster, but it again has many dependencies. I haven't gone into the details yet.
- RabbitMQ nodes in a cluster can't really share the same files, except for the cookie file. The startup script itself makes sure that folder and file names are prefixed with "$NODE_ID$" when starting the broker, so that all the files for that node are kept separate. It basically creates two main folders and does the following:
a. db : creates a folder named "$NODE_ID$"-mnesia and creates all DB files inside it.
b. log : creates log files with names prefixed with "$NODE_ID$".
Even if we tweak the script so that both nodes point to the same mnesia folder, the second instance of the broker fails to start because of an mnesia locking issue, with the following error:
{"init terminating in do_boot",{{nocatch,{error,{cannot_start_application,mnesia,{killed,{mnesia_sup,start,[normal,[]]}}}}},[{init,start_it,1},{init,start_em,1}]}}
Crash dump was written to: erl_crash.dump
init terminating in do_boot ()
- Also, as per the documentation at "http://www.rabbitmq.com/faq.html#migrate-to-another-machine", queue data is not stored in the DB.
And in any case, there are still two issues to consider:
a. For performance reasons we might opt for in-memory queues; but then how can nodes share a queue, given that the nodes run on different physical machines, and even on a single machine they occupy different memory locations?
b. If we try to use persistent queues, I couldn't find any config so far that allows specifying the location where the persistent queue data is kept. When I started a node with a persistent queue and checked the folder, it created a new file with the extension "DCL".
- Some Reading Materials :
- http://skillsmatter.com/podcast/erlang/rabbitmq-internal-architecture-tony-garnock-jones
- http://skillsmatter.com/podcast/design-architecture/mike-bridgen-application-of-rabbitmq
- http://www.skaag.net/2010/03/12/rabbitmq-for-beginners/
- http://blog.agoragames.com/2010/07/28/grabbing-the-rabbit-by-the-horns/
- http://bhavin.directi.com/rabbitmq-vs-apache-activemq-vs-apache-qpid/
- http://www.lshift.net/blog/2010/02/01/rabbitmq-shovel-message-relocation-equipment
- http://www.rabbitmq.com/pacemaker.html
1 comment:
Great research. We are thinking about using a front end persistent transaction queue for RabbitMQ. The transaction queue can be backed up on another machine and replayed to regenerate the RabbitMQ state after a crash.