Looking at channel139 in the logs, I have the following observations:

1. SERVICE_UNAVAILABLE: broadcasts are rejected because there is no Raft leader.

```
2019-02-26 21:23:19.850 UTC [orderer.common.broadcast] ProcessMessage -> WARN 50ee3 [channel: testorgschannel139] Rejecting broadcast of normal message from 10.188.208.23:64192 with SERVICE_UNAVAILABLE: rejected by Order: no Raft leader
2019-02-26 21:23:23.942 UTC [orderer.common.broadcast] ProcessMessage -> WARN 51634 [channel: testorgschannel139] Rejecting broadcast of normal message from 10.188.208.23:64192 with SERVICE_UNAVAILABLE: rejected by Order: no Raft leader
2019-02-26 21:23:25.481 UTC [orderer.common.broadcast] ProcessMessage -> WARN 517fc [channel: testorgschannel139] Rejecting broadcast of normal message from 10.188.208.23:64192 with SERVICE_UNAVAILABLE: rejected by Order: no Raft leader
2019-02-26 21:23:26.622 UTC [orderer.common.broadcast] ProcessMessage -> WARN 518e8 [channel: testorgschannel139] Rejecting broadcast of normal message from 10.188.208.23:64192 with SERVICE_UNAVAILABLE: rejected by Order: no Raft leader
2019-02-26 21:23:32.087 UTC [orderer.common.broadcast] ProcessMessage -> WARN 51cb4 [channel: testorgschannel139] Rejecting broadcast of normal message from 10.188.208.23:64192 with SERVICE_UNAVAILABLE: rejected by Order: no Raft leader
2019-02-26 21:23:33.178 UTC [orderer.common.broadcast] ProcessMessage -> WARN 51dfb [channel: testorgschannel139] Rejecting broadcast of normal message from 10.188.208.23:64192 with SERVICE_UNAVAILABLE: rejected by Order: no Raft leader
2019-02-26 21:23:33.948 UTC [orderer.common.broadcast] ProcessMessage -> WARN 51f43 [channel: testorgschannel139] Rejecting broadcast of normal message from 10.188.208.23:64192 with SERVICE_UNAVAILABLE: rejected by Order: no Raft leader
```

2. Why is the leader 0 instead of 1, 2, or 3? Does 0 mean there is no leader? (See the sketch right after these log lines.)

```
2019-02-26 21:26:29.611 UTC [orderer.consensus.etcdraft] run -> INFO 61edf raft.node: 1 lost leader 3 at term 30 channel=testorgschannel139 node=1
2019-02-26 21:26:29.613 UTC [orderer.consensus.etcdraft] serveRequest -> INFO 61ee3 Raft leader changed: 3 -> 0 channel=testorgschannel139 node=1
```
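For reference on observation 2: etcd/raft (the library behind the orderer's etcdraft consensus) uses 0 as the sentinel leader ID, exported as `raft.None`, so "Raft leader changed: 3 -> 0" does mean the node currently sees no leader. A minimal Go sketch of that convention, assuming the `go.etcd.io/etcd/raft` import path (the helper `hasLeader` is mine, not Fabric's):

```go
package main

import (
	"fmt"

	"go.etcd.io/etcd/raft"
)

// hasLeader interprets a raft.Status: etcd/raft reports raft.None (0)
// as the leader ID whenever no leader is currently elected, which is
// why the log prints "Raft leader changed: 3 -> 0".
func hasLeader(st raft.Status) bool {
	return st.Lead != raft.None // raft.None == 0
}

func main() {
	var st raft.Status
	st.Lead = raft.None // state right after "1 lost leader 3 at term 30"
	fmt.Println("leader elected:", hasLeader(st))

	st.Lead = 1 // state after "1 became leader at term 31"
	fmt.Println("leader elected:", hasLeader(st))
}
```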
3. Term mismatch between orderers: 6 seconds after orderer 1 became leader for term 31, orderer 2 was still in term 30. This might be delayed communication between the orderers caused by heavy traffic, right?

```
2019-02-26 21:26:47.341 UTC [orderer.consensus.etcdraft] run -> INFO 6305f raft.node: 1 lost leader 3 at term 30 channel=testorgschannel139 node=1
2019-02-26 21:26:47.383 UTC [orderer.consensus.etcdraft] becomeLeader -> INFO 632f5 1 became leader at term 31 channel=testorgschannel139 node=1
2019-02-26 21:26:47.383 UTC [orderer.consensus.etcdraft] run -> INFO 632f6 raft.node: 1 elected leader 1 at term 31 channel=testorgschannel139 node=1
2019-02-26 21:26:53.786 UTC [orderer.consensus.etcdraft] run -> INFO b00ca raft.node: 2 lost leader 3 at term 30 channel=testorgschannel139 node=2
```

4. Orderer 1 became leader but stepped down 25 seconds later because the quorum was not active.

```
2019-02-26 21:26:47.383 UTC [orderer.consensus.etcdraft] becomeLeader -> INFO 632f5 1 became leader at term 31 channel=testorgschannel139 node=1
2019-02-26 21:26:47.383 UTC [orderer.consensus.etcdraft] run -> INFO 632f6 raft.node: 1 elected leader 1 at term 31 channel=testorgschannel139 node=1
2019-02-26 21:26:47.391 UTC [orderer.consensus.etcdraft] serveRequest -> INFO 63316 Raft leader changed: 0 -> 1 channel=testorgschannel139 node=1
2019-02-26 21:26:47.398 UTC [orderer.consensus.etcdraft] serveRequest -> INFO 63344 Start accepting requests as Raft leader at block 91 channel=testorgschannel139 node=1
2019-02-26 21:27:12.806 UTC [orderer.consensus.etcdraft] stepLeader -> WARN 64e5d 1 stepped down to follower since quorum is not active channel=testorgschannel139 node=1
```

5. There are 11 "is starting a new election" entries within about 5 minutes 14 seconds; see the output of `grep "starting" FAB-14350-orderer1st.log | grep channel139`:

```
2019-02-26 21:23:23.304 UTC [orderer.consensus.etcdraft] Step -> INFO 515e2 1 is starting a new election at term 23 channel=testorgschannel139 node=1
2019-02-26 21:23:41.145 UTC [orderer.consensus.etcdraft] Step -> INFO 52095 1 is starting a new election at term 24 channel=testorgschannel139 node=1
2019-02-26 21:23:57.528 UTC [orderer.consensus.etcdraft] Step -> INFO 5304d 1 is starting a new election at term 24 channel=testorgschannel139 node=1
2019-02-26 21:24:14.942 UTC [orderer.consensus.etcdraft] Step -> INFO 55004 1 is starting a new election at term 25 channel=testorgschannel139 node=1
2019-02-26 21:24:55.801 UTC [orderer.consensus.etcdraft] Step -> INFO 5826c 1 is starting a new election at term 27 channel=testorgschannel139 node=1
2019-02-26 21:25:11.379 UTC [orderer.consensus.etcdraft] Step -> INFO 59794 1 is starting a new election at term 27 channel=testorgschannel139 node=1
2019-02-26 21:25:11.379 UTC [orderer.consensus.etcdraft] Step -> INFO 59799 1 is starting a new election at term 27 channel=testorgschannel139 node=1
2019-02-26 21:25:39.807 UTC [orderer.consensus.etcdraft] Step -> INFO 5d43a 1 is starting a new election at term 29 channel=testorgschannel139 node=1
2019-02-26 21:25:48.542 UTC [orderer.consensus.etcdraft] Step -> INFO 5e4c6 1 is starting a new election at term 29 channel=testorgschannel139 node=1
2019-02-26 21:26:47.341 UTC [orderer.consensus.etcdraft] Step -> INFO 6305a 1 is starting a new election at term 30 channel=testorgschannel139 node=1
2019-02-26 21:28:37.403 UTC [orderer.consensus.etcdraft] Step -> INFO 69553 1 is starting a new election at term 32 channel=testorgschannel139 node=1
```

6. Send queue overflown: StepRequests to node 3 fail to send. (An illustrative sketch of this mechanism follows these log lines.)

```
2019-02-26 21:24:49.973 UTC [orderer.consensus.etcdraft] logSendFailure -> ERRO 579c8 Failed to send StepRequest to 3, because: send queue overflown channel=testorgschannel139 node=1
2019-02-26 21:25:52.476 UTC [orderer.consensus.etcdraft] logSendFailure -> ERRO 5f441 Failed to send StepRequest to 3, because: send queue overflown channel=testorgschannel139 node=1
2019-02-26 21:25:52.536 UTC [orderer.consensus.etcdraft] logSendFailure -> ERRO 5f5a9 Failed to send StepRequest to 3, because: send queue overflown channel=testorgschannel139 node=1
```
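On observation 6: the orderer's cluster communication buffers outbound consensus messages per destination and drops new messages once that buffer fills, rather than blocking the Raft loop. Below is an illustrative Go sketch of that bounded, non-blocking queue; it is not Fabric's actual code (the type and function names are mine), and I believe the real buffer size is governed by the orderer's cluster `SendBufferSize` setting:

```go
package main

import (
	"errors"
	"fmt"
)

// errOverflow mirrors the "send queue overflown" error: under heavy
// traffic the stream to the remote orderer falls behind, the
// per-destination buffer fills, and further messages are dropped so
// that the Raft loop itself never blocks.
var errOverflow = errors.New("send queue overflown")

type sendQueue struct {
	ch chan []byte // bounded buffer, one per remote node
}

func newSendQueue(size int) *sendQueue {
	return &sendQueue{ch: make(chan []byte, size)}
}

// enqueue is non-blocking: if the consumer has not drained the
// buffer, the message is rejected instead of queued.
func (q *sendQueue) enqueue(msg []byte) error {
	select {
	case q.ch <- msg:
		return nil
	default:
		return errOverflow
	}
}

func main() {
	q := newSendQueue(2) // tiny buffer to force the failure
	for i := 1; i <= 3; i++ {
		if err := q.enqueue([]byte("StepRequest")); err != nil {
			fmt.Printf("Failed to send StepRequest %d, because: %v\n", i, err)
		}
	}
}
```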
7. Consenter error (no leader): deliver requests are rejected.

```
2019-02-26 21:23:27.268 UTC [common.deliver] deliverBlocks -> WARN 5197f [channel: testorgschannel139] Rejecting deliver request for 172.30.231.125:35394 because of consenter error
2019-02-26 21:23:30.905 UTC [common.deliver] deliverBlocks -> WARN 51bde [channel: testorgschannel139] Rejecting deliver request for 172.30.240.149:33222 because of consenter error
2019-02-26 21:23:31.729 UTC [common.deliver] deliverBlocks -> WARN 51c57 [channel: testorgschannel139] Rejecting deliver request for 172.30.240.149:33308 because of consenter error
2019-02-26 21:23:34.012 UTC [common.deliver] deliverBlocks -> WARN 51f79 [channel: testorgschannel139] Rejecting deliver request for 172.30.240.149:33404 because of consenter error
```

8. Aborting deliver because of a background error:

```
2019-02-26 21:23:18.013 UTC [common.deliver] deliverBlocks -> WARN 50b16 Aborting deliver for request because of background error
2019-02-26 21:23:18.023 UTC [common.deliver] deliverBlocks -> WARN 50b83 Aborting deliver for request because of background error
2019-02-26 21:23:18.040 UTC [common.deliver] deliverBlocks -> WARN 50bc8 Aborting deliver for request because of background error
2019-02-26 21:23:18.040 UTC [common.deliver] deliverBlocks -> WARN 50bc9 Aborting deliver for request because of background error
2019-02-26 21:23:18.040 UTC [common.deliver] deliverBlocks -> WARN 50bca Aborting deliver for request because of background error
2019-02-26 21:23:18.042 UTC [common.deliver] deliverBlocks -> WARN 50be8 Aborting deliver for request because of background error
2019-02-26 21:23:18.062 UTC [common.deliver] deliverBlocks -> WARN 50c2a Aborting deliver for request because of background error
```

9. TLS handshake errors:

```
2019-02-26 21:30:11.843 UTC [core.comm] ServerHandshake -> ERRO 6a2cd TLS handshake failed with error EOF server=Orderer remoteaddress=172.30.49.98:37478
2019-02-26 21:30:15.435 UTC [core.comm] ServerHandshake -> ERRO 6a436 TLS handshake failed with error read tcp 172.30.49.99:7050->172.30.231.125:56274: read: connection reset by peer server=Orderer remoteaddress=172.30.231.125:56274
```

10. Errors reading from a channel, cause NOT_FOUND (note these two entries are for testorgschannel327 and testorgschannel18, not channel139):

```
2019-02-26 21:23:18.694 UTC [common.deliver] deliverBlocks -> ERRO 50e6d [channel: testorgschannel327] Error reading from channel, cause was: NOT_FOUND
2019-02-26 21:23:18.697 UTC [common.deliver] deliverBlocks -> ERRO 50e71 [channel: testorgschannel18] Error reading from channel, cause was: NOT_FOUND
```
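On observations 7, 8, and 10: a deliver rejection due to consenter error surfaces to the client as SERVICE_UNAVAILABLE and should clear once a leader is elected, whereas NOT_FOUND means the requested channel or block genuinely is not on this orderer and will not fix itself by retrying. A client-side sketch of telling the two apart, assuming status values that mirror Fabric's `common.Status` enum; `deliverOnce` is a hypothetical stand-in for one Deliver attempt, not an SDK call:

```go
package main

import (
	"fmt"
	"time"
)

// Status values mirroring Fabric's common.Status enum (HTTP-style codes).
const (
	statusSuccess            int32 = 200 // common.Status_SUCCESS
	statusNotFound           int32 = 404 // common.Status_NOT_FOUND
	statusServiceUnavailable int32 = 503 // common.Status_SERVICE_UNAVAILABLE
)

// deliverWithRetry retries only the transient failure: a consenter
// error (SERVICE_UNAVAILABLE) should clear after a successful leader
// election, while anything else (e.g. NOT_FOUND) is returned as-is.
func deliverWithRetry(deliverOnce func() int32, attempts int) error {
	for i := 0; i < attempts; i++ {
		switch status := deliverOnce(); status {
		case statusSuccess:
			return nil
		case statusServiceUnavailable:
			time.Sleep(time.Second) // wait out the leader election
		default:
			return fmt.Errorf("non-retryable deliver status %d", status)
		}
	}
	return fmt.Errorf("deliver still rejected after %d attempts", attempts)
}

func main() {
	calls := 0
	// Simulate two rejections during an election, then success.
	fake := func() int32 {
		calls++
		if calls < 3 {
			return statusServiceUnavailable
		}
		return statusSuccess
	}
	if err := deliverWithRetry(fake, 5); err != nil {
		fmt.Println(err)
	} else {
		fmt.Printf("deliver succeeded on attempt %d\n", calls)
	}
}
```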